this post was submitted on 11 Apr 2026
151 points (89.5% liked)

Programming


...and I still don't get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before but gave up because it didn't work well. I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn't until I had a full night's sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part is that, throughout all of this, Claude was confidently responding. When I said there was a bug, it would "fix" the bug and provide a confident explanation of what was wrong... except it was clearly bullshit, because it didn't work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

top 50 comments
[–] Blackmist@feddit.uk 2 points 3 hours ago

I think it's mostly going to be useful for boilerplate generation, and effectiveness is going to vary wildly based on what language you're using. JS or Python? It'll probably do OK. Plenty of open source for it to "learn" from. Delphi? Forget it.

Brief experimentation showed it liked to bullshit if it was wrong, rather than fix things.

[–] webkitten@piefed.social 9 points 5 hours ago* (last edited 5 hours ago) (1 children)

Don't just use it as a drop-in replacement for a programmer; use it to automate menial tasks while employing trust but verify with every output it produces.

A well written CLAUDE.md and prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification before doing anything will keep everything in your control while also aiding menial maintenance tasks like repetitive sections or user tests.

[–] Feyd@programming.dev 3 points 5 hours ago (1 children)

verify with every output it produces.

I agree that you can get quality output using these tools, but if you actually take the time to validate and fix everything they've output then you spend more time than if you'd just written it, rob yourself of experience, and melt glaciers for no reason in the process.

prompt to restrict it from auto committing, auto pushing, and auto editing without explicit verification

Anything in the prompt is a suggestion, not a restriction. You are correct you should restrict those actions, but it must be done outside of the chatbot layer. This is part of the problem with this stuff. People using it don't understand what it is or how it works at all and are being ridiculously irresponsible.

repetitive sections

Repetitive sections that contain logic can be factored out, and should be for maintainability. Those that can't be factored can still be generated in plenty of ways: a list of words can be expanded into whatever repetitive boilerplate you need with sed, awk, a Python script, etc., and you'll know nothing was hallucinated because the process was deterministic in the first place.
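For instance, the deterministic-expansion idea takes only a few lines of Python (the field names and getter/setter template here are made-up placeholders, not from any particular codebase):

```python
# Deterministically expand a word list into repetitive boilerplate,
# instead of asking an LLM to type it out (and possibly hallucinate).
fields = ["width", "height", "depth"]

template = """\
def get_{name}(self):
    return self._{name}

def set_{name}(self, value):
    self._{name} = value
"""

# Every generated block is a mechanical substitution, so the output
# is exactly as trustworthy as the template and the word list.
boilerplate = "\n".join(template.format(name=f) for f in fields)
print(boilerplate)
```

The same pattern works for test tables, enum mappings, or config stanzas: write the template once, review it once, and generate the rest.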

user tests.

Tests are just as important as the rest of the code and should be given the same amount of attention instead of being treated as fine as long as you check the box.

[–] webkitten@piefed.social 1 points 4 hours ago

I agree it's not perfect; I still only use it very sparingly. I was just suggesting it as an alternative to trusting everything it does out of the box.

[–] JubilantJaguar@lemmy.world 2 points 3 hours ago (1 children)

Recently I used it (some free-tier DuckAI model, not Claude) to write a Python script for pasting PNGs into PDFs (complete with Tk interface) while applying a whole bunch of custom transformations. Simple enough, but a total chore with all the back-and-forth of searching for relevant unfamiliar libraries and syntax checking and troubleshooting. Inevitably it would have taken me the whole afternoon by hand. With AI I knocked it out in 25 minutes. That was my epiphany moment.

Since then I've noticed a general problem with AI coding. It almost always introduces too much complexity, which I then have to waste time untangling (and often just understanding) before I can proceed. Whereas if I had done it "my way" from the start I might have got there earlier. But I figure this problem is kinda on me.

[–] thedogz22@thelemmy.club 1 points 1 hour ago

And for me, therein lies why my use of it has been reduced to a really complex rubber duck, or to writing out something that I could do by hand but that my robot butler can do faster. Anyone actually leaning into today's generative AI models to generate code that requires complexity or thought... they shall reap what they sow in the years to come.

[–] x00z@lemmy.world 3 points 4 hours ago

The trick about vibe coding is that you confidently release the messed up code as something amazing by generating a professional looking readme to accompany it.

[–] arthur@lemmy.zip 3 points 4 hours ago

I'm using (Gemini 3.1 pro in) Gemini cli to build a complex (personal) project to explore how to use these tools. My impression is that the code produced by LLMs is disposable/throwaway. We need to babysit the model and be very hands on to get good results.

[–] Evotech@lemmy.world 1 points 3 hours ago

You need to use plan mode

[–] shaggy@beehaw.org 0 points 2 hours ago

I've had an opposite experience. Here are some guidelines I follow:

  1. Set up a foundation of rules and knowledge for Claude to fall back on. I define expectations, common definitions, behaviors, and anything else that's not project-specific upfront.
  • in CLAUDE.md I reference different domains of behavior, definitions, and rules (Claude has conventions for storing this type of stuff, so ask it to handle organizing the information too)
  • create a top-level project definition: this defines what "knowledge" is. It allows you to build up what Claude knows later on as you work on your project. "Update knowledge", "add this to your knowledge", etc.
  • create a top-level rule: all information in knowledge must have one source of truth. Whenever needed, reference the original knowledge source instead of duplicating it. Now you can ask it to "review your knowledge" or "audit and flag knowledge"
  2. Explicitly explain everything and leave nothing ambiguous; explain the problem like you're explaining it to a new developer who isn't familiar with the plan or codebase at all. Don't ask it to write code right away. Ask it to write a plan/spec. Review the plan, make changes, and discuss it until the plan is 100%. This plan can include implementation details if you're OK with that, but it's not necessary (sometimes I write a separate file called implementation.md beside the plan and have the plan reference it).
  • Your role as a developer is shifting from writing code to writing specs and reviewing code
  3. Once there is nothing left to describe, and no ambiguity in your plan, have it use the plan to write the code. This works amazingly well for me.

A benefit to this method is that there is less wasted effort on my part. If Claude writes the code wrong, I can trace the reason for the mistake to a gap in the plan. I can then update the plan, throw away the code (if I have to), and have Claude reimplement the code again.

Rinse and Repeat.


Keep knowledge, plans, and implementation details clearly separated (you can copy your latest knowledge files into new projects to get started even faster).

Keep the goals of each plan as small and granular as possible (it's easier to define plans that way). Knowledge, plans, and implementation details all get tracked in your repository, just like your code.
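As a rough sketch, the kind of top-level knowledge/rules setup described above might look something like this (the file layout, wording, and commands are invented for illustration, not official Claude conventions):

```markdown
# CLAUDE.md (sketch)

## Knowledge
- "Knowledge" means facts about this project recorded in docs/knowledge/*.md.
- Every piece of knowledge has exactly one source of truth; reference it,
  never duplicate it.

## Rules
- Never commit, push, or edit files without explicit approval.
- Before writing code, produce a plan in docs/plans/ and wait for review.

## Commands
- "update knowledge": record new facts in the appropriate knowledge file.
- "audit knowledge": flag duplicated or contradictory knowledge entries.
```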


I'm a career developer and have been writing code for over 20 years. I'm adding this bit because I understand how AI-driven development can look like a threat to developers. Over this last year, though, I've had a shift in this thinking. I can take what I've learned through my career and use it to write successful specifications that Claude can use to write effective code. Claude may not solve all of our coding problems, but if used effectively, it solves nearly everything you throw at it.

[–] thedeadwalking4242@lemmy.world 1 points 4 hours ago

I use it for tedious transformations or needle-in-a-haystack problems.

They are better at searching for themes or concepts than they are at actual "thinking tasks". My rule is that if it requires a lot of critical thinking, then the LLM can't do it.

It's definitely not all they say it is. I think LLMs will fundamentally always have these problems.

I've actually had a much better time using it for inline completion as of late. It's much better when the scope of the problem it needs to "solve" (the code it needs to find and compose to complete your line) is in the Goldilocks zone. And if the answer it gives is bad, I just keep typing.

I really hate the way LLM vibe-coded slop is written and architected. To me it's clear these things have extremely limited comprehension. I've compared it to essentially ripping out the human language center, giving it a keyboard, and asking it to program for you. It's just not really what it's good at.

[–] No1@aussie.zone 1 points 4 hours ago (1 children)

were almost write every time

Claude: You too are human, human.

[–] BenevolentOne 2 points 3 hours ago

If you make a spelling error, Claude thinks, "we're doing low quality work", and it does.

[–] Michal@programming.dev 3 points 6 hours ago (1 children)

You can't really just use Claude Code raw. You have to give it detailed instructions, use Claude skills, observe results, and update prompts. It can be just as time-consuming, but rather than doing the productive work, you're reviewing and correcting AI. People who have success using AI have invested time in their setup and are continuously adjusting it.

[–] KeenFlame@feddit.nu 1 points 4 hours ago

But all in all it's much faster. That's the reason it is not useless. Everyone whines that it takes so much time, but no, it is nowhere near as slow as doing it manually. It's not a magic pill and you still need the know-how, but no, it is not "just as time-consuming". You are more productive. But yes, it is also more boring.

[–] sobchak@programming.dev 15 points 10 hours ago

The key is having it write tests and iterate by itself, and also managing context in various ways. It only works on small projects in my experience. And it generates shit code that's not worth manually working on, so it kind of locks your project into being always dependent on AI. Being always dependent on AI, with AI eventually hitting a brick wall, means you'll reach a point where you can't really improve the project anymore. I.e. AI tools are nearly useless.

[–] kunaltyagi@programming.dev 10 points 11 hours ago

Don't jump right in to coding.

Take a feature you want, and use the plan feature to break it down. Give the plan a read. Make sure you have tests covering the files it says it'll need to touch. If not, add tests (can use LLM for that as well).

Then let the LLM work. Success rates for me are around 80% or higher for medium tasks (30 mins--1 hour for me without LLM, 15--30 mins with one, including code review)

If a task is 5 mins or so, it's usually hit or miss (since planning would take longer). For tasks longer than 1 hour or so, it depends. Sometimes the code is full of simple idioms and the LLM can easily crush it. Other times I need to actively break it down into digestible chunks.

[–] stickyprimer@lemmy.world 8 points 11 hours ago

almost write

Indeed.

[–] onlinepersona@programming.dev 8 points 12 hours ago

It's not called "correct" coding for a reason.

That's why people are wrong so often: they feel like something is right, but don't check. That's how you get anti-vaxxers, manosphere people, MAGA, QAnon, Brexit, etc.

[–] athatet@lemmy.zip 22 points 17 hours ago (2 children)

The reason you kept going around in circles and reintroducing bugs you'd already got rid of is that LLMs don't remember things. Every time you send a message, the entire conversation is sent to the model again so it has all the parts. Eventually it runs out of room and starts cutting off the beginning of the convo, and now the LLM can't 'remember' what you were even talking about.
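A toy sketch of that mechanism, purely illustrative (real APIs count tokens rather than messages, and use smarter truncation strategies):

```python
# Why a long chat "forgets": each request resends the whole history,
# and once it exceeds the context limit the oldest turns are dropped.
# The window size and truncation here are illustrative placeholders.
MAX_CONTEXT_MESSAGES = 8

history = []  # (role, message) pairs

def send(user_msg):
    history.append(("user", user_msg))
    # The model only ever sees the most recent messages that fit.
    window = history[-MAX_CONTEXT_MESSAGES:]
    history.append(("assistant", f"reply to {user_msg}"))
    return window

for i in range(6):
    window = send(f"bug report {i}")

# The earliest bug report has been truncated away, so the model can
# happily reintroduce the very bug that started the conversation.
assert ("user", "bug report 0") not in window
```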

[–] KeenFlame@feddit.nu 1 points 4 hours ago

Kind of, but it really depends on the workflow. Simple 3d math does not extend to a codebase that is impacted by context window

[–] Railcar8095@lemmy.world 3 points 10 hours ago

For that you can ask it to update a documentation/status file on every change. You can manually add the goal and/or tasks for the future.

With that, I improved my success rate a lot, even when starting new sessions (add a note in the instructions file to use this file for reference, so you don't have to remind it every time).

[–] Feyd@programming.dev 129 points 23 hours ago (2 children)

producing subtly broken junk

The difference between you and people that say it's amazing is that you are capable of discerning this reality.

[–] OwOarchist@pawb.social 39 points 22 hours ago (7 children)

What I don't get, though, is how the vibe code bros can't discern this reality.

How can they sit there and not see that their vibe-coded app just doesn't do what they wanted it to do? Eventually, you've got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn't work?

[–] Lumelore@lemmy.blahaj.zone 23 points 19 hours ago* (last edited 19 hours ago) (1 children)

Vibe code bros aren't real programmers. They're business people, not computer people. Even if they have a CS degree, they only got that because they think it'll get them more money. They lack passion and they don't care about understanding anything. They probably don't even care about what they're generating beyond its potential to be used in a grift.

I graduated college not that long ago, and my CS classes had quite a few former business majors. They switched because they thought it would be more lucrative, but since they only care about money they didn't bother to actually learn the material, especially since they could just vibe code through everything.

[–] b_n@sh.itjust.works 5 points 10 hours ago

So much this.

After working in tech companies for the last 10 years I've noticed the difference between people that "generate code" and those that engineer code.

My worry about the industry is that vibe coding gives the code generators the ability to generate even more code. The engineers (even those that use vibe tools) are not engineering as much code by volume compared to "the generators".

My hope is that this is one of those "short term gain, long term pain" things that might self correct in a couple of years 🤞.

[–] Feyd@programming.dev 32 points 22 hours ago

They're the same people who copied code from Stack Overflow and whose every PR you had to tell them how to actually fix. The difference is the C-suite types are backing them this time.

[–] tleb@lemmy.ca 8 points 18 hours ago (1 children)

Eventually, you've got to try actually running the app, right?

At least at my company, no, they just start selling it.

[–] pinball_wizard@lemmy.zip 4 points 13 hours ago

Yes. Exactly. In my experience, there are more code shops that ship shit than ones that catch their mistakes.

[–] JustEnoughDucks@feddit.nl 0 points 7 hours ago* (last edited 7 hours ago)

I wonder if it was even able to compile. I am a shitty hobby coder who just does it to make my embedded hardware projects function.

I have yet to get compilable code out of any of the AI bots I have tried: Gemini, Mistral, and ChatGPT. I am not making an account lol.

I have gotten some compilable python and VBA code for data analysis stuff at work, so I wonder if it is because embedded stuff uses specific SDKs that it can't handle.

Either way, I have given up on it for anything besides bouncing ideas off of, or debugging where electromagnetics issues could lie (though it has been completely wrong about that too; even when it uses the wrong concepts, it reminds me of concepts I might have overlooked).

[–] cecilkorik@lemmy.ca 77 points 23 hours ago* (last edited 23 hours ago) (2 children)

No, I think you do get it. That's exactly right. Everything you described is absolutely valid.

Maybe the only piece you're missing is that "almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.

/s but also not /s because this is the unfortunate reality we live in now. We're all going to eat slop and sooner or later we're going to be forced to like it.

[–] vga@sopuli.xyz 1 points 4 hours ago* (last edited 4 hours ago)

Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways”

Sure, but you have to note that it reaches that point in minutes, sometimes on a task that would take humans a week. The power is not that it creates correct stuff; it's that it creates almost-correct stuff 100 times faster than a human. Plus the typical machine benefits: it never gets tired, demotivated, etc.

So then the challenge becomes being able to be that human, who can review stuff extremely well and rapidly, being natural in probing the stuff LLMs tend to be wrong about. Sort of like the same challenge that every tech lead had before LLMs too, but just subtly different, because LLMs don't exactly think like we do.

[–] pinball_wizard@lemmy.zip 3 points 13 hours ago

"almost right, but critically broken in subtle ways" turns out to actually be more than good enough for many people and many purposes. You're describing the "success" state.

Exactly. The consequences are at worst a problem for "future me", and at best "somebody else's problem".

AI didn't create this reality, but it's certainly moved it into the spotlight and to "center stage."

[–] tohuwabohu@programming.dev 16 points 19 hours ago

I use my own brain to sketch out what I want to build and how it should work. Before writing any code, I use the LLM to point out gaps and how to close them. Pros and cons of certain decisions. Things you would discuss with colleagues. Then, I come up with a plan for the order I want the code to be written in and how to fragment it into smaller, easy-to-handle modules. I supervise and review each chunk produced, adapt code mostly manually if required, write the edge case tests - most importantly, run it - and move to the next. This is how I use it successfully and get results much faster than the traditional way.

At my job, though, I can witness how other people use it. I was asked to review a fully vibecoded fullstack app that contains every mistake possible. Unsanitized input. Hardcoded tokens. Hardcoded credentials. 2500+ LoC classes and functions. Business logic orchestrators masquerading as services. Full table scans on each request. Cross-tenant data leaks. Loading whole tables into memory. No test coverage for the most critical paths. Tests requiring external services to run. The list goes on. Now they want me to make it production-ready in 8 weeks "because you have AI".

My point: this was an endorphin-fueled vibecoding session by someone who has no experience as a developer, who asked the LLM to "just make it work" and lacked the ability to supervise the work that comes with experience. It was enough to make it run locally and pitch a "system engineered w/o any developer" to management.

Those systems need guidance just as a Junior would and I am strongly and loudly advocating to restrict access to this incredibly useful tool to people who know what they do. Nobody would allow a manager to use a laser cutter in a carpentry workshop without proper training, worst case is they will burn down the whole shack.

I appreciate you having an open mind about it at least. I needed some time to adjust as well. I don't even use Opus; most of the time my workflow consistently produces usable code with Sonnet. Maybe you can try what I explained initially? Just don't try any language you're not familiar with, that will not end well.

[–] Prove_your_argument@piefed.social 22 points 20 hours ago (1 children)

Have you been coding professionally long?

I find that the only time I can use these chatbots is for a task where I already know what I'm doing, so that I can read the output and fix the issues. This is like having junior devs on your team and being a code reviewer more than being a full-time coder. They get a lot of things wrong, but there's so much usable output that you can save a ton of time over doing everything yourself from scratch.

Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There's no emotions though, so you can just be blunt and concise with feedback.

[–] pinball_wizard@lemmy.zip 5 points 14 hours ago* (last edited 2 hours ago)

They get a lot of things wrong but there's so much usable that you can save a ton of time over doing everything yourself from scratch.

Your experience with Junior devs has been quite different from mine.

I work with junior devs because someday they will be senior devs who owe me a favor, even though they've only ever cost me time.

Edit: I also work with junior devs because sometimes a tiny corner of my job is both mind-numbingly boring, and also weirdly difficult to automate away.

I assign that work to junior devs because I don't want to do it.

In doing so, I am wasting the boss's money, since I could do it faster.

But I consider it just another part of the price of hiring me, because it keeps me happy.

[–] dgdft@lemmy.world 37 points 22 hours ago* (last edited 22 hours ago) (2 children)

Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.

Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.

You also must aggressively manage information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error and give it better instructions to steer it away from the pitfall it fell into. In the same vein, you also need to reset ASAP after pushing past the 100k-token mark, because the models start melting into putty soon after (yes, even the "extended" 1M-window ones).

I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.

For your use case (3D math), what I recommend is decomposing your end goal into a series of pure functions that you’ll string together. Once you have that list, that’s where Claude comes in. Have it stub those functions for you, then have it implement them one at a time, reviewing the output of every one before proceeding.
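A minimal sketch of what that decomposition might look like in Python (the specific functions are illustrative, not the OP's actual task):

```python
# Decompose 3D math into small pure functions that can be stubbed,
# implemented one at a time, and verified independently.
import math

def dot(a, b):
    """Dot product of two 3D vectors given as tuples."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def cross(a, b):
    """Cross product of two 3D vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    """Scale a 3D vector to unit length."""
    n = math.sqrt(dot(v, v))
    return (v[0]/n, v[1]/n, v[2]/n)

# Each function is pure, so the model's output for one can be checked
# against hand-computed cases before moving on to the next.
assert dot((1, 0, 0), (0, 1, 0)) == 0
assert cross((1, 0, 0), (0, 1, 0)) == (0, 0, 1)
assert normalize((3, 0, 4)) == (0.6, 0.0, 0.8)
```

Because nothing depends on hidden state, a subtle bug in one function can't silently corrupt the others, which is exactly the failure mode the OP described.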

[–] something183786@lemmy.world 7 points 14 hours ago* (last edited 14 hours ago) (1 children)

My preferred way of using LLM coders is:

  • plan only
  • read the spec file I just wrote
  • optionally ask me questions in ‘qa.md’; I’ll reply inline

Repeat until it stops asking me questions, then switch to a different model and ask again. I usually use both gpt5.3-codex AND Claude Sonnet.

Then I have it update the spec. I start a new session to have it implement. Finally review the code. If I don’t like it, undo and revisit the spec. Usually it’s because I’m trying to do too much at once. And I need to break it down into multiple specs.

Adversarial reviews are also a great way to prune bad ideas and assumptions from plans. They have helped me out greatly and often made the better LLMs go "the plan said do X, but doing that is an unknown huge risk that may take longer than the rest of the plan".

The superpowers plugin does the brainstorm, QA, design plan, implementation plan, implement, review cycle quite well. It should aid the process of actually doing feature-type work. I also add adversarial reviews into the process; it saves a lot of time debugging what went wrong after implementation.

[–] eodur@piefed.social 7 points 15 hours ago

This is the most pragmatic take I've read, and it resonates strongly with my own experience. Claude can be a very useful tool, but like any other there is a learning curve and often many sharp edges. I've had Claude build some reasonably complex code bases, but it takes work. It's pretty decent at "coding" but pretty terrible at the rest of software engineering.

[–] pixxelkick@lemmy.world 24 points 22 hours ago* (last edited 22 hours ago)
  1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it'll see warnings/hints/suggestions from the LSP

  2. Unit tests. Unit tests. Unit tests. Unit tests.

I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.

Instead of reasoning out "it should do this" they can just run the damn test and find out.

They'll iterate on it til it actually works, and then you can look at it and confirm whether it's good or not.

I use Sonnet 4.5 / 4.6 extensively and, yes, it's prone to getting the answer almost right but wrong in the end.

But the unit tests catch this, and it corrects.

Example: I am working on my own game engine with MonoGame, and it's about 95% vibe coded.

This transform math is almost 100% vibe coded: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame/Transform/TransformRegistry.cs

The reason its solid is because of this: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame.Tests/Transform/Integrations/TransformRegistryIntegrationTests.cs

Also vibe coded, and then sanity-checked by me by hand to confirm the math in the tests checks out.

And yes, it caught multiple bugs, but the agent automatically could respond to that, fix the bug, rerun the tests, and iterate til everything was solid.

Test Driven Development is huge for making agents self police their own code.
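As a sketch of the kind of hand-verified test that lets an agent iterate (the rotate function here is a generic stand-in, not code from the linked repo):

```python
import math

# Stand-in implementation: rotate a 2D point about the origin.
def rotate(x, y, radians):
    c, s = math.cos(radians), math.sin(radians)
    return (x * c - y * s, x * s + y * c)

# Hand-verified expectations the agent must satisfy: a 90-degree
# rotation maps (1, 0) to (0, 1), and a full turn is the identity.
def test_quarter_turn():
    x, y = rotate(1.0, 0.0, math.pi / 2)
    assert math.isclose(x, 0.0, abs_tol=1e-9)
    assert math.isclose(y, 1.0, abs_tol=1e-9)

def test_full_turn_identity():
    x, y = rotate(2.0, -3.0, 2 * math.pi)
    assert math.isclose(x, 2.0, abs_tol=1e-9)
    assert math.isclose(y, -3.0, abs_tol=1e-9)

test_quarter_turn()
test_full_turn_identity()
```

The point is that the expected values are checked by a human once, and from then on the agent can rerun the suite after every change instead of reasoning about whether the math "should" work.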

[–] Gsus4@mander.xyz 29 points 23 hours ago* (last edited 23 hours ago)

Their usual (crap) defense is:

a) you're not paying enough, so of course it is crap

b) you're not prompting right, you need to use detailed, precise language...

c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.

d) it will improve...

(any others anyone has noticed?)

[–] ThirdConsul@lemmy.zip 10 points 19 hours ago* (last edited 19 hours ago) (1 children)

.net runtime after 10 months of using and measuring where LLMs (including latest Claude models) shine reported a mindboggling success rate peaking at 75% (sic!) for changes of 1-50 LOC size - and it's for an agentic model (so you give it a prompt, context, etc, and it can run the codebase, compile it, add tests, reason, repeat from any step, etc etc).

Except it was clearly bullshit because it didn’t work.

Welcome to the LLMs where everything is hallucinated and correctness doesn't matter.

Is anyone having success with these tools

Define success.

Is there a special way to prompt it?

It gets better the more you use it, you will learn what works for you, and what does not. Right now the hot shit is "autonomous agent swarms" peddled by the token sellers as a way to output correct massive features. Do not touch that for now.

What helps with Claude / llms 101:

  • when it tells you something about an API, a tool, or whatever, tell it the tool version and order it to give you the documentation page proving the solution is possible.

  • when it oneshots a working solution you will get a dopamine hit. Be aware of that, as it can be addictive or make you trust it. Do not trust it, it sucks long term.

  • it will always default to a below-average solution. Know where your hotspots are, and be extra judgy there.

  • it will get lazy and lie to you, especially with tests

  • it will not propose code refactors on its own.

  • despite the token peddlers' claims, no matter if you're using the 1M-token context window model, the shit degrades when the context window is over 20k-30k tokens - so switch context windows often for better outcomes. But that means you will be burning more money - which obviously benefits the token peddlers.

  • do not trust the hype - so far any and all tall claim of a breakthrough from the token peddlers were a lie (e.g. vibing working os that can run Doom, vibing a next.js 96% replacement in a week, vibing a browser, compiler, vibing a browser jailbreak via Mythos)

Would I get better results during certain hours of the day?

Afaik USA timezone has worse performance.

[–] Kissaki@programming.dev 2 points 8 hours ago (1 children)

.net runtime after 10 months of using and measuring where LLMs (including latest Claude models) shine reported a mindboggling success rate peaking at 75% (sic!) for changes of 1-50 LOC size - and it’s for an agentic model (so you give it a prompt, context, etc, and it can run the codebase, compile it, add tests, reason, repeat from any step, etc etc).

I assume this is from https://devblogs.microsoft.com/dotnet/ten-months-with-cca-in-dotnet-runtime/?

[–] ThirdConsul@lemmy.zip 1 points 7 hours ago

You assume correctly.
