I read your comment, but I disagree. These things absolutely matter in our lives beyond simply a paycheck. The current iteration of tech leaders is drastically shaping our lives and it's rarely, if ever, to our benefit.
VC dictating which tech receives funding and which does not is clearly a broken model that should be discarded for something better.
While I know this comment is satire: the "no memory errors" bit is particularly funny because that was part of the reason given for the Rust rewrite, yet the merged code has over 10k unsafe blocks. It's like rewriting a JS project in TS using any everywhere lol
Maybe github is only showing the first 100 files? There are definitely more. Ran this on commit 11a2e2c20b6746689298a1da76cec35b13d3405e, which was updated about 30 min ago.
I'm not a crustacean but knowing your vulnerability surface explicitly has value. You can target the unsafe blocks for extra scrutiny, possibly removal.
Yes, the idea is that when you write idiomatic Rust you isolate unsafe blocks behind safe functions and call those from the outside. The goal is to have as few unsafe code blocks as possible. This is not what's happening here.
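For illustration, a minimal sketch of that pattern (a hypothetical function, not taken from the Bun codebase):

```rust
/// Safe wrapper: callers never touch `unsafe`. The invariant the unsafe
/// block relies on (index 0 is in bounds) is established right here,
/// so the block stays tiny and auditable.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: we just checked that the slice is non-empty.
    Some(unsafe { *bytes.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"hi"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

A transliterated codebase tends to invert this: unsafe smeared through the call sites instead of fenced behind a handful of safe entry points.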
There are so many unsafe blocks because it is a straight transliteration of the zig source code.
It would be foolish to combine a rewrite with a refactor. Make sure the rewrite is correct, then use the unsafe blocks to guide refactoring.
They only just now merged the rewrite. Complaining that it's not idiomatic yet is fundamentally misunderstanding how codebase migrations work.
Also, I'll remind you that every line of the previous codebase was 100% unsafe (in the Rust sense). I'm willing to bet many of those unsafe blocks are simply due to the FFI boundary calling out to compiled C libraries. That's not the sort of thing you tackle at the same time as a million-line rewrite.
.claude/workflows/phase-a-port.workflow.js
You are an adversarial Phase-A verifier. Find every place the draft .rs DEVIATES from PORTING.md.
I wonder how
This whole thread is an overreaction. 302 comments about code that does not work. We haven't committed to rewriting. There's a very high chance all this code gets thrown out completely.
Not to defend this mess, but I don't know how you are all falling for this. The commit is literally named "revert proc.exited change in spawn.test.ts". It's putting the test back to the way it was from before a42bf70, a commit FROM EARLIER IN THE SAME PULL REQUEST.
/u/tracernz you can make this point, but find a legitimate example, please.
The thing is, you can send off somebody to fix 4k loc. But once it's in the million range, it's done. Then you have to chase coverage or whatever metric.
Anyone have advice on how to remove this clown makeup I've got on after I defended Bun saying that everyone was overreacting and that Jarred wouldn't just go all in on some AI coded slop on a massively complicated project?
Super weird. 90k star project, so evidently popular and in production use. +1M lines and -4K gets accepted and it's not even faster? What even is this world anymore.
Well, as background, the company behind Bun was bought by Anthropic, and there has been renewed interest in avoiding memory bugs with the existence of Anthropic's Claude Mythos, so I think that's why they're rewriting it in Rust. (Though I think mostly unsafe Rust at the moment, so there's no security benefit yet.)
It's a combination of both things. Zig took a hard stance against AI contributions, making Bun's contributions, especially after it was acquired by Anthropic, completely impossible, even if you don't take into account the quality of such contributions. It would be infeasible for Anthropic to fix the issues it found in Zig itself, and Zig famously does not prioritize issues from sponsors, which means Anthropic would have had zero control over Zig. While they don't have any control over Rust either, at least Rust is a mature language compared to Zig, much less likely to have serious issues. And Anthropic might be well positioned to join the Rust Foundation along with many other companies that have an interest in Rust, and I believe Rust does not have the same anti-AI policy as Zig does. With this move, Anthropic may be getting not only a better memory-safety story with Bun, but also far fewer language-related bugs, much better library support, and maybe even a small amount of control over the language it relies on (better than zero with Zig).
From what I've read, Bun's Zig compiler changes were bad: hacked-around, vibe-coded slop that brought a lot of bugs at runtime. If they were responsible adults, instead of throwing a tantrum like a child and vibe-slopping an entire port to a different language, they could've, you know, been adult developers and cleaned up/ironed out their changes in a way that actually provides value, where most of the code was either written or properly reviewed by a human, instead of chasing the superficial, not-at-all-a-measure-of-"good" metric of "but it compiles faster" while fucking up the runtime.
I don't think compromise is a real option here. If Zig has a strong policy against the core tools and products from Anthropic, moving separate ways is the correct choice, regardless of what you think about either party.
It's a straight transliteration of the old code base, not a clean-room rewrite.
They know the structure of the code for much the same reason I know what happens in each chapter of the German translation of Dracula, despite having read it in English.
Learn to accept and move on when you make a minor error, like misreading "merged in" as "full replacement of", instead of digging in your heels. It's not a big deal unless you make it one.
You know that lowest common denominator, the army of incoherent morons the industry picked up over the past decade, when low-skill dev jobs that could "ship fast!" were the most important thing to shareholders?
You're now interacting with those people. They should never have been allowed within 20 feet of this industry, and if they were the only ones losing their jobs in this current madness, then I'd be all in favor of the madness.
As someone who used Claude to build (rewrite) a project from Go to Rust, Zig, Java, Python and TypeScript... it works. It's VERY fast (days, not months, including tests, etc). BUT... I still would be hard pressed to release it without expert code reviews. I don't think we're there yet.
After maybe a half hour of tinkering with my pi-agent, it literally one-shot porting an entire legacy codebase with one prompt. There's been no issues; it actually found, documented, and intentionally preserved real bugs in the old code base.
If we didn't need to verify the output of AI generated code I don't think language would matter, but AI isn't good enough to write perfect code.
Using a language with stronger guarantees pushes a lot of that human verification into the compiler. The bottleneck is verifying the code, so if you can eliminate whole classes of bugs with a memory safe/thread safe language it seems like a no brainer. Of course, you still need to verify logic, but IMHO that's an easier problem for most programmers than understanding memory safety or concurrency bugs.
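As a small illustration of what "pushed into the compiler" means (my sketch, not from the PR): sharing a counter across threads in Rust forces you through `Arc`/`Mutex`; the unsynchronized version simply doesn't compile, so that whole review burden never reaches a human.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Handing threads a bare `&mut u64` would be rejected at compile time;
    // the type system insists on a thread-safe wrapper.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 8_000);
}
```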
The obvious counterpoint to that is that the code is mostly unsafe Rust, so while the language offers memory and thread safety, they've literally opted out of the big advantages of switching over to Rust, without a clear roadmap to safe Rust, which means this is really just a massive mess that will be impossible for any human to actually work on in the future.
It really doesn't look like most of the code is unsafe though? Cloned it out of my own curiosity and ran cargo-geiger; if you're looking at just Bun's crates, something like ~7% of their expressions include unsafe. The third-party deps that got pulled in end up having almost the same ratio of unsafe as the generated code. IMHO, some of the runtime unsafe looks like it could be undone, but the unsafe around memory/file/network handling is all fine to remain as "unsafe". The goal isn't to remove all unsafe, it's to ensure your "unsafe" is carefully handled.
| Category | Pkgs | Total exprs | Unsafe exprs | % unsafe |
|---|---:|---:|---:|---:|
| Everything (workspace + deps) | 207 | 876,545 | 61,023 | 6.96% |
| Bun's own crates (`bun_*`) | 100 | 659,798 | 46,524 | 7.05% |
| ├─ Bun FFI wrappers (`*_sys`) | 14 | 17,234 | 3,153 | 18.30% |
| └─ Bun non-FFI | 86 | 642,564 | 43,371 | 6.75% |
| Third-party deps | 107 | 216,747 | 14,499 | 6.69% |
And top ten bun crates ordered by unsafe expression count
Why are you assuming no one understands the code? Do you think Jarred had the full code base in his head anyway? Once you have a very large code base, you can only be familiar with a small amount of it at any time. If I came back tomorrow to my work project and it had been rewritten from Java to Rust, I don't think I would have any problem (assuming I know Rust well enough) working on it even without AI assistance as long as it used the same architecture and "style" (since that helps with finding things, making some basic assumptions - e.g. all requests are handled by implementations of Controllable, that does not change just because of the language unless the language is really weird!), which is what they tried to do.
I really don't agree with the position that just because AI wrote the code, the code is not understandable anymore. I've been using AI to write code, but I still understand basically everything it does and most of the time, the result is very similar to what I would've done by hand with few exceptions (sometimes it just goes in the wrong direction from a design point-of-view but in such case you provide more details about what you actually wanted and it does the right thing).
The amount of new code is incommensurable and was generated in a few weeks. Look at the diff summary.
> I really don't agree with the position that just because AI wrote the code, the code is not understandable anymore.
I didn't say that (nor in this case mean it). But now that you bring it up: anything beyond a superficial understanding of the code requires spending time with it and thinking on it. If you skip the last part (which you clearly do if you generate that amount of code), you simply embrace superficial understanding.
I'm tired of these comparisons. We've created machines to automate algorithmic tasks. We only tolerate human flaws because we are humans. Automating human folly is the most ridiculous thing you can defend.
It's still a human task. Agents don't decide to work on their own. You still need to prompt it and hit enter. So we're automating human tasks, just non-deterministically. But when you think about it, getting humans to do a task was never deterministic, so why would automating that task need to be?
> We've created machines to automate algorithmic tasks.
Because up until a few years ago, this was all that was possible. I think you're just stuck in the past, the same way someone might've once believed it was impossible to automate anything with a computer.
It's not determinism we want but predictability. Nondeterminism is useful because it allows for error tolerance. Chaotic systems are deterministic and unpredictable (e.g. double pendulum, weather), while Monte Carlo algorithms are nondeterministic and predictable (that's their entire point).
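To make the Monte Carlo point concrete, a small sketch (in Rust, assuming the `rand` crate): every run differs, yet the answer is predictable within known error bounds.

```rust
use rand::Rng;

fn main() {
    let mut rng = rand::thread_rng();
    let samples = 1_000_000u64;
    let mut hits = 0u64;
    for _ in 0..samples {
        // Random point in the unit square.
        let (x, y): (f64, f64) = (rng.gen(), rng.gen());
        if x * x + y * y <= 1.0 {
            hits += 1;
        }
    }
    // Nondeterministic run to run, but reliably close to pi:
    // the standard error shrinks like 1/sqrt(samples).
    println!("pi ~= {}", 4.0 * hits as f64 / samples as f64);
}
```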
Trust is built on predictability. The only reason calculators are used is because people know what to expect. The only reason compilers are used is because people know that formal language constructs will translate, possibly non-deterministically, to a predictable outcome.
If you're calling unpredictable systems "automation", that is your right, but I'm sure this is the kind of attitude you wouldn't want to be treated with. Or perhaps you're enthusiastic about increasing unpredictability in food production, medicine and education?
Sure. High-risk applications will take longer. An industry that uses "THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND..." will adopt faster.
> Trust is built on predictability. The only reason calculators are used is because people know what to expect.
People use calculators to give them faster calculations than doing them by hand, not because of trust. It's speed. People prefer calculators to working out long division by hand because of speed, not trust.
> The only reason compilers are used is because people know that formal language constructs will translate, possibly non-deterministically, to a predictable outcome.
Except for when they don't. See: compiler bugs.
> Or perhaps you're enthusiastic about increasing unpredictability in food production, medicine and education?
But we absolutely do deal with unpredictability in those areas. What are you talking about? The entire reason we have testing of food and inspectors is because of the unpredictability of food production and medicine. We have FDA inspectors exactly because growing food isn't deterministic. We have to test for potential side effects in medicine because human bodies aren't deterministic. Don't even get me started on how unpredictable academia can be. In all these cases, we built predictable, human controlled, deterministic guardrails around nondeterministic systems. AI for coding is no different.
It's a trivial task for humans to follow conventions, perform checks with their higher-ups, and use their common sense.
Are you sure it's trivial? There are waaaay too many developers who do not follow conventions or check with higher-ups or use common sense.
That is not trivial for AI. And it is not always performing the same steps. It's how it works.
Honestly compared to some of the people I've worked with, it seems very trivial for AI in comparison. AI is easily better than half of the devs I've had the misfortune of working with.
If you are working with bottom of the barrel devs for sure. But I am not looking forward a future where every change is made and supervised by a 2 weeks bootcamp dev that doesnt care and has no responsibility over the code.
If you consider 50% to be "bottom of the barrel" sure. Like it or not, that's the world we live in. It's the world most people in most industries live in. A lot of people are bad at their jobs and don't give a shit.
And I have bad news for you: that's not the future. That's the past. Bootcamp devs who didn't give a shit were around long before AI coding. That's not even remotely new.
The structure can have pretty much a 1:1 correspondence. So it doesn't have to be in context all at once.
An AI can definitely do that if it divides and conquers one module at a time.
And static analysis can be used to more deterministically verify that module interfaces are preserved.
This whole thing is an insane move, and it will definitely not be flawless. But the translation quality is probably better than most people reflexively assume.
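On the static-analysis point, one cheap version of that check (a sketch with hypothetical names, not Bun's actual tooling) is to write the expected module surface down as a trait and let the compiler verify the translated module against it:

```rust
// The interface the old module exposed, captured once as a trait.
trait Base64Api {
    fn encode(&self, input: &[u8]) -> String;
    fn decode(&self, input: &str) -> Option<Vec<u8>>;
}

// The freshly translated module must still satisfy the contract to compile.
struct PortedCodec;

impl Base64Api for PortedCodec {
    fn encode(&self, input: &[u8]) -> String {
        input.iter().map(|b| format!("{b:02x}")).collect() // placeholder body
    }
    fn decode(&self, _input: &str) -> Option<Vec<u8>> {
        None // placeholder body
    }
}

// Compile-time assertion: the build breaks if the surface drifts.
fn assert_implements<T: Base64Api>() {}

fn main() {
    assert_implements::<PortedCodec>();
}
```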
Just from the internals of LLMs you must know it is not a deterministic approach but a random-sampling one. Sure, it may be better than random guessing, but it is still a pain.
When one removes 4k lines and the AI spins out 1M to replace them, how long do you think it takes them to read the change and understand what the new 1M does?
The contribution they rejected in zig wasn't really about AI (even though they don't want that), the contribution was really low quality and wasn't a good change for the language.
If Anthropic had wanted to troll Zig they should have forked it, call the fork Zag (where the A is the Anthropic logo) and just watched the cargo culters all abandon Zig for Zag.
But then that wasn't likely as Anthropic isn't run by 1990s-era Bill Gates.
Bun was already running on a fork of Zig, where they added their own features that were a) made with AI, because bun's lead is obsessed with it and b) the zig team weren't particularly interested in anyway. So instead of dealing with this situation, they decided to abandon zig
I think this is just part of the issue. Zig is rather C-like in the way it's coded and very different from C++ and Rust, two languages that give one the same low-level abilities and runtime characteristics but allow for a lot of abstractions and syntax sugar, making code often read more high-level.
Recently, Bun's creator was lamenting that they have a lot of memory issues and crashes due to using Zig. Since this is not a problem other Zig users have, there is an argument to be made that LLMs are better at writing code for languages like Rust that are more high-level. And if you don't write the code yourself, but rather read and approve it, memory errors slip through far more easily. Having them prevented at the compiler level probably helps with agentic coding big time.
It is an issue other Zig users have though, they just might not find it a problem. Just look at the Zig issues page; even they have a bunch of memory-related bugs in the compiler.
Yeah, they're rewriting it because the developers are LLM-brain and were just given free access to LLM tools. Now also add that these developers have the potential to gain multigenerational wealth in the tens/100s of millions of dollars if Anthropic IPOs.
Nothing to do with Claude. Poor memory management in Zig made them want the compile-time memory checking of Rust... which they then proceeded to not use, by using unsafe.
I would never undertake a rewrite in this way. But any time I have done a rewrite: Do zero behaviour change. Fix bugs and add features and security etc. later.
> (Though I think mostly unsafe Rust at the moment, so there's no security benefit yet.)
Safe Rust is not a security benefit. It's not "safe" in the sense you seem to be implying. There is a peripheral security benefit, to the extent that your programmers are not capable of writing certain memory bugs using safe Rust, but unsafe Rust is not dangerous Rust, it's Rust in computer science mode, without the guardrails.
I don't think that's the reason why. Anthropic is deeply invested in AI for coding, and it's an unfortunate reality that tons of projects in the world are hamstrung by their initial implementation language, with a rewrite being too costly to do.
And so, what if AI agents get good enough to drive that cost down to be reasonable? There's a lot of money to be made for Anthropic if it's true.
Technically speaking Rust's aliasing requirements only apply to references. If you primarily or exclusively use raw pointers, there are no such requirements.
I'll let the nerds at MIT explain it better: https://web.mit.edu/rust-lang_v1.25/arch/amd64_ubuntu1404/share/doc/rust/html/book/first-edition/unsafe.html#:~:text=Rust's%20main%20draw%20is%20its,to%20verify%20this%20is%20true.
But here's the summary: unsafe lets you do three additional things that you normally can't do: dereference raw pointers, call unsafe functions, and access or update mutable static variables.
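A minimal sketch of the aliasing distinction (my example, not from the linked page):

```rust
fn main() {
    let mut x = 42i32;

    // Raw pointers may alias freely; the borrow checker ignores them.
    let p1: *mut i32 = &mut x;
    let p2: *mut i32 = p1;

    // Only dereferencing them requires `unsafe` (and a justification).
    unsafe {
        *p1 += 1;
        println!("{}", *p2); // prints 43
    }

    // By contrast, holding two live `&mut x` references here would be
    // rejected at compile time.
}
```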
Performance isn't all that matters. Bun wasn't very reliable, their issue tracker was full of segfaults. Remember that Zig is not a memory safe language and requires manual memory management. In a code base of this size, errors are bound to happen. And a lot of errors if most of the code is written by AI. Moving to a higher level language that prevents most of these issues is only logical.
I never understood this argument, Ghostty is written in zig and if you search for segfault in the github's closed issues there's like 30 results, and half of them aren't actually segfault. Maybe it's not the language, maybe the code was shit to begin with and the slop added on top since 2025 didn't help.
Another example is TigerBeetle, the code is so robust you could put it on a plane.
Rust and Zig are both fine languages, but let's not pretend a Rust rewrite is going to solve Bun's reliability problems.
People forget that Bun's Jarred is a high school dropout and a Peter Thiel fellow. Don't expect competence from a person okay with hanging out with Epstein associates and monarchists.
To be fair, the existence of several high-skill software engineering teams that can produce high-quality, stable Zig programs doesn't imply that a team of JavaScript new-grads equipped with an AI slop swarm can do it.
Zig maximizes ergonomics and freedom, neither of which make sense to give to an AI.
I don't see how vibe coded segfaults are Zig's fault. Nor do I believe that Rust will be some sort of silver bullet that fixes all of the coding problems to the satisfaction of someone who was too impatient to finish high school.
Bun's development has always seemed very unprofessional to me, which is a shame because it's genuinely a cool project. But they always do weird stuff like this without giving it much thought. I would never rely on it.
From the beginning I had a weird feeling about Bun. They were too quick to brag about faulty benchmarks, too eager for hype, too hungry for venture capital. It felt different from other open source projects, less trustworthy. The last few months have proven those instincts true. The way Jarred Sumner has embraced AI slop is disrespectful of the craft that is software engineering.
Bun feels less trustworthy than ever, and I don't know why anyone wouldn't use either Node or Deno.
I never to this day understood why anyone would pick Bun over Node/npm. It's historically had huge issues with runtime stability (most of the issue backlog is crashes) and Node compatibility.
Oh, it installs packages 2 seconds faster? That's so useful for something I do once a month.
They were good at marketing. They also made the right call on Node and npm compatibility, which Deno implemented too late. I think a lot of people bought into the hype and picked it expecting a drop-in replacement that was faster than Node without ever actually benchmarking their specific use case.
That speed actually is a huge bonus to me. I don't actually use it for anything else tho, just the package manager and occasionally running TypeScript scripts
I'm not a JS-focused dev, so they're not my main environments either way (or maybe because of that), but I've always felt comfortable with Deno while refusing to install Node. Deno is a single executable and has an execution permission system. Node? I have no idea. It's a big installation I don't know much about, it runs arbitrary install scripts, etc. etc.
Bun felt better than Node because it's a single executable. But, yeah, it's certainly no longer an independent FOSS tool - a node-compatible node replacement.
Deno does more and fundamentally improves the ecosystem tooling, which I like. I just haven't worked much with it.
I integrated deno lint to lint web browser JS, which is useful even without committing to or using anything from the JS, TS, or npm ecosystem.
I just use Node + pnpm. It's pretty nuts to use fk bun as a package manager instead of something more stable like pnpm.
And any performance advantages Bun may have over pnpm will probably disappear soon anyway, since pnpm is also being rewritten in Rust (just not in the messy way Bun was).
At the time when I started using Bun, it appeared to be the lowest-friction way to produce a standalone executable from a Node.js project. Also, the HTTP server uses uWebSockets, which is good, but I was always running into issues creating a repeatable install, as uWS versions are quite strictly pinned to specific Node versions. I see that the Deno APIs look pretty similar to what Bun is doing, though, so I may be tempted to make the switch with the recent slopping going on in the Bun project.
Have they made it simpler though? I remember when I looked into it before, it was quite experimental and nowhere near as easy as `bun build --compile --target=bun-linux-x64 ./index.ts --outfile myapp`. From a quick browse of that page, I don't think you can even build for other platforms using the Node thing?
When I skimmed it earlier, there was a note about some things not being possible when generating cross-platform executables, so presumably it is possible, but it's not clear how you'd do it.
It seems slightly more complicated (you need to create a config file rather than passing everything as command-line arguments) but it's definitely possible. That said, it looks like it has the issue that a few recent Node.js features have, where the documentation is pretty poor and makes the feature seem much less usable than it is.
This is not just noise, this is the future.
The future is a universal programming language where you can reuse code just by importing it, and it compiles to any platform or networking layout. LLMs will become way less attractive after that, because writing a program will be easier than with an LLM: deterministic, faster, and cheap.
Not even close. LLVM is mostly focused on imperative semantics for code generation. I am talking about something that has temporal logic and network logic natively, which outputs the set of programs that fulfills your intent instead of just one program, with each platform selecting the most optimal program to run. You can even parse human language into the semantics of this new language, so no, LLMs are not the future.
We should care because Bun was a popular tool in the (holy /s) triangle of Node, Deno, and Bun. This much of a shift puts the established Bun project and tooling into question.
This is crazy on so many levels, this much code debt and knowledge gaps in a code base is unsustainable imo.
As someone managing a pretty huge codebase myself, one of the hardest things long-term is keeping an actual mental model of the system and understanding why things exist.
I use AI generated code. Carefully and I keep note of things I will need to go deeper into and probably refactor later for it to be scalable, but generating so much code and merging so quickly is simply insane
Life is probably easier if you don't care about understanding the project you're working on. It's not craftsmanship or engineering. I guess it's a different kind of product development. Letting go of understanding. Embracing a blackbox [under your supervision] instead.
Surely it's a personality thing, whether one can feel comfortable with that. Maybe it's easier when one didn't understand how things work in the first place.
We'll see how it will work out. I'm also skeptical that this leads to quality and maintainability long term. Maybe this is the first iteration and they will improve the substance over time. Who knows.
I've heard that it has some type of sandboxing, being recommended by the yt-dlp project. Probably a good idea especially with the current state of security in the JavaScript ecosystem.
Wouldn't you want to wait and see if the result is worse first?
I saw that some test appears to have been just "worked around", but that does not mean it happened in a large amount of cases. I would expect Bun to only merge this after doing extensive "real world" testing, but TBH no idea how much they actually did.
From the announcement, they told Claude to translate the code "mechanically", i.e. as much 1-to-1 as possible. That means you would only need to review a representative sample of that translation to be fairly certain the output is as expected, especially if the translation was done not by Claude, but by a transpiler which Claude came up with.
That's why I think they may be right, you shouldn't need to review every line as long as there's an adequate amount of tests that prevents regressions. The details are still not made public, so I would wait a bit to make any decisions. If you tried the new Bun version in your code base, perhaps you could contribute to providing feedback if something went wrong... have you not even tried it?
Like do you even understand what you're saying? Let's say I do run my codebase with the new runtime and it "works" (I see no perf issues and no instant crashes). You realize that's only a third of the battle right?
Another third is maintainability which is literally gone out the window now. Only an LLM with an absolutely obscene context size (Claude and Gemini are the only 2 I can think of) would even begin to be able to ingest this
The final third is security. Nobody is auditing this. I don't care how good the "transpiler" is, you still audit your fucking code lol. This is unauditable.
Oh and 6 days is simply not enough time to wait to merge a complete rewrite of a runtime.
In Enterprise it takes about a month (at least) of testing just to update the version of a platform we're using. Years to update the language version we're writing against. And they waited 6 days to merge a complete rewrite (in 6 days) of the engine lots of these platforms would be running on.
> Another third is maintainability which is literally gone out the window now.
Based on what are you concluding that?? The code should be mostly the same with a different syntax if the goal of the translation was achieved. Did you look around the new code? It seems totally fine.
> Only an LLM with an absolutely obscene context size (Claude and Gemini are the only 2 I can think of) would even begin to be able to ingest this
Have you ever used AI in a big project like that? That's absolutely not true. The LLM will make a plan with instructions for each step of the translation, then a "sub-agent" (with clean context) will work only on that. This is how stuff like this is even possible.
> The final third is security. Nobody is auditing this.
This should actually be better now?! Zig is not memory-safe, so unless there's a lot of unsafe Rust in the translation, the new code should be considered safer. Was the Zig code audited?? I find that hard to believe, you seem to be making assumptions out of thin air.
> In Enterprise it takes about a month (at least) of testing just to update the version of a platform we're using.
I work in security myself. Security does not happen by people looking at code, it happens by people running tests and identifying patterns that may be unsafe, then investigating those with more care.
If it takes you a month to do simple things like update a version, then I'm sorry but you don't know what you're doing and that explains all the baseless assumptions and conclusions you're making.
You sound like a complete doofus that's high on AI farts.
> The code should be mostly the same with a different syntax
SHOULD. Yeah, it SHOULD. That does not mean it IS. I don't know what sort of finger doodling you do for a job, but some of us do real work where building on a foundation of SHOULD is an absolute no-go.
> That means you would only need to review a representative sample of that translation to be fairly certain the output is as expected
I told my coworker to sweep the floor of the whole hangar, I'm sure that he swept the whole floor given that this one square inch is clean now. No need to look, trust me bro.
That does not follow. Unless you know whether the test that was worked around represents a real regression, and if so, whether that happened in a significant amount of tests, you cannot arrive at that conclusion at all. To the contrary, the claim in the PR is that the new implementation fixed many existing issues, and that by being written in a memory-safe language, it probably prevented even more unknown issues. I find that plausible and without more evidence to the contrary, I think I will believe Jarred for now.
> Unless you know whether the test that was worked around represents a real regression, and if so, whether that happened in a significant amount of tests, you cannot arrive at that conclusion at all.
All we can tell, from the approval of a single worked-around test, is that working around tests exists as a policy. That is more than enough to make alternatives better on this objective axis.
> I would expect Bun to only merge this after doing extensive "real world" testing, but TBH no idea how much they actually did.
The entire rewrite was started only like 2 months ago, and wasn't even functional for most of that time. They couldn't do any real world testing, even if they wanted to, there just wasn't any time allocated for it.
It's ridiculous alright, but also a good opportunity to fork the Zig version and continue with something sane. I mean who will seriously consider Bun in production now when there's literally nobody understanding the code anymore? This is a dick marketing move using a once-useful project as a stepping stone to prepare for a grand IPO.
There's a lot of hate for this AI-driven migration on this thread. I'd like to summarize what I'm seeing to find out if I'm understanding the gist of the skepticism:
1. The code wasn't thoroughly reviewed by humans, meaning it could be hard to maintain and is more likely to have bugs.
2. The code still has a lot of unsafe blocks.
3. Tests were re-written, possibly damaging coverage.
4. Things were great as-is, no need to migrate languages.
2 and 3 I can dismiss out of hand. For 2, the code was 100% unsafe in Zig, and now the areas that are unsafe are highlighted and easier to fix piecemeal. That was a stated goal of the author. For 3, the only given example turned out to be a misunderstanding by the commenter: they saw what looked like a regression in the test, going from a deterministic wait for quiescence to a sleep, but that was a rollback of an attempt to improve the test, so it's in the same state as the mainline prior to the merge.
#4 is presumptuous at the least: if you aren't working on a codebase every day, you shouldn't second-guess well-evidenced design decisions by those who are. Jarred presented several reasons why this was a good move for Bun.
1 is the more substantial argument. However, I've worked 15 years in a monorepo with hundreds of millions of lines of code written by very well-compensated engineers, and I am highly skeptical of the open-loop quality humans can bring to the table. Donald Knuth himself was so confident in the source code to his TeX layout engine that he offered to double the bounty on each subsequent bug. He had to freeze it. There's gonna be bugs. From what I've heard, Bun's test suite is substantial, and I'll take one test over the attestations of three Knuths.
Basically the same story with code maintainability, but that's an even weaker argument: this was a migration, which means the architecture will largely be intact. I haven't personally verified this so I'm open to new evidence, but if that's the case, then any "interesting" choices by the LLM design-wise are quite limited in scope.
Okay, what did I get wrong here? Are there other arguments?
Because it's absolutely fucking stupid, was done in a terrible way, and there is no way that there is any kind of reliability with it.
Those are assertions, not arguments. What did I get wrong?
There are lots of [counter-arguments].
I summarized and addressed all the serious arguments that I saw on this post. I acknowledged the strongest argument, the one observing that an enormous line count in the merge implies less-than-thorough review. But you're the first to reply in a substantive way. I can't "refuse to acknowledge" what doesn't exist. The other reply spent most of their comment insulting me and the rest making false statements: they claimed that no one on this post was arguing that AI-driven migration could introduce bugs, and that's easily falsifiable. Search for the word "mistake" if you're not convinced.
In case you didn't know, they're doing a progressive rollout. The old codebase isn't deleted. As far as I can tell, they take risk management seriously.
There are some people who have real project dependencies on Bun, and they have all the prerogative in the world to be concerned. One million lines of Zig --> Rust is a miraculous feat, and the onus is on Bun to prove it worked. There doubtlessly are errors, and short term, every bug will be jumped on as proof positive that the entire effort is suspect.
But the majority of your downvotes are coming from people who are economically fearful. Either they have good jobs now or they're students/juniors who aspire to good jobs, and they're afraid the robots are going to rob them of their opportunities.
There's no way that cohort will engage with your points honestly.
If the headline gave any hint that this topic was about AI, we'd have non-technical audiences pouring to downvote as well.
> Donald Knuth himself was so confident in the source code to his TeX layout engine that he offered to double the bounty on each subsequent bug. He had to freeze it. There's gonna be bugs.
No one said humans don't produce bugs.
> I am highly skeptical of the open-loop quality humans can bring to the table.
Cool, then go tell your chatbot about it instead of bothering fellow humans.
1:1 transpilers that don't share the same runtime and type system generally don't work, yes. You'd need some degree of "well, maybe this is the semantically most similar output", which means you're in LLM territory anyways. We could perhaps start with how Rust handles memory completely differently?
Yeah man no idea how they compiled stuff before LLM /s
>We could perhaps start with how Rust handles memory completely differently?
There is nothing in Rust that prevents it from being a compiler target. Unsafe Rust exists for the cases you think are hard. I'm sure the LLM slop they made is full of unsafe Rust too.
Buddy, I worked on compilers professionally for 6 years and I am here to tell you that you cannot arbitrarily transpile from one language to another when they don't share the same runtime.
Yes? You're inherently in a probabilistic world for a problem like this. It's exactly why LLMs can be effective.
You can argue whether Bun requires such a change or not, that's debatable. But it is not debatable that you could create a Zig-to-Rust transpiler that works well for a complex codebase.
Not that simple. This is the approach TSGO took, and they spent many months working on it, for a language that maps much better to its predecessor than Rust would.
It's far more complex to make it write a transpiler than to make it translate one language to another; translation is the original use case of the Transformer, after all.
Just a quick thought, but I imagine that this has totally obliterated the ability of any previous maintainer to contribute. Even if you previously had a grasp on the codebase, it was just torn from under you. Not to mention that I don't think people are very excited to deep-dive into an LLM slop codebase.
It looks like Bun will be an LLM-only product from now on.
The thing I'm most confused by is who decides who gets to be contributors now? If I open up Claude, tell it to pull down the Bun codebase and suggest me a code change, does that mean I get to make a pull request as long as it passes tests? I mean, it's not different than what he's doing.
It's not that you can or can't be a contributor. It's more like everyone who was contributing before had some knowledge of the Zig codebase. Now, it's all Latin (or, well, Just).
Any resemblance between the old and new codebase is mostly coincidental.
All of the repository's history will be obscured by the "rewrite in Rust" commits, which will make it harder for contributors to understand why some part of the code exists or works a certain way
Any bugs will have to go through the extra process of identifying if they existed only before or after the rewrite
Seeing that the only reviewers were LLM products is pretty telling honestly that this codebase no longer wants human contributions.
The rewrite in Rust was done over a much longer period of time, and with more human control. But for the sake of AI tools' PR, it's presented as having been done in a hurry and automatically.
So the formal reason is some memory safety issues and the elephant in the room / informal / way more strategic reason is anti-LLM policy at the heart of an Anthropic owned project?
Never a dull moment.
what was the big hurry to merge this?
where is the promised blog post by Jared?
the right way to do this would be to have a rust branch, let a bunch of external users test that, esp the ones known to use it heavily, and then fix+merge.
instead we got a bunch of twitter posts boasting about it, promising a careful merge once it was guaranteed to meet his claimed standards and break nothing, and of course none of that is remotely true
I love rust. I work in rust. It is so funny as an OG bun hater to see the project just get steamrolled like this, to just get a full AI rewrite to rust.
Here's a secret. Just because it's in rust, does not mean it's good code. It's just probably memory-safe code. This is going to cause so many edge cases and gotchas it's unreal. GL bun community.
the bun-in-rust move is interesting because it admits the original zig bet was strategic, not technical. zig made bun differentiated. rust will make it boring. the reason any production team would adopt bun was always going to come down to "does this thing get patched when there is a CVE", and rust has the contributor pool that zig does not.
It's a big vibecoded nightmare filled with unsafe code all around the place. The whole of the project will go into garbage soon when it turns out that a million lines of hallucinations isn't maintainable.
"vibecoded nightmare" depends on what got reviewed. if oven/bun has actual maintainers doing code review on the rust changes, the AI assist is just typing acceleration. if they are merging unread, that is the disaster scenario. the proof will be in the next 6 months of CVE response time. that is the actual test for whether the codebase is maintainable.
probably true, hard to tell from the outside. the actual signal will be when the first non-trivial security vuln drops and we see how long the patch takes. fast turnaround = team knows the codebase. slow turnaround = AI wrote it and nobody can read it.
fair, no team can review a million LOC by hand. so the actual test is whether the codebase has the testing + tooling discipline to make rust compile-time guarantees do most of the review work. if it does, the team only needs to review the unsafe blocks and the public API. if it does not, the project is dead. that is the binary.
> the reason any production team would adopt bun was always going to come down to "does this thing get patched when there is a CVE", and rust has the contributor pool that zig does not.
They just vibed a million lines of code into existence. I don't think that contributor pool is anywhere in their top-10000 list of concerns.
fair correction. "rust contributor pool" assumed organic contributors but the maintenance burden of a million LOC of AI-generated rust on a small team is different from a million LOC of human-written rust by the same team. the latter is hard, the former is harder. you are right to push on that. we will see.
no, it's because the core language we've been singing the praises of has a principled stance against the robot, so we're going with what the robot says instead, out of spite
If Zig isn't going to upstream the compiler improvements on principle, of course Bun is going to leave. They don't want to maintain a fork indefinitely. The entire saga is stupid.
So, I like Bun. I think what it's trying to do, which is to be a nice batteries-included-style runtime that could be a usable alternative to Python in the Node ecosystem, is really great. I'm using it for a personal project where I can use its single-EXE distribution feature to make releases easy.
That being said: lmao what the hell this was just an experiment LAST WEEK
Thought about this for a minute. I'll say, if they can pull this off, this is actually really fascinating. The idea that if you just throw a stupid amount of resources onto a problem, you can get a working solution in such a fast time is interesting to me. Obviously technical debt will be an issue I guess but at some point these AI agents are going to be rewriting and reviewing so much of the code so fast that -- and arguably we're already there -- the actual code is more fluid and changing with whatever the needs of the project are, rather than being this well-understood behemoth.
Now... If I were shipping actual production code with Bun, well, first of all it always seemed kinda bleeding-edge so this wouldn't surprise me too much but also, I'd maybe wait it out a bit before biting the bullet.
miversen33@reddit
Lol +1000000 -4000
How do you even review this?
qwertydiy@reddit
So Bun is now AI slop. Great thing I chose to use Deno in production instead. This will definitely cause 0 SRE issues /s
Advance_Diligent@reddit
LGTM
dodeca_negative@reddit
🚢
Dragon_yum@reddit
Oh no, the frontend fell off
lunacraz@reddit
is that normal?
Nova_496@reddit
Yeah, that's not very typical, I'd like to make that point.
al2o3cr@reddit
A bug? In software? One in a million
Swook@reddit
Let's gamble, try merging
userrr3@reddit
No one ever read that code and no one ever will. This is just setting themselves up so that the only possible contributor for features and bugfixes is Claude. Besides all else, baked in reliance on paid services (yeah I know they're part of Anthropic now)
I_AM_GODDAMN_BATMAN@reddit
I read the base64 implementation, it's a pretty good translation.
kaelima@reddit
Why would you write your own base64 implementation?
nemec@reddit
remember, these are Javascript developers so reinventing the wheel is in their nature.
vonmoltke2@reddit
I thought adding a library dependency to save four lines of code was in their nature?
Gogo202@reddit
I assume that it's pretty simple. It makes sense for large projects to avoid having hundreds of dependencies. Each dependency can bring several types of risks
kaelima@reddit
Ok, so why is the implementation dependent on simdutf::base64?
Gogo202@reddit
I guess nobody asked Claude either of your questions...
Top-Rub-4670@reddit
The zig compiler is written in zig. The zig standard library is written in zig. So there are two obvious reasons:
Someone had to be the first to write a base64 implementation, it feels natural that it would be zig itself.
A compiler or standard library really shouldn't have thousands of dependencies; sometimes re-implementing simple things is simply more pragmatic.
With that in mind, it's probably still that way in the rewrite only because it was a 1:1 mechanical translation.
Otherwise, had it been done by Rust programmers, the new Zig compiler would have ditched a lot of in-house code and added 7,202,831 crate dependencies instead.
kaelima@reddit
Sure, but the base64 implementation already relies on simdutf::base64, which is an external dependency.
userrr3@reddit
Let me clarify, I didn't mean no one will read any of it, I meant no one will read all of it
protestor@reddit
You know what's even better?
Either use https://crates.io/crates/base64 or publish this pretty good base64 translation as its own crate on crates.io. Then you can remove for good the base64 from the codebase of a damn Javascript runtime, and just depend on the external crate (either the existing base64 crate, or the crate you just created)
cake-day-on-feb-29@reddit
Not even, the code is probably too big for its context window.
The saving grace is that it appears this is more of a language translation. You could in theory write a deterministic translator to do the same thing, but then you wouldn't get billions in venture capital and "surprise" code.
parawaa@reddit
The biggest files are around 9k lines:
Substantial-Elk4531@reddit
p.rs is really well named, I know exactly what it does just from the filename
parawaa@reddit
I know right? But to be fair this was decided before the rewrite (p.zig). Still a useless name tho.
Kn0wnSoul@reddit
Do you never p?
rexspook@reddit
You simply don't. It'll be a mess
Daishiman@reddit
It's gonna be fine, really. Well-directed LLMs are capable of producing high quality code at astounding rates.
Wonderful-Habit-139@reddit
/s?
martin7274@reddit
"Claude rewrite it rust, make no mistakes plz"
dlg@reddit
And don't mention the goblins
account312@reddit
Unless they make it go faster.
Atulin@reddit
This, but unironically. That's exactly how this rewrite was done.
lunacraz@reddit
"this is for a life saving procedure"
DragonSlayerC@reddit
YOLO
ComfortableJacket429@reddit
That's the thing, you don't
Lalli-Oni@reddit
Many approaches. We aren't used to large reviews, and we shouldn't be, but that doesn't mean we are helpless in these situations.
For one, that diff could still have been reviewed by multiple PRs.
If not, then you can split it by concerns, areas or go by commits/updates.
Got to keep integration in mind ofc.
AI is heavy on test lines, so not surprised about tons of line additions, but if you dig into the tests they are often made to fit the case.
voyagerfan5761@reddit
Yeah, I wonder
Cafuzzler@reddit
Does that much code even fit into the context window?
hu6Bi5To@reddit
I know you weren't really asking the question, but I'll answer anyway.
Those review bots generally don't load everything into one context window. They all do slightly different things, but the general gist is (each stage will use its own context window, and often a stage will have several independent contexts):
1. Load the PR, project-level files like AGENTS.md, and the list of changed files (but not the contents).
2. Create one or more sub-agents, each investigating a different area (e.g. "review the UI change", "review the database change"; besides splitting horizontally, it could also split vertically).
3. Collect the list of findings from all the subagents, group common themes together.
4. Create a new set of subagents, one for each finding that the first batch found, with the goal of verifying the comments.
5. Report what's found.
With that kind of approach it can scale quite well despite the context window limitations. This costs money of course, you can easily spend $20-30 worth of tokens per review, but it's still cheap compared to the human cost and compared with the value of the Bun platform to Anthropic.
There are also bad review bots that just do things file-by-file and have no idea what's going on so only comment on irrelevant style issues, etc.
Ruben_NL@reddit
If this huge PR would be just $20-30, I might start using it as an addition to human reviews.
hu6Bi5To@reddit
That's true, you can probably add a zero or two for a PR this big.
tpolakov1@reddit
It's cheap for us, as most of the cost is passed by companies like Anthropic to the investors. Anthropic is exposed to costs that are much closer to objective values (although they'll get eaten by the investors anyway). The value here is not in the product itself, but in the claim that the LLM made it. Supposedly.
Cafuzzler@reddit
I was and thank you
frostbite305@reddit
It doesn't, but with enough reasoning and compaction the relevant ideas behind what the code does can be stored and accessed as needed or added to the context (if it's always important)
vivainio@reddit
AI doesn't work like that
sysop073@reddit
I really thought you were joking
SinisterCheese@reddit
FOSS-"community" reviews every single line and thing, because that is how it works and why FOSS-stuff is the best and most secure stuff there is! Right? And that is why XZ vulnerability was discovered before it made it in... Right? Right?!
zacker150@reddit
It's a transpilation. You have the old code and the new code. You can just run the two versions with random inputs (fuzzing) and make sure they produce the same output and side effects.
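A minimal sketch of that differential check in Rust (illustrative stand-ins, assuming the `rand` crate; in Bun's case the two sides would be the old Zig build and the new Rust build of the same routine):

```rust
use rand::Rng;

// Two independent implementations that must agree on every input.
fn old_hex(input: &[u8]) -> String {
    input.iter().map(|b| format!("{b:02x}")).collect()
}

fn new_hex(input: &[u8]) -> String {
    const TABLE: &[u8; 16] = b"0123456789abcdef";
    let mut out = String::with_capacity(input.len() * 2);
    for &b in input {
        out.push(TABLE[(b >> 4) as usize] as char);
        out.push(TABLE[(b & 0x0f) as usize] as char);
    }
    out
}

fn main() {
    let mut rng = rand::thread_rng();
    for case in 0..100_000 {
        // Random length, random bytes: cheap structure-free fuzzing.
        let len = rng.gen_range(0..1024);
        let input: Vec<u8> = (0..len).map(|_| rng.gen()).collect();
        assert_eq!(old_hex(&input), new_hex(&input), "divergence on case {case}");
    }
    println!("100k random inputs matched");
}
```

Catching side effects (file handles, timers, allocator behavior) is much harder than comparing pure outputs, which is where this approach gets expensive.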
agildehaus@reddit
You get the public to do that by making a release.
shockputs@reddit
mostly comments, if you actually check the code.
SnowdensOfYesteryear@reddit
Yeah what's the point of a review? If you want it, push into master directly
nollayksi@reddit
@claude please review. Make no mistakes and find all mistakes.
Rigamortus2005@reddit
Inshallah and vibes
jordansrowles@reddit
I've done something similar. Create a branch like 1.2.0 that diverges from main, then branch from 1.2.0 with features. Each feature pull into 1.2.0 is relatively small, but the full branch back into main is massive
Lothrazar@reddit
TF is bun
supermitsuba@reddit
These are things Sir Mix-a-Lot's anaconda likes.
subsecond@reddit
Take my upvote. That shit is funny (and true).
supermitsuba@reddit
Node.js runtime replacement.
Pesthuf@reddit
You have to wonder why they didn't just rewrite Claude CLI in Rust, but hey, good for them.
headykruger@reddit
I thought there was an ongoing rewrite of Claude code
miversen33@reddit
This time in html 2 lol
synn89@reddit
Naw. TCL/TK. Gotta have that desktop GUI.
dodeca_negative@reddit
mothzilla@reddit
Someone should rewrite Rust in Claude.
hyrumwhite@reddit
What if we just, like, wrote the code in markdown, then we have Claude write it in every language, and, like, you just pick your favorite flavor?
Darkoplax@reddit
Jarred and Bun team are not the Claude Code team
I trust Bun but not as CC
Wonderful-Habit-139@reddit
Jarred works on Claude Code as well.
Darkoplax@reddit
Is he? He just joined
Wonderful-Habit-139@reddit
https://jarredsumner.com/ read for yourself.
Atulin@reddit
They kinda-sorta are, since Bun is owned by Anthropic now
ichiruto70@reddit
Why do you think a CLI being written in Rust is gonna be better?
lurkinas@reddit
bcuz rust superior. memory safety. borrow check. entire class of issues now impossible, bro.
Stijndcl@reddit
It's just a CLI tool that sends some API calls, there's no reason for it to require half a GB of memory. A Rust one would be far more lightweight.
ichiruto70@reddit
That's not because of the language, it's because of how it is written. And with their pace, I am not sure if it will become better in Rust.
I have written CLIs in Rust and JS and I like the JS ecosystem more for it. But that's my opinion :)
phillipcarter2@reddit
…but they did? This happened a while ago
Stijndcl@reddit
No it didn't
BrodatyBear@reddit
What? A while ago they leaked code and it was still a js/ts/react mess.
The only RIIR efforts were community ones.
charmander_cha@reddit
It will probably be done one day; the models understand TypeScript and Python better, it's more a question of being able to ship quickly
buildingstuff_daily@reddit
+1000000 -4000 is absolutely unhinged. at that point the PR review is basically just vibes and running the test suite. nobody is reading a million lines of diff, you just trust the benchmarks and pray
programming-ModTeam@reddit
No content written mostly by an LLM. If you don't want to write it, we don't want to read it.
vips7L@reddit
The state of our industry is so sad. It was amateurish before but now it's a clown show.
thy_bucket_for_thee@reddit
Does it make you feel better that these people may capture a $100 million bag and help dictate the direction of tech for the rest of your life?
CherryLongjump1989@reddit
Too late. These people are Peter Thiel acolytes and he's been dictating the direction of tech since the 90's.
And yet, this high school dropout used untold millions of dollars of VC money to damage his own reputation and that of everything that he is associated with - including Bun, Anthropic, Rust, Zig, etc.
vips7L@reddit
As long as I still can get paid it ultimately doesn't matter and if I can't get paid I can start burning down data centers.
programming-ModTeam@reddit
r/programming follows platform-wide Reddit Rules
vips7L@reddit
Bootlickers.
thy_bucket_for_thee@reddit
I read your comment, but I disagree. These things absolutely matter in our lives beyond simply a paycheck. The current iteration of tech leaders are drastically shaping our lives and it's rarely, if ever, to our benefit.
VC dictating which tech receives funding and which does not is clearly a broken model that should be discarded in favor of something better.
helloworldpi@reddit
Fastest rewrite I have ever seen. I wonder how they did it!
fill-me-up-scotty@reddit
You're absolutely right!
TheAlaskanMailman@reddit
Good catch that you're absolutely right!
iamapizza@reddit
I made no mistakes.
Beginning_Book_2382@reddit
Alright, let's get straight to the point--no fluff.
BriguePalhaco@reddit
git checkout .
ECrispy@reddit
and honestly, that's the important part!
lelanthran@reddit
The Core Insight: Complete test coverage can be achieved by the onion method if certain constraints are upheld, as described in Ousterhout 2023.
In this way, Bun is positioned as the best option for Javascript programs. No GC, no memory errors, just lightning-fast runtime.
deividragon@reddit
While I know this comment is satire: the "no memory errors" bit is particularly funny because that was part of the reason given for the Rust rewrite, yet the merged code has over 10k unsafe blocks. It's like rewriting a JS project in TS using any everywhere lol
warpedgeoid@reddit
FFI requires unsafe
axiosjackson@reddit
How did you see this? I searched on github "unsafe" and filtered by rust and only got about 100 results.
ccapitalK@reddit
Maybe github is only showing the first 100 files? There are definitely more. Ran this on commit 11a2e2c20b6746689298a1da76cec35b13d3405e, which was updated about 30 min ago.
deividragon@reddit
Clone and grep
The_Northern_Light@reddit
I'm not a crustacean but knowing your vulnerability surface explicitly has value. You can target the unsafe blocks for extra scrutiny, possibly removal.
deividragon@reddit
Yes, the idea is that when you write idiomatic Rust you isolate unsafe blocks and call functions from the outside. The goal is to have as few unsafe blocks as possible. This is not what's happening here.
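For illustration, the idiomatic pattern looks something like this (a contrived sketch: the unsafe operation is fenced off behind a function whose signature is safe for every caller):

    /// Safe wrapper: the unsafe block is buried behind a function whose
    /// checked precondition makes the call sound for every caller.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first_byte(b"bun"), Some(b'b'));
        assert_eq!(first_byte(b""), None);
    }

The point is that callers never touch unsafe themselves; auditing effort concentrates on the handful of wrappers.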
The_Northern_Light@reddit
There are so many unsafe blocks because it is a straight transliteration of the zig source code.
It would be foolish to combine a rewrite with a refactor. Make sure the rewrite is correct, then use the unsafe blocks to guide refactoring.
They only just now merged the rewrite. Complaining that it's not idiomatic yet is fundamentally misunderstanding how codebase migrations work.
Also, I'll remind you that every line of the previous codebase was 100% unsafe (in the Rust sense). I'm willing to bet many of those unsafe blocks are simply due to the FFI boundary calling out to compiled C libraries. That's not the sort of thing you tackle at the same time as a million-line rewrite.
lelanthran@reddit
After 5m, this went to -2. Apparently I should have tacked on /s at the end :-/
cs_irl@reddit
That's not speculation, that's just good observation.
Redkast@reddit
dr__potato@reddit
Clean. Excellent research.
omac4552@reddit
Very good question, hope we get an answer any time soon!
TheAlaskanMailman@reddit
I wonder why? I wonder if it's written in some prompt factory that owns it.
iIoveoof@reddit
Weren't they saying just yesterday that they weren't even sure if this would work?
hotcornballer@reddit
Next week: Today we are welcoming bun 2.0
It takes your JavaScript and sends it to the Claude API with the prompt "transform this to x86 assembly, make no mistakes"
accelmickey001@reddit
Jokes on you it is bun 1.4 https://xcancel.com/jarredsumner/status/2055213234517659671
zombiecalypse@reddit
Yeah, but having a gradual migration plan sounds like work and won't give you as much karma as a Rust rewrite on its own!
tukanoid@reddit
It's a Rust revibe, no karma here for this shit
polyploid_coded@reddit
Yeah, from nine days ago https://news.ycombinator.com/item?id=48019226
bcgroom@reddit
This is funny with hindsight
Top-Rub-4670@reddit
It was funny at the time, too, because it was clear that he was lying/being defensive.
The_Northern_Light@reddit
"Merged in" is not the same thing as "fully replacing"
tracernz@reddit
It rewrote the tests so they pass. Problem solved.
Example: https://github.com/oven-sh/bun/pull/30412/commits/68a34bf8ed550ed69f4a0c18cff5ca9bd41d36ef
mr_birkenblatt@reddit
lmao, that's such a claude thing to do
read_volatile@reddit
Not to defend this mess, but I don't know how you are all falling for this. The commit is literally named "revert proc.exited change in spawn.test.ts". It's putting the test back to the way it was from before a42bf70, a commit FROM EARLIER IN THE SAME PULL REQUEST. /u/tracernz you can make this point, but find a legitimate example, please.
mr_birkenblatt@reddit
I wasn't commenting on whether this is a legitimate case but rather on that this is something Claude has done many times before
youngbull@reddit
Sounds like the test was written by telling an LLM to chase coverage. How do I know this? Bless me, Father, for I have sinned...
100GHz@reddit
The thing is, you can send off somebody to fix 4k loc. But once it's in the million range, it's done. Then you have to chase coverage or whatever metric.
ericonr@reddit
Amazing that it even removed the very comment explaining why a hardcoded sleep is bad...
How I enjoy reviewing reasonably sized PRs by my very human coworkers at this moment.
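For anyone wondering what the complaint is about, the difference between the two test patterns is roughly this (a generic Rust sketch, not the actual spawn.test.ts code):

    use std::sync::mpsc;
    use std::thread;
    use std::time::Duration;

    fn main() {
        // Flaky pattern: a hardcoded sleep just hopes the work is done in time.
        // thread::sleep(Duration::from_millis(100)); // passes or fails by luck

        // Deterministic pattern: block on an explicit completion signal instead.
        let (tx, rx) = mpsc::channel();
        thread::spawn(move || {
            // ... do the work the test cares about ...
            tx.send(()).unwrap(); // signal completion
        });
        rx.recv_timeout(Duration::from_secs(5))
            .expect("worker never signalled completion");
    }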
farsightfallen@reddit
Anyone have advice on how to remove this clown makeup I've got on after I defended Bun saying that everyone was overreacting and that Jarred wouldn't just go all in on some AI coded slop on a massively complicated project?
🤡🤡🤡
jimmerz28@reddit
Really should have become apparent to you a long time ago
https://x.com/jarredsumner/status/1969751721737077247
Schneestecher@reddit
Hahahaha
sebovzeoueb@reddit
Gutted tbh because I've been really enjoying Bun but I guess it's going to go downhill now.
lood9phee2Ri@reddit
emulating your very worst cow-orkers at lightning speed.
bphase@reddit
Super weird. 90k star project, so evidently popular and in production use. +1M lines and -4K gets accepted and it's not even faster? What even is this world anymore.
thomas_m_k@reddit
Well, as background, the company behind bun was bought by Anthropic, and there has been renewed interest in avoiding memory bugs with the existence of Anthropic's Claude Mythos, so I think that's why they're rewriting it in Rust. (Though I think mostly unsafe Rust at the moment, so there's no security benefit yet.)
tracernz@reddit
I think they're rewriting it mostly because Zig doesn't accept AI code, and that's an optics problem for Anthropic.
renatoathaydes@reddit
It's a combination of both things. Zig took a hard stance against AI contributions, making Bun's contributions, especially after it was acquired by Anthropic, completely impossible, even if you don't take into account the quality of such contributions. It would be infeasible for Anthropic to fix the issues it found in Zig itself, and Zig famously does not prioritize issues from sponsors, which means Anthropic would have had zero control over Zig. While they don't have any control over Rust either, at least Rust is a mature language compared to Zig, much less likely to have serious issues. And Anthropic might be well positioned to join the Rust Foundation along with many other companies that have an interest in Rust, and I believe Rust does not have the same anti-AI policy as Zig does. With this move, Anthropic may be getting not only a better memory-safety story with Bun, but also far fewer language-related bugs, much better library support and maybe even a small amount of control over the language it relies on (better than the zero it had with Zig).
Unfair-Sleep-3022@reddit
They could also, you know, make the contributions responsibly? They have a lot of engineers.
awesomeusername2w@reddit
What do you mean by responsibly? I don't think Anthropic would agree that any use of AI is irresponsible, and Zig's stance is no use of AI.
tukanoid@reddit
From what I've read, Bun's Zig compiler changes were bad: hacked-around, vibe-coded slop that brought a lot of bugs into the runtime. If they were responsible adults, instead of throwing a tantrum like a child and vibe-slopping an entire port to a different language, they could've, you know, been adult developers and cleaned up/ironed out their changes in a way that actually provides value, where most of the code was either written or properly reviewed by a human, instead of chasing the superficial, not-at-all metric of "good" that is "but it compiles faster" while fucking up the runtime
awesomeusername2w@reddit
I mean, will it be accepted if it's LLM written but properly reviewed? Seems like zig's maintainer explicitly says no.
tukanoid@reddit
I'm sure it could be reasoned about if the code quality is actually good
tpolakov1@reddit
I don't think compromise is a real option here. If Zig has a strong policy against Anthropic's core products, moving separate ways is the correct choice, regardless of what you think about either party.
PM_ME_DPRK_CANDIDS@reddit
lol can you imagine? an ai company being responsible?
IanisVasilev@reddit
Why would the language matter if nobody understands the code anyway?
The_Northern_Light@reddit
It's a straight transliteration of the old code base, not a clean room rewrite.
They know the structure of the code for much the same reason I know what happens in each chapter of the German translation of Dracula, despite having read it in English.
IanisVasilev@reddit
The_Northern_Light@reddit
That is not inconsistent with what I said.
justin-8@reddit
Yeah, but they didn't remove 1 million lines of existing code. They removed 4,000 and added 1 million
csdt0@reddit
Because all the zig code is still there
IanisVasilev@reddit
At least parts of it are removed in the diff.
The_Northern_Light@reddit
Learn to accept and move on when you make a minor error, like misreading "merged in" as "full replacement of", instead of digging in your heels. It's not a big deal unless you make it one.
IanisVasilev@reddit
If parts of the old implementation are removed, that makes it non-functional. Unless I am missing something obvious.
AreWeNotDoinPhrasing@reddit
Yes, you are missing something. Read the comment chain you've replied in... they already explained that.
IanisVasilev@reddit
So the old code is functional?
AreWeNotDoinPhrasing@reddit
Asked and answered. Are you a bot?
The_Northern_Light@reddit
I'm beginning to suspect that most people in this thread would fail fizzbuzz, if they even are programmers at all.
TheChance@reddit
You know that lowest common denominator, the army of incoherent morons the industry picked up over the past decade, when low-skill dev jobs that could "ship fast!" were the most important thing to shareholders?
You're now interacting with those people. They should never have been allowed within 20 feet of this industry, and if they were the only ones losing their jobs in this current madness, then I'd be all in favor of the madness.
The_Northern_Light@reddit
🤦‍♂️
Stupid
The_Northern_Light@reddit
. . . because they haven't switched to the new implementation yet, they just merged it in. It isn't active, bun today is still zig.
Temporary_Jacket9477@reddit
As someone who used Claude to build (rewrite) a project from Go to Rust, Zig, Java, Python and TypeScript... it works. It's VERY fast (days, not months, including tests, etc). BUT... I would still be hard pressed to release it without expert code reviews. I don't think we're there yet.
The_Northern_Light@reddit
Yeah I've done the same at my work.
After maybe a half hour of tinkering with my pi-agent it literally one-shot porting an entire legacy codebase with one prompt. There've been no issues; it actually found, documented, and intentionally preserved real bugs in the old code base.
frostbite305@reddit
I can second this, did lots of cross-language ports (albeit for unimportant projects) using opus and it's pretty damn good at it
colorfulchew@reddit
If we didn't need to verify the output of AI generated code I don't think language would matter, but AI isn't good enough to write perfect code.
Using a language with stronger guarantees pushes a lot of that human verification into the compiler. The bottleneck is verifying the code, so if you can eliminate whole classes of bugs with a memory safe/thread safe language it seems like a no brainer. Of course, you still need to verify logic, but IMHO that's an easier problem for most programmers than understanding memory safety or concurrency bugs.
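As a concrete example of "eliminating a whole class of bugs": a use-after-free-style mistake that Zig or C would happily compile becomes a compile error in safe Rust, so a reviewer never even sees it (a minimal sketch):

    fn main() {
        let v = vec![1, 2, 3];
        let first = &v[0];

        // In Zig or C, freeing the buffer here and then reading `first`
        // would compile fine and fail (or worse) at runtime. In safe Rust
        // the equivalent is rejected at compile time:
        //
        // drop(v); // error[E0505]: cannot move out of `v` while it is borrowed

        println!("{first}");
        drop(v); // fine once the borrow has ended
    }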
SortaEvil@reddit
The obvious counterpoint to that is that the code is mostly unsafe rust, so while the language offers memory and thread safety, they've literally opted out of the big advantages to switching over to Rust, without a clear roadmap to safe Rust, which means this is really just a massive mess that will be impossible for any human to actually work on in the future.
colorfulchew@reddit
It really doesn't look like most of the code is unsafe though? Cloned it out of my own curiosity and ran cargo-geiger; it seems like if you're looking at just bun's crates, something like ~7% of their expressions include unsafe. The third-party deps that did get pulled in end up having almost the same ratio of unsafe as the generated code. IMHO, some of the runtime unsafe looks like it could be undone, but the unsafe around memory/file/network handling is all fine to remain as "unsafe". The goal isn't to remove all unsafe, it's to ensure your "unsafe" is carefully handled.
(That's excluding the bun_*_sys crates.) And the top ten bun crates ordered by unsafe expression count: bun_runtime, bun_install, bun_jsc, bun_bundler, bun_core, bun_http_jsc, bun_sys, bun_alloc, bun_io, bun_uws_sys.
renatoathaydes@reddit
Why are you assuming no one understands the code? Do you think Jarred had the full code base in his head anyway? Once you have a very large code base, you can only be familiar with a small amount of it at any time. If I came back tomorrow to my work project and it had been rewritten from Java to Rust, I don't think I would have any problem (assuming I know Rust well enough) working on it even without AI assistance, as long as it used the same architecture and "style" (since that helps with finding things and making some basic assumptions - e.g. all requests are handled by implementations of Controllable; that does not change just because of the language, unless the language is really weird!), which is what they tried to do.
I really don't agree with the position that just because AI wrote the code, the code is not understandable anymore. I've been using AI to write code, but I still understand basically everything it does, and most of the time the result is very similar to what I would've done by hand, with few exceptions (sometimes it just goes in the wrong direction from a design point of view, but in such cases you provide more details about what you actually wanted and it does the right thing).
IanisVasilev@reddit
The amount of new code is incommensurable and was generated in a few weeks. Look at the diff summary.
I didn't say that (nor in this case mean it). But now that you bring it up - anything beyond a superficial understanding of the code requires spending time with it and thinking on it. If you skip the last part (which you clearly do if you generate that amount of code in a few weeks), you simply embrace superficial understanding.
RecursiveServitor@reddit
The amount of code in the Linux kernel is insurmountable. I guess no one new ever starts contributing. Literally impossible.
IanisVasilev@reddit
Are we in a bad analogy competition here or what?
RecursiveServitor@reddit
People can read the generated code and understand it. You're making a silly claim.
IanisVasilev@reddit
It's not about whether they can but about whether they do.
reivblaze@reddit
Assuming the AI is capable of maintaining the structure is also a wild assumption
IanisVasilev@reddit
Assuming that machine learning finds the patterns we want it to find is already pretty wild, but now we're all in for a wild ride unfortunately.
BusinessWatercrees58@reddit
As opposed to assuming the human finds the patterns we want it to? Seems like a wild ride no matter what
IanisVasilev@reddit
I'm tired of these comparisons. We've created machines to automate algorithmic tasks. We only tolerate human flaws because we are humans. Automating human folly is the most ridiculous thing you can defend.
BusinessWatercrees58@reddit
It's still a human task. Agents don't decide to work on their own. You still need to prompt it and hit enter. So we're automating human tasks, just non-deterministically. But when you think about it, getting humans to do a task was never deterministic, so why would automating that task need to be?
Because up until a few years ago, this was all that was possible. I think you're just stuck in the past, the same way someone might've once believed it was impossible to automate anything with a computer.
IanisVasilev@reddit
It's not determinism we want but predictability. Nondeterminism is useful because it allows for error tolerance. Chaotic systems are deterministic and unpredictable (e.g. double pendulum, weather), while Monte Carlo algorithms are nondeterministic and predictable (that's their entire point).
Trust is built on predictability. The only reason calculators are used is that people know what to expect. The only reason compilers are used is that people know that formal language constructs will translate, possibly non-deterministically, to a predictable outcome.
If you're calling unpredictable systems "automation", that is your right, but I'm sure this is the kind of attitude you wouldn't want to be treated with. Or perhaps you're enthusiastic about increasing unpredictability in food production, medicine and education?
red75prime@reddit
Are you assuming that machine-learning systems can't have lower error rates than humans?
IanisVasilev@reddit
Some of us need more than hope.
red75prime@reddit
Sure. High-risk applications will take longer. An industry that uses "THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND..." will adopt faster.
BusinessWatercrees58@reddit
People use calculators to give them faster calculations than doing them by hand, not because of trust. It's speed. People prefer calculators to working out long division by hand because of speed, not trust.
Except for when they don't. See: compiler bugs.
But we absolutely do deal with unpredictability in those areas. What are you talking about? The entire reason we have testing of food and inspectors is because of the unpredictability of food production and medicine. We have FDA inspectors exactly because growing food isn't deterministic. We have to test for potential side effects in medicine because human bodies aren't deterministic. Don't even get me started on how unpredictable academia can be. In all these cases, we built predictable, human controlled, deterministic guardrails around nondeterministic systems. AI for coding is no different.
IanisVasilev@reddit
I see there is no way for us to understand each other. Have fun.
BusinessWatercrees58@reddit
I was looking forward to your thoughts but oh well. I hope you have a good day
reivblaze@reddit
It's a trivial task for humans to follow conventions, perform checks with their higher-ups, and use their common sense.
That is not trivial for AI. And it is not always performing the same steps. It's how it works.
BusinessWatercrees58@reddit
Are you sure it's trivial? There are waaaay too many developers who do not follow conventions or check with higher-ups or use common sense.
Honestly compared to some of the people I've worked with, it seems very trivial for AI in comparison. AI is easily better than half of the devs I've had the misfortune of working with.
reivblaze@reddit
If you are working with bottom-of-the-barrel devs, for sure. But I am not looking forward to a future where every change is made and supervised by a two-week-bootcamp dev that doesn't care and has no responsibility over the code.
BusinessWatercrees58@reddit
If you consider 50% to be "bottom of the barrel" sure. Like it or not, that's the world we live in. It's the world most people in most industries live in. A lot of people are bad at their jobs and don't give a shit.
And I have bad news for you: that's not the future. That's the past. Bootcamp devs who didn't give a shit were around long before AI coding. That's not even remotely new.
jippiex2k@reddit
The structure can have pretty much an 1:1 correspondence. So it doesn't have to be in context all at once.
An AI can definitely do that if it divides and conquers one module at a time.
And static analysis can be used to more deterministically verify that module interfaces are preserved.
This whole thing is an insane move, and it will definitely not be flawless. But the translation quality is probably better than most people reflexively assume.
reivblaze@reddit
Just by the internals of LLMs you must know it is not a deterministic approach but a random sampling one. Sure it may be better than random guessing but it is still a pain.
jippiex2k@reddit
Yeah that's why i mentioned static analysis.
The LLM can invoke a deterministic tool to verify its output, and reiterate the process until it converges.
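Conceptually, that loop is something like this sketch (ask_model_to_fix is an entirely hypothetical stand-in for an LLM call; the deterministic verifier here is just the type checker):

    use std::process::Command;

    // Hypothetical: in a real setup this would send the errors to a model
    // and apply the patch it proposes.
    fn ask_model_to_fix(errors: &str) {
        eprintln!("feeding errors back to the model:\n{errors}");
    }

    fn main() {
        // Iterate: translate -> type-check -> feed errors back, until it converges.
        for attempt in 1..=10 {
            let out = Command::new("cargo")
                .arg("check")
                .output()
                .expect("cargo not found");
            if out.status.success() {
                println!("converged after {attempt} attempt(s)");
                return;
            }
            ask_model_to_fix(&String::from_utf8_lossy(&out.stderr));
        }
        eprintln!("did not converge");
    }

Of course, "it compiles" only verifies interfaces and types, not behavior, which is where the test suite and fuzzing come in.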
reivblaze@reddit
How do you solve these though?
jippiex2k@reddit
1: Language servers for rust and zig already exist?
2: yeah this is expensive. but they are owned by anthropic and get subsidized tokens.
Also to be clear: I'm not claiming this is an efficient way of doing a rewrite, and I'm not even claiming the rewrite is a good move.
I'm just claiming that the code translation itself is probably better and more structure-preserving than most people imagine.
100GHz@reddit
When one removes 4k lines and the AI spins out 1M to replace them, how long do you think it takes to read the change and understand what the new 1M does?
vips7L@reddit
The contribution they rejected in zig wasn't really about AI (even though they don't want that), the contribution was really low quality and wasn't a good change for the language.
Jay_D826@reddit
I don't know anything about the people at Zig but this comment makes them sound pretty dope
hu6Bi5To@reddit
If Anthropic had wanted to troll Zig they should have forked it, call the fork Zag (where the A is the Anthropic logo) and just watched the cargo culters all abandon Zig for Zag.
But then that wasn't likely as Anthropic isn't run by 1990s-era Bill Gates.
pragmojo@reddit
I would actually love to see an AI driven language compete with a 100% human driven one just as a natural AB test
gonz808@reddit
how is this in any way relevant to the bun project?
officerthegeek@reddit
Bun was already running on a fork of Zig, where they added their own features that were a) made with AI, because bun's lead is obsessed with it and b) the zig team weren't particularly interested in anyway. So instead of dealing with this situation, they decided to abandon zig
Caesim@reddit
I think this is just part of the issue. Zig is rather C like in the way it's coded and very different from C++ and Rust, two languages that give one the same low level abilities and run time characteristics but allow for a lot of abstractions and syntax sugar, making code often read more high level.
Recently, Bun's creator was lamenting that they have a lot of memory issues and crashes due to using Zig. Since this is not a problem other Zig users have, there is an argument to be made that LLMs are better at writing code in languages like Rust that are more high level. And if you don't write the code yourself, but rather read and approve it, memory errors slip through far more easily. Having them prevented at the compiler level probably helps with agentic coding big time.
purple-yammy@reddit
It is an issue other zig users have though, they just might not find it a problem. Just look at the zig issues page even they have a bunch of memory related bugs with the compiler.
Unfair-Sleep-3022@reddit
There's also a lot more Rust code out there to plagiarize
Darkoplax@reddit
bro this theory really has no legs at all; I feel like you have to be really dumb to believe this
thy_bucket_for_thee@reddit
Yeah, they're rewriting it because the developers are LLM-brained and were just given free access to LLM tools. Now also add that these developers have the potential to gain multigenerational wealth in the tens/hundreds of millions of dollars if Anthropic IPOs.
Not hard to see how the incentives line up.
hyrumwhite@reddit
Nothing to do with Claude. Poor memory management in Zig made them want the compile-time memory checking of Rust... which they then proceeded to not use, by using unsafe.
rexspook@reddit
Why would they use unsafe rust? I write rust for one of the largest CDNs and rarely need unsafe rust.
Kimos@reddit
I would never undertake a rewrite in this way. But any time I have done a rewrite: Do zero behaviour change. Fix bugs and add features and security etc. later.
The_Northern_Light@reddit
Why would you refactor while you're doing a massive rewrite?
TheChance@reddit
Safe Rust is not a security benefit. It's not "safe" in the sense you seem to be implying. There is a peripheral security benefit, to the extent that your programmers are not capable of writing certain memory bugs using safe Rust, but unsafe Rust is not dangerous Rust, it's Rust in computer science mode, without the guardrails.
phillipcarter2@reddit
I don't think that's the reason why. Anthropic is deeply invested in AI for coding, and it's an unfortunate reality that tons of projects in the world are hamstrung by their initial implementation language, with a rewrite being too costly to do.
And so, what if AI agents get good enough to drive that cost down to be reasonable? There's a lot of money to be made for Anthropic if it's true.
tiajuanat@reddit
Unsafe Rust is still stricter than Zig and C, but yes, it's much easier to blow your foot off.
Unfair-Sleep-3022@reddit
Would you mind explaining how? Genuinely curious
lets-start-reading@reddit
Rust unsafe blocks come with the requirement that all of Rustâs aliasing rules are upheld. Zig, like C, brings no such requirement.
simspelaaja@reddit
Technically speaking Rust's aliasing requirements only apply to references. If you primarily or exclusively use raw pointers, there are no such requirements.
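Concretely, a small sketch of the distinction (the commented-out line is rejected by the borrow checker; the raw-pointer version compiles because the compiler imposes no aliasing discipline on raw pointers, only on references):

    use std::ptr::addr_of_mut;

    fn main() {
        let mut x = 0u32;

        // Two aliasing mutable *references* would be rejected:
        // let (a, b) = (&mut x, &mut x); // does not compile

        // Two raw pointers to the same place are allowed; it's up to the
        // programmer, inside unsafe, to use them correctly.
        let p1 = addr_of_mut!(x);
        let p2 = addr_of_mut!(x);
        unsafe {
            *p1 = 1;
            *p2 += 1;
        }
        assert_eq!(x, 2);
    }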
tiajuanat@reddit
I'll let the nerds at MIT explain it better: https://web.mit.edu/rust-lang_v1.25/arch/amd64_ubuntu1404/share/doc/rust/html/book/first-edition/unsafe.html#:~:text=Rust's%20main%20draw%20is%20its,to%20verify%20this%20is%20true.
But here's the summary: unsafe lets you do 3 additional things that you normally can't do
Unfair-Sleep-3022@reddit
Isn't that the rust book? I don't think MIT nerds wrote that lol
But thanks!
tiajuanat@reddit
It was the first link from Google
A1oso@reddit
Performance isn't all that matters. Bun wasn't very reliable, their issue tracker was full of segfaults. Remember that Zig is not a memory safe language and requires manual memory management. In a code base of this size, errors are bound to happen. And a lot of errors if most of the code is written by AI. Moving to a higher level language that prevents most of these issues is only logical.
hotcornballer@reddit
I never understood this argument, Ghostty is written in zig and if you search for segfault in the github's closed issues there's like 30 results, and half of them aren't actually segfault. Maybe it's not the language, maybe the code was shit to begin with and the slop added on top since 2025 didn't help.
Another example is TigerBeetle, the code is so robust you could put it on a plane.
Rust and Zig are both fine languages, but let's not pretend a Rust rewrite is going to solve Bun's reliability problems.
danudey@reddit
Most people are not mitchellh.
thy_bucket_for_thee@reddit
People forget that Jarred Bun is a high school dropout and a Peter Thiel fellow. Don't expect competence from a person okay with hanging out with Epstein associates and monarchists.
SirClueless@reddit
To be fair, the existence of several high-skill software engineering teams that can produce high-quality, stable Zig programs doesn't imply that a team of JavaScript new-grads equipped with an AI slop swarm can do it.
Zig maximizes ergonomics and freedom, neither of which make sense to give to an AI.
CherryLongjump1989@reddit
I don't see how vibe coded segfaults are Zig's fault. Nor do I believe that Rust will be some sort of silver bullet that fixes all of the coding problems to the satisfaction of someone who was too impatient to finish high school.
TonyAtReddit1@reddit
It didn't translate Zig into safe Rust though. It translated Zig into Rust with unsafe calls everywhere.
You're assuming good intent here, when the real reason is Anthropic bought Bun and wanted to use this as an advertisement for Claude
atilaneves@reddit
Why would it be faster?
grobblebar@reddit
performance review based on LOC changes.
IanisVasilev@reddit
6755 commits.
haCkFaSe@reddit
Squash merge.
Substantial-Elk4531@reddit
New commit message: "Friday night refactor, 5th beer, will look at it again Monday"
fghjconner@reddit
In fairness, they haven't removed the existing Zig code, so it's more like +300k lines. Still an absolute insane decision though, imo.
Unfair-Sleep-3022@reddit
Anthropic owns it. That's all.
NuclearVII@reddit
This is all that was needed to be said.
insanitybit@reddit
They claim that it is faster.
They state that the dev team was spending a lot of time on memory safety errors in the zig codebase, rust gives the tools to solve those problems.
thats_a_nice_toast@reddit
Bun's development has always seemed very unprofessional to me, which is a shame because it's genuinely a cool project. But they always do weird stuff like this without giving it much thought. I would never rely on it.
DapperCam@reddit
The author of the PR said in a Hacker News topic that this was just an experiment and not that serious. I'm guessing that was a lie?
CherryLongjump1989@reddit
I have heard so many coworkers say that over the years that I knew this would be getting merged as soon as I heard it.
_kilobytes@reddit
As one of the comments on hackernews read, he only said that to avoid and deny any criticism
ZorbaTHut@reddit
Sometimes things start as experiments and then it turns out the experiment worked.
"We changed our mind" doesn't make the original statement into a lie.
DapperCam@reddit
It's been 9 days lmao
ZorbaTHut@reddit
Have you never changed your mind over time periods smaller than that?
DapperCam@reddit
Not when it comes to merging a full rewrite of a tool used by millions of people (where nobody has even read the code in the rewrite).
ZorbaTHut@reddit
I mean, let's be honest here, have you ever even had the opportunity to change your mind about something like that? I doubt you have.
DapperCam@reddit
The world has officially gone insane
ViscountVampa@reddit
Stop getting in the way of Zorba being objectionable, they need this.
vincentofearth@reddit
From the beginning I had a weird feeling about Bun. They were too quick to brag about faulty benchmarks, too eager for hype, too hungry for venture capital. It felt different from other open source projects, less trustworthy. The last few months have proven those instincts true. The way Jarred Sumner has embraced AI slop is disrespectful of the craft that is software engineering.
Bun feels less trustworthy than ever, and I don't know why anyone wouldn't just use Node or Deno.
FuckOnion@reddit
I never to this day understood why anyone would pick Bun over Node/npm. It's historically had huge issues with runtime stability (most of the issue backlog is crashes) and Node compatibility.
Oh, it installs packages 2 seconds faster? That's so useful for something I do once a month.
LetrixZ@reddit
It can run Typescript without installing anything
HelloXhale@reddit
Modern nodejs can also run typescript natively!
Note: that wasn't possible back when Bun released. I'm glad competition is making all the runtimes better
WorriedGiraffe2793@reddit
no it can't, it only does type stripping
node doesn't even look at the tsconfig
ao_zame@reddit
Just like Node and Deno.
CherryLongjump1989@reddit
It does more than node/npm on top of being massively faster.
vincentofearth@reddit
They were good at marketing. They also made the right call on Node and npm compatibility, which Deno implemented too late. I think a lot of people bought into the hype and picked it expecting a drop-in replacement that was faster than Node without ever actually benchmarking their specific use case.
Devatator_@reddit
That speed actually is a huge bonus to me. I don't actually use it for anything else tho, just the package manager and occasionally running TypeScript scripts
Kissaki0@reddit
I'm not a JS-focused dev, so they're not my main environments either way, or maybe because of that, but I've always felt comfortable with Deno but refused to install Node. Deno is a single executable, and has an execution permission system. Node - I have no idea. It's a big installation I don't know about, runs arbitrary install scripts, etc etc
Bun felt better than Node because it's a single executable. But, yeah, it's certainly no longer an independent FOSS tool - a node-compatible node replacement.
Deno does more and fundamentally improves the ecosystem tooling, which I like. I just haven't worked much with it.
I integrated deno lint to lint webbrowser JS, which is useful even without committing or using anything of the JS, TS, or npm ecosystem.
ao_zame@reddit
I just use Node + pnpm. It's pretty nuts to use fk bun as a package manager instead of something more stable like pnpm.
And any performance advantages Bun may have over pnpm will probably disappear soon anyway, since pnpm is also being rewritten in Rust (just not in the messy way Bun was).
sebovzeoueb@reddit
At the time when I started using Bun it appeared to be the lowest friction way to produce a standalone executable from a Node.js project and also the HTTP server is using uWebSockets which is good but I was always running into issues creating a repeatable install as uWS versions are quite strictly pinned to specific Node versions. I see that the Deno APIs look pretty similar to what Bun is doing though, so I may be tempted to make the switch with the recent slopping going on in the Bun project.
MrJohz@reddit
Node also has SEA/Single Executable Applications now: https://nodejs.org/api/single-executable-applications.html
sebovzeoueb@reddit
have they made it simpler though? I remember when I looked into it before, it was quite experimental and nowhere near as easy as bun build --compile --target=bun-linux-x64 ./index.ts --outfile myapp. From a quick browse of that page, I don't think you can even build for other platforms using the Node thing?
MrJohz@reddit
When I skimmed it earlier, there was a note about some things not being possible when generating cross-platform executables, so presumably it is possible, but it's not clear how you'd do it.
It seems slightly more complicated (you need to create a config file rather than passing everything as command-line arguments) but it's definitely possible. That said, it looks like it's got the issue that a few recent NodeJS features have where the documentation is pretty poor and makes the feature seem much less usable than it is.
toolbelt@reddit
This is not just noise ——— this is the future
criloz@reddit
The future is a universal programming language where you can reuse code by just importing it, and it compiles to any platform or network layout. LLMs will become way less attractive after that, because writing a program will be as easy as with an LLM, but deterministic, faster, and cheap.
miversen33@reddit
So LLVM
criloz@reddit
Not even close. LLVM is mostly focused on imperative semantics for code generation. I am talking about something that has temporal logic and network logic natively, which outputs the set of programs that fulfill your intent instead of just one program, and where each platform selects the most optimal program to run. You can even parse human language into the semantics of this new language, so no, LLMs are not the future
bluebird173@reddit
Insane programming LARP. You sound insane
programming-ModTeam@reddit
Your post or comment was overly uncivil.
emdeka87@reddit
EM dashes for a one-liner is wild
metroid-maniac@reddit
Technical debt machine go brr
Squalphin@reddit
There is no technical debt if you can rewrite all of it anytime, every time, or at least that must be the thought process behind it.
hyrumwhite@reddit
No, that just means your known tech debt becomes unknown tech debt
ViscountVampa@reddit
Yep, who needs potable water, anyway.
Unfair-Sleep-3022@reddit
I mean, why should we care about anything Anthropic does that generates hype for its tool?
In other news, the petrol industry has published yet another study proving climate change is not real
Kissaki0@reddit
We should care because Bun was a popular tool in the (holy /s) triangle of Node, Deno, and Bun. This much of a shift puts the established Bun project and tooling into question.
Unfair-Sleep-3022@reddit
I would never use it after it got bought
ECrispy@reddit
so Bun is now WORN? Since no human has, can, or will ever read/review any of the 900k+ LoC, and all future contributions will be 99% Claude anyway.
how exactly is this NOT vibecoded, Jarred?
Kissaki0@reddit
What does WORN stand for?
ECrispy@reddit
Write Once Read Never
Seanitzel@reddit
This is crazy on so many levels; this much code debt and these knowledge gaps in a code base are unsustainable imo.
As someone managing a pretty huge codebase myself, one of the hardest things long term is keeping an actual mental model of the system and understanding why things exist.
I use AI-generated code, carefully, and I keep notes of things I will need to go deeper into and probably refactor later for it to be scalable, but generating so much code and merging so quickly is simply insane
Kissaki0@reddit
Life is probably easier if you don't care about understanding the project you're working on. It's not craftsmanship or engineering. I guess it's a different kind of product development. Letting go of understanding. Embracing a blackbox [under your supervision] instead.
Surely it's a personality thing, whether one can feel comfortable with that. Maybe it's easier when one didn't understand how things work in the first place.
We'll see how it will work out. I'm also skeptical that this leads to quality and maintainability long term. Maybe this is the first iteration and they will improve the substance over time. Who knows.
warpedgeoid@reddit
Stop building server-side code in fucking JavaScript or TypeScript. Problem solved.
Nova_496@reddit
the language is comfy doe...
KadmonX@reddit
Aren't Rust devs worried that their language will be perceived as the language of AI slop?
shuraman@reddit
There are so many bots here, what's going on with Reddit
m00fster@reddit
That's what a bot would say. Are you a bot?
Substantial-Elk4531@reddit
I'm not a bot
But if you'd like, we can talk about how to spot bots on websites such as reddit
TheReservedList@reddit
You're not crazy — this is a legitimate insight.
ghillerd@reddit
You're not just noticing the death of the internet — you're engaging with a pivotal moment in history in real time.
bluebird173@reddit
You're absolutely right
rooktakesqueen@reddit
You know what I really love? When I do git blame and the whole file comes from a single commit labeled "rewrite bun in rust"
Wonderful-Habit-139@reddit
6755 commits apparently.
rooktakesqueen@reddit
In a single PR, which I assume would get squashed, but perhaps not
jug6ernaut@reddit
Squashed or not, 6k LLM-generated commits are also pretty worthless.
RagnarDannes@reddit
Back to Deno I go...
cake-day-on-feb-29@reddit
I've heard that it has some type of sandboxing, being recommended by the yt-dlp project. Probably a good idea especially with the current state of security in the JavaScript ecosystem.
renatoathaydes@reddit
Wouldn't you want to wait and see if the result is worse first? I saw that some test appears to have been just "worked around", but that does not mean it happened in a large amount of cases. I would expect Bun to only merge this after doing extensive "real world" testing, but TBH no idea how much they actually did.
miversen33@reddit
It's not possible to even partially review this, nor fully test considering the PR was initially opened 6 days ago.
It's truly wild that they just merged that big of a change that fast
renatoathaydes@reddit
From the announcement, they told Claude to translate the code "mechanically", i.e. as 1-to-1 as possible. That means you would only need to review a representative sample of that translation to be fairly certain the output is as expected, especially if the translation was done not by Claude, but by a transpiler which Claude came up with. That's why I think they may be right: you shouldn't need to review every line as long as there's an adequate amount of tests preventing regressions. The details are still not public, so I would wait a bit before making any decisions. If you tried the new Bun version in your code base, perhaps you could contribute feedback if something went wrong... have you not even tried it?
EveryQuantityEver@reddit
Absolutely not.
miversen33@reddit
How the fuck would I know?
Like do you even understand what you're saying? Let's say I do run my codebase with the new runtime and it "works" (I see no perf issues and no instant crashes). You realize that's only a third of the battle right?
Another third is maintainability, which has literally gone out the window now. Only an LLM with an absolutely obscene context size (Claude and Gemini are the only 2 I can think of) would even begin to be able to ingest this
The final third is security. Nobody is auditing this. I don't care how good the "transpiler" is, you still audit your fucking code lol. This is unauditable.
Oh and 6 days is simply not enough time to wait to merge a complete rewrite of a runtime.
In Enterprise it takes about a month (at least) of testing just to update the version of a platform we're using. Years to update the language version we're writing against. And they waited 6 days to merge a complete rewrite (in 6 days) of the engine lots of these platforms would be running on.
Lmfao
RagnarDannes@reddit
Yeah this dude is acting like you can honestly word-for-word translate Zig to Rust, despite the semantic incompatibility between the languages.
renatoathaydes@reddit
Wow you seem really inexperienced to me.
Based on what are you concluding that?? The code should be mostly the same with a different syntax if the goal of the translation was achieved. Did you look around the new code? It seems totally fine.
Have you ever used AI in a big project like that? That's absolutely not true. The LLM will make a plan with instructions for each step of the translation, then a "sub-agent" (with clean context) will work only on that. This is how stuff like this is even possible.
This should actually be better now?! Zig is not memory-safe, so unless there's a lot of unsafe Rust in the translation, the new code should be considered safer. Was the Zig code audited?? I find that hard to believe, you seem to be making assumptions out of thin air.
I work in security myself. Security does not happen by people looking at code, it happens by people running tests and identifying patterns that may be unsafe, then investigating those with more care.
If it takes you a month to do simple things like update a version, then I'm sorry but you don't know what you're doing and that explains all the baseless assumptions and conclusions you're making.
Ok_Individual_5050@reddit
Inexperienced my arse. This is basic engineering stuff. There is no physical way this is engineered properly
owogwbbwgbrwbr@reddit
This is 4/10 bait
LIEUTENANT__CRUNCH@reddit
You sound like a complete doofus that's high on AI farts.
SHOULD. Yeah, it SHOULD. That does not mean it IS. I don't know what sort of finger doodling you do for a job, but some of us do real work where building on a foundation of SHOULD is an absolute no-go.
Rokuro142@reddit
I told my coworker to sweep the floor of the whole hangar, I'm sure that he swept the whole floor given that this one square inch is clean now. No need to look, trust me bro.
cholwell@reddit
AI has one-shot this whole fucking industry, holy shit, we're gonna become laughing stocks
floodyberry@reddit
everyone, calm down, they told the hallucination machine not to hallucinate, so it's obviously a 1-1 rewrite!
kairos@reddit
Did you read this and think "yeah, this makes sense and is very professional"?
Hawtre@reddit
Have you ever tried eating shit? You might like the taste of it, if you haven't tried it before.
renatoathaydes@reddit
Classy.
Hawtre@reddit
It's a fairly apt comparison in this case, regardless of how classy you think it is
lelanthran@reddit
Well, presumably the alternatives do not have tests that "worked around" things, so by definition alone they are already safer than this.
renatoathaydes@reddit
That does not follow. Unless you know whether the test that was worked around represents a real regression, and if so, whether that happened in a significant amount of tests, you cannot arrive at that conclusion at all. To the contrary, the claim in the PR is that the new implementation fixed many existing issues, and that by being written in a memory-safe language, it probably prevented even more unknown issues. I find that plausible and without more evidence to the contrary, I think I will believe Jarred for now.
lelanthran@reddit
All we can tell, from the approval of a single worked-around test, is that working around tests exists as a policy. That is more than enough to make alternatives better on this objective axis.
BeefEX@reddit
The entire rewrite was started only like 2 months ago, and wasn't even functional for most of that time. They couldn't do any real world testing, even if they wanted to, there just wasn't any time allocated for it.
DanceWithEverything@reddit
God Iâm happy I moved off Bun. This should terrify anyone using this in production
emboss_@reddit
It's ridiculous alright, but also a good opportunity to fork the Zig version and continue with something sane. I mean who will seriously consider Bun in production now when there's literally nobody understanding the code anymore? This is a dick marketing move using a once-useful project as a stepping stone to prepare for a grand IPO.
whatThePleb@reddit
LLM gone wild
gladfelter@reddit
There's a lot of hate for this AI-driven migration on this thread. I'd like to summarize what I'm seeing to find out if I'm understanding the gist of the skepticism:
1. No human can meaningfully review a diff this large, so there will be bugs and the result will be unmaintainable.
2. The translated Rust is full of unsafe blocks, so the memory-safety benefit is illusory.
3. The AI gamed the tests so they would pass.
4. The migration itself was a bad engineering decision.
2 and 3 I can dismiss out-of-hand. For 2, the code was 100% unsafe in Zig and now the areas that are unsafe are highlighted and easier to fix piecemeal. That was a stated goal of the author. For 3, the only given example turned out to be a misunderstanding of the commenter: they saw what looked like a regression in the test, going from a deterministic wait for quiescence to a sleep, but that was a rollback of an attempt to improve the test, so it's in the same state as the mainline prior to the merge.
#4 is presumptuous at the least: if you aren't working on a codebase every day, you shouldn't second-guess well-evidenced design decisions by those who are. Jarred presented several reasons why this was a good move for Bun.
1 is the more substantial argument. However, I've worked 15 years in a monorepo with hundreds of millions of lines of code written by very well-compensated engineers, and I am highly skeptical of the open-loop quality humans can bring to the table. Donald Knuth himself was so confident in the source code of his TeX layout engine that he offered to double the bounty on each subsequent bug. He had to freeze it. There are gonna be bugs. From what I've heard, Bun's test suite is substantial, and I'll take one test over the attestations of three Knuths.
Basically the same story with code maintainability, but that's an even weaker argument: this was a migration, which means the architecture will largely be intact. I haven't personally verified this so I'm open to new evidence, but if that's the case, then any "interesting" choices by the LLM design-wise are quite limited in scope.
Okay, what did I get wrong here? Are there other arguments?
EveryQuantityEver@reddit
Because it's absolutely fucking stupid, was done in a terrible way, and there is no way that there is any kind of reliability with it.
There are lots of them. You are refusing to acknowledge them.
gladfelter@reddit
Those are assertions, not arguments. What did I get wrong?
I summarized and addressed all the serious arguments that I saw on this post. I acknowledged the strongest argument, the one observing that an enormous line count in the merge implies less-than-thorough review. But you're the first to reply in a substantive way. I can't "refuse to acknowledge" what doesn't exist. The other reply spent most of their comment insulting me and the rest making false statements: they claimed that no one on this post was arguing that AI-driven migration could introduce bugs, and that's easily falsifiable. Search for the word "mistake" if you're not convinced.
gladfelter@reddit
In case you didn't know, they're doing a progressive rollout. The old codebase isn't deleted. As far as I can tell, they take risk management seriously.
drekmonger@reddit
There are some people who have real project dependencies on bun, and they have all the prerogative in the world to be concerned. One million lines of Zig --> Rust is a miraculous feat, and the onus is on bun to prove it worked. There doubtless are errors, and short term, every bug will be jumped on as proof positive that the entire effort is suspect.
But the majority of your downvotes are coming from people who are economically fearful. Either they have good jobs now or they're students/juniors who aspire to good jobs, and they're afraid the robots are going to rob them of their opportunities.
There's no way that cohort will engage with your points honestly.
If the headline gave any hint that this topic was about AI, we'd have non-technical audiences pouring to downvote as well.
Rokuro142@reddit
No one said humans don't produce bugs.
Cool, then go tell your chatbot about it instead of bothering fellow humans.
Glacia@reddit
If anyone there had a brain they would've vibecoded a Zig-to-Rust transpiler, but nope, let's burn some tokens bro
The_Northern_Light@reddit
Tell me you don't understand programming language theory without telling me:
kevkevverson@reddit
Come on man, don't speak like that
The_Northern_Light@reddit
Okay.
That guy is an idiot who doesn't know what he's talking about.
Glacia@reddit
Enlighten me mister programming language theorist
The_Northern_Light@reddit
Well you're arguing with a compiler writer who has already explained why that is impossible, so I'll save my breath
Glacia@reddit
That's the best you can do? Ok
phillipcarter2@reddit
It's because you probably can't.
Glacia@reddit
"probably"
phillipcarter2@reddit
1:1 transpilers that don't share the same runtime and type system generally don't work, yes. You'd need some degree of "well, maybe this is the semantically most similar output", which means you're in LLM territory anyway. We could perhaps start with how Rust handles memory completely differently?
Glacia@reddit
Yeah man no idea how they compiled stuff before LLM /s
>We could perhaps start with how Rust handles memory completely differently?
There is nothing in Rust that prevents it from being a compiler target. Unsafe Rust exists for the cases you think are hard. I'm sure the LLM slop they made is full of unsafe Rust too.
phillipcarter2@reddit
Buddy, I worked on compilers professionally for 6 years and I am here to tell you that you cannot arbitrarily transpile from one language to another when they don't share the same runtime.
Glacia@reddit
So in your world an LLM can do it but a program can't? Ok
phillipcarter2@reddit
Yes? You're inherently in a probabilistic world for a problem like this. It's exactly why LLMs can be effective.
You can argue whether Bun requires such a change or not; that's debatable. But it is not debatable that you could not create a Zig-to-Rust transpiler that works well for a complex codebase.
klayona@reddit
Zig has a C backend and there is c2rust, it'll be the worst Rust imaginable but it already exists.
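For flavor, here's roughly what that pipeline's output tends to look like (an illustrative sketch in the style of mechanical C-to-Rust output, not actual c2rust product; the real tool also emits libc types and #[no_mangle] on everything): every function public, extern "C", and unsafe, with C's types and pointer arithmetic carried over verbatim.

```rust
use std::os::raw::c_int;

// Illustrative sketch of mechanically translated C: the shape of the
// original is preserved, and none of Rust's guarantees apply to it.
pub unsafe extern "C" fn sum_buf(mut p: *const c_int, mut n: c_int) -> c_int {
    unsafe {
        let mut total: c_int = 0 as c_int;
        while n > 0 as c_int {
            total += *p; // raw pointer dereference, straight from C
            p = p.offset(1); // pointer arithmetic, straight from C
            n -= 1 as c_int;
        }
        total
    }
}

fn main() {
    let xs: [c_int; 3] = [1, 2, 3];
    // The caller inherits all of C's obligations: valid pointer, right length.
    let total = unsafe { sum_buf(xs.as_ptr(), 3) };
    println!("{total}"); // 6
}
```

It compiles and it's correct, and it's exactly the "worst Rust imaginable": the borrow checker is a bystander to all of it.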
Urik88@reddit
Not that simple, this is the approach TSGO took and they spent many months working on this for a language that maps much better to its predecessor than Rust would.
Glacia@reddit
If your goal is to just port one codebase, it's not hard. You don't need a full-fledged compiler to do this.
mcel595@reddit
It's far more complex to make it write a transpiler than to make it translate one language to another; translation is the original use case of the Transformer, after all.
sebf@reddit
This is the whole GenAI point: BaaS (Brain as a Service).
Sigmatics@reddit
The GitHub app can't deal with it; I keep getting "io error on socket"
Dminik@reddit
Just a quick thought, but I imagine this has totally obliterated the ability of any previous maintainer to contribute. Even if you previously had a grasp on the codebase, it was just torn out from under you. Not to mention that I don't think people are very excited to deep-dive into an LLM slop codebase.
It looks like Bun will be an LLM-only product from now on.
wmcscrooge@reddit
The thing I'm most confused by is who decides who gets to be a contributor now? If I open up Claude, tell it to pull down the Bun codebase and suggest a code change, does that mean I get to make a pull request as long as it passes tests? I mean, it's no different from what he's doing.
Dminik@reddit
It's not that you can or can't be a contributor. It's more that everyone who was contributing before had some knowledge of the Zig codebase. Now, it's all Latin (or, well, Rust).
Any resemblance between the old and new codebase is mostly coincidental.
Atulin@reddit
They're owned by Anthropic; they don't want maintainers other than Claude
jer1uc@reddit
This was exactly my thought, especially when seeing that the only reviewers were LLM products. It's pretty telling, honestly, that this codebase no longer wants human contributions.
skewuo@reddit
awesome!
BenchEmbarrassed7316@reddit
Okay, I'll offer a conspiracy theory:
The rewrite in Rust was done over a much longer period of time, and with more human control. But for the sake of AI-tool PR, it's being presented as having been done in a hurry and automatically.
sleeping-in-crypto@reddit
Much more likely this is true
Archeelux@reddit
I really liked Bun, but Deno may be my new runtime. This is beyond bizarre.
No_Tea2273@reddit
I should note that Ryan Dahl (co-founder of Deno) is also very much in vibe-coding territory as a person: https://xcancel.com/rough__sea
Archeelux@reddit
gUeSs I'LL bE lEfT BeHiNd
Mission_Honeydew_402@reddit
So the formal reason is some memory-safety issues, and the elephant-in-the-room / informal / way more strategic reason is the anti-LLM policy at the heart of an Anthropic-owned project? Never a dull moment.
ppppppla@reddit
Wonder how many CVEs this is going to spawn.
PixelPhoenixForce@reddit
lets gooooo
ECrispy@reddit
what was the big hurry to merge this? where is the promised blog post by Jared?
the right way to do this would be to have a Rust branch, let a bunch of external users test it, especially the ones known to use it heavily, and then fix and merge.
instead we got a bunch of Twitter posts boasting about it, promising a careful merge once it was guaranteed to meet his claimed standards and break nothing, and of course none of that is remotely true
Packeselt@reddit
I love Rust. I work in Rust. It is so funny, as an OG Bun hater, to see the project just get steamrolled like this, to just get a full AI rewrite to Rust.
Here's a secret: just because it's in Rust does not mean it's good code. It's just probably memory-safe code. This is going to cause so many edge cases and gotchas it's unreal. GL Bun community.
KandevDev@reddit
the bun-in-rust move is interesting because it admits the original zig bet was strategic, not technical. zig made bun differentiated. rust will make it boring. the reason any production team would adopt bun was always going to come down to "does this thing get patched when there is a CVE", and rust has the contributor pool that zig does not.
veryusedrname@reddit
It's a big vibecoded nightmare filled with unsafe code all over the place. The whole project will go in the garbage soon, when it turns out that a million lines of hallucinations isn't maintainable.
KandevDev@reddit
"vibecoded nightmare" depends on what got reviewed. if oven/bun has actual maintainers doing code review on the rust changes, the AI assist is just typing acceleration. if they are merging unread, that is the disaster scenario. the proof will be in the next 6 months of CVE response time. that is the actual test for whether the codebase is maintainable.
EveryQuantityEver@reddit
There is no way that any significant amount of it got reviewed.
KandevDev@reddit
probably true, hard to tell from the outside. the actual signal will be when the first non-trivial security vuln drops and we see how long the patch takes. fast turnaround = team knows the codebase. slow turnaround = AI wrote it and nobody can read it.
veryusedrname@reddit
Name the team that is capable of reviewing a million lines of code. Then name the timeframe.
KandevDev@reddit
fair, no team can review a million LOC by hand. so the actual test is whether the codebase has the testing + tooling discipline to make rust compile-time guarantees do most of the review work. if it does, the team only needs to review the unsafe blocks and the public API. if it does not, the project is dead. that is the binary.
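to make that concrete, the pattern that bet leans on looks like this (a minimal sketch, nothing from the bun tree): the invariant is established immediately above the unsafe block, and the safe public signature means callers cannot break it, so a reviewer audits the block and the signature, not every caller.

```rust
// Minimal sketch: a safe public API whose single unsafe block carries its
// justification inline. Reviewing this block plus this signature covers
// every possible use of the function.
pub fn middle_element(xs: &[u8]) -> Option<u8> {
    if xs.is_empty() {
        return None;
    }
    let i = xs.len() / 2;
    // SAFETY: xs is non-empty, so len / 2 < len; the index is in bounds
    // and the bounds check can be skipped.
    Some(unsafe { *xs.get_unchecked(i) })
}

fn main() {
    assert_eq!(middle_element(&[1, 2, 3]), Some(2));
    assert_eq!(middle_element(&[]), None);
    println!("invariant holds");
}
```

whether a million vibecoded lines actually keep that discipline is the open question.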
lelanthran@reddit
They just vibed a million lines of code into existence. I don't think that contributor pool is anywhere in their top-10000 list of concerns.
KandevDev@reddit
fair correction. "rust contributor pool" assumed organic contributors, but the maintenance burden of a million LOC of AI-generated rust on a small team is different from that of a million LOC of human-written rust by the same team. the latter is hard, the former is harder. you are right to push on that. we will see.
Fancy-Mushroom-6062@reddit
Relax, it's just a marketing campaign, it won't be merged
pagoru@reddit
It's already merged
ShacoinaBox@reddit
@claude rewrite this in rust also add as many unsafe blocks as necessary to ship this piece of shit out the door PRONTO!! thanks
franklindstallone@reddit
What a way to waste time.
pysk00l@reddit
Wait a fuckin' minute! We were told this was just some guy's personal branch, that there were no official plans to completely rewrite Bun, etc etc.
And now the branch has already been merged?
I won't even go over the 1 million+ lines of vibe-coded shit, as several other posters here have already raised this concern
monochr_me@reddit
Is it squashed commits?
Salkinator@reddit
Hey let's fundamentally change the core language we've been singing the praises of for years because the robot said it was cool. Jesus Christ
MrEpic382RDT@reddit
no, it's because the core language we've been singing the praises of has a principled stance against the robot, so we're going with what the robot says instead, out of spite
bucket13@reddit
If Zig isn't going to upstream the compiler improvements on principle, of course Bun is going to leave. They don't want to maintain a fork indefinitely. The entire saga is stupid.
umlx@reddit
Move 'zig'. For great justice.
Sloshy42@reddit
So, I like Bun. I think what it's trying to do, which is to be a nice batteries-included-style runtime that could be a usable alternative to Python in the Node ecosystem, is really great. I'm using it for a personal project where I can use its single-EXE distribution feature to make releases easy.
That being said: lmao what the hell this was just an experiment LAST WEEK
Sloshy42@reddit
Thought about this for a minute. I'll say, if they can pull this off, this is actually really fascinating. The idea that if you just throw a stupid amount of resources at a problem, you can get a working solution this fast is interesting to me. Obviously technical debt will be an issue, I guess, but at some point these AI agents are going to be rewriting and reviewing so much of the code, so fast, that -- and arguably we're already there -- the actual code becomes fluid, changing with whatever the needs of the project are, rather than being a well-understood behemoth.
Now... if I were shipping actual production code with Bun, well, first of all, it always seemed kinda bleeding-edge, so this wouldn't surprise me too much. But also, I'd maybe wait it out a bit before biting the bullet.
OdderG@reddit
I believe that this rewrite attempt will become "proof" for Anthropic salespeople to sell their AI with the selling point of "no human needed"
Affectionate-Job8651@reddit
bun was made by claude, gpt, or other ai
UnmaintainedDonkey@reddit
WTF??? They said this was a tongue-in-cheek test? Now they've actually merged it??
UnidentifiedBlobject@reddit
Now time to rewrite it in assembly.
Zilch274@reddit
Do it in binary like a real man
MaLiN2223@reddit
Why not, it would improve speed, and I recently heard that code is for AI now, not for humans to read.
/s just in case
BipolarKebab@reddit
Rust 5/14
thecarlpetera@reddit
lgtm
lkajerlk@reddit
that was quick