ds82 17 hours ago [-]
Is there a counter petition?
The author of the PR is a long-time Node.js contributor and conference speaker. He explicitly claims: "I've reviewed all changes myself."
In the end it's a question of whether you trust him to submit a useful, well-reviewed PR. It doesn't matter whether it was created using AI or not.
cj 1 day ago [-]
I have no unique perspective to add other than an obvious question: If the PR is low quality, why not just close/reject it? Does it matter if it's AI assisted or not?
OptionOfT 1 day ago [-]
Because AI PRs need to be reviewed with a lot more scrutiny, simply because AI is good at generating code that looks good but isn't necessarily correct.
So now you're looking at a PR that at face value looks good, but doesn't reflect the author's skill and understanding of the subject.
Meaning you now shift more work onto the owners of the codebase, as they have to go through those verification steps.
th3tekllc 11 hours ago [-]
This makes no sense. It shouldn't matter whether the PR was written by AI or a human.
jkubicek 4 hours ago [-]
If a person writes 100 lines of code, there's a (valid) assumption that someone thinks those 100 lines of code are worth writing. With AI it takes no effort to write 10,000 lines. Asking someone else to figure out whether that code is worth merging just offloads the effort onto someone who didn't ask for it.
whattheheckheck 1 day ago [-]
Just deprioritize it and make the MR openers do more verification.
rendaw 21 hours ago [-]
What sort of verification?
whattheheckheck 11 hours ago [-]
Everything that a maintainer would need to prove to themselves to merge it can be codified in a pipeline.
Or some kind of protocol for building those things in the MR so that any new behavior explicitly demonstrates the new states and transitions.
This is hard if the new MR introduces a completely different paradigm outside the mental model of the reviewer and maintainer. They might be better off completely forking the project and running it in parallel, i.e. taking on the maintainer duties themselves, if they feel so inclined to completely change things.
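The "codified in a pipeline" idea above could look something like the sketch below. Everything here is hypothetical for illustration: the PR fields, the size threshold, and the outcome labels are placeholders, not any project's actual tooling; a real pipeline would feed in CI results (lint, test, coverage runs) instead of pre-filled booleans.

```python
# Hypothetical pre-merge gate: each claim a reviewer would otherwise
# verify by hand is checked mechanically before a human is asked to look.

MAX_DIFF_LINES = 2000  # arbitrary size cap; tune per project


def merge_gate(pr: dict) -> str:
    """Return 'reject', 'needs-human-review', or 'auto-checks-passed'."""
    # Cheap, objective checks first: size and CI status.
    if pr["lines_changed"] > MAX_DIFF_LINES:
        return "reject"  # too big to review meaningfully
    if not (pr["tests_pass"] and pr["lint_clean"]):
        return "reject"  # objective checks failed
    # New behavior must explicitly demonstrate its new states via tests.
    if pr["new_behavior"] and not pr["has_new_tests"]:
        return "needs-human-review"
    # Passing the gate doesn't merge anything; a human still signs off.
    return "auto-checks-passed"


print(merge_gate({"lines_changed": 19000, "tests_pass": True,
                  "lint_clean": True, "new_behavior": True,
                  "has_new_tests": True}))  # prints: reject
```

Note the gate never auto-merges; it only filters what reaches the maintainer, which is the part that scales regardless of whether the PR was AI-generated.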
schwede 22 hours ago [-]
One reason is that AI can create PRs at a scale that can overwhelm maintainers, not to mention drown out non-AI PRs.
chjj 1 day ago [-]
That means all AI code would simply be rejected. This saves time.
spoiler 1 day ago [-]
If AI writes a for loop the same way you would... does it automatically mean the code is bad because you—or someone you approve of—didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make bad cost/benefit analyses? Hell yeah it does. Do humans make the same mistakes? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... before there was ever an LLM in sight. So again... what's the argument here? That AI can produce more code, so there's more possibility for fuck-ups? Well, don't vibe code with "approve everything" then. It's not the tool, it's the users, and as with any tool there's going to be misuse, especially with new and emerging ones lol
fatata123 13 hours ago [-]
[dead]
chjj 1 day ago [-]
If this is your opinion, I ask you: are you okay with AI reviewing the PRs as well, or do you prefer a human to do it?
Think carefully before responding.
spoiler 1 day ago [-]
I don't know why you have to qualify your sentence with "think carefully before you respond"; it makes it seem like you're setting up some rhetorical trap... But I'll assume it's in good faith. Anyway...
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or with Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So, in a roundabout way, to answer your question: I think AI as part of the review is fine. As impressive as its output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.
ray_v 1 day ago [-]
It sounds like what you'd send to an LLM lol.
"Think carefully, make no mistakes."
chjj 18 hours ago [-]
Yeah, it never works though, as you can see from this example.
chjj 18 hours ago [-]
You should use AI.
pan69 1 day ago [-]
> A 19k lines-of-code Pull Request was opened in January, 2026.
Such a PR should be rejected simply because of the sheer size of it, regardless of AI use. Seriously, who submits a 19k-line PR? Just make many small ones.
spoiler 1 day ago [-]
The PR touched a lot of internals, including module code, and it mirrors the fs APIs. So yes, it was big, but the commit history was largely clean and followed a development story, and it was tested. The code quality was decent too. I didn't review all of it, though, because I don't have a personal stake in this.
I suggest EVERYONE in this thread go read the GitHub PR in question. There are some good arguments for and against AI, and what it means for FOSS... but good lord, you will have to sift through the virtue-signalling bullshit and have patience for the constant moving of goalposts.
tracker1 1 day ago [-]
How would you go about breaking up this particular set of functionality into smaller PRs, exactly? It's meant to introduce a virtualized file system... the size is dictated by the feature itself.
Also, there's no mention at all of test coverage, or of the impact, if any, on existing code paths.
ramon156 20 hours ago [-]
There are multiple features, not just the VFS.
tylerchilds 1 day ago [-]
On the one hand, agreed
On the other hand, I haven't, and I believe many of us have never, paid Node any money, so it feels weird to dictate their approach.
I can see the good intention in this move, but it's not realistic. The genie isn't going back in the bottle, so the priority shouldn't be artificial limits, but more emphasis on review and sets of eyes required to sign off on a merge.
cpursley 1 day ago [-]
If they allow AI in Node it just might do a full rewrite into Rust, Go or Elixir ;)
mtndew4brkfst 1 day ago [-]
Well, survivorship bias means that Elixir is loudly populated by AI maximalists now. Just go look at the last several years' worth of US/EU ElixirConf talk schedules; it's maybe a third of each cohort, and included in keynote slots.
bhttrrrrrt 1 day ago [-]
How is that survivorship bias?
mtndew4brkfst 1 day ago [-]
Because people who otherwise enjoyed working with Elixir but don't want to participate in or support that kind of environment have mostly left as the trend became clear. So the folks who are sticking around are the ones who are neutral-to-positive on AI. This means that explicitly or implicitly surveying that group for opinions on AI's place in development work, such as while designing a conference schedule, is going to miss most of the voices that might once have objected. It will continue to skew harder towards favoring AI in the future, with most of the possible sources of more-critical opinions leaving.
That to me seems to match the definition of survivorship bias quite well?
thedevilslawyer 23 hours ago [-]
Maybe selection bias.
bwestergard 1 day ago [-]
This is how I would deal with the problem if I maintained node: "Please, use your tokens and experimental energies to port to Rust and pass the following test suite. Let us know when you've got something that works."
ramesh31 1 day ago [-]
This is a silly reactionary response. Where is the line? Can I use AI to look up APIs? Write documentation? What if I write a function and ask AI to test it? What if I manually implemented an idea that I thought about after chatting with AI a few weeks ago?
Stop treating this like it's going to go away. We need actual solutions for the FOSS community that make reviewing AI assisted work tractable.
tredre3 1 day ago [-]
> Stop treating this like it's going to go away. We need actual solutions for the FOSS community that make reviewing AI assisted work tractable.
I don't think it should be up to reviewers and maintainers to put in the work to figure that one out. You want to "disrupt" the open-source pipeline? Fine, then you must propose a solution for the problems that your disruption is now causing.
Come up with a system so that I, a maintainer, can review a large volume of AI-generated PRs where the contributor often has neither the inclination nor the skills of assessing the quality of what they're proposing.
The system must be effective at preventing me from wasting time on very obvious slop; it must also work offline and be free, because most maintainers are unpaid volunteers.
If you can offer that solution, I'm sure more projects would be open to giving carte blanche to AI-authored PRs.
vova_hn2 1 day ago [-]
I don't see how such policies can possibly do more good than harm.
A person who posts slop for whatever reason, or runs bots that post slop, will simply ignore them.
An honest person who cares about the quality of their contribution and genuinely wants to improve the project will be more limited in their choice of tools.
So this policy only serves to limit honest contributors, while doing absolutely nothing to stop spammers/slop-posters.
canmi21 1 day ago [-]
[dead]
huflungdung 1 day ago [-]
[dead]
graphememes 1 day ago [-]
Honestly, this is a small pebble, but it feels like another ripple in the reasons why Node.js is losing to Bun and others.
johnny22 1 day ago [-]
Bun has Claude Code-generated commits as we speak (as robobun).
manwe150 1 day ago [-]
Which does call into question the future stability and quality of Bun. As much as I don't think Node.js should ban AI, the commit messages and code in some recent robobun AI commits looked like hallucinated slop to me.