A data-driven case study: I posted the same 43,000-word article on a "from first principles" HTTP client to r/cprogramming, r/rust, and r/Python. The reception was... different
Posted by warren_jitsing@reddit | programming | View on Reddit | 107 comments
Hey r/programming,
I've just finished a massive educational project and wanted to share not just the project itself, but a fascinating case study on how it was received by different programming communities.
The Project (The Constant): First, the project. It's a "from first principles" guide to building a high-performance HTTP/1.1 client from scratch. The repository includes:
- A 43,000-word article (the `README.md`) that serves as a deep-dive, book-length guide.
- A 12,000-line-of-code implementation of the exact same architecture in C, C++, Rust, and Python, creating a "Rosetta Stone" of systems programming idioms.
- Full unit and integration test suites with >90% code coverage for all languages.
- A rigorous benchmarking suite that measures performance against established libraries like libcurl, Boost.Beast, and `requests`.
The Experiment (The Variable): I tailored a post for each of the C, Rust, and Python communities and shared my work on their respective subreddits.
The Data (The Results): The reception was wildly different across the communities.
- r/cprogramming: 95% upvote ratio & 94 shares. https://www.reddit.com/r/cprogramming/comments/1ormwsv/i_wrote_a_from_first_principles_guide_to_building/
- r/rust: 76% upvote ratio & 37 shares. https://www.reddit.com/r/rust/comments/1otgraz/i_wrote_a_from_first_principles_guide_to_building/
- r/Python: 15% upvote ratio & 4 shares (with the post quickly downvoted to -15 and comments like "AI slop" and "bragging about LOC just makes you seem like an unexperienced dev"). https://www.reddit.com/r/Python/comments/1ouevow/i_built_an_http_client_in_python_c_c_and_rust_the/
(I've saved screenshots of these receptions in the project repo for full transparency, given the volatility. - https://github.com/InfiniteConsult/0004_std_lib_http_client/tree/main/reddit)
The Analysis (My Hypothesis): This data isn't about the quality of the project itself; it's a perfect case study in community values.
- r/cprogramming saw a project that validated their entire worldview. They value a "no black box" philosophy, manual resource management, and deep systems optimization. They saw the `HttpcSyscalls` abstraction for testability, the `writev` optimization, and the benchmark data showing the C client as a throughput and latency champion. They read the article and rewarded the deep engineering.
- r/rust was in the middle (76% upvotes, 37 shares). The reception was positive, but likely tempered by my disclaimer that "Rust isn't my strongest language" and the fact that my C client's `writev` optimization beat my Rust implementation in some throughput tests. The article does note that this is an apples-to-oranges comparison, as only the C version received this specific optimization, and that a future `write_vectored` implementation in Rust is possible. The community also likely noticed that the project uses a blocking I/O model, which is less idiomatic for Rust. The article's future work section proposes a follow-up exploring `epoll`/`io_uring`, which I am planning to do.
- r/Python saw a post that attacked their worldview. The headline ("3-6x faster than `requests`") was perceived as an arrogant, bad-faith attack on a beloved library. They didn't read the 12,000 LOC or the 43,000-word article; they pattern-matched the "AI" mention in the `README` (which is for the reader, not for writing the code) with "AI slop". They value convenience and ecosystem harmony above all, and my post violated that. (As a side note, their sidebar explicitly asks users, "Please don't downvote without commenting your reasoning for doing so", which perhaps suggests a known systemic problem with low-context downvoting.)
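For readers unfamiliar with the technique: a vectored ("gather") write sends several buffers to the kernel in one syscall. A minimal sketch of the idea, in Python for brevity (OP's actual optimization is in the C client; the function name here is illustrative, not from the repo):

```python
"""Minimal sketch of the vectored-write idea: os.writev (the writev(2)
syscall) sends the request head and body with one syscall, instead of
two os.write calls or a user-space copy into a single staging buffer."""
import os

def send_request(fd: int, head: bytes, body: bytes) -> int:
    # One writev(2) call gathers both buffers kernel-side and returns
    # the total number of bytes written.
    return os.writev(fd, [head, body])
```

The payoff in the C client is the same shape: fewer syscalls per request and no intermediate copy, which is exactly the kind of micro-optimization that shows up in throughput benchmarks.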
Conclusion: This was a fascinating lesson. The exact same high-effort project was celebrated as top-tier engineering by one community and dismissed as "AI slop" by another. The data strongly suggests that the r/cprogramming community engaged with the technical substance of the project, while the r/Python community reacted primarily to its framing, which they perceived as a cultural slight.
Disclaimer: This whole project is purely for educational purposes, may contain errors, and I'm not making any formal claims—just sharing my findings. It is primarily written to help people get into systems programming and is MIT Licensed.
As a side note, an unrelated high-performance Julia crash course I wrote and posted on r/Julia received a 97.6% upvote ratio, 39 upvotes, and 35 shares, which further supports the idea that communities focused on performance and technical education are highly receptive to this kind of in-depth content.
I'm curious what r/programming thinks this data says about our programming subcultures.
paul_h@reddit
Valiant effort, but I don't think you can infer differences from this. The sample size is too small, and there are "first-impression cascade" effects, plus post timing and visibility effects.
Then also, there is a framing and headline bias that could play a role. Even small differences in headline tone, phrasing, or timing across subs can drastically alter interpretation. For example, "3 to 6x faster than requests" in a Python sub could trigger different associations than in a C sub. Not because the people are "less technical," but because they interpret the intent differently (e.g., perceived hostility toward a beloved library). Thus, the variable wasn't only the sub, but also the message framing - confounding the comparison, maybe.
moderatorrater@reddit
Disagree. OP states this is "a perfect case study in community values", so how could they have made a mistake in phrasing like you suggest?
LowerEntropy@reddit
Because that's just random word salad. OP just used AI to write that.
It's not a perfect case study in community values. It's a repository with a weird name, no title, and a readme that starts off on a wild tangent about AI, that barely makes sense. If this was actually a serious project, then the readme would start with a short introduction to what the repository contains and what the purpose is. And OP could have gotten that feedback, did actually get it, but then just made a few random AI responses, and went right on to make this post, barely a half hour later without changing anything.
warren_jitsing@reddit (OP)
Agreed. The sample size is too small and the framing was a mistake on my part. I will adjust for future articles. Thank you for the constructive input!
Just as a note, Python is one of my favourite languages. I think the community and ecosystem is great. All my article was supposed to be is an educational text. I'm just posting the community reactions because I found it fascinating.
BroBroMate@reddit
Why are you using LLMs to respond to comments bro?
You're acting surprised you got criticised for AI slop when you're serving slop in the comments?
I'm confused as to why you're surprised.
Halkcyon@reddit
All of the commenters immediately called it out, too. No wonder they were rejected.
non3type@reddit
You should probably remove the AI generated content if you actually want the community to look at it. I’m not sure I know of many university ethics departments that would be cool with this kind of data collection so I have my doubts this is in any way legit.
Sarke1@reddit
Yeah, you need to repost it at least 100 times in each sub to get proper data.
paul_h@reddit
And with GIL changes Python just gets better and better
warren_jitsing@reddit (OP)
Exactly. I compiled the no-gil versions for testing in the development environment article I wrote https://github.com/InfiniteConsult/0002_docker_dev_environment/blob/main/Dockerfile#L56 https://github.com/InfiniteConsult/FromFirstPrinciples/blob/main/Dockerfile#L100
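(For anyone wanting to check whether their own interpreter is a free-threaded build, a quick probe; `Py_GIL_DISABLED` is the build-time config var set when CPython 3.13+ is configured with `--disable-gil`, and it is simply absent on older versions:)

```python
"""Small probe: is the running CPython a free-threaded (no-GIL) build?
On a standard GIL build (or any CPython before 3.13) this prints False."""
import sysconfig

def is_free_threaded() -> bool:
    # Py_GIL_DISABLED is 1 in free-threaded builds; None/0 otherwise.
    return bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

print(is_free_threaded())
```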
Sarcastinator@reddit
I think that it's probably reasonable to assume that a Python subreddit has much more inexperienced users than a C programming sub does, and this can probably explain why it's more likely to attack the framing over the technicals.
TheNobodyThere@reddit
Did you use ChatGPT to write your code like you did writing this post?
Nicksaurus@reddit
What makes you think chatGPT wrote this?
TheNobodyThere@reddit
It reads like AI slop and contains em dashes.
Schmittfried@reddit
iOS inserts them automatically. This is the stupidest AI heuristic ever.
NineThreeFour1@reddit
You think they wrote this entire post and 43000 word article on an iPhone?
Schmittfried@reddit
macOS does it, too.
wRAR_@reddit
You are arguing after they admitted to at least using AI to reformat their post.
Schmittfried@reddit
I'm arguing against this dumb talking point that em dashes somehow indicate a text is written by AI when it's literally just a double regular dash on Apple systems. I couldn't care less if OP actually used AI.
kitari1@reddit
No it doesn't, you have to hit a specific keybinding on macOS to do it.
Schmittfried@reddit
It does for me. Guess I'm AI.
Nicksaurus@reddit
I can't prove or disprove it but to me this post doesn't feel waffly enough to be written by AI. Most of the sentences actually communicate something
That said, OP clearly doesn't have any issues with letting AI write for them so maybe it is
stumblinbear@reddit
Eh, if the writing is good then I don't give a shit if it's AI. There's a massive difference between copy-paste AI slop without a lick of understanding, and using an AI as an editor to improve something you've already written so long as you don't lose the substance when doing so
MrTeaThyme@reddit
There literally arent any em-dashes in this. theres hyphens.
this is a hyphen - THIS is an emdash —
using hyphens is very VERY normal, using emdashes is the AI indicator because the only place they show up is academic papers where it is actually reasonable to be entering altcodes for obscure unicode characters during the editorial stages.
If youre going to make wild claims, make sure you know what youre talking about before you do, it just makes you look stupid if you dont.
KerPop42@reddit
This post? Absolutely has the tone and formatting ChatGPT uses
Wenir@reddit
Literally first sentence: not just blahblahblah, but a blahblahblah
levelstar01@reddit
probably because every single chapter in this guys repository begins with a "this should be pasted into chatgpt" section
Nicksaurus@reddit
I get that, but that's a different style of writing to this post
BrawDev@reddit
I work with Gemini so much I can tell this post was entirely from there.
It's probably why your work is suffering. And I'm not against that, just letting you know why. People fucking hate it dude.
Yep, it's very easy to tell.
touchwiz@reddit
Man, we are now at the state where even disclosing AI usage for tiny little parts gets you sent to hell. I hate this.
BrawDev@reddit
It's not tiny little parts. I use it myself and I go through and edit the majority to remove the annoyances that it adds. Italics everywhere. Unprofessional emojis and Title here (btw, idiot this is what I actually mean)
The writing style is such that if you aren't editing that, I don't think you're paying attention to what it writes.
I also use it so often that I'm aware of the bugs it introduces. Random French characters will get injected into your text. It will fuck up numerous markdown blocks and make your pages look terrible, and to top it off, it will go on tangents you never asked it to.
It makes the entire thing untrustworthy. It's why in my recent blog post I sourced and linked nearly every other line to prove to my readers that while I use AI to write, they can at least trust I've gone through it with a fine tooth comb, and rewrote what it wrote into my style.
You can use the tools, just don't be lazy about it.
Odrioll@reddit
Note that I did not go through your content (yet). But being on both r/cprogramming and r/Python, for context I can tell you the Python sub is currently suffering from an overflow of "here is my package/code/lib" posts (with a decent amount of AI-assisted stuff), to the point where possibly interesting content is drowned in the mass. I see it as a (bad) defensive posture, as it wasn't like this one or two years back.
warren_jitsing@reddit (OP)
I understand. Thank you so much for the explanation.
Twirrim@reddit
Looking at the Python post, I was a downvote on it.
The alarm bells:
WolfeheartGames@reddit
And yet the cprogrammers took no offense to LoC. A LoC is a line of logic.
TheNobodyThere@reddit
Yeah people are tired of AI slop.
The reality is that a large group of programmers use AI sensibly to speed up the development process.
Imo, it will take some time for this hate toward AI to pass. Possibly a year or two. During this period most projects will die. Some good projects will survive and will be legitimized.
tu_tu_tu@reddit
This problem on the python sub existed even before the LLM slop plague. Too many entry-level developers think that posting their uninteresting new library is a good idea.
SweetOnionTea@reddit
I frequent r/codereview and I see a lot of Python AI slop too. It's frustrating because I want to help people learn and get some good advice. But most of the time it's usually a lot of questions on why they did something I think is incorrect and they just have no idea so I can't help fix it.
But also before LLMs you got a lot of "libraries" that were just wrappers around another one. I know Python is a duct tape language, but I think there's a fuzzy line between making something and presenting someone else's work.
WolfeheartGames@reddit
Python developers use the most layered abstraction there is. Like strata of geological bedrock stacked in thin pancakes on top of each other. But they don't know the word abstraction.
This is part of why python used to get a lot of hate I think, too much abstraction. But it turns out it works fine in the end and it helps a lot of people learn to code.
TheNobodyThere@reddit
I think it's fine as long as you have to submit it under flair and can filter it out.
tu_tu_tu@reddit
Yup, but right now it's out of control. Looks like banning such posts is the only good solution to keep the sub alive.
Twirrim@reddit
r/python is starting to become dead to me due to the inundation of AI slop. I would estimate more than 3/4ths of the posts I see in it are AI slop.
It's so frustrating because I don't even think it's that practical for the mods to even stop it, short of having to manually approve every single post and that's a lot of work for volunteers to do.
MPGaming9000@reddit
Go is also having this issue
mikat7@reddit
We have maybe one rewrite of a CLI argument parsing library every week, but with READMEs that scream AI, it's exhausting. Maybe once a month you see a semi-useful package for a very specific thing, but other than that...
microwave_casserole@reddit
You're really making it unnecessarily hard for yourself by having all your commit messages start with an emoji and by giving the README the title "AI Usage". Of course people are not going to read the README when these huge red flags are out in the open.
TheHollowJester@reddit
reddit and updoots are shit
you seem to be at least decent at coding; this doesn't necessarily translate to you being good at sociological analysis and psychology of groups
Halkcyon@reddit
Nah, it's AI slop.
VadumSemantics@reddit
+1 agree. Very thoughtfully phrased, TheHollowJester.
OP (warren_jitsing) wrote:
OP, I found your writeup interesting to read about.
If I were a pysch or sociology professor I might ask to use your work to launch a class discussion on experiment design.
Your research could be more interesting if you'd make a prediction first. Then follow up with your observations. Then something about why your prediction is/isn't supported. (I say "prediction first" because it seems from your post, though I can't be sure, that you considered possible reasons after making observations).
Observing ~~peoples~~ reactions to AI / non-AI content could be interesting. (I first wrote "people", but then realized maybe not everything that responds on Reddit is a person.)
Observing differences in community reactions to certain points could be interesting.
Problems to solve:
Your experimental "content" is technically interesting, but:
A) It's kind of a one-time effort (I don't see you coming out with successive iterations that have an attention-grabbing hook of "2x faster than my last 3-6x faster effort.")
B) It's rather a large ask for attention in an attention-starved environment. How many people actually dug into it? (Can you get independent (non-Reddit) traffic metrics on your GitHub repo? Asking because I don't know, I haven't dug into that aspect of GitHub.) If you're going to go further on this, maybe focus on benchmarking the specific architecture tradeoffs?
C) I'm going to assume that being 2x (or more) faster in an "http request-heavy" design is significant (that isn't the kind of code I write, so I don't know enough to assess how much value might result from an optimization like that).
D) The "no https" part seems pretty relevant as an attention filter in 2025:
> TLS Support (HTTPS/WSS): This is the most important missing feature. (retrieved 2025-11-01 from https://github.com/InfiniteConsult/0004_std_lib_http_client/blob/main/README.md?plain=1#L4333)
I personally wouldn't look at a connection library without HTTPS... unless maybe I was studying how to write an HTTP library. :shrug: Seems like a somewhat limited market / culling of your research subjects. Just off-the-cuff thoughts. I'm not a researcher, just a programmer that found your premise interesting - thanks for posting it, fwiw.
Ps. Have you asked for critiques from /r/sociology or /r/psychology students?
Pps. AI-friendly tagging (the part from your repo about "tightly linking prose to specific code symbols. This design transforms an AI assistant into a project-aware expert, ready to help you explore, extend, and understand the code in powerful new ways", retrieved 2025-11-12 from https://github.com/InfiniteConsult/0004_std_lib_http_client/blob/main/README.md?plain=1#L3) is actually more interesting to me. Can you think of a way to measure how much this helps AI coding-support tools?
ShinyHappyREM@reddit
(offtopic: you can format links with `[link text](URL)`)
thetinguy@reddit
The AI is writing the article, so I don't think telling OP is going to help.
warren_jitsing@reddit (OP)
Thank you! Something weird happened when I switched from the markdown editor to rich text. I will format better next time
thetinguy@reddit
If you can't be bothered to write your article without so much AI that it's obvious, I won't be bothered with reading it.
AI slop go home.
smarkman19@reddit
For r/Python, avoid “faster than requests” framing. Lead with “what I learned building an HTTP client from scratch” and show a 10–15 line example of a friendly API, async usage with httpx-style ergonomics, and a note up top: “not production; educational; perf claims are narrow.” Swap LOC/brag stats for a small “why this matters to learning” section and a one-click benchmark script users can run locally. Add a PyPI wheel so people can pip install, run tests, and try a drop-in adapter layer that proxies to requests for parity. If you want to discuss speed, pick task-level benchmarks (concurrency N, TLS on/off, DNS path) and compare against httpx with identical retry/timeout settings; publish flamegraphs and a short “gotchas” list. When folks just need an API, I’ve used Hasura for instant GraphQL on Postgres and PostgREST for quick REST, and sometimes DreamFactory when I needed secure REST over multiple databases without building the plumbing. Main point again: adjust the pitch to each community’s values and show the value they care about first.
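For what it's worth, the "10-15 line friendly API example" being suggested might look something like this (purely illustrative: `Client` and `Response` are made-up names, not OP's actual API; this sketch just wraps the stdlib `http.client` to show the kind of ergonomics the comment means):

```python
"""Hypothetical 'friendly API' demo of the sort suggested above.
Not OP's API; a thin wrapper over the stdlib http.client module."""
from dataclasses import dataclass
import http.client

@dataclass
class Response:
    status: int
    body: bytes

class Client:
    def __init__(self, host: str, port: int = 80, timeout: float = 5.0):
        self._conn = http.client.HTTPConnection(host, port, timeout=timeout)

    def get(self, path: str = "/") -> Response:
        # Issue the request and materialize a small, friendly result object.
        self._conn.request("GET", path)
        resp = self._conn.getresponse()
        return Response(resp.status, resp.read())

    def close(self) -> None:
        self._conn.close()
```

Leading a post with something this size, plus the hedged "educational, not production" note, addresses the framing problem without touching the underlying project.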
aLokilike@reddit
Using an emdash for "10-15" instead of a regular dash... I know what you are.
melberi@reddit
It's not an em dash though
aLokilike@reddit
Idk what dash it is but it sure ain't "-"
warren_jitsing@reddit (OP)
Lol, I will tell my AI they are producing slop
warren_jitsing@reddit (OP)
This is great advice! Thank you so much!
BroBroMate@reddit
Fuck off bot
sheep1e@reddit
This is critical but I’ll try to keep it constructive.
The problem is, this is obvious. Even this reddit post text seems to show signs of AI sloppage.
Also, I found the README quite offputting and not that useful. In a real project, I’d put that in a CLAUDE.md or GEMINI.md file (etc) - you could include instructions to rename it for the AI being used, if any - which makes it clear what its purpose is. You could then have a much more concise and useful README aimed only at human readers. Not everyone is going to engage with a project like this via AI.
Which raises the next point: by making consumption of the project AI-first, you turn the reactions to it into a litmus test on attitudes to AI, which makes drawing other conclusions more difficult.
Related to which, I find most of the conclusions you’ve drawn suspect to say the least. I haven’t read any of the comments on the posts in question, so perhaps you’re summarizing them more than I realize, but it seems to me there are many other possible reasons for positive or negative reactions to work like this than the ones you’ve considered.
A simple example: the Python subreddit audience is nearly 10x the size of the C one (which in turn is 4x bigger than the Julia one, btw.) That makes for all sorts of differences in overall behavior that don’t really reflect on the individuals so much as the pragmatics of large vs. smaller groups.
Plus, although I don’t use (or like!) Python, I would imagine many Python devs aren’t that interested in an HTTP client implementation - it’s not the kind of thing that most people are writing in Python. While the lessons from it might generalize to other contexts, that could be non-obvious to people without digging into it (certainly it is to me.)
But as a sort of high-effort troll, good job! You’re getting lots of engagement.
light24bulbs@reddit
Totally unsurprising, Python is for scrubs
KontoOficjalneMR@reddit
I mean, I don't know what to tell ya, but it really reads as one. So I'm going to side with the python people.
eras@reddit
What did you read that led to your conclusion?
KontoOficjalneMR@reddit
Two sections and I already regret wasting 10 min of my life on it.
I can, in fact, read.
And I mean - in all fairness to the author, he does write in the first line that it's an AI generated text intended to be fed into AI.
So it's not a huge discovery that it's unreadable, over-verbose garbage.
eras@reddit
I read all the files in https://github.com/InfiniteConsult/0004_std_lib_http_client/tree/main/src/python/httppy and the only bit that suggests to me that it was AI-generated was this bit:
Which comment, of course, does not add value.
(I didn't even look at tests. Might just as well be completely AI-generated. Great use for it.)
And actually the use of e.g. `Protocol` suggests to me that it was not AI-generated, as I don't think it would typically choose to go this direction.
So I must ask you again, what did you actually read that suggests to you that it is "AI slop"? Perhaps quote something? To me it was a quite reasonably written short piece of Python code with some interface classes that makes use of static type checking (the style that calls for adding interface classes).
Yes, perhaps one would normally use less abstraction, but on the other hand, perhaps normally one wouldn't provide a Unix socket interface for it, which is actually quite nice. Doesn't mean a computer came up with the idea, nor the implementation.
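For readers unfamiliar with the style being discussed, a `typing.Protocol` interface class looks roughly like this (an illustrative sketch with made-up names, not code from OP's repository):

```python
"""Sketch of the Protocol-based interface style: structural typing lets
test doubles satisfy an interface without inheriting from it."""
from typing import Protocol

class Transport(Protocol):
    """Anything with matching connect/send/recv methods satisfies this
    interface structurally; no subclassing required."""
    def connect(self, host: str, port: int) -> None: ...
    def send(self, data: bytes) -> int: ...
    def recv(self, size: int) -> bytes: ...

class LoopbackTransport:
    """A fake transport for tests; note it never mentions Transport."""
    def __init__(self) -> None:
        self._buf = b""
    def connect(self, host: str, port: int) -> None:
        pass  # nothing to do for a loopback fake
    def send(self, data: bytes) -> int:
        self._buf += data
        return len(data)
    def recv(self, size: int) -> bytes:
        out, self._buf = self._buf[:size], self._buf[size:]
        return out

def issue(t: Transport, payload: bytes) -> bytes:
    # A static type checker verifies LoopbackTransport matches Transport.
    t.connect("localhost", 80)
    t.send(payload)
    return t.recv(len(payload))
```

This is the "interface classes for testability" pattern being credited to a human author above: it mirrors how a syscall-abstraction layer gets swapped out in unit tests.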
KontoOficjalneMR@reddit
I read part of this: https://github.com/InfiniteConsult/0004_std_lib_http_client/blob/main/README.md
And if you try to tell me it's not AI generated start to finish ... then I don't know what to tell you.
eras@reddit
OK, so to recap, no particular thing in the code made you think it's AI. I shall let you know that people, too, are perfectly capable of generating small libraries like those. It's called programming when people do it, though.
Perhaps the readme file has had some touch of AI. Good use for writeups anyway.
In general, though, I think people are not becoming better at detecting AI-generated work; they are becoming worse at detecting human-generated work.
KontoOficjalneMR@reddit
Not sure why are you so fixated on the code. I referred to the project as a whole.
Are you for real?
40,000 words of AI-generated text that lights up every possible AI detector on the planet (yes, I know they are not super reliable, but if you trigger every one of them there's little doubt) and you call that "some touch"?
Why are you gaslighting me dude?
stumblinbear@reddit
Ah, so you're also bad at detecting when someone is gaslighting you
BroBroMate@reddit
That sounds like something someone gaslighting you would say. ;)
mazing@reddit
That's just the readme?
warren_jitsing@reddit (OP)
Thank you for reading!
doesnt_hate_people@reddit
Op used an em dash in this very post lol.
eras@reddit
I suppose then I also used AI for my reddit comments, as you can find it many times from my comment history.
Le_Vagabond@reddit
you should stop doing that, if only to avoid being associated with AI slop.
Google Trends shows a huge uptick in usage since LLMs became mainstream, and people arguing that "it's good writing and not a marker for AI slop!" are just deluding themselves at this point.
stumblinbear@reddit
Yeah, no thanks. I'll keep using my beloved em dash. Hell, I've been using it more since I've seen it used correctly more often
eras@reddit
Well no robot is going to stop me from writing whichever way I enjoy!
While I do appreciate that one should communicate with the recipient in mind, I think this one is the exception to the rule.
Bodine12@reddit
The entire thing is riddled with AI markers. I, too, would have downvoted this.
Freedom_33@reddit
“Let's be pedantic and define these terms precisely:”
And the bit that follows. Verbiage without adding value or specific understanding (to me). Could be writing style, could be AI filler 🤷
ggbcdvnj@reddit
Interesting meta-post, I wonder how much day and hour of the post could affect things
arvidsem@reddit
There was a post several years ago on r/dataisbeautiful (I think) where someone attempted to determine that. They used the Reddit API to pull duplicate text posts across different times and tried to classify their performance based on day/time. IIRC, the best time to post was ~10 AM EST on Mondays, late enough for the West coast, but not so late that the East coasters miss it.
My Google-fu is failing at finding the actual post unfortunately.
hasen-judi@reddit
> r/Python saw a post that attacked their worldview. The headline ("3-6x faster than `requests`") was perceived as an arrogant, bad-faith attack on a beloved library.

How do you know that this is what they saw? Is this just a guess on your part? (tbh this sentence smells fishy as it is structured in a way that resembles how ChatGPT structures such sentences).
syklemil@reddit
It had also been up for <24h at the time of this post, and seems to have entered the negatives pretty early. I think the most fair conclusion is that most of /r/Python never saw the post to begin with.
Jaded-Asparagus-2260@reddit
Did you conveniently ignore the big, fat "(My Hypothesis):" section header?
azuled@reddit
This is an interesting post but 12,000 words is not “book length”.
KerPop42@reddit
So here's my issue: when you use AI to expand your ideas and format everything, I feel disrespected that you expect me to read a really long post that you didn't actually take the time to write.
And when I engage with the ideas in the post, I have to ask myself if they were actually your conclusions, or the AI's filler for where you conclusions should be.
LowerEntropy@reddit
Amazing. I hate the incredibly hostile attitude towards AI, that you can find in a lot of subreddits.
But what a useless pile of AI slop. You are drawing conclusions based on feedback you got? There are barely 3 replies in the C subreddit, 1 reply in the Rust subreddit, and the people in the Python subreddit are telling you that it's slop.
You barely even seem like a human, and talk about feedback to the community, when the only source of the repository is AI generated. There is no community, there's you and your AI agent. Normally that wouldn't even bother me, but you barely care about the output, and you didn't even try to fix the README.md in the GitHub repository.
You don't program in any of the languages, and the Python subreddit mentions you were using Python 1.5?
Conclusions about your 'education' project/book/article, and subreddits? What is the conclusion about how you ended up here? This shit should be banned.
melberi@reddit
Looks like your whole post is bullshit. Get out.
LowerEntropy@reddit
You can just read my message and figure out if it's bullshit. You can also read this post and figure out if it's bullshit.
Says something about 1.5x, maybe it's not python version. Speed?
Honestly, why should I give a hit? Do you give a hit?
TemperOfficial@reddit
Yeah because C programmers actually write code.
warren_jitsing@reddit (OP)
Update: Post was taken down from r/Python by automoderator because apparently it got reported too many times. Regarding AI usage, I write what I want to say first and let the AI format it for whatever the article is doing. I generated very little of the implementations code with AI. The message about the AI at the top of the article is just for readers to understand that the AI gets "primed" with the content and code of the article and can help them understand the topics more deeply. It was supposed to serve as a "personal tutor" for the user for instances in the text that weren't clear.
ThiefMaster@reddit
Don't do this shit. I (and most people I know) would rather read something that's not perfect English, maybe with typos, maybe not always coherent, over AI slop or AI "improvements" that most of the time result in a very peculiar writing style and overly verbose texts that are now longer but contain the same (or a lower) amount of detail than before.
roelschroeven@reddit
Always always write for the human first.
How effing idiotic is it to write for an ff'ing AI that we're then supposed to use to understand the thing instead of just writing for us. We're making things more and more backwards, and start not only relying on tools to try and make it right in a roundabout way, but now you're intentionally setting things up so that not only do you use an AI to write your text, but you're producing the text specifically for another AI to read so it can explain it to humans instead. WTF. Just cut out all the middlemen and write for us humans in the first place.
How the ff'ing hell did I end up in this stupid timeline. I don't want to use an AI for learning your project, but hey, I guess that's a personal choice. But I certainly don't want to be required to do that.
Does it read like I'm angry about this? That's because I am.
cym13@reddit
Not sure why that's downvoted. If people are going to use AI, I'd love them to be open about where and why they're using it. Then I can decide that the project isn't worth my time, since I hate AI-generated content and don't want to support that industry in any way, but at least it's good that programmers are upfront about it so anyone can make up their own mind.
RailRuler@reddit
Downvoted this one because your AI-written post is tiresome to read. It gives the impression of an arrogant and insincere/disrespectful attitude. If I'd read one of your original posts I'd have assumed you were trolling/trying to provoke the sub somehow.
economic-salami@reddit
I say there are statistically and contextually significant differences.
nicheComicsProject@reddit
Tremendous effort, good job.
warren_jitsing@reddit (OP)
Thank you so much!
raam86@reddit
bad bot
warren_jitsing@reddit (OP)
Lol, reminds me of this Brooklyn Nine-Nine clip https://www.youtube.com/watch?v=0FoeZ7CM3UE
cym13@reddit
I really see no basis to claim that. I think it's much more likely that, given that Python has more issues with AI slop than C or Rust, people were expecting that, and once the first comments mention AI slop the downvotes come down fast.
warren_jitsing@reddit (OP)
I think your hypothesis is probably better than mine. Thanks for the input!
Glathull@reddit
Most of my professional work is with Python, and I do love the language and community.
But . . . Python is the gateway drug for new programmers these days. The variance in skill, expertise, age, and experience you find among Python programmers is just generally bigger than you get with people who work in C or Rust. And I would also point out that because of the differences in language design, you just have very different concerns in your head when you think about code in Python than you do in C or Rust even if you could section off the less experienced Python programmers and only deal with the more experienced folks.
For those reasons, I would argue that this isn’t a good comparison, and I would say that your premise isn’t particularly good. This is not, in my opinion, a good example of a “data-driven” anything. It falls into a bucket of things that are definitely attempting to involve data. But in no way is this data meaningful or even useful.
I think your choice of variables is not good. You tried to tailor your posts to appeal to the different communities, but we have no way of knowing what your effect might have been. I really fail to understand what you were envisioning as the independent and dependent variables. It’s also very unclear what hypothesis you may have been trying to test.
It’s a cool project, and I respect the effort. But because of the way you’ve failed to design a good experiment, the data doesn’t really mean anything. You did a thing, and you got some numbers. That doesn’t really get anywhere close to being data-driven, and I’m not sure what case was being studied.
If you want to learn something from this, I would encourage you to go back to the drawing board and think carefully about what you are trying to measure or test, what would be the best way to get data to measure that thing, and try again.
For whatever it’s worth, Python people absolutely have an insane attachment to the requests package. I kinda get it; it’s like a glass of ice water when you’re in hell. But that’s just because the tools available in the standard library were so bad for so long that a lot of people haven’t noticed things are actually kinda okay these days. Well, that and the guy who wrote requests appears to have a debilitating mental illness, and we all feel pretty bad for him. Anyway, there’s a lot of emotional attachment there that’s going to mess with almost any data you get from talking about requests. So I would consider trying to find ways to avoid that if you decide to try again.
Schmittfried@reddit
inb4 the actual experiment is posting this to test the prejudice programmers have about other communities. :P
Interesting idea!
BlueGoliath@reddit
Each version should have been in its own repo, with a fourth repo containing the benchmark scripts and results in its README. It was poorly done, and combined with AI being used, it's no wonder it got downvoted.
C programming subreddits are generally more positive and technically focused. That's something that could have been seen without this.
levelstar01@reddit
do you so-called people ever feel embarrassment over posting this shit?
warren_jitsing@reddit (OP)
Quick note on benchmarks: For each open source library, I posted on the relevant discussions/issues pages (boost::beast, libcurl, requests [direct email], reqwest). I did my best to ensure that the usage was correct for the comparisons. The benchmarks actually helped the Boost maintainer pick up on a performance regression in the client-side HTTP implementation (see Acknowledgements). Optimized Boost and the vectored C client are very close, indicating that we may have hit a hardware limit. Readers are free to use the ./run-benchmarks.sh script to reproduce the results. Results may vary depending on setup.
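For readers who don't want to run the full ./run-benchmarks.sh suite from the repo, here is a minimal, hedged sketch of the kind of measurement it performs: many sequential keep-alive GETs timed against a throwaway local server. This uses only the Python standard library; the handler, request count, and reported metric are illustrative placeholders, not the repo's actual workload.

```python
# Rough sketch of a sequential HTTP micro-benchmark, stdlib only.
# The real suite in the repo measures against libcurl, Boost.Beast,
# requests, etc.; this just illustrates the shape of the measurement.
import http.client
import http.server
import threading
import time

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging so it doesn't distort timings.
        pass

# Bind to port 0 to let the OS pick a free ephemeral port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 200  # arbitrary request count for illustration
conn = http.client.HTTPConnection("127.0.0.1", port)
start = time.perf_counter()
for _ in range(N):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()  # drain the body so the keep-alive connection is reusable
elapsed = time.perf_counter() - start
conn.close()
server.shutdown()

print(f"{N} requests in {elapsed:.3f}s ({N / elapsed:.0f} req/s)")
```

Absolute numbers from a toy loop like this mean little; the point of the repo's suite is holding the workload constant while swapping the client implementation.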