Added fake latency to a 200ms API because users said it felt like it was 'making things up'. It worked. I'm still uncomfortable about it.
Posted by Ambitious-Garbage-73@reddit | ExperiencedDevs | View on Reddit | 401 comments
The API call took 200ms. Measured it, verified it, fast as hell.
Three weeks after launch the client tells me users are complaining the results "don't feel right". Not wrong, not slow. Just don't feel right.
I spent two days looking for bugs. Nothing. Results were correct, latency was fine.
Then a user screenshot came through. The user had written: "It feels like it's just making something up. It comes back too fast."
The feature was a search over a knowledge base. In the user's mental model, that should take a second. When it came back instantly, it broke their model - they read it as "this didn't actually process anything."
I added a minimum display time of 1.2s with a loading animation. API still ran and returned in 200ms. User sees 1.2 seconds of "working".
Complaints stopped within a week.
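A minimal sketch of the minimum-display-time trick described above, assuming an async Python backend; `fetch_results` is a hypothetical stand-in for the real search call, not anything from the original post:

```python
import asyncio
import time

async def fetch_results():
    # Hypothetical stand-in for the real ~200ms knowledge-base search.
    await asyncio.sleep(0.2)
    return ["matching document"]

async def search_with_min_display(min_seconds=1.2):
    # Run the real call and a minimum-delay timer concurrently.
    # Results are returned only once BOTH finish, so a fast API
    # still shows min_seconds of "working" to the user, while a
    # slow call is never made slower than it already is.
    results, _ = await asyncio.gather(
        fetch_results(),
        asyncio.sleep(min_seconds),
    )
    return results
```

The API itself stays untouched; only the presentation layer waits, which is why the update note "improved feedback during result loading" is technically accurate.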
The part I can't shake: the technically correct solution was perceived as broken. The technically dishonest solution fixed it. I explained it in my update as "improved feedback during result loading" which is... technically accurate.
Anyone else been here? Curious how others frame this to themselves - is fake latency just accepted UX practice or does it bother you the way it bothers me?
teerre@reddit
Alright, people. This is one of those. It has, atm, 16 reports saying OP is a bot. Can one of you explain how you determined that?
suboptimus_maximus@reddit
Personally I think only a bot could be so wrong as to think 200ms is low latency... Fast as hell? What the fuck is OP smoking?
Maybe I'm just getting old and from a time when men were men and wrote their own device drivers. I mean, I started my career in interactive software where 33.3 ms was all the time we had to compute the next frame at 30 fps, and that's super slow by today's standards. Even 60 fps is mediocre these days, and that's 16.6 ms latency... This just cannot be serious. 200 ms is a fifth of a fucking second; that's an eternity by any standard in anything related to UI/UX or interactive computing, easily a more than perceptible delay for a human. And rarely in the history of computing has anyone ever complained about anything being too fast.
CVSThrowaway12@reddit
Not only that, they literally have a substack about using AI in prod…why would a bot have a whole substack about using AI in prod?
SkittlesAreYum@reddit
It's the totality of it, but one specific example would be this part:
99% of people would not make a new paragraph for that. They would combine them. Again, that's not the only thing, but it's one of the biggest tells.
Daex33@reddit
"Complaints stopped within a week." is such a classic Claude sentence.
Far_Composer_5714@reddit
Ooo interesting. I'm not familiar with Claude itself, so it makes sense that I didn't notice any GPT-isms, since apparently it isn't GPT. That would explain it.
TheScapeQuest@reddit
"Anyone else? Curious..."
Once you see it, it'll drive you mad.
Far_Composer_5714@reddit
The problem isn't just one thing; it's that there's an onslaught of AI-isms. Yes, there are key phrases like that, but also the structuring and sentence pacing. It's why there were 78 people saying it was a bot... It reads like a bot, so it feels bad to read because of it.
cookingmonster@reddit
Biggest tell. It is maddening to read this and it's clearly engagement bait.
Ambitious-Garbage-73@reddit (OP)
Eu não sou bot e estou chateado por acharem isso, perdi o interesse em postar nesse sub, trago algo legal pra comunidade e sou tratado assim. Vão todos pra merda [Translation: I'm not a bot and I'm upset that people think that. I've lost interest in posting in this sub; I bring something cool to the community and this is how I get treated. Go to hell, all of you.]
yojimbo_beta@reddit
Why are you pretending you didn't write this?
https://www.reddit.com/r/cscareerquestions/comments/1sevkjh/comment/oeuyd9w/?context=3
teerre@reddit
Ok, this translates to: "I'm not a bot and I'm upset that people think that. I've lost interest in posting in this sub; I bring something cool to the community and this is how I get treated. Go to hell, all of you."
Which I'm pretty sure is not something a bot would say. So I apologize for all these comments. I found it quite weird that of all posts this one would be singled out. Not quite sure how that happened
SkittlesAreYum@reddit
He doesn't have to be a bot for it to still be a low-effort post. I stand by my original statement that the OP was written entirely by an LLM. I don't really care if he manually fed it a question or it was entirely bot generated.
Izkata@reddit
A lot of the replies further up think they did LLM-assisted translation, and that the phrasings that set off people's LLM sense got inserted during that.
yojimbo_beta@reddit
I'm not concerned about you being a bot. I am concerned about you clearly using LLMs to generate all your messages, because it means I am not interacting with a real person.
I also doubt your story is genuine, because I have worked on hundreds of APIs and I've never seen a user complain that a data analysis was too fast.
po000O0O0O@reddit
Literally any post that is separated into one- or two-sentence paragraphs, many of which are a single word, and/or with a staccato pacing, that ends with a general call for responses ("anyone else been here?") is sus as hell.
ChristianValour@reddit
Short punchy paragraphs are online copywriting 101. If AI uses short, punchy paragraphs, it's because the model determined that this is the kind of writing that engages human readers - which is exactly the same conclusion that most half-decent internet content creators came to a decade ago.
I feel obligated to say "I hate AI slop as much as anyone" - because I do, but the general vibe I'm getting from this whole thread is "OP knows how to write engaging content - Oh, must be AI!"
Some people are actually just good writers.
po000O0O0O@reddit
I agree you are correct overall about the writing style.... but you hardly saw that style of writing on reddit before LLMs. Reddit wasn't exactly a hot spot for "half decent internet content creators". It was for nerds like us. You saw that kind of writing on SEO'd-to-hell professional webpages, but not a random self post in a niche subreddit. This is not the only niche subreddit I see this exact formula used in btw
ChristianValour@reddit
Well, I can't argue with that lol.
nigirizushi@reddit
I write like that so it's easy to read. And I always used em-dashes. Fucking hate AI
IlllIllIIIlIllIIIIlI@reddit
The post is absolutely AI. Maybe it's just translated / polished with AI but if you can't tell this then you should find some mods who can spot AI writing.
Randromeda2172@reddit
It's got all the hallmarks of LLM writing:
- Short hook (The API call took 200ms. Measured it, verified it, fast as hell)
- Lot of short, abrupt sentences to maximize impact. Each paragraph is basically one sentence with a similar progression of - premise, result, consequence
- OP barely knows a lick of English in their other posts, but here they're a stylistic author
- "Has anyone else experienced this" in the last paragraph.
elperroborrachotoo@reddit
like every modern "impact" writing advice?
Like... Hemingway?
Like decades of garnering social network engagement do?
I'm not even questioning the result, but the methods become more and more desperate.
byzantinian@reddit
OP can't even read English, and their posting history is dominated by Claude-generated slop or Claude-related discussion. They ain't Hemingway...
elperroborrachotoo@reddit
EMCoupling@reddit
So we got a lot of Hemingways posting in this sub then?
northrupthebandgeek@reddit
God forbid people write competently. Must be fucking bots, the lot of 'em.
elperroborrachotoo@reddit
The topics are certainly different, but who am I to gatekeep a writer from finding new interests?
adreamofhodor@reddit
“The part I can’t shake” is classic Claude.
Ziiiiik@reddit
That’s where I thought AI too
Frechetta@reddit
Not just Claude. That's the part that tipped me off too, and I've never used Claude.
alitayy@reddit
So you would say that it's ... the part you can't shake? >:D
KitchenError@reddit
You are aware that LLMs got their writing style from real world texts written by human authors?
All those "hallmarks of AI" lists conveniently keep ignoring that they make no sense at all, due to this.
ReikaKalseki@reddit
I will not comment on whether this particular post is AI, but I do want to echo your sentiment regarding the trigger-happy witch hunt and extreme rate - and tolerance of - false positive identifications.
I get accused of using AI to write what I write continuously, and for the few people from which I can even tease out a specific indication, it is the most basic and frankly ridiculous shit like "you can spell", "u dunt talk liek a t33nager t3xting", and "you used a dash! A dash!".
Arguably worse, more often than not, pointing out how stupid these criteria are gets zero sympathy because the prevailing opinion on the internet seems to be "I don't care if we hang a million innocent people, AI is so bad that you have a moral obligation to turn your brain off and bandwagon the moment someone makes the suggestion".
-Nocx-@reddit
I hear what you’re saying but despite what you’ve written, your writing style is absolutely nowhere near as blatant as OP’s post or OP’s post history. I am hesitant to call it a “witch hunt” because the programming subs went from fun to browse to an absolute chore. This isn’t people’s feelings making them behave irrationally - there is a quantifiable drop in post quality over the last year.
Before, I could read an interesting headline and my first reaction would be to genuinely engage with it. Now I have to guess a) whether or not the event really happened and b) whether or not the person actually cares about an answer. Obviously there was a chance that people karma farmed before, but on a sub like this it was way, way, way less likely.
I don’t disagree that false positives are bad, but when people get brain drain from reading AI slop post after AI slop post it will desensitize them from absorbing information from any posts at all. I would rather someone write a post in broken English so I can at least ascertain the effort they put into the post. At least then I wouldn’t have to question whether or not the post is authentic for the sake of it being more readable.
ReikaKalseki@reddit
And yet the accusation happens at least weekly, sometimes daily. With just as much confidence as the people saying the OP here is AI.
"Witch hunt" has nothing to do with whether the thing being hunted is actually bad; the term refers to the hypervigilance and high tolerance of error in accusations. And that applies here.
Additionally, part of me suspects you are reading my statements through the lens of this subreddit and only this subreddit, but that is not the case; not only do I have little experience here, being both new and not really interested in 90% of what gets discussed in this space, but I am far, far more active in other spaces - here is my reddit breakdown alone, ignoring every other platform - which is where every single accusation of being AI has occurred. Perhaps witch hunts are less of an issue in this subreddit specifically - the likely bias to older ages probably helps with that - but in general, AI witchhunting is a goddamn plague online.
Aside from what I said above with respect to "this subreddit vs the internet as a whole", I also want to specifically point out that in fact people are, by and large, irrational. The majority of people who are anti-AI, as with anything, are that way not because of any personal experience with the enshittification it has driven, or even any real understanding of the talking points they regurgitate, but because they adopted and now parrot the moral crusade, perhaps because someone said something one time. That applies to every other issue too, and AI is no exception.
-Nocx-@reddit
I don’t want to overly argue semantics but that’s a very liberal use of the word witch hunt - generally witch hunts are unsubstantiated. That clearly isn’t the case here. Browsers of the sub have plenty of substantive evidence that posts are becoming increasingly similar to AI posts in a manner that didn’t happen before. I am operating purely off intuition here, but you can probably pull the top posts on any programming sub and do some rudimentary heuristic using n-gram analysis or vector space similarity and notice an uptick in AI-adjacent writing over the last few years.
Now obviously since what I’m saying is anecdotal and I haven’t done the actual work to analyze my vibes and I could still be wrong, right. But that doesn’t change the fact that it’s a platform-wide problem and people’s feelings around it are reasonable. Calling it a witch hunt makes it seem like it’s a non-issue because they’re acting with no logical basis. There is a logical basis, even if they cannot properly describe it all of the time.
Your case specifically is highly irregular. Most people do not comment on Reddit that much, and (not trying to be disrespectful) but most people are not online as much as you. So you are going to get a disproportionate amount of the blowback policing your writing.
And while I’m sorry that’s happening to you, people can still be adversely affected by something even if they cannot correctly identify the cause of it. People that blame other demographics for labor woes can still be suffering from the accumulation of capital by a greedy establishment even if their blame is misplaced. They are responding to a problem they’re experiencing - which is a rational behavior - they just may not know how to identify the root of the issue or how to solve it, which is why it appears irrational.
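The rudimentary heuristic mentioned above (n-gram analysis plus vector-space similarity) could be sketched roughly like this; the function names are illustrative, not any standard tool, and real stylometry would need far more than raw trigram overlap:

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    # Bag of character trigrams as a crude stylistic fingerprint.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    # Cosine similarity between two n-gram count vectors (Counters).
    dot = sum(count * b[gram] for gram, count in a.items())
    norm_a = sqrt(sum(c * c for c in a.values()))
    norm_b = sqrt(sum(c * c for c in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Run over top posts from successive years, a rising average similarity to a reference corpus of known LLM output would be the "uptick in AI-adjacent writing" being hypothesized.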
binarycow@reddit
Not too long ago, someone kept insisting that I was using AI to write comments.
They would not accept that I simply wasn't doing so. 🤷♂️
johnpeters42@reddit
If "predominantly" is defined as just "most commonly", then probably not. They were pushed (via training feedback and/or whatever other means) to weight certain inputs more, due to them being considered "better", and they settled on a very specific series of tells. The em dashes, the bold text, the bullet points, the staccato sentences, the specific words like "delve", the "curious" at the end. Yes, humans do all these things, but most humans don't do all of them all the time.
Stuffy123456@reddit
Well, everything is "quietly" sounding similar, in the same sing-songy voice.
dbxp@reddit
I agree, there is this weird pushback now against well-formatted posts. I believe bots can now be made to write less 'perfect' posts to hide that they are AI
josephjnk@reddit
I’m with you here. In this case the poster’s account is clearly sus, but I don’t think it’s good to try to focus too hard on writing style.
I’m sick of AI, of the people who think it’s acceptable to wire it into social media, of the people who think it’s acceptable to have it pretend to be human, of the people who accuse me of writing with an LLM, of the pressure to contort the way I write because some assholes trained LLMs on the same corpus of texts that I partially learned from, of just all of it. Together this has all pushed me away from contributing to and spending time on Reddit, but it’s the false accusations and pressure to be un-bot-like that are the most off putting.
The goal should be to increase the ratio of human to AI content on this site. Things which discourage human contributions are also harmful to this end. Think, “the optimal amount of fraud is nonzero” but for AI slop.
HDK1989@reddit
This is clearly an AI post, if you can't notice that it's fine, but it's not paranoid when others can.
KitchenError@reddit
I was speaking about the general paranoia, not this particular example. I have seen too many times that real people were wrongly accused of being AI. The witch hunt is harmful to real people.
Alexandur@reddit
Sure, that does happen, but the widespread use of generative AI passed off as genuine content is also harmful to people. And yes, there are hallmarks that make it easy to spot.
Frillback@reddit
Don't even need this. I read a variation of this story on Reddit years ago. Creative writing in the past became an AI copypasta of today.
Hot-Equivalent2040@reddit
'not wrong, not slow. doesn't feel right' also. not A, not B. Punchy short sentence.
dizekat@reddit
Hadn't reported him, but just look at the profile. Bio:
>Dev writing honestly about AI in production. Newsletter: honestaidev.substack.com | Prompts: lucastrba.gumroad.com/l/lsdyel
Plus the story just reeks of bullshit. Many, many of us have implemented queries that respond in <200ms and never seen that sort of complaint actually materialize. Even in today's enshittified world, there are still many <200ms queries you encounter daily, without their operators adding fake latency.
If there is a perceived benefit to longer queries, it is not so easy to detect, requiring A/B studies without exposure of the A group to B condition and vice versa, or comparison of conversion rates etc.
northrupthebandgeek@reddit
Eh, it ain't that outlandish. TurboTax famously uses this exact same trick for the exact same reason and with the exact same results.
dizekat@reddit
I’m not saying that Turbotax doesn’t.
I am saying that nobody’s slamming every less enshittified piece of tax software with complaints for not having a delay. Whatever weird shit turbotax did they probably had to do a lot of quantitative testing to figure out, and it may even be as simple as their conversion rate for paid stuff getting higher with a delay in free.
northrupthebandgeek@reddit
Except that's not what TurboTax found. In their case it (allegedly) really was a matter of users trusting the software less if it finished tasks too fast.
dizekat@reddit
1: paywall, and 2: does it actually say they were slammed with people complaining their product was not shit enough? I told you several times that this kind of advanced enshittification requires massive A/B studies with "how much do you trust our product" questionnaires and the like.
northrupthebandgeek@reddit
Sorry. Doesn't seem to be the case for me, weirdly enough.
It doesn't say much of anything about how Intuit came to that decision, other than “To offset these feelings [of anxiety], we use a variety of design elements—content, animation, movement, etc.—to ensure our customers’ peace of mind that their returns are accurate and they are getting all the money they deserve.” That article's more about the prevalence of so-called “benevolent deception” in various websites than it is about trying to exactly quantify its effectiveness.
You must have me confused with someone else, because you only told me once :)
Which I'm sure Intuit has not only conducted, but continues to conduct every day. If they're like the other large-user-count websites for which I've worked, they're almost certainly using tools like LaunchDarkly that make that (relatively) trivial with per-user/per-session flipping of feature flags.
In any case, across the millions upon millions of people using TurboTax every year, it wouldn't be surprising if they got at least one survey response along the lines of “it's too fast; I don't think it caught everything”.
dizekat@reddit
> it wouldn't be surprising if they got at least one survey response along the lines of “it's too fast; I don't think it caught everything”.
Look, what I am saying is that the OP's story with users complaining to a "client" is simply not how it works. You don't get that sort of feedback without surveys and all the proper testing, and even then it is very easy to screw that up since most people don't respond to surveys, etc.
And you have to also measure the sentiment afterwards, to be sure that there isn't a large enough subgroup of users who perceive the delay negatively. And you have to have a quite large userbase to obtain statistically significant results. Which isn't something that you are going to see in a situation described in OP.
northrupthebandgeek@reddit
Those are all assumptions that may hold in one environment but might not in another. Maybe you don't get that sort of feedback, but that doesn't mean nobody does. Weirder things can, do, have, and will happen.
dizekat@reddit
Oh come on. Within a week the complaints stopped, lol. So they kept happening at a gradually reducing rate during the week?
Pure absolute unadulterated bullshit.
engineered_academic@reddit
Your post was blocked because you put a link that was in his bio. I unblocked you.
dizekat@reddit
Ahh thanks. Yeah didn’t think of the advertisement rule when copy pasting.
foofoo300@reddit
or maybe you are wrong, ever thought of that?
user has in his bio "Dev writing honestly about AI in production."
This is a story that has been posted over and over; it reeks of AI phrases and overall is just plain stupid engagement bait. But sure, he brought "something cool to the community".
By doing what exactly? Copy pasting a story and selling it as his own. Slow clap.
yojimbo_beta@reddit
Why are you leaving this up? I don't want to be a member of a subreddit filled with slop
teerre@reddit
Me neither, but read the edit
IndependentProject26@reddit
So glad “Honest AI dev” confirmed he’s not a bot
IndependentProject26@reddit
Lol now we’ve got mods telling us not to report engagement bait.
Any contenders for a replacement for this sub?
serial_crusher@reddit
The real Singularity is the point where we all stop caring whether a post is AI-generated or not. It's a good story and plenty of developers have had to do this sort of thing, even if OP didn't.
Even if OP is a bot, there's a nonzero chance that its user directed it to implement something like this...
campbellm@reddit
"Looks shopped, I can tell by the pixels".
Jazzy_Josh@reddit
You are meming but the end of the phrase "...and from seeing quite a few 'shops in my time" carries weight.
campbellm@reddit
And zero evidence.
PinkPanther909@reddit
I don't contribute a lot to this forum, I'll admit. But after the OP's non-English "I'm not interested in this forum anymore, go to hell" post -- they posted five more times less than an hour later:
https://old.reddit.com/r/ExperiencedDevs/comments/1se1g7m/added_fake_latency_to_a_200ms_api_because_users/oepww7v/
https://old.reddit.com/r/ExperiencedDevs/comments/1se1g7m/added_fake_latency_to_a_200ms_api_because_users/oepvnk8/
https://old.reddit.com/r/ExperiencedDevs/comments/1se1g7m/added_fake_latency_to_a_200ms_api_because_users/oepvozg/
https://old.reddit.com/r/ExperiencedDevs/comments/1se1g7m/added_fake_latency_to_a_200ms_api_because_users/oepwvlt/
I deeply apologize if I'm witch hunting, but all of this just smells funny. If I were a real person and using an LLM for translation, why not say so? Why go back on my word less than an hour later and post some quippy 1-2 sentence, "Hmm, interesting -- yes!" replies after all the communal upset?
I've archived this page just for posterity in case it gets blasted or the OP deletes their messages (which bots frequently do after the 'engagement' farming has dried up after a day or two) https://web.archive.org/web/20260407134709/https://old.reddit.com/r/ExperiencedDevs/comments/1se1g7m/added_fake_latency_to_a_200ms_api_because_users/
MathmoKiwi@reddit
This whole story is identical to ones that have been shared before. Of how introducing latency improved user experience.
Is it a coincidence? Perhaps.
Maybe.
DeepHorse@reddit
"llm-assisted translation" pretty heavily aided by it and of a fake/reposted story. I don't know how this post is still up and why this mod has trouble seeing this lol
Rymasq@reddit
this subreddit has a serious AI witch hunt problem and a lot of the people here seem to immediately downvote anything even moderately suggesting the benefits of AI.
FreeAsianBeer@reddit
Not one of the reporters, but OP's post history looks very much AI generated.
Lentil-Soup@reddit
Using AI to create posts doesn't make someone a bot.
SimpleMetricTon@reddit
Also not a reporter but I think this is important (looking at post history). Going by the post alone gives some tingles but makes me worried about false positives. Not everyone writes the same way and some people do sound like AI... or rather, AI sounds like them.
dbxp@reddit
Potentially they're using AI to help word things as English doesn't appear to be their first language but they don't appear to be purely AI
DigmonsDrill@reddit
"The account is entirely a bot, soup to nuts" is a much more specific accusation than "the account is a person who asks AI to write fake engagement posts."
alitayy@reddit
I hope you don't interpret this as disrespectful, but I think you're being very naive here. I'm not saying that OP's account and posts are AI-generated (though I am suspicious of it), but I do think it's quite clear that they're a bot of some sort. They could be a karma-farming bot, but it seems far more likely that they're some kind of astroturfing or sentiment-flooding bot. Look at the sheer number of posts they make a day, the vast majority seemingly pushing a similar anti-AI or anti-LLM sentiment with the occasional departure from that topic. I'm as anti-AI as the rest of them (well, I think it has its uses, but you know), but this is pretty clearly some kind of astroturfing. I'm sure people exist who have the time and energy to make dozens of Reddit posts and comments every day, all about the same thing, but I think you're being a bit naive and foolish to feel so staunchly confident that this account is clean. I for one am glad they're supposedly "done" with this sub. We don't need that here.
SimilarDisaster2617@reddit
Every single post is about a different subject. Many posts a day. Posts talk about things that happened "months ago" or "years ago". And every single post feels like it's creating a discussion around a big thought that takes time to develop and should only happen a few times a month, max.
Jazzy_Josh@reddit
They also submit posts asking for perspectives then never reply inside them. Engagement/model farming.
teerre@reddit
So you think the bot posted on "tipofmypenis" asking about some porn image? That's dedication
Spider_pig448@reddit
Again, no explanation? Just vibes?
Jazzy_Josh@reddit
Also not a reporter because I am just seeing this, but auto-generated username is always a huge red flag for me. Takes 10 seconds to come up with a real username if you need a throwaway.
wisconsinbrowntoen@reddit
The way it's written.
northrupthebandgeek@reddit
“You can tell it's AI because of the way it is.”
didntplaymysummercar@reddit
Someone with that JD Vance profile pic calling people bots is peak comedy 😂
HappyZombies@reddit
I didn’t report, but come on….It literally reads like a GPT post, the typical “language” signs are all over this post lol.
DeepHorse@reddit
i'm surprised this isn't blatantly obvious to everyone at this point...
Illustrious_Arm_6325@reddit
maybe the people responding seriously are just bots themselves...
northrupthebandgeek@reddit
Says the one with an autogenerated username.
didntplaymysummercar@reddit
This sub popped into my feed today. I'll now mute it. Thanks for keeping my feed clean of trash by showing me so soon you allow bullies and harassers to run your sub for you, jannie!
MooseBag@reddit
I think giving people the benefit of the doubt is a good and kind approach. Their comment here probably isn't LLM generated, but the original post definitely is. If you take a moment and look through their post history you can also see periods where there are bursts of comments being posted to multiple different subreddits within seconds of each other, and I can't really see how that would be consistent with normal human behavior.
To me this is looking like someone handed over access to their reddit account to some Openclaw instance and gave it free rein to post and comment wherever. I don't think it's intended to be malicious but it's low effort and it's not real. Whether that type of content is adding value to this subreddit is ultimately up to you, but I think a lot of people come here to engage with real humans.
AdreKiseque@reddit
The text of the post definitely feels like it was written by AI, but that doesn't mean OP isn't a real person.
aeroverra@reddit
I was on your side until I saw the AI detector
positivelymonkey@reddit
Edit 2 response makes me even more certain this was a complete bot post or highly automated farm. Why lash out and guilt trip like that? Total manipulation.
Either way, trash post.
ExperiencedDevs-ModTeam@reddit
Rule 2: No Disrespectful Language or Conduct
Don’t be a jerk. Act maturely. No racism, unnecessarily foul language, ad hominem charges, sexism - none of these are tolerated here. This includes posts that could be interpreted as trolling, such as complaining about DEI (Diversity) initiatives or people of a specific sex or background at your company.
Do not submit posts or comments that break, or promote breaking the Reddit Terms and Conditions or Content Policy or any other Reddit policy.
Violations = Warning, 7-Day Ban, Permanent Ban.
Shayla4Ever@reddit
I read a lot of Claude writing and this has Claude fingerprints allllll over it
byzantinian@reddit
OP's entire post history is about using Claude. If OP and the mod want to get pissy about them not being a bot, then fine, but OP didn't just use Claude as a translator; it's entirely AI-generated bullshit text.
KallistiOW@reddit
I engage with AI on a daily basis and have very high suspicion that OP's post is AI-generated
Cahnis@reddit
Sure, OP is not a bot, but he 100% generated his post using AI and posted it here. Might as well be a bot.
I'd prefer broken English to this linkedin-tier thread.
bighappy1970@reddit
Bunch of JERKS here, IMO. And it takes one to know one.
If there’s No human involved, I get it.
But you all are delusional if you think it’s reasonable to say if AI touches a post it should be removed.
I've always thought most of the folks in this sub were very, very far from experienced; now I feel like I have proof! Bunch of wankers!
bighappy1970@reddit
Shall I call you a whaaambulance? You poor thing! Having to sit there and pass judgment on someone because they feel AI does a better job at communicating their point, or is simply easier for them? How nice it must be to be so perfect. Jerk
annoying_cyclist@reddit
Maybe a different tack than "they're a bot": if you look through their history, you see a lot of submissions in the past few days that:
(that's an incomplete view, btw; they had some pretty popular ones in CSCQ over the weekend that were removed by moderators)
It is really easy to use them to generate engagement bait that takes advantage of existing tensions in a community for the poster's own purposes (accumulating karma, editing in ad links later, market research for what content works well on their blog, etc). To me, this isn't a positive contribution to the community. It's the reddit equivalent of posting a ragebait meme on Instagram, and seems against the spirit of rule 9 if nothing else. Them maybe being a bot or using an LLM is almost beside the point: it makes their ragebait harder to identify as that than it used to be, it lowers the bar for them to write it, but it doesn't change what they're doing.
Villhermus@reddit
I think he's not a bot, but he's using an LLM to write in english, his portuguese comments are quite human, but the more recent english posts have the typical bot quirks.
brobi-wan-kendoebi@reddit
Seriously?
Intrepidd@reddit
100% AI score on gptzero
Mad_Season9607@reddit
This is the way.
xmcqdpt2@reddit
People said xyz. The API was returning abc.
I did x and I did y.
The solution: something something
Has anyone else ...?
100% this is AI. Probably specifically Claude.
Graxwell@reddit
The content is entirely AI-generated. https://www.pangram.com/history/b76166ef-ee2c-492b-9683-8eef3b98bfb2
teerre@reddit
Right. But a) Why should I trust this website? and b) As discussed below, OP seems to not be an English-speaker, so it would make sense to use AI to translate their post
OpticCostMeMyAccount@reddit
Pangram is the gold standard for AI detection, with a conservative false-positive rate compared to products like GPTZero
Alkyen@reddit
That sounds like the gold standard in astrology. AI detection isn't a real thing at this point
OpticCostMeMyAccount@reddit
I didn’t realize astrology managed to have a false positive rate < 0.005, fascinating.
teerre@reddit
Do you have any sources on that?
OpticCostMeMyAccount@reddit
Here ya go! https://arxiv.org/pdf/2402.14873
kbielefe@reddit
FWIW, I have an AI skill for detecting and fixing prose that sounds like AI (one of those based on the wiki article). It thinks low chance. My guess would be click-baity with a little help from AI for language translation.
The skill's report if you care:
Detection Report
Domain: blog/social (conversational anecdote with personal reflection, typical of developer forums like Reddit or Hacker News)
Overall severity: LOW
Patterns found: 0 (no clear lexical patterns from the catalog; statistical signals are absent or minimal)
Findings
Statistical Signals
Summary
This text shows strong hallmarks of human authorship: conversational tone, varied rhythm, personal voice, and idiomatic expressions that feel authentic to a developer's reflective post. No high-priority AI patterns or statistical regularities were detected, making it highly unlikely to be AI-generated (estimated <10% probability). If it's edited AI output, any original tells have been thoroughly humanized already. No rewrites recommended unless for style preferences.
sweetno@reddit
You don't need AI to see it's AI authored. It's written in a magazine article style, not reddit pal style.
kbielefe@reddit
I'm not saying you do need AI, just that you should be able to articulate specific elements that point to it being AI generated, as opposed to a human writing in a magazine article style.
ScreenOk6928@reddit
OP recently made 12 unique comments across multiple subreddits in the span of 1 minute, in case you're still wondering whether or not they're a bot (or using a bot to post).
https://imgur.com/a/sibhmdO
Frank_White32@reddit
exactly what i came here to say.
please, this is clearly an llm generated post made to farm engagement.
turn-based-games@reddit
https://www.reddit.com/r/ExperiencedDevs/comments/1se1g7m/comment/oeout5l
tuvstarr18@reddit
OP is brazilian and so am I. Looking at his history, comments written in portuguese look legit and not AI generated.
eliquy@reddit
Either they are a bot, or they write exactly like a bot. In which case, do we really want to encourage human posts that mimic bot slop?
sweetno@reddit
It's that default pretentious click baiting journalistic prose style AI uses for some reason.
OpticCostMeMyAccount@reddit
Can the mods really not tell how obviously AI this is?
teerre@reddit
u/Ambitious-Garbage-73 reply here (just posting again because I'm not sure that edits actually ping users)
VeganBigMac@reddit
I'm not a reporter, but did have the "Oh, I'm reading AI" reaction.
This was the biggest tell for me. Just a few things, one being that AIs seem to love talking about "mental models". But also, "it broke their model" just isn't really a thing people say, but it follows a standard construction of speech, so it sounds logical.
That said, looks like OP is Brazilian, so maybe they are just dropping their Portuguese text into a model and saying "Translate this to English and make it sound like a reddit post" or something, and the model is just adding in llm-isms.
newEnglander17@reddit
funnily enough, my brother in marketing WOULD write exactly like that quote. I groan when I read his professional stuff.
TimMensch@reddit
I talk about mental models all the time. I don't know if I've ever said "broke their mental model," but it's not impossible that I could have said that.
I mean, LLMs say what they do because they're parroting text from people who talk that way.
Maybe this is AI, or maybe it's AI translation as you suggest. Don't know. But it is annoying that well-written text is accused of being AI.
I've even had one of my own comments accused of being AI. I use LLMs for code autocomplete, but I haven't ever needed or wanted to use AI for writing reddit comments. But a brief search of my comment history will find the same style of text dating back something like fifteen years? So it's annoying.
VeganBigMac@reddit
You misunderstood me and actually indirectly corrected it with more natural speech. I was saying that "broke their mental model" is more natural than "broke their model". It's a common construction to shorten phrases like that, but shortening "mental model" to just "model" is just not something you see in practice.
I'm sorry that people are accusing you of being an LLM I guess, but again, just thinking that it is "writing properly" is a misunderstanding of the tells of LLM generated text (in 2026). Instead, it's generally writing properly but making non-natural sentence constructions. The most famous one these days is probably the "It's not just X, it's Y", with Y being a hyperbolic, marketable rephrasing. Humans DO talk like that, but specifically people writing marketing copy or, like, generic blog posts. So when you see it in a reddit comment, you see a lack of understanding the social context.
I_pretend_2_know@reddit
Funny how all of the "it's fake" comments are variations of "because it feels so".
My take: OP is Brazilian, the user is probably Brazilian too and most 3rd World rich people are a disgusting mix of arrogance and ignorance. They complain just because they're in the mood to do so.
gastro_psychic@reddit
We really have some Reportin’ Robertos in this sub.
KingdomOfAngel@reddit
The entire post literally sounds like ChatGPT, how are you a mod and cannot see that??
royrese@reddit
I'll pile on since I just got here.
Cahnis@reddit
You can just read the text, it follows a formula. Information-frontloaded first paragraph, storytelling, engagement hook at the end.
rochismoextremo@reddit
Not a reporter but I also recall reading this exact same post in this sub already
Zeragamba@reddit
OP also doesn't look like they've responded at all in this thread either
Unfair-Sleep-3022@reddit
Look at their history -- they have another story that's literally the same solution
Not to mention it reads very LLM like
jasnah_@reddit
There was a post almost identical to this earlier that was about forcing a delay with a timeout on form submissions, worded way too similarly to be a coincidence
JuliusCeaserBoneHead@reddit
It follows the AI hook-and-engage story pattern
4dr14n31t0r@reddit
I am not a native speaker myself and I usually struggle to tell what feels ChatGPT-y and what feels human. However, when the title/content of the post seems too surreal, interesting and clickbaity, my spider sense activates.
Affectionate-Turn137@reddit
It reads like GPT, and he's not a native english speaker. The guy speaks Spanish in other comments... He's obviously farming engagement with these generic "dev war stories". I recognized his name from other engagement bait posts that follow a similar, soulless GPT style.
Tartiflan1@reddit
Less about fake latency and more about the gap between perceived work and displayed work. A flat 1.2s spinner corresponds to nothing, so if a curious user ever opens the network tab the trust problem gets worse than where it started.
What usually works better in that situation is tying every 200 to 400ms of the wait to an actual phase. "Parsing your question", "searching the knowledge base", "ranking results". Total still lands under a second and a half but each flicker is attached to something real. Users stop saying it feels made up because they can point at the step that did the looking, and you stop feeling like you're running a labor illusion.
The part I'd push back on is the self-flagellation. You didn't lie, you exposed an implicit contract your users had. Retrieval equals time, and you broke it by being too fast. No amount of correct backend work was going to close that gap on its own. Calling it "improved feedback" in the changelog is accurate. Actual lying would be inventing a result that doesn't exist.
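The phased-wait idea described above (tying each chunk of the wait to a named step) can be sketched roughly like this. The phase names and cutoff times are illustrative, not from any real product:

```javascript
// Map elapsed wait time to a human-readable phase so every flicker of
// the loading UI corresponds to a named step. Phases and cutoffs are
// illustrative; total still lands under 1.2s.
const PHASES = [
  { until: 300, label: "Parsing your question" },
  { until: 700, label: "Searching the knowledge base" },
  { until: 1200, label: "Ranking results" },
];

function phaseFor(elapsedMs) {
  // Return the label for the current phase, or null once all phases are done.
  for (const p of PHASES) {
    if (elapsedMs < p.until) return p.label;
  }
  return null;
}
```

A UI loop would call `phaseFor` on a timer and swap the message until the real response is ready to display.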
pseudophenakism@reddit
Hey! Congratulations! You just recreated one of the most famous consulting cases around. Houston in the 90's was experiencing extreme user dissatisfaction with long wait times in the baggage claim. What did they do? Streamline? No. Optimize? No. Make the walk from the gate to the baggage claim longer? Heck yes! And guess what, it raised customer satisfaction immediately.
Responsible_Sir_7423@reddit
The user perception problem is one of the most underappreciated things in product engineering. People don’t experience speed, they experience confidence. A 200ms response that feels instant gets questioned. A 400ms response that feels deliberate gets trusted.
The uncomfortable part is that this isn’t irrational. It’s a reasonable inference from a world where fast often does mean cheap.
Excellent-Basket-825@reddit
Well known effect. You are good
Ambitious-Garbage-73@reddit (OP)
You're so cute!
AnbuBees@reddit
This is a bot, and this purely anecdotal story has been passed around the online programming space for over a decade, even though I've never had this scenario occur in the real world. It's 2026; no one is complaining about a response being too fast anymore. Obvious karma-farming account.
throw-the-money-away@reddit
I did this in my job a few years ago. added a second delay because what we needed to show the user went on really fast.
Fast forward some years, and the thing is slow as hell. I got to the code with the delay I'd forgotten by then, removed it, and told everyone in the meeting I optimized the loading of that screen by a whole second
cutelittlecorgis@reddit
TurboTax did the same thing. Users complained that it seemed like it wasn't carefully calculating their tax return. So they added a screen with artificial loading bars. The funny thing about that screen is that you could just skip the loading bars by clicking "next."
CoreyTheGeek@reddit
Sometimes in my UIs I'll add a load time just cause I made a cool loading roller and I want people to see it 🤣
In all seriousness though, there's this idea of "weight" on a lot of products that give things a "premium, high quality feel" and this falls right into (my personal opinion) that where giving something a bit of delay makes it feel "weighty." Like opening a menu, if it just instantly pops up in the screen it's sharp, jarring, yuck. But if you fade it in with a slight slide up? Oooooo baby that's premium Apple stuff right there 😂
_atwork@reddit
Wasn’t this almost exact thread posted earlier today or yesterday, but it was about a form submission? What in the LLM content farm bullshit is this?
LikesAlgae@reddit
Also just step back for a second. If you work in a real company and you're a senior dev are you really fucking around with API response time yourself? I lead a team of 16 if you do this without letting the team or product or management know, you're on thin ice. You do not fuck with our API like a cowboy coder with no code or business review. 6x latency without any approval? Holy shit.
Children on cscareerquestions see this garbage and parrot it. Now we got LLMs cosplaying with this trope too.
Ambitious-Garbage-73@reddit (OP)
The form submission one you're thinking of was r/webdev I think? Similar problem yeah but different project. I posted there first and someone suggested I share it here since the ExperiencedDevs crowd might have dealt with it in enterprise contexts.
EmeraldHawk@reddit
OP has made 25 posts in the past 2 days, looks like all of them AI generated (this one sure is). All the software subreddits are overrun with this garbage.
Kaenguruu-Dev@reddit
Well at least his username is fitting
BaNyaaNyaa@reddit
This post reads like a LinkedIn post.
Ok-Kaleidoscope5627@reddit
Noo. This type of content is good. This is literally LLMs eating their own tail. We need more crappy bullshit content like this to slowly poison them.
samplekaudio@reddit
This is a common pattern in UI/UX design. So common that it has its own name: labor illusion.
It's weird, but it's more psychologically satisfying for many users to have the impression that your app is doing some kind of substantial work behind the scenes, and that impression can't be formed if the interaction is too fast.
Your experience is a perfect example. Many progress bars and loading screens are fake exactly for this reason.
allllusernamestaken@reddit
We A/B tested it and making the loading screen artificially longer increased funnel efficiency so now our product roadmap includes multivariant testing to find the optimal fake loading time.
Parking_Watch3157@reddit
We had a weird conversion rate drop on a search pathway once after I optimized it to be very fast. All testing and code review showed the results were identical.
We added a fake delay and the spinner back and conversion rate went up.
We started calling it the "look in the back" syndrome.
ritchie70@reddit
I have a spot in an app where all it’s doing is deleting a handful of files but it just felt deeply unsatisfying when I pushed the button so I tossed in a two second progress bar. Feels much better.
scodagama1@reddit
It also gives a nice buffer for future improvements and smooths out occasional latency spikes
Otherwise during periods of slowdowns or on bad Internet the app may feel sluggish
NoUniverseExists@reddit
Yeah!! I would also implement a random latency because if the users access it too many times, they might become too used to that fixed latency, and go nuts when it takes a few milliseconds longer. With the random latency (probably with a normal distribution around a fixed mean), the user might feel it is actually always doing something meaningful. And when it takes a little longer, they will not perceive it. (Obviously, it needs some careful adjustments and measurements in order to work as intended.)
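The jittered-latency idea above could be sketched like this. The 1200ms mean and 200ms spread are illustrative numbers, and the "normal" distribution is approximated by averaging uniform samples:

```javascript
// Draw the target display time from a roughly normal distribution
// (average of uniform samples, per the central limit theorem), clamped
// so it never falls below a floor or above a ceiling. Mean and spread
// values are illustrative.
function jitteredDelayMs(meanMs = 1200, spreadMs = 200) {
  let u = 0;
  for (let i = 0; i < 4; i++) u += Math.random();
  // (u/4 - 0.5) is centered on 0; scale it to roughly +/- spreadMs.
  const noise = (u / 4 - 0.5) * 2 * spreadMs;
  return Math.min(meanMs + spreadMs, Math.max(meanMs - spreadMs, meanMs + noise));
}
```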
samplekaudio@reddit
Yeah true. The better path for OP would probably be to update the UI after a minimum amount of time, so it's never faster than 1.2 seconds but doesn't arbitrarily add a full second of loading should the initial response take longer than normal.
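The non-stacking minimum display time described above is a small wrapper: wait for whichever finishes last, the real request or a floor timer, so a 200ms response shows at 1.2s but a 2s response adds no extra delay. The function name and 1200ms floor are illustrative:

```javascript
// Enforce a minimum display time without stacking delay on top of a
// slow response: resolve when BOTH the real request and the floor
// timer are done, returning the request's result.
function withMinimumDisplayTime(promise, minMs = 1200) {
  const floor = new Promise((resolve) => setTimeout(resolve, minMs));
  return Promise.all([promise, floor]).then(([result]) => result);
}
```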
klowny@reddit
If you've ever used tax filing software you'll see it all over the place. All the software is doing is a bit of math, which computers are very fast at, but you'll get a whole 30s long "double checking your numbers" transition screen.
samplekaudio@reddit
Yeah that's a great example and now I will be even more frustrated the next time I file taxes
alnyland@reddit
For some tools like that that I know what they’ve done, I want a “im a nerd, just do the stuff” button.
Maybe no animations, no fancy, just useful.
syklemil@reddit
Ah, the shibboleet. I know it well.
alnyland@reddit
Gosh, it’s been a while since I’ve seen an xkcd I hadn’t seen before.
Cheers
Unlikely_Eye_2112@reddit
Ages ago I had a wonderful time with an OS coded in assembly. Kolibri OS, I think. I ran it in VirtualBox and there weren't very many programs for it, but everything happened instantly.
It frustrates me to no end that we all have insane sci fi computers compared to the C64 I started on and we're still waiting for windows to open / close.
Poat540@reddit
Taxes take 2 ms all of a sudden
FoxyWheels@reddit
This is why, if I can, I stay in a shell. Vim doesn't give me BS. Command line tools give feedback as fast as possible.
adamfowl@reddit
Time for CLI tax prep tools! Now with pipelining!
thekwoka@reddit
And we'll render the TUI with react native web running in headless chrome and then rendering to ASCII for the terminal.
I bet we can even hit 20fps. We have the technology!
Master-Broccoli5737@reddit
this fucking enrages me.
pissfartt@reddit
h&r block has a precheck screen that i'm 99% sure is just a gif, because when I went to actually file, that's when all the errors came up.
lIllIlIIIlIIIIlIlIll@reddit
I noticed this years ago and I hate this so much. It's like the final cherry on top of a wasted Saturday. Spend another 30 seconds waiting on a Timer.Sleep() because people feel like they're not getting good value out of tax software that finishes in less than a second.
valleyman86@reddit
I feel like thats also just to make users think the software is worth it because whatever it’s doing is taking a while so it must be hard.
7HawksAnd@reddit
Facebooks “skeleton ui” was one of the first classic/modern examples of this.
bobdarobber@reddit
No? Skeleton UI is a technique to reduce how jarring a loading state is by constructing the layout of the page visually before it loads
7HawksAnd@reddit
You’re wrong. It was well documented that they introduced the skeleton UI effect because people didn’t believe their feed when they opened the site and it said your friend made this post 3 seconds ago.
They didn’t believe Facebook could be that real time.
I’ll get the article if it still exists. This was a big deal in the UX community when it came out somewhere in the early 20 teens.
bobdarobber@reddit
But the point is that it could’ve just been a spinner to achieve the same goal you’ve stated
7HawksAnd@reddit
My only point was that Facebook was one of the first to do it 🤷♂️
subma-fuckin-rine@reddit
also see: loading bars
cmpthepirate@reddit
Also in comparison web sites - the results are available pretty much instantly but they delay them to make them feel more 'trustworthy'
Pethron@reddit
And it probably bothers only programmers and engineers that spot this miles away 🤷♂️
The_North_Lord@reddit
interviews: "optimize this to run in O(log n)" - production: "cool now add a 1.2s delay so it doesn't look suspicious"
tmswfrk@reddit
Man, that explains a lot of what I remember of windows vista.
Keep_Being_Still@reddit
Another example is ATMs. They don't need to make any noise when collating the notes to spit out. But you will hear a whirring noise regardless.
coderstephen@reddit
I have a different theory as to why this happens: People are used to slow software. The world is filled with poorly-optimized software made with tight deadlines, and megacorp websites that must remain durable with hundreds of millions of users. Users have been trained to expect that "substantial work" takes a few seconds or more to process.
Software that does what it is supposed to do and loads instantly is not the norm, so their natural response is suspicion. In their mind, which is more likely: Your software performs better than every other app the user has used, or that your code has a bug? Their justifiable experience leads them to the latter. Their only mistake is committing the Prosecutor's Fallacy: Assuming that because this is often true in the majority case, that it must be true in your case as well.
Izkata@reddit
Maybe we can train them the other way over time with a sleep that's 10 seconds now and 0 seconds in a few years:
Math.max(0, (new Date(2030, 1, 1) - new Date()) / 120621854942 * 10)
coderstephen@reddit
Ha, not a bad idea.
Ambitious-Garbage-73@reddit (OP)
That's an interesting way to frame it. I think you're probably right that the baseline expectation is already shifted. The users I was dealing with were mostly non technical sales people who had been using a much slower legacy tool before ours so yeah their mental model of how long things should take was calibrated to something way slower.
Ambitious-Garbage-73@reddit (OP)
Yeah labor illusion is exactly what my PM ended up calling it too after the fact. I didn't know the term when I did it, I just knew the complaints stopped. The weird part is now I can't remove the delay because stakeholders saw the loading animation and decided it makes the product feel more premium.
max123246@reddit
Ugh are you kidding me. This is why every UI I use feels so much worse when ripgrep exists?
klowny@reddit
Hah, funny how you bring up fzf because for a while I kept thinking "shouldn't this be faster it's just a terminal application". Then I switched to skim and was like that's more like it.
The difference seems to be milliseconds, but I suppose that's noticeable when everything else in the terminal is faster than perception.
max123246@reddit
Haha fair, I think I switched to skim too just last week. Fzf just has more mindshare in my head
BadLuckProphet@reddit
Welcome to "normies ruin everything" where technical design goes out the window because of "feelings". In this episode we discuss "Terminals: The most efficient interface with a PC or a scary techy thing that requires memorization and prevents discoverability" You decide!
In a more serious tone, no. A bunch of UIs and their backends that you use were also programmed by interns who don't even know what ripgrep or fzf are.
Which is also a cyclical loop because people EXPECT software to be garbage so we make software LOOK like garbage to closer meet user expectations. Lol.
chargers949@reddit
Shitty devs exist too don’t forget those. I’m one of em!
Mortimer452@reddit
Just like literally any online travel booking website. "Searching for deals..." actually completes in 0.3 seconds but they gotta make it look like it's really searching hard
user0015@reddit
I learned something new. Had no idea there was a term for this.
While not recent: many moons ago, when I was a junior, I worked at a nice company that dealt with health insurance and various HIPAA regulations. One of the most "important" pages we had appeared after a user signed into their web portal: a large page to let them know we were securing their connection while handling their health information.
Our "secure connection request" was a 3 second spinwait in javascript to make it look like we were doing something.
tokyodingo@reddit
This reminds me of a story a professor told me. He was a former electrical engineer and when silicon transistor amplifiers came out they had to add weights to them because people thought the light ones were cheap.
kbielefe@reddit
I once fixed a legit bug that caused a certain request to take upwards of a minute. After my fix it returned nearly instantaneously. It was so unnerving I thought I was accidentally caching or something. I left that department shortly afterward, so I don't know if they decided to add fake work.
asdfopu@reddit
Why are you folks engaging on this AI slop post?
kex@reddit
Why are you? Every reply adds engagement.
asdfopu@reddit
It’s worse for their training if all responses call them out as AI slop instead of engaging with the topic
Ozymandias0023@reddit
Man, people are dumb
ducktap3-beats@reddit
the famous "act busy" from Singapore xD
Crowdfundingprojects@reddit
Yeah, totally normal, and in the beginning I hated it. Then I just learned that people are either stupid or simply don't have time, so they run their brains on fixed assumptions and expectations and rarely deviate from them. Therefore we have to satisfy them, no matter what, because otherwise they would be less happy.
In ecommerce I built a quick tool that instantly sends customers their download for a digital product if their message to support contains certain keywords and they didn't get their order because, for example, they misspelled their email address initially. All good, but then people complained: "Thanks, but it feels like I'm talking to AI." Alright then, I put 2 more emails in the queue with a total of 1 day of delay before the correct links are sent in a third: "Please be patient," and so on. Now in the second email I'm even upselling customers. Potential unlocked.
stumazzle@reddit
Not just in software. Back in the day Ford (and probably other OEMs) was getting complaints about inconsistent oil pressure. Oil pressure jumps around a bit based on engine speed/temp. Their solution was to put in a fake gauge that just took a signal from the dummy pressure switch, so if pressure was above the dummy-light threshold the gauge would sit perfectly in the middle and not move.
Ninjez07@reddit
I have been there, getting asked to add a loading delay to slow things down because the backend was too fast and we didn't get to see the fancy animation. I was indignant, but design was adamant.
I feel like it wouldn't take much to educate the user to expect super snappy response times from your super great system, and in the process raise the bar for your competitors to not seem like snails in comparison, but I guess leaning into the psychology of "the system takes time to do work and that's a good thing actually" does have its advantages too.
Dry_Hotel1100@reddit
I think this example loosely fits what’s called “labor illusion,” but the explanation here is a bit oversimplified.
The original idea isn’t really a UX pattern or "add fake loading screens", but that making effort visible can increase perceived value.
Also, adding artificial delay doesn’t actually make the result more trustworthy - it just changes perception. Depending on the user, it can even backfire if it feels fake or unnecessarily slow.
So while the behavior described is real, I’d be cautious about treating delay as a general solution rather than focusing on making real processing transparent.
spline_reticulator@reddit
....Reticulating splines...
fragglet@reddit
An infamous example is the HP-12C calculator which has become the "standard" calculator in the finance industry. They've been making them for the best part of 40 years and the modern ones are ARM CPUs emulating the original 80s chips.
The modern CPUs are capable of performing the calculations 20x faster than the originals but they still retain the same slow speed of the original chips because they found that users didn't "trust" the calculations if they ran faster. I think there's a special flag you can set to make it run at full speed.
Sensitive-Ear-3896@reddit
This is the actual reason Java created non-blocking IO
Dangerous_Stretch_67@reddit
There must be the opposite when too many loading bars on a sketchy website are presented: the labor disillusion. Or maybe the sunk cost fallacy, when they give you a long loading screen, and then try to make you pay at the end.
farmer_sausage@reddit
Labor illusion is also how I describe managements work. Illusion of work with actually nothing accomplished lol
sheeshboi12345@reddit
my team recently had a substantial win on a really high traffic surface also. added ~2s of latency with copy/design making it seem as though requests were processing and our algorithm was "working". try it out, you should be able to see it going from survey to products on linkedin
Ambitious-Garbage-73@reddit (OP)
the "working" in quotes is doing a lot of heavy lifting there. honestly though this is just good product engineering. the algorithm isn't wrong, the perception of it was
who_you_are@reddit
I remember we added some similar padding, though not exactly for that reason.
We basically made it so our queries (and loading screen) took the same fixed amount of time.
At first it was mostly our added delay; then, as queries naturally became slower, our added delay got smaller.
Users couldn't see a difference because the total was the same. That prevented complaints that our system had become slower.
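The padding scheme described above reduces to a fixed total-time budget: measure the real query and pad only the difference, so as the backend slows over time the padding shrinks and the perceived total never changes. The 1500ms budget is an illustrative number:

```javascript
// How much artificial delay to add so the total perceived time stays
// at a fixed budget. As the real query gets slower, padding shrinks
// toward zero; past the budget, nothing is added.
function paddingFor(actualQueryMs, totalBudgetMs = 1500) {
  return Math.max(0, totalBudgetMs - actualQueryMs);
}
```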
haragoshi@reddit
This is genius
robertshuxley@reddit
insert suffering from success meme
eldoran89@reddit
The difference between technically correct and good UX. It's frustrating in a technical sense, but from a UX perspective it just improves the experience, and that's good. It's no different from a loading bar that shows progress even though the process has no clue of the actual progress and just semi-randomly fills the bar.
IndependentProject26@reddit
Mod is doubling down on this bs so it’s going to be up to us to flood AI posts with insults
Upbeat-Lynx-3876@reddit
The labor illusion is real and it hurts my engineering soul every time. But users want to see the machine sweat. Fast feels like cheating. Slow feels thorough. It's backwards but it works.
zezer94118@reddit
I'm almost 100% sure that's what they do on TurboTax
Zeragamba@reddit
this is actually a pretty common problem. Airlines and travel planning apps have run into the same issue. If the results come back too quickly, it feels to users as if your application didn't actually do anything, whereas if it takes a second or two, the application did a lot of work to verify that you're getting the best deal.
SpiderHack@reddit
Canonical example of this is a two-floor elevator. People hated when it contained only a single "operate" button, and wanted two individual floor buttons instead.
Just do what the users want, regardless of whether it is "optimal" or not.
Zeragamba@reddit
People also like having a "door close" button to press, even if it actually does nothing more than light up
ConspicuousPineapple@reddit
This has more to do with the fact that elevators come with this button, and then the building decides if that button should do something or not. Nobody would give a shit if that button wasn't there in the first place. What are they gonna do, take the stairs instead? This hurts nobody.
In fact I've noticed the opposite of this phenomenon. This urban legend that these buttons don't usually work is so popular that people actually think it's true even when they test it. But when you actually measure things you notice these buttons work more often than not.
A confusing point is that they never work right away. There's a minimum delay before the door can close and that's probably playing into the myth.
MyUsrNameWasTaken@reddit
The button still works instantly and is necessary when in firefighter mode
uniquesnowflake8@reddit
People notably don’t feel this way for search engines, why do you think that’s the case
samplekaudio@reddit
That's an interesting observation. I'm just guessing, but I imagine it has to do with how people conceive of the type of work being done by the computer and relate it to human action.
Processing a file or generating something might be perceived as more substantial than presenting a list of results if the results are thought of as already existing, like Google is just presenting you with a list that already exists. The more substantial task should take longer, since a similar task would take longer for a human.
dizekat@reddit
I think it's probably that Google has done better studies with better methodology.
It is too easy to conduct a bad study - for example you can give people questionnaires where they can rate ease of use, speed, trustworthiness, etc etc and then do A/B testing and find out that the slower version has lower rating for speed but higher trustworthiness, which can happen for all sorts of different reasons. It's even easier to not even conduct any study and just go with the vibes.
Psychology is difficult, and majority of findings do not replicate.
Chapter_III@reddit
I also think that the user experience demands an “evidence of work performed.” Search engines can lean on this intrinsically because all of the search results shown serve as this proof themselves. If the result was only one match, I’m sure users would be much more skeptical.
Tiarnacru@reddit
It's a pretty well-known phenomenon of human psychology. People think it's too good to be true or faked. Seeing the site doing something as it "thinks" makes them trust it more. It's not logical, but humans aren't logical.
Ambitious-Garbage-73@reddit (OP)
The airline comparison is actually what convinced my PM it was fine. I pulled up kayak and showed him the fake searching animation they do and he was like ok yeah just do that.
ConspicuousPineapple@reddit
I mean, maybe? But let's not fool ourselves here, airlines and such do this because they can serve ads during the "loading" screen.
EternalBefuddlement@reddit
Deliberately adding friction for UX. We do it at my place, a lot of the checks are fairly instantaneous but if things are too quick then it comes across as fake, phony etc.
It's part of a common belief that computers need to think as we do.
Teh_Original@reddit
I wonder if we have been trained to believe by intuition that computers are 'slow'.
Ashken@reddit
I definitely think that’s a possibility. I think the notion that computers a slow by default can be because of that fact that there’s some generations of people that can still remember dial-up. And if that was their first interaction with the technology they may have been conditioned by it.
Rainbows4Blood@reddit
Dial Up. And offline users who used computers when every other interaction involved waiting and listening to your Hard Disk going clickediclick for a few seconds.
pinkycatcher@reddit
Many of our most impactful applications have historically been slow. Imagine loading a website, people are used to websites taking a while, they're used to Outlook taking a second to load e-mails, when searching for e-mails they're used to seeing e-mails pop up one by one.
wixie1016@reddit
Maybe people need to think faster
SkyPL@reddit
It's removing friction, not adding it. The too-fast response was the friction here. Most people just don't get it.
NiteShdw@reddit
I get what you're trying to say, but what you're describing isn't "friction". Friction is where the UX legitimately makes it more difficult for the user to accomplish the task they want.
This is more about "perception" than friction.
ACoderGirl@reddit
My tax software definitely does this, too. The kinda computations it should be dealing with will be basically instant to a human, yet it has an "optimize" button that takes a second or so. I don't think the button even does anything, since it's also displaying the current return on the sidebar in real time.
FatalCartilage@reddit
We can search the entire internet in a few hundred milliseconds but it should take seconds to add some numbers together? People have bad intuition :(
felixthecatmeow@reddit
As someone who obsessively compares flights before booking this is soooo annoying...
diablo1128@reddit
... and here I am thinking airline websites are too slow as the query shouldn't need to take this long.
xtreampb@reddit
Then you got me over here, “this is taking too long, I bet it’s timing out and not getting all the results.”
The_Big_Sad_69420@reddit
No omg I abhor the slow loading states of airline websites … they drive me crazy
mccirus@reddit
If it’s stupid but it works then it’s not stupid.
Ambitious-Garbage-73@reddit (OP)
my entire career summarized in one sentence
DoingItForEli@reddit
Add another 5 second delay, turn your "working" message into "thinking", and they'll think the system is cutting edge
Ambitious-Garbage-73@reddit (OP)
we actually considered adding a "verifying results" step that does absolutely nothing for 3 seconds. PM loved it. I still dont know if im proud or ashamed
axtran@reddit
This is a good opportunity for an aggressive throbber. Funny how many times I've had to do this...
dash_bro@reddit
Huh. I thought this was pretty common? UX design and making users feel like something is truly being computed. Usually we just add this at the front end, with a nice loading animation.
johnla@reddit
I had the same experience with one of the early chat bots I was implementing into the site. It responded so fast it was jarring; it felt like it wasn't "thinking" hard enough.
Added a fake loading/typing animation for a second or two so the user could take in what was going on, and suddenly the bot felt smarter.
It’s just user behavior and psychology.
rupayanc@reddit
ran into the same thing on an internal tool about 3 years ago. search results came back in under 100ms and QA kept filing bugs saying the feature wasn't working. we added a 400ms spinner and the bug reports stopped overnight. the uncomfortable part is that we accidentally trained users to equate speed with unreliability, and there's a whole body of UX research on "perceived performance" that says the same thing: trust requires ceremony.
captain_obvious_here@reddit
Long ago, I worked on a search engine. We built it pretty much from scratch, and while it wasn't Google, it was pretty good for our small team. And one of the features we were really proud of was how fast it could search through literally billions of documents.
The marketing team used to study users perception and stuff like that, so we could improve our product. And one of the leading "issues" people mentioned was "it didn't really search because it was too fast".
The front-end team handled it with a fucking `setTimeout`, and sure enough, our customers' usage and satisfaction went through the roof.
To the backend team, who had worked our asses off to make it all very fast, it felt like a gut punch lol
b_rodriguez@reddit
What was in the users screenshot?
Herrowgayboi@reddit
I'm glad to hear I'm not the only one perplexed by this... When I joined FAANG as a Sr SWE, one of my first projects was to rearchitect an old monolithic system. No one wanted to touch it because of how fragile the code was, but also because of how terribly slow it was: API calls could take up to 5 seconds (since there was a dependency tree).
After rearchitecting the system, we achieved nearly 350ms latency, and when we launched, we had a LOT of customer complaints... Not about the system, but about how unbelievably fast it had become, and it became a huge question of whether our responses were actually accurate. It got to the point where I almost questioned myself and tested all the new APIs against the old APIs, as well as new vs old flows. Same exact data. When discussing with UX, their guidance was to put in at least 1-3 seconds of loading time. I was absolutely shocked... When we ended up implementing it, we took the max of the time the API actually took and a randomly calculated 1~3 seconds.
To this day, I'm still perplexed by this decision, and it does bother me, because in engineering we're pushed to the limits in terms of building things with breakneck speed, but at the same time, too fast and it'll bother customers.
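For the curious, that "max of API time and a random floor" can be sketched in a few lines. Function names and defaults here are made up for illustration, not our actual code:

```javascript
// Rough sketch: the total wait is whichever is longer, the real API call
// or a randomized 1-3 second "thinking" window.

function randomFloorMs(minMs = 1000, maxMs = 3000) {
  return minMs + Math.random() * (maxMs - minMs);
}

async function withRandomFloor(apiCall, floorMs = randomFloorMs()) {
  const floor = new Promise((resolve) => setTimeout(resolve, floorMs));
  // Promise.all resolves once BOTH are done, so the user sees
  // max(actual latency, floorMs), never less.
  const [result] = await Promise.all([apiCall(), floor]);
  return result;
}
```

The nice property of `Promise.all` here is that a genuinely slow call never gets the floor stacked on top of it; the floor only matters when the call comes back "suspiciously" fast.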
KallistiTMP@reddit
See, now you just need to do a little market segmentation.
Make a lightning tier that only takes 400-600ms for twice the price.
And an advanced AI deep research tier that costs 3 times as much in order to achieve latencies as high as 6,000+ ms! With full MCP support!
netk@reddit
Michelangelo faced a similar situation when creating the statue of David.
Heavy-Report9931@reddit
this is literally what TurboTax is doing. People were skeptical anything was done because the API calls are fast, so they added waits for no reason
babige@reddit
Yeah you have rediscovered fire 😂
montdidier@reddit
This whole post is absurd.
Uneirose@reddit
This reminds me of the airport baggage problem.
People kept complaining about the long wait for baggage on a certain flight, even though it was objectively much faster than others.
They decided to move the arrival gate farther from the baggage claim, so the walk absorbed the wait. The complaints stopped.
I think the problem is that the expectation of "thinking time" is still there. Another potential solution would be to heavily play up that it works fast because of an algorithm.
"Thinking done in 200ms powered by Hype-fast thinking"
Or like google "summarizing x webpages in y seconds"
99Kira@reddit
ok bot
Responsible_You_1211@reddit
Can anyone help me upgrade my character and get cool weapons I’m new
fatstupidlazypoor@reddit
Years ago they put a steel bar in the handset portion of corded phones. 12 yr old me (circa 1989) was initially puzzled by this and then pleased with my rapidly coming to the correct conclusion as to why it existed.
Lothy_@reddit
Hmm… hmm… I think this might be slop.
bbshjjbv@reddit
Perceived performance is real
Aggressive_Ad_5454@reddit
Interesting observation. Uncanny valley from being too fast.
Write your code so the total response time is 1200ms. So, if something goes extra slow with your API and it takes 500ms, your extra delay will be 700ms, for example. That way you’ll reduce “too slow” complaints as well as “too fast” complaints.
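A minimal sketch of that padding logic, with hypothetical names and the 1200ms target as a placeholder:

```javascript
// Pad the total response time to a fixed target: if the real work took
// 500ms and the target is 1200ms, sleep the remaining 700ms; if the work
// overran the target, add nothing.

function remainingDelayMs(targetMs, elapsedMs) {
  return Math.max(0, targetMs - elapsedMs);
}

async function respondWithTarget(doWork, targetMs = 1200) {
  const start = Date.now();
  const result = await doWork();
  const padding = remainingDelayMs(targetMs, Date.now() - start);
  await new Promise((resolve) => setTimeout(resolve, padding));
  return result;
}
```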
Unfair-Sleep-3022@reddit
200ms is not "fast as hell" tbh
mofreek@reddit
I'm surprised this reply is so far down. I was thinking the same thing. We're talking milliseconds, right?
200ms is in the “acceptable, but could use improvement” range for an API call. And if it’s an API call, the “users” are developers. Is a developer going to say an API is too fast? No, they’re going to write tests that verify the call does what it’s supposed to.
I’m calling this out as an AI post simply because OP doesn’t know WTF they’re talking about.
Beneficial_Bread5348@reddit
What a nice but deeply unsettling problem. The solution actually adds some technical debt; I can see the thread pools being saturated already.
I think I would try to slowly make the user adapt to the quick API. How about introducing a "reverse exponential backoff" mechanism in which the API delay decreases over time based on the date. Today it would be 200ms, in 1 month 180ms and eventually drop to zero.
Confident-Alarm-6911@reddit
Legendary session bassist Leland Sklar added a switch to his bass that does nothing. He calls it the "producer switch": when a producer asks for a different sound, he flips the switch (making sure the producer can see) and carries on. He says this placebo has saved him a lot of grief; every engineer sometimes has to do the same.
Ambitious-Garbage-73@reddit (OP)
This is the best analogy in the entire thread and I'm stealing it. I basically installed a producer switch on an API endpoint. At least Leland Sklar gets paid session rates for his, I just got a slack message saying the app feels more premium now.
DamePants@reddit
Saving that little story away.
Slime0@reddit
Maybe this could be fixed by adding info like "searched 5000 records" or other relevant information that confirms work was actually done? If it's a search that's not finding anything then you just have to prove that it wasn't a situation where nothing was actually searched.
Ambitious-Garbage-73@reddit (OP)
We actually do show a count now after the delay finishes. Something like 'analyzed 12,847 records' which is real data. That helped too but the delay alone was what killed the complaints initially. The count was more of a nice to have.
ArtSpeaker@reddit
Why add it to the api and not the interface itself? There will be other uses for that API, no?
Ambitious-Garbage-73@reddit (OP)
Good question actually. The API serves the main web app and a mobile app. The mobile team wanted to handle their own loading states so we kept the API clean initially. But the web team's UX person wanted the delay baked in server side so they didn't have to fake it on every client. It's a bit messy honestly, we'll probably move it to the client eventually.
AlmightyLiam@reddit
Seems like they did add it to the UI. They mentioned adding a loading animation and stated the api still returns in 200ms
ArtSpeaker@reddit
Thanks!
AssistFinancial684@reddit
Increase the latency the more desperate they appear for added effect
Unique-Squirrel-464@reddit
Nice way to think outside the box, I had to do something similar in the past for a client app. They were non-technical and all of the competition apps were slower, I couldn’t talk them out of it so I just added a delay 🤷♂️
hustler-econ@reddit
The discomfort is misplaced , you didn't lie about the result, you adjusted the presentation to match the user's mental model. Surgeons scrub longer than strictly necessary when patients are watching. Same instinct.
Muhammadwaleed@reddit
you just realised how the world works...! thanks.
llima1987@reddit
IIRC, when HP upgraded the hardware for HP 12C, they had to do the same. People just didn't trust the calculator to come to the right result so fast.
brobi-wan-kendoebi@reddit
This post is AI, look at OP history. Bot account
ConspicuousPineapple@reddit
This has all the obvious signs too, how can people not see it?
eliquy@reddit
Could be the people not seeing it are bots as well. Could be. Where Reddit is going, we won't need eyes to see.
brobi-wan-kendoebi@reddit
Yep. Dead internet theory becoming real before our eyes. If anyone has any experienced-dev-type closed community groups, DM me. Growing sicker of Reddit by the day.
keithslater@reddit
Yeah I read almost this same story hours before this one was posted.
wiriux@reddit
He doesn’t even reply to his own posts as people normally do. Probably bot with some manual responses to make it seem real.
donniedarko5555@reddit
Speaking of AI, I had chatgpt look at a music file to try to tell me why I liked it and I had the same issue OPs users had even as someone who very much is familiar with signal analysis and knows how it works under the hood.
My brain short circuited to "you didn't even listen to it!"
So whether or not OP is a bot its a real UX consideration
Abadabadon@reddit
Not reading all that, clanker
SkittlesAreYum@reddit
I'm embarrassed at everyone else replying to as if it's a real post. Come on guys, this is obviously written by an LLM.
Great_Northern_Beans@reddit
I'll ask - what artifacts of LLM usage are there in this post? Because I read it and am embarrassed to admit that I totally thought it was a human.
Sure there's a hyphen buried in there, but I use them too, see: the first sentence that I wrote in this comment. So I don't see that as a real indicator of AI text generation.
Is it just the length of the post? Something to do with sentence structure?
Abadabadon@reddit
Ai written posts read more like a LinkedIn novella than they do a human.
It's like it's building a story. Problem, climax, resolution, moral of the story.
Then they build suspense throughout the entire thing as well; usually something like "it wasn't x. It was y", or a statement describing a problem, how I felt, then describing the problem even further - or maybe instead how we got started on our solution, as if we're now writing to a theme.
jt_redditor@reddit
for some reason this phrase is very common on ai texts: "not x, not y, just z"
royrese@reddit
The sentence structure thing is a big giveaway for me and it's because that rule is good writing that we are taught in school. In practice, unless you are a professional writer or do a first draft and then a revised copy of a reddit post, you will not be able to hit that mixed up cadence with perfect consistency across an entire 500 word post.
Normal people have some almost run-on sentences, some boring sentences, go on stupid tangents by accident, etc.
There are some phrases that set off massive alarm bells for me, too, like this sentence:
AI writes like that all the time and I don't know anybody who talks like that.
The other big thing that a lot of lazy AI posters will do is put up a ridiculously "hook"y title, then have that last question "does anybody else...?", "curious if anybody else...?" asking for engagement to the post. It's possible for a human writer to put the effort in to do this, but AI writers seem to ALWAYS do this.
SkittlesAreYum@reddit
Yes, it's sentence structure. I can't fully describe it because I'm an engineer and not an English major, but it just seems off. Too many paragraphs, most of them three sentences, and also a strange attempt to be both verbose and concise at once that seems grammatically correct, but few people write like that.
> The API call took 200ms. Measured it, verified it, fast as hell.
> I spent two days looking for bugs. Nothing. Results were correct, latency was fine.
> I added a minimum display time of 1.2s with a loading animation. API still ran and returned in 200ms. User sees 1.2 seconds of "working".
These are the big red flag sentences. Why are they structured like that? Did we need paragraph breaks after each one? I think they are also using incorrect grammar, which is nothing new from a human so you'd think it would look more human, but actually it just looks like a fake attempt to act human to me.
I also remembered I read that LLMs often use a "rule of three" when writing. Look how many of the paragraphs use three sentences, or the sentences have three quick topics.
Sir_Edmund_Bumblebee@reddit
Also the random audience engagement questions at the end are a hallmark of these AI posts. Humans ask questions too sometimes, but AI posts always end with a "What are your thoughts? Here's my engagement question to drive commenting?"
Great_Northern_Beans@reddit
Fascinating. I also use a "rule of two or three"(ish) while writing and break up paragraphs based on that. Not because there's any grammatical reason for doing so, but mostly just to limit the size of blocks while viewing on mobile.
I wonder if similarly structured posts get more engagement online (i.e. being more legible than walls of text) and models cling to that signal as a source of "good writing". Given the utterly massive training corpuses of social media data that get shoveled into these things, it probably wouldn't be surprising if they adopted some sort of "engagement seeking" behavior with their writing structure.
Anywho, thanks for the detailed response!
SkittlesAreYum@reddit
It's not a bad rule and I tend to follow it as well. I think it's the strictness with which the LLMs seem to follow it that gives it a smell.
Another example is this. I think the vast majority of people would write this as some variation of "The API call took 200 ms. I [or we] measured it and verified it. It was fast as hell". It keeps the three pieces but sounds more natural. The OP version is just wrong somehow.
nullbyte420@reddit
Bots love replying to these.
nullbyte420@reddit
Yep, obvious bot post with plenty of bot comments to go. Social media is doomed.
Aggravating_Yak_1170@reddit
Title: The Ontological Trap: Why Performance Without Perception Feels Like Deception
As engineers, we’re trained to optimize for every millisecond. So, when I launched a search feature that returned results in 200ms, I thought we had a winner. It was fast, clean, and verified. Then the feedback started coming in: “It doesn't feel right.” Users were convinced the system was "making things up." In their minds, searching a massive knowledge base is a heavy lift—it should take a second. Because the result was instant, they assumed the system hadn't actually processed anything. The "Technically Dishonest" Solution: I added a minimum display time of 1.2 seconds with a loading animation. The API still finished in 200ms, but the UI forced a "working" state for an extra second. I framed the update as "improved feedback during result loading." The Result: The complaints stopped within a week. In UX, this is known as The Labor Illusion. If people don't see the "work" being done, they don't trust the output. It reveals a profound truth: human trust isn't built on raw efficiency, but on the visibility of effort. It’s a strange feeling as a developer—realizing that the technically perfect solution is sometimes experientially broken. We aren't just building for machines; we’re building for human psychology. Have you ever had to "nerf" your performance to win user trust? #SoftwareEngineering #SystemDesign #UX #TechLead #PhilosophyOfTech
brobi-wan-kendoebi@reddit
Yep. Almost immediately recognizable. Check account history. Eat it, clanker
iwinulose@reddit
Now add pro/enterprise tiers with “priority access” to results. 4x the cost, half the ballast. And keep some of that latency budget for yourself—some day you’ll actually need to do something in that time.
OAKI-io@reddit
this is actually well documented in ux research. perceived quality tracks with perceived effort, and instant responses feel like shortcuts. you're not deceiving anyone, you're matching the mental model they already have. the discomfort is valid but you made the right call.
dyoh777@reddit
Add a cool animation spinner and always show it for .5 seconds otherwise no work has actually been done by the system haha
Ambitious-Garbage-73@reddit (OP)
We had almost the exact same situation but with a search feature. Results came back in under 100ms and users kept saying it felt broken. Added a 1.5 second skeleton loader with a subtle pulse animation and complaints dropped to zero overnight. The PM called it a UX improvement. I still think about that sometimes when people talk about optimizing response times.
ElGatoPanzon@reddit
I had a Telegram bot which would respond "too fast" because it felt like nobody could possibly write that fast. So for every message I added a minimum of 3-5 random seconds then gave it a WPM to calculate the fake lag up to 10 seconds of showing "typing.." in the status and split every single message on paragraph new line. It was a huge hit because it looked like someone was actually writing out those messages. Definitely a psychological thing.
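The WPM-based delay could be computed with something like this (names and rates are assumptions, not the bot's actual code):

```javascript
// Hypothetical sketch of a WPM-based typing delay: roughly the time a
// human would need to type the message, capped at 10 seconds.

function typingDelayMs(message, wpm = 50, capMs = 10000) {
  const words = message.trim().split(/\s+/).filter(Boolean).length;
  const typingMs = (words / wpm) * 60 * 1000;
  return Math.min(capMs, Math.round(typingMs));
}
```

You'd add a small random base on top and show "typing..." for the duration before sending each paragraph.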
Equal_Kale@reddit
This is nothing new. Very early in my career I worked on early public telephony switches. When the switches first went fully digital, we found that lots of times users making long-distance calls would dial, then abandon the call.
They expected to dial, put the phone to their ear, and hear the connection being made (old analog step gear would make some noise on the line). With a fully digital implementation you'd get silence, so people would just hang up early, as clearly they didn't feel it was working. It would then be tallied as an abandon in the call records, and you could see this as abnormal stats compared to what would be expected.
So we modified the software to insert some "white noise/working" noise/sound back to the end user while the call was being setup and then pull that sound off once the connection was established. (Yes I'm sorta old).
Whitchorence@reddit
You're absolutely right!
Few_Raisin_8981@reddit
1.2s to mine some sweet Bitcoin
calmighty@reddit
We did this 20 years ago for a search that users could not accept how fast it returned. We introduced a loading screen with an ad between the last step and results. Complaints stopped and we captured some ad revenue.
CheddarBiscuits10@reddit
don't write your posts using chatgpt
bighappy1970@reddit
Get over it
CheddarBiscuits10@reddit
no thanks
imp0steur@reddit
Why the hell not? It’s still English and readable.
CheddarBiscuits10@reddit
it's lame and sad that people can't express themselves without AI anymore. plus, it reads like an AI wrote it which, for a personal anecdote, is not a good thing. it also brings up the question of whether op is even a real person or a bot, which you can see from the stickied comment is a real question. that's why
ConspicuousPineapple@reddit
You might have touched up the punctuation but this is still very clearly written by AI.
randomInterest92@reddit
I've had the same experience. A b2b app that had a statistics dashboard. The backend was extremely unoptimized. For some customers it took 10s+ to load. I optimized it within a day to basically load instantly. Wasn't any rocket science. Mostly just reducing n+1 queries.
Anyway, after releasing it, customers complained that it doesn't "refresh" the data. I first didn't understand their issue at all. After having a call with one of the customers, I realized that the instant load was off-putting for them because they had gotten used to it loading for 10s+.
I then added a little 1s loading animation that just says "calculating stats". All complaints gone, lol
Doc_Mercury@reddit
That's just part of UX design, don't worry about it. The older trick is to add a full second of fake latency at rollout, then cut it down to a few hundred milliseconds when complaints about performance start coming in.
The optimal solution is the one that users like, not the one that's maximally efficient.
BigPeteB@reddit
When I was 7 and learning to use QBasic to make the computer do cool stuff, I worked out how to use all the ASCII box drawing characters to print menus like a lot of other DOS apps had.
I added some delays to make it draw slower, because in my kid brain, that meant the computer must be doing a lot of complicated work behind the scenes, which made my stuff look powerful.
germanheller@reddit
had almost the exact same thing with a document processing feature. users would upload a PDF and get results back in under a second. support tickets kept coming in saying "it didnt actually process my file". added a 2 second delay with a progress animation and the tickets stopped overnight.
the uncomfortable part is that you now have to maintain fake latency. some new dev will find it in 6 months and "optimize" it away, and the cycle starts again
ProbablyPuck@reddit
UX is not exact. Keep the changes for a few months and then release a "performance improvement" that takes it out again.
Empero6@reddit
oooo clever.
ProbablyPuck@reddit
Lol, sometimes user expectations are more impactful than true performance.
Intelligent_Thing_32@reddit
Cool AI post bro
HakX@reddit
ban this AI slop post
thekwoka@reddit
UX studies have covered this that if the thing feels like a lot of work, users want to feel like work was done, and work takes some time.
Under 400 is good for faster interactions, but real work happening should take a bit longer.
Imagine you have a big project and you hit save and it's instant. You'd be worried it didn't actually save.
TheRealMVP__@reddit
In my very first company we were told to add more dependencies to a mobile app because it looked ridiculously small (and fast) considering how much company wanted to charge for it and how important it was for the other company. On top of that, client was notified with a delay about the project being ready for the same reason. I don’t remember price range but I remember it was created by a student within a month. Ofc officially again there was a whole “team” behind it 😄
D-Alembert@reddit
Welcome to gamedev, where what players want and what players believe they want are two entirely separate things
EmmitSan@reddit
Well known phenomenon. It doesn’t actually take several seconds to get flight/hotel info, either, but travel sites wait that long because customers don’t believe the results if it’s “too fast”
happy_hawking@reddit
If it provides a better ux, it's the right thing to do.
I once read that heavy action needs to feel heavy, otherwise the user won't trust it. You got the actual proof that it is like this.
In situations like this I sometimes add a minimum loading time in the UI if the user needs to feel the weight of their actions. But I try to not do stuff like this on the API because usually API users should be technical enough to understand what's going on.
dashdanw@reddit
add it to the interface, not the API
XJ--0461@reddit
Learned about stuff like this years ago in college.
An example given was ATMs. They could be faster, but people don't trust it.
Professional_Hair550@reddit
Add the typing effect. Idk what you're doing, but I've been trying to reduce API response time since forever and min I have been able to achieve was 15 seconds.
xamott@reddit
I just want to say thanks for this post. Gave me a good chuckle.
cowardly-duck@reddit
It's a common pattern for some websites, like those finding cheaper deals (like the flight price comparison websites): the first seconds show only a few average deals, then suddenly they "discover" great new prices.
Tbh it's kinda effective for deals, but I'm really surprised for search, because with Google we're used to fast search (so that's the usual anchor). I made a little internal knowledge base search for my company and it's also a few hundred ms, and no one complains.
I don't know if "it doesn't feel right" without any specifics is something I'd accept as valid, but if it works for you and keeps the peace all good
craig1f@reddit
Understanding things like this is what UX is all about.
Adding a delay is fine. Then, in the future you can remove the delay if you need to to make a major update look more substantial.
AI does this same thing right now. You'll notice when you use Claude, or any AI really, the response doesn't come in all at once. It streams several words a second, and takes a while to show up on your screen. That delay is bullshit and is meant to make it feel like you're being talked to by an intelligence in real time.
Empanatacion@reddit
That delay is real with AI responses. It really hasn't formulated the end of the response when it starts streaming in the beginning. That's the reason you will sometimes see it stop halfway through a response, erase the in-progress message and then say that answering the question is a policy violation.
It is classifying its own response in real time to decide if it wants to pull the plug.
craig1f@reddit
Literally just asked claude. You're answering a different thing than I'm answering:
```
...
That said, there is a small UX consideration baked in: the rendering often has a slight character-by-character typing animation layered on top of the actual token arrival, which smooths out the visual cadence. So the raw delivery is real, but the precise visual pacing you see might be slightly polished.
...
```
That's the part I'm talking about. The character-by-character typing animation.
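A toy version of that smoothing layer, purely illustrative: tokens may arrive in bursts, but the UI reveals the buffered text a few characters per animation tick.

```javascript
// Yields the string shown on screen after each tick, growing by
// charsPerTick characters until the full text is revealed.
function* revealInChunks(text, charsPerTick = 3) {
  for (let i = 0; i < text.length; i += charsPerTick) {
    yield text.slice(0, i + charsPerTick);
  }
}
```

In a real UI each yielded frame would be rendered on a timer (e.g. via `requestAnimationFrame`), which is what produces the steady typewriter cadence regardless of how bursty the token stream is.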
brainhack3r@reddit
Serious answer ... do this.
Make this the "free" version of the API.
Then have a 'pro' version that's $$$ and remove the latency :)
fragglet@reddit
One idea you might want to consider: instead of setting a fixed, hard coded latency of 200ms, make the code look at today's date and calculate a latency value that decreases to zero over the next, say, 18 months. That will give the users time to gradually adapt to the new normal.
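A sketch of that date-based decay; the linear shape and 18-month horizon are just the suggestion above, and every name is hypothetical:

```javascript
// Linearly decay an artificial delay from its starting value to zero over
// a fixed horizon after rollout. Timestamps are in milliseconds.

const DAY_MS = 24 * 60 * 60 * 1000;

function decayedDelayMs(startDelayMs, rolloutTs, nowTs, horizonDays = 548) {
  const elapsedDays = (nowTs - rolloutTs) / DAY_MS;
  const fractionLeft = Math.max(0, 1 - elapsedDays / horizonDays);
  return Math.round(startDelayMs * fractionLeft);
}
```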
Ninja-Sneaky@reddit
It's like that story of a game tester for an FPS shooter who said a certain pistol was underpowered. The devs didn't touch the numbers but made the sound louder. The tester came back confirming it was fixed.
VictoryMotel@reddit
I wonder if this was actually "users" or a single nonsense person that is ballsy enough to complain about something they can't even explain.
campbellm@reddit
My company had a "chatbot" that was 100% deterministic based on the radio buttons/select boxes that the user chose, and we had to add delays and "simulated typing" for the same reason. Make it LOOK like a chatbot, that was "thinking".
Low_Entertainer2372@reddit
counter strike did this back in 1.6
Ettiquettes@reddit
I'm working at a startup, where they initially hired me for a frontend developer role. Then slowly I was asked to work on backend development, building middleware, deployment in the cloud. Later I became a fullstack developer there, and then they asked me to manage the development team. I'm still paid the junior frontend developer salary. I have built more than five web applications and two different backend middlewares, working with AI and embedded systems engineers. I'm about to complete my one year, and my manager is speaking behind my back. He suddenly changes the entire architecture and the flows, and he asked me to convert the React Native code into Dart to build a Flutter app. If something fails, he just points at the frontend and the development team. I have tried my best but it's not working out. Should I stay here or just leave?
corny_horse@reddit
lol I'd probably add a "turbo" button and let users who want it to be faster click on it when they search
anoppe@reddit
Ha cool. Now you have quite some margin to increase "user performance" by lowering the delay every now and then 😀
horizon_games@reddit
I almost always do a minimum 1 second for loading indicators, otherwise the flash is just distracting and no one believes it hit the server that fast.
Subject-Turnover-388@reddit
ai;dr
thewindjammer@reddit
Did I just read a post similar to this recently?
BigBadButterCat@reddit
I'm gonna disagree with some of the other commenters in here and say that I think this is a reasonable reaction by users. Suspecting bullshit on instantaneous results is a learned behavior, not ignorance from thinking computers need as long as humans to do math.
I've experienced too many shitty useless shallow search functions in software, so I completely get why people would find instant results suspicious. I don't have that same feeling when typing out a complex function on a calculator or whatever. It's not that I think the computer should always take its time, it's that good search specifically often does (or at least did in the past) take its time.
nucleardreamer@reddit
As a Design Engineer, this is not surprising in the least; I've done this many, many times. Sometimes UX is expectation setting, one way or another.
An opposite example - uploading assets. I will often start an upload immediately on selection, then have a dialog step right after (modal, confirmation, etc) that the user has to spend at least ~500ms reasoning and pressing go.
By the time they click the button, it looked like "it did it fast" because it was already uploaded or nearly done.
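A sketch of that eager-upload trick (the function names here are hypothetical, with the actual upload call injected so the shape is clear): kick off the network work the moment the user picks a file, and at "confirm" merely await the upload that has been running the whole time.

```javascript
function makeEagerUploader(uploadAsset) {
  let pending = null;
  return {
    // Fired on file selection, before any confirmation dialog is shown.
    onFileSelected(file) {
      pending = uploadAsset(file);
    },
    // Fired when the user presses go; usually resolves immediately,
    // because the ~500ms spent in the dialog covered the upload time.
    onConfirm() {
      return pending;
    },
  };
}
```

The delay the user spends reading the dialog is real think-time, not fake latency, which is what makes this the mirror image of the OP's fix.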
mpanase@reddit
If it takes less than 300ms on the UI, it's confusing. It's too fast. Slow it down.
Everything needs to take 300ms to 1s.
Very common UX pattern.
TheCharalampos@reddit
Was so frustrating when I was building an internal tool and my coworkers kept flagging the fake delays I added, saying they were useless.
They were fine programmers, but none of them understood that the minute those delays were removed I'd be getting 10-20 bug reports from users because "something was off".
Sensitive-Ear-3896@reddit
Just make sure it’s a non blocking delay, and say increased system scalability by 50% on your resume
Tricky_Tesla@reddit
Isn’t it possible to add a “fast button with laser icon” to give them the pretense?
GozerDestructor@reddit
I used to do genealogy as a hobby, about 15 years ago. That meant working with a lot of "people finder" web sites, which can tell you someone's age, where they've lived, other family members at the same address. This sort of search is usually free so that they can upsell you on the sort of dirt you'd want to gather on an enemy - bankruptcies, liens, divorces, etc.
And this pattern is everywhere in that industry. They'll make you wait about a minute for your search results, while watching a pretend realtime display - "Searching through millions of drivers licenses... searching property records in 200 countries... searching for criminal records..." Some of these sites will even require you to occasionally click something to keep the search going, or make the whole thing grind to a halt if you remove focus from that window.
I never had the illusion that any of this was real. The search probably took three or four seconds at most, the rest is a dog and pony show to convince you that it did a lot of hard work so that you'd be willing to pay $40 to "unlock the report".
toiletscrubber@reddit
that's pretty dumb of you to add fake latency
1One2Twenty2Two@reddit
I mean, could you have just told the client that the returned results were right?
Crafty_Independence@reddit
The client could probably verify that for themselves, but you still have to deal with users feeling that it's wrong in spite of the evidence
max123246@reddit
I would find some other way to solve this. Users are just used to slow searches, that's not their fault. We shouldn't subject them to slow searches in the name of UX...
Crafty_Independence@reddit
Lol. Tell that to all the FAANG companies that standardized this UX practice
max123246@reddit
): Like I bet you could just put a loading bar but display all the info immediately and people would still feel like work is being done. Idk, it just seems crazy to me to make a worse product when there must be some better workaround
Aggravating_Yak_1170@reddit
Here you go, for the ppl lurking here to make a LinkedIn post:
Title: The Ontological Trap: Why Performance Without Perception Feels Like Deception
Body: As engineers, we’re trained to optimize for every millisecond. So, when I launched a search feature that returned results in 200ms, I thought we had a winner. It was fast, clean, and verified.
Then the feedback started coming in: “It doesn't feel right.” Users were convinced the system was "making things up." In their minds, searching a massive knowledge base is a heavy lift—it should take a second. Because the result was instant, they assumed the system hadn't actually processed anything.
The "Technically Dishonest" Solution:
I added a minimum display time of 1.2 seconds with a loading animation. The API still finished in 200ms, but the UI forced a "working" state for an extra second. I framed the update as "improved feedback during result loading."
The Result: The complaints stopped within a week. In UX, this is known as The Labor Illusion. If people don't see the "work" being done, they don't trust the output. It reveals a profound truth: human trust isn't built on raw efficiency, but on the visibility of effort.
It’s a strange feeling as a developer—realizing that the technically perfect solution is sometimes experientially broken. We aren't just building for machines; we’re building for human psychology.
Have you ever had to "nerf" your performance to win user trust? #SoftwareEngineering #SystemDesign #UX #TechLead #PhilosophyOfTech
Gusatron@reddit
This is a clanker.
https://www.reddit.com/r/webdev/comments/1sdw0df/comment/oelfmj9/?context=3
Made a similar post about a spinner in r/webdev
pommi15@reddit
Question is: do you very, very slowly reduce the 1.2 seconds? Like 5ms every week? Then you'd have the real speed in about four years and users maybe won't notice.
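The arithmetic on that ramp-down actually checks out: with the thread's numbers (a 1200ms floor over a 200ms API, minus 5ms per elapsed week), shaving off the 1000ms of fake latency takes 200 weeks, just under four years. A back-of-envelope sketch:

```javascript
// Minimum display time as a function of time since launch. Starts at the
// 1200 ms floor and loses 5 ms per whole week elapsed, never going negative.
function minimumDisplayMs(launchDate, now = new Date()) {
  const MS_PER_WEEK = 7 * 24 * 60 * 60 * 1000;
  const weeks = Math.floor((now - launchDate) / MS_PER_WEEK);
  return Math.max(0, 1200 - 5 * weeks);
}
```

After 200 weeks the floor reaches the real 200ms latency, at which point the fake delay has quietly disappeared.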
polaroid_kidd@reddit
Years ago we had to add a fake "checking X" progress bar to all credit card sign up pages for the same reason.
It's absurd but it works.
SkittlesAreYum@reddit
This is the most obvious AI-written post that was ever written by AI. And half of you are engaging with it?!
Plus, this is the most boring topic ever. The idea of adding fake latency is older than Reddit times three. And still it's getting engagement?
FishWash@reddit
GPT ass post
gjionergqwebrlkbjg@reddit
It's funny how much people in this subreddit bitch about LLMs, but eat this shit up.
teerre@reddit
I understand this is not the actual topic, but I do find it funny how perspectives on "fast" can differ between developers
I have a hard time understanding how that's a real interaction. If the results are right, you can say it's really that fast, I'm that good, thank you very much. Let them file a ticket if they can prove there's something wrong
putin_my_ass@reddit
They will file a ticket even if they can't.
slatsandflaps@reddit
Years ago I was tasked with recreating a Flash app entirely in HTML and JavaScript because emerging markets needed access and the Flash app was too heavy to download on dialup and 2G networks. The result was an app that had the same functionality but was so much smaller and faster than its Flash counterpart that it was scrapped, because the team in charge of the Flash app was concerned it would make them look bad.
labab99@reddit
Fuck you clanker
Astec123@reddit
We had a very similar problem with users during UAT who were unhappy that something came back too quickly, and as people have pointed out this is a common UX issue with various names and fixes.
What we did to solve it was call the fix out in the change log and push an update into the test environment that artificially slowed down the application.
Once that went out to production we had another round of updates to make, and included within that was returning the app's behavior to how it was at the start, before they complained it wasn't working. We added a 'try the faster new experience' button that made a few minor visual tweaks but fundamentally just removed the throttling we had applied.
We asked for feedback about the improvements we'd made in search speed, and wouldn't you know it, saving a second on every search brought people on board.
TLDR: slow it down artificially, then frame the actual speed as an iterative improvement in the way the app works, and have users try it out compared to the 'slow' way of working.
iLikePeopleThatAre@reddit
I can totally see why the users felt that way. Have you ever pretended to think about something because answering fast would either make someone look stupid or make it seem like you were pulling it from your ass?
Best case, reduce the delay over time until you remove it.
DrShocker@reddit
I have never been known to hold back if I know the answer to a problem. 🤣
imo there's gotta be a way to do this that both makes people think it's working and doesn't waste my time.
DamePants@reddit
I'd totally believe this is a thing. I hate having to do any part of my job in a browser because it is always slower than running the query directly against the db or via some cli. As a backend dev, it's also easier than having to debug who broke our half-baked internal tools website.
Greedy_Bar6676@reddit
This reads like all other ai posts
cyborg_danky@reddit
I work in performance side of our main flagship app and this makes me very uncomfortable 😂
Outside-Storage-1523@reddit
Why? User wanted it, it’s part of the requirement, and you delivered. Best piece of engineering work /s.
truedima@reddit
https://en.wikipedia.org/wiki/Placebo_button
Exists in many customer facing fields. Like Sound Engineering/Mixing for instance.
secretL@reddit
This is so wild, when you break it down it makes some sort of sense but in the end it's still bizarre. We're slowing computers down because people don't understand how fast computers are?
swoed@reddit
Swap "Loading..." to "Thinking..." and now users are happy to wait for a better result
Kukaac@reddit
Electronics manufacturers do this as well. They add metal weights to the items, so it feels heavier, which is a sign of quality.
Alikont@reddit
A client once asked to add loading animation to website because instantly loaded website did not create impression of "a serious business".
Krukar@reddit
Shopify did the exact same thing a decade ago.
niveknyc@reddit
I've had to do this for interactive brand experiences, specifically a "product quiz" for a major supplement brand. The branded loading animations blipped by because the content loaded too fast, so I had to add artificial delays to make it feel like something important was actually happening behind the scenes. I felt the customers/users needed to believe there was some sort of magic computational nonsense going on while we processed the decision matrix to present them with product choices.