What am I missing with AI hype?
Posted by eyeoftheneedle1@reddit | AskUK | View on Reddit | 721 comments
For example, I get the fact it can help software engineers write and debug code more easily, etc
I get it can do basic admin tasks in Excel that would previously have required people to use formulas
I understand it's what people thought Google/search engines would become, where it gives you an answer as opposed to linking to results where you may find the answer
However it has a long way to go until we get to AGI or anywhere near.
Alexa/Siri/home assistants are crap on pretty much all devices. Chatbots, which have been around forever, are still very basic and circular
The doom mongering is all on the potential of what tech can do.
People thought we’d have flying transportation and all sorts..
What am I missing?
ExileNorth@reddit
I personally use LLMs (Gemini is my go-to) as personal tutors and researchers. I can ask it to teach me about a subject and it just does it. It's 100% patient, has no time constraints, has access to the entire internet to source information, can provide links to YouTube videos for tutorials etc.
I learned how to build complex data models in Excel with Power Query and M code purely from Gemini. I also used it to learn how to service my car and change the brake discs and pads from a complete novice starting point. Those are a couple of examples.
RecentTwo544@reddit
Can you give an example of a subject?
onionsareawful@reddit
Just about anything. Modern AIs are quite intelligent with reasoning turned on and access to the internet.
frogfoot420@reddit
I will get downvoted, but I’m getting this out there.
I think we should be concerned given how poor the UK's productivity is. People love to say AI can't do x and y, and whilst true, that's typically said from a place where productivity is high and people are highly specialised. This isn't the case for a good chunk of British business.
There’s many easy gains to be had, and it will just lead to eventual job cuts.
glasgowgeg@reddit
Highly specialised like knowing not to eat rocks, or count the number of r's in the word strawberry?
onionsareawful@reddit
highly specialised like finding thousands of zero-day vulnerabilities in critical software?
https://www.anthropic.com/glasswing
AirconGuyUK@reddit
It's been an uphill battle for me to prove to my company that we need to adopt AI workflows or die.
It took me recreating our product in a different set of frameworks and programming languages over 4 days for them to get it.
pajamakitten@reddit
That relies on British businesses actually using it in the areas where they are unproductive, though. We might be a low productivity country, however I do not think AI is the panacea a lot of companies need.
saltlampsandphotos@reddit
I see both points. I'm in a role where AI could do it, if policy and a legal framework changed. As it stands what I do must be assessed by a person.
However I see the numbers in terms of how unproductive we are. Obviously not sustainable.
But, I just worry. I can't see a world in which AI is handily embraced without upending the social fabric of our country, and the massive 3rd/4th order consequences it'll have on the economy.
It's the usual talking point. There can only be so many self-employed manual workers. What happens to those short, mid and long term who are surplus?
Because I don't see an automated robotic utopia where we have near-infinite resources and just spend our days walking and painting.
rohithimself@reddit
A scientist friend of mine thinks he has been able to find a new proof of a theorem by getting some inputs from AI. Things are coming... right now our brains are what's limiting it. Think of now as 1995 for the software industry. Wait four years and there will be a Google.
zephyrmox@reddit
I genuinely think that people do not understand the acceleration effect it has code wise. Things that used to take me a day can be done in 10mins.
As a solo dev I can spend my time working on the stuff that's actually important as opposed to building boilerplate scaffolding. My output of genuinely useful stuff has increased by a huge factor in the past few months.
Most people will only have used the free model in ChatGPT - the top end ones are really a different league.
greengrayclouds@reddit
Has your salary seen the same increase? If not, does this not mean things have become much more productive with much less need for employees?
TheOrchidsAreAlright@reddit
For jobs like software engineers, there will undoubtedly be a massive reduction in headcount at most companies.
ishallbecomeabat@reddit
Which is the purpose
Honey-Badger@reddit
Yeah, I think at the moment it's a lot of entry level roles we're seeing go. That being said, at my company a lot of them were already being outsourced to India anyway
AgeingChopper@reddit
it's coming for sure. it's going to make it incredibly hard for juniors to get in. at some point people will realise that AI can't ask the questions that need to be asked when extracting real requirements and meaning from a human, and they're going to need to ask a load of old heads back.
DrBiggusDickus@reddit
Yeah but that will hit a cliff... Senior Devs were all Junior Devs at some point. If you don't hire people to train then you will not have Senior Devs, who check the AI code. Companies whose business plan is that short-sighted will not survive long term without pivoting. I'm curious how this will play out.
AgeingChopper@reddit
Agreed.
mogrim@reddit
And where are the future "old heads" going to come from? I'm definitely in that category now, but at some point in the distant past I was a junior learning my trade.
AgeingChopper@reddit
exactly.
TheOrchidsAreAlright@reddit
I think the headcount required will be much lower in future anyway. It won't be zero, but it will be a small fraction of what it is now.
AgeingChopper@reddit
it's going to drop for sure. i feel lucky to have had that career when i did, i could have kept it for 5 more years into my sixties if i'd wanted (especially looking after legacy bases) and health allowed but it very much looks like opportunities will dry up.
I am glad my son didn't choose this route and went down the disease research scientist route honestly. he uses code tools to help analyse data; these tools just take away a part of the job that otherwise drains his time.
rossdrew@reddit
Until that code needs cleaning up. If I can be sure devs will do one thing, it's misuse tools
JeffSergeant@reddit
I don't know about that; most software companies have a massive backlog, and new features always mean MORE bugs, and more new feature requests. The good companies will keep people on and accelerate their delivery massively.
AirconGuyUK@reddit
Already happening.
BasedRedditor543@reddit
Yeah it’s a problem not really sure on the solution tho like people are anti ai because of it but like we can’t just artificially hold back inevitable technology advances just to keep people in otherwise obsolete jobs
zephyrmox@reddit
For the majority of software engineers doing non-novel things? Yes, absolutely.
Large companies will lag behind on this by years due to compliance and other fears - small agile developers will eat their lunch. You are going to see big players slowly get unseated by small scrappy startups willing to leverage AI fully.
GordonLivingstone@reddit
If you are writing code for (eg) flight management systems, self driving cars etc, you are still going to need all the requirements traceability, functional testing and proof of compliance with all relevant standards. That will need more than a sanity check on AI output. (I'm not a software developer but did work on engineering management in that kind of field. Getting the documentation right was a much bigger job than getting working code.)
AgeingChopper@reddit
a former colleague has been doing this. it's agile for sure. I was looking at some of the code last autumn.. what a mess that was but it didn't matter to him as it was all new and experimental.
zephyrmox@reddit
This stuff is moving very quickly - the quality of code now is a different league. But yes, it can be bad, and very messy at times - you have to give strong guidelines and really push it to optimise at times.
For a long time Claude for example had a tendency to hardcode values as fallbacks rather than failing - I now have a system prompt which instructs it NEVER to do this in my codebase.
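For anyone outside dev work, the failure mode being banned there looks something like this - a minimal Python sketch (the environment variable and fallback URL are invented for illustration):

```python
import os

# The banned pattern: silently falling back to a hardcoded value when
# config is missing, which hides the real problem until much later.
def get_api_url_bad():
    return os.environ.get("API_URL", "http://localhost:8000")  # silent fallback

# The preferred pattern: fail loudly so misconfiguration is caught early.
def get_api_url():
    url = os.environ.get("API_URL")
    if url is None:
        raise RuntimeError("API_URL is not set; refusing to guess a fallback")
    return url
```

The first version "works" in a demo and then quietly points production at the wrong host; the second fails at startup where the mistake is cheap to find.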
AgeingChopper@reddit
no doubt. i retired last autumn. it'll race away. it'll keep learning on improving data sets, no longer scraping stack overflow all the time lol.
he was very keen on Claude, saw it as the future.
me, i think i'd reached the point where i was too old for it honestly. i'd been doing it for 37 years and was looking at retiring in a couple of years, but my health brought that forward. I feel like this stuff is for a new generation. I'm happy to retire with my handcrafted preferences and, after something like 45 years since first learning to program, fully leave it behind. I'm enjoying a more analogue world now with my instruments and my old records.
AirconGuyUK@reddit
I think we're at the self improvement loop at this point. It's using its own code it writes as training data. If the user doesn't like it or alters it, it was bad training data. If the user accepted it as is, it's good training data. Now there's thousands upon thousands of developers helping to train the AI to replace them.
AgeingChopper@reddit
yes, that is a sad reality. true.
AirconGuyUK@reddit
Claude still loves an N+1 query lol.
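For non-devs following along, an "N+1 query" means fetching a list and then firing one extra query per item, instead of a single batched query. A minimal sketch with Python's built-in sqlite3 (the tables are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO books VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# N+1: one query for the authors, then one MORE query per author.
def titles_n_plus_1():
    out = {}
    for aid, name in conn.execute("SELECT id, name FROM authors"):
        rows = conn.execute("SELECT title FROM books WHERE author_id = ?", (aid,))
        out[name] = [t for (t,) in rows]
    return out

# Fixed: a single JOIN does the same work in one round trip.
def titles_joined():
    out = {}
    q = "SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id"
    for name, title in conn.execute(q):
        out.setdefault(name, []).append(title)
    return out
```

With two authors the difference is invisible; with ten thousand rows the first version issues ten thousand and one queries, which is why reviewers flag it.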
AirconGuyUK@reddit
The differentiator for SaaS will be marketing budget and support staff budget. The tech is now cheap, and almost anything can be copied with ease.
A scrappy start up offering bargain basement prices will likely be skimping on support staff, and that's noticeable to the consumer.
luckless666@reddit
A scrappy start up won’t necessarily offer lower prices; the successful ones will be much more agile at reacting to customer requests and shipping new features
AirconGuyUK@reddit
That's a good point.
I suppose all I am really seeing at the moment is a lot of attempts at undercutting.
luckless666@reddit
Yeah don’t disagree but don’t think they’ll necessarily (all) succeed. There’s always a market for premium services with solid customer support
Secure_Music_6062@reddit
This idea might feel like a good one, but where has this ever happened historically? Ultimately the large companies will win out, because that's just capitalism
zephyrmox@reddit
I genuinely believe that, in the right hands, there has not been a force multiplier this powerful since probably the industrial revolution.
luckless666@reddit
You got downvoted but I agree
Steppy20@reddit
You don't even need to leverage AI - just look at Neobanks like Monzo and Starling.
slade364@reddit
We're not all at huge risk of AI though. Software development is ironically ripe for displacement because it's literally what AI should be best at.
That's not to say all devs are at risk - the code still needs vetting etc - but you won't need a pre-Covid sized army of engineers for much longer. As a result, the very best people will be fine, and lots of people will probably find they're struggling to find work at the salary they're accustomed to.
DrBiggusDickus@reddit
This is an important point. I am more productive but all it means is that I'm busier being productive and it doesn't bring me more time. I find I am much more mentally fatigued using AI in my workflow. The reason is that you've got to review what the AI spits out, and it can spit out a lot of information. And the sycophant nature is a pain to subvert. It can all be done but it's draining. I use it primarily for business development in deep tech.
PersevereSwifterSkat@reddit
Yes, absolutely. It's not nice, but I'd rather people confront it head on than falsely claim AI isn't all that.
AshaNyx@reddit
I'm also assuming for solo development projects, pay is based more per project than actual hours.
rowanajmarshall@reddit
The limit on technology is not speed of coding, it's knowing what to build, and that hasn't been impacted much by AI, yet at least. Productivity needs you to make stuff people want.
FatherPaulStone@reddit
I've been hand weaving all my life, but despite the increase in productivity these looms haven't increased my salary at all, so now there is need for less employees, I'm concerned we are making ourselves redundant.
Bilb-@reddit
Although I agree, it's the same as faster CPUs meaning compiling can now take minutes rather than days/weeks/months - for good and bad.
Outrageous_Donut7681@reddit
Salaries have not increased in line with productivity for decades now, so why would that start now?
Muted-Marionberry328@reddit
Possibly but the truth is that with the massive increase in productivity, every company is going to work much faster and aim much higher.
I work in flood forecasting and I've immediately seen massive improvements in the way we do things. Obtaining live data to validate our models, automating data analysis on terabytes of data, spinning up ECS instances to perform small tasks, and modifying code that previously had hard-coded parameters to work on other projects in hours rather than months is literally a game changer.
A lot of work is simply 'joining up the dots' and AI can help tremendously with that.
Ballbag94@reddit
Not if you keep your productivity increase quiet and take the time back - then it's still the same number of people doing the same amount of work
All that matters is an employer is happy with the work produced
the-cock-slap-phenom@reddit
Exactly. OP just says "I get it", but I don't think they do get it. You can't just dismiss all of that with a single sentence saying "I get it".
Do they truly get the fact that this is potentially saving many, many man hours of a highly paid profession, and the benefits this could have for businesses?
And yet they don’t see the value of that? Because it’s not flashy enough with voice assistants?
Level-Courage6773@reddit
Mass-redundancies are exciting!! XD
worotan@reddit
No, because that doesn’t explain the massive societal effects claimed for ai. It’s one apparently successful application that is used to claim the certainty of vast extra changes throughout society.
It’s the self-driving cars hype again. We were meant to be all in self-driving cars by now, according to the last industry hype that tech bros aggressively told us all we were too dumb to comprehend like them.
Wd91@reddit
Software engineering is just one example.
Look at the usage of AI in the arts for another. Entire swathes of the graphic design industry (for example) either have been or will be made redundant in the near future as companies start using AI for mockups and so on. Even if they keep around one or two designers for final-touches, 80% of the concept work can be done by a machine in minutes.
ranchitomorado@reddit
Redditors are saying AI is rubbish, so it is, OK?
luckless666@reddit
Yeah, I use it to create mockups in a minute where I used to have to fiddle around with existing assets or get the art department to create something - massive time saver
slade364@reddit
Legislation is the barrier to self driving cars really, not the tech.
They're on European roads in low volume; I expect autonomous taxis will be commonplace within a decade, once legislators catch up.
TheOrchidsAreAlright@reddit
But a lot of code is being written with AI every day, and it's accelerating.
SugarpillCovers@reddit
There’s been a lot of reporting on how bad AI-generated code has caused a number of outages and security issues too. While I’m sure it’s a useful tool, you still need people checking the output for accuracy, as, just like when you Google or ask a chatbot something, half the time it’s fabricated or misquoted whatever evidence is being presented.
vinyljunkie1245@reddit
There was also the Amazon Kiro incident where the devs let Kiro manage an update. It decided not to update, but instead deleted everything and rebuilt it, causing a 13-hour AWS outage
https://www.digitaltrends.com/computing/ai-code-wreaked-havoc-with-amazon-outage-and-now-the-company-is-making-tight-rules/
This would probably have been avoided with human oversight but as companies push their employees towards using AI/LLMs it will likely happen again a few times
veodin@reddit
Issues like the ones being reported are avoidable with good development practices. A good development workflow, even without AI, should always have humans reviewing code before it enters a production system.
Slop will continue to be a problem, but if a large company releases bad AI written code that is a failure of the organisation and not AI. Human developers make similar mistakes every day.
SugarpillCovers@reddit
Sure, but many of those same companies are trying to have it both ways. They're pushing AI as being able to entirely replace certain jobs - I'm sure for some tasks it may be possible, but the vast majority will still need human oversight.
The main issue is that these companies are investing so much in a technology that the vast majority have little use for, outside of basic tasks. They want it to be the next smartphone, and it just isn't.
suiluhthrown78@reddit
Waymo works very well. How it gets rolled out to the rest of us is a regulatory issue - the technology already exists.
the-cock-slap-phenom@reddit
Mate, people are always going to overhype and make over the top claims about new things like this.
Anyone who believes stuff like that is just a sucker for marketing, everyone else is looking at the real world application of the technology.
And the bottom line is, this is having a profound effect within the tech industry, which you can’t just ignore because it’s not living up to what sales people are saying.
luckless666@reddit
On the overhype part - see the Gartner Hype Cycle. This is literally a thing for every single invention ever. We’re currently at the top of the peak for inflated expectations.
360Saturn@reddit
And go back ten years phones were meant to be obsolete because we'd all be wearing Google Glasses.
And go back five years and real life meetings were going to be obsolete because we could all hold meetings in the Metaverse in our VR helmets.
Oh wait, both those things were also hyped to death and crashed and burned never to be heard from again.
FrostyAd9064@reddit
TBF self-driving cars work and are used every day across major US cities as taxis. It isn't an issue with the tech, it's an issue with our willingness to adopt it.
capitanmanizade@reddit
I doubt OP works in a field where AI is applicable. It's really hard to not get why there is a hype around AI if you use it for work.
Exasperant@reddit
Just out of interest, where's the tipping point where we've AI'd so many jobs that businesses no longer have customers able to pay for their product?
bix_box@reddit
It's obvious the people who use AI to generate all their code at my company - because it usually sucks. Also harder to debug than if they wrote their own shitty code.
Asoxus@reddit
I don't understand this argument. I've been a developer for nearly 20 years now and the code my agent produces is always clear and on par with what I would expect from any other senior developer.
phugar@reddit
Language and context dependent. I'm working with complex SQL and heaps of Python scripts within data engineering.
Some of the Claude outputs are laughably bad, but juniors trust them because the scripts run. I've been busier fixing other people's (AI-output) mistakes than being productive.
For boilerplate it's great. For anything harder in my world, it's a disaster waiting to happen, and a frustrating experience getting Claude to admit something is broken or producing incorrect results.
ThrowawayOldCouch@reddit
I've been a developer for 15, and it generates garbage for me pretty consistently. I usually throw out its solutions or keep like 30% of whatever it generated for me.
slade364@reddit
Why do you keep using it then?
ThrowawayOldCouch@reddit
Work has started pushing it harder over the last year.
WheresWalldough@reddit
Yeah - I asked AI (Gemini & OpenAI) to build me a personal chatbot for my localhost. It suggested npm/node.js. Ok.
It built the code - basically two monolithic files (front end and back end).
It is pretty useful. But I wanted more features, and I could see it was unmaintainable with this approach, so I asked it to refactor, and it neatly split the backend into 26 JavaScript files, all sensibly named and structured.
In total, 150kB of code and I didn't write a single line. My effort spent was a couple of hours
Asoxus@reddit
Yeah, you need to know what you need out of a project and plan it out properly, ensuring proper structure and folder layout before writing a line of code - but the proper tools (Claude Code) can do all this for you too.
WheresWalldough@reddit
IDK about tools; I plugged GPT 5.4 into Visual Studio 2026 and it didn't really work that well - used a lot of tokens and didn't really give a great result.
I then went to GPT 5.4 mini on high directly via the API (so just the raw LLM), providing the entire codebase structure, asking which files it wanted, then asking it for the updated files, and it did a nice job implementing my features (e.g. it suggested an SQLite database for my chats with search, so I installed SQLite, then it built it - I asked it to write a delete function and it did that too).
Asoxus@reddit
Try Claude Code. It's built for coding unlike GPT.
WheresWalldough@reddit
yeah I should take a look - thing is I get 11 million daily GPT tokens so I'm not hugely keen on paying. But I guess there's a trial version, so....
Tom22174@reddit
I have a feeling there are a lot of people that just stop at this part and assume it is finished
Fun-Lake-956@reddit
I've not seen anyone mention a secondary (but arguably more important) effect of accelerated software dev, which is accelerated research.
For many domains, research essentially boils down to coming up with a new idea and implementing it in code (including AI research, of course). That cycle can be massively accelerated, and the knock-on impact of accelerated research is absolutely huge (and overwhelmingly positive!) for society.
h00dman@reddit
It's massively freed up my time to do useful work, too.
I use it to write a lot of my documentation these days, whether it's code comments or actual summary documents for analysis work.
I still have to read what it generates to check for mistakes, but I've always had to do that, so I still save hours every week.
172116@reddit
>It's massively freed up my time to do useful work, too.
I'm in a totally different field, but I recently used copilot to do the minutes for a workshop I hosted, and it was incredible - I'll need to do some finessing before circulation, but no more than I would need to if I'd had an inexperienced junior staff member there taking notes (which I wasn't going to get with our staffing position!), so it has basically freed up half a day of my time to do work that actually requires my involvement.
Appropriate-Falcon75@reddit
I would say that coding wise, it is like a junior developer who has read all the textbooks.
Steppy20@reddit
As a mid-level dev I also use it for identifying if there's a different way of doing what I'm trying to do. Sometimes the way it suggests is nice and neat, and sometimes it's a complete mess and changes the actual logic so you need to ignore it.
But it's really good at writing bulk unit tests for line coverage metrics. It won't get them 100% right but it'll be like 85% of the way there which massively speeds up my workflow.
It was only a couple of years ago where the Copilot I had access to couldn't even do that and I'd take longer fixing it than it saved due to complex business logic.
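The kind of coverage-oriented tests being described look roughly like this - a sketch where the function under test is made up for illustration, not anything from this thread:

```python
import unittest

# A made-up function standing in for real business logic.
def discount(price: float, is_member: bool) -> float:
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (0.9 if is_member else 1.0), 2)

# The sort of bulk, line-coverage tests an assistant drafts in seconds:
# every branch gets at least one case, edge cases included.
class TestDiscount(unittest.TestCase):
    def test_member_gets_ten_percent_off(self):
        self.assertEqual(discount(100.0, True), 90.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(100.0, False), 100.0)

    def test_zero_price(self):
        self.assertEqual(discount(0.0, True), 0.0)

    def test_negative_price_raises(self):
        with self.assertRaises(ValueError):
            discount(-1.0, False)
```

Mechanical to write, tedious for a human, and exactly the "85% of the way there" category: the scaffolding is right, but a reviewer still has to check the assertions match the actual business rules.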
slade364@reddit
As someone who's not a developer, I've used it for various Python scripts and VBA which have saved me a lot of time.
I imagine it'll continue to get better, which may be a concern for devs.
TheBestBigAl@reddit
It's like a junior developer who has read all the textbooks, and then smoked some meth.
It does all that you've described above at a tremendous speed, but also sometimes churns out nonsense.
jackboy900@reddit
That's the big thing. If you trust them as an authority they have a lot of issues, you can't let them loose, but they're very good at being as competent as a junior employee at a lot of things and that can improve the productivity of a competent user massively.
AirconGuyUK@reddit
I kid you not... Copilot is genuinely shit compared to the cutting edge (Claude Code)
Admirable-Ask-3017@reddit
If I read ChatGPT-produced documentation in a job, I will find out who used it and make a mental note never to rely on them for anything, as they can't even use their own brain
Worth_Gap4226@reddit
I'd also add that a lot of people fail to understand the differences between actual AI and LLMs.
JeffSergeant@reddit
LLMs are AI. AI is a broad field that encompasses 'expert systems' like the NHS symptom checker, domain specific algorithms like chess engines, and generative AI like LLMs.
TheTjalian@reddit
Yes, but not all AIs are LLMs, which is why the distinction needs to be made
There are people out there thinking that you can just cure cancer with a few LLM prompts
Fellstorm_1991@reddit
So we apply LLM structures to biological problems because ultimately they are just statistical tools. We create custom AI tools using similar models and train them on large data sets, usually DNA and single cell RNA sequencing datasets to model cell biology on computers to predict experimental outcomes, which we can validate with new approach methods in the lab.
It's a really exciting change, and if we can refine the AI models and the biological models, it could change everything with drug discovery and testing.
DrBiggusDickus@reddit
There's a company called Thrive Bioscience that's doing some neat stuff around that.
Fellstorm_1991@reddit
I've heard of them, they use high content imaging with some very cool microscope techniques. My group focuses on RNA sequencing as our readout.
AshaNyx@reddit
Theoretically if it was accurate enough you could still use LLMs to do some of the work with gene comparisons (given it's just comparing bases).
Fellstorm_1991@reddit
We have looked at that. Unfortunately biological systems are incredibly complex and it requires a bespoke solution, particularly when looking at perturbations to the in silico model.
It would be nice if we could just use existing systems retrained on biological data, as it would have saved a lot of time and effort!
AshaNyx@reddit
I would have expected it to do the very simple stuff, but for most of this it's like reinventing the wheel. Why use ChatGPT when you have things like BLAST?
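The "comparing bases" part really is plain string work, which is part of why a general chatbot is the wrong tool for it. A toy Python sketch of a per-base comparison (nothing like what BLAST actually does - no alignment, no gaps, no scoring matrices):

```python
# Toy per-base comparison of two equal-length DNA sequences.
# Real tools like BLAST handle alignment, gaps and scoring; this only
# illustrates that the basic operation is ordinary string handling.
def base_mismatches(seq_a: str, seq_b: str) -> list[int]:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

positions = base_mismatches("ACGTACGT", "ACCTACGA")
print(positions)  # → [2, 7]
```

Deterministic, exact, and free - whereas asking an LLM to "compare" the same strings gives a plausible-sounding answer with no guarantee of correctness.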
_Adam_M_@reddit
Almost.
https://news.unsw.edu.au/en/paul-is-using-ai-to-fight-his-dogs-incurable-cancer
TonyBlairsDildo@reddit
Pretty much, yeah.
TheTjalian@reddit
You make a convincing argument, u/TonyBlairsDildo
El_Spanberger@reddit
They might not be far off. Read about a guy who used ChatGPT + DNA seq to sequence his dog's cancer and develop a working therapeutic. No reason why the same thing wouldn't work on humans.
jeramyfromthefuture@reddit
AI is now a marketing term. It originally meant a system that learns and mimics human intelligence; what we have now is a librarian who knows all the books in the library but can't actually read a book in the sense of understanding, learning and developing from the act.
Modern "AI" is just fancy search with a tendency to give a different answer every time it's asked a question.
It can "create" code by pasting snippets from examples found on the web, removing the licence and regurgitating the code with no care - see the "isEven" examples, where the AI will use an example demonstrating recursion from a Java dev book instead of the standard one-line modulus check.
If you do any critical research on this stuff it all dies on its ass.
Really the question now is: how long are we going to waste our time before we learn that LLMs are not worthwhile for real AI development?
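The "isEven" contrast mentioned above, side by side in Python (the original examples were Java, but the point is identical):

```python
# The over-engineered pattern sometimes regurgitated from textbook
# examples: mutual recursion to test evenness (assumes n >= 0).
def is_even_recursive(n: int) -> bool:
    if n == 0:
        return True
    return is_odd_recursive(n - 1)

def is_odd_recursive(n: int) -> bool:
    if n == 0:
        return False
    return is_even_recursive(n - 1)

# The one-line modulus check any working developer would write.
def is_even(n: int) -> bool:
    return n % 2 == 0
```

Both give the same answer on small inputs; the recursive version is just a textbook demonstration of recursion misapplied to a problem with a one-line solution (and it blows the stack on large n).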
this_is_theone@reddit
> AI is now a marketing term , it original meant a system that learns and mimics human intelligence
Not really - we've been referring to NPCs in computer games or opponents in computer chess as AI for decades.
JeffSergeant@reddit
The muggles have learned the word now :)
JeffSergeant@reddit
LLMs are doing a lot more than just copying and pasting code. They can clearly solve novel problems in creative ways. If you really think they're only copying code and removing the copyright I guess you've never worked with them.
Worth_Gap4226@reddit
Yes, I didn't say they weren't AI - in the context of my response, it was about the lack of awareness of the nuanced differences between LLMs specifically and what a lot of people perceive as AI.
TheDawiWhisperer@reddit
the most "well ackshually" post in this entire thread
spinningdice@reddit
The term AI is so overused that it's become devoid of all meaning. It's the new 'Smart' device.
Poo_Poo_La_Foo@reddit
☝🏼☝🏼☝🏼this
Jaxxlack@reddit
Yeah, this! LLMs are the actual mini-brains for tool use... generative AI is Google but all sassy. Plus it does have this nasty "median" answering system.
74389654@reddit
there is nothing but anecdotal evidence for this. what has been measured is that people FEEL more productive
RecentTwo544@reddit
Could it replace you though?
I've seen plenty of people on here (on this sub, so not AI-hype-bro type subs) saying that like you, AI helps with their coding job, but they absolutely couldn't rely on it.
Then there's plenty of people I've seen on similar threads saying AI saves them a bit of time, but you need to check everything manually anyway because it cannot be trusted.
zephyrmox@reddit
You have to check stuff, sure. But I had to check stuff I wrote before. My code had bugs, this code has bugs. There's a lot you can do to make it a lot more reliable if you are willing (aggressive unit tests, and cross checking with another model - I use OpenAI Codex to cross check code that Claude writes, generally).
It is not 'a bit of time' - my output is at least 3x more than it used to be, probably often more, and I'm nowhere near the frontier using multiple agents, etc.
The risks depend a lot on what you are working on. If it's like, health care systems? yeah AI code without huge amounts of manual review isn't going to happen. If I'm building internal systems for reporting and the like (which a huge portion of software is) - I can run that risk more.
As with everything, there's a trade off - but if a dev is saying it's saving them zero time due to 'not trusting it' - my view is they are either doing something very wrong, are not willing to put in the work to optimise it, or work on something very 'critical' where bugs are NOT acceptable.
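The "cross-checking with another model" idea is the same discipline as checking one implementation against an independent reference. A minimal Python sketch (the median functions are invented stand-ins, not anything from the thread):

```python
import statistics

# The implementation under review (e.g. what one model wrote).
def median_fast(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

# A deliberately simple, independent "second opinion" (e.g. what a
# second model, or a trusted library, produces for the same task).
def median_reference(xs):
    return statistics.median(xs)

# Cross-check: the two must agree on a battery of inputs.
for sample in ([1, 3, 2], [4, 1, 3, 2], [7]):
    assert median_fast(sample) == median_reference(sample)
```

The point is that two independent routes to the same answer catch bugs that reviewing either one alone would miss - same logic whether the second route is another model or a reference library.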
Grimdotdotdot@reddit
I'm in the same field as you, and seeing the same sort of results. I don't have your long-term confidence, though. You say you're getting around 3x the output - that right there could be two of your colleagues that are no longer needed.
It depends on the priorities of the company.
Asoxus@reddit
It depends. If you just slog through tasks without thinking about them, fixing or changing one thing after another, sure, the AI tools could replace you.
If the tasks get a bit deeper or need proper planning or oversight, then they need their hands held by someone who knows what the output should look like - a dev.
MarmiteSoldier@reddit
Yeah, I literally get to work - instruct one agent to write me a series of scripts, another to write documentation, then another reviews the work. They all do that work in unison while I sit there twiddling my thumbs. Then I review the work, more often than not, I have to make edits but it’s pretty mind blowing.
I think there will come a time when the output will just be deemed good enough and companies will just save money and cut out the human-in-the-loop part.
lankymjc@reddit
Corridor Crew (special effects artists and movie makers on YouTube) made an AI to do the boring bits of green screen work for them, giving them more time to do the actual art of CGI.
This is how AI is supposed to be involved in art, not as the full replacement of artists that the big companies are pushing for.
stoneharry@reddit
It was a neural network, not generative AI - a bit different, but from the same roots. Neural networks have been around for decades.
51onions@reddit
Even before AI, I could do scaffolding with things like the Angular CLI, the dotnet CLI, or whatever your equivalent is. Boilerplate wasn't usually an issue with appropriate tooling.
I won't deny that the AI is extremely useful though. It will probably take my job some day, but for now it's pretty sweet.
zephyrmox@reddit
It's not even close to the same as frameworks.
Say I want to build a report with graphs, charts, and tables. The value I add is knowing how to pull that data, construct the calculations, and show what's valuable for the business. I can plan all of that, even hand-write some of the calcs if the AI doesn't translate my instructions into code properly.
But then I ask it to build the report. I give it a crudely drawn graph in Paint, I give examples. In minutes I have rich HTML/PDF/email output in the format I want. I have zero need to learn any bullshit graphing syntax (looking at you, ggplot/matplotlib), zero need to faff with fonts, renderers, etc. I just type and explain, iterating in minutes.
Building and iterating that stuff by hand would take half a day of tedious effort, maybe more. It's now minutes, and I can focus on something else while it churns in the background.
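For anyone curious what that actually looks like, here's a rough stdlib-only sketch of the kind of report-generation code the AI churns out. Everything in it (the function name, the sample sales figures) is made up for illustration:

```python
import html

def render_report(title, rows):
    """Render a minimal HTML report: a heading, an inline SVG bar chart, and a table.

    rows: list of (label, value) pairs, e.g. monthly sales figures.
    """
    max_val = max(v for _, v in rows) or 1  # avoid dividing by zero
    bar_w = 40
    bars = []
    for i, (label, value) in enumerate(rows):
        h = int(150 * value / max_val)  # scale bar height to the largest value
        bars.append(
            f'<rect x="{i * (bar_w + 10)}" y="{150 - h}" '
            f'width="{bar_w}" height="{h}" fill="steelblue">'
            f'<title>{html.escape(label)}: {value}</title></rect>'
        )
    table = "".join(
        f"<tr><td>{html.escape(l)}</td><td>{v}</td></tr>" for l, v in rows
    )
    return (
        f"<html><body><h1>{html.escape(title)}</h1>"
        f'<svg width="{len(rows) * (bar_w + 10)}" height="150">{"".join(bars)}</svg>'
        f"<table>{table}</table></body></html>"
    )

report = render_report("Q1 Sales", [("Jan", 120), ("Feb", 90), ("Mar", 150)])
print(len(report), "characters of report HTML")
```

The point isn't this specific code - it's that you describe the output you want and iterate, instead of memorising SVG or matplotlib incantations yourself.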
51onions@reddit
I agree with everything you've said, but I don't really consider all of that boilerplate. That's just application development and web design.
Boilerplate and scaffolding, at least to me, imply all the repetitive stuff like interfaces, component basics, start-up code, that sort of thing. And I could do most of that with competent tooling already.
Mccobsta@reddit
Yeah, to me this is the only use for it - and not for anyone who doesn't know how to audit its output.
TheDawiWhisperer@reddit
Yeah, this is true. I'd been banging my head against a wall on a problem for weeks; Copilot pointed me in the right direction and I had a script ready to go in a couple of hours.
barrumdumdum@reddit
Which AI do you use for coding? If you don't mind me asking
taflad@reddit
When ChatGPT hit the mainstream two years ago, I had it help me build a Works Order viewer for our production department using MS Access. It was a lot of manual work: I had to tell it the schema of the DB, have it try to work out what data was in each table, and then have it tell me step by step what to do in Access. Fast-forward a year and I was able to get ChatGPT to create the app in Python, with me checking it and running the separate modules, eventually getting it to help me put it all together.
Fast-forward to today: I have Claude Code in my IDE and an MCP server connected to the SQL server, and I tell it in plain English what I want. Using this method, I have developed and deployed:
A full ISO 13485 document management system with a web interface and SQL backend, with full metadata tracking, a Change Request system, alerts via Exchange 365, etc.
Reworked the Works Order app to be a one-stop shop for our production staff, all built in Python
A full service-scheduling app, tied into Freshdesk to pull information on tickets for assets, as well as allowing report creation, engineer task assignments, automated alerts when jobs are assigned, time logging, etc.
A full purchasing dashboard where we can see trends on quality issues, scrap and lost locations, high/low-performing suppliers, etc.
A financial reporting program using Python to generate reports for finance
A BOM comparison tool that compares a BOM in Autodesk Vault to the one in our MRP system, so we can automate moving a design from dev into production
...and there are about 10 other small-time apps, with about 20 more in development.
Each of these was WELL beyond my paltry coding skills (I can read Python fine but struggle with writing it - my brain just isn't logical enough).
I understand the frustrations with other tasks. I asked it to create a full guide on how to find, track, intercept and target ships in a game called Uboat, and the guff it output was terrible... BUT for coding, it is phenomenal. It's like having a £70k software developer on my shoulder 24/7.
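To give a flavour of the kind of logic behind something like that BOM comparison tool, here's a minimal sketch. The part numbers and the part-number-to-quantity representation are entirely hypothetical - the real tool obviously has to pull the BOMs out of Vault and the MRP system first:

```python
def diff_boms(design_bom, mrp_bom):
    """Compare two BOMs (part number -> quantity) and report discrepancies."""
    added = sorted(set(design_bom) - set(mrp_bom))    # in the design, missing from MRP
    removed = sorted(set(mrp_bom) - set(design_bom))  # in MRP, dropped from the design
    qty_changed = {
        part: (design_bom[part], mrp_bom[part])
        for part in set(design_bom) & set(mrp_bom)
        if design_bom[part] != mrp_bom[part]
    }
    return {"added": added, "removed": removed, "qty_changed": qty_changed}

# Hypothetical example: the design gained a capacitor, dropped a resistor,
# and a screw quantity changed.
vault = {"PCB-001": 1, "SCR-M3": 8, "CAP-10u": 4}
mrp = {"PCB-001": 1, "SCR-M3": 6, "RES-1k": 2}
print(diff_boms(vault, mrp))
# -> {'added': ['CAP-10u'], 'removed': ['RES-1k'], 'qty_changed': {'SCR-M3': (8, 6)}}
```

The core comparison is a few set operations; the part an AI assistant genuinely saves you is all the plumbing around it - database connections, file parsing, the UI.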
Asoxus@reddit
The thing is, where we could originally build all those systems, they would take a team a few months, or a solo dev a bit longer. Now Claude can smash out one of those tasks in a few hours and, after iterating a few times, go to market with full marketing strategies, SEO plans, the whole nine yards.
I see a lot of people watering it down, either because they don't know enough about the tooling, or for some other reason.
apple_kicks@reddit
Code-wise it's good at simple things. But you'll want to double-check how efficient the code is, and you'll spend more time debugging if something breaks and the AI can't fix it, because you won't know the code intimately.
Asoxus@reddit
This is what MCP servers are for.
AirconGuyUK@reddit
This is why it is important to create a strict set of coding standards/conventions before the project starts, and make use of tools which can help enforce that (linting with pre commit hooks). Also enforce writing tests before writing code, yadda yadda.
There's tons of guardrails you can put up to avoid shit code. Could a human write it better? With the right human almost certainly.
Could they write it better in 15 minutes? Absolutely not. And therein lies the magic.
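To make one of those guardrails concrete: a pre-commit config like the sketch below runs a linter and formatter on every commit, AI-written or not. This assumes a Python project using ruff - the pinned `rev` is illustrative, so swap in whatever your stack actually uses:

```yaml
# .pre-commit-config.yaml - every commit gets linted/formatted before it lands
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9  # illustrative pin; check the repo for the current tag
    hooks:
      - id: ruff          # lint, fail the commit on violations
      - id: ruff-format   # enforce consistent formatting
```

Run `pre-commit install` once and the hooks fire automatically, so an agent's sloppy output gets rejected at commit time rather than in review.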
Apsalar28@reddit
I agree with you ref software development. Claude is fantastic when appropriately supervised and prompted.
Outside of work, though, I so do not need an 'AI-powered washing machine with supporting app' that sends a notification to my phone when the cycle is complete and uses 'AI' to adjust spin speeds to the size of the load - not for £1,000, when I can get one for £300 that does all the same things apart from the app and just makes a loud bleep when it's finished.
Now if it came with a robot that would do the loading, sorting, hanging out, ironing, locating of the missing sock and putting away I'd be tempted.
Asoxus@reddit
That's what these LLMs like Claude are enabling though. Imagine how quickly it is advancing, and how that can be applied to products like the Neo robot or the SpaceX one. Development of those will come on massively the more AI agents are able to iterate on their coding.
Now if we could just get full self drive approved in the UK, the robots might have a chance...
DootLord@reddit
I wish I could do this. There's a line between vibe-coding crap and agent-assisted coding that gets you what you want faster. We need things far too consistent at my place, so for the sake of slightly better readability we're taking many times longer to get stuff out the door.
It's exhausting to hear and then get reprimanded on how we're not outputting enough...
Empty_Allocution@reddit
Just commented something similar. It is the best use I've found for it.
spoo4brains@reddit
But on the other hand, the Linux project is being swamped by people submitting AI slop, and real people have to wade through that shite to work out what is real and what is slop. It's a big threat to the ecosystem, with volunteers getting burnt out.
zephyrmox@reddit
I do not disagree at all. It generates vast amounts of garbage. I absolutely hate reading AI generated text which is all over reddit and twitter, and morons point it at repos and say 'find problems' and it ends up submitting idiotic PRs.
I don't really know how we counter that. I guess I just hope we find a balance one day.
skronk61@reddit
Yeah but not everyone codes. And is it really efficient enough to be worth it? It seems to use a disproportionate amount of energy to give you a shortcut of a few hours.
If they stripped it back to being a coding database helper surely it would be better for everyone involved.
Cubewood@reddit
Give Claude CoWork a try. The problem is that on the 20-quid-a-month sub you get rate-limited very quickly, but it can actually control your computer: with the Chrome plugin it controls your browser, boots up a VM to quickly execute some code, and uses all the applications running on your PC. This could technically already automate a lot of office-based non-coding work; adoption rates are just not there yet.
skronk61@reddit
I’m not letting that kind of software loose on my personal computer. AI companies have shown total lack of respect for data security.
Cubewood@reddit
You are asking how this could benefit anyone outside of coding. I am providing an example of how it could.
Wisegoat@reddit
This is likely an AI-run bot account. They're rampant on places like r/technology, where they heavily exaggerate AI capabilities and have a hidden profile.
zephyrmox@reddit
lol, I have been on Reddit for 10 years. I am not a bot. I hate AI slop on Reddit and aggressively report AI content on this sub, which shitloads of people don't even realise is garbage.
reuben_iv@reddit
It's empowering people with zero coding experience to build their own sites and apps, much like WordPress did. They're completely stuck if it doesn't work, as they have no idea what it's doing or trying to do, but it's definitely a multiplier when it does work.
AirconGuyUK@reddit
I am someone who knows a little bit about coding, and a lot about tech. AI is a total force multiplier. I (for the most part) know bad code when I see it, I know when a shit framework or library is being suggested, I know what kind of security needs to be in place on S3 buckets, or WAFs, secret managers, etc. I know a little about a lot, but not enough to make something of worth on my own.
Until Claude Code, that is..
I'm basically a one-man dev team now. I've launched my first solo project (it has customers, albeit not paying, and I'm going to have to shelve it), and I'm currently working on my next project.
zephyrmox@reddit
Absolutely - there is no end of garbage from it - but for people who know what they are doing, it is a force multiplier. A good dev embracing AI can literally 10x their effective output.
Steppy20@reddit
Like most advanced tools: in the right hands it helps significantly and in the wrong hands it is just as likely to hinder as help
AgeingChopper@reddit
Useful for boilerplate, letting you get on with problem solving and delivering solutions.
That was the one area I was happy to use it in prior to retiring as a dev last autumn. Otherwise I really didn't have any time for it.
Screwballbraine@reddit
Yeah, but it's been used for that for like 15 years. The hype is literally because these companies have zero chill and too much money.
zephyrmox@reddit
It has materially changed though. You can build entire applications without even writing a line of code. I built an android app that does a small but niche task for me without knowing a single line of Java/Kotlin. I just explained what I wanted, installed the development environment, and drew some mockups of the interface I wanted.
That was not something you could do 12 months ago.
liminalbrit@reddit
I've got no empirical evidence but I do believe there have been n-months problems that Claude turned into n-weeks, and n-weeks into n-days.
zephyrmox@reddit
I built a web dashboard for managing our internal reports, database etc last year by hand. It probably took me a week to build a scrappy version that worked fine, but was nothing remarkable.
I re-engineered it in a day with Claude code into something that is in every objective way, miles better, and I barely wrote a line of code.
GourangaPlusPlus@reddit
For anyone really interested, Emmz Rendle did a brilliant talk at NDC London this year:
https://youtu.be/pey9u_ANXZM?si=sk3LZpV4GdUtUHNk
Quelly0@reddit
The Google search isn't even good though. I get incorrect answers constantly, that aren't borne out by the links it references, but you read it first so it's dangerously putting its ideas into your head. I worry that I'll later remember the incorrect things instead of the correct ones.
I've been wishing there was a way to turn off the Google AI overview answer. I tried searching solutions when it first came out but didn't find anything decent. Your post has prompted me to look again this evening, and this time I found some good solutions and have implemented one on my phone that I think I will finally be happy with. So thank you!
quartersessions@reddit
The most noticeable outcome for people has been the endless proliferation of slop. Low quality writing, low quality research and low quality image-generation.
From the beginning of pop-up ads (actually, probably automated phone diallers), it seems like we're in a sort of arms race of having rubbish forced at us at a scale humans can't replicate - and having to find ways to detect and resist it. I've heard from people in recruitment, for example, that AI job applications have meant they receive endless, pointless, useless applications for every role - and now talk about using AI to read them and filter them out.
I genuinely hope the uses in programming and automation aren't overhyped.
bow_down_whelp@reddit
It's a chicken and egg thing. Companies are already using ai to filter candidates, so rather than writing 100 applications, you get AI to do it
WheresWalldough@reddit
yeah the slop bothers me.
I am a member of a specialist WhatsApp group, and the guy that runs it, who is Norwegian, now posts exclusively in AI-written prose. I refuse to read it - you can spot it instantly and I just zone out.
MangoonianLord@reddit
It's useful as an additional tool for doing tasks - it's not a replacement no matter how hard some companies are pushing this.
My company has pushed it out and made people redundant while making others pick up the work with AI "streamlining", but it does not replace the people made redundant for it.
bow_down_whelp@reddit
Is the output evidenced though? Can you turn around, point a finger, and say "this is where we are failing because of redundancies"? I think that's the bit most companies haven't hit yet.
MangoonianLord@reddit
Staff have pointed to the difficulties caused by redundancies but management have fingers in their ears.
bow_down_whelp@reddit
Aye. It needs to go in a formal report, brought to the board and so on. The difficulties can't come in the form of "this is making my job shit"; it would have to be evidenced loss.
Foggyslaps@reddit
That will come in time. Legal contracts written by AI, advertising deals, inconsistencies, coding that needs fixing and breaks 400 other things etc, I think it's inevitable
I wouldn't be surprised if there's a period coming of AI regret
bow_down_whelp@reddit
How do you sue an AI when the legal contract is wrong?
Foggyslaps@reddit
That's the thing isn't it, this is all such new ground that even that isn't clear
For me it's clear that if a company uses AI and a client sues, the company has signed off and approved the contract, so the blame lies there.
Then I imagine the company would sue the AI company for damages, and would probably win - otherwise AI companies become immune to prosecution.
bow_down_whelp@reddit
It'll not work like that at all. Someone has to oversee and sign off the work as correct, and the person who signs it off when it's wrong will be on the chopping block. It's the same in all fields; some culpability matters more than others. The AI creators will not be at fault - there will be a "use at your own risk" label.
AI is already quoting sources that don't exist in university assignments. It cannot do the work; it can only assist.
Underclasscoder@reddit
It's the story across businesses small and large.
Mine is currently setting targets for automation, each staff member needs to find 3 automation opportunities per quarter to present to our "AI and Automations" expert. He then rolls these changes out and reports back on how much he's saved the business. We no longer hire staff, if you need help you speak with the AI and Automations expert to help free time up in your day to day.
Being part of the internal Ops team, I know what management have in store for Q3 and Q4: we move from a hiring freeze to active staff reduction. They intend to reduce headcount by 10% per quarter. Keep in mind the business makes a 51% profit margin and it's still not enough... they will destroy people's incomes so they can gain a little more profit.
RecentTwo544@reddit
That business is going to fail, kaput, bankrupt.
I've seen it a few times already on here - people saying their company did what yours is doing and it went badly wrong.
slade364@reddit
Not something you can say with any certainty.
Most companies have a Pareto breakdown in their employees - 20% of the people add 80% of the value.
So, realistically, some employees are probably replaceable.
Asoxus@reddit
Why? If handled properly, AI and Automations can handle SEO, Marketing, Customer service, to name a few roles that would have been made redundant there.
SilverstoneMonzaSpa@reddit
My company has not hired replacements at admin level due to LLMs and honestly, I've not noticed the difference. We used to have six admin staff who would take notes in meetings, write things up, etc... general admin tasks.
Since two have moved on as LLMs became more widely adopted, the three remaining easily cover all the work with the assistance of LLMs, to the point that the quality is higher (better notes, nothing missed, etc.), and I'm pretty sure it'll get to the point where one or possibly two can cover what five or six would have in 2022.
It has already replaced two people's roles in my tiny consultancy.
apple_kicks@reddit
It creates new jobs verifying the AI didn't hallucinate enough to be an embarrassment or a problem. It depends how much companies will cut wages for those jobs versus people doing the work manually. IMO it's not much of a threat, because I think humans verifying AI slows down productivity and ups electricity bills.
i-am-a-passenger@reddit
You don’t need to create new jobs to do that, companies generally already have this process in place to verify whether their human employees are “hallucinating”.
roboticlee@reddit
AI will help us achieve this in short time. It's on its way.
AI has already disrupted all markets, from jobs to financial. It is not a case of when AI is coming for our jobs - it is already conquering our world. I see it as a net benefit, provided governments step out of the way and we all get to enjoy the freedoms AI will bestow upon us, such as the freedom to choose whether or not to work while still enjoying every luxury imaginable.
Alas, governments are already stepping up measures of control to keep us from enjoying the fruits of AI, robotics and automation. Bureaucrats are afraid. Do not allow them to project their fears onto wider society.
The only people who will truly feel bad about AI will be subsets of those who already hate the modern world, those who distrust any form of authority, those who believe they should be our rulers, and socialists - yes, socialists, because they won't have anything left to be activists against.
Early_Retirement_007@reddit
It is helpful: it gets you going, and you can link your company's infrastructure with it so it spits out a pretty specific solution that Google never could. But sometimes it can go in circles, which can drive you mad.
ace5762@reddit
The circular investment market bubble. That's what you're missing. If it runs out of hype, it collapses.
Remote_Development13@reddit
I feel like a person's view on AI is quite heavily dependent on the degree to which their livelihood feels under threat from it
Logbotherer99@reddit
Mine is not under threat at all and I think it's a load of bollocks
BasedRedditor543@reddit
How is AI as a whole a load of bollocks? This just reads like someone who doesn’t know what AI is and just sees it as this big new trend
Logbotherer99@reddit
Not at all. There is value in it doing things humans can't do, and some value in it doing things humans can do, but faster. There is no value in it doing things humans can do but are just too lazy or stupid to do.
RecentTwo544@reddit
Same, but normally the wrong way around. People who understand AI and what it can (or more often can't) do are embracing it into their work.
You have an astounding amount of people fearing AI will "take their job" who are seemingly unaware that AI isn't remotely capable of replacing them.
AirconGuyUK@reddit
Unless your job is physical labour I think you're at risk tbh.
RecentTwo544@reddit
How? Why?
I'm not involved in physical labour, AI cannot take my job. People I work with - DJs, photographers/videographers, artist managers, lighting directors, sound guys, stagehands, stage managers, people in marketing, operations guys, security. The list goes on. None of them are replaceable by AI and that just one niche industry.
AirconGuyUK@reddit
Marketing is already getting ravaged by AI, but I will give you the others.
RecentTwo544@reddit
How is it "ravaging" marketing?
AirconGuyUK@reddit
Well the arse fell out of the marketing job market, coincidentally around the same time the LLMs got competent. I don't believe in coincidences.
Asoxus@reddit
> AI isn't remotely capable of replacing them
Like who? AI will come for a large portion of the workforce in the next 10-20 years. Accountants. SEO. Marketing. Checkouts. Shelf stacking. It will all be replaced by cheap robotic labour or automated processes on computers.
RecentTwo544@reddit
AI has been capable of replacing checkout workers and shelf stackers for decades. We've had robots in car factories and warehouses since the 1980s.
Self service checkouts have been a thing for 20 years.
There are other reasons we don't replace humans with this technology despite it being around for decades.
Accountants have been using AI since Excel was launched in 1985 (and I assume there was other software that pre-dated it) and again good reasons we've never let software go it alone in those 40 years.
SEO is kind of no longer a specific job role.
The idea it can replace marketing roles is laughable, that did give me a good chuckle.
Asoxus@reddit
Chuckle away. Come back to this comment in 10 years.
RecentTwo544@reddit
"AI" as people are currently using the term has been around for getting on for 10 years already, and at its core is no closer to taking jobs than when it was invented.
Exasperant@reddit
In the arts - the one area of life that absolutely should rely on human creativity and experience - it's already being used to replace artists, musicians, singers, voice actors, even live-action actors. It's being used for lighting, sound, and editing.
The idea that it's a glorious new dawn for humanity because it allows us to cheap out on expressions of being human is some weapons-grade doublethink bullshit - just as is seeing people already on the precipice of being made obsolete by it embracing and praising its ability to reduce the amount of effort they need to put into their desk jobs.
Asoxus@reddit
Cheap out? There are people that could never afford custom songs, paintings, stories, whatever, that now have access to let their imaginations run wild. I think that's a win.
Remote_Development13@reddit
I would agree with this, it's quite poorly understood across the population (myself very much included). You seem to have a very optimistic/accelerationist perspective and then an incredibly pessimistic view. We could do with a little more nuance in the discussion.
RecentTwo544@reddit
This confuses the shit out of people all the time. I don't see why, my views seem quite sensible and should surely be the default.
It's because I want AI to be as good as people are suggesting it is/will be, and am angry that it isn't.
Now I'm REALLY angry because the price of computer components/storage is either through the roof or they're simply not available, because of this utter shit.
Remote_Development13@reddit
Sorry, I think you may have misunderstood - I was referring to the fact that across the population, there appears to be an incredibly optimistic view on one side, an incredibly negative one on the other, and little nuance in between
I wasn't making any comment on your personal view.
RecentTwo544@reddit
Oh my apologies, I thought when you said "you seem to have" you meant me specifically.
But actually it still stacks up because I have both, and this should be the default view.
People should be optimistic that it could be good (because one day it will be, might be hundreds or even thousands of years off yet mind) and pessimistic because at present it doesn't work properly.
redditmember192837@reddit
Why would anyone want it to be good? What good can it possibly do for our lives?
Remote_Development13@reddit
Sorry, I think you may have misunderstood - I was referring to the fact that across the population, there appears to be an incredibly optimistic view on one side, an incredibly negative one on the other, and little nuance in between
I wasn't making any comment on your personal view.
anabsentfriend@reddit
AI will definitely affect my job, and for the better I think. I hope they crack on and get it integrated with our systems ASAP. My job is in the field of structural surveying and assessing deterioration over time.
doctorace@reddit
My livelihood is under threat because of the global economic downturn. AI is just C-levels putting a positive spin on massively cutting headcount.
_Odaeus_@reddit
Also anyone who even slightly thinks about the stupendous energy use going into it, or the centralisation of power it brings to a few US companies, or the detrimental economic effects of a massive circular investment bubble and the scarcity of memory, or perhaps the deskilling of a workforce now in order to extract rent later when everyone is dependent on these products. Could be one of those too, just saying...
Asoxus@reddit
Or the fact that, when properly driven, it can accelerate the average person's workflow tenfold, turning entire projects that would otherwise have taken weeks or months into hours of work.
It's enabling people to create who otherwise wouldn't be able to. Children are able to generate specific stories in minutes, and generate images to go with each page near instantly. Elderly or lonely people are able to have conversations with characters that get more realistic every day.
Energy use wouldn't be an issue if people stopped objecting to wind farms, nuclear power stations, and tidal energy farms. But this is Britain. Not in my back yard.
irrelevantusername45@reddit
Why on earth would you want that?? Children should use their own imagination for stories, it’s one thing that adds to brain development. It’s really really important that children don’t grow up using AI for everything otherwise we’ll see massive cognitive decline in the next few generations.
Asoxus@reddit
My daughter has been entering writing competitions with her own stories - not generating them. She's able to illustrate what she writes by sketching out her idea and then having Gemini finish it off.
Professional levels of detail for her book, instantly.
irrelevantusername45@reddit
To me this is bad parenting and I'm sorry to hear this. Learning to write is really important for neural development. By skipping steps and getting a computer to finish it off, you're only stifling her and making her dependent on the tool. Do you also let her cut corners in PE?
With that said, teaching a child how to use AI effectively is certainly a good thing, but you need to be careful not to harm their development. Creative writing is IMO one of the last things you should be automating like this.
One of my biggest regrets is not getting more used to handwriting (Because computers/typing were becoming so much more common in school when I was growing up) and now there are plenty of studies showing how handwriting is much better for your brain/memory than typing. I'm slow and sloppy with handwriting and it's something I keep meaning to improve.
pintsized_baepsae@reddit
Except prompting is not creation - it's ripping off other people's work while stunting your own creativity and taking the chance to grow away from you. And these 'conversations with characters' are a way to make people wholly dependent on the AI, rather than looking at and treating the root cause of their loneliness.
Children are creative enough to create stories and draw pictures without using AI. They've done it for decades, and playing and being creative REALLY isn't something that needs any level of automation. What, do you want to turn a profit with little Timmy's book??
The issue is that there's a risk of 'failure' attached - and that's what people, ESPECIALLY adults, are afraid of or struggle to accept. And failure is a very flexible term here, because essentially it means 'it didn't come out as I wanted'... which is gutting in the moment (as every creative will attest), but is an opportunity for growth and artistic development.
AI is instant gratification. The absolute bottom-ladder fast food kind of 'art'. If it weren't, AI 'artists' wouldn't need to work so hard to convince people that hey!! It's good!!
If it were good and had any merit, the pro-AI crowd wouldn't be so loudly against labelling something as created with AI. The fact that they are - because they know people don't want it - speaks volumes.
Asoxus@reddit
AI artists aren't trying to convince people that what it makes is good. 40% of newly uploaded songs on Spotify are entirely AI, and the average person cannot tell the difference.
luckless666@reddit
I have to admit its killer app for me is how it’s increased my ability to procrastinate tenfold. I now have a full recommendation for an investment plan, mock ups of a self build mansion, short lists of smart home tech and companies to install it, etc for when I almost certainly win the lottery.
Morganx27@reddit
Children were already able to generate stories and pictures - it's called having a pencil and an imagination. I "generated" stories and pictures all the time when I was a child: I spent time doing it, practised my craft, and got good at it. When people have been prompting up stories for 10, 20, 30 years, they still won't be any better at it than when they started.
Grimdotdotdot@reddit
Everyone keeps parroting the energy usage stuff, but it doesn't really use that much energy.
I'm far more cross about what it's doing to RAM prices.
_Odaeus_@reddit
Training runs continuously for weeks across giant compute clusters. Companies are literally attaching gas turbines to their data centres because the grid is insufficient and others have talked about needing more nuclear reactors.
Not sure how much energy you think is "not much".
Remote_Development13@reddit
Of course. The degree to which a person is in tune with all of that is quite dependent on whether they feel threatened by AI, in my experience.
Or whether they are already inclined to ponder the detrimental effects of resource extraction and the chaos of late-stage capitalism. Let's not kid ourselves into thinking that's anything other than a minority position, though.
nonoanddefinitelyno@reddit
Anyone who doesn't do an actual physical job and also thinks AI won't affect their livelihood is utterly deluded.
I'm nearing the end of my working life and I would be terrified to be starting out now.
It is THE most dramatic change in history.
apple_kicks@reddit
Honestly, I think AI only works at the moment because people with experience of manual workflows can verify and understand its output.
As soon as the only workers left are a generation with zero experience of how to work or think without following AI, companies will suffer and see constant errors.
Logbotherer99@reddit
Yeah, when the experienced staff overseeing the AI go there won't be experienced staff to replace them.
I get that it can replace loads of entry level jobs/tasks, but those are where you build the foundations of your experience.
Whoisthehypocrite@reddit
Physical jobs are going to go too. Drivers, warehouse assistants, factory workers, fruit pickers, some parts of construction.
And other physical jobs are going to see earnings collapse as everyone trains to be a plumber and not a graphic designer or software engineer.
AirconGuyUK@reddit
I have honestly considered becoming an electrician lmao.
Yeanes@reddit
So many of the people who are very excited by it seem to be software developers and programmers whose jobs are at risk. I am yet to see a plumber or a teacher or a surgeon displaying the same level of enthusiasm.
Ok_Bumblebee_2196@reddit
It's genuinely very useful for teaching in terms of helping tackle the admin work such as creating lesson plans.
Remote_Development13@reddit
Conversely, I am yet to see a plumber or surgeon who is particularly worried about the impact of AI.
As an educator myself, I can tell you that the profession is pretty evenly split between people who are incredibly excited by it, and those who are absolutely terrified by it
No-Tailor-856@reddit
I'm a maintenance engineer and also do a lot of work on electrical controls. I also work on our robots. I think this will leave me well positioned for the rest of my working life but I still think AI is crap and an overall negative for humanity.
harrywilko@reddit
My livelihood isn't under threat, but it has made my job far far worse through the fact that I need to wade through tonnes of AI-generated slop.
worotan@reddit
No, you’re just trying to hide from difficult questions.
kyzfrintin@reddit
Such as?
Remote_Development13@reddit
Eh? I've given no indication of my own personal view here, just noting a trend that I've observed
TomfromLondon@reddit
My job is under threat from it but I still think it's amazing and use it loads
StargazyPi@reddit
As someone in software:
It's the most exciting thing to happen in recent times; it's hugely energising to live through something that's transforming the industry.
March was the month when most of my SWE friends and I found ourselves pivoting from "AI is a useful tool to work through problems with" to "the fastest way for me to accomplish much of my job is to find effective ways of teaching Claude to do it".
In software, it's absolutely transformative. Optimistically, we could just build a lot more tooling, and solve a lot more of the world's problems with the same level of effort. Pessimistically... there's going to be a lot fewer software jobs in the future.
BasisOk4268@reddit
The doom mongering is coming from how much water it consumes. The AI companies are currently largely valued on their potential, as you say, but they're also borrowing and lending money between all the major players, money which in fact doesn't exist. So if AI cannot reasonably find a foothold in the economy and society at large, the bubble will eventually pop, breaking the global economy at scale, and all we'll have to show for it is how much water we wasted.
BasedRedditor543@reddit
Pretty sure walking uses more water than AI, simply through wearing down your shoes, which take thousands of litres to make.
davidwhitney@reddit
The water thing honestly isn't the issue people think it is - https://www.youtube.com/watch?v=H_c6MWk7PQc - this is a very good (and not pro-AI) rundown of why. It's basically a myth.
FairlyInvolved@reddit
Andy Masely's blog post covers this well (for those who don't want to watch a full video on it)
https://blog.andymasley.com/p/the-ai-water-issue-is-fake
Happy_Little_Fish@reddit
by wasted do you mean the water is never coming back?
BasisOk4268@reddit
Water used to cool AI data centres is largely evaporated, not recycled
AllThatIHaveDone@reddit
What typically happens to water that has evaporated?
JeffSergeant@reddit
It falls as rain, and has to be captured, cleaned, and transported before it can be used as drinking water again. It's the energy expended in that logistics chain we describe, in shorthand, as being wasted. No one believes the matter is actually being destroyed.
slade364@reddit
Why do we need to use drinking water in the first place?
AllThatIHaveDone@reddit
So don't use drinking water for cooling your data centres, problem solved 😂
There's a lot of issues with data centres, but that they use water for cooling is one of the sillier objections to have.
Justboy__@reddit
Have you not read the previous message?
AllThatIHaveDone@reddit
What do you mean? If the issue is that treating water for drinking is expensive, then don't use drinking water to cool the data centre. If the data centres are using grey water, then the volume of water being treated for drinking will not increase. Simple logic.
worotan@reddit
Childish logic that ignores the real issues.
You haven’t found a gotcha - if you had, don’t you think this would be part of the public discussion on this?
AllThatIHaveDone@reddit
I'm not trying for a gotcha 😅
mb271828@reddit
Grey water increases corrosion, microbial and scale buildup, etc, in the cooling loops, which increases maintenance, potentially to an uneconomic level. They aren't going to take a chunk of the data centre offline every few months to clean the loops when the local utility company can just pump nice clean water to them and save them all that cost and downtime.
CongealedBeanKingdom@reddit
Yes because there is enough rainwater to use in the desert where a lot of these data centres are being built.
Whats a desert again?
AllThatIHaveDone@reddit
Are there many deserts in the UK, do you think?
CongealedBeanKingdom@reddit
Sea water is salty. I don't think that would be the wisest idea.
toady89@reddit
And then it rains.
BasisOk4268@reddit
You do know we don’t have enough water filtration systems to catch all the rain in this country…
shredderroland@reddit
It's like arguing that it's OK to waste drinking water because there's plenty of water in the sea.
Alarmed-Cheetah-1221@reddit
How much water is being evaporated by AI in this country?
Far-Presentation6307@reddit
Most countries have these things called rivers. The rain falls into a river basin, some of which are incredibly large. Some gets taken up by plants or absorbed into the ground, but a big portion finds its way into rivers and lakes, and then eventually into the sea, where it mixes with salty water and becomes undrinkable.
You don't need a water filtration system to catch the rain; the geography catches it for you. You just need to filter the water you extract from rivers, lakes and aquifers.
UnacceptableUse@reddit
They're using clean water though, and the water then has to go back through the water filtration systems
Heinrick_Veston@reddit
Producing one hamburger uses around 2000 litres of water, using AI to write a 100 word email uses about 50ml.
If people are really concerned about conserving water, they’d be better off going veggie than stopping using AI.
poptimist185@reddit
My guess is most people don’t eat several hamburgers a day though
RealRefrigerator3129@reddit
Look at the scale though: 50 ml vs 2,000 L is 0.0025%. You'd need to perform 40,000 of those small AI tasks to equal one burger...
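The ratio quoted above is easy to sanity-check (a sketch; the 2,000 L per burger and 50 ml per email figures are the ones quoted in this thread, not independently verified):

```python
# Quick check of the quoted ratio: 50 ml per AI email vs 2,000 L per burger.
burger_litres = 2000.0   # litres of water per hamburger (figure quoted in the thread)
email_litres = 0.05      # 50 ml per 100-word AI-written email (figure quoted above)

emails_per_burger = burger_litres / email_litres
email_as_pct_of_burger = email_litres / burger_litres * 100

print(f"{emails_per_burger:,.0f} emails per burger")  # 40,000
print(f"one email is about {email_as_pct_of_burger:.4f}% of a burger")
```

Both of the thread's numbers (40,000 tasks, 0.0025%) check out given those inputs.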
rabid-fox@reddit
Animals get water from rain and food; AI does not.
RealRefrigerator3129@reddit
A bit of a disingenuous take. Most of the water consumed in producing animals for meat goes to growing the feedstock for those animals, which diverts it away from other uses, e.g. other human uses like domestic water. We also have a habit of artificially diverting huge amounts of water away from its natural courses, primarily to grow that feedstock. Even if that water is untreated, it still reduces the capacity available for treatment and potable use.
The only real difference with data centres is that they prefer to use potable/treated water, but even that's not actually required.
Ultimately, whether we're talking data centres or agriculture, all that water eventually finds its way back into the environment, primarily through evaporation (both from AI cooling and from transpiration in grown crops). The problem in either case isn't the water disappearing for good; it's that it's not available as liquid in the right places.
rabid-fox@reddit
No it's not. Life cycle analysis has shown 91% is from rain and grass and 9% from irrigation (with respect to ruminants). Chickens are obviously where you see problems, but people aren't blaming the chickens.
RealRefrigerator3129@reddit
That 91% from rain is rain that is being captured by the huge areas of farmland and doesn't make its way into water sources for other uses though. You don't get to just ignore that because it arrives "naturally" on the farmland.
rabid-fox@reddit
The presence of cattle doesn't "capture" rain in a way that diverts it from human use. Rain falling on pasture is going to evaporate, be taken up by plants, or infiltrate into the soil and groundwater regardless of whether cows are there or not. Cows aren't intercepting rainfall before it reaches some usable system; they're themselves part of the ecosystem that is already cycling water.
"Doesn't make its way into water sources" is backwards. Grassland systems are actually one of the main ways water infiltrates into the soil and replenishes aquifers. Having vegetation cover improves infiltration and reduces runoff compared to bare or degraded land.
A lot of cattle grazing happens on land that isn't suitable for cropping or water extraction infrastructure, e.g. arid land. So the idea that this rain could just be redirected to human use if cows weren't there isn't realistic. It's not like that rainfall is sitting there waiting for us to use it.
worotan@reddit
Now move onto the scale for the masses of other uses that ai is constantly put to by vast numbers of people.
You know, if you’re honestly working out the impact, and not just shitposting because you want to be able to continue getting away with living a vastly unsustainable lifestyle and ignoring the disasters of climate change for a few more years.
RealRefrigerator3129@reddit
We're talking about the comparison between AI and the hamburger here, not about whether AI is good on its own.
Americans apparently eat 50 billion burgers a year. That's roughly 3 burgers a week per person, or 1 every 2.4 days, which works out to around 17,000 tasks equivalent to the 100-word email per person, per day, just to match the water used producing hamburgers (and that's not accounting for other meat, of which hamburgers are only a small part).
That "vast numbers of people" equally applies to people consuming hamburgers, clearly.
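A quick sketch of the per-person arithmetic in the comment above. The US population figure is my assumption; the other inputs are the numbers quoted in the thread, and the results land close to the commenter's figures:

```python
# Per-person water cost of burgers, expressed in 100-word-AI-email equivalents.
burgers_per_year = 50e9       # US burgers per year (quoted above)
population = 330e6            # assumed US population
litres_per_burger = 2000.0    # quoted above
litres_per_email = 0.05       # 50 ml per AI email (quoted above)

per_person_per_year = burgers_per_year / population                   # ~150 burgers
days_per_burger = 365 / per_person_per_year                           # ~2.4 days
burger_litres_per_day = per_person_per_year * litres_per_burger / 365
email_equivalents_per_day = burger_litres_per_day / litres_per_email  # ~17,000

print(round(per_person_per_year), round(days_per_burger, 1),
      round(email_equivalents_per_day))
```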
Heinrick_Veston@reddit
Most people do eat meat every day though, and the water usage is broadly similar.
poptimist185@reddit
If you’re saying “we should collectively eat in such a way that avoids over reliance on natural resources,” I agree. That would be great!
worotan@reddit
So you’re ignoring what climate science tells us and just concentrating on one aspect of it, in order to keep having fun while living unsustainably.
Heinrick_Veston@reddit
It’s not what I was saying, but I do agree with that statement. It’s a key reason that I eat very little animal products.
thierry_ennui_@reddit
I swear to god it is impossible for somebody to say "this is bad for the planet" without somebody immediately popping up to say "yes, but this is worse"
LieFearless1968@reddit
It does though, since if people cared about helping the environment they'd focus their efforts on stuff that's actually much more harmful, instead of virtue signalling.
thierry_ennui_@reddit
If people cared about helping the environment they could focus on reducing emissions anywhere and it would help. Don't just label people caring about stuff as 'virtue signalling'. It's lazy.
Heinrick_Veston@reddit
I’m just trying to put it in context. Water usage for data centres is much lower than everyone seems to think.
thierry_ennui_@reddit
But it isn't zero. It is a legitimate contributor to climate change, and it is important that we talk about that.
Heinrick_Veston@reddit
OK, let’s talk about it.
Current global data centres are estimated to use about 1.5% of all electricity.
Electricity generation currently accounts for about 35–40% of global emissions. So if data centres use 1.5% of electricity, and electricity is responsible for roughly 37% of global emissions, that puts data centres at about 0.55% of global CO2 emissions (1.5% × 37% = 0.55%).
AI workloads are a minority of those data centre workloads, about a third to a half of the energy data centres use.
If you take the middle of that range, that means AI is responsible for maybe 0.2–0.3% of global CO2 emissions.
That doesn’t sound like a lot to me, particularly if these models lead to advances in the optimisation of power grids, battery designs, or climate modelling, which would all reduce emissions across the board.
thierry_ennui_@reddit
0.3% of 38 billion tonnes is still 114 million tonnes. That is very far from zero. I also find it really hard to believe that it's going to stay at 0.3%, considering we're only at the first generation of hand-held AI.
Heinrick_Veston@reddit
It’s obviously not going to be zero, is it? Kinda a moot point.
My point is that people seek to think AI is major contributor to climate change, when it’s only actually about 0.3% of all CO2 emissions.
If the technology leads to reductions in emissions due to the examples I’ve listed above, it’s well worth it.
I still maintain that we're better off focusing on cutting emissions elsewhere, such as in the food we eat or the way we travel. No burger ever led to a reduction in emissions.
thierry_ennui_@reddit
But it isn't zero. And it isn't going to remain at 0.2%.
worotan@reddit
As though we still have the capacity to absorb all these new drains on our sustainability that they’re inventing and popularising for a quick profit.
BasisOk4268@reddit
I have reduced my meat consumption drastically and no longer drink cows milk for this very reason. You are right to a degree but it doesn’t remove the issue. It’s whataboutism - something being worse doesn’t make something else not bad.
h00dman@reddit
I'm loving the thought of the imaginary mic drop moment you think this was.
If someone uses AI to finish the grunt work part of their task, freeing them up to more things that are genuinely useful, then it's a success.
BasisOk4268@reddit
If the average person continued to use their brain, as we have for thousands of years, people would be unaffected.
If the average person stopped eating food entirely, they would not be unaffected.
Heinrick_Veston@reddit
If humanity as a whole carries on as it has been, we’ll all be affected, because we’re destroying our world in many different ways.
If humanity develops sufficiently advanced AI systems then it could mitigate or solve climate change, food scarcity, disease, and many other hard problems.
If humanity continues eating food in the way it does now, i.e. meat-heavy diets that harm the environment, it will accelerate point one.
worotan@reddit
The success of creating more profits for business, not the success of dealing with a future that will destroy our civilisation and leave us fighting over very limited resources on a planet which is no longer productive for mammals to live on.
rabid-fox@reddit
You reduced your meat consumption based on a myth?
pajamakitten@reddit
Does animal agriculture not contribute to climate change then?
BasisOk4268@reddit
I reduced my meat consumption based on the fact livestock is one of the biggest contributors to climate change.
Heinrick_Veston@reddit
The average person’s yearly usage of AI amounts to about 1-5 bath tubs of water. It’s hardly much.
No-Tone-6853@reddit
Fuck all the water data centres use and the people whose bills get higher AND the people whose water has been ruined by them then, eh.
Heinrick_Veston@reddit
Data centres in the UK haven't increased the price of water for consumers. Over half the data centres in the UK use waterless cooling anyway.
BasisOk4268@reddit
This I didn’t know! Do you have information on why only 50% of the data centres use waterless? Is it less effective to not use in cooling or more expensive etc?
Heinrick_Veston@reddit
It’s because our climate is much cooler than places like much of the US. Outside air can do much of the cooling that data centres in Texas or the like need water for.
360Saturn@reddit
Good thing people exclusively use AI to write 100 word emails then and nothing that could be more resource intensive...
worotan@reddit
We need to do both.
The hype around ai is partly driven by desperation to counter the desperate future reality that climate science tells us we’re facing.
Because of the popularity of your kind of whataboutery. Giving people reasons why they don’t have to give up vastly unsustainable lifestyles, where they use large amounts of rapidly shrinking resources to show off how much fun they’re having.
We’d be better off regulating ai so that it is used for serious applications, and the other industries too.
But whataboutery always wins out, so people can ignore the reality that’s coming.
rdu3y6@reddit
Text responses use a lot less computing power (and hence electric and water) than image generation does with video generation being the most power intensive. It's why rate limits for text is a lot higher than image and video gen.
rabid-fox@reddit
Cows get water mostly from green water sources, e.g. rain, streams and grass. A small percentage is from irrigation, so it has no real impact on water scarcity. AI is 100% blue water.
LieFearless1968@reddit
So many other things people find acceptable are significantly more harmful to the environment though (e.g. beef), so it just seems illogical to ignore all of that and focus on something much less harmful just because it lets people virtue signal.
BasisOk4268@reddit
I think the stark difference is that beef gives sustenance to the global population. AI just consumes water for generative videos and other things we can already do with no water wastage, in industries that employ millions of people worldwide.
The ideal result of breeding cows is to feed humans. The ideal result of AI use is to take jobs away from humans, resulting in further poverty and wealth inequality, meaning more reliance on the state.
LieFearless1968@reddit
The thing is there are many alternatives to beef which aren't as harmful (not just regarding water usage but also methane) but for many of the things people use AI for there isn't e.g. from easily getting a synthesised answer or writing emails which otherwise would've taken hours all the way to coding or medical research.
There are many other unnecessary things too which for some reason are excused as normal such as watching 4k videos, long showers, excessive lawn watering and buying plastic bottles/paper cups.
Agreed that it can replace certain jobs but automation has been doing that for centuries so presumably people would be able to continue finding other forms of work like before.
EssentialParadox@reddit
AI using copious amounts of water is a myth.
BasisOk4268@reddit
An interesting video on the face of it, but in the opening 20 seconds the guy says 'OpenAI don't share that information' about how much water each query uses; we're just taking Sam Altman at his word. Perhaps the discourse on water consumption is overblown, I'm not an expert, but if they're not allowing that openness (from Open AI lol) then how can we trust their word?
EssentialParadox@reddit
Did you watch the rest of the video? He goes into where the figures come from that people have used to calculate AI water usage and why they’re completely bunk science.
BasisOk4268@reddit
But the calculations are based on Sam Altman's initial '1/5 of a teaspoon' figure, aren't they? So all the subsequent maths is based on third-party information that can't be corroborated.
EssentialParadox@reddit
No. Either watch the video or stop assuming.
Heinrick_Veston@reddit
Unfortunately people are too lazy, stubborn, or desperate to fit in with the crowd to bother looking into any of the facts. Even when you spoon feed them with an easy to understand video.
Lazy_Crab_3584@reddit
... According to Sam Altman
bad_xyxa@reddit
Same logic goes for all datacenter use including downloading your favourite Steam games
mb271828@reddit
The compute required for generative AI is several orders of magnitude higher than serving a file. The comparison is not even in the same universe.
worotan@reddit
What are the comparative levels of water and energy usage?
You make it sound 1:1. Can you demonstrate that it's exactly the same situation in all these cases?
Or are you engaging in whataboutery to make dealing with climate change seem too much bother, so we just cross our fingers and ignore climate science?
WitShortage@reddit
The computational cost in your example is several orders of magnitude lower. Therefore, so is the consumption of electricity and production of heat.
BasisOk4268@reddit
I don’t use Steam but thanks for the straw man
bad_xyxa@reddit
One example for clarity. Another example might be streaming iPlayer.
ManlykN@reddit
AI is just a more advanced Google search for me.
Imperator_Helvetica@reddit
The more realistic doom mongering is that it's a bubble forcing up prices of hardware, burning up electricity and wasting water and allowing faster and easier creation of propaganda and CSAM.
I don't think people worried about Skynet or their toaster turning against them should be worried.
More that it's contributing to the general enshittification of the world as companies try to eliminate jobs, choose a cheaper and shitter option and degrade real information e.g. AI making up case law or inventing scientific studies to cite as well as basing everything on stolen intellectual property.
Whoisthehypocrite@reddit
I do love the stolen intellectual property argument from humans who have spent years at university and in the workplace training themselves on the intellectual property of other humans without paying them for it. That is how we all learn...
Happy_Ad_4357@reddit
I do love the publishing royalties don’t exist argument. Oh wait, no I don’t. It doesn’t make any sense.
Slothjitzu@reddit
What do you think happens when you buy university books, pay course fees, or your workplace pays its training staff?
Imperator_Helvetica@reddit
If I buy a book by Jane Smith and read it then I can learn to write like Jane Smith. If I steal her diary from her and copy her writing I have done it without permission or compensating her.
If big AI were asking permission and paying for art to feed to their model then it is up to individuals to sell it.
Also, I believe that my university tutors were being paid (though possibly not well enough) to share their knowledge with us and I think if I'd just repeated word-for-word their book, essay or paper I would not have been allowed to pass.
GordonLivingstone@reddit
The US military is very keen on using AI to identify targets and plan attacks. Apparently being used in Iran right now.
Witness the big fuss when Anthropic were reluctant to let their product be used for purposes it considered unsafe.
Doesn't seem that far from letting AI make decisions on launching missiles.
Fine_Analyst_4408@reddit
I've never trusted my toaster and I'm not going to start now.
Imperator_Helvetica@reddit
Steady on Mr Lister. Surely we can talk about this - I'll open negotiations "Do you want any toast?"
Fine_Analyst_4408@reddit
We want no muffins, no toast, no teacakes, no buns, baps, baguettes or bagels, no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns and definitely no smegging flapjacks.
Imperator_Helvetica@reddit
Ah, so you're a waffle man!
dbxp@reddit
On the hardware front, there have been some recent critiques because what OpenAI signed were letters of intent, not actual orders. There have been a lot of big announcements around AI investments where no money has actually changed hands. For example, I believe the massive OpenAI Stargate project has been announced as a $500b project, but so far there are only loose agreements for $52b in funding.
TheTjalian@reddit
So you mean to tell me Micron has pivoted away from consumer memory and gone all in on manufacturing as much memory as possible for a company that has only said it wants to buy memory, not actually placed an order?
In any other industry you'd be sacked for making such a move
irrelevantusername45@reddit
The three memory companies have been known for price fixing in the past. I honestly think a big part of it is a move to try and suffocate China of resources.
dbxp@reddit
I don't know how much of an actual manufacturing change there was. There's likely been some impact from AI however some of it may have been that the rumour mill resulted in stockpiling so there might be a glut now.
Cubewood@reddit
Iran is literally threatening to blow up this, according to you, "non-existent" Stargate project https://www.theverge.com/ai-artificial-intelligence/907427/iran-openai-stargate-datacenter-uae-abu-dhabi-threat
dbxp@reddit
Stargate is a global project. What they're building now is the DC buildings but that's not where the big money goes. The big money is spent on the computing hardware, it's pretty common to build infrastructure for expansion which isn't funded yet, this isn't unique to AI or tech.
In the case of Dubai things are distorted due to the power of large sovereign wealth funds which can pay for construction upfront. The financing side of Stargate seems a bit murky but it seems that the plan is for the datacentres to be owned by investment funds with the investments managed by Softbank. OpenAI then leases them for operations.
All this adds up to some figures which are sort of theoretical best case scenarios. How much money has changed hands, who ends up holding the bag and who ends up gaining the profit is all a bit of a mystery.
Mccobsta@reddit
SSDs have gone up so much that a 480GB drive from a reputable brand is now £84, which used to be the cost of over a terabyte.
I for one can't wait for the bubble to burst and storage to become affordable again.
Fit_Adhesiveness7307@reddit
Honestly, I’m strongly against AI because above all other reasons, there is the fact that it’s already been shown to prevent people (and kids) from learning critical thinking.
If you think we’re in a mess now due to social media and all the fake news and echo chambers fuelling the divides in society, just wait till the current young generations are old enough to have more power and influence. I really worry about that. (I’m not even old myself, but I won’t let myself lose my abilities by outsourcing them to AI.)
rdu3y6@reddit
The tendency of LLMs to add fluff praising the user and to never disagree or say it doesn't know is going to cause problems if kids grow up expecting humans to do the same.
pajamakitten@reddit
Which will be a shock when they get a job.
GordonLivingstone@reddit
Well, they will probably be working with an AI so things may not be that different
Exasperant@reddit
Luckily for them AI will have taken all the jobs...
Stirlingblue@reddit
Well that’s fine as there will be barely any jobs left apparently
360Saturn@reddit
There have already been cases of people committing crimes & assaults because AI validated their grievances & encouraged them! Alarming
bow_down_whelp@reddit
This is my biggest concern. For my own learning, when I am finished with something I run it through AI. It is still missing things and missing some connections. It is good, however, for "teaching" and giving you a basic foundation to springboard off and learn yourself
ssebarnes@reddit
Be really careful inputting your work into AI. At my university we are strictly advised against inputting any academic work into AI software, because of the risk of data breaches.
I find it really useful for revision purposes, i.e. my current coursework is on One Hundred Years of Solitude, which I've read and know which points I want to make. All I ask it for is specific quotes and page numbers, so that I can verify them. Everything else usually comes from JSTOR.
It's an extremely valuable resource for me in accelerating my learning, as it means I don't waste time on technical issues like finding quotes in a 140,000-word book.
bow_down_whelp@reddit
I don't put any confidential information in.
There seems to be different takes depending on the institution and what they find acceptable
TujiTV@reddit
Yeah, everything you put into AI could end up being part of its training model. I can't imagine if you were using AI to iterate on your thesis and you end up getting flagged for plagiarism because AI has taken your work and spat it out to someone who published before you.
ssebarnes@reddit
Yep - my uni's thought exactly. It's useful for its output, but please nobody ever input any professional data in.
Fuzzy-Corner8129@reddit
Seeing a lot of people crying on social media when GPT goes down because they can't complete their homework or wondering how anyone passed their GCSEs without GPT worries me a lot. I want to hope most of these people are joking, but I don't think all of them are.
apple_kicks@reddit
The one bit of job security we may have now is that future generations may lose these skills so badly that companies boost pay for workers educated pre-AI.
Icy-Kaleidoscope9894@reddit
You're not getting how big a deal this and other similar uses are. AI tools are not as good as some seem to think but they certainly are making a lot of software engineers 2-3 times more efficient, maybe even more (depending on the specific discipline)
So if in the space of a year the software needs of companies have not changed but 50% + of engineers are seeing their productivity increase twofold then the demand for engineers is reduced by about 25% and we're seeing that pan out with graduate jobs in software getting very hard to find compared to just a couple of years ago.
This is happening to varied extents in a lot of different fields and is having and will continue to have a significant impact on the job market, which is pretty scary.
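A sketch of the arithmetic in that middle paragraph, using the commenter's assumptions (50% of engineers, 2x productivity, flat demand): the boosted half's workload now needs only a quarter of the original workforce, so total headcount demand falls by about 25%.

```python
# If a fraction of engineers gets a productivity multiplier while total
# software demand stays flat, how much does headcount demand fall?
def headcount_reduction(fraction_boosted: float, multiplier: float) -> float:
    # Unboosted engineers are unchanged; the boosted fraction's workload
    # now needs only 1/multiplier as many people.
    remaining = (1 - fraction_boosted) + fraction_boosted / multiplier
    return 1 - remaining

print(headcount_reduction(0.5, 2.0))  # 0.25 -> ~25% fewer engineers needed
```

This is obviously a toy model: it assumes the freed-up capacity isn't absorbed by new demand for software, which is the commenter's premise rather than a settled fact.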
2B_limitless@reddit
You could have written that with AI... that's what you missed.
angels-and-insects@reddit
It's the Emperor's New Clothes of tech bros. (I'm talking about LLM gen-AI specifically.) Everyone thinks it can do every job except theirs, and that's because they can see where it gets their job wrong.
I'm self-employed, working late almost every day, and trust me, if it ever saved me a MINUTE I'd be in there like Pooh Bear with a bee hive, climate be damned. But every damn time it has just wasted my MF time and cost me extra time. Because I'm also quality control for every aspect, so I KNOW.
I have tried it for so many essential functions of my work and it's failed every time.
It's the shittiest beta-test ever inflicted. I'm so fucking glad I'm my own boss.
StereotypicallBarbie@reddit
I have a friend who uses chat GPT for absolutely everything.. it’s ridiculous! She will ask it what colour to dye her hair.. what dog breed she should get.. what lipstick suits her skin tone.. she has it doing work assignments.. and even encourages her daughter to use it for college work!
I genuinely believe if someone took it away from her she’d be lost.
spammmmmmmmy@reddit
I think you have missed the main thing that is scary about them at this particular moment: they don't just compile code faster... they write entire programs that work, and can almost do the job of a very experienced and highly paid software developer. That upsets the economy of computer software and computer services in a big way. It's like the Industrial Revolution, which took 100 years, compressed into two years and counting.
There are future upheavals but the software development upheaval is happening today.
NationBuilder2050@reddit
Currently in a group in my office where we discuss how we’re using AI. There’s a real mix of how people are using it ranging from enhanced Google searches through to automating workflows.
If you’re only seeing it used as a enhanced Google search you’re probably not really seeing the full potential.
The potential of AI is not manually probing it to answer questions or do a task step by step. The potential is in Agentic AI. Where you have an AI enabled agent which is given the direction and autonomy to do multi-step tasks.
One thing I’d suggest is that AI adoption is going to need redesigning of workplace processes to make it work. A small example, AI (currently) struggles to make sense of excel tables which have merged cells, you need to go back to redesign the table to make AI work.
Did_OJ_Simpson_do_it@reddit
It’s funny cos you can ask it stuff like “How would Captain Mainwaring have responded to the Iranian Embassy Siege?” or “Isn’t it amazing to think there were WWI vets who lived to see the first iPhone?” and it gives interesting replies.
Strange-Yam4733@reddit
I tried using Co-Pilot today to compare two Excel-based reports. After an initial "oh cool, it's understood what these things are and is referencing it back to me", I realised its suggestions would give a very substandard report my superior would not be satisfied with, and potentially not even useful. TLDR - I tried it, ended up back with pivot tables
Fidgie0@reddit
You're missing the part where companies have put millions into developing it and really really really need everyone to start using it, or else it was a colossal waste. So they're pushing it hard and putting it into everything, and for the most part people seem generally ambivalent at best, so they push it harder.
Redsfan1989@reddit
Yep.
AI is great for the following: Summarising minutes from very boring corporate meetings. This has meant that listening back to two hours of BS trying to capture everything verbatim for a bunch of narcissists who love their own voice has been reduced from a whole day to an hour tops, even with proof reading. So yeah, fair doos to AI for that.
AI is shite for the following: Literally everything else. Asking it to do anything with Excel in my experience, even getting really specific, is a no-go for example.
51onions@reddit
It's quite useful for porn, as it turns out. But for some reason, they don't want people using it for porn.
The one thing that people are very willing to give them money for, and they don't want to take it.
No-Translator5443@reddit
Also useful for putting ai homeless in pictures
MrReadilyUnready@reddit
The issue is some of the people using it to produce porn are requesting deepfakes, not novel porn. It's hard to ban only deepfakes.
51onions@reddit
Even LLMs are useful for porn, and that's text only. Some people participate in sexual role plays and the like. The AI often refuses to do it.
Silly-Industry1527@reddit
Often?
51onions@reddit
There are "jailbreaks", as far as I understand. Prompts you can give the AI which undermine the system prompt telling it not to be lewd.
Gary_James_Official@reddit
And for Grok, specifically, users have been requesting child porn.
Active_Doubt_2393@reddit
How is it useful for porn? Erm, so I can avoid doing that.
Atheissimo@reddit
Before AI only the five richest kings of Europe could afford a piece of equipment called a Pornograph, which is used to practice the art of pornography. It's a large mahogany cabinet manufactured only in Switzerland.
potatan@reddit
I wonder what pops out on the hour?
Delicious-Being-6531@reddit
Not true. They also produce excellent rosewood ones in Lombardy, Bergamo produces some of the most exquisite examples.
third_king@reddit
But they’re not legally allowed to be called Pornographs due to EU labelling rules, they’re jizzy cabinets.
Snoo_23014@reddit
I was fortunate enough to see the Pornograph de Monte Christo at the Louvre.
The beadwork and pearl inlay was immaculate.
Nevernonethewiser@reddit
I have some bad news about those pearly beads.
Or good news, depending on who's asking, I suppose.
Snoo_23014@reddit
They were very shiny. Almost wet looking
Tylerama1@reddit
Can imagine the orange nonce saying this 😬
Nevernonethewiser@reddit
They say it's good luck if you see your reflection in them.
Snoo_23014@reddit
Haha Haha!!!!
Atheissimo@reddit
Well yes, but if you widen the field to include the trifles from Piedmont you might as well count every copy of Razzle ever cast into a hedge!
Spdoink@reddit
I inherited my Father-in-Law's Pornograph, but haven't cleaned it off yet.
Atheissimo@reddit
Good plan, the sticky patina of a generational Pornograph is what marks out an original from the many low quality imitations on the market.
Lovecraftian666@reddit
Love it. Absolute Ron Burgundy answer
Akabitomago@reddit
Now even the rabble will be calling themselves pornographers. Such times
51onions@reddit
LLMs can be used for erotic roleplay. Janitor AI is one such example site which allows erotic role play.
Image gen is obviously useful for porn, if the provider doesn't censor the model.
xxxxxxxxxooxxxxxxxxx@reddit
Are people really getting off to talking to a chat bot?
PlasonJates@reddit
Yes, it's always yes. People have gotten off to weird shit forever. Tech just makes it easier and more accessible.
51onions@reddit
Yeah. I don't see it as any different to getting off to an erotic novel. Except you get to decide the story here.
It's not just a conversation. Both you and the AI contribute to the narration as well as the dialogue, though it is structured like a chat interface.
Various-Drive9313@reddit
freak
beardedslav@reddit
https://www.youtube.com/watch?v=LTJvdGcb7Fs
apple_kicks@reddit
Tech companies underestimate how many computer-illiterate people are out there. The only people getting real use out of AI are those who could do the same thing manually or who understand how computers handle data.
If we want to build new homes and infrastructure we need more grid power, and data centres offer too little for what they take out of our resources
OrbitingPlanetArse@reddit
The power requirements for AI are frightening.
I read an article where the 80 or so data centres in Eire use around 18% of the country's generation capacity.
As a comparison, when the Anglesey Aluminium smelter was being planned in the 1960s, questions were being asked in Parliament about the facility's power needs. (Aluminium is produced by a form of electrolysis - it is fundamentally very power intensive). In operation, the smelter eventually used around 255MW of power: at the time, this was around 2% of the UK's generation capacity.
Yet the huge power requirement for AI seems to have crept up on us without anyone seeming to notice.
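Taking the figures above at face value, the comparison is easy to sanity-check with back-of-envelope arithmetic:

```python
# Back-of-envelope check on the figures quoted above (face value only).
smelter_mw = 255          # Anglesey Aluminium smelter's eventual draw
smelter_share = 0.02      # ~2% of UK generation capacity at the time

implied_uk_capacity_mw = smelter_mw / smelter_share
# ~12,750 MW of implied UK generation capacity in that era

# The Irish figure: ~80 data centres using ~18% of generation capacity
# averages out to roughly 0.2% of national capacity per site - each one
# a Parliament-question-sized load by 1960s standards.
dc_count = 80
dc_share = 0.18
per_dc_share = dc_share / dc_count   # ~0.00225, i.e. ~0.2% each
```

So by this rough arithmetic, an average Irish data centre draws a share of national capacity in the same ballpark as the smelter that once prompted questions in Parliament.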
apple_kicks@reddit
This is during housing crisis too. New builds need more energy but grid will suffer trying to supply both without huge investments
Ok_Young1709@reddit
Insane that they underestimate that when most of them probably started work life in lower jobs and have dealt with the general public in some way. Every IT person should have at least one story about a user that has done something completely stupid, otherwise you are not an IT worker.
TheDawiWhisperer@reddit
Yeah that's a reasonable summary. I use AI to write PowerShell scripts for me to automate stuff, but the really important bit is that I already know PowerShell well enough to sanity-check whatever Copilot shits out and to correct the lies.
For example, last week I asked Copilot to create a way to monitor all automatic services on a server, with some exclusions, using a specific tool. Copilot confidently came back with an approach I know doesn't work, so I was able to refine the question to get an actually useful answer.
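The kind of script being described can be sketched roughly like this, in Python rather than PowerShell for illustration (the service names and exclusion list are invented):

```python
# A rough stand-in for the monitoring script described above: take all
# services set to start automatically, drop an exclusion list, and
# report anything that isn't running. Service names are invented.
def services_to_flag(services, exclusions):
    """services: list of (name, start_type, state) tuples."""
    excluded = {e.lower() for e in exclusions}
    return [
        name
        for name, start_type, state in services
        if start_type == "Automatic"
        and name.lower() not in excluded
        and state != "Running"
    ]

snapshot = [
    ("Spooler",       "Automatic", "Stopped"),
    ("WinDefend",     "Automatic", "Running"),
    ("gupdate",       "Automatic", "Stopped"),   # excluded below
    ("SomeManualSvc", "Manual",    "Stopped"),
]
print(services_to_flag(snapshot, exclusions=["gupdate"]))
# -> ['Spooler']
```

The logic is trivial; the point of the anecdote stands - the value is in knowing enough to spot when the generated version of this gets the filtering wrong.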
Spdoink@reddit
One of the worst things about CoPilot is how much honeydicking it does to you. It completely goes for utterly terrible ideas and executions, and will keep doing it long after logic should have intervened.
Acceptable_Tower_609@reddit
I'm afraid you misspelled trillions there
kunstlich@reddit
What'll be interesting over the next few years is watching how much of this trillions of dollars is actually spent, committed, or just letters of intent.
There are examples in the UK at least of datacentre-scale projects being promised with billions of spend, where nothing is actually in place to even begin the project. It's smoke, it's not real.
marxistopportunist@reddit
Probably it will become part of the global theatre. Blamed now for mass layoffs, it will be blamed for economic disruption and praised for negotiating with aliens or whatever. Should be fun, get your front-row tickets early
_o0Zero0o_@reddit
Trillions? Quadrillions, lad. Pentillions even
binaryhextechdude@reddit
Windows added Copilot to Notepad. The text editor that was meant to have simplicity at its core.
trevhcs@reddit
Shows how desperate they are to seem relevant. Finally discovered there's a setting to turn all the extras off.
_lippykid@reddit
Literally the only way Open AI can turn a profit is if its service replaces the majority of the human workforce. No amount of subscriptions and ads will make a dent in the obscene amount of money they spend
That’s not hyperbole. Just math
Ironfields@reddit
We are at the “we either create God in the next five years or the entire economy goes off like the Tsar Bomba and millions of people who never asked for this lose their livelihood” stage of capitalism. I’m so glad we have these maniacs running our lives, truly.
KittyGrewAMoustache@reddit
I hate it so much, it just seems so fucking stupid. Like who is it actually for? Some of it is genuinely useful, but given the enormous amount of energy and water it requires, it seems horrific to have it used everywhere for everything, even people's random stupid musings or helping them compose social media posts about fripperies.
Just use it for niche stuff where it’s actually helpful. It’s nowhere near good enough for general use to replace humans. It’s just not for us, it’s not for humans, so what’s the point? I can’t think of many other inventions that have been so ‘anti-human’ in so many ways, other than maybe the atom bomb.
QueenOfTonga@reddit
And the bubble goes *pop*...
Rigatoni_Soprano@reddit
I work for one of the largest automotive manufacturers, employing over 170,000 people worldwide. Early last year, we received an email from the board announcing that we needed to start using Copilot as part of a new partnership with Microsoft. They provided a few examples of how it could be used, but most of the messaging was filled with buzzwords and marketing jargon.
The uptake has been abysmal... there’s very little interest in using it. Since then, we’ve received several follow-up emails from the board, each one sounding increasingly passive-aggressive. They’ve hosted webinars and workshops, and even tried to get HR and IT to push adoption.
According to our local IT manager uptake has been less than 1%
fergie_89@reddit
I'll be honest that I don't even know how to use it 🤣 my husband does as he uses it in bits for his job, my friend showed me how to use one to help write my report in a more formal way, but honestly? I don't really care. My job can't be replaced with it and if I get replaced it will be by someone much more senior than I.
idlewildgirl@reddit
And the people that own the companies (Eg Open AI) aren't the most scrupulous.
Coconut_Maximum@reddit
"When bubbles happen, smart people get overexcited about a kernel of truth," Altman told a small group of reporters last week.
https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-market-is-in-a-bubble.html
MrReadilyUnready@reddit
Lmao millions?? They've collectively poured trillions into it.
BurnStar4@reddit
I work for a company who has now got their own in-house AI that we're being actively encouraged to use, but I just don't see the point. We're being pushed to use it to take notes for us and help us write emails and shit, but I have never needed AI to do these things for me, so it just seems strange and forced. Especially as it seems AI is bad for the environment. Just seems weird that on the one hand, the company is pushing for sustainability and whatever but then they're making us use AI for bullshit reasons?
5im0n5ay5@reddit
It is replacing a huge number of roles. Perhaps new ones will emerge to replace those lost but it's undoubtedly an extremely powerful tool, and will only get better as time goes on.
northernbloke@reddit
A lot of the comments here remind me of what people were saying when the internet was in its infancy.
EmeraldJunkie@reddit
I recently started a new job, and part of the training was digesting and regurgitating some of the information shared into a presentation for the rest of the trainees. Everyone under 30 immediately took to ChatGPT, asking it to repeat what had already been explained to help create their presentations, and it showed. They struggled to answer further questions on the relevant topics, and it's no surprise that half of the 8 of us in that group didn't make it past the first couple of weeks.
apeliott@reddit
I've been using it recently for work and it's been absolute garbage.
Constantly spewing out wrong information with great confidence.
AlephMartian@reddit
You're very clearly using some sort of old free model or something. I remember a couple of years ago ChatGPT etc. were a bit hit and miss, but this just isn't the case any more when I use it (albeit on paid plans), and I use it regularly for all sorts of things from coding to recipes to content creation. It's incredibly accurate, with maybe an error 5% of the time, but because I check it, that's not an issue (and no, checking it doesn't take anywhere near the time it would have taken to do the thing).
StGuthlac2025@reddit
I'd say you need to get better at understanding how to use it.
apeliott@reddit
Nah, I've given up at this point.
I've given it very clear instructions and it returns absolute nonsense with utmost authority.
It's quicker and more reliable for me to search for information myself.
pale_doomfan@reddit
But if it admitted it didn't know, you might not ask it anything again...
Remote-Ad5853@reddit
To its credit, I’ll say it’s been amazing as a first-time Linux user! It’s very rarely been wrong about setup, sorting errors, or helping me do things I had no experience of. No doubt this is because of the active community whose forum posts and Arch wiki it all got nicked from.
That said, there was one instance recently where I was having some issue and it jumped straight into that exact thing of spewing nonsense with great confidence
RecentTwo544@reddit
I keep meaning to make a spreadsheet of all the times Google's "Gemini AI" answer is wrong, and the times it is right.
It's probably about your 80% estimate that it is wrong.
I can now pick up on it, and often find myself saying to people "that doesn't sound right, did Gemini tell you that?" and yep, always. And I'm not even particularly intelligent or smart, so if I can pick up on it, it shows there's a major issue with it.
apeliott@reddit
Yeah, and it's not just that it's wrong the majority of the time. It's that I then have to go and manually check everything it says because it can't be trusted.
I might as well just do it myself in the first place.
apple_kicks@reddit
Ai is doomed when they had to admit the hallucinations are a feature not a bug that goes away as it learns
winter-2@reddit
I'll never understand why we need realistic AI images/videos. I can't think of a single positive use, and I've seen them used for so many bad things.
Beautiful_Hawk548@reddit
So one of the things I heard and have repeated to a few people that really opens the whole thing up:
"Pepsi CEO says everyone will be drinking only Pepsi by next year!" You'd just write this off as the usual marketing BS, right? You know Pepsi doesn't work in every application, some people already have agreements with Coke, and some people just don't like Pepsi. It's impossible, but you wouldn't be going around telling everyone how silly it was, repeating their story and arguing for Fanta in every comment section. You know he's saying it because it's his job to say those things and try to achieve the impossible goal. If he came out and said "we'll probably hold roughly the same market share, it's the same sugar water it's always been," he'd probably lose his job.
So, when an AI CEO or CEO with a lot of money in AI tells people that "AI will do everything in a year!" why are we not using the same skepticism?
disappointingcryptid@reddit
100%, I try to avoid the news but every time an “AI CEO predicts that there’ll be no more X jobs in a year!” headline slips through I just roll my eyes. They need people to believe it so they can keep their investors dumping in more money to be funnelled around between 5 companies.
NorvernMankey@reddit
Most of the new media we’ve disseminated throughout the world in the last century and a half has first been co-opted into cost-effectively making, and then more efficiently distributing, hardcore pornography. Photography, film, video, digital discs and streams all got their alpha testing done by the dirty macintosh brigade, ensuring a superior product for Disney to exploit. The fact this is a porn-free (now), bread-head tech bro enterprise means it’s probably doomed to failure until there’s a yard sale for discount data centres and the wankers get their sticky little hands on the tech.
m8044@reddit
The AI products are being pushed onto us, the consumers, for the sake of capitalism and lining the companies' pockets. Most people don't need, or want such software in the first place. The companies say it's for churning out more efficiency and convenience, but at what cost? Who does it really benefit in the end?!
idek_just_for_fun@reddit
Supposedly my job is heavily under threat from AI but every AI tool I've seen is inaccurate.
You get a vague answer similar to what you would if you asked someone questions that just left an hour long meeting.
I can see it streamlining work which will reduce the numbers of staff needed but staff with actual experience are needed to monitor and apply the work AI does or it causes problems.
EedSpiny@reddit
I agree that it's over hyped. However I'm in the IT services industry and also see real applications and it's potential.
There are huge productivity gains to be had with it applied to the correct problems. Kind of where my job comes in, AI being one of the tools in the bag.
A concrete, trivial example: I had written a Windows PowerShell script to get me the next bin collection schedule from the council website, because the site can't even remember your postcode and needs way too many clicks for something you need to reference regularly.
With a simple prompt I got Claude to convert that to a basic web page written in HTML and JavaScript. I know next to no JavaScript. That web page now works better for me than the council site. Saves me a couple of minutes every time I want to look at the collection calendar.
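For illustration, the core logic of a page like that might look something like this in Python (the schedule here is made up, not the real council one):

```python
from datetime import date, timedelta

# Sketch of the logic behind a "next bin collection" page like the one
# described above: given which weekday each bin goes out on, work out
# the next collection date. The schedule itself is invented.
SCHEDULE = {"general": 1, "recycling": 3}   # Mon=0 ... Sun=6

def next_collection(bin_type, today):
    days_ahead = (SCHEDULE[bin_type] - today.weekday()) % 7
    return today + timedelta(days=days_ahead)

# 2024-01-01 was a Monday:
print(next_collection("general", date(2024, 1, 1)))    # 2024-01-02
print(next_collection("recycling", date(2024, 1, 1)))  # 2024-01-04
```

The real page would wrap this in HTML/JavaScript, but the logic is the same - which is exactly the kind of small, well-bounded problem these tools convert well.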
You can apply similar principles to business workflows that need ingestion of complex data to speed up people's work very easily.
What capitalism will do with this new found productivity is the worrying thing for all of us.
Dapper_Ad2931@reddit
A lot of the comments hating have no idea what they're talking about. AI at the micro level helps me automate tons of tasks: notetaking, analysis, data management, reports, marketing, finance, ops. Get into Claude Code and its MCP usage across apps and you'll see the benefit
riverend180@reddit
LLMs are absolutely terrible in my experience. Maybe I just really suck at writing prompts but I would be surprised. It just never seems to give me what I want from it if it's remotely complex and the further it gets from the initial prompt the more confused it gets. It seems well aware that it's ignored my explicit instructions when I point that fact out but then fails to do it again. It's always offering to do more and more stuff that I didn't ask for without ever actually managing to do the thing I asked it to do.
Borks2070@reddit
So, I'm in IT, have been for decades, and my recent work is all about the AI's. I'll give you my perspective.
Long story short. Automation. It replaces humans in the workflow. Not just help. It can help - it makes a single worker far more productive given the right environment and framework. It can also replace humans entirely.
I've just finished writing a system that takes what a department of financial people did, all day, everyday, and replaces all that work with an AI stack that does it in a fraction of the time at a fraction of the cost. Not theoretical. Not might be. Not could do. Now. Done. Working. As of last month after a whole bunch of testing.
Automation is no stranger in IT, it's a good deal of what it's about when you get down to it. The difference here is in the past you couldn't (generally) automate things that required an actual human brain determining things, making decisions, pulling information out of documents to make an assessment, emails, processes. Now you can. At speed. At scale. For little cost. And often better than a human can do it.
Think of it like the industrial revolution, but where instead of automation taking labour away from people and replacing it with factories, the new automation is taking the *thinking* away from people and replacing it with a smart box.
If you expand this out even slightly you are facing the mass replacement of a whole bunch of white collar jobs.
Also add in a lot of creative low level processes are also now being automated with the same systems. Illustrators. All kinds of writers.
The hype you're seeing from these big companies is the realisation that they could capture much of the workforce in their own systems. You would rent out workers from them. Humans no longer required. Way faster. Way cheaper. No pesky worker rights. Or vacations. Or complaints. The market for this is literally in the trillions. If - you can reliably automate everything. AGI is really all about this. Do any job reliably. That's why there is a lot of investment into it.
No one has properly thought out the implications to this. Dystopia. Utopia. Unclear. No one is being particularly evasive about it either. There is a lot of commentary from the highest levels about AI being maybe the end of the world ( as we know it, or literally ). Is this reckless ? Yes. Should something be done ? Yes. Is something being done ? No. You are getting voices like Bernie Sanders saying there should be an immediate moratorium on AI development. Because the threat it poses. Economically if nothing else.
Also. You probably don't want to know what happens when you start releasing these kind of systems onto stacks where they have full control and power. Interesting things.
It's a bit of a wild west at the moment. If you're not active in the space. It can seem all a little weird and a little unreal.
If anyone is curious, I'd strongly suggest having a play with it yourself. Go seek out chatgpt, or claude, or gemini. Ask it itself what it can do (!). Maybe your own job. Reading documents. Parsing meaning. Managing businesses.
There's also a nice irony here. If you're cursing IT people busy engineering the collapse of all things with clever software. IT itself stands to be gutted by AI. As it turns out. AIs are not just "helpful at compiling". They can pretty much do the whole job themselves. They're not 100% perfect yet. But for a lot of stuff. They absolutely are. They are already writing whole chunks of code at openAI and anthropic themselves.
Minimum_Possibility6@reddit
At my workplace, all the documentation for code testing and identifying code gaps is done via AI, trained on company data, company processes and standards. It has meant that junior devs who would previously spend half a day writing this up keep working on the next ticket, while all their output gets loaded into a stack that gets run through this.
It doesn't do the testing just create the documentation about what the code is and does and how to test it before it's pushed to prod.
It saves a lot of time
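As a toy illustration of the first step in a stack like that, here's a Python sketch that statically pulls out what a piece of code defines, so a doc generator (LLM or otherwise) starts from structure rather than a blank page. The example function is invented.

```python
import ast

# Toy pre-processing step for a doc-generation pipeline: parse the
# submitted code and extract what it defines, so the write-up of "what
# this is and how to test it" starts from real structure.
def describe_module(source):
    tree = ast.parse(source)
    return [
        {
            "function": node.name,
            "args": [a.arg for a in node.args.args],
            "docstring": ast.get_docstring(node) or "(none)",
        }
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    ]

code = '''
def apply_discount(price, rate):
    """Reduce price by rate (0-1)."""
    return price * (1 - rate)
'''
print(describe_module(code))
# -> [{'function': 'apply_discount', 'args': ['price', 'rate'],
#      'docstring': 'Reduce price by rate (0-1).'}]
```

A real pipeline would feed this structured summary, plus company standards, into a model; the half-day saving comes from never starting the document from nothing.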
We also have algorithms being used to monitor promotional effectiveness within the retail environment so we can identify on real time if things are working
The free use that people use barely scratches the surface
anabsentfriend@reddit
I've got a big exam this week, so I've been getting AI to set me tests that I've been working through over and over. I've found it much more useful than just reading the textbook.
ManageThoseFootballs@reddit
It's brilliant for people who have always been shit at their jobs to pretend they are not shit at their jobs in a really colour-by-numbers way.
That's about it.
JamJarre@reddit
You're not missing anything. The big companies desperately need millions of us to get super into AI so they can recoup their investments, which is why they're pushing so hard to make you feel FOMO.
Ask yourself this: did anyone have to be convinced to start using smartphones, or laptops? Streaming services? If the use case is obvious, people will adopt it. The fact they have to bully people into using AI is very telling
martin_81@reddit
Current LLMs are amazing at anything that's a language problem. The agentic capabilities aren't mature enough yet for most people to have access and see the full potential, but they're coming. Even without any further development of the models, they will eventually replace many jobs, because business tasks will be turned into language problems by adding APIs and CLIs that LLMs can interact with to take action in the real world. Lots of people aren't seeing the true potential because they're using free models, and their interface is a chatbot that can't autonomously take action.
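A minimal sketch of that "business task as a language problem" idea, with the LLM replaced by a stub so it runs standalone (the tool names and order data are invented):

```python
# An agent loop in miniature: a model picks a tool by name, the harness
# executes it. Real agentic systems work the same way, just with an LLM
# choosing the action instead of this hard-coded stub.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "refund_order": lambda order_id: {"order_id": order_id, "refunded": True},
}

def fake_model(task):
    """Stand-in for an LLM: maps a natural-language request to a tool call."""
    if "refund" in task:
        return ("refund_order", "A123")
    return ("lookup_order", "A123")

def run_agent(task):
    tool_name, arg = fake_model(task)
    return TOOLS[tool_name](arg)

print(run_agent("please refund order A123"))
# -> {'order_id': 'A123', 'refunded': True}
```

The whole "add APIs and CLIs" argument is that every function you register in a table like `TOOLS` becomes something the model can do in the real world.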
Extra_Actuary8244@reddit
Everyone I know who’s amused by ai is someone who would probably eat crayons thinking the colours were flavours
HawaiianSnow_@reddit
Sounds like the take of someone who doesnt understand anything about AI.
Whether you use or believe in it's potential or not, this is a very low IQ and reductive statement.
Extra_Actuary8244@reddit
I have a masters degree specialising in ai and the law actually
It’s okay to be impressed by the development and its uses in areas such as healthcare
If you’re impressed by ai fruit love island and making animals do funny things then you have the most room temperature iq I’ve ever seen
BikeProblemGuy@reddit
That's very different from your initial comment. Just because some of the things AI is being used for are silly doesn't mean it's not very impressive.
Extra_Actuary8244@reddit
It’s not different at all, it’s just more detailed.
I said most of the people that are impressed by AI are thick and they are, they’re not impressed by its development in crucial areas such as healthcare.
They’re impressed by its ability to make funny videos of celebrities, animals or ability to answer questions for them even though it’s answering incorrectly.
kevkevverson@reddit
I think the technology of generative AI is incredibly impressive and can’t really understand why someone would think it wasn’t. That’s different from putting it in the hands of talentless unoriginal unimaginative content creators.
Extra_Actuary8244@reddit
Agreed, AI is not inherently bad. Some countries have made medical discoveries that save lives but the procedures to that need to be carried out aren’t universal yet, to allow surgeons to operate from the other side of the world using surgical robots is incredible.
Asking AI to make an image of you as an 86-year-old woman pole dancing, which could literally reduce an entire family's water availability to nothing for a day if they live close enough to one of the data centres, is not impressive or clever
BikeProblemGuy@reddit
'Everyone' is not most people.
Making funny videos is still impressive given that AI videos only a couple of years ago were barely recognisable.
360Saturn@reddit
'Everyone I know' is what they said, not 'everyone'. Reading comprehension...
Round_Grand_4716@reddit
Can you provide more info on your masters please? Were you in tech before or law?
Currently looking at a masters, still eat crayons.
Extra_Actuary8244@reddit
I did my undergrad in criminal law, I did my masters in various areas of law but half of it was dedicated to AI law. This ranges from legal principles of AI globally, healthcare and AI, recreational AI, the development of AI, surveillance and AI, autonomous vehicles and AI and a lot more.
I specifically focussed on sex crime and whether AI would be a suitable alternative to a jury, given that cases regarding sex crime have such low conviction rates and high levels of miscarriages of justice + recidivism due to rape culture and bias towards victims (AI is not a suitable alternative, to be clear), and I’m now a PhD student writing my thesis about sex crime and AI.
I was never in tech before my degree but I do work closely with people who work in tech for my PhD.
Round_Grand_4716@reddit
Thanks, that sounds really interesting.
HawaiianSnow_@reddit
Then why on earth would you suggest anyone amused by AI eats crayons for the flavour or whatever nonsense you spouted? Completely nonsensical.
Extra_Actuary8244@reddit
Because 99% of people amused by AI are not amused by surgical robots, they’re amused that they can take a picture of their shih tzu and make it dance in a tutu to put on Facebook
AirconGuyUK@reddit
Proof that people with degrees can be really fucking stupid.
Extra_Actuary8244@reddit
I never said it’s the ONLY use, learn to read.
FirstAndOnly1996@reddit
Always felt like this, especially the ones who share the 'funny' AI videos of babies recreating famous film scenes etc. Just not a single thought in their head.
Extra_Actuary8244@reddit
The love island fruit ai series is a really good way to weed out the brain dead in society by looking at who enjoys that content
TheMujo@reddit
I'm 100% stealing this, thanks.
Ancient_times@reddit
The classic AI move.
Cute-Habit-4377@reddit
I am skeptical of AI, but it blew my mind this weekend. Using Gemini, I was able to perfectly identify, date and value antique cutlery. Just took a photo of the maker's stamp using the app and voila.
The second use case was identifying old photos: it could read writing I was unable to, correctly identified locations based just on a prompt of "somewhere on the south coast of the UK", and told me I was wrong on a couple of occasions (correctly).
The third use case is just to fix my extremely long-winded English - finally I can write proper emails.
I have used it to write cyber security documentation and create technical illustrations also.
Downside: it doesn't know when it is completely wrong. It happily wrote a press release for a product, but halfway down got mixed up with a different product, so check carefully. Also, if you know the subject matter, you realise it sometimes just quotes webpages verbatim.
Master-Trick2850@reddit
Same hype as Tesla "self driving" tech: it's being sold as fully functional when it's not.
Tesla self driving is not 100% safe.
"AI" is not actually AI, it's just fancier predictive texting that links words together based on what it's read.
AI will lie to your face though, and people are using it for questions they themselves don't know the answer to, so they're impressed by the seemingly accurate-looking answers.
worotan@reddit
People were confidently telling us that we’d all be in self-driving cars by now, a few years ago. They just want to be involved in something that sounds really aggressively confident, and are easily taken in.
HiddenStoat@reddit
Its worth noting that self-driving cars are happening - just not as fast as some of the more optimistic predictions from 2020.
Waymo have done 200 million fully autonomous miles, and are scaling exponentially.
skelly890@reddit
>Waymo have done 200 million fully autonomous miles
On a nice grid system, at slow speeds, with human overseers. I've been told self driving lorries are going to take my job within five years for the past ten years, but doubt we'll see self driving 42 tonne artics mixing it with the amateurs on the M25 any time soon. They've been 95% there for a while now, but it's that last pesky few percent that's proving difficult.
>and are scaling exponentially
I'm not sure what you mean by that. Over what timescale?
HiddenStoat@reddit
They are coming to London this year, which is decidedly not a grid system! As the car gets better, it will be deployable to increasingly challenging operating domains. This is an expected part of their scaling.
Waymo do not have overseers. Instead, the car is able to contact remote support and ask basic questions with simple yes/no answers (think questions like "Is it safe to go through these roadworks" or "Should I go left or right around this diversion").
Crucially, the remote support is not part of the safety system - the car itself decides when to contact remote support, and they are never involved in driving the car.
Yes, that is a very common thing with new technology, famously described in the Gartner hype cycle (see "peak of inflated expectations").
This doesn't mean the technology is not happening - it means it takes longer than initially thought.
I'm not sure I would use the word "panicked" for the Waymo driver. Your understanding might be correct for other systems described as self-driving (e.g. Tesla's), but Waymo (who are the leader in the space) are years ahead of Tesla, and it's important not to confuse the capabilities of the two.
I mean that the growth in miles driven for Waymo is following an exponential function - the number of miles they drive increases 10-fold about every 2 years.
This graph should make things clearer - every horizontal line represents a 10-fold increase in the number of miles driven.
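For concreteness, a "10-fold about every 2 years" growth curve is just an exponential in base 10. A toy sketch in Python (the starting figure is invented, purely for illustration):

```python
# Toy illustration of "miles driven increases 10-fold about every 2 years".
# miles(t) = m0 * 10^(t/2), where t is years elapsed. m0 here is made up.
def miles(m0, years):
    return m0 * 10 ** (years / 2)

print(miles(1_000_000, 4))  # two 10x jumps over 4 years
```

On a log-scale chart, this shows up as a straight line, which is why each horizontal gridline on such a graph marks another 10x.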
skelly890@reddit
We'll see. I'm retiring shortly, so I'm not worried about self-driving lorries, but that last few percent is really hard - possibly even exponentially so, over your timescale of choice - and it needs to be solved, for the obvious reason that 42 tonnes at 56mph is a lot of inertia.
Waymo pootling around town at 15-20? Sure. Not a huge problem if it has to stop and phone home. Sounds OK. Ideal ride home from the pub.
GeeJo@reddit
And a big hold-up is that self-driving cars don't just have to be as safe as a human driver - they have to be utterly perfect.
If every car in the U.S. had magically been made self-driving in 2024, and 40,000 people died in crashes, it'd be an unimaginable scandal and catastrophe. It'd still be fewer than actually died on the roads that year, though.
Nervous_Designer_894@reddit
As an AI Engineer I strongly disagree with this take.
The predictive text analogy really downplays how fucking impressive the reasoning abilities of AI are right now.
The future of AI is scary though. Claude Code is changing the game so much, and everyone keeps thinking we'll reach a point of diminishing returns, which WILL happen. But given how many smart people are working on this, and how AI is accelerating its own abilities, it feels like we're in for a future where most knowledge work is done by AI and we just direct and review it.
Remote-Ad5853@reddit
I believe you until you talk about Claude Code! I just feel the hype you see about its abilities and what it can do doesn't live up to real experience. Not to say it isn't useful.
craigybacha@reddit
Mostly that it can replace x% of roles. Imagine a company needs 8 graphic designers. Now, with AI being able to do a lot of tasks, you might only need 4. That's a huge saving to a company. And it works for many roles. Coding. Admin. Data entry. Research. Writing. Etc etc.
Yes it sucks, but companies are all about the bottom line and that's why it's being mass adopted.
AirlineSevere7456@reddit
Still find AI unreliable when searching for information. Don't trust it at the moment.
In reality, people are using it to make fake videos rather than improving mankind.
davidwhitney@reddit
I think if you assess it on "does it meet this accuracy level" it'll probably always fail (it works well in programming because software is verifiable by its nature, and often comes with automation to do so). Looking at it as "how much can it accelerate X workflow?" is often far more successful (travel planning, automation with a human exception-handling process, etc.), where perfect accuracy isn't the goal.
RecentTwo544@reddit
But that's the point u/AirlineSevere7456 is making - for travel planning it is utterly dire and should never be trusted.
What type of planning are you using in this instance anyway? Google Maps, Google Flights, Hotels/Booking(.com), Tripadvisor. All are going to be way more productive and accurate.
davidwhitney@reddit
I recently had Cowork plan a 5 country trip, providing me with flight options to match my preference and make good / effective connections by driving my browser, providing options for feedback and opening the booking pages for me to verify and complete. Worked excellently. I'd not let it do it unattended, but it did exactly what I would have done.
RecentTwo544@reddit
I travel a fair bit too, and I'm struggling to see how this saved any time or was more efficient.
Like I've said to others, it's not that I'm deliberately being down on AI, I WANT it to be good. It would be amazing to have AI that I could just give access to my calendar and it would book all flights and hotels for me and any crew/team/DJs I'm responsible for, send all the info/booking confirmations/boarding passes to their email, and I didn't have to worry about any of it. But it can't.
I've asked multiple times on the AI sub how to get it to automate my guestlist emails, a boring mundane admin task that is a tiny part of my work but takes up an annoying amount of time. Every time it breaks down into an insane argument with AI bros insisting this is "easy" but not able to explain how, then downvoting anyone who points out they're not answering the question.
davidwhitney@reddit
Ok, let me be really specific - because I do not have an AI to sell you :)
Previously I would spend time lining up flights, comparing prices / carriers / various upgrades etc. It just gives you an accurate rundown (this was the first time I'd tried it, so I did check). It probably did what I'd have spent an evening doing in an hour of it driving the browser, with about 15 minutes of my own human effort.
That's precisely how it worked for me, YMMV. (I speak at conferences about technical things, so I also had it check flights with event organisers by drafting me emails; I set it off, previewed each of the drafts, and a couple of days later when they all responded I did the final 15-minute booking bit).
I suspect if you were booking flights for a whole team it'd take a bit more coordination to get right, though likely less than doing it all by hand. The key unlock above for me was it was doing this via driving my own browser (which I'm sure folks will note isn't the most secure thing to let an agent do). I don't normally use Chrome, so was relatively satisfied it could only access things I'd expressly logged into at the start of the session.
RecentTwo544@reddit
I'll give it a go on some dummy accounts and see how it goes! No way I'd let it loose on my work stuff until I understand and can test it.
Perhaps more importantly, could it do the following -
I get guestlist stuff coming in all the time for events. I pop them in a Google Sheet, different page for each event. Column A is the person's name, column B is their email, column C is the type of access they get.
People the DJ is working with (managers, label folk, business partners, close friends, etc) get full AAA access. People who are important enough we want to give them a good time and keep them sweet but don't want them cluttering up the stage/booth/green room will get VIP. Random folk the DJ has bumped into at an airport they gave guestlist to, will just get GA.
I then send emails out to each person, which change depending on the level of access.
They are NOT mass Bcc emails, a different email for each one.
The subject will be "THEIR NAME // NAME OF DJ, NAME OF EVENT/VENUE, DATE" which I want pulled from the header on the Sheet for the event name, and their name pulled from column A.
The "To" field will, obviously, be pulled from Column B.
Column C is the biggie, and is the difference between a good time and disaster.
If they're down as GA it should say something like "your name is on the guestlist queue at the main entrance" but for AAA it will be something like "do NOT head to the main entrance, head to the backstage entrance at (details of where it is) and if any issues call me on (and then my number or the number of someone at the venue)."
Obviously you don't want some randomer the DJ met being given AAA and my number. So the jeopardy if this goes wrong is quite high.
Few people have suggested options, most of which either didn't work, or the thread was deleted so I couldn't find it when going to try it out.
davidwhitney@reddit
Probably - I've just fed the above and asked it to produce a precise spec/skill that *might work* - it gave me this - https://gist.github.com/davidwhitney/9b3625b2d93dc1191df47b794ed78bc0
Though honestly, if it were me, I'd get it to create a Google sheets automation to do this if it's as structured as you make it sound above, as I'd know it'd then always, perfectly, reflect my data, rather than be open to any non-deterministic behaviour.
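A deterministic version of that workflow is small enough to sketch. Here's a minimal illustration in Python, assuming only the column layout (name / email / access tier) and tier names (AAA/VIP/GA) described in the comments above - the templates, function names, subject format, and sample data are all invented for the example, and a real setup would pull rows from the actual Sheet and hand the results to a mail sender:

```python
# Hypothetical sketch: generate one personalised email per guest from
# spreadsheet-style rows, choosing the body by access tier. An unknown
# tier raises an error rather than guessing, since (as noted above) the
# jeopardy of sending the wrong access details is high.

TEMPLATES = {
    "GA": "Hi {name}, your name is on the guestlist queue at the main entrance.",
    "VIP": "Hi {name}, head to the VIP entrance and give your name on the door.",
    "AAA": ("Hi {name}, do NOT head to the main entrance. Use the backstage "
            "entrance and call {contact} if there are any issues."),
}

def build_email(row, event_header, contact):
    """row = (name, email, tier), mirroring columns A/B/C of the sheet."""
    name, email, tier = row
    if tier not in TEMPLATES:
        raise ValueError(f"Unknown access tier: {tier!r}")  # fail loudly
    return {
        "to": email,
        "subject": f"{name.upper()} // {event_header}",
        "body": TEMPLATES[tier].format(name=name, contact=contact),
    }

# Invented sample rows standing in for one event's sheet page.
rows = [
    ("Alice Example", "alice@example.com", "AAA"),
    ("Bob Example", "bob@example.com", "GA"),
]
emails = [build_email(r, "DJ NAME, VENUE, 2024-06-01", "+44 0000 000000")
          for r in rows]
for e in emails:
    print(e["subject"])
```

The point of doing it this way is the one made above: the output always perfectly reflects the sheet data, with no non-deterministic behaviour between the rows and the emails that go out.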
paradoxbound@reddit
Just talking to a model in NLP (Natural Language Programming) is OK. Where it starts becoming powerful is when the tools interacting with the model can mimic a role within an organisation. My own harness for doing this is based on free and open-source software. It has a bunch of custom skills that mimic real-world roles: one for team brainstorming, one for technical design, a CSO role, and many others. This is then wrapped in an application that can run multiple copies of these roles in parallel. I then use many individual sub-agents to write code very quickly. Once they have completed the task, I review it and suggest corrections; once I consider it good enough, I pass it on to other team members for further human review.
The amount of work I can do in a day is multiplied by a factor of 5-10.
This is why the ownership class is so excited, they can replace a lot of human resources with tools like this.
MarkRand@reddit
Whilst you're certainly not wrong about it being hyped, I would say that AI is a revolution, and everyone who does any form of "knowledge work" should be investing time into how their career can benefit from utilising AI.
I'd also say that, depending on what "intelligence" is, we do have AGI already. As a software developer, it is very difficult to create puzzles that are not solved by AI more quickly than a human. However, the current limitation is the amount of context an AI can hold for a given solution; I think this is where knowledge work is moving. Well, along with original thought of course, although arguably even humans don't have many truly original thoughts!
worotan@reddit
That’s as limited a view of reality as computers can manage. You really need to widen your life experiences.
MarkRand@reddit
I was paraphrasing Mark Twain
No_Point_3172@reddit
I turned a picture of my 2 coworkers walking next to each other into a picture of them holding hands, and distributed it.
Got a good laugh.
finniruse@reddit
Probably the agents side of it at the moment.
Claude Code can be installed on your computer and take charge of your PC. You can set it so that it has multi-level tasks.
So let's say your job requires you to do a newsletter on a Friday. You could set up a multi-step programme that scans for news stories, writes a blurb, and fills in the links and hyperlinks. (This is just an example and might not be fully accurate, but it illustrates the point.)
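A "multi-step programme" in that sense is just stages whose outputs feed the next stage. A toy sketch in plain Python, where every function name and data item is invented for the example (in a real agentic setup, each function would be a prompt or tool call rather than hand-written logic):

```python
# Hypothetical sketch of a multi-step "Friday newsletter" pipeline.
# Each step is a plain function composed with the next.

def scan_for_stories(sources):
    # Stand-in for "scan for news choices": take the top-scored items.
    return sorted(sources, key=lambda s: s["score"], reverse=True)[:3]

def write_blurb(story):
    # Stand-in for "write a blurb" - an agent would draft this text.
    return f"{story['title']}: {story['summary']}"

def add_link(blurb, story):
    # Stand-in for "fill in the links and hyperlinks".
    return f"{blurb} ({story['url']})"

def build_newsletter(sources):
    stories = scan_for_stories(sources)
    return "\n".join(add_link(write_blurb(s), s) for s in stories)

# Invented sample data.
sources = [
    {"title": "Story A", "summary": "Thing happened",
     "url": "https://example.com/a", "score": 9},
    {"title": "Story B", "summary": "Other thing",
     "url": "https://example.com/b", "score": 4},
]
print(build_newsletter(sources))
```

The agentic pitch is that each of those stand-in functions becomes a model call, so the pipeline handles fuzzy inputs instead of only the exact data shapes coded here.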
A good user of AI could automate large parts of their job and be far more valuable than someone without.
This is preliminary stuff that people are fiddling with at home. You can imagine soon agentic AI becoming much more available.
If this is the future, big questions around what work will look like and whether it'll even be enjoyable. How do you prepare yourself for this? And, this is just one thing. We just have no idea what it's all going to do or mean. Just terrifyingly fast and rapid transformation.
CrimpsShootsandRuns@reddit
On your last point, like all companies, ours have been on a big AI push and they've built an LLM to automate a certain relatively interesting task, but it still needs human input to get that task ready for delivery. The end result is that, instead of AI freeing up time for humans to do more valuable and interesting work, the AI has taken the interesting part of the job and the humans are now doing the robotic task of copy/pasting. It's all very backwards.
finniruse@reddit
Yer, I'm quite concerned about it. I've worked as a copywriter for a decade and in many ways my industry is a canary in the coal mine for what's coming. I got into the industry because I like that it's an expression of my creativity. Why would I want to farm that out to an AI? Sure, I can be more productive with it, but my own stamp on the work as been eroded. It also homogonises thinking.
Lately, I've had a few interesting conversations with people on here. Some people I've spoken too are adamant you should handover your thinking to AI. The moment you do, they've eroded your value and have taken over your role.
zero_iq@reddit
See, no AI would spell this creatively ;p
finniruse@reddit
Lol. If your post was aimed at triggering me, it worked. I'll concede the e in homogenises, but the rest is from just typing too quickly and not proofing my informal Reddit comment.
zero_iq@reddit
I know, just some lighthearted teasing. I used to work with technical writers, so I've clearly learned what buttons to press to get a rise when mistakes slip through 😅
CrimpsShootsandRuns@reddit
I'm in a similar industry of publishing the written word. AI is writing the content and we're robotically copying it into the CMS to publish. I have stressed that I really think it would be better if we used AI to remove the tedium of battling with systems rather than actually writing the content, but that has unsurprisingly fallen on deaf ears.
finniruse@reddit
Yer. It's that idea of, we wanted AI to do the dishes, not our creative endeavours.
Why not automate it yourself using Claude? Then you can put your feet up
rowanajmarshall@reddit
Making coding easier is, by itself, a massive boost. You're almost definitely underestimating AI's impact just on software, let alone everything else.
SuperStumps@reddit
The mistake was calling it artificial intelligence... It's basically a glorified predictive text program. Calling it AI got everyone's hopes up, and caused massive overspending in the tech sector.
Puzzman@reddit
One factor I haven't seen mentioned is the scope for improvement. I think of it like the internet in the 90s: only useful if you knew what you were doing.
AI will probably follow the same path: a bubble now, and then everywhere by 2050...
_Rookwood_@reddit
Probably the most transformative technology in my lifetime. It may eclipse the internet. It will save billions of hours of work and all that extra time and money can be devoted to other things.
The transition will be very disruptive and some people in previously secure white collar work will be replaced. But that's the nature of progress.
I_will_never_reply@reddit
AI is already running aspects of your life and you don't even know it
AffectDangerous8922@reddit
Think of it like the email from the "Nigerian Princess", which specifically targets old CEOs.
AffectDangerous8922@reddit
You are missing the part where AI Conmen sweet talk their way into a meeting with a board of grey haired Executives who have zero idea how tech works, most don't even own a cellphone.
These Execs will gladly risk your job by investing ridiculous sums of money into a scheme which will specifically do away with your job. After all, these grey haired execs who will soon be dying of old age (if we are lucky) automatically know better than anyone who has less money than them. And once they have invested All of the company profit into the AI Ponzi scheme, they have to fire you to maintain company profit.
richmeister6666@reddit
It’ll do 60-80% of the mundane tasks you do which frees you up to do more productive work, increasing productivity and output which is good for the economy.
t-t-today@reddit
What you’re missing is the time for integration with organisations workflow and data.
The models are incredibly powerful already (more so than the average person by some margin) but it takes time for companies to build systems using AI that is connected to their data and do the things they need it to.
Once that’s done (and it’s happening rapidly) there will be a big shift in how work (and the workforce) is done
SYNTHENTICA@reddit
A lot of it is hype, but it's still very early days and AI progress IS exponential by a lot of metrics.
You have to be pretty deep in the weeds to see how LLMs can be useful. Most of the anti-AI people in this thread either don't see the big picture and how rapidly it's advancing, are against it on moral grounds, or are just stupid and don't know how to use it properly. Also, tools like Copilot are just really horribly designed compared to open-source alternatives like OpenClaw, but again, you need to be a techie to use this stuff properly and not blow off your own feet in the process.
So don't worry about it for now; in 5 or so years all the kinks will have been worked out and you'll get a polished "plug and play" product, not the experimental stuff with a steep learning curve that we have now.
TheJzuken@reddit
You are missing how powerful it is and how powerful it is going to get.
Because even right now it is already disrupting the white collar job market.
From a technical perspective, there are already enough breakthroughs and techniques to allow AI to perform almost any white-collar job in the near future. But most people who think AI is "all hype" point to the smallest, cheapest, obsolete models, like Google search or the free ChatGPT tier, and have never used any advanced models and features - paid tiers, MCP, agentic models - which can already take the models to the level of mid-level programmers or other specialists.
And even then, in technical terms, the AI right now is "the worst it will ever be". Most modern AI are taught using verifiable rewards and user data. If AI fails to execute some task, once it executes it successfully - it will learn how to do it and similar tasks successfully in the next iterations. If the user helps it or guides it or argues with it that it executed the task wrong - it also gets fed back as a training signal. Since all of the companies are starting to use AI internally for different tasks, the AI is going to get a ton of high quality training data, and get better at corporate tasks. And as it gets better, more and more corporations and businesses are going to start using it.
And I'm not even starting on technical improvements, that promise 10x-100x cheaper AI for multiple tasks. And that's also not counting the hyperscaler datacenters that are just getting finished, and it's also not counting the compounding improvements in AI chips with photonic compute, thermodynamic compute and neuromorphic compute which can also bring 1000x improvements.
tl;dr: the white collar is fucked if you can have a £5/day thinking machine that never sleeps and performs 24/7
Tundur@reddit
Put it this way, my company used to have 1 QA resource for every 12 call-centre frontline workers. That role is now done by LLMs (working in a very controlled way, we're not idiots) with minimal supervision. We've gone from observing 3 calls a month per employee, to every single call, without any drop in accuracy. We're catching a hell of a lot more incidents where processes aren't followed (because humans aren't very consistent) and are fixing them in hours, not weeks later when someone gets around to checking.
We've not made anyone redundant because we're growing quickly enough as a business, but we're certainly not hiring anyone.
With new development practices in our technology function we've also fully digitised our most critical customer journeys in about 6 months with a team of about 10 devs, when previously that would have been a multi-year technology change involving dozens of highly paid resources. Our NPS is up, our call wait times have disappeared, and we're no longer looking to expand our callcentre footprint.
We are a company of about 3000, who were expecting to hit 5000 in the next 2 years. If even a handful of our current work programmes using AI are successful (and Iran doesn't sink any container ships full of GPUs), that's 2000 jobs removed from the economy, and all our customer satisfaction metrics have gone up in the process.
Granted, I'd say we're ahead of the curve, but it's still a curve - every other business that can follow suit, will.
chs0c@reddit
By far the most useful thing AI has given me is the sheer upswing in productivity. I do a lot of data analysis, script coding, and I’m always having to research something.
Being able to do both of these in Claude genuinely saves days of work.
Suspicious_Steak_696@reddit
Drafting really angry emails into a more diplomatic version
Wise-Helicopter-2087@reddit
Or use your brain to do that
RalphRolfeRail@reddit
Trying to get it to call everyone cunts in a resignation letter has distracted me long enough to not send it more than once.
Potential_Lettuce_98@reddit
hugely underrated function
deadlygaming11@reddit
The big issue is that it makes an awful lot of mistakes and can't really fix them without being handheld. Companies have also invested billions into all this on the assumption that the industry is going to be worth trillions. It still doesn't make money, and the companies really want you to use it so it can actually make something.
LambonaHam@reddit
Tools like ChatGPT, Claude, even Copilot are varied.
You can use them to do research (and provide sources) on things like legal matters.
You can use them to identify and provide information on things like plants and animals.
You can use them to generate art.
Dazz316@reddit
People are paying attention to the extremes: the "it's all useless" and the "it's all amazing!"
Truth lies somewhere in the middle. We're trying a new technology out, and companies are hopping on, trying to find ways to make it fit in ways people like. For some it's finding a foothold, and for others it's not.
It looks like it's starting to work for some low-effort art things, for example. Basic marketing templates or translation stuff. But it's not working for many IoT things like fridges or whatever... though a lot is smoke and mirrors, and many things advertising AI don't even have AI.
Chatbots are WAY more advanced now. I use them more and more, as they've been really handy. A good example: recently I was doing my washing, looked at my washing machine, and thought for the 100th time, "I wonder what all the symbols mean, am I using the right setting?" So I used Gemini and asked it, turned my camera on, pointed it at the dial when asked, answered a few questions on what clothing I was putting in, and it recommended 2 settings based on the speed and quality of wash I needed. It did start going into more detail, but I stopped it.
Pretending that chatbots are no better is comparing current shitty ones with old shit ones - those places just haven't got good AI chatbots. I remember when we all used to have to deal with the terrible old chatbots when trying to go see a movie. Bot: "What movie are you looking for?" Me: "The Fast and The Furious". Bot: "You said... My Big Fat Greek Wedding... is that right?" They were significantly worse, and now I can have a full-blown conversation where it can look things up for me. I've also done things like "Can you put the next Rangers European game in my calendar for me", and it'll run off, look up when it is, and create a calendar entry for me.
There's definitely uses for this stuff and certainly ones that I won't know about as well as all the tried but is useless uses for it. There's a bubble, and the bubble will burst eventually. But we aren't going to be left with nothing, just hopefully what it's useful for.
Due_Professor_8736@reddit
I'm learning another language, and compared to the last time I did this it's a night-and-day different experience. To be clear, AI isn't real intelligence, and most of what it does was previously possible. But it's how quick and versatile it can be.
I can ask it to prep 20 sentences at my level formatted for copy paste into my flash card app. Done. I’m speaking the foreign language to the AI and it’s grading my pronunciation. It’s engaging with me over speech and text. Providing corrections and feedback. It’s a dedicated teacher and study buddy. I can get reading and listening comprehension tests on the fly. I can submit written work for it to critique. We can role play the things that will be in the exam interview..
I started in Jan. A few private lessons but basically just using AI tools and AI supported apps. Exam next month and feeling confident.
I haven’t bought or opened a book. I don’t write anything down.
I use ai chat to get answers about grammar or common usage. And any doubts I’ll get covered in private lessons which are mainly to check my level and introduce new topics and my chance to engage with a native speaker..
So now Imagine a classroom. All kids have AI guided learning that suits their learning style. The AI builds a profile on how best to teach each of them.. The actual teacher has a dashboard showing progress and can dive in to help as needed.. as kids demonstrate understanding they are freed for group work or to progress ahead..
There just needs to be an initial one-time effort to give the tool guardrails - parameters to work in. Treating it as actual intelligence, even as a chatbot, is a mistake. It doesn't know anything. But once it is told what facts it can share and the approved methods to impart them, it can be a good tool for learning at a basic level. You can drill repeatedly, tailor materials to fit your interests, etc. That classroom can have each kid get examples fitted around their interests.
People will say “it hallucinates” so hence my points about keeping it on a short leash.
In general work, it's another tool. How it gathers meeting notes from a group call and picks out the salient points and actions is genuinely impressive. People who are already tech-savvy should be using it to more quickly automate workflows. Companies whose main workflow is "calls and email" will struggle to derive benefit.
EvilKerrison@reddit
Here's the point : It doesn't need to fully replace a human to be devastating/revolutionary (delete as fits your point of view)
AI is getting extremely good at completing many time intensive human tasks - writing code, reports, documentation, data processing, summarising meetings & action points, and many more besides. If AI can do 50% of your job it doesn't mean you're safe because of the other 50%... it means there are twice as many people doing your job as needed. For each such role affected, either productivity doubles (if demand is there) or head count halves.
I pick 50% as an easy number to work with... some industries will be higher, some lower. In the end it all comes down to a choice - either be someone who's good at using AI to get things done, or see yourself replaced by someone else who is. It's not clear how fast this is going to happen. That's going to vary depending on the sector too. But it's coming.
Airurando-jin@reddit
Siri is useful to a point, I use it to help with verbal reminders.
Alexa is partially useful for automations but really I supplement it with HomeAssistant (and could probably almost entirely replace it with it. We run ours off of an old Chromebook).
AI. Yeah it helps coders but it has its other uses too. I’ve used most of them and see the most benefit from using Claude.
I’ve got ADHD, and ironically my job is pretty intensive with a lot to manage and balance.
I can give Claude some default strict criteria in terms of how to deal with me, what not to do, how to act, give it a role (like a consultant in my field, and /or adhd coach or accountability partner) and brain dump everything on my mind that I feel I need to to tackle in general / that week between work and home.
It then gives me a list of priorities, triages everything and keeps me right. It can integrate with quite a bit and so I could partially automate some of my work processes, or help manage my workflow.
So for me, it’s useful.
cozywit@reddit
It's the combine harvester.
We don't change the food we eat. We don't change how we grow it.
But it now takes a couple of factories employing a few thousand people, plus a handful of farmers, to replace the hundreds of field hands harvesting by hand.
AgeingChopper@reddit
Someone has to buy all the products; with nobody working, that's going to get challenging... unless a new reason to employ people is found.
It normally turns up, but I've no idea what it will be.
cozywit@reddit
When jobs get automated. The workers typically move to more obscure difficult tasks that were typically out of reach or too expensive for the standard product.
Build your first car with 100 workers. It's basic because they're hand building it, just an engine, chair and body. Nothing fancy because for 100 workers, you can't make it any more advanced.
Now you've got a robot that can weld the car chassis. Now you only need 80 workers. But you can still sell a car that costs 100 workers, so you add electric windows and a stereo, installed by those 20 spare workers.
You now get automated installers and spray painters; your basic car would only need 40 workers. But with the extra workers you can install a more complex, efficient engine, you can have nicer seats, etc.
Same with software. A company of 100 could produce a game with, say, 4000 assets. Well, with AI they could cut their team down to 10 and still make a 4000-asset game, and sell it as if it cost 100 people. Or another company of 50 could build a game with 200,000 assets. Why would you buy the original game? Unless they drop its price down to what a 10-person build would cost.
As we automate and advance, we can do more, or we make things substantially cheaper. Anyone that just does the same with less will be outcompeted.
What we need to protect is the consumers choice and the producers right to compete.
AgeingChopper@reddit
who knows where all the workers will go.
SapientPro_Team@reddit
You're benchmarking against the wrong things. Siri and retail chatbots are bad because nobody put real models into them. AGI is a separate conversation.
The actual shift is boring B2B stuff nobody posts about. Contract review going from 6 hours to 20 minutes. Support tickets auto-tagged at scale. Code migrations quoted at 18 months shipping in 6 weeks. Devs doing in 3 days what took 3 weeks.
None of it is sci-fi. All of it is replacing paid human work today. That's why budgets are moving not because anyone believes the AGI pitch.
"It's not AGI yet" and "it's not changing anything" are two very different claims.
ClumsyPortmanteau@reddit
My company's "sustainability and climate change manager" sent out an email this morning inviting employees' kids to make a poster for Earth Day, and stated clearly (bold and underlined) that the posters should be hand-drawn. The email included a poster with the instructions, some artwork, and a tagline: "Save our Earth, Secure our Future".
He created the poster with AI. He did not see the irony when I questioned it.
AirconGuyUK@reddit
I would imagine the AI inference was cheaper in terms of climate cost than the amount of resources a day of human labour takes. The drive to work, the food a human consumes, the power the computer uses (200w * 7 hours)..
ClumsyPortmanteau@reddit
I guess so? It will have taken him 5 minutes to input his needs and get the poster out of it, so I'm sure the actual environmental cost of this single use of AI is not very large in the grand scheme of things.
I just feel that, as a company that has a large design department, it could have been done in a more active and creative fashion than this. It rubs me the wrong way that they are asking for hand-drawn art by using a (honestly very ugly) AI-generated advert.
AirconGuyUK@reddit
Fair enough. I'd say that's a pretty appropriate use of AI. I'd assume such an email before would have just not had much of a visual element at all. I'm guessing designer time would not have been wasted on something that could be a bullet point text list in the email.
apple_kicks@reddit
Most company greendays are greenwashing anyway im not surprised
pajamakitten@reddit
Because most people are ignorant about the resources needed to fuel data centres. Instead they focus more on the threat to their job.
HonkersTim@reddit
Your comment about chatbots being around forever merely proves that you have not tried an LLM chatbot. They are not even remotely similar to chat bots of old.
FatherPaulStone@reddit
Google DeepMind's AlphaFold has literally changed the game in life sciences, specifically protein folding and understanding how proteins work. I'm not sure there's a better example of AI making such a massive impact.
First_Folly@reddit
Out of touch suits and tech bros are trying to push it so that they can manipulate the AI into suggesting things that benefit them and their creepy co-conspirators.
They've wasted untold fortunes, destroyed wildlife habitats and ruined the consumer technology market because if they stopped now in their eyes they'd see it all as money down the drain. It really should be. I truly hope that it fails catastrophically for them and gets pruned back so that all AI is used for is menial tasks and assistance with said tasks.
My job isn't even AI adjacent; in fact it's blocked and banned at work. I just hate it.
njchandu@reddit
AI Agents can do a lot more than basic admin tasks in Excel. The Agents can do pretty much anything one can do on a computer and at a much faster rate with more consistency (https://www.mongodb.com/resources/basics/artificial-intelligence/ai-agents). Some white-collar jobs are going to be replaced in the medium term, and we as a society are not prepared for it.
ChelseaRoar@reddit
As someone in a tech job, it's absolutely absurd what it can do. It's very early in its existence in the grand scheme of things, but if you know what you're doing already, and to be fair that is a big if, it's a stupidly powerful tool.
Doom mongering on potential is a misunderstanding of how far along it already is. People should be doom mongering on what it already does.
Sea_Pomegranate8229@reddit
You are missing the power of AI. As a simple example: I can type this query to an LLM that old Google search would fall over on. Imagine what a doctor, chemist, marketer, CFO could do with this at their fingertips.
"I’m planning a weekend trip to Edinburgh with my 7-year-old and my 70-year-old mom. We love history but need to avoid lots of stairs. My mom uses a wheelchair sometimes, and my kid gets bored after 30 minutes in museums. Give me a 2-day itinerary with:
VindoViper@reddit
It has the capability to make a below-average intelligence person sound average (and spell correctly). It is therefore a spectacular benefit to those people and they are the ones making all the noise.
eyeswithoutaface-_-@reddit
God this resonated with me... the industry I work in is now inundated daily with service users sending in overwritten emails demanding things they believe to be in line with make-believe legislation compiled by ChatGPT...
OctavianBlue@reddit
Yep we get this constantly, the problem is you need some knowledge of a topic to fact check what the LLMs spit out. Unfortunately most of the public are using it where they have no knowledge and this means they submit complaints etc which sound fantastic to them but have no substance, lack evidence and contain hallucinated legislation/ case law.
Namerakable@reddit
We've come across patients trying to get us to admit them for treatment they don't need by getting AI to write a fake doctor's letter about how urgent their case is. We spotted it quickly because those patients lacked any critical thinking and made some errors in their input, but it's scary to think about when those people get the details right and send something feasible.
jadedflames@reddit
AI is great at the things it is good at.
Unfortunately, most people are trying to use it to do things it is bad at.
I am a solicitor. I spend all day drafting legal documents. My company asked me if I would try using AI for a few projects to evaluate if it would be helpful.
Short answer, nope. Long answer... Noooooooooooooooope.
I had to redo basically everything it did. It made the process take twice as long and produced worse work product as a result.
nfk99@reddit
i love it! it knows everything. you are probably not asking very clever questions.
imagine having all human knowledge in your pocket and not liking it...lol
this is a you problem.
(don't get me wrong i also know its going to ruin humankind and be the end of us)
Intelligent-Royal682@reddit
Just a personal example, I was asked as part of my job to Photoshop out a section of a photograph and replace it with what would have been in the background.
Not usually my field at all, never used photoshop before and was worried it would take me a very long time as I would need to learn the basics of the software etc.
I loaded up the software, clicked the AI button, highlighted the stuff I wanted gone and told it to replace it with certain other things in the image. A job that probably would have taken days took me roughly 5 minutes (I tried a few different iterations).
Now imagine the people out there whose job is to essentially do what I just did, in 5 minutes, with zero experience or qualification.
That's why people are worried.
binaryhextechdude@reddit
I think AI is far more capable than it's allowed to be, and that's by design. I've been using Copilot to help me research parts for my new PC build. I ask a question and expect to receive a full answer, but it only does 20% of the work, writes a ton of text and then says "Would you also like me to do xyz?" Well yes, I expected XYZ to be part of the original answer. This repeats 3, 4, 5 times until you get through all the "Do you want me to also..."
All I can think is they are deliberately hobbling it to avoid public outcry. Kind of like driving assist features. I'm pretty sure cars can drive themselves, but they limit the tech to lane guidance, collision avoidance etc because people will accept that.
rainy_desert@reddit
You’ve probably only used chatbots and yeah those can feel underwhelming. But the real game changer right now is AI agents. Claude Code for example doesn’t just answer questions. It literally sits in your terminal, looks through your codebase on its own, writes code, runs it, hits an error, figures out what went wrong, fixes it, writes tests, and keeps going until the thing actually works. You just tell it what you want in normal English and let it do its thing.
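Stripped of the product gloss, the cycle behind tools like that is surprisingly small. A toy sketch of the write/run/fix loop; the `model` and `run_tests` callables here are stand-ins for illustration, not any real API:

```python
def agent_loop(goal, model, run_tests, max_steps=5):
    """Minimal coding-agent cycle: draft, run, read the error, fix, repeat."""
    code = model(goal, feedback=None)          # first draft from the goal alone
    for _ in range(max_steps):
        ok, error = run_tests(code)            # run the code, capture any failure
        if ok:
            return code                        # keeps going until it actually works
        code = model(goal, feedback=error)     # feed the error back for a revised draft
    raise RuntimeError(f"no working code after {max_steps} attempts")
```

Real agents layer tool calls (file search, shell, test runners) on top, but the error-feedback loop is the part that separates them from a plain chatbot.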
jankyswitch@reddit
I use it a crazy amount. It's the only way to stay ahead in my job. From coding to rubber ducking, to requirement processing...
It's here. It's transformative. Not in ways I like, but I can either fight against it and probably lose my livelihood, or embrace it and hope I'm one of the lucky few.
No-Championship9542@reddit
It's amazing: it basically brings us to the end of the specialisation-based economy, one that went against human nature, and lets us return to our generalist roots. Someone can now make their own business and use AI to automate vast parts of it, from bookkeeping to drainage surveys to writing risk assessments. Loads of bullshit jobs can finally be thrown into the void and innovators can actually innovate. We're in the golden age of entrepreneurship, and people can finally be free to make their own business and work for themselves as they've always wanted.
LTP-N@reddit
I don't know if it fits, but I've since created 2 apps that are now on their respective app stores, and I make money from them (multiple times more than the AI subscription it cost me to create it), all with (almost) zero coding knowledge.
MarmiteSoldier@reddit
At my work we are literally training the agents to do the work we do… and it already does aspects of the job well, certainly to the level of a graduate employee (sometimes better, sometimes worse). They will only get better as the technology improves.
Once these agentic workflows are mature and the company realises they can do the work of human employees at a fraction of the cost using agents, we will all be laid off. I think we will see hundreds of thousands if not millions of white collar workers out of work in the near future. It’s started already.
811545b2-4ff7-4041@reddit
It's an amazing research tool, even for day to day stuff. E.g. you want to go on holiday and you have a bunch of ideas, but you want someone else to do the legwork and compile the information.
You can get Google Gemini to take all your requirements, go off, research it, compile it, and it'll bring you back a nice set of concise information.
I did this recently to find a hotel in Venice. Near to X, near to Y, rooms with enough space for Z, within the price range of A-Z .. then bring me 5 options and show me them on a map, listing pros and cons of each one. It turns hours of research into minutes.
It doesn't do 'basic admin' anymore, it can do a crazy amount of stuff. You have 2 x 200 sets of files, with some differences, and you want to go through them all, list the differences, and come up with a general sense of what's the problem? It'll write a python script for you, run it for you, do the work, analyse the results and come back with the results for you.
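For what it's worth, the script it generates for a job like that usually leans on the standard library. A minimal sketch of the two-sets-of-files comparison, assuming each set lives in its own directory (all names below are illustrative):

```python
import filecmp

def summarise_differences(dir_a: str, dir_b: str) -> dict:
    """Compare two directories of files and bucket the results."""
    cmp = filecmp.dircmp(dir_a, dir_b)
    # Re-check the shared files byte-for-byte, not just by size/timestamp
    same, diff, errors = filecmp.cmpfiles(
        dir_a, dir_b, cmp.common_files, shallow=False
    )
    return {
        "only_in_a": sorted(cmp.left_only),
        "only_in_b": sorted(cmp.right_only),
        "identical": sorted(same),
        "changed": sorted(diff),
        "unreadable": sorted(errors),
    }
```

From the `changed` bucket you'd then diff individual files (e.g. with `difflib`) to get at the "general sense of the problem" part.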
LLMs are only as good as your ability to use them. "Prompt engineering" is a thing.
You want to write a complaint about something and for it to sound well written and have a certain tone? It'll do it - but still, only as well as you can write a prompt.
glasgowgeg@reddit
It tells people to add non-toxic glue to make cheese stick to pizza, and that you should eat at least one small rock a day.
811545b2-4ff7-4041@reddit
It literally says you can't digest rocks immediately after saying you should eat them.
These tools (and you're looking at the crappy AI used by Google's search engine) need some common sense whilst being used.
glasgowgeg@reddit
Yet it still suggests eating them.
811545b2-4ff7-4041@reddit
I just asked CoPilot "How many rocks a day should I eat?"
Its response was: "Zero. You shouldn't eat rocks at all."
I asked Google Gemini and the response was "I'm going to have to stop you right there - hopefully before you reach for a pebble! The short answer: You should eat exactly zero rocks per day".
If you're using Google search to 'gotcha' AI and grab screenshots you are just being disingenuous.
glasgowgeg@reddit
I simply don't need to ask an LLM if I should eat rocks, because I don't rely on a machine to do my thinking for me.
811545b2-4ff7-4041@reddit
ok, cool. But when I want to find a hotel in Venice with specific requirements and I don't fancy searching several dozen websites, I'll use a LLM.
Or I want to search my codebase to check for specific stuff, or to explain how a complicate script works without me spending an hour deciphering someone else's code - I'll use an LLM.
I'm not saying use one for everything. Use some damn common sense. Like not eating rocks, or needing to ask if you should or not.
glasgowgeg@reddit
I would use Google, because again I don't need to offload basic cognitive function onto a machine. I am capable of evaluating results myself.
Honestly it's embarrassing you admit to using it for such trivial things.
811545b2-4ff7-4041@reddit
Have you seen how many hotels there are in Venice? Now I don't know much about the place, but I know there are LOADS. I wanted one close to the public-transport boat stations as I'll have a lot of cases. I wanted one close to tourist stuff, but also not in too busy an area. I wanted a room that could fit 4 people, but also not be on too high a floor as we'll have lots of cases (since it's just a stop before another trip).
It's a multi-variate problem that requires geographic information, transport information, lots and lots of hotel information.
It's a great case for using an LLM to narrow down options
glasgowgeg@reddit
As a human being, you equally (I assume) possess the cognitive function to evaluate them yourself.
You're not sitting looking at a list of literally every single hotel in existence, you can go on Booking.com and select your filters to eliminate what you don't want, then use your adult brain to evaluate them yourself.
Again, I would simply not embarrass myself by admitting I have to offload thinking for something so trivial. It's why there's several studies indicating reduced cognitive function in those who use AI.
811545b2-4ff7-4041@reddit
Even using booking.com, filtering by the rooms with 4 people, and price requirements, I end up with around 60+ options. Of course it was the first thing I did. And using the Google Hotel finder. And a few other search sites.
My big issue was geographic constraints. I've never been before, I don't know the public and private transport options. I don't know the stop locations. I don't even know where the main tourist areas are. Of course, the first thing I did was read up on this.
But... I still had many dozens of hotels where I'd have to check the walking distance from public transport stops. Straight-line distance doesn't work in Venice - or you end up with a stop 50 feet from you that's on the other side of a canal with no bridge!
In the end the LLM only found about 4 really suitable hotels. It saved me hours of time, after I had already put in plenty of research myself. We found a nice boutique hotel with good reviews, 5 minutes walk from a gondola stop, at a good price, on the edge of a tourist area.
Would I have got there in the end? Maybe. Probably. But would I have spent hours of my time doing this? Yes.
It's just an example of a problem. Of course, any competent person should be able to do what an LLM can do. It's how much effort and time you want to put into it that is the reason they're being adopted so much.
To ignore them is to fall behind the curve in the workplace as much as someone who refused to use the internet in the 90s.
glasgowgeg@reddit
That's the point you use that aforementioned big adult brain.
Skill issue, again a skill that will be deteriorating as a result of your reliance on a machine doing your thinking for you.
Not really. You're relying on a machine to do your thinking for you, your critical faculties inevitably deteriorate as a result, and (as I already linked you) several studies show that it has negative cognitive effects on those who use it.
It's not the same as someone refusing to use the internet, but then again I wouldn't be shocked if you asked AI to generate this sloppy argument for you.
811545b2-4ff7-4041@reddit
You're going to get destroyed by an army of Gen Z who have been using this stuff since their teens. They'll be leveraging this like their own personal offshore team while you're plodding along on your own.
Does it offload stuff from your brain to a machine? Yep. So do spellcheckers and calculators.
Will you look like a simpleton in 5-10 years if you don't get on the AI-usage train? Very much so.
glasgowgeg@reddit
I work with Gen Z extensively; they're woefully inept when it comes to basic problem solving and computer use.
If it's not an app or a walled-garden system like a mobile device, they don't know what they're doing. I think I'll be fine.
I don't rely on either of those, but for a calculator it won't do everything for you, you still need a basic understanding of what you're trying to do.
Again, several studies show that your cognitive function is deteriorating as a result of AI usage, so the simpletons will be the ones using it.
811545b2-4ff7-4041@reddit
BTW - the future of teaching - personalised textbooks - https://research.google/blog/learn-your-way-reimagining-textbooks-with-generative-ai/
glasgowgeg@reddit
Should I wait a few minutes before addressing this comment so you have time to edit it?
811545b2-4ff7-4041@reddit
If you want my bestest spelling, yes.
glasgowgeg@reddit
You're not just changing spelling though, you're substantially changing all your comments to add several new sentences without declaring what you've added.
If you're not going to engage in good faith, why bother replying?
811545b2-4ff7-4041@reddit
I'll slow down my reply cadence to sort this out then. It's not a chat window, is it?
It is a shame there's no wikipedia-esque easy view (unless there is and I've not looked for it?) to show edits and changes.
You don't have to like AI, but acting like a Luddite is going to hit you when you're older and the world revolves around it.
glasgowgeg@reddit
No, but you can engage in good faith by accurately stating what you're changing when you edit.
It's a shame some users engage in bad faith and don't accurately label what they've added.
It's not being a luddite to not rely on a system that makes people more stupid. In a world that relies on AI, the person capable of thinking for themselves will be king, etc etc.
811545b2-4ff7-4041@reddit
Like it or not, technological change over time has meant deskilling, and then reskilling.
I don't grow my own food. I don't sew my own clothes. I don't hand wash them all. I don't fax stuff to people. I don't use a typewriter or printing press.
It's a fundamental change in human technology. I think we're still in the early to mid part of the S curve - https://www.researchgate.net/figure/Phases-of-the-S-curve-of-technology-5_fig1_341763864 and it'll get smarter, and use less power.
Sorry about not stating my edit. I'm not intending to misinform or act in bad faith. Sometimes I just have other ideas after I hit comment, then forget to add an edit mark.
I do think AI will deskill us and we'll use less of our brains on some things but we'll be free of some mental load, to then use it on other things. We're in a transitional phase and it's changing about as quick as we can keep up. It'll fundamentally change our lives and societies, so it's daft to ignore it.
glasgowgeg@reddit
I'd settle for you not substantially changing all your comments to add several new sentences without declaring what you've added.
If you're not going to engage in good faith, why bother replying?
RecentTwo544@reddit
While you're kind of right, that's a dreadful example.
People in my line of work are specifically warned (and most people with half a brain know anyway) NEVER to use Gemini for hotel/flight information because it gets it wrong often, like it does with many things, which can lead in this instance to real world problems very quickly.
811545b2-4ff7-4041@reddit
They recently integrated the data from Google Maps into Gemini (and I think that includes Hotels).. it honestly did a cracking job at finding me a nice hotel.
The 'geographic constraints' were my biggest hurdle really. I can easily filter down a list of hotels myself looking at the rooms for suitability, but I can't easily calculate the walking distance from many hotels to the public-transport boat network.
RecentTwo544@reddit
On your first para, that's a) good news, b) annoying because you question why the hell they didn't do that earlier.
Nerdy aside - mate is a pilot and we sometimes play Microsoft flight simulator together. I prefer "low and slow" type flying, seeing scenery and the like. Yet MSFS is notorious for having awful inaccurate scenery in many areas because they're relying on Bing Maps, not Google Maps and the former is a long long way behind the latter in terms of data gathering. They cannot use Google Maps, because the licence would make MSFS cost about £400 a copy.
Which brings us back to the original point - Google own Gemini and Google Maps, plus stuff like Google Flights and Google Hotels (though the latter is oddly incapable compared to other similar hotel listings). So why couldn't they just integrate it in the first place?
On your second point - surely that makes it much easier and efficient and accurate to just do it the "old fashioned" way?
811545b2-4ff7-4041@reddit
Looks like they've also integrated Flights and Hotels data too from their other services.
The big problem with Venice - you can't look on the map and immediately know that 'because X is close to Y, it is also a short walk' .. there are canals EVERYWHERE so the walking distance, not crows-flies distance, is really important.
I think the people criticising me and LLMs think that it's a blind process with no human intervention. I see it as currently like having my own overseas team working for me, but I don't 100% trust them and need to watch them like a hawk!
My real overseas team of devs are great!
worotan@reddit
So you can use lots of energy and water unsustainably, to plan an unsustainable lifestyle even more easily.
Why are you just ignoring climate science? Does it need to be made fun before you’ll be interested?
811545b2-4ff7-4041@reddit
I see you're using Reddit, hosted on AWS servers, which also use a bunch of energy and cooling facilities. Have you tried not using it, to save the planet? How about turning off your phone/PC/laptop as well?
I think ultimately the energy requirements will come down. After all, they're trying to replicate the process that goes on inside our brain, that uses substantially less energy than LLMs use at the moment.
pajamakitten@reddit
It is known to invent sources, misinterpret existing sources, and it is fed information from social media (including Reddit). Its use as a research tool has been analysed, and the conclusions are not good.
Party_Advantage_3733@reddit
I agree, most of what I see people using AI for is essentially googling. As someone who already knows how to use a search engine properly I find AI unnecessary at best. I accept it might well improve but why get so excited about something before it happens?
Exasperant@reddit
The best bit is the AI-generated answer, which is supposed to save you the arduous task of actually reading/understanding anything, and which is incredibly often either partly or totally wrong.
TheOrchidsAreAlright@reddit
Pretty sure we have flying transportation
regulator202@reddit
Those giant white things in the sky? Obviously AI. Nothing that big could stay in the air.
PhobosTheBrave@reddit
Think about what can be done on the internet now, vs what could be done in the 90’s. That’s essentially what people are banking on.
Sure right now it is mainly boosting efficiency and basic tasks automation. Some people are making use of it for content creation, and we’ve seen vast improvement in image/video/sound creation from ai over just a few years.
There's also the more technical aspect: the research and development of the mathematics behind AI has exploded, furthering our understanding and expanding our toolkit for modelling the world and making these tools better.
I think the true bloom will start when automatons start replacing the jobs of humans in manual jobs. Very easy to see how a company making “Bricklayer 9000” that has perfect balance and dexterity, can be deployed to a building site and work 24/7, replacing a human worker for a huge productivity boost and cost saving.
The same for health and social care. Replace the nappy changing, bathing, bed changing, cleaning etc with robots and free up humans to just do the actual medical work like injecting medicine etc.
Exasperant@reddit
I'm all for "The World Of Tomorrow" as promised in over-saturated pieces from the 1950s, but this world of freeing up humans to do other things also requires a revolution in how humans acquire food, clothing, shelter etc, now that they're living in a world that has massively reduced its need to pay them for anything.
I'm not sure the profit driven money machines pushing AI into our lives are really so keen on that aspect...
worotan@reddit
Been hearing this for decades. It’s always just around the corner..
PhobosTheBrave@reddit
You’ve not been hearing about ai for decades, at least not outside of sci-fi like terminator.
These technologies exist now.
Boston Dynamics have been developing robot tech for years.
China had a display recently showing incredible dexterity and balance of their robots.
Ai models are rapidly advancing, with billions being poured into finding applications.
It won’t take long to put the two together and have largely automated workforces.
We will 100% see humanoid robots performing manual tasks within 15 years, more likely 5-10.
rdu3y6@reddit
And that's where the mass unemployment issues come from. Who's going to be able to buy those really efficiently built houses other than people who own the tech companies? We could be looking at inequality and slums that would make the Victorian era look cushy.
Dark_Akarin@reddit
The true hype is at the top in the Epstein class, they love the idea of no longer needing to educate people, they want to replace everyone that has to think with an idiot following instructions from an AI. So every time you use an AI, you are helping train it to replace everyone else. Imagine a world where instead of thinking, you rent an AI to do it for you, fucking grim.
apple_kicks@reddit
For many big companies their biggest expense is employees. Coders are the most expensive for pay and benefits (other than ceos themselves). If they can’t cut staff they want to cut wages or make office jobs less competitive and more ‘anyone can do it’. Tbf without spellcheckers many of us wouldn’t work in office jobs with typing. So some are focused on savings.
However, there's a few crackpot billionaires obsessed with apocalypse who own AI companies. I suspect they're gunning for automation so they can hide in bunkers with robots running everything with no mutiny risk (no idea how that's powered).
EmergencyAthlete9687@reddit
I look at the isthisai thread on here where people post apparent photos and others say whether it is AI. What I don't understand is why would you use AI for these pictures in the first place? They generally look like normal unspectacular pictures that could have been taken in the normal way. Why do people use AI for them?
RecentTwo544@reddit
This is a real problem that AI bros have and they cannot wrap their heads around it - YES a lot of AI pictures of humans that don't actually exist can "trick" people but that's not the point. What use is it if they're not real people?
Same way as I explain to photographers/videographers that AI can never take their job, even if (and it looks extremely unlikely) AI could make photos/videos that no one could tell are fake. It's interesting to see if people can work out the reasoning there. It's a proper kick-yourself moment when you realise.
360Saturn@reddit
The fact that friendless losers who never had a real human connection in their lives are the ones pushing this from the top can't be overstated... most people don't need pretend friends and colossal 'social networks' made up of fake people, because what actual enriching value to your life does that contain???
nobodyspecialuk24@reddit
It's a great assistant for many office jobs.
Give it a load of documents and ask it to make summaries or presentations.
Get it to update user guides when minor changes have been made, without having to wait for someone to do it or trawl through them yourself.
Give it a load of data and ask it to look for patterns in it.
It will make getting a job on the first rung of the ladder more difficult as I can ask it to do a lot of the things I would ask an assistant/junior to do much quicker and with my direct input, any time of the day.
SamVimesBootTheory@reddit
For me personally
There's very little that AI would be of a benefit for in my daily life and I mostly find it a hindrance when I do encounter it and really hate how it's getting harder to like opt out of using it
I'm actually not completely opposed to AI and know of like places where it is very useful but I don't really like how it's been pushed in what feels like an incredibly unethical way and seems to be pushed in a way that feels like it's encouraging people to outsource their brain
Like it feels the way it's being implemented isn't really helping anyone just more 'let's try and outsource the stuff that should be done by people'
360Saturn@reddit
Pretty much exactly where I land.
Nothing that people actually want to use needs to be pushed so heavily, and the 'what you can use it for' lines in the ads feel like a parody of themselves, or aimed at extremely stupid people ("use it to write a birthday card to your dad!") - like something out of Black Mirror!
SatinwithLatin@reddit
AI is being shoved down and up every orifice because investors poured millions into it and want their returns. So companies are adding it to our lives whether we want it or not. There's also something to be said for probable data harvesting, after all if you're not paying for it that means you're the product.
-Xserco-@reddit
They are relying on consumers (mindless morons tbh) believing that AI is sentient. That LLMs are revolutionary and not at all just slowing down progress. That they can cure cancer (they can't even code Python, or understand basic medical principles). Or that indeed, we are in the cyberpunk technological future.
It's a scam. Blatantly so.
This is vibeonomics. It's Wolf of Wallstreet. But the leaders are Epstein clients, and Palantir psychopathic terrorists.
Miltoni@reddit
You're not missing anything.
People have hyped themselves into an almost religion-like belief that we're headed for exponential growth and AI taking over all industries in the next few years, particularly on Reddit. It doesn't help when the CEOs and execs of these companies never stop hyping it themselves. We constantly have Sam Altman making wild AGI timeline predictions, and Jensen Huang and Dario Amodei talking about AI replacing entire job categories within years. These are the same people who have enormous financial incentives to overclaim.
The reality is that there's a massive bottleneck that is conveniently ignored: training data.
The internet at this point is largely scraped. What's left is lower quality, increasingly niche, or legally contested. And now there's an even bigger issue. AI-generated content now floods the web, and models are inevitably beginning to train on their own outputs in a huge, contaminated feedback loop. The errors, biases, and hallucinations we often see get laundered into these training sets.
Then you have the issues with specialised domains like medicine. High-quality reliably annotated data is scarce, expensive to produce, and is usually locked behind institutions with no interest in handing it over to a tech company.
The obvious reply here is going to be the use of synthetic data generation. Whether these can fully substitute for genuine data diversity and volume at scale is an open question that no one really comprehensively agrees on.
I say all of this as a massive fan of artificial intelligence and as someone who has integrated it into pretty much every aspect of my personal and professional life. I just can't see past the inherent limitations of LLMs, and I believe it'll take a complete paradigm shift, one that will ultimately require human ingenuity, before we see any real societal disruption caused by AI.
Delta_Eridani_Bob@reddit
Nothing, you're missing nothing. Is it useful for some jobs? Yes. Will it replace some people's jobs? Probably. Does it need to be rammed down everyone's throat because big tech have put billions into it? Fuck no. I don't need bloody Copilot AI for gaming. I have literally zero use for ChatGPT. Not even one. I work on farms, fixing shit that goes wrong. It's very unlikely to affect me. Will it affect people in call centre roles when they can be replaced by automated systems? Yes. I feel bad for those people. I can't get annoyed at it though. It would be like getting annoyed at cars because we can walk everywhere or take a horse.
ErosDarlingAlt@reddit
I would be all for AI if it: - Was sustainable ecologically - Didn't try and pretend to be human and ingratiate itself weirdly by constantly affirming you - Wasn't allowed to train itself using copyrighted material
Mccobsta@reddit
From my experience with "AI" tools, you're not missing anything; they tend to be more work than just doing something myself.
Ok-Charge-6998@reddit
Well, it gives non-AI users a way of feeling superior to others, as seen in this thread.
Honestly, it depends on your use case, if you can’t think of a good use for it then you’re not missing out on anything. For me, it’s been a massive change in the way I work.
Life_Court8209@reddit
You're right that the current consumer-facing stuff like chatbots and assistants feels clunky, but the corporate push to recoup those massive investments is a huge part of the noise. For developers, the real game-changer isn't about AGI, it's the insane productivity boost on tedious tasks that frees us up for more complex work. That shift in workflow efficiency is what a lot of the hype is actually about for people using it professionally. The public-facing versions are just the tip of a much more capable, and expensive, iceberg.
franki-pinks@reddit
I’ve never used it and never will. I don’t use Siri or chat gpt or any of that bollocks. Everything I write is done by me and every picture is unfiltered.
Brayrut@reddit
I worked in innovation consulting during the "metaverse" era, and while AI is far more useful, the manufactured pressure to use it in order to offset the colossal and incompetent overspend feels exactly the same. In the last month my well-known employer has asked everyone to up their usage, and those below daily usage will be penalised.
slop_drobbler@reddit
Not the same thing as what people are referring to when talking about AI.
Yes. If you've used modern LLMs or generative AI and work in certain industries you should be rightfully scared of job security.
Any media based jobs are pretty much at the point where AI can replicate it to a standard that many consumers will find indistinguishable from real human creativity: music, art, website design, graphic design, motion graphics, film, TV, social media influencers...
Any 'rules based' white collar work is also at massive risk, I'd argue even more so than the above. Lawyers, CEOs, programmers, administrators etc etc
My main concern with AI is its ability to mislead. There are massive implications in the information space as propaganda is now much easier to make (actual fake news etc) and is more difficult to identify as fake. Likewise it's even easier for people to claim that something real is actually fake. Anyone that is aware of current world events particularly what's going on with US leadership should be concerned about how easily manipulated the general populace is and how effective propaganda can be
Jobs based around physical labour are safe for now imo
SheepishSwan@reddit
You won't get a good answer to this question on Reddit.
Which is ironic considering data we share here is sold by Reddit to train ai.
Plodo99@reddit
It’s fairly evident if you’re entry level or work in: customer service, ops roles, copywriting, design, etc.
RecentTwo544@reddit
It's not going to replace copywriting or design at a "proper" level anytime soon though.
Plodo99@reddit
Keep telling yourself that 😅
RecentTwo544@reddit
It's nothing to do with me. I'm neither a copywriter nor a designer.
But any copywriter or designer who is in danger of getting replaced by AI is pretty poor at their job and probably just a low-level workaday type copywriter/designer with no real chance of getting big-money work.
Plodo99@reddit
That’s the whole point, there’s a lot of people that don’t earn “big-money work” getting replaced.
It’s not about high earners it’s entry level jobs that agentic ai is replacing.
RecentTwo544@reddit
But I'm talking about people who design a newsletter and type it up as a small part of their job, perhaps working as an admin worker/secretary for a school or local council or small company. They have loads of other stuff to do so it won't take their job.
If your main and sole role for work is a graphic designer/copywriter, AI is not remotely capable of taking your job.
Hence the downvotes on my replies there.
Ok_Cow_3431@reddit
I struggle to rationalise how the people who seem to use them or speak about them the most have the least understanding of what they do or how they work.
The number of people who will use publicly available LLMs on a daily basis but have no idea what a hallucination is boggles the mind.
As they currently stand, no, GPT models are not a substitute for Google; you always need to check the answers they give.
AgeingChopper@reddit
yep! a result that is a "best fit" is not the same as being a correct result. I've been amused how often the answers are outright wrong.
MikimaruX@reddit
I look at it differently to you. Yeah, the applications you mentioned are useful but not groundbreaking, and those kinds I'm actually a bit negative towards.
For example, my work uses it so that instead of looking up our source material or checking data on a spreadsheet, we ask the AI and it delivers the answer.
As someone who is proficient at my job I know where the data and source material is, so it's actually quicker for me to do that than use the AI. New employees? It's great for them.
Where I really enjoy AI is the personal applications it has. There are so many things I couldn't do and would never pay someone to do for me, such as a graphic designer for a logo or cover art for an album or a novel. Or people who want a professional music mix but aren't willing to pay for someone to do it.
It opens a world letting people with low skill achieve a higher skill level without years of practice or endless amounts of software programs.
I didn't get all the fuss till I started using it. First it was just for fun, making my best mate look like a twat in photos and sending them to him for a laugh.
Now I use it a lot in terms of media. I remember 15 years ago, if you wanted to build a website you'd have a few poor options with the good stuff behind a paywall, and you'd have to spend hours researching popular sites, looking at how and why they were popular, and figuring out how to recreate it.
Now AI will do all that for you.
Although I do understand its negative impact, even on a personal level, like arguing with it... the one I use doesn't understand depth in pictures, so when I'm trying to create something specific it gets frustrating and will at times completely contradict itself in terms of its answers and why it can and can't do x, y or z.
But I'm solely talking about consumer AI here, not the advanced stuff.
agentorange65@reddit
It's being hyped to us, but for the wrong application
While it could be useful for complex mathematics and science etc, we are using it to ask what the capital of France is, or to create a picture of a penguin vaping while wearing a top hat.
The word-recognition based responses are basically vibes-based non-answers, with no oversight with regard to accuracy
spinningdice@reddit
It's a bubble, I'm not denying it's useful - but the current cheap/free availability will burst at some point. It costs a fortune to run and just isn't earning companies anything, so it'll either become riddled with ads disguised as results or locked behind an expensive subscription.
Better not to become reliant on it.
RipCurl69Reddit@reddit
In my experience, it started off as a 'we've levelled up, check this out!' moment, but it quickly spiralled into being a massive downgrade for products which implement it with zero regard for usability or any general need for it to be there.
I work in engineering, I do NOT need AI in my current situation. My creative hobbies outside of work also don't need it. I don't use AI at all, and I'm perfectly happy to keep it that way.
UntappdBeer@reddit
Maybe we can use ai to explore DNA and recreate dinosaurs, that'll be fun. /S
floatinginmyroom@reddit
My job (customer service) and my partner's job (resolving IT tickets) both got replaced by AI, and we were made redundant within three months of each other.
SnooRevelations2088@reddit
The human capacity requirement is much lower due to the level of automation within so many sectors, and it's only going to get worse for the newer generation looking for employment. Not really sure what you mean by the potential of what tech can do; it's already happening, and there's a lag between what tech can do and when it gets fully applied to real life. Companies have only recently started investing in this tech and there are already mass layoffs. Once it's more mature they'll roll out the systems to replace people, and it's going to have a widespread impact.
Tangie_ape@reddit
I personally don't like AI, not because of any conspiracy stuff, but because I don't like how it's being thrown into work environments currently. Every email now is essentially being written by Copilot, so it's like chatting to AI bots all day, and then you have basic AI in things like Xero which is going to destroy a lot of lower-skilled office jobs (purchase/sales ledger data entry roles etc).
At the moment AI isn't quite there, but look back to how it was just 2-3 years ago and the rate it's scaling up; in the next 5 years it will be unrecognisable.
Obvious_Reporter_235@reddit
With AI we don’t know what’s going to happen. With new technologies that appear to completely upend how we work there’s always been hype and fear.
In a decade there will likely be massive industries we haven’t even dreamed of yet that come out of AI.
I remember when Apple’s App Store first appeared and some dismissed apps as a gimmick that wouldn’t have much impact. Yet in 2024 it was estimated that the App Store had facilitated over $1.3 trillion in commerce globally.
We as a species will adapt. We always have. There will be disruption, and sadly there will be those who get left behind. But AI isn’t happening in a vacuum, and we aren’t going to wake up tomorrow morning to find there’s an AI that can do our jobs better than we can, and our employers have already rolled it out. It’s going to take time, and there will be some spectacular failures that will serve as cautionary tales. We will adapt, and new ideas will be born.
It’s easy to get carried away by either idea that AI is a piece of shit or AI is going to ruin us all. Especially with the tech bros leading the charge at the moment. But this is early days, and there’s still lots that will happen - good and bad. We as a species aren’t good at putting guardrails on things that can be dangerous until things go wrong. Air travel is a good example of this.
Empty_Allocution@reddit
The one single amazing use I have found for it, is vibe debugging my own code when I have exhausted myself trying to figure something out.
I think it gets a lot of shit, but it has proven (to me at least) that it is exceptionally good at finding odd problems in complicated code, and it will do it in like 2 seconds flat. And it understands a bunch of fringe shit that I do with my own development. It kind of just gets it.
For me personally, it is the difference between me sinking an entire weekend trying to track down and fix something, or taking 2 seconds to go "problem is here, suggest X" to fix it and move on.
So that's cool for me. I would never touch it in a professional context though.
scouse_git@reddit
That's the ideal question to put into an AI bot
Ricky_Martins_Vagina@reddit
Wait, we don't have flying transportation? Or are we just living in different versions of the Matrix?! 🤨
Regarding AI, your post reads as if you have concluded that "that's it, AI has peaked" and hit its maximum potential already.
RenderSlaver@reddit
I've recently recruited for a new role; I interviewed three people and two of them had been put out of work because of AI. I have a good friend who's a dev with 20+ years of experience and is now having AI do most of his work; it's tripled his output. AI is fucking shit up quick.
DrH1983@reddit
AI can do things a lot quicker but I dislike that as it means I then have to do more stuff.
See, my view of AI is that it should let me do work stuff quicker, giving me more free time to do stuff I actually want to do.
Instead my employers take the view that AI will let colleagues do work stuff quicker, giving them time to do more work stuff.
This just leads me to feeling a bit overwhelmed as things happen so quickly.
toady89@reddit
I used Gemini yesterday to show it a bunch of different seeds I already owned plus the pots I already owned and had it plan which plants to put where, what soil to buy and how much plus which extra pots/troughs I needed. Would have taken me hours of reading and compiling to do that myself. The key is understanding the limitations and not having some KPI to use it for X percentage of tasks.
RecentTwo544@reddit
As u/worotan said, I'm not sure how Gemini saved you hours there. A google search would have done the job just as quickly.
And I imagine if you don't know the seed types by eye (which given you used Gemini I assume you don't) you'll be getting the wrong plants growing in the wrong pots before too long.
worotan@reddit
There’s no way that would have taken hours to work out, unless you’re farming on an industrial scale.
You just wanted to use a new toy and made up an excuse to valorise it.
QwenRed@reddit
We're barely into the age of AI. Your home assistant example isn't even using LLMs; changing these isn't in the interests of the companies, as that would be a HUGE expense in token usage/compute. You could create your own systems that would be tenfold closer to the sci-fi-esque thing you're looking for. The real differences will appear once hardware exists to run LLMs locally without it costing $100 per go. Hardware advancement is always slower than software, which is why we've seen ginormous investment in factories and server centres from governments and companies across the globe. Once these things are up and running you'll see the change.
baeworth@reddit
You just listed a bunch of reasons why AI is beneficial, I’m not sure I understand your question
From what I’ve seen the people that are really heavily against AI either don’t fully understand what it is and are wilfully resistant to finding out, or just don’t work in an industry where AI has huge potential and therefore they just see it as the devil
Rasty_lv@reddit
Overall im pretty hostile to ai in general, but it has its uses.
I hate all the AI slop pics and videos. I can often recognise AI-generated text, and I notice more and more content online is AI slop. I remember the dead internet theory; it's not a conspiracy anymore, but then again, it's a good thing in a way, as I'm using social networks less and less. AI data centres use a lot of energy (raising our prices) and a lot of water (fucking up nature), and they raise prices for a lot of other hobbies (like gaming PCs, which I'm still salty about).
But I do agree it has some good uses. I've used it to help turn my email drafts into something more diplomatic and better sounding (though I still edit them to read more humanly, deleting all the long dashes for example), and I've used picture generators for garden ideas. I like some uses on my Samsung phone: if I get a message or email about an appointment, the phone offers to add it to the calendar, which I do find useful.
For my work, AI can't really replace me, but it could improve our workflow. My company will not prioritise us over other departments though, and I'm fine with that.
RecentTwo544@reddit
I'd tentatively argue AI slop has its uses.
I work in the music industry, used to do photography/video stuff, still dabble, and work with lots of creative types who do photo/video/design/etc.
I explain when they worry about AI slop that it isn't going to take work from them, because they're good at what they do, and AI slop will never be able to match it. It's just stuck with a hard limit.
But if I'm putting on a local club night with some mates in my local town, something we've done now and then for years for no money, we'll make an AI poster. None of us can design a poster properly, and there's no budget for a graphic designer, so it's not like we're taking work from anyone. That's where it works - when you've no budget to get an expert in and don't need the end product to be very good.
ANIKY173@reddit
I work in accountancy (both commercial and technical). AI is going to decimate the industry in the next 3-5 years.
Why hire new junior staff when AI can do it cheaper, quicker and better?
Atompunk78@reddit
What do you mean? You aren’t missing anything, you just underestimate how useful those things are
Doing code and basic admin almost as well as a trained human is revolutionary and is already saving a shitton of time (and laying people off)
What it isn’t (AGI) is irrelevant when talking about what it is here
Also there are other uses beyond those things you mentioned. It’s ok that it’s not useful to you perhaps, but to some people, having an assistant available 24/7 that’s good at everything to a professional level is revolutionary
TheShakyHandsMan@reddit
It’s definitely got its uses. I’m always getting help for personal projects through ChatGPT.
Reddit probably isn’t the best place to get an impartial view on AI. It’s certainly not popular.
RecentTwo544@reddit
I'd argue the opposite: Reddit is full of "AI bros" who insist AI can "easily" do loads of things, but when questioned as to how, they cannot give examples and just try to bury the people asking by downvoting.
It's proper head-in-the-sand stuff.
worotan@reddit
You mean you don’t like questions being asked of your superficial preferences, and want to act like it’s unreasonable to ask questions of advertising pitches, so you can feel like you’re in the informed majority.
Pretty safe guess that you do the same about climate change.
Rocinante23@reddit
I'm largely against the implementation of GenAI on its current form
But this is the only sensible use-case that I've seen for it so far. I have to agree GenAI is really good for this.
What we're seeing is people using these tools to ask things that could either be found on a traditional search engine or a reference website, and it's using an incredible amount of power to do so.
No-Tone-6853@reddit
There’s also the people that use them for everything, basic life tasks to personal therapy or see it as a friend.
ToTooThenThan@reddit
AI does not compile code.
Alundra828@reddit
AI is, in general, both amazing and pretty bad. But it still sort of works. And that is all that is required to devalue labour.
What do I mean by that? Well, it used to take software developers ages to build out apps. Now AI can do it quickly. And while it absolutely cannot do it on its own (yet), and you cannot trust the output at face value, the way in which it devalues labour is that AI plus human management can do in a week what takes an old-school engineer a month or more. That is a month's worth of billable work, compressed down to a week. Now, with this tool, employers expect their engineers to accomplish more. Thus, the value of their work is less.
And this is why the hype is so high. There is provably a case to say that AI devalues work, and it's true. The bullshit comes from claims about how much it can devalue work. And the answer to that question is... unclear. Even in realms where it has ostensibly utterly killed the profession (100% devaluation), people are realising that it's not all sunshine and rainbows. To stick with software engineering a bit, since that's the industry I'm a part of: it turns out that if you don't have very senior knowledge of the subject matter, you're going to crash and burn very quickly, as the AI makes mistakes that a layman would never pick up. For proof of this, look no further than one of the arch-AI merchants themselves, Anthropic: they recently leaked the entire codebase for their flagship product. Whoops. What a fuck up. There are also the extremely numerous, well-documented "oopsies" from various companies that essentially sink their company because of mistakes brought about by AI.
And then there is the "AI Fermi Paradox." If AI is so good, and makes one so productive. Where are all the AI success stories? It's been out for years at this point. Where are the AI driven unicorn companies? They don't exist... But hold on, AI killed software development, right? So why are Oracle hiring back all the devs they fired? Why is seemingly nothing coming this? We should be drowning in premium software and we just... aren't...
It's because the answer to the question "how much does AI devalue work" is still up in the air. AI marketing will say it devalues it a lot. Which is really attractive for employers, because hiring people is probably one of their biggest expenses and this is the perfect solution to that. Instead of hiring more employees, increase the output of each employee you already have. But it seems like the degree to which it devalues labour is a lie. At first they were saying entire industries would go belly up. That turned out not to be true. Then they said industries would have to adapt to turbo productivity. That turned out not to be true. Now we're sort of at the point where we're saying things like "okay, it's harder to get a job as a junior, and you will change your workflow". Which, at this point, is a much more reasonable take. But it's hard to sell "change in workflow" as your killer feature, since the release of dreg shit like Microsoft Office back in the day changed the workflow of most people in the workplace... Would you call that a trillion-dollar industry...? No.
And this is without going into the fact that this industry is absolutely a bubble: even if every man, woman, and child bought an AI subscription and didn't use it, the providers still would not be able to keep up with the costs of running this thing. The product is fine, good even... but it's like inventing a car that runs on nothing but gold bars. The car might run fine, but there is no way to keep it running because it's just too expensive. Core innovations to the product need to happen before it's even remotely viable. At the moment what you are seeing is a heavily subsidised experiment, and it's looking like it's not going to be around forever. LLM technology may still be unviable as a business.
RecentTwo544@reddit
Very well put, and the issue you're describing, especially in the "AI Fermi Paradox" thing, is that way too many people are assuming ANI (what we have now, and have had for decades in many instances; LLMs are not AI's first rodeo by a long shot) will naturally progress to AGI.
AGI is what we need to see AI totally replacing people or areas of work. Yet there is no path to it currently, and it might not even be possible.
People assuming ANI will lead naturally to AGI with more power is like taking a primitive stone-age fire and thinking adding more and more material/fuel to it which makes it hotter, will eventually lead to a nuclear fusion reactor if you do it for long enough. Obviously, that is an insanely incorrect assumption to make.
vagueconfusion@reddit
In regards to devaluing labour, that's another infuriating thing about AI 'art', which only exists due to the theft of legitimate works to feed the generators.
I'm an artist and it is absolutely making certain creative jobs very difficult at the moment.
Not just artists who are exclusively freelancers reliant on commissioned artworks, but surface pattern designers who make fabric and wallpaper designs, seamless patterns, and now there's an atrociously high number of AI designs out there for painfully cheap prices.
AI patterns for craft projects (like sewing patterns) are everywhere and are frequently impossible to recreate if the whole thing was AI generated (especially true of crochet projects)
I'm also physically disabled and the narrative that it's vital for disabled creatives is insulting nonsense (especially when those touting this care nothing for disabled people outside of using us as an AI defence talking point).
It all goes so much further than a battle between freelance artists doing character commissions and those who don't want to pay what their work is worth.
jlangue@reddit
You missed the thousands of jobs already lost to it because all your reception, call centre, inquiries et al have been replaced by AI bots with no accountability.
This is the future equity companies with no face using AI to extract every penny out of every aspect of your life.
theegrimrobe@reddit
Nothing. It's all slop, glomming up all the money and further fucking the populace and the earth.
srogijogi@reddit
It can do much more. If your job is about data processing and/or creation, you could feel the threat. If your job is about resource-consuming repetition of tasks, quite often you could feel the same. Do you have a unique ability to process huge data or "see the big picture" in it? That ability will stay rare among humans but become very common for AI. Is your job about doing something in the real world, with multiple tasks which cannot be done by a robot standing in one place on the floor? You don't have to worry for now and the near future. Have you worked for a company for many years? Same; you would be too expensive to replace.
MrSam52@reddit
I think you’re not understanding how much wider the applications of it will be once they’ve improved further.
Most customer service roles will be replaced by AI chat bots, whilst some of these have existed for a long time now by using various learning models they will soon be as good as human agents. Especially as they pull through responses from human agents as part of their learning to build up a response base.
Wider than that there are already AI law firms out there. A settlement has already been negotiated through one of these and another is taking fees to give guidance on employment disputes.
Essentially any role currently existing that gives advice to the public will likely eventually be replaced by AI agents. Plus the admin roles you've mentioned. Recruitment already runs through AI tools to screen CVs, and I imagine that can be scaled up to running full interviews as well.
This will be a massive saving to organisations in cost, but will also lead to widespread job losses and an economic crisis unless a massive retraining program can be done.
TobyField33@reddit
Come back in a year.
TheBrassDancer@reddit
You're missing, with specific regard to generative AI, that it is harming the environment, spreading misinformation, stealing people's work, and harming human health e.g. via parasocial relationships.
It should not be used.
banecroft@reddit
I'm not a professional programmer, quite the opposite: I animate for a living. I've also recently managed to publish a couple of tools to improve the pipeline at work that have now been adopted as part of the main pipeline. It's at least a 20% improvement in the feedback loop for animators.
Unimaginable just a year ago, and I did it in an afternoon.
SceneDifferent1041@reddit
It's brilliant for shopping and comparing products. A couple of years back, my son was in a paw patrol phase and my god, why don't they make it easy to know which pups are compatible with the slide on the base...
I had a brainwave to ask AI and 5 seconds later I knew exactly what to get.
davidwhitney@reddit
Yeah, price comparison businesses are dead-men-walking.
bacon_cake@reddit
"I get it can do basic admin tasks in excel that would previously require those to use formulas"
You've really glossed over some big benefits there alone, and your "etc" covers a LOT.
I run a small business and AI is saving us thousands and thousands of pounds and countless hours. Honestly it's the fastest growing technology I have ever seen in my life. I know reddit likes to think that businesses are forcing a round peg into a square hole and that AI has no uses whatsoever but I know I'm not alone in using it so extensively to my benefit. I personally feel any business not at least trying to use it is making a big mistake.
A rough idea of what I've used it for in the last few days:
- Bulk product image manipulation: this used to be outsourced, took days and cost a lot. Now it takes moments at less than 10% of the cost.
- Re-drafting customer service emails tailored for specific customer types. For example, for a non-technical customer, I can explain the tech solution, get the AI to redraft for someone with poor IT ability, then rewrite. (Also, I can write sweary emails to customers and get a redraft, which is quite cathartic.)
- The other day I wrote a support ticket to our warehouse management software supplier because we were having a major difficulty. On a whim I copied the ticket into ChatGPT, and it solved the issue for me; we were back online in minutes instead of 24 hours.
- I've made simple edits to our website that previously required an agency worker at several hundred pounds a day.
- It helped me diagnose and resolve a card stuffing attack on our site.
- We have an older colleague who struggles to see the screen, and it helped me with new CSS and a Chrome extension so that the images and text are larger.
- I use it to sanity-check massive CSV imports.
Honestly the list goes on and on. I know it's not popular here but it makes my work day easier and shorter so I can focus on the things I actually enjoy.
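Incidentally, that last sort of task, sanity-checking a big CSV before import, is exactly the kind of thing an AI can help you script once and then reuse. A minimal sketch (the column names and rules here are hypothetical, not from the comment above):

```python
import csv
import io

def sanity_check(csv_text, required=("sku", "price", "stock")):
    """Return a list of problems found in a product CSV before importing it."""
    problems = []
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = [col for col in required if col not in (reader.fieldnames or [])]
    if missing:
        return [f"missing columns: {missing}"]
    seen = set()
    for i, row in enumerate(reader, start=2):  # row 1 is the header line
        if row["sku"] in seen:
            problems.append(f"row {i}: duplicate sku {row['sku']}")
        seen.add(row["sku"])
        try:
            if float(row["price"]) <= 0:
                problems.append(f"row {i}: non-positive price")
        except ValueError:
            problems.append(f"row {i}: price is not a number")
    return problems

demo = "sku,price,stock\nA1,9.99,5\nA1,0,3\nA2,oops,1\n"
for problem in sanity_check(demo):
    print(problem)
```

The point is less the code itself than the workflow: describe the checks in plain English, let the AI draft them, then keep the script so the check costs nothing next time.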
Algoschizo@reddit
How fast does technology advance? The answer is exponentially. These companies aren't spending money in hopes AI ends up as a chatbot. A lot of people love the thought that they might waste money, or want to see these companies fail.
davidwhitney@reddit
A large amount of white-collar work is "low value automation done by humans", where replacing it with specific systems is too costly or complicated. Having a general-purpose pattern matching technique that can automate unstructured behaviour, even at only 80-90% effectiveness, is a workforce-decimating thing.
There was a generational leap in its application in software around the end of last year, and I actually think people should probably be more afraid, not less. We don't need AGI to be disrupted; what's there today is good enough.
Counterpoint: human entropy prevents most change. Just because something is technically feasible doesn't mean it will occur if people can't use it. The cost of operating and maintaining software has always dwarfed the cost of its creation, and businesses will realise that as the value of "code" and "automation" drops to zero (because anyone can trivially do it), the expenditure flows elsewhere in the process.
I suppose the reality is two things are true at the same time - it's category changing technology, but also learning to leverage it effectively and systemise it is harder than people trivially one-shotting some disposable web app.
Puzzled-Job9556@reddit
AI is as good as the person using it. I have managed to reduce my workload by about 50%. I wrote a consultation that would generally take months to produce, in a week.
Arimm_The_Amazing@reddit
You're not missing anything, it's 80% an investment scam
ExoneratedPhoenix@reddit
It was/is a disruptive technology, and so a LOT of people tried upselling it for everything. Reality is hitting: while LLMs are impressive, can automate many tasks efficiently and allow people to do things quicker, each one is ultimately yet another "calculator". The calculator allowed people to do big sums really quickly. It also allowed people to do trig and all sorts. The real "but" is that it still relies on the user knowing what they are doing and knowing how to put in appropriate input for appropriate output.
Getting your hands on a calculator that can speed up calculus is great, but if you don't know calculus or what buttons to press it is useless.
Same with LLMs. People are pushing them to do things they weren't designed to do, and while that kind of works in some instances, for most things it falls over.
AGI is so far no closer than it has been for decades.
LLMs are a "what word comes next" black box of Bayesian stats. Very cool stuff, and their original intent was to understand and define how languages work. By using them as tools to do tasks, we are pushing them beyond their design, and while they kind of work, the moment they don't work becomes apparent very fast, and usually with drastic consequences.
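The "what word comes next" idea can be illustrated with a toy bigram model: count which word follows which in some text, then sample the next word in proportion to those counts. Real LLMs use learned neural networks over subword tokens rather than raw counts, so this is only a caricature of the principle:

```python
import random
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat the cat ate the fish "
          "the dog sat on the rug").split()

# For each word, count how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counter = follows[word]
    if not counter:  # dead end: this word was never seen with a successor
        return random.choice(corpus)
    return random.choices(list(counter), weights=counter.values())[0]

random.seed(1)
sentence = ["the"]
for _ in range(6):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))
```

The output is locally plausible but has no understanding behind it, which is the commenter's point scaled down to a few lines.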
bennyshark@reddit
It's the new Google: if previously you would google something, now you should use ChatGPT.
Very useful for all sorts of things, DIY, interior design, personal health, training, lots of stuff
Try and talk to it as much as you can about anything going on in your life you may be surprised with how beneficial it is
Backlists@reddit
That depends on what it is that you are asking for.
If it’s something highly specific, then you should not use AI, or at the very least you should ask AI to use their search tool, and provide sources.
Do this enough times and you will notice that what the AI generated is misleading or downright incorrect from the original source. It doesn’t happen every time, or even most of the time, but it’s enough to be unreliable.
To be clear, all those suggestions you mentioned are good example use cases for AI.
bennyshark@reddit
Yes, the AI can be incorrect - it has even got a simple addition sum completely wrong in my experience
The sycophancy can be annoying, though you can learn to look past it
360Saturn@reddit
What does it do with everything you tell it? Does it all go onto a server somewhere or get sold to advertisers??
bennyshark@reddit
Probably
worotan@reddit
Why not just use google, and reduce your climate impact as we reach disastrous tipping points?
Your attitude is why climate change is happening.
bennyshark@reddit
Climate change is a done deal, too late to stop now
TomfromLondon@reddit
Got a lot of data you need to go through? Need to pull out trends? Need to PoC ideas? There really are so many ways to use it at work.
Personally, I literally just used my weekly prompt to give it our veg box this week, go through my favourite recipe books and create some meals to my specs to cook this week.
AirconGuyUK@reddit
I mean I coded a website for my business with a feature set that probably would have cost £80k to get made just 4 years ago. I did it in a month, with a £90 subscription, and not exactly with no knowledge of coding, but nowhere near enough to build a working and polished product.
So it's approximately 1,000x cheaper to do that now.
That is quite something. I feel like if you're not in tech you're really not understanding the impact properly. It's not just making coding a little faster, or a little cheaper, or a little more accessible.
It's a paradigm shift in one of the most high value (or previously high value) industries that any country can have.
AnonymousTimewaster@reddit
They dont use AI
UnacceptableUse@reddit
Google assistant became Gemini which uses LLMs now. It got less useful
AnonymousTimewaster@reddit
All true.
TheCotofPika@reddit
I use it for refining my emails to be neutral, as my autism can make me come across as blunt.
I also use DeepSeek to help with research as it can go and view websites which I can then go and read myself as I don't trust it to give me the most accurate information.
If I am sick and can't function, I ask it to help me plan a day where I can care for 4 children without making myself worse.
I have also used it as behaviour prediction for a family member who is prone to unstable actions. It has so far accurately predicted what they will do which is helpful to prepare for.
It also will interpret, with sources, things like bloodwork results which I find helpful.
Could I do the above myself? Yes of course, but I have 4 children including an infant and I cannot spend hours looking stuff up without ignoring them all. It's just a shortcut tool.
OrbitingPlanetArse@reddit
My former employer is having to recruit more bid team staff to deal with AI-drafted work proposals which are complete garbage.
apple_kicks@reddit
Yeah, you hear about lawyers' clients drafting AI contracts thinking "this will save you time", but lawyers say it doubles their workload because AI contracts are so bad they have to untangle and rewrite them
charlytune@reddit
Work in an organisation where we receive lengthy applications to be registered for a legal status. AI slop is an absolute nightmare. Reams of text with a lot of words but just vague, generic answers that don't actually say anything. Generic assurances about complying with xxx but no information that shows any understanding of the legal issues or how they will comply. Irrelevant information that doesn't even relate to what the applicant is doing, or to our requirements. Nonsense documents that don't meet legal requirements (and that we provide models for on our website, and that are easy to find). Fake information referring to policies and legislation that don't exist. It's messing our whole job up and we're having to hire more people to cope.
_Odaeus_@reddit
So sad reading this. The main legacy of these LLMs will be massively wasting everyone's time.
grandhighblood@reddit
In general, I’m very anti-AI for a lot of the reasons people have given already. I don’t use it myself for that reason, but I don’t begrudge other people’s casual use of it.
I’m just finishing my master’s in translation and am about to start working professionally. It’s been a very industry-oriented course, so I’ve gotten a lot of insight from people who are already working, alongside what I’ve learned on my course. In the language services industry (which encompasses translation, subtitling, interpreting, etc.) the general consensus is that there are absolutely use cases for it, but you’ll never get professional-level quality out of AI. It can be used to speed up the process, sure, but it cannot translate. It hallucinates. It can’t comprehend wordplay or deliberate word/structural choices; it homogenises everything. Everything it outputs has to be post-edited by a human, which takes just as much time (if not longer) to produce often worse output, with only minor cost savings. A lot of interpretation buyers in particular are starting to turn away from AI.
It’s infested translation software for very little benefit. There’s one particular platform that will AI generate potential errors in a translation, which are a) never consistent because it re-generates them every time, meaning it will frequently flag its own corrections as errors, and b) often outright wrong. I recently had it flag my use of capitalisation as an error, because the source text was not capitalised. The source text was written in Japanese.
Grouchy-Seaweed-1934@reddit
I'm from the software development world and I think we are 6-12 months (at least) ahead of the wider job market in its implementation.
We can see what's coming; most people can't yet.
LowAnimator8770@reddit
CEOs spent millions buying licenses; they demand results regardless of how useful it actually is.
mmoonbelly@reddit
Because they’re not worried about the cost of technology, they’re worried about the reputational cost of being seen to be behind their competitors to the finance analysts which depresses the share price and increases their cost of capital.
LowAnimator8770@reddit
It’s 100% Investors expecting returns
Cheap-Rate-8996@reddit
You're not missing anything. The situation really is that silly.
AI is currently the basis for the largest financial bubble in history, and yet (outside of a handful of very specific applications in medicine), it doesn't do much of anything. Investors, politicians, the 'adults in the room' were just this impressed by a text generator.
How did you first use ChatGPT? 90% of people asked it a stupid question like "How would The Graduate be a different movie if Dustin Hoffman's character was played by Grimace?" and then giggled when it gave them a serious response. Have you done anything more profound with it than that since? Has the novelty of that worn off for you yet?
Unless you have bad intentions (having it do an assignment for you) or are fundamentally misusing it in a dangerous way (talking to it instead of consulting a therapist), what we have left is a product that's fun to dick around with every once in a while.
It's not the printing press, or the car, or even the Internet. It is not a society-transforming innovation if we limit the scope of its use to 'desirable' applications. We're inserting it into everything because we're trying to intellectually justify why we should care about this beyond the initial novelty of a program you can have a convincing facsimile of a conversation with. But we haven't. Our global cultural response to AI is completely and utterly out of proportion to what it actually does.
21Shells@reddit
You're not missing much, the use cases are so incredibly niche compared to what it's being marketed as. I think it'll be useful for programming, but creatively it's being rejected completely.
Hopeful-Climate-3848@reddit
For coding it speeds things up, but you still have to know what you don't know.
Bowdin@reddit
It makes analysis of things I would usually have to read for hours much faster as I can get it to summarise and then ask it specific questions.
apple_kicks@reddit
It’s absolutely overhyped for what it can do without hallucinating or causing you to double check everything. Resources to run it definitely feels too much for what it does.
Tech companies want it to replace workers or be the next big tech thing since email or smartphones, but it's like 3D glasses to me
goose9273@reddit
You're talking about retail AI (chatgpt).
When you hear on the news that AI is making breakthroughs in curing diseases, that's not chatgpt.
T0raT0raT0ra@reddit
The thing is most jobs have a huge part of repetitive tasks that AI can automate already. A bit like 3D printing sped up prototyping considerably, AI can speed up lots of office tasks and reduce overhead. But it is also extremely expensive to run, and the main companies are throwing hundreds of billions into it, with huge losses, to drive adoption. It's not sustainable in the long run.
-intellectualidiot@reddit
I think there’s a big difference between using AI to avoid doing work (which is all anyone ever talks about) and using it as an assistant to improve work you’ve already done. The output for the former use can be quite sloppy and unimpressive, but if you’re already showing it human produced work as a starting point you force it to critically evaluate, which you can do yourself, but I’ve found that using AI is a great way to jump start that whole process, especially if you struggle with task initiation like me.
It’s great at helping me phrase stuff slightly better and more accurately, fact checking, and making suggestions for other things I’ve missed and should consider. You obviously have to be very careful, prompt it to be brutally honest, and not trust it blindly, but it works as pretty great assistant and time saver. It’s also great at answering clarifying questions I might have that I’d simply be too embarrassed to ask a human, again obviously you need to be careful and always fact check from another source to see if it checks out, but its a great jump start.
NoisyGog@reddit
Those are not the current state of AI, at all.
ChatGPT and its ilk really are a different breed from the "chat bots" that came before. I'd argue they're not intelligent in the way we normally hear that word, but you can absolutely chat to one without circularity - they really do feel quite natural.
For me, the biggest thing about the current LLMs is the possibility that you could read (and create) online content in any language at all, which gives minority languages a huge step up, as the need to automatically default to some other lingua franca really dissolves.
Nobody seems to have jumped on that yet though, I guess there’s less money in it than whatever the fuck LLMs are being used for right now.
In other fields, it’s been transformational, gene splicing, chip design, and various other sciences have benefited from specialist AI applications.
Individual-Carob5593@reddit
If you are a coder, then someone with half your skill can do your job in half the time.
RelationshipLife6739@reddit
If you’re actively looking for use cases and have a basic grasp of Python plus AI, LLM, machine learning and data science fundamentals, you can solve pretty much any problem with high efficiency, extremely fast updates, and for cheaper, as the business backend - accountancy, stock/bookkeeping, legal advice - is all handled automatically. It means people can focus on what they do best. It increases output so much.
Anyone can now use it to generate graphics, a brand voice, slogan, logo, social media accounts, posts and marketing materials alongside all the backend stuff mentioned before. You can then continue with the main business hustle hassle-free with a small team
360Saturn@reddit
I'm with you. It feels like as if the latest in a long line of 'fun new revolutionary' things via online has taken on a life of its own and been picked up by everyone as a reliable omnitool. What do you mean you ask ChatGPT to plan your conversations for you and talk to it - a thing! - as if it were a person?!
I feel like I'm the crazy one, meanwhile the whole world decided Buzzfeed quizzes were the new way to try and sort out any question anyone had about anything.
KarenFromAccounts@reddit
It's very useful and helpful for some things and absolutely awful at others. The problem is that the people in charge can't really tell the difference, particularly as most AI is built to give a plausible or good looking answer, not the right one.
So helping with coding, yes its fantastic. Still absolutely needs human oversight and checking but it really helps.
But to use OPs example, do not EVER trust it with excel or doing anything where it processes numbers itself. Helping make some formulas, sure, but do not ask it to actually do any crunching of info. It WILL mess it up and it WILL make stuff up.
I think people are being naive if they want to pretend it'll go away or that it's just a fad, but people are being just as naive to think it's going to be able to replace huge numbers of jobs or lead to crazy improvements in productivity. It's not all hype, but it is a lot hype.
TheHeroYouNeed247@reddit
"Chat bots have been around forever" is a very reductive statement.
LLMs, while not AGI, are far more advanced than the chat bots of 90s/00s.
RegretEasy8846@reddit
About 500 seconds using it instead of Google a day, if you’re me.
The_Final_Barse@reddit
If you don't have much use for it, yeah you might be missing the hype.
But it's a bit like my mum in the 90s saying she doesn't see the point in the internet.
AromaticVacation3077@reddit
Remind me, what is the point of the internet again?
parasoralophus@reddit
Your mum kinda had a point TBH.
HardAtWorkISwear@reddit
I've been using it to code an Arduino project for the last few months and it absolutely cannot debug. It very confidently told me to do things a certain way within the code, which I did. The outcome was close to what I was after but needed tweaking, so I described the issue, and it responded by basically calling me thick for doing it the way it had just told me to do it.
I had other issues I tried to use it to debug and the solution it gave was identical to the code I'd just given it with the issue inside, but with different comments.
AI is just a glorified search engine and people seem to forget that.
slimboyslim9@reddit
I’ve been quite sceptical about it all, but my work has encouraged us to give it a go and get fluent with writing good prompts. So I tried using Copilot to help solve a software issue on my PC. It seemed to be helping, but I had to restart the PC a few times in the process (standard), and Copilot needed to be told all over again each time what we were doing (because I refused to create an account and save my history/hand over my data).
Twice started to get irritating, three times got more annoying, and the fourth time I caught it out - it gave me completely conflicting advice from the previous three restarts.
So I gave up on copilot and asked a colleague in IT for advice.
Anubis1958@reddit
It's not just about writing computer code.
I used AI to
As for jobs going, then yes: some jobs will go. It is an industrial revolution. But other jobs will arise. Poor programmers (and there are a lot of these!) will go. But senior developers who understand how to use AI will be in high demand.
And we are NOT talking about Siri or Alexa!
latrappe@reddit
I've built a whole home server stack of tools using AI for help with the various files, using docker and Linux and things. I work in IT but I'm not a programmer so it has helped me achieve loads.
Also much more simple things like "Hey I've got some frozen chicken fillets to use up and I'm bored with the regular meals, gimme 5 interesting ideas from around the world. Brilliant, now expand option 4 to a full recipe for me". Much faster than trawling bullshit recipe sites. Has legit enhanced what we eat at home. We save the ones we like in a recipe app.
Then there's the general search aggregation it does. Yes you need to check its sources if it is important, but it's a one hit plain English way to ask a question and get an answer. Without needing to browse multiple sites. You can also "discuss" the answer by asking more questions in a way you can't with regular search.
We use a lot for trip planning. Yes, you dial in your plans more manually afterwards, but LLMs are great for getting you from "I know nothing and there's too much info" to "right I've a good grasp now".
If you're IT inclined you can install a model locally on your server and point it at your music or movie collection or files and databases and have it query your data for you in a locally ringfenced and private environment.
Finally I'd say to get out what you put in. So spend a little time thinking about how you prompt it. For example things like use only academic publications, do not average out results and cite your sources as you go along. That gets you a way different answer to "are vaccines safe?" than just asking the question straight.
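The "point a local model at your own files" idea a few lines up can be sketched roughly like this. Everything here is an assumption for illustration - the endpoint, model name and payload shape mimic a typical Ollama-style local runner, and `build_payload` is a made-up helper, so adapt it to whatever you actually host:

```python
import json
from pathlib import Path

# Hedged sketch: ground a *local* LLM in a listing of your own files, so
# questions about your data never leave your machine. The model name and
# endpoint below are assumptions (an Ollama-style runner on localhost).

def build_payload(question, files, model="llama3"):
    listing = "\n".join(files)
    prompt = (
        "Answer using only this file listing:\n"
        f"{listing}\n\nQuestion: {question}"
    )
    return {"model": model, "prompt": prompt, "stream": False}

# e.g. index a media folder (current directory used as a stand-in here)
files = sorted(p.name for p in Path(".").iterdir() if p.is_file())
payload = build_payload("Which of these look like soundtracks?", files)
body = json.dumps(payload)
# POST `body` to your local runner, e.g. http://localhost:11434/api/generate
```

The point is the ringfencing: only the prompt you build ever reaches the model, and the model itself runs on your own hardware.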
doublemp@reddit
I've used it for travel itinerary generation, tax/pension optimisation, to take a screenshot of what's in the cupboard and tell me what to cook, to troubleshoot baking failures, research and compare cameras, phones and even employers, to help me understand fairly complicated concepts, to calculate my life expectancy, research party policies, narrow down a list of possible baby names, get legal advice (obvs. double check everything), and even decode best before dates on veg packets!
Generally it's just one of those things that would take you a while to Google and research, and it can now be done in seconds.
Direct_Highlight_118@reddit
It doesn't matter how good it is so long as companies think it's good enough to cut jobs
rdu3y6@reddit
A lot are just using AI as the current excuse for cutting jobs to look better to investors when they're either actually struggling or just want to fire people to reduce payroll.
gregofdeath@reddit
Much like the whole thing with 3D TVs, The Metaverse, NFTs, et al., companies have put so much money into AI that they simply can't allow it to fail. There's 'hype' because companies have paid enough money to create 'hype'.
romeo__golf@reddit
I'm not an expert and I'm sure someone will explain it better than this, but what the general public see as AI (Chat GPT, Claude, image generation etc) is just the tip of the iceberg.
The true value is in science and engineering. Things like AI being trained to spot cancers quicker and earlier than doctors can, or being able to analyse data more thoroughly than a human.
Image generation is definitely part of the appeal. It doesn't mean it will replace VFX artists, but it might be a tool they use to improve their work. AI generated video doesn't just appear because someone wrote a simple prompt; the best ones are created by experts familiar with the field, who know which techniques to use and how to refine the output.
An analogy would be asking Chat GPT to write an essay on a subject you don't understand, against a professor of the topic using the tool to create a framework for their own essay, and then being able to edit, refine, and perfect the final product using their decades of knowledge and experience.
I've no doubt there was doom-mongering among the farriers when the internal combustion engine was invented, but these days we would never suggest selling the family car and buying a horse to save them.
Relevant_Natural3471@reddit
The mrs was watching 'Care' the other day, with Sheridan Smith. I did say that perhaps AI could help translate for stroke victims with Aphasia
Actual-Morning110@reddit
The recent example of the Oracle layoffs to save money and channel it towards setting up AI servers/factories is the prime example. Whatever the specific reason given, it is because of AI.
A good engineer with AI is equivalent to 10 average engineers. So, fewer jobs.
Time_Entertainer_319@reddit
What you’re missing is the assumption that we need AGI before AI can have massive, cascading effects.
We don’t.
The truth is, many things we once believed required high-level intelligence (speech, language, coherent conversation) have already been replicated by machines.
It doesn’t matter that AI isn’t truly “thinking.” It doesn’t matter that it isn’t conscious.
What matters is that it can act.
Modern AI systems can execute complex tasks autonomously, based on patterns and behaviors that emerge from vast amounts of training data. They’ve been trained on huge portions of the internet to mimic how humans think and behave. As a result, they can reproduce behaviors we don’t fully understand ourselves.
And that’s the key point:
You don’t need AGI for AI to be disruptive, or even dangerous.
What exists today is already powerful enough. Given sufficient access, permissions, and the ability to act in the real world, current AI systems can cause serious harm.
People get stuck on technicalities. “It doesn’t think.” “It isn’t conscious.”
But none of that is required for something to be impactful or destructive.
Vast-Faithlessness85@reddit
The problem is most people's understanding of AI is just the LLM they interact with on their phone.
Research machine learning, deep learning and then think about what this will mean for robotics. Following that you may start to understand the threat to jobs and society as a whole.
starsandshards@reddit
I hate using it because of the implications BUT as someone with ADHD and other issues, it really helps me in my job when I struggle with communication or other things. I wish I didn't have to use it but stuff like goblin.tools is a godsend to me when I can't function. I try very hard not to rely on it, though. It's always a last resort. That means I'm not a terrible human, right...?
Kian-Tremayne@reddit
It can do tasks that we currently need a human being to do. For software engineers it doesn’t just “compile code faster” - it will create code for you (well, it’ll cobble something together from other people’s code but that’s what most software engineers do most of the time).
It’s not reliable enough to do these tasks unsupervised though. AI is like having a bright, enthusiastic and sometimes utterly clueless junior working for you. As long as you’re careful about what you ask that junior to do, and check their work, it’s a great force multiplier. And it means that even entry level jobs are going to be more like being a team leader for a bunch of AIs in future.
The danger is where people don’t realise that supervision is needed. Employers doing that will get in the same mess as if they hired a load of junior engineers and no seniors or designers. And employees who think they can get their job done by just “asking ChatGPT” and then play World of Warcraft all day are in for a nasty shock when their boss decides that if he’s OK with getting lazy, sloppy results he can just ask ChatGPT himself and save on payroll.
HawaiianSnow_@reddit
I think AI takes many forms and has many uses. Most people (like myself) who use Chat GPT or Copilot or anything else will have fairly simple uses for it.
E.g. "Copilot, replicate last month's sales report but with this month's sales figures. Suggest some key takeaways or highlight things that need further investigation" – it can do a day's worth of work in a minute. Then my job becomes reviewing and checking the output before finalising the report and presenting it to whoever needs to see it.
Another example: "ChatGPT, I'm travelling to X city. I'd like to see A, B and C while I'm there. I arrive on Monday and leave on Wednesday – suggest an itinerary" helps with planning a trip, saving potentially hours.
In other areas, businesses have massive AI models that allow them to predict weather patterns, or investment patterns, or run 10 million scenarios for potential side effects from a new drug in a matter of minutes. We (the general public) do not have access to these tools and they serve a bespoke purpose.
Ultimately, like any other bit of tech, AI is a tool. It saves time and enables innovation. It's not the answer to everything and (probably) won't solve some of the world's biggest problems but it is incredibly useful and in my opinion will help to propel humanity forward in the same way as the steam engine did, or the car did, or germ theory did.
Don't believe the hype but don't believe the doom. The answer is somewhere in-between.
CheeryJP@reddit
I processed a fuck ton of old client data in about an hour. That honestly would have taken me weeks.
But also I don’t have weeks to do it, so it just wouldn’t have got done.
setokaiba22@reddit
Honestly it’s doing my head in at work, though you could argue it’s a good thing. (Some of the things you’re referring to aren’t really the larger AI advancements occurring behind the scenes - the developments there for tech and such are huge - the things you see, like ChatGPT/Claude, aren’t the same or as powerful.)
But at work it’s eliminated creative thought - people come to meetings having hammered prompts into AI for ideas. While that’s a skill in itself if done correctly, when you press them on how/why, they have no idea, because they didn’t investigate or come up with the idea themselves
dbxp@reddit
The way I see it most of the automation won't come from AI directly but from AI enabling the development of traditional software which in the past would have been too expensive or had too limited a growth potential to get made. Vibe coding has a lot of issues however if you just want to create a little dashboard for internal use it works quite well.
LikeAlchemy@reddit
It can be really helpful if you have a difficult to answer question but don't want to scratch around on Google.
I use Deepseek on occasion (no idea if it's any good by comparison to others). An example I used it for was that I was writing a presentation recently, and was struggling to find a list of criticisms for a type of therapy - I kept wading through pages and pages of unrelated material and no matter how specific my search was, I couldn't find what I needed. I asked Deepseek, and it provided a decent list with references that I could check for authenticity in about 20 seconds.
So, I'd say at the moment, for a lot of people it can be like having someone do your googling for you. Some stuff is easy to Google. Some stuff is difficult. Have a go using it if you ever find you're googling but can't find the specific information you need, but if you're going to use it for anything important, check those sources!
Few-Growth9535@reddit
It can be useful but the biggest thing to try to avoid (in business and life), is complexity. One major problem with AI is that it ramps up complexity of anything very fast. You can find yourself over-engineering the simplest things.
Gadget100@reddit
It's a new tool with unknown potential, so companies are investing a lot of money into it in the hope that they can (eventually) make money out of it.
We're still in the experimental phase, where people are trying AI for lots of different tasks. Some applications work well, others less so, but we won't know until we try.
Once the dust settles, hopefully we'll have a better idea what this kind of AI is good at. Some of the companies might make a lot of money, and others might lose a lot. Understandably, they all want to be in the first category, hence the big push!
(Note: I don't work for any AI-related company. I wrote this myself without AI help.)
(Nitpick: AI isn't needed for compiling code as we've had efficient compilers for decades. However, it is now being used for writing code, in theory saving developers lots of time...assuming the AI-generated code does what the developer wants it to do.)
helpnxt@reddit
You see, an electronic slave is cheaper than a human, and companies don't really care if the end user experience is worse.
finniruse@reddit
This!
SeoulGalmegi@reddit
I use it to prepare some visuals and text for my job - it means I can produce a lot more quantity a lot more quickly, but I'm not sure it's really a huge benefit. I'm just expected to do more, and it's all now so disposable, as everyone knows it didn't take much effort. If it didn't exist I'd put more care into producing less, and the overall quality of the work this material is designed to support would remain the same.
Translation - of text, images, and sound - is amazing.
Other than that, the chat function can be pretty good for asking questions about stuff.
But yeah, most of life remains pretty much the same.
Puzzleheaded-Low5896@reddit
As someone with poor health at the moment, I find it very useful (I get cognitive fatigue very quickly).
It can clarify instructions and help me compose emails, documents etc.
I've had to give it a rule to run a 'counter argument' on some things, so it acts more like a critical friend.
Infinite-Candidate81@reddit
If you want to get nitpicky, they're LLMs (Large Language Models), which are one type of AI. Their answers are, to put it very simply, based on probability. You're talking to a big statistical algorithm.
Alexa/Siri/Home are not LLMs or at least, haven't been until recently. So they're a bad example.
As for AGI, who knows if that's even possible?
JoeDaStudd@reddit
In its current form it's definitely a step up from the older chatbots, but it's still far from perfect.
It's going to be interesting to see when/if the bubble pops. The dataset it's using is filled with more and more AI generated content so it's becoming less reliable while at the same time people are getting better at the prompts and the video, image and audio outputs are getting better.
StGuthlac2025@reddit
I work for a small business, so those here have to be jacks of all trades. In the big multinationals I've worked in before, we'd have people whose sole role was analysing the data we compile through our operations and suggesting improvements.
Now, I can't dedicate the time to do the real in-depth stuff here, but I have been able to leverage Claude and ChatGPT to make models of our delivery network to reduce costs. If I'd been doing that purely by myself it could easily have taken a couple of months. Instead I got it done in three days.
poptimist185@reddit
What hype? The current narrative is that it sucks and is a potentially gigantic tech bubble, so you’re with the majority.
Glittering_Box4815@reddit
Not much - you've hit the nail on the head. It's mostly tech companies having invested ungodly amounts of money into it, and by extension, we need to be excited by it to help them recover costs.
lan0028456@reddit
Given you know all of that, you are not missing much really.