How aggressive is your AI adoption at work?
Posted by Traffodil@reddit | AskUK | 286 comments
I work for a well known tech company in the UK. In the last year or two we have implemented Copilot and ChatGPT on our network, and they can access and interpret all our documents, among other things. It really has its uses, but it also gets things wrong.
Over the past few months, our leadership have been getting pushier about us using AI to help with our jobs, and it has now got to the point where our performance reviews will be ‘tarnished’ if we can’t show how we have implemented AI into our working lives.
On one side I can see that it would have taken a lot of money and work to implement. On the other side I can’t help but wonder if I’m just training the AI agents to eventually take my job.
Is this the standard for businesses who primarily exist through computer screens now?
DingoBingoWimbo@reddit
No one at mine uses it much at all
Opening_Succotash_95@reddit
A lot of the managers blatantly use it to write their emails. It's annoying to receive.
Traffodil@reddit (OP)
Our big boss admitted all his emails to us are written by AI. He always made a point of praising us for our hard work and dedication which has now lost any influence or gravitas it may have once had.
D0ntEatPaper@reddit
Lmao wtf are they even paying him for then? It feels like AI should be taking over their jobs instead
3_34544449E14@reddit
There's a pretty strong chance that middle managers get replaced by AI before the junior staff who do things. A lot of middle management is just rules-based decision making: if this project is late then add more resources, etc.
davehemm@reddit
Reply with AI response
Top5CutestPresidents@reddit
Great idea sir 👍 I’m really feeling your vibe 😎 now you’re cooking with gas!
Firm-Resolve-2573@reddit
And honestly? That’s growth!
rdu3y6@reddit
Make sure to leave in the footer from the AI asking if you'd like to rewrite the email in a more formal tone or if there's anything else it can help you with. So he knows you're embracing AI like he is.
No_Quality_6874@reddit
Because he emails with AI? Sounds a bit petty tbh.
One of the most time consuming tasks is sending emails with potential legal consequences. AI makes things much easier to write, especially when HR and ETs (or things that could turn into ETs and litigation) are involved.
dospc@reddit
Jeeeesus. No. What the fuck. The emails with legal liability are the ones you need to have 100% manually combed through!
SchoolForSedition@reddit
I edit legal texts. The amount of legal liability that would be incurred by unedited AI is vast. It produces solid grammar and syntax but its content is often dodgy. Drafting lawyers are starting to adopt AI reasoning ("copied from the internet so it must be right"). Just wow.
No_Quality_6874@reddit
I also work with legal. I'm not talking about legal text, but communications that can be used in litigation with an employee: "Soften this so it sounds supportive. Bullet-point any areas raised that I have missed and any areas that could cause issues at the ET." Then engage your brain and don't take it uncritically; it will and does save you hours.
Pyjama365@reddit
Absolutely everything I've seen in a legal context is riddled with errors.
Like people try to 'research' things themselves, but instead of doing actual research to find out actual specific facts from clear sources, and then collating their facts and comparing them to their circumstances to make their own conclusion, they put the whole context from their side into AI in one go, and what comes out the other end is garbage. I suspect part of the problem sometimes is that they will tell it they're "in the UK", when we effectively have 3 different legal jurisdictions, and it smushes together info from all 3 of them, and they don't even realise their prompt was nonsensical.
jiggjuggj0gg@reddit
It’s not exactly a heartfelt thanks if you can’t be arsed to even write it yourself.
volster@reddit
Surely the same is also true of the response - if ai is an invaluable tool to prevent consequences rather than a liability that lets the manager be lazy and phone it in.
Then responding to their ai email with another ai email is equally an invaluable tool which ensures you acknowledge all relevant points while also maintaining a professional tone to someone who potentially poses an existential threat to your career.... Rather than being petty or disrespectful of their time.
No_Quality_6874@reddit
Yup
InvictaBlade@reddit
"Here are 3 bullet points I need to convey in an email. Please draft me this email. It should be at least 500 words"
dospc@reddit
"I have received this email of circa 500 words. Please summarise the key messages as 3 bullet points for me."
InvictaBlade@reddit
Environment -1hp -1hp 1hp -1hp 1hp -1hp 1hp -1hp -1hp -1hp 1hp -1hp 1hp -1hp 1hp -1hp
anoamas321@reddit
So there is a pattern at my work. Big boss uses AI to send a big email, then we use AI to summarise said email, presumably back to something similar to his original prompt. It's a total waste of electricity and time.
Quiet_Gur5949@reddit
I’m too lazy to open up ChatGPT and ask it to do the job; it’s quicker to type the email myself.
AmeliaOfAnsalon@reddit
Literally. Using your brain is easier most of the time; I don't understand this fucking epidemic.
DPH996@reddit
Depends on your line of work and who your stakeholders are. Finding the right tone in an email to a Board of directors, whilst not an insurmountable task, can be made quicker by getting a foundation drafted by AI, rather than going back and forth editing and reshaping a carefully constructed email yourself for 20 minutes.
Similarly, I find it particularly useful where I’m trying to convey complex information and end up just word-vomiting that into an email. In general I’m pretty good at cutting those down to be more succinct, but it’s a process that takes time - and frankly in this world where shareholders want more for less, any edge I can get time-wise is extremely valuable.
AI certainly isn’t perfect, but it’s brilliant for reframing things or setting a foundation. I just wouldn’t ever rely on it completely, or it ends up sounding rigid or getting things wrong - that’s where the human element remains important.
AmeliaOfAnsalon@reddit
It's just like - what are we even doing here? Why do we need a board of directors, or emails to them, or whatever? What does this do for society? Everything is so fucked
DPH996@reddit
To explain to them what risks their business is exposed to? I’m not sure what you’re even arguing against? AI shouldn’t be thinking for us, but if it can help to succinctly communicate a point, that’s a good thing.
AmeliaOfAnsalon@reddit
dead internet theory is overflowing into real life
longhairedfreek@reddit
Yesterday my boss sent out the prompt and response alongside the fake poll for a meeting, which suggested Saturday or Sunday for the meeting. Which in Germany is a big no-no.
TheZag90@reddit
I get a lot of AI emails from colleagues and I just stick them into my well-trained custom GPT and send it right back.
If you can’t be arsed to write your email to me, I can’t be arsed to read it.
I don’t mind when someone uses it to refine their messaging but I can spot a lazy “do it for me AI” message a mile away.
Unusual_Sherbert2671@reddit
Some of the email replies I've seen are ridiculous, over the top and wordy
mcnoodles1@reddit
You see, this sort of thing is why people really are missing the value of what AI can do.
I've fed all the non-sensitive emails I've sent into a Copilot agent, and when I need it to write me an email it can mimic the way I would naturally write. Warts and all. Simple solution.
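The trick described here is essentially few-shot prompting: include your past emails as examples so the model copies your voice. A rough sketch of how such a prompt might be assembled (the sample emails, prompt wording, and function name are all illustrative, not from any real Copilot setup):

```python
# Few-shot style mimicry: past (non-sensitive) emails become examples
# for the model to imitate. All samples and wording here are made up.

past_emails = [
    "Hi both, quick one - can we push the review to Thursday? Ta.",
    "Morning! Attached the Q3 figures, shout if anything looks off.",
]

def build_prompt(request: str, examples: list[str]) -> str:
    """Assemble a style-transfer prompt from example emails."""
    shots = "\n---\n".join(examples)
    return (
        "Here are emails I've written, warts and all:\n"
        f"{shots}\n---\n"
        f"Write the following email in exactly that voice: {request}"
    )

prompt = build_prompt("politely decline the Friday meeting", past_emails)
print(prompt)
```

The more (and more varied) examples you feed in, the closer the mimicry, subject to the model's context window.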
SchoolForSedition@reddit
If you can develop a sense of humour about it, it’s hilarious.
I can see that might be difficult if you are not facing compulsory retirement in any event.
Ultimate_os@reddit
My manager even makes strategic decisions and runs feedback through AI. 😅 it’s ridiculous.
Supergoose5000@reddit
Cause it's easier.
doctorace@reddit
Yes, many office-based businesses are setting employee mandates for AI use and including it on performance evaluation.
Namerakable@reddit
That feels so sinister to me, being forced to train something that will be used to ruin so many lives and make workers obsolete. And you get punished for not participating in your own demise.
doctorace@reddit
We are wage slaves. We are a cost and we have no freedom.
The only ways to stop it are to organise or regulate. ✊
Quothcraftia@reddit
We are getting new job descriptions with the use of AI added in as mandatory, not as an optional tool.
It's currently used by a small group and it turns a note that could be 2 lines into a paragraph of 10-12 that's just unreadable in the moment you need it.
SiSkr@reddit
All in.
Engineers have pretty much unlimited Claude Code, non-engineers have ChatGPT. All with access to our documentation, repositories, diagram boards, project trackers, CRM, observability, you name it.
We're encouraged to use it and actively think AI-first. If there's a problem, we automate it with AI and think of how we can make it as safe and deterministic as possible. Minor regular tasks or fixes? Automated agent picks it up and does it. Workflows and process? Agent summarises it and comes up with recommendations to consider.
I haven't written a line of code manually for... probably a couple of months now - yet the quality hasn't dropped and we're more productive than ever. The big difference, though, is that we didn't just go "yeah, knock yourself out and vibe everything". We got proper training and leadership is actively striving to make agentic workflows trustworthy.
Dunno where AI will end up, but I'm having a ton of fun and learning a lot, and I finally have the headspace to think about bigger things than a one-line bugfix.
SnooMacaroons2827@reddit
Similar here (US tech company in London). We're in the race to get it all working faster than our competitors & customers are. It'll either be shit or bust within 12 months but on a personal level I'm building more, faster than I've ever done in my 25 year career. At the current rate of change I know I'm doing myself out of a job, as it currently exists, eventually.
Safe_Parsley_2208@reddit
Interesting to hear about success cases as well. I see and hear about pretty dim, badly implemented, quality and effort destroying cases
DeepestShallows@reddit
The big question with AI should be: how does this fit in to your QMS?
And there are good answers to that. There are also terrible answers.
BigSkyFace@reddit
This sounds a lot like what my company thinks they're doing, but they haven't actually provided training on how to use any of the tools. Naturally this means we've got a real mix in the office, with some people barely using AI tools or not at all, and others using them for absolutely everything. I think because of this lack of training, a number of those most keen to use AI for everything don't understand enough of what the AI is outputting, and just trust that it's done the work correctly because the result looks right.
Too-Late-For-A-Name@reddit
Exactly the same at my company
JoeBagadonut@reddit
I work in IT for a major UK company. Our COO has mostly stuck with a cautious line of "AI is really exciting but we need to see actual use cases for it". We do use Copilot a bit but the steer currently is very much that it's there to supplement our work rather than replace it.
Safe_Parsley_2208@reddit
This is quite refreshing. Honest leadership.
Realistic-Muffin-165@reddit
I also work in IT, for a FTSE 100 company.
All the developers use Claude code.
Copilot is heavily pushed for every one else. I admit the meeting transcription is pretty good and ignores all the sport related banter that usually takes up most of the time.
Grumblefloor@reddit
FTSE 100 dev here, we're being actively pushed to use Claude with no obvious budgetary restrictions. It's massively increased the amount of work we can get through, but it does still make mistakes necessitating manual oversight.
It's looking like "AI usage" may become something we are measured on in the future.
highjohn_@reddit
I’m an engineer at a financial services firm. All of us who code use Claude as well, but the rest of the company is pretty limited in terms of AI use. Management is pretty apprehensive about giving “normal” people full access to AI, and to be honest that’s probably good.
EarlDwolanson@reddit
The fear with the "normal" people is that then they use their personal accounts...
warlord2000ad@reddit
Over at a previous gambling company I worked at, general staff have access to AI, but it's meant some have used it to write apps that breach company policy. No recording of meetings allowed? No worry, I'll record my speaker output and transcribe it, all done with an AI prompt. These little apps are going to cause trouble one day.
EVERYTHINGGOESINCAPS@reddit
You're never going to be able to see valuable use cases without allowing people access to find out when it can be helpful.
greggery@reddit
They're pushing copilot quite heavily at mine
Racing_Fox@reddit
A mix between ‘use AI where you can’
And ‘absolutely under no circumstances use AI to do your work’
So what are we supposed to use it for?
ThanksOld1698@reddit
At a pharmacy, none yet, but I'm convinced the new system where the pharmacist checks the validity of the medication and directions is going to start feeding their decision information into a dedicated LLM. We have a new "robot" that generates a list of stuff that needs picking; then somebody walks around grabbing it and scanning it as they go with a gun thing; then at the robot somebody scans each item individually and it prints a label and tells them which bag to put it in.
I'm pretty sure at some point in the future they'll remove the pharmacist and it'll just be a computer telling minimum wage workers what to do.
Ok_Net4562@reddit
It's more desperate than aggressive. We're haemorrhaging money and hundreds of layoffs are on the way. So the higher-ups have put a few of us on AI courses to try and save/automate some of the work.
WayInevitable2491@reddit
They only allow us to use Copilot, which is rubbish; it can’t give you decent code, it can barely point you in the right direction on a question lol
EuphoricFly1044@reddit
I work with an architect (tech) who uses it all the time. I saw Copilot stats recently - prompts per employee per month - and the average was about 30-40. This person's was 780 a month...
And generally you can easily tell. So I don't bother reading them.
cocacola999@reddit
I'm trying to review other architects work currently and feel this. A one pager shouldn't be a 40min plus read with duplicated sections and sales pitches.
Jeoh@reddit
Software engineer at a tech company. We're supposed to use it a lot, but we have personal budgets and somehow there's pushback when you actually use it and reach your limit.
I think it's offensive to use LLMs for interpersonal communication.
YetAnotherMia@reddit
Do you feel like in 5-10 years agentic coding will be able to do a large part of your job? Where do you feel the industry is going?
Former_Intern_8271@reddit
I'm in the industry. It's good for writing fresh code but not so good at maintenance. It's getting better, but the improvement is slowing; the innovation seems to be more around the interface and putting the intelligence into action in different ways, rather than improving the intelligence driving it all.
It's hard to see how this all works out, sometimes I spend days figuring out a bug that AI can't fix without causing another bug, simply because it can't understand the requirements, even though they're perfectly well defined.
I am worried about the future from an emotional point of view, but the rational side of me can't shake this: if engineering gets faster, it gets faster for everyone. Businesses are going to accelerate and need 10x the output to keep up with competitors. So if each engineer is 10x as productive because AI is picking up the low-complexity, highly repetitive work, but 10x the output is required... we're back to the same position.
The concern is that we could reach the natural limit of how businesses can expand, if they have all this new engineering capacity to produce more features and services, that doesn't mean legal, or other parts of the business can keep up, we can only go as fast as the next bottleneck, I guess businesses are going to find out what that is.
No_Ring_3348@reddit
You must realise that even if it can accomplish less than half of what a mid-career developer can, this is a fantastic value proposition as it's <10% of the cost and never sleeps, and this goes for almost everything else LLMs can do, which is a lot.
Jeoh@reddit
<10% of the cost? Have you seen the cost and usage of models these days?
No_Ring_3348@reddit
£220/mo or so at the top end isn't it?
Begalldota@reddit
£220 a month is a fake cost designed to drive market adoption. There’s plenty of evidence that some people were/are (models keep getting silently nerfed) pulling as much as £5k/month compute out of those subscriptions, and it’s completely unsustainable.
You’ll see the price go up and up and up as they try and lose less money.
Former_Intern_8271@reddit
And they're still losing money, costs are going to go up dramatically at some point
Former_Intern_8271@reddit
I'm not sure what you're disagreeing with? I never said it wasn't good value; I said it will increase output.
slb609@reddit
I agree with this. Though I do find it’s stupid quick for simple bugs like misspelled variables and missing brackets etc, that are just a pain in the arse to find. And I use it sometimes to explain why a behaviour is happening, not necessarily how to fix it.
Anyway - as long as the customers are human, you’re going to need devs who can actually converse with them and work out what the ACTUAL requirements are, not just what’s written down.
absx@reddit
The way I see it, the software developer's role shifts either right, to deal with verification of the features that are now cheap to produce quickly - but this part can also be automated to a good extent. More likely the devs will end up shifting left, closer to the product owners and business stakeholders, to help them create good specs and decide what's worth implementing. Forward-deployed engineers integrated into the business functions, making sure their hopes and dreams get implemented in a safe and compliant way, and according to the organisation's architectural framework.
Former_Intern_8271@reddit
Software engineers already spend most of their time dealing with the fallout of one part of an organisation not working well with another and building incompatible systems. I don't see that changing, because it's human nature; someone will need to be there to fix it. I think demand for engineers will slow, but there's still room for hiring to slow down, and for people naturally leaving or shifting to other areas not to be replaced, before there's talk of huge redundancy. Businesses aren't hiring like they did during the covid peaks, but most seem to be hiring still, just at a slower pace.
I don't think headlines about places like Facebook cutting numbers reflect the UK as a whole. We don't have many of those massively funded top tech companies that pay lots of people to work on experimental or exploratory projects; people in tech in the UK tend to work for more boring businesses that actually make money. They can't just cut an entire division on a whim, so shrinking headcount would be a slower process.
YetAnotherMia@reddit
Thanks, that was an interesting read! I do wonder if software is becoming commodified now though.
BlankProgram@reddit
It's already the case that Claude is writing most of the code and the job of an engineer now is to architect and guide it. My opinion is that LLMs still have the ninety-ninety problem, but they have definitely increased output massively, at least in my experience and the experience of colleagues in the industry. I find them incredibly useful for bug hunting and automating well defined and precise tasks, but they suffer from hallucinations and will go really off the rails if you let them run for a long time unsupervised. They are also really dangerous and annoying in the hands of bad engineers because they will just generate a bunch of nonsense and they don't know how to correct it, so you end up with PRs with enormous esoteric descriptions and +5000/-3000 lines that they can't explain.
I don't think anyone knows where the industry will be in 5-10 years. I'm of the opinion that hallucinations are an unavoidable characteristic of LLMs, and if I'm right then engineers will still be required for a long time to come. If I'm wrong, then all bets are off. If these things can really solve coding then the world will be so different in 10 years that it's not really worth thinking about what software engineering as a career will look like.
dl064@reddit
Yeah.
What gets me about Claude is that sometimes I, as a human, have neglected something so obvious to me that I don't state it to Claude, and that context upends what it's been doing.
As you say: it's definitely a skill to use well or not.
intothedepthsofhell@reddit
The higher ups used to press me to consider outsourcing as they could do X job for £Y. I always refused on the basis that if you can't write a decent detailed spec with clear acceptance criteria then what you get for £Y won't be what you wanted. And you'll spend £Z trying to correct it to what you had in your head.
If you can't explain what you want to a human, how will you ever explain it to a computer?
dowhileuntil787@reddit
What gets me is the way people type a few bullets in, then get LLMs to pad it out to a five-page diatribe, then the receiver has to use an LLM to distill it back down.
I’m going the opposite direction: I paste my emails into LLMs to see if I could make it more concise. Might even revert to cave speak. Why waste time say lot word when few do trick?
ethanxp2@reddit
Our MD uses AI to turn his 4 bullet points and a call recording into a 15+ page "review document" to pass to us about the client's requirements. Of course, being based off 4 bullet points and a short call, the AI makes vast amounts of assumptions.
We are then supposed to use AI to sum it up so we don't have to read the whole thing.
In short, the client then gets quotes/proposals where half of it isn't even what they wanted. I just call the client myself now; it's easier and quicker.
AmeliaOfAnsalon@reddit
Might as well just write it to the point yourself
WavryWimos@reddit
Doesn't help those of us that go on long rambles. Got ADHD, I could happily ramble on for half a page about a topic I could have covered in two lines. So sometimes it's nice to ask Claude if my waffling could be made a bit more concise — for things like documentation, or some other technical writeup where clarity and ease of reading is more important than personality.
I'm not a massive fan of LLMs but they're not going away soon and just outright handwaving them away doesn't really help anyone. They're a tool like any other and should be used carefully IMO. But I also recognise that this might be an unpopular opinion and will attract a lot of negativity since it's such a touchy subject at the moment.
HollowForgeGames@reddit
Software dev here too.
We use it quite a bit. Don't like specs written in it.
One dev got a "support email" as he wasn't using it enough.
I personally know a Dev that joined 6 months ago and hasn't written a single line of code by hand
abzftw@reddit
Which llm?
Estrellathestarfish@reddit
And it's pointless time wise. Once you've told it what you want and checked what it produced, you could have just written the email, with the bonus that it sounds like a human wrote it.
sprunkymdunk@reddit
That's what I don't get. Like I have no qualms using AI when it saves me time, but emails aren't that.
xxx654@reddit
The uncanny valley on LLM emails is something else. I agree. Offensive.
Revolutionary-Act833@reddit
Absolutely this. LLM anything. It's like reading a page of pure entropy. Very strange feeling that you've read something but not actually absorbed any information.
Tundur@reddit
I would mostly agree, but as a dev there's so much of our communication that is just sharing lots and lots of information that doesn't neatly fit into the English language and which nobody really wants to be personal.
Like, handing over a half-finished bug investigation. My notes will be a dozen pages of insane rambling that made sense in my brain in the context of the fifteen tabs I had open, but would be gibberish to a colleague without guidance. You can paste that into Copilot, get it to ask some clarifying questions, and 99% of the time the output is a perfectly constructed report. Yes, it reads like soulless corporate jargon, but it's accurate and clear (i.e. why said jargon exists in the first place).
Taking complex mental context and working out how to download it all into a coworker's brain is a difficult skill, and I've met very few people who can do it excellently. Gemini can.
Gulbasaur@reddit
I once turned down a prospective supplier because their emails were all AI. Asinine, waffling language that often didn't answer my questions but kept commenting on how they wanted to keep it personal.
After the second time I asked directly for pricing information and got a bizarre but cheerful non-response back, I quite firmly asked them to drop communication, because saying you want a personal connection while using AI to reply to your emails is hypocritical.
I then got an extremely apologetic email saying they were new to marketing and as a small, young company they didn't realise how it would be perceived. He seemed genuinely surprised I could tell and baby, I could tell.
Ziphoblat@reddit
Could not agree more.
I’ve done so exactly one time so far. Had a colleague I strongly disliked. She would throw her own gran under a bus for 20p.
When she left, I just could not bring myself to write a “heartfelt farewell” message in her goodbye e-card, but I knew that not doing so would be noticed and looked down upon by my colleagues. Somehow copy and pasting a heartless generic AI generated “best of luck for the future” message was infinitely less painful than preparing one myself.
Mystrasun@reddit
Heavy use of Claude to expedite coding tasks, and we're all pretty cool with it. No roles have been made redundant as of yet. The CEO is cognisant of how AI can damage entry level roles, and has made a point of actively encouraging the hiring of interns and people on work experience, and our team recently took on a student intern.
There is heavy encouragement to explore AI use to improve workflows, but beyond coding tasks and automating some of our documentation write-ups, we haven't seen any direct impact.
One of our interns did recently craft a role for herself as a sort of AI correspondent though, and in our biweekly meetings she usually gets a segment to showcase her findings, and tbh it has been pretty cool to see the things she has dug up.
nobodyspecialuk24@reddit
It’s taken on a lot of the roles of people who’ve recently been made redundant.
The problem is, the teams have been reduced to just 1 or 2 people, so they don’t have time to check the output from Claude; they just pass it on, blindly.
Letter_Effective@reddit
Isn't the risk that once companies become fully addicted to AI and lay off enough of the workers, OpenAI/Anthropic will charge them more and more to use their premium service?
Tundur@reddit
Then they use Gemini. And if Google start charging more, they use someone else. It's a very competitive market and we have workloads dynamically routing between all major providers based on our own cost appetite.
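The dynamic routing described here can be as simple as picking the cheapest healthy provider within a cost ceiling. A minimal sketch of that idea, with made-up prices and availability flags (real routers would also weigh latency, quality, and rate limits):

```python
# Cost-appetite routing across LLM providers. Prices are illustrative
# pence-per-1k-token figures, not real rates.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float  # illustrative price, pence
    available: bool = True     # e.g. not rate-limited or down

def route(providers, max_cost=None):
    """Pick the cheapest available provider within the cost appetite."""
    candidates = [
        p for p in providers
        if p.available and (max_cost is None or p.cost_per_1k_tokens <= max_cost)
    ]
    if not candidates:
        raise RuntimeError("no provider within cost appetite")
    return min(candidates, key=lambda p: p.cost_per_1k_tokens)

providers = [
    Provider("gemini", 0.8),
    Provider("claude", 1.2),
    Provider("gpt", 1.0, available=False),  # e.g. currently rate-limited
]
print(route(providers).name)  # cheapest available provider wins
```

If one provider raises prices, its entries simply stop winning the selection, which is the commercial pressure the comment is describing.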
worotan@reddit
No, they’re a product. That’s why they’re provided by private companies, not public utilities. They’re just acting like a utility to get you to trust them, so they can put the prices up.
People are so stupid.
nobodyspecialuk24@reddit
In modern capitalism, they consolidate down to a few players and if they are competitive on price it’s because they all offer a sh!t service.
See UK mobile phone providers for more details.
audigex@reddit
It’s a very competitive market currently while they’re all in a growth phase and happy to lose tens of billions to get customers
But when one goes bust because it can’t transition out of the growth phase into profit, and another couple get bought up by Google and Amazon, we’re gonna see very little competition by comparison
No-Pack-5775@reddit
We're not at that stage, but we would probably be recruiting a lot more if not for the productivity gains. Execs also expect the productivity gains, so they're reluctant to approve new recruitment.
Bad time to be entering the workforce imo
nobodyspecialuk24@reddit
I’ve seen the output from people, and it’s obviously wrong in places, but I’m not going to go out of my way to help anyone there anymore, I’m leaving as soon as I can.
It’s clearly on the skids.
Michael_Thompson_900@reddit
My org is pushing for it, but is also extremely cautious. I’m yet to see the benefit. It’s painfully obvious when colleagues use it, and often the output is wrong, not detailed enough or just misses the mark entirely (its ’output for the sake of output’).
My job involves me communicating with people all across the business, and I do find it useful to ask ‘who looks after X in the finance team?’, but still often it is wrong.
Klakson_95@reddit
AI is a brilliant tool, but like any other tool you have to know how to use it to get the right outputs.
fuji44a@reddit
The use of it in my industry has become an issue, due to the rush to use it; the emails are unnecessary, unclear, and sometimes contradictory.
The bigger problem is the use of it to build patterns and blocks for garments; the information, grades, and specs used are just out of date, based on older systems that were already flawed.
Any pushback is countered with AI emails telling very experienced techs to trust the AI, and with being called a Luddite by upper management who have never touched a pattern.
I-live-in-room-101@reddit
We keep getting asked why we’re not fully utilising our Teams copilot licences… the unanimous reply is “because it’s shit and I would spend more time checking and correcting errors than just writing the fucking email or locating the file I need”.
LieutBromhead@reddit
It's so so bad across every microsoft product, but the copilot integration in PowerPoint is the utter shits
ace_rimmer1049@reddit
I'd rather have clippy back than this shit!
I tried to use the excel copilot add in for help with a formula in my spreadsheet and it said "sure, just tell me where your spreadsheet is".
And if you ever challenge its answers, it immediately capitulates and tells you you're right and it's so sorry and actually here's the right answer. It's like that habitual liar kid that every school has!
socratic-meth@reddit
I would probably just make something up about how I use it. Maybe even ask chat gpt to make something up for me.
zzkj@reddit
Unfortunately the corporate plans record usage per user so they know.
Stock-Bullfrog@reddit
There was a period where people resisted, then the last 6 months since a round of layoffs, AI adoption has exploded internally. Everyone is hitting their usage limits and getting pushback when they ask for more credits :D
Daily churn of Claude slide decks full of hallucinations, and everyone building their own data tools and drawing dodgy conclusions. It's the wild west at the moment.
dunzdeck@reddit
"Tarnished" is a strange choice of words... did they put it like that?
100_Percent_Dark@reddit
Oh we're going heavily into it. From talking to the CEO, reviews will be affected by usage.
I work in IT and I'm implementing Claude cowork for the company. It will be great for the type of work they do: finance tasks, gathering info from multiple sources and making reports from it. The sort of thing I expect AI to take time over, so it sits processing for an hour, doing what would take a human a few days or weeks.
Getting AI to write your emails doesn't even register on the usage scale. If you're using it to process actual data, usage will spike.
shark-with-a-horn@reddit
Nothing sums up moronic management like measuring performance with everything but actual performance
FumblingBlueberry@reddit
Engineer here. Successful personal adoption is now going to be a success metric in reviews. It's all changing so fast that in a few weeks my thoughts and opinions on the matter will be several commits behind. Our team has recently identified a lot of 'AI exhaustion': it's all we talk about, new ways to use it, new workflows, sharing what we've found useful this week, show-and-tells with the business, half-day AI hackathons... There is a very specific kind of burnout approaching.
Unusual_Sherbert2671@reddit
I review live contracts. I have used AI to summarise things for me and it gets it wrong often.
"Yes, you are right, good work!" I can't trust it.
intothedepthsofhell@reddit
Where I have any influence I'm pushing back. My concerns are long term that
1) New recruits (software engineering) are not learning the core skills you need to become skilled. All jobs are like a pyramid and you need a solid base of understanding to build on, that you only get by thinking, working things out, and making mistakes
2) Once we're all hooked on AI we're going to be dependent on a few giant non-UK tech firms who can either hike prices or pull the plug. We've already done this with infrastructure - as a country we'd be screwed if Azure or AWS decide to play dirty.
I've told our junior devs there will be two groups in the next 5-10 years: those who know how to type in a prompt box, and those who know how to code. Pick which one you want to be in.
Pyjama365@reddit
Thankfully minimal. I was very glad to see a firm-wide email saying "it should be obvious, but you should under no circumstances put any client data into any AI system, because that is a data breach under GDPR"...
Confident_Yak_1411@reddit
I’m aware that the public sector (police, NHS etc) in my country, the UK, is using AI (Copilot) on a daily basis. I know this because my family works for them.
Can you explain to me how they’ve done this in a way that isn’t a massive data breach? Are they running the model off their own servers?
Pyjama365@reddit
Sorry, as far as I'm aware, Copilot for instance is OK because it's approved in the same way other Microsoft stuff is. If your organisation has properly vetted a system, that's different (although you may wish to see my other reply on here about a pilot of AI in a hospital setting).
I should have been more specific in my first reply here - the email was more about the risk of people asking other models than CoPilot (which seems to be available if we want it), like on their phone, to write minutes or to find answers to questions that may involve potentially identifiable data.
TheInquisitivePie@reddit
If your workplace pays for O365, then Microsoft secure everything to the same level as SharePoint. They also don’t use any data entered to train their models.
If there’s an “Enterprise Grade Security” badge in the top right of the CoPilot app for example, then that’s what this means.
Insila@reddit
Yeah, I was thinking that the GDPR compliance team... person... must be crying right now.
The biggest issue is how much shadow AI is used by employees, especially free versions of AI that use your input as training data...
elchet@reddit
Orgs who are genuinely taking the reins on this will strike deals with AI providers over data governance. Eg strict limits on where and how data is used far beyond the standard “if you put it in a conversation it’s ours” approach.
Bossman_Mike@reddit
That happened at our place. People were somehow using the public ChatGPT instead of our curated private workspace... from our corporate machines. That's a fuck up and two-thirds.
Insila@reddit
I have currently closed my eyes until IT actually starts qualifying AIs.
Hell, I've heard from friends that it's normal to use your own private subscriptions for AIs as well.
Simply closing down access to all but Copilot, before providing a viable alternative, would lead to torches and pitchforks, and people will go to great lengths to bypass the block.
theraininspainfallsm@reddit
Yep. My work has banned LLMs. Not putting any data into it at all.
ross-dirext-words137@reddit
It goes as far as encouraging Copilot. It's good for some things.
AirconGuyUK@reddit
Went 0 to 100 real quick. Full steam ahead now.
DeltaMikeXray@reddit
Head of commercial uses it to ask me to generate reports for them, which turn into ten-pagers. I use it to condense those ten pages into one paragraph, then follow up with a call to confirm that's what they want.
Accurate-Herring-638@reddit
I'm a university lecturer. Of course many students are adopting it enthusiastically and get AI to write their assignments for them. My main usage is therefore asking co-pilot to check bibliographies for non-existent references. Other than that, I sometimes use it to gain ideas for in-class exercises, but that's about it. I enjoy reading and writing, it's why I became a researcher in the first place, so I don't have any desire to outsource that to AI.
Ok-Humor-5672@reddit
I work in IT in higher education.
Students use it an alarming amount. It's actually really worrying.
Staff seem to be polarised into a few different pots.
I actually teach people about this subject as part of my role and one of my suggestions is to outsource work, not thought.
CongealedBeanKingdom@reddit
Not at all. I don't work in tech and AI wouldn't be able to do my job, so it doesn't really affect me. I can use it if I want, but I'd rather use my own brain tbh.
ceehred@reddit
It's certainly been bigged-up here these past few years (software company), and all departments are using it - including in the products we sell. Customers have become very wary and want reassurances on that latter point.
As a senior developer using GPT for now (moving across to Codex), I at least give it a chance on every occasion, but I find it quite lacking in producing anything near production-ready results, plus it's too arrogant in its ignorance of the specifics of the kinds of problems I have to solve. It is not bad at creating tests and analysis programs (provided I keep my eyes closed), and it is good for prototyping things in a language I'm not an expert in, but even then I can see the results are far from production quality.
Not expecting it to improve too much for me in the short term, since I'm in quite a niche area with a corporate subscription that shouldn't be leaking our stuff to train for anyone outside the organisation...
FogduckemonGo@reddit
People are adopting it on their own, but I can see it being pushed on my workplace by corporate in the near future - since it now proudly calls itself an "AI-first" company.
vaskemaskine@reddit
Software developer here. We have adopted Gemini to assist reviewing our GitHub PRs, and the vast majority of our devs/devops use Claude daily.
Specialist-Top-406@reddit
I find it baffling to make AI part of performance. Ultimately it’s there as a supporting tool. And my boss uses our AI for everything, without any quality control, and it ends up making things so much less efficient.
Our company AI system is also less advanced than public-facing AI systems, so it’s frustrating to use because I’m used to more advanced systems. And it’s annoying to know I can’t use public platforms that would actually produce better work, and have to settle for something that is years behind.
I feel like people are being tasked with this more so AI can be embedded into their performance reviews. But it feels like such a risky and loose thing to monitor. Ultimately anyone can say "I used it", and if it’s used well it won’t present as being blurted out by AI.
And offering public platforms, or encouraging AI use generally, means data breaches and an inevitable mess for GDPR. Which someone will eventually be tasked to clean up.
It’s useful to utilise AI, but enforcing it as a performance metric will only make people use it for the sake of it, and quality of work will drop because it requires so much more editing and reviewing.
My friend’s boss tried to publish their business model straight from ChatGPT to their website, and if I saw a copy-pasted AI job like that I’d think this person is not qualified.
It just doesn’t seem like the right approach to enforce it, rather than having it as a learning and development incentive.
Traffodil@reddit (OP)
This is part of what gets me. I’ve been tasked with creating and using AI agents more. In essence I’ve been told what the solution is, now I’ve got to figure out what the problem is!
Specialist-Top-406@reddit
Exactly! And I guess what is the business case based on? Justification of cost? And if it is used as a performance metric, is there clear governance that states what good or bad use of AI is?
I’m just so confused how this can be measured because it seems like companies are just saying use it or else. And if my company said that, I’d be like I do use it and it’s not always useful or accurate. So if I have to use it and it doesn’t actually meet my desired outcomes, do I have to use it first and then do it again properly?
Like I’ll use it when it’s relevant and if there’s more to explore- provide a training course for it. But if the ask is just use it more, then this is like saying before you meet a deadline just shut your laptop for an hour.
What do they want??? I don’t get it.
TalkDirty2MyIVR@reddit
I work for a CCaaS company, and AI is embedded into almost all of our features. Recently there's been a massive push for agentic AI agents too, which we went to market with.
The company has pivoted to AI heavily. We’re being encouraged to use the corporate ChatGPT account daily in our roles to increase efficiency and reduce the manual work needed to build demos. We’re told to lead with the AI features in our products, and since we released our agentic AI offering we push it into every customer conversation where the business use case could realise some value from it.
Every customer we speak to is chomping at the bit to implement it, and a lot of the conversations are about how they can use it to downsize their contact centre headcount, as well as to improve the efficiency of the human agents who don’t get outright replaced.
It’s pretty crazy and a worrying sign of things to come, if I'm honest.
incredible-derp@reddit
As a software dev, I'm supposed to use AI extensively.
But as a software dev I must say AI is absolutely worthless when it comes to coding anything that requires even a little intelligence. I spend more time reviewing the output it generates than I get back in actual value.
I can guarantee that my Google search is better than AI results any time of the day.
LieutBromhead@reddit
We are mega on top of being given all the AI tools at my job specific to my role. But for general org use: 1) Copilot is a piece of shit with its integration across Microsoft products, and 2) general-use roles (non-engineers or technicians) are all idiots who don't even know how to set their out-of-office in Outlook.
Paradiddles123@reddit
My work has introduced Copilot pretty hard and I find it bizarre: I get eloquently written emails from people who can hardly string a sentence together. I want back the time it took me to learn how to write.
I have to question the mindset of spending lots of money on business consultants who just use AI to create things.
OdernAle@reddit
Now, when people email me asking where a document is, I tell them to stop wasting my time and ask Copilot instead.
jb28737@reddit
UK software eng, working for a US based company. We are pushed hard to use claude for as much stuff as possible. Guidance flip flops weekly from "prompt harder" to "woah those prompts are costing a bit". It's a bit exhausting but I'm gonna ride this wave and see where it spits me out
JorgiEagle@reddit
Big bosses have purchased expensive AI subscriptions because that’s what everyone else is doing.
They now have to justify that expense to the shareholders/their bosses.
Only way to do that is slow adoption by workforce, so must resort to bullying
First_Folly@reddit
It's banned. One or two people were using it to write reports that could be admissible as evidence, so websites that offer it were blocked site-wide on every computer.
ItsSuperDefective@reddit
I have not been asked to use it.
Difficult_Grade2359@reddit
I work in the NHS. We are still using pagers and paper notes. AI feels decades away.
frankOFWGKTA@reddit
Misuse of public money lol!
queljest456@reddit
Public Sector role. All forms of AI (including grammar checkers) are blocked on our systems due to fears of them being able to absorb the sensitive information we work with and servers being hosted abroad. Doesn't stop members of the public emailing us with lengthy questions obviously written by AI though.
frankOFWGKTA@reddit
This is why the public sector is so far behind.
XA3A12@reddit
They constantly contradict themselves at my company, which is one of the biggest in the UK. They tell us to use it for all of our work, but then tell us we need to check everything it outputs. They showed an example of consolidating 15 spreadsheets into one and cutting out information which wasn’t needed. I asked how I could possibly check that the output is correct without doing the same task from the start myself... which wouldn’t have saved any time and would have taken longer overall than just doing it myself. They didn’t have an answer for that lol.
AussieHxC@reddit
Isn't this a relatively basic power query task ?
XA3A12@reddit
Yes, but everything AI can do can be done without AI, so I don’t really understand your point?
odysseusnz@reddit
This is my mantra. Every time someone suggests using AI to solve a problem, I just retort: how about we fix the problem with the tools we already have?
360Saturn@reddit
At my work the biggest pushers of AI as an omnisolve are the high level managers who apparently don't realise we already have a way that works well to do a lot of tasks, it's just that they themselves don't have the knowledge.
I can't help but feel sometimes it's as if the owner of a restaurant chain was storming into the kitchen and forcing the chefs to throw out all the recipes because RecipeGPT contains all the recipes in the world and can instruct you better how to cook them. But you don't need all the recipes in the world.
AussieHxC@reddit
It's significantly more effort to combine 15 spreadsheets using AI than it is in excel.
rogeroutmal@reddit
Lmao. Using AI to consolidate 15 spreadsheets is like using a sledgehammer to kill an ant.
DeepestShallows@reddit
New ways to automate Excel seem pretty low down the list of priorities. That’s something we’ve been working on as species for decades.
EarlDwolanson@reddit
But that would be the smart use of AI - ask it to write the query, that way you do the job faster and avoid gross errors.
AussieHxC@reddit
That's not how power query works
cloudstrifeuk@reddit
All it takes is an inner join, when you wanted a left outer join and you'll get completely different results.
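A toy illustration of that pitfall, in Python rather than Power Query (the customer names are invented): the same data joined two ways quietly loses a row, which is exactly the kind of gross error nobody notices in a consolidated spreadsheet.

```python
# One customer (carol) is missing from the lookup table.
orders = [("alice", 10), ("bob", 20), ("carol", 30)]
regions = {"alice": "UK", "bob": "US"}  # no entry for carol

# Inner join: rows without a match are silently dropped
inner = [(name, amt, regions[name]) for name, amt in orders if name in regions]

# Left outer join: every order survives; unmatched lookups become None
left = [(name, amt, regions.get(name)) for name, amt in orders]

assert len(inner) == 2  # carol's order vanished
assert len(left) == 3   # carol kept, with region None
```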
joan2468@reddit
I use AI more as a supplement / reference check. It still makes mistakes and it won't have all of the context that you do which is why you cannot just use it to effectively do your job for you.
360Saturn@reddit
Pushy and ridiculous, especially given Copilot has now popped up with disclaimers that you shouldn't use it for anything serious and you need to fact check it.
What the hell is the point of using a tool I can't trust to be correct? You wouldn't use a calculator that sometimes gave the wrong answer or a clock that was wrong 1 time out of 10 or 20...
Corrie7686@reddit
We are quite cautious. We've had teams using AI over the last year, and are rolling out projects this year to ensure consistency within departments. It's very helpful, but needs checking and tailoring. That said, ChatGPT has been very valuable for marketing strategy, data analysis and sales scripts / battle cards / emails. It's saved a lot of time, and since we don't have marketing people it's been invaluable.
elchet@reddit
I’m at a startup building capital market products. We are ten people, mostly engineering. It’s 100% agentic coding with Cursor and Claude. Our AI bills are thousands, but we are 5-10x more productive than we would be otherwise.
Srddrs@reddit
I lost my job over the obsession, pushed back too much
LogicalReasoning1@reddit
It’s not actively replacing anyone but it’s essentially filling the roles of the people leaving instead of recruiting replacements
Whatajoka@reddit
Have access to enterprise Copilot. It's there if you want it, ignore it if you don't. Can tell who uses it and 9 times out of 10 it's not a good look.
thierry_ennui_@reddit
I'm a chef, so luckily I won't be replaced until robots become very cheap, which is unlikely before (if) I reach retirement age. I feel very lucky to be in an industry where this isn't being forced on us. I can't imagine how frustrating it must be; good luck with it.
Letter_Effective@reddit
AI might not be able to take your job but you are still at risk if enough of your potential customers lose their jobs due to AI and can no longer dine at your restaurant.
Tundur@reddit
That's less of a career or economic point, and more of a military one. By the point we're at massive structural unemployment, people will just start swinging for each other.
thierry_ennui_@reddit
A depressingly good point.
DeepestShallows@reddit
There do seem to be an awful lot of, you know, real jobs that AI doesn’t have a clear way to touch.
laidback_chef@reddit
Yeah, it's just a shame that wages aren't great, tbf. It was very much a job of passion for me but I realised a few years ago I would like to buy a house and left.
thierry_ennui_@reddit
Yeah. I'm very lucky to have a partner who earns well, so I can continue to do this for the love of the work. If I wasn't so lucky there's no way I'd be able to live on this wage.
boldstrategy@reddit
Bro
https://amp.scmp.com/news/people-culture/trending-china/article/3350001/china-ai-robot-restaurant-analyses-diners-faces-tongues-recommend-health-focused-dishes
thierry_ennui_@reddit
But that's not 'very cheap'.
goobervision@reddit
It's being used in restaurants in China. At no point does the article talk about the unit cost, but:
"In Haishu Community Canteen of Yuhang District which uses two robots, customers said they usually spent between 18 and 20 yuan (US$2.6 and US$3) per meal, but that had dipped to between 15 and 18 yuan."
So it's reduced the cost to the consumer of a $2.60-3 meal. It's not an expensive robot versus the cost of labour.
In an ideal world, we would see this for everything so that living is essentially free.
boldstrategy@reddit
Give it a year. The subsidisation from tech is crazy atm
Joneb1999@reddit
Robots and AI can still make your job unnecessary sooner than later. It just needs to be a huge programmable partial or fully AI oven, deep fryer, shallow fryer, air fryer, grill, boiler, microwave, combination machinery with robot arms or other mechanical means that have the ability to knead, mix, fold and shape etc. A production line. It can use picker arms to move ingredients around and be automatically fed from an equally huge adjacent vending cold storage unit that can order more ingredients online when it senses it needs more. Said storage could be filled by robot pallet trucks or forklifts from a fully automated self driving lorry.
It doesn't mean this won't need people; it just means the people are machine operators, supervisors, engineers and programmers. The thing is, fully automated trucks are a possibility now, robot pallet trucks and forklifts are already in operation in warehouses, as is robot machinery for many other purposes. Large vending machines are also available now, as I have seen in my chemist, with multiple staff using one to pick and drop medication, if not count it into packaging and seal it. All it needs is someone to design the first fully automated restaurant kitchen. Maybe, though, if you retire in the next 8 to 10 years you'll miss it and just have to deal with feeding ingredients to robot chefs de partie and commis chefs as the technology evolves. I can't imagine how a robot would deal with an angry ex-chef turned programmer and machine operator screaming at it. Termination by a choice of many implements? ;-) Just joking, but it is a frightening thought, finding machine parts in my soup instead of human ones.
Sussurator@reddit
You’ll be cooking the jobless to feed the trades(wo)men, according to Reddit these people are the only few who will still have jobs.
TomfromLondon@reddit
You say that but I use it to create so many meals :)
Wise-Youth2901@reddit
It's actually really good to use in many ways. It makes my job easier.
thierry_ennui_@reddit
It's definitely got good uses. I don't think it's ready for the level of public consumption it's getting right now though, and the theft of art/design work in order to generate awful images is a problem.
edmunek@reddit
AI triage of incoming IT tickets seems to be doing a better job than my 1st line engineers
Actually_a_dolphin@reddit
AI can absolutely already do the full job of first line support. It takes effort to set it up, but the tech is there.
insomnimax_99@reddit
Yep - because there is significant overlap between the smartest AIs and the dumbest users.
“How do I convert to pdf” doesn’t need a human answering it.
odysseusnz@reddit
Maybe that's a management issue?
tmr89@reddit
Ouch
Realistic-Muffin-165@reddit
That's something I'd like to explore more.
Or just have a list of repeat offenders who won't read their logfiles.
Thisoneissfwihope@reddit
As a data person, it’s been pretty transformative. We have annoying data that we used to have to align manually. Thanks to Copilot I’ve deepened my knowledge of Excel and Power BI, and stuff we just couldn’t look at before is now doable.
hsw77@reddit
Senior at an MSP. We're not pushing it, because we all think it's shit.
LobCatchPassThrow@reddit
I used AI to analyse my CV and experience to determine how much I should be paid.
Suddenly the company was much less pushy on us using AI :’)
Brian-Kellett@reddit
I work in a school and rail endlessly about the evils of AI.
Headmaster used it to write one of the handful of ‘all school’ emails he sent this year.
Head of science relies on it when he can’t delegate the work he should do to someone else - even uses it to write his risk assessments. That’ll be interesting when he gets a kid hurt and someone from HSE looks into it.
Actually, he’s just blamed ‘student calculators giving the wrong answer’ after confidently emailing the whole department that a mock exam marking scheme was wrong, having checked his own wrong calculation with AI.
Someone uses it to write their assessments of student teacher placements. And reports to middle management. And reports to parents.
Yet to witness someone using it for safeguarding concerns, but I reckon the odds are good given the attitude of the staff I personally know.
Brickie78@reddit
My daughter is doing her A-levels this year and isn't doing as well as her predicted grades.
I suspect that this may at least be in part because the vast majority of the revision material and work they're given is generated in Claude. Unfortunately one of the most enthusiastic adopters is also head of 6th Form so there's no point raising it there, and anyway she's told us under no circumstances are we to "make a fuss" about it.
Brian-Kellett@reddit
Awful.
AmeliaOfAnsalon@reddit
I work in a microbio lab owned by a big international corporation thing. They make us do this corporate bullshit 'goals analysis self review' thing whatever. Including referring to the '7 company energies' whatever the fuck that means. They've now added an AI feature into the website to write the damn goals for you.
So now they want us to do it but also don't want /us/ to do it??? Why even bother??? We never wanted to do it in the first place, we just want to get on with the science.
TheUnSungHero7790@reddit
Other than funny images, nobody talks about or acknowledges its existence.
Our office isn't even paperless.
This is a huge company that turns over 2 billion a year too.
Southportdc@reddit
Our IT department view the rest of the company mostly as a nuisance to their security measures, so they're trying their hardest to prevent us using anything except Copilot. I view this as effectively banning AI given my experiences with Copilot so far.
Insila@reddit
I feel like we work in the same company... I convinced my IT security to also allow Claude and block everything else so at least people have an AI that works.
Bossman_Mike@reddit
Where I work the AI tools come up as blocked in Edge, but just press the Allow button and fill your boots, sir.
AffectionateComb6664@reddit
I work for an AI first SaaS company
In reality what that means is they are layering agents on top of all processes internally & in the product we sell. It works pretty well I can't lie.
We also now use Claude Co-work which I haven't quite got to grips with but some people are already pros & it's v cool
LowAnimator8770@reddit
Horribly: constantly pushed to use it for everything, even when it takes longer than just doing it yourself.
AdministrativeShip2@reddit
We have copilot.
It sent out a memo saying do not use it.
Our regs team refuse to use it as it puts the wrong information in.
We also operate under a strict legal framework and everything must be human attributable.
Our marketing team still want to use it and we keep saying no.
On Monday I'm firing a shot across the bows of a direct competitor who has blatantly used AI images in a specific context that's frowned upon by our industry regulators.
Iamthe0c3an2@reddit
I work in identity / security space. The company is much more cautious on the approach to AI. We have glean which has basic access to our documentation but we’ve been told to use it as basically a sanity check thing once you’ve exhausted your knowledge.
anchoredwunderlust@reddit
My mate at a gambling company has been paid a lot extra to train it. Trying to get it to code and all sorts
Scared-Room-9962@reddit
Non-existent
Andurael@reddit
We’re being advised to cautiously try it, though management are clearly no longer capable of producing slideshows without it.
LivingPage522@reddit
AI has been banned and I'm struggling (non-programmer who used it for coding).
Trab3n@reddit
Software engineer here: around 90% of my committed lines of code are written by AI.
It's all reviewed, and it's pretty good in that sense. My project planning and such is also primarily AI-driven.
The consensus is mainly that it's an expensive tool, but not replacing our jobs yet.
Bran04don@reddit
Am programmer.
Pretty aggressive.
chartupdate@reddit
I have much sympathy for the large number using the Microsoft stack, as you are nudged towards Copilot which appears a bit rubbish.
Those of us in the Google ecosystem get to use Gemini and its full seamless integration with everything, and life is much happier.
It has transformed my way of working immeasurably.
binarygoatfish@reddit
Don't worry, in a few months when they've all gone to token charging you'll get moaned at for using it
DoctorOctagonapus@reddit
When they announced a pay rise for all employees, they got AI to generate a video of a person making the announcement rather than just posting it on the company intranet.
The higher-ups want us to adopt AI where we can, but it's all been a case of "what can we use AI for". It's a solution in need of a problem.
skibbin@reddit
Weirdly management aren't keen, but Devs are. Especially junior Devs who want to be keeping up with the times. The senior Devs are less keen as they know their jobs will shift to reviewing AI slop generated by juniors.
Dualyeti@reddit
I built something scarily good on Claude, purely vibe coding. I run it off a private LLM on my home server; I can log in at 8am and finish at 9am. If you're a quantity surveyor, hit me up.
redbullcat@reddit
I am likely being made redundant in the coming week or two because my job is being done by AI, and the bits that can't be have been redistributed to others.
It sucks. We were told last year to use AI to help us do our jobs. This year we're being made redundant because "Claude can do it".
Bossman_Mike@reddit
I am worrying for my job too.
To be honest I fell into IT because I've spent far too much time with computers throughout my life. If I had my time again knowing what I do now, I'd probably take a different path.
Might still do that.
ShowmasterQMTHH@reddit
I work for a small company of 11 people. We have in-house accounts, telesales, distribution and dispatch departments, field sales, estimating and site management.
Total AI use: zero.
Jebble@reddit
I'd say 80% of our code is now written by AI, all heavily reviewed by engineers. We're also not using simple prompts: we have extensive guardrails and processes, and Jira tickets are written specifically for AI agents as well. Nothing goes into production we wouldn't have written ourselves, and we're hiring more engineers as a result. We've produced much more than ever before, have better monitoring and way fewer incidents, and doubled our revenue.
VodkaMargarine@reddit
I would strongly advise you to lean into it. Spend a lot of time experimenting with those AI tools, figuring out what they are good at and what they aren't good at. Automate your entire job if you can, seriously. They won't lay off the one person who figured out how to use AI, they'll lay off everybody else.
Here is the high level playbook a lot of companies are running with AI at the moment.....
You want to be in the group of people who know how to use it, not in the group who resist. If you do manage to train an agent to do your job, that makes you valuable. It's far more likely they will give you a more interesting job.
It's kinda like the blacksmiths when the car was invented. Some of them just complained about cars then lost their job. Some of them learned how to fix cars instead. Learn how to fix cars.
jvlomax@reddit
Very aggressive.
CEO has repeatedly said we need to be 100% vibecoding by the end of April. Last town hall he even said he'd been talking to another CEO who had the policy of FIFO...
Fit in or fuck off.
I've polished up my CV
BigSkyFace@reddit
My work is big on AI and is keen to offer us a fairly generous budget to cover subscriptions to various AI tools that the legal and IT teams have vetted and approved. They haven't made it mandatory, so we have a weird mix of usage between employees. Some don't use it at all, whereas others basically have Claude open at all times and use it for absolutely everything they do.
Aside from personal feelings about whether someone does or doesn't want to use AI, If the company wants us to use the tools they haven't gone about their strategy in a good way at all. They keep suggesting more people use various LLMs and remind us about the personal budgets, but then don't have any answers when employees request advice and training on how to use the tools and ask for ideas on what we could do with them.
odysseusnz@reddit
The only aggression in our AI adoption is how much most people oppose it, including me as Head of Digital, and my Director. Our policy currently boils down to 'NO!' and we're constantly telling people not to use it. Some still do, so we've had to let Copilot in to divert them from the more dangerous options, and we're about to run some pilots to trial it for basic tasks, but I can assure you it is going to fail, and fail miserably, so it won't get any further.
Bossman_Mike@reddit
It's everywhere at my org and all anyone ever talks about. People seem to have all day, every day, to post in AI discussion groups on Teams about AI workflows, how best to leverage AI, etc., to the point that you wonder how any work is getting done.
We used to be told "do this please", now it's "Ask Claude/ChatGPT to do this please"
Historical_Project86@reddit
Very. I also work for a well-known tech company, a giant in fact. A lot of AI tooling has been developed with github copilot etc. Does the name of your performance review rhyme with "Fonnect"?
811545b2-4ff7-4041@reddit
I got into a team newsletter for my outstanding use of AI !
We're pretty aggressive and I find it really useful for many different activities - but it's not yet reducing any jobs. It's just getting more done, quicker.
My new favourite thing is using the Rufus AI agent on Amazon to help me decide what product version to buy.
laidback_chef@reddit
Very little, due to the nature of the work I now do. We are allowed to use an offline version of Copilot, but it's not allowed for emails or any sensitive work.
DimensionPrudent1256@reddit
I work for one of the big 4 banks.
They've been pushing AI on us about as hard as they can. It's even in our development/performance plans for the year.
They won't, however, grant most of us an actual Copilot license.
So we can write emails and other basic stuff.
budgrummur@reddit
They're pushing LLMs hard, including inhouse chatgpt reskins. Outside memory tasks, they're useless, as they don't have access to sensitive data. I'm casually waiting for when we're given a token allocation and being told to not spend it all in week 1.
goobervision@reddit
Enormously, throughout all of the engineering teams.
blamordeganis@reddit
AI definitely has its uses. For example, if you have a large and not very well documented codebase, and you need to do X, and you suspect there’s probably a function in there that already does X (or something close enough that you can modify it), but you’ve no idea where, you can ask Claude to have a look for you, and there’s a good chance that if it exists, it’ll find it. Or if you have some boring, thankless, time consuming job to do, like fixing the typos in a big bunch of on-screen descriptions, and also changing all the American spellings to British ones, then an LLM will probably do as good a job as you would, but in minutes rather than hours, and you can go and do something more interesting instead.
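The spelling pass described above is also the kind of mechanical substitution you can script directly; a minimal Python sketch, where the three-entry word list is a hypothetical sample rather than a full American-to-British dictionary:

```python
import re

# Hypothetical sample mapping; a real pass would use a much fuller word list
US_TO_UK = {"color": "colour", "organize": "organise", "center": "centre"}

def britishise(text: str) -> str:
    """Replace whole words only, preserving simple initial capitalisation."""
    def swap(match):
        word = match.group(0)
        repl = US_TO_UK[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl

    pattern = re.compile(r"\b(" + "|".join(US_TO_UK) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, text)

print(britishise("Color the center."))  # → "Colour the centre."
```

The word-boundary anchors mean derived forms like "colorful" are deliberately left alone, which is where an LLM's looser matching genuinely earns its keep.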
But I think what’s happening more broadly is that board members and major shareholders have got shit-scared by all the AI buzz and doom-saying that their company is toast if they don’t jump on the AI train; and so they’re hiring CEOs (who seem to switch companies every 2-4 years) who promise they can pull their arses out of the fire by driving company-wide adoption of AI. And the easiest way for those CEOs to get the statistics that show they’re actually doing that is by making people use AI for everything, regardless of whether it’s actually useful to do so or not.
I_want_roti@reddit
About as aggressive as a nun in a pub fight
dyslexicmarketing@reddit
We use them all! It's pretty mental. We just laid off a full department of coders and replaced them.
Byeah207@reddit
I have never used it at work and have no plans to do so. We had a training session on Copilot which basically boiled down to 'it can write emails for you'. Thanks but I can write emails myself because I'm not a toddler.
BlazeForth@reddit
Working as an SRE at a finance SaaS company, we use Claude and Copilot every day and have been using them for almost 8 months now. I would say it's not at the point where it can replace an engineer, but it has definitely increased our productivity. Consider it a helping tool at the moment. It's also really expensive for everyday use.
Chilled-Fridge@reddit
Bank. Departments literally judged based on how much AI they are using - departments with less Github Copilot adoption are looked down on. Management are fucking idiots if you ask me.
London-swe@reddit
Us-based tech company - we’re all in on it. All engineers are encouraged and tracked on ai usage, lots of internal tooling that uses ai, third party tools etc.
My workflow this year looks completely unrecognizable from last year and my productivity has at least doubled, if not more. Went from a mild sceptic to an advocate in the last 6 months.
chis@reddit
It reduces tedious admin tasks so I can focus on the technical support aspects of my job.
i.e summarising text or placing headings and subheadings for a report (you've hopefully already written with your own words).
UnacceptableUse@reddit
We are not legally allowed to use it. There's been some very cautious experimentation, and we have been given access to AI tools to experiment with, but we're not allowed to use them for work yet.
PrometheusZero@reddit
Also a software developer but a small team as part of a wider company.
We use it here and there. We don't allow it to write code wholesale but it's useful for assisting with writing documentation. Like, if I've written a comprehensive list of functional requirements I can have it re-word all that as a testing plan.
It's also great at spotting where code isn't DRY and suggesting extractions to make.
Pyjama365@reddit
Aside from the answer I gave about my current job... when I worked in a hospital, there was a suggestion to do a pilot scheme of using AI to prepare/type clinical correspondence, and then use medical secretaries just to 'tidy it up' before sending, without the secretaries having actually listened to the clinicians' dictations.
I asked the managers who seemed to be the decision makers about this pilot whether this was just fancy text-to-speech (i.e. they were paying AI prices for a version of something that had existed for years), or if it was genuinely AI. They did not seem to understand the question.
I asked instead whether they knew whether it was generative AI, and, if so, asked if there had been any consideration of potential clinical risk given the fact that generative AI can often just hallucinate things (thinking that it particularly does this when it thinks there is a pattern of what typically happens, but the whole point of going to a doctor is that something is not working in the typical way). They did not seem to know what the term generative AI meant, or to have heard of AI hallucination.
I asked whether the AI model used would retain the patient data to increase its own data set and effectively refine/retrain itself, and asked about how patients could opt out of their letters being prepared by the AI. I tried to explain that patients might want to opt out for a number of reasons, but one obvious reason was that any data retained to retrain the model would be an asset of financial value to the AI company, and patients might not want their data to financially benefit the AI company. They said there would be a chance to opt-out, but didn't really seem to understand the point about the patient medical data collectively having quite significant value to a company trying to sell a medical AI model, or understand how AI 'retrains' its future answers by increasing its dataset from information that's input.
Finally, I said that I could potentially see that there might be a use for AI in spotting patterns in scans or trends in test results quicker than humans can do, so I wasn't fully opposed to AI in medical settings, but I didn't understand how they could possibly get AI use for something that was currently done well by (fairly low-waged) humans approved as something that was ethically in line with the hospital's environmental policies generally, and in particular because of the health impacts on people living close to AI data centres. They didn't seem to know that there were any environmental downsides to AI.
I feel strongly that a lot of people buying into AI on behalf of their organisations are doing so without actually understanding the basics of what it is or the risks it can pose, because they're too scared of looking silly for not knowing all about it already or for looking old-fashioned if they don't get on board.
Fungled@reddit
Fairly well known tech brand. Yes, although nothing that specifically and directly forced. I do actually make fairly good use of it and usually hit my limits
As for everywhere else, management is trying to justify their spend on AI
SavlonWorshipper@reddit
We use voice-to-text to make transcripts. Really poor results, but it at least lays out the bones so we can fix everything. It saves a little bit of time.
AI is a pain in the arse. People use it to draft their crime reports, which is fine because the product will be much more coherent than the absolute gibberish we sometimes get. But then they will continue to use it to get "advice" on how they think we should investigate. It is occasionally slightly useful, but the majority of the time it gives a one-dimensional view of the situation which the user then adopts as a complete answer. It can be difficult to dislodge the LLM-implanted ideas, and then we end up with an AI-generated complaint.
I am pretty sure AI is also behind the mega-complaints I have been seeing recently, where a person contacts every agency (with any relevance at all) simultaneously, causing major duplication until we work out the other agencies already know, and apportion ownership of the issue. We have internal procedures to do that.
So far AI sucks.
ritasuenbobtoo@reddit
Get onboard or get left behind
aReasonableStick@reddit
I'm unemployed, but there's a huge rise in AI being used to write job adverts and the emails companies send, and it's really disheartening because I'm taking my time to make sure my application is good and they're just prompting.
vientianna@reddit
Pharma. We have regular workshops and it’s pushed pretty hard. We have to have an AI related goal in our annual performance review.
We are also having an eco sustainability drive at the moment
yepyep5678@reddit
Heavily pushed but God forbid we move away from excel
Brutos08@reddit
Work for a US tech telecom; we are encouraged to use it but not mandated. What I would say is if you are not using it you are at a disadvantage, because your competitors are using it. My ex-colleague told me his company head told the entire company that if they are not using it during their day-to-day work then they are at risk of potentially losing their jobs. My take is you won't lose your job to AI, but you will lose it to someone who's USING AI.
Realistic-River-1941@reddit
Middle management have basically outsourced themselves to ChatGPT.
People at the sharp end have their doubts, but that's why they aren't middle managers, isn't it?
NoseGraze@reddit
I work for a web hosting/software company. It's pretty aggressive. Lots of messaging about "use AI or get left behind." Company metrics about AI usage and wanting to increase it. Lots of AI trainings. General expectation that you use AI (Claude is big for us) as part of daily workflows.
hhfugrr3@reddit
Not at all. I do criminal law and the standard AIs get all coy as soon as crime is mentioned.
Seth-2X4B-523P@reddit
Currently at risk of redundancy... they want to replace staff with AI. Considering retraining (web dev). AI will only get better.
TehDragonGuy@reddit
I'm a software dev and I started a new job about 6 months ago. I went from never using AI at my old job, to using it constantly here, almost instantly. It undoubtedly makes my job easier, but with that comes greater expected productivity, and it does make me feel like a bit of a fraud.
cardboard-collector@reddit
Been using it as a software engineer since 2022 - it has come on leaps and bounds. The biggest issue is it replacing the learning loop.
I’m confident using it for the tech stack I have a decade of experience in as I can easily see if what it’s producing is shite or not.
Using it on side projects completely fabricates the learning journey so I use it more as a pair programmer or documentation finder/explainer.
CaptMelonfish@reddit
We use it as a 1st line agent when fielding basic queries. It has serious guardrails and there's been a lot of dev put into it, but it can be easily bypassed if the customer requests a human agent.
Generally it provides good information for the customers and links to our guides etc really well. At my previous place AI was verboten, so this has been rather interesting to see put into practice in a productive way.
vanceraa@reddit
I work at a company in voice AI and we surprisingly use it fairly sparingly except for actual strengths (locating files, tedious data entry, prototyping)
RestaurantBusy724@reddit
I'm glad I got hired before all this AI stuff because my boss loves it and probably wouldn't have a need to hire like he did before.
StanleyChuckles@reddit
Microsoft is pushing it hard to its partners as well.
And_Justice@reddit
My previous job was support in a software group and they started pushing it unilaterally, with no real regard for how effective it was. I think things would have gone the same way in regards to incorporating it.
My understanding was that the investment companies who own the groups were hounding CEOs to use AI, which led to CEOs putting pressure on, and managers basically ticking boxes at all costs.
I left that place and became a systems administrator at a survey rental company where I am probably one of 2 people in the whole business who ever uses AI and even then, I just use it to debug SQL scripts or help me with syntax.
RevenantSith@reddit
Chef.
Basically nonexistent, and people making some attempt will probably get bullied out of it.
BrandNewNew1@reddit
The majority of our company has Copilot now, which I find great as a companion to use alongside my existing apps, rather than "doing the job for me".
However, from a wider perspective, my company is also really, really pushing Meta apprenticeship courses related to AI and has incorporated AI into our company values. Which would be fine if it wasn't so aggressive; it just comes across like they're using it as a way to push people into having AI targets, goals, etc. On the flip side, I suppose it's better to get ahead of the game instead of being left behind. Pros and cons, I guess. In my opinion, my company is casting its net too wide and should focus on the teams and people that can really use AI to enhance work streams, products and performance. Or sell it as part of a product in the bid process.
Sad-Factor-4031@reddit
I work for a children’s charity in a therapeutic service, and we are constantly being asked to try AI to make resources for the kids, or to help us minute important meetings, or to make our reports sound “better”. It's concerning and I can't see that it's been well thought out, i.e. there's limited guidance on what should or shouldn't be run through the AI systems, and I'd worry people are sticking in confidential children's data without much thought. I also just generally find it weird that it's being pushed so much and don't really understand why.
dowhileuntil787@reddit
In my main job, not a lot. It’s regulated so the biggest blocker is how to use it safely, and that’s moving a lot slower than the frontier.
In my side gig, I’m cranking out useful work at a pace that would have been unimaginable a few years ago. It’s not as well engineered as if I did it all myself, for sure, but it’s not like I didn’t fuck things up before anyway. In many ways, the principles haven’t really changed: anything important needs to have multiple guard rails.
Appropriate_Trader@reddit
Our developers who know what they’re doing are becoming 5x more productive.
The ones who don’t are becoming 5x more useless and borderline dangerous.
When it’s good it’s amazing but in the wrong hands it’s like a magnifying glass for winging it.
acceberbex@reddit
Our AI is currently more summarising phone calls, taking meetings notes etc through Zoom and Teams. We have Copilot as well but client confidentiality limits what we can and should use. I'm not a fan of some of it as it's not accurate but I do see some benefits on automation and changing work load to have better use of time..but not at the expense of staff. Like if a letter usually takes an hour and it can be done in half, we charge less to the client (win for them) AND we can take on more clients (win for us). Profit remains the same, staff levels the same but just changing how we use our time
GhostCanyon@reddit
I’m an audio engineer and the standard LLMs have had no effect on my industry yet. We have, however, had our industry turned upside down by some new AI-based tools like de-feedback and automation tools like the Dugan auto mixer that have made my job much easier. I can imagine a world where humans don't mix audio at all and an AI agent deals with it, but it's going to be a long time until a robot turns up at a venue, unloads a truck and flies heavy speakers in the air, so hopefully I've got time.
PolarLocalCallingSvc@reddit
We use AI a lot but ultimately everybody using it is responsible for checking the output.
AdministrativeLaugh2@reddit
Same here. We all use it as senior management have adopted it quite aggressively so there’s licences available for about a dozen tools and basically told everyone to use at least one.
We don’t just assume it’s correct, though, and we make sure whatever it spits out is perfect for consumption and looks good.
Not all departments are so rigorous with it, though.
Ok_Impact9745@reddit
I work with my hands. Depending on what capacity or what industry I will be some sort of maintenance technician.
AI can be useful because, while we don't have a lot of paperwork, the paperwork we do have is just an obstacle for us. You don't eliminate our job by taking our paperwork away, but you do make our job easier and take us away from the tools less.
For example I had to scan a signed document but the printer wasn't working double sided.
I had odd number pages 1-25 in one PDF and even number pages 26-2 (in reverse order) in the other PDF.
I asked Copilot to invert the pages in the even-numbered one and then asked it to combine both PDFs into a single file, pages 1-26.
After a minute or so of it doing it magical computer stuff it gave me exactly what I wanted.
That task would've been an absolute pain in the arse for me to do without AI.
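For what it's worth, the page-ordering logic behind that request is simple enough to script directly; a sketch with page numbers standing in for pages (a PDF library such as pypdf would supply the actual page objects):

```python
odd_pages = list(range(1, 26, 2))        # first PDF: pages 1, 3, ..., 25
even_reversed = list(range(26, 1, -2))   # second PDF: pages 26, 24, ..., 2

# Step 1: invert the even-numbered file; step 2: interleave the two
even_pages = list(reversed(even_reversed))  # 2, 4, ..., 26
combined = [p for pair in zip(odd_pages, even_pages) for p in pair]

print(combined[:4], "...", combined[-2:])   # [1, 2, 3, 4] ... [25, 26]
```

The same interleave, fed real page objects into a `PdfWriter`, reproduces what the assistant did; but for a one-off scan, just asking Copilot is the quicker route.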
We also take pictures of components, and AI is usually pretty good at identifying them, sourcing part numbers etc.
SpectreSingh89@reddit
But we need human eyes to go over what AI wrote. As an example, solely relying on AI to go through a CV/resumé? A person reviewing it will say "Didn't you fill in documents at your current workplace? You haven't mentioned it" or "Eye for detail is missing. That's a must for a machine operator looking out for faults... not mentioned!" A human can tell you that, whereas AI will just correct grammar and throw in fancy words.
So at your workplace, use AI, but do skim reads to make sure it's saying things the way you want.
FTR I've been job hunting for the last month and have already been called for 3 interviews. My CV was done WITHOUT Copilot, and the outline? Old format: blank paper, a few sentences with bullet points, no need for the "CV outline."
thecoop_@reddit
We have access to copilot and use is encouraged but it’s very much up to us how we use it/what for. It would certainly be frowned upon for some tasks, and there is never any expectation that it must be used.
Important_Ruin@reddit
We used ChatGPT, then dumped it for Copilot.
They are trying to push it, though it doesn't really seem to do anything other than quick analysis of data (sometimes helpful, but still check its accuracy manually just in case) and summaries of notes on customer accounts.
dbxp@reddit
I work as a developer at a tech firm and we've been aggressive with it and it has largely worked. I think when the next financial year starts in a few months we'll have more of a push for efficiency. Something I want to fix is the usage of AI where we're not really using the AI features and are instead using it as just an integration platform via MCPs.
HAH-PAH@reddit
It's pretty ubiquitous across big tech now.
Decent_Confidence_36@reddit
My work is still mostly paper based; I don't think the directors are even aware of AI. For me personally, though, ChatGPT has designed its fair share of fabrication steelwork. I just fine-tune it and bring it up to UK spec.
Rho-Mu13@reddit
The severfield designers missed the memo on the last part of your comment
__badger@reddit
I work in DevOps, I no longer write code. AI writes my code to then be reviewed by different AI before someone reviews my PR with their own AI
Who_Knows_M3@reddit
Doesn't help much in a school kitchen
RoyofBungay@reddit
My work is in complex ticketing for a national travel company. They keep pushing me to use Autopilot to solve unsolvable issues. So far I have resisted using such tools, as I have developed enough experience and heuristics to get me to roughly the correct outcome before I make a definitive decision.
You_moron04@reddit
Public sector. Currently trying to introduce it now. I think it's gonna be a complete waste of time and public money, but what do I know.
TheZag90@reddit
I have access to and use a lot.
beeurd@reddit
Similar situation here, it's integrated into lots of systems and we're encouraged to use it, but sometimes it makes things more of a hassle.
Total_Rules@reddit
I’m a software engineer and use it heavily as does our whole company.
I have a pro license for all the major AI tools and it’s all hooked into to most of the tools we use at work across the whole organisation.
It has been a huge productivity boost.
thorn312@reddit
Fortunately (?) a lot of what I have to do with work is incredibly granular and set up in such a stupid, roundabout way that there's no way AI could do everything. It could potentially do some, or suggest improvements, but our set up (Web stuff) is an insane clusterfuck using multiple ancient systems and partially custom integrated to make one specific thing work. Bah.
My team is already only 2 people so they can't really reduce it.
Wise-Youth2901@reddit
My company has built its own AI system, which we must use. We use it for admin, research, analysis etc...
Exact-Amount6049@reddit
It feels less like AI is replacing people overnight and more like it's becoming a baseline expectation, so the real shift is learning how to work with it without losing your own judgment.
scrotalsac69@reddit
Mostly a load of bollocks, but people are starting to find uses.
The trouble is people are looking for ways AI can help existing processes rather than adapting processes to make them more effective with AI. That, and the fact that most data is not in a form AI can use.