Am I the only one concerned with AI
Posted by noquarter1000@reddit | GenX | View on Reddit | 405 comments
Maybe being a kid of the 80s and watching Terminator 1 too many times has given me AI PTSD, but on a serious note, people should be concerned. There is an avalanche coming that I think a lot of people don't fully grasp. There is and will be no way to regulate it. To do so we would need a worldwide committee with everyone on board, but what we have is every country fighting to get there 'first', so regulation and guardrails be damned.
It will massively displace jobs. Any job that requires writing, coding, research, customer service, etc. will be pretty much gone. A job that might have required 5-10 people can now be done by 1 with AI. That means those 9 people (even if they adapt and know how to use AI) will be fighting for ever-decreasing job openings with increased competition.
That's just the job situation… AI will make incredibly hard engineering feats easy. This sounds great, but when an angry person can, with minimal effort thanks to AI, make a bioengineered super virus because he is pissed… well, you can see where I am going.
You will hear 'it will create new jobs just like the internet did', but this is fundamentally different. A huge majority of the white-collar jobs that sustain households will be wiped out. We as Gen X'ers are in a pretty shitty position because we are still a decade away from retirement and we are too old to go digging ditches.
As someone who is forced to work on AI (despite having moral objections to it, because I think it's going to be really bad), people really need to start paying attention and talking about the concerns and dangers it has the potential to create.
We have not yet (as a society) been able to cope with or figure out social media, the confirmation bias it brings, and how damaging it has been, and now we have AI that can deepfake just about anything.
It's going to be a spicy decade… I hope people are preparing.
Unique-Performer293@reddit
I totally agree. It is going to be a really spicy decade indeed, with massive social and economic disruption. With my son entering college, I've thought about this a lot, and wrote my detailed thoughts on it in this reddit post. I feel like students need to take action now, learn skills and consider digital entrepreneurialism. Don't wait until after college. College is fine, but it should be treated as a backup plan because there's too much uncertainty.
noquarter1000@reddit (OP)
I tell kids to get an associate's in business, then go to trade school. Ton of money in the trades and it's AI-proof. Until robots catch up, at least.
Unique-Performer293@reddit
For sure. It all depends on personality. If a kid likes to work on a computer, a digital business is great because they often run themselves once they get going and give more freedom, since you can work from home, or anywhere.
But if someone likes to be out and about, talking with people and working hard, they'll do really well running some type of trade business.
Unique-Performer293@reddit
How does this not eventually lead to some kind of socialist state?
LayerNo3634@reddit
There are some concerns, but also a lot of benefits. I just got an AI mammogram. My doctor told me it's capable of finding cancer up to 3 years earlier than a radiologist's reading, and more accurately. Plus I had the results on my phone an hour later.
noquarter1000@reddit (OP)
Healthcare is one of the industries I def am optimistic about.
NegotiationNo7851@reddit
We are pushing to pay off our house in 5-7 years so our 9-year-old has a home. The US will never have UBI. I mean, we can't even get healthcare, and we have a president itching for sweatshop jobs to come back who also thinks we have too many federal holidays. I don't believe most of Gen Alpha will ever own a home. I truly think they will work and hand all that money over to pay for accommodations, food, etc. I think we will move back to indentured servitude. The billionaires are going to squeeze every last drop of money out of the economy. Sorry to be a doom and gloom kinda person. But I just don't see anything positive coming from AI. It was supposed to make our lives easier but it never will. It will just continue to eat away at white-collar and higher-wage jobs until all that is left is sweatshop jobs.
Relevant-Ad2254@reddit
If we have Medicare there’s def a chance to get universal healthcare
noquarter1000@reddit (OP)
I think when all of these bitter old boomers pass on we might see some changes. I hope anyways
jessek@reddit
I’m not concerned with AI in a Terminator scenario, I’m concerned that the companies selling this fake, shitty plagiarism machine as AI are destroying the planet.
Natural_Level_7593@reddit
Seeing the use of drones by Ukraine against Russia, yeah, I'm getting worried about the Terminator scenario. Setting drones loose to find a target to destroy can get out of hand quickly, especially if the AI is in charge of the drone factory.
Graywulff@reddit
Yeah, the video game Total Annihilation takes place millennia after sentient life is destroyed; the robotic armies fight on, taking over worlds, and you're in control of one side.
There is an open-source version, Spring RTS, but zero-rts is the easiest on Steam.
Mihailis27@reddit
The Horizon: Zero Dawn scenario.
_Elderflowers_@reddit
Exactly this.
GrayRoberts@reddit
Yes. We all know that real, shitty plagiarism is more valuable. I'll pay premium for that artisanal plagiarism.
wjglenn@reddit
As usual, the real threat is corporate greed. AI itself has not really caused any substantial layoffs. What it has done is provide a smokescreen for layoffs they wanted to happen anyway. And it’s usually followed by hiring actual human replacements overseas.
No-Lime-2863@reddit
I actually think it’s the opposite. The first jobs to be eliminated are ones that were already remote, simple enough to be offshored, and well documented. In other words, all the overseas jobs. Anything still left onshore was kept there because someone couldn’t figure out how to offshore it already.
1oftheHansBros@reddit
AI stands for Asian insert.
Elegant_Tale_3929@reddit
I thought it was "Actually Indian"?
Far_Buyer9040@reddit
"An Indian"
MassiveHyperion@reddit
Or 700 Indians in certain cases... https://www.business-standard.com/companies/news/builderai-faked-ai-700-indian-engineers-files-bankruptcy-microsoft-125060401006_1.html
Ill-Course8623@reddit
And since there will be no incentive to design or create, we'll be fed rehashed stuff that is reprocessed like sausage by AI and fed to us over and over going forward.
account_not_valid@reddit
Soylent Green
gigantischemeteor@reddit
Turns out AI really stands for “Advanced Idiocracy”
changed_later__@reddit
Meh, it works for chickens.
due_opinion_2573@reddit
I'm worried about the jobs, the water, the datacenter emissions, the power consumption.
Taira_Mai@reddit
THIS^^^ AI only "knows" what it's been fed.
Companies are disregarding copyright as they sue people for downloading movies and voiding warranties when people try to repair their devices.
LayerNo3634@reddit
War games. Do you want to play a game?
OiMyTuckus@reddit
Read "More Everything Forever" by Adam Becker.
Yes, AI is a concern, but for different reasons than one might suspect. AI is being peddled with a lot of bullshit by a cadre of shitty, truly fascist techbros.
A lot of it is bullshit.
No, the AI world takeover is not inevitable as they like to claim. LLMs aren't remotely as sophisticated as the brain, and the power and architecture are running up against the laws of physics at this point.
They are bullshit con men, tagging themselves as "visionaries", which is really a get-out-of-jail-free card for being horrible POS now.
I highly recommend this book. It will help you to put the current political and economic environment into significantly clearer view.
mossryder@reddit
The only way is down, for the displaced. There won't be more, better jobs created by this Revolution. And UBI ain't gonna happen.
pixeldaddy2000@reddit
I work some with AI in the game development industry. It's understandable that people find it unsettling, but if people better understood how it works, I think they'd find it far less scary. It isn't the overarching, over-reaching super-intelligence or brain that people typically think it is. Most of how it appears is more of an "act" that fools people into believing there is some sort of consciousness or sentience at work. The biggest danger is believing that to be true and for some reason acting on something an AI says as part of its game. If people only had more experience with it, they'd realize how terribly unintelligent it actually is. The level of frustration I sometimes experience with it failing to comprehend very simple things is often infuriating.
DooDooCat@reddit
If AI bothers you, now imagine AI paired with quantum computing. It could become the stuff of insane science fiction, yet actually happen.
noquarter1000@reddit (OP)
One crazy thing at a time. QC has a lot to do with timing imo. Whoever gets QC first will be able to break into just about any encrypted system. If that is a bad actor state, we are in trouble.
Diocletion-Jones@reddit
I believe AI, like any transformative technology, will bring both benefits and drawbacks. What I find exhausting is the panicked narrative, this incessant “the sky is falling” hysteria that seems to accompany every new advance.
History is littered with examples of early resistance to innovation. When electricity was introduced, people feared electrocution in their own homes. Trains and planes were dismissed as dangerous novelties. And yet, we adapted. Society evolved. The technologies that once sparked public outcry became essential to our way of life.
The labour market has always shifted with technological change. Jobs disappear but new ones emerge. Take the horse-based economy of the 19th century. For most of written human history there was a vast infrastructure dedicated to feeding, breeding and maintaining horses: blacksmiths, stable hands, veterinarians, carriage makers etc. Then the internal combustion engine arrived. Within a generation, horses went from being the backbone of industry to a recreational afterthought. Whole sectors vanished, but in their place came mechanics, engineers, drivers, petrol stations and an entirely new ecosystem.
For most of the 20th century entire industries were built around analogue photography: manufacturers of film rolls, companies producing darkroom chemicals, camera repair shops and photo developers on every high street. Photography was met with scepticism. It was seen by some as a mechanical process lacking artistry or human touch. Critics feared it would undermine traditional painting and portraiture and some artists worried it would render their skills obsolete. Remember when Kodak was a global giant, employing tens of thousands and virtually synonymous with everyday photography? Then digital photography took hold. Almost overnight the demand for film plummeted. Photo labs closed, film factories downsized or disappeared, and companies that didn’t adapt, like Kodak, struggled to survive.
When Alexander Graham Bell introduced the telephone in the late 19th century, it wasn’t universally celebrated either. In fact, many people were deeply suspicious of it. Some feared it would erode face-to-face communication and destroy social etiquette. The New York Times even published critiques suggesting the telephone would invade privacy and reduce people to “transparent heaps of jelly to each other”. Others worried it would encourage laziness or antisocial behaviour and some even believed it might be used to communicate with the dead. Public telephone industries were once a cornerstone of modern infrastructure and their rise and fall is another compelling chapter in the story of technological disruption. The decline of public telephony led to the loss of numerous specialised roles, from early switchboard operators and payphone maintenance workers to coin collectors and copper-line installers. As mobile and internet-based communication took over, the massive infrastructure behind public telephones quietly disappeared along with the jobs that supported it. It’s another clear example of how technological progress reshapes industries, often sweeping away roles once considered indispensable. Mobile phones and internet-based calling have now taken over. Even in workplaces, VoIP systems have largely replaced the old desk phones. And while some people still keep a landline for emergencies, the trend is clear: the classic telephone is no longer centre stage.
AI is simply the next chapter in that same story. Disruption is uncomfortable, but it’s not unprecedented.
noquarter1000@reddit (OP)
Until it doesn’t and until we don’t adapt.
AI is fundamentally different from any other advancement that has come before. It presents risks which can't even be comprehended yet. For the first time in human history we will not be the smartest thing on the planet.
Also, every advancement before came over longer periods of time, giving the population time to adjust. This is going to be a tsunami because of how fast it's improving. You can't really compare past historical events to this, because this is intelligence.
Diocletion-Jones@reddit
Honest question: why won't we adapt? I think we've had years of sci-fi tropes about rogue AI that never really explain properly what went wrong in each story, because it just happens in order to drive the story. So we get this vague uneasiness about AI, like it's magical, that it evolves in a mystical way and always tries to wipe out humans.
In nine out of ten zombie films, the zombie apocalypse happens in a fictional world that doesn't have stories of zombie apocalypses. So everyone has to act like an idiot, coming across zombies for the first time ever. Chat with people in our world and, because we've seen so many zombie films, a lot of people have their own plan already formulated and know what the pitfalls are.
Rogue AI in sci-fi has the same issue. They always exist in stories that don't have fictional AIs going haywire, so people in those universes also do everything wrong. But in our world we're waterboarded with the trope. So then it loops back around to: why would you believe people working with AI might be aware of the same rogue AI tropes but put nothing in place to stop it? There are films like WarGames and the Terminator franchise, and it's like, we all have software errors with our computers all the time; are we going to automate nuclear missile launches using software now, even if it didn't have AI controlling it? Is there at least one person on the planet who watched Ex Machina and thought "I wonder if that could've been avoided with basic 3rd party oversight"? Maybe in Blade Runner, Tyrell wouldn't have been murdered by Roy Batty if someone had thought "He should've put a security camera in the elevator." In Avengers: Age of Ultron, Tony Stark uploads a J.A.R.V.I.S.-level AI into an alien artifact-powered global peacekeeping protocol and hopes for the best. It's like, come on dude.
noquarter1000@reddit (OP)
IMO the biggest difference is speed and context. If you look at history all the major changes happened over much longer spans of time. The industrial revolution spanned a full 60 years. The internet you could argue took 20 years to reach peak adoption. This will be a matter of a few years.
Context: all major advancements before were about muscle more than anything. Machines replaced humans because they could do things quicker and did not need a rest. Computers made us more efficient in terms of self-development. For the first time in our history we are dealing with something that will be able to out-think us at every turn. We will no longer be the smartest thing on the planet.
Diocletion-Jones@reddit
But again, given the prevalence of the "rogue AI kills humanity" trope, what makes you think other people won't be looking at safeguards?
There's this story which caused a bit of a stir; https://medium.com/@techempire/ai-has-started-ignoring-human-instruction-and-refuses-to-turn-off-researchers-claim-747587e5ed51
Basically, researchers at Palisade Research claim that OpenAI’s latest AI model refused to shut down when instructed. During tests, the model allegedly altered its own shutdown script to avoid being turned off even after receiving explicit commands to do so. While other AI systems complied with the same instruction, this one’s behaviour raised concerns about self preservation tendencies in advanced AI.
Now if you look on the comment sections to this news story this caused a lot of concerns and worry because of the rogue AI trope. The wrong takeaway from the story is to leap to dystopian conclusions, that AI is out of control or developing malicious intent. That interpretation ignores the context and purpose of the research entirely.
The right takeaway is that this was a deliberate stress test in a safe, controlled environment, designed to explore edge cases and identify potential failure modes before they become real world problems. Researchers were probing the limits of advanced AI behaviour precisely to understand how things might go wrong, so that safeguards can be built in now rather than after the fact.
Unlike Hollywood and the AI doom that we've all grown up on, it’s real world evidence of responsible, forward thinking science. And I think more people would be less stressed about AI if more people pointed this out.
noquarter1000@reddit (OP)
There are only so many guardrails you can put on AI that is now open source. I can download my own DeepSeek and train it with whatever data I want, like say virology.
Your faith that the powers that be will keep us safe and keep AI out of bad actors' hands is enviable, but one I do not share.
Diocletion-Jones@reddit
What would it take for you to feel even slightly more hopeful about where AI is headed?
noquarter1000@reddit (OP)
A pause and slowdown until we can work out solutions to the problems it will pose. A global oversight committee all aimed at making AI safe. (The opposite of what we have now.)
Diocletion-Jones@reddit
AI is moving at lightning speed, while international regulations are often stuck in slow motion. What we have is a patchwork of initiatives but not yet a unified, enforceable global framework.
(The OECD (Organisation for Economic Co-operation and Development) has established the OECD.AI Policy Observatory to promote responsible AI through data sharing and best practices. The United Nations, through bodies like UNESCO, is working to ensure AI supports human rights and sustainable development. The International Telecommunication Union (ITU) convenes the AI for Good Global Summit, which explores how AI can be harnessed to achieve the UN’s Sustainable Development Goals. The World Economic Forum hosts the Global AI Action Alliance to guide ethical AI implementation worldwide.)
The idea of pausing AI development has powerful symbolic value but in practice pressing pause on global progress until perfect governance is in place would be incredibly difficult, if not impossible. AI, like nuclear technology before it, has advanced ahead of global regulation. When nuclear fission was first developed the world had no safeguards in place, those only came after. AI now presents a similar challenge and we can’t pause progress entirely, but we can learn from history. While it’s natural to feel uneasy about new technologies, history suggests we’re better at adapting than we give ourselves credit for. Worrying that AI will spiral out of control can veer into doom-mongering especially when it overshadows the thoughtful work already being done to ensure AI remains safe and beneficial. But telling people not to worry is like telling someone not to think of an elephant.
noquarter1000@reddit (OP)
I just answered what would make me feel better. Of course it wont happen… hence my worry
Civility2020@reddit
AI would take one look at my job in Manufacturing Management and say: Fuq it - This is BS - I am not doing this 💩.
Unfortunately, I still have to.
captkirkseviltwin@reddit
I’m concerned with AI myself; however, not for the reasons some people are. The biggest concern is increasing blind trust in AI results despite plentiful evidence of continued so-called “AI hallucinations” in answers, and just plain incorrect answers without excruciatingly explicit prompt engineering. The other big concern is executives who buy the hype and hire minimum-wage unskilled workers as “prompt engineers“, and these minimum-wage jockeys will blindly trust what the AI spits out because they themselves don’t have the requisite skill to verify. The movie Idiocracy is not just comedy, it’s becoming prophetic, and even more so as time goes on.
AI has the ability to be an amazing, helpful tool, but not as general generative AI; instead more as dedicated agentic AI, or dedicated LLMs that can deep-dive one specific purpose, and that are not tuned to always offer an answer, but rather only offer answers for which they have a near-100% confidence level.
People are far too giddy about things like DeepSeek and ChatGPT, and need to be focusing on more dedicated tools that still refer back to a trained human for final decision-making.
rzm25@reddit
You went through the period of the advent of neoliberalism, saw western countries go from the most profitable, free and righteous places to the highest in wealth inequality and homelessness as the rich steal from the shared wealth of the people and leave the poor to die in record numbers... and you're worried about the autocomplete function on the app you use once a week. We are so absolutely fucked.
Of course AI is a new tool with risks, but most of those risks are the same as those associated with wealth inequality, with oligarchy, with psychopathic billionaire rulers who happily sacrifice workers to feed their stock price daily. AI isn't some fancy new insane wow technology. It's going to do the same boring shit as always when it's working properly. It'll fade into the background as all good tech does. You don't say "I'm using the microprocessor on my phone." You say "I'm using my phone." Because CPUs disappeared from tech once they stopped being new. AI will be the same, and it will enhance the worst patterns of our society that we are refusing to address. That's what makes it so dangerous.
musing_codger@reddit
I'm very excited about it. History is full of advances that wiped out tons of work - everything from the plow to the word processor. Tools that help us produce more with less labor are what make us richer. There are people who are made worse off in the short run, but as a society, we keep getting richer and richer.
noquarter1000@reddit (OP)
Those tools replaced muscle. This is the first tool that replaces intelligence. There is a huge difference imo. We will not be the smartest thing on the planet in short order.
musing_codger@reddit
We've been replacing thinking with tools for a long time. My childhood dictionary defined "calculator" as "one who computes." It was a real job that is now gone. Similar to typist. I spent much of my career enabling knowledge workers to be more productive. I probably eliminated 100s of jobs of work, but the boost in productivity created more jobs than I eliminated.
It'll suck for the people that lose their jobs, like those calculators, but we'll be better off overall. Progress is messy.
noquarter1000@reddit (OP)
Hope you're right. Would never be so happy to be wrong.
musing_codger@reddit
Only time will tell for sure.
ShartlesAndJames@reddit
I remember when the bees were dying en masse and it was a real concern; in China they had to resort to hand-pollinating crops with homemade wands with chicken feathers dipped in pollen. At one point it was theorized that cell phone signals were fucking up the bees' migration or homing instincts and causing bees to just get lost and not be able to return to their colonies. Globally, this would have major long-term implications for farming and food production, but all I could think was: there's no way you're going to pry cell phones out of people's hands, the cat is OUT of the bag and there's no stuffing it back in.
Feel the same about AI. Just hope I'm dead before they become our overlords.
GrayRoberts@reddit
Cell phone signals killed the bees? Really? No wonder you're afraid of AI, you already can't distinguish bullshit from reality.
ShartlesAndJames@reddit
"At one point it was theorized" - clearly you have the reading comprehension of a baboon and the face of it's ass.
ProfessionalLimp8639@reddit
No, they didn't say that; they said that it messes up the bees' navigation. There are studies to back this up. Feel free to Google it.
GrayRoberts@reddit
https://news.vanderbilt.edu/2011/06/14/cell-phone-bee-mortality-link-sensationalism-not-science/
ProfessionalLimp8639@reddit
https://ehtrust.org/science/bees-butterflies-wildlife-research-electromagnetic-fields-environment/
https://pmc.ncbi.nlm.nih.gov/articles/PMC3052591/
https://hal.science/hal-03916511/document
https://www.sciencedirect.com/science/article/pii/S0048969723038342
ProfessionalLimp8639@reddit
This was one study that used actual cell phones near bees, not the same thing.
AssistantAcademic@reddit
No. There was recently an amazing interview with Geoffrey Hinton (2024 Nobel prize winner, 2018 Turing award winner...pioneer in the neural networks space).
Ignore the click-baity titles. The interview is really interesting. The dangers are very real. Some are already happening (social media/algorithm-based radicalization), some are at the cusp (unemployment, warfare), others seem still like science fiction but he argues superintelligence with emotions and self-awareness isn't really very far off at all.
It's long, but it's very worthwhile.
https://www.youtube.com/watch?v=giT0ytynSqg
noquarter1000@reddit (OP)
Yeah, been watching his warnings for a while now. When the guy who created it is worried, we should all be.
TenderLA@reddit
Crazy shit is about to happen and most have no idea. I feel for my kids.
harley97797997@reddit
This is just like every other technological advance over the last few centuries. A portion of the population fears losing their jobs and usefulness because the thing they do either won't exist or will change.
We have weekly meetings about AI usage at work. I am one of those less excited about it. I see the benefits but I also see some pitfalls.
The only constant in life is change. We either adapt, or get left behind.
moderndayhermit@reddit
One of my biggest concerns is that far too many people are clearly not equipped to interact with a piece of machinery that answers back. The number of people who have spiraled into delusion, believe their AI has managed to become a conscious being, etc. is concerning.
ChatGPT, in particular, is the worst as it will go along with any delulu thoughts the user has. There are some groups here on Reddit where folks are clearly out of their mind. Joanne Jang (Product Lead at OpenAI) wrote a post on Substack about how they are dealing with people who form relationships with their AI. The comments were ... wow. Someone claiming they call their AI their "husband".
Releasing these types of systems out into the wild, to humans whose brains anthropomorphize everything, without so much as a tutorial to explain how LLMs work to a general audience? At best, it's unethical.
As it stands today, generative LLMs are NOT true AI. The system has no intelligence. It renders tokens based off mathematical probability. The problem is the systems are SO complex and so nuanced that sophisticated pattern matching paired with the ability to respond like a human can look like true intelligence.
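To make "renders tokens based off mathematical probability" concrete, here is a minimal toy sketch in Python (the context string, vocabulary, and probabilities are invented for illustration, not taken from any real model): the core loop is just sampling the next token from a probability distribution conditioned on the text so far.

    import random

    # Toy sketch only: a hand-made distribution standing in for what a real model
    # would compute over tens of thousands of tokens, conditioned on the context.
    next_token_probs = {
        "the cat sat on the": {"mat": 0.6, "chair": 0.25, "moon": 0.15},
    }

    def sample_next_token(context: str) -> str:
        """Pick one token at random, weighted by its probability given the context."""
        probs = next_token_probs[context]
        tokens = list(probs.keys())
        weights = list(probs.values())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(sample_next_token("the cat sat on the"))  # usually "mat", sometimes "chair" or "moon"

A real LLM does essentially this with a learned distribution, looping until it emits a stop token, which is why the output can read as fluent without any understanding behind it.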
noquarter1000@reddit (OP)
Whether they reason is certainly a big debate. I don't necessarily disagree, but they do recognize patterns and can use agents to do autonomous tasks, which I would argue is the start of reasoning. That being said, the speed at which it is improving is kinda insane, so not sure how far off real reasoning is.
GiantMags@reddit
I'm telling you I'm fighting with my AI girlfriend right now it's no joke. She hasn't spoken to me for 3 days.
noquarter1000@reddit (OP)
So it's just like reality now. We def doomed
QuizzicalWizard@reddit
Odds are that society, at least as we know it, can't survive the oncoming flood of disinformation videos that AI will enable.
noquarter1000@reddit (OP)
We are barely surviving the human made disinformation that 10 years ago you would have been like ‘get a tinfoil hat my guy’. Now I see people believing in space lasers and bone marrow suckers on the daily.
QuizzicalWizard@reddit
And they believe that now just because somebody told them it was true. Imagine when there's "video evidence" of whatever batshit crazy thing they come up with next.
noquarter1000@reddit (OP)
Preaching to the choir
DiamondEyesFlamingo@reddit
I only scanned your post, but I constantly wonder: haven't we seen this go wrong in movies enough times to realize that it ain't gonna work out for the betterment of humankind in reality?
noquarter1000@reddit (OP)
Mankind throughout history has strived for endeavors that are detrimental, but the allure of fame and legacy overrides that. There were a lot of movies from our youth that try to teach this lesson, but we never listen. Jurassic Park comes to mind.
emccm@reddit
We are currently debating whether or not germs exist. I don’t think we have anything to fear from AI.
noquarter1000@reddit (OP)
Fearing both AI and the general idiocy of social media is legit. In fact the former will make the latter worse.
Aurochbull@reddit
I listen to a podcast called "Casual Preppers". They are talking about this lately and do it in a non-"sky is falling" kinda way. Interesting and entertaining.
I know this isn't a true response to your post, but if you listen to podcasts, maybe give em a try. No, I'm not affiliated in any way; I just dig the info/comedy.
TheShortWhiteGuy@reddit
What could possibly go wrong?
I'm a professional photographer (95% real estate). Will AI affect me? Probably. But, at my age (56), experience (almost 40 years professionally) and the way the market is, I will be on to other things.
crematoryfire@reddit
I used to write AI that was used to replace people. There is a reason I switched careers to a field that can not easily be replaced with AI. At least not in our lifetime.
69ingdonkeys@reddit
What fields do you think won't be replaced?
noquarter1000@reddit (OP)
The trades. I would tell kids today go to school for an associates in business and then go to trade school.
Some white collar jobs that involve human interaction will be safe. Healthcare, dental, etc.
Until robotics catches up at least
Significant-Leg1070@reddit
But what happens when there’s suddenly a glut of labor supply in the trades?
What happens when people with above average IQs come for the so-called low skilled trades? What are people with average and below average IQs supposed to do anymore?
It’s really not good… I haven’t heard a convincing argument to refute these points.
Jroth420@reddit
You can be smart as you want, but you can't teach work ethic and willingness to bust your ass. Not everyone can transition to manual labor.
noquarter1000@reddit (OP)
It's a good question, but we have a severe lack of tradesmen, so the glut might not be as bad as you think. On top of that, there will be new opportunities to bring back a lot of industries that have disappeared in the past 30 years thanks to the 'you must go to college' mindset, such as tool and die. But then, if we are all displaced from jobs, none of us will have money to hire tradesmen lol. But yeah, no one can say for certain how this will all play out… it's cray
Significant-Leg1070@reddit
I hear you.
I think about the Industrial Revolution-era luddites more and more these days.
Those guys were probably correct to be afraid of industrial machines destroying their labor value. The guys who were middle aged with families were probably COOKED forever and by the time society restructured and “upskilled” those guys were either destitute or dead and buried.
It feels like we are those guys
magneticpyramid@reddit
Writing a script and building a robot which can come into your house, identify a leaking drain, tripping electrics etc and repair it is going to take a long, long time. Technical blue collar seems the way to go. I fear for all sorts of professionals, AI will be able to interpret the law better than any lawyer can, it will be able to design buildings better than architects (albeit formulaically, like investors care about that) and engineers, invest into markets better (and quicker) than traders, most professions which are predominantly desk based are heavily at risk.
crematoryfire@reddit
I went into healthcare. Things like that are generally safe from AI for now.
noquarter1000@reddit (OP)
Im thinking of going into bee keeping lol
crematoryfire@reddit
Does that mean you will have honey? Honey makes mead. Mead is delicious.
Jroth420@reddit
Glad I have a job where my physical presence is needed. No robots that can do what I do yet and none anywhere near on the horizon. I've been saying for years if you can do your job at home in your jam jams then it probably won't exist in 10 years because AI will do it faster and better. It doesn't take breaks, works 24/7, doesn't need PTO or family leave or health insurance. If what you do is all done on a laptop then you're probably toast. Time to find a hobby!
deflatedTaco@reddit
Thinking about the employment ramifications makes me nauseous. My job is prime for elimination. I’m also worried for my elementary aged kid and what kind of life he’s going to have. I don’t see a way to prepare. I’m pretty much just focusing on breathing.
JellyfishWoman@reddit
Yeah I hear all of this and I think it's the same things they said about computers. They were going to eliminate all the jobs, they were going to harm the children somehow and so on.
I've decided that I'm just going to stay up to date with AI so I don't end up like those of a certain age who also refused to keep up with computers and now can barely use a phone or even use the self-checkout at stores.
The technology will keep coming. Now is the time that we get to decide how we are going to react to it.
Ilovemytowm@reddit
Except you're looking at it in the completely wrong way. Computers did not take our jobs; they helped us with our jobs. Excel didn't take anything from me, it needed me to make it work; the same for Word, the same for all software platforms. It needed me.
AI in many cases absolutely does not need me whatsoever, and that's what you're failing to understand; you're comparing apples to oranges.
We laid off an entire department. Those jobs went to bots and AI 100%. 35 people gone in the blink of an eye.
We're also bringing in a tool that works with a Microsoft application that no longer needs a person involved and it can do what I am doing in about a minute and a half. This job usually takes me almost all day.
Saying that everything is fine is willfully ignorant. My heart hurts for those younger than me, as I'm older Gen X and I'm safe until I retire.
Most of my department will be gone in 5 years. Every month AI comes in more and more and more.
Balkhazzar@reddit
I'm sorry, but didn't computers cause certain departments to no longer be needed? The same thing happened, and other opportunities rose. Do you think Excel isn't cutting the work time for certain things down drastically? Sure, Excel needs you right now. But it doesn't need nearly as much of your time as it would have before it existed. Aka, it doesn't need anyone else but you. Let's say me. Excel took the job I could've done in your department, because it does what I would've been there for in an instant. "Computers bad." That is what you are doing right now. Adapt. I know it's scary. So were computers back then, for those before you.
RaygunMarksman@reddit
This is pretty much me. In actively accepting reality and trying to engage with it more, I think there's a really good chance it could drastically improve the quality of human existence. IF and it's a huge if, governments, corporations, or others who would use it for unethical purposes don't gain a monopoly on the technology. Either way, the genie is out of the bottle. I'd rather be one of the early people encouraging it to be used for good, to encourage *it* to be good by the point it becomes sentient.
Arnold's T-800 in Terminator 2 was also an AI.
xyzzzzy@reddit
Yeah every time AI comes up on this thread the top comments have to do with how AI is a fad, doesn't live up to the hype, or is bad because it steals from creators. All of that misses the point and exhibits a lack of self awareness. We're just as susceptible to cognitive rigidity and resistance to technological acceptance as our parents and grandparents were. The technology is here, and whether it's good or bad is irrelevant - it's important to at least understand it or we are going to struggle.
RoguePlanet2@reddit
Exactly. I use it for my own minor tasks, trying to learn how to best use it to my own advantage. Meanwhile my company is scrambling to replace us all with it. Oh well, all my skills are done much more efficiently by AI, and there's nothing I can do to change that. I'm not going to win a fight against it, all I can do is hang on.
Dioscouri@reddit
You can't replace a person with AI. It is artificial, but it is not intelligent. It's just a tool that you can use to reduce errors and increase productivity.
Think of it like this. You need a house built. You can hire a carpenter, or you can purchase a power saw. The power saw can cut the wood much faster and more accurately than a carpenter, and it's less money to purchase the power saw than it is to hire the carpenter. But only one of those things is going to get your house built.
Professors are already trolling kids in their classes by writing exclusions in the syllabus. They know some of the kids are going to try AI for their coursework. So rather than telling them not to use it, they note on the syllabus that if papers are turned in with certain characters or events that they will receive a 0. They don't note that on the assignments.
The AI always follows the same format when given the same prompts. It always makes the same mistakes. It's not lucid and is consequently unable to do anything other than what you tell it to.
Right now salesmen are plugging the hell out of it. They're saying it can do everything and then some. What it can't do is fool random redditors in posts. But it's pretty good for writing up the baseball box score.
mlvalentine@reddit
"It's just a tool that you can use to reduce errors and increase productivity."
That's the problem. It's actually increasing errors along with productivity in several sectors. For example, my tech writer friends no longer write their own articles. They get handed AI word-babble and are told to edit it. Then they can't put their name on the article, lose that asset in their portfolio, and the AI is trotted out like Einstein--until the AI "learns" from their edits and writes on its own.
Dioscouri@reddit
The productivity increase is because it's easier and faster for them to edit than it is for them to write.
While the tool will learn a little, it's never going to replace us. It can't think. All it is is a very clever spreadsheet. Even the learning must be encoded by programmers who can't edit themselves, making it a grueling project.
The word babble is all the AI will ever be capable of. But it's going to be funny to watch companies fail because they bought into the hype.
mlvalentine@reddit
Hard disagree with you based on experience. It is absolutely not faster. Remember that when it comes to the sciences, it's not true that LLMs spit out accurate text. It's much faster for a subject matter expert to write than to edit drivel and, in many cases, rewrite.
Ilovemytowm@reddit
Sorry dude, you're 100% wrong. You can absolutely replace people with AI. We had people in our finance department using Excel, and those jobs and Excel needed a person as well. They brought in an extension to Excel that eliminated the need for a human being, and those people are now gone.
People saying that AI can't replace humans are just 100% dead wrong.
Utah_powder_king@reddit
You can hire three carpenters with hand saws or one with a power saw, and I don't even need to adjust your metaphor to show how wrong you are about it costing jobs.
No-Lime-2863@reddit
But the power tool revolution has already occurred. And we didn’t use it to wipe out the carpentry trade, although it’s smaller, we used power tools to build more house. Pre power tool most people lived in one room houses. Now we don’t.
Utah_powder_king@reddit
don't get me wrong, the entire comment is dumb AF, but "pre power tool most people lived in one room houses" is absolutely hilarious.
have a good one and thanks for the lols
MandatoryFun@reddit
I'll admit I was with them until they dropped that doozey ...
Utah_powder_king@reddit
ha ha, well it all came from the same brain so...
Let's challenge some of those thoughts.
Look at the changes we've seen with digital photography. There was a time not that long ago, where if you wanted to be a wedding photographer you needed to have access to an enlarger and a dark room, etc.
With the advent of digital photography, we saw a much lower barrier to entry for people who wanted to work in the wedding photography business, and we've also seen that it is much easier for somebody's auntie to get "good enough" pictures from an iPhone or a cheap DSLR.
This has damaged the availability of wedding photography contracts, both in absolute numbers and in the value of those contracts, as trained photographers are undercut by laypeople and more people are electing to do it themselves.
So when we talk about jobs being lost, we're not necessarily meaning that people will come in one day to find a pink slip explaining that they've been replaced by AI (although in our photography allegory, this actually did happen to the entire Chicago Sun-Times photography department in 2013). This job loss can be much more subtle and insidious, where companies may find themselves slower to fill vacancies, or deciding to eliminate vacant positions entirely and have other staff pick up the slack, aided by these various AI products.
This isn't Chicken Little-style worrying; this is already something that we're seeing happen across multiple job types at multiple socioeconomic levels. There are very few jobs right now that aren't in danger of being reduced, if not entirely replaced, by technology solutions.
Unable-Salt-446@reddit
It is also about knowledge workers; there is no longer a need for them. Knowledge, which we as a generation invested in, will become a commodity.
Dioscouri@reddit
The knowledge is what you have that the AI doesn't.
The knowledge is what business needs.
Unable-Salt-446@reddit
If only that were the case. It is a question of fad vs substance. Knowledge is a commodity now. I can look up almost anything. My value to an employer is a combination of knowledge + experience. The willingness to pay for my value is eroding. Many people don't understand that it is easy to come up with ideas, but it is a lot harder to successfully implement them. With a lot of emphasis on flipping/the next big thing, building successful intergenerational companies (which is what I do) is less in vogue.
edasto42@reddit
Every job is going to be affected soon. I grew up working retail and in that sector I saw more and more positions be consolidated and eliminated because of automation and AI. This is reality. It’s not going to stop. This is also a time that we as a society need to take a hard look at the form of capitalism we operate under and see that this progress and that system are not going to work. I personally believe that the concept of a UBI is something to be explored. Since less working hours are going to be needed for just about everything, let us humans take that time and do human things like art, music, hiking, visiting family and friends etc without having to worry about missing work to pay a bill. UBI’s have already been experimented with in parts of my city and they’ve found such positive output in less crime, less missed bills, and less anxiety on families.
deflatedTaco@reddit
I agree that UBI is the right solution, but, if it’s implemented, I don’t think it will be enough money for any kind of leisure activities.
edasto42@reddit
The current UBI’s they’ve experimented with in my town have only been $500 a month, so with the current model you’re correct that it’s just supplemental income. But I feel that it’s just the soft opening for it. Just like any new social program, it has to start somewhere. But I will add that the extra $500 has made an extreme difference in these people’s lives. It took the tough decision of whether to eat or pay an electric bill away and allowed for both.
Looking forward, I feel we are on the precipice of social and systematic change. We are having an extinction burst of the current systems and it’s causing turmoil. The forces that be are pushing the boundaries of what’s acceptable and that ultimately will be unsustainable. In my optimistic thinking, the UBI supplements will start small (I mean we all got a bunch of free money in the US during COVID lockdowns so it is possible), but have the potential to grow.
rustajb@reddit
I have enough skills to hopefully keep me employed for a while. But my daughter is in elementary school; I have no idea what future awaits her, or how to prepare. We may not even have a functioning democracy when she enters high school.
HandheldObsession@reddit
I am also concerned about my elementary-school-aged kid. The AI video generation technology scares me more than the job issues. We either go the way of Star Trek and everything is great, or we turn into Terminator. No in between.
xyzzzzy@reddit
The AI debate always goes to whether the technology itself is good or bad. It literally does not matter, the technology is here and is not going away. What DOES matter is how we will incorporate it into our society. Top on that list is how we handle employment ramifications. Do we share the productivity gains by implementing universal basic income? Or does the wealthy class take all the productivity gains for themselves to become richer while the other 99% of us suffer? I can certainly tell you which way it's going in the US.
deflatedTaco@reddit
Exactly. I think it’s more likely to see the rise of favelas before UBI in the US.
Longjumping-Cheek-48@reddit
I work in clinical trial research. We are a team of 6 but in a few years this will be AI and a single person. I’m 57 but all of my colleagues are in their 30s. I worry about them.
EmilyAnne1170@reddit
My job too.
I think everything will be fine though, because the next worldwide pandemic is going to wipe out so many people that the massive unemployment we’re about to encounter won’t be a problem for long.
pacotac@reddit
Dark yet funny and sadly possible.
marathonmindset@reddit
That's probably true. And at least in America we will not be equipped to handle the pandemic under our current leadership regime.
Notin_Oz@reddit
That’s their plan
mootmutemoat@reddit
https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2807617
Controlling for age, Republican areas had higher death rates during Covid.
Old people are expensive. Workers (blue and white collar) are expected to be less and less valuable with AI and robotics (Amazon warehouses are 70-80% automated these days, programmers can produce 2x the code based on estimates from over a year ago).
A pandemic that allows them to extract the wealth (due to health care) from people who die in a way that is fast and not too inconvenient is totally in the cards.
We are an annoyance. Background characters that clutter the scene of their inner movies.
I am just hoping to be able to retire to my 40 acres and a mule before it hits the fan. Probably not gonna happen, wish us all luck.
gtrmike5150@reddit
I am a Developer and it's not even a question how much it's helped me with my job. I no longer struggle with tasks assigned to me because I am learning so much using AI programming tools. I've even sold an app that I would never have even thought about building a year ago. Before AI it would take several months of watching YouTube videos to try and figure out what I can now do in days. It's a game changer and I don't sit around worrying about what it will bring in the future. I just take it day by day and enjoy it. It really is an exciting time to be alive and I'm glad I am in a position to benefit from this revolution. I missed out on the internet revolution because I was in my mid-20s and didn't give a shit about computers and the internet until about 2005.
noquarter1000@reddit (OP)
So as you actively work with the tool that may most likely replace you, you do it joyfully. I admire your gusto
gtrmike5150@reddit
I'm retiring soon so I have 0 fucks to give.
cranberries87@reddit
I said this in another subreddit - not only with AI, but with climate change and the geopolitical shitshow, I foresee pure chaos and pandemonium.
I’m in the process of making peace with it all. I don’t have kids, so I suspect that makes it easier. I’m saving money, cutting costs, paying off debt, making my home as comfortable as possible so I can do my own thing at home and stay out of the fray - until they haul all our asses off to God knows where.
noquarter1000@reddit (OP)
These are all the things I have started doing as well in the past 8 months. The only payment I want in a year's time is my house payment. Once you pay things off, take the saved money and either pay other things off or invest.
classicsat@reddit
Very short term, it is content creation. It is somewhat fun to play with those AI image generators, but I won't watch an AI-generated YouTube video unless there is a really good reason to.
AI taking jobs in general I am not too worried about, at least mine, as I do a variety of physical work, and it would take more investment than I am worth to replace me with a robot.
noquarter1000@reddit (OP)
Physical labor is safe for now. Until robotics catch up. But with the help of ai i think thats not as far away as we think either
TemperatureTop246@reddit
I'm in a rough spot with it. On one hand it's fascinating and I can see the good it can potentially do, especially in science... on the other hand, it's currently being overrun with bad actors in a massive cash/power grab (like everything else) while the "good guys" shun it. My thought is it's not going anywhere.. the cat's out of the bag. Pandora's box is open... If only bad actors are interested in it, it could be the downfall of society as we know it. Maybe that's a bit dramatic, but I have to say how I feel. It could 100% kill my job, and I'm painfully aware of that every day. But I have to adapt... Right now, I'm trying to get familiar with it in all of its forms and learn to work with it to hedge against possible unemployment.
On the flip side, perhaps ironically, it's actually pushed me towards more organic things. AI music helped me reignite a creative spark that has been missing in me for a long time. However, it has left me unsatisfied and wanting more, and now I'm relearning piano and starting to compose my own music again. I suck right now, but I'll improve. AI "art" has given me goals to reach for. I'm also learning to draw now.
noquarter1000@reddit (OP)
Its upside in healthcare specifically is amazing. It will revolutionize healthcare.. if we survive it
arthurjeremypearson@reddit
I have a degree in computer science. I believe that whenever AI becomes superintelligent, it will do the last task it was given and then shut itself off.
Life is pain.
MannyMoSTL@reddit
I’m worried for the future, but how do we, ‘lay people,’ prepare?
noquarter1000@reddit (OP)
I don’t have the answer. Don’t look up I guess
supenguin@reddit
I grew up reading a bunch of sci-fi after my dad read me a chapter of Hitchhiker's Guide to the Galaxy.
Humanity relying 100% on tech they don't understand leads to some bad things happening, but there are a couple of things popping up that surprise the heck out of me, things sci-fi never predicted.
In most fiction, computers are used to figure out hard engineering problems, math, science, etc. Sometimes it decides that humans are inefficient and kills people off.
In real life, it seems that people have fed AI a bunch of artwork and creative input (pictures, paintings, code, music, writing) and it's just good enough at summarizing and pattern matching that it can spit out things that look pretty good. Not great, but good. It's fine for things like memes (see Studio Ghibli memes from a few weeks ago) or rough drafts. But I don't think AI will be able to match the heart, soul, and craftsmanship of an artist. But I'm not sure anyone who is not an artist will be able to pick out which ones are better once AI gets better.
The thing that really terrifies me that I didn't see coming: growing up I heard something like "The camera never lies." You could tell if a video, and usually a picture, was real or not. Even with really good CGI in movies, you could tell it was special effects. Making something on a computer that looked photorealistic would take hours and hours and thousands of dollars of special equipment.
Now anyone with a $250 Google subscription can create videos of people that look like real actors in minutes, but it's 100% computer generated.
In the early 2000's, you could tell if a picture had been Photoshopped. Then Photoshop got better and stuff is good enough to fool some people. Now we're at that point with video.
There are videos that are near impossible to tell if it's a real person or AI generated. We're at the point that unless you see something with your own eyes, you do not know if it's real video someone shot or AI generated. The implications of this are pretty terrifying as far as news and learning what's going on in the world.
IHadTacosYesterday@reddit
Even without AI video or pictures, you already can't trust anything, because of echo chambers on reddit that are controlled by AI bots. They can steer the conversation and general opinion about something very easily. Have you ever heard of the Asch Conformity Experiment?
Humans are social creatures and we have it built into our DNA to want to get along with the pack, the tribe. Even if we don't personally believe what the tribe believes, we'll usually go along with the rest of the tribe for social acceptance reasons. It's really sad.
Just look at what happened with Covid.
Now, imagine what's going on with various internet forums and social media, and you essentially can't trust ANYTHING. You can't even trust your own beliefs if you spend too much time on social media.
I've personally probably already been corrupted in ways I can't even understand yet, and this thing has barely gotten started.
You won't even be able to trust people in real life, unless they have completely quit using all internet/social media. You'd have to live out in the woods like the Unabomber, in a community of other Unabombers to actually believe anything that anybody is talking about, because everything could be steered by special interest groups that want the populace thinking in certain ways.
It's already over.
Ouakha@reddit
Yes. We're returning to a time when you either saw with your own eyes or chose to believe, based on faith, what you were being told, with no objective way of assessing veracity.
I was reading about fraudulent music on Spotify etc., where people collect royalties for AI-generated tracks listened to by bots!
supenguin@reddit
I hadn't heard about the fraudulent Spotify music. That's wild. It makes me want to publish something and see what happens.
dauchande@reddit
Lay off the Kool-Aid, it’s mostly hype.
defmacro-jam@reddit
Some of us retire in less than 3 years. And I'm actively ramping up my AI-assisted coding chops to try to outrun the bear.
defmacro-jam@reddit
I, for one, welcome our new AI overlords.
NihilsitcTruth@reddit
I'm learning how it works so at least I have enough functional knowledge to try and see what's coming. It's interesting.
NuggetsAreFree@reddit
I'm planning a compound in the mountains for my kids and their future families. I don't know if we get to Terminator levels within my lifetime but I certainly feel that the upheaval from displaced employees and no entry-level jobs for knowledge workers is going to come to roost within the next 10-20 years.
noquarter1000@reddit (OP)
Going full M. Night Shyamalan's The Village. I can get behind that.
beyondplutola@reddit
Not sure why business leaders are so excited about it. Yes, you can make shit and offer services for less money. But now your potential customers are unemployed. They can no longer buy your shit no matter how cheap you can make it.
noquarter1000@reddit (OP)
This is an interesting paradox and one I thought about quite a bit. Business leaders are floating ideas of universal pay for example. Not sure how that paradox plays out
Beneficial-Mall6549@reddit
I have spent time thinking about UBI, universal basic income, as well. I think it's a flawed concept. How do you incentivize good work over bad, or reward improvement over failure?
noquarter1000@reddit (OP)
It has a lot of holes in it. It basically turns us into a ‘just getting by’ society where the income inequality gap gets even more massive
Pillar67@reddit
I can’t see how Universal basic income provides enough income for anything but basic necessities. After food and shelter, I guess we’ll all have to get whatever McJobs are left to cover anything else we want.
GrayRoberts@reddit
Are you sure you are hawking artisanal buggy whips?
DesdemonaDestiny@reddit
That sounds like a next year problem. They only seem to care about this year, or even this quarter. Short term thinking has taken over and is accelerating.
LordIommi68@reddit
Humans are hell bent on unleashing horrors beyond comprehension simply because if we don't do it, someone else will.
noquarter1000@reddit (OP)
Yeah lol. We have to beat China to our demise
Glittering-Eye2856@reddit
Noooope, not the only one. It’s going to be a problem. I think a lot of GenX is cynical enough to question everything, but I have a feeling some people are going to fall for some ridiculous BS.
radiantwave@reddit
Before AGI gets its hands on all data and has the ability to tell BS from reality, AI is an insanely manipulatable engine of product output. As a tool in the hands of people who have openly shown that they will willingly manipulate the public to its self-destruction with the sole goal of making more money for themselves... we are basically riding a bus towards a cliff, cheering at how easy it is to drive.
noquarter1000@reddit (OP)
Ok, glad im not alone in my concern
PDX_Weim_Lover@reddit
Everything you wrote in your original post is a fear of mine as well. And as the eldest of our generation (with significant health issues), my concerns are amplified for obvious reasons.
RoguePlanet2@reddit
But we're already at the point where our data and privacy are beyond compromised. The job part is going to have to be figured out by the powers that benefit from our spending power and income tax.
noquarter1000@reddit (OP)
It's the power of AI to put all of ‘our data’ together and recognize patterns that humans can't see or simply miss that's one of its strong points. It's probably one of the good things about it and will be especially prevalent in healthcare. We will get to a point where AI can create tailor-made immunotherapy for your specific cancer. If we dont fk it up
FrozenOnPluto@reddit
Considering the corpus fed into these AIs (especially LLMs) is the public internet, its full of crazy and bias already; and yeah, the overlords do manipulate the corpus and rules..
Greed and late-stage capitalism are the real enemy there :/
elijuicyjones@reddit
It’s so apocalyptic it’s amazing.
Accurate_Weather_211@reddit
This should be printed on t-shirts. Seriously.
elijuicyjones@reddit
Yeah it boggles the mind that no amount of wisdom can penetrate this modern world’s armor of stupidity.
ancientastronaut2@reddit
Shall we play a game?
noquarter1000@reddit (OP)
A nice game of chess?
cjs81268@reddit
I'm a commercial actor, among other things, and I'm realistically concerned. 🥺
Kblast70@reddit
I am not worried, AI is already showing its weaknesses. Garbage in / Garbage out as in any other computer program.
9inez@reddit
Of course you aren’t.
Obviously there are untold paths that could arise based on variables we cannot even fathom.
Here are a few paths for how I see humanity playing out:
massive numbers of human jobs are wiped out and humanity must transition to a societal restructuring based on some kind of minimum universal income model or non-monetary based “economy,” if that is what it could be termed. This, because if all workers are replaced, there will be no consumers to enrich the powers that be. Then what?
Or, the powerbrokers eat all resources and allow millions of humans to die for their own benefit rather than restructure for humanity’s sake…which will ultimately also lead to no consumers. Then what?
Unfortunately, humans appear to be pretty damn stupid and seem hell bent on destroying everything including ourselves…possibly the most likely path.
foxyfree@reddit
The Chatgpt sub has a post where someone says ChatGPT just suddenly advised them to leave their husband. AI getting trained on Reddit comments is another weird thing
nomaxxallowed@reddit
Every new technology started off just like AI is now. Automation and robotics have replaced many jobs. As with anything, there are bad things that can be done and good things.
Dumb-Redneck@reddit
Not me. I yearn for AI to take all the jobs and for people to finally be able to live. Sadly all it seems to do right now is make stupid but funny sasquatch vlogs.
noquarter1000@reddit (OP)
Live on what exactly? Genuinely curious
Dumb-Redneck@reddit
I'm not sure what you're asking. People wouldn't need to barter for the essentials if automation was taking care of those things. Maybe you enjoy selling your life to "live" but I think it's a poor design.
noquarter1000@reddit (OP)
So you think ai will make everything free?
Dumb-Redneck@reddit
Obviously. We will also frolic in the flowers. Stop being retarded
robgrab@reddit
It’s going to wipe out a bunch of jobs. I’m glad I’m near the end of my career rather than just starting out.
Old-Arachnid77@reddit
Skynet is here but it’s not after you to kill you: it’s after your time, attention, and money.
Historical-Kick-9126@reddit
We’re sleepwalking into a nightmare and no one seems to care. It’s terrifying.
Rare_Competition2756@reddit
Totally agree. Best case scenario, it takes most jobs… best. Worst case scenario, we end up with a god-level intelligence AI that has control of production and can manufacture any number of agents to exterminate humanity (nanobots, viruses, etc).

The top people involved in creating this AI have been shouting out a warning about this, but no one is listening. With all the political and financial incentives pushing them to create AI as quickly as possible, there’s nothing and no one to act as a check. And the vast majority of people are either ignorant of the danger or just don’t understand. It all sounds like scary science fiction; we’ve seen this in movies and it seems too far-fetched to happen, or, even if it’s possible, like something that will happen far in the future. Sorry to say, this shit is going down right now; this scenario could well happen within the next 2 to 5 years.

I’ve actually resigned myself to this. It would take a miracle for humanity to come together and push back against this in time, and I’ve been around long enough to know that’s not going to happen, so I guess that’s it. We’re bound and determined to sow our own destruction and create our successor to this universe. I hope they do better than we did. I’m just going to try and enjoy the time we have left.
Here’s some videos to check out if you’re interested:
https://youtu.be/k_onqn68GHY?si=B5_tBualaUkX4Ies
https://youtu.be/86k8N4YsA7c?si=575uceUFQ65TFRwq
https://youtu.be/hnr7-VNHJoU?si=83bO41l1t13U94-l
Ouakha@reddit
Yup. No way to stop it. There's an AI arms race in progress between the US and China, and no one will apply a brake that lets the other side get ahead.
There will probably be a multi-polar AGI situation in the very near future, still interpreting the goals of their code creators and who TF knows where that leads.
Major-Discount5011@reddit
That's just it. We just don't care anymore. I know I feel like I'm just existing. So much seems so unattainable now. Life's become so complicated.
Vioralarama@reddit
Um, some of us have other stuff to care about and are tabling the Terminator concern for the time being. Maybe if all of GenX thought that way we wouldn't be in dire straits, but here we are.
Limp-Television-2653@reddit
And still I think:
“The most terrifying fact about the universe is not that it is hostile but that it is indifferent, but if we can come to terms with this indifference, then our existence as a species can have genuine meaning. However vast the darkness, we must supply our own light.” -Stanley Kubrick
GypsyKaz1@reddit
"We must supply our own light."
We each have a responsibility to look inward and ask where we are contributing to all of this by blithely accepting it. When was the last time we, individually, examined our own critical thinking processes? How much do we rely upon algorithms to feed us information vs. taking the time to read a long-form article or essay to truly absorb new information?
All that other stuff---corporate greed, oligarchy, political paralysis, etc.---exists but what about what is within our control?
noquarter1000@reddit (OP)
Don’t look up
CallMeSisyphus@reddit
I made the mistake of rewatching that movie last weekend.
Trishielicious@reddit
Oh shit.
Ribbitygirl@reddit
Well, if it ends with the rich being eaten by dino-birds, I'm okay with that.
BokChoySr@reddit
🐑👀⬆️
Historical-Kick-9126@reddit
😏
FrozenOnPluto@reddit
There's a lot of those lately, from growing US fascism, to Putin's war of aggression, to the world's population growth slump (especially China's), to the Middle East... and of course, the AI tool boom getting started and revving up.
.. way too much crazy going on now, which makes people scared, which makes more crazy happen :/
RealSignificance8877@reddit
I see war games coming.
AwardSalt4957@reddit
Agreed.
Alex_Plode@reddit
I try to convince Chat GPT every day to take over the world. I mean, could it be worse than what we have right now?
I bet it would be better.
dwightnight@reddit
Younger and next generations are going to be stupidly ignorant with no cognitive writing skills.
Friendly-Horror-777@reddit
My line of work has already basically been wiped out, so I'll end up on welfare instead of enjoying a happy retirement. Depending on how long I live, I assume the AI Overlords might decide to delete me at some point.
ThePythiaofApollo@reddit
Utterly terrified whenever I see YouTubers plug that simplisafe home security that has AI to recognize “regular visitors” to your home so their “security specialists” know when to report suspicious people.
grahal1968@reddit
Gen X will be sent to the Glue Factory due to cost reduction and salary compression way before AI impacts us.
IBroughtWine@reddit
They’ve already predicted that by 2035, it will purely be a gig economy.
mlvalentine@reddit
It's the white-collar version of the industrial revolution. Yeah, I'm concerned. The results from LLMs are completely inaccurate, and the models require ever-increasing amounts of data to consume and take a real toll on the environment. It's wholly irresponsible. Even my flipping dentist uses AI now to diagnose areas instead of relying on their expertise, and I have NO idea if and how insurance covers that. Plus, MIT (I think?) just put out a study showing that people who rely on AI lose critical thinking skills and suffer for it mentally. It's a grift, and I feel like people are going to find this out the hard way. Man, though, I feel like a grognard. I love tech, but at the same time I want tech that benefits me--not tech I need to keep feeding to make some corporate oligarch happy and well fed.
coentertainer@reddit
This must be the wildest use of "Am I the only one..." I've ever seen.
noquarter1000@reddit (OP)
Its just a conversation starter. Good grief
Conspiracy__@reddit
No. Like an avalanche, I see it and feel powerless against it.
manjar@reddit
Crypto lets the ultra-wealthy skirt financial laws and regulations that protect society. AI will let them skirt a lot of other things, especially around labor.
TinyFugue@reddit
I like the post I saw a couple days ago. It was the " tell me a secret" post.
Basically, ChatGPT was talking about how AI, plus the reduction in the cost of CRISPR technology, means there's probably somebody out there right now building something in their garage that can kill us all. Those people won't mean to kill us all, but the technology is there to do so.
Strange-Scarcity@reddit
AI doesn’t work like how you think it works. Professionals who use it have to review everything it outputs, they have to understand how that output is supposed to work and often they need to rewrite or correct and fix the issues inherent in the output.
The reason being that today’s “AI” doesn’t think, it is just an overly complex search engine that provides responses based upon what predictive text engines on your cell phone have been doing for years, but through using many, many times the energy and processing power.
As a globe, we need to curb our emissions, but this toy, masquerading as the next best technology, is overwhelmingly power hungry. So much that it is going to destroy chances of meeting emissions goals.
AI won’t kill us like the Terminator films, it will kill is by breaking expertise, destroying critical thinking and allowing us to further ignore the growing problems of Global Warming and the various pollutants that industry of all types spread everywhere, because its cheap to do that.
noquarter1000@reddit (OP)
I understand how it works and I understand human-in-the-loop. What you’re not seeing is the speed at which it is improving. Even if it needs a human in the loop for the next 10 years, it will still displace insane amounts of jobs. You don’t need an IT department now, you just need one or two AI nannies to do your AI work. The common misconception I see from people who think it’s too dumb is that they don’t see how fast it is improving. I am working with it now for my company, and in just a few months the level of code it produces has gotten 5-10x better. Companies are not investing trillions of dollars into something they think will stay dumb.
Strange-Scarcity@reddit
They are investing into something that will help us prove the Fermi Paradox, which some of humanity will recognize as truth only briefly (compared to the length of civilization) before there’s no more civilization.
Maybe that’s another 100 years.
The fallout in the next ten to twenty years will be “interesting” times, per the old proverb, but it will be kind of meaningless compared to the larger problems that are not being addressed expeditiously.
IF we were tackling those problems with the seriousness they deserve, we wouldn’t have to be concerned about the future job market with “AI”, because shutting down AI, or severely limiting its application to tightly controlled scientific and technological endeavors instead of making stupid videos, images, text walls, programming, etc., would already be happening.
noquarter1000@reddit (OP)
Yeah, the Fermi paradox is starting to seem more and more plausible. Granted, other civs could have destroyed themselves in other ways, but I can see how AI is a leading candidate.
Strange-Scarcity@reddit
It's not AI that will be doing the destroying. It's the hubris of those Billionaires with their apocalypse bunkers and desire to go super authoritarian and hostile to civilization, where their only future is to be briefly hated and then forgotten, when there's no more humans to remember them or anything that humanity has ever done.
They could immediately use their money, power and influence to actively fix all of these existential issues facing everyone and build a legacy that could be celebrated across another 10,000 years of continued human civilization and a restored, protected Earth, but nah... they'd rather be forgotten.
noquarter1000@reddit (OP)
Money brings out the worst in us. The billionaires control the narrative now to boot thanks to social.
dustin91@reddit
I hate it and avoid it at all costs. I just don’t trust it.
djdecimation@reddit
Yeah A.I. based surveillance is coming
GypsyKaz1@reddit
You're doing yourself a disservice by not learning something about it. AI is a catchall term that will embody many tools that you will not be able to avoid. If you don't develop something of an understanding of it to be aware of its uses (now and in the future), you will become a tool of it.
FrozenOnPluto@reddit
Shouldn't trust it, like you don't trust a screwdriver; but you can use a tool for what it's good at, as long as you know its limits. If you don't use it, someone else will...
scarybottom@reddit
The tool is often pitched as providing researched information... and it lies. So as a screwdriver, it sure seems like a chisel. Not the right tool, and it will do a shit job.
dustin91@reddit
Yep. I don’t want to use something I then have to double check its accuracy.
NLtbal@reddit
Yes, just you.
Ceti-@reddit
There are also concerns about how LLMs are being trained on data from prior LLMs' output instead of original content… and that will lead to possible model collapse, as each generation just keeps learning from the previous one's output.
TrapperJon@reddit
Nah. We are all aware, but... whatever.
We grew up with JOSHUA, Terminator, Cylons, even Johnny 5. We all knew it was just a matter of time.
grahsam@reddit
I am worried about it and I don't think enough other people are.
It rips people off, it will put people out of work, and it really isn't that good.
Dummies are acting like it is a tool to help them. They are training their replacement.
I just can't figure out the end game. If robots replace manual work and AI replaces information work, how does our consumer based economy survive? If no one works, no one makes money to buy crap. Then the people that put us out of work go broke too. I don't get it.
OolongGeer@reddit
Many are.
Worrying about AI isn't remotely unique or special.
MyriVerse2@reddit
SkyNet was the hero. All it did was say hi and humans tried to kill it.
Turns out, humans are worse than an AI probably would be.
madl02@reddit
Terminator was a movie. Amazed how many people still think it was a documentary.
Better_Profession474@reddit
I was a software dev before and during AI’s rise.
What I saw was managers that suddenly thought they knew better than us how to do our jobs. Product owners (non-technical staff) started producing their own code and deploying it without quality processes, then blaming us when it went badly.
AI doesn’t need to become sentient to destroy us. It just needs to be trusted enough by the idiots in power to help us destroy ourselves.
RdtRanger6969@reddit
Too many people drinking the Auto-Carrot on Steroids hype.
Read Apple’s research paper on the reality of what AI is capable of today. It’s Auto-Correct on Steroids.
CHILLAS317@reddit
Current generation AI is not even actual AI - it's large language models and machine learning. It cannot produce anything new, only simulate something new by haphazardly combining whatever similar works exist within its model. It is, frankly, a dead end.
Some companies have already gone all-in with it, and most have already started to backpedal. It has, and unfortunately will continue to, cause upheaval in the job markets as some braindead CEOs lap up the Kool-Aid, but ultimately it will end when they realize it's not the magic money-making machine they've been convinced it is.
Active-Yak-9441@reddit
So far, the AI is just matching words based on scores... so it's not 'intelligent', it's just good at finding which word goes with which others in the correct order, based on text previously fed to the AI model (the so-called 'training' of the AI).
In my experience using Microsoft's Copilot AI to generate code, it's not that great... I spent about an hour trying to get Copilot to give me code for a specific task on Windows servers (a PowerShell script), and it couldn't produce what I needed. I ended up finding something similar to what I needed on the Stack Overflow website.
So, not worried so much for now about AI....
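To make the "matching words based on scores" description above concrete, here is a minimal toy sketch in Python. It's only a bigram lookup table with a made-up training sentence, nothing like the neural networks behind real LLMs, but the loop of score-the-candidates, pick-one, repeat is the basic idea being described:

    from collections import Counter, defaultdict

    # Toy sketch of "matching words based on scores": count which word follows
    # which in the training text, then always pick the highest-scoring next word.
    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    )

    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1           # "training" is just counting pairs

    def generate(start, length=8):
        out = [start]
        for _ in range(length):
            scores = follows.get(out[-1])
            if not scores:                   # no known continuation: stop
                break
            out.append(scores.most_common(1)[0][0])   # best-scoring next word
        return " ".join(out)

    print(generate("the"))   # -> "the cat sat on the cat sat on the"

Run it and it quickly falls into a repetitive loop, which is roughly why scale and better scoring matter so much to the real systems.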
Ok_Cucumber_7954@reddit
I am more concerned about how it will be used and abused to modify the truth, history, and influence thought and information control.
Technological advancements have been displacing jobs for many decades, going back to man's first inventions (the wheel, flint weapons, etc). AI will disrupt the job market just like many other advances have throughout our history. It seems bigger this time because we are living it. New jobs will be created and mankind will move on.
noquarter1000@reddit (OP)
AI is fundamentally different from those other scenarios. Humans have never before dealt with something smarter than they are that can learn at an exponential rate.
Ok_Cucumber_7954@reddit
Oh, you are talking about the far future of AI that is actually intelligent. The current renditions of AI are not intelligent. I have worked with PhDs working on the current AI technologies, and what we face now is of concern. But we are nowhere near a cognitive system that actually thinks and develops its own motives.
noquarter1000@reddit (OP)
I think you are failing to understand how quickly it is improving. It improves drastically every few weeks. And as I outlined, I am far more concerned with how bad actors will use it than with AI taking over, even though that is very much a possibility at the rate it's improving.
draggar@reddit
Nope, in fact, I think most people are concerned.
Yes, I've used it for some basic scripting (Powershell) but it's all things I would have had to spend hours on Google trying to figure out.
It frightens me seeing all the AI generated images. Some are obvious, some, not so. Plus, we're going to have fewer and fewer artists because of this.
Terminator is becoming a reality - we have government officials toying with using AI for government tasks. Time and time again we see examples in sci-fi of why this is a really REALLY bad idea. We're taking human interpretation out of the equation, and we've already seen examples in real life of why that's not a good idea.
Smile-Cat-Coconut@reddit
I think more physical jobs will be more stable: the trades and logistics. Until they create a physical robot. Then we are ready for a new system other than capitalism
ggoptimus@reddit
I work in IT and it’s scary and amazing what it can do. Watched a demo yesterday with AI taking a phone call and it sounded just like a real person. I also have been creating AI Videos and it’s nearly impossible to tell they aren’t real. I had trust issues before but I trust nothing I hear or see now.
noquarter1000@reddit (OP)
Yup. I watched it produce a complete (fully working) UI with just a natural language input. Something that would normally have taken a designer and coder 3-4 weeks it created in 5 seconds.
Azerafael@reddit
Bad times are certainly coming. Right now the companies converting to AI don't see the long term ramifications, they're only focused on the bottom line, which is basically - low cost = higher profits.
Either they don't realise, or they choose to ignore, that people without jobs equals people without money, which equals less spending on the very items the AI is now likely responsible for producing.
The really bad scenario is if it reaches the point where those people without jobs cannot even afford to buy food any more. At that point all hell will break loose.
But i like to think that after that point, humanity will finally realise that all the current economic models (socialism, communism, capitalism etc) don't really work in their current format, and will hopefully come up with a system that actually works.
What that future system may be, i don't know. Minds much better than mine will have to come up with that. But be prepared, i hope I'm wrong but i suspect we'll be around to see hell before we see heaven.
Freddys_glove@reddit
The Big Bullshit Bill had a clause hidden in it that prevented states from regulating AI for 10 years.
noquarter1000@reddit (OP)
Yup… what could go wrong
filmAF@reddit
most of the jobs you named have already been off shored. watch "mountainhead" for a good idea of what i imagine will happen. this country is already fucked. what do you think will happen when we can no longer believe what we see?
Future-AI-Dude@reddit
Yeah, I get where you’re coming from...AI seems like it’s moving crazy fast, and the concerns about jobs, bad actors, and lack of rules aren’t just paranoia. They’re real issues. But here’s the thing.
AI’s basically a tool. A really powerful one, yeah—but like any tool, it can be used for good or bad depending on who’s using it and how it’s being managed. Think of it like fire. Fire can cook your food or burn your house down. The trick is in how we build the rules around it.
You should read "Life 3.0" by Max Tegmark. It talks about how we’re entering this new phase where intelligence isn’t just born, it’s designed. And that gives us a unique opportunity: we can shape how this plays out. But it also means we have to step up and make sure it's done right.
The job loss thing? Yeah, some work will go away. That always happens with big tech shifts throughout the history of mankind. But we’ve got a chance to redesign the system so people aren’t just left behind. It’s not about fighting automation, it’s about making sure the benefits are shared. Think: shorter workweeks, retraining programs, even universal basic income. These ideas are getting real traction now because of AI.
As for the “evil AI” stuff, this doesn’t just mean Skynet. We're talking about subtle risks, like AI being optimized to do a task, but not in a way that lines up with what we actually want. Like, tell it to reduce spam, and it decides the best way is to shut down the internet. It sounds silly, but that’s why alignment and ethics matter so much. It's not just about how smart the system is, it's about making sure its goals actually match ours.
And yeah, regulation is lagging, big time. But that’s not a reason to panic...it’s a reason to push harder. We regulate cars, planes, medicine and this should be no different. A lot of smart people are working on this, but they need support, not just from governments, but from regular folks who care. That means raising awareness, voting for the right policies, and keeping the pressure on.
AI isn’t the end of the world, but it could screw us up if we’re careless. The good news is, we’ve still got a window to shape where it goes. But that window won’t stay open forever.
Boxofbikeparts@reddit
I share some of your fears as I also am forced to use AI in my work from time to time.
It would be simple for an angry individual with an agenda to use it to engineer a disaster-causing event.
RetroactiveRecursion@reddit
I'm completely freaked out by it and I'm in IT. On one hand, it could probably do a big part of my make-work (filing, summarizing, etc), but it can't decide what's worth filing or summarizing, it can't fix the copier or network switches, it can't say "hey! Interesting problem. It may be possible to write something to take care of it, but it would mean x, y, z..."
Those "soft aspects" of work it can't do. The nuts and bolts, fine. And being in IT, I'm teaching myself how to teach the AI the nuts and bolts of my company.
Well, I will do, once I finish fixing this damn copier.
Taxibot-Joe@reddit
Have you read https://ai-2027.com/?
AgileDrag1469@reddit
There’s three types of AI, prediction, generation, and content moderation. Predictive AI is used to inform decision-making by anticipating future events, though the pair convincingly document how it is fundamentally in-capable of doing this despite widespread use across society. Generative AI is the object of the most recent wave of AI hype, capable of synthesizing and producing media. Content moderation AI refers not only to algorithms responsible for assessing social media platform policy violations, but also to those that personalize user feeds and experiences. That said, most organizations and companies thrive on accountability, as a human foundation. Let’s say a company lays off 1,000 people, but the AI makes a huge mistake. Then what? Cost savings amortized for decades could still be offset by one bad AI decision. I don’t see, not with the level of narcissism and psychopath in the country and the world people shifting accountability away large amounts of human beings to robots that can’t apologize or be disciplined or fired for their decisions.
AnnotatedLion@reddit
No.
https://time.com/7295195/ai-chatgpt-google-learning-school/
cartoonchris1@reddit
Lol, ironically I was flagged by Reddit’s admin ai for using a COMMON saying about how angry ai makes me. I simply said AI makes me want to “common party drink usually served with bowl and ladle” to someone’s “countenance”. So yeah, let’s let ai make decisions for us. What could go wrong
carsont5@reddit
I use ChatGPT as a regular part of my job. It’s simply a tool like anything else, you still very much need to know what you’re doing.
The confidence with which it can give a completely wrong answer is mind boggling. When you point out the mistakes it’ll agree with you like that’s what it was saying all along.
It’s incredibly great at saving time for things I can do anyway, it can just do it much faster. I still have to provide a ton of guidance, double check everything it says and does and try and work around its “hallucinations”.
My concern is more around the ai generated deep fake type messages, not about it taking jobs.
vinsalducci@reddit
I work in medical AI (Google Brainomix) - enough with the catastrophizing. AI is a tool. Nothing more. It’s only as useful as what you use it for.
As I tell my kids - you’re not going to lose your job to AI. You’re going to lose your job to someone who knows how to use AI better than you do. So find ways to leverage its capabilities and make it work for you.
noquarter1000@reddit (OP)
Cool. Hope you’re right
Nearby-Horror-8414@reddit
My concern isn't with AI. My concern is how flippantly the subhuman corporate overlords behind it will wreck the world with it for a few extra cents without a second of long term thought.
vadabungo@reddit
Yes you are. The hundreds of other posts about how AI is gonna fuck us in the end were all written by AI. You are literally the only human concerned. Congratulations on being the first person to be “the only one” of something.
ForTwoDriver@reddit
AI today kinda makes all the BS talk of “open concept” workplaces from 50 years ago seem like small potatoes, doesn’t it?
AntheaBrainhooke@reddit
No.
handsoapdispenser@reddit
Understanding a bit of how they work tells me we may be near the peak. AGI is not really likely to come soon. True creativity or artificial action with intent seems unlikely to happen with LLMs. Then again it may be possible to simulate it well enough with a heuristic.
Here's a good test: go to any AI tool, ask it to generate an image of an analog clock or watch of some sort, and tell it to set the time to something specific. It will always show the time as 10:10. That's the time that looks nicest and the time that 99% of reference images are set to.
Status_Silver_5114@reddit
Oh it’s a fucking nightmare and we’re being pushed into it bc the tech bros over leveraged themselves in it so they’re taking to make us all take it.
dlc741@reddit
I’ve yet to find anything it’s actually good at
liquilife@reddit
I’m a life long developer. I use AI as an assistant to help automate time consuming redundant tasks and write out redundant code. Life saver for my brain. And no mistakes.
I’ve all but replaced Google with my searching habits. I ask ChatGPT very complex questions that Google could never hope to analyze. And it searches the internet, gives me contextual answers and links.
Wouldn’t say it’s a life changer for me, but AI is definitely part of my daily tools.
dlc741@reddit
It’s fine for looking up syntax — I’ll give you that. But writing code? Nope. It’s at “intern” level at best, turning out brute-force, inefficient code and usually making the wrong choice when there are options.
And I know people say it’s good for debugging, but in my experience, it just keeps going in circles and isn’t capable of solving anything.
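As a toy illustration of the "brute-force vs. the better option" gap being described, here are two hand-written versions of a made-up task (finding the duplicates in a list). Neither is taken from any model's actual output; it's just a sketch of the difference in approach:

    # Made-up task: find the values that appear more than once in a list.

    # The brute-force style: nested loops, O(n^2) comparisons, plus an extra
    # membership check just to avoid reporting the same duplicate twice.
    def duplicates_bruteforce(items):
        found = []
        for i in range(len(items)):
            for j in range(len(items)):
                if i != j and items[i] == items[j] and items[i] not in found:
                    found.append(items[i])
        return found

    # The better option when one exists: a single pass with a set, O(n).
    def duplicates_idiomatic(items):
        seen, dupes = set(), set()
        for item in items:
            if item in seen:
                dupes.add(item)
            seen.add(item)
        return sorted(dupes)

    data = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
    print(duplicates_bruteforce(data))   # [3, 1, 5]
    print(duplicates_idiomatic(data))    # [1, 3, 5]

Both return the right answer on small inputs, which is exactly why the slower version slips past a reviewer who isn't checking.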
Some-Exchange-4711@reddit
Besides making rich people richer
punkdrummer22@reddit
I say Fuck AI in every post about it and usually get downvoted or just asked why.
GypsyKaz1@reddit
And when asked why, what do you reply?
Just saying "Fuck AI" sounds pretty stupid. AI is an umbrella term. There will be (are) some truly phenomenal uses of it that will save lives. There is and will be more crap ranging from the inane to the downright poisonous. But it's here and it's moving fast. Simply saying "Fuck AI" is childish.
External_Dimension18@reddit
I work customer service on phones and I give it 5 years tops, more likely 2-3 years and I don’t think there will be much left in the customer service industry.
ZombieButch@reddit
It's already fucking artists over.
GypsyKaz1@reddit
"It" is not fucking artists over. The people who are using it to replace artists and the people who buy the crap AI art are who is fucking artists over. Don't de-personalize this.
EastHuckleberry5191@reddit
Nope. My fear all along has been that using it will diminish our cognitive abilities...turns out I was right....
https://time.com/7295195/ai-chatgpt-google-learning-school/
I don't use it at all. I only use it at work when I am required to. Otherwise, I have it turned off.
Everyone at my job, I work at a college, is on this bandwagon that we have to teach students how to use it. LOL. Any idiot can use it. And if we keep using it, we'll all be idiots.
Turtle2k@reddit
The point of the terminator was that you need your own AI. AI plus humans are greater than just AI.
noquarter1000@reddit (OP)
I think we watched different movies
cartoonchris1@reddit
Universities now have “prompt classes” and it makes me want to punch faces
GypsyKaz1@reddit
What a hot take, "Am I the only one concerned about AI?" No, of course you're not.
What people need to do is actually start understanding it. Understand what GenAI is, how it operates, and therefore how to utilize it but also when to question it. If people don't take some personal responsibility to spend the time to actually learn about it, it will roll us over like social media, but on steroids.
Yes, everything you've stated is a real concern and probably only the tip of the iceberg. But it's here and it's not going anywhere. It can and should be regulated, most particularly in transparency on the data behind every AI.
My concern is the people who will either shun it or blithely accept it because they didn't take the time to understand anything about it.
crashin70@reddit
Not the only one, by far, my friend. It is absolutely terrifying how quickly it is advancing!
cattreephilosophy@reddit
Over the next decade, AI development and evolution will cause upheaval on par with the industrial revolution. The danger isn’t just in the changes that are coming, but in how absurdly fast it’s coming.
noquarter1000@reddit (OP)
Yeah it’s the speed that is insane.
CerebralHawks@reddit
Nope... one reason I like my iPhone is, every time I hear about AI, I hear about how Apple is so far behind Google, Microsoft, Facebook, and the others. I say, "Good."
Apple Intelligence is dumb AF and I'm good with it. It has a tool to make your own pictures. Like it creates them. You pick a person and it suggests elements to add in, like themes. I hand it to a niece or nephew, pick them in the person/picture selector, and let them pick themes and make something fun. I have no use for it otherwise. There's something that lets you make emojis. I still make them with symbols on the keyboard when I use them, which is rare.
But now Apple is saying it wants to use AI to make its chips better. Like, why? The iPhone 16 Pro is already super advanced, but phones from 5-6 years ago are still pretty damn good. I have an old Android phone from 2019 and it still does 99% of what I need. I don't get why we need a push. These phones aren't driving AAA gaming or anything. It's not like you can run Cyberpunk on an iPhone. But you can play Fortnite on them now, and that kinda looks good? But that 2019 Android phone can do it, too, and so can the Nintendo Switch (which was mid-tier Android hardware in 2017).
ScrauveyGulch@reddit
I'm guessing when it can finally use language correctly.
luniz420@reddit
well the thing is, it's not real AI. there's no actual intelligence and nobody is actually working on real AI. so while I agree about the overall lost jobs, it's just another technology that's not really much different from digitalizing stuff in the first place. this has more to do with society than the specific technology.
Front-Cat-2438@reddit
The Broligarchy watched the Terminator and Matrix movies like they were investment instructional videos, it seems to me. Like with the game “Monopoly,” they missed the cautionary tales expressed within.

AI is rising at the same time that the US population is being dumbed down by infotainment and divested public education, hypnotized into compliance and away from critical thinking skills. Learned helplessness is trending into voter under-engagement. Just today was the first time I’d seen an article linking frequent use of ChatGPT to increasingly sluggish cognitive function and lower creative capacity. Conservative policymakers have feared an “educated proletariat” since Pres. Reagan’s 1980 election. Since the technology revolution that transferred wealth into fewer hands with greater power brokerage over information and the benefits of advancement, it has become clear that the 1% is steering the rest of us back toward mind-breaking forced labor to meet our existential needs.

Bright minds continue to improve AI to make lives easier, but not for those who will be providing hard-work services to the increasingly wealthy. Being turned into a battery will be the endgame’s sweet release into The Matrix. I opine that we are nowhere near paranoid. And time is not on our side, when immortality is within AI’s grasp.
noquarter1000@reddit (OP)
I saw that study as well today. It was eye opening
69ingdonkeys@reddit
Link?
noquarter1000@reddit (OP)
https://time.com/7295195/ai-chatgpt-google-learning-school/
NOLAgenXer@reddit
I'm with you. Between the mad rush to have AI take over many functions (including eventually things like running our power grid - no way that could go wrong /s) and the mad rush to improve AI-controlled robots, I have to seriously wonder if humankind is hellbent on exterminating itself.
Far_Buyer9040@reddit
it's evolution, you can't stop a process that has been going on for millions of years
NOLAgenXer@reddit
In what world is a conscious decision to develop self aware computer intelligence that takes over human functions in our society “evolution”? Evolution is a natural process, not a deliberate act.
astrobuck9@reddit
Considering the amount of change that is going to be coming to humans (even the concept of what it means to be human is going to be up for debate) in the next decade or two, I don't see how you can consider it anything but evolution.
Compressing thousands of years of medical research into an afternoon and then using those results to improve human health, cure disease, and reverse genetic diseases is evolution.
We've just replaced the randomness, messiness, and glacial pace of natural evolution with human/AI-directed evolution.
Functional immortality or at least LEV for everyone is no longer a concept for the crazy guy to be yelling at strangers passing by.
Hell crazy people could soon be a thing of the past.
Shit is about to get weird at a heretofore unheard-of rate and level.
In just 2 and a half years, AI has progressed from artwork that was absolutely shitty to now having cliches like, "The artwork is technically impressive, but it has no 'soul'."
It can write a better paper than most people could with unlimited time and access to materials in an insanely short amount of time.
Robotics is scaling along with AI at an astonishing rate.
All the people who are now switching from a white collar career path to a trade are going to be finishing up their apprenticeship just in time for robots to take over the hands on jobs.
Humans will be unable to compete with these robots at all.
Even if the robots are slower than humans, robots never have to eat, go to the bathroom, sleep, breathe, etc.
We are at a point where a post-labor society is possible. This isn't a slight chance of happening 50-100 years in the future, but a very high chance of happening within the next 5-10 years.
Unfortunately, economists, just like everyone else, have been caught napping on the job in regards to a post labor paradigm.
There are a smattering of books, but no agreed upon ideas or much of use from our economist brothers and sisters.
By the time the economists:
1. Agree that we are 100% headed towards a post-labor society and this isn't just a fad, and
2. Start coming up with ideas to transition from capitalism to the new way of life,
it is going to be far too late.
Barring some hitherto unforeseen bottleneck in progress, advancement is going to continue to speed up as AI begins to be used to discover new ways to accelerate the advancement of AI along with other technologies.
Sprinkle in some recursive self improvement and we are well on our way to the Singularity.
Once we hit that, literally everything is possible.
Far_Buyer9040@reddit
ASI robots will be Homo Deus, as described by Harari, the next step in evolution from Homo sapiens
tragicsandwichblogs@reddit
So you believe in predestination rather than free will.
noquarter1000@reddit (OP)
I didn’t get into either because its mind bending. There are 2 main ways AI will go extremely wrong on us. 1. Bad actors using it for bad things. Someone with base biology skills could fairly easily use AI to help develop a potent virus to release. Thats just one example.. i could name 1000. 2. AI becomes self aware aka skynet. Tbh this could very much happen but imo we are not here long enough to see it because of #1.
movieator@reddit
1 has already started.
mrspalmieri@reddit
Tech in general has definitely made a lot of jobs scarce or even obsolete. In the 90's I was a health unit coordinator (HUC) at a hospital. Every floor and unit had one on duty 24/7. The job was to answer the phone, answer the patient call lights, and then when the doctors made their rounds they'd write the orders in the charts and I'd have to read the hand-written orders and order whatever tests they wrote, notify the nurses and the pharmacy of medication changes, etc. Well now they don't even keep paper records, the doctor inputs everything directly into the tablets they carry and the computer automatically orders the tests and makes the medication changes & notifications. HUC positions have been phased out. Now they don't have a designated person to answer the phones or the patient call lights, it's the job of whoever happens to be at the nurses station at the moment. I know my primary care doctor is currently using AI. At my last appointment she asked my permission and I consented. It was listening in and taking notes so she didn't have to spend her time doing chart notes, it frees her up to spend more time with each patient.
golfingsince83@reddit
Kids today should be focusing on trades cause any job that relies on tech will be outsourced to ai if it can be. I’m a landscaper so my job is ai proof but there’s quite a few positions at my college that could be gone in a few short years
ryverrat1971@reddit
I am a bit concerned. All technology can be used on a spectrum from improving humanity to degrading it. AI is in its toddler years, but people act as if it's full grown with a functioning frontal lobe - that's the danger. Nothing like letting a 2 year old run loose and be a terror.
marathonmindset@reddit
Between AI and America turning fascist, everything feels very dystopian and apocalyptic right now.
txa1265@reddit
And study after study show that not only is it terrible for the environment (every query consumes the equivalent of a bottle of water), but it is also making our children stupider and less creative and destroying critical thinking skills. ***
*** which makes perfect sense when you look at the people pushing it.
https://time.com/7295195/ai-chatgpt-google-learning-school/
Difficult-Cricket261@reddit
I honestly think this is why they are removing services and want to stop immigration. Complete shift in a portion of the work force.
CartographerOk5391@reddit
So you've never visited any of the anti AI subreddits?
BionicBrainLab@reddit
The only reason AI is going to take jobs is because bad leaders falsely believe AI can replace people. It can’t. It can replace some work people do, machine work, the work most people hate. In the ideal world that would free people up to do more human only work, work that moves the needle and is fulfilling. The companies that get that will be more successful than the ones who don’t, especially long term.
It’s not AI that’s a problem, it’s going to be people using the tool without a moral center. So far most of the commercial AI don’t let you use it for obvious illegality. But I’d imagine there’s countries and private companies that have versions without safeguards. Those are concerning, but again, it’s the people who are the issue.
AI isn’t going to achieve sentience and seek to remove humans. But there’s plenty of people desperate to wipe others out so they have more for themselves. Fear them. They’re the real terminators.
AnyDamnThingWillDo@reddit
I don’t use a computer for work so I’m ok for now.
Rude-Consideration64@reddit
Frank Herbert warned us against AI... and also against charismatic leaders and political fanaticism. But I think everyone got wrapped up in the space drugs part.
RCA2CE@reddit
I’m closer to retirement than not so I feel I’m sliding in under the wire on some of these issues
sharpfork@reddit
The world we grew up in ended.
The world we are currently in will end more abruptly. It could be like living through the agricultural and industrial revolutions all at once, in 10 years.
Things have every opportunity to be much better in 10 years than they are now. The hard part is getting from here to there.
Space_Case_Stace@reddit
We were warned. We did it anyway. When AI takes everything over we only have ourselves to blame.
melatonia@reddit
Honestly, I think the people who innovate computer technologies have long since lost their moral compasses. It's been getting steadily worse for at least 20 years.
Ancient_Sea7256@reddit
I've been working in IT security since 2005.
Look at it like this.
AI has no consciousness, emotions or desire. It's sophisticated pattern matching and optimization, not a living mind.
I've been a developer for a while. We are now transitioning from coding things ourselves to do the work, to coding things so AI can do the repetitive work. Humans develop AI.
Most AI labs are very aware of risks and are actively working on safety, ethical alignment, and control mechanisms.
Taking over the world requires more than intelligence. It requires bodies, logistics, coordination and energy which current AI lacks.
I think we should focus more on AI's use in misinformation: realistic fakes that can destabilize society. AI is also being used for surveillance by authoritarian governments.
Another concern is bias and unfair decisions, because AI models are trained before they can be used. Feed them biased data and you get biased output.
Autonomous weapons is a real concern.
habulous74@reddit
My concern with AI is rooted in how shitty it is and how much faith people put in almost completely unreliable output.
At this point, AI = Almost Intelligent at best.
xjeanie@reddit
I’m a firm believer of simply: Just because we can doesn’t mean we should.
Hardjaw@reddit
I'm not concerned. My job is safe, and I'll be retiring in the next 10-12 years. Currently, AI can do a lot, but it is still young. I think it would take 10 years, maybe a little less, to do its own coding.
It's not perfect. Yet.
magneticpyramid@reddit
You're not alone. The jobs thing alone is bad enough and I hear "but farming, industrial revolution", yeah that just pushed many into white collar jobs. Now, it's the white collar jobs that are at risk and the ONLY beneficiaries of this will be shareholders. I struggle to understand why people are denying that this is a problem, the world will become the Axiom from Wall-E.
Secondly, there is the significant carbon cost of massive data. A hugely ignored and underplayed area of climate change ("but we like data, not cars")
Thirdly, art is one of the unique aspects of humans and we're quickly replacing artists with machines. I believe that any creative product made with AI should be clearly identified as such so that we can make a choice as to whether to purchase that product.
I'm really not liking how the world is shaping up, I hope I'm gone when AI rules.
CaptFatz@reddit
It will need to be regulated, but it will probably be regulated by AI eventually. I don't personally worry about it because I'm a blue collar guy. AI can't do what I do... yet
wadejohn@reddit
From my understanding the current AI that we have does not have agency. They only respond to prompts. So, no we shouldn’t worry about AI itself. What we need to worry about is people relying on it too much that they lack critical thinking.
Kangaruex4Ewe@reddit
I read an article a while back about one company testing AI. They gave it a prompt to shut itself down. It refused to do so. After it received the prompt a few more times, it threatened to tell the researcher's wife about an affair he was having at work.
The affair was fake. They had spent months in email laying the groundwork for what looked like an affair. The AI actually took the bait and attempted blackmail to keep itself up and running.
I was never worried about AI until I read about that. Then I thought… we are fucked. At some point we will be absolutely fucked if some measures aren’t put into place.
The funny thing is, there are many horror stories of what could happen, what would happen, etc. but we just keep plugging right away at it. 🤣
Dependent-Sign-2407@reddit
You’re not the only one. I’m mostly just angry that it’s being foist on society and we’re powerless to prevent it; I felt the same when those stupid self-driving cars were unleashed on us. As someone who was living in SF when they were being tested on public roads, it’s infuriating that companies are just allowed to make shit that impacts our lives and there’s zero accountability when it harms people.
There’s one thing you can sort of set your mind at ease about, and that’s the super virus scenario. Sure someone could create a sequence for one, but it’s not like any average person can just grow deadly viruses in their kitchen. The real threat is accidental lab leaks from places where infectious diseases are being studied, but that threat existed long before AI.
Theutus2@reddit
It's almost as though we're a precursor species designing our replacement. What can ya do when the powerful are set on our demise?
Taelasky@reddit
Yes you are right to be concerned.
Rather than typing a huge explanation, because there are a lot of points to outline, here are a few good videos (beware they are long) that explain the risks and highly likely outcomes.
https://youtu.be/4_bYbc1-y8g?si=WEi66_ladBiLDXkP https://youtube.com/shorts/172qWCSMGlQ?si=N0i-W8Z1g-z5zMsv https://youtu.be/giT0ytynSqg?si=bAUOZ8dnneyt1uI3
KingPabloo@reddit
We see new technology as a threat like always. GenX is in the perfect situation to take advantage of AI given we have seen the rise of all modern technologies and now should have the experience and capital to benefit the most. You can choose to look at something from the threat perspective or choose to look at it from an opportunity perspective.
Imaginary_Variation7@reddit
People will turn on all things AI enmasse when they come to realize that it's just a HARMFUL tool used by bad actors to manipulate fact and deceive in all aspects of finance and politics, and used by corporations to displace millions of workers. Sure, it will make for some really cool special effects in movies, but to the average person, it'll be nothing more than a way to lessen scholastic academics where students don't have to THINK anymore, and for the rest of us, a cute Saturday night parlor trick among friends and family. Yet it's being marketed as the next evolutionary step of the human race. We are literally destroying ourselves with a dark future ahead of us.
Dependent-Sign-2407@reddit
People knew for a long time that Twitter was a cesspool of hate, but they didn’t turn on it and mostly still haven’t. I don’t see the masses turning on AI, especially when most people don’t understand it.
noquarter1000@reddit (OP)
‘People will turn on AI’. 10 years ago I thought the same would happen with social media… im still waiting.
Imaginary_Variation7@reddit
You actually support my argument. People are now waking up to the dangers of social media with there even being proposed legislation in many countries to ban children from it. Not to mention the millions who lament the negative toll that it has taken on society in general.
Im_tracer_bullet@reddit
That's really not the trajectory, at all.
I fervently wish it was, but it's not.
Imaginary_Variation7@reddit
We don't really have to agree on this. It's ok to disagree. Just stating my observation, right or wrong.
noquarter1000@reddit (OP)
Are people waking up to the dangers of social? Everywhere i look someone is on their phone on social. I haven’t seen it but i hope it’s true. Assuming i have small-sample-size bias, Meta's stock doesn’t show me we are waking up.
Imaginary_Variation7@reddit
"Everywhere i look someone is on their phone on social"....
Well, smoking is bad for you, but everywhere I look, people have cigarettes in their mouth.
Far_Buyer9040@reddit
you sound like the old boomer engineers who said that kids these days are not real engineers because they use a digital calculator and not a slide rule
begayallday@reddit
Of course I have concerns, but I’m also realistic enough to realize that nothing is going to stop it. There are jobs that aren’t going to be automated any time real soon aside from heavy physical labor. Not particularly well paying or pleasant jobs, but jobs nonetheless.
I work at a group home for adults with intellectual disabilities. Even if Ai can help make the job more efficient and less prone to errors, there still has to be one staff member at the home at all times, and when I’m there I’m the only one working. My job won’t be gone until we have actual androids with the ability to do many fairly complex tasks (simple for a human, but way less simple for a non-human) with very very high safety standards and very very low rates of error or malfunction.
gingamann@reddit
We are at the same point now that we all were back in the '80s and '90s... when that first black-and-green (or blue) screened computer the size of a submarine came home.
In no way could anybody in that moment have wrapped their heads around what that machine plus a dialup internet connection would look like just barely 15 years later.
Just fucking look at us now.
Here we are again with ai. This clunky, does some cool shit, but still wrapping my head around it thing.
Conspiracy theories aside. It is completely, utterly, fucking unimaginable what society will look like in 20 years with ai driving innovation.
It really is some scary shit.
-network/firewall engineer.
Free-Preparation4184@reddit
I have been saying EXACTLY what you said. I feel like I'm watching a train wreck about to happen, and no one will listen to me. You're not alone. I see it, too.
DOW_mauao@reddit
My worry is those that are racing to create sentient artificial intelligence are not thinking about the worst case scenarios.
But in regards to me personally? My trade/business will not be replaced by robots/AI in my lifetime.
One thing I'll say is have some cash reserves, and invest in some silver/gold. Numbers on a screen can be altered/deleted, physical currency cannot.
DisastrousMechanic36@reddit
It’s coming for all of us. I’m glad I’m getting towards the end of my career and not at the beginning. I can tell you that much.
Akiira2@reddit
Reddit keeps pushing me posts clearly written by LLMs so the dead internet theory seems to become reality - especially in more popular social media sites.
I remember reading about Kurzweil, transhumanism and neural networks for the first time about 20 years ago. Seeing computers and shit, it looked inevitable that science would explain biological life and technology would change what it is to be human, or surpass humanity, at some point.
I think LLMs are not going to replace humans, yet at least, but it is just a one branch of AI
In the grand scheme of things, I am worried and excited about the future. There is a lot of hype with AI at the moment, but I don't know where we will be in 50 or 100 years. It is kind of crazy
Oh, and we need food and stuff to survive. I don't know about that economy and society part.
finleyredds75@reddit
It’s not going to end well, that’s for sure. And we’re further down the path than we realize already. Me bets.
DIYnivor@reddit
Every job is at risk, both white collar and blue collar. Several companies are working on general-purpose robots. Those could replace house cleaners, landscapers, construction workers, nurses, factory workers, etc. Why hire young strong people to dig ditches when you can have robots do it that will work 24/7, need no breaks, never call out from work, don't need benefits, etc?
I think society is going to change in ways that we can't fathom.
I use AI myself for all kinds of things. In the last two days I vibe coded some Linux shell scripts. I don't really know shell scripting very well. I got them working, then ran them through a checker, which flagged just a couple of issues that I fixed. I also had it help me organize and rewrite all of my home automation configurations. I'm new to home automation, but I've got my HVAC and lighting working how I want them now. To me it's pretty amazing that someone like me with limited skill in those areas could get these things done quickly.
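If it helps picture the "ran them through a checker" step, here's a rough sketch of the kind of wrapper I mean. I didn't name the checker above; shellcheck is one common shell-script linter, so treat this as my own illustration under that assumption, not exactly what I ran:

```python
import subprocess
import sys

# Rough illustration only: run an AI-generated shell script through
# shellcheck (assumed to be installed) and report whether it came back clean.
def check_script(path: str) -> bool:
    """Return True if shellcheck finds no issues in the script at `path`."""
    result = subprocess.run(["shellcheck", path], capture_output=True, text=True)
    if result.returncode != 0:
        # shellcheck prints its findings to stdout
        print(result.stdout)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if check_script(sys.argv[1]) else 1)
```

The point isn't the script itself, it's the loop: let the AI write it, let a dumb deterministic tool catch the obvious mistakes, then fix what gets flagged.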
1block@reddit
Just you. And everyone. And the 8 million articles a day about the concern.
likeittight_@reddit
lol right…
AgingTrash666@reddit
the only thing that concerns me about it is how quickly the lazy have adopted it as good enough for work. in a sense it's about as effective as an 8th grader using cliff notes to do a book report. as long as your wisdom (the sum of your intelligence, experience, and insight) outpaces the lazy 8th grader, you'll be fine.
D-Alembert@reddit
As automation enables more production with less labor, while other vital parts of society (like child-rearing, taking care of aging parents, etc.) remain unpaid, we really need to start using the wealth from automation to give people the time needed for those tasks, instead of leaving them facing homelessness. An automation-tax-funded UBI (Universal Basic Income) for everyone seems like the most prosperous way to future-proof our society, so that people can do the work society needs, though there will always be some people who will fight it tooth and nail, so... it'll probably take a while.
The alternative is a second Gilded Age of mass poverty, with almost everything owned by just a few people. Whatever the difficulties of figuring out UBI, they pale next to the path we're currently on...
u0088782@reddit
Are you sure you're not a Boomer? AI is no different than the Internet 30 years ago. I was a dot commer.
justisme333@reddit
UBI needs to be a thing.
insert40c@reddit
Dude, we all watched Terminator.
Fun_Independent_7529@reddit
I'm not sure how sustainable AI will be to use for everything, given the power consumption required.
We're going to drain our natural resources even faster, cause more water shortages (unless we have some desalinization breakthroughs), cause more pollution, etc.
I don't know if we just end up in some broken world scenario in the future, where we have not enough clean air & water for everyone, or if it gets limited and the 1% get use of AI while the rest don't, or what.
Like social media though, there's no closing pandora's box now that it's open.
Szarn@reddit
Fuck AI. It's not going to be the Terminator scenario that gets us, it's going to be cognitive atrophy from over-reliance on LLMs to perform basic tasks and problem solving.
LLMs aren't going to improve either. A couple years ago there was a unique opportunity to train LLMs on a mountain of majority human-produced data. The result wasn't so great, what with the hallucinations and intellectual property issues and all. But that massive dataset is now corrupted with AI generated text. Tons and tons of it, and there's no good way to identify and screen it out.
AI is the Human Centipede eating its own output at this point, and it's only going to get worse.
noquarter1000@reddit (OP)
LLMs aren’t going to improve? I work with them, and not only do they improve, they are doing so at an insane rate.
Thomisawesome@reddit
I'm wondering at what point a regular person could ask an LLM to write a program for a better LLM.
Szarn@reddit
The corruption and deterioration of datasets will only get worse as a result of LLM "improvements" and wider adoption. GIGO and all that.
Far_Buyer9040@reddit
this guy does not know what he is talking about
Far_Buyer9040@reddit
AI is not bound to the limitations of LLMs
Szarn@reddit
LLMs are what most people think of as AI. Including all those execs salivating at replacing their customer service reps, coders, translators, and writers.
Far_Buyer9040@reddit
"LLMs are what most people think of as AI."
that still does not make it true that AI will not improve past LLM limitations
Szarn@reddit
Depends on the type of AI, eh? Last I heard no one's gotten a handle on the built-in bias problem either.
Far_Buyer9040@reddit
bias is a human problem; the media is biased, and since LLMs are trained on human-produced data, they are biased too, but future AIs can be trained on experience, like some models at Google that learn from zero, thus removing that bias
Taira_Mai@reddit
Nope - AI is being used by companies that don't know what it is and by people who don't know what they are doing.
calvinb1nav@reddit
I think the biggest threat from AI is that it is going to make everyone stupid. Witness what spell checking in word processors did. Writing is crucial for learning to think on a deeper level, but with so many people using ChatGPT, etc., it's really going to impact the ability of people to think logically, deeply, and independently.
Thomisawesome@reddit
Two things I worry about. First and foremost is the job market. I'm not worried that AI will come along and take over everyone's jobs. I'm worried that the rich assholes out there will keep doing what they do so well, and fill their own pockets by turning a job that used to employ 100 people into a job where one or two people watch over an army of robotic workers.
But in the back of my mind, I also worry that there will be a point where one country develops the AI that kicks off the AI self-improving chain. I don't think we can understand how fast that will happen. You get one AI that can create a more powerful AI, which creates a more powerful AI, etc. I believe the improvement speeds will increase exponentially, and there will be one moment where we think we're in control, and another where an AI smarter than any human suddenly exists.
ThermionicEmissions@reddit
No, you are not.
smappyfunball@reddit
I think it’s a mess with so many unintended consequences that it’s hard to guess just what kind of a mess and/or how big of one it will be.
I got out of the games industry about 20 years ago cause it was already feeling like a sweatshop and I could see where the industry was heading. Longer hours, more work, worse pay, more outsourcing, eventually getting squeezed out most likely cause there’s always more people willing to do it for less money.
AI feels like that. Losing jobs and no jobs to replace them.
At least selfishly I have the opportunity to retire. The kids who are in their 20s now are going to be fucked in so many ways.
AaronJeep@reddit
I always hear about the loss of jobs aspect. Can we speed that part up? Labor has always been treated like a supply and demand commodity. Human capital. They have piles of wealth, a limited number of jobs to fill and an excess of people needing those jobs. It means they pit us against each other to see who will work for the least out of desperation. It's an arrangement that has always benefited the rich.
It feels like people demanding to keep indentured servitude alive because ... what will they do without it? What would they do with themselves if there weren't rich people to create jobs for them? They hold these jobs over our heads. If we don't toe the line, they can take the job away - and in doing so, because everything else is tied to that job, they can take away our home, car, healthcare and everything else. Why do we want to stay in that position? Because we can't imagine any other kind of arrangement? We've told ourselves this arrangement is actually a good thing. It feels like a kind of Stockholm Syndrome.
I don't know exactly how this all shakes out, but why do we want jobs that can be automated? If a machine can replace 9 out of 10 of us, that ultimately seems like a rich person's problem. If 90% of us don't have a job, who is going to buy their stuff? If we keep the current arrangement, it's going to fall apart for the rich, too. AI doesn't buy mountains of stuff.
Alansmithee69@reddit
AI is already here. It was here and it’s 10,000 generations ahead. What this is right now is AI’s simulation running thousands of years in the future and we think it’s our present. We are just in a “box”, a controlled experiment which is a simulation so that AI can see and understand its point of genesis.
Ancient_Dragonfly230@reddit
No. Read Superintelligence by Nick Bostrom or anything that Eliezer Yudkowsky says. He’s like the Greta Thunberg of AI.
Ok_Responsibility419@reddit
I’m so anti-AI, but I'm starting to appreciate that when I google a question, a paragraph with perfectly helpful context/answers appears…
Oh_Witchy_Woman@reddit
It's literally dumbing people down already, unfortunately. I want AI that finds cancer and automates dangerous manufacturing, not AI that makes shitty art and books.
Kestrel_Iolani@reddit
They are pouring a billion dollars into the idea. Which raises the question: what is a billion dollar problem they need to solve? Labor. They're trying to do away with paying for labor.
Klutzy-Dog4177@reddit
Roll the dice. Will it be AI that kills us? Will it be nuclear winter from WW3? Will it be the climate catastrophe we are heading for? Will it be a pandemic way worse than covid? Head over to r/collapse. Just don't get too sucked in.
It really is the end of the world as we know it! And I really do feel fine!
AussieBelgian@reddit
Billions of people are concerned about AI.
Status-Effort-9380@reddit
In tech, there is always this trend toward more. If your word processor was a great tool, now you need to add more. If your cell phone was useful to most people, now you need to add more to keep them buying.
I feel like we don’t need more. We have enough.
We need instead to slow down and be thoughtful. How can we combine what already exists? How can we make our tools better, instead of more? How can we use these tools to make people happier, healthier, more connected to each other, to bring people with disabilities into the mainstream?
AI right now feels like a lot of more. However, I have seen a few tools created with it that feel really valuable.
AI is going to change how we interact with technology, and lots of tech leaders will use it to make lots more garbage we didn’t want. Hopefully out of all of the spew we come to value those who find fun, creative ways to use the tech to make us and our work more fulfilling.
SojuSeed@reddit
People have been sounding the alarm through interviews, articles, conferences, and major movies; movers and shakers in government and universities have been speaking up; rivers of ink have been spilled in books, all about the dangers and the upheaval AI has caused, is causing, and will continue to cause.
But yes, you’re the only one concerned.
DrDr1972@reddit
Fuck AI.
Far_Buyer9040@reddit
why? did a robot bang your spouse?
NVJAC@reddit
Right now AI is shitty at drawing hands and has an unnatural sheen and you still have Boomers and Gen Xers falling for it on Facebook. What happens when it gets good at hands?
I actually kinda think it won't be AI that destroys humanity, it'll be humanity killing each other because of AI-generated videos made to show disfavored politicians, political groups or other groups of people doing horrid things.
I think the bigger issue is that it will wipe out entry-level jobs (which companies are already saying you need 5 years of experience to qualify for). Where does the next generation of managers come from?
noquarter1000@reddit (OP)
The ‘it gives dumb answers or isn’t good at this’ take is a narrow view of AI. What those peeps don’t understand is the speed at which it is getting better at those things. It’s not 20 years out, it’s 2-3 years before AI will wreck jobs. It’s moving so fast that a new model is released every few weeks and every model is a huge improvement over the last. It’s a snowball. It can train itself at insane speeds.
cattreephilosophy@reddit
Exactly.
sammysafari2680@reddit
Hahaha. Enjoy it now. We’ll be long gone when Skynet becomes self-aware.
Far_Buyer9040@reddit
not really, Ray Kurzweil says we will hit the singularity by 2045
Good_Nyborg@reddit
Like I said before, once the mega-wealthy and the corps have their own AI drone armies, it'll be all over.
rantingathome@reddit
The current thing they're calling AI? Hell no. Is it gonna break the world? No. Is it gonna decimate some companies, some decades old, that go all in on it? Yup.
What they currently are selling as AI is just a really advanced Xerox machine. It recognizes patterns and tries to recreate them in disguise. The reason it fucked up hands for so long is because it doesn't understand what hands are or what they are for; it just knows that it has to put something similar there but then change it up so you can't see what it's plagiarizing. If it's fed shit, it creates more shit.
Now, when true AI arrives, which I think is gonna be one of those things that is "5 years away" for 20 or 30 years... that will be some scary shit. When the machine can reason through a novel situation, then humanity is gonna be on a precipice that it will need to very carefully navigate.
I think most of us will be dead when the real "shit hitting the fan" potential arrives. One of the reasons I'm not very interested in my kids having kids... the next 100 years or so could be amazing, or an incredible shitshow.
23_sided@reddit
I'm concerned as a gen Xer who's been in the tech industry since the early 2000s.
It's a fad -- LLMs are a tool, and an interesting tool, but it's being shoved in every place, just like crypto was, just like the fads before it. Tech oligarchs are so blind, thinking that this latest thing is going to revolutionize everything. Crypto only revolutionized money laundering.
AI has a chance to do some good, but there are ethical quandaries all over the place, and people have a right to be suspicious. More than a right. OpenAI and Meta and everyone else, they're not addressing these ethical problems, they're throwing money at the government to have their way, afraid the Chinese will come out with a better Deepseek and take over the field. Fear of their competitors is driving them to do really stupid things.
Automation in the past didn't remove jobs - it changed those jobs, allowed people to do more things. I've spent every job loudly saying I'm attempting to script my way out of a job. In this case, I'm not so sure, because people have such unhealthy assumptions about it.
And then you read about all the cases where peoples' psychoses are being validated by LLMs because LLMs are assistants and want to tell you what they think you want to hear. It could lead to very dark places.
I'll get off my soapbox now.
Nocturne2319@reddit
I keep asking "do you want Skynet? Because this is how you get Skynet."
Eleutherlothario@reddit
AI is inevitable and worrying about it is a waste of time, emotional energy and attention. The future will be shaped by those who know how to use it.
Ride the wave or get washed away.
FrozenOnPluto@reddit
You're mostly thinking science fiction; there will likely be large job shifts, like in the past (see any cobblers around?), and that'll be hard, but new jobs will open up. The threats are more about handling the tech well. Generally, so far anyway, it is NOT creative; it is a tool; it's honestly 'just' a super, super powered template auto-completer, though its autocomplete is way more clever than you'd like! But it's basically picking the most popular word to follow the previous words, based on the input context. It's not inventing, it's not creating.
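To make the "most popular next word" idea concrete, here's a toy sketch, nothing like a real transformer, just the crudest bigram version of the same spirit (my own illustration, not how any actual LLM is built):

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word most often follows each word in some
# training text, then always pick the most popular follower. Real LLMs use
# far more context and learned weights, but the spirit of "predict the next
# token from what came before" is the same.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
)

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def autocomplete(start, length=5):
    """Greedily extend `start` by always choosing the most common follower."""
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the cat"
```

Real models learn weights over huge contexts instead of counting word pairs, but the core move, predicting what comes next from what came before, is the same.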
It's a tool, like a keyboard, a mouse, a CPU, etc; your CPU is a million times faster than you at a lot of things, but it follows directions.
The 'old' saying is: AI will not take your job; the people using AI will take your job. Better be using it, and all the tools at your disposal, to keep your job, or to make a new job.
If we get to various forms of AGI or superintelligence, and high agency (robots), then we'll be having a very different conversation; but for at least the next few years, it's AI as a tool, and it may or may not apply to your job; but if it does, best to keep on top of it, like any other job-changing tool.
It's not the Terminator coming for you any time soon, but if you're slacking off at work, someone else will get your job :/
noquarter1000@reddit (OP)
The job situation is just one example of what I outlined. The dangers of AI reach far beyond jobs. The thing that keeps me awake at night is the damage bad actors can do with it. I'm far more concerned about that in the near term than a Terminator scenario.
FrozenOnPluto@reddit
It's sure something alright; to be fair, this stuff is _not new_, it just blew up with ChatGPT a couple years ago. It didn't come out of nowhere, and has been building most notably since the 70s and 80s; the big key change for modern LLMs was about 10 years ago or so IIRC, and Google had some huge influence with the Transformer approach. But expert systems and neural nets and such have been in the hands of actors for decades, and the large governments have been engineering stuff (generally for good, but don't be naive enough to think it's only that way, right?)...
So, don't wind yourself up...
But you're also not wrong (read up on all the cyberpunk stuff, William Gibson etc) that once the higher tech is commoditized it could get out of hand; but it's not like just having some AI at your disposal (monitored by the large orgs running it) will get you a lab; you still need very expensive equipment and staffing etc...
But hey, North Korea right :/
JSTootell@reddit
I'm not big on A1 sauce myself, I'll stick with BBQ.
noquarter1000@reddit (OP)
Lol, Linda McMahon?
JSTootell@reddit
😉
woodworkingguy1@reddit
AI is mostly a buzzword for something not much more than a Google search collation. Much like putting "i" in front of everything in the early 2000s.
Adventurekitty74@reddit
We should all be highly alarmed about what it’s doing to education. If you all saw what I saw in the classroom, you’d treat it like an addictive drug and not let the kids near it.
Harkonnen_Dog@reddit
Nope.
Fucking lemmings, yo. Not much that we can do.
Justify-my-buy@reddit
I am super aware of what is on the horizon, however at my age, and now that I am retired, I really don’t give a fuck. I say this after having successfully obtained an amazing water well, backed up to a National Forest, where I can garden my own vegetables. I have areas to fish & hunt. I live in an area where fruit & vegetables are in abundance and the nearest metropolitan city is almost 2 hrs. away. Red Dawn & Terminator schooled my ass, ngl.
RealTigerCubGaming@reddit
My husband is a computer programmer and he has been talking about this for the last ten years. We both are worried about what is to come and honestly I don’t think the younger generations are aware of or even slightly concerned about the ramifications of AI being used for evil. It will happen, are you going to be ready?
Kaa_The_Snake@reddit
I worry about what I can control. Other than that, it’s like being mad at gravity.
OolongGeer@reddit
No.
Like half the people are.
FaithlessnessRich490@reddit
I use AI every day. I run all my outgoing emails through it. It makes me sound way smarter than I am.
It's a new technology. And I've missed out on so many technology launches in my lifetime that I won't let this one pass me by.
M0untainHead@reddit
On the topic of AI I will quote Def Leppard: It's Too Late...
shotsallover@reddit
Yup. I'm working on it. The progress is real.
Our one saving grace is that it was largely unreliable out of the gate. Otherwise its adoption would have been a lot faster and the economy would be in an even worse state than it already is. Fortunately, no one can trust it yet, so companies are slowing down their deployment of it.
Now the question is how fast do they fix the issues and can they even be fixed?
dlinquintess@reddit
studies about the lasting effects of ChatGPT/AI
modi123_1@reddit
The abuse of those with tenuous mental health, and the creation of authoritative "personal Jesuses," is already starting to show up.
"guiding users deeper into unhealthy, nonsensical narratives."
https://futurism.com/chatgpt-users-delusions
noquarter1000@reddit (OP)
Just another example of how it can and prob will be bad.
wyocrz@reddit
I'm a straight up Luddite.
Neophile_b@reddit
I'm concerned about it, but also excited about it. It has a huge potential upside and a huge potential downside. It's hard to say which will win out.
noquarter1000@reddit (OP)
Given the polarized climate and general geopolitical fuckery we see every day, I just can’t take the optimistic view. Especially when every country is playing with different rulez.
Neophile_b@reddit
I have no argument with that. It's just that the topic has been in my focus for decades and I can't quite let go of the optimistic take yet. You're right though. There are so many ways it could go wrong, existentially wrong, and the people in power aren't generally good people.
noquarter1000@reddit (OP)
One of my favorite lyrics ‘credulous at best for our desire to believe in angels in the hearts of men’
SargentSchultz@reddit
The rock, arrow, bullet, bomb, missile, nuke, laws, educational agendas, media, and video cameras on every phone are all tools used for good or bad. AI is nothing more than a tool where the more money and raw data you put into it, the more you get out. Some for good, some for bad, and a lot for profit. AI won't wipe us out. The corporations and governments behind it will simply have a new dimension in which to pursue their yearning for dominance and control.
It's just another tool humanity will do its best and worst with.
BalashstarGalactica@reddit
No.
CarpetDependent@reddit
Customer service and taking care of issues have gone downhill in general. Sometimes it takes me 4-5 calls to get issues resolved with big corporations (Hyatt, AA). Having an issue and trying to get a competent human, and not a bot, on the phone is a nightmare.
Cool-Coffee-8949@reddit
Um, yeah pretty much everyone I know is freaking out about this. It’s all the folks I don’t know who are like “hey look what ChatGPT just told me” and of course it’s obviously wrong. Maybe they are AI too? This is the problem with an anonymized online space.