Our North Star
Posted by simbonk@reddit | TNG | 18 comments
There is an important question circulating these days: nobody seems to know what society will look like 10 years after we achieve artificial general intelligence (AGI). AGI is loosely defined as a machine generally capable of what an average human can do on a computer. Nobody knows the timeline, but let’s say it happens in 3 years, and quickly after that there is an intelligence explosion, where centuries of research and progress are achieved in a few months by superintelligent machines (designed by AGIs).
What is this world going to look like? I can think of very few examples in fiction more ideal than the universe of Star Trek: The Next Generation, where your reputation is currency, and where we focus on exploring the stars in a post-scarcity society. I think the path to get us there is actually going to be painful, but I hope the people and AI systems that we choose to follow will share this same North Star.
What does everyone here think? And on a related note, has anyone tried using one of these frontier large language models (ChatGPT 4.5) as a choose-your-own-adventure Star Trek storyteller? You’ll get quite the trip if you put some work into providing the AI with a physical form and making it your first officer. The story we went through together was original and just as good as any other episode of the show. Kind of the beginnings of a holodeck, if you ask me: second star to the right, and straight on till morning!
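For anyone who wants to try this themselves, here's a rough sketch of the setup, assuming an OpenAI-style chat-completions client. The model name, the officer's name, and the prompt wording are all illustrative, not what OP used:

```python
# A minimal choose-your-own-adventure storyteller setup, assuming an
# OpenAI-style chat API. Persona details below are made up for the example.

def build_system_prompt(officer_name: str = "Commander Vex") -> str:
    """Give the model a persona, a physical form, and a role as first officer."""
    return (
        f"You are {officer_name}, first officer aboard a Galaxy-class starship. "
        "You have an android body: silver skin, amber eyes, measured speech. "
        "Narrate an original Star Trek-style adventure in short scenes, and "
        "end every reply with 2-3 numbered choices for the captain (the user)."
    )

def play_turn(client, history: list, user_choice: str) -> str:
    """Send the running story plus the captain's latest choice to the model."""
    history.append({"role": "user", "content": user_choice})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Usage (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# history = [{"role": "system", "content": build_system_prompt()}]
# print(play_turn(client, history, "Take us out of spacedock, Number One."))
```

Keeping the whole `history` list in every request is what lets the story stay coherent across turns, the same way the holodeck would remember the episode so far.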
Arborebrius@reddit
AGI is likely impossible as long as the LLM is the basis for AI development because it is not, and cannot be, creative. Until a new model for "AI" is devised, there is no passage of time that will make that possible
One of the points made elsewhere, but I don't recall by whom, is that people believe that the development of technology leads to the society we see in Star Trek, when in fact this has it entirely backwards. Only a society that has committed itself to the goal of universal welfare and self-actualization of the individual will make holodecks, replicators, warp travel, etc.
In this regard, if you imagine that real AI will bring us closer to the utopian 24th century you're thinking about it in the wrong way
Due_Example1096@reddit
AI can bring us to utopia, if it's developed with that intention, but that's unlikely to be the case.
LLMs may not be able to be creative, but they can mimic creativity pretty effectively. I mean, most ideas humans have are just reworkings of previous ideas, so an LLM isn't that much different. It doesn't actually have to reach the point of sentience or true creativity in order to be convincing, or in order to be useful to the dystopian overlords we're progressing towards. It just has to be close enough, and it's almost there. So at what point do we consider it to be true AGI? If it can convince us it is, even if it isn't? If we stop at that point, then yeah, it'll never progress all the way, so you could call it impossible. If we use it to destroy ourselves, we'll never be able to perfect it, so in that sense you could also call it impossible.
Arborebrius@reddit
I think this is very much wrong. We are just a small part of the grand arc of history, and we are limited by the suite of things we find around us, yes. But we transcend this limitation not just by remixing or revising (as LLMs do) but by seeing something in a new light, guided by our tastes and intuitions, and trying something new. This is Barry Marshall reading papers about ulcers and saying “well, that doesn’t make sense!” and starting a new research program, or Miles Davis hearing Bill Evans playing piano and saying “what the fuck is THAT guy doing?” and launching a totally new way of making music.
Perhaps AI could be formidable if it could develop discernment or intuition. But right now they’re not even capable of understanding, much less critical assessment
Due_Example1096@reddit
Your two examples of my being wrong are both proving my point. Barry Marshall saw an idea, thought it wasn't right, and made something different from it. Miles Davis heard a previous work, thought it wasn't right, and made something different. I agree that we do it differently, and to a higher degree than LLMs, but every idea we come up with is a product of our experiences, which include previous ideas from others as well as ourselves, and which also include ideas gained from observing things in nature. That may not be a reworking of a person's idea, but it is a reworking of nature's idea. I completely agree that AI doesn't yet have the "intuition" or critical assessment capabilities, or the ability to take something abstract or completely unrelated and come up with a way to apply it to something else, like we do. And maybe LLMs never will, because they weren't designed to. LLMs are designed for a very specific task. I'm sure whenever true AGI is developed, though, it will incorporate what we've learned from LLMs and other AI models, if not some of their actual code. So, will an LLM spontaneously develop itself into true AGI? No, probably not. And I apologize if I implied that it would.
simbonk@reddit (OP)
I think you are correct on the LLM front, but it sure feels like we are quite a bit further along at this point than I thought we would be! There is certainly a lot of capital being spent on deconstructing human consciousness, and whatever the formula is, the compute power behind it is becoming exponentially cheaper and more powerful every year.
To your point on society: I 1000% agree. This whole pull-yourself-up-by-your-bootstraps mentality is not going to be possible at some point. We need a new set of goals to align to: work is for meaning, not survival; technology serves humanity, not the other way around; abundance is shared, not hoarded; systems align with empathy, dignity, and trust.
AI systems as an extension of capitalism will only serve the few. If the companies building these things are going to use them to hoard wealth, then we shouldn't be supporting them.
Due_Example1096@reddit
AI is going to keep taking over jobs. We're going to keep having strikes and fighting back against the encroachment of AI, but each time the corporate overlords will take a little bit more. At the same time, AI continues getting better, until the corporate overlords have all the cards, or at least enough of them to absolutely crush any dissent. Now that AI controls most of the workforce, the majority of the population are out of work and starving to death. The only jobs left for the lower classes are private security/paramilitary for the wealthy. These forces protect the wealthy from the rest of the rioting, starving, insufficiently armed masses. Many people die. A few have made off into the wilderness to try and survive off the land and keep their heads down to stay off the radar. The wealthy know these "undesirables" are only going to keep multiplying and rise up again. They'll fail again, obviously, but why bother with the hassle? So the wealthy use their armies to scour the earth and wipe out every last trace of humanity. Once everyone else is wiped out, they'll begin to turn on each other, until only a few ultra-powerful forces remain. Once AI-driven military tech is sufficiently developed, even their human-powered militaries will be unnecessary, and thereby eliminated. At some point along this path, one of the mega-wealthy finds himself about to be either eliminated or reduced to near powerlessness, and decides to unleash his entire payload of advanced nukes. Some nukes are destroyed, others find their marks. Some overlords decide to capitalize on the chaos and initiate strikes against the others while they're distracted. Nuclear Armageddon ensues, the planet is desolated, and humanity is extinct. They sure showed us, didn't they?! They win! Best. Humans. Ever! And everybody they killed congratulates them on how amazing they were in life.
AI takes over the majority of jobs. The wealthy will also have their jobs taken over, but they own and control the AI, so they're still getting all the profits. More profits, since they've laid off the majority of their workforce. Now that the workforce no longer has any work, they can no longer afford to purchase anything. Now, profits take a nosedive. The wealthy realize they don't really care about profits so much as stuff and power. Now that they control the AI that actually makes everything, they can just make whatever they want for themselves, so they don't need the money. So they keep a few peasants around and feed them scraps, for the sole purpose of having someone to lick their boots. Eventually, AGI gets believable enough that the wealthy decide having AI stroke their egos is just as good, and less trouble, so they let the peasants starve to death. This is, of course, unless the lower classes unite and revolt against the wealthy before that can happen. But if the wealthy control the governments that control the weapons, will the soldiers follow orders, or will they side with the people and turn on their "superiors"? If I were an alien watching us right now, I'd say things are about to get interesting. But, since I'm simply a middle-class human, I say things are looking very terrifying, in a lot of ways.
simbonk@reddit (OP)
This is so bleak! Haha, is this one of those Fermi paradox examples of why there are no advanced civilizations? Surely we can do better! If we focus on interplanetary expansion, at least we have something less harmful to strive for!
What can we do to stop your scenario? I for one won’t support or place my trust in any system that has been overridden to not tell the truth to further an agenda where abundance cannot be shared.
Due_Example1096@reddit
Capitalism is good, but unfettered capitalism is not. Unfettered means whatever I decide to do in order to increase my wealth is perfectly acceptable, no matter who it harms. Currently, we (America at least) are regressing in that regard. Regulations, environmental protections, work safety laws, and child labor laws are all being repealed in order to increase corporate profits, consequences be damned. The way to stop it is for the people to take back their power, and for everyone to stop thinking only of themselves and focus on empathy. To force the wealthy to act for the benefit of all humanity, even if it's just a pretense to appease the masses. But as long as we continue to let them distract and divide us, that's never going to happen. It will only happen when people remember that what's good for everyone is also good for them; when we switch to trickle-UP economics, where we take care of the lower class, because the more they have, the more they want, the more they spend, and the more the wealthy make, rather than trickle-down, where the more the wealthy take, the less there is to take, until there are no more stones to squeeze blood from; when the wealthy realize that $1 million now is not actually better than $1 billion later.
As far as interplanetary expansion goes, we're nowhere near that as a possibility. We can't even take care of our own planet, let alone terraform any of the other ones in our solar system. We can't positively identify any habitable worlds outside our solar system, and even if we could identify them, we can't reach them. I don't know if we'll survive and advance long enough to be able to, but even if we do, if we don't change our thinking and behavior on a grand scale, it doesn't mean better things for us. If we continue to destroy our planet, and we develop the means to get to another one, who do you think are going to be the ones who go? The wealthy, and as few of anyone else as is required. One thing we might potentially hope for is that their short-sightedness leads them to send workers to make sure the new planet is hospitable and get it ready for their arrival, but instead the workers use the time to prepare to repel them when they try to show up, or the wealthy destroy themselves before being able to make the trip. Then, hopefully, a better civilization will rise up on the new planet. And hopefully the corrupt old one on this planet isn't able to send a more advanced force to stamp out these "rebels."
Yeah, it's all very bleak, but until the majority of the people actually want to change, and are willing to face the struggle to make it happen, it's going to stay bleak. I really, really hope we make it there. I know that we can. We just have to put our differences aside and end hate. Sadly I have to admit I also fail at this sometimes. Hopefully we all just keep trying to be better.
acelgoso@reddit
AGI in 3 years? At this rate in a century or more.
Due_Example1096@reddit
Despite the overwhelming anti-science leanings, AI is one thing the ultra wealthy are very interested in. Anything that can replace their workforce and eliminate costs to increase their profits. 3 years is probably overly optimistic, but it'll be far less than 100 years. "Despite any bans, someone is going to crack it eventually, so it better be us, so we better not ban it" is the thinking. AI is the new nuclear proliferation, and it's far more difficult to detect until it's unleashed.
TheArmoursmith@reddit
I'm afraid we're likely to see the continued concentration of wealth; this will result in the inevitable enshittification of products, services, and living standards as the pursuit of profit intensifies.
Poor quality AI will be shoehorned into everything, destroying jobs whilst making everything less environmentally sustainable.
Use of AI to spread misinformation and propaganda will increase, leading to a dip in global democracy as authoritarians take over, fuelled by their overwhelming wealth.
Eventual global population collapse will result, due to falling birth rates because people cannot afford to start families, and due to food shortages brought on by the effects of climate change, which the authoritarians refuse to believe in.
Due_Example1096@reddit
The best part of this is once the upper class succeed in exterminating the lower classes, there won't be anyone below them and they'll have no one to feel superior to. Once only the wealthy are still alive, the value of everything will plummet and their wealth will effectively disappear.
simbonk@reddit (OP)
I think you are right: short term this is going to be an extension of late stage capitalism.
Maybe the world can be a bit more interesting with a few key discoveries: I'm thinking of how much it used to cost to have an hour of candlelight 200 years ago versus an LED light bulb today.
Say we finally get fusion energy working, an army of robots to do farming, and really good VR headsets (basically as good as the holodeck). Perhaps that's all we'll need? Anyway, I think we will still have some agency in all this: the people and systems we follow should have our best interests at heart, or else we shouldn't be following them!
TheArmoursmith@reddit
Who will own the robots though? Think about it: when people are no longer needed by the owner class for their labour, then what do you think will happen to them?
simbonk@reddit (OP)
In the utopian society that we should be steering toward? Why, the Federation, of course! Not the billionaires!
Thin_Dream2079@reddit
The novel Federation has one possible timeline for this: late-stage capitalism leads to the rise of the “Optimum” movement that brings us to the brink of self-annihilation.
Only the gift of warp tech to all peoples and nations, unlocking the Universe to all, was enough to cool things down.
strangway@reddit
Apple has some of the smartest, most talented software engineers in the world, and they have access to unlimited funds. Their attempt at AI integration into iOS and macOS was a total dud. If they can’t make AI indispensable, then it’s too early. Reminds me of the Newton: too early for a good PDA. 15 years later, we got the iPhone.
Truly life-changing AI is at least a decade away, but we’ll see pockets here and there that add value. ChatGPT and AI imaging are just 2 examples.
Ralph--Hinkley@reddit
I feel that if man had AGIs, we would find ways to use them for destruction and warfare.