Either that or shrimp concussions. Also, note that I changed the pronouns to he, for the sake of, frankly, realism.
Posted by OneHourDailyLimit@reddit | greentext | View on Reddit | 64 comments

JuanHernandes89@reddit
What does this mean
OneHourDailyLimit@reddit (OP)
JuanHernandes89@reddit
I meant the PEPFAR robots thing
OneHourDailyLimit@reddit (OP)
Basically people who are into discussing how to improve the human condition most effectively, Effective Altruists, will either default to:
sensible humanitarianism (see: their advocacy for malaria nets as the ideal charity donation in terms of lives saved per dollar donated), which I here compare to PEPFAR, the massive anti-AIDS program founded by W. Bush and credited with saving some 20,000,000+ lives between its founding in 2003 and its crippling in February of this year,
or idiotic fearmongering about how, if we don't financially support people who sit around and worry about AI all day, AI more advanced than anything even remotely realistic will eventually destroy us all to make paperclips or something.
MrCockingFinally@reddit
Except this program backfired heavily because in many parts of Africa people used the mosquito nets as fishing nets.
Since the mesh is so fine, it catches even baby fish and absolutely decimated fish populations in various African lakes.
At the end of the day, practice has a habit of beating up theory and stealing its lunch money.
Another example is people donating high-efficiency wood-burning stoves to people in rural India. They require less fuel and were touted as a very cost-efficient way of reducing carbon emissions.
But researchers found that while people who got the stoves did use them, they still made additional fires. Which reduced the carbon savings per stove. The researchers gave this phenomenon a name and tried to figure out why people did it, but if they had just walked into their kitchen or gone to an Indian restaurant they would have immediately understood.
The average rural Indian family is going to cook at least rice, flatbread, and one or two curries every night. Ergo, they need 3-4 heat sources, just like almost every range in a Western kitchen has at least 4 hobs.
awesomeness1024@reddit
I think that both can be true.
We can and should try to save human lives by donating to the most effective causes. We should also acknowledge that AI is improving at an alarming rate, and that having computer systems smarter than us without human values could spell disaster for us.
OneHourDailyLimit@reddit (OP)
Yes, and if the gravity of the earth switched off, that would also be pretty bad.
awesomeness1024@reddit
In the last 6 years, we've seen data center spending increase by 4 orders of magnitude, while algorithmic efficiencies and unhobbling techniques like Chain of Thought, RL from Human Feedback, and post-training have improved efficiency by about another 4 orders of magnitude. Together that represents a hundred-million-fold increase in effective power, from models like GPT-2 to our latest models. 100,000,000x.
Currently, there are reports of massive data centers being built, like the 5GW Stargate project, which would blow all current data centers out of the water by orders of magnitude. Large companies are spending unprecedented figures of over a quarter trillion dollars on further development. I'm in STEM at a top university, and from firsthand experience, AI is competing alongside high finance in terms of salaries and taking top talent. And Jane Street can only make so many markets; I don't see AI funding slowing down any time soon. These brilliant minds will most likely drive further compute efficiency.
We are moving fast, man. I agree it shouldn't be the only thing to focus on, but I don't think we should hand-wave the issue away with a joke.
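To spell out that arithmetic (a toy sanity check that takes both four-orders-of-magnitude estimates at face value; the inputs are claims, not measured values):

```python
# Toy sanity check: 4 orders of magnitude of compute growth times
# ~4 orders of magnitude of algorithmic/unhobbling gains comes out
# to ~8 orders of magnitude, i.e. a hundred-million-fold increase.
compute_gain = 10 ** 4      # claimed growth in compute spending
efficiency_gain = 10 ** 4   # claimed algorithmic efficiency gains
total_gain = compute_gain * efficiency_gain
print(f"{total_gain:,}x")  # → 100,000,000x
```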
OneHourDailyLimit@reddit (OP)
And none of that will ever equal the intellect of a tapeworm seconds from death. I'm sorry, I'm not willing to believe that shit is either alive or any more threatening than the existence of the internet. It can't cite a source without making shit up. It can't even generate pornography without creating an abomination. It's a glitchy process with no actual logic or humanity; it has no way to tell when it's messed up. It's an impractical mess of a technology, a true bubble.
zerosaved@reddit
Well, you’re certainly wrong on the porn thing, so frankly now it’s safe to assume you don’t know what you’re talking about for the other things.
OneHourDailyLimit@reddit (OP)
I assure you, I have tried. AI cannot make porn for free.
TreeGuy521@reddit
Okay I don't really understand how you are on a 4chan related sub and just don't notice how there's anywhere from 3 to 8 threads stapled to the top of b/gif that are all just image gen.
OneHourDailyLimit@reddit (OP)
Because I don't use fucking 4chan? I'm only even here because it's the meme template origin and Tumblr likes the meme.
Perfect_Inevitable99@reddit
Unsurprising you come from Tumblr...
Also, you have all these opinions and seem to base them on models that you can access and that are also free, I suppose?
Honestly, I think if you truly believe AI is a complete paper tiger, a bubble, and incapable of all the things that many rational, well-educated people are concerned about, you are in complete denial...
Also as far as the original meme goes, if you want to focus on truly effective altruism, just go out and volunteer at a charity that actually directly helps people, instead of arguing about the philosophies of the application of socialist models of welfare.
TreeGuy521@reddit
You didn't need to specify you were from Tumblr, the completely unprompted "moids dumb xd" in the title and general air of entitlement made that pretty obvious.
OneHourDailyLimit@reddit (OP)
I'm bitter about someone attempting to pretend to me that rationalism is not a movement composed entirely of cis men. Sue me.
TreeGuy521@reddit
I'm going to be real with you rn, like actually. The only real issue I have with the Tumblr activist type is that they are really bad at optics. Nobody who looks at your post outside of your specific social circle is going to know you are talking specifically about Silicon Valley tech bros, so the simplest answer is that the most likely reason you went out of your way to specify men in the title is that you are bitter enough at them in general to do that.
OneHourDailyLimit@reddit (OP)
Also: for free.
TreeGuy521@reddit
Idk I don't think openai accepts good boy points
PGSylphir@reddit
yes it can, and it's easy to do.
AI is also not truly intelligent and won't be in any of our lifetimes, unless some new technology shows up that increases our computing power a thousandfold. Anyone thinking it is, or will be in 10 or so years, has been fooled by an LLM.
source: computer scientist with specialization in AI and compsec.
OneHourDailyLimit@reddit (OP)
THANK you.
PGSylphir@reddit
I'm saying you're wrong, and so is the other commenter.
OneHourDailyLimit@reddit (OP)
Yeah but I'm wrong (supposedly) about the unimportant stuff. They're wrong about EVERYTHING.
PGSylphir@reddit
I think your priorities are messed up.
If you think AI is not a problem, you're sorely mistaken. While AI is not, and will not be, truly intelligent anytime soon, it is absolutely a danger to humanity in some quite important respects. At the rate generative AI is improving, it will soon be impossible to distinguish real from AI, and that is where everything goes to shit. It'll be easier than ever to scam people, destroy their lives with fake pictures, videos, or voice recordings, and cause massive uproar among the general public.
You need to focus less on arguing with others about topics you have no knowledge about and more on learning from those who do.
Dismissing any AI discussion as "doomposting" is a very dangerous attitude to have.
OneHourDailyLimit@reddit (OP)
I will literally just count the fingers, man.
zerosaved@reddit
Well let me assure you, just because you can’t doesn’t mean others haven’t lol. I have seen firsthand what some people have been able to create with AI, and let me tell you, it’s somethin else. Properly trained AI doesn’t produce abominations with multiple limbs and fingers anymore.
arielif1@reddit
dog do you really, honestly think AI needs emotions to be a threat to society as we know it and the international distribution of work? what the fuck even is that logic?
a_code_mage@reddit
You are so wildly misinformed.
You should probably do some research into what you’re saying before you go around saying it with such conviction.
OneHourDailyLimit@reddit (OP)
Is anything I said actually wrong?
a_code_mage@reddit
Yes.
OneHourDailyLimit@reddit (OP)
What?
lcmaier@reddit
Hello, person who has a degree from a T20 in this and whose job is to build ML systems here: you're wrong, and I'll go through in detail why.
Your first point is true; the second is not. Efficiencies have not improved by 4 orders of magnitude; hell, they still haven't found a (functional, non-hallucinatory) replacement for self-attention, which is QUADRATIC in its time complexity. The larger data centers are necessary for these efficiency "breakthroughs" you speak of for precisely that reason: they're fighting a losing battle against a cost that grows quadratically with context length.
In general, you're too focused in this post on the financial numbers and not the underlying technology. Yes, a lot of money is being invested in AI, both in labor and in data centers, but that does not mean superintelligence is inevitable; money is wasted all the time in pursuit of technology that never comes to fruition (anyone remember the Juicero?). The AI paradigm is still based on a paper from 2017. Seriously, let that sink in: we still haven't found a better base architecture than the Transformer, despite almost a decade of attempted improvement and literal billions of dollars sunk into research.
And the methods you list aren't going to push a model into superintelligence. Chain of Thought is just "what if we let the model prompt itself?"; RLHF necessarily tops out at the human limit, since humans are the ones evaluating the output; and post-training is a buzzword for the continual work any model needs once it's productionized. We still haven't found an AI paradigm that leads to truly innovative, superhuman performance outside of highly structured, perfect-information games like chess and Go; even the RL agent DeepMind built to play StarCraft just ended up executing human strategies with impossibly high APM, which I think is a pretty apt metaphor for AI as it exists today. Unless we see a paradigm shift on par with Attention Is All You Need or GPT-3's next-token generation technique, the next decade of AI will be much more boring than the last, with a bunch of small improvements, and there's no evidence to suggest any such paradigm shift is on the horizon.
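To make the quadratic-cost point concrete, here's a back-of-the-envelope sketch (the head dimension is an illustrative value, not from any specific model):

```python
# Rough cost of forming the self-attention score matrix Q @ K^T,
# which has n x n entries for a sequence of length n. d is the
# per-head dimension (an illustrative value, not any real model's).

def score_matrix_flops(n: int, d: int) -> int:
    """Multiply-adds to form the n x n attention score matrix."""
    return n * n * d  # each of the n*n scores is a length-d dot product

d = 64
cost_4k = score_matrix_flops(4_096, d)
cost_8k = score_matrix_flops(8_192, d)

# Doubling the context length quadruples this cost:
print(cost_8k / cost_4k)  # → 4.0
```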
Magallan@reddit
AI has made 1 (one) improvement in the last decade and it can now pretend to write coherent sentences.
It has done nothing else that it couldn't have done 20 years ago.
CCCyanide@reddit
nuance ?? on 4chan ?????
get outta here
throwawayeastbay@reddit
The original is creepy or wet
Known-Ad-1556@reddit
Methods or numbers
OneHourDailyLimit@reddit (OP)
Weirdly, no.
asswoopman@reddit
ITT: OP posts indecipherable garbage on 4chan, no one understands. OP posts it again on Reddit, gets the same response.
Many such cases.
pre_nerf_infestor@reddit
effective altruism is the exact kind of "good" that people divorced from all reality would come up with. I realized this when the founder said in an interview that if he could only save either a baby or a famous painting from a burning building, he'd save the painting and then sell it to donate the proceeds to charity.
People who think like this are the supervillains in action movies gloating about "seeing the big picture", but in real life where there's no captain america to punch them in the fucking mouth.
avagrantthought@reddit
Why exactly would that be wrong? Because in the one you can see the dying baby and in the other you aren't able to see the thousands of dying babies?
pre_nerf_infestor@reddit
no, because in one scenario you are doing an unambiguous immediate good, and the other scenario gives you the opportunity to put off the good indefinitely. Which is what all these EA dipshits do when they spend all their time enriching themselves while "raising awareness" about the importance of colonizing Mars and preventing the rise of an unstoppable super-AI.
avagrantthought@reddit
You didn't define a substantive difference. I wouldn't call it an unambiguous immediate good if you're depriving the world of an even greater good.
By your logic, is giving a golden box filled with 20 sandwiches to a starving child better than selling that box for $100,000, buying 100,000 sandwiches, and giving them to starving kids?
pre_nerf_infestor@reddit
You really don't get it do you.
To an EA, there is always an even greater good. There is no upper limit to the number of theoretically starving children in an unknown future that any money could be better spent on. If you follow the logic, the ultimate best use of your money is always on yourself, in order to convince more people to follow EA. After all, wouldn't your golden box be better spent paying yourself to run a series of lectures, so that you can convince a million people to each donate a thousand sandwiches, a billion sandwiches in total, to starving kids?
avagrantthought@reddit
How so?
Then it's not really for yourself, is it?
And if it's been proven that more utility is provided by educating others and convincing them to generate utility, then... why not? Again, instead of a million kids being saved, 10 million are.
From my point of view, the issue you seem to have is with optics. Just because the good is indirect and can't be seen doesn't mean it isn't monumental positive utility.
pre_nerf_infestor@reddit
I'm discussing this with you in good faith, but it is increasingly hard to believe you really don't understand the difference between actually saving one child and using the promise of theoretically saving a thousand in an imaginary future to pay yourself a huge amount of money.
Because that's what supposed effective altruists actually did in real life.
This isn't about optics or whether you can see a child being saved. This is about how EA is used as a justification to actually not save any children at all.
avagrantthought@reddit
I see, so your issue is that the one is a guaranteed gain in utility, whereas the other is a risk/investment which MAYBE will bring more utility?
If that's your problem, then I'd have to say I can see the logic, but again: you're giving speeches to thousands of people, and they in turn become effective altruists. It's almost like, instead of spending €1,000 to buy food for the homeless, you spend it to open a permanent food shelter and receive donations.
Do you have a source for that? I'm talking in the context of giving yourself a modest wage while running such an organization.
I'm sorry, I can see the argument that the money is being spent like shit and extremely ineffectively, but 'no children at all', really?
CelDidNothingWrong@reddit
To be clear with that example, MacAskill said that would be the right choice if there was a guarantee you could sell the painting for enough resources to save multiple lives.
So it’s really just a long-winded way of saying you would sacrifice one life for multiple, but it tries to challenge our inherent bias for the visceral here-and-now over long-term consequences.
That’s largely what effective altruism is, a conscious attempt to choose options that have the best moral outcomes even if taking those decisions doesn’t make us feel as good about ourselves.
Similar-Factor@reddit
Nah it’s prosperity gospel in a tech bro wrapping. It’s entirely about moralising why becoming an investment banker or Silicon Valley tech fucker is actually the bestest thing ever trust me bro.
CelDidNothingWrong@reddit
Well that’s what many have used it for, but I dont think that can fairly be said of MacAskill
MainSquid@reddit
Agreed, the person you're replying to clearly isn't familiar with the movement. Anyone who has read Singer knows that isn't a fair assessment.
Granted, it's definitely misused by tech bro morons.
pre_nerf_infestor@reddit
Unfortunately there's no spacetime ceiling on "the best moral outcome", since a life now apparently equals a life later (hilarious how exactly that matches how they think of money in a low-interest environment). This means the logical endpoint of EA is "the best use of my resources is to aggrandize myself to further spread the cause of EA to other people".
OneHourDailyLimit@reddit (OP)
The thing is, that would be the right decision, but they never fucking do it. They spend everything they get on themselves: Siskind is rich, Yudkowsky is rich, Yarvin is rich, Thiel is rich enough that he bought the fucking vice presidency. If they stuck to their guns, I could respect that, but they don't in movies, and they don't in reality.
pre_nerf_infestor@reddit
considering the most famous EA of all time is Sam Bankman-Fried...
Ozymandias_1303@reddit
PEPFAR sounds like an abbreviation for a digestive condition. "Sorry I can't come into work today boss. I ate some bad fish and I've got the PEPFARs."
clotifoth@reddit
1 vowel, 1 consonant away from PŪPFART
Thevillageidiot2@reddit
My last relationship ended after I accidentally pepfarded all over her during sex.
Killingkoi@reddit
Brainrot gibberish
clotifoth@reddit
Figurative language that escapes you is not brainrot. Otherwise the whole Western canon and the Bible are brainrot too, and then what does the word even mean anymore?
Play174@reddit
Based on the comments here, effective altruism sounds like Mormonism, Scientology, or the Nation of Islam for atheists
KillaSlothZilla@reddit
Am I having a fuckin stroke? I can't understand any of this
StrengthfromDeath@reddit
I would almost say OP is in the wrong place, but they are so clearly on the spectrum that they should be running Channel Four.
avagrantthought@reddit
Anon wants to be a passive utilitarian without being called a utilitarian
Fuhrious520@reddit
Looking for a new game
Ask clerk if this game is mechanically difficult or numbers difficult
Doesn't know what I'm talking about
Explain to her in detail what the difference is
She laughs and says, “It's a good game, sir.”
Buy it and take it home
It's numbers difficult
Fickle_Sherbert1453@reddit
See, your problem is that you looked for a difficult game instead of an enjoyable one.