Manhattan style project race to AGI recommended to Congress by U.S. congressional commission
Posted by Status-Beginning9804@reddit | LocalLLaMA | View on Reddit | 117 comments
Which models are you hoarding to use once you're in the bunker?
The Commission recommends:
- Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability. AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would surpass the sharpest human minds at every task. Among the specific actions the Commission recommends for Congress:
• Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership; and
literal_garbage_man@reddit
This is such a grift. Lol
Whotea@reddit
I’m sure the mountain of phd researchers from every university on earth writing papers on it are all just making up their findings lol
literal_garbage_man@reddit
Who exactly is "the commission"?
Whotea@reddit
Google is right there
https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/
literal_garbage_man@reddit
...
...
Okay, so my point is I'm not seeing "mountain of phd researchers from every university on earth writing papers on it". I'm seeing lobbyists asking for public funding for data centers, trying to create a captured market.
Whotea@reddit
https://huggingface.co/papers
https://koaning.github.io/arxiv-frontpage/
literal_garbage_man@reddit
No, listen, find me an article from PhD researchers calling for a "manhattan style project race". Not lobbyists. I'm not saying "AGI is impossible"; I'm saying this is a grift from lobbyists and corporate insiders to get public funds to privatize profit and fund closed-source "research" in the form of "defense contracts".
literal_garbage_man@reddit
Like who? Who is recommending a “manhattan style project”? This is a grift by people who want to make a buck off the government’s dime.
In fact, WHO TF is “The Commission?” OP never mentions that. It’s just a screenshot.
Photoperiod@reddit
Yeah I mean the first bullet point basically says as much lol. "please funnel us tons of money thanks". They're really going for the military industrial complex style grift.
Cuplike@reddit
I hope Sam Altman and anyone like him get the necessary punishment for tricking the government into thinking LLM investments will somehow lead to AGI
Whotea@reddit
They can say anything they want. First amendment. It’s on the government if they want to believe it or not.
Imagine if I said I think it will be windy tomorrow and it’s not so I get arrested for it lmao
Cuplike@reddit
>They can say anything they want. First amendment.
Soliciting money by deceiving people is called fraud lol.
Whotea@reddit
They said their personal belief. Good luck proving they were lying lol
glowcialist@reddit
Oh, so if I say "poo poo pee pee" you think I should be put in prison? - the dude above you
glowcialist@reddit
There aren't hundreds of billions of dollars being shuffled around based on your statement that it will be windy lol
Whotea@reddit
And that’s the government’s decision to spend billions lol.
glowcialist@reddit
I was actually thinking more of the private sector there, but ok.
It's absolutely a form of market manipulation. It might be legal, but it's clearly a highly unethical practice.
Whotea@reddit
Saying “I believe X will happen by YYYY” is not market manipulation lmao
MmmmMorphine@reddit
Ehhh... I'd put it differently. LLM investment alone will not lead to AGI, but it could form a crucial part of its development, whether by simply helping amass knowledge about machine learning as a whole (though you can certainly argue that a focus on LLMs/transformers is partially misplaced and hence a waste of money and time) or by serving as a component of AGI in the long run.
Eltaerys@reddit
Happy to see at least a little sanity around here.
jeffwadsworth@reddit
It means putting everything you can into getting AGI. I agree with them. If we don't, the others will.
knvn8@reddit
What does it even mean, though? The Manhattan Project had a pretty clear idea of what was physically possible; AGI doesn't really have a clear direction other than "more GPUs".
dogesator@reddit
No, they didn't have a clear idea of what was possible; the point of the Manhattan Project was to run the tests and find out what was possible in the first place. There were many different ideas about what would happen when a nuke of a certain size was detonated, just like there are many different ideas about what happens when an AI model at a certain compute scale is trained. We don't know for sure what happens, but it's definitely worth finding out.
Ok-Parsnip-4826@reddit
These things are very different in nature. The Manhattan Project's goal was specifically to weaponize nuclear fission. A lot was already known about it beforehand: everybody knew what kind of energies were involved, what materials were required, and what problems needed to be solved to get it done. This AGI project would be equivalent to the Manhattan Project if the goal had been to "build a very big bomb". But they didn't just build a big bomb. They have learnt about a way to accomplish something magnitudes beyond what was thinkable at the time, something they knew would change warfare forever. There is nothing equivalent here.
dogesator@reddit
“They have learnt about a way to accomplish something magnitudes beyond what was thinkable at the time, something they knew would change warfare forever.“
This quote is exactly what many researchers would say describes the current state of AI as well…
ab2377@reddit
💯
Mental_Aardvark8154@reddit
Christ this country is in a ditch. AGI?
The saddest part is when people in tech believe it. Like, you are huffing your own propaganda dude.
Like that moron at Google they paraded around as a spectacle when he believed the AI was really talking. Holy fuck.
Nixellion@reddit
The confusion might be coming from the definition of the term AGI. It does not mean 'sentience'. It just means that it can perform better than humans across all cognitive domains, which is what the document above says.
However, as far as I know, many in the industry have changed the meaning of AGI, lowered the bar, and added "ASI (Artificial Superintelligence)" above it.
Either way, making an AI that "surpasses human capabilities across all cognitive domains" seems achievable within our lifetimes, at the rate everything is going. Again - this won't mean it's sentient or 'alive'. It does not have to be 'alive' to surpass us in solving tasks across logic, reasoning, vision, text, and audio.
Quoting wikipedia:
Note the definition and wording: "Matches OR surpasses across a WIDE range of cognitive tasks". By this definition a multi-modal LLM is already a close candidate for becoming AGI. The latest OpenAI model can reason across text, image, video, and audio; it can do logic, writing, planning, and various tasks, recognize images, and recognize and generate audio, etc. This falls under the definition of a "wide range of tasks". It's certainly not 'narrow'.
What stops it from being AGI is that it does not yet reliably match human performance across all those tasks.
logicchains@reddit
>What stops it from being AGI is that it does not yet reliably match human performance across all those tasks.
We're only an AI generation or two from exceeding average human performance at all those tasks; the question is whether "AGI" will require it to match all human performance (i.e. also match the most competent humans) or only the average.
sometimeswriter32@reddit
Get back to me when an AI can drive a car as well as the average human driver. Google's head of self-driving said in 2015 that his 11-year-old would never need a driver's license.
I'm still waiting.
Nixellion@reddit
Yes, and that's basically semantics and definitions, which are not clearly defined and, more importantly, not universally agreed on yet.
NaoCustaTentar@reddit
I agree with your comment, but this is simply not true tho
Unless by "close" you mean at least 5 years, then yes.
Anyone who has to use these models for work daily can tell you that, while very helpful, they're still very fucking dumb at a lot of stuff. Sometimes it's faster to just do it yourself, or ask someone else to, than to lose time trying to explain to the model what you want because it failed at the task.
The benchmarks also don't reflect that at all yet. But I know I'll get downvoted because it's allegedly at PhD level on one or two random tests, and we ignore the other 900 where it's still not at that level.
Critical_Basil_1272@reddit
This is like explaining time to your dog; the creature will get the hint soon enough.
BootDisc@reddit
I think that's it though: "GPUs". They're the main controllable element of a "Manhattan Project" since they're probably the bottleneck.
MmmmMorphine@reddit
It's almost akin to saying "more centrifuges" is what's needed (let's pretend breeding plutonium had turned out to be impractical for whatever reason).
Though I believe centrifuges were just one of many ways they approached the issue, all of which generally turned out to be too expensive or simply not possible
Whotea@reddit
Hopefully bitnet can relieve that
OfficialHashPanda@reddit
This is about training compute. Bitnet has no relevance in this context.
Whotea@reddit
Bitnet means they don’t need gpus to scale compute
OfficialHashPanda@reddit
Yeah, but unlike with the Manhattan Project, we have no clue how to get to AGI. Just investing billions into compute and hoping it magically yields us AGI is a bit of an extreme leap.
False_Grit@reddit
No... not at all! Mass-energy equivalence was still very much in the theoretical stage, and there were plenty of chemists who didn't believe in it at all.
The Manhattan Project was absolutely a pipe dream that they had no idea would work right up until it did. They spitballed all kinds of crazy ideas at first. It's a small miracle that even the most brilliant minds of the time pulled it off.
And if I'm being honest, despite all the hype, AGI absolutely has the potential to be just as consequential as nuclear fission, if not more so. And it could very well be a zero-sum game as well. The first country to pull it off might prevent any other country from pulling it off ever, even if they only win by minutes.
Our entire world's economic system essentially runs electronically; a sufficiently advanced AGI could cripple the entire world's economy if it so chose, diverting funds where and how it pleased. It could spread and control misinformation faster and more surgically than anyone could counter it.
AGI will absolutely be exactly like nuclear fission: a big boogeyman that means nothing and that half the population laughs at, right up until the second that it very suddenly, and in a very real way, becomes world-defining.
AriG@reddit
Billions of dollars funding to xAI (no crony capitalism whatsoever!)
GwimblyForever@reddit
At least Sam Altman has a messiah complex and seems to have genuinely gaslit himself into believing he's doing the right thing, even if it's not always executed well. If what you suggest comes to fruition? World's cooked.
NEEDMOREVRAM@reddit
Does Sam Altman strike anyone as that Joseph Smith guy from the Mormons?
Like when the public or the government is talking to Altman, he sticks his head into an 1800s stovepipe hat and says that only he can communicate with the AI gods, and they are saying they need the U.S. government to ban open-source AI and that the big AI boogeyman is coming to get us all! (while one of Altman's underlings plays a theremin as he gives one of his trademark pontifications).
_supert_@reddit
cries in European
FullstackSensei@reddit
While I don't live in the US, I doubt this will gather enough support in Congress to get any funding. Such a project won't create enough local jobs to win that support, and it will inevitably involve giving billions to companies headed by figures considered controversial on both sides of the aisle.
I also doubt there'll ever be a Manhattan-style project again anywhere unless there's a physical threat to humanity (e.g., a real Armageddon). That project happened at a moment in time when there was a maniac trying to conquer the world, humanity had just theorized the possibility of fission, and technology was advanced enough that humanity knew (the mechanics of) how to design and build the atomic bomb, but not so advanced that it didn't still need a lot of manpower to build all the pieces (creating a good 129k jobs).
The same analogy about technological advancement and job creation applies to the Apollo program.
AI research and building data centers, even if we include the power infrastructure needed for those data centers, don't create enough jobs to gather the necessary political support.
But again, I don't live in the US. I can't even measure in freedom units. So, what do I know...
LocoMod@reddit
I live in the US and have been a part of the military industrial complex for over 20 years. This is totally happening. And it did not start with this letter. These efforts have been well underway for some time now. The “buckling up for the ride” has come and gone. You’re a passenger sleeping in the backseat. Enjoy the ride.
Mental_Aardvark8154@reddit
Source: I made it the fuck up
Critical_Basil_1272@reddit
You don't know the U.S. military; go read some DARPA archives about AI. They've known about the power of ML in weapons for decades. Just this April, the Air Force had the X-62A's AI pilot dogfight real-life pilots for the first time. By 2028, 1,000 autonomous pilots. Do you think they really don't get AI?
MoffKalast@reddit
Airplanes are the easiest thing to automate. Airliners take off, fly, and land themselves, and the X-37 will do an entire orbital mission on its own. Given that jet fighter pilots already mainly rely on radar and FLIR pods to track targets at the limits of detection range, it's just a matter of training a model to identify hostiles based on existing sensor data and Bob's your uncle. Frankly, I'm surprised they don't have a fleet already.
Critical_Basil_1272@reddit
Ya, I agree it's probably pretty easy for an AI to fly a plane. It's pretty wild, though, that they want these Boeing Ghost Bats to fly next to real pilots and assist them with whatever.
Daxiongmao87@reddit
We can definitely make anything up, but I do believe the military has a close eye on generative AI development. I was in USAF intel for 8 years and they had their hands in everything, but that was 10-plus years ago. My younger brother still serves, now as a second lieutenant in the Army (enlisted to officer), and the language he is hearing is all about leveraging AI for all sorts of things.
I'm currently a full-stack developer at a company that has several major government contracts. I was in DevOps for 7 years, and now I'm in an applied AI development role, when 2 years ago we had no business in it. They are searching for AI-literate developers like crazy now.
NaoCustaTentar@reddit
This is the most "open ai twitter account wannabe" ass comment I've ever read here
I'm 100% sure I've read that exact ending in at least dozens of cringe tweets from fake accounts pretending to be insiders lmao
"Y'all don't know what's ahead!! Enjoy the ride"
This is just sad roleplaying lol
LocoMod@reddit
Yea yea, you're right. The world's sole superpower, which spends roughly a third of its budget on defense, was simply waiting on Congress to say "ok, now you may".
Aggressive-Wafer3268@reddit
Except nuclear bombs actually exist and are achievable
dogesator@reddit
Uhm, you know nuclear bombs didn't exist prior to the Manhattan Project, right?
Brave_doggo@reddit
They knew how they work; they knew all the effects involved. The only question was "how to make it".
And here we are, still not understanding what intelligence even is.
dogesator@reddit
If you don’t understand what intelligence is, please just open up a dictionary and it will give you the answer. If you’re looking for some deeper abstract philosophical answer, there is no consensus that such a thing is required to create intelligent systems. In fact most AI researchers would probably agree that you don’t need a deeper abstract philosophical understanding of intelligence in order to create intelligent systems.
Aggressive-Wafer3268@reddit
Nuclear bombs don't have shifting definitions, and there is a clear point at which they either exist or don't. Also, they had already achieved fission and the bomb was widely understood to be a possibility; it was a matter of time, hence the race for the bomb in World War II.
dydhaw@reddit
You're saying all this in hindsight. At the time no one knew if enough fission material could be feasibly produced, or if fission bombs were really possible or how powerful they would be in practice.
Ok-Parsnip-4826@reddit
The difference here is that physics is an actual science that makes predictions you can actually rely on most of the time. AI research is basically alchemy. They didn't start with "Uh, Fermi is pretty convinced that it's possible to build, like, a really really big bomb"; they started with a shockingly plausible idea of how to achieve it that very quickly condensed into a set of very specific problems they then set out to solve. They knew what they were dealing with far more exactly than anyone does with AGI at the moment.
katerinaptrv12@reddit
Nothing has a clear point at which it either exists or doesn't until it does.
The timeline and exact point of its existence are recorded after the fact, from observation.
KriosXVII@reddit
Well they didn't know that for sure before making one? Same with AGI.
There's no obvious reason it couldn't be made.
sometimeswriter32@reddit
There's no obvious reason it couldn't be made, except for the fact nobody knows how to make it?
dydhaw@reddit
You don't really know how to make something until you actually make it.
sometimeswriter32@reddit
By that logic you don't need any money. Just "know how to make it with zero dollars until you make it."
dydhaw@reddit
No. But by that logic, not knowing how to make something isn't a reason why it couldn't be made.
sometimeswriter32@reddit
You're making a begging-the-question fallacy.
You beg the question that they can build AGI with funding, and then, since they don't know how to make it, you work your way back to "of course not knowing how to make it is the first step to making it!"
This only makes sense to you because you already believe they can make AGI with funding. It's an exercise in circular reasoning.
dydhaw@reddit
You are confusing the claim that "we don't have a reason to think X couldn't be done" with the claim that "we know X can be done." The first one implies a necessary impossibility, the second implies a necessary possibility. Rejecting the first doesn't imply the second. I assume you don't do much research work?
sometimeswriter32@reddit
I read your statements in context, and in context you made a weird argument that we don't know how to do something until we do, so let's fund AGI. Now you are trying to change your framing.
Apple's cash on hand is 65 billion dollars. Why haven't they spent it all on AGI "research"? Maybe they could go into debt for hundreds of billions of dollars based on "we don't have a reason to think X couldn't be done". Also, "nobody knows how to do something until they do."
Obviously there are a ton of reasons to think AGI can't be done; from Apple's perspective, at least 65 billion dollars' worth of good reasons. Your entire framing of "you don't know how to do something until you do it" is an opinion you've arrived at by going very wrong somewhere in your reasoning, unless you think Apple management is very dumb, but that raises all sorts of other questions about your reasoning.
dydhaw@reddit
I didn't make that argument; that's your straw man. I made the argument that we don't know how to make something until we do, so the fact that we don't know how to do something shouldn't block us from trying to do it. This is because, logically, not knowing how to do something doesn't imply that the thing is impossible. Unlike, say, faster-than-light travel, where we know the constraints of physics likely make it impossible. This is very straightforward logic; not sure why you are struggling so much with it.
sometimeswriter32@reddit
We often do know how to make something before we do it; for example, I'm sure Meta knew how to make Llama 4 before they started training it.
Apparently, you came here to provide the dictionary definition of "invention" or something, and how dare someone try to pin you down as having an opinion on funding in a thread about funding when you responded to someone about funding:
You just stopped by to write, "allow me to define invention":
"something invented: such as (1): a device, contrivance, or process originated after study and experiment"
You're not making a claim about funding in a thread about funding. Even though the entire context was funding, it's a strawman for me to interpret your post in context. Great contribution.
dydhaw@reddit
I guess you have a very short memory. This is your comment which I replied to:
Do you still stand behind it? Can you explain how I could take it to mean anything other than the syllogism
I am attacking the second premise here. If you don't want to stand behind your own argument, that's fine, but don't deflect it onto me.
sometimeswriter32@reddit
We're talking about a funding plan. It took the Manhattan Project 3 years to build the first atom bomb, at a cost of 27 billion dollars in 2023 money. Surely my comment about nobody knowing how to make it should be read as applying over the course of the proposed funding. If I ask for 27 billion dollars in funding to build an AGI over the next 5 years, it's implied that I have some belief that the AGI can be built in the next 5 years, not 30 years, not 80 years, not 200 years.
You seem to want to engage in an argument as to whether AGI is impossible. I think you actually mean impossible for humans, not impossible given a maximally superintelligent life form trying to invent AGI.
How the hell would anyone know whether it's impossible for humans between now and the heat death of the universe?
It's pretty obvious lots of people have lots of good reasons to think it's not possible in, say, the next 3 years, given my example of Apple not spending the money on this.
It seems like we are talking past each other. I never said "Nobody in the next 5 billion years will invent an AGI."
KriosXVII@reddit
We have a general idea of how to make AGI, but the devil is in the details. They didn't know exactly how to make a nuclear bomb before the Manhattan Project either. Humans have intelligence and are a sack of atoms. There's no hard limit stopping a comparable general intelligence from arising from a different sack of atoms.
Brave_doggo@reddit
We still do not understand what the "I" in AGI is, and people unironically think we are close to understanding how to build AGI, lmao.
KriosXVII@reddit
Well, intelligence doesn't necessarily require sentience or consciousness.
sometimeswriter32@reddit
I don't think we have any idea how to make an AGI. Feces is a "sack of atoms" and your use of that phrase is bullshit.
KriosXVII@reddit
Well, you don't think we have any idea how to make AGI, but you're also just some guy on the internet and not an expert. I think we have some idea but are obviously not there yet.
We have made very meaningful, visible progress in designing artificial intelligence over the last 5 years.
Perhaps intelligence is an emergent quality of any sufficiently large neural network, and then scale, data, and compute are all we need. Perhaps other great architectural leaps will need to be made. Nevertheless, the parallel between early-1900s nuclear research and the Manhattan Project leading to the first nuclear bombs decades later is valid.
sometimeswriter32@reddit
I'm certainly no expert, but imagine you want to install Python on your computer, so you post on Reddit asking how to do that, and someone responds, "A computer is basically a sack of atoms, move those atoms around, chop chop."
Does this enable you to install python? Of course not.
MoffKalast@reddit
Clearly we need to combine nukes and LLMs to get those bombs from Dark Star.
JustinPooDough@reddit
Yeah... the thing about the Manhattan Project is that they didn't tell anyone about it.
Status-Beginning9804@reddit (OP)
Official recommendation: https://www.uscc.gov/sites/default/files/2024-11/
Article: https://thehill.com/policy/technology/4997998-government-commission-proposes-manhattan-project-style-ai-funding-plan/
confused_boner@reddit
Manhattan project was extremely secretive, why is this being blasted publicly?
espadrine@reddit
It sounds more like a Superconducting Super Collider-style project then.
Temp_Placeholder@reddit
But this one will be the Substantially Superior Superconducting Super-Collider Supreme
NEEDMOREVRAM@reddit
So... does this mean the Japanese should start to be worried? I'm not a liberal in any way/shape/form, and I'm as offensive and outspoken as they come, but this does seem to have a slightly racist connotation, as the "Manhattan Project" was something created to kill millions of Japanese.
BangBang_ImBroke@reddit
Your gov link is broken for me (404 error)
jezzarax@reddit
https://www.uscc.gov/sites/default/files/2024-11/2024_Comprehensive_List_of_Recommendations.pdf
That's the correct link.
Round-Holiday1406@reddit
Except Trump will kill it anyway
Whotea@reddit
Nah he’ll just make it so they have to do it through xAI
NighthawkT42@reddit
No need, and just a lot of waste. Several companies are in this race, and what they're creating is an increasingly low-margin, commoditized product. That's not to say AI isn't creating value, just that the difference between the models is decreasing and not that significant for many uses.
The value is going to be in putting those models to use.
Whotea@reddit
Computers just move electrical signals around. What could that be used for? All empty hype
TheGuy839@reddit
What? I was under the impression that the scope of the project would be to find new solutions, not just stack more GPUs like current companies are doing?
G4M35@reddit
Imagine this: 2-3 people in a garage or basement, hacking on some new AI thing. They build a proof-of-concept MVP and are out there looking for seed money; somehow they talk with In-Q-Tel... a few days later DARPA or a rep from one of those three-letter agencies knocks at the door...
SwagMaster9000_2017@reddit
I'd rather live in a tech surveillance state than let random people create the most powerful weapon ever.
PwanaZana@reddit
The ATF shoots the programmers' dog.
"Sorry, force of habit."
NEEDMOREVRAM@reddit
The ATF guy then waddles over to the refrigerator and starts stuffing his face full of cake and whatever other sweets are in there. His partner then brings in the CPAP machine to help him with his breathing. Protecting our democracy is hard work!
Flyingfishfusealt@reddit
With something like this, it would more likely be the CIA dressed as ATF, coming to kidnap them.
mr_birkenblatt@reddit
Give us money
race2tb@reddit
The issue is you do not need AGI to completely dominate the world order. If China or some other country can do to the service sector what they did to the manufacturing sector, that is enough to crush countries, especially now, given their high debt positions. This is an economic arms race, not a military one. You won't need to launch a single missile to capture a country if you plunge it into a depression. Once you crush it economically, you can just buy it for pennies on the dollar.
gfy_expert@reddit
Today is the craziest day of the year, for sure. Russia under attack, Europe getting ready for war, now this.
yoshiK@reddit
Well, with the big AI companies spending something like 30 billion on LLMs this year, and their shareholders likely starting to ask awkward questions about those 30 billion next year, this initiative may come just in time to prevent another AI winter.
Educational_Gap5867@reddit
They did something similar when the internet was just in its infancy. I think the internet as we know it is going to change profoundly in the next decade or so.
05032-MendicantBias@reddit
The USA has a history of throwing unlimited money at long-shot strategic projects, with mixed results.
E.g., the Star Wars program poured money into researching far-fetched missile-defense beam weapons. One outlandish proposal was to put a nuke in orbit and use rods to roughly focus the gamma rays into a laser that would vaporize ballistic missiles.
I'm not sure an AGI program would yield results, but I'm also not sure it wouldn't...
shokuninstudio@reddit
Considering what is happening right now, the AGI they create will just be trained to promote creationism, homeopathy, tobacco, and gasoline-powered cars that shoot flames. Anyone who says the AGI is wrong will be labelled a deep-state crisis actor.
Sabin_Stargem@reddit
Honestly, I think I would prefer at least a dozen corporations scrabbling against each other to make an AI. There's more diversity, and it would make it easier for everyday folk to influence which AI is successful.
You can bet a Manhattan-style AGI won't be into being a catgirl that punches elf nazis in the groin, with Fist of the North Star style results.
whispershadowmount@reddit
I imagine it's not immediately obvious to those fine gentlemen that most of those "AI chips" come from a tiny island right next to China? Good luck running your AGI without the chips. Kind of similar to how critical oil was in the previous world war.
3-4pm@reddit
AGI would be worth far more than the gubbernut would pay. I suspect this is a fool's errand meant to drain Chinese money like we did to the Soviets with SDI.
madaradess007@reddit
Exactly my thoughts, dude!
Here in Russia, people are faking "our own AI progress" like crazy, putting together GPT-3-equivalent LLMs and presenting them as cutting-edge stuff. Tons of money is wasted on these useless 'AI' demos.
PlantFlat4056@reddit
Time to nuke Xi and his minions with AGI!
mikaelhg@reddit
Then what?
Jumper775-2@reddit
Fuck I haven’t graduated yet I won’t be part of this.
EconomyPrior5809@reddit
Manhattan style project race?
Status-Beginning9804@reddit (OP)
Manhattan project style race?
synth_mania@reddit
Race style manhattan project?
AlbanySteamedHams@reddit
Go in all Don-Draper-like.
SpecialistStory336@reddit
I agree with this one.