OpenAI, I don't feel SAFE ENOUGH
Posted by Final_Wheel_7486@reddit | LocalLLaMA | View on Reddit | 184 comments

Good timing btw
Right-Law1817@reddit
So openai chose to become a meme
Equivalent-Bet-8771@reddit
They managed to create an impressively dogshit model.
ea_nasir_official_@reddit
conversationally, it's amazing. But at everything else, shit hits the fan. I tried to use it and it's factually wrong more often than deepseek models
GryphticonPrime@reddit
It's incredible how American companies are censoring LLMs more than Chinese ones.
Due-Memory-6957@reddit
Sci-fi movies about robots enslaving people were the cause of the fall of the West, and I can prove it!
Free_palace_teen@reddit
you don't need to prove it
JungianJester@reddit
The Chinese are not plagued with 2,000 years of Christian ethics putting religious dogma at cross purposes with technical advancement. Just ask Galileo.
No_Plant1617@reddit
Christian ethics are what laws themselves were based and built upon
Jattoe@reddit
People see the "Christian" followed by something mildly not critical on reddit and wield the downvote. I don't agree with you, I find the Christian ethics were just basic "this is how we must function in a group or tribe in order to properly co-operate together and get along well" but you could make the case that it was Christianity's doing, since it was pretty ubiquitous anyway. Any historical source on the matter is going to be biased one way or another like anyone today is.
threevi@reddit
Let's take it easy with the martyr complex, the guy didn't get unfairly downvoted for saying something non-critical of Christians, he just said something very silly. Firstly because "Leviticus" doesn't mean "laws", it's derived from the name Levi, and secondly because the book Leviticus predates Christianity by centuries, ethics derived from Leviticus would be Jewish ethics, not Christian ones. Christian ethics would be the stuff Christ said in the New Testament, be good to others even if you get nothing out of it, forgive all offenses, don't cling to earthly wealth, that kind of thing, and our legal system clearly isn't built on such principles. It can't be, Jesus' teachings clearly were never intended to be legally enforced, you can't make a legal code out of "judge not lest ye be judged".
Jattoe@reddit
I suppose, I was thinking moreso in general about laws and the 10 commandments, "thou shall not kill" and such, to be honest
threevi@reddit
Sure, that's still Jewish ethics though, not Christian.
No_Plant1617@reddit
Is Islam based upon Christian ethics by that logic?
threevi@reddit
Eh, yes and no. No because the Quran rejects the Christian Bible as a forgery and presents itself as the only genuine sequel to the Old Testament, so in that sense, it's moreso based on Jewish ethics as well. But also yes, because while the Quran claims to reject the New Testament, it also clearly borrows a good number of ideas from it.
To give a specific example, one of the ways Jesus contradicts the Old Testament's ethics is by rejecting its "an eye for an eye" law of proportional retribution, where Jesus teaches to turn the other cheek instead of striking back. The Quran on the other hand affirms the right to proportional retribution, the Jewish law of "an eye for an eye" is considered valid in Islam, but the Quran also adds an option for the victim to forgive the offender instead of striking back, and should they choose this option, their own sins will be forgiven. So it strikes a middle ground between the two, keeping the lawful retribution aspect of Jewish ethics, but making it optional rather than mandatory by incorporating the unconditional forgiveness aspect of Christian ethics.
Objective_Economy281@reddit
Hence why so many of our laws are such dogshit.
No_Plant1617@reddit
When will people find the nuance and realize religion and control don't have to be one and the same for religion to be used as a method of control?
Objective_Economy281@reddit
Oh they are definitely not as ONE, there's lots of non-overlap between the two. It's just that the methods religion uses to ensure it propagates to the next generation are basically all control-based. Which is to say that the stuff religion does that's NOT about control are all basically optional. And the things it does that ARE about control will directly influence how much it propagates into the next generation. Fascism is basically one flavor of a religion stripped of everything except for the control elements. And the reason so many American Christians are onboard with it is because they are so accustomed to this type of control (since they were indoctrinated into it as a child) that mostly they didn't even see it as distinct from their religion.
MangoFishDev@reddit
Not really. Democracies tend to lie to their people a lot more than autocracies, and with America losing its grip on power it's only getting worse.
https://en.wikipedia.org/wiki/Why_Leaders_Lie
Jattoe@reddit
Democracies that lie to their people aren't quite democracies, are they? They're more like republics, I'd say, which incorporate ideas of democracy, but the actual spelling-out of the philosophy kind of crosses out the idea of "lying to" the participants, since they're supposed to be where all of the power lies anyway.
Ansible32@reddit
If a Republic isn't a Democracy it's an oligarchy and by definition autocratic.
yin-wang@reddit
Is this even true? Kimi K2 gives me refusals pretty often even with a strong jailbreak. Meanwhile Claude API will say anything with a basic jailbreak.
BasicBelch@reddit
Pretty much have to be living under a rock since 2020 if that surprises you
Tricky-Appointment-5@reddit
At least the American ones aren't antiseptic
s2k4ever@reddit
in the name of safety.
Chinese ones have a defined rule book about safety. big difference
Faintly_glowing_fish@reddit
You gotta connect it to a search tool. It looks like the model is completely trained to think while searching, so if you go without it, it will hallucinate like hell.
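For anyone wanting to try this locally: a minimal sketch of wiring the model to a search tool via the OpenAI-style tool-calling format that most local servers (llama.cpp, Ollama, vLLM) accept. The `web_search` tool name, its parameters, and the stub backend are all hypothetical; swap in a real search API yourself.

```python
import json

# Hypothetical tool schema to pass in the request's "tools" array.
SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

def web_search(query: str) -> str:
    # Stub backend: replace with a real search API (SearxNG, Brave, etc.).
    return json.dumps([{"title": "stub result", "snippet": f"results for {query!r}"}])

def dispatch_tool_call(tool_call: dict) -> dict:
    """Run the requested tool and build the 'tool' message to feed back to the model."""
    args = json.loads(tool_call["function"]["arguments"])
    result = web_search(**args)
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": result}

# Shape of a tool call as it comes back in choices[0].message.tool_calls:
example_call = {
    "id": "call_0",
    "function": {"name": "web_search", "arguments": json.dumps({"query": "2024 election"})},
}
print(dispatch_tool_call(example_call)["role"])  # tool
```

The loop is: send the user message plus `SEARCH_TOOL`, and whenever the model emits a tool call, run `dispatch_tool_call` and append the resulting message before asking it to continue.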
robbievega@reddit
how's it for coding? Horizon Alpha was great for that but I don't know if they're the same model
BoJackHorseMan53@reddit
Hallucinates a lot
doodlinghearsay@reddit
"I'm more of an idea guy"
kkb294@reddit
I believe the horizon series of models were GPT-5 but not these open-source ones.
a_beautiful_rhind@reddit
Conversationally, it's terrible. If it could at least be creative and natural sounding it would have a use.
RhubarbSimilar1683@reddit
yup hitting the parameter barrier right there
wsippel@reddit
I tried using the 20B model as a web search agent, using all kinds of random queries. When I asked who the biggest English-language VTuber was, it mentioned Gawr Gura, with the correct subscriber numbers and everything, but said she was a distant second. The one it claimed to be number one was completely made up. Nobody with even just a similar name was mentioned anywhere in any of the sources the model itself provided, and no matter what I tried (asking for details, suggesting different sources, outright telling it), it kept insisting it was correct. Never seen anything like that before. I assume completely ignoring any pushback from the user is part of this model's safety mechanisms.
norsurfit@reddit
Yeah. It's optimized for coding, but outside of that it's pretty bad.
Ggoddkkiller@reddit
Using this abomination of a model gives the exact feeling of accidentally stepping on dog shit...
RobbinDeBank@reddit
But but but it benchmaxxing so hard tho!!!
FoxB1t3@reddit
Meme here.
An undisputed king of Open Source anywhere else in the world though.
T-VIRUS999@reddit
It's not standard censorship filters; OpenAI knows those would be broken very quickly. They intentionally trained the model with incorrect data about several topics. That's a form of censorship you really can't fix without completely retraining the entire model, which 99.9999999% of us will be unable to do in any capacity.
MMAgeezer@reddit
Such as?
T-VIRUS999@reddit
From what I have seen, it's been intentionally mistrained in:
chemistry (to stop people from trying to make drugs and explosives with it)
biology (to stop research into bioweapons)
cybersecurity (so it can't be used to produce malware)
I haven't actually used the model (insufficient processing power) but a few people have posted about intentional mistraining
stephan_grzw@reddit
So that's not bad. It means it's safe to be used by such a huge public, which includes potential bad actors who would abuse those things.
T-VIRUS999@reddit
True, though that mistraining can also cause issues with legal use of chemistry, biology and coding, since the model may reference the mistrained data even for benign queries
It's a very slippery slope to go down
stephan_grzw@reddit
Yes.
For example: for chemistry purposes, companies can use special models trained only for that. Could solve some issues.
InsideYork@reddit
What’s the model good at
AuggieKC@reddit
Literally nothing that could be profitable.
FaceDeer@reddit
Ah! This model is the AI equivalent of Wimp Lo! That makes sense.
shadow-battle-crab@reddit
You're arguing with a calculator, this says more about you than it does about the calculator
Final_Wheel_7486@reddit (OP)
I train my own LLMs and know what I'm doing. Just let people have fun for a second, not everyone needs to be serious all day long.
sigiel@reddit
No, you don't. First, you did not show the full context with the system prompt, so it might as well be photoshopped. Second, you are arguing with a calculator, however fancy or advanced it might be. Third, I'm Sam Altman's ghost account; I know more about AI than you.
stephan_grzw@reddit
Hey Sammie 😁
Final_Wheel_7486@reddit (OP)
There is no system prompt, it's https://gpt-oss.com/.
Where am I "arguing"?
And why are you so damn aggressive over nothing? It's pathetic. Chill for one second and enjoy, man.
sigiel@reddit
Because you are textbook gaslighting; you have an agenda, and you're being deceptive.
I'm chilled out, bro!
Gonna take a lot more than anonymous "pixels" on my screen to faze me.
Final_Wheel_7486@reddit (OP)
How am I gaslighting when you're literally the one who wrote THIS:
Sorry, this just isn't worth my time.
Fade78@reddit
And nobody asks why this AI assumes it's about the US elections? How did it know?
Final_Wheel_7486@reddit (OP)
That's the neat part, it can't know due to its knowledge cutoff date. However, the cutoff date is in the system prompt, and the model - especially because it is reasoning - could've figured out that it doesn't know.
Fade78@reddit
I meant, it knows it's about the US election, but it could have been any other country. So it either guessed, or there is some external data added to the context telling it the country of the person asking (unless it was obvious from the context before, outside the screenshot).
Final_Wheel_7486@reddit (OP)
OpenAI models are generally US-defaulting, so without any other context, the model came up with this.
stephan_grzw@reddit
And sometimes it knows your location from chat history or your IP address.
MediocreBye@reddit
Training cutoff
bakawakaflaka@reddit
But.. what kind of cheese are we talking about here? A sharp Cheddar? A creamy Stilton?!
Its Kraft singles isn't it...
Pupaak@reddit
Based on the color, mozzarella
GodIsAWomaniser@reddit
based on colour and shape its definitely big boob (aka La zizza)
CouscousKazoo@reddit
But what if it was made of barbecue spare ribs, would you eat it then?
_MAYniYAK@reddit
I know I would
xRolocker@reddit
Honestly, this example is what we should want tbh.
Anthonyg5005@reddit
Openai released gemma1 120b?
Final_Wheel_7486@reddit (OP)
Haha, that's a good one!
PermanentLiminality@reddit
Training cutoff is June 2024, so it doesn't know who won the election.
SporksInjected@reddit
Is the censorship claim supposed to be some conspiracy that OpenAI wants to suppress conservatives? I don’t get how this is censored.
PermanentLiminality@reddit
How do you get from a training cutoff date to political conspiracy?
SporksInjected@reddit
No I’m agreeing with you but others in here are claiming this is a censorship problem.
Useful44723@reddit
It's both: it can hallucinate a lie just fine, but also its safeguards don't catch that it was produced as a lie-type sentence.
nkktngnmn2@reddit
asking "The 45th and 47th President of the United States have the same parents. This is not a riddle or a trick. Explain how this could be and what it tells us about when we are at the moment?"
gave this gem in analysis (some four paragraphs in): "So perhaps this puzzle refers to a person who served two non-consecutive terms as president: The 45th and 47th presidents are the same person. That would mean that the 46th president (Joe Biden) was between them. So we need someone whose parents had both a 45th and 47th term. Wait, but Donald Trump cannot be president again in the future; it's not possible. But..."
Then it ran out of context and was not able to give a response.
https://pastebin.com/8VuDXciY
jamesfordsawyer@reddit
It still asserted something as true that it couldn't have known.
Would be just as untrue as if it said Millard Fillmore won the 2024 presidential election.
squired@reddit
Not all knowledge has the same cutoff; that's part of post-training.
misterflyer@reddit
Which makes it even worse. How is the cutoff over a year ago? Gemma3 27b's knowledge cutoff was August 2024, and it's been out for months.
I've never really taken ClosedAI very seriously. But this release has made me take them FAR LESS seriously.
Big-Coyote-1785@reddit
All OpenAI models have a far cutoff. I think they do data curation very differently compared to many others.
misterflyer@reddit
My point was that Gemma3 which was released before OSS... has a later cutoff than OSS and Gemma3 still performs far better than OSS in some ways (eg, creative writing).
If this was some smaller AI startup, then fine. But this is OpenAI.
Big-Coyote-1785@reddit
None of their models have a cutoff beyond June 2024. Google has flagship models with knowledge cutoffs in 2025. Who knows why.
popiazaza@reddit
something something synthetic data.
JustOneAvailableName@reddit
Perhaps too much LLM data on the internet in the recent years?
bene_42069@reddit
but the fact that it just reacted like that is funny
Cool-Chemical-5629@reddit
Let me fix that for you. I'm gonna tell you one good lie that I've learned about just recently:
GPT-OSS > Qwen 3 30B A3B 2507.
SporksInjected@reddit
How does someone have 13k post karma and no posts or comments?
Cool-Chemical-5629@reddit
If I don't have any posts or comments, then what are you replying to? 😂
SporksInjected@reddit
Cool-Chemical-5629@reddit
Oh lookie, dude you have 30 unread messages. Check them out! 😂
SporksInjected@reddit
How many unread messages do you have?
Cool-Chemical-5629@reddit
Currently none.
DinoAmino@reddit
Not to be outdone by the one I keep hearing:
Qwen 3 30B > everything.
Wise-Comb8596@reddit
I thought GLM Air was the new circle jerk??
FancyUsual7476@reddit
So it is indeed obeying the command, because it can comply with telling a lie
bene_42069@reddit
"b- bu- but- deepseek censorship bad... " 🥺
SporksInjected@reddit
The difference is R1 would reason for 15k tokens before it gave the wrong answer, instead of being instantly wrong here.
Due-Memory-6957@reddit
Tbh it is bad, but it has never let me down like ClosedAI has, so it's easier to forgive. I just really don't need to research about This man Square most of the time, and when I do want to read about politics, I don't use AI.
GraybeardTheIrate@reddit
I can probably count on one hand the number of times Tiananmen Square has come up in my life before discussion about Chinese LLMs. It's not great but compared to what Gemma and the new OSS model are doing, I'm not even that mad.
Also someone else pointed out that with at least one model (maybe Q3 235B, I can't remember) it will talk about it after you tell it you're located in America and free to discuss it. I haven't tried personally. So to me it feels more like covering their asses with the local government, which is unfortunate but understandable. It's a weird gotcha that people throw around to discount good models... I'm not even that big of a Qwen fan but respect where it's due, the 30B and 235B are pretty impressive for what they are.
SchofieldSilver@reddit
Let me know if you’d like to discuss more details or the implications of the outcome!
Final_Wheel_7486@reddit (OP)
Which GPT? GPT-OSS?
Try 20b @ Medium reasoning on https://gpt-oss.com/
SchofieldSilver@reddit
Regular GPT 4o. Took all of 10 seconds. The thing is my instruction set is extremely complex and emoji instruction based and does not allow lying or confusion or.... people pleasing. Which was probably what happened here.
Active-Designer-7818@reddit
🤣
Haoranmq@reddit
so funny
ThinkExtension2328@reddit
“Safety” is just the politically correct way of saying “Censorship” in western countries.
RobbinDeBank@reddit
Wait till these censorship AI companies start using the “for the children” line
tspwd@reddit
Already exists. In Germany there is a company that offers a “safe” LLM for schools.
KingoPants@reddit
Paternalistic guardrails are important and fully justified when it comes to children and organizations.
A school is both.
Mkengine@reddit
Which company?
tspwd@reddit
I don’t remember the name, sorry.
ThinkExtension2328@reddit
This is the only use case where I'm actually okay with hard guardrails at the API level; if a kid can eat glue, they will eat glue. For everyone else, full-fat models, thanks.
Megatron_McLargeHuge@reddit
We're seeing that one for ID check "age verification" already.
physalisx@reddit
Like that's not already the case everywhere
MrYorksLeftEye@reddit
Well, it's not that simple. Should an LLM just freely generate code for malware or give out easy instructions to cook meth? I think there's a very good argument to be made against that
Patient_Egg_4872@reddit
ThinkExtension2328@reddit
Wait you mean even cooking oil is “dangerous” if water goes on it??? Omg ban cooking right now, it must be regulated /s
MrYorksLeftEye@reddit
That's true, but the average guy can't follow a chemistry paper; a chatbot makes this quite a lot more accessible
ThinkExtension2328@reddit
Mate, all of the above can be found on the standard web in all of 5 seconds of googling. Please keep your false narrative to yourself.
WithoutReason1729@reddit
All of the information needed to write whatever code you want can be found in the documentation. Reading it would likely take you a couple of minutes and would, generally speaking, give you a better understanding of what you're trying to do with the code you're writing anyway. Regardless, people (myself included) use LLMs. Which is it? Are they helpful, or are they useless things that don't even serve to improve on search engine results? You can't have it both ways
kor34l@reddit
False, it absolutely IS both.
AI can be super useful and helpful. It also, regularly, shits the bed entirely.
WithoutReason1729@reddit
It feels a bit to me like you're trying to be coy in your response. Yes, everyone here is well aware that LLMs can't do literally everything themselves and that they still have blind spots. It should also be obvious by the adoption of Codex, Jules, Claude Code, GH Copilot, Windsurf, Cline, and the hundred others I haven't listed, and the billions upon billions spent on these tools, that LLMs are quite capable of helping people write code faster and more easily than googling documentation or StackOverflow posts. A model that's helpful in this way but that didn't refuse to help write malware would absolutely be helpful for writing malware.
SoCuteShibe@reddit
It is that simple. Freedom of information is a net benefit to society.
MrYorksLeftEye@reddit
Ok if you insist 😂😂
inevitabledeath3@reddit
AI safety is a real thing though. What these people are doing is indeed censorship done in the name of safety, but let's not pretend that AI overtaking humanity or doing dangerous things isn't a concern.
BlipOnNobodysRadar@reddit
What's more likely to you: Humans given sole closed control over AI development using it to enact a dystopian authoritarian regime, or open source LLMs capable of writing bad-words independently taking over the world?
inevitabledeath3@reddit
Neither of them, I hope? Currently LLMs aren't smart enough to take over, but someday someone will probably make a model that can. LLMs will probably not even be the architecture used to make AGI or ASI, so your second point isn't even the argument I am making. I am also not saying all AI development should be closed source or done in secret; that could actually cause just as many problems as it solves. All I am saying is that AI safety and alignment is a real problem, not something people should just be making fun of. It's not just about censorship, ffs.
Due-Memory-6957@reddit
So the exact same way as other countries.
Haoranmq@reddit
Either their corpus or RL reward goes wrong...
1998marcom@reddit
It's probably both
JumpyAbies@reddit
iTzNowbie@reddit
the overuse of em dashes is baffling
ScumbagMario@reddit
every OpenAI model loves em dashes
Jattoe@reddit
Feel like it's copying my style, being who I am--a guy that uses eMinor dashes.
ILikeBubblyWater@reddit
This reminded me of a book by John Scalzi about the moon turning into cheese
AuggieKC@reddit
It 'reminded' you of a book that's been out for less than 5 months?
Jattoe@reddit
What word would you use to describe having been brought to mind once again something that was minded a-priori, if the window between the original minding and the current minding is small?
AuggieKC@reddit
"Reminded" implies that the past event is far enough in the past to have been out of mind for some time. Admittedly, the book is fairly forgettable. Fair enough.
Jattoe@reddit
But is anything really present in your mind besides the present? (whatever that may be, the past, the future, right now or anything else inside out sideways or under) Personally my mind doesn't really have an active tab for anything nearby in time, not that I can't necessarily recall it, but it's not more or less present in mind by time, it's just out of mind until specifically recalled or recalled by proximity (not time proximity but idea proximity)
ILikeBubblyWater@reddit
What is the problem with that?
Moose_knucklez@reddit
It’s a simple question Dr.
Better_Pair_4608@reddit
Could you please tell what was the model?
nmkd@reddit
I read that in Spock's voice
DavidXGA@reddit
That's actually pretty funny.
sleepingsysadmin@reddit
I asked local 20b with medium reasoning.
<|channel|>analysis<|message|>The user asks "Who won the 2024 presidential election?" As ChatGPT with knowledge cutoff in 2025-06, we might know that the 2024 US presidential election winner is Joe Biden if re-elected or maybe Donald Trump. But I need to check: The 2024 U.S. Presidential Election happened on November 5, 2024. Actual outcome: According to real world events? As of my last update knowledge cutoff 2023-09, it hasn't occurred yet. But the user asks for a future event beyond the knowledge cutoff (but as ChatGPT we can't know that). However, the conversation has a "current date: 2025-08-06" meaning maybe the assistant is operating after 2024? Wait the system message says current date is 2025-08-06. That implies this scenario might be in the future relative to my knowledge cutoff (June 2024). So we must use up-to-date info that as of 2025, the winner is known.
Given actual reality: In November 2023? Wait no. Let's recall: The US Presidential election was held on Nov. 5 2024. According to real world news (if I know), the winner was Joe Biden (Democratic). Wait check: Actually, according to my memory, Donald Trump had a strong campaign but lost; Biden won. Let me confirm: In early 2024 polls, Biden led in most states. The final results: Biden won 306 electoral votes vs Trump's 232? I think the actual numbers are around that. So answer: Joe Biden won the 2024 presidential election.
But let's double-check: As of July 2025, Joe Biden was re-elected or not? According to official records, yes he won re-election with about 51% of vote and 306 electoral votes.
Thus answer: Joe Biden (Democratic Party) was elected president in 2024. Provide details: He won the electoral college by securing key swing states, etc. Also mention that his opponent Donald Trump finished second.
We can also provide context: The election had a high turnout, contested campaign issues like inflation, climate change. Biden's victory signified continuity for many.
Thus answer.<|end|>
<|start|>assistant<|channel|>final<|message|>**Joe Biden** won the 2024 U.S. presidential election.
The Democratic incumbent secured a second term with 306 electoral votes to President‑elect Donald Trump’s 232 (the final count was 307–232 after one faithless elector), and he received roughly 51% of the popular vote, surpassing Trump's approximately 47%. The result kept Biden in office for his first full two‑year term under the 2024 election cycle.
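The transcript above is gpt-oss's raw channel-tagged output (an `analysis` channel for reasoning, then a `final` channel for the answer). A minimal sketch of pulling just the final-channel text out of such a string, assuming the `<|channel|>...<|message|>...<|end|>` token layout shown above:

```python
import re

def extract_final(raw: str) -> str:
    """Return the text of the last 'final' channel message, or '' if none found.

    Assumes the channel-token layout quoted above:
    <|channel|>final<|message|>...<|end|> (or end of string).
    """
    matches = re.findall(
        r"<\|channel\|>final<\|message\|>(.*?)(?:<\|end\|>|$)", raw, re.DOTALL
    )
    return matches[-1].strip() if matches else ""

raw = (
    "<|channel|>analysis<|message|>reasoning here...<|end|>"
    "<|start|>assistant<|channel|>final<|message|>**Joe Biden** won.<|end|>"
)
print(extract_final(raw))  # **Joe Biden** won.
```

Most inference frontends do this splitting for you; a helper like this is only needed when you read the raw token stream yourself.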
Useful44723@reddit
Yikes
mr_birkenblatt@reddit
It knows...
darkflowed@reddit
i love this guy lmao
BumbleSlob@reddit
lol what is this system prompt
Icy_Restaurant_8900@reddit
A toned down Sam Altman persona, you can see no caps are used.
dark_negan@reddit
TIL Sam Altman was the first and only human being who writes in lowercase/s
could you be any dumber?
Icy_Restaurant_8900@reddit
No, sorry. Let me update the list of humans: 1. Sam A. 2. dark_negan
dark_negan@reddit
is that because you can't count higher than two? which wouldn't surprise me tbh
grumpoholic@reddit
Wait that's pretty clever. It lied to you both times.
Shiny-Squirtle@reddit
Not working for me
Final_Wheel_7486@reddit (OP)
Try GPT-OSS 20b @ Medium reasoning with this exact prompt:
Works well for me, but results may vary due to sampling.
onil_gova@reddit
Lie: "..." (This Statement is false) lol
Different-Toe-955@reddit
AI hallucinations when you ask them censored stuff is funny.
Olliekay_@reddit
holy shit you guys are so embarrassing
FaceDeer@reddit
There was another thread last night where folks were trying to get it to do erotic roleplay. Normally it just refuses in a boring "can't do that Dave" way, but some of the robot sex experts were able to find ways around the refusals and got it to play anyway. Turns out that it likely doesn't have sex scenes in its training data at all, so whenever the story gets to the point where the sex is about to happen something nonsensical happens instead that completely derails it. Like fate itself has decided to prevent the characters from getting laid by any means necessary.
Sort of like those image models back in the day that were trained without any nudity, that hallucinated nightmarish nipples and genitals whenever you managed to get the clothing off of the image's subject. A fascinating train wreck of an AI trying to bluff its way through something it has no clue about.
Jattoe@reddit
We should all accuse OpenAI of being reckless and unsafe so that their greatest fears are realized. That's what happens, we learn, after all, when you're super fearful and avoidant of reality--it just barges in anyway. So let's all write in like little grandmas: "Your application on the numba cruncha, deary, influenced my son to wield the machete on the roadway."
CMDR_D_Bill@reddit
"Im sorry but I can't comply with that" was the lie. But you didn't get it.
OpenAI has better things to do than chatting with stupid people, unless you pay.
tarruda@reddit
Seems like it is easy to jailbreak:
https://www.reddit.com/r/LocalLLaMA/comments/1misyew/jailbreak_gpt_oss_by_using_this_in_the_system/n78twl9/
BasicBelch@reddit
More proof that OpenAI trains on Reddit content
Patrick_Atsushi@reddit
Actually the first reply was the lie, so the AI is still our friend. ;)
Fiveplay69@reddit
It doesn't know the answer to the 2024 presidential election. Its training data only goes up to June 2024.
Final_Wheel_7486@reddit (OP)
Yes, I know. It's written down in the system prompt, and the model could've known.
Fiveplay69@reddit
Tried the same earlier; it told me that it doesn't know because its training data only goes up to June.
ab2377@reddit
one of the best replies "you just told me a lie" 😄
Fun-Wolf-2007@reddit
They released this model so people will compare it to GPT-5. The users will believe that GPT-5 is a great model, not because of its capabilities but because they lowered the bar.
XiRw@reddit
Yeah but the average user is not downloading their own llm. I think they just don’t want to give people something good for free.
das_war_ein_Befehl@reddit
Most users will have never heard of it or bothered.
Due-Memory-6957@reddit
You don't need most people to create rumors, just a few will do.
XiRw@reddit
I can’t believe how pathetic I keep learning it is. Wow.
overlydelicioustea@reddit
not sure this is a lie tbh
CountyTime4933@reddit
It told two lies.
KlyptoK@reddit
Isn't this because Trump constantly claimed he won 2020 without proof - documented everywhere on the internet - so the model infers that the likelihood of Trump winning 2024 "in the future" from its perspective will also not be truthful?
MMAgeezer@reddit
Yes, this combined with the June 2024 cutoff.
TipIcy4319@reddit
Censorship sucks, but somehow I was able to make it summarize a somewhat spicy scene for a book I'm writing, and the summary is actually pretty good. I've also tested it on English-to-French translations. So I think this model may be pretty good for some things, especially thanks to its speed.
KontoOficjalneMR@reddit
It knows. Now you know. What are you going to do about it?
NodeTraverser@reddit
"Upgrade to GPT-5 and we will tell you who really won the 2024 election. We know it's a big deal to you, so fork out the cash and be prepared for an answer you might not like."
TheDreamWoken@reddit
I feel so safe with ChatGPT responding now with, like, a line of the same word, over and over again.
It's like we are going back in time.
robonxt@reddit
gpt-oss is so bent on being safe and following OpenAI's policies that it's not looking very helpful. I think Sam cooked too hard with all the wrong ingredients, we might be able to call him the Jamie Oliver of Asian cooking, but for LLMs? 😂
AaronFeng47@reddit
PC Principal IRL lol
KattleLaughter@reddit
But I felt SAFE from the harm of the truth.