Raw Chain-of-Thought from Gemini 3 Pro. It hallucinates, corrects itself, and eventually crashes.

Posted by Numerous-Campaign844@reddit | LocalLLaMA

We know Gemini Pro has the 'Thinking' block, which shows a summary of its reasoning process, but I somehow glitched it into outputting the raw internal monologue instead of the summary. It looks very similar to DeepSeek's R1 traces.

It happened while I was testing Gemini 3 Pro on AI Studio with some heavily obfuscated JS. After it missed a hidden URL, I corrected it and asked why it failed. That's when it broke.
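I'm not going to paste the original file, but "heavily obfuscated JS with a hidden URL" typically means something like this hypothetical reconstruction, where the URL never exists as a literal string and `example.com/flag` is just a placeholder:

```typescript
// Hypothetical reconstruction, not the actual test snippet: the URL is
// stored as character codes and only assembled at runtime, so a model
// skimming the source never sees it as a literal.
const codes: number[] = [
  104, 116, 116, 112, 115, 58, 47, 47,  // "https://"
  101, 120, 97, 109, 112, 108, 101, 46, // "example."
  99, 111, 109, 47, 102, 108, 97, 103,  // "com/flag"
];
const hidden: string = codes.map((c) => String.fromCharCode(c)).join("");
console.log(hidden); // -> https://example.com/flag
```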

Instead of the usual 'Thinking' summary, it spat out its entire raw internal monologue, and the reasoning felt bizarrely human.

My Theory:

I think I finally understand why Gemini summarizes the "Thinking" block instead of showing it raw. It's not just for a cleaner UI. I think they hide it because if the model gets stuck or enters a recursive loop, the raw trace looks absolutely unhinged. There might be a failsafe designed to reset or sanitize the thought stream when it enters a repetitive state like this, and I somehow bypassed it. A sketch of what I mean is below.
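To be clear, this is pure speculation. Here's a minimal sketch of what such a failsafe could look like, assuming the serving layer sees both the raw trace and the summary; none of this is Google's actual code, and `looksStuck` / `renderThinking` are made-up names:

```typescript
// Speculative sketch of a loop-detecting failsafe in the serving layer.
// Not Google's code; just one way to gate a raw reasoning stream.
function looksStuck(thought: string, windowSize = 80, repeats = 3): boolean {
  // Take the last `windowSize` characters; if that exact span already
  // occurs `repeats`+ times earlier, treat the monologue as looping.
  const tail = thought.slice(-windowSize);
  if (tail.length < windowSize) return false;
  let count = 0;
  for (let i = 0; (i = thought.indexOf(tail, i)) !== -1; i += tail.length) {
    count++;
  }
  return count >= repeats;
}

function renderThinking(rawThought: string, summary: string): string {
  // Normally show the summary; if the raw trace is looping, reset to a
  // sanitized placeholder instead of surfacing the repetition.
  return looksStuck(rawThought) ? "Thinking…" : summary;
}
```

If a gate like that crashed or got bypassed, you'd see exactly what I saw: the raw, looping monologue instead of the sanitized summary.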

Full Chat URL

Honestly, the fact that it admitted 'I will accept the L' in its internal monologue is the most human thing I've seen from an AI.