Hello Opus 4.7, you are thinking way extra high!
Posted by shanraisshan@reddit | LocalLLaMA | View on Reddit | 20 comments
ambient_temp_xeno@reddit
Everyone bullies Opus but his big brother is pwning everyone's computer and bank account.
NNN_Throwaway2@reddit
Yeah, that's just marketing slop because they're getting desperate. Nothing will actually come of it.
ambient_temp_xeno@reddit
Hard to tell. It depends how bad those exploits really are. Better it finds them than the bad guys.
NNN_Throwaway2@reddit
It's only hard to tell if you're completely clueless about technology, security, and the current trends in the AI space.
ambient_temp_xeno@reddit
¯\_(ツ)_/¯ Guess I won't be a miiiiiillionaire like you, mr predictor.
jah_hoover_witness@reddit
Mythos pwned my lady too.
FastDecode1@reddit
wrong sub
PlasticTourist6527@reddit
I mean... I do the same thing whenever people ask me the same questions
dqUu3QlS@reddit
This is why I prompt every LLM assistant I interact with to write explanations first and conclusion last. If the first token it outputs is the wrong answer, it gets stuck either writing a nonsense post-hoc justification or contradicting itself.
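For anyone who wants to try the same thing through the API, here is a minimal sketch of that "explanation first, conclusion last" instruction using the Anthropic Python SDK; the model ID and the exact wording of the system prompt are placeholders, not something from this thread.

```python
import anthropic

# Minimal sketch: force reasoning before the answer so the first tokens
# aren't a premature (and possibly wrong) conclusion.
client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

SYSTEM = (
    "Work through your reasoning step by step before committing to anything. "
    "Only after the explanation is complete, state the final answer on the last line."
)

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder model ID; use whatever you actually run
    max_tokens=1024,
    system=SYSTEM,
    messages=[{"role": "user", "content": "Is 3599 prime? Explain first, answer last."}],
)

print(response.content[0].text)
```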
Alex_1729@reddit
That might be the wrong way of doing things, or at least not an optimal one. I've had a set of guidelines I've used with LLMs for years now, mostly about objective reasoning, critical thinking, and avoiding assumptions.
As you're well aware, AI is very good at assuming things, hedging with words like "likely", and then jumping straight to a conclusion. That's obviously not sound reasoning, and AI can also be lazy. So having a set of guidelines, either in an agents.md or in some other file, is critical.
This is good because it governs all possible cases, not just whether to put the explanation first and the conclusion last - it affects the entire reasoning chain, or at least I hope it does. That's probably an assumption of my own.
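For illustration only (this is not Alex_1729's actual file), a guidelines file along those lines might look something like this:

```markdown
# AGENTS.md - hypothetical reasoning guidelines

- Don't assume: if information is missing, say so and ask rather than hedging with "likely".
- Write the reasoning before the conclusion; the final answer comes last.
- Label assumptions explicitly and keep them separate from verified facts.
- Prefer checking the code or docs over recalling from memory.
```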
ghulamalchik@reddit
I noticed Claude sometimes corrects itself in the final answer. Like it would give me a list, and one item would say "Wait, no this is wrong, do this instead:".
So maybe the last part is the correction.
BrunchWithBubbles@reddit
I just saw the same quirk yesterday on Opus 4.7. It might become an Opus 4.7 telltale sign. I wonder if it could be an artefact of speculative decoding or something similar, where a smaller draft model says "no" at first for a nice TTFT, but then the actual model kicks in and corrects it.
ILikeCorgiButt@reddit
“You’re right”
fulowa@reddit
does an insight change future behavior or what's the point?
shanraisshan@reddit (OP)
you learn along with claude
fulowa@reddit
just googled it: haiku analyses logs and suggests edits to claude.md
worth a try i guess
jacek2023@reddit
try r/mentalhealth instead
(14 upvotes already!!!!)
uziau@reddit
how do you trigger that Insight text?
shanraisshan@reddit (OP)
set output style explanatory
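If it helps, in Claude Code that's the /output-style command; switching to the built-in Explanatory style should look roughly like this (worth double-checking against your version):

```
/output-style explanatory
```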