Cognitive Orthogonal Reasoning Engine
Posted by Pale-Broccoli-4976@reddit | programming | 6 comments
church-rosser@reddit
At its C.O.R.E., this is absolute bullshit.
dave8271@reddit
The readme:
Where LLMs guess, C.O.R.E. computes. Where they mimic, C.O.R.E. reasons. Truth is a mathematical certainty, not a statistical artifact.
This is a complete architectural reset, designed to build intelligent systems on a foundation of logic and certainty.
The code (yes, really):
I don't get it. Is it like, an early April Fools or something?
gummo89@reddit
The concept existed before LLMs, but it doesn't really seem to be fully implemented here.
For this one, you have to manually add "subjects", and whichever subject is mentioned first in the string is treated as the subject...
The text in the readme is at least 95% marketing with no examples, and I read "hyperdimensional" way too many times.
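To make that concrete, here is a minimal sketch of the matching behaviour described above: a hand-maintained subject list where whichever registered subject appears earliest in the input string wins. All names here (SubjectMatcher, add_subject, infer_subject) are illustrative assumptions, not taken from the actual C.O.R.E. code.

    class SubjectMatcher:
        """Hypothetical stand-in for the subject handling described above."""

        def __init__(self) -> None:
            self.subjects: list[str] = []  # subjects must be added by hand

        def add_subject(self, subject: str) -> None:
            self.subjects.append(subject.lower())

        def infer_subject(self, text: str) -> str | None:
            # Whichever registered subject appears earliest in the string
            # "wins", regardless of what the sentence is actually about.
            lowered = text.lower()
            hits = [(lowered.find(s), s) for s in self.subjects if s in lowered]
            return min(hits)[1] if hits else None


    matcher = SubjectMatcher()
    matcher.add_subject("weather")
    matcher.add_subject("stocks")

    # "stocks" occurs before "weather" in the sentence, so it becomes
    # the subject even though the question is arguably about the weather.
    print(matcher.infer_subject("Do stocks react to the weather?"))  # stocks

First-occurrence matching like this has no notion of grammar: reordering the words changes the "subject", which appears to be the criticism being made.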
Pale-Broccoli-4976@reddit (OP)
It was a demo and an example of an alternative design. I use it internally in another project that needs to scale to yottabyte-level data and support more advanced internal logic. That larger system is what I’ll be making public soon.
hinckley@reddit
👍
Pale-Broccoli-4976@reddit (OP)
You’re absolutely right—C.O.R.E. was only the beginning. It was partly a demo, a little teaser. What’s coming next is on a whole different level. It’s not ready yet—it’s damn complex—but when it lands, it’s going to change how people think. Forty years of legacy architecture… solved.
And look—I know those are big words. That’s exactly why I need to make sure I can back every single one of them when this thing first lands. The documentation, the unit tests, everything has to be perfect. And right now, it’s just me—one developer who refuses to fool people.