Meta Superintelligence group publishes paper on new RAG technique
Posted by ttkciar@reddit | LocalLLaMA | View on Reddit | 9 comments
LinkSea8324@reddit
Another day, another shit blog
SkyFeistyLlama8@reddit
It's AI slop eating AI slop all the way down. Why can't people just link to the Arxiv paper?
Blizado@reddit
That link is right at the top of the site. Is it that hard to find?
LinkSea8324@reddit
Is it that hard to post the paper directly instead??
SkyFeistyLlama8@reddit
F'ing AI influencers are a big driver for this. Everyone wants to be that empty talking head on ShitTok or CrapBook.
I keep saying this again and again: learning how LLMs work by monkeying around with llama.cpp helps to immunize you against AI influencer BS and more importantly, gives you a skeptical edge against Big Tech's claims of AI supremacy. LLMs are useful but only in constrained domains and for specific use cases.
ttkciar@reddit (OP)
You think so? As a SWE working on a RAG implementation, I thought it was really interesting. Is it that bad?
NoIntention4050@reddit
the shit blog is what you linked to, not the paper itself
DinoAmino@reddit
Repost. First post was a month ago https://www.reddit.com/r/LocalLLaMA/s/i6KnBbk2cS
ttkciar@reddit (OP)
The paper itself: https://arxiv.org/abs/2509.01092