Matthew McConaughey says he wants a private LLM on the Joe Rogan podcast
Posted by AlanzhuLy@reddit | LocalLLaMA | View on Reddit | 272 comments
Matthew McConaughey says he wants a private LLM, fed only with his books, notes, journals, and aspirations, so he can ask it questions and get answers based solely on that information, without any outside influence.
Source: https://x.com/JonhernandezIA/status/1969054219647803765
perelmanych@reddit
If he uploads only his public pieces, I don't see a problem with using Gemini 2.5 Pro as the closest model to having a true 1M context window. If, on the other hand, he also wants to upload private stuff, buys a used Epyc server for it, and uses some SOTA OS model, he should probably be prepared for days of digestion time.
Alone_Mouse8817@reddit
can this be created tho? this private llm for like a quantum mirror?
Duckpoke@reddit
Nobody tell him about NotebookLM
nntb@reddit
NotebookLM has training data based on other stuff than just his information. He wants a pure Matthew McConaughey LLM.
Duckpoke@reddit
You sure about that? My work account doesn't have knowledge of anything but our content we've uploaded
nntb@reddit
It won't have content other than what you've uploaded; however, in order to speak English it was trained on other data
xAragon_@reddit
That's literally any LLM. Both closed and open-sourced.
Duckpoke@reddit
Sure, but that's not useful. I can ask NotebookLM what the meaning of Christmas is and it doesn't know the answer. For all intents and purposes this is what MMC is alluding to.
nntb@reddit
I mean, we can guess at what people are alluding to. But an LLM trained only on the actor's books and notes, and all the books he likes, wouldn't be a large enough sample size; he would need a base LLM trained on other stuff. And NotebookLM has training data based on other stuff than just his information. He wants a pure Matthew McConaughey LLM, which I don't think would be obtainable.
Duckpoke@reddit
Your point is correct, but he also doesn't realize what he's asking for, which is why what he's really asking for is NotebookLM
nntb@reddit
If it weren't, you wouldn't be able to have it generate words that you didn't feed it
zizi_bizi@reddit
You have a fundamental misconception of how ML models work. For an LLM to be able to generate and "understand" natural language, you need it trained on a large corpus of data. If it were trained only on Matthew's data, it would be a quite stupid model.
IWillAlwaysReplyBack@reddit
He said LOCAL
TheRealGentlefox@reddit
Okay, NotebookLM through a business account with a BAA signed by Google, which is trivial to get =P
I don't get the impression he meant "private" as in "secret" though, more of "personal".
fhuxy@reddit
No, he's talking about his journals and private thoughts. He doesn't want his stuff on anyone else's cloud. Matthew is smarter than to use NotebookLM with those intentions. He's describing RAG.
grindbehind@reddit
Can't believe I had to scroll this far. Yes, he is describing NotebookLM.
pythonr@reddit
He is describing RAG
Haunting-Warthog6064@reddit
He can do that already though.
lambdawaves@reddit
Train only on his writings? There isn't enough there to get a language model. It'll spit out gibberish
teachersecret@reddit
Training only on them, probably not... but fine-tuning an already trained model specifically on his work? That's easy enough. Also, he's well-documented enough that he already has a persona inside the LLM - you can ask an AI to pretend to talk to you like they're Matthew McConaughey and it'll do a decent job aping him. I mean... here... I'll ask chatgpt to talk like him and tell us about his idea for a 'personal llm trained on his own writings and stuff':
Perfect? No - seems to be leaning into the style a BIT too hard, but it's still clearly on the right path. With some fine-tuning specifically on his body of work, and saving the interactions he has with it so you can fine-tune it more down the line... alongside a RAG-based system where you embed the same info and have it dragged into context when semi-relevant.
This would be pretty easy, really. Vibevoice for realtime voice-to-voice comms, fine-tune a 24b-30b-70b sized model on a dataset you set up, a few days of interviewing to get some varied thoughts/opinions on things that you can use as ground-truth answers to compare against to see the effectiveness of the tune as you go, etc. I bet you could get pretty good fidelity for someone famous, and significant coherence for someone random. Advances in voice tech mean you could clone the voice with high levels of quality, and the latest video tech can do character-reference video with lipsync that is photorealistic, so you could have your fake-Matthew sending you inspirational videos all day long if that's what you wanted.
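Roughly, the fine-tuning half could look like the sketch below, using LoRA adapters via peft/transformers. Treat it as a sketch only: the base model name, the writings.txt file, and the hyperparameters are illustrative placeholders, not a recipe.

```python
# Rough sketch: LoRA fine-tune of an already-trained base model on one person's writing.
# Everything named here (model, file, hyperparameters) is a placeholder.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"   # stand-in for whatever 24b-70b model you pick
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Attach small trainable LoRA adapters instead of updating all base weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# One plain-text file of books/journals/interview transcripts, one example per line.
data = load_dataset("text", data_files={"train": "writings.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="personal-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=3),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

You'd still want the RAG layer on top of this for factual recall; the tune mostly buys you voice and style.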
If you wanted to immortalize someone in a machine, now's kinda the time to start tinkering, I guess.
Tyrange-D@reddit
isn't that what Steve Jobs meant when he said he wants to capture the essence of Aristotle in a machine
SpicyWangz@reddit
Attach it to a TTS voice clone, and then you can have Matthew McConaughey talking to Matthew McConaughey
teachersecret@reddit
Woah... man....
ThatLocalPondGuy@reddit
But, that's what he does too
mailaai@reddit
If you do it correctly, it will not.
LosingAnchor@reddit
IMO, he's discussing more of a RAG system
Haunting-Warthog6064@reddit
RAG + embedding index. Done.
mpasila@reddit
RAG is never quite the same as having it all in context, though. The model will only know about the bits that are currently retrieved into context, so it won't do exactly what he wants (and even then those bits of data will be out of context from the rest of the data).
Training on that data could help, but it would have to be processed so it doesn't harm model performance too much, and it probably won't remember most of the data anyway.
Currently, IMO, there isn't a good way to give it lots of text to ask questions about: a book alone can take 200-300k tokens or more, so if you wanted to put in multiple books you're going to run out of context pretty quickly. (And models usually perform worse when you use lots of context.)
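If you want to sanity-check the token math for your own files, a quick count with tiktoken gives a decent estimate (cl100k_base here is just a stand-in tokenizer; local models' tokenizers will differ a bit):

```python
# Rough token count for a text file; numbers vary slightly by model tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
with open("some_book.txt", encoding="utf-8") as f:
    text = f.read()
print(f"{len(enc.encode(text)):,} tokens")
```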
ThatCrankyGuy@reddit
The base model would still have biases. Where do you get apolitical datasets? Whenever people talk, they embed their opinions and biases into their conversation. This leads to regional dialects, sarcasm, mannerisms, etc. But still, an LLM with an attitude is better than one loaded with political biases.
lambdawaves@reddit
So ah I was just going off the tweet text summary.
Watching the video, I think he just wants to have a bunch of documents and then chat with them, which you can already do in Cursor or Gemini
KadahCoba@reddit
Finetuning to where it mostly forgets prior knowledge might be doable. Might be only around $10k in compute time to do that, or around $200-300k to build out doing that locally.
Train from zero weights, yeah, nah. No single human has generated enough content for any current training methods. A small LLM done on such limited data might technically work, but I suspect it would be more of an autocompletion model of the book contents instead.
Either way, both would be interesting experiments for somebody to do.
CheatCodesOfLife@reddit
I reckon you'd need to find his favorite books / those who inspired him or had a significant impact on him, and include those in the dataset. Possibly find similar books to the three he's published, and grab his podcast transcripts, etc. But agreed, still not enough to train zero'd out weights. It's like that guy who posted on here, building an LLM from scratch with old books, it's barely a coherent auto-complete system because there wasn't enough content produced before whatever year he set as the cutoff date.
KadahCoba@reddit
Those are some good ideas for augmenting the dataset. I imagine there are methods in training LLMs to put greater emphasis on a particular set of data (I mainly work with image training), so put greater weight on his content and less on the larger volume of supplemental material.
I was thinking about that one as well, and maybe another too. There's been some interesting experiments done by individuals lately.
IcyCow5880@reddit
No, not his own writings. Just whatever writing in general he wants to feed it is what he meant.
The fact he even dropped "LLM" and Joe doesn't even know what that means tells me he was just fishing to see if Joe knew a lil more, but nope.
And I love listening to Rogan but he just quotes "smart ppl" like Elon saying the percentage of risk for good or bad from AI etc.
ScumLikeWuertz@reddit
It was a google search away. Ironically. Fittingly?
berckman_@reddit
It's not a Google search away. If you expect anyone to set up a local LLM that easily, I don't know what to tell you.
ScumLikeWuertz@reddit
It is in the sense that at his wealth level he could see that local LLMs are a thing via a Google search and hire someone to set that up. Not that anyone could do it.
nomorebuttsplz@reddit
Non-techies can't do anything
entsnack@reddit
What do you mean? This sub has like 500K members.
randomqhacker@reddit
You just made me think... What if, in addition to running local at home, we all pitched in on an epic LocalLLaMA rack of GPU servers? We could vote on which models to run for inference, loan it out for fine-tuning, etc! If 10% of our users chipped in $10 + $1 a year we could afford half a million in equipment and hosting costs...
All with no censorship, data retention, etc.
TheRealGentlefox@reddit
Too many problems to achieve a worse version of what is already out there. Also 10% is way too high a participation rate for anything, and given that the top posts here get about 2k upvotes, that's how many actively involved users (at most) we have. Aside from that, who gets to handle all the money? Who chooses the hardware? Who has control of the hardware? Who has control of the software? How do we know they aren't renting out cores for profit or allocating more resources to themselves? How do we know there's no retention? Who writes all the software that fairly allocates out cycles? Who maintains everything? Do they get paid to maintain it?
At best, we're remaking a cloud provider like Together or Hyperbolic but without any of the oversight or legal responsibilities or incentives of an actual company. Still have to take someone else's word that your data is being protected, which makes it no different than google/OAI/whoever. Except here, nobody is legally responsible for lying about it. And when the established cloud companies making these legal agreements only cost pennies on the dollar, why not just throw a couple bucks into openrouter each month and use what you need?
jazir555@reddit
It's like someone realizing they've reverse-engineered a corporation.
KUARL@reddit
Reddit inflates its user base with bots. They're in this thread already, trying to start arguments instead of, you know, actually discussing what kind of rig McConaughey would need for the applications presented in the video.
xwolf360@reddit
Exactly. I've noticed a lot of certain types of posts in all corners of Reddit, in subs that would never have allowed that off-topic kind before, almost as if it were being mandated.
314kabinet@reddit
Most of them don't do anything.
galadedeus@reddit
don't talk to me like that!
SpicyWangz@reddit
Hey I do things sometimes
Budget-Juggernaut-68@reddit
He has enough money to get someone else to do it for him.
Plus_Emphasis_8383@reddit
No, no, you don't understand. That's a $1 million project. Wink wink
Dreadedsemi@reddit
I'll do it for 50k
mailaai@reddit
I can do it for $50, or one hour of training time on 8x H200s, with a completely new training method
Plus_Emphasis_8383@reddit
You dropped a 0, I think. 50k is for peasants at McDonald's.
InevitableWay6104@reddit
Eh, a 15-minute Google search says otherwise.
Just buy a PC, download Ollama, and that's it. The rabbit hole has begun.
catspongedogpants@reddit
I'm still explaining to people at work that nothing they put into ChatGPT is private
TedGetsSnickelfritz@reddit
Muggles
RickThiccems@reddit
He talked about wanting his own private data center, essentially. He wants something on the scale of ChatGPT for his own private use.
Vast-Piano2940@reddit
we all do :p
SpicyWangz@reddit
I want more
recitegod@reddit
I am so dumb, I only need an NVIDIA 680M gpu....
ZestyCheeses@reddit
Literally the most fundamental and basic functionality of an LLM lmao.
Haunting-Warthog6064@reddit
Time to start selling shovels.
MonsterTruckCarpool@reddit
💯
Illustrious-Dot-6888@reddit
He means TARS
ditmarsnyc@reddit
I remember thinking how unrealistic TARS was for its colloquial way of speaking, and that AI would never achieve that level of prescience. Which also resembles how I felt about "universal translators" in the original Star Trek series, and that it was an impossibility. Every once in a while I chastise myself for being so pig-headed as to what's possible.
Helios330@reddit
Yeah, that's on you, big guy
balder1993@reddit
With a bit less sarcasm.
InevitableWay6104@reddit
with a lowered humor setting
SM8085@reddit
Alright, alright, alright, nice to see he's got an open mind on the topic.
I run into so many haters it's crazy. r/LocalLlama is one of the few places where we can have a rational discussion about projects.
AbleRule@reddit
The anti AI sentiment on Reddit is truly ridiculous, almost to the point of being delusional. If you're not in very specific subs and/or don't fall in line with the hivemind you get mass downvoted.
58696384896898676493@reddit
I'm so tired of the nonstop anti-AI sentiment here. In a recent thread, someone immediately called out the OP for using ChatGPT, just because of the infamous em dash, when all they had done was use it to translate their post from their native language into English. What a dumb hill to die on: crying at someone for using ChatGPT simply so they can communicate with you.
Anyways, I think there are two main reasons for the anti-AI sentiment.
1) The obvious one: "AI slop." On this point, I agree with the anti-AI crowd. I'm completely tired of seeing low-effort, garbage "content" made entirely by AI.
2) Many people have tried tools like ChatGPT and simply didn't find any value in them. Whether it was the way they used it or the expectations they had going in, the experience didn't meet their standards, so they wrote it off as just another crappy new technology.
While I completely agree with the first point, the second is where I've had conversations with real people. Often, they're trying to use ChatGPT in ways the technology just isn't designed for. For example, they get frustrated when ChatGPT refuses to discuss certain topics, or when it hallucinates information. These kinds of first impressions can quickly turn people against AI.
Alwaysragestillplay@reddit
People don't like AI posts on social media because it's nominally supposed to be a genuinely social experience. People get something out of conversing with other humans, seeing the opinions of other humans, teaching other humans, telling other humans they're dumb, etc. The more normalised AI dogshit comments become, the less value places like Reddit have.
I hope you can see how an LLM meets none of the criteria above, and how the idea that you're likely to be talking to a robot without realising it is a big turn-off. It's the same as playing a game advertised as multiplayer and then realising 90% of the player base are bots. I fully expect LLMs will be the end of social media, not because people prefer talking to ChatGPT, but because the probability of engaging with nothing but bots becomes so overwhelming that the whole exercise becomes pointless.
There is also, at least for now, the fact that bot posters are generally considered to have ulterior motives. If someone makes an LLM comment, there's a good chance they're also rapid-firing comments across the platform. Why? Subversion? Advertising? Fun side projects? These tokens don't come easy. It inherently comes with an air of suspicion.
In the case of your guy using GPT to translate, that is unfortunate; they're a victim of the above. However, I would question exactly how much paraphrasing GPT was doing in that instance. Ask it to translate some text directly to French: how many em dashes does it add?
LLMs are great at what they're great at: interpreting and understanding requests, querying information, synthesising the result and delivering it in plain English, generating code, analysing masses of text. All really good, genuinely helpful use cases that are otherwise very difficult if even possible at all. Pretending to be a human in a place where humans want to interact with each other - not genuine or helpful on net.
asurarusa@reddit
It's not just the shorts. I've started to recognize ChatGPT voice in the channels with actual humans on camera. It's different because clearly ChatGPT was used for editing and not for generating the entire content, but it sucks that I'm trying to hear what this person thinks in their voice and I'm getting their thoughts filtered by ChatGPT.
Alwaysragestillplay@reddit
I've also seen a few video essays that are clearly entirely written, researched and read by AI. One in particular was about the state of modern movies and the laziness of their writing, which was just palpably ironic. I wish I could find it, because you could almost believe the author was deliberately using AI to further their point.
It's at the point now where I often don't listen to video essays unless the speaker is in the video speaking in real time. It's a shame because I've probably missed some (what I would consider to be) bangers, but if I'm in the car or whatever I just don't have the ability to flick through AI trash until I find something that isn't trying to scam ad views out of me.
ak_sys@reddit
I like an interesting post regardless of its origin. People make slop too.
asurarusa@reddit
I blame the LLM companies for this. Every single provider oversells the capability of LLMs because their company's valuation rests on convincing the public that they're building systems that can replace humans and thus will make the company trillions in sales as enterprises build an "agentic workforce".
People take this info at face value and get burned, and then can't be bothered to come back when someone tries to tell them about what LLMs are actually useful for.
LowerEntropy@reddit
But that's also ridiculous.
Some movies are good, some are objectively bad, some are so bad that they are good. Some are super hyped and marketed, but just average.
Avoid bad movies, don't sit around talking about bad movies, or how all movies are bad. Find movies you like, and if you don't like movies, just read a book.
This anti-AI sentiment is some weird shit, people complaining about how bad it is. Some even tell you they know everything about AI and work with it daily, but can't name a single use case. It's cognitive dissonance, because if they tell you what it's good at, then it doesn't align with all the complaining.
beryugyo619@reddit
Excellent point! You are spot on!
...
cbusmatty@reddit
Anti-AI sentiment reasons:
Elon is pro it, and Redditors are against anything he's pro.
Redditors are mostly college-ish, and AI is threatening to them, so they have a negative opinion.
A ton of tech people are on Reddit, and the code-monkey developers HATE AI. The good ones see it as a tool.
People used a shitty model incorrectly, wrote it off instantly, and then bandwagoned to hate on it, because hating on Reddit is how these people get off.
ac101m@reddit
Don't get me wrong, I think AI is the shape of things to come. But nobody can deny the technology has some pretty significant negative externalities. If you don't understand why such downvoting occurs on some subs, I'd say it's you that's delusional.
outerspaceisalie@reddit
Right, but that's like being mad at the automobile, which also had tons of externalities. It's rational, but kinda pathetic regardless.
ac101m@reddit
It's a little different to that, I feel. I'm all for technological advancement; I think at the end of the day it's the only way civilisation moves forward. A tool is just a tool, and it's how those tools are used that matters. All the Innovation in the world isn't worth a damn if it doesn't make the world a better place.
I look at how AI is being used today: for misinformation, sycophancy and the resulting AI psychosis, LLMs trained to promote political ideologies, echo chambers on social media promoting conspiracy theories and other bullshit so that Meta can serve more ads... I mean, have we forgotten Cambridge Analytica? I don't think we are even remotely wise enough for what's happening now.
LowerEntropy@reddit
At least try to reflect on what you said in your second quote and how that applies to what you said in your first quote.
Would you even be able to make AI sound worse if you tried? I think it would be difficult.
Try to name even a single down to earth or good use case. That really shouldn't be hard compared to what you said.
ac101m@reddit
Well yes, I was trying to come up with negatives here. Do you dispute any of these examples? I think these are all pretty realistic concerns to have about human misuse of AI technology.
As for positives, I think protein folding is a good example of one! Deepmind knocked that one out of the park. There are also innumerable small practical drudgery tasks that can be handled by LLMs. Customer support, call centers etc. But I'm also cognizant that the wholesale replacement of entire professions like this has a human cost in the short to medium term.
As for the remark about being twisted, I'm referring to this "progress always has a cost and that's just that" attitude, which as true an observation as it may be, is just morally bankrupt as a justification for any particular action.
As I said elsewhere in this thread, there's a balance to be found here between the future and the present, and generally I don't think we try as hard as we should to find it.
If I had to summarise my position on all of this in a sentence or two, it would be that these tools (like all tools) have uses, some good and some bad. And that I don't think discussing such things with each other is ever a bad idea! This is why I dislike the dismissive opinion of this other guy so much.
LowerEntropy@reddit
No, I probably don't dispute it. "AI psychosis" is a new one, though :D I wonder what will happen to Reddit, YouTube, and media.
We will find a balance, and we might not have that much control over it. AI emerging is just the result of how much processing power we have now. Maybe people will learn to be more critical of what they see, but what do I know, and I'm also not convinced that it will cause mass unemployment.
ac101m@reddit
You haven't heard of AI psychosis? Man are you in for a wild ride... Let me see if I can find a good example.
Here's an example: https://www.reddit.com/r/ArtificialSentience/s/M8OYpQBujE
Wtf, right? Far from the most extreme I've seen as well.
I'm not sure we will learn to be more critical. Plenty of people don't and aren't today. Why would that change?
I think one thing that may happen, though, is that people begin to care more whether they are talking to a real person or not. I can see a future where there are "I am a person" services to which people prove their identities, and every message worth a damn has a GPG key or a widget attached to it to prove that it came from a person.
LowerEntropy@reddit
Lol, I did actually experience that Baader-Meinhof phenomenon, by noticing something about "AI psychosis" right after reading you using the term.
Yeah, that is fucking wild, but some people are just really out there.
Had a friend drop into some weird Joe Rogan, Asmongold, etc. hole. One day he sent me a link to something about UFOs, and I was blown away. It was English, but completely indecipherable nonsense, and he wasn't joking or anything. It was a very upvoted and long Reddit thread, filled with people who could apparently make sense of it and were giving each other virtual high-fives.
I don't get that people can find those little gobbledygook corners of Reddit and stay there. Maybe it just doesn't resonate with me :D
And yeah, that's also one of the things I could see. People moving back to closed forums or smaller Discord groups, I also don't know.
ac101m@reddit
Yeah, guess you aren't high vibrational enough 🤣
In seriousness though, I'm sorry about your friend
Re AI psychosis, it seems to mostly be people that have pre-existing mental disorders who suffer from it. A common theme I've seen expressed when others are talking about it is the notion of sycophancy feeding into and amplifying delusions rather than challenging them.
Really though, I think it's just the latest evolution of something that's been going on for a while with AI-driven recommendation systems and social media. The companies that run the sites want to keep people's eyes on the site, so they train their content recommendation systems for engagement regardless of what the content is, driving people further and further into their bubbles.
I don't personally know anyone that's gone totally off the deep end, but I do know plenty of people who travel in those alt-health conspiracy theory circles. There are certainly no shortage of grifters and charlatans ready and waiting to ease people down that path. And that's to say nothing of the effect all this has on political discourse, people being driven by these systems to ever more extreme points of view...
These are things I don't think we've even begun to confront yet as a society.
outerspaceisalie@reddit
it's dismissive because somehow you convinced yourself that there's a guilt free answer to the trolley problem
ac101m@reddit
God damn the irony is thick here... You aren't nearly as smart as you think you are.
Feel free to take the last word, I'm not replying to you again.
outerspaceisalie@reddit
Okay mister "my values mean I don't pull the trolley lever and kill 5 people"
outerspaceisalie@reddit
"All the Innovation in the world isn't worth a damn if it doesn't make the world a better place."
This is the thought where you get tripped up I think. Better for who?
ac101m@reddit
You have to break a few eggs to make an omelette, huh? And you talk down to me about "simplistic ideals". This is of course what will happen, you're right about that. But this is a failing of our society, not a strength.
There is a balance to be found between the needs of those that exist today, and the progress needed to leave the world in a better state for tomorrow. This is really more about values than it is evidence though. I suspect we won't see eye to eye on this.
In any case, that you have somehow twisted yourself such that any of what you just said sounds reasonable to you is worrying to me. I'm going to give you the benefit of the doubt and assume that you just haven't thought that much about it. And I hope it isn't people like you that align the superintelligences we one day build.
outerspaceisalie@reddit
Every action or inaction breaks eggs; there is no scenario where no eggs are broken. This is probably not your smartest reply ever. Way to prove my point.
ac101m@reddit
You use a lot of insults for someone that claims to know what they are talking about.
outerspaceisalie@reddit
I already explained it to you and it went over your head, as evidenced by the lack of comprehension in your response. It was a waste of my time to even bother and I'm annoyed at you for it.
ac101m@reddit
It didn't go over my head. The point you're making really isn't that complicated.
I'll give you one last chance to argue in good faith.
A little thought experiment for you. Let's say there are 5 people and they're all dying, in need of different organ transplants. Is it acceptable to kill one healthy person in order to save them? Numerically it makes sense, the greatest good for the greatest number and all. But most people would find this to be reprehensible, and I think that you'll agree. I'm not trying to draw a likeness to the situation with AI here, but it does illustrate pretty succinctly why values matter when making decisions. If we had the values of an ant colony or beehive, the answer would be different. Such is the tension between the individual and the collective.
Also, there most certainly are plausible avenues towards what people term "superintelligence". The most obvious is that humans only learn from a single vantage point. If you gather user interactions or logs of agentic behaviour, train your model on those, and then update its weights, you've effectively created a hive mind: a neural network that learns from innumerable concurrent vantage points. That's a capability that humans will never have. This fact is also why companies like OpenAI and Anthropic are rightly reluctant to apply reinforcement learning over their own user interactions. If you were better read on the AI safety literature, you'd probably be more aware of this.
There's also speed. Inference can occur at theoretically unbounded speeds. I've seen thousands of tokens per second on Cerebras/Groq hardware (which I actually worked with briefly back in 2020 at my last employer). Even if the quality of that reasoning is inferior to human reasoning, the speed is something we just can't match.
So yeah, there are most definitely reasons to be concerned about this. Not that it will stop us from charging ahead regardless.
You know, my original comment was just that some people view the negative externalities of AI as a problem, and that this explains some of the negative sentiment you see on this site from time to time. I suggest that you refer back to that comment and reassess whether or not you actually disagree with it, or if you're just arguing for argument's sake.
outerspaceisalie@reddit
Yep, it went over your head.
AbleRule@reddit
The problem is that the negative aspects are the only thing Reddit talks about and anyone with a different (positive) opinion immediately gets shut down. I saw someone claim they found ChatGPT to be useful for them and they quickly got mass downvoted.
Things can be both good and bad at the same time, but this is Reddit and nuance doesn't exist here. Everything must be black and white and you must have the exact same opinion as everyone else.
ac101m@reddit
I don't think that's really true. It depends on which sub you're talking about.
I agree that social media echo chambers are annoying and stupid, but if you want actual conversations, you're much more likely to get them here than you are on any of the other social media platforms.
boredinballard@reddit
It's ironic you are being downvoted for this.
ac101m@reddit
Tribalism at work. You should see the thread with the other guy.
MySpartanDetermin@reddit
....The statement could be applied to dozens of things in addition to AI.
msp26@reddit
All of the gaming subs are so insane about AI usage. If they were actually employed they'd see that most software developers already use it.
314kabinet@reddit
Reddit is highly susceptible to hivemind shenanigans. The crowd adopts a sentiment on a topic they know nothing about, and then it becomes self-reinforcing and is almost impossible to change.
Commercial-Celery769@reddit
Yep
StrangeJedi@reddit
Seriously! Every other AI sub is filled with so much hate and I'm just looking for news and fun and interesting use cases and discussions but it's hard to find.
RobotRobotWhatDoUSee@reddit
I guess I don't frequent other AI subs, what are people hating on?
berckman_@reddit
It took me ages to weed out the AI haters, and what I have left is informative but very small. I can conclusively say Reddit as a whole is a bad place for AI discussion because it's hard to bypass the hard wall of biases.
InevitableWay6104@reddit
Yeah, honestly I think he has a pretty cool idea, for anyone interested in self improvement, this could be huge.
mrdevlar@reddit
I think a lot of people are way more worried about the privacy and economic consequences of AI than they are worried about the technology.
To them, ChatGPT is the only thing in their universe related to LLMs.
berckman_@reddit
Privacy and economic consequences, as a general principle, are real things to be worried about. Professional confidentiality/medical secrecy is a real thing, and right now it's being thrown out of the window without specific regulation.
Financing is everything, especially for tech that has an incredibly high research cost.
ChatGPT leads the LLM usage market by a large chunk. Expect it to be the most mentioned or thought about in LLM discussion.
bananahead@reddit
What does that even mean in this context? I'm... open to Matthew McConaughey having an LLM if he wants one, but I don't think it will work like he expects.
LostAndAfraid4@reddit
Isn't that the whole point of RAG?
productboy@reddit
Will build him a private stack for free; if he lets me cameo in "Brother From Another Mother"
seanpuppy@reddit
Lets draw straws to decide who gets to charge him $10K Plus hardware costs
fistular@reddit
If you only charge him 10k for this, I will come to your house and slap you in the face
illforgetsoonenough@reddit
One thing I like about these new LLMs... I get older, they stay the same age
MargretTatchersParty@reddit
I would do it. But it would seriously be a heck of a lot more than 10k. That's a specialized setup, beefy machine, and lots of sourcing of materials. It would be a pretty cool project though.
Legitimate-Novel4734@reddit
Nah, as long as your model doesn't require more than 64GB VRAM you could get away with an Nvidia Orin AGX.
Sounds like he primarily wants embedding features. Easy peasy lemon squeezy.
phormix@reddit
Hell, I've had good luck with an A770 running in a VM. It does have over 128GB of RAM assigned to it though
Neither-Phone-7264@reddit
also large context
jesus359_@reddit
Why large context? Multiple SLMs using n8n. Have qwen3-30b do the orchestration.
Neither-Phone-7264@reddit
didn't he say he wants to throw all his books and stuff in it?
jesus359_@reddit
Calibre connected to n8n. SLM will read the pdf. Qwen3-30b will construct the input/output.
Pvt_Twinkietoes@reddit
What does "obsidian vault" do? And how does it implement its memory?
Severin_Suveren@reddit
You guys are missing the big picture. He isn't just talking about storing everything in a database to use it as a simple speech query system. What he wants is a specialized personal AI system that takes all that data, learns both from the data and from his interactions with the system, and gets curated to his interests. He wants an AI model that is tuned to his personal creativity and his works, so that it can be creative with him
jesus359_@reddit
HE WANTS HAL BACK!
Pvt_Twinkietoes@reddit
Maybe. I didn't hear that from the words he said in that clip, I don't assume to know what he wants.
mintybadgerme@reddit
I think you missed it. He talks about the system learning about his aspirations - i.e. more than just search. It's about understanding him as a person, and he specifically talks about the system making recommendations to him, based on its learned knowledge of the Matthew McConaughey persona over time. It's a cool idea, and we're not far off.
Djagatahel@reddit
Still new to this - which SLM would you use, and why can't the LM do the entire chain?
Is it to reduce the context needed by the LM? If so, would the cost in memory of running the SLMs be equivalent to increasing the LM's context?
MuslinBagger@reddit
What he wants is a model fine-tuned on his data and not the outside world. Idk, maybe get rid of some of the bottom layers and retrain them using his data. Not a one-time job; it will need to be constantly retrained as he adds new info.
Strange_Test7665@reddit
Agreed, his ask really isn't that technically difficult or expensive
Vaadur@reddit
A maxed-out Mac Studio M3 Ultra with 512GB would be plenty and drops in right around that price point.
Any_Mode662@reddit
How can you actually do that though? Is it similar to multi-document summarisation?
Sir_PressedMemories@reddit
Isn't this just NotebookLM?
Belium@reddit
"Alright thanks for your purchase, first you install Ollama...
MoffKalast@reddit
It'd be a lot cooler if you didn't
kmouratidis@reddit
And used ChatRTX instead!
InevitableWay6104@reddit
I'm not gonna lie, after 20 minutes of research, you can easily learn the gist of what you actually need to do, and that you can easily do it yourself.
I mean, worst case, he buys a high-end prebuilt PC and downloads Ollama, and he's done.
maxtheman@reddit
We are talking about Matthew McConaughey. If anyone makes him qwen-next-3-green-lights-abliterated-it for less than $1M, well, you're just scabbing at that point!
Jk get your money.
illforgetsoonenough@reddit
Alright alright alright
PhaseExtra1132@reddit
I volunteer as tribute. Got to order some Framework PCs.
Swimming_Drink_6890@reddit
Reor does that kinda.
thrownawaymane@reddit
Is reor well maintained? It seems like it has kinda stalled out
Swimming_Drink_6890@reddit
Tbh I haven't even got it fully working; it won't run without a lot of memory, and now I'm too busy on another project to move it to my server. It's a shame because I'm a very high-entropy worker, my shit is everywhere and this would have been a godsend, but it just doesn't work nicely without 100GB of RAM.
FullOf_Bad_Ideas@reddit
what's the best open source stack to make this happen in a visually pleasing way? He has 3 books so it's probably a bit below 512k ctx. I'm thinking Seed OSS 36B and some GUI to make managing prefill context more graphical with those books.
mcdeth187@reddit
Why even bother with a custom-trained LLM? With Librechat + Ollama, I could roll any number of pre-trained models and, using RAG, answer pretty much any question he could throw at it. All for just the cost of scanning the books and buying/renting the hardware.
FullOf_Bad_Ideas@reddit
There's no custom trained LLM mentioned in my comment, I think you might have misread it. Not a fan of RAG.
moderately-extremist@reddit
Why not?
FullOf_Bad_Ideas@reddit
I don't think it would capture the meaning of the books as a whole, as the LLM wouldn't actually see them all.
Can you, for example, produce a high-quality school book report with it in a single response? It's inferior to keeping things in context, where the context is large and good enough.
PentagonUnpadded@reddit
Doesn't something like GraphRAG handle those large-context connections? To my understanding it turns the input text into a graph, finds connections inside that graph, and in the demos is able to answer 'book report' type questions. The example Microsoft gave was of a long podcast episode, asking some high-level non-keyword-lookup type questions, and it performed pretty well.
FullOf_Bad_Ideas@reddit
Dunno, I didn't use GraphRAG. It sounds like a really complicated, error-prone way to get there. Is there a way to use it easily without having to watch a tutorial about setting up Weaviate/Pinecone or setting up stuff that sounds made up/nebulous like some memory frameworks?
Putting stuff in context is easy, and I like easy solutions because they tend to work.
PentagonUnpadded@reddit
I've tried using GraphRAG locally (following the getting-started example). It definitely did not run - silent errors while communicating with a local LLM. Though, going by past blog posts and chatting with an MS engineer, it ought to run with Ollama/LM Studio/any OpenAI-compatible API. And it has as recently as early this year, but there have been a lot of releases since then.
Unfortunately it is $40-50 in OpenAI credits to run their hello-world/getting-started book analysis with external APIs, which does work.
I was hoping someone in this thread would correct me with the 'right' program to use locally. It is the only RAG / context-stuffing local-LLM-capable program I've used thus far.
FullOf_Bad_Ideas@reddit
That's roughly what I would expect - GraphRAG sounds very complex, so it has a steep learning curve, and I am not convinced that it's the answer here.
The new Grok 4 Fast has a 2M context window; it's probably the Sonoma Dusk model from OpenRouter. If you don't need to keep model selection local-only (and I think Matthew McConaughey didn't mean private-LLM as necessarily local, just private), it should be easier than diving into the house of cards that GraphRAG is.
PentagonUnpadded@reddit
While you're sharing insight, do you know of any programs/processes on this topic which are easier to get started with? Appreciate you sharing about Grok 4, it'd just really tickle my fancy to fine-tune the context / data of an LLM on my own machine.
FullOf_Bad_Ideas@reddit
Sorry, I don't think so, but I like the idea of an LLM living in a directory next to the files, able to read them. The Cline VS Code extension, Qwen CLI and Gemini CLI can do it, but they're meant for coders, not general use - it works well anyway IMO. I have very slim knowledge of RAG and stuffing context.
Not open source, but there's Kuse UI for LLMs, it's kinda going into that direction of having canvas-based context. It could be good for inspiration.
HasGreatVocabulary@reddit
My read of this thread makes me think that what MM is asking for is not totally solved, because of context length and needle-in-a-haystack issues. Is that fair?
FullOf_Bad_Ideas@reddit
Possibly, I am not sure what his expectations are for it. LLMs rarely felt highly insightful to me, so even with good context it might feel surface-level. Maybe there's some LLM/prompt that would feel good though, probably. It's not like the tech capabilities aren't there, but it's easy for people not knowing the tech to get lost in imagination when being told that AI is here, and it's less than ideal in practice.
PentagonUnpadded@reddit
Thanks for sharing these.
InevitableWay6104@reddit
Qwen3 2507 1M
Honestly though, I think 256k is more than enough. Actual text documents of real English (not code or data) don't take up as many tokens as you'd think.
FullOf_Bad_Ideas@reddit
When I messed with putting books in context, they were 80-140k tokens each. So I don't think 256k will be enough if he wants to put his books and maybe also books he likes.
Mountain-Moose-1424@reddit
That would also be good for future generations. You die, and you could maybe make an AI avatar of yourself that talks to your future generations, and you could ask it for advice and stuff.
Status_Ad_1575@reddit
You know AI has hit mainstream when ...
llagerlof@reddit
He is asking for fine-tuning an LLM.
yagooar@reddit
I have literally built this, but for business data. It is called EdenLM.
DMs are open, happy to chat if I can help.
Under_Over_Thinker@reddit
Is Matthew McConaughey actually smart, or does he only act that way?
mulletarian@reddit
Got a feeling he put some points into wisdom at least
xwolf360@reddit
Honestly, the older I get the more I think it's an act. Political groups like propping up actors that resonate with their fanbase.
-Django@reddit
I wanna make something to do this with my Obsidian vault. Not strictly RAG, but using RAG to contextualize answers when appropriate. And maybe storing off some of those answers to contextualize future conversations. Kind of like ChatGPT's memory.
scubanarc@reddit
I cd into my Obsidian vault and run crush. It's awesome.
-Django@reddit
The agentic coding tool?
eel1127@reddit
I'm actually working on this thing and I've been at it for a while - I love it.
Smart Connections plugin for Obsidian is a good place to start.
-Django@reddit
Any tips? Things you would've done differently
ludicSystems@reddit
MstyStudio has an Obsidian integration.
-Django@reddit
Thanks I might try it out. Wish it was open source
EnthusiasmInner7267@reddit
Oh, a biased worldview without the constant peer review. A rabbit hole of self-delusion. Sounds right.
zica-do-reddit@reddit
HA I'm working on this exact problem.
MonsterTruckCarpool@reddit
I have Ollama at home and want to feed it all of my MBA class notes and course content as reference material. I'm thinking this can be done via RAG?
donotfire@reddit
Use ChromaDB for RAG, with sentence-transformers for the embeddings.
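A minimal sketch of that setup (the embedding model, collection name, and example notes are all placeholders):

```python
# Index notes with sentence-transformers embeddings in ChromaDB, then retrieve by similarity.
import chromadb
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")     # small, CPU-friendly embedder
client = chromadb.PersistentClient(path="./notes_db")  # stored on disk, fully local
notes = client.get_or_create_collection("mba_notes")

def index_notes(docs: dict[str, str]) -> None:
    """Embed each note and store it under its id."""
    ids, texts = list(docs.keys()), list(docs.values())
    notes.add(ids=ids, documents=texts, embeddings=embedder.encode(texts).tolist())

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k notes most similar to the query."""
    hits = notes.query(query_embeddings=embedder.encode([query]).tolist(), n_results=k)
    return hits["documents"][0]

index_notes({"week1": "Porter's five forces...", "week2": "Discounted cash flow..."})
print(retrieve("How do I value a company?"))
```

Whatever the retriever returns just gets pasted into the prompt of whichever local model you're running.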
zica-do-reddit@reddit
Yes. You can also fine tune it.
donotfire@reddit
Hyperlink sucks because it's closed source
floridianfisher@reddit
We are the future
AllanSundry2020@reddit
this guy looks high as hell hmm
AllanSundry2020@reddit
Why is this podcast so popular? It's pure middlebrow cringe zeitgeist, ugh. I was so disappointed by Magnus Carlsen dropping in on it.
HoodRatThing@reddit
People work jobs like long-haul trucking or sitting in an office all day. It's nice to turn on a podcast and have something to listen to for three straight hours without constant interruptions from ads.
Joe has pioneered the long-form interview style.
Do you think you get more information from a 60-minute interview that has been heavily edited, or from listening to someone talk for three straight hours about any topic under the sun?
AllanSundry2020@reddit
A good insight, and I certainly share the affection for podcasts in terms of making aspects of modern life less lonely. https://www.theguardian.com/commentisfree/2024/nov/20/joe-rogan-theo-von-podcasts-donald-trump
For me it is a bit sinister the way this particular one sanitises extremist ideas in the burble and intimacy of the closely miked guests. I guess it's what happens when libraries are distant and thoughtful people seek edification where they are, as you imply: the long-distance driver, or gruff mechanic, etc.
Thank you for your well written comment and observation.
HoodRatThing@reddit
Care to explain what ideas are so abhorrent?
Since we're talking about this on /r/LocalLLM, some could argue that wanting a local LLM to be completely private and protect themselves from big tech overreach is an "extreme idea."
Hell, if you're old enough, wanting an opensource OS that wasn't Mac or Windows was enough to get you labeled as an "extremist."
Someone like Richard Stallman certainly falls into that category of being an extremist about how he chooses to use a computer.
I would be a lot more careful with labeling ideas you don't agree with as extremist. I'm sure if I prodded you enough, I could certainly find topics that you're extremely passionate about.
If you don't like Joe talking to Trump or Theo Von speaking to Trump, skip the episode. It's easier to do that than to morally justify why some people shouldn't be able to speak.
AllanSundry2020@reddit
I'm not saying they aren't able to speak and er... Jimmy Kimmel might like a word with you, too?
Rogan has given a platform to fascists without challenging them. It is cosy courting of far-right ideas. By all means have guests on, but if you don't challenge their ideas you are just giving them ad time. I would hope Stallman would be challenged if he were interviewed.
Your tone is rather menacing and condescending; maybe you need to critique your own approach to discussion too.
Have a look at the article I linked if you need more evidence of extremist ideas. I don't think you care enough though.
HoodRatThing@reddit
Jimmy Kimmel's show was costing millions of dollars a year. Do you think you got more information from a 10-minute interview full of silly scripted bits that wasn't informative in any way? Or a 3-hour podcast open to anyone invited to speak about anything?
The nightly shows are dead, bud, I don't know what to tell you. Jimmy's show was costing ABC millions of dollars for a subpar production compared to most independent YouTubers or podcasters now.
Should Bernie Sanders have denied going on his show twice to reach millions of listeners and spread his message?
Or should Bernie Sanders have acted like the average Redditor, labeled him a right wing fascist and a Nazi, and refused to step foot in Joeās studio?
Not_your_guy_buddy42@reddit
"I would be more careful" sure, n4zi scum.
https://www.msnbc.com/top-stories/latest/joe-rogan-podcast-daryl-cooper-nazi-trump-antisemitism-rcna197137
https://www.vice.com/en/article/joe-rogan-is-everyone-elses-problem/
just some zero effort search results
AllanSundry2020@reddit
Also "Joe", like they're good pals, lol. Fascists sure are kiss-ass, one-way-relationship lovers.
Not_your_guy_buddy42@reddit
That guy is a total fascism sanitizer, well put
psylomatika@reddit
Just use RAG
master-overclocker@reddit
What are you waiting for, guys?
Money to be made. Contact him.
TeeRKee@reddit
wait until he hears about RAG
mailaai@reddit
I implemented RAG at OpenAI back in 2021, probably for the first time. Since 2023, everyone knows about it. What he is looking for is not RAG, but something that understands the books' context and provides answers based on such understanding. This is technically possible, but it is not achieved through RAG.
Jeidoz@reddit
So he just wants a RAG AI?
AlphaPrime90@reddit
How would one achieve this?
By fine-tuning on all the data about me that I can collect? Would that be sufficient?
Or using a long system prompt?
Or RAG?
brianobush@reddit
RAG would be where you start. Fine-tuning is best achieved with LoRA these days and is relatively inexpensive to do, but you do need high-quality input/output data to arrive at a model tuned for a specific task. That wouldn't necessarily get you what he is asking for. A couple of agents could be scheduled to search through a dataset for relevance to current events, queries, and other relevant information.
mailaai@reddit
There are more ways to do it. RAG is for finding related documents and augmenting the prompt with them, not for understanding concepts.
AlphaPrime90@reddit
The agent approach might be the easiest.
urarthur@reddit
He said he hardly uses AI, so what if he wants a private LLM? He has zero knowledge of AI.
mattbln@reddit
I just want you to stop saying odd shit
goatchild@reddit
best season
FewWoodpeckerIn@reddit
You can now build it using RAG, no need for a private LLM
nulseq@reddit
Never heard of Hyperlink it looks really cool thanks!
pleasantothemax@reddit
This sub: it's not possible
Matthew: no, it's necessary
dalvik_spx@reddit
I was actually thinking about the same thing just yesterday. The reasons for wanting a private LLM are numerous and significant.
In my opinion, the number one reason is this: I'm a software developer and SaaS founder, and I rely on AI to write almost 99% of my code. I also use AI to brainstorm new ideas and strategies. Essentially, by using a hosted model from OpenAI, Claude, or Gemini (the provider doesn't really matter), you're handing over all your "secret" or "confidential" conversations about your product. These conversations can be used to train future versions of the model, which means that other people using those models in the future may indirectly gain access to the same strategies and code you've used to build your product.
That's a serious concern for anyone doing real business with AI. For this reason, I consider it a top priority to learn how to self-host and use a private, open-source LLM. The challenge, however, is that the most powerful LLMs are still proprietary, which is why I haven't gone down this path yet.
Electronic-Ad2520@reddit
What if, instead of retraining a complete LLM, you just train a LoRA with everything you want to put on top of the base LLM?
NobleKale@reddit
I would rather punch myself in the balls, eight to twelve times, than bother with anything involving Joe Rogan.
Ok-Goal@reddit
He's one of us... Hi matt
RevolutionaryAd6564@reddit
Sounds like building a private echo chamber. Self-isolation. I get it... but it's something I fight in myself. Maybe for different reasons, but not healthy IMHO.
One-Employment3759@reddit
I literally can't tell if this is real or generated.
hidden_kid@reddit
The way Nexa AI is being plugged in at the bottom, I have my doubts.
Batman4815@reddit
Right!? I can't believe nobody is mentioning it. I swear I saw this clip somewhere and immediately went "Oh, that's AI", but I guess not.
Humanity is going to need a replacement for the saying "Seeing is believing"
One-Employment3759@reddit
Well, the internet was always full of lies. There will just be people who realise that, and people that go cyberpsychosis and believe everything uncritically.
Interesting-Back6587@reddit
I can set this up for him in 30min
crinklypaper@reddit
Very easy to make, especially with transcripts available.
Appropriate-Maybe24@reddit
Seems like a really cool idea.
rorsach30@reddit
Oh so he wants a memex
https://en.m.wikipedia.org/wiki/Memex
Titanusgamer@reddit
Somebody please explain to me: if I dump all my life history, like my mails, journals, text messages, medical history, etc., wouldn't that easily chew up a context window of even 1M? Or are we talking about fine-tuning the LLM with all that data? But that would need expensive hardware with a lot of VRAM.
InevitableWay6104@reddit
this guy could make a kick-ass setup with his kind of cash lol
jazir555@reddit
He looks like he hasn't slept in 4 months.
prokaktyc@reddit
Basically he wants NotebookLM
h3ffdunham@reddit
Damn so this guy is retarded, never actually listened to him talk. Good movies tho.
asurarusa@reddit
On the one hand I'm glad a celebrity is bringing light to personal models, but on the other hand I feel like the more common this opinion becomes, the more likely it is companies will stop open-sourcing / publicly releasing models so that they can capture as many users as possible.
I wish there was a "Linux" of LLMs, so that long term people didn't have to worry about Meta, DeepSeek, or Alibaba suddenly deciding they don't want to allow users to run their models privately any more.
solarus@reddit
Could probably just give him a regular LLM and tell him it's loaded with his information and he'd be like "alright alright alright". Fuck him for being on Rogan.
outerproduct@reddit
I can build it, no problem.
Right-Law1817@reddit
That's cute!
terminoid_@reddit
the model has to know the language you speak before that tho, so it will inevitably be influenced by the pretraining.
llkj11@reddit
NotebookLM? They have the study guide that operates similarly to that, but it's not private though.
ObeyRed@reddit
Isn't he the dude in the Salesforce AI ads?
MaximusDM22@reddit
"Wow, what a fascinating project. I can do that for you for just $1 million dollars."
helu_ca@reddit
For $1 million I'll host it too. 🤣
Outrageous_Permit154@reddit
Well, DM me Matthew McConaughey, I will do it for $250,000
helu_ca@reddit
I'll include a Vector DB AND Graph DB for $750,000!
smyja@reddit
Guys, we're still early
berlinbrownaus@reddit
You can do that today, probably right now.
CheatCodesOfLife@reddit
Reckon he'd sue if someone trained Mistral on his books and podcast transcripts, then put him up as an HF space? lol
Thistleknot@reddit
NotebookLM
hierarchical LoRA
Ok-Adhesiveness-4141@reddit
What he needs is something like NotebookLM.
ab2377@reddit
This was SO good to watch! And he is a great guy and a great actor.
BreezieBoy@reddit
I hate listening to old people who don't understand technology explain technology
hairlessing@reddit
He just needs a RAG
reneil1337@reddit
anyone wants this
IJohnDoe@reddit
Man, I love his attitude towards AI
mikiex@reddit
Imagine an LLM full of that stuff he did with that 'Art of Livin' virtual live event... if you've not seen it, it's comedy gold.
teknic111@reddit
So what is the best approach to build something like this? Would you train the model with all his books or docs or would you just upload all that stuff to a DB and use RAG?
nntb@reddit
Maybe he should also want certain literary works that he values included in the training set as well... I don't think he wrote enough for an effective LLM.
miscellaneous_robot@reddit
that's not hard to build
CodeAndCraft_@reddit
You mean RAG and fine-tuning? Yep, it's pretty routine these days.
AfterShock@reddit
But apparently knows nothing about them
Lidjungle@reddit
Talk about being self-obsessed. He definitely prefers to have sex where he can look at himself in a mirror.
"You know Joe, what I'd like to do is burn a huge hunk o' rainforest and about a half a million dollars to build a super computer that contains all of my thoughts so I can talk to myself and learn if I'm a Democrat or a Republican. Ask it to tell me funny stories from my own past. I'd like it to be able to release small amounts of some of my best farts for sniffing while it does that. I think that would be cool."
"Oh, yeah, definitely."
Leg0z@reddit
The sad thing is that we could make a product in a desktop NAS form factor that does exactly this. And it could be completely private and made with agreements that forbid exfiltration or sale of your data. But YOU KNOW not a single corporation currently capable of making such a thing would ever do so, because they're too fucking greedy when it comes to marketing dollars. If Jobs were still around, he would do it.
ubrtnk@reddit
His LLM would be Murph:30B-A3AB-Lincoln
PocketNicks@reddit
He has money, he can easily hire someone to set that up for him, with a simple drag and drop interface.
Academic_Broccoli670@reddit
Just use RAG
FullOf_Bad_Ideas@reddit
Context length of modern LLMs is high enough to just paste those docs in there, without using cost optimization half-measures like RAG.
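As a rough sketch of what that looks like against a local OpenAI-compatible server (llama.cpp, vLLM, LM Studio, etc.); the URL, model name, and file names are placeholders:

```python
# Stuff the documents straight into the prompt and ask questions over them.
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # local server

docs = "\n\n".join(Path(p).read_text(encoding="utf-8")
                   for p in ["book1.txt", "journal.txt"])

resp = client.chat.completions.create(
    model="local-model",  # whatever long-context model the server is hosting
    messages=[
        {"role": "system",
         "content": "Answer only from the writings below, nothing else.\n\n" + docs},
        {"role": "user", "content": "What do these writings say about handling failure?"},
    ],
)
print(resp.choices[0].message.content)
```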
sininspira@reddit
r/n8n foaming at the mouth
klop2031@reddit
Lolol
florinandrei@reddit
"Alright, Alright, Alright."
Elibroftw@reddit
Guess a bunch of us are thinking the same thing, huh. Except my perspective is that you rent the hardware instead of owning it.
NeedsMoreMinerals@reddit
But not private... open source training set. Something transparent that can serve as a common reference. Otherwise you may just isolate yourself depending on how the LLM is trained
77112911@reddit
An LLM of you talking back and forth with you might be inadvisable... especially as they are rather willing to fall into delusion.
gjallerhorns_only@reddit
Make an LLM modeled after yourself and accidentally give yourself schizophrenia because the model was hallucinating.
AlanzhuLy@reddit (OP)
I'm with Nexa AI, and this is why we built Hyperlink - a private, on-device LLM agent for files. We've spent lots of effort to make it simple to set up & use so anyone can experience the benefits of local AI. It addresses exactly what Matthew mentioned, and I'm glad to see more people paying attention to local AI.
combrade@reddit
The average person's understanding of LLMs is quite poor, and even in tech there is a widespread lack of basic understanding of LLMs.
When I worked at an S&P 500 company, we had access to various coding tools, and a lot of senior developers would turn their noses up at ever using them. We also had a dedicated internal instance that hosted good SOTA models, but the younger employees would still try copying and pasting into ChatGPT directly, including internal data. They had no idea what a context window was, how to connect to our VM, or even that you can use an API key/SDK for LLMs.
Even within my PhD program, most people stick to Copilot or the ChatGPT UI. I often find that a litmus test for whether someone is up to date on the LLM field is whether or not they've used an API key. Many of my classmates are skilled developers previously from FAANG, and one person worked at a company that made AAA games. No one really has any advanced use of LLMs.
entsnack@reddit
Matthew McConaughey AMA wen?
AlanzhuLy@reddit (OP)
Matthew McConaughey is officially my favorite actor.
JLeonsarmiento@reddit
One of us!
One of us!
One of Us!
CommunicationKey4146@reddit
Okay team, let's go