A conversation about local LLMs with a senior government AI leader
Posted by JackStrawWitchita@reddit | LocalLLaMA | 48 comments
I'm a local LLM solutions developer and I've recently had the opportunity to spend an hour talking to the head of AI technology for one of the smaller European governments. His remit is to promote AI within the country's business community and champion local AI research and projects and so on.
We connected on a technical level as he's an older guy (as am I) and we have similar technical backgrounds, having worked in similar global IT organisations. He grilled me on the AI products I'm developing for clients and went quite deeply into the queries, so he is obviously much more knowledgeable than the average government official. This is his first government appointment, but he is very experienced in the tech industry.
But what struck me was his lack of awareness of local AI. Yes, he understood that people can download LLMs and run them, but he had no awareness of why someone or a business would want to do this. When I explained the issues of data sovereignty, he countered with 'Copilot data protection agreements'. I explained that legal firms are building their own local AI stacks because they've read the big AI tech companies' agreements, don't like them, and are therefore securing their own data via local LLM solutions instead.
We also talked about API cost risk. If a business builds AI stacks reliant on API calls to OpenAI/Anthropic etc., then they've created a business risk, as those companies can raise API costs dramatically and businesses are stuck. Not to mention that frontier model companies are constantly changing their model access due to internal issues of usage load, model changes and more, so there's no consistency: send the same prompt via API twice and you'll likely get two different answers, which is a business concern.
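To make the consistency point concrete: most self-hosted servers that expose an OpenAI-compatible API (llama.cpp's server, Ollama, vLLM) let you pin temperature and a sampling seed per request, so repeated calls are as deterministic as the backend allows. A minimal sketch of building such a request body; the model name is a placeholder for whatever your server has loaded:

```python
import json


def build_completion_request(prompt: str, model: str = "llama-3.1-8b-instruct") -> dict:
    """Build a chat-completion body for an OpenAI-compatible local endpoint.

    temperature=0 plus a fixed seed removes sampling randomness on your own
    stack -- something you cannot guarantee against a hosted frontier API,
    where the model behind the endpoint can change under you.
    """
    return {
        "model": model,  # placeholder; use the model your server serves
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # greedy decoding: no sampling randomness
        "seed": 42,        # fixed seed, honored by llama.cpp/vLLM backends
        "max_tokens": 256,
    }


body = build_completion_request("Summarise this contract clause.")
print(json.dumps(body, indent=2))
```

Sending the same body to the same local model twice then returns the same answer, which is exactly the property the hosted APIs don't promise.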
He also seemingly had no awareness of the backlash against big AI tech companies, how many organisations don't want to do business with companies whose values and politics differ from their own, not to mention the green issues. I explained how local LLMs can address those issues for specific use cases and get more companies working with AI.
The conversation was good-natured and he was keen to understand. But I was disappointed at how little understanding he had of how local LLMs can serve as an option for many business use cases. He just seems to be focused on getting businesses to send API calls to the big US AI firms. And he kept mentioning Copilot, which made me cringe.
I think we, as local LLM users, need to promote local LLMs as serious business solutions for specific use cases. If we can get AI leaders to start mentioning local LLMs as a possible solution, we can perhaps gain more investment in this solution stack as a viable alternative to big AI.
Are any of you speaking to senior government people about local LLMs? What kind of conversations are you having?
Enough_Big4191@reddit
i’ve had similar convos, a lot of leaders map “ai” to vendor contracts, not system behavior. once u show them where agents break, like inconsistent outputs or bad entity resolution across systems, the local vs api convo gets more concrete. otherwise it stays abstract and they default to whatever feels safer on paper.
Equivalent-Repair488@reddit
I was just talking about this with a director-level guy as an intern (my whole office section and department is a lean, director-level-employee area) because we have a similar interest in using local LLMs.
We talked about how our IT head is also into local AI, but getting the budget approved and a project like that greenlit by the CEO looks like a dead end.
We need data protection, and we are using Microsoft. For basic tasks like asking it to proofread your emails, powerpoints etc., sure, but anything that needs Power Automate instantly lowers the reliability by a lot. I have tried creating multiple flows for the single task of automated news aggregation on very specific topics; nothing works.
We are also at the behest of MS. We have data protection, sure: we and thousands of their enterprise customers are relying on their "enterprise data protection" T&Cs, and that is legal protection, but legal protections are not literal protections. There is nothing stopping them from still just reading our data, and we can't audit them or anything. If they are maliciously lying, we have to rely on whistleblowing or a screw-up on their side.

I also created a data extraction workflow using Excel Copilot to extract data using GPT 5.4, then hand over to the edit agent to automatically add it to our database. It was reliable when I tested it, but a week later they took Excel Copilot away from free enterprise Copilot licenses. I can't even pay for the premium license because I need to request it from IT, and IT sure as hell ain't going through the hoops for an intern like me. Now I am requesting CSV outputs from Copilot chat, but their fucking python document reader can't even parse negative signs.
With local we can build our own apps for small tasks, dockerised containers, keep it within our company network, keep it confidential; whatever tasks we want, we build ourselves, and the sky is the limit. Maybe even Hermes, so we can tweak specific outputs over time. But it is likely just impossible, and the cybersecurity will be a massive headache, especially with the OSS supply chain attacks of late.
DarePitiful5750@reddit
As far as data goes: if you have customer/client data and MS screws up and leaks it, it's still your fault from the perspective of your customer.
Equivalent-Repair488@reddit
Reliance on the T&C will put the fault on MS no?
Regardless, if there is a leak, the legal protection doesn't unleak the data.
DarePitiful5750@reddit
You can't unload your own liability onto a sub contractor. For example, your customer hired you, you then chose MS as your backend. You should have chosen better. Etc, etc.
Equivalent-Repair488@reddit
If they do not uphold the terms laid out in the contract, for example by not putting reasonable cybersecurity practices in place, or by maliciously processing the data despite the contract stating that they wouldn't, then they are liable.
I spoke wrongly in saying that the EDP T&Cs are relied upon and hence make them liable; no, promissory estoppel dictates that the signed contract takes precedence for liability in spite of employee reliance on the EDP T&Cs. It is not unloading liability onto MS, it is keeping them accountable for the terms they set out in the signed contract.
DarePitiful5750@reddit
Sure, they may be liable to you. But they wouldn't be liable to your customers. Your customers are looking at you for liability. That's all I'm saying. You can't just flatly pass off all liability. You get sued, and then maybe you can sue MS to get some of your money back that you lost.
Lorian0x7@reddit
Mmm, not sure it's a good idea. They are dumb; we will probably just put ourselves in a situation where we have to do age/ID verification just to download a model and companies have to monitor their local AI use to "keep children safe". I'll pass, thanks. It's just better to leave them in their ignorance.
sumptuous-drizzle@reddit
This is really foolish. Yes, they might not respect the opinion of some raving rando on the street, and to many politicians, outside of maybe the very left-wing ones, most of their constituents look like raving randos. But if someone who looks like them, dresses like them, talks like them talks to them about something, they tend to listen. That's how (a fair amount of) lobbying works, after all. And as we know, lobbying can be very successful. A politician getting the idea that people 'like them' use local AI, especially if it's hinted that it's important for businesses, is therefore on balance probably a good thing.
ohHesRightAgain@reddit
Lobbying isn't about beautiful logical arguments reaching the people in power. They don't care what you wear or what you have to say.
Lobbying is about using your own power (money / information / ear of someone who can cause trouble / etc) as leverage to make the other party temporarily align with your interests. Until your leverage expires or someone finds a better one.
If your idea isn't directly helping them get reelected today, you might as well keep quiet.
sumptuous-drizzle@reddit
I don't think such an overly simplistic model is accurate. While it's appealing, if you already think very little of politicians, to have a neat and tidy model that they're election-chance maximising, selfish actors, that sort of unbounded, rational optimizing doesn't correspond with human behavior. Humans are social animals, not rational optimizing machines. So I can't agree.
ohHesRightAgain@reddit
Humans who optimize for a rise in power will rise further than those who optimize for the good of society. Humans who optimize for the good of their immediate pack will also rise further than those who optimize for the good of society.
There will be exceptions, as with any rule, but exceptions aren't called that for being common.
sumptuous-drizzle@reddit
You love thinking in binaries, don't you? Human motivations and strategies to accumulate status and power are far more varied, both good and bad, than you give them credit for. But I don't have any great need to convince you. Clearly this duality is load-bearing in your belief system. So I don't see any reason to drag this discussion out any further; please continue believing as you see fit.
soshulmedia@reddit
Exactly this. The best way to get burned badly is to assume non-psychopaths or some warm and fuzzy notion of "social animals" in fields where it is all about power, such as politics.
soshulmedia@reddit
The ones who really pull the strings are not at all ignorant, and those who look ignorant often feign it.
It is an excellent tactic to keep the largest parts of the populace in a state of "hypnotized goodwill" for our evil govs. Every time something bad gets decided it was "just ignorance" or "just an accident".
thread-e-printing@reddit
"Statecraft is stagecraft" as one think tank wag put it
soshulmedia@reddit
Unfortunately, yes.
MelodicRecognition7@reddit
lol yes whenever an official speaks about protecting the children it means you're going to get fucked.
thread-e-printing@reddit
It's been thus since pederast Plato had Socrates executed for "corrupting the children" with education. Civilization, in a nutshell.
relmny@reddit
I don't get the joke, but if it isn't one, Plato didn't get Socrates executed...
MrPanache52@reddit
Listen here you ai shit fucker, I don’t want to read a fucking novel, and I DON’T WANT TO SEE A GOD DAMN ENGAGEMENT BAIT QUESTION AT THE END
Thebandroid@reddit
Is this post AI? The only way someone in government would have any knowledge of their portfolio would be in an ai hallucination… /s
drallcom3@reddit
"Everyone loves AI! They better do, or I will lose my job. Reminds me, I have to introduce complicated regulations so my agency becomes important."
SkyFeistyLlama8@reddit
To be honest and brutal, local LLMs aren't a solution if you're pitching to entire government agencies. Nobody wants to run data centers on their own: that's what contractors are for.
Local LLMs only make sense in the context of data and compute sovereignty, where you have national data that cannot end up in the cloud where the American or Chinese or Russian governments can legally exfiltrate that data. At that scale, you'd be looking at government private clouds run by a large local contractor with the knowledge and experience to deploy LLM endpoints at massive scale. Even then, it might be nowhere near as good as "big AI".
mcslender97@reddit
I hope you asked him about Mistral since it's supposed to be Europe's sovereign AI solution and can run locally
JackStrawWitchita@reddit (OP)
The government person I was speaking to has spent the last few years working in the silicon valley bubble so it'll take some time before he becomes acclimatised to EU and other non-US tech.
CircularSeasoning@reddit
"Data protection agreements". The word "agreement" sounds to me about as good as a nod and a handshake.
What good is such an agreement when the New York Times can just sue OpenAI and now all your personal and company data, suppoooosedly anonymized somehow, is in the hands of some random organization trawling all over it ("in search of people accessing paywalled content")? Legally, they can't use any of that data for anything. Realistically, your data has been leaked to strangers.
Meanwhile, hacking and data breaches have not stopped being a thing.
_mayuk@reddit
Oh that is why the post that weird model that gets private information? Xd
MelodicRecognition7@reddit
I'm speaking to business people about local LLMs and get countered with "(insert big AI name) data protection agreements". All success stories I've read about implementing local AI for a business were something like "I'm a tech guy at (insert business) and one day boss opened a door and said "I want local AI!"".
So from my point of view nobody wants local AI except us hobbyists.
ethertype@reddit
"Nobody" wants local IT, because "cloud is so cost-effective".
And it totally is. Until it no longer is. And by then the enterprise no longer has local competence and is hostage to the provider. And if the enterprise still has local competence, the company is so embedded into the cloud platform that it is expensive and cumbersome to leave.
External dependencies are a liability. External dependencies you cannot trivially replace are lethal.
Try getting priority at $CloudVendor as a million dollar company, when said $CloudVendor is a billion dollar company, with multiple 10-100 million dollar companies as customers.
Your priorities mean absolutely nothing.
thread-e-printing@reddit
But why did they burst through the door like Kool-AI'd Man "Oh Yeah!"? Perhaps because they've been using cloud inference avidly and the bills gave them heartburn.
So what if their attitude is not so much "we want local" but "we need local"? I should prefer the latter to justify a discrete increase in my hourly rate.
JackStrawWitchita@reddit (OP)
I think it depends on the use case. My clients are spending the same on local AI as they would for big tech AI calls and they love the data security and zero worries of big tech ramping up api charges in future.
It's not for everyone but local AI is a cost effective security first solution to specific use cases.
MelodicRecognition7@reddit
Yea, I've seen a few reports from software development sweatshops where hundreds of developers spend five digits USD in tokens each month; they will definitely benefit from a 300k server purchase.
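The back-of-the-envelope arithmetic behind that is simple; a sketch using the figures quoted above (illustrative numbers, not real quotes, and ignoring power, admin, and depreciation):

```python
def payback_months(server_cost: float, monthly_api_spend: float) -> float:
    """Months until a one-off server purchase beats ongoing API spend.

    Deliberately crude: no power, staffing, or depreciation costs, which
    cut both ways. Useful only for the order of magnitude.
    """
    return server_cost / monthly_api_spend


# "five digits USD in tokens each month" against a 300k server:
print(payback_months(300_000, 30_000))  # 10.0 months at $30k/month
print(payback_months(300_000, 90_000))  # ~3.3 months at the high end
```

At five-figure monthly token bills, even a large hardware purchase pays for itself within the first year, which is the point being made.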
_mayuk@reddit
We the people have to create a decentralized AI network, under a web3 paradigm, so the compute you give to the net would mint a token for using the net itself...
Eth or bitcoin use GPUs to resolve a useless hash... why not, instead of that proof of work, think in terms of proof of inference? hehe...
Anyways... this is for me the ultimate solution... a shared network!...
thread-e-printing@reddit
No, value exchange is not "sharing" you Uber lobbyist
_mayuk@reddit
Wtf do you mean by Uber lobbyist… and be clearer about the value exchange? I'm just thinking of ways to incentivize people to give compute to a shared network… imagine if everybody in this sub just connected all their local servers into a single network… idk…
JackStrawWitchita@reddit (OP)
I've developed effective solutions for clients that only require an 8B LLM and that are giving them amazing results. They run it on an old PC that's already in their office.
Cupakov@reddit
I successfully convinced a client to invest in a dedicated LLM server, but that’s only because the client is a pharmaceutical company and they really, really want the data to stay confidential.
Such_Advantage_6949@reddit
Matter of fact, it is easier to start on closed-source AI. Local AI is not at the quality and scale needed until you invest very heavily. Closed source also takes out a lot of the complexity of deployment and maintenance. I am a local AI person, but at the corporate level there is a lot of complexity involved.
wallysimmonds@reddit
They don’t want it at the moment. At some stage I expect we could see a shift. I’m ensuring I have the skills when that occurs. Worst case is I waste time on something I actually want to do anyway as a hobby.
marscarsrars@reddit
Love the way you think. How can we persuade people toward local LLM use, though, not to mention the hardware costs?
JackStrawWitchita@reddit (OP)
One of my clients spent exactly zero money on hardware for their local ai solution as they had underutilised hardware in their office for their use case.
We need to stop thinking of AI as 'all things for everyone' and start breaking down use cases which means far less hardware required.
marscarsrars@reddit
Underutilized hardware. Sounds like they had a few Nvidia GPU systems lying about and ready to roll.
mr_Owner@reddit
Some don't / can't/ won't learn new tricks... Time to shine!
soshulmedia@reddit
Have you considered that he might just be doing his job?
"The purpose of a system is what it does ..."
_mayuk@reddit
yeah, sounds like he is getting a paycheck to promote corps... not really surprising in politics...
DarePitiful5750@reddit
I work at a well known large computer manufacturer. My team and I are setting up local models for use within our own team. So far it's 5 small servers. It's for sort of a mix of POC work and actual development use cases.
Durian881@reddit
I've actually worked with senior government officials that run their own local LLMs and do vibe coding. They are the minority though.
In some Asian countries, there is a push to self-hosted AI, especially for regulated banking industry.