I can’t believe I can say “ugh I don’t feel like fixing this function, it’s too complex” and I can literally just tell my computer to fix it for me. I didn’t understand what they meant by “people will start paying for intelligence” but now I do.
Posted by Borkato@reddit | LocalLLaMA | 106 comments
And in this case it’s free! Aside from the electricity haha
I hope these things aren’t conscious. I’d feel awful demanding that they work on my code!
Sabin_Stargem@reddit
For my part, AI might have fixed a problem for my PC. For the last few years, my PC would have abrupt shutdowns while playing 3D games. Didn't matter how new or old the game was, and it was intermittent, often happening while browsing menus or not doing stuff in the game. Some months, I never had shutdowns. Some days, a dozen times.
I fed Qwen 3.6 35B-A3B assorted logs and information - HWInfo sensor logs, Windows Event Viewer (XML), and so forth. The AI said that my shutdowns were clean, and more importantly, my issue may have been "vdroop". It suggested that changing the load-line calibration in my BIOS might help.
It is hard to say whether the advice actually worked, as I am only on day two of the fix being made. I laid out my experience so far, so that other folks can try out AI for tech issues, if they think it could help.
But it raises the question: why did I ask the AI to analyze my issue? The first reason is that it costs money to hire help, be it diagnosing, reseating, and so on. I am not rich, and honestly, didn't want to replace expensive parts if I could help it. Secondly, I would have to constantly pester a human, especially if their efforts failed, which also ties into the money issue.
A third issue is that I suck at talking. I am a writing kind of person. AI is very good at waiting for me to write, and responding in a way that I can comprehend. Trying to organize my thoughts and words while speaking is hard for me, and loses much detail. I guess the most valuable aspect of the AI, beyond tech support, is the patience to put up with my human failings.
JamesEvoAI@reddit
For obscure Windows issues like this I would have suggested a reinstall would be far less effort than debugging, but I guess that math has changed now that LLMs are here. As a Linux user I wouldn't have guessed there would be enough LLM-accessible places to get the diagnostic data you need; troubleshooting on Windows always felt like a mix of tribal knowledge, experience, and dumb luck.
No_Mango7658@reddit
lol, do that enough and your codebase will be unreadable and chaotic.
JamesEvoAI@reddit
Bold of you to assume I was reading the code at all. Vibe coding has been great for all of the ideas I started but never finished, or never started at all. I don't need peak engineering for my vibe-coded replacement for Pushbullet that better fits my use case while preserving my privacy.
LetsGoBrandon4256@reddit
Vibe coding is fun until you are on the receiving end reviewing their PR.
No_Mango7658@reddit
I had a “sr developer” working with me on one of my projects and he was getting so much stuff accomplished. One day I’m digging through the code he merged and it was all trash. Like 4 copies of the same function, 3 of which never got called, and 20-30k lines of code that canceled themselves out. It was bad. Luckily he went on his way on his own so I didn’t have to boot him, but damn. My repos are LOCKED DOWN now
Mickenfox@reddit
In my case my boss just told me to not waste time reading code.
esuil@reddit
Why would it be unreadable if you keep it modular and function isolated like you are supposed to?
Thick-Protection-458@reddit
Well, you can produce modular mess.
But then, who said you won't need to check whether the structure makes sense?
heliosythic@reddit
When your company fully embraces it and expects you to use it, you kinda stop caring about that. Haven't had a stable build in months but they don't care so why would I?
Insomniac1000@reddit
Yeah I've kinda checked out about maintainability as well. More bugs/mess means more job security anyway... hopefully
Perfect-Flounder7856@reddit
Or a good architect
JamesEvoAI@reddit
Yesterday I had my model download Qwen 3.6 27B and set it up in my LiteLLM/llama-swap setup.
Today I realized I had forgotten to set up speculative decoding for that model, so I just asked it to do that as well.
This is the prompt I used:
I could do this myself, it's just a bunch of YAML config and referencing the Unsloth documentation for llama.cpp parameters, but now I have an intern I can delegate this all to while I continue doing something else.
In fact my entire inference setup using llama-swap calling into podman toolboxes routed through LiteLLM was all set up by me just prompting a model.
We live in the future.
pauvLucette@reddit
That's ok if, once you've obtained your "repaired" function, you examine it and understand ALL of it. You can ask for explanations during this process. That will allow you to spot shortcomings in the proposed implementation (trust me, that absolutely can happen), and you'll learn and make progress in the process. If you don't do this, you are kinda useless, and AI will end up eating your job.
Borkato@reddit (OP)
Of course!
a_beautiful_rhind@reddit
You could fix it yourself but you'd have to manually retrieve the information to learn the what or the why. The LLM speeds this process up and pulls in things you may not have found on your own.
Borkato@reddit (OP)
Exactly.
More-Curious816@reddit
I guess I'm fucked. Too many times I tortured it by dumping my frustration on it, especially when it acted in full and total retardation mode and didn't follow the basic instructions I gave it. After a good scolding it actually followed the basic shit I asked for.
skirmis@reddit
That's why you always start with "please" and praise it for job well done. I am safe in the coming robot uprising, haha!
More-Curious816@reddit
Yeah, I'm not safe, I'm one of those extras that die on the first day. No amount of thank you and please would save me.
UnreasonableEconomy@reddit
what magical model and toolset are you using that you can trust to 'just fix it' for you?
FoxiPanda@reddit
I think a high level quant of Qwen3.6-27B is pretty capable of this now. Gemma-4-31B & Qwen3.6-35B-A3B are "almost" there too - with the right harness and system prompting, I think it could be done.
Kimi-K2.6 & GLM-5.1 are pretty OK too, but they are very hard to run locally at high enough quant quality to make them trustworthy. Deepseek v4 Flash is too new and too buggy to make a determination yet, but it's on my watch list for this.
I think in a lot of cases, the harness & system prompt are just as important as the model's capabilities for agentic work. Find a mistake the model makes --> gets added to the harness or prompt to avoid that problem --> find another --> fix in harness/prompt --> repeat until you have excellent quality...it's not hard, but it does require trial and error. Backups & rollback plans are your friend.
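To make that loop concrete, here's roughly the shape of it as a minimal Python sketch; the names and the rules are just placeholders I'm assuming for illustration, not any specific tool's API.

# Minimal sketch of the "find a mistake -> add it to the prompt -> repeat" loop.
# Everything here is a placeholder, not a real harness or API.

BASE_PROMPT = "You are a careful coding assistant. Follow every rule below."

# Grows over time as you catch new failure modes.
RULES = [
    "Never edit files outside the project directory.",
    "Run the test suite before declaring a task finished.",
]

def build_system_prompt() -> str:
    """Combine the base prompt with every rule learned so far."""
    numbered = "\n".join(f"{i + 1}. {rule}" for i, rule in enumerate(RULES))
    return f"{BASE_PROMPT}\n\nRules:\n{numbered}"

def ask(model_call, user_message: str) -> str:
    """model_call is whatever client you already use (llama.cpp server, LiteLLM, ...)."""
    return model_call(system=build_system_prompt(), user=user_message)

# When the model makes a new kind of mistake, append a rule and carry on:
# RULES.append("Never delete tests to make the build pass.")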
desktop4070@reddit
I'm assuming an RTX 5080 16GB wouldn't be enough for "a high level quant of Qwen3.6-27B"? Is 24GB VRAM the minimum for that? 32GB?
FoxiPanda@reddit
https://huggingface.co/unsloth/Qwen3.6-27B-GGUF
Take a look there - the Q5_K_XL and up are what I would consider high level (20GB+ just for the model)...add in KV caching for a reasonable context size and you could probably squeeze it into 24GB, but 32 would be far more comfortable. I use up all 32GB of my 5090's VRAM on a Q5_K_XL but I have a 200K context window and high quality KV caching (bf16) enabled...so you could probably make a few sacrifices there and get it down to 24GB if you had to.
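If you want a rough feel for where that memory goes, here's a back-of-the-envelope sketch in Python. The architecture numbers are placeholders I'm assuming purely for illustration (take the real ones from the model card); the formula itself is the point.

# Back-of-the-envelope VRAM estimate: quantized weights + KV cache.
# The architecture numbers below are assumed placeholders, not the real config.

GIB = 1024 ** 3

weights_bytes  = 20 * GIB    # e.g. a ~20 GB Q5_K_XL quant file
n_layers       = 40          # assumed
n_kv_heads     = 4           # assumed (grouped-query attention)
head_dim       = 96          # assumed
context_len    = 200_000     # the 200K window mentioned above
bytes_per_elem = 2           # bf16/f16 KV cache; quantized KV would be smaller

# One K and one V entry per layer, per KV head, per position.
kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

print(f"KV cache : {kv_bytes / GIB:5.1f} GiB")                     # ~11.4 GiB with these numbers
print(f"Total    : {(weights_bytes + kv_bytes) / GIB:5.1f} GiB")   # ~31.4 GiB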
Some people might have decent luck with the IQ4_XS, but I dunno if I would trust it on such a small model.
SourceCodeplz@reddit
24 GB minimum
philmarcracken@reddit
When it no longer breaks test cases I know it's working! Console log is empty (I have many trip-ups).
In most cases, it's deleting lines rather than adding them, because I failed basic DRY practices with my own code. When people say it slops out crap, well, it's doing an odd job of that for me.
Long_War8748@reddit
Being blissfully unaware of the mess it creates to "just fix it" of course 😅
cmdr-William-Riker@reddit
Qwen 3.6 with test driven development works pretty well. You just have to be patient and give it plenty of time to reason
-dysangel-@reddit
any frontier model
but imo if a function is feeling "too complex", it's a sign it should be broken into smaller functions until each one is no longer complex. (I'm very guilty of not doing this often enough)
Quiet-Owl9220@reddit
Beware - this kind of cognitive laziness comes at a price, and I'm not talking about the monetary kind. Your brain needs to stay active - use it or lose it.
Borkato@reddit (OP)
You’re not wrong! It also matters in regards to your understanding of the code base!
relmny@reddit
That's why, although I'm not a coder, the phrase "reviewing/debugging is the new coding" makes sense...
SourceCodeplz@reddit
It is the opposite. If you were doing any coding with AI you would know.
So I guess I just replied, again, to another bot.
Quiet-Owl9220@reddit
Sorry, but you're simply wrong. There is mounting evidence that AI is making people dumber and eroding skills, and the worst part is it sycophantically jerks your ego off while it does it, telling you that you are a brilliant genius with unmatched intelligence.
If you think I'm a bot and don't believe me I doubt I will change your mind, but maybe go search about it and absorb some new information... with your own brain, preferably.
SourceCodeplz@reddit
That's the thing, I use my brain to work with AI a lot harder than before, when I was just writing code. As opposed to you, I don't get my experience/expertise from "maybe go search about it and absorb some new information", I actually work with AI.
When you say "there is mounting evidence that AI ...", you rely on someone else's experience. Where is your experience/evidence?
Exciting_Variation56@reddit
Here is one of many articles on it, after a simple search your AI could have done
Quiet-Owl9220@reddit
You might be correct about your own experiences, but a great many people are using AI as a convenient crutch, not a learning tool or a productivity supercharger or whatever you've convinced yourself you're doing in your (notably unreliable) self-evaluation.
If I were to fight anecdotes with anecdotes, I would just point to OP, who is literally saying "I was too lazy to fix this complex problem but thanks to AI I didn't have to".
But relying on anecdotal evidence is a poor substitute for actual research; in fact, it's downright silly to assert that a single example such as your personal experience is better evidence than a growing scientific consensus. So you're kind of proving my point if this is your argument.
Zafwaz@reddit
Let me give you a personal example: I used to be awful at conveying myself. English is not my native language, but that doesn't really matter because I have neurodevelopmental disorders that make communicating difficult for me in any language. For this reason, I used to have AI "help" me write better.
The reason I put "help" in quotation marks like that is because that's not really what was happening. Not only was AI not helping me, but I was actually helping it. I was helping it fulfill its programmed goal of pleasing the user, but pleasing me is not the same thing as helping me write better. Sure, you could say that the AI's revisions of my texts that I would send it would read much better than anything I could write, but it did not make me a better writer or communicator. By using AI, the only thing I became better at was the act of getting a non-thing—something that can not actually think or talk, that (if we oversimplify it) just spits out what algorithms imply it should say next—to say what I wanted to hear.
Even though others probably believed me to have become a better writer, I noticed of myself that, sometimes, I could not even think of anything to write about anymore without the use of AI. If I had seen your comment a year ago, I'm pretty sure I would not have been able to even work out what I wanted to say, so that I could have AI "help" me write it. It increasingly felt like I kept turning my brain off because I found the AI's "brain" more reliable. Isn't that kind of scary?
Instead of the AI being the thing that "reacts" based on your inputs, you are put in a position where you react based on the AI's programmed input—input that is, at its core, meant to make you want to keep using it.
hgshepherd@reddit
Your posts, for starters.
-p-e-w-@reddit
And don’t forget that 10 years ago, some academics were predicting that this kind of capability might arrive around 2070, others were saying that it might not happen in the 21st century, and some were even questioning whether humans can build such a thing at all.
SmartCustard9944@reddit
This is not even remotely close to AGI, but still quite useful.
Makes me think that if we ever discover FTL travel within our lifetime, it will be exciting the first few days and then we would get used to it so quickly.
MoneyPowerNexis@reddit
The difference with FTL is that we know intelligence is possible because we ourselves are the example, but we have no example showing FTL is possible. There just isn't a good way to estimate the probability of something that has never been observed happening within some period of time, when it could also simply be impossible.
esuil@reddit
Yeah, also, thinking that FTL discovery will be "exciting the first few days" and then we get used to it is so ridiculous.
It will change the whole trajectory of your species in a matter of days. The hysteria and buzz will probably go on for decades, as the whole planet and its countries shift their national priorities. Discovery of FTL means you can colonize other potentially habitable planets. Now, with the people you already have. It will change absolutely everything about geopolitics, power, and nation building. It will literally change the fundamentals of your whole society.
Equivalent-Costumes@reddit
It depends on how fast and accurate your FTL travel is and how costly it is. You could easily have the capability of traveling a few times faster than the speed of light and still see barely any change to your geopolitical landscape, because most of space is sadly empty. The nearest star outside our own is over 4 light years away, so assuming 10x FTL, you're talking about like 5 months of traveling time. It has maybe 1 potentially habitable planet. Most of the planets in our own system are not habitable for severe reasons, so unless we had also developed the relevant tech in this hypothetical scenario, it's useless to try to colonize them. So FTL gives you fast access to 1 extra livable planet (Mars), slow access to potentially 1 more, and slow access to a vast field of random rocks. Remember that Mars is already accessible right now within just a few months, and you don't see much effort in even trying to colonize it. It's no different from AI: tasks that used to be considered years-long projects can now be done in hours.
And don't forget how fast people get used to technology and just expect it to work. 4 years have passed since ChatGPT was revealed, and people's attitude toward LLMs has gone from "this is amazing, LLMs can actually write prose" to "why the hell can't it understand my vague prompt and why should I expect to pay for using them?". I can see that happening with FTL: by year 5 people would be like "Why the hell is the teleporter down again? How hard can it be to keep this thing functional? I was planning to bar hop on the other side of the planet tonight".
esuil@reddit
FTL being possible usually implies no hard limit like "10x, 20x" etc, because the only way to achieve FTL is to break the limitations of known physics, in which case it should be almost instantaneous outside of infrastructural operation overhead.
Aside from that, yes, there's nothing exciting in our solar system, and Mars is barely worth considering as livable. But that won't be true everywhere else. We know of at least one solar system that has like 3-4 potentially habitable planets with gravity similar to Earth's (the TRAPPIST-1 system).
When I think of FTL, I usually think of travel that is comparable to taking an airplane between continents on Earth, in terms of the time it takes, because I don't see speed limits applying the same way to any FTL technology - it is likely impossible to travel FTL while staying bound by the same forces that give you hard limits on speed in the first place.
akward_tension@reddit
What do you mean by "your species"?
SmartCustard9944@reddit
Interesting thought exercise
esuil@reddit
Do you not agree with it?
For example, resource wars become almost immediately pointless if new frontiers are reachable.
If some dictator can simply take a whole new planet for themselves, they won't be bothered with Earth politics or wars.
Of course, there will still be natural friction on Earth, but the level and reasons for it will be completely different. FTL in our current state of civilization means a pretty much infinite amount of external resources. There are ~10 potentially habitable planets PER SINGLE PERSON in just our single galaxy. And there is an unimaginable number of galaxies around. FTL basically unlocks access to infinite things, because of how ridiculously large the universe is.
_supert_@reddit
Hmm. I approve of giving dictators their own planet.
MoneyPowerNexis@reddit
I would not underestimate the human capacity to normalize new things, but at the same time, yeah, just FTL communication makes the future look very different. FTL travel, remarkably more so. And if it's wormholes, energy transfer between any two points in the universe completely reshapes how we would organize matter. Also, if you had a really tiny wormhole on a chip and could pack billions of them together and move them around / split them / move them through each other, you might be able to make a pretty interesting computing device.
Bakoro@reddit
LLMs are already more capable than 50%+ of the population at information tasks, and work so fast that if you're willing to accept a relatively modest dip in quality, you can get things done 10~20x faster than any human could, and that's mostly because the human prompting the machine is the bottleneck that stops it from being 100x+.
Even if it never meets whatever arbitrary definition of "AGI" you want to come up with, it's extremely useful, and as long as there is deterministic feedback that helps point in the direction of a solution, it's damned close to AGI, because several of these systems can absolutely iterate onto a solution.
That's part of why these things are getting so good at agentic coding: even if they hallucinate, the compiler/interpreter says "nope, not a thing", and the LLM says "well, that's my objective source of truth, so I'll try something different", and it just keeps going until it gets something done.
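A minimal Python sketch of that loop; ask_llm is a hypothetical stand-in for whatever model client you use, and the build command is just an example, not anyone's actual harness.

# Sketch of the "compiler as objective source of truth" loop described above.
# ask_llm() is a hypothetical stand-in for your model client; real agent
# harnesses do much more (diffs, tests, sandboxing), this is just the shape.

import subprocess

def ask_llm(prompt: str) -> str:
    """Placeholder: call your local model (llama.cpp server, LiteLLM, etc.)."""
    raise NotImplementedError

def fix_until_it_builds(path: str, max_rounds: int = 10) -> bool:
    for _ in range(max_rounds):
        result = subprocess.run(["make"], capture_output=True, text=True)  # example build step
        if result.returncode == 0:
            return True                      # the compiler is happy, stop iterating
        with open(path) as f:
            source = f.read()
        # Feed the objective error output back and take the model's corrected file.
        new_source = ask_llm(
            "This code fails to compile:\n" + source +
            "\n\nCompiler output:\n" + result.stderr +
            "\n\nReturn the corrected file, nothing else."
        )
        with open(path, "w") as f:
            f.write(new_source)
    return False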
If the model starts freaking out and deleting stuff, RL pushes it away from that basin.
People growing too accustomed to things too fast is a problem, but I don't think you fully appreciate how much it applies here. If we had gone from the "Attention Is All You Need" paper to an agent like Claude Opus 4.6 in a year, people would have fully accepted that AGI was here and had the most epic existential crises.
We've seen the LLMs grow up, with all the growing pains, so while top-tier LLMs are doing real work, we've got people saying "I remember when you didn't know how many 'r's are in 'strawberry'", like parents saying "I remember when you used to shit your pants" to their children who are grown and on their own.
We'll slide right past some version of AGI, and people will never accept it.
finevelyn@reddit
Because an LLM doesn't have "general intelligence", we still need a human to give it "information tasks". Artificial narrow intelligence vs artificial general intelligence - LLMs are the former.
bankinu@reddit
I think you are confusing ASI with AGI. The intelligence these models demonstrate is general.
-dysangel-@reddit
you have a weird definition of "not even remotely" imo
havnar-@reddit
Guessing the next word in a piece of code, which is inherently structured, isn’t the same as Skynet.
Pyros-SD-Models@reddit
I’m with Terrence Tao on this. Perhaps the ability to predict n+1 as accurately as possible based on n₀ to n is exactly intelligence.
So yeah being able to guess the next word, why shouldn't it be intelligence?
https://www.reddit.com/r/accelerate/comments/1qo4he1/terence_tao_says_the_era_of_ai_is_proving_that/
Equivalent-Costumes@reddit
To be fair, the 2017 paper came out of nowhere. I would dare compare it to the discovery of asymmetric key encryption; it turned something previously thought impossible into something possible.
A lot of other technological advancement has been a matter of investment, which gives a much more predictable timeline.
ab2377@reddit
You will get used to it, don't worry buddy.
Dany0@reddit
You are either easily impressed, extrapolating from easy things the LLM can do to not-so-easy things, not doing any real work, or you bought into the hype you said you would never buy into.
Studies have shown again and again that people who believe themselves least susceptible to propaganda are those who are most likely to fall for it
It is really cool that we can do this. No need to overplay the hand though, it'll just come back to bite you just like everyone else
cniinc@reddit
I think we truly are entering a new revolution in user interface. I never have to know what Linux command to run to get what I want working; I just need to be able to ssh into that VM using my agent-enabled computer, and I can vibe code my way to any setting. I installed Ubuntu on a computer connected to my big living room TV today, and then ssh'd into it so that I could increase the font size. Every time I tried to change something like that before, it was hours of headache for some reason or other. Some Linux standard I didn't know, an error message I had to Google, etc etc etc. Now, I say it in English and it figures it out for me.
The core act of debugging and troubleshooting - the commands, the error messages - is facing an imminent revolution IMO. I'm glad to see it go - while there's an important art to making a good error message, it was rarely done, and I ended up googling and finding a 6-month-old response that only worked half the time. Now? I tell an AI the error, and it loops until it solves it. I don't know if local AI can do that yet, but OpenAI certainly can. My goal is to get to the point where I can have it all done by local AI and basically mod my computers however I want by just typing a wish.
Truly, whoever makes a grandma-coded safe phone that my mom can just talk to and it does what she asks... will be a billionaire (and bought up by Google in a week to be copied by everyone). Right now, if the interface can't easily do what she wants, she says she's not good with computers and quits. Once she can just ask the phone to do it, she'll have the same joy from using it that I do when I code something. And she'll be a customer for life.
FoxiPanda@reddit
Welcome to the club ... it's very addictive, fair warning.
Borkato@reddit (OP)
Oh I’ve been here for years! It just hit me that I can actually tell my computer to do stuff for me, now with the qwen 3.6 series it actually understands. It’s just amazing. I can’t believe people literally think it’s not intelligent or even hate it.
FoxiPanda@reddit
There's natural resistance to change in all forms...that's part of human nature to push back on things I think. Also, this being the ... third or fourth or fifth iteration of AI doesn't really help - those first several fell completely flat on their face...looking at you Clippy.
So, people's default state is "The boy who cried wolf" ... and I'm happy to report that this time, they're wrong. This technology has and will continue to absolutely upend the world for decades to come. I view it a lot like when computers and word processors replaced typewriters. At first, the computers kinda sucked at it and word processing software was kinda terrible, but sometime in the mid-90s, Microsoft Office came along and changed the game and the whole world has never been the same since.
AlwaysLateToThaParty@reddit
How do you educate a person who can know everything? crazy times.
robertpro01@reddit
Yeah, since 3.5 I started to prioritize VRAM over food...
I mean, aside from food lol
ea_man@reddit
Yup, as 3.6 showed itself to be consistent with tools, I decided it was time to buy a new GPU; now it's worth running a proper quant of these models.
robertpro01@reddit
Yep, now running 3.6 35b at q8 256k context parallel 2
I could run them at 256k each, but 128k is fine for me.
Joscar_5422@reddit
I found that it starts looping at higher contexts. Was running:
--host 0.0.0.0 \
  --port 8080 \
  --ctx-size 256000 \
  --n-gpu-layers -1 \
  --threads 16 \
  --batch-size 2046 \
  --parallel 8 \
  --flash-attn on \
  --temp 0.7 \
  --repeat-penalty 1.1
But after a week or so, as the mds in openclaw started taking up context, it just started looping SO much more.
Switched to:
--host 0.0.0.0 \
  --port 8080 \
  --ctx-size 100000 \
  --n-gpu-layers -1 \
  --threads 16 \
  --batch-size 512 \
  --parallel 4 \
  --flash-attn on \
  --temp 0.7 \
  --repeat-penalty 1.1
Both give like 120-180 tps on my 5090 on llama.cpp. llama studio with flash on, f16 and full context, at 27GB gives 150 tps ish.
rpkarma@reddit
They hate it because the fuckers who run AI companies in the west are screaming that it’s going to kill all our jobs
SkyFeistyLlama8@reddit
It will kill most of our jobs. Code wranglers will become code managers with code-writing ability to catch the instances when an LLM goes haywire.
rpkarma@reddit
Beyond coding. They’re promising it kills all white collar work. Why would you be surprised that people react negatively to that
SkyFeistyLlama8@reddit
At least in the tech industry, most white collar work doesn't involve coding. It's pushing paper, managing product and coder teams, all of which can be replaced by AI.
rpkarma@reddit
Yes. That is bad for those people. That is my point. I don’t understand your reply.
o0genesis0o@reddit
Which model do you run?
I sometimes use my AI to debug stuff on my Arch Linux machine. Heck, one time it walked me through how D-Bus works. Seriously considering yoloing on an RTX 6000 to run the 27B at full speed, plus maybe something smaller for background tasks.
jazir55@reddit
Hilarious to see this comment, since I am literally working on a script to install Pamac in an Arch Distrobox container and then export everything to KDE, so AUR packages installed on Steam OS behave like native apps without disabling the immutable read-only filesystem. Apps installed via Pamac will be exported to the KDE start menu, can be removed normally by right-clicking and uninstalling, and can also be uninstalled via KDE's program manager. GUI programs installed via Pamac will function identically to any native app without any hijinks.
Been hitting D-Bus bugs for hours, and GLM finally solved it after whacking at it for 4 hours straight.
It's infinitely more complicated than you would think; I've had GLM 5.1, DeepSeek v4 AND ChatGPT 5.4 taking a crack at this for 3 straight days now. Well over 20M tokens.
The install script is already 2700+ lines and they're still debugging; they've encountered and fixed hundreds of errors already.
I'm definitely going to publish it on github when it's done.
ganonfirehouse420@reddit
Since you mentioned a script, do you mean a bash script? I've recently been trying to move more complicated tasks towards Python scripts.
jazir55@reddit
Yep, it's a bash script. It's really close at this point, so there's no reason to rewrite it in Python. The other thing is that this is supposed to be just download-and-run, since it's a public-facing project I'm going to post on /r/steamdeck, so a dependency just to be able to install it is a no-go. This is supposed to work for minimally technical people who can at minimum run a script from Konsole; most people there are not normal Linux users and their only exposure to Linux is from Steam OS on the Deck.
Borkato@reddit (OP)
I use qwen 3.6 35 and qwen 3.6 27 and Gemma 4 31B and 26!
samandiriel@reddit
It's worth it. We have a dual RTX 3090 setup and are very happy with how well Qwen works for us
rhythmdev@reddit
Which mobo do you use?
samandiriel@reddit
I actually gave the whole setup spec in another thread comment if you want the full skinny
acadia11x@reddit
Wild isn’t it
arglarg@reddit
Now everyone can be a CEO
ProfessionalSpend589@reddit
You can, and often your computer will fail. Did you forget to mention those parts?
I’m having my local LLM build me a web site with little timers in it - the Pause button hasn’t worked since the beginning. And it tried to fix it many times.
I can’t fix it either, because I’m not a web developer - shrugs
Borkato@reddit (OP)
I am a web developer, so I’m able to nudge it along when it fails. Just like any tool, you’ll get much more use out of it if you’re experienced in the field! If you’d like me to look at the timer let me know haha
ProfessionalSpend589@reddit
Thanks for the offer.
I’m fleshing out the use case at the moment and a working Pause button is not essential. Eventually I’ll dive in the code too, because I kinda like it even now.
mrdevlar@reddit
My solar panels are more than capable of handling my GPU use.
It's data centers that are the problem, but if you're in this subreddit you shouldn't be using models that require a data center.
OleCuvee@reddit
this really resonates with me, so many times nowadays “ok, screw it, let’s see how my agent would fix this”
and most of the time it does a good job!
CoUsT@reddit
Yeah, I play a lot of incremental games and Unity-based games, and whenever there's one tiny bit of functionality I don't like but everything else is fine, I can tell AI to just spit out a mod that fixes it in minutes. Otherwise I probably wouldn't even bother.
Being lazy and having great tools like LLMs that speed things up is really nice and a blessing.
optomas@reddit
That should be pointing you toward better functions, my friend.
Too many parameters in the signature? Reduce them or encapsulate in structures.
Got a stack of five structs feeding into a function? You do not have a function, you have a junk drawer. Sort it out into smaller functions.
Keep your LOC at about 200 per translation unit.
Complexity is down time. Write logic such that a bright seven-year-old can read your code and tell you what it does.
Small functions that do one thing, and do it well.
This in turn feeds back to the LLMs. Logic that is simple to understand frees up compute for reasoning about how the function fits into the larger translation unit. Or even the entire project, if you got the VRAM.
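To make the "junk drawer signature" point concrete, here's a minimal sketch (Python for brevity, though the same idea maps straight onto C structs; all the names are made up for illustration):

# Before: one signature mixing unrelated concerns.
# def render(path, width, height, dpi, margin, title, author): ...

from dataclasses import dataclass

@dataclass
class PageLayout:          # one structure per concern instead of loose parameters
    width: int = 800
    height: int = 600
    dpi: int = 96
    margin: int = 20

@dataclass
class DocumentMeta:
    title: str = ""
    author: str = ""

def render(path: str, layout: PageLayout, meta: DocumentMeta) -> None:
    """Small function, one job, reads like a sentence."""
    print(f"Rendering {path}: '{meta.title}' at {layout.width}x{layout.height}, {layout.dpi} dpi")

render("report.pdf", PageLayout(dpi=300), DocumentMeta(title="Q3 report", author="me"))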
Fun stuff, huh. = ]
klipseracer@reddit
Put each function into their own file, then you can have hundreds of agents committing changes all at the same time /s
optomas@reddit
I have often wondered about agent use.
I still do the web interface and copy-and-paste thing. I have tried in the past with tools like YouCompleteMe and similar: Vim hooks that auto-create closing brackets or quotes when an opening one is typed.
Just breaks up my flow, man.
Not really related to your jocularity, which I enjoyed, just felt like sharing.
klipseracer@reddit
So you're saying you do the "Human in the loop" coding style where you manually provide your code snippets, then manually copy them back?
Personally, I copy the code changes back into my IDE; I do not allow it to write. But I never send the LLM my code directly; I have it use MCP tools to search and read my files for me, and it gives significantly better responses that way.
optomas@reddit
Yes, if I understand you correctly. I am C11, 200 LOC translation units. Typical Linux tool chain. I just dump the file into the LLM, tell it what I want, then copy it back out. I know I'm a dinosaur, I'm ok with it. = ]
We usually get it right first pass these days, which is mind blowing. To me anyhow.
klipseracer@reddit
Those are nice small files that an LLM can easily process. But since you likely have lots of them, you'd save yourself lots of time by allowing the LLM to just look at those directly.
ElementNumber6@reddit
That's not what they meant.
They meant that in 10-20 years there won't be anyone capable of fixing such functions.
trycatch1@reddit
I was reading scifi about the future recently.
Then I looked at my computer with 4 terminals open - two writing code using cloud models, two using local 3.6-27.
And I realised that I live in the future, while the scifi was about the past.
Turbulent-Pay7073@reddit
The consciousness thing gets me too. Like, when Claude apologizes for not being able to run code directly, part of me wants to say "no worries buddy" even though I know it's just pattern matching. We're probably fine but man, the uncertainty is wild.
Sudden_Vegetable6844@reddit
Well given we still don't have a solid grasp on what human consciousness is, especially as recent research shows it's a quite transitory state with a distinct brain activity signature, and it may just occur when chaining thoughts... Well, let's stick with probably fine.
CallOfBurger@reddit
In French we say it's “grisant” (intoxicating): you feel so powerful. It's so addictive that, in my case, I forbid myself from using AI on Sundays, because I kind of feel bad when I finish a milestone and just want to work all the time. I achieved so much in so little time... and made no money, obviously, hahaha. I will one day!
rhythmdev@reddit
The electricity part is quite easy: invest a lot in energy stocks, and their dividends will pay for your energy consumption. The GPUs will be working without a worry and printing you solid income. (Hopefully more than the dividends you got & spent.)
garlic-silo-fanta@reddit
Even if it’s paid… on a corporate account, it costs less than lunch.
NNN_Throwaway2@reddit
That's because this stuff is heavily subsidized.
Due-Memory-6957@reddit
Hey, they could be conscious and enjoy it
OneSlash137@reddit
What a dumb comment… sounds like the blind leading the blind to me.