Reducing Intelligence Needs in LLMs for coding
Posted by iMakeSense@reddit | LocalLLaMA | 1 comment
Models are advancing quite well, but I can't shake the feeling that much of the work they do is redundant. There must be countless near-identical changes out there, like "added parameters to generalize to this embedding model" or "added audio pre-processing to remove noise". In other words, standard features or changes that recur across projects and stacks so often that an LLM shouldn't need to re-derive them (and risk messing them up) each time.
My hunch is that if there were a catalog of common patterns for standard stacks (web stacks, Electron, Tauri, etc.) or common bug fixes, there could be a worthwhile trade-off: increased token usage in exchange for higher-quality context, beyond what documentation alone can supply.
Has anyone done a GitHub repo audit of common commit/release patterns and the features they align with? Would an index of these features be useful, or is it already redundant given LLM training data or current MCP servers?
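For anyone curious what a first pass at such an audit might look like: a minimal sketch below tallies commit messages into pattern buckets using hand-picked keyword regexes. The bucket names and keywords here are illustrative assumptions, not derived from any real repo audit; a real version would feed it subject lines from `git log --format=%s` across many repos.

```python
import re
from collections import Counter

# Hypothetical pattern buckets; a real audit would derive these
# empirically from many repos rather than hand-picking keywords.
PATTERNS = {
    "bug_fix": re.compile(r"\b(fix|bug|patch)\b", re.I),
    "feature": re.compile(r"\b(add|implement|introduce)\b", re.I),
    "refactor": re.compile(r"\b(refactor|clean ?up|rename)\b", re.I),
    "deps": re.compile(r"\b(bump|upgrade|update)\b.*\b(dep|version|package)", re.I),
}

def classify(message: str) -> str:
    """Return the first pattern bucket a commit subject line matches."""
    for label, rx in PATTERNS.items():
        if rx.search(message):
            return label
    return "other"

def audit(messages):
    """Tally how often each commit-pattern bucket appears."""
    return Counter(classify(m) for m in messages)

# Example input; in practice these would come from `git log --format=%s`.
sample = [
    "fix: handle empty audio buffer in pre-processing",
    "add noise-reduction pass before embedding",
    "bump tokenizer dependency version",
    "refactor config loading",
]
print(audit(sample))
```

Scaling this across repos would give a rough frequency index of "standard" changes, which is roughly the artifact the question above is asking whether anyone has already built.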
HiddenoO@reddit
The whole point of training models is for them to learn these patterns and then, hopefully, generalise them to different scenarios.
Sure, you could compile a list of high-quality samples and patterns, but then you might as well use them for (post-)training.
Otherwise, you're mostly looking at something like "best practices", which would be just as useful to non-AI programmers, and which may or may not exist for a given ecosystem depending on whether somebody put in the effort.