Reducing Intelligence Needs in LLMs for coding

Posted by iMakeSense@reddit | LocalLLaMA | View on Reddit | 1 comments

Models are advancing quite well, but I can't shake the feeling that most of the work they do is redundant. There must be countless commits along the lines of "added parameters to generalize to this embedding model" or "added audio pre-processing feature to remove noise". In other words, standard features or changes that recur across projects and stacks so often that an LLM shouldn't need much intelligence to get them right, yet can still mess them up.

My hunch is that if there were an index of common patterns for standard stacks (web stacks, Electron, Tauri, etc.) or common bug fixes, you could trade increased token usage for higher-quality context, beyond what documentation alone can supply.
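Something like this is the shape I'm imagining, purely as a sketch (the stack, task, and checklist entries are made-up placeholders, not real guidance):

```python
# Hypothetical pattern index: stack -> task -> short checklist that gets
# prepended to the model's context so it doesn't reason the steps out
# from scratch. All entries below are placeholders.
PATTERN_INDEX = {
    "electron": {
        "add audio preprocessing": [
            "placeholder: where this feature usually lives in the stack",
            "placeholder: the config file it usually touches",
            "placeholder: the common gotcha people hit",
        ],
    },
}


def build_prompt(stack: str, task: str, user_request: str) -> str:
    """Prepend any matching checklist to the request: more tokens, better context."""
    steps = PATTERN_INDEX.get(stack, {}).get(task, [])
    context = "".join(f"- {step}\n" for step in steps)
    prefix = f"Known pattern for '{task}' on {stack}:\n{context}\n" if context else ""
    return prefix + user_request


print(build_prompt("electron", "add audio preprocessing", "Add noise removal to mic input."))
```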

Has anyone done a GitHub repo audit of common commit/release patterns and the features they align with? Would an index of these features be useful, or is it already redundant given LLM training data and current MCP servers?
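For the audit part, even something crude like counting keyword hits across commit subjects in a local clone would show how often these boilerplate changes recur (repo path and keyword list below are placeholders):

```python
# Rough sketch, not anyone's actual tooling: tally how many commit subjects
# in a local clone mention each keyword, as a cheap proxy for recurring
# boilerplate changes. Requires git on PATH and a valid repo path.
import subprocess
from collections import Counter


def commit_keyword_counts(repo_path: str, keywords: list[str]) -> Counter:
    """Count commit subjects mentioning each keyword (case-insensitive)."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=%s"],
        capture_output=True, text=True, check=True,
    ).stdout.lower()
    counts = Counter()
    for subject in log.splitlines():
        for kw in keywords:
            if kw in subject:
                counts[kw] += 1
    return counts


if __name__ == "__main__":
    print(commit_keyword_counts(
        "./some-electron-app",  # hypothetical repo
        ["preprocess", "noise", "embedding", "auth", "config", "migration"],
    ))
```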