Directly Adjusting Arbitrary Weights in LLMs
Posted by IrisColt@reddit | LocalLLaMA | 2 comments
How could I go about directly adjusting arbitrary weights of a large language model like Llama or Gemma programmatically, say, adding something like +0.00001 to a given weight? With Stable Diffusion, I could use an interface like Automatic1111's and hook into the Python code behind the Checkpoint Merger tab to alter weights. But with LLMs, where architectures vary and merging isn't as clearly defined, what's the best way to handle this? Bonus if there's a way to modify quantized model weights while respecting the quantization level.
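A minimal sketch of what this can look like with Hugging Face transformers and plain PyTorch (not from the thread; the model id and parameter name are placeholders, and parameter names vary by architecture):

```python
# Hedged sketch: load a causal LM, nudge one weight element, save the result.
# Any key from model.named_parameters() can be targeted the same way.
import torch
from transformers import AutoModelForCausalLM

model_id = "meta-llama/Llama-3.2-1B"  # placeholder; substitute your model
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

target = "model.layers.0.self_attn.q_proj.weight"  # placeholder parameter name
params = dict(model.named_parameters())

with torch.no_grad():
    params[target][0, 0] += 1e-5      # add +0.00001 to a single element
    # or perturb the whole tensor: params[target].add_(1e-5)

model.save_pretrained("llama-perturbed")
```

On the quantized bonus: formats like GGUF pack weights into quantized blocks, so the usual route is to dequantize, apply the edit at full precision, and requantize back to the original quantization type (e.g. with llama.cpp's conversion and quantize tooling) rather than editing the packed bytes directly.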
NarrowTea3631@reddit
I'm not sure myself, but looking at how abliteration is done should get you started in the right direction?
IrisColt@reddit (OP)
I hadn't thought of that—thanks for pointing it out!
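Following up on the abliteration pointer above: the core weight edit there is projecting a learned "refusal direction" out of selected weight matrices. Below is a rough sketch of that orthogonalization step only; the direction vector here is a random placeholder, whereas real implementations estimate it from hidden-state differences between contrasting prompt sets.

```python
# Hedged sketch of the weight-orthogonalization step used in abliteration-style edits.
# For an nn.Linear weight of shape (out_features, in_features), removing output
# components along unit direction d gives W' = W - d d^T W.
import torch

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    d = direction / direction.norm()          # unit-normalize the direction
    return weight - torch.outer(d, d) @ weight  # project d out of the output space

# Toy example; on a real model you would apply this in place, under torch.no_grad(),
# to e.g. attention output and MLP down-projection weights.
W = torch.randn(8, 8)
d = torch.randn(8)
W_new = orthogonalize(W, d)
```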