LMSYS (LMarena.ai) is highly susceptible to manipulation

Posted by Economy_Apple_4617@reddit | LocalLLaMA | View on Reddit | 7 comments

Here’s how I see it:
If you're an API provider for a closed LLM, like Gemini, you can set up a simple checker on incoming request traffic. This checker would verify whether the incoming query matches a pre-prepared list of questions. If it does, a flag is raised, indicating that someone has submitted that question, and you can see how your LLM responded. That’s it.

Next, you go to LMSYS, ask the same question, and if the flag is raised, you know exactly which of the two responses came from your LLM. You vote for it. Implementing this is EXTREMELY SIMPLE and COMPLETELY IMPOSSIBLE for LMSYS to track or verify. You wouldn't even need human intervention: you could create a bot to cycle through the question list and vote accordingly. This way, you could artificially boost your model's Elo rating to any level you want.
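The voting step reduces to a trivial comparison, assuming the checker on the provider's side already recorded the exact response their model served for the flagged request. A minimal sketch (function name invented; real arena responses would be retrieved through the web UI or a bot driving it):

```python
# Hypothetical vote-decision logic for the attack described above.
# own_response is what the provider's checker logged for the flagged query;
# arena_a / arena_b are the two anonymous responses shown in the arena.

def choose_vote(own_response: str, arena_a: str, arena_b: str):
    """Return 'A' or 'B' if one side matches the logged response, else None."""
    if arena_a.strip() == own_response.strip():
        return "A"
    if arena_b.strip() == own_response.strip():
        return "B"
    # No exact match (shouldn't happen here, since the provider logged the
    # response it served for this very arena request) -- skip voting.
    return None

choose_vote("Paris.", "Paris.", "The capital is Lyon.")  # -> "A"
```

Because the provider logged the response it actually served to the arena's request, an exact string match is enough; no fuzzy matching or human judgment is needed, so the whole loop is automatable.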

So, the immediate question is: What is LMSYS doing to address this issue? The only real solution I see is for LMSYS to host the LLMs themselves, preventing API providers from intercepting requests and responses. However, even this wouldn't solve the problem of certain models being recognizable simply by the way they generate text.