An OpenAI-compatible proxy for Meta AI (no official API) — local, streaming, tool-calls working
Posted by Jealous-Virus4414@reddit | LocalLLaMA
Hi,
I’ve been working on a local proxy that exposes Meta AI as an OpenAI-compatible API.
Since there’s no official API available, the goal was to make the model usable within existing tools and workflows that already support the OpenAI standard.
Overview
- Implements `/v1/chat/completions` and `/v1/models`
- Compatible with OpenAI-based clients (IDEs, agents, UIs)
- Supports streaming responses (SSE)
- Converts Meta AI outputs into OpenAI response format
- Basic support for tool/function calling
- Runs locally (Node.js + Playwright)
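To make the conversion step concrete, here is a minimal sketch of how a raw Meta AI reply could be normalized into an OpenAI-style `chat.completion` object, including a basic tool-call path. The function name and the assumed convention that a tool call arrives as a single JSON object are illustrative assumptions, not the project's actual code:

```javascript
// Hypothetical normalizer: raw text from the web session -> OpenAI response.
// Assumed convention: the model emits a tool call as one JSON object like
// {"tool": "get_weather", "arguments": {...}}.
function toOpenAIResponse(rawText, model) {
  let message = { role: "assistant", content: rawText };
  let finishReason = "stop";

  try {
    const parsed = JSON.parse(rawText);
    if (parsed && typeof parsed.tool === "string") {
      message = {
        role: "assistant",
        content: null,
        tool_calls: [{
          id: "call_0",
          type: "function",
          function: {
            name: parsed.tool,
            arguments: JSON.stringify(parsed.arguments ?? {}),
          },
        }],
      };
      finishReason = "tool_calls";
    }
  } catch {
    // Not JSON: treat it as plain assistant text.
  }

  return {
    id: "chatcmpl-local",
    object: "chat.completion",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [{ index: 0, message, finish_reason: finishReason }],
  };
}

const plain = toOpenAIResponse("Hello!", "gpt-4o");
const tool = toOpenAIResponse(
  '{"tool":"get_weather","arguments":{"city":"Paris"}}',
  "gpt-4o"
);
console.log(plain.choices[0].message.content);                     // "Hello!"
console.log(tool.choices[0].message.tool_calls[0].function.name);  // "get_weather"
```

The key point is that the client only ever sees the standard OpenAI schema, so existing SDKs and IDE integrations need no changes.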
Architecture (high-level)
- Playwright manages an authenticated browser session
- Requests are injected into the Meta AI web interface
- Responses are parsed and normalized into OpenAI-compatible structures
- Streaming output is relayed via Server-Sent Events
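The relay step above can be sketched as follows: each text delta scraped from the browser session is wrapped in an OpenAI `chat.completion.chunk` and serialized as a Server-Sent Events `data:` line, with the usual `[DONE]` sentinel at the end of the stream. Names here are illustrative, not the project's actual code:

```javascript
// Hypothetical SSE serializer for one streaming text delta.
function toSSEChunk(delta, model) {
  const chunk = {
    id: "chatcmpl-local",
    object: "chat.completion.chunk",
    created: Math.floor(Date.now() / 1000),
    model,
    choices: [{ index: 0, delta: { content: delta }, finish_reason: null }],
  };
  return `data: ${JSON.stringify(chunk)}\n\n`;
}

// OpenAI-style streams terminate with a literal [DONE] sentinel.
const SSE_DONE = "data: [DONE]\n\n";

const line = toSSEChunk("Hel", "gpt-4o");
console.log(line.startsWith("data: ")); // true
```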
A “transparent bridge” mode is also available, where the client’s system prompt (including workspace context) is forwarded with minimal transformation.
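As a rough illustration of what "minimal transformation" could mean here, a bridge might simply flatten the client's message list (system prompt included) into a single labeled prompt for the web UI. This is an assumed behavior for illustration, not the project's exact implementation:

```javascript
// Hypothetical "transparent bridge" transform: keep every message,
// system prompt and workspace context included, adding only role labels.
function bridgePrompt(messages) {
  return messages
    .map((m) => `${m.role}: ${m.content}`)
    .join("\n\n");
}

const prompt = bridgePrompt([
  { role: "system", content: "You are a coding assistant. Workspace: /app" },
  { role: "user", content: "List the files." },
]);
console.log(prompt);
```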
Motivation
Many capable models are currently limited to web interfaces, which makes them difficult to integrate into development workflows.
This project explores a lightweight approach to making those systems accessible through a standardized API layer.
Use cases
- Integrating Meta AI into IDEs (e.g. Void, Cursor, Continue)
- Prototyping agent workflows without official API access
- Unifying multiple providers behind a single interface
Considerations
- Relies on browser automation and may break with UI changes
- Automated access may violate platform terms of service
- Not recommended for use with primary accounts
- Still early-stage and may have stability issues
Quick start
npm install
npx playwright install chromium
musespark authsetup
musespark apicreate
musespark startvoid
Then configure your client:
- Base URL: http://localhost:8788/v1
- Model: gpt-4o
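For example, an OpenAI-compatible IDE extension such as Continue can be pointed at the proxy. The fragment below assumes Continue's JSON config format for OpenAI-style providers; the field names come from that format, not from this project, so check your client's docs:

```json
{
  "models": [
    {
      "title": "Meta AI (muse-proxy)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "http://localhost:8788/v1",
      "apiKey": "unused"
    }
  ]
}
```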
Repository
https://github.com/Zmidz13/muse-proxy
The project is still evolving, but early results show that tool usage and structured interactions already work reasonably well.
Feedback, suggestions, and alternative approaches (especially around session handling and streaming reliability) are welcome.