An OpenAI-compatible proxy for Meta AI (no official API) — local, streaming, tool-calls working

Posted by Jealous-Virus4414@reddit | LocalLLaMA | 0 comments

Hi,

I’ve been working on a local proxy that exposes Meta AI as an OpenAI-compatible API.

Since there’s no official API available, the goal was to make the model usable within existing tools and workflows that already support the OpenAI standard.
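To make concrete what "OpenAI-compatible" buys you: any client that speaks the standard Chat Completions format can simply point its base URL at the proxy. A minimal stdlib-only sketch, assuming the endpoint and model alias shown in the quick start below (http://localhost:8788/v1, gpt-4o):

```python
import json
import urllib.request

# Assumption from the quick-start section: the proxy listens on
# localhost:8788 and accepts "gpt-4o" as the model name.
BASE_URL = "http://localhost:8788/v1"

def build_chat_request(messages, model="gpt-4o", stream=False):
    """Build a standard OpenAI Chat Completions request aimed at the proxy."""
    body = json.dumps({
        "model": model,
        "messages": messages,
        "stream": stream,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request([{"role": "user", "content": "Hello"}])
# urllib.request.urlopen(req)  # uncomment once the proxy is running
```

Any OpenAI SDK works the same way by setting its base URL to the proxy address, so no client-side code changes are needed beyond configuration.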



Architecture (high-level)

A “transparent bridge” mode is also available, where the client’s system prompt (including workspace context) is forwarded with minimal transformation.
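To illustrate what "minimal transformation" means in bridge mode (this is a hypothetical sketch, not the project's actual internals): the client's own system prompt, including any workspace context an editor injects, is forwarded rather than replaced by a proxy-side prompt.

```python
# Hypothetical illustration of "transparent bridge" forwarding.
# The function name and filtering rule are illustrative assumptions.

def bridge_messages(client_messages):
    """Forward the client's messages as-is; only drop empty entries."""
    return [m for m in client_messages if m.get("content")]

incoming = [
    {"role": "system", "content": "You are editing repo X; files: a.py, b.py"},
    {"role": "user", "content": "Refactor a.py"},
]
forwarded = bridge_messages(incoming)
# The workspace context in the system message survives unchanged.
```

The design trade-off is that the upstream model sees exactly what the client intended, at the cost of the proxy having less control over prompt framing.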


Motivation

Many capable models are currently limited to web interfaces, which makes them difficult to integrate into development workflows.

This project explores a lightweight approach to making those systems accessible through a standardized API layer.



Quick start

npm install
npx playwright install chromium

musespark authsetup
musespark apicreate
musespark startvoid

Then configure your client:

Base URL: http://localhost:8788/v1
Model: gpt-4o
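Since the post mentions streaming, a client will typically consume the response as server-sent events. Assuming the proxy follows the standard OpenAI streaming framing (`data: {json}` lines ending with `data: [DONE]` — an assumption, not verified against the repo), a minimal consumer looks like:

```python
import json

def parse_sse_chunks(raw_stream):
    """Extract content deltas from OpenAI-style SSE lines.

    Assumes the proxy emits the standard `data: {json}` / `data: [DONE]`
    framing used by the OpenAI streaming API.
    """
    parts = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)

# Example frames in the shape the OpenAI spec uses:
sample = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n'
    'data: {"choices":[{"delta":{"content":"lo"}}]}\n'
    'data: [DONE]\n'
)
print(parse_sse_chunks(sample))  # -> Hello
```

In practice an OpenAI SDK handles this framing for you; the sketch is just to show what travels over the wire.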

Repository

https://github.com/Zmidz13/muse-proxy


The project is still evolving, but early results show that tool usage and structured interactions already work reasonably well.

Feedback, suggestions, and alternative approaches (especially around session handling and streaming reliability) are welcome.