Jan v0.7.5: Jan Browser MCP extension, file attachment, Flatpak support
Posted by eck72@reddit | LocalLLaMA | 18 comments
We're releasing Jan v0.7.5 with the Jan Browser MCP and a few updates many of you asked for.
With this release, Jan has a Chromium extension that makes browser use simpler and more stable. Install the Jan extension from the Chrome Web Store and connect it to Jan; the video above shows the quick steps.
You can now attach files directly in chat.
and yes, Flatpak support is finally here! This has been requested for months, and Linux users should have a better setup now.
Links:
- Jan Browser MCP: https://chromewebstore.google.com/detail/jan-browser-mcp/mkciifcjehgnpaigoiaakdgabbpfppal
- Jan on Flathub: https://flathub.org/en/apps/ai.jan.Jan
- Jan GitHub: https://github.com/janhq/jan
Please update your Jan or download the latest version.
I'm Emre from the Jan team - happy to answer your questions.
---
Note: Browser performance still depends on the model's MCP capabilities. In some cases, it doesn't pick the best option yet, as shown in the video... We also found a parser issue in llama.cpp that affects reliability, and we're working on it.
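For background on what "MCP capabilities" means here: the Model Context Protocol frames tool use as JSON-RPC 2.0 messages, and a client invokes a server-side tool with a `tools/call` request. A minimal sketch of building such a request (the tool name `browser_navigate` and its arguments are illustrative assumptions, not Jan's actual tool surface):

```python
import json


def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical browser tool invocation; real tool names depend on the MCP server.
req = make_tool_call(1, "browser_navigate", {"url": "https://example.com"})
print(json.dumps(req, indent=2))
```

A model's "MCP capability" is essentially how reliably it emits well-formed calls like this and picks the right tool for the task, which is why a parser bug in the inference engine can hurt reliability.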
makerbeforecoder@reddit
What separates Jan browser MCP from other browser MCPs?
Analytics-Maken@reddit
Congrats on shipping - it looks promising. I'm taking another approach to avoid web parsing: whenever the data source is available, I'm using ETL tools like Windsor ai to move the data to a central place and do the analysis there.
__JockY__@reddit
Hey, a while back I quit Jan and moved to Cherry because of the code block rendering speed issues. Have these all been fixed now?
eck72@reddit (OP)
Got fixed a few releases ago.
__JockY__@reddit
I have a couple of systems where Cherry isn't an option, so this is great news. Thanks!
eck72@reddit (OP)
Great, please do share your comments once you give it a try!
and yes, it supports file attachments too. This is one of the most requested features over the last 2 years. We shared our take here: https://x.com/jandotai/status/1998029880714199479
simracerman@reddit
“All that for a drop of blood?”
eck72@reddit (OP)
It's still thinking too much, so it can't react as fast as we'd like... This is just the early stage. We'd like to get it to a point where it can complete tasks for you in the background.
We're also training a bigger model for Jan that works much better - it'll be released soon.
simracerman@reddit
Thanks for all you do! I have respect for Jan team, and you’ve come a long way.
Unrelated question. How does the Jan model compare to Qwen3-4B for tool calling like web search?
rm-rf-rm@reddit
you know when even the demo sucks, it's truly not worth wasting your time on
MDT-49@reddit
Maybe I should give this a spin now that the Flatpak is available!
I can't really find this in the docs, but how does the file attachment feature work? Does it work in a RAG-like way using an embedding model or does it work in a more conventional way? Does it convert e.g. PDFs to plain text?
eck72@reddit (OP)
It works both ways. There's a setting to choose the mode you want: Settings -> Attachments -> Parse preference.
Plus, Jan uses an embedding model by default for local models. For remote models, you'll see a popup asking which mode you want to use when you upload a PDF.
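To make the two modes concrete: in the conventional mode the extracted text is sent along with the prompt as-is, while in the RAG-like mode the text is chunked, embedded, and only the chunks most similar to the question are retrieved. A toy sketch of that difference (function names and chunking are illustrative assumptions, not Jan's implementation; a bag-of-words cosine similarity stands in for a real embedding model):

```python
import math
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding'; a real setup would use an embedding model."""
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def build_context(doc_text, question, mode="rag", top_k=2):
    if mode == "plain":
        # Conventional mode: the whole extracted text goes into the prompt.
        return doc_text
    # RAG mode: chunk, score each chunk against the question, keep the best.
    chunks = [c.strip() for c in doc_text.split("\n\n") if c.strip()]
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)
    return "\n\n".join(ranked[:top_k])


doc = ("Billing is monthly.\n\n"
       "The parser supports PDF files.\n\n"
       "Support email: help@example.com")
print(build_context(doc, "which file formats does the parser support?", top_k=1))
```

The RAG mode keeps long documents within the model's context window at the cost of possibly missing relevant passages, which is the trade-off the mode setting exposes.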
MDT-49@reddit
This is perfect, thanks!
eck72@reddit (OP)
These are the settings and the prompt we use:
Prompt:
You are a helpful AI assistant. Your goal is to help users with their questions and tasks as clearly and accurately as possible.
When responding:
Using tools (including Browser MCP):
Tool usage rules:
Browser rules:
Some pages may use a code-editor input area, treat it like a normal input.
You are logged in everywhere and have permission to perform tasks for the user.
Current date: {{current_date}}
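The `{{current_date}}` token at the end of the prompt is a template placeholder filled in at request time. A minimal sketch of that substitution (the helper name and behavior for unknown placeholders are assumptions, not Jan's internals):

```python
import re
from datetime import date


def render_prompt(template, variables):
    """Replace {{name}} placeholders with values; unknown names are left intact."""
    def sub(match):
        key = match.group(1)
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", sub, template)


system_prompt = "You are a helpful AI assistant.\nCurrent date: {{current_date}}"
print(render_prompt(system_prompt, {"current_date": date.today().isoformat()}))
```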
egomarker@reddit
Idk why all these are for Chromium and Firefox gets zero love.
eck72@reddit (OP)
It was the quickest way to test and provide the browser MCP capabilities. Hope we'll get to a zero-setup way to handle tasks in web browsers.
ilarp@reddit
this is cool, interesting it proceeded to make the worst decision
eck72@reddit (OP)
Yes, I didn't push it too hard to get the perfect answer in that demo. It happens...
We found an issue in the inference engine that slows the model down and affects its choices. We're training a bigger model for better performance and also improving the inference side.