Jan got an upgrade: New design, switched from Electron to Tauri, custom assistants, and 100+ fixes - it's faster & more stable now
Posted by eck72@reddit | LocalLLaMA | View on Reddit | 179 comments
Jan v0.6.0 is out.
- Fully redesigned UI
- Switched from Electron to Tauri for lighter and more efficient performance
- You can create your own assistants with instructions & custom model settings
- New themes & customization settings (e.g. font size, code block highlighting style)
Plus improvements ranging from thread handling and UI behavior to extension settings, cleanup, logging, and more.
Update your Jan or download the latest here: https://jan.ai
Full release notes here: https://github.com/menloresearch/jan/releases/tag/v0.6.0
Quick notes:
- If you'd like to play with the new Jan but haven't downloaded a model via Jan yet, please import your GGUF models via Settings -> Model Providers -> llama.cpp -> Import. See the last image in the post for how to do that.
- Jan is getting a bigger update on MCP usage soon. We're testing MCP with our MCP-specific model, Jan Nano, which surpasses DeepSeek V3 671B on agentic use cases. If you'd like to test it as well, feel free to join our Discord for the build links.
DevelopmentBorn3978@reddit
On my Manjaro Linux it doesn't run: it complains about xapp and EGL (which are installed according to pacman/yay) and throws up a window with transparent, null visible content.
Same results for Jan_0.6.3_amd64.AppImage and Jan-beta_0.5.18-rc6-beta_amd64.AppImage.
An older Jan I tried a few months ago ran fine.
I would like to include the output from running the command, but it seems too many lines prevent me from submitting this comment.
DevelopmentBorn3978@reddit
some relevant error messages are:
Gtk-Message: 13:19:50.406: Failed to load module "xapp-gtk3-module"
(twice)
[2025-06-30][11:19:51][app_lib::core::setup][ERROR] Failed to run mcp commands: Failed to read config file: No such file or directory (os error 2)
[2025-06-30][11:19:51][app_lib::core::setup][INFO] Clean up function executed, sidecar processes killed.
[2025-06-30][11:19:51][app_lib::core::setup][INFO] [Cortex Terminated]: Signal Some(15), Code None
[2025-06-30][11:19:51][app_lib::core::setup][WARN] Cortex server terminated unexpectedly.
Gtk-Message: 13:19:51.269: Failed to load module "xapp-gtk3-module"(twice again)
Could not create default EGL display: EGL_BAD_PARAMETER. Aborting...
** (Jan:508364): WARNING **: 13:20:32.388: atk-bridge: get_device_events_reply: unknown signature
uhuge@reddit
Is chat history formatted into some portable standard?
eck72@reddit (OP)
We're using the same format as previous versions. It loosely follows OpenAI's thread structure but isn't a strict match. It's extension-based, so it can be replaced or exported to other formats if needed.
BlueSpeaker95@reddit
How does Tauri impact performance on Linux, given that it uses WebKitGTK?
eck72@reddit (OP)
ah, on Linux Tauri uses WebKitGTK for now, so performance isn't as good as Chromium. We're refactoring to reduce app size, and once that's done, we'll look into bundling a better browser. Tauri gives us flexibility there, unlike Electron.
CptKrupnik@reddit
Hey do you have a blog or something?
I'd be fascinated to learn from your development process: what you learned along the way, what you wish you'd done right from the beginning, what the challenges and solutions were.
thanks
eck72@reddit (OP)
That means a lot! We'll be publishing a blog on this soon.
We also have a handbook; some of the company definitions are still being updated, but I think it gives a good sense of how we work at Menlo (the team behind Jan): https://www.menlo.ai/handbook
uhuge@reddit
https://www.menlo.ai/handbook/how/remote 404s,
that's weird
swyx@reddit
hello hello, open invite to guest blog on latent.space if that would help kickstart your blog :)
eck72@reddit (OP)
Hey, that would be awesome, thanks! Just sent you a DM.
sinebubble@reddit
What do you feel differentiates your app from all the others? I keep downloading chat apps and they all pretty much offer the same functionality and interface. What is Jan's approach for separating yourself from the herd?
maxim-kulgin@reddit
Please help, I don't understand—how can you upload files in Jan? There just isn't a button :) I’ve looked through all the settings, but I still couldn’t figure out how to enable uploading… Plus, there’s no option for image generation either...
sinebubble@reddit
Chatbox is the only one I know that supports image generation with the usual "chat service". Do you know of any others? Rolling image, video, and sound into a regular chat app seems like an obvious choice to differentiate from all the "thousands" of chat apps out there.
eck72@reddit (OP)
Jan doesn't support chatting with files or image generation yet. We're going to add the chat-with-files feature soon. As for image generation, it's not available right now - we'll take a look at it as well.
maxim-kulgin@reddit
Thank you, I almost went crazy because your documentation shows working with files in the screenshot… I’m waiting!
eck72@reddit (OP)
ah, sorry!
/docs is a work in progress. Thanks for flagging it!
wencc@reddit
Looks nice! How would you compare this with GPT4All? I am looking for an alternative to ChatGPT due to obvious privacy concerns.
eck72@reddit (OP)
Thanks! I haven't used GPT4All in quite a while, so I might not be able to give a fair comparison.
As for an alternative to ChatGPT: I'd say our goal with Jan is to provide an easy-to-use, ChatGPT-like experience with the added ability to run almost all capabilities offline & locally.
wencc@reddit
Does it support search like the "research" function in ChatGPT? I see in the roadmap you have "Can call tool to get answer from external source" ticked off, where can I use those?
ksoops@reddit
Hey! Happy to see the update.
Real important question for you all -- I've been using Jan daily as a frontend to a local mlx_lm.server.
Where, in this new update, are the remote model settings? I don't see options for my openai compatible mlx_lm.server model endpoint?
Also -- and this is very important... in the older versions I was able to open up the remote model yaml config file e.g. `~/Library/Application Support/Jan-nightly/data/models/remote/qwen/Qwen3-30B-A3B-MLX-4bit.yml` and bump up the `max_tokens` from the default 4096 to e.g. 32_000 or whatever. Doesn't look like I have this ability anymore? Or, did it move? I can't use Jan with MLX without this ability.
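From memory, the relevant bit of that YAML looked roughly like this (a sketch; the exact field names in Jan's model config may differ, and the real file has more entries):

```yaml
# rough sketch from memory -- not the full file
id: Qwen3-30B-A3B-MLX-4bit
parameters:
  max_tokens: 32000   # bumped from the default 4096
```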
Thanks!~
eck72@reddit (OP)
Thanks!
Regarding settings: You can now set max_tokens directly in Jan Assistant as an inference parameter. At the model level, it's controlled by the context_size setting, which usually defines the default max_tokens.
Regarding remote model settings: With the MCP update, you'll be able to update settings for remote models as well. The settings option for remote models is only for enabling features like tools, vision and embeddings - and that's only available in Jan Beta for now.
ksoops@reddit
Adding `max_tokens` param to Jan Assistant worked perfectly. Thanks again
ksoops@reddit
Hey thanks for the reply! I will take a look, appreciate it
mrbonner@reddit
Is the switch to Tauri the prep work for a mobile app?
eck72@reddit (OP)
Yes, that's one of the reasons as well.
spikepwnz@reddit
It would be, if the tools were available for a local OpenAI-compatible backend.
mevskonat@reddit
Tried Jan-beta with Jan-nano; MCPs seemed to work very well and fast too (around 35 t/s on a 4060). However, my Jan-beta install doesn't have any upload button (for RAG), I wonder why.
Also, just like LM Studio if I am not mistaken, it cannot serve two models simultaneously...
eck72@reddit (OP)
wow, thanks! v0.6.0 will also help us ship MCP support. Happy to get your feedback on the beta release to improve it!
RAG is still a work in progress, so the upload button isn't available yet.
As for running multiple models, it's currently only supported through the local API server.
mevskonat@reddit
Been using Jan Beta for a few days now. Some feedback:
The assistant feature should be designed with MCP functions in mind. This is because Jan Nano sometimes gets confused about which MCP to use; it also ignores the assistant's system prompt.
The MCP install is very easy, but it's hard to manipulate or co-utilize the MCP servers with another client, e.g. I can't use the MCP memory server together with Claude.
Need flexibility to use an external MCP server over HTTP.
Being a small model, I think it would be more efficient to use Jan Nano with the memory MCP server, but I need to be able to manipulate the memory.json file. Currently thinking about doing it en masse, turning my Obsidian vault files into knowledge graph nodes, but I can't do it since I couldn't find the JSON file.
For now I can use it for simple querying, e.g. what is my library card number? But in order to get the correct answer I need to specify "use read_graph" in the prompt. The response is also really fast.
I can see the potential for jan-nano to be a personal assistant, esp with sensitive private data... Keep going...!
mevskonat@reddit
Thanks, looking forward to the RAG! Tried running Jan Nano with Ollama but it thinks too much. With Jan Beta it is fast and efficient. So when the beta ships with MCP + RAG... this is what I've been searching for all along..... :)
eck72@reddit (OP)
Quick heads-up: Those who would like to test the Jan Beta with MCP support, feel free to get the beta build here: https://jan.ai/docs/desktop/beta
No-Source-9920@reddit
is the beta versioning different/wrong? it says 0.5.18 and on release you have 0.6.0
eck72@reddit (OP)
They’re different. Beta is still on 0.5.18 but already includes the new design and Tauri build.
Suspicious_Demand_26@reddit
Oh, is that an issue for people using these platforms? I didn't know you can only run one at a time.
Quagmirable@reddit
Looking pretty good, thanks a lot for maintaining this! A few observations:
It appears that the CPU instruction set selector (AVX, AVX2, AVX-512, etc.) is no longer available?
The new inference settings panel feels more comprehensive now, and I like how it uses the standard variable names for each setting. But the friendly description of each parameter feels less descriptive and less helpful for beginners than what I remember in previous versions.
Once again, thanks!
eck72@reddit (OP)
Good catch - yes, we're bringing it back with the new llama.cpp integration. We're currently moving off cortex.cpp, so some options are temporarily missing but will return soon.
Quagmirable@reddit
Overall feels like a great move so that the user can keep up-to-date with the latest llama.cpp improvements without you people at Jan specifically having to merge and implement engine-related stuff.
disspoasting@reddit
Thanks for making an astonishingly good FOSS competitor to LMStudio!
I appreciate all of the hard work of Jan's developers immensely!
kadir_nar@reddit
Thank you for sharing
alien2003@reddit
Why Electron/Tauri instead of something native? Qt, GTK, etc
eck72@reddit (OP)
Tauri allows us to reuse our existing React and JavaScript codebase, which helps us move faster while keeping the design clean and modern. Qt and similar native toolkits don't integrate well with the web stack.
alien2003@reddit
So it's an initial design flaw, and it's more difficult to rewrite than to just deal with it. Understandable; everything costs time and money.
Sea-Unit-2562@reddit
I'd personally never use this myself over cherry studio but I love the redesign 😅
bahablaskawitz@reddit
Do I have to do something to make the tool-use boxes visible or functional? I was hoping it would have web search similar to the video in the post here: Jan-nano, a 4B model that can outperform 671B on MCP : LocalLLaMA
eck72@reddit (OP)
Go to Settings -> Model Providers, find your model, and click the settings icon next to its name to enable the Tools capabilities. Also Jan-nano currently works with the Serper API, so you'll need to add your Serper API key under Settings -> MCP servers.
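For reference, an MCP server entry for Serper typically looks something like this (a sketch following the common MCP config convention; the command and package name are placeholders, and the exact fields depend on the server you use):

```json
{
  "mcpServers": {
    "serper": {
      "command": "npx",
      "args": ["-y", "<serper-mcp-server-package>"],
      "env": { "SERPER_API_KEY": "<your-serper-api-key>" }
    }
  }
}
```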
bahablaskawitz@reddit
This button opens the menu for configuring context size, etc.
MatchaCoffeeBobaSoda@reddit
MCP servers are only on the 0.5.18 beta it seems
bahablaskawitz@reddit
I also don't have an 'MCP servers' option. I'm on version 0.6.1, which is supposedly the latest. Any ideas?
anantj@reddit
What are the recommended hardware specs? Will it run decent on laptops with an integrated GPU?
eck72@reddit (OP)
It depends on the model you use, mainly on how much VRAM your laptop's integrated GPU has.
anantj@reddit
Fair enough. My current laptop does not have any VRAM to talk about tbh (it is more of a work laptop).
I also have a PC that has a Radeon 6750XT with 12GB of VRAM.
throne_lee@reddit
jan's development has been really slow...
Classic_Pair2011@reddit
Can we edit the responses?
eck72@reddit (OP)
No, you can't. Out of curiosity, why do you want to edit the responses? Trying to tweak the reply before pasting it somewhere?
aoleg77@reddit
That's an easy way to steer the story in the direction you want it to go. Say, you draft a chapter and ask the model to develop the story; it starts well, but in the middle of generation makes an unwanted twist, and the story starts going in the wrong direction. Sure, you could just delete the response and try again, or try editing your prompt, but it is much, much easier to just stop generation, edit the last line of text, and hit "resume". So, editing AI responses is a must-have for writing novels.
eck72@reddit (OP)
That's a great point! We'll definitely take a look at adding an edit and resume feature to improve the experience.
Dependent_Status3831@reddit
Seconded, please make it editable with a pause and resume function. This is very much needed for ad hoc model steering.
harrro@reddit
Yes please support these. Editing both input and output after it's sent is super useful.
AnticitizenPrime@reddit
LLMs also love to wrap up stories too early. 'And so, he lived happily ever after' when the story's just getting started. Gotta edit that out to continue the tale.
Professional_Fun3172@reddit
Not the person who was requesting this, but possibly to modify context for future replies?
Dependent_Status3831@reddit
Please add MLX as a future inference engine
GhostGhazi@reddit
Very excited for this. I had a lot of issues with Jan not recognising downloaded models the next day, after it had worked perfectly, even though they were still in the settings folder. I hope it's been fixed.
eck72@reddit (OP)
Ah, that must be annoying... I haven't come across that issue before. If it happens again with the new release, please send over the details.
GhostGhazi@reddit
I'm seeing more hallucination than ever before btw, has anything changed that could cause this?
illusionmist@reddit
MLX support?
No-Source-9920@reddit
That's amazing!
What version of llama.cpp does it use in the background? Does it update as new versions come out, like LM Studio does, or is it updated manually by you guys and then pushed?
eck72@reddit (OP)
Thanks!
It's llama.cpp b5509.
We were testing a feature that would let users bump llama.cpp themselves, but it's not included in this build. We'll bring it in the next release.
Quagmirable@reddit
Nice! That would be ideal.
pokemonplayer2001@reddit
Excellent!
Zestyclose_Yak_3174@reddit
Automatic or faster Llama.cpp would be greatly appreciated!!
kkb294@reddit
This would be a game changer and may bring you a lot of maintenance activities as different users with different combinations of llama + Jan will start flooding your DMs and git issues.
I'm curious how you're thinking of achieving this without overloading you and your development folks.
AlanzhuLy@reddit
Looks fantastic. Love the custom assistant feature.
SithLordRising@reddit
Tauri! Thank you. Haven't heard of this and about to ditch electron. Great call 🤙
cManks@reddit
If you don't want to deal with rust, consider neutralinojs
maifee@reddit
Tauri isn't open source afaik
TheOneThatIsHated@reddit
Wdym, i see github etc.... What gives?
Hujkis9@reddit
what?
OnceagainLoss@reddit
Of course something on GitHub can be open source, especially given an MIT license.
Elibroftw@reddit
Wasting other people's time is a hobby it seems for some redditors.
Elibroftw@reddit
Why spread fake news? Why?
SithLordRising@reddit
It's fully open source MIT I think https://github.com/tauri-apps/tauri
stevyhacker@reddit
Do you have any insights to share with refactoring from Electron to Tauri? Any noticeable differences?
iliark@reddit
Electron will be more consistent cross-platform because it's the same browser everywhere. Tauri uses the built-in webview, which requires testing on each target platform.
eck72@reddit (OP)
ah, that's changing. Tauri will soon support bundling the same browser stack across platforms. We're considering the new Verso integration: https://v2.tauri.app/blog/tauri-verso-integration/
Looks promising for improving cross-platform consistency.
Everlier@reddit
Hopefully it'll finally improve performance on Linux. It was notoriously bad with the wry runtime, and just a few months ago it seemed like nothing was going to change.
eck72@reddit (OP)
Tauri helps us bring Jan to new platforms like mobile. It's also lighter and faster, which gives us more room to improve performance in upcoming updates.
abkibaarnsit@reddit
Did you also look at Wails?
If so, any particular reason for going with Tauri
eck72@reddit (OP)
Yes, we did. Wails doesn't support mobile, and we're planning to bring Jan to mobile too - so we went with Tauri.
demon_itizer@reddit
Wow. Any ETA for the mobile release?
eck72@reddit (OP)
There's still work to be done before mobile is ready. No exact ETA yet.
demon_itizer@reddit
Thanks, great work!!
ClaudeLoom@reddit
Hey OP, Jan AI’s remote model support is surprisingly lacking for many models, unlike LM Studio, Msty, and others. I’m unsure why you guys said on Discord that you don't support the latest remote models to make Jan "work better". It’s not super complex to make remote models function like most applications have done already. Jan AI should improve their remote model support. Not everyone has local model hardware (8b+ models), and some RAG tasks require remote models. Many providers offer free tiers. :(((
The_frozen_one@reddit
As someone who is finalizing a port of a small app (atv-desktop-remote) from electron to tauri: If most of your code is in the renderer, the transition is easy. If you have lots of stuff in main.js, you’ll need to reimplement that in rust. The IPC line is still there (between main and renderer processes), you can move stuff around if you need to.
The iteration time with tauri was a bit slower, since it’s compiling instead of running a node.js script, but it’s not terrible. The API is pretty good and generally works, but testing on different OSes is a must (macOS/Windows/Linux - a few WMs).
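To make that concrete, here's a minimal sketch of what moving an Electron ipcMain handler into a Tauri command looks like (the command name and payload are made up for illustration):

```rust
use serde::Serialize;

// Return types crossing Tauri's IPC boundary must be serializable.
#[derive(Serialize)]
struct DeviceInfo {
    name: String,
    address: String,
}

// In Electron this logic would live in main.js behind ipcMain.handle("list_devices", ...);
// in Tauri it becomes a Rust command the frontend calls with invoke("list_devices").
#[tauri::command]
fn list_devices() -> Vec<DeviceInfo> {
    vec![DeviceInfo {
        name: "Living Room".into(),
        address: "192.168.1.20".into(),
    }]
}

fn main() {
    tauri::Builder::default()
        .invoke_handler(tauri::generate_handler![list_devices])
        .run(tauri::generate_context!())
        .expect("error while running tauri application");
}
```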
Karim_acing_it@reddit
I really love Jan, have been using it more and more! Regarding the last picture, the model overview is still a bit scuffed in comparison to your super clean and beautiful design. Especially once a model is loaded and the right Start button changes to a stop, the button background color gets ever so slightly redder. Really hard to make out the difference and find the loaded model.
Would be great to have maybe tiles for each provider and their respective quant, same goes for the "Hub" overview. Any progress is great, just adding it to the wishlist tombola :)
tuananh_org@reddit
almost 1gb app ???
eck72@reddit (OP)
It's double for Mac due to the universal binary (Apple Silicon + Intel). On Linux, dependencies like CUDA add to the size. Future updates will reduce this.
Obvious_Sea4587@reddit
Once I upgraded, all my engines went missing. I re-added them, but using Jan as a client for the LMStudio server does not work anymore.
It says it cannot locate any models. This wasn't a problem in 0.5.17.
Thread settings are gone, which is bad because I’ve been using it to adjust the sampling parameters on the fly for Mistral, HF, and LM Studio chats to fine-tune and test various models.
So far, the update brings a better look and feel but kills core functionality that kept me using Jan as my core LLM client. I guess I’ll be sticking to 0.5.17.🤷🏻♂️
eck72@reddit (OP)
Totally get it - it's disappointing. We're on it and looking for a good solution!
Regarding LM Studio, both LM Studio and Ollama don't support preflight requests, which causes integration issues. We're working to make this smoother.
Obvious_Sea4587@reddit
Hey! Good to know I haven't been left in the dark and that it's on your roadmap sooner or later. Jan is the most sane FOSS client so far. The new UI is neat. You're going in the right direction.
Thanks for this!
eck72@reddit (OP)
Thanks, this comment means a lot to us!
--dany--@reddit
It looks sleek! Educate me: what makes Jan different from / better than LM Studio? It seems you have the same backend and a similar frontend?
eck72@reddit (OP)
Jan is open-source and, though I might be biased, a lot easier to use.
Not sure what LM Studio's roadmap looks like, but ours seems to be heading in a very different direction. That’ll become more clear with the next few releases.
Quick note: We're experimenting with MCP support using our own native model, Jan-nano, which outperforms DeepSeek V3 671B in tool use. It's available now in the Beta build.
--dany--@reddit
Thanks for the thought. I’m trying it now.
It occurs to me that Tauri can build mobile apps for iOS and Android as well. Do you have any plans to release them if llama.cpp is ready for those platforms?
eck72@reddit (OP)
100%, Jan Mobile is on the roadmap. One of the main reasons we revamped everything was to make Jan flexible enough to bring it to new platforms like mobile.
countAbsurdity@reddit
Can you use TTS with Jan? I use LM Studio a lot and usually have 6-8 different models, but I'll be honest, sometimes it's a chore to read through all their responses; it would be nice to just click a button and have it spoken to you.
eck72@reddit (OP)
Not yet, but it’s definitely on the roadmap.
By the way, we trained a speech model and are still working on speech-related features. Planning to launch something bigger soon.
GhostGhazi@reddit
Where is your roadmap?
countAbsurdity@reddit
Interesting, I'll keep my eyes open, thanks for the follow-up.
anzzax@reddit
I’m waiting for MCP and tools use. These days, assistants without tools feel like a PC without internet :)
oxygen_addiction@reddit
Try the beta.
eck72@reddit (OP)
Thanks! We're working on shipping it properly.
By the way, you can already start using MCPs with the beta release but expect some bugs.
TheOneThatIsHated@reddit
What you could do is integrate the open-source MLX engine from LM Studio to add support for it, and/or add LM Studio API support.
SelectionCalm70@reddit
Have you tried Flutter for the desktop app?
Arcuru@reddit
Did you fix the licensing issue? As pointed out when you switched to the Apache 2.0 license, you need to get approval from all Contributors to change it.
If you don't get those approvals, the parts of the code submitted under the AGPL are still AGPL
https://old.reddit.com/r/LocalLLaMA/comments/1ksjkhb/jan_is_now_apache_20/mtmpfpu/
Navith@reddit
Looking forward to them ignoring this again.
meta_voyager7@reddit
How do I use Ollama models in Jan? I couldn't figure it out.
meta_voyager7@reddit
Jan not working with Ollama LLMs out of the box is my biggest gripe.
dasjati@reddit
This looks really good. I like that UI and will give it a try. I'm not happy with LM Studio's redesign. I find it very confusing. This looks much more straightforward. And of course it's great that it's open source.
TimelyCard9057@reddit
Migrating to Tauri is actually big! Glad to see Tauri is getting used more
shouryannikam@reddit
OP, can we get a blog post about the migration please? I really like Tauri but the official docs are lacking so would love the learning opportunity from your team.
eck72@reddit (OP)
I kindly asked the team who handled the migration to put together a blog post about the process.
solarlofi@reddit
I really like the addition of the Assistants feature, as it was something I felt the app was lacking, and that was preventing me from going back to it.
Are there any plans to add custom models similar to Open Web UI? Mainly, it would be nice to choose a downloaded model, customize the system prompt and other parameters, and save it so it can be used in the future.
Currently, I don't see a way to assign an "assistant profile" to a selected model, and it seems to be missing some settings like context size. This would help so I don't have to open up the options menu and manually adjust context size or other options each time I want to start a new chat.
Also, as mentioned already, an edit button is fantastic. Sometimes you need to start the model off positively for it to comply with a request.
One thing I really appreciate about Jan AI over others like LM Studio is the ability to use cloud-based models. I like to alternate between local and online. The performance is great using the app, and I'm looking forward to seeing what the future brings!
Equivalent-Bet-8771@reddit
Does it have artifacts or canvas like Claude or GPT?
eck72@reddit (OP)
Not yet.
huynhminhdang@reddit
The app is huge in size. I thought Tauri didn't ship the whole browser.
eck72@reddit (OP)
Totally fair. The app's size is due to the universal bundle - we're working on slimming it down soon.
me1000@reddit
Can you say more about this "universal bundle"? What exactly does that mean in this context? I write software, so I'd appreciate any technical insights you have here.
eck72@reddit (OP)
Universal bundle on Mac means the app includes native code for both Apple Silicon & Intel Macs, so it's basically twice the size.
Asleep-Ratio7535@reddit
Haha, shockingly, it's not huge for an all-around client. This is tiny compared to others, except LM Studio? They use separate runtimes. Ollama is 5 GB.
kamikazechaser@reddit
~1 GB package size on Linux?
eck72@reddit (OP)
We're planning to decouple some dependencies in the Linux deb, especially CUDA-related files. So these won't be shipped with the app in the next version - it'll be much smaller.
HugoDzz@reddit
Awesome! Big fan of Tauri, used it to make my little game dev tool a desktop app (using Rust commands for the map compiler!)
tuananh_org@reddit
Can it be used with LM Studio?
Plums_Raider@reddit
Any plans to make this Docker/Proxmox/Unraid compatible in the near future? That's what mostly keeps me with OpenWebUI at the moment.
eck72@reddit (OP)
It's definitely in the pipeline, but we don't have a confirmed timeline yet.
Quick question: Would you be interested in using Jan Web if we added that?
Jan Web is already on the roadmap.
Plums_Raider@reddit
Cool to hear. Yeah, I don't have an issue using web apps; I'm using OpenWebUI the same way atm. Of course a proper app would be better for just connecting to my home server's IP, but since OpenWebUI is getting a bit too bloated atm, I think you could grow your user base a lot with Jan :) Unfortunately I couldn't find the roadmap, as the link in the changelog leads to a dead GitHub page.
eck72@reddit (OP)
We'd like to expand Jan's capabilities and make AI magic accessible to everyone. It's obvious that AI is changing how we do things online, and I believe it will change how we do everything. It's the new electricity.
We want to provide an experience in Jan, where users can use AI without worrying about settings or even model names. So the web version will help most people get started easily.
Updating the broken link in /changelog, thanks for flagging it!
Eden1506@reddit
Does it run with Jan-nano web search? The beta doesn't work for me.
eck72@reddit (OP)
Could you please check if your beta version is up to date? Settings -> General.
Also, please make sure you've added a Serper API key for web search (Settings -> MCP). Jan-nano uses the Serper API to search the web. We're planning to enable web search without an API key soon.
If you're still getting an error on the latest version, please share more details.
Eden1506@reddit
Thanks for the answer. I was missing the Serper API key, but you should really write that down somewhere visible, as otherwise I honestly wouldn't have found that I had to add it.
Eden1506@reddit
Please make a tutorial on how to make the web search function run locally, as despite multiple attempts it won't work for me.
I installed Jan beta both on my windows and archlinux system.
Activated all MCP server tools except filesystem and downloaded Q8 Jan Nano.
As a result I receive as an answer:
It seems that there is an issue with accessing the Google search tool, as it is returning a 403 Forbidden error. This typically indicates that the API key or access token used to authenticate the request is invalid or has expired. ...
and app logs (in German for some reason)
RMCPhoto@reddit
Very cool, I was just considering migrating an electron app to tauri. How was the experience? Do you notice any performance benefits?
eck72@reddit (OP)
I haven't personally done the migration myself, but I heard from our team that it went quite smoothly. They had actually been planning the move since last year and made Jan's codebase very modular and extensible, which helped a lot.
Most of the logic lives in extension packages, so switching from Electron to Tauri mainly involved converting the core API from JavaScript to Rust. There were some CI challenges, especially around the app updater and GitHub release artifacts, but they handled it well.
I'd love to highlight that the team did a really great job abstracting the codebase and making it easy to swap components. Feel free to join our Discord to hear the full story from them.
To be honest, the level of extensibility in Jan surprised me, and I think it'll open a lot of possibilities for plugins and future development for the open-source community.
Actually, we're hosting an online community call next week to discuss the whole journey & more: https://lu.ma/nimqd2an
GrizzyBurr_@reddit
Just about perfect for what I was looking for in a local app. The only thing I'd want in order to switch is an inline shortcut to switch agents/assistants. I love the way Mistral's Le Chat lets you use an @ to use a different agent. That and I also like a global shortcut to call a quick pop-up prompt... not really a requirement though. The less I have to touch my mouse, the better.
eck72@reddit (OP)
Great ideas, thanks for sharing! We'll definitely consider adding inline shortcuts to switch agents/assistants. We experimented with a global shortcut for quick access but decided to remove it. We'll revisit the idea again - thanks for the suggestion!
Infamous_Trade@reddit
Can I use an OpenRouter API key?
eck72@reddit (OP)
Yes, you can. Jan supports OpenRouter as well - go to Settings -> Model Provider.
lochyw@reddit
Why not wails?
Ok-Pipe-5151@reddit
Wails doesn't seem like a framework suitable for serious projects. It is largely managed by a single person, the community is significantly smaller than Tauri's, and it lacks many features compared to Tauri.
lochyw@reddit
That's not true/accurate at all. It has multiple skilled maintainers, with various official products built with it like a VPN client etc.
Ok-Pipe-5151@reddit
https://github.com/wailsapp/wails/graphs/contributors
This says otherwise. 80% or more of the commits are from leaanthony. Also, I couldn't find a lot of serious projects in the community showcase either; the most relevant ones would be Wally and Tiny RDM.
eck72@reddit (OP)
Wails could've been a good option, but it doesn't support mobile. So it doesn't align with our future plans. We want to make Jan usable everywhere, and we'll be sharing more with upcoming launches soon.
flashfire4@reddit
I love Jan! Is there an option for it to autostart on boot with the API server enabled? I couldn't find any way to do that with the previous versions of Jan so I went with LM Studio for my backend unfortunately.
curious4561@reddit
Jan AI can't read or analyse my PDF documents even when enabled. I use models like Qwen 8B R1 Distill, etc.
eck72@reddit (OP)
ah, interacting with files is still an experimental feature in Jan, so it's a bit buggy. We're working on it and planning to provide full support in upcoming releases.
curious4561@reddit
So I updated to 0.6.0 -> now my NVIDIA GPU isn't detected (Linux Mint with proprietary drivers) and it constantly wants to connect via WebKitNetworkProcess. OpenSnitch gives me notifications all the time, even when I disable network access - it asks again and again.
curious4561@reddit
OK, it detects my GPU now, but I can't enable llama.cpp, and the WebKitNetworkProcess thing is really annoying.
Law1z@reddit
I just updated. Haven’t tried it a lot yet but I must say it seems great! I get better performance than before, it looks better and feels more intuitive!
eck72@reddit (OP)
Thanks! Would love to hear your thoughts once you’ve tried it out.
Androix777@reddit
Been looking for an open-source UI for OpenRouter for a long time. It looks very promising, but I haven't figured out how to output reasoning yet.
eck72@reddit (OP)
ah, reasoning output isn't supported yet - it's still on the todo list to check.
Bitter-College8786@reddit
Does it auto-detect the instruct format? I had issues with that last time I used it
eck72@reddit (OP)
Yes, it automatically detects the instruct format based on the model. The new version is much better at it than before. Happy to hear your comments if you give it a try.
Bitter-College8786@reddit
Sounds really good! I will give it a try! I am using LM Studio now and want to replace it with an open source solution
eck72@reddit (OP)
Thanks! Would love to hear what you think once you've tried it out.
ClaudeLoom@reddit
Jan AI’s remote model support is surprisingly lacking for many models, unlike LM Studio, Msty, and others. I’m unsure why they claim to not support all remote models to make Jan "work better". It’s not difficult to make remote models function like other applications. Jan AI should improve their remote model support.
ralfun11@reddit
Sorry for what might be a stupid question, but how do you connect a remote ollama instance? I've tried to add a custom provider with different variations of base urls (http://192.168.3.1:11434/v1/chat, http://192.168.3.2:11434/v1, http://192.168.3.2:11434/) and nothing worked so far
eck72@reddit (OP)
ah, could you check if the API key field is empty? Jan supports OpenAI-compatible setup for APIs. So if it's empty, it won't work, even if the remote endpoint itself doesn't require one. We should add a clearer indicator for this, but in the meantime, try entering any placeholder key and see if that works.
Plus, Ollama doesn't support OPTIONS requests on all endpoints (like /models), which breaks some standard web behavior. We're working on improving compatibility.
ed0c@reddit
I'm facing the same issue with http://x.x.x.x:11434/v1 or http://x.x.x.x:11434, and an API key entered.
ralfun11@reddit
This doesn't help :/ Thanks for the response anyway
ed0c@reddit
Chat completion url : http://x.x.x.x:11434/v1/chat/completions
Model list url : http://x.x.x.x:11434/v1/models
NakedxCrusader@reddit
Would I be able to use API models through the OpenAI API?
eck72@reddit (OP)
Absolutely, Jan supports APIs too. Just go to Settings -> Model Providers to select one or add a new one.
RickAndTired@reddit
Thank you for providing a Linux AppImage
fatinex@reddit
When using Ollama, if the model does not fit in my GPU it will fill the GPU and offload the rest to the CPU, but the same did not work with Jan.
eck72@reddit (OP)
ah, you can update the number of GPU layers manually in model settings (Model Providers -> llama.cpp).
We're planning to add automatic GPU offloading soon, once it's merged upstream in llama.cpp
Perfect-Category-470@reddit
The new UI looks great, and custom assistants are a game changer. Huge thanks to the Jan team!🔥🔥