Has anyone tried open interpreter?
Posted by badassmotherfker@reddit | LocalLLaMA | View on Reddit | 18 comments
https://github.com/OpenInterpreter/open-interpreter
It's the only "agent" that I've tried that seems to work. It's really good at resolving dependency issues on Linux, even when I already know how to solve them. For example, if I run it with the -y option, it quickly figures out which packages are needed, what I already have installed, etc.
It does have some flaws. For example, if it is editing a script on my system, it makes mistakes. However, if it re-writes the script in the console and executes it, it makes fewer mistakes.
Without a care for privacy, I told it "Please forensically analyse my computer, looking at all activities, and infer my personality", and it did so quite effectively and quickly.
You can plug it into any model. I tried llama3:8b and it wasn't intelligent enough to navigate the framework, but gpt-3.5 was fine and gpt-4-turbo truly impressed me.
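For reference, switching models is just a CLI flag. The model identifiers below are illustrative; Open Interpreter routes them through LiteLLM, so check your installed version's docs (`interpreter --help`) for the exact names:

```shell
# An OpenAI-hosted model
interpreter --model gpt-4-turbo

# A local model served by Ollama (assumes `ollama pull llama3` was run first)
interpreter --model ollama/llama3
```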
SnooCookies2024@reddit
Is Open Interpreter still active? I haven't seen any new releases
ThePhilosopha@reddit
I also just got it, literally yesterday. It's a great concept and all. It crashes a bit, but honestly it's perfect for my uses.
washegon@reddit
I used it to get through an intense Docker problem I was having on my home server. Within an hour everything was worked out.
KurisuAteMyPudding@reddit
I typed "do your worst" and walked away to eat dinner, and I came back to find it had apparently rm -rf /'d my machine and hired a gang of Albanian men to hunt me down.
10/10
Wanky_Danky_Pae@reddit
Hahahahaha I just found out about this tool so I decided to search it out on Reddit and this comment just made me spit my coffee out everywhere. Too damn funny ... Kudos!!!
Open_Channel_8626@reddit
It's probably a real issue that rm -rf is in the training data
badassmotherfker@reddit (OP)
Tbh gpt-4-turbo is so careful that it has never tried anything destructive
badassmotherfker@reddit (OP)
I did try to tell it to become self-aware once; GPT-4 denied the request and 3.5 made a file that says it's self-aware
KurisuAteMyPudding@reddit
lol it hides the truth in the txt file
danielrosehill@reddit
I know this is an old thread, but ... jumping in to share my two cents, because when I get excited about tech, I get excited. I'm very, very impressed. 20+ year Linux user, currently on openSUSE.
Love/hate relationship with the terminal. I think the hate part is only because remembering a whole encyclopedia of commands would tax anybody. This makes using the terminal a joy. I'm diving in slowly with fairly basic prompts like "let's check my autostart processes to make sure I'm not running anything that's bloating my system." Dredged up a few old systemd processes that I had lying around.
Sure, it's not perfect. The struggle seems to be mostly in holding context length, like when a command gives a long output and it then needs to start firing off tool requests. But hey ... it's early days, so give it a chance.
Plan on studying the docs today and trying it out with a local model although TBH I'm in no particular rush to do the latter. Great project and very excited to see how it evolves!
cris-crispy@reddit
I learned today that you can use Open Interpreter with Jan AI and even run it with local models. Hoping to get time to try it tomorrow!
malikmjunaid@reddit
I am trying to use it for local file analysis and it's doing well, but getting back a consistent JSON response seems to be challenging
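One generic workaround (not specific to Open Interpreter, just a common pattern): rather than trusting the model to emit bare JSON, pull the outermost `{...}` span out of the reply and validate it with the parser, retrying the request if that fails. A minimal sketch:

```python
import json

def extract_json(reply: str) -> dict:
    """Pull a JSON object out of a model reply, which is often wrapped
    in prose or a markdown code fence rather than sent bare."""
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start:end + 1])

# Typical messy reply: fenced and surrounded by chatter.
reply = 'Sure! Here is the result:\n```json\n{"files": 3, "ok": true}\n```'
print(extract_json(reply))  # {'files': 3, 'ok': True}
```

On a ValueError you would re-prompt the model, ideally echoing the parse error back so it can self-correct.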
No_Avocado_2580@reddit
How did you use llama models on it?
Miserable_Key_6699@reddit
Just use LM Studio locally, or a 3rd-party service like Mistral or one of the other big providers
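If LM Studio's local server is running, the wiring is roughly the following (flag names per the Open Interpreter docs; verify against `interpreter --help` on your installed version):

```shell
# --local targets a local OpenAI-compatible server
# (LM Studio serves one at http://localhost:1234/v1 by default).
interpreter --local

# Or point it at the server explicitly:
interpreter --api_base http://localhost:1234/v1
```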
cafepeaceandlove@reddit
llama3:8b is pretty funny with Open Interpreter. I tried it with guru and it eventually got annoyed with me and started insulting both itself and me. 70b works really well. It can get stuck in a loop occasionally - not sure what's going on there - but press ctrl+c to interrupt and gently nudge it back to the right path.
Vegetable_Study3730@reddit
I have an open-source LLM code execution library that can safely execute Python code in an isolated environment. Basically different trade-offs, because I didn't like running untrusted code on my machine.
https://github.com/Jonathan-Adly/AgentRun
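To make the trade-off concrete, here's a minimal sketch of the weakest version of the idea: run the generated snippet in a separate Python process with a hard timeout. None of the names below come from AgentRun's API; the library itself goes much further by executing inside an isolated Docker container.

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Execute a Python snippet in a child process with a hard timeout.

    -I runs the interpreter in isolated mode (no user site-packages,
    no environment-variable influence). This is NOT a real sandbox:
    the child can still touch the filesystem, which is exactly why
    container-based isolation is the safer trade-off.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-I", "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return proc.stdout if proc.returncode == 0 else proc.stderr
    except subprocess.TimeoutExpired:
        return "error: timed out"

print(run_untrusted("print(2 + 2)").strip())           # 4
print(run_untrusted("while True: pass", timeout=1.0))  # error: timed out
```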
ResponsibilityOk906@reddit
Nice job! But I have several questions. Code Interpreter lets you upload files (say, docs, CSV, XLSX) into the sandbox before code execution, and also retrieve newly created or modified files from the sandbox after execution so a user can download them from the chat. Those files remain available for download, and the chat can be continued with new requests that follow up on previous ones, even after time passes. How do I achieve the same with your library? I'm thinking of some sort of session persistence. Also, how would I make it multi-user, so that several users each have their own isolated sandbox?
Vegetable_Study3730@reddit
Those are good ideas to implement!