Created an AI Research Assistant that actually DOES research! One query, a FULL document of knowledge!
Posted by CuriousAustralianBoy@reddit | Python | View on Reddit | 12 comments
Automated-AI-Web-Researcher: After months of work, I've made a Python program that turns local LLMs running on Ollama into online researchers for you. Literally type a single question or topic, and when you come back you'll find a text document full of research content, with links to the sources and a summary, and you can ask it questions too! And more!
What My Project Does:
This automated researcher uses internet searching and web scraping to gather information based on your topic or question. It generates focus areas designed to explore different aspects of your topic and investigates each one to retrieve relevant information. The LLM breaks down your query into up to 5 specific research focuses, prioritising them by relevance, then systematically investigates each one through targeted web searches and content analysis, starting with the most relevant.
After exhausting those focus areas, it reviews the gathered content and uses the information within it to generate new focus areas. In my testing it has often found new, relevant focus areas based on what it had already gathered (for example, specific case studies that it then searches for directly in relation to your topic). This use of already-gathered research to develop new areas of investigation has sometimes led to interesting and novel research focuses that might never occur to a human. Mileage may vary, and this program is still a prototype, but shockingly, it actually works!
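The "break the query into up to 5 prioritised focus areas" step could be sketched roughly like this (a hypothetical illustration, not the repo's actual code; the `Area: ... | Priority: ...` reply format and the function name are assumptions for the example):

```python
import re

def parse_focus_areas(llm_output: str, max_focuses: int = 5) -> list[str]:
    """Parse lines like 'Area: <topic> | Priority: <n>' from an LLM reply
    into a list of focus areas ordered by priority (1 = most relevant).
    The line format here is an assumption, purely for illustration."""
    focuses = []
    for line in llm_output.splitlines():
        m = re.match(r"\s*Area:\s*(.+?)\s*\|\s*Priority:\s*(\d+)", line)
        if m:
            focuses.append((int(m.group(2)), m.group(1)))
    focuses.sort()  # investigate the most relevant focus first
    return [topic for _, topic in focuses[:max_focuses]]

reply = """Area: battery chemistry advances | Priority: 2
Area: grid-scale storage costs | Priority: 1
Area: recycling case studies | Priority: 3"""
print(parse_focus_areas(reply))
# ['grid-scale storage costs', 'battery chemistry advances', 'recycling case studies']
```

Each parsed focus would then drive its own round of web searches, with the loop repeating on newly generated focuses.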
Key features:
- Continuously generates new research focuses based on what it discovers
- Saves every piece of content it finds in full, along with source URLs
- Creates a comprehensive summary of the research contents when you're done, and uses it to respond to your original query/question
- Enters conversation mode after providing the summary, where you can ask specific questions about its findings and research, even things not mentioned in the summary, provided the gathered research contains relevant information
- You can run it as long as you want, until the LLM's context is at its max, at which point the research automatically stops while still allowing a summary and questions. Or stop it at any time, which will cause it to generate the summary
- Also includes a pause feature so you can assess research progress and decide whether enough has been gathered, letting you either unpause and continue or terminate the research and receive the summary
- Works with popular Ollama local models (recommended phi3:3.8b-mini-128k-instruct or phi3:14b-medium-128k-instruct which are the ones I have so far tested and have worked)
- Everything runs locally on your machine, yet still gives you results from the internet; from a single query you can get a massive amount of actual research back in a relatively short time
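The "stop when the LLM's context is at its max" behaviour in the list above could work something like this rough sketch (the 4-characters-per-token estimate, function names, and reserve size are assumptions, not the repo's actual logic):

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English text."""
    return len(text) // 4

def should_stop_research(gathered: list[str], context_window: int = 128_000,
                         reserve: int = 8_000) -> bool:
    """Stop researching once the gathered content (plus a reserve kept free
    for the summary prompt and conversation mode) would overflow the
    model's context window."""
    used = sum(estimate_tokens(chunk) for chunk in gathered)
    return used + reserve >= context_window

print(should_stop_research(["x" * 400_000]))  # ~100k tokens used -> False
print(should_stop_research(["x" * 500_000]))  # ~125k tokens used -> True
```

The 128k default matches the context window of the recommended phi3 128k-instruct variants.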
The best part? You can let it run in the background while you do other things. Come back to find a detailed research document with dozens of relevant sources and extracted content, all organised and ready for review, plus a summary of relevant findings, and you can ask the LLM questions about those findings. Perfect for research, for hard-to-research or novel questions you can't be bothered to look into yourself, or just for satisfying your curiosity about complex topics!
GitHub repo with full instructions:
https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama
(Built using Python, fully open source, and should work with any Ollama-compatible LLM, although only phi 3 has been tested by me)
Target Audience:
Anyone who values locally run LLMs, anyone who wants to do comprehensive research from a single input, and anyone who likes innovative and novel uses of AI which even large companies (to my knowledge) haven't tried yet.
If you're into AI, or curious about what it can do and how easily you can find quality information by letting it search online for you, check this out!
Comparison:
Where this differs from pre-existing programs and applications is that it conducts research continuously from a single query, performing potentially hundreds of online searches, gathering content from each one, and saving that content into a document along with links to every website it gathered information from.
Again: potentially hundreds of searches, all from a single query. They aren't random searches either; each one is well thought out and explores different aspects of your topic/query to gather as much usable information as possible.
Not only does it gather this information, it summarises it all as well, extracting the relevant parts when you end the research session. It goes through everything it has found and gives you the important parts relevant to your question. Then you can still ask it anything you want about the research, and it will use any of the information it has gathered to respond.
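The "saves every piece of content in full, along with source URLs" behaviour can be pictured as assembling a document like the following (a simplified sketch; the real file format in the repo may differ):

```python
def build_research_document(findings: list[dict]) -> str:
    """Assemble gathered snippets into one research document, each entry
    numbered and followed by the URL it came from (format illustrative only)."""
    sections = []
    for i, finding in enumerate(findings, start=1):
        sections.append(f"[{i}] {finding['content']}\nSource: {finding['url']}")
    return "\n\n".join(sections)

doc = build_research_document([
    {"content": "Solid-state batteries promise higher energy density.",
     "url": "https://example.com/batteries"},
    {"content": "Grid storage costs fell sharply over the last decade.",
     "url": "https://example.com/storage"},
])
print(doc.splitlines()[0])
# [1] Solid-state batteries promise higher energy density.
```

The summary and conversation steps would then prompt the LLM with this document (or relevant slices of it) as context.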
To top it all off, compared to other services like ChatGPT's internet search, this is completely open source and runs 100% locally on your own device, with any LLM model of your choosing (I have only tested Phi 3, but others likely work too).
kankyo@reddit
Like all AI: don't be fooled.
greeneyedguru@reddit
So it's like perplexity?
ecthiender@reddit
But that's available in perplexity pro right? Or are you suggesting this capability is available on the free version as well? I have used perplexity briefly, so not sure and asking a genuine question.
adesh112@reddit
Exactly my thought
G0muk@reddit
I've never messed with any local AI stuff - how strong of a pc does it take to run this?
Tech4dayz@reddit
I see you hard coded your LLM configs, presumably to match the device you used to test. It'd be nice if it could be adjusted at run time or something.
menge101@reddit
Yeah, I had an issue with this as well, as I know nothing about LLMs or Ollama. More detailed setup steps are needed.
I may PR something...
informatician@reddit
I third this! I got Ollama installed and running and ran Web-LLM.py, but got the error: model 'custom-phi3-32k-Q4_K_M' not found
I know I need to configure llm_config.py but I don't know how to determine MODEL_PATH or model_name.
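For what it's worth, that error generally means the config points at a model name Ollama doesn't have locally; the fix is to pull a model and make the config use that exact name. Something along these lines (the key names in this llm_config.py sketch are assumptions, so check the repo's actual file):

```python
# llm_config.py -- illustrative sketch only; match the keys to the repo's real file.
# First pull a model so Ollama actually has it, e.g. from a shell:
#   ollama pull phi3:3.8b-mini-128k-instruct
LLM_CONFIG = {
    "llm_type": "ollama",
    "model_name": "phi3:3.8b-mini-128k-instruct",  # must match `ollama list` exactly
    "base_url": "http://localhost:11434",          # Ollama's default local endpoint
    "context_length": 128_000,
}
```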
azimuth79b@reddit
Bravo! Well done. Keep up the great work :)
deadwisdom@reddit
Haha, this is like my entire company in a few python files.
Good work
patrickjpatten@reddit
Can you point it at a body of knowledge instead of the internet? I'm in the energy industry; I have plenty of daily numbers, values, and forecasts, and a ton of written-up news going back years.
I'd love a research buddy that reads the last 2 months of market news and sees if any trade ideas can come from that.
RationalDialog@reddit
I know this is a bit of an unfair request, but at work I have access to Azure AI, so having this tool (which sounds awesome) connect to Azure AI, without me having to maintain the "AI part" myself, would be a gigantic benefit. But I will certainly try this out, as it sounds interesting and I often need to do this type of research.