How would you build an LLM agent application without using LangChain?
Posted by Zealousideal-Cut590@reddit | LocalLLaMA | View on Reddit | 216 comments
StewedAngelSkins@reddit
I'm writing one with just llama cpp.
Amgadoz@reddit
Even better: write one with the OpenAI api spec
mrjackspade@reddit
I use C# and I've literally written everything by just wrapping Llama.cpp using interop. All of these other frameworks look like such a massive headache from the outside; it seems so much easier to just push the data around myself than try and use someone else's implementations.
hustla17@reddit
Hi how can a noob get started with this.
And is C# just a personal preference?
I would assume because it is written in cpp that using cpp would be more smooth (but I don't know shit that's why I am asking)
StewedAngelSkins@reddit
llama.cpp has a C ABI so bindings to other languages tend to be decent. are you a noob to llm runtimes or a noob to programming in general?
i think the python bindings tend to be the most approachable. it's the lingua franca of ML so most tools and libraries you'll want to use will support it in some capacity, and tutorial resources will be easier to come by.
hustla17@reddit
I am doing an undergraduate degree so I have some exposure to programming but wouldn't dare to call myself more than a beginner; So essentially yes to both.
I was thinking cpp because of my course that used it for the introduction to programming. But as python is the lingua franca I am going to learn it for the sake of machine learning.
StewedAngelSkins@reddit
Yeah, you have to know python anyway so you might as well learn it now. It's pretty easy, especially if you have some experience already. C++ is fine, and if you get deeper into this stuff on a systems level you'll have to work with it to some extent, but it's probably not where I'd recommend starting (unless you're already an experienced programmer in other areas, which is why I asked that).
hustla17@reddit
Do you have any resources for a beginner to get started with this?
I already have some direction and would go the llamacpp_python route but if you have a better pathway I am all ears.
Slimxshadyx@reddit
The other person gave a great response. However, if you are a noob to programming, I’d recommend sticking with Python and just using the Ollama Python library, or Llama-Cpp-Python.
StewedAngelSkins@reddit
Doing the same, but with rust. Definitely agree.
Ragecommie@reddit
Yep, you can just write a damn wrapper for your API of choice and just build whatever logic you want.
LangChain was outdated when it was released, now it looks like a fucking npm package...
instant-ramen-n00dle@reddit
Langchain is a bloody mess. Llama_index ftw.
Any-Demand-2928@reddit
Just call the API's yourself and setup your own framework as time goes on so it's fully customized to your needs. You can copy and paste the code if you really want to from Langchain or LlamaIndex into your own codebase.
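"Just call the APIs yourself" is mostly request plumbing. A minimal sketch, assuming an OpenAI-compatible `/chat/completions` endpoint and using only the standard library — the base URL, key, and model name are placeholders:

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages, temperature=0.7):
    """Build an OpenAI-compatible /chat/completions request (url, headers, body)."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    body = json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }).encode()
    return url, headers, body

def chat(base_url, api_key, model, messages):
    """Send the request and return the assistant's reply text."""
    url, headers, body = build_chat_request(base_url, api_key, model, messages)
    req = urllib.request.Request(url, data=body, headers=headers)
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]
```

Everything a framework adds (retries, routing, tracing) can then grow around these two functions as actual needs appear.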
instant-ramen-n00dle@reddit
> You can copy and paste the code if you really want to from Langchain or LlamaIndex into your own codebase.
You, jag-off, are the reason folks don't want to contribute to FOSS projects. Copy the code without attribution my aching ass!
Any-Demand-2928@reddit
Langchain and LlamaIndex has a MIT License, one of the most permissive licenses in the world of Open Source. They are literally telling you that you can do whatever you want with the source.
_supert_@reddit
MIT licence requires attribution.
Amgadoz@reddit
Isn't attribution basically just saying "we use MIT-licensed software in our products and services"?
OrangeESP32x99@reddit
Then attribute it? Lol
The guy didn’t say claim it’s your own.
Niightstalker@reddit
While you can do whatever you want, you still need to include the original copyright and license notice.
instant-ramen-n00dle@reddit
That goes against the spirit of open source. Don't do it.
ComprehensiveTill535@reddit
dude, massive downvotes should tell you that you're not a qualified advocate or necessarily know wtf the spirit of open source even is.
tertain@reddit
That’s the spirit of open source. It’s called a fork. I didn’t see anyone claim that you shouldn’t use attribution.
DaveSims@reddit
Checks username - and you, sir, are the reason folks don't want to dine at ramen restaurants. Put that dry shit in a plastic bag and call it ramen my aching ass!
instant-ramen-n00dle@reddit
https://i.redd.it/dqgo0y3qodde1.gif
illusionst@reddit
Llama index is mostly focused on RAG-based agents right? Do they have tool (function calling) support?
NoLeading4922@reddit
llamaindex is just as bad
harsh_khokhariya@reddit
Yes! Llama index is much cleaner and useful
Zealousideal-Cut590@reddit (OP)
Noted
EnnioEvo@reddit
Just call the openai client or litellm
enspiralart@reddit
Hell even openai is bloat... requests is all you need😁
LuchsG@reddit
You fool! You forgot requests is bloat as well! urllib for the win!
Acrobatic_Click_6763@reddit
urllib is bloat! Make a C extension and send the system call from there!
enspiralart@reddit
But write the extension in ASM
Acrobatic_Click_6763@reddit
ASM is bloat, use binary.
enspiralart@reddit
Binary on RAM is bloat, use floppy disk
Acrobatic_Click_6763@reddit
Binary on floppy disk is bloat, connect the wires to the logic gates yourself.
Acrobatic_Click_6763@reddit
You know what? Electricity is bloat, just think using your mind.
-Django@reddit
Me likey the pydantic structured outputs parsing
Chigaijin@reddit
Is Haystack still doing well or are there issues with it too? Haven't checked in a while
Zealousideal-Cut590@reddit (OP)
Good point. I loved their pipelines. There are some nice docs on it [here](https://docs.haystack.deepset.ai/v1.22/docs/agent)
nold360@reddit
I dig haystack currently. But you've got to watch out for the docs; the current version is 2.8
namp243@reddit
txtai
ohhseewhy@reddit
For a newbie: what's bad about LangChain?
ThePinaplOfXstanc@reddit
Actual Langchain user here: there's no obvious way of separating the good parts from the bad parts without experience. Most of it is just junk and feature bloat.
The good so far: unified interface for different LLMs, retry/fallback mechanisms, langfuse/smith tracing and profiling (especially for out-of-the-box RAG setups), structured outputs.
The bad: the actual chains (a kitten dies every time some dumbnut tries clever things with operator overloading in Python and breaks code introspection), LCEL, documentation, probably most of everything else I didn't try yet.
I'd only interact with the bad parts if you need powerful tracing; the ramp-up is a nightmare and there's no guarantee of API stability at this point (the upside is that v0.3 trimmed down the fat a lot).
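The retry/fallback mechanism listed among the good parts is one of the easiest pieces to reproduce without the framework. A minimal sketch, with provider callables standing in for real clients (the names and signatures here are illustrative, not any library's API):

```python
import time

def call_with_fallback(providers, prompt, retries=2, delay=0.0):
    """Try each provider in order; retry transient failures before falling back.

    `providers` maps a name to a callable that takes a prompt and returns text.
    """
    errors = []
    for name, call in providers.items():
        for attempt in range(retries):
            try:
                return name, call(prompt)
            except Exception as e:  # real code would catch narrower error types
                errors.append((name, attempt, e))
                time.sleep(delay)
    raise RuntimeError(f"all providers failed: {errors}")
```

Usage with a flaky primary and a working backup:

```python
def flaky(p):
    raise ConnectionError("down")

def backup(p):
    return "ok: " + p

name, out = call_with_fallback({"primary": flaky, "backup": backup}, "hi")
# name == "backup", out == "ok: hi"
```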
GritsNGreens@reddit
You left out waiting for langchain to support whatever LLMs shipped this week and would otherwise be trivial to implement with their decent docs & nonexistent security practices.
clckwrks@reddit
This “langchain user”person is clearly an idiot lol
Environmental-Metal9@reddit
Such harsh opinion levied towards someone who was just answering a question from their perspective. If you honestly disagree with their take, there are more constructive and less degrading ways to communicate that. Otherwise it just comes across as you wanting to feel superior at someone else’s expense, which is quite petty. Which is it? Did you have valid concerns that you’d like to elaborate in a more articulate way, or were you just taking a piss at someone for no reason?
crazycomputer84@reddit
not to mention langchain doesn't support local llms that well
Niightstalker@reddit
Well if you use ollama (which is supported) it is quite easy though.
SkyGazert@reddit
Ooh! Like JIRA then?
NotFatButFluffy2934@reddit
I wanted a unified interface for async streaming on multiple models, with the API key passed as part of the initial request so I can use the user's account credentials. I tried understanding how I could do even the first part with multiple LLMs in one request and just gave up on Langchain and built my own.
kiselsa@reddit
Documentation is very lacking, everything is overcomplicated, and it's painful to do even very basic stuff. For example:
How can I do RAG + function calling + text streaming with a local model? It would be very difficult to get this right with the docs.
maddogawl@reddit
Yes and Autogen for example is just so much easier to get up and running
hyperdynesystems@reddit
It's weirdly goofy how things are set up. Want to customize one of the samples to do basically anything different than how the sample does it, to add actual functionality? Nope!
Niightstalker@reddit
Have you used it recently? Especially LangGraph is quite good imo. You can either use prebuilt components or add completely customised ones.
hyperdynesystems@reddit
I haven't used it since pretty early on. I wasn't a fan of the way it bloats your context a ton to accomplish what it wants and moved on to using other methods, mostly constrained output framework + rolling my own in terms of acting on the outputs.
Niightstalker@reddit
Actually changed a lot since then, and it's quite easy to customize now.
bidibidibop@reddit
Right, but, just to add my 2c, it doesn't make sense to continually assess frameworks. People just found something that works (including manually calling the apis, connecting to vector stores, manually chunking stuff, etc it's not that difficult), so then why waste time refreshing their docs to see if they've fixed stuff in the meantime?
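The "manually chunking stuff" part really is as simple as claimed. A minimal sketch of fixed-size character chunks with overlap (the default sizes are arbitrary, not recommendations):

```python
def chunk_text(text, size=500, overlap=50):
    """Split text into fixed-size character chunks with overlap for RAG indexing."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks
```

Real pipelines usually split on sentence or paragraph boundaries instead of raw characters, but the shape of the code stays this small.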
Niightstalker@reddit
If you work on basic stuff, yes. But I do think as soon as you go in the direction of agents, for example, LangGraph does have its advantages. I like the graph approach, and it definitely brings quite a few convenience features.
Sure you could build those things yourself as well. But that also takes some time and you need to maintain it.
So overall it is the standard tradeoff between building yourself or using a framework that you need to consider anywhere when coding.
bidibidibop@reddit
Yeah, agreed, but we're talking about langchain and not langgraph.
Niightstalker@reddit
LangGraph is from LangChain and is, for many things, their suggested way to go now. If you keep using outdated approaches instead, that's not the fault of the framework but yours.
bidibidibop@reddit
Is langchain outdated? Does it have an end of life date? That's news to me, please elaborate.
Niightstalker@reddit
As I said above, in their docs they do suggest using LangGraph instead of LangChain for certain things.
bidibidibop@reddit
"certain things" != all of friggin langchain
Niightstalker@reddit
Could you tell me where exactly I said that it is?
hyperdynesystems@reddit
For my purposes I really like constrained output and manually writing the action logic instead, since it means I know the model isn't having a ton of context taken up by the framework.
kiselsa@reddit
Exactly
hyperdynesystems@reddit
I ran into it immediately, wanting to simply use two of the samples' features together. LangChain was like "NO" and I stopped using it haha.
Old-Platypus-601@reddit
So what's the best alternative?
Jamb9876@reddit
They seem to want to force certain approaches, and if you want to do something like preprocessing PDF text, it requires jumping through hoops.
Remarkable-End5073@reddit
Hey, man. I’m just a beginner. So how do I get started building an LLM agent application? I wonder if you can give me some advice
Pedalnomica@reddit
I mean, you can get pretty far enforcing a json schema with your llm calls, parsing it, and if statements. Honestly that might be a great way to start so you really understand what's going on under the hood?
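That "enforce a JSON schema, parse it, and use if statements" approach can be sketched in a few lines. The action names and the `get_weather` tool below are hypothetical, and the model call is stubbed; the point is how little machinery a basic agent turn needs:

```python
import json

def get_weather(city):
    # Hypothetical tool; a real one would hit a weather API.
    return f"Sunny in {city}"

def run_turn(llm_reply):
    """Handle one model reply that was asked to emit {"action": ..., "args": ...} JSON."""
    try:
        msg = json.loads(llm_reply)
    except json.JSONDecodeError:
        return "error: model did not return valid JSON"
    action = msg.get("action")
    if action == "get_weather":
        return get_weather(msg["args"]["city"])
    elif action == "final_answer":
        return msg["args"]["text"]
    else:
        return f"error: unknown action {action!r}"
```

Writing this by hand once makes it very clear what the frameworks' "tool calling" abstractions are doing under the hood.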
jabr7@reddit
Choose a framework and do the tutorials + read the glossary, langgraph is an example of that
Niightstalker@reddit
This is quite straightforward though. Write a retrieval tool, create a prebuilt ReAct agent, and use stream instead of invoke.
kiselsa@reddit
Hmm, maybe you're right. I just checked and they indeed have an example for a ReAct agent, and it has tools in the API reference and chat templates.
Maybe they added this page recently, because I don't remember it being there before.
Though I don't know how easy it is to find this example now.
Also, will function calls work correctly with streaming?
Niightstalker@reddit
Yes, it will work. And you can also stream only steps if you want, like AIMessage - Tool Call - AIMessage etc.
jabr7@reddit
They have a specific internal event function to stream almost anything; you can even create custom events of your own to stream and give feedback. It's the astream_events function with the V2 API.
kiselsa@reddit
What about function calls?
jabr7@reddit
I'm sorry, but LangGraph's second tutorial has this exact combination? I think the hate for langchain is that for some cases it's really too high an abstraction.
JCAPER@reddit
There may have been some fault of my own, but months ago I made a telegram bot with python and used langchain for LLM responses. After a while, the script would always crash.
Tried now with ollama's own library, and now it works like a charm, out of the box, no problems whatsoever.
Baphaddon@reddit
Does that use llama index?
Enfiznar@reddit
Documentation has been expanded a lot recently
fueled_by_caffeine@reddit
It adds a lot of incidental complexity, hides a lot of important stuff behind abstractions making it inaccessible and requires a lot of boilerplate to do anything useful
illusionst@reddit
I tested it when it was just launched and followed its progress closely, it’s very hard to get it to do basic things, in the end, I just used LLM+RAG+Function calling. That app has been in production for a year now. No issues.
oculusshift@reddit
Abstraction hell. Too much magic going on behind the scenes. If you have vanilla experience and know what's going on behind the scenes, then these frameworks help build things faster, but if you are a beginner, you'll just end up pulling your hair out trying to figure out what's going on.
The observability tools for these frameworks are also getting really popular because of this.
Environmental-Metal9@reddit
Nothing really. This is the same discussion about frameworks in web dev: a framework can make you massively more productive, but it comes at the cost of complexity in your codebase, and now you're programming the framework, not the language. If the benefits to you are worth it, and it allows you to build things it would take too long to build otherwise, or work in a team using a shared experience, then that's a good tool to use. If, on the other hand, you just need some of the primitives in order to make a proof of concept, using a whole framework is too much.
Same principle applies here. LangChain can be seen as a framework for working with LLMs, one of many, and one that can help people be massively productive.
The risks are the same as with web frameworks: you could adopt the framework without knowing how the tech works, which is fine but could cause issues down the road, and complexity
Mickenfox@reddit
Overcomplicated web frameworks are the bane of my existence. Too many people act like adding a whole layer of new concepts does not add any complexity to your program.
I'm not going to rehash all the articles about why people dislike frameworks, but I think the worst example is when you get a cherry picked example like "Look how easy BazoorpleJS is! You can write a Hello World app in 5 lines!"... and then you try to do anything else, like accept XML instead of JSON, and these 5 lines turn into 2000 lines and several weeks of reading the documentation to see where the "magic" deserialization comes from.
Environmental-Metal9@reddit
That is because people try to replace complexity with simplicity, but simplicity lacks depth. Simplicity is good when you don't know something yet (bazoorpleJS might help motivate a new dev by allowing them to see quick progress, but only if it doesn't teach new devs a different way from the underlying language). Personally, I learned JS well in spite of first using it for work with React. I had spent a lot of time learning the DOM first, so React made sense to me, but then I worked with devs who were React devs, not frontend devs, and I worried for them. It's possible that eventually they learned the basics of JS too.
At the same time though, I’m aware that I’m not developing by physically turning transistors on and off, so I’m working on several layers of abstraction myself. I don’t know what is the clear line between too much abstraction and not enough. Feels like that’s a gut feeling kind of area, as some people still love to use assembly language (for no real benefit other than their own preference as modern compilers can do a better job than a human at writing optimized code)
sjoti@reddit
I generally agree with your view on framework vs no framework, but in the case of langchain, it falls apart because not using the framework isn't all that complex. Putting prompts together, parsing some JSON and getting responses from openai compatible endpoints really isn't that difficult.
If you use langchain and then decide you want to tweak things a little bit, suddenly you have to completely take apart what you built. It has the downsides of a framework, with very little of the upside.
Environmental-Metal9@reddit
Being a no-framework kind of person myself, I can’t speak to langchain specifically as it didn’t solve any problems I couldn’t do it myself, and I didn’t need any complexity in my simple apps. I wonder if langchain is suffering from being a trailblazer. If I remember correctly, before langchain we were all still deciding on best practices and effective approaches. I took a lot of inspiration from the way langchain does things, I just wanted some of them without the cruft of being generic enough to fit most cases. This is not a defense of langchain, though, as I said, I have 0 experience actually using it.
I think a framework will be more useful when they provide higher level abstractions such as control flow, semaphores, asynchronous and parallel processing, etc. it could be that langchain does that already, but I’m thinking less Django and more flask, for llms
The_frozen_one@reddit
I will say, the first time I used it, it was a mess and had a steep learning curve. It seemed most of the modules were focused on commercial / cloud LLMs.
I tried it again recently and it more or less did what it was supposed to. I was able to mix and match multiple LLM endpoints (local and cloud) with minimal setup.
Personally, I don't have a huge need for that level of abstraction for most of the things I'm currently playing around with, but I do think a lot of views on langchain were people like me who tried it early on and got frustrated with the amount of tinkering it took to get it to return results. I do think it's matured somewhat, and now they have a lot more purpose-built modules that cater to local LLM development.
enspiralart@reddit
Add to that the docs and the spaghetti mess of updates, breaking changes almost every release. I jumped ship a long time ago and made my own minimalist setup that is complete and gets the job done without cudgeling me.
loversama@reddit
Its good for prototyping especially if and when you're new to LLMs to sort of start to understand how things fit together.
If you start a business or offer a service with an LLM you will want to build it yourself so you know what is happening each step.
Langchain also sometimes has waste in its "calls", so it might send lots of unneeded stuff to the LLM or get stuck. If you tailor things properly you can avoid these situations, and again, if you're scaling up the application over time, inefficiencies like this will cost you money.
If you want to truly understand how RAG and other systems work and if you want to build programs and workflows that can do things that haven't been done yet, you'll likely have to grow out of Langchain quite quickly..
Expensive-Apricot-25@reddit
it doesn't implement anything that's not already trivial to do. Also, since they are abstractions, it hides A LOT of really important stuff behind the abstraction.
I can do everything I can in langchain, but with fewer lines of code in pure python. Doing it this way also hides nothing and I have full control over everything
Short-Sandwich-905@reddit
Documentation
Kat-@reddit
I use the following for agents:
Pyros-SD-Models@reddit
wait... weren't autogen and guidance once microsoft repos?
wait... microsoft still has an autogen repo. I'm confused.
Mysterious-Rent7233@reddit
Yes Guidance's developers still work at Microsoft:
https://github.com/hudson-ai
lostinthellama@reddit
Autogen is being spun out as a fully open source product by the founders, but I believe they’re still employees of MS. Not an uncommon way for a big tech company to spin out something valuable that they have no interest in productizing.
SkyGazert@reddit
Don't know about guidance but I know for a fact that Autogen is a MS product.
buyingacarTA@reddit
I am just curious when you say that you use this for agents, what sort of agents do you build? Do they work in some practice or are they for fun?
caseyjohnsonwv@reddit
Curious about the use of Markdown - have you seen a significant lift in performance? It's something we started doing at work over a year ago, but never really scientifically evaluated
fullouterjoin@reddit
TaskLink, evaluate LLM performance for common tasks across a diversity of formats
-Cubie-@reddit
Nice pick for the embeddings 👌
skinnyjoints@reddit
Built my first rag system recently functioning as a search engine for the YouTube videos I watch using ChromaDB and Stella for embedding. Worked great.
I used SQLite as an intermediary data store for the transcripts and transcript chunks before passing them to the embedding model to be stored in chroma.
Is there an easier way that I missed?
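That SQLite staging step can stay very small. A minimal sketch of the pattern described above — the table and column names are made up, and the actual embedding/Chroma push is left out:

```python
import sqlite3

def init_db(conn):
    conn.execute("""CREATE TABLE IF NOT EXISTS chunks (
        id INTEGER PRIMARY KEY,
        video_id TEXT,
        seq INTEGER,
        text TEXT,
        embedded INTEGER DEFAULT 0)""")

def stage_chunks(conn, video_id, chunks):
    """Stage transcript chunks before they are embedded."""
    conn.executemany(
        "INSERT INTO chunks (video_id, seq, text) VALUES (?, ?, ?)",
        [(video_id, i, c) for i, c in enumerate(chunks)],
    )

def pending_chunks(conn, limit=100):
    """Fetch chunks not yet pushed to the vector store."""
    return conn.execute(
        "SELECT id, text FROM chunks WHERE embedded = 0 ORDER BY id LIMIT ?",
        (limit,),
    ).fetchall()

def mark_embedded(conn, ids):
    conn.executemany("UPDATE chunks SET embedded = 1 WHERE id = ?",
                     [(i,) for i in ids])
```

The `embedded` flag makes the pipeline restartable: if the embedding step crashes halfway, rerunning it only picks up what's left.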
Flat-Guitar-7227@reddit
I think CAMEL is friendly to start with; a lot of research projects use CAMEL.
_siriuskarthik@reddit
I found LangChain to be messing with the agent's autonomous nature for some reason.
Migrating to function calling feature in openai seemed to solve much of the problems for me - https://platform.openai.com/docs/guides/function-calling
fl1pp3dout@reddit
what about LangFlow?
syrupsweety@reddit
Well, it's a node-based GUI for LangChain; at this point I would use Comfy, just to not deal with LangChain anymore. I tried it to build a RAG setup, and it was a huge pain
Niightstalker@reddit
What exactly was pain? I built a RAG with LangChain/LangGraph recently and it was really straightforward and done in a couple lines.
syrupsweety@reddit
While bare LangChain for a simple RAG was manageable, I would not say so about LangFlow, where I just spent days debugging everything. I don't know what the underlying issue here is, it was just not so usable
matadorius@reddit
Probably I would wait a few more months
obanite@reddit
langgraph is alright, I quite like the API and I think the underlying ideas are solid.
It's true that none of it is rocket science though, and it's a huge set of libraries just to do relatively simple stuff (a DAG that can do API calls)
Agreeable-Toe-4851@reddit
Following
SvenVargHimmel@reddit
Do you know what you're building ... LLM application is very broad.
If you're a seasoned engineer, just start with pydantic and litellm for direct API calls and a basic retry model, and that's all you need. Slap on semantic-router for routing to the correct agent
If not go with PydanticAI which has all of the above built in and they have a tonne of recipes in the examples folder, from your classic banking support example to a multi agent one.
Read the Anthropic Blog on agent builds and checkout their example notebooks.
There's so much more to consider, like your evals, tracing, optimisation, versioning etc., but I'm not sure what type of system you're building
jamie-tidman@reddit
It's all just string manipulation.
Literally just REST and whatever language I'm building the rest of the project in.
ArthurOnCode@reddit
This guy concats.
It's nice if there's a thin wrapper that abstracts away the particular LLM provider and model you're using, so you can experiment with many of them. Besides that, it's just strings in, strings out. This is what most programming languages were designed to do. No need to overthink it.
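That "thin wrapper" can be a few lines. A sketch, with a fake backend standing in for any real provider client — strings in, strings out, defaults overridable per call:

```python
def make_llm(backend, **defaults):
    """Wrap any provider callable behind a strings-in, strings-out interface."""
    def llm(prompt, **overrides):
        params = {**defaults, **overrides}
        return backend(prompt, **params)
    return llm
```

Swapping providers then means swapping one callable, not rewriting call sites:

```python
def fake_backend(prompt, model="x", temperature=0.0):
    return f"{model}@{temperature}: {prompt}"

llm = make_llm(fake_backend, model="local-7b", temperature=0.2)
llm("hi")                   # uses the defaults
llm("hi", temperature=0.9)  # per-call override
```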
dhaitz@reddit
One can use something like litellm or aisuite for a unified interface to several model providers.
As you say, the LLM interfaces are quite simple REST APIs. Using a framework does not reduce complexity, but increases it by adding an additional dependency.
The useful thing about LangChain is some of the building blocks, e.g. DocumentStore classes or interfaces to different vector stores. Effectively, treat it like a library where you import what you need, not a framework that defines your entire application.
jjolla888@reddit
i think you have just described litellm
hsn2004@reddit
Vercel's AI SDK🛐🙏🏻
mycall@reddit
MS AutoGen and Semantic Kernel are pretty nice to work with.
nrkishere@reddit
I'm experimenting with a no-framework approach, where every task is composed of actions. Each action is an independent microservice. The LLM schedules the actions, and the system either stores them in a queue or runs them in parallel based on the scheduling. Once an action is completed, the system interacts back with the LLM. This is very much inspired by the "building effective agents" article from Anthropic.
If it doesn't work out, I'll go back to LlamaIndex; much better docs than LangChain
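The scheduling core of that design can be sketched in-process, with plain callables standing in for the gRPC action services. The queue-vs-parallel split is reduced here to a plan of steps, where each step lists actions that may run concurrently (an assumed shape, not the commenter's actual protocol):

```python
from concurrent.futures import ThreadPoolExecutor

def run_plan(plan, actions):
    """Execute a plan of steps; each step is a list of action names.

    Actions within a step run in parallel; steps run in order.
    `actions` maps names to callables (stand-ins for microservice calls).
    """
    results = {}
    for step in plan:
        if len(step) == 1:
            # single action: run inline, no thread pool overhead
            name = step[0]
            results[name] = actions[name]()
        else:
            with ThreadPoolExecutor() as pool:
                futures = {name: pool.submit(actions[name]) for name in step}
                for name, fut in futures.items():
                    results[name] = fut.result()
    return results
```

In the real system the `plan` would come from the LLM's scheduling output and the results would be fed back into the conversation.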
Gabcot@reddit
... So basically you're creating your own framework. Sounds a lot like what CrewAI offers if you want to check it out for inspiration
nrkishere@reddit
Wouldn't call it a framework in the traditional sense. It's just a generic gRPC microservices app which can (and probably will) use a bunch of traditional third-party tools like Kafka, Redis, and Celery for all the task queueing and scheduling jobs. But it might evolve into a "framework" if it works out and we can expose a BaaS-like API for third-party actions (microservices).
The reason I'm doing it as a gRPC microservice is protobuf, being language agnostic (actions can be written in any language, not just python), and the plethora of distributed tools for additional needs (literally all cloud-native tools at our disposal).
The actual problem to solve here is the LLM being able to make judgements and choose actions based on the task. Maybe it will need vector embeddings to store details of the actions, so the system can semantically match only the required actions to keep the context size smaller.
Anyway, thanks for the suggestion. If you have any more tool, architecture or anything in mind feel free to DM. Thanks again
Watchguyraffle1@reddit
I think this is sort of the next thing in documentation. Instead of the random stuff we have now from vendors, which may or may not be easily understood and parsed by the LLMs themselves during training or via RAG/copy-paste, documentation will be provided as an ever-growing set of agent/function metadata that is processed during a conversation. I think vendors who move to that sort of documentation for their APIs will set the standard for interoperability.
_Hemlo@reddit
llamaindex?
Better_Story727@reddit
I spent two weeks trying to understand the LangChain concepts elegantly. After that, I found it's just a pile of shit. I wrote everything using my own lib, and the disaster was gone. I really regret spending so much time eating that shit.
ilovefunc@reddit
Try out agentreach.ai to help connect your agent to messaging apps or email easily.
burntjamb@reddit
The SDK’s out there are so good now that you don’t need a framework. Just build your own wrapper for what you need, and you’ll have far more flexibility. Hopefully, better frameworks will emerge one day.
PUNISHY-THE-CLOWN@reddit
You could try Azure/OpenAI Assistants API if you don’t mind vendor-lock-in
makesagoodpoint@reddit
LangChain was the first decently packaged solution for RAG. It’s bound to get usurped.
illusionst@reddit
I’ve been hearing good things about pydantic AI, it’s really simple and that’s what I like the most about it.
KingsmanVince@reddit
I write my own flows. I get to optimize little parts and process specific languages (not just English).
BreakfastSecure6504@reddit
I'm building my own framework using mediatr with c#. I'm applying Design patterns: Mediator and CQRS
Alex_Necessary_Exam_@reddit
I am looking for a tutorial to build a local LLM solution with tool calling / agent creation.
Do you have some references?
audiophile_vin@reddit
The learning curve for Langgraph is not the smallest, but the tutorials are helpful, and you can get started by getting help from Claude to create the graph and nodes. The langsmith tracing seems like it could be helpful (although I haven't had a need to inspect it yet), and having langgraph server also seems useful to serve your agent, without reinventing the wheel building the API yourself
wochiramen@reddit
Why is LangChain bad?
RAJA_1000@reddit
It doesn't look like Python; you basically need to learn a new language and the benefits are marginal. For many things you are better off writing without a framework. Pydantic AI is a nicer approach where you get a lot of benefits like structured outputs, but you can write in actual Python
swiftninja_@reddit
Have you looked at their documentation
Niightstalker@reddit
Yes, and I think it's actually quite good. Especially their LangGraph docs.
They also explain concepts like multi-agent architectures quite well imo.
What exactly do you dislike about the docs?
The_GSingh@reddit
Nah I just make Claude do that part along with the coding part.
croninsiglos@reddit
How is that working out for you? In my experience, Claude stinks when trying to generate langchain code.
enspiralart@reddit
Even then it will fail on anything nontrivial because there are always new breaking changes
Q_H_Chu@reddit
Well, the only thing I know is LangChain, so if you guys have anything else (free), I'd much appreciate it
RAJA_1000@reddit
Dude, try pydantic ai, no esoteric new language, just pythonic code that gets things done
fluxwave@reddit
There's also https://github.com/BoundaryML/baml
Mickenfox@reddit
I know Semantic Kernel exists.
enspiralart@reddit
https://github.com/lks-ai/prowl prompt owl takes it the next step
RAJA_1000@reddit
Pydantic ai?
emsiem22@reddit
I don't understand why people don't just use llama.cpp for local, APIs for proprietary cloud LLMs, and program their own interfaces. We are in the age of LLMs that kick ass at coding if you are lazy or lack experience. This is trivial.
Substantial-Bid-7089@reddit
just build your own, you probably don't need that level of abstraction / features
Foreign-Beginning-49@reddit
Try langgraph. No seriously, it's much simpler and as for its functionality every part of it can be remade in pure python in relative simplicity. It introduces really cool concepts and helps get a handle on agent orchestration. Then you can ditch it, utilize pure python, and never look back. Best of wishes to you in this the year of our agent.
OracleGreyBeard@reddit
Nice job OP! The comments here are a goldmine of things to try.
OccasionllyAsleep@reddit
What's an alternative to LangChain?
Pyros-SD-Models@reddit
There is only httpx/requests, pydantic and dspy. Perhaps outlines if you need to go crazy with structured outputs. Everything else needs more time than it should save.
NSP999@reddit
native client
goodlux@reddit
llamaindex?
GudAndBadAtBraining@reddit
Model Context Protocol. Pass everything as JSON through HTTP. It's super convenient if you want to make interactive and extensible systems that mesh well with online APIs. Also, as a bonus, you can drive it from Claude desktop if you're ever so inclined
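MCP messages are JSON-RPC 2.0 under the hood, so "passing JSON around" really is just message construction and parsing. A sketch — `tools/call` mirrors MCP's tool-invocation method, but treat the exact shapes here as illustrative rather than a spec-complete client:

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC requests need unique ids

def jsonrpc_request(method, params):
    """Build a JSON-RPC 2.0 request of the kind MCP passes around."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

def parse_response(raw):
    """Return the result of a JSON-RPC response, raising on error replies."""
    msg = json.loads(raw)
    if "error" in msg:
        raise RuntimeError(msg["error"].get("message", "unknown error"))
    return msg["result"]
```

Transport (stdio, HTTP, SSE) is a separate concern layered under these two functions.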
Acrobatic_Click_6763@reddit
I don't need a bloated framework just for an AI agent, that's a very simple task to use a module for! You (most) Python developers..
spacespacespapce@reddit
LiteLLM
Zealousideal-Cut590@reddit (OP)
Is this for agents? I thought it was just inference.
Ivo_ChainNET@reddit
I mean agent frameworks are just a few convenience wrapper classes around inference anyway. You can use RAG, memory, and function calling / tool use with litellm. In the end, they're just parameters to inference calls.
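A sketch of that point: retrieved context (RAG), conversation memory, and tool schemas are all just data assembled into the parameters of one chat-completion call. The chunk text and tool name below are made up for illustration:

```python
# RAG: chunks you retrieved from a vector store, pasted into the prompt.
retrieved = ["Paris is the capital of France."]

# Memory: just the prior turns, kept as a list of messages.
memory = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]

messages = memory + [{
    "role": "user",
    "content": "Context:\n" + "\n".join(retrieved)
               + "\n\nQuestion: what is the capital of France?",
}]

# Tool use: an OpenAI-style function schema passed as a parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "lookup_country",  # hypothetical tool
        "description": "Look up facts about a country",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

# With litellm, the actual call would be roughly:
# response = litellm.completion(model="gpt-4o-mini", messages=messages, tools=tools)
```

Everything a framework calls "memory", "retrieval", or "tools" ends up as entries in `messages` and `tools` before the request goes out.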
Eastern_Ad7674@reddit
If you are a dev with today's AI capabilities and IDEs, you can build anything from scratch.
Frameworks were useful for saving hours of effort learning to write and integrate different kinds of things.
manlycoffee@reddit
From what I learned:
don't even bother with frameworks. Just use the LLMs directly.
olli-mac-p@reddit
Use CrewAI. It builds on LangChain but delivers a standardized repository structure and simple ways to implement agents, LLM tasks and teams.
holy_macanoli@reddit
Especially since flows was introduced.
Sushrit_Lawliet@reddit
DSPY is quite good
rebleed@reddit
By far the best framework. It is lightweight, gets out of the way, and also includes some advanced utilities for prompt optimization and fine-tuning.
a_rather_small_moose@reddit
Alright, I’m kinda on the outside looking in on this. Aren’t people basically just passing around text and JSON, maybe images? Are we just at the point where that’s considered an intractable problem w/o using a framework?
laichzeit0@reddit
What is better than LangSmith? I mean adding a library, two lines of code and having full traceability? Does anything do what LangSmith + LangChain/Graph does out of the box?
TheoreticalClick@reddit
Just use autogen
Cherlokoms@reddit
LoL, why would I use a shit wrapper of utterly garbage abstraction to build my application?
ortegaalfredo@reddit
1) "Hey AI! do this thing, thanks."
2) "Hey AI that's great. Convert the output into JSON please."
3) Parse json and do things.
4) goto 1
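That four-step loop fits in a few lines of Python. Here `call_llm` is a stand-in for whatever inference client you actually use (OpenAI SDK, llama.cpp server, etc.), and the `action`/`result` JSON shape is just one possible convention:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real inference call; a real model answers the prompt."""
    # Stubbed: a real model, asked for JSON, would return something like this.
    return '{"action": "done", "result": "42"}'

def run_agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        # 1) "Hey AI! do this thing" + 2) ask for the output as JSON
        raw = call_llm(context + '\n\nRespond only with JSON: '
                                 '{"action": "...", "result": "..."}')
        # 3) parse JSON and do things
        step = json.loads(raw)
        if step["action"] == "done":
            return step["result"]
        # 4) goto 1: feed the step result back in as context
        context += f"\nPrevious step result: {step['result']}"
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("Compute the answer to everything"))  # with this stub: prints 42
```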
o5mfiHTNsH748KVq@reddit
pydantic_ai looks interesting.
https://ai.pydantic.dev
I’m already using pydantic, so I’m looking forward to trying this today.
davidmezzetti@reddit
It still surprises me how many people complain about LangChain then continue to use it.
Why not try one of the alternatives: LlamaIndex, Haystack and of course txtai (I'm the primary dev for txtai).
If you're not happy, do something about it.
obiouslymag1c@reddit
I mean you just write orchestration yourself and use OTS connectors, or just write them yourself... it's what you do as a software developer anyway if you have any sort of application that requires state management.
You lose a bit in ecosystem support in that langchain may have figured out how to make a connector or dataoutput or something more LLM friendly/compatible... but you gain full control over your dependencies and tooling.
FancyDiePancy@reddit
Semantic Kernel
DataPhreak@reddit
https://github.com/DataBassGit/AgentForge
We built our own framework.
jjolla888@reddit
try: phidata, haystack, langroid
langchain way past its use-by date
Manitcor@reddit
before langchain we just wrote the tooling or used existing orchestration packages; playbooks were still very popular.
Many still run bespoke pipelines because they prefer the control they get. Different opinions on which is good or bad often come down to how much control you want versus how much you want someone else to make those decisions for you.
It's important to remember the goal of most software and systems is to fulfill the majority of cases rather than all possible cases. There is usually a double-digit percentage of a userbase that a platform like langchain (or whatever the popular tool is) won't work for, for whatever reason.
Finally, langchain, along with every other tool, does little that's special. What they do do is provide pre-built components based on that development team's opinions on how a pipeline should work. There's little to no magic in these, just glue code.
PermanentLiminality@reddit
Langchain seemed more trouble than it is worth so I just built using python and the APIs that I needed.
jackcloudman@reddit
Pybehaviortrees
itwasinthetubes@reddit
Even chaining prompts in sequential requests is better!
I don't need even more gray hairs...
parzival-jung@reddit
dify ai?
macumazana@reddit
How? With no stress
Wooden_Current_9981@reddit
FkU Langchain. I can code a high-level RAG system using custom API requests with JSON input data. I never felt the need to use langchain, but the job descriptions still mention Langchain everywhere, as if it's a new language for AI.
FunkyFungiTraveler@reddit
I would use either phidata if it was a public facing project or aichat for personal agentic use.
LibraryComplex@reddit
Autogen has been good so far
samuel79s@reddit
How good is Semantic Kernel from MS?
segmond@reddit
python, and then one of many frameworks. there are literally 100 python LLM agent frameworks on github.
XhoniShollaj@reddit
I hate to say this cause Harrison is a very nice guy - but Langchain/LangGraph is definitely a headache in debugging and development, and definitely not ready for production. So many abstractions and layers which are always changing - in 99% of cases you'd be better off with something minimal like Pydantic or even vanilla Python + the API reference of whatever LLM you're using
Zealousideal_Cut5161@reddit
Llamaindex works for me tbh
Mindless-Okra-4877@reddit
A few days ago PocketFlow was posted here: https://www.reddit.com/r/LocalLLaMA/comments/1i0hqic/i_built_an_llm_framework_in_just_100_lines/ If you are more a programmer than an analyst, you will like it. It gives you full control of everything. Moved from CrewAI to PocketFlow in only 2 hours of work
phree_radical@reddit
Hmmm design a completion criteria you can check and then make a job queue based on that, and maybe incorporate a measure of progress
loversama@reddit
It's good for prototyping, especially if and when you're new to LLMs, to sort of start to understand how things fit together.
If you start a business or offer a service with an LLM you will want to build it yourself so you know what is happening each step.
Langchain also sometimes has waste in its "calls", so it might send lots of unneeded stuff to the LLM or get stuck. If you tailor things properly you can avoid these situations, and again, if you're scaling up the application over time, inefficiencies like this will cost you money.
If you want to truly understand how RAG works and if you want to build systems that can do things that haven't been done yet, you'll likely have to grow out of langchain quite quickly..
meta_voyager7@reddit
haystack
sekai_no_kami@reddit
AIsdk
bigdatasandwiches@reddit
Ah python?
You can build everything in langchain from pure python, so just build what you want.
Frameworks trade abstraction for implementation speed.
enspiralart@reddit
The key is to abstract the actual boring yet required parts like parsing the output w regex etc, not the parts where experimentation should def happen
bigdatasandwiches@reddit
Absolutely - I’ve found it’s possible to mix and match where needed, and some projects I’ve just tossed frameworks in the bin and just written my own wrapper.
StentorianJoe@reddit
We only use the OpenAI or requests libraries for all of our prod apps (specifics vary depending on language), through Kong AI gateway, logged through grafana dashboards and pager duty alerts - just like all of our other services.
We haven’t seen a reason to change it up as we are integrating LLMs into existing ETLs and not creating do-it-all SaaS stuff.
dogcomplex@reddit
ComfyUI! It can literally just encapsulate all types of AI workflows, and is by far the best for image/video already
AsliReddington@reddit
Structured Output at an LLM serving framework is all you need for the most part.
Zuricho@reddit
https://github.com/microsoft/autogen
GodCREATOR333@reddit
I was trying autogen(AG2) the forked off version. Seems to be pretty good.
Vitesh4@reddit
Simplemind and txtai. Very basic, but Simplemind has a pythonic way of doing structured outputs and the function calling is very painless. You do not need to keep track of and sync the actual functions and the dictionaries representing them, or do the work of passing the function output back in and calling the model again.
Roy_Elroy@reddit
anyone like flowchart? dify, flowise, these tools can be used to build agent applications.
southVpaw@reddit
Asyncio
charlyAtWork2@reddit
vanilla node.js and kafka
SatoshiNotMe@reddit
We’ve been building Langroid since Apr 2023 (predating other agent frameworks) but took a leaner, cleaner, more deliberate approach to avoid bloat and ensure code stability and quality. We’ve kept the core vision intact: agents that can communicate via messages, and a simple but versatile orchestration mechanism that can handle tool calls, user interaction and agent handoff.
It works with practically any LLM that can be served via an OpenAI-compatible endpoint, so it works with OpenRouter, groq, glhf, cerebras, ollama, vLLM, llama.cpp, oobabooga, gemini, Azure OpenAI, Anthropic, liteLLM.
We’re starting to see encouraging signs: langroid is being used by companies in production, and it’s attracting excellent outside developers.
Langroid: https://github.com/langroid/langroid/tree/main
Quick tour: https://langroid.github.io/langroid/tutorials/langroid-tour/
Hundreds of example scripts: https://github.com/langroid/langroid/tree/main/examples
cinwald@reddit
I used langchain to build a RAG pipeline that had web scraping of prices as part of it, and it was much slower than prompt + scrape + prompt without Langchain
GreatBigJerk@reddit
smolagents seems pretty good for a simple agent system.
if47@reddit
If you don't know, then you can't.
The most basic LLM agent is just a "predict next token" loop with self-feedback, and the most you need to do is concatenate strings.
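A toy version of that self-feedback loop, with `predict_next` standing in for a real model call; the counting rule exists only so the stub does something deterministic:

```python
def predict_next(text: str) -> str:
    """Stand-in for a real 'predict next token' call against your model."""
    # Toy rule: continue the sequence by counting up from the last number.
    last = int(text.split()[-1])
    return " " + str(last + 1)

def agent_loop(prompt: str, steps: int) -> str:
    text = prompt
    for _ in range(steps):
        # Self-feedback: the model's output is just concatenated onto its input.
        text += predict_next(text)
    return text

print(agent_loop("count: 1", 3))  # prints: count: 1 2 3 4
```

Everything a framework adds (tools, memory, routing) is layered on top of exactly this concatenate-and-resubmit loop.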
GeorgiaWitness1@reddit
100%.
I was like "ExtractThinker will NEVER have agents or RAG, it's just a barebones IDP library to pair with Langchain and the rest".
Now that I'm starting to check out agents and use langchain/LangGraph for my next project, I was like:
"f*** it, I will add agents coming from simple libraries like smolagents"
Available-Stress8598@reddit
Phidata. It also comes with a built in playground code snippet which you can run locally. Not sure about production
Aggravating-Agent438@reddit
bee agent