Why we no longer use LangChain for building our AI agents
Posted by _srbhr_@reddit | programming | View on Reddit | 5 comments
apf6@reddit
I haven't actually used Langchain but it seems like that's not an apples-to-apples comparison.
In their simple example they're doing a one shot LLM completion and then getting the raw chat back. In the Langchain example they're sending the completion to an output parser.
LLMs are really squirrelly and inconsistent and so you really need an output parser. That's the logic that takes the messy LLM response and conforms it to a strict data type that your code can actually rely on. And a really good feature to add there is being able to add retry & followup questions. If the response is invalid or fails some criteria, then ask in the same chat to have the LLM fix it.
So maybe Langchain sucks but I think you do still need some similar abstraction layer that does that stuff. Your product isn't going to work well if you're expecting the chat to give you a perfect response in 1 shot.
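The parse-validate-retry loop described in this comment can be sketched without any framework at all. This is a minimal illustration, not anyone's actual implementation: `ask` is a hypothetical callable standing in for whatever chat-completion client you use, and the JSON extraction is deliberately naive.

```python
import json
import re


def parse_json_reply(text):
    """Pull the first JSON object out of a messy LLM reply, or return None."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None


def ask_with_retry(ask, prompt, validate, max_retries=2):
    """Call the LLM, conform the reply to a strict type, and retry on failure.

    `ask` is a hypothetical (prompt: str) -> str function wrapping any
    chat-completion API; `validate` checks the parsed dict against your
    own criteria. On an invalid reply, the failed output stays in the
    conversation and the model is asked to fix it.
    """
    messages = [prompt]
    for _ in range(max_retries + 1):
        reply = ask("\n".join(messages))
        parsed = parse_json_reply(reply)
        if parsed is not None and validate(parsed):
            return parsed
        # Keep the bad reply in context and ask for a correction.
        messages.append(reply)
        messages.append("That response was invalid. Reply with only a JSON "
                        "object matching the requested schema.")
    raise ValueError("LLM did not produce a valid response")
```

Whether this lives in a framework or in fifty lines of your own code is exactly the trade-off the thread is debating.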
todo_code@reddit
We are using streamlit and calling the apis directly. Semantic kernel and langchain were garbage
light24bulbs@reddit
Langchain was so freaking bad. And it has that classic python OO pattern where it's hard to figure out how anything works or how to modify anything because it's all inheritance.
Cosoman@reddit
I haven't used langchain but at least in dotnet, SK's way of defining plugins that translate to function calling is really nice. If you don't need that, yeah, no need for SK
c-digs@reddit
I've been using Microsoft's Semantic Kernel and am very satisfied with it.
I think the abstractions and "building blocks" are at the right level and make it overall easier to build and control the interactions with LLMs.
There is an orchestration option, but I found that in most of the enterprise use cases, we actually want a predictable and somewhat deterministic outcome. And if that's the case, it's easier to just write and debug the specific code for that target outcome (the LLM itself is already somewhat non-deterministic so adding in AI orchestration would just amplify that).