Reliable ways to get structured output from LLMs

Posted by amit13k@reddit | LocalLLaMA

What are the current best ways to reliably get structured output from local LLMs or OpenAI? I have found the following options and tried some of them:

Microsoft guidance - https://github.com/guidance-ai/guidance

LMQL - https://lmql.ai/

llama.cpp grammar - https://github.com/ggerganov/llama.cpp/discussions/2494 (see the sketch after this list)

LangChain - https://python.langchain.com/docs/modules/model_io/output_parsers/

jsonformer - https://github.com/1rgs/jsonformer

salute - https://github.com/LevanKvirkvelia/salute

outlines - https://github.com/outlines-dev/outlines
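
To illustrate the llama.cpp grammar option, here's a minimal sketch of the kind of thing I mean. It assumes a llama.cpp `server` listening on localhost:8080 and Node 18+ for the built-in `fetch`; the `/completion` endpoint and its `grammar`/`n_predict` fields are how I understand the server API, so check against your build:

```typescript
// Constrain a llama.cpp server to emit a fixed JSON shape via a GBNF grammar.
// Assumes something like `./server -m model.gguf` is running locally.
const grammar = String.raw`
root   ::= "{" ws "\"name\"" ws ":" ws string "," ws "\"age\"" ws ":" ws number ws "}"
string ::= "\"" [a-zA-Z ]* "\""
number ::= [0-9]+
ws     ::= [ \t\n]*
`;

const res = await fetch("http://localhost:8080/completion", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    prompt: "Extract the person as JSON: Alice is 30 years old.\n",
    grammar,        // sampling is restricted to strings this grammar accepts
    n_predict: 128,
    temperature: 0,
  }),
});
const { content } = await res.json();
console.log(content); // e.g. {"name": "Alice", "age": 30}
```

The nice part is that the constraint is enforced at sampling time, so the output can't drift out of the grammar. The downside is that it's local-only, which is exactly the gap I'm asking about.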

I was looking for something that could work with both local LLMs (GGUF/GPTQ models) and OpenAI, but I guess that's difficult right now? (Also, I'm more inclined towards TypeScript-based solutions (Zod) if possible.)
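
The closest thing to a portable approach I can think of is to skip constrained decoding entirely: prompt for JSON, validate against a Zod schema, and retry on failure. That works with OpenAI or any local server exposing an OpenAI-compatible API just by swapping the base URL. A rough sketch (assumes the `openai` v4 npm package and `zod`; the local URL, model name, and `extractPerson` helper are placeholders):

```typescript
import OpenAI from "openai";
import { z } from "zod";

// Point baseURL at OpenAI (leave undefined) or at any local server that
// speaks the OpenAI API; the URL below is just a placeholder.
const client = new OpenAI({
  baseURL: process.env.OPENAI_BASE_URL, // e.g. "http://localhost:5000/v1"
  apiKey: process.env.OPENAI_API_KEY ?? "sk-local",
});

// The Zod schema is the single source of truth for the output shape.
const Person = z.object({ name: z.string(), age: z.number() });

// Hypothetical helper: ask for JSON, validate, retry on bad output.
async function extractPerson(text: string, retries = 3) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const res = await client.chat.completions.create({
      model: "gpt-3.5-turbo", // or whatever your local server serves
      temperature: 0,
      messages: [
        { role: "system", content: 'Reply with JSON only: {"name": string, "age": number}' },
        { role: "user", content: text },
      ],
    });
    try {
      // Throws if the reply isn't JSON or doesn't match the schema.
      return Person.parse(JSON.parse(res.choices[0].message.content ?? ""));
    } catch {
      // Malformed output -- loop and ask again.
    }
  }
  throw new Error("no valid JSON after retries");
}

console.log(await extractPerson("Alice is 30 years old."));
```

It's less airtight than grammar-constrained sampling, but the schema gives a hard guarantee on anything that gets through, and for simple shapes a retry loop usually handles the occasional malformed reply.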

I ran into a few problems. For example, guidance doesn't seem to work with text-generation-webui because its OpenAI API adapter doesn't support logit_bias.

It would be great to hear about others' experiences with these approaches.