Multiagent LLM infrastructure for data engineering and data pipeline workflow?
Posted by Guyserbun007@reddit | LocalLLaMA | View on Reddit | 1 comments
I have done quite a few projects in the past that required a lot of data engineering: understanding REST and WebSocket API endpoints, testing, creating PostgreSQL schemas, iterating, ETL, orchestration, monitoring the health of the data influx, etc. These are major pain points and time sinks.
This makes me wonder: is it possible/feasible to build a robust multiagent LLM infrastructure that automates a significant portion of this data engineering and pipeline-building process in a meaningful way?
What are your thoughts?
Former-Ad-5757@reddit
Depends on what you mean by LLM infrastructure. I don't use agents inside my infrastructure, but I let agents first vibe-code the infrastructure and then let agents optimize it. Just give a coding agent an input, give it one or two records as example output, and let it go. Keep a third record back as a deterministic check, and let the agent run until it has produced all three records correctly.
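A minimal sketch of that loop, assuming a hypothetical `call_coding_agent` function standing in for whatever agent API you use (the stub below just returns a hand-written transform for illustration; a real call would prompt the agent with the examples and any feedback):

```python
# Workflow from the comment above: show the agent one or two example
# records, keep a third held out, and accept the generated code only
# when it reproduces the held-out record exactly.

def call_coding_agent(examples, feedback=None):
    """Hypothetical agent call: returns source code for a transform.
    A real implementation would prompt an LLM with `examples` and
    `feedback`; this stub returns fixed code for illustration."""
    return (
        "def transform(rec):\n"
        "    return {'id': rec['id'], 'total': rec['price'] * rec['qty']}"
    )

def accept(code, holdout_in, holdout_out, max_rounds=3):
    """Deterministic check: run the generated transform on the
    held-out input and accept it only on an exact match."""
    for _ in range(max_rounds):
        ns = {}
        exec(code, ns)  # load the agent-generated transform
        if ns["transform"](holdout_in) == holdout_out:
            return ns["transform"]  # check passed
        # On a mismatch, go back to the agent with feedback.
        code = call_coding_agent([], feedback="holdout record mismatch")
    raise RuntimeError("agent never matched the held-out record")

# Two example records shown to the agent, one kept back as the check:
examples = [
    ({"id": 1, "price": 2.0, "qty": 3}, {"id": 1, "total": 6.0}),
    ({"id": 2, "price": 5.0, "qty": 1}, {"id": 2, "total": 5.0}),
]
holdout_in = {"id": 3, "price": 4.0, "qty": 2}
holdout_out = {"id": 3, "total": 8.0}

transform = accept(call_coding_agent(examples), holdout_in, holdout_out)
print(transform(holdout_in))
```

The point of holding the third record out of the prompt is that the agent can't overfit to it: matching it is a check the agent never saw, so passing is (weak but cheap) evidence the transform generalizes.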