Meta ML E6 Interview Prep - Allocation Between Classical ML vs GenAI/LLMs?

Posted by aa1ou@reddit | ExperiencedDevs

I'm preparing for Meta ML E6 (SWE, ML systems focus) interviews. I have 35 YOE in ML, but not in big tech.

Background: I know ML fundamentals well, but news feeds, recommendation systems, and large-scale ranking aren't my domain. I've been preparing classical ML system design for the past few weeks: feed ranking, content moderation, fraud detection, and recommendation architectures (two-tower, FAISS, etc.).
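
For context on where I'm at, here's roughly the two-tower retrieval flow I've been practicing. It's a toy sketch, not production code: random projections stand in for trained towers, and a flat FAISS index stands in for whatever ANN index a real feed would use at scale.

```python
# Toy two-tower retrieval sketch: separate user/item encoders plus a FAISS
# index for approximate nearest-neighbor candidate generation.
import numpy as np
import faiss

EMB_DIM = 64

def encode_items(item_features: np.ndarray) -> np.ndarray:
    # Item tower: in practice a trained network; here a random projection.
    rng = np.random.default_rng(0)
    w_item = rng.standard_normal((item_features.shape[1], EMB_DIM)).astype("float32")
    emb = item_features @ w_item
    faiss.normalize_L2(emb)  # unit vectors so inner product == cosine similarity
    return emb

def encode_user(user_features: np.ndarray) -> np.ndarray:
    # User tower: same idea, separate weights.
    rng = np.random.default_rng(1)
    w_user = rng.standard_normal((user_features.shape[1], EMB_DIM)).astype("float32")
    emb = user_features @ w_user
    faiss.normalize_L2(emb)
    return emb

# Build an ANN index over the item corpus (flat index for simplicity;
# IVF/HNSW would be used at feed scale).
items = np.random.rand(10_000, 128).astype("float32")
index = faiss.IndexFlatIP(EMB_DIM)
index.add(encode_items(items))

# Retrieve top-k candidates for one user, to be re-ranked downstream.
user = np.random.rand(1, 128).astype("float32")
scores, candidate_ids = index.search(encode_user(user), 50)
print(candidate_ids[0][:10])
```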

My question: How much should I worry about GenAI/LLM-focused problems (RAG, vector databases, prompt engineering) vs continuing to deepen on classical ML?
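
My conceptual picture of the GenAI side is roughly the sketch below: embed documents, put them in a vector index, retrieve top-k, and stuff the results into a prompt. The `embed()` helper is a hash-based placeholder rather than a real model, and no particular LLM API is assumed.

```python
# Toy RAG sketch: vector search over a small corpus, retrieved passages
# prepended to the question as prompt context.
import numpy as np
import faiss

DIM = 32

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: pseudo-random vector seeded by the text hash
    # (stand-in for a real embedding model).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM).astype("float32")
    return v / np.linalg.norm(v)

docs = [
    "Two-tower models separate user and item encoders for retrieval.",
    "RAG augments an LLM prompt with retrieved passages.",
    "Feed ranking usually has retrieval, ranking, and re-ranking stages.",
]
index = faiss.IndexFlatIP(DIM)
index.add(np.stack([embed(d) for d in docs]))

def build_prompt(question: str, k: int = 2) -> str:
    # Retrieve the top-k passages and place them ahead of the question.
    _, ids = index.search(embed(question)[None, :], k)
    context = "\n".join(docs[i] for i in ids[0])
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {question}"

print(build_prompt("What does RAG do?"))
```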

I can discuss these systems conceptually, but I haven't built production LLM systems. Meanwhile, I'm getting comfortable with classical ML design patterns.

Specifically:

- Recent interviewees: Were you asked GenAI/LLM questions at E6?

- If yes, what depth was expected? (High-level discussion vs detailed architecture?)

- Or was it mostly classical ML (ranking, recommendations, integrity)?

Trying to allocate remaining prep time optimally. Any recent experiences appreciated.