Meta ML E6 Interview Prep - Allocation Between Classical ML vs GenAI/LLMs?
Posted by aa1ou@reddit | ExperiencedDevs | 5 comments
I'm preparing for Meta ML E6 (SWE, ML systems focus) interviews. 35 YOE in ML, but not in big tech.
Background: I know ML fundamentals well, but news feeds, recommendation systems, and large-scale ranking aren't my domain. Been preparing classical ML system design for the past few weeks - feed ranking, content moderation, fraud detection, recommendation architectures (two-tower, FAISS, etc.).
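To make the two-tower/FAISS pattern concrete, here is a minimal sketch of the retrieval stage. All dimensions, sizes, and the random embeddings are hypothetical placeholders; in a real system the vectors come from separately trained user and item encoders, and the brute-force scoring shown here is what a FAISS index (e.g. a flat inner-product index, or an approximate IVF/HNSW variant) accelerates at large scale:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared embedding dimension (hypothetical)

# Stand-ins for the two tower outputs: a real system would compute
# item_embs offline with the item encoder and user_emb at request time
# with the user encoder.
item_embs = rng.normal(size=(1000, d)).astype(np.float32)
user_emb = rng.normal(size=(d,)).astype(np.float32)

# Candidate retrieval = maximum inner product search over the item corpus.
# Exact brute force here; FAISS performs the same search (exactly or
# approximately) over billions of items.
scores = item_embs @ user_emb
top_k = np.argsort(-scores)[:10]  # indices of the 10 highest-scoring items
```

The retrieved `top_k` candidates would then feed a heavier ranking model, which is where most of the system-design discussion (features, objectives, online/offline consistency) tends to happen.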
My question: How much should I worry about GenAI/LLM-focused problems (RAG, vector databases, prompt engineering) vs continuing to deepen on classical ML?
I can discuss these systems conceptually, but I haven't built production LLM systems. Meanwhile, I'm getting comfortable with classical ML design patterns.
Specifically:
- Recent interviewees: Were you asked GenAI/LLM questions at E6?
- If yes, depth expected? (High-level discussion vs detailed architecture?)
- Or mostly classical ML (ranking, recommendations, integrity)?
Trying to allocate remaining prep time optimally. Any recent experiences appreciated.
Artgor@reddit
Usually, after the screening interview (2 coding questions + behavioral for E6), you have a call with the recruiter, who'll share what to expect in the next rounds. If you aren't going specifically for a GenAI position, you'll most likely be asked about recommendation systems.
aa1ou@reddit (OP)
Yes. I got strong marks across the board on the tech screen, with technical communication noted as especially strong. I did a full loop last year, and there were “mixed signals” on my design round. That’s why I’m putting extra effort into this.
jinxxx6-6@reddit
Senior ML here who went through a similar loop recently. I’d keep 70 to 80 percent of my prep on classical ranking and recsys system design. The interviewers pushed on objectives, feature pipelines, online/offline consistency, real-time constraints, and eval. LLMs came up as a flavor question, mostly to test tradeoffs around latency, cost, retrieval quality, and safety, not as a deep architecture drill. What helped me was timed system design mocks with the Beyz coding assistant using prompts from the IQB interview question bank. I practiced mapping product goals to metrics first, then enumerating signals and infra, and kept each chunk to about 90 seconds. You’ll be in good shape if you can defend tradeoffs crisply.
dash_bro@reddit
Hmmm my Meta E5 interview was very heavy on fundamentals as well as deployment/inference.
They really drilled down into tradeoffs and architecture nuances, with a focus on recommendation systems and ranking (LTR algorithms). The ability to explain tradeoffs and how they translate to business metrics was also something I noticed. I assume you'll go through the same, but with the added expectations of cross-team collaboration, technical maturity, and leading at a much higher level, given your experience.
My interview involved two leetcode-style coding rounds as well, which were fairly rigorous.
Ask your recruiter if you could do a mock interview with them as well (I did).
However, the team/role I was interviewing for was also working heavily in the language model space, so there were lots of fundamental questions about computation and complexity in transformers, etc.
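For the transformer computation/complexity questions mentioned above, a back-of-envelope FLOP count is the usual shape of the answer. This is a generic sketch, not Meta's question: self-attention scales quadratically in sequence length n (the QK^T scores and the weighted sum over V each cost roughly n²·d multiply-adds), while the per-token MLP scales quadratically in model width d. The `expansion=4` default is the common hidden-size convention, assumed here:

```python
def attention_flops(n: int, d: int) -> int:
    # QK^T score matrix: ~n*n*d MACs; attention-weighted sum of V: ~n*n*d more.
    # (Ignores the Q/K/V/output projections, which are O(n*d^2).)
    return 2 * n * n * d

def mlp_flops(n: int, d: int, expansion: int = 4) -> int:
    # Two linear layers per token: d -> expansion*d -> d.
    return 2 * n * d * (expansion * d)
```

The crossover follows directly: attention dominates when 2n²d > 8nd², i.e. when n > 4d, which is why long-context serving cost is an attention story and short-context cost is an MLP story.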
FWIW it was a really thorough experience but I did not manage to get an offer.
valence_engineer@reddit
No recent experience, but hilariously, ~1 year back I got dinged for even suggesting LLMs be used for something because it'd be too expensive. Given the scale they stated (on both impact and RPS), they were utterly wrong given even half-competent inference, but I wasn't going to argue with the interviewer. I suspect that level of internal mess hasn't changed since then, but it might show up in random ways.