Grapevine
Applied AI Systems Engineer
Meet your interviewer
Saumil Tripathi
CEO & Co-Founder
Get ready for your interview with Saumil Tripathi, CEO & Co-Founder at Grapevine
The application process
- A summary of your most recent interview will be shared with the company's hiring team.
- If the company expresses interest in your resume and interview, we'll reach out to you with next steps.
- Older interviews are superseded by your most recent one.
NOTE: This is an AI-driven experience, and while we strive for accuracy, AI may sometimes generate unexpected or imperfect responses.
Note 🗒️
- Only completed interviews will be considered for job applications.
- Complete yours to be eligible for shortlisting.
Applied AI Systems Engineer
About Grapevine
At Grapevine, we’re using AI to reimagine how people connect with careers, conversations, and content. As a fast-growing startup backed by leading investors, we power dynamic recommendation engines and build the AI infrastructure behind Round1 AI Interviews. Join us to work on innovative projects that combine intelligent systems, bold thinking, and scrappy execution.
Role Overview
As an Applied AI Systems Engineer, you will own systems end to end, from idea to real-time deployment, in a high-impact, full-stack AI/ML role. You will architect personalized recommendation systems, develop conversational AI agents for interview simulations, optimize internal AI infrastructure, and prototype new user-facing AI experiences. You will work directly with the founding team and have a significant impact on products used by millions of users.
Key Responsibilities
- Design, build, and scale personalized recommendation pipelines for content feeds and career experiences.
- Architect retriever and scorer stacks with support for embeddings, BM25, and online evaluation metrics.
- Develop and optimize AI agents for interview simulations, including transcript ranking algorithms, prompt orchestration, and multi-turn memory systems.
- Build tooling for scalable experimentation and evaluation, focusing on latency, accuracy, and user feedback loops.
- Enhance internal AI infrastructure with prompt routing, caching layers, fast retrieval APIs, and observability.
- Prototype and deploy innovative, user-facing AI products in close collaboration with product, design, and frontend teams.
What We Look For
Skills:
- Strong ML fundamentals with experience in LLMs, Recommender Systems, NLP, and Information Retrieval.
- Proficiency in Python, PyTorch, and modern MLOps stacks.
- Experience with vector search libraries and databases (e.g., FAISS, Annoy), fast retrieval, and streaming inference.
- Ownership mindset with the ability to take ideas from conception to launch.
- Proven track record in building production-grade ML/AI systems within startups or lean teams.
Qualifications
- Required Experience: 2–4 years of experience in building production-grade ML/AI systems.
- Bonus Points: Background in prompt engineering, model evaluation, or agent frameworks; prior experience at a fast-paced startup; exposure to voice agents; and public writing or speaking on AI, ML, or product design.
Job Location
- Bengaluru, India (On Site)
What We Offer
- The opportunity to shape the AI direction at one of India’s most exciting startups.
- Direct collaboration with founders and early-stage teams on product and infrastructure decisions.
- End-to-end ownership with impact on products handling millions of sessions per month.
- Competitive compensation, equity, and benefits, along with professional growth and career development opportunities.