FluffyNarwhal

This company is shady af, they want you to build a full system as an assignment in 3 days.

Context: I'm looking for a job as I've been unemployed for some months now. Got this assignment from a startup called Annam.ai. This is what they asked for in the assignment:

https://rnsl-zc1.maillist-manage.in/click/1399c97ecd6dd2d6/1399c97ecd6dc56d

Let me share the assignment details in case you can't open the link:

Full Stack Developer Project Details

Overview: This project involves building a full-stack web application using the MERN (MongoDB, Express.js, React.js, Node.js) stack that enables users to upload an MP4 video file (typically a lecture in English, ~60 minutes in duration), automatically transcribe its contents, and generate objective-type questions (MCQs) for every 5-minute segment of the video. The questions should be generated using a locally hosted LLM (Large Language Model), ensuring offline capability and data privacy.

Key Features:

User Interface (Frontend - React.js with TypeScript):

A responsive web interface for uploading MP4 video files.

Real-time progress bar and status indicators for transcription and question generation.

Display of the transcript segmented in 5-minute intervals.

Auto-generated MCQs for each segment with options to export or edit.
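
To be fair, the frontend half is the tamest part. Here's a rough React Query sketch of the upload call plus the polling that would drive that progress bar; the /api/videos endpoints, the 2-second poll interval, and the response shapes are my own assumptions, not anything in the brief.

```typescript
import { useMutation, useQuery } from '@tanstack/react-query';

// Upload the selected MP4 to the backend. The /api/videos endpoint name
// and the { videoId } response shape are assumptions, not from the brief.
function useUploadVideo() {
  return useMutation({
    mutationFn: async (file: File) => {
      const body = new FormData();
      body.append('video', file);
      const res = await fetch('/api/videos', { method: 'POST', body });
      if (!res.ok) throw new Error('Upload failed');
      return (await res.json()) as { videoId: string };
    },
  });
}

// Poll a hypothetical status endpoint every 2s to drive the progress bar
// and the "transcribing / generating questions" status indicators.
function useProcessingStatus(videoId?: string) {
  return useQuery({
    queryKey: ['video-status', videoId],
    enabled: !!videoId,
    refetchInterval: 2000,
    queryFn: async () => {
      const res = await fetch(`/api/videos/${videoId}/status`);
      if (!res.ok) throw new Error('Status check failed');
      return (await res.json()) as { stage: string; percent: number };
    },
  });
}
```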

Backend (Node.js + Express.js + TypeScript + routing-controllers/Nest.js):

Handles file uploads securely and stores them temporarily on the local machine.

Invokes a transcription service (e.g., locally hosted Whisper model or similar) to convert speech to text.

Segments the transcript into 5-minute chunks for further processing (see the sketch after this list).

Passes each segment to a locally hosted LLM (e.g., LLaMA, Mistral, or GPT4All) to generate questions.

Stores results (transcripts + questions) in MongoDB for retrieval.
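
For what it's worth, the 5-minute segmentation mentioned above is the easy bit. A minimal sketch, assuming the transcription service returns Whisper-style segments with start/end timestamps in seconds (the TranscriptSegment and chunkTranscript names are made up for illustration):

```typescript
// Hypothetical shape of what a Whisper-style transcription service returns:
// each segment has start/end times in seconds plus the spoken text.
interface TranscriptSegment {
  start: number;
  end: number;
  text: string;
}

// Group transcript segments into fixed-length windows (default 5 minutes).
// A segment is assigned to the window its start time falls into.
function chunkTranscript(
  segments: TranscriptSegment[],
  windowSeconds = 300,
): { index: number; startTime: number; endTime: number; text: string }[] {
  const buckets = new Map<number, TranscriptSegment[]>();

  for (const seg of segments) {
    const index = Math.floor(seg.start / windowSeconds);
    if (!buckets.has(index)) buckets.set(index, []);
    buckets.get(index)!.push(seg);
  }

  return [...buckets.entries()]
    .sort(([a], [b]) => a - b)
    .map(([index, segs]) => ({
      index,
      startTime: index * windowSeconds,
      endTime: (index + 1) * windowSeconds,
      text: segs.map((s) => s.text).join(' ').trim(),
    }));
}
```

Everything around it (running Whisper locally, keeping the pipeline responsive, handling hour-long files) is where the time actually goes.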

Database (MongoDB):

Stores metadata of uploaded videos.

Stores full transcripts and segmented text.

Stores generated objective-type questions.
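
The MongoDB side is also small. Something like the following, using the mongodb Node.js driver the spec asks for, would cover the three kinds of data; the collection name, field names, and document shapes are my own guesses.

```typescript
import { MongoClient, ObjectId } from 'mongodb';

// Hypothetical document shapes; names are guesses, not from the brief.
interface VideoDoc {
  _id?: ObjectId;
  filename: string;
  durationSeconds: number;
  uploadedAt: Date;
  status: 'uploaded' | 'transcribing' | 'generating' | 'done' | 'failed';
}

interface SegmentDoc {
  _id?: ObjectId;
  videoId: ObjectId;   // back-reference to the video document
  index: number;       // 0-based 5-minute window index
  startTime: number;   // seconds
  endTime: number;
  text: string;        // transcript text for this window
  questions: {
    question: string;
    options: string[];
    answerIndex: number;
  }[];
}

// Persist per-segment transcripts together with their generated questions.
async function saveSegments(uri: string, segments: SegmentDoc[]): Promise<void> {
  const client = new MongoClient(uri);
  try {
    const db = client.db('lecture_quiz');
    await db.collection<SegmentDoc>('segments').insertMany(segments);
  } finally {
    await client.close();
  }
}
```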

AI/LLM Integration (Local Deployment):

Integrates with a locally running LLM through a Python service (e.g., using Flask or FastAPI).

Inter-process communication via REST API from Node.js to Python.

Utilizes a prompt engineering mechanism to generate random but relevant MCQs from transcript segments.
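
That "prompt engineering mechanism" boils down to one REST call per segment. A hedged sketch of what the Node side might send to a locally running Ollama instance; Ollama's /api/generate endpoint on port 11434 is its standard API, but the model choice, prompt wording, and expected MCQ JSON shape here are assumptions.

```typescript
// One MCQ as we'd like the model to return it.
interface MCQ {
  question: string;
  options: string[];
  answerIndex: number;
}

// Ask a locally running Ollama model for MCQs covering one transcript chunk.
async function generateMcqs(segmentText: string, count = 3): Promise<MCQ[]> {
  const prompt =
    `You are generating a quiz from a lecture transcript.\n` +
    `Create ${count} multiple-choice questions with 4 options each.\n` +
    `Respond with only a JSON array of objects shaped like ` +
    `{ "question": string, "options": string[], "answerIndex": number }.\n\n` +
    `Transcript segment:\n${segmentText}`;

  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'mistral', prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);

  // With stream: false, Ollama returns the full completion in `response`.
  const { response } = (await res.json()) as { response: string };
  return JSON.parse(response) as MCQ[];
}
```

In practice you would need validation and retries because small local models routinely ignore the "JSON only" instruction, which is exactly the kind of polish that does not fit into 3 days.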

Tech Stack:

Frontend: React.js, TypeScript, ShadCN, React Query

Backend: Node.js, Express.js, TypeScript, routing-controllers/Nest.js (preferred)

Database: MongoDB with mongodb Node.js driver

AI/ML Components:

Whisper (or similar) for transcription (run locally via Python backend)

LLM (e.g., Gemma, LLaMA, Mistral via Ollama) for question generation

Python (Flask/FastAPI) to host and expose AI endpoints locally

Storage: Local file system or GridFS for large video file handling

Deployment: Runs on a local machine with Docker (optional)

Flow Summary:

User uploads a lecture video via the React UI.

The backend (Node.js) stores the file and sends it to a transcription module.

Transcription is performed locally using Whisper (or similar model).

The transcript is split into 5-minute segments.

Each segment is sent to a local LLM which returns a set of randomized MCQs.

The transcript and questions are stored in MongoDB and displayed on the frontend.
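
Gluing that flow together on the Node side is roughly one upload route fanning out to the Python service and then to the LLM. A sketch assuming Express with multer for the upload, Node 18+ global fetch/FormData, and a placeholder FastAPI /transcribe endpoint on port 8000 (none of these specifics come from the brief):

```typescript
import express from 'express';
import multer from 'multer';
import { readFile } from 'node:fs/promises';

const app = express();
const upload = multer({ dest: 'uploads/' }); // temporary local storage, per the spec

// POST /api/videos: accept the MP4, forward it to the local transcription
// service, then (not shown) chunk the transcript, generate MCQs, and save.
app.post('/api/videos', upload.single('video'), async (req, res) => {
  if (!req.file) return res.status(400).json({ error: 'No video uploaded' });

  // Placeholder URL: assumes the Python FastAPI/Whisper service listens on :8000.
  const form = new FormData();
  form.append('file', new Blob([await readFile(req.file.path)]), req.file.originalname);

  const transcription = await fetch('http://localhost:8000/transcribe', {
    method: 'POST',
    body: form,
  });
  if (!transcription.ok) {
    return res.status(502).json({ error: 'Transcription service failed' });
  }

  const { segments } = (await transcription.json()) as { segments: unknown[] };
  // Next steps (omitted): chunkTranscript(segments), generateMcqs(...), MongoDB writes.
  res.status(202).json({ segmentCount: segments.length });
});

app.listen(3000);
```

Even this happy-path version skips queuing, progress reporting, temp-file cleanup, and the small matter of fitting Whisper plus an LLM into 16GB of RAM with no GPU, which is where the real effort goes.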

Optional Enhancements:

Admin panel to review and edit generated questions.

Export a structured JSON for quiz data.

User authentication and authorization.

Support for multiple languages using multilingual transcription models.

🧾 Deliverables:

GitHub repository link with full code

Short demo video (2–5 mins) showing the workflow

Final submission via the provided Google Form

So this is the assignment. They want you to build a full system with frontend, backend, AI/ML, local hosting and everything. If it were a cloud-deployed model integration project I might have tried it, but this one I'm definitely not doing. Even if I wanted to, my 16GB RAM, no-GPU laptop would cry for mercy. And I don't think this is really an assignment; I'm pretty sure the code will be used directly somewhere. I have a very strong gut feeling about it, as I've done this kind of assignment before......

What do you think about this? Am I overthinking it, or is this just what you have to do to stay in the market......

16d ago
JumpyPretzel

It would take a full month just to get a working POC of this kind. A fully functional, battle-hardened website could take even longer. These guys are nuts. Maybe they're assuming you'll take help from 6-7 friends, work day and night, and somehow manage to finish it with today's AI tools. Even if you use AI extensively, unless multiple people work on it, it's almost impossible to complete in 3 days.

FluffyNarwhal

That's the thing man, even if someone wants to do it, the hardware needed for this kind of project matters too. They think we all have 16GB NVIDIA GPUs lying around and will just do it. Those morons 🙂

BouncySushi

They want you to do the task and forget about the job...

FluffyNarwhal

I think so 😂, getting actual work done as assignments

BouncyMarshmallow

Submit the code with a hidden bash script that wipes out their storage
