This project is an AI-based interviewer application designed to simulate a professional interview environment. It leverages two Large Language Models (LLMs) served by Groq: Gemma2-9b for fast inference and the reasoning model Qwen QwQ for deeper analysis. The application converses with the interviewee automatically and, at the end, provides a detailed dashboard with scores and summaries.
- Automated Interview Process: The AI conducts the interview by asking questions, evaluating responses, and providing feedback. This ensures a consistent and unbiased interview process.
- Dual LLM Integration: Utilizes Groq's Gemma2-9b for quick responses and the reasoning model Qwen QwQ for more detailed and nuanced interactions, combining speed with depth.
- Real-Time Conversation: The AI maintains a conversation history and adapts its questions based on the interviewee's responses, creating a dynamic and interactive interview experience.
- AI Interviewer Tailored with Indian Voice: For seamless interaction and to make the interviewee comfortable, the AI interviewer uses Eleven Labs for voice synthesis, tailored to speak with an Indian voice.
- Dashboard Summary: At the end of the interview, the application generates a detailed summary including scores and key points, providing a comprehensive overview of the candidate's performance.
Home Page
- A user-friendly responsive homepage with a spectacular design.
- A login option on the home page authenticates the recruiter.
Authentication
- Simplified verification process for the recruiter (admin) to access the admin dashboard via Email OTP Validation.
Initializing Recruitment Opportunity
- A descriptive form for specifying the job and the required skills for the vacancy the interview is being conducted for.
Interview Link Generation
- A unique link is sent to the candidate's email address, redirecting them to the interview room.
Interview Room
- The magical area where all interaction happens with the virtual interviewer. The AI asks questions, evaluates responses, and provides real-time feedback.
Quantitative Metrics
- At the end of the interview, the recruiter's dashboard displays quantitative metrics, including key scores and an interview summary, to help the recruiter make the final call.
This AI interviewer agent is built using LangChain and LangGraph to orchestrate a multi-step conversational workflow:
- Framework & Graph Structure
- Uses `StateGraph` from LangGraph to define nodes and edges.
- Orchestrates message flow through three main nodes: initializer, retriever, and assistant, plus a tools node.
Nodes
- Initializer: Inserts the system prompt once at the start of the conversation if not already present.
- Retriever: Queries a Supabase vector store (via sentence-transformers embeddings) to fetch past candidate questions or hints when semantic similarity is high (> 0.8). By storing pre-approved follow-up questions in the vector database, the organisation can reduce randomness across candidates and keep more control over the interview process.
- Assistant: Invokes a bound LLM (OpenAI, Google Gemini, Groq Gemma2, or HuggingFace Llama-2) to generate interview questions.
- Tools: Routes calls to external tool functions when the assistant requests them.
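The retriever's threshold check can be sketched with plain cosine similarity (stdlib only; the real app uses sentence-transformers embeddings and Supabase's vector search, and `retrieve_hint` is a hypothetical name for illustration):

```python
import math

def cosine(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve_hint(reply_vec, stored, threshold=0.8):
    """stored: list of (embedding, question) pairs.

    Return the stored follow-up question only when the candidate's
    reply is semantically close enough (> threshold); otherwise the
    assistant falls back to free generation.
    """
    best = max(stored, key=lambda s: cosine(reply_vec, s[0]))
    return best[1] if cosine(reply_vec, best[0]) > threshold else None
```

This mirrors the behavior described above: below the 0.8 threshold no hint is injected, so the LLM generates its own question.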
LLM Providers
- OpenAI via `ChatOpenAI`
- Google Gemini via `ChatGoogleGenerativeAI`
- Groq via `ChatGroq`
- HuggingFace via a `ChatHuggingFace` endpoint

These can all be swapped by setting the `provider` argument in `build_graph()`.
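The provider switch might look like the following (a sketch only: the model names and constructor arguments are assumptions, and imports are deferred so only the chosen backend needs to be installed):

```python
def get_llm(provider: str = "groq"):
    """Return a chat model for the given provider name.

    Hypothetical helper illustrating how build_graph()'s `provider`
    argument could select a backend; class names match the LangChain
    integrations listed above.
    """
    if provider == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-4o-mini")
    if provider == "google":
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model="gemini-1.5-flash")
    if provider == "groq":
        from langchain_groq import ChatGroq
        return ChatGroq(model="gemma2-9b-it")
    if provider == "huggingface":
        from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
        return ChatHuggingFace(
            llm=HuggingFaceEndpoint(repo_id="meta-llama/Llama-2-7b-chat-hf")
        )
    raise ValueError(f"Unknown provider: {provider}")
```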
Tools
- `wiki_search(query)`: Fetches up to 1 result from Wikipedia for background or context.
- `web_search(query)`: Retrieves up to 3 web snippets via Tavily for up-to-date information, such as the latest details about the candidate from sites like LinkedIn, GitHub, and Google Scholar.
- `arxiv_search(query)`: Loads up to 3 arXiv documents to ground technical questions and to fetch the candidate's research papers.
- `resume_get()`: Returns the candidate's resume on demand. Traditionally the resume sits in the system prompt and gets lost as the conversation grows; this tool lets the model re-read it whenever needed.
- `exit_tool()`: Immediately ends the interview if non-serious or inappropriate behavior is detected. Without it, the model must always respond and cannot actually quit; this tool lets it terminate the session.
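As an illustration, two of these tools could be written as plain functions like the following (in the real app they would be wrapped with LangChain's `@tool` decorator and bound to the assistant's LLM; the resume store and the argument signatures here are stand-ins, not the project's actual code):

```python
def resume_get(resume_store: dict, candidate_id: str) -> str:
    """Return the candidate's resume text so it is never lost in context.

    Hypothetical signature: the real tool takes no arguments and reads
    the resume from the application's own storage.
    """
    return resume_store.get(candidate_id, "Resume not found.")

def exit_tool(reason: str) -> str:
    """Signal that the interview must end immediately.

    The graph treats this tool call as a terminal transition instead
    of routing the result back to the assistant.
    """
    return f"INTERVIEW_TERMINATED: {reason}"
```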
Interview Flow
- Messages start at the `initializer`, pass through the `retriever` to be augmented with relevant context, then go to the `assistant` for the LLM response.
- If the LLM calls a tool, execution jumps to the `tools` node and returns to the `assistant` with the result.
- This cycle continues until the interview is complete or `exit_tool` is triggered.
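The cycle above can be sketched in plain Python as a control-flow illustration (the LLM is stubbed with a callable and tool calls are encoded as `CALL:<name>` strings, both assumptions; the real app wires these nodes into a LangGraph `StateGraph`):

```python
def run_interview(llm, tools, state=None):
    """Simulate one initializer -> retriever -> assistant/tools cycle."""
    state = state or {"messages": [], "done": False}
    # initializer: insert the system prompt only once
    if not state["messages"]:
        state["messages"].append(("system", "You are a professional interviewer."))
    # retriever: stand-in for the Supabase similarity search (> 0.8)
    state["messages"].append(("context", "retrieved question hint"))
    # assistant loop: keep going while the LLM requests tools
    while not state["done"]:
        reply = llm(state["messages"])
        state["messages"].append(("assistant", reply))
        if reply.startswith("CALL:"):
            name = reply.split(":", 1)[1]
            if name == "exit_tool":
                state["done"] = True       # terminal transition
            else:
                # tools node: run the tool, hand the result back
                state["messages"].append(("tool", tools[name]()))
        else:
            break  # a plain answer ends this turn
    return state
```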
Here are some images related to the project:
The website is running on Render's free servers, so there might be a delay when it is opened after a long time.
Watch the video demonstration of the project on YouTube:
A recruiter-focused dashboard offers a comprehensive summary of the interview process, including candidate performance metrics such as overall score, confidence, emotional stability, and job compatibility. The dashboard incorporates bias detection tools that highlight any potential AI biases, allowing recruiters to intervene or adjust the evaluation process if needed. This transparency ensures that the evaluation remains fair and objective.
The final dashboard provides the following information:
- Basic Details:
  - Name: Candidate's full name.
  - Vacancy: Position for which the candidate is interviewing.
  - SkillsNeeded: List of skills required for the position.
- Scores:
  - EducationalBackgroundScore: Score based on the candidate's educational qualifications.
  - Experience: Score based on the candidate's relevant work experience.
  - InterpersonalCommunication: Score based on the candidate's ability to communicate and interact effectively.
  - TechnicalKnowledge: Score based on the candidate's technical skills and knowledge.
  - OverallScore: Overall performance score of the candidate.
- Interview Summary:
  - PositivePoints: Detailed positive insights (150-200 words) with specific examples from the interview, highlighting strengths and accomplishments.
  - NegativePoints: Detailed areas for improvement (150-200 words) with specific examples from the interview, noting knowledge gaps or weaknesses.
- Detailed Assessment:
  - RecommendationStatus: Final recommendation (Recommended/Not Recommended/Consider).
  - InterviewDuration: Total duration of the interview.
  - ConfidenceLevel: Measure of the candidate's confidence during the interview.
  - SkillMatchPercentage: Percentage match between the candidate's skills and the job requirements.
  - PersonalityTraits: List of observed personality traits.
  - TechnicalSkillsBreakdown: Detailed assessment of individual technical skills with proficiency levels.
- Recommended Learning Paths:
  - List of suggested areas for improvement with specific learning resources.
- Culture Fit Analysis:
  - TeamworkScore: Assessment of the candidate's teamwork abilities.
  - AdaptabilityScore: Assessment of the candidate's adaptability to new environments.
  - Summary: Overview of the candidate's potential cultural fit with the organization.
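Put together, the dashboard data might take a shape like the following. Only the field names come from the list above; every value is an invented placeholder for illustration:

```python
# Hypothetical example payload; all values are placeholders, not real output.
example_dashboard = {
    "BasicDetails": {
        "Name": "Jane Doe",
        "Vacancy": "ML Engineer",
        "SkillsNeeded": ["Python", "LangChain"],
    },
    "Scores": {
        "EducationalBackgroundScore": 8,
        "Experience": 7,
        "InterpersonalCommunication": 9,
        "TechnicalKnowledge": 8,
        "OverallScore": 8,
    },
    "InterviewSummary": {
        "PositivePoints": "placeholder summary",
        "NegativePoints": "placeholder summary",
    },
    "DetailedAssessment": {
        "RecommendationStatus": "Recommended",
        "InterviewDuration": "32 min",
        "ConfidenceLevel": "High",
        "SkillMatchPercentage": 85,
        "PersonalityTraits": ["curious", "collaborative"],
        "TechnicalSkillsBreakdown": {"Python": "advanced"},
    },
    "RecommendedLearningPaths": ["System design fundamentals"],
    "CultureFitAnalysis": {
        "TeamworkScore": 8,
        "AdaptabilityScore": 7,
        "Summary": "placeholder summary",
    },
}
```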
Integrated with Applicant Tracking Systems (ATS), the platform automates resume screening to pre-filter candidates based on relevant skills and qualifications. This step significantly reduces the workload on recruiters by automatically identifying the most qualified candidates for further evaluation.
The system dynamically generates interview questions tailored to the specific job role and the candidate’s unique profile. Using insights from resume analysis and real-time interactions, the platform customises questions to focus on relevant skills, ensuring each candidate experiences a highly targeted and relevant interview process.
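Such tailoring could be sketched as a simple prompt builder (the function name, prompt wording, and inputs are all assumptions for illustration, not the project's actual code):

```python
def build_question_prompt(vacancy, skills, resume_excerpt, last_answer):
    """Assemble a prompt that steers the LLM toward a targeted question.

    Combines the vacancy, required skills, a resume excerpt, and the
    candidate's last answer so the follow-up stays role-relevant.
    """
    return (
        f"You are interviewing a candidate for the {vacancy} role. "
        f"Required skills: {', '.join(skills)}. "
        f"Resume excerpt: {resume_excerpt} "
        f"Candidate's last answer: {last_answer} "
        "Ask one focused follow-up question targeting a required skill."
    )
```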
- Facial Expression and Body Posture Analysis: Integrating video feed to analyze facial expressions and body posture, providing insights into the interviewee's emotions and confidence levels.
- Unfair Practice Detection: Implementing eye tracking to detect unfair practices during the interview.
- Audio Pitch Analysis: Utilizing audio pitch to assess the interviewee's emotions and confidence.
Contributions are welcome! Please fork the repository and submit a pull request with your changes. We appreciate your efforts to improve this project.
This project is licensed under the MIT License. See the LICENSE file for more details.
- This was a team project for SIH (Smart India Hackathon).
- Groq for providing the LLM API.
- Eleven Labs for providing the voice synthesis API tailored for an Indian voice.
- Cloudinary for providing media management and optimization services.
- MongoDB for providing the database solution for efficient data storage and retrieval.
- Supabase for providing the vector store database for semantic retrieval capabilities.
- LangGraph for providing the framework to build and orchestrate the AI agent workflow.







