from sentence_transformers import SentenceTransformer, util

# Load a pre-trained model for embeddings
model = SentenceTransformer('all-MiniLM-L6-v2')

# Example question and candidate answers
question = "What is the capital of France?"
answers = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
    "Madrid is the capital of Spain."
]

# Encode question and answers into embeddings
q_emb = model.encode(question, convert_to_tensor=True)
a_embs = model.encode(answers, convert_to_tensor=True)

# Compute cosine similarity between the question and each answer
cos_scores = util.cos_sim(q_emb, a_embs)

# Print each answer with its similarity score
for answer, score in zip(answers, cos_scores[0]):
    print(f"Answer: {answer}\nSimilarity: {score:.4f}\n")
Comments:
- SentenceTransformer produces semantic embeddings for both the question and the candidate answers.
- util.cos_sim computes the cosine similarity between the question embedding and each answer embedding; a higher score indicates a more relevant answer.
- This is a standard approach for question-answer matching in deep-learning NLP.
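
In practice you often want the single best match rather than a printed list. A minimal follow-up sketch, reusing q_emb, a_embs, answers, and cos_scores from the snippet above (util.semantic_search is the library's built-in ranked-retrieval helper):

# Index of the highest-scoring answer; cos_scores has shape (1, len(answers))
best_idx = int(cos_scores[0].argmax())
print(f"Best answer: {answers[best_idx]}")

# Equivalent ranked retrieval with the library helper: returns, per query,
# a list of {'corpus_id': ..., 'score': ...} dicts sorted by descending score
hits = util.semantic_search(q_emb, a_embs, top_k=len(answers))[0]
for hit in hits:
    print(f"{answers[hit['corpus_id']]}  (score: {hit['score']:.4f})")

For the example question, both approaches should surface the Paris sentence first, since it is semantically closest to the query.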