Scite

AI built for research

About us

Scite is a platform that uses deep learning, natural language processing, and a network of experts to identify and promote reliable research by evaluating the veracity of scientific claims. We can automatically classify scientific citations as supporting, refuting, or mentioning, thus enabling us to score the veracity of scientific reports, researchers, institutions, journals, and scientific fields using a measure we call scite.

Website
http://scite.ai
Industry
Research
Company size
2-10 employees
Headquarters
New York
Type
Privately Held
Founded
2018


Updates

  • Great breakdown on what Scite MCP actually makes possible. Search for papers, check how many studies cite them supportively versus critically, chain it with other tools, all inside your favorite AI apps.

    What if Claude or ChatGPT could search academic sources for you, and then actually do something useful with the results? Not hypothetical. With MCP (Model Context Protocol) servers connected, you can ask Claude to search Consensus for recent papers on a topic, pass the top results to Scite to check supporting vs contrasting citations, and synthesise everything in a single conversation. The LLM decides which tools to call and in what order: an agentic workflow across academic sources that used to sit in silos. In my latest Research Radar piece for SMU Libraries, I walk through:
    - What MCP servers actually are (the 30-second version)
    - Why they matter for research workflows, even if you already use Scite and Consensus directly
    - How to set up both in a few minutes
    - Security caveats worth knowing before you connect anything
    - Where this is heading, including Claude Skills for reusable workflows
    Still early, still experimental, and vendor offerings will shift. But the direction of travel is clear. https://lnkd.in/gMk7Ga3Z
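The chained workflow described above (search one source, then check its results against another) can be sketched as plain orchestration logic; in practice the LLM issues these tool calls itself over MCP. Both functions below are hypothetical stand-ins with invented data, not the real Consensus or Scite tool interfaces:

```python
def search_consensus(topic, limit=3):
    """Hypothetical stand-in for a Consensus MCP search tool."""
    return [{"title": f"Paper {i} on {topic}", "doi": f"10.1000/{i}"}
            for i in range(limit)]

def check_citations(doi):
    """Hypothetical stand-in for a Scite Smart Citations lookup."""
    return {"supporting": 5, "contrasting": 1, "mentioning": 20}

def research_brief(topic):
    """Chain the two tools: search, then tally citations per result."""
    lines = []
    for paper in search_consensus(topic):
        tallies = check_citations(paper["doi"])
        lines.append(f"{paper['title']}: {tallies['supporting']} supporting, "
                     f"{tallies['contrasting']} contrasting")
    return "\n".join(lines)

print(research_brief("sleep and memory"))
```

The point of the agentic version is that no one writes this loop: the model decides to call search first, citation checks second, and synthesis last, based on the tools the connected MCP servers expose.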

  • Scite reposted this

     A 1952 paper on protein formation in the pancreas received 65 AI Reads through Scite last month. A 1962 paleontology study on Pleistocene fauna found in natural chimneys in Augusta County, Virginia received 62 AI Reads. And a 1969 paper on sialic acid changes in rat testes during puberty: 143 AI Reads. These aren't famous papers. They didn't trend. Someone's AI found them because they were the right answer to a very specific question, and delivered them alongside papers from 2024. That's what's changed. Through Scite's MCP server, AI agents can now search and read scientific literature directly. In two months, tools spanning both the OpenAI and Claude ecosystems have connected, from custom scripts to enterprise deployments. The result: 1.6 million AI Reads across nearly a million unique papers. 54% of reads came from publishers outside the top 10. Over 4,000 publishers had exactly one paper accessed. The times they are a-changin'!

  • When we launched Scite MCP, we expected most usage in chat tools like ChatGPT and Copilot. Claude Code has been the surprise. Scite MCP makes our literature search and Smart Citation graph available in the AI tools researchers already use, grounding them in peer-reviewed literature and using Smart Citations as a ranking signal that guides AI tools toward more trustworthy research. Here are a few use cases we've heard directly from customers:
    1) Verify your references before submission. Paste your bibliography and get a table showing support ratio, contradiction flags, and retraction status for every citation.
    2) Trace the citation lineage of a landmark paper. See how a core finding has been supported, challenged, or refined over time, rendered as a visualization.
    3) Find research gaps. Search a topic, cluster the findings, and surface claims with low citation support or high contradiction rates. Research opportunities that aren't obvious from reading abstracts.
    4) Draft and fact-check a literature review in one pass. Write a lit review section, then immediately verify every claim against Scite, flagging anything unsupported or disputed inline.
    5) Build a contradiction report before you run an experiment. Pick a hypothesis, find every paper that contradicts it, and know what you're up against before you start.
    6) Generate a citation confidence visualization. Plot top papers on a topic by citation count vs. support ratio. See which ones are load-bearing and which are more contested than their citation count suggests.
    7) Pull the best-validated method for your analysis. Search for the most cited and least contradicted methodological papers on a technique and implement the approach the field has actually validated.
    8) Reproduce a statistical method directly in code. Find a highly cited paper in your domain, pull the methods section, and ask Claude to implement it in Python or R. No more guessing how they actually did it.
    9) Sanity-check your analytical choices before a reviewer does. Chose a normalization method or statistical approach? Ask Claude to find papers that validate or challenge that choice in similar contexts before submission.
    10) Produce a structured research brief. Key findings, contested claims, open questions, recommended reading, grounded in Scite data and exported as a formatted document.
    If you're using Scite MCP with Claude Code (or any AI tool), share what you're building. And if there's functionality you need that isn't there yet, let us know. We're actively developing the MCP, and feedback drives where we focus next.
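Several of the use cases above (the reference table in 1, the confidence plot in 6) reduce to the same aggregation over Smart Citation classifications: a support ratio. A minimal sketch of that calculation, where the label names and example data are illustrative assumptions rather than Scite's actual API response shape:

```python
from collections import Counter

def support_ratio(citations):
    """Fraction of classified citations that are supporting.

    `citations` is a list of classification labels, assumed here to be
    "supporting", "contrasting", or "mentioning"; only the first two
    count as classified evidence for or against the claim.
    """
    counts = Counter(citations)
    classified = counts["supporting"] + counts["contrasting"]
    if classified == 0:
        return None  # only "mentioning" citations; ratio undefined
    return counts["supporting"] / classified

# Invented per-paper citation labels for illustration
paper = ["supporting", "supporting", "contrasting", "mentioning"]
print(support_ratio(paper))  # 2 supporting / 3 classified ≈ 0.667
```

Plotting this ratio against raw citation count is what surfaces papers that are heavily cited but contested, the "load-bearing vs. disputed" distinction in use case 6.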

  • Scite reposted this

    One of the most common questions researchers ask when using AI tools with scientific literature: "Can I actually access that paper?" We built Article Galaxy MCP to answer that question before you even have to ask it. Article Galaxy MCP connects directly to AI assistants like ChatGPT and Claude, letting researchers check access rights and retrieve full-text articles without leaving their workflow. In practice, that looks like a rights check across 14 articles, instantly categorized by access type (open access, PDF library, token access, subscription). No toggling between tabs. No copy-pasting DOIs into a separate portal. No guessing whether your institution has access. This is what it looks like when library infrastructure meets AI-native workflows. Your existing subscriptions, your token budget, your PDF library, all surfaced right where the research is happening. If you're a librarian, research ops lead, or anyone responsible for getting researchers the literature they need, I'd love to show you how this works.

  • Claude Code is great, but it wasn't built for researchers. Connecting Scite MCP changes that. Verify claims against the literature, trace citation lineage, flag contradicted references, and generate visualizations and documents, all in one workflow.

  • Scite reposted this

    If you're excited about building AI-powered tools that help researchers and organizations make sense of the scientific literature, we'd love to hear from you. We're a small team working on hard, meaningful problems at the intersection of AI and scientific publishing. You'd be joining at a moment of real momentum, with our MCP server launch, growing enterprise adoption, and a product roadmap full of ambitious bets.
    Our stack:
    - Frontend: React 18, Redux/Redux Saga, Webpack, SWC, Express (SSR), CSS Modules
    - Backend: FastAPI (Python 3.11), Flask, PostgreSQL, Elasticsearch, Redis, Celery
    - AI/ML: Claude (Anthropic), OpenAI, PyTorch, Hugging Face Transformers, sentence-transformers, RAG pipelines
    - Infrastructure: AWS (ECS Fargate, RDS, S3, Lambda), Docker, Kong API Gateway, GitHub Actions, CircleCI
    - Observability: OpenTelemetry, New Relic, Sentry, PostHog
    - Other: Stripe, SendGrid, Playwright, TypeScript
    What we're looking for:
    - Strong engineering fundamentals
    - Comfort working across the stack, Python and JavaScript/React in particular
    - Experience with PostgreSQL, Elasticsearch, or similar data-intensive systems is a plus
    - Familiarity with LLM APIs and retrieval-augmented generation (RAG) is a bonus
    - Genuine curiosity about how AI can improve how people interact with research
    Remote, but looking for US-based. DM me if you're interested or know someone who might be a great fit.

  • Our new MCP Admin tool gives research teams and institutions full visibility into how their team is using the Scite MCP. Usage stats, content access controls, collection gap analysis, COUNTER reports, and more.

Funding

Scite: 5 total rounds

Last round: Series unknown, US$ 1.9M

See more info on Crunchbase