What is MedQA?
MedQA is a multiple-choice medical question-answering benchmark built from US medical licensing exam (USMLE)-style questions; it tests medical knowledge, multi-step clinical reasoning, and resistance to distractors. The benchmark contains 1273 problems, scored using accuracy. Released under the MIT license, it has become one of the canonical evaluations cited in model cards, vendor announcements, and procurement decisions.
The benchmark is run end-to-end as a fixed pipeline: load the dataset, prompt the model under a published decoding configuration (temperature 0.0, max tokens 64), capture every response, and grade against the canonical rubric. The output is a single score plus the per-problem transcripts that produced it.
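Stripped to its essentials, that loop looks something like the sketch below. It is illustrative, not the Benchlist runner: call_model stands in for whatever inference client you use, and the problem layout (id, question, choices, answer) is an assumption made for the example.

# Illustrative MedQA pipeline: load, prompt, capture, grade (not the Benchlist runner).
# Assumes each problem is {"id": ..., "question": ..., "choices": {"A": ..., ...}, "answer": "A"}.
import re

def format_prompt(p):
    options = "\n".join(f"{letter}. {text}" for letter, text in p["choices"].items())
    return f"{p['question']}\n{options}\nAnswer with a single letter."

def extract_letter(reply):
    m = re.search(r"\b([A-E])\b", reply)  # first standalone choice letter in the reply
    return m.group(1) if m else None

def run_medqa(problems, call_model):
    transcripts, correct = [], 0
    for p in problems:
        prompt = format_prompt(p)
        reply = call_model(prompt, temperature=0.0, max_tokens=64)  # canonical decoding config
        transcripts.append({"id": p["id"], "prompt": prompt, "reply": reply})
        correct += int(extract_letter(reply) == p["answer"])
    return correct / len(problems), transcripts  # single score plus per-problem transcripts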
Why MedQA matters
Reasoning benchmarks sit between knowledge recall and true general intelligence. They're imperfect proxies, but they're the proxies the field has chosen.
For developers wiring a model into a product, MedQA is the closest thing to a clean signal of capability — but only if the score was produced honestly. The gap between two cards reading "MedQA: 92.3" and "MedQA: 91.8" can determine which API gets shipped, but neither number is meaningful without methodology, transcripts, and a way to replay the result.
How MedQA is scored
The grading procedure for MedQA is deterministic. Each model response is checked against the canonical answer key: for MedQA, a multiple-choice benchmark, that means extracting the chosen letter and matching it exactly (code benchmarks execute test suites; math benchmarks parse the final answer). The pipeline is reproducible: given the same dataset, decoding config, and model checkpoint, you should get the same score (modulo non-determinism in the inference layer itself).
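Determinism is easiest to see in re-grading: given the captured transcripts and the answer key, the scorer below always produces the same number. The JSONL layout is an assumption for illustration, not the published transcript schema.

# Sketch: deterministically re-grade saved transcripts against an answer key.
# Assumes one JSON object per line, e.g. {"id": "q17", "reply": "The answer is C."}.
import json
import re

def regrade(transcript_path, answer_key):
    correct = total = 0
    with open(transcript_path) as f:
        for line in f:
            t = json.loads(line)
            m = re.search(r"\b([A-E])\b", t["reply"])   # same extraction rule every time
            letter = m.group(1) if m else None
            correct += int(letter == answer_key[t["id"]])
            total += 1
    return correct / total                              # identical inputs, identical score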
The decoding configuration matters more than most people realize. MedQA typically runs at temperature 0.0 — that's the canonical setting. Running at temperature 0.7 or 1.0 changes both the expected score and its variance. Anyone reporting MedQA numbers should disclose temperature, max tokens, and any system prompt verbatim.
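A disclosure does not need to be elaborate. Something like the record below, published next to the score, lets a reader reproduce the setup; the field names are illustrative, not a required schema.

# Illustrative disclosure record to publish alongside a MedQA score
# (field names are an example, not a fixed schema).
import json

disclosure = {
    "benchmark": "medqa",
    "temperature": 0.0,        # canonical setting
    "max_tokens": 64,
    "system_prompt": "",       # verbatim, even if empty
    "prompt_format": "question + lettered choices, answer as a single letter",
}
print(json.dumps(disclosure, indent=2))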
Common pitfalls in MedQA reporting
The same score can mean very different things depending on how it was produced. Here are the failure modes that show up most often when comparing MedQA numbers across labs and vendors:
- Multiple-choice format gives a 25% baseline floor — comparison should always show absolute lift over random guessing.
- Contamination risk for MedQA is rated medium; partial leakage into training data is plausible, so pair scores with contamination-aware splits where possible.
- Letter-only outputs vs reasoning-then-answer are scored differently across labs.
- Subject-mix imbalance — a 1-point overall gain may come entirely from one subject.
None of these are theoretical — they're documented patterns across vendor announcements over the last three years. The cure is methodology disclosure plus replay capability: every claim should ship with the exact runner version, the random seed, the system prompt, the decoding config, and a Merkle root over the transcripts.
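The Merkle root is the part that trips people up, so here is a minimal sketch of building one over per-problem transcripts. The leaf encoding (canonical JSON, SHA-256) and the duplicate-last-node rule are assumptions; Benchlist's exact tree construction may differ.

# Sketch: compute a Merkle root over per-problem transcripts with SHA-256.
# Leaf encoding and odd-level handling are assumptions, not the Benchlist spec.
import hashlib
import json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transcripts):
    # Leaves: hash of each transcript serialized canonically (sorted keys).
    level = [sha256(json.dumps(t, sort_keys=True).encode()) for t in transcripts]
    if not level:
        return sha256(b"").hex()
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()                   # publish this alongside the score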
Reading a published MedQA score critically
When a vendor announcement says "Model X scores 87.4 on MedQA," ask:
- Was the score on the full 1273-problem set, or a sub-sample?
- Was it pass@1, pass@10, or self-consistency?
- What was the temperature, max-token cap, and system prompt?
- What runner version was used? Did the lab patch the upstream evaluator?
- Are the transcripts available so anyone can re-grade them?
- Is there a cryptographic receipt — a signature, Merkle root, or on-chain anchor — proving the transcripts are the ones that produced the score?
If the answer to any of these is "we don't disclose," treat the number as marketing copy.
Ship a MedQA score nobody can challenge
Benchlist runs MedQA (and 49 other benchmarks) inside a sandboxed runner, captures every transcript, builds a Merkle commitment, and signs the result with an Ed25519 attestor key. The score lands at a public verify URL anyone can replay in a browser — and you can opt into an Aligned Layer ZK anchor on Ethereum L1 for a buyer who needs a trustless receipt.
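On the buyer's side, checking the Ed25519 attestation is a few lines with PyNaCl. One assumption here: the signed message is taken to be the raw Merkle root bytes, so check the published run schema for the exact payload layout before relying on this.

# Sketch: verify an Ed25519 attestation over a Merkle root (pip install pynacl).
# Assumes the signed payload is the raw Merkle root bytes; the real layout is in the run schema.
from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey

def verify_attestation(attestor_pubkey_hex, merkle_root_hex, signature_hex):
    verify_key = VerifyKey(bytes.fromhex(attestor_pubkey_hex))
    try:
        verify_key.verify(bytes.fromhex(merkle_root_hex), bytes.fromhex(signature_hex))
        return True
    except BadSignatureError:
        return False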
How to run MedQA on Benchlist
The simplest path is the hosted runner — POST a job and we email the verify URL when it completes:
curl -X POST https://benchlist.ai/api/v1/run \
-H "Authorization: Bearer $BENCHLIST_KEY" \
-H "Content-Type: application/json" \
-d '{
"service": "anthropic-claude",
"model": "claude-sonnet-4.5",
"benchmark": "medqa",
"runs": 1,
"limit": 20,
"proof_system": "signed",
"inference_api_key": "managed"
}'
The response includes a run_id and a verify_url. Within a couple of minutes the worker publishes the result, the email lands in your inbox, and the verify page renders the full transcript tree and checks the Ed25519 signature live in the browser.
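If you would rather stay in Python than shell, the same job looks like this, assuming run_id and verify_url come back as top-level JSON fields as described above:

# Same job as the curl example, submitted with the requests library.
import os
import requests

resp = requests.post(
    "https://benchlist.ai/api/v1/run",
    headers={"Authorization": f"Bearer {os.environ['BENCHLIST_KEY']}"},
    json={
        "service": "anthropic-claude",
        "model": "claude-sonnet-4.5",
        "benchmark": "medqa",
        "runs": 1,
        "limit": 20,
        "proof_system": "signed",
        "inference_api_key": "managed",
    },
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print(job["run_id"], job["verify_url"])  # then wait for the email or poll the verify page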
For self-hosted runs, install benchlist-runner via pip, point it at your inference key, and let it produce a signed run.json you can submit through the same API. The runner is open source, the schema is documented at /api/v1, and every step of the pipeline is reproducible offline.
MedQA on the Benchlist registry
Every signed MedQA run posted to Benchlist is permanently indexed at /benchmarks/medqa. The page ranks services and models by score, links to transcripts, and surfaces dispute history. Verified-on-chain runs (those with an Aligned Layer batch anchor) get a distinct chip; signed-only runs are clearly marked.
Self-reported scores from vendor announcements that don't ship transcripts get a "Self-reported" badge so buyers can see the trust gap at a glance. If a vendor wants to upgrade their listing to Attested, they post a signed run via /v1/run and the registry replaces the self-reported number automatically.
FAQ
What does the MedQA benchmark measure?
The MedQA benchmark measures medical knowledge and multi-step clinical reasoning on USMLE-style multiple-choice questions, including resistance to distractors. It uses accuracy as its primary metric across 1273 problems.
How is MedQA scored?
Each problem in MedQA is graded by a deterministic scorer. The final score is reported as the percentage of problems answered correctly (accuracy). The dataset license is MIT.
What is the contamination risk for MedQA?
Contamination risk for MedQA is rated medium. Medium-risk benchmarks may have partial leakage but are still informative when used alongside contamination-aware splits.
How much does running MedQA cost in API calls?
A single full run of MedQA costs roughly $5 in inference fees on a frontier model. Cheaper or smaller models reduce this by 5-20×.
How do I verify a published MedQA score is real?
Use Benchlist's signed-attestation system. Run the benchmark via benchlist run medqa or POST /v1/run; the result includes a Merkle root over every transcript, an Ed25519 signature from the attestor, and an optional Aligned Layer ZK anchor. Anyone can verify the signature and replay the transcripts in a browser.
What are the canonical decoding parameters for MedQA?
Per the catalog, MedQA runs at temperature 0.0 with a max-tokens cap of 64. Deviating from these without disclosure makes scores incomparable.