What is LongMemEval?
LongMemEval measures a model's ability to retrieve, integrate, and reason over information spread across thousands to millions of tokens. The benchmark contains 500 problems, scored by accuracy. Released under the Apache-2.0 license, it has become one of the canonical evaluations cited in model cards, vendor announcements, and procurement decisions.
The benchmark is run end-to-end as a fixed pipeline: load the dataset, prompt the model under a published decoding configuration (temperature 0.0, max tokens 512), capture every response, and grade against the canonical rubric. The output is a single score plus the per-problem transcripts that produced it.
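In code, that pipeline is only a few lines. A minimal sketch, assuming an OpenAI-compatible inference endpoint and a local copy of the dataset as dicts with "id", "prompt", and "reference" fields; grade() here is a bare exact-match stand-in for the canonical rubric, not the real grader:
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint; reads the API key from the environment

def grade(answer: str, reference: str) -> bool:
    # Stand-in for the canonical rubric: a bare exact-match check.
    return answer.strip().lower() == reference.strip().lower()

def run_longmemeval(problems: list[dict], model: str):
    transcripts, correct = [], 0
    for p in problems:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": p["prompt"]}],
            temperature=0.0,   # canonical decoding config
            max_tokens=512,    # canonical max-token cap
        )
        answer = resp.choices[0].message.content
        transcripts.append({"id": p["id"], "response": answer})
        correct += grade(answer, p["reference"])
    return correct / len(problems), transcripts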
Why LongMemEval matters
Long-context performance gates real-world workflows — analyzing whole codebases, reading legal contracts, or maintaining multi-session memory.
For developers wiring a model into a product, LongMemEval is the closest thing to a clean signal of capability — but only if the score was produced honestly. The gap between two cards reading "LongMemEval: 92.3" and "LongMemEval: 91.8" can determine which API gets shipped, but neither number is meaningful without methodology, transcripts, and a way to replay the result.
How LongMemEval is scored
The grading procedure for LongMemEval is deterministic. Each model response is checked against the canonical rubric — for code benchmarks this means executing test suites; for math benchmarks it means parsing the final answer; for reasoning benchmarks it means matching the multiple-choice letter. The pipeline is reproducible: given the same dataset, decoding config, and model checkpoint, you should get the same score (modulo non-determinism in the inference layer itself).
The decoding configuration matters more than most people realize. LongMemEval typically runs at temperature 0.0 — that's the canonical setting. Running at temperature 0.7 or 1.0 changes both the expected score and its variance. Anyone reporting LongMemEval numbers should disclose temperature, max tokens, and any system prompt verbatim.
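Disclosure can be as lightweight as a JSON blob published next to the score. A sketch of the minimum fields, using the canonical values from the catalog (the system prompt should be the verbatim string you actually sent, shown empty here purely as an illustration):
import json

disclosure = {
    "benchmark": "longmemeval",
    "temperature": 0.0,
    "max_tokens": 512,
    "system_prompt": "",          # verbatim, even if empty
    "pass_criterion": "pass@1",
    "runner_version": "unknown",  # fill in the exact runner/evaluator version used
}
print(json.dumps(disclosure, indent=2))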
Common pitfalls in LongMemEval reporting
The same score can mean very different things depending on how it was produced. Here are the failure modes that show up most often when comparing LongMemEval numbers across labs and vendors:
- Needle-in-a-haystack tasks measure retrieval, not synthesis — a model can ace NIAH and still fail on multi-fact reasoning.
- Advertised vs. effective context length — vendors quote the window size, but many models degrade sharply past 32k tokens.
- Prompt-position effects — facts at the start or end are recalled better than facts in the middle ('lost in the middle').
- Caching strategy in eval — KV-cache reuse changes wallclock time but not the score, and warm vs cold cache runs are often conflated when latency is reported alongside accuracy.
None of these are theoretical — they're documented patterns across vendor announcements over the last three years. The cure is methodology disclosure plus replay capability: every claim should ship with the exact runner version, the random seed, the system prompt, the decoding config, and a Merkle root over the transcripts.
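That Merkle root is cheap to produce yourself. A minimal sketch over a list of transcript dicts, assuming SHA-256 leaves over canonical (sorted-key) JSON and simple pairwise hashing; a production scheme would also fix an encoding and domain-separate leaves from internal nodes:
import hashlib, json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transcripts: list[dict]) -> str:
    # Leaves: hash of each transcript's canonical JSON encoding.
    level = [sha256(json.dumps(t, sort_keys=True).encode()) for t in transcripts]
    if not level:
        return sha256(b"").hex()
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()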
Reading a published LongMemEval score critically
When a vendor announcement says "Model X scores 87.4 on LongMemEval," ask:
- Was the score on the full 500-problem set, or a sub-sample?
- Was it pass@1, pass@10, or self-consistency?
- What was the temperature, max-token cap, and system prompt?
- What runner version was used? Did the lab patch the upstream evaluator?
- Are the transcripts available so anyone can re-grade them?
- Is there a cryptographic receipt — a signature, Merkle root, or on-chain anchor — proving the transcripts are the ones that produced the score?
If the answer to any of these is "we don't disclose," treat the number as marketing copy.
Ship a LongMemEval score nobody can challenge
Benchlist runs LongMemEval (and 49 other benchmarks) inside a sandboxed runner, captures every transcript, builds a Merkle commitment, and signs the result with an Ed25519 attestor key. The score lands at a public verify URL anyone can replay in a browser — and you can opt into an Aligned Layer ZK anchor on Ethereum L1 for a buyer who needs a trustless receipt.
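Checking the signature yourself takes a few lines. A sketch with PyNaCl, assuming the verify page exposes the attestor public key, the Merkle root, and the signature as hex strings, and that the signed payload is the raw root bytes (the exact payload layout is an assumption here, not Benchlist's documented format):
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify_attestation(pubkey_hex: str, merkle_root_hex: str, signature_hex: str) -> bool:
    # Assumes the attestor signed the raw Merkle root bytes.
    try:
        VerifyKey(bytes.fromhex(pubkey_hex)).verify(
            bytes.fromhex(merkle_root_hex), bytes.fromhex(signature_hex)
        )
        return True
    except BadSignatureError:
        return False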
How to run LongMemEval on Benchlist
The simplest path is the hosted runner — POST a job and we email the verify URL when it completes:
curl -X POST https://benchlist.ai/api/v1/run \
-H "Authorization: Bearer $BENCHLIST_KEY" \
-H "Content-Type: application/json" \
-d '{
"service": "anthropic-claude",
"model": "claude-sonnet-4.5",
"benchmark": "longmemeval",
"runs": 1,
"limit": 20,
"proof_system": "signed",
"inference_api_key": "managed"
}'
The response includes a run_id and a verify_url. Within a couple of minutes the worker publishes the result, the email lands in your inbox, and the verify page renders the full transcript tree and checks the Ed25519 signature live in the browser.
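If you prefer Python to curl, the same job and response handling look like this (identical payload; run_id and verify_url are the response fields described above):
import os, requests

payload = {
    "service": "anthropic-claude",
    "model": "claude-sonnet-4.5",
    "benchmark": "longmemeval",
    "runs": 1,
    "limit": 20,
    "proof_system": "signed",
    "inference_api_key": "managed",
}
resp = requests.post(
    "https://benchlist.ai/api/v1/run",
    headers={"Authorization": f"Bearer {os.environ['BENCHLIST_KEY']}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print(job["run_id"], job["verify_url"])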
For self-hosted runs, install benchlist-runner via pip, point it at your inference key, and let it produce a signed run.json you can submit through the same API. The runner is open source, the schema is documented at /api/v1, and every step of the pipeline is reproducible offline.
LongMemEval on the Benchlist registry
Every signed LongMemEval run posted to Benchlist is permanently indexed at /benchmarks/longmemeval. The page ranks services and models by score, links to transcripts, and surfaces dispute history. Verified-on-chain runs (those with an Aligned Layer batch anchor) get a distinct chip; signed-only runs are clearly marked.
Self-reported scores from vendor announcements that don't ship transcripts get a "Self-reported" badge so buyers can see the trust gap at a glance. If a vendor wants to upgrade their listing to Attested, they post a signed run via /v1/run and the registry replaces the self-reported number automatically.
FAQ
What does the LongMemEval benchmark measure?
The LongMemEval benchmark measures a model's ability to retrieve, integrate, and reason over information spread across thousands to millions of tokens. It uses accuracy as its primary metric across 500 problems.
How is LongMemEval scored?
Each problem in LongMemEval is graded by a deterministic scorer. The final score is reported as a percentage of problems passed. The dataset license is Apache-2.0.
What is the contamination risk for LongMemEval?
Contamination risk for LongMemEval is rated low. Low-risk benchmarks were either constructed after recent model training cutoffs or are kept private.
How much does running LongMemEval cost in API calls?
A single full run of LongMemEval costs roughly $10 in inference fees on a frontier model. Cheaper or smaller models reduce this by 5-20×.
How do I verify a published LongMemEval score is real?
Use Benchlist's signed-attestation system. Run the benchmark via benchlist run longmemeval or POST /v1/run — the result includes a Merkle root over every transcript, an Ed25519 signature from the attestor, and an optional Aligned Layer ZK anchor. Anyone can verify the signature and replay the run in a browser.
What are the canonical decoding parameters for LongMemEval?
Per the catalog, LongMemEval runs at temperature 0.0 with a max-tokens cap of 512. Deviating from these without disclosure makes scores incomparable.