Core capabilities across all 11 platforms.
| Capability | Synthesis | Elicit | otto-SR | LASER AI |
|---|---|---|---|---|
| Screening | | | | |
| Extraction | ~ | | | |
| HEOR / NMA | | | | |
| Delivery | Service | Self-serve | Self-serve | Self-serve |
Published performance metrics. Note the variation in methodology: some report peer-reviewed results, others self-report via preprint or vendor study.
| Platform | Validation scale | Screen sensitivity | Screen specificity | Extraction accuracy | Ground truth |
|---|---|---|---|---|---|
| Synthesis | 30,150 docs, 5 domains | 97.2% | 96.8% | 96.96% | Human-adjudicated |
| otto-SR | Preprint | 96.7% | 97.9% | 93.1% | LLM-as-judge |
| LASER AI | Case studies | 100% (7 of 8 studies); 67% (1 of 8) | Not reported | Not reported | Vendor study |
| Elicit | 1 case study + 1 independent | 93.6% (F1: 0.72) | 62.8% | 20.7% "equal" | Independent (PMC) |
| Nested Knowledge | 4 SLRs | 82–97% | Not reported | PICO F1: 0.74 | Peer-reviewed |
| Covidence | 2025 study | 95% | Not reported | Not published | Published study |
| Rayyan | Peer-reviewed | 97–99% | 19–58% | N/A | Peer-reviewed |
| DistillerSR | Peer-reviewed | 78% (assisted) / 14% (alone) | 95% | Not published | Peer-reviewed |
| EPPI-Reviewer | Peer-reviewed | 99% (RCT classifier) | 8% precision | N/A | Peer-reviewed |
| ASReview | Peer-reviewed | WSS@95: 83% | N/A | N/A | Peer-reviewed |
| Abstrackr | Peer-reviewed | 79–96% (varies) | N/A | N/A | Peer-reviewed |
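The screening metrics in the table are standard confusion-matrix quantities. As a quick reference (not tied to any vendor's reported data), here is how sensitivity, specificity, precision, F1, and WSS@95 (Work Saved over Sampling at 95% recall) are typically computed; all counts below are made-up illustrative numbers.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: share of relevant studies found
    specificity = tn / (tn + fp)   # share of irrelevant studies correctly excluded
    precision = tp / (tp + fp)     # share of flagged studies that are relevant
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f1

def wss_at_recall(n_total, n_screened_at_recall, recall=0.95):
    """Work Saved over Sampling: fraction of manual screening avoided
    at a target recall level, relative to screening in random order."""
    return (n_total - n_screened_at_recall) / n_total - (1 - recall)

# Hypothetical corpus: 10,000 abstracts, 200 truly relevant.
sens, spec, prec, f1 = screening_metrics(tp=190, fp=400, tn=9400, fn=10)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  precision={prec:.1%}  F1={f1:.2f}")
print(f"WSS@95={wss_at_recall(10_000, 1_700):.1%}")
```

This also makes the EPPI-Reviewer row legible: a classifier can reach 99% sensitivity while precision sits at 8%, because precision collapses when false positives dominate a rare-positive corpus.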
Feature coverage across 13 dimensions. Score shown beside each platform name in the header row.
| Capability | Synthesis (13/13) | Elicit (5/13) | otto-SR (5/13) | LASER AI (4/13) |
|---|---|---|---|---|
| Search strategy | | | | |
| Title/abstract screening | | | | |
| Full-text screening | | | | |
| Data extraction | | | | |
| Risk of bias | | | | |
| Meta-analysis / NMA | | | | |
| HEOR model review | | | | |
| PRISMA output | | | | |
| Audit trail | | | | |
| HTA/regulatory ready | | | | |
| Source attribution | | | | |
| Prompt optimization | | | | |
| Collaboration | | | | |
Free reproduction study
Have Eva, our AI agent, reproduce one of your past SLRs.
Same protocol, fresh data. No commitment — just see what it’s like to work with Eva. Results in 48 hours.
Entry-level pricing for each platform. Enterprise and institutional pricing is typically negotiated.
| Platform | Model | Entry price | Notes |
|---|---|---|---|
| Synthesis | Per-project service | $10K+ | Priced per SLR data package. |
| Elicit | Per-seat SaaS | Free–$65/mo | Pro ($42/mo annual) for systematic reviews. |
| Nested Knowledge | Per-seat SaaS | $295–$695/mo | Academic pricing available. |
| otto-SR | Quote-based | Quote | Institutional signup required. No public pricing. |
| DistillerSR | Per-seat SaaS | $29.95/mo+ | $29.95/mo academic. Enterprise: quote-based. |
| Covidence | Per-review | $339–$907/yr | Unlimited collaborators; org pricing via sales. |
| Rayyan | Per-seat SaaS | Free–$13.33/mo | 3 active reviews on free tier. |
| ASReview | Open source | Free | Self-hosting required. |
| EPPI-Reviewer | Per-user + per-review | £10/user/mo+ | £10/user/month + £35/shareable review/month. LMIC discounts. |
| LASER AI | Quote-based | ~$3,000+ | Free trial available. |
| Abstrackr | Free | Free | Open source, screening only. |
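Because the pricing models differ (per-seat, per-review, per-project), headline prices are not directly comparable. The sketch below estimates first-year cost under one assumed usage pattern; the prices come from the table above, but the team size, review count, and GBP→USD rate are hypothetical and should be adjusted to your own situation.

```python
# Hypothetical first-year cost for a 3-person team running 2 reviews.
# Prices from the table above; usage assumptions are illustrative only.
SEATS, REVIEWS, MONTHS = 3, 2, 12
GBP_TO_USD = 1.25  # assumed exchange rate

costs = {
    "Elicit Pro (annual)":  42 * SEATS * MONTHS,           # $42/seat/mo billed annually
    "Nested Knowledge":     295 * SEATS * MONTHS,          # entry tier, per seat
    "DistillerSR academic": 29.95 * SEATS * MONTHS,
    "Covidence":            339 * REVIEWS,                 # priced per review, per year
    "Rayyan Pro":           13.33 * SEATS * MONTHS,
    "EPPI-Reviewer":        (10 * SEATS + 35 * REVIEWS) * MONTHS * GBP_TO_USD,
}
for name, usd in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name:22s} ${usd:>9,.2f}")
```

Under these assumptions the per-review and entry-tier per-seat models land well under the per-project service price, which is the expected trade: self-serve tools price the software, while a service prices the delivered review.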
There are dimensions that matter deeply but resist tabulation.
We have tried to be fair. If you spot an error or an outdated data point, please let us know.
All data sourced from peer-reviewed publications, vendor documentation, and public pricing pages. Pricing data verified as of February 2026.