Methodology
How contributions are collected, weighted, and reported. Updated as the data model evolves.
What this is
ReadDepth maintains a longitudinal, contributor-tracked dataset of sequencing and adjacent platform adoption — who runs what, who's evaluating, who's budgeted, and who's watching. The goal is a probabilistic map of the platform landscape that's better than any single insider could build alone.
What we collect
Each contributor fills a short adaptive form. The form captures:
- role and institution;
- geography;
- technology categories the contributor works with or follows;
- for each platform, depth of engagement and access mode (lab-owned, institutional core, external service, vendor program, collaborator);
- for high-stakes capital platforms, purchase trajectory (already installed, hands-on early access, budgeted, actively considering, watching, ruled out) and time horizon;
- optional secondhand intelligence about other institutions.
The full taxonomy is canonical and editable — new platforms get added to the data model in seconds, not weeks.
Confidence labels
Every claim in the dataset carries one of four confidence labels:
- Observed — directly stated in publicly visible sources: press releases, methods sections, conference abstracts, SEC filings, FDA decisions.
- Inferred — derived from publicly available signals weighted by source quality: institutional core webpages, S10 grant awards, citation networks.
- Human-validated — reported directly by contributors with subject-matter knowledge. May not be publicly citable; updates the model internally.
- Vendor-asserted — claims sourced from vendor marketing or commercial channels. Useful, but bias-tagged.
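To make the labels concrete, a claim record carrying one of the four confidence labels could be modeled roughly like this. This is a minimal sketch: the `Confidence` enum, the `Claim` fields, and the example values are hypothetical illustrations, not the actual internal schema.

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    OBSERVED = "observed"                # stated in publicly visible sources
    INFERRED = "inferred"                # derived from public signals
    HUMAN_VALIDATED = "human-validated"  # reported directly by contributors
    VENDOR_ASSERTED = "vendor-asserted"  # vendor marketing; bias-tagged

@dataclass
class Claim:
    institution: str
    platform: str
    statement: str
    confidence: Confidence
    bias_tagged: bool = False  # set for vendor-asserted claims

# hypothetical example record
claim = Claim("Example Core Facility", "platform-x",
              "budgeted for next fiscal year", Confidence.HUMAN_VALIDATED)
```

Keeping the label as a first-class field means every downstream aggregate can filter or weight by evidence quality.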
Citability and privacy
At signup, every contributor sets a citability default for their own contributions: full attribution, aggregate-only (the default), or confidential. Redaction requests are honored without question.
Secondhand claims about other institutions default to confidential and never appear with public attribution. A contributor reporting on another institution, rather than their own, is relaying someone else's plans, and the model treats such claims with appropriate caution.
Individual poll responses on LinkedIn are visible only to the poll creator, per LinkedIn's product design. They are aggregated into archetype- and region-level summaries; individual votes are never identified in any output, free or paid.
Aggregation thresholds
Aggregate findings published in the free tier require a minimum cell density before they appear. We do not show statistics for any group with fewer than five contributors. This protects individual contributors and keeps reported numbers from being spuriously precise.
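The threshold amounts to a simple suppression rule. A sketch in Python (the cell keys and counts below are invented for illustration):

```python
MIN_CELL_SIZE = 5  # groups below this contributor count are suppressed

def publishable_cells(cell_counts: dict[str, int],
                      min_n: int = MIN_CELL_SIZE) -> dict[str, int]:
    """Keep only aggregate cells dense enough to publish; drop the rest."""
    return {cell: n for cell, n in cell_counts.items() if n >= min_n}

# hypothetical cells: the third falls below the threshold and is suppressed
counts = {"us/core-lab": 12, "eu/biotech": 7, "apac/startup": 3}
```

Suppressed cells are dropped entirely rather than shown as rounded or masked values, so a sparse group leaves no statistical footprint in the free tier.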
Anonymous contributions
Anonymous contributions are accepted but carry lower evidence weight than identified ones: they cannot be triangulated against archetype priors, and there is no channel for follow-up clarification.
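One way to picture the down-weighting (the 0.5 discount factor here is purely illustrative; the actual weights are internal and may differ):

```python
def evidence_weight(identified: bool,
                    base_weight: float = 1.0,
                    anonymous_discount: float = 0.5) -> float:
    """Discount anonymous contributions, which cannot be triangulated
    against archetype priors or clarified through follow-up."""
    return base_weight if identified else base_weight * anonymous_discount
```

An anonymous report still moves the model, just less than the same report from an identified contributor would.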
Limitations
The dataset is biased toward English-language, US/EU, sequencing-adjacent professionals — those most likely to engage with the founder's existing audience. Coverage in other geographies and adjacent fields will improve as the contributor base grows. Honest acknowledgment of this bias is a feature of the methodology, not a flaw to hide.
Contact
Questions, corrections, or redaction requests: alex@readepth.com.