
Research Methodology

Our research is only useful if you can verify it. This page explains how we collect data, what we exclude and why, how we handle conflicts of interest, and when we update or retract published claims.

Data sources

Pitch deck research draws from three corpora:

  • Public deck galleries: Y Combinator showcase decks, AngelList public profiles, Slidebean public gallery, Pitchdeckcoach examples. We use only decks the founders or accelerators have explicitly published as public references.
  • Founder interviews: 30–45 minute structured interviews with founders who have raised in the last 24 months. Quotes are attributed by name when permission is granted; aggregate statistics are anonymized.
  • SlideGMM platform aggregates: when used, we report only macro-level counts (slide count distribution, color palette frequency, export-format ratios). No deck content, user identifiers, or PII ever leaves the database, and we flag explicitly when a stat comes from platform data so you can weight the conflict of interest accordingly. A sketch of the only kind of query we run against platform data follows this list.
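
As a rough illustration, the sketch below shows the shape of query we mean: grouped counts only, with no deck content or user identifiers in the select list. The database file, table names, and column names are hypothetical stand-ins, not our production schema.

    # Macro-level aggregates only: grouped counts, no content, no identifiers.
    # All names below (file, tables, columns) are illustrative assumptions.
    import sqlite3

    conn = sqlite3.connect("platform_aggregates.db")  # hypothetical read-only replica

    # Slide-count distribution: how many decks have 8 slides, 9 slides, and so on.
    slide_counts = conn.execute(
        """
        SELECT slide_count, COUNT(*) AS decks
        FROM deck_metadata        -- hypothetical table: one row per deck, no content
        GROUP BY slide_count
        ORDER BY slide_count
        """
    ).fetchall()

    # Export-format ratios: share of exports by format (pptx / pdf / png).
    export_ratios = conn.execute(
        """
        SELECT export_format, COUNT(*) AS exports
        FROM export_events        -- hypothetical table: format and timestamp only
        GROUP BY export_format
        """
    ).fetchall()

    conn.close()
    print(slide_counts)
    print(export_ratios)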

What we exclude

  • Decks behind paywalls or shared under NDA. We don't pay for access; if a deck isn't reachable from a public URL, it doesn't enter the corpus.
  • Decks older than 5 years for trend analysis. Pitch deck conventions shift fast — citing a 2017 deck as evidence for what works in 2026 is misleading.
  • Decks where the founder's outcome is unknown. We track fundraising results when public so we can correlate deck attributes with success — decks with no outcome data are still counted in structural analysis but excluded from outcome-correlation claims.

Conflicts of interest

We make a presentation product. That means our research is inherently interested — we want SlideGMM to win. We mitigate this by:

  • Naming our product's weaknesses explicitly in comparison posts. If a competitor wins a category, we say so.
  • Publishing raw datasets when feasible (CSV / JSON downloads alongside the report) so independent analysts can re-run the numbers; a minimal example of re-running a headline figure follows this list.
  • Disclosing platform-data origin on every stat that uses it. Stats from external corpora are weighted higher than platform-internal stats in headline claims.
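
To make the re-run path concrete, here is a minimal sketch of recomputing one headline figure from a published CSV, using only the Python standard library. The file name and column name are illustrative; each report links its own dataset and documents its columns.

    # Recompute a headline figure from a published dataset.
    # "seed-deck-structure-2025.csv" and the "slide_count" column are hypothetical examples.
    import csv
    from statistics import median

    with open("seed-deck-structure-2025.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    slide_counts = [int(row["slide_count"]) for row in rows]
    print("decks in corpus:", len(slide_counts))
    print("median slide count:", median(slide_counts))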

Comparison-post methodology

Comparison pages (/compare/slidegmm-vs-…) follow this fixed evaluation structure:

  1. Pricing: verified from the competitor's public pricing page within the last quarter. We capture the date of last verification at the top of each comparison.
  2. Feature parity: tested on a real account on both platforms. We do not infer features from marketing pages.
  3. Export quality: we export the same source deck from both tools and open the .pptx in Microsoft PowerPoint, then score visual fidelity, font preservation, and editability.
  4. "When to choose [competitor] instead" section is required on every comparison post. If we can't identify a use case where the competitor wins, the comparison isn't honest enough to publish.

Update cadence

Comparison posts are re-verified at least quarterly. The dateModified field at the top of each post reflects the last verification date. Pricing changes or feature launches from a competitor trigger an immediate update.
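
As a rough sketch of how the cadence is enforced, the check below treats a post as overdue when its dateModified (the last verification date) is more than a quarter old. It assumes dateModified is available as an ISO 8601 date string; the function name and the 90-day window are illustrative, not our production tooling.

    # Quarterly re-verification check. Assumes dateModified is an ISO 8601 date
    # string (e.g. "2025-01-15"); names and the 90-day window are illustrative.
    from datetime import date, timedelta

    REVERIFY_AFTER = timedelta(days=90)  # "at least quarterly"

    def is_overdue(date_modified: str, today: date | None = None) -> bool:
        """True if the post's last verification is more than a quarter old."""
        today = today or date.today()
        return today - date.fromisoformat(date_modified) > REVERIFY_AFTER

    # Example: a post last verified 2025-01-15, checked on 2025-06-01, is overdue.
    print(is_overdue("2025-01-15", today=date(2025, 6, 1)))  # True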

Research reports are static — we don't edit numbers retroactively. If a number turns out wrong, we add a correction entry to /about/corrections and link the correction from the original report. The original number stays visible (struck through) so the audit trail is preserved.

Questions or corrections

If a stat looks wrong, our methodology has a hole, or you can point at primary data we missed: email research@slidegmm.ai. We'll respond within a week and add anything we can verify to the corrections log.