Product Features

What VendorTruth actually does, what it doesn't, and when to use it.

Dialectical Verification Engine

How it works:

  • Runs two parallel AI agents (prosecution + defense) investigating vendor claims from opposing perspectives
  • Uses a small, fast language model optimized for follow-up question generation
  • Recursively explores topics 1-5 levels deep (configurable)
  • Streams live progress updates during 30-60 second research cycles (a minimal version of this loop is sketched below)
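
The loop described above might look roughly like the following. This is a minimal sketch: `search_web` and `propose_followups` are invented stubs standing in for the Exa-backed search and the follow-up-question model, whose internals aren't public.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field

def search_web(question: str) -> list[str]:
    # Stub standing in for the Exa-backed web search.
    return [f"https://example.com/search?q={question.replace(' ', '+')}"]

def propose_followups(question: str, sources: list[str], perspective: str) -> list[str]:
    # Stub standing in for the small follow-up-question model; each agent
    # biases its questions toward attacking or defending the claim.
    return [f"What evidence would {perspective} raise about: {question}"]

@dataclass
class ResearchNode:
    question: str
    sources: list[str] = field(default_factory=list)
    children: list["ResearchNode"] = field(default_factory=list)

def investigate(question: str, perspective: str, depth: int, max_depth: int) -> ResearchNode:
    # Streamed progress: in the product this is pushed live to the UI.
    print(f"[{perspective}] level {depth}: {question}")
    node = ResearchNode(question, sources=search_web(question))
    if depth < max_depth:
        # Follow-ups are generated from the current results, which is why
        # weak initial searches cascade into shallow research trees.
        for follow_up in propose_followups(question, node.sources, perspective):
            node.children.append(investigate(follow_up, perspective, depth + 1, max_depth))
    return node

def verify_claim(claim: str, max_depth: int = 3) -> dict[str, ResearchNode]:
    # Prosecution and defense run in parallel over the same claim.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {p: pool.submit(investigate, claim, p, 1, max_depth)
                   for p in ("prosecution", "defense")}
        return {p: f.result() for p, f in futures.items()}

trees = verify_claim("VendorX guarantees 99.99% uptime", max_depth=2)
```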

Strengths:

  • Avoids confirmation bias by forcing exploration of contradictory perspectives
  • Fast (2-5 minutes for comprehensive reports vs days for manual research)
  • Transparent methodology—you see the questions asked and sources consulted
  • Scales to investigate multiple vendors in parallel

Limitations:

  • Limited to public web sources (Exa API): no access to paywalled content, private communities, or confidential feedback
  • Follow-up questions depend on initial search quality—weak initial results cascade into shallow research trees
  • No human fact-checking layer before reports publish
  • Limited effectiveness for new vendors (<6 months old) with sparse online footprint
  • English-language bias—non-English vendor docs get shallow coverage

When to choose VendorTruth:

  • Evaluating established B2B vendors with public documentation
  • You need balanced research fast (pre-purchase evaluation, contract negotiation)
  • Vendor marketing feels too good to be true and you want adversarial analysis

When to consider alternatives:

  • New/stealth vendors with minimal public presence → use direct customer references
  • Highly regulated industries requiring audit trails → use Gartner/Forrester analyst reports
  • Custom enterprise software → hire consultants with domain expertise

Truth Reports

What's included:

  • Claim Rating (True / Mostly True / Misleading / False / Unverified / Mixed)
  • What's True - validated facts from defense perspective
  • Limitations - caveats from prosecution perspective
  • Strengths - positive aspects with evidence
  • Weaknesses - concerns and gotchas
  • When to choose [vendor] - use cases where vendor excels
  • When to consider alternatives - scenarios where vendor struggles
  • Sources - URLs from both prosecution and defense research (all of these fields are sketched as a schema below)
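
As a mental model, the report can be pictured as a typed record. This is a hypothetical sketch; the field names and types are illustrative, not VendorTruth's actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimRating(str, Enum):
    TRUE = "True"
    MOSTLY_TRUE = "Mostly True"
    MISLEADING = "Misleading"
    FALSE = "False"
    UNVERIFIED = "Unverified"
    MIXED = "Mixed"

@dataclass
class TruthReport:
    claim: str
    rating: ClaimRating
    whats_true: list[str]              # validated facts (defense perspective)
    limitations: list[str]             # caveats (prosecution perspective)
    strengths: list[str]               # positive aspects with evidence
    weaknesses: list[str]              # concerns and gotchas
    when_to_choose: list[str]          # use cases where the vendor excels
    when_to_consider_alternatives: list[str]
    sources: list[str]                 # URLs from both agents' research
```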

How it works:

  • Reports cite specific sources with URLs
  • Balanced structure forces inclusion of both supporting and refuting evidence
  • Knowledge graph links enable deep-dive into related topics
  • Markdown export supported for sharing with teams (see the export sketch below)
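
Building on the hypothetical TruthReport record above, the Markdown export can be pictured as a plain renderer. Again, this is an illustration under those assumptions, not the shipped exporter.

```python
def to_markdown(report: TruthReport) -> str:
    # Render each report section as a Markdown heading plus bullet list.
    def section(title: str, items: list[str]) -> str:
        return f"## {title}\n" + "\n".join(f"- {item}" for item in items)

    return "\n\n".join([
        f"# Claim: {report.claim}",
        f"**Rating:** {report.rating.value}",
        section("What's True", report.whats_true),
        section("Limitations", report.limitations),
        section("Strengths", report.strengths),
        section("Weaknesses", report.weaknesses),
        section("Sources", report.sources),
    ])
```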

Strengths:

  • Snopes-style fact-check format is scannable and decision-focused
  • Inline source citations let you verify claims yourself
  • Dialectical structure surfaces gotchas vendors don't advertise

Limitations:

  • Report quality depends on public data availability—new vendors get thin reports
  • No ongoing updates to historical reports unless you manually re-run verification
  • Synthesis (verdict) is AI-generated—you should validate conclusions against sources
  • No market share data (not in our sources)
  • No SLA on report accuracy

When to use truth reports:

  • Pre-purchase vendor evaluation
  • Validating vendor sales claims during demos
  • Documenting decision rationale for stakeholders

When to skip:

  • Time-sensitive decisions (<1 hour) where you can't afford the 2-5 minute research latency
  • Vendors with zero public footprint (stealth startups, internal tools)

Vendor Monitoring & Alerts

What we monitor:

  • Vendor pricing pages, documentation, and policy pages

How it works:

  • Pages are checked on a schedule: hourly on the Pro tier, daily on the Free tier
  • Email alerts are sent when changes are detected (a minimal detection loop is sketched below)
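
Conceptually this is a polling diff loop. Everything in the sketch below is an assumption about how such a checker could work (the function names, the hash-based change test); it is not the shipped implementation.

```python
import hashlib
import time
import urllib.request

CHECK_INTERVAL_SECONDS = {"pro": 3600, "free": 86400}  # hourly vs. daily

def page_fingerprint(url: str) -> str:
    # Hash the raw page body; any byte-level change counts as "changed".
    body = urllib.request.urlopen(url, timeout=10).read()
    return hashlib.sha256(body).hexdigest()

def monitor(url: str, tier: str, send_email_alert) -> None:
    last = page_fingerprint(url)
    while True:  # runs until the monitor is removed
        time.sleep(CHECK_INTERVAL_SECONDS[tier])
        current = page_fingerprint(url)
        if current != last:
            # A raw diff like this flags every change; the real service adds
            # an AI significance judgment, with the tradeoffs noted below.
            send_email_alert(f"Change detected on {url}")
            last = current
```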

Strengths:

  • Catch vendor price increases before renewal deadlines
  • Track terms of service changes that shift liability
  • Low-effort monitoring (set once, receive alerts passively)

Limitations:

  • Alert significance is AI-judged—may flag trivial changes or miss critical ones
  • Supports pricing and policy changes only (not security incidents or outages)
  • Can't track changes behind login walls or in vendor dashboards
  • No historical diff view (alerts flag that a page changed; you must compare versions yourself)
  • Alert fatigue if you monitor high-churn vendors (frequent feature releases)

When to use monitoring:

  • Tracking vendors you've already purchased from
  • Evaluating multiple vendors over weeks (catch changes during evaluation period)
  • Compliance requirements to document vendor policy updates

When to skip:

  • One-time evaluations where you'll decide in days
  • Vendors that change daily (monitoring noise outweighs signal)

Knowledge Graph

How it works:

  • Reports auto-inject 3-8 inline links to related topics (Wikipedia-style)
  • Clicking a pending link (a topic that has no report yet) auto-generates a new truth report (see the sketch after this list)
  • Full-text search across all published reports
  • Visual graph interface shows vendor relationships
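
One way to picture link injection is as a lookup against an index of published reports, where unknown topics render as pending links. The `published_reports` store and helper names below are invented for illustration, as is the keyword-only search.

```python
import re

published_reports = {"acme-corp": "/reports/acme-corp"}  # slug -> report URL

def slugify(topic: str) -> str:
    return re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")

def inject_links(report_text: str, related_topics: list[str]) -> str:
    # Link up to 8 related topics; topics without a published report become
    # "pending" links that trigger report generation on first click.
    for topic in related_topics[:8]:
        slug = slugify(topic)
        href = published_reports.get(slug, f"/reports/{slug}?pending=1")
        report_text = report_text.replace(topic, f"[{topic}]({href})", 1)
    return report_text

def search_reports(query: str) -> list[str]:
    # Keyword match only: conceptually related reports that use different
    # wording are missed (see Limitations below).
    return [slug for slug in published_reports if query.lower() in slug]

print(inject_links("Compare Acme Corp against Initech.", ["Acme Corp", "Initech"]))
```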

Strengths:

  • Natural discovery of related vendors and topics
  • SEO-optimized public pages (each report becomes searchable content)
  • Low-friction deep dives (one click to explore tangent topics)

Limitations:

  • Visual interface is basic (force-directed graph, not advanced filtering/clustering)
  • Search is keyword-based (not semantic—may miss conceptually related reports)
  • Graph grows organically based on user clicks (coverage has gaps)
  • No manual curation to ensure high-value connections (relies on AI link injection)
  • Visual graph gets cluttered with 100+ nodes

When to use knowledge graph:

  • Exploratory research (you don't know what you don't know)
  • Discovering vendor alternatives organically
  • Building context before deep evaluation

When to skip:

  • You know exactly what vendor you're evaluating (direct search is faster)
  • Need comprehensive vendor comparison matrix (graph is discovery-focused, not systematic comparison)

Next Steps