Report: Plaud.ai Deep Truth Analysis
Overview
This report examines Plaud.ai and its products (Plaud Note hardware and the Plaud AI Notetaker app) with a focus on what might be suspicious, risky, or materially different from the marketing story. The analysis covers:
- Security, privacy, and data-handling reality vs. certifications
- The “world’s No.1 AI note-taking brand” claim
- Transcription accuracy, latency, and multilingual support
- Pricing, the “Unlimited Plan,” and potential dark patterns
The goal is not to tell you whether to buy Plaud, but to map out where the promises line up with evidence, where they are thin, and where user complaints expose meaningful risk.
1. Security, Privacy, and Data Handling
1.1 What Plaud claims
Plaud positions itself as an enterprise-grade, compliance-heavy vendor:
- Prominently advertises SOC 2 Type II, HIPAA, GDPR, EN18031, ISO/IEC 27001:2022, and ISO/IEC 27701:2019 compliance across its marketing and trust pages.
- States that personal data of EU users is handled securely and transparently under GDPR, with controls validated through independent audits.[1]
- Describes encryption as AES‑256 for data at rest and HTTPS/TLS 1.3+ for data in transit, with additional encryption layers for sensitive personal information.[2][3]
- Claims robust cyber controls: OWASP Top 10 mitigation, third‑party penetration tests, audit logging, and real-time monitoring on its Drata-based trust center page.[4]
These are strong signals: independently audited SOC 2 Type II and ISO 27001/27701 certifications are non-trivial investments and do materially reduce many classes of operational and security risk.
1.2 Where things get uncomfortable
Despite the certifications, there are several credible concerns:
1.2.1 US data hosting and the CLOUD Act vs. GDPR
Community discussions explicitly flag that recordings are stored in the US, asking whether this is compatible with GDPR in sensitive sectors:
“I understand that Plaud saves data in the US and thus compliance problems due to the US Cloud Act. Therefore, Plaud cannot be used in healthcare …”[5]
Plaud’s own support articles and GDPR validation report via Drata confirm GDPR certification, but they do not erase the structural tension: under the US CLOUD Act, US authorities can compel access to data held by US companies, even for EU residents, which EU regulators and data protection experts have treated as a live conflict with GDPR.
This does not mean Plaud is non‑compliant (they do hold EuroPrivacy/EN18031 and GDPR validations), but it does mean:
- For healthcare, legal, or high-risk sectors, relying on “we’re GDPR certified” without also considering data residency and US jurisdiction is risky.
- At least one community member explicitly concluded they cannot use Plaud in healthcare for this reason.[5]
1.2.2 Privacy policy and perceived data exploitation
A detailed Reddit thread calls the Plaud privacy policy:
“absolutely, positively abysmal and clearly demonstrates their intent to, at the very least, exploit any and all data they receive …”[6]
Key concerns from that and related discussions:
- Broad language around how recordings and derived data may be used for service improvement and AI, without narrowly scoped, opt‑in semantics.
- Lack of an easily visible, plain-language explanation of whether recordings or transcripts can be used to train models (first- or third-party) and under what conditions.
- Users perceiving that the privacy policy intentionally leaves significant wiggle room.
Plaud’s own support article states that recordings are stored by default only on device and app, and that when uploaded for transcription they are encrypted and “user information is anonymized.”[7] However, “anonymized” is a squishy term in privacy law; true anonymization is a high bar, and the policy doesn’t clearly describe technical guarantees (e.g., irreversible pseudonymization, data minimization timelines).
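To make the distinction concrete, here is a minimal, purely illustrative Python sketch (not Plaud’s actual pipeline; the key and ID format are hypothetical) of keyed pseudonymization, the kind of measure vendors often label “anonymization” even though anyone holding the key can still re-link records to users:

```python
import hashlib
import hmac

# Hypothetical illustration of keyed pseudonymization (not Plaud's real code).
# Whoever holds SECRET_KEY can re-link pseudonyms to users, so this is
# pseudonymization under GDPR, not true (irreversible) anonymization.
SECRET_KEY = b"store-me-in-a-kms-and-rotate"  # placeholder key

def pseudonymize(user_id: str) -> str:
    """Deterministic pseudonym: the same user ID always maps to the same token."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

print(pseudonymize("user-12345"))  # stable 16-hex-char token
print(pseudonymize("user-12345"))  # identical output, so records stay linkable
```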
1.2.3 Reported privacy incident / “AI hallucination” explanation
There is at least one user-reported privacy breach thread where Plaud is said to have responded that the issue was due to AI hallucination / template issues.[8]
- That suggests at least one situation where the generated content did not match the underlying recording in a way that triggered a user privacy alarm.
- From a risk lens, the problem is not only classic “model hallucination” but also traceability: when AI summaries diverge from recordings, it becomes hard to audit what was actually said.
Combined with the lack of detailed, externally visible incident reports or DPA notifications, this should make cautious users treat Plaud as “security-certified but not battle‑tested at Fortune‑100 scale,” especially for regulated environments.
1.3 Net read on security & privacy
- Certifications and crypto are real positives – SOC 2 Type II, ISO 27001/27701, HIPAA/GDPR/EN18031 attestations, AES‑256/TLS 1.3, and third‑party pen tests are legitimate and verifiable via Drata (a quick transit-encryption sanity check is sketched after this list).
- Two main risk fronts remain:
  - Jurisdiction risk (US hosting + CLOUD Act vs. GDPR) for healthcare, legal, and EU public-sector uses.
  - Policy/UX opacity – broad privacy policy language, limited clarity on model training, and at least one privacy incident blamed on “AI hallucination.”
- For high‑sensitivity work, Plaud is closer to “well-intentioned startup with strong compliance posture” than “zero‑trust, sovereign-grade solution.” Alternatives such as fully on‑device tools (e.g., Hyprnote) or self-hosted transcription backends may be more appropriate.
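On the verification point in the first bullet above: a quick, non-invasive way to sanity-check the transit-encryption claim is Python’s standard ssl module. This is a sketch, not an audit; the hostname is a placeholder rather than a confirmed Plaud endpoint, and it says nothing about encryption at rest:

```python
import socket
import ssl

# Placeholder host: substitute the web/API hostname you are evaluating.
HOST, PORT = "api.example.com", 443

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything older than TLS 1.3

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated:", tls.version())     # e.g. 'TLSv1.3' if the claim holds
        print("cipher suite:", tls.cipher()[0])
```

If the handshake fails under this context, the endpoint is not offering TLS 1.3 and the marketing claim deserves a closer look.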
2. “World’s No.1 AI Note-Taking Brand” Claim
2.1 What Plaud actually says
Plaud splashes a few related superlatives across its site:
- “The world’s No.1 AI note-taking brand.” (homepage and product pages).
- Plaud Note marketed as “the world’s first AI voice recorder powered by ChatGPT, trusted by 10,000+ users.”[9]
- Later marketing updates claim Plaud’s solutions are “loved by over 1,000,000 users worldwide since 2023,” and product pages reference “trusted by over 1.5 million professionals globally.”[10][11]
These statements are not accompanied by independent market share data, analyst rankings, or third-party usage benchmarks.
2.2 Market context vs. the slogan
The AI note-taking space is extremely crowded, with:
- Mature, software-only players such as Otter.ai (over 1 billion meetings processed and $123M+ in funding), Notta, tl;dv, Fathom, Granola, MeetGeek, and others.[12]
- Device+software hybrids and competitors (e.g., HiDock, Hyprnote, Taggl) pushing similar “AI note-taker” narratives.
- Education-focused notetakers such as Coconote (which advertises “4.9+ stars by 50,000+ people”) and Jamworks (deployed in 450+ institutions).[13]
No independent ranking from PCMag, Gartner, G2, or similar appears to crown Plaud as #1 by any of:
- Revenue
- MAUs / DAUs
- Transcription volume
- Device shipments
So the “No.1” claim looks like classic marketing puffery: aspirational rather than fact-based.
2.3 Device sales vs. category leadership
Plaud does appear to have category traction on the hardware side:
- Forbes reports Xu (Plaud’s CEO) has sold over 1 million NotePin-style devices to doctors, lawyers, and other professionals since 2023, calling Plaud an “early front-runner” in wearable AI note-taking hardware.[14]
However:
- That success is in a sub‑niche: dedicated AI recording hardware, not the overall universe of “AI note-taking.”
- Competing wearable and dock devices (HiDock, Taggl, etc.) plus software-first incumbents mean any “No.1” is, at best, ambiguous and unsubstantiated.
2.4 Net read on the slogan
- There is no publicly verifiable evidence that Plaud is objectively the world’s #1 AI note-taking brand on any standard competitive metric.
- The slogan is best read as non-literal marketing. For regulated or compliance-sensitive buyers, it should carry zero evidentiary weight.
- If “#1” positioning matters to you, focus instead on specific dimensions: language support, on-device options, security posture, or integration depth.
3. Transcription Accuracy, Latency & Multilingual Support
3.1 What Plaud promises
Marketing for Plaud Intelligence and Plaud Note claims:
- “AI transcription in 112 languages with speaker labels and custom vocabulary.”[15]
- Auto-detection of language and speakers (“Auto Generation”) with industry-specific glossaries for medical, legal, finance, etc.[16]
- High-quality separation of speakers in practice, according to user comments in Plaud’s own subreddit and PCMag’s coverage.[17]
Independent reviewers (PCMag, TechRadar, Tom’s Guide, and multiple bloggers) generally describe Plaud’s transcription as “excellent,” “precise,” or “highly accurate” in typical English-speaking meeting scenarios.
3.2 User experience: where it works well
Supportive user reports and reviews show a pattern:
- High accuracy (often 90–95%) for native or near-native English in typical office environments.[18][19]
- Good handling of multi-speaker meetings in quiet or moderately noisy rooms, with diarization that users describe as “pretty good” even if not perfect.[17]
- PCMag’s review explicitly praises Plaud’s ability to capture meetings and then generate structured notes and mind maps that reduce manual work.[20]
For English-centric, office-style meetings, Plaud is broadly competitive with Otter, Notta, and similar tools.
3.3 The multilingual and “112 languages” issue
This is where the marketing and reality diverge more sharply.
3.3.1 Official support caveats
Plaud’s own support documentation states:
- “Plaud currently provides the most accurate results when the transcription language matches the language spoken in the recording.”[21]
- “Same-language transcription: Plaud transcribes audio in the same language as the recording to ensure accuracy and reliability. Mixed‑language …”, with the article going on to clarify that multilingual / code-switching audio is not well supported.[22]
This directly undercuts the glossy “112 languages” narrative in two ways:
- Mixed-language meetings (code-switching between English and another language, or between dialects) are explicitly known-problem scenarios.
- The system appears tuned far more for single-language sessions than for the real-world bilingual contexts you often see in global companies.
3.3.2 User complaints about non-English accuracy
A concrete complaint from a Dutch user:
“It was very inaccurate. It tried to transcribe the English terms into Dutch, and even the rest of the Dutch wasn't THAT accurate.”[23]
Combined with the official same-language caveats, this indicates:
- Language selection matters a lot. If you choose the wrong transcription language or rely on auto-detection in mixed-language audio, accuracy can degrade badly (a quick way to screen a transcript for code-switching is sketched after this list).
- Some non-English languages (e.g., Dutch in that report) may be materially less accurate than English even when chosen correctly.
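One cheap screen, referenced in the first bullet above, is to run language identification over chunks of an existing (even rough) transcript and see how mixed the audio really was. A minimal sketch using the third-party langdetect package; the chunk size, threshold, and file name are arbitrary assumptions:

```python
# pip install langdetect
from collections import Counter
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make detection deterministic across runs

def language_mix(transcript: str, chunk_words: int = 40) -> Counter:
    """Detect the language of each chunk of a transcript and count the results."""
    words = transcript.split()
    chunks = (" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words))
    return Counter(detect(chunk) for chunk in chunks if chunk.strip())

# Hypothetical usage: if a second language shows up in more than ~10-20% of
# chunks, treat the recording as mixed-language and expect degraded accuracy.
with open("meeting_transcript.txt", encoding="utf-8") as f:
    print(language_mix(f.read()))  # e.g. Counter({'nl': 18, 'en': 5})
```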
3.4 Latency, reliability, and “repeated outputs”
Several pain points emerge from user threads and Plaud support content:
- Delayed transcriptions: a user reports a 40‑minute recording stuck “in transcription” for 7+ hours.[24]
- Plaud’s support docs on “repeated outputs in transcription” advise users to:
  - Limit recordings to a single language.
  - Minimize noisy environments.
  - Avoid music.[25]
These are standard caveats for Whisper-like ASR backends, but they highlight that Plaud’s pipeline is sensitive to real-world acoustic messiness and can generate duplicate or garbled text.
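A quick post-hoc check for this failure mode is to scan an exported transcript for n-grams that repeat suspiciously often, the typical signature of an ASR model looping on noise or music. This is a minimal sketch; the n-gram length, repeat threshold, and file name are arbitrary assumptions:

```python
from collections import Counter

def repeated_ngrams(transcript: str, n: int = 8, min_repeats: int = 3):
    """Return n-grams that occur at least min_repeats times, most frequent first."""
    words = transcript.lower().split()
    grams = Counter(" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    return [(gram, count) for gram, count in grams.most_common() if count >= min_repeats]

# Hypothetical usage on an exported transcript file:
with open("meeting_transcript.txt", encoding="utf-8") as f:
    for gram, count in repeated_ngrams(f.read()):
        print(f"{count}x: {gram}")  # duplicated passages worth re-checking by ear
```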
3.5 Vendor comparison: Plaud vs. Otter vs. Notta
From the available evidence:
- Otter:
  - Claims very high accuracy and has processed >1 billion meetings.[12]
  - Deep integration with Zoom/Meet/Teams, real-time streaming, and robust diarization.
- Notta:
  - Publicly claims up to ~98.86% accuracy in challenging audio, with a strong emphasis on multilingual support and custom vocabulary.[26]
- Plaud:
  - Strengths: hardware audio quality, on-the-go recording, and fairly strong English transcription and summarization.
  - Weak spots: mixed-language handling, some non-English language accuracy, queue latency under load, and reliance on cloud transcription (vs. truly on-device ASR such as Hyprnote for security-sensitive uses).
There is no independent, large-sample quantitative benchmark published that ranks Plaud against Otter/Notta, so any ranking is qualitative. But from user complaints and vendor docs, Plaud’s “112 languages” claim should be read as breadth of coverage, not uniform quality.
3.6 Net read on transcription claims
- For English, single-language meetings and reasonably clean audio, Plaud’s transcription is usually good to very good, and often “good enough” for notes and summaries.
- The marketing emphasis on 112 languages and auto-detection is oversold relative to what support docs and user reports reveal.
- Mixed-language scenarios, some non-English languages, and heavy noise can lead to substantial degradation and slow or stuck jobs.
If multilingual, code-switching, or non-English accuracy is critical, you should treat Plaud’s language marketing as aspirational and test extensively against alternatives in your own environment.
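A simple way to run such a trial is to transcribe the same recording with each tool, hand-correct one reference transcript, and score every export with word error rate (WER). The sketch below uses the third-party jiwer package; the file names and vendor list are placeholders for whatever exports you actually collect:

```python
# pip install jiwer
import jiwer

# Hypothetical trial: one hand-corrected reference vs. each vendor's export.
with open("reference_transcript.txt", encoding="utf-8") as f:
    reference = f.read().lower()

exports = {
    "plaud": "plaud_export.txt",
    "otter": "otter_export.txt",
    "notta": "notta_export.txt",
}

for vendor, path in exports.items():
    with open(path, encoding="utf-8") as f:
        hypothesis = f.read().lower()
    print(f"{vendor}: WER = {jiwer.wer(reference, hypothesis):.1%}")  # lower is better
```

Run the trial on audio that matches your real conditions (your accents, your noise, your language mix); headline accuracy figures rarely transfer.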
4. Pricing, “Unlimited Plan” and Potential Dark Patterns
4.1 How Plaud’s plans are structured
From Plaud’s support and product pages:[27]
- Starter Plan (free, tied to device):
  - 300 minutes of transcription per month; unused minutes do not roll over.
- Pro Plan:
  - 1,200 minutes of transcription per month.
- Unlimited Plan / Annual AI Plan:
  - Marketed as “unlimited transcription minutes,” often with bundled discounts (e.g., a Christmas sale of up to 20% off).
- A separate content-marketing piece describes upgrading to Pro or Unlimited as essential for “efficiency, clarity, and control,” emphasizing unlimited transcription as a key benefit.[28]
These plan boundaries are clearly documented in support articles and the dedicated Unlimited Plan page, so on paper the structure is transparent.
4.2 Refunds, billing terms, and cancellation
Plaud’s user agreement and refund policy specify:[29][30]
- You can cancel a paid subscription at any time; payments are non-refundable unless required by law.
- Full refund is possible if you purchased an Annual Membership or Transcription Quota and have not used or activated it, within 30 days of purchase.
- Refunds are processed within 5–10 business days upon approval.
That is relatively standard SaaS behavior and explicitly documented.
4.3 Evidence of user friction and pain points
Despite the official policies, several patterns emerge from user posts:
4.3.1 Perception that the subscription is overpriced / exclusionary
Academic and accessibility-focused users argue that Plaud’s subscription model is fundamentally misaligned with their needs:
“Plaud/AI subscription costs are fundamentally unfriendly to academic users and those needing accessibility support… When AI services charge based on usage minutes, it actively excludes students and users who need the service the most.”[31]
This is not exactly a “dark pattern,” but it is a value misalignment: users feel “locked out” of the full product because heavy note-taking use cases burn through minutes quickly (a back-of-the-envelope illustration follows).
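The quotas below come from Plaud’s published plans; the usage profile is a hypothetical student schedule, so adjust it to your own:

```python
# Plan quotas from Plaud's support docs; usage numbers are a hypothetical example.
PLANS = {"Starter": 300, "Pro": 1_200}        # transcription minutes per month

lecture_hours_per_week = 12                   # e.g. a fairly ordinary course load
weekly_minutes = lecture_hours_per_week * 60  # 720 min/week
monthly_minutes = weekly_minutes * 4.33       # ~3,118 min/month

for plan, quota in PLANS.items():
    print(f"{plan}: {quota} min covers ~{quota / weekly_minutes:.1f} weeks "
          f"of a {monthly_minutes:.0f} min/month workload")
# Starter covers roughly 0.4 weeks and Pro roughly 1.7 weeks of that month.
```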
4.3.2 Reported billing errors and large unexpected charges
One Reddit user reports a serious billing incident:
“Plaud charged me almost $300 for a month of transcriptions when I did not authorize it.”[32]
Another Facebook community post complains of billing issues combined with poor escalation, calling it “unacceptable” and urging Plaud to improve customer treatment.[33]
While these are anecdotal, they are precise and public, and there is no visible public response from Plaud clarifying whether:
- It was user error (e.g., misunderstanding quotas vs. unlimited).
- It was a genuine bug or backend misconfiguration.
- The cases were resolved with a refund or other remediation.
From a risk perspective, these indicate real friction in billing and customer support, even if not a systemic “scam.”
4.3.3 Customer support complaints
Multiple posts and reviews describe slow or ineffective support:
- “Customer support is terrible,” says a long Reddit post advising against buying the device.[34]
- A user with a broken Plaud Note reports being “ghosted” by support, receiving only generic AI‑sounding replies.[35]
- Third-party review sites such as Bluedot summarize the pattern: the product is technically strong, but connectivity/recording issues and unresponsive support are recurring themes.[36]
Trustpilot, on the other hand, shows overwhelmingly positive reviews with many users praising the product and support. That contrast suggests variance in experience: some get fast resolution; others get stuck in escalation limbo.
4.4 Are there “hidden limits” on the Unlimited Plan?
The available documentation and community posts provide partial but not conclusive insight:
- Official docs simply state the Unlimited Plan provides unlimited transcription minutes.[27]
- Marketing blogs talk about “unlimited transcription” enabling you to stop worrying about quotas.[28]
- No explicit Fair Use / “reasonable usage” clause is surfaced publicly in the support docs we saw.
- However, given the technical constraints (GPU inference costs, abuse prevention), it is highly likely Plaud applies internal throttling or abuse detection even if not spelled out. That’s industry standard.
There is no strong, direct evidence of hard, secret caps on “Unlimited,” but:
- Stuck jobs and long delays (hours) under moderate usage are a de facto soft limit when the pipeline backs up.
- The combination of minute-based add-ons, upsells, and limited academic concessions can feel like an aggressive monetization design, especially for students.
From a strict truthfulness angle:
- “Unlimited minutes” appears functionally true for typical business users but constrained by infrastructure stability.
- There is not yet evidence of Plaud retroactively refusing service to heavy, but reasonable, Unlimited users or silently cutting them off.
4.5 Net read on pricing & dark patterns
- Plans and quotas are listed clearly, and cancellation/refund terms are documented. That’s a point in Plaud’s favor.
- There are credible stories of surprise charges (~$300), billing escalation failures, and limited accommodation for academics and accessibility users.
- The “Unlimited” branding is marketing-accurate in a narrow sense, but real-world latency, backend constraints, and the lack of clear Fair Use language make it feel riskier than the word suggests.
- There is no smoking-gun evidence of outright dark patterns (like hidden auto-renew traps or deceptive checkout UX), but the combination of aggressive pricing, support issues, and a fragile transcription backend creates a trust tax you should factor in.
5. Overall Risk & Suspicion Profile
5.1 What looks solid
- Security posture is unusually strong for a relatively young hardware+SaaS company: SOC 2 Type II, ISO 27001/27701, HIPAA, GDPR/EN18031, AES‑256/TLS 1.3, external audits.
- Product value (when it works) is widely praised:
  - Strong English transcription and genuinely useful AI summaries and mind maps (PCMag and multiple user reviews).
  - Good hardware audio capture for on-the-go recording.
  - Speaker separation that users describe as “pretty good.”
5.2 Where you should be suspicious or cautious
- Jurisdiction and privacy reality vs. branding
  - Certifications do not neutralize US CLOUD Act exposure or the risk profile for highly regulated EU use.
  - Privacy policy language is broad, and at least one user-perceived privacy incident was attributed to “AI hallucination.”
- Marketing inflation
  - “World’s No.1 AI note-taking brand” has no objective backing; treat this as puffery, not fact.
  - “112 languages” is a capability ceiling, not a guarantee of good accuracy across all languages or code-switched audio.
- Multilingual, noisy, or complex meetings
  - Official docs admit Plaud does not support true multilingual transcription well.
  - User reports show poor accuracy in some non-English languages, plus repeated/hallucinated text for noisy or music-containing recordings.
- Billing and support risk
  - Isolated but serious-seeming billing complaints (~$300 charges, weak escalation) indicate process and support gaps, not pure malice, but they still matter.
  - The subscription is widely seen as expensive, particularly for students and accessibility users.
- Operational maturity
  - Multiple threads about connectivity, stuck transcriptions, and inconsistent customer service point to a platform that is still maturing operationally, even if the underlying tech is strong.
6. How to Engage Safely With Plaud
If you are considering Plaud but want to stay on the safe side:
- Decide your risk tier
  - Low/medium sensitivity (general business meetings, personal productivity): Plaud is likely acceptable if you are comfortable with US-based cloud AI tools generally (similar to using Otter, Notta, or Rev).
  - High sensitivity (healthcare, legal privilege, government, trade secrets): Plaud’s jurisdiction and policy posture are not best-in-class zero-trust; consider on-device or EU-sovereign alternatives.
- Constrain your use case
  - Prefer single-language recordings in environments with modest noise.
  - For mixed-language or non-English work, run controlled trials against Otter, Notta, or specialized tools before committing.
- Treat “Unlimited” as “high but not infinite”
  - Expect that extremely heavy usage may trigger queue delays or soft throttling even if not documented.
  - Monitor bills closely in the first 1–2 months; if anything looks off, leverage the 30‑day unused-refund window and your card issuer’s dispute channels if necessary.
- Lock in your own retention controls
  - Regularly export and locally archive transcripts/summaries you care about; do not rely solely on Plaud’s cloud storage (a minimal archiving sketch follows this list).
  - Use Plaud’s deletion functions to minimize residual data if you ever decide to stop using the service.
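The archiving sketch referenced above assumes you periodically export transcripts/summaries from the app into a folder; the paths are hypothetical and no Plaud API is assumed:

```python
import hashlib
import shutil
from datetime import date
from pathlib import Path

# Hypothetical paths: wherever your manual exports land, and where you keep archives.
EXPORT_DIR = Path("~/Downloads/plaud_exports").expanduser()
ARCHIVE_DIR = Path("~/notes_archive").expanduser() / date.today().isoformat()
ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)

with (ARCHIVE_DIR / "MANIFEST.sha256").open("a", encoding="utf-8") as manifest:
    for f in sorted(EXPORT_DIR.glob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            shutil.copy2(f, ARCHIVE_DIR / f.name)        # copy preserves timestamps
            manifest.write(f"{digest}  {f.name}\n")      # integrity record per file

print(f"archived exports to {ARCHIVE_DIR}")
```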
- For institutions
  - Push Plaud for a signed DPA, explicit model-training guarantees, and data-residency clarifications.
  - Ask directly whether they will commit contractually not to use your recordings for model training beyond your account, and how long logs and derived artifacts are retained.
7. Key Follow-Up Questions You Might Want Answered
If you want to dig deeper on specific angles, these are natural next investigations:
- Is Plaud’s US hosting compatible with strict EU healthcare/privacy rules?
- How does Plaud’s transcription accuracy compare to Otter and Notta in controlled tests?
- How does Plaud’s cloud transcription risk compare with Hyprnote’s fully on-device model?
- Does Plaud’s “Unlimited Plan” impose any undocumented fair-use caps in practice?
- What actually happened in the reported Plaud “privacy breach” incident, and how did the company respond?
- How reliable and responsive is Plaud’s support organization over time?
This report relies on Plaud’s own documentation, third-party reviews (PCMag, Forbes, blogs), and community posts (Reddit, Facebook groups). Complaints and praise alike are anecdotal but consistent enough to outline the major risk surfaces. Where Plaud’s claims rest solely on its own marketing (e.g., “No.1 AI note-taking brand”), they should not be treated as independently validated facts.
Footnotes
- [1] Plaud trust pages describe GDPR and HIPAA compliance with SOC 2 Type II audits validating security, availability, processing integrity, confidentiality, and privacy controls. Plaud Trust Center, Global trust center.
- [2] Plaud support article: data at rest encrypted with AES‑256 and data in transit with HTTPS/TLS 1.3+. Support.
- [3] Additional encryption layers for sensitive personal information referenced on Plaud trust page. Plaud Trust.
- [4] Drata integration page lists OWASP Top 10 mitigation, third‑party pen tests, logging and monitoring. Plaud x Drata.
- [5] Facebook Plaud community post noting US hosting and perceived incompatibility with healthcare use due to the CLOUD Act. Facebook group.
- [6] Reddit discussion criticizing Plaud’s privacy policy as exploitative in scope. Reddit.
- [7] Plaud support describing default local storage, encrypted upload, and anonymization. Support.
- [8] Reddit thread “Alert: privacy breach” where Plaud attributes a concerning behavior to AI hallucination/template issues. Reddit.
- [9] Plaud Note described as world’s first ChatGPT-powered AI voice recorder “trusted by 10,000+ users.” Plaud company profile.
- [10] Plaud site describing note-taking solutions loved by over 1,000,000 users since 2023. Plaud company page.
- [11] Plaud Note AI device page claiming “trusted by over 1.5 million professionals globally.” Plaud Note AI Notetaking Device page.
- [12] Otter’s own description of >1 billion meetings processed and status as a leading AI meeting agent. Otter.
- [14] Forbes on Plaud’s NotePin selling over 1M units and positioning Plaud as an early profitable AI hardware startup. Forbes.
- [15] Product page for Plaud Note lists AI transcription in 112 languages with speaker labels and custom vocabulary. Plaud Note product page.
- [16] NotePin product page highlighting auto language/speaker detection and custom vocabularies. Plaud NotePin.
- [17] Reddit user noting Plaud “does a good job of separating speakers” and that they can be labeled in the app. Reddit.
- [18] Review describing Plaud’s accuracy at ~90–95% for native English speakers and decent performance in semi-noisy environments. Fritz.ai review.
- [19] Plaud user saying “transcription is excellent” after testing for a week. Reddit.
- [20] PCMag describing Plaud’s transcripts, summaries, and mind maps as genuinely helpful for productivity. PCMag.
- [21] Support article on why Plaud doesn’t support multilingual transcription, emphasizing same-language-only accuracy. Support.
- [22] Follow-up support clarification that Plaud transcribes in the same language as the recording and that mixed-language audio is problematic. Support.
- [23] Reddit user describing very inaccurate Dutch transcription where English terms were incorrectly converted. Reddit.
- [24] Facebook community post complaining of a 40‑minute recording being stuck in transcription for more than 7 hours. Facebook.
- [25] Plaud support article on repeated outputs, suggesting limitations on mixed languages and noisy environments. Support.
- [26] Notta marketing claiming ~98.86% transcription accuracy in challenging audio contexts. Affine blog on Otter vs Notta.
- [27] Support article listing Starter (300 minutes), Pro (1,200 minutes), and Unlimited (unlimited minutes). Support.
- [28] Plaud blog on why upgrading to Pro or Unlimited matters, emphasizing unlimited transcription and reduced time spent on notes. Plaud blog.
- [29] Plaud user agreement: cancel any time; payments non-refundable unless required by law. User Agreement.
- [30] Plaud refund policy giving full refund within 30 days for unused annual subscription or quota. Refund Policy.
- [31] Reddit thread arguing Plaud’s minute-based pricing is fundamentally unfriendly to academic and accessibility-focused users. Reddit.
- [32] Reddit report of ~US$300 charge for a month of transcriptions allegedly without explicit authorization. Reddit.
- [33] Facebook community post criticizing Plaud’s billing issues and poor escalation. Facebook.
- [34] Reddit post “Plaud – should you buy it? Short answer: no” citing weeks-late delivery and poor support. Reddit.
- [35] Reddit user describing a broken device and being “ghosted” by customer service. Reddit.
- [36] Bluedot review: “Plaud AI itself works well” but many users reported basic connectivity/recording issues and unresponsive support. Bluedot review.