Rytr LLC; Analysis of Proposed Consent Order To Aid Public Comment
The consent agreement in this matter settles alleged violations of Federal law prohibiting unfair or deceptive acts or practices. The attached Analysis of Proposed Consent Order to Aid Public Comment describes both the allegations in the complaint and the terms of the consent order – embodied in the consent agreement – that would settle these allegations.
What this order actually says
The FTC settled a case against Rytr (an AI writing tool) over its AI-generated testimonial and review feature. The complaint alleged that the service let subscribers generate detailed, genuine-sounding reviews untethered to any actual product experience, furnishing users with the means to deceive consumers. The broader lesson is the same regardless of the specific feature: the FTC treats dishonest AI output and marketing as unfair or deceptive practices, and expects AI companies to be honest about what their tools actually do and transparent about how they handle user data.
Who it applies to
- If you claim your AI does something it doesn't reliably do – like saying your medical scribe captures "100% accurate" notes when accuracy varies, or your hiring assistant "eliminates bias" without evidence.
- If your privacy policy is vague about data usage – especially if you're training on, storing, or sharing customer data without explicit consent.
- If you're in the US – the FTC has jurisdiction over US-based companies and those serving US customers.
- All AI use cases – medical scribes, hiring assistants, support chatbots, and any other AI tool fall under this. Size doesn't matter; enforcement can target solo founders.
- Data in scope: user inputs, conversation logs, uploaded files, and any personal, health, or employment data your product touches.
- Data out of scope: publicly available data you're training on may fall under different rules, but data your users give you is always covered.
What founders need to do
- Audit your marketing claims (2-3 days). Go through your website, sales docs, and product descriptions. Remove or soften any claim you can't prove with testing. Instead of "98% accurate," say "accuracy varies by use case" or cite actual benchmarks.
- Write a clear, specific privacy policy (1-2 days). Explain exactly what data you collect, how long you store it, whether you use it for training, and who has access. Link to it prominently. Update it whenever your practices change.
- Get explicit user consent for data use (1 day). Before storing or training on user data, ask permission. Don't bury consent in terms of service.
- Document your AI's limitations (ongoing). Keep records of testing, failure modes, and edge cases. If a customer asks "how accurate is this?", you should have an honest answer backed by data.
- Monitor FTC guidance (1-2 hours monthly). The FTC is actively enforcing AI truthfulness. Check ftc.gov/news occasionally to catch updates early.
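The "document your limitations" step above is easiest to sustain if benchmarking is a script, not a chore. The sketch below is one illustrative way to do it (the function names, the token-level accuracy metric, and the `benchmarks.jsonl` log file are all assumptions, not anything the order prescribes): it scores a model against reference outputs and appends a dated summary you can later cite to a customer.

```python
import datetime
import json


def token_accuracy(reference: str, hypothesis: str) -> float:
    """Fraction of reference tokens the hypothesis reproduces in order
    (longest common subsequence over whitespace-split tokens)."""
    ref, hyp = reference.split(), hypothesis.split()
    if not ref:
        return 1.0
    # Classic LCS dynamic program: dp[i][j] = LCS length of ref[:i] vs hyp[:j].
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref):
        for j, h in enumerate(hyp):
            dp[i + 1][j + 1] = (
                dp[i][j] + 1 if r == h else max(dp[i][j + 1], dp[i + 1][j])
            )
    return dp[len(ref)][len(hyp)] / len(ref)


def record_benchmark(cases, model_fn, path="benchmarks.jsonl"):
    """Score model_fn over (input, expected) pairs and append a dated
    summary record to a JSONL file - a paper trail for "how accurate is this?"."""
    scores = [token_accuracy(expected, model_fn(text)) for text, expected in cases]
    record = {
        "date": datetime.date.today().isoformat(),
        "n_cases": len(cases),
        "mean_accuracy": round(sum(scores) / len(scores), 4),
        "min_accuracy": round(min(scores), 4),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Logging the minimum score alongside the mean is deliberate: the worst case is what a "98% accurate" marketing claim tends to hide, and having it on record is what lets you soften the claim honestly.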
Bottom line
Act now if you're overstating what your AI does or being vague about data handling – the FTC is actively pursuing these cases and doesn't care about company size.