Trade Regulation Rule on Impersonation of Government and Businesses
The Federal Trade Commission ("FTC" or "Commission") published a supplemental notice of proposed rulemaking ("SNPRM") in the Federal Register on March 1, 2024, titled "Trade Regulation Rule on Impersonation of Government and Businesses" ("Rule"). The SNPRM requested additional public comment on three questions: whether the Commission should revise the title of the Rule, whether it should add a prohibition on the impersonation of individuals, and whether it should extend liability for violations of the Rule to parties who provide goods or services with knowledge or reason to know that those goods or services will be used in impersonation schemes that violate the Rule. The SNPRM also announced an opportunity for interested parties to present their positions orally at an informal hearing, and six commenters requested to participate. The Commission has decided not to proceed with the SNPRM's proposed means-and-instrumentalities provision at this time, so the informal hearing will address issues relating to the proposed prohibition on impersonating individuals.
What this rule actually says
The Rule bans impersonating government agencies or businesses to trick people into handing over money, data, or access, and the SNPRM proposes extending that ban to impersonating individuals. For example: a scammer pretending to be the IRS to steal tax info, or a fake "Apple Support" chatbot harvesting passwords. The SNPRM also proposed liability for companies that knowingly supply tools for these schemes, such as software designed specifically for impersonation fraud, but the Commission has set that provision aside for now.
Who it applies to
- If you're in the US: This applies to you. The FTC has broad jurisdiction over most commercial activities affecting US consumers.
- If your AI chatbot or tool impersonates a government agency, real business, or real person to deceive users: You're covered. This includes medical scribes pretending to be licensed doctors, hiring assistants misrepresenting themselves as company HR, or support chatbots claiming to be official company support when they're not.
- If you knowingly sell or provide tools that help others run impersonation schemes: The SNPRM proposed making you liable even if you don't do the impersonating yourself (example: selling "chatbot templates for social engineering"). The Commission has shelved that means-and-instrumentalities provision for now, but don't read that as a green light; knowingly facilitating deception can still draw scrutiny under the FTC's existing authority.
- If you're truthful about what you are: You're probably fine. A clearly labeled AI support chatbot that honestly represents itself as AI isn't impersonating anyone (see the labeling sketch after this list).
- Data scope: This rule focuses on deception for financial or data theft, not general privacy violations. It's about *who* your tool claims to be, not how much user data you collect.
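To make the honest-labeling point concrete, here is a minimal sketch of what self-identification can look like in a support bot. The helper names (`buildSystemPrompt`, `chatHeaderLabel`) and the disclosure wording are hypothetical illustrations, not FTC-mandated or legally vetted language.

```typescript
// Hypothetical example: a system prompt and UI label that keep an AI
// support bot honestly identified. Names and wording are illustrative,
// not legal disclosure text.
interface BotIdentity {
  productName: string; // your product, e.g. "AcmeDesk"
  vendorName: string;  // your company, i.e. the bot's actual operator
}

function buildSystemPrompt(id: BotIdentity): string {
  return [
    `You are an AI assistant operated by ${id.vendorName} for ${id.productName}.`,
    "Always identify yourself as an AI assistant if asked who or what you are.",
    "Never claim to be a human agent, a licensed professional, a government",
    "agency, or official support for any company other than your operator.",
  ].join("\n");
}

// A persistent UI label is clearer than burying the disclosure in a ToS.
function chatHeaderLabel(id: BotIdentity): string {
  return `${id.productName} AI assistant (automated, operated by ${id.vendorName})`;
}

console.log(buildSystemPrompt({ productName: "AcmeDesk", vendorName: "Acme, Inc." }));
console.log(chatHeaderLabel({ productName: "AcmeDesk", vendorName: "Acme, Inc." }));
```

The design choice worth copying is redundancy: the identity is stated in the model's instructions *and* shown persistently in the UI, so a user never has to ask.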
What founders need to do
- Audit your marketing and product (1-2 days): Does your tool claim to be a real person, licensed professional, or government entity when it isn't? If yes, stop immediately and rebrand honestly.
- Check your terms and positioning (1 day): Make sure users clearly know they're interacting with an AI, not a human or an official entity. Be explicit during onboarding, not just in the terms of service; the labeling sketch above shows one way to surface it.
- Monitor FTC updates (ongoing, 30 mins quarterly): The core ban on impersonating governments and businesses is already final and enforceable (it took effect April 1, 2024); what remains *proposed* is the extension to impersonating individuals, which is what the FTC's informal hearing will address. Sign up for FTC updates or poll the Federal Register (see the sketch after this list) so you catch the individual-impersonation rule if and when it's finalized, likely sometime in 2025.
- Document your honest practices (1 day): Keep records showing your product clearly discloses what it is; the audit-trail sketch after this list is one lightweight way to do that. This protects you if questions arise.
- If you're in healthcare or hiring: Double-check you're not implying licensure or authority you don't have, since those fields have extra sensitivity around impersonation.
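One way to automate the quarterly check is sketched below, assuming the Federal Register's public v1 API (`documents.json` with `conditions[term]` and `conditions[agencies][]` filters) behaves as documented; the search term and scheduling are illustrative.

```typescript
// Sketch: poll the Federal Register's public API for new FTC documents
// mentioning the impersonation rule. Runs on Node 18+ (built-in fetch).
const FR_API = "https://www.federalregister.gov/api/v1/documents.json";

interface FRDocument {
  title: string;
  publication_date: string;
  html_url: string;
}

async function checkImpersonationRuleUpdates(): Promise<FRDocument[]> {
  const params = new URLSearchParams();
  params.append("conditions[term]", "impersonation of government and businesses");
  params.append("conditions[agencies][]", "federal-trade-commission");
  params.append("order", "newest");
  params.append("per_page", "5");

  const res = await fetch(`${FR_API}?${params}`);
  if (!res.ok) throw new Error(`Federal Register API returned ${res.status}`);
  const body = (await res.json()) as { results?: FRDocument[] };
  return body.results ?? [];
}

// Run quarterly (e.g. from a cron job) and eyeball the titles.
checkImpersonationRuleUpdates().then((docs) => {
  for (const d of docs) {
    console.log(`${d.publication_date}  ${d.title}\n  ${d.html_url}`);
  }
});
```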
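And for the documentation step, here is a minimal sketch of an append-only disclosure log. The schema, the `recordDisclosure` helper, and the file-based storage are all invented for illustration; the point is simply to capture the exact disclosure wording each user saw, and when.

```typescript
// Hypothetical audit-trail sketch: record each time a user is shown the
// AI disclosure. Schema and storage are illustrative; use whatever
// durable store you already trust.
import { appendFileSync } from "node:fs";

interface DisclosureEvent {
  userId: string;
  surface: "onboarding" | "chat_header" | "first_message";
  disclosureText: string; // the exact wording the user saw
  shownAt: string;        // ISO 8601 timestamp
}

function recordDisclosure(event: DisclosureEvent, logPath = "disclosures.log"): void {
  // Append-only JSON lines: cheap, greppable, and hard to quietly rewrite.
  appendFileSync(logPath, JSON.stringify(event) + "\n");
}

recordDisclosure({
  userId: "user_123",
  surface: "onboarding",
  disclosureText: "You're chatting with an AI assistant, not a human agent.",
  shownAt: new Date().toISOString(),
});
```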
Bottom line
If your AI is honest about being AI and doesn't pretend to be someone it's not, monitor but don't panic; if it does impersonate a government agency or business, stop and rebrand now, because that part of the Rule is already enforceable, and the individual-impersonation extension may not be far behind.