The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive, even if that’s not its intended or sole purpose. So consider:

Should you even be making or selling it? If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable, and often obvious, ways it could be misused for fraud or cause other harm. Then ask yourself whether such risks are high enough that you shouldn’t offer the product at all. It’s become a meme, but here we’ll paraphrase Dr. Ian Malcolm, the Jeff Goldblum character in “Jurassic Park,” who admonished executives for being so preoccupied with whether they could build something that they didn’t stop to think if they should.

Are you effectively mitigating the risks? If you decide to make or offer a product like that, take all reasonable precautions before it hits the market. The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury. Merely warning your customers about misuse, or telling them to make disclosures, is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features, not bug fixes or optional features that third parties can undermine via modification or removal (a minimal sketch of this design principle appears after these questions). If your tool is intended to help people, also ask yourself whether it really needs to emulate humans or can be just as effective looking, sounding, or acting like a bot.

Are you over-relying on post-release detection? Researchers continue to improve detection methods for AI-generated videos, images, and audio; recognizing AI-generated text is more difficult. But these researchers are in an arms race with the companies developing generative AI tools, and the fraudsters using those tools will often have moved on by the time someone detects their fake content. In any event, the burden shouldn’t be on consumers to figure out whether a generative AI tool is being used to scam them.

Are you misleading people about what they’re seeing, hearing, or reading? If you’re an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We’ve previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result, and in fact has resulted, in FTC enforcement actions.
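To make the “durable, built-in” point concrete, here is a minimal Python sketch of one way a disclosure could be baked into a generation pipeline rather than offered as an optional flag. Everything in it (the GeneratedImage type, the generate and _render functions, the model identifier) is a hypothetical illustration, not a real product or FTC-endorsed API; the idea is simply that the provenance record is constructed inside the pipeline and tied to the exact output bytes, so a downstream integration cannot switch it off without modifying the tool itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import hashlib
import json

# Hypothetical sketch only: the names below are illustrative assumptions,
# not a real product API. The point is that the disclosure record is
# built inside the pipeline, so callers cannot opt out of it.

@dataclass(frozen=True)
class GeneratedImage:
    pixels: bytes       # the generated content
    provenance: str     # serialized disclosure record, always present

def _provenance_record(pixels: bytes, model_id: str) -> str:
    """Build a disclosure record tied to the exact output bytes."""
    record = {
        "generator": model_id,
        "created": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(pixels).hexdigest(),
        "disclosure": "This content was generated by an AI system.",
    }
    return json.dumps(record, sort_keys=True)

def _render(prompt: str) -> bytes:
    # Stand-in for the actual model call; derives deterministic bytes
    # from the prompt so the sketch runs without any model.
    return hashlib.sha256(prompt.encode("utf-8")).digest()

def generate(prompt: str, model_id: str = "example-model-v1") -> GeneratedImage:
    pixels = _render(prompt)
    # The disclosure is attached here, inside the pipeline, rather than
    # as an optional post-processing step a third party can skip or strip.
    return GeneratedImage(pixels=pixels,
                          provenance=_provenance_record(pixels, model_id))

if __name__ == "__main__":
    image = generate("a watercolor of a lighthouse")
    print(image.provenance)
```

A real implementation would go further, for example cryptographically signing the record or adopting an open provenance standard such as C2PA content credentials, so that stripping the disclosure is itself detectable.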
Source: Chatbots, deepfakes, and voice clones: AI deception for sale | Federal Trade Commission