Have you ever questioned the authenticity of the image or video you’re viewing, the audio you’re hearing, or the text you’re reading? When artificial intelligence (AI) is involved, distinguishing genuine from fake can be difficult. The ability to produce deepfakes, voice clones, and computer-generated communications can help fraudsters make their schemes far more convincing, and more successful.
Although using AI for illegal activity is an obvious misuse of the technology, even well-intentioned companies can violate the Federal Trade Commission (FTC) Act. Section 5 of the FTC Act, “Unfair or Deceptive Acts or Practices,” prohibits any material representation, omission, or practice that would ordinarily mislead consumers. Here’s how to lower the likelihood of violations.
Limiting the Technology’s Risk
AI may be used to enhance products, boost manufacturing effectiveness, and help your business stand out in a crowded market. However, the application of AI can also result in deception and unintended FTC Act violations.
If you design AI-based solutions, set aside time to consider how they could be abused. Suppose you’re designing an application that uses AI to analyze a voice and create a new recording that mimics that individual. How might a fraudster abuse that technology to engage in illegal activity? If you can imagine how someone could misuse your app, thieves can, too. Don’t rush a product to market and bolt on risk management only after consumers (and criminals) begin using it. Build controls into the AI before release.
For example, when developing a voice cloning application, you might want to:
- Secure consent from the individuals whose voices will be cloned,
- Include a watermark in the audio indicating that it was produced through cloning, and
- Limit the number of voices a user can clone.
Robust user authentication and verification, analytics to detect abuse, and a strict data retention policy can also help mitigate AI’s inherent marketplace risk. The sketch below illustrates how a few of these guardrails might fit together.
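To make these controls concrete, here is a minimal sketch, in Python, of how a hypothetical voice cloning service might enforce them before generating any audio. Every name here (`VoiceCloneService`, `MAX_CLONES_PER_USER`, and so on) is an illustrative assumption rather than a real product or library, and a simple metadata tag stands in for a production-grade audio watermark.

```python
from dataclasses import dataclass, field

MAX_CLONES_PER_USER = 3  # illustrative cap on voices per account


@dataclass
class CloneRequest:
    user_id: str
    subject_id: str        # person whose voice would be cloned
    consent_on_file: bool  # signed consent recorded for the subject


@dataclass
class VoiceCloneService:
    # Hypothetical in-memory record of clones created per user.
    clones_by_user: dict = field(default_factory=dict)

    def create_clone(self, request: CloneRequest) -> dict:
        # Control 1: refuse to clone anyone who hasn't consented.
        if not request.consent_on_file:
            raise PermissionError("No recorded consent for this voice subject.")

        # Control 3: cap the number of voices a single user can clone.
        existing = self.clones_by_user.setdefault(request.user_id, [])
        if len(existing) >= MAX_CLONES_PER_USER:
            raise RuntimeError("Per-user clone limit reached.")
        existing.append(request.subject_id)

        # Control 2: tag the output so downstream tools can flag it as
        # synthetic. (A real product would embed an inaudible audio
        # watermark; this metadata flag merely stands in for that.)
        return {
            "subject": request.subject_id,
            "synthetic": True,
            "watermark": "AI-GENERATED-VOICE",
        }


# Usage: a request without consent is rejected before any audio is produced.
service = VoiceCloneService()
clone = service.create_clone(
    CloneRequest("user-1", "narrator-a", consent_on_file=True)
)
print(clone["watermark"])  # -> AI-GENERATED-VOICE
```

The specific mechanisms matter less than where they sit: each check runs before generation, so abuse is blocked by default rather than cleaned up after release.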
Responsibility to Customers
Although the technology for identifying AI-generated content is improving every day, it often lags behind the technology employed to evade detection. As a result, consumers may not know when AI is used or be able to detect it themselves. To maintain consumer loyalty and prevent unfavorable media coverage, it’s better for your business to be transparent about its use of AI.
The same is true for using AI in advertising. Suppose, for example, that your ads use AI to generate an image, a voice, or written content. If you don’t disclose that, consumers will assume AI wasn’t used, and that mismatch can attract regulatory scrutiny. In other words, if your company’s ads mislead consumers, you could face FTC enforcement action.
Be Proactive and Consult Professionals
Deceiving consumers isn’t your company’s objective. But when using AI in products, services, and advertising, you must be proactive and act responsibly. Consider how the technology might deceive users and violate the FTC Act. To learn how to incorporate checks and balances and lower the risk associated with the technology, contact us and speak with your lawyer.
© 2023