From the moment we unlock our phones, we are inundated with artificial intelligence (AI) advertisements—many making grandiose promises that seem too good to be true. Businesses have claimed their AI can help customers build an “AI-powered Ecommerce Empire” or “generate perfectly valid legal documents in no time.” But at what point do these lofty, AI-infused promises cross the line into deception—and more interestingly, is this even a new phenomenon?
The Federal Trade Commission (FTC) has attempted to delineate the boundary between unrealistic over-promises and realistic expectations through Operation AI Comply. The operation launched on September 25, 2024, with five enforcement actions against companies that misused AI. It aims to protect consumers by deterring corporate misuse of AI, a goal FTC Chair Lina M. Khan underscored in stating that “[u]sing AI tools to trick, mislead, or defraud people is illegal.”
Four of the targeted companies, DoNotPay, Ascend Ecom, Ecommerce Empire Builders, and FBA Machine, made false or exaggerated claims about what their services could accomplish, hiding behind the premise of revolutionary AI advancements. The fifth, Rytr, offered a functional AI service that could produce thousands of product testimonials, but the generated reviews had “[no] reasonable, legitimate use” and led to consumer harm.
DoNotPay claimed its AI chatbot could allow users to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time.” This was not the case. The FTC found that “employees had not even tested the quality and accuracy of the legal documents” and that some “advertised features… [DoNotPay] simply did not provide.”
Ascend Ecom’s exaggerated promises make DoNotPay’s seem tame by comparison. Ascend Ecom billed itself as a “surefire business opportunity in e-commerce,” claiming its AI-powered business model would let customers “quickly earn thousands of dollars in passive income.” Instead of producing vast incomes, however, the scheme left most hopeful customers with empty bank accounts and hefty bills. The FTC determined that the “[d]efendants’ scheme has defrauded consumers of at least $25 million.”
Ecommerce Empire Builders (EEB) similarly preyed on consumers seeking passive income, promising to help them build an “AI-powered Ecommerce Empire.” EEB sold courses, priced between “$10,000 and $35,000,” that promised to teach customers how to “start a million-dollar business today” through “online stores powered by artificial intelligence.” Instead of creating thriving million-dollar businesses for its clients, EEB enriched itself and left them with failing ventures.
FBA Machine was the third “AI-powered” ploy, promising unsuspecting customers a path to a “7-figure business” backed by “risk-free” guarantees and promises of “$20,000 in revenue in 90 days… or the company will work for free.” In reality, “of the 42 known clients, approximately 86% had gross aggregated sales of $15,000 or less… 12% had no sales at all.” Customers who sought their money back had refunds conditioned on removing their negative online reviews, and even then the business “failed to pay the promised refunds.”
Finally, Rytr, unlike the others, offered a seemingly legitimate service as an “AI assistant.” One of its tools, however, a “Testimonial & Review” feature, led to harmful consumer outcomes. The feature let users generate “detailed reviews that contain specific… details [about consumer products] that have no relation to the user’s input [and] that would almost certainly be false.” Furthermore, “respondent [Rytr] set[] no limit on the number of reviews a user… c[ould] generate and copy.” Predictably, subscribers abused the feature: “Subscribers generated tens of thousands of reviews in a short time…[which] pollute[d] the marketplace with a glut of fake reviews.” The FTC concluded that the “Testimonial & Review” feature had no “legitimate use,” with “its likely only use [being] to facilitate subscribers posting fake reviews with which to deceive consumers.”
At first glance, these five cases seem novel. AI has only recently taken the spotlight, and the FTC has not previously brought “deceptive marketing” cases over AI. Peel back the shiny AI-gilded cover, however, and none of these five businesses’ tactics is new or unique to AI. Hype around emerging technologies, from the dot-com boom to early blockchain ventures, has repeatedly been leveraged to make exaggerated promises that fall short and deceive consumers. Just last year, the FTC reached a $4.7 billion settlement with the cryptocurrency company Celsius because it “promised consumers that Celsius was ‘safer’ than a bank… because Celsius earned profits at ‘no risk’ to consumers.” Celsius went bankrupt after it “engag[ed] in uncollateralized and undercollateralized lending despite their promises to the contrary.” Consumer deception is not new; AI simply gives ill-intentioned businesses and opportunistic hustlers another way to obfuscate false claims.
Since 1914, the FTC has defended consumers against deceptive businesses and their unfair conduct. The FTC treats these AI cases no differently: “There is no AI exemption from the laws on the books.” While the medium has evolved, the fundamental issue of deceptive marketing remains the same, and companies that fail to exercise caution and integrity in marketing their products face consequences.
While these five businesses have faced repercussions from FTC enforcement, the crackdown is not meant to be purely punitive. By removing misleading advertisements and opportunities for product abuse, the “FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected.” Bad actors will continue to misappropriate AI in their advertisements and promises, harming honest businesses across the board. But with these injunctions in place and a clear deterrent signal sent to would-be wrongdoers, companies with legitimate AI features and services can prevail while consumers remain protected.