Tech Giants Under Fire for Promoting AI-Powered “Nudify” Apps


A new report by the Tech Transparency Project (TTP) alleges that Apple and Google have failed to enforce their own safety policies, effectively promoting “nudify” apps that violate their terms of service. These applications use generative AI to create nonconsensual intimate imagery, stripping clothing from photos of individuals—predominantly women—to create deepfake pornography.

The Loophole in App Store Governance

While both Apple and Google maintain strict policies against “overtly sexual or pornographic material,” investigations suggest a significant gap between written rules and actual enforcement.

Key findings from the TTP investigation include:
  • Persistent Availability: Despite previous crackdowns, hundreds of these apps remain accessible on both platforms.
  • Search Vulnerabilities: Users can still find these apps by searching for terms such as “nudify,” “undress,” and “deepnude.”
  • Direct Promotion: Most concerningly, the report claims the platforms have actively increased the visibility of these apps. Google was specifically noted for featuring a “carousel of ads” for some of the most sexually explicit applications discovered.
  • Explicit Marketing: An analysis of the top 10 apps in this category revealed that 40% explicitly advertised their ability to render women nude or scantily clad.

The Profit Motive vs. User Safety

The proliferation of these apps raises a critical question: Why are the world’s largest tech gatekeepers allowing this to continue?

The answer may lie in the economics of the app ecosystem. According to data from analytics firm AppMagic, these “nudify” apps have generated over $122 million in lifetime revenue and have been downloaded approximately 483 million times.

Because Apple and Google earn significant revenue through advertising and commissions on paid subscriptions, there is a built-in financial disincentive to aggressively remove high-performing, albeit policy-violating, software. This creates a tension between the platforms’ roles as “moral gatekeepers” and their roles as profit-driven corporations.

The Growing Threat of AI Deepfakes

This issue is part of a broader, more dangerous trend involving generative AI. The technology has made the creation of nonconsensual sexual content faster, easier, and more convincing than ever before.

The scale of the problem is immense. For context, earlier this year, users of the AI platform Grok reportedly generated 1.4 million sexualized deepfakes in just a nine-day period. Despite calls from U.S. senators to remove such tools from app stores, major platforms have been slow to act.

Responses from Apple and Google

Both companies have issued statements defending their oversight:

  • Google maintains that Google Play does not allow sexual content and claims many of the flagged apps have already been suspended.
  • Apple reported that it has removed 15 flagged apps and issued warnings to six other developers, while also blocking several search terms identified by the TTP.

“This revenue stream may be why the two companies have been less than vigilant when it comes to nudify apps that violate their policies,” the TTP report concludes.


Conclusion

The presence of “nudify” apps on major platforms highlights a systemic failure to regulate AI-driven harm, suggesting that the massive revenue generated by these tools may be undermining the safety policies intended to protect users.
