A new report reveals that Apple and Google are actively promoting “nudify” apps through search and ads

The latest report from the Tech Transparency Project sheds new light on the role of the largest digital platforms in the spread of controversial AI tools. According to their findings, Apple and Google not only host so-called “nudify” apps in their stores, but their algorithms actively promote them to users through search and advertising.

These apps use artificial intelligence to generate fake, often explicit images of real people without their consent, putting them at the center of the growing global problem of deepfake abuse.

Research shows that simple searches for terms like “nudify”, “undress” or “deepnude” in the App Store and Google Play lead to dozens of apps offering exactly this functionality.

More problematically, the platforms do not remain neutral. Their systems suggest related terms through autocomplete, display ads for such apps, and rank them highly in search results.

In other words, the algorithms not only fail to prevent access to these apps but may actively facilitate it.


According to the report, such applications have been downloaded hundreds of millions of times and have generated more than $120 million in revenue, which clearly indicates that this is not a marginal phenomenon.

A gap between theory and practice

The irony of the situation is that both companies formally prohibit such applications through their guidelines. Both Apple and Google have policies that prohibit content that includes pornography, harassment, and the unauthorized use of other people’s images.

However, the report suggests a serious gap between policy and implementation. Algorithmic search and monetization systems operate independently of content moderation, leading to a paradoxical situation where platforms simultaneously ban and promote the same type of apps.

An explosion of AI abuse

All this is happening at a time when AI tools for image and video generation are becoming increasingly accessible and powerful.

“Nudify” applications represent only one segment of a wider problem – the growth of unauthorized deepfake content, which is increasingly used for harassment, blackmail and digital violence, especially against women and minors.


Even more concerning, some of these apps are labeled as suitable for younger users, which calls into question the effectiveness of existing content rating systems.

The news has already caused reactions from regulators and lawmakers, especially in the US and Europe, where there is increasing talk of the need for stricter control of AI tools and platform accountability.

In some cases, following media pressure, the companies have removed some of the apps in question, but critics argue this is a reactive rather than a systemic approach.

Earnings vs. accountability

The key question remains: can platforms really control the content they monetize, or are their business models in direct conflict with user security?

The case of “nudify” apps exposes a deeper problem in the digital economy – the conflict between growth, engagement and accountability.

If algorithms reward attention-grabbing content, regardless of its consequences, then the problem is not only in individual applications, but in the structure of the platforms themselves.


In this context, Apple and Google can no longer be seen as neutral software distributors, but as active actors in shaping the digital ecosystem – with all the responsibilities that arise from that, reports Digital Trends.
