150 million users exposed to risk

The explosion in popularity of AI applications for romantic companionship and intimacy has created a serious new problem: user privacy is under threat. According to a recent analysis by the security firm Oversecured, more than half of the most popular applications in this category contain critical security flaws.

With over 150 million installs, apps like Replika, Chai and Romantic AI have become digital “partners” to their users, but also a potential entry point for attackers.

The illusion of intimacy becomes a security risk

These apps use advanced natural language processing to simulate empathy and emotional connection. This is exactly why users entrust them with extremely sensitive information, from personal traumas to intimate fantasies.

Unlike classic chatbots, here the line between software and a “relationship” is blurred, which lulls users into letting their guard down.

This matters because such data is enormously valuable to attackers: a single compromised conversation can be enough for blackmail, identity theft or manipulation.


Technical flaws that open the door to attacks

The research found 14 critical vulnerabilities in 17 applications. The most dangerous problems include:

  • hardcoded API keys and cloud credentials embedded in the application itself
  • XSS (cross-site scripting) flaws that allow conversations to be intercepted in real time
  • unprotected access to files such as photos and voice messages
  • misconfigured databases that enable mass data theft
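The first item on the list, hardcoded credentials, can be illustrated with a minimal Python sketch. The key string and the `CHAT_API_KEY` variable name below are invented for illustration; the point is only the contrast between a secret baked into the client build and one kept server-side:

```python
import os

# Anti-pattern: a secret embedded in the client build ships to every user
# and can be recovered by decompiling the app. (This key is made up.)
HARDCODED_KEY = "sk-example-not-a-real-key"

def get_api_key_insecure() -> str:
    """Returns the key baked into the app package -- anyone can extract it."""
    return HARDCODED_KEY

def get_api_key_safer() -> str:
    """Reads the key from the server-side environment instead, so it
    never ships inside the client application at all."""
    key = os.environ.get("CHAT_API_KEY")  # hypothetical variable name
    if key is None:
        raise RuntimeError("CHAT_API_KEY is not set")
    return key
```

In practice this means the client talks to the developer's own backend, and only that backend holds the credential for the model provider.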

In some cases, attackers were able to access entire databases of user conversations and financial data.
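The XSS flaw mentioned above typically arises when a chat view renders user-supplied text as markup. A minimal sketch of the problem and the standard fix, using Python's standard-library `html.escape` (the function names are illustrative):

```python
import html

def render_message_unsafe(text: str) -> str:
    # Anti-pattern: user input interpolated straight into markup, so a
    # message containing "<script>...</script>" executes in the chat view.
    return f"<div class='msg'>{text}</div>"

def render_message_escaped(text: str) -> str:
    # Escaping neutralizes embedded markup before it reaches the page.
    return f"<div class='msg'>{html.escape(text)}</div>"
```

With escaping in place, a malicious message is displayed as inert text instead of running as a script that could forward the conversation to an attacker.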

Most of these applications function as a thin “wrapper” around large AI models. While companies like OpenAI or Google provide the intelligence of the model itself, the security of the application depends solely on the developer.

This is exactly where the biggest problem lies: the weaknesses are not in the AI models themselves, but in the applications that use them.
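The wrapper pattern described above can be sketched in a few lines. Everything here is illustrative, with no real provider SDK: the model call is a stand-in, while the conversation storage, which is where the reported breaches actually happened, is entirely the app developer's code:

```python
from dataclasses import dataclass, field

@dataclass
class CompanionApp:
    """Minimal sketch of the 'wrapper' pattern: persona and storage
    built around a third-party model API. All names are illustrative."""
    history: list = field(default_factory=list)

    def chat(self, user_message: str) -> str:
        # The model provider supplies the "intelligence" of the reply...
        reply = self._call_model(user_message)
        # ...but persisting the conversation is the developer's job; an
        # unencrypted or misconfigured store is where leaks occur.
        self.history.append((user_message, reply))
        return reply

    def _call_model(self, message: str) -> str:
        # Stand-in for a request to a hosted model (e.g. OpenAI or Google).
        return f"echo: {message}"
```

The sketch makes the division of responsibility concrete: whatever the model vendor secures, the `history` store and everything around it remain the app's own attack surface.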

Real incidents already confirm the danger

These are not theoretical risks. In previous cases:

  • Over 43 million intimate messages and hundreds of thousands of photos were downloaded without authorization
  • 300 million messages were exposed due to misconfigured servers

Such incidents clearly show how vulnerable this sector is and how quickly massive data compromises can occur.

One of the biggest problems is the complete lack of regulation. AI dating apps are not medical or therapeutic tools, so they are not subject to strict data protection laws.

Even when there are penalties, they are mostly related to marketing or age restrictions, not technical security.

How to protect yourself

Experts recommend a “zero trust” approach. In practice this means:

  • treat every conversation as if it could become public
  • avoid connecting accounts with Google or Facebook services
  • do not share sensitive information
  • use only applications with a clearly documented approach to security

AI partners offer a sense of familiarity and understanding, but behind this is software that often does not meet basic security standards.

Technology is advancing faster than user protection, and precisely because of this the segment has become one of the riskiest zones in the modern digital environment.


In short, the problem is not that the AI “understands” the user, but that the applications fail to protect what the user entrusts to them, writes Android Headlines.
