Many companies harvest and monetise our personal details. What can the FaceApp case teach us about data protection?
Entertainment in return for information
FaceApp is a seemingly simple app that shows you how you will look when you are old, or with a different hair or eye colour. It first became popular in 2017 but has been hyped again in recent weeks. All it took was for some famous people – actors, journalists, and celebrities – to upload photos processed with the app’s filters. The fun of ageing yourself in photographs spread around the world, and the number of downloads surpassed 150 million. At the same time, doubts have emerged about the app’s privacy policy and what happens to the photos afterwards. The terms of service – accepted by every user – also contain provisions on sharing photos with third parties and on targeted advertising.
The images are uploaded to a cloud managed by Wireless Lab, a company based in Saint Petersburg. According to the terms, the pictures remain there for an unspecified time. Moreover, the app records the user’s location and browsing history. One thing is certain: the privacy policy is not GDPR compliant. For example, the procedure for requesting photo removal is unclear, and it is not known whether such requests are effective.
On the other hand, every day we use mobile apps that collect and use private data in a more or less transparent way. Companies like Facebook, Google, and Amazon gather and monetise enormous sets of consumer data. Google knows where you have been and everything you have searched for; the company has every e-mail you have sent. Facebook stores your messages and location, and tracks your likes, opinions, and the topics that interest you. Many digital services – just like FaceApp or Google Mail – are available for free, which opens the door to popularity on a massive scale. Unfortunately, in many cases users pay for them unknowingly by handing over their data, according to the rule “if you’re not paying, you’re the product.” It is also surprising that despite many doubts and warnings – issued even by the FBI – the vast majority of users do not worry about privacy, arguing on social media that companies such as Google and Facebook have been collecting data about us for a long time anyway. In other words: it is too late to protect our privacy in a digitalised world. Is it possible that data security – in the age of the internet and social media – no longer matters? If privacy is illusory anyway, does it make sense to protect data such as our photos, location, behaviour, or even medical information?
Innocent tricks
Imagine that one day a similar app is created, one which predicts what diseases you will develop in 10, 20, or 30 years. All you need to do is provide some personal data: medical history, prescribed medicines, location, and details about your lifestyle. The application, based on Big Data analyses, assesses, percentage-wise, your risk of developing cancer, depression, diabetes, heart disease, and other top-ten “killers” – objectively, without the need to see a doctor. Note that no medical professional can provide such information. The app opens the door to intriguing knowledge. In its most recent version, the application introduces a function that determines your date of death with high accuracy – or so the developer promises, because “calculations are based on medical data collected from millions of users.” The number of downloads snowballs. Before warnings reach users and the tool is banned, the provider has gained access to a wealth of information. What will be done with it, and how it will be used, nobody knows. Fortunately, so far this is only a hypothetical scenario.
The case of FaceApp shows that we have not learned the lessons of the Facebook and Cambridge Analytica scandal. People tacitly assume that since an application is used by many other people, it must be safe. Or they simply don’t care. We have also long known that hardly any of us – for obvious reasons – read privacy policies before accepting them.
Control or education?
Unfortunately, more and more subtle forms of data harvesting can be expected in the future. It can also be expected that many people will hand over their data in good faith, without a second thought, assuming that some central institution is watching over their privacy. This is not the case, though. The internet is a place where manipulation takes ever more sophisticated forms. Technology now makes it possible to create a video of a real politician or other well-known figure and manipulate their speech so that even experts can hardly distinguish truth from falsehood. The question is how to fight these threats: by focusing on education, or by introducing control tools and stricter data protection laws?
A few lessons that health care should draw from the FaceApp case:
- Many data sets, including medical data, can now be lawfully collected from users through non-transparent privacy policies, by tempting them with benefits or simple entertainment.
- Just like fake news, new apps with hidden data-harvesting goals can be created under the guise of benefits and functionality.
- The public is not able to assess the security of the applications it uses.
- A lack of regulation of the health applications market can lead to abuses in the capture and processing of medical data, potentially on a large scale.
- While isolated cases of abuse should not call the benefits of health applications into question, measures must be taken to make privacy policies transparent.
- Adequate procedures should be in place for detecting and handling abuses in which medical data is obtained for purposes other than those declared by the application’s developer.
- Education on data privacy and data protection must be prioritised to prepare people for new threats.
- Because society cannot be expected to know all the potential risks of private data leakage, sensitive information must be protected “by design and by default.”
