FaceApp, an app that allows you to see what your future self may look like, has become incredibly popular in the past week, not just because of some of the hilarious photos it has produced, but also because of the privacy concerns raised by its terms and conditions.
But are these concerns, which led the US Senate Minority Leader to call for an FBI investigation, valid? In this post we’ll go through the main issues and assess how problematic they are.
They can access my entire camera roll from within the app to select an image even if I deny access
False. This is a feature of iOS permissions: even when you deny access to your camera roll, iOS lets you hand individual images to the app via the system photo picker. As this is done with your explicit consent, it is not a security concern. On Android, if you deny access to your camera roll, you can only apply filters to photos taken within the app itself, which requires granting it permission to access your camera.
They can do what they want with the images you upload
True. We don’t like it, but this is standard practice. This is not a FaceApp issue, this is a way-companies-cover-themselves issue. Companies will typically demand access to as many rights as possible regarding user-uploaded content to future-proof their business plan. They may not have anything in mind with what they will do with your data on day one, but they want to retain the right to do what they want once the right idea comes to them without having to ask your permission again. In reality, FaceApp’s terms aren’t that different from that of Instagram or Google Photos, two of the most popular photo apps available.
As pointed out by James Whatley in a summary by Wired, the only real difference here is that apps like Instagram and Google Photos have an option to opt out, which is notably absent from FaceApp. That being said, you can request that your data be deleted by emailing them with the word “privacy” in the subject line. You’ll need to do this via the “Report a bug” feature in the app’s settings so the developers know which data is yours. The developers acknowledge this is far from ideal; we agree.
It’s made in Russia
True, and Russia does have some very aggressive surveillance laws that, according to Privacy International, require companies to “store their data centres in the territory and make it accessible to security agencies”. There are also more draconian requirements for apps on their Register of Information Dissemination Organizations. However, according to the developers, your data never enters Russia. The whole system is hosted on Amazon’s AWS servers (US and Australia data centres) and Google Cloud.
They’re uploading loads of photos to their server without permission

False. Independent researchers who monitored the app’s network traffic found that only the photo you select is uploaded, not your whole camera roll.
They upload images to the cloud when they could process the images on the phone
Very true. It would be much more privacy-conscious of them to do the processing on the phone itself. However, on-device processing has a couple of drawbacks for them:
- The processing would likely be slower. That may not seem like a big deal, but in a world where waiting a couple of seconds for an ad to finish before a YouTube video starts feels like forever, every millisecond counts when it comes to user experience
- Shipping the model on the device itself would probably make it easier for competitors to reverse engineer, though we suspect speed is the bigger reason why they do it.
They don’t immediately delete images
True. However, the developers have said that they delete most images after 48 hours. This seems reasonable: they probably don’t want someone uploading the same image over and over again, which would hurt performance. By temporarily storing the image, a user can apply many filters to it while only uploading it once. That being said, we’ll dig into this further in the next question.
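To make the retention policy above concrete, a 48-hour expiry is usually implemented as a scheduled cleanup job that removes any cached upload older than the window. Here is a minimal sketch of that idea; the directory layout and the `prune_old_uploads` helper are hypothetical illustrations, not FaceApp’s actual implementation:

```python
import os
import time

MAX_AGE_SECONDS = 48 * 60 * 60  # the 48-hour retention window


def prune_old_uploads(upload_dir, now=None):
    """Delete cached uploads older than the retention window.

    Returns the list of paths that were removed.
    """
    now = time.time() if now is None else now
    removed = []
    for name in sorted(os.listdir(upload_dir)):
        path = os.path.join(upload_dir, name)
        # Compare the file's last-modified time against the window
        if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)
            removed.append(path)
    return removed
```

A job like this would typically run on a schedule (for example, hourly), so repeat filter applications can reuse the cached upload while the image still expires within roughly two days.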
They could be building a facial recognition dataset
Hmmm. While the developers have said they do delete the data, it wouldn’t be the first time a company has misled its users. History tells us that there are plenty of companies that lie, and even more that are disingenuous. The only thing we really know is that once data hits the cloud, it is incredibly difficult to know what is happening to it. FaceApp’s terms and conditions do say that you transfer essentially all your rights to the selected image over to FaceApp, so theoretically, they could well be building a facial recognition database.
However, FaceApp say they’re not doing anything untoward, and there’s no evidence to suggest they are, but an absence of evidence isn’t proof of innocence. It ultimately comes down to a single question: do you trust FaceApp with your data and what they might do with it? This is a question we need to start asking ourselves much more often.
Fair Custodian is building a platform to create a new type of relationship between consumers and businesses. A relationship based on trust, transparency, and personal empowerment. To find out more about us, check us out at www.faircustodian.com.