Since Facebook's founding in 2004, its popularity has soared as internet users across the world have welcomed it into their homes and daily lives. But with recent privacy scandals and growing public concern over our right to online privacy, can we continue to trust Facebook with our sensitive personal data?
The privacy scandal involving Facebook and Cambridge Analytica
The conversation around online privacy has intensified in recent years, driven by revelations involving global companies such as Facebook. In Facebook's case, details emerged in 2018 that a researcher, Aleksandr Kogan, had developed a personality quiz app on the platform. He paid around 270,000 Facebook users to take the quiz.1 The app recorded their answers, which Kogan then sold to the data firm Cambridge Analytica, in breach of Facebook's policy.
A second issue, which sparked a global privacy debate, is that the app was also reportedly able to pull the profile details of those users' contacts, leading to up to 87 million users' details being recorded.2 Although the users who took the quiz consented to Kogan using their data, their contacts did not. This was possible because of a flaw in Facebook's original API (Application Programming Interface, the interface developers use to create third-party apps), which allowed third parties access to large amounts of user data until 2014, when Facebook switched to a new API.1
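The scale of the leak follows directly from this design: one user's consent exposed their friends' data too. As a rough illustration only (this is a hypothetical toy model, not Facebook's actual API; all names and data are invented), a short Python sketch shows how the reachable data multiplies when friend data is included:

```python
# Toy model of a pre-2014, v1.0-style permission scheme in which
# an app installed by one consenting user could also read that
# user's friends' profiles. Hypothetical data, not a real API.

PROFILES = {
    "alice": {"likes": ["hiking"], "friends": ["bob", "carol"]},
    "bob":   {"likes": ["chess"],  "friends": ["alice"]},
    "carol": {"likes": ["jazz"],   "friends": ["alice"]},
}

def fetch_reachable_profiles(consenting_user, friend_data_allowed):
    """Return every profile an app can read when one user consents."""
    reachable = {consenting_user: PROFILES[consenting_user]}
    if friend_data_allowed:  # the pre-2014 behaviour in this model
        for friend in PROFILES[consenting_user]["friends"]:
            # Friends are exposed even though they never consented.
            reachable[friend] = PROFILES[friend]
    return reachable

# Old model: one consenting user exposes three profiles.
old = fetch_reachable_profiles("alice", friend_data_allowed=True)
# New (post-2014) model: only the consenting user's own profile.
new = fetch_reachable_profiles("alice", friend_data_allowed=False)
print(len(old), len(new))  # 3 1
```

With each user having hundreds of friends rather than two, this multiplier effect is how roughly 270,000 quiz-takers could plausibly expose tens of millions of profiles.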
It's not clear exactly how much of this data Cambridge Analytica used, but in March 2018 a number of reputable newspapers released a story alleging that the data was used to influence the 2016 American presidential election.2 While this has since been disputed, other articles suggest the data was used to influence the UK's 2016 Brexit referendum, in which a public majority voted for the UK to leave the European Union.3
With the sheer volume of data involved and the potential to influence politics, the way companies use our data has become a real talking point.
What was Facebook's response?
In April 2018, Facebook CEO Mark Zuckerberg took responsibility for Facebook's failure to control third-party access to user data without consent, admitting, "I'm responsible for what happened here".4 He went on to announce a three-year effort to make improvements and prevent a repeat of this kind of abuse. He also stressed that Facebook does not sell user data, though it does use it to help advertisers target their adverts at the right audiences.
So, there are really two issues here with regard to personal online privacy:
1. The use of personal data to influence public opinion
The term 'fake news' has become something of a buzzword ever since US President Donald Trump began using it in a series of attacks on left-wing media reporting.
But the term has grown to take on a wider meaning, covering not only fake news stories but also the use of social media to promote them in order to influence public opinion, as in the Facebook/Cambridge Analytica example. News reports also suggest that foreign governments are using such methods to influence elections abroad and destabilise foreign powers.5 Such attacks represent a serious threat to the integrity of democratic systems across the world.
Again, Facebook and other social media platforms are at the heart of such debates, and are facing increasing pressure to root out the fake news before it's seen by the masses.
2. The use of personal data for personalised advertising
We've been putting data about ourselves on the web for years: birthdays, relationship statuses, hobbies, even our latest restaurant experiences. If you've ever 'liked' a Facebook comment, 'hearted' an image of a cute puppy on Instagram, or commented on a WeChat moment, you've put information into the public domain that tells advertisers a little about what you like and how you behave. Collected at scale, this data lets advertisers personalise the adverts you see across social media platforms, leading to higher sales.
Why should we care? Many people don't, at least when the data is only used to personalise adverts; after all, we get a more relevant advert experience. However, some people feel uneasy about their information being shared in this way.
It's a different matter when our data is shared without our consent, as in the Facebook/Cambridge Analytica scandal: that breaches our rights over how companies use our personal data. This is exactly where Facebook and other social media platforms must build robust systems that let users easily give their consent and effectively manage third-party use of their data.
So, should we change the way we use social media?
While Facebook and other social media platforms give you the ability to enter details about yourself (e.g. your birthday, your address, where you've worked), you can often choose whether to enter the information or not. And if you do, Facebook gives you the choice of displaying this information to the public, to just your friends, or only to yourself.
So, if you put a load of information about yourself on social media years ago, you may want to revisit those platforms and review your privacy settings. The general rule, however, still applies to everything you write or publish on the internet, whether it's about yourself or something else: assume everything you publish online goes straight into the public domain.
1 Business Insider - http://www.businessinsider.com/what-data-did-cambridge-analytica-have-access-to-from-facebook-2018-3
2 Wikipedia - https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal
4 CNBC - https://www.cnbc.com/2018/04/04/mark-zuckerberg-facebook-user-privacy-issues-my-mistake.html