Four ways your Google searches and social media affect your life opportunities

The data you create when using the internet can actually be used to discriminate against you

Lorna McGregor, Daragh Murray, Vivian Ng
Wednesday 23 May 2018 09:30 EDT
Just because big data analytics are based on algorithms and statistics does not mean that they are accurate, neutral or inherently objective (Alamy)


Whether or not you realise or consent to it, big data can affect you and how you live your life. The data we create when using social media, browsing the internet and wearing fitness trackers are all collected, categorised and used by businesses and the state to create profiles of us. These profiles are then used to target advertisements for products and services to those most likely to buy them, or to inform government decisions.

Big data enable states and companies to access, combine and analyse our information and build revealing – but incomplete and potentially inaccurate – profiles of our lives. They do so by identifying correlations and patterns in data about us, and people with similar profiles to us, to make predictions about what we might do.

But just because big data analytics are based on algorithms and statistics does not mean that they are accurate, neutral or inherently objective. And while big data may provide insights about group behaviour, these are not necessarily a reliable way to determine individual behaviour. In fact, these methods can open the door to discrimination and threaten people’s human rights – they could even be working against you. Here are four examples where big data analytics can lead to injustice.

Big data can be used to make decisions about credit eligibility, affecting whether you are granted a mortgage, or how high your car insurance premiums should be. These decisions may be informed by your social media posts and data from other apps, which are taken to indicate your level of risk or reliability.

But data such as your education background or where you live may not be relevant or reliable for such assessments. This kind of data can act as a proxy for race or socioeconomic status, and using it to make decisions about credit risk could result in discrimination.

Big data can be used to determine who sees a job advertisement or gets shortlisted for an interview. Job advertisements can be targeted at particular age groups, such as 25- to 36-year-olds, which excludes younger and older workers from even seeing certain job postings and presents a risk of age discrimination.

Automation is also used to make filtering, sorting and ranking candidates more efficient. But this screening process may exclude people on the basis of indicators such as the distance of their commute. Employers might suppose that those with a longer commute are less likely to remain in a job long-term, but this can actually discriminate against people living further from the city centre due to the location of affordable housing.

In the US and the UK, big data risk assessment models are used to help officials decide whether people are granted parole or bail, or referred to rehabilitation programmes. They can also be used to assess how much of a risk an offender presents to society, which is one factor a judge might consider when deciding the length of a sentence.

It’s not clear exactly what data is used to help make these assessments, but as the move towards digital policing gathers pace, it’s increasingly likely that these programmes will incorporate open source information such as social media activity – if they don’t already.

These assessments may not just look at a person’s profile, but also at how it compares with those of others. Some police forces have historically over-policed certain minority communities, leading to a disproportionate number of reported criminal incidents. If this data is fed into an algorithm, it will distort the risk assessment models and result in discrimination that directly affects a person’s right to liberty.

Last year, US Immigration and Customs Enforcement (ICE) announced that it wanted to introduce an automated “extreme visa vetting” programme. It would automatically and continuously scan social media accounts to assess whether applicants will make a “positive contribution” to the United States, and whether any national security issues may arise.

As well as presenting risks to freedom of thought, opinion, expression and association, there were significant risks that this programme would discriminate against people of certain nationalities or religions. Commentators characterised it as a “Muslim ban by algorithm”.

The programme was recently withdrawn, reportedly on the basis that “there was no ‘out of the box’ software that could deliver the quality of monitoring the agency wanted”. But including such goals in procurement documents can create bad incentives for the tech industry to develop programmes that are discriminatory-by-design.

Lorna McGregor is director of the Human Rights Centre, Daragh Murray is a lecturer in international human rights law at the School of Law, and Vivian Ng is a senior researcher in human rights, all at the University of Essex. This article was originally published on TheConversation.com
