As we move into an era of access to greater amounts of users’ personal information, it is time for a debate about the ethical use of personal data to ensure that financial innovation doesn’t create financial exclusion or marginalisation. We should also recognise that whilst technology-based financial innovation has given the historically underbanked access to financial services, it has also had the effect of reducing access to traditional financial services for other sectors of society.
Ethical Use of Data
The explosion of new technologies has captured a vast cloud of personal data, which has significant digital and real-world consequences. Whilst this information is being harnessed across industries to create more dynamic, data-driven processes, there are questions concerning susceptibility to exploitation and the risk that the unethical use of personal information will lead to financial exclusion.
The exploitation of personal behavioural data is not a new phenomenon. Supermarket chains have used the data gathered from loyalty card schemes to better target consumers with personalised advertising and retail offers.
What is new is the convergence of big data, the abundance of freely available personal information, continual advances in smartphone technology and the emergence of devices such as wearables. Taken together, these developments have created vast quantities of bespoke, personalised data reflecting user behaviour across all parts of technology users’ lives – from their health, social and personal lives and hobbies, to their shopping habits, medical history and geographic location.
There is now a real risk that the increasing availability and sophisticated use of personal data could result in certain sectors of society being disadvantaged, despite the benefits that technological advances are perceived to bring. For example, wearable technologies that monitor health and physical activity are starting to be utilised to determine insurance premiums: the fitness-tracking wristband Fitbit has deals in place with a number of prominent U.K. and U.S. health insurers.
Data obtained from wearable health devices may drive down health insurance premiums for those who can afford to “opt in” and have regular access to gym facilities. But there is a real risk that those who cannot afford (or who are not offered) the technology will find themselves excluded from potential benefits, even though they may need the savings the most.
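The dynamic can be illustrated with a toy sketch. The figures, the step target and the discount scheme below are entirely hypothetical, not any insurer’s actual pricing model; the point is simply that a user with no device data never qualifies for the discount:

```python
from typing import Optional

BASE_PREMIUM = 1200.0  # hypothetical annual premium in GBP


def adjusted_premium(avg_daily_steps: Optional[float]) -> float:
    """Toy sketch of a wearable-linked discount scheme (illustrative only).

    Users with no device data receive no discount -- the structural
    disadvantage for those who cannot, or do not, opt in.
    """
    if avg_daily_steps is None:  # no wearable, or the user did not opt in
        return BASE_PREMIUM
    # Up to 20% off, scaled linearly against a 10,000-step daily target.
    discount = min(avg_daily_steps / 10_000, 1.0) * 0.20
    return BASE_PREMIUM * (1 - discount)


print(adjusted_premium(10_000))  # active, opted-in user
print(adjusted_premium(None))    # user without the device pays full price
```

Even in this crude model, the full saving flows only to those who both hold the device and hit the activity target; everyone else subsidises the discount through the unchanged base premium.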
Using Personal Data to Secretly Determine Risk
“Opting in” might not even be an option. What if the user is not even aware that their personal information is being used to determine their level of risk?
The Financial Conduct Authority (FCA) is looking into how personal lines insurers are using data, after it emerged that some insurers were harvesting data from social media and other sources to “risk assess” applicants.
According to recent press reports, personal data has also been gathered and used by banks, mortgage lenders and some government departments. The rules around when data is “personal,” and how it can be used, are growing increasingly murky.
Positive Use of Personal Data
The use of personal data is not entirely negative. Peer-to-peer (P2P) lending has grown rapidly in recent years, and new digital-based firms have created innovative techniques to determine applicants’ creditworthiness. These new lenders use personalised data acquired from a range of data points, including:
- social media profiles
- internet history
- text mining
- the number and nature of Facebook connections
Their algorithms have enabled those previously rejected by traditional financial institutions to obtain credit.
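A toy sketch shows how such alternative data points might feed a score. The feature names, weights and logistic form below are purely hypothetical, not any lender’s real model; the sketch also shows how an applicant with a thin digital footprint fares:

```python
import math

# Hypothetical feature weights -- illustrative only, not any lender's model.
WEIGHTS = {
    "years_of_internet_history": 0.4,
    "social_connections": 0.002,     # e.g. number of Facebook connections
    "positive_text_sentiment": 1.5,  # 0..1 score from text mining
    "profile_completeness": 0.8,     # 0..1
}
BIAS = -2.0


def credit_score(applicant: dict) -> float:
    """Combine alternative data points into a 0..1 'creditworthiness'
    score via a simple logistic model (a toy stand-in for a real one)."""
    z = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


applicant = {
    "years_of_internet_history": 6,
    "social_connections": 350,
    "positive_text_sentiment": 0.7,
    "profile_completeness": 0.9,
}
print(round(credit_score(applicant), 3))  # active online user scores highly
print(round(credit_score({}), 3))         # thin digital footprint scores low
```

Note that an applicant with no online presence defaults to the bias term alone and lands near the bottom of the scale, whatever their actual creditworthiness: the same aggregation that widens access for some can quietly penalise others.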
However, as P2P lenders expand their remit from unsecured consumer credit to mortgages and student loans, the aggregation of personalised data points could have a detrimental impact on individuals who are less conscious of their social media presence.
We have all seen anecdotal evidence of the effect that social media accounts, an inappropriate tweet or “social media shaming” can have on career or employment prospects. Personal consumption habits are also likely to factor into an applicant’s risk profile. We are not far from firms having the ability to combine an individual’s personal buying activity, from alcohol, cigarettes and pharmacy purchases, with their social media profile to assess and price risk, or to determine premium levels and creditworthiness.
However, a user may not appreciate that releasing information on social media about their social activities, sexual orientation or the genetic conditions of their family or friends could help build a firm’s overall picture of an individual. The issue is already sufficiently significant that enterprising companies now offer to cleanse social media profiles in order to improve a user’s online reputation.
It is conceivable that consumers will not truly understand how their personalised data is being collected, shared or used and may not fully appreciate the impact it is having on their lives or financial status.
There is significant potential for personal behavioural information to have a positive impact upon our understanding of risk, but this will only occur if firms directly tackle these ethical issues and frame rules, standards and guidelines.
It is time for firms and consumers to engage in an open debate about the ethical use of data, in order to set boundaries and expectations. Consumers are more likely to give informed consent if firms make clear to them the risks and benefits of personalised data. Firms need to demonstrate to consumers that their personal information is being handled responsibly and ethically.
This post was co-authored by Jagdev Kenth.