Risk-based approaches to AI - Assertion 3
Companies’ excitement about, and appetite for, the opportunities presented by AI and machine learning may override considerations of what it is appropriate for them to do with the customer data they hold.
Artificial intelligence technologies have already begun to transform financial services. At the end of 2017, 52% of banks reported making substantial investments in AI, and 66% said they planned to do so by the end of 2020. The stakes are enormous: one study found that banks that invest in AI could see their revenues increase by 34% by 2022. As AI becomes more embedded in banks’ most critical operations, particularly in ways that affect the financial stability of both institutions and their customers, it could expose new hazards.
Two of the most dangerous and far-reaching areas of risk for AI in banking are the opacity of some of these technologies and the sweeping changes AI will bring to banks’ workforces. (8)
Machine learning is beloved by ecommerce and marketing: Amazon, Netflix and hundreds of online shops built their recommendation engines on it. Hedge funds such as Two Sigma and Binatix have deployed machine learning algorithms that forecast stock prices. The medical company Medecision uses machine learning to predict avoidable hospitalisations in diabetes patients, Schneider Electric to keep oil and gas pumps from failing, and the Zoological Society of London to track endangered animals in photos taken in Africa. Have you ever seen a Facebook application posing the question “which celebrity do you look like”? This, too, uses machine learning to deliver its result. (9)
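To make the mechanics behind such recommendation engines concrete, the sketch below implements item-based collaborative filtering in miniature: unrated items are scored for a user by their similarity to items that user has already rated. The ratings matrix and output are invented purely for illustration; production systems at the scale of Amazon or Netflix are, of course, vastly more elaborate.

```python
# Minimal sketch of item-based collaborative filtering, the idea behind
# many recommendation engines. The ratings matrix is invented for
# illustration only.
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two item-rating column vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def recommend(user_idx, ratings):
    """Score each unrated item by its similarity to items the user rated."""
    n_items = ratings.shape[1]
    scores = {}
    for candidate in range(n_items):
        if ratings[user_idx, candidate] > 0:
            continue  # skip items the user has already rated
        scores[candidate] = sum(
            cosine_similarity(ratings[:, candidate], ratings[:, rated])
            * ratings[user_idx, rated]
            for rated in range(n_items)
            if ratings[user_idx, rated] > 0
        )
    return max(scores, key=scores.get) if scores else None

print("Recommend item", recommend(0, ratings), "to user 0")
```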
In December 2017, several adult videos appeared on Reddit “featuring” top international female celebrities. User “DeepFakes” employed generative adversarial networks (GANs) to swap the celebrities’ faces with those of the original adult video stars. While face-swapping technology had been under development for years, DeepFakes’ method showed that anyone with enough facial images could now produce their own highly convincing fake videos, and realistic-looking fake videos of well-known people flooded the Internet through 2018. While such use cases are technically not a “failure”, their potential dangers are serious and far-reaching. If video evidence is no longer credible, this could further encourage the circulation of fake news. (10)
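A GAN pits two networks against each other: a generator produces forgeries and a discriminator tries to tell them apart from real data, each improving by trying to defeat the other. The sketch below shows the idea at its smallest, with a generator learning to mimic a simple one-dimensional distribution rather than faces; the network sizes and hyperparameters are illustrative assumptions, not those of any deepfake system.

```python
# Minimal sketch of a generative adversarial network (GAN), the technique
# behind deepfakes. Here the generator merely learns to mimic samples
# from N(3, 0.5); all sizes and learning rates are illustrative.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # generator's forgeries

    # Discriminator step: learn to tell real samples from fakes.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 3.
print("Generated sample mean:", generator(torch.randn(1000, 8)).mean().item())
```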
According to a new study from the University of California, Berkeley, advances in artificial intelligence have rendered the privacy standards set by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) obsolete. Stripping healthcare data of identifying information no longer guarantees privacy, because modern algorithms can often re-identify individuals from the supposedly anonymised records that remain. Current laws are simply insufficient to protect an individual’s health data. This is a problem in part because the same data is incredibly valuable to companies building AI systems. As AI in healthcare becomes more and more commonplace, data privacy experts are raising big red flags about the ethical implications. (11)
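The weakness is easy to demonstrate. The sketch below shows a classic linkage attack of the kind such studies describe: “de-identified” health records are joined back to named individuals using only quasi-identifiers such as postcode, birth date and sex. All of the records here are fabricated for illustration.

```python
# Minimal sketch of a linkage attack: "de-identified" health records are
# re-identified by joining quasi-identifiers against a public dataset.
# Every record below is fabricated for illustration.

deidentified_health = [
    {"zip": "94720", "birth": "1984-03-02", "sex": "F", "diagnosis": "diabetes"},
    {"zip": "10001", "birth": "1990-11-17", "sex": "M", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "94720", "birth": "1984-03-02", "sex": "F"},
    {"name": "John Roe", "zip": "10001", "birth": "1990-11-17", "sex": "M"},
]

def reidentify(health_rows, public_rows):
    """Join on quasi-identifiers to attach names back to health records."""
    keys = ("zip", "birth", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    return [
        {"name": index.get(tuple(h[k] for k in keys), "unknown"), **h}
        for h in health_rows
    ]

for row in reidentify(deidentified_health, public_voter_roll):
    print(row["name"], "->", row["diagnosis"])
```

The join uses no protected identifiers at all, which is precisely why removing names and record numbers is not, on its own, a privacy guarantee.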
Facebook, the largest social media platform, uses AI to store and act on users’ mental health data with no legal safeguards in place. HIPAA’s healthcare privacy regulations do not cover tech companies: HIPAA only protects patient health data when it comes from organisations that provide healthcare services, such as insurance companies and hospitals. In late 2017, Facebook rolled out a “suicide detection algorithm” in the US in an effort to promote suicide awareness and prevention. The system used AI to analyse individuals’ posts and to predict their mental state and risk of suicide. The Facebook suicide algorithm therefore sits outside the jurisdiction of HIPAA. It can, of course, be viewed as a positive use case for AI in healthcare. However, benevolent intent aside, the fact remains that Facebook is gathering and storing individuals’ mental health data without specific consent. EU/UK readers should note that under the GDPR and the UK Data Protection Act 2018, consent would have been required to collect such sensitive data; accordingly, Facebook has announced that it will not use this algorithm in the EU. (12)
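As an illustration of how such a screening system might work in principle, the sketch below trains a simple bag-of-words classifier to flag posts for human review. The example posts, labels and model choice are invented assumptions; Facebook has not published its implementation, and any production system would be far more sophisticated, which only sharpens the consent questions raised above.

```python
# Minimal sketch of a post-screening classifier of the kind described:
# a bag-of-words model scoring posts for follow-up by a human reviewer.
# Training snippets and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "had a great day at the beach",
    "can't wait for the weekend",
    "i feel completely hopeless",
    "no one would miss me if i were gone",
]
labels = [0, 0, 1, 1]  # 1 = flag for human review (invented labels)

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = ["everything feels pointless lately"]
print("flag probability:", model.predict_proba(new_post)[0][1])
```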
Clearly, as the above examples illustrate, the definition of what is appropriate varies greatly, and regulation has yet to catch up. As individuals become more aware of the ways their personal data is being used, they are likely to become more concerned about how that use is determined and governed. With the increased adoption of these technologies, attacks that today centre on causing a data breach and some malicious damage will likely shift to a new focus: hackers will seek to corrupt the models and algorithms hidden within a company’s processes, in an attempt to cause even greater damage.
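A minimal sketch of one such attack, training-data poisoning, appears below: an attacker who can flip a fraction of the labels in a model’s training data quietly degrades its accuracy without triggering a conventional breach alarm. The dataset, the “fraud” labelling rule and the 30% flip rate are all synthetic assumptions for illustration.

```python
# Minimal sketch of training-data poisoning, one form of the model
# corruption described above: flipping a slice of labels degrades a
# classifier. Dataset and attack parameters are synthetic illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in for "fraud" labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_train, y_train)

# Attacker silently flips 30% of training labels before the next retrain.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) * 3 // 10, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression().fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Because the damage surfaces only as quietly worse predictions rather than as stolen records, this kind of corruption can persist undetected far longer than a conventional breach.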