Thoughts on AI and Ethics

The rapid technological development in the RPA, cognitive solutions and AI field has meant that many solutions are brought to market in a state of hyped anticipation, almost like a kid looking forward to Christmas on seeing all the large, wrapped presents under the tree.

However, unlike the presents unwrapped on Christmas morning, many AI solutions will remain in their boxes, working away with their algorithms and learning as they amass new data. These proprietary algorithms are the ‘black boxes’ of AI.

But how does the organisation make sense of, and explain, the solution and its decision making? Who scrutinises the algorithm? Is the solution built on a neural network/deep learning basis, or on a Bayesian network or decision trees?

If it is a deep learning solution, is the dataset from which it learns unbiased? And how do we, as clients, customers and citizens, challenge an organisation that acts on the decisions an AI solution has come up with, say for a mortgage application (a ‘simple’ yes/no decision) or a judicial decision (which can range from a penalty notice to a custodial sentence)?

Below are four ethical ‘tests’ or areas (there are arguably more, but for the sake of this article…) that any organisation should discuss before putting an AI solution into a ‘live’ environment. The sooner society and organisations start thinking, discussing and debating these, the more robust our awareness of, and governance over, these issues will be.

Transparency

  • Internal vs external: The solution must be transparent both internally and externally, meaning it must be open to inspection by a governance board as well as by external parties such as regulators and courts.
  • Proprietary: However, as many algorithms deployed by organisations are proprietary ‘trade secrets’, what right does the outside world have to access them? And at what point? Post facto?

Predictability/Auditability

  • Predictability: The solution must be predictable in its decision making. Compare this to what solicitors and lawmakers in the UK refer to as stare decisis, or precedent*.
  • Data sets: Depending on the sample of data from which the solution has learnt, it can contain bias against groups of people. See ProPublica’s reporting on machine bias in sentencing in the US. A simple illustration of a first-pass bias check is sketched after this list.
  • Internal/external datasets: If the AI solution is fed data from outside the organisation, what are its sources? Can these sources be audited and, as the next point asks, can they be corrupted?
  • Audit of the AI solution: Is auditing it a task for the operational risk department, the audit department or the IT department? How well the three lines of defence are set up and communicate will tell how well the governance works.
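
As a concrete illustration of the dataset point above, here is a minimal sketch, in Python, of the kind of first-pass check an audit function might run. The column names ("group", "approved") and the data are purely hypothetical, and real fairness audits go well beyond comparing rates of positive decisions across groups; the point is only that an auditable solution should make such figures easy to produce.

```python
# A minimal sketch, assuming a tabular record of decisions with hypothetical
# columns "group" (a protected attribute) and "approved" (the yes/no outcome).
import pandas as pd

def approval_rates_by_group(df: pd.DataFrame,
                            group_col: str = "group",
                            outcome_col: str = "approved") -> pd.Series:
    """Return the share of positive decisions per group."""
    return df.groupby(group_col)[outcome_col].mean()

# Example usage with made-up data:
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1],
})
print(approval_rates_by_group(decisions))
# A large gap between groups does not prove bias on its own,
# but it is exactly the kind of figure an audit should surface.
```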

Incorruptibility

  • The solution must be robust enough not to be hacked or tampered with, whether from outside the organisation deploying the AI solution or from the inside.
  • The headlines in the news are all about external hacks causing damage, but considering that internal fraud often goes unreported, the internal operational risk with AI is considerable. How does an organisation mitigate the risks of its AI solutions?
  • This relates to the points above about datasets and about how algorithms are developed, tested and maintained. Could someone with sinister intent and access to the AI solution feed it a corrupt dataset for their own benefit? Rather than tampering with the AI solution itself, feed the ‘wrong’ data to the solution and it may start taking decisions that benefit someone else.

Responsibility

  • For the decisions: Any organisation deploying AI solutions must take responsibility for the decisions that the AI solution takes. But who in the organisation should bear that responsibility? The developers? What if the development of the solution was outsourced, or the AI solution was procured via a public procurement process? The team that bought the solution? The governance team? The CEO or chairman, who might (or might not) be vaguely aware that robotics and AI solutions exist in the organisation? Where does the ‘buck’ stop? Is there a case for the insurance industry to look at products offering insurance against AI decisions?
  • Time: For how long should the responsibility last? While the AI solution is in use, or for as long as the decisions it has taken can be said to be reasonably valid or enforceable?
  • Version and version control: If an ‘intelligent’ learning solution is deployed, for which version or iteration can the organisation claim responsibility? What if ‘upgrades’ are made during its ‘life’, improving the solution slightly and thereby altering its decisions? Records of updates and upgrades must be kept for auditing and tracking purposes; a sketch of one such record follows this list.
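
To illustrate the record-keeping point above, here is a minimal sketch of an append-only audit log of model releases. The field names and the log_release helper are hypothetical, not an established API; they only indicate the sort of information (version, dataset snapshot, accountable approver, deployment time) an auditor would want to see for every upgrade.

```python
# A minimal sketch of an append-only audit log for model versions.
# All names below are illustrative assumptions, not a real library.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelRecord:
    model_name: str
    version: str            # e.g. a semantic version or a git commit hash
    training_data_ref: str  # pointer to the exact dataset snapshot used
    approved_by: str        # person or board accountable for this release
    deployed_at: str        # timestamp of deployment

def log_release(record: ModelRecord, path: str = "model_audit_log.jsonl") -> None:
    """Append one release record per line so history is never overwritten."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage with hypothetical values:
log_release(ModelRecord(
    model_name="mortgage_scoring",
    version="2.3.1",
    training_data_ref="s3://example-bucket/datasets/2024-01-snapshot",
    approved_by="model-governance-board",
    deployed_at=datetime.now(timezone.utc).isoformat(),
))
```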

These are just some views on some of the ethical questions organisations and society must grapple with in the age of AI. Each point above opens up further questions as we discover new layers of complexity. But as long as we are aware, willing and able to openly discuss the ethical implications of AI, we should be in a relatively good place.

* In the UK, precedent is a principle or rule established in a previous legal case that is either binding on or persuasive for a court or other tribunal when deciding subsequent cases with similar issues or facts.
