Into The Gray Zone: The Hazards Of AI Without Oversight

Written by: Pratik Savla, MS(InfoSec), CISA, CDPSE, CEH


Hi there! 👋 We publish a weekly newsletter featuring the top minds in the industry. If you're new here, consider subscribing for access to thought-provoking articles, interviews, and more, delivered by cybersecurity experts.


From facial recognition to autonomous vehicles, artificial intelligence (AI) and machine learning (ML) technologies are rapidly transforming industries in profound ways. But these powerful innovations come with unforeseen risks.

Behind the scenes, unregulated "shadow" AI/ML systems are being deployed without oversight, accountability, or transparency.

As these opaque models take on real-world decisions affecting human lives, a chorus of concerns has emerged around their potential dangers if developed recklessly.

Without illumination, these shadow systems embed unseen biases and inequities.

As AI proliferates, our society faces a choice: continue down our current path of breakneck deployment, or confront head-on the hazards within these technologies and bring accountability.  




What Is Shadow AI/ML? 

The term "shadow AI/ML" describes AI and ML systems that operate without sufficient transparency or accountability.

These models are deployed for sensitive tasks like:

  • Facial recognition
  • Predictive policing
  • Credit decisions
  • Content moderation

However, they frequently lack documentation, auditing, or governance over their internal logic.

The proprietary nature of many shadow AI systems prevents ethical scrutiny of their inner workings, even as they take on critical real-world functions.

This opacity around how shadow AI/ML models operate has raised concerns, especially as they become entrenched in high-impact domains. 

For one, the lack of transparency and oversight for shadow AI systems raises significant risks around biased or flawed decision-making.

If the training data contains societal biases or discrimination, those can be perpetuated and amplified by the AI algorithms.

For example, facial recognition systems have exhibited racial and gender bias when trained on datasets that lack diversity.

Similarly, predictive policing systems trained on historically biased crime data can disproportionately target specific communities. 

Even with unbiased data, algorithms can entrench societal prejudices if development teams lack the diversity and awareness needed to recognize them.

Furthermore, because shadow AI systems often act autonomously, without human involvement, their errors can translate directly into harmful outcomes.

If the model makes incorrect predictions or recommendations, there is no failsafe to prevent real-world harm. For instance, an AI system screening job applicants could develop biased notions of ideal candidates and unjustly discount qualified people.
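
To see how this happens in practice, consider the following minimal sketch. It is purely illustrative - the data, feature names, and numbers are all invented - but it shows how a screening model trained on biased historical decisions simply learns to reproduce them:

```python
# Illustrative only: a screening model trained on biased historical
# hiring decisions learns to reproduce that bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(0, 1, n)       # true qualification signal
group = rng.integers(0, 2, n)     # protected attribute (0 or 1)

# Historical labels: past reviewers favored group 0 at every skill level.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

# The model sees the protected attribute (directly, or via proxy features).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Audit: two applicants with identical (average) skill but different groups
# receive very different predicted hire probabilities.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}: predicted hire probability at equal skill = {p:.2f}")
```

No one programmed this model to discriminate; it inferred the preference from the labels. Note that simply dropping the protected column is not a fix, since real-world features often act as proxies for it.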




Example Instances

Here are some real-world examples:

Recruitment Bias: 

  • In 2018, Amazon scrapped an AI resume-screening tool that exhibited bias against women. The algorithm penalized resumes containing the word "women's" (as in "women's chess club") and downgraded graduates of all-women's colleges, producing recommendations that favored male applicants. 

Biased Content Moderation: 

  • In 2019, researchers found that leading AI hate-speech detection tools were significantly more likely to flag posts written in African American English as offensive, meaning automated moderation disproportionately silenced Black users. 

Autonomous Vehicle Accidents: 

  • In 2018, an Uber self-driving test vehicle struck and killed a pedestrian in Tempe, Arizona. Investigators found the vehicle's perception system had failed to correctly classify her before impact, underscoring the danger of deploying autonomous systems without adequate safeguards. 

Harmful YouTube Recommendations: 

  • In 2019, YouTube's AI recommendation algorithm was found to steer viewers down extremist "rabbit holes", recommending increasingly radical and divisive content. This amplified harmful misinformation. 

Racial Bias In Healthcare: 

  • A 2019 study found that a widely used healthcare algorithm exhibited racial bias, underestimating Black patients' needs compared to white patients. This could exacerbate health disparities. 

Toxic Chatbots: 

  • In 2016, Microsoft launched Tay, an AI chatbot that began spouting racist, sexist, and offensive views after being targeted by trolls online. This demonstrated risks of uncontrolled machine learning. 


The rapid pace of AI development, and the complexity of techniques like deep learning, exacerbate these issues with shadow systems.

The rush to deploy before thoroughly evaluating for fairness and safety means potential harms are outpacing governance.

And the black-box nature of deep learning algorithms makes it difficult to audit internal processes, even as they take on sensitive tasks. 

Such instances underscore how today's largely unregulated AI can lead to real ethical perils around bias, censorship, security, and safety. 


Steps For Addressing The Risks

To address the risks of shadow AI systems, the AI/ML community needs to prioritize practices and principles that increase accountability, auditability, and transparency. 

  1. Thorough documentation and procedures are essential - data provenance should be tracked to evaluate for bias, and every stage of model development needs to be recorded.  
  2. Ongoing performance monitoring, especially across different demographic groups, can identify whether the model exhibits unfair bias (a minimal monitoring sketch follows this list).  
  3. Independent third-party auditing of algorithms for discrimination and ethical failures is also critical. 
  4. For high-risk AI applications like self-driving vehicles and social moderation, maintaining meaningful human oversight and decision validation is key to preventing harm. Humans must remain "in the loop" for reviewing and approving AI-generated outputs that impact human lives and society. 
  5. In some cases, certain sensitive use cases may require restrictions on AI deployment until robust governance is established, rather than rapidly deploying shadow models with unchecked risks. 
  6. Adopting standards like the EU's Ethics Guidelines for Trustworthy AI will also guide the community toward fair, accountable AI development and integration.  
  7. Organizations must ensure their AI teams represent diverse perspectives to identify potential harms. Democratically governing these rapidly evolving technologies is crucial to uphold ethics and human rights. 
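
As a starting point for the monitoring described in step 2, here is a minimal sketch. The column names, data, and alert threshold are hypothetical - a real deployment would read from production prediction logs and set thresholds as a matter of policy:

```python
# A minimal fairness-monitoring sketch: compare a model's positive-outcome
# rates across demographic groups. Column names and data are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups."""
    rates = selection_rates(df, group_col, outcome_col)
    return float(rates.max() - rates.min())

# Fabricated example of logged model decisions:
log = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   1],
})

print(selection_rates(log, "group", "approved"))
gap = demographic_parity_gap(log, "group", "approved")
if gap > 0.10:  # the threshold is a policy choice, not a universal standard
    print(f"Alert: demographic parity gap of {gap:.2f} exceeds threshold")
```

Checks like this only surface disparities; deciding whether a gap is justified, and what to do about it, still requires the human oversight described above.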

Conclusion

Realizing the promise of AI/ML responsibly will require deliberate efforts from all stakeholders.

Policy makers, researchers, developers, and civil society must collaborate to illuminate the processes within shadow systems through increased transparency and accountability measures.

Establishing ethical governance for AI will be crucial for earning public trust amidst rapid technological change. 

The path forward demands sustained diligence - continually evaluating AI systems for bias, auditing algorithms, and maintaining human oversight for high-risk applications.

With sound ethical foundations guiding AI innovation and integration into society, these transformative technologies can be developed for the betterment of all.

But an ethical AI future relies on coming together to shed light on shadow systems today. 




Pratik Savla

Pratik Savla is a seasoned Information Security, Compliance and Privacy Leader with over 15 years of experience largely in the private sector, responsible for safeguarding organizations against evolving cyber threats. His expertise lies at the intersection of cybersecurity, data privacy, technology risk management and regulatory compliance. He is also a member of PurpleSec's Cybersecurity Council.
