Debo Dutta is Chief AI Officer @ Nutanix.

With the advent of ChatGPT in 2022 and advanced reasoning models like OpenAI o1 and Deepseek-R1, generative AI has become the new accelerant to achieve greater efficiency (often up to 20% or more) out of business—plus a means to grow the top line in new and innovative ways. That enormous potential has resulted in the development and deployment of new, advanced AI agents, each tapping its own set of data stores, charting its own independent actions and often interacting with other AI agents. We have entered into a multi-agent AI world.

Unsurprisingly, worldwide government authorities are scrambling to develop legal frameworks to prevent AI agents from running amok. Because the largest proportion of agents are generative AI, these efforts have primarily focused on the behavior of large language model (LLM) chatbots. Proposed regulations include NIST's guidelines, AI legislation in California and the EU AI Act.

Before such initiatives become enforceable laws, enterprises need to shape their AI strategies with the safety of AI agents foremost in mind. In most organizations, agents are deployed across platforms from the data center to the edge and multiple clouds. CTOs, CISOs and CAIOs must develop a centralized governance framework that embraces that entire estate. In the process, C-level executives should consider selecting a partner that can provide both the plumbing for AI deployments and trusted, ongoing guidance for adapting to the evolving regulatory landscape.

Forging An AI Management, Safety And Security Strategy

Effectively managing AI across the organization requires a centralized governance committee consisting of key stakeholders. This committee should include leaders from every business unit and corporate administrative function vested in AI, with the goal of developing a framework that clearly establishes policies, monitoring and performance metrics for every domain.

A key part of this effort should be to establish the tenets of AI safety and security, with a clear view of the hazards, remediations and tools required to conduct proper safety evaluations. Obviously, parameters will vary depending on the agent—an industrial control agent will have vastly different strictures than an LLM chatbot, for instance. Governments and highly regulated industries will face the brunt of compliance pressure and need to hash out viable AI safety initiatives sooner rather than later.

In the emerging area of AI safety testing, MLCommons is pioneering generative AI chatbot benchmarking through its hazard taxonomy, which evaluates risks in LLM models, including content related to violent crime, privacy violations, intellectual property and more. Its adjustable grading system helps organizations fine-tune what they define as acceptable.

Isolated platforms make consistent control impossible over time, so a multi-cloud infrastructure is critical. Data security monitoring across all platforms should be integrated into that infrastructure, with robust ransomware defense and identification of anomalous activity in real time. AI in general adds to the risk of exposing proprietary data, so enterprise data protection must be baked in every step of the way.

Preparing For The Regulatory Wave

Concern about the dangers of AI is nothing new, but government regulatory bodies have only begun to grapple with the current state of AI technology and its risks. Some of these initiatives amount to broad guidelines, but the EU has developed significant legislation that’s almost ready to land.

Last year in the U.S., President Biden issued an Executive Order intended to jumpstart the development of standards addressing AI safety and security, privacy, algorithmic discrimination and more. NIST followed with a series of publications adding necessary detail to the points outlined in the Executive Order, but the Order was rescinded on January 20 by President Trump.

The most mature mandate by far is the EU AI Act, which bills itself as "the world’s first comprehensive AI law." Article 5—which prohibits unacceptable AI practices—goes into effect in February 2025, and violations can result in fines of up to 35 million euros or 7% of the offending organization's annual revenue. The good news for businesses is that these prohibitions are directed at bad actors who intend to violate fundamental rights by exploiting users via subliminal techniques, using social scoring for private or public purposes, scraping facial images without authorization and so on.

Businesses that operate outside the EU and fail to recognize the importance of the EU AI Act need only contemplate the worldwide 2018 scramble to comply with the EU’s General Data Protection Regulation. The EU’s regulatory juggernaut for AI will effectively be global, although its details will morph, and other regulatory bodies (such as the California Legislature) will implement their own legal requirements.

Flexibility For The Future

A rapidly evolving regulatory landscape demands an organizational approach and multi-platform infrastructure that can respond quickly enough to meet the AI challenge—one that embraces the simple fact that LLMs and AI agents in general are the fastest-moving areas of technology.

The discipline of maintaining today’s AI agents alone requires proficiency in what has come to be known as LLMOps. This includes model training, fine-tuning models by adding an organization’s private data and using other agents to pepper models in production with questions to ensure safety and compliance.

Managing the lifecycle of AI agents and ensuring their safety—from LLMs and vector databases to underlying AI plumbing—requires proficiency at multiple levels. The ability to quickly accommodate new AI benchmarks and regulations that arrive on a regular basis is essential. Enterprises must deal with the AI skills gap and choose between hiring a shop full of AI engineers and administrators (which is often hard to recruit) or engaging a committed AI technology partner with deep expertise.


Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?