Updates include new support for external storage offerings, a containerized version of Nutanix's storage technology that doesn't require a hypervisor, and deeper integration with Nvidia AI Enterprise.

Nutanix is best known for hyperconverged infrastructure (HCI), but that is not where the company is heading next. At the Nutanix .NEXT conference this week, the vendor is extending its reach beyond traditional HCI into external storage, containerized architectures and AI-focused solutions. It's promoting a vision of "run anything, run anywhere" that now includes support for more storage options, hypervisor-free deployments and AI model management.

The key announcements include:

- Nutanix Enterprise AI (NAI): The new version adds Nvidia AI Enterprise integration to support agentic AI workflows across the enterprise.
- Pure Storage partnership: Nutanix Cloud Infrastructure integrates with Pure Storage FlashArray over NVMe/TCP for mission-critical workloads.
- Cloud Native AOS: A new solution extends Nutanix's storage and data services to Kubernetes environments without requiring a hypervisor.

Major expansion into external storage

In a significant shift for the HCI pioneer, Nutanix is announcing general availability of its Dell PowerFlex integration and early access for its Pure Storage integration, allowing customers to use Nutanix's compute capabilities while leveraging external storage arrays.

"Most HCI buyers were trying to get away from managing storage, they just wanted the storage to be there," Lee Caswell, senior vice president of product marketing, explained in a press briefing. "We also realize there's a lot of storage expertise out in the marketplace still."

The new external storage offerings preserve Nutanix's management simplicity through the company's Prism control plane while expanding capabilities. Caswell explained that when a user initiates a snapshot, the operation is triggered from Prism but executed on the PowerFlex array; the same holds for replication and all other storage operations.

The Pure Storage integration will also be supported within Cisco's FlashStack offering, creating a "FlashStack with Nutanix" solution that combines storage from Pure, networking and UCS servers from Cisco, and the common Nutanix Cloud Platform.

Cloud Native AOS: Breaking free from hypervisors

Another sharp departure from Nutanix's past is the new Cloud Native AOS platform. "The idea of Cloud Native AOS is that we can now push deeper into the cloud and further out to the edge," Caswell said.

Cloud Native AOS is a containerized version of Nutanix's storage technology that runs without requiring a hypervisor. It is the first product delivery from "Project Beacon," a vision Nutanix announced three years ago.

Caswell explained that for decades the hypervisor was the core foundation for delivering services, with data and other capabilities bolted on top. Now the general view is that hypervisors are a commodity. "Most clouds, including Amazon, have an underlying hypervisor, so they're running their Kubernetes runtime on that," he explained. "So basically, you're taking out any sort of nested virtualization."

The value is not in the hypervisor, but rather in the data. The cloud-native approach is also built on containers, which are the cornerstone of modern application development.
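Nutanix has not published implementation details here, but one way to picture a hypervisor-free storage layer is from the application side: in Kubernetes, workloads request capacity through standard persistent volume claims, and a containerized storage service fulfills them directly rather than through a hypervisor's virtual disks. The sketch below uses the Kubernetes Python client to make such a request; the storage class name "cloud-native-aos" is a hypothetical placeholder for illustration, not a documented Nutanix identifier.

```python
# Illustrative only: requesting storage from a containerized storage layer
# via a standard Kubernetes PersistentVolumeClaim. The storage class name
# "cloud-native-aos" is a hypothetical placeholder, not a Nutanix product value.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="cloud-native-aos",  # assumed class backed by the containerized storage service
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

From the application's point of view, nothing in this request depends on whether a hypervisor sits underneath, which is the point of pushing the data services into the container layer itself.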
"Containers have immense value for developers, because containers offer speed and they also eliminate OS dependencies for testing," Caswell said. "That means, if you're going to deploy your applications in different disparate environments, you can develop more quickly with containers."

Enterprise AI advancements with Nvidia

Nutanix also announced general availability of the latest version of Nutanix Enterprise AI (NAI), featuring deeper integration with Nvidia AI Enterprise to simplify how organizations deploy and manage AI agents across environments. NAI is moving beyond last year's "GPT in a box" approach to create more sophisticated AI workflows.

"A lot of customers are super excited about AI, but don't know how to start, they're also worried about their IP, they're worried about privacy," Caswell said.

The company is supporting Nvidia's NeMo and NIM microservices and focusing on creating agentic AI workflows that include guardrails, re-ranking and embedding capabilities. These workflows help organizations move beyond simple query-answer patterns to more sophisticated AI implementations.

"The AI model is moving from a simple query to an answer. That was the initial ChatGPT, right? You query, I get an answer, blurts out one thing," Caswell explained. "This agentic cycle here… where you critique, you plan, you use the tool, and it's a workflow… is incredibly important as you start thinking about how customers are going to bring in all of these models into a production grade workflow."
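Caswell's "critique, plan, use the tool" description maps onto a now-common pattern that is straightforward to sketch in code. The example below is a minimal, hypothetical illustration of such a cycle against an OpenAI-compatible chat endpoint (the style of API that NIM microservices expose); the endpoint URL, model name, and the stand-in "tool" are assumptions for illustration, not Nutanix or Nvidia specifics.

```python
# Minimal agentic-loop sketch: plan -> act (use a tool) -> critique.
# Assumes an OpenAI-compatible endpoint; the URL, model id, and the
# lookup_inventory "tool" are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://nim.example.internal/v1", api_key="not-needed")
MODEL = "meta/llama-3.1-8b-instruct"  # placeholder model id


def ask(prompt: str) -> str:
    """Send a single chat completion to the endpoint and return the text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def lookup_inventory(query: str) -> str:
    """Stand-in 'tool': in practice this would call an internal API or database."""
    return f"(inventory records matching '{query}')"


def agentic_cycle(task: str, max_rounds: int = 3) -> str:
    """Plan, gather evidence with a tool, draft an answer, then self-critique."""
    answer = ""
    for _ in range(max_rounds):
        plan = ask(f"Task: {task}\nDraft a short plan, including what data you need.")
        evidence = lookup_inventory(task)  # "use the tool"
        answer = ask(
            f"Task: {task}\nPlan: {plan}\nEvidence: {evidence}\nWrite an answer."
        )
        critique = ask(
            f"Critique this answer for accuracy and policy issues:\n{answer}\n"
            "Reply APPROVED if it is acceptable."
        )
        if "APPROVED" in critique:  # crude stand-in for a guardrail check
            break
    return answer


if __name__ == "__main__":
    print(agentic_cycle("Which warehouses are running low on part A-113?"))
```

A production-grade workflow would replace the crude APPROVED check with dedicated guardrail, re-ranking and embedding services, which is the kind of capability the NAI and Nvidia AI Enterprise integration is meant to provide.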