
Scaling AI into production is forcing a rethink of enterprise infrastructure

Presented by Nutanix


Across industries, organizations are focused on how to move AI from pilots, proofs of concept, and cloud-based experimentation to deployment at scale — across real workloads, for real users, in real business environments. VentureBeat spoke with Tarkan Maner, president and chief commercial officer at Nutanix, and Thomas Cornely, EVP of product management, about what that transition demands, and what it will take to get it right.

“AI in general is shifting everything we do, not only in technology, but across all vertical industries, from regulated industries like banking, health care, government, education to non-regulated industries like manufacturing and retail,” Maner said. “As a complete platform company, we welcome this change. It’s creating more opportunities for us as a company to serve our customers in better ways as we move forward.”

But there’s still a practical gap between experimentation and production, Cornely said.

“It’s one thing to do an experiment, to do a prototype. It’s a different thing to take that prototype and deploy it for 10,000 employees,” he explained. “We went from people focusing on training models to chatbots to now doing agents, where the demand and pressures on AI infrastructure are growing exponentially.”

Agentic AI introduces a new layer of enterprise complexity

The rise of agentic AI is what makes this transition especially consequential. These systems introduce multi-step workflows across applications and data sources, along with a degree of autonomy that creates new operational demands.

Enterprises now have to contend with multiple agents running simultaneously, unpredictable and real-time workloads, and the need to coordinate access to infrastructure across teams.

“OpenClaw is making it very easy now for anybody to build agents and run with agents,” Cornely said. “You want those agents to be running on premises with your data. You need to have the right constructs around it to protect the enterprise from what an agent could do.”

As these systems become more autonomous, the challenge extends beyond how they operate to how they interact with enterprise data, systems, and teams.

AI is augmenting human work, not replacing it

Agentic AI is fundamentally an amplifier of human capability rather than a substitute for it, Maner said. The goal for enterprises is not to eliminate human work but to find the right balance between human decision-making, AI-driven automation, and agent-based workflows.

“We believe that there’s going to be love, peace, and harmony between AI, agentic tools, and robotics systems, and human capital,” Maner said. “That harmony can be optimized for better outcomes for businesses, enterprises, governments, and public sector organizations, if the right vendors provide the right tooling and the right services.”

How enterprises are getting started with AI at scale

In practice, the move from experimentation into real-world deployment is where the challenges become most visible. Despite the momentum, many organizations are still working through how to scale AI beyond initial use cases.

As they do, organizations quickly run into practical constraints. Many start in the cloud because of easy access to resources and services, but practical considerations like data, governance and control, and cost quickly come to the forefront.

The cloud serves as a place to experiment, with the ultimate goal of bringing applications back on premises as they move toward production, on platforms that solve for security and cost.

The use cases gaining the most traction include document search and knowledge retrieval, security and predictive threat detection, software development and coding workflows, and customer support and service operations. In the security realm, banking customers and others in Europe and the U.S. are deploying AI-driven tools including facial recognition and predictive threat detection. Meanwhile, in customer support, there's a growing focus on end-to-end, 360-degree customer engagement, from pre-sales through post-sales advocacy.

Industry-specific AI transformation is already underway

Across industries, the shift from experimentation to real deployment is already taking shape in distinct ways. In retail, AI is transforming store operations: cameras and robotics enable targeted in-aisle marketing at the moment of purchase decision, cashier-less checkout is replacing traditional POS systems, and the human capital freed up is being redeployed to back-office and merchandising functions.

In healthcare, Nutanix works with customers on applications spanning diagnosis, treatment, remote health, and hospital operations, with cloud partners including AWS and Azure. In manufacturing and logistics, the transformation is equally significant.

The operational challenges of scaling enterprise AI

As AI use cases scale, enterprises are running into a new class of operational challenges. Managing multiple AI workloads and agents, coordinating infrastructure access across teams, ensuring security and governance, and integrating AI systems with existing business processes are now top-of-mind concerns for IT and business leaders alike.

The gap between AI developers pushing for speed and access, and infrastructure teams responsible for security, uptime, and governance, is one of the defining challenges of this moment.

“Now I’m running agents, and they’re all going to fight to get access to resources to solve my problems,” Cornely said. “What you want now is infrastructure that allows you to set constraints, govern resources.”

The AI factory: a shared platform for production AI

These challenges are driving demand for what Maner and Cornely describe as the AI factory: a shared infrastructure environment that supports multiple users and workloads simultaneously, enabling both experimentation and production while balancing developer agility with enterprise governance.

At GTC 2026, Nutanix announced the Nutanix Agentic AI Solution, a complete platform spanning core infrastructure, Kubernetes-based container services running on a topology-aware hypervisor, and advanced services for building and governing agents.

“We’re launching a complete platform, from core infrastructure through PaaS and advanced PaaS services to the whole management framework for your AI factories,” Cornely said. “Really enabling self-service for the teams that will build these applications in the enterprise.”

Hybrid environments are essential to enterprise AI strategy

Operating this kind of environment requires flexibility across infrastructure. Hybrid infrastructure is not a compromise, but a requirement. Some workloads will always run in the public cloud, while others must remain on premises due to security requirements, regulatory compliance, data sovereignty, or competitive IP considerations.

“Especially in the regulated industries, as sovereignty becomes a bigger issue, data gravity becomes a bigger issue, security, and also a lot of competitive differentiation in the industry, it’s going to depend on what the company wants for their own IP,” Maner said.

This is the foundation of Nutanix’s platform position, he added.

“We are the perfect harmony, bringing those applications, that data, and all the optimization for these use cases end to end, from on-prem to off-prem and in a hybrid mode,” he said. “Doing it not only in one cloud, but for multiple clouds.”

That flexibility also extends to the broader ecosystem. Nutanix works across hyperscalers including AWS, Azure, and Google Cloud, as well as regional service providers and emerging neoclouds. Nutanix offers neoclouds a full software stack to run their own clouds and deliver advanced AI services, giving enterprise customers already running Nutanix a simple extension of compute, networking, and AI capabilities.

Maner described the arrangement as a win for both sides. For enterprises, it means simplified access to hybrid AI services. For neoclouds, it means a proven platform to build on. It’s all automated and secure by default, Cornely added.

“All of those governance problems that now come up with agentic AI are the same problems we’ve been solving for the last 16 years for every other application running in your cloud,” he said.

From pilot to production: operationalizing AI across the enterprise

Ultimately, the goal is not to run a successful AI pilot, but to operationalize AI across real-world use cases, manage infrastructure as a shared resource, support collaboration between infrastructure teams and AI developers, and scale from initial projects to enterprise-wide deployment.

“There’s a massive gap right now between people building AI applications, those AI engineers, those agentic AI developers, and your classical infra teams,” Cornely said. “They need tooling to enable the infra teams, so they can support your AI engineers. That’s what we deliver with our agentic AI solution.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.