The storage world is quickly bifurcating in its approach to AI. On one side sit enterprise storage vendors relying on traditional all-flash storage arrays to serve data to modest-size AI infrastructure. These vendors often need third-party solutions to meet the scalability needs of large AI training clusters. On the other side sits a new breed of storage provider that focuses less on storage and more on enabling an end-to-end data infrastructure.

VAST Data, Weka, and IBM all deliver highly scalable, performant data infrastructure solutions designed for big data challenges like those found in AI workflows. Building on many of the ideas behind HPC parallel file systems, these solutions use disaggregated, software-first architectures that provide a global namespace and scalable performance, whether operating on-prem or in the cloud.

The most forward-leaning of this new breed is VAST Data, which was early to supplement its scalable storage technology with integrated data manipulation tools. The VAST Data Platform has long had a fully integrated, full-featured database, along with AI-targeted data manipulation capabilities. That vision is paying off, with the VAST Data Platform finding success in AI-targeted cloud services like Lambda and CoreWeave.

VAST recently made a wide-ranging set of announcements that extend those features to directly address the needs of enterprise AI. Beyond the new features, VAST highlighted new strategic relationships and a new user community to help bring VAST technology into the enterprise.

The Challenges of Data for AI

AI workflows depend on the efficient movement and processing of data. The performance-sensitive nature of AI quickly exposes any inefficiencies in the underlying storage platform, often highlighting the limitations of traditional storage systems. These inefficiencies arise from how storage systems manage, store, and process data across the various stages of the AI lifecycle.

Each step of the AI pipeline often requires copying data between different environments or tools. For example, in an enterprise RAG workflow, data may be copied from storage into an embedding pipeline that encodes it as vectors, with the resulting embeddings written to a vector database. Later, those vectors are retrieved to supplement the responses of an existing foundation model.

This process is orchestrated by software such as Nvidia's NIM microservices, part of the Nvidia AI Enterprise suite, or by other tooling. That orchestration typically runs on separate servers, adding another layer of abstraction and performance overhead.

While this is a simplified description of the process, it highlights where traditional storage systems may struggle. When data is moved or duplicated within an AI workflow, inefficiencies arise. Moving data takes time, increasing latency and decreasing throughput. Significant data movement, as in AI training, can limit the system’s scalability.
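The data movement described above can be sketched in a few lines of Python. This is a toy illustration only: the character-frequency "embedding" stands in for a trained embedding model, and none of the names here correspond to VAST or Nvidia APIs.

```python
import math

def embed(text):
    """Toy embedding: a normalized character-frequency vector.
    Illustrative only; real pipelines use a trained embedding model."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Step 1: data is copied out of storage and encoded as vectors,
# which are then written into a (here, in-memory) vector database.
documents = {"doc1": "flash storage scales", "doc2": "gpu training cluster"}
vector_db = {doc_id: embed(text) for doc_id, text in documents.items()}

# Step 2: at query time, vectors are read back to find context
# that supplements the foundation model's prompt.
def retrieve(query, k=1):
    q = embed(query)
    scored = sorted(vector_db.items(),
                    key=lambda kv: -sum(a * b for a, b in zip(q, kv[1])))
    return [documents[doc_id] for doc_id, _ in scored[:k]]

context = retrieve("scaling storage")
prompt = f"Context: {context[0]}\nQuestion: scaling storage"
```

In a traditional deployment, each of those steps (storage to encoder, encoder to vector database, database back to the model) crosses a system boundary, and that is exactly where the latency described above accumulates.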

Addressing these inefficiencies requires a unified and optimized data infrastructure that can seamlessly handle the iterative nature of AI development, minimize data movement, reduce latency, and ensure efficient utilization of computational and storage resources. VAST Data addresses these challenges with the unique capabilities of its VAST Data Platform.

Nvidia GPU and NIM Integration

One of VAST’s cornerstone announcements is a deep integration between its platform and Nvidia’s enterprise AI tools. VAST now allows Nvidia GPU-equipped servers to act as controller nodes running NIM (part of Nvidia’s AI Enterprise software suite) within VAST’s all-flash storage system. This lets GPUs process data directly within the data platform, eliminating the need to migrate data to external servers. Allowing the GPU to access stored data directly enhances real-time AI inference, providing a significant performance boost.

In addition to the GPU integration, VAST has embedded Nvidia’s NIM microservices directly into its platform. NIM simplifies the deployment of AI models by providing pre-configured, containerized services optimized for generative AI and large language models. This allows enterprises to rapidly deploy custom and pre-trained AI models across cloud, on-prem data center, and edge environments.
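NIM microservices expose an OpenAI-compatible HTTP API, which is part of what makes them easy to deploy against. The sketch below builds such a request using only the Python standard library; the endpoint URL and model name are placeholders assuming a locally running NIM container, not details from VAST's announcement.

```python
import json
import urllib.request

# Placeholder endpoint for a locally deployed NIM container; the model
# name depends on which NIM image is actually running.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Summarize our Q3 sales data."}],
    "max_tokens": 256,
}

def build_request(url, body):
    """Build the HTTP request; sending it requires a running NIM service."""
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(NIM_URL, payload)
# urllib.request.urlopen(req) would send it to the running service.
```

Because the interface mirrors the OpenAI API, existing client libraries and tooling generally work against a NIM endpoint with only a URL change.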

VAST InsightEngine: AI Data Processing at Scale

While its tighter integration with Nvidia NIM enables faster AI workflows, VAST’s new InsightEngine brings efficiency to AI data handling. The VAST InsightEngine leverages the power of Nvidia GPUs and NIM to provide real-time AI data processing. A key feature of InsightEngine is its ability to generate vector embeddings and graph relationships from incoming data, enabling enterprises to create advanced AI retrieval systems that use their proprietary data without needing an external vector database.

VAST tells us that its new InsightEngine can store trillions of vector embeddings and supports exabyte-scale data storage, making it well-suited for large enterprises dealing with structured and unstructured datasets. By embedding AI processes directly into the data storage architecture, VAST eliminates the need for external data lakes or third-party AI platforms, streamlining workflows, improving performance, and reducing costs.

Strategic Partnerships with Cisco and Equinix

In a move that will help it expand further into the enterprise market, VAST Data also announced new strategic partnerships with Cisco and Equinix. VAST’s software will be bundled with Cisco’s UCS servers to give enterprises an integrated solution for AI and data storage workloads. Cisco’s UCS servers are optimized for data-heavy applications, and the combined solution will simplify deployment for enterprises looking to scale their AI initiatives.

Additionally, VAST is making its platform available through Equinix IBX co-location sites, providing enterprises with global access to its infrastructure. Equinix’s extensive network of data centers enables VAST to offer low-latency, secure data management solutions across geographically distributed environments. This allows Equinix customers to deploy AI infrastructure in scenarios requiring hybrid cloud or edge computing capabilities.

Analyst’s Take

VAST Data’s integrated data storage and management approach optimizes every stage of the AI pipeline, from data ingestion and preprocessing to model training, evaluation, and deployment. Its most recent announcements also address RAG and related enterprise AI workflows, ensuring faster and more reliable AI development cycles.

The capabilities of the VAST Data Platform are significantly differentiated from competing solutions. Organizations using the integrated Nvidia software services and VAST data manipulation capabilities will experience new, likely unparalleled, levels of efficiency. It’s a compelling story, one that recently led storage industry stalwart NetApp to announce an architecture built on many of these same principles (though without a corresponding product announcement). VAST should see this as a strong validation of its approach.

The challenge for VAST and others following the same path is that enterprise IT has historically treated storage and applications as different domains. Adopting the VAST Data Platform means accepting a level of lock-in and dependency that doesn’t exist in the more traditional legacy storage world. For many customers, the gains in performance and efficiency will be worth it. Others will want to manage these workflows separately. It’s reminiscent of the challenge hyperconverged infrastructure faced a decade ago.

There’s little question that VAST Data is well-positioned as a leader in AI infrastructure, offering enterprises the tools they need to efficiently and effectively scale AI workloads. This is the right time for the newly announced capabilities and relationships. VAST Data finds itself delivering enterprise AI features at the precise time that enterprises are starting to build AI infrastructure. It’s a powerful position.

Disclosure: Steve McDowell is an industry analyst, and NAND Research is an industry analyst firm that engages in, or has engaged in, research, analysis, and advisory services with many technology companies. The author has provided paid services to every company mentioned in this article (except CoreWeave, Lambda, and Equinix) in the past and may again in the future. No company mentioned was involved in the drafting or publication of this article. Mr. McDowell does not hold any equity position with any company mentioned.
