When it comes to AI, all eyes are on the GPUs, as evidenced by Nvidia’s mind-blowing $4 trillion market value. However, AI training and inference require more than just the GPU. They require networking between the GPUs, infrastructure to power and cool the GPUs, and, most of all, storage and memory to manage all the raw data and models. As mathematician Clive Humby put it, “data is the new oil.” It drives the world economy and is at the heart of AI. Without the data, there is no AI. As a result, the storage and memory subsystem, sometimes referred to as the AI data layer or the AI data pipeline, is one of the most critical elements of the system. Another American company, Micron, is rapidly emerging as a key supplier for this crucial AI data layer.
Disclosure: My company, Tirias Research, has consulted for AMD, Nvidia, Micron and other companies mentioned in this article.
The AI data layer
The memory and storage hierarchy of a server is very complex and has evolved over many decades. The constant need to increase memory performance and density, keeping pace with the processing elements and the demands of increasingly complex workloads such as AI, has driven innovation at every layer of the hierarchy, from on-chip SRAM to closely coupled high-bandwidth memory (HBM), system main memory, pooled memory resources, and SSD storage. For AI workloads, memory and storage have become critical, non-commodity elements of the system. Only three major vendors supply both memory and storage at scale: Micron, Samsung, and SK Hynix, and Micron is the only US company among them.
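To put that hierarchy in perspective, the short sketch below estimates how long each tier would need to stream a large model’s weights once. The bandwidth figures are rough, order-of-magnitude assumptions for illustration only, not vendor specifications.

```python
# Rough, illustrative sketch of the server memory/storage hierarchy described above.
# Bandwidth figures are order-of-magnitude assumptions, not vendor specifications.

TIERS_GB_PER_S = {
    "on-chip SRAM (caches)":        10_000,   # assumed aggregate on-die bandwidth
    "HBM (on-package)":              5_000,   # assumed multi-stack aggregate
    "DDR main memory":                 500,   # assumed multi-channel aggregate
    "pooled / CXL-attached memory":    100,   # assumed fabric-limited bandwidth
    "NVMe SSD storage":                 10,   # assumed per-device sequential read
}

def time_to_stream(data_gb: float) -> None:
    """Print how long each tier would need to stream `data_gb` of model data once."""
    for tier, bandwidth in TIERS_GB_PER_S.items():
        print(f"{tier:32s} ~{data_gb / bandwidth * 1000:8.1f} ms")

# Example: streaming 140 GB of weights (roughly a 70B-parameter model in FP16)
time_to_stream(140)
```

Even with these generous assumptions, each step down the hierarchy adds roughly an order of magnitude of delay, which is why keeping data as close as possible to the accelerator matters so much for AI.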
Micron’s data center acceleration
While also enjoying a substantial presence in consumer and embedded/IoT applications, Micron’s success in the data center is closely tied to the growth of AI, particularly in high-performance HBM memory and SSD storage. Micron initially focused on an alternative memory architecture called the Hybrid Memory Cube (HMC) and pivoted to HBM around seven years ago. The company’s early challenges with HBM2 and HBM2E left it trailing its competitors. However, Micron capitalized on the breakout growth of Nvidia’s Hopper generation of AI GPU accelerators with its HBM3 and HBM3E generations to take over the number two spot in less than a year, and it appears to be ahead of the curve for the next generation of AI GPU accelerators from both AMD and Nvidia. Micron’s HBM3E is designed into AMD’s newest Instinct MI350 platform, and the company is reportedly shipping HBM4 to key customers for future AI platforms.
Leveraging the company’s proprietary 1-beta process node for the HBM3E and HBM4 generations, along with advanced interposer and die-stacking technology, Micron delivers HBM products with the industry’s highest bandwidth and up to 30% better power efficiency than products from Samsung and SK Hynix. Similar to how it collaborates with other key customers in mobile and computing, the company has worked closely with AI accelerator customers, including AMD and Nvidia, to ensure optimal performance, quality, and manufacturability, thereby earning a spot as one of the leading memory suppliers for the next generation of high-performance AI platforms.
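For a sense of what those bandwidth claims mean in practice, the back-of-envelope calculation below combines the commonly cited HBM3E interface width (1,024 bits per stack) and a per-pin data rate of roughly 9.2 Gb/s to estimate per-stack and per-accelerator bandwidth. These are assumed, publicly cited figures used for illustration, not numbers from a Micron datasheet.

```python
# Back-of-envelope HBM bandwidth estimate. The pin rate and interface width below
# are commonly cited HBM3E figures used here as assumptions, not a Micron datasheet.

def hbm_stack_bandwidth_tbs(pin_rate_gbps: float, interface_bits: int) -> float:
    """Peak bandwidth of one HBM stack in TB/s: pin rate (Gb/s) x bus width (bits) / 8 / 1000."""
    return pin_rate_gbps * interface_bits / 8 / 1000

per_stack = hbm_stack_bandwidth_tbs(pin_rate_gbps=9.2, interface_bits=1024)
print(f"~{per_stack:.2f} TB/s per stack")           # ~1.18 TB/s
print(f"~{8 * per_stack:.1f} TB/s for 8 stacks")    # ~9.4 TB/s on a hypothetical 8-stack accelerator
```

Stacking eight such devices around a GPU is how accelerators reach aggregate memory bandwidth measured in multiple terabytes per second.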
In addition to investing in new memory and storage architectures, Micron has a $200 billion manufacturing expansion plan that includes the expansion of facilities in Idaho, Virginia, and Japan, as well as a new complex of fabs in New York. The expansion will not only meet the needs of its customers in AI but also support the push for onshore manufacturing in the US.
Final thoughts
While memory and storage are often classified as two different segments, they form a single subsystem or data layer. This data layer is essential for meeting the performance and scalability requirements of AI workloads. AI’s demands are so high that the data layer must be designed in conjunction with the processing layer to ensure optimal performance. So, when it comes to data center AI, memory and storage are not commodities that can easily be substituted with a lower-cost alternative. The data layer is a unique piece of the entire AI platform and AI data center.
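A simplified, roofline-style estimate illustrates the point: at small batch sizes, the rate at which an accelerator generates tokens is bounded by how fast its memory can stream the model’s weights, not by how fast it can compute. The figures below are assumptions for illustration only.

```python
# Simplified, illustrative estimate of why the data layer must be co-designed with
# the compute layer: at batch size 1, decode throughput is roughly bandwidth-bound.
# All figures below are assumptions for illustration.

def decode_tokens_per_s(hbm_bandwidth_tbs: float, model_bytes_gb: float) -> float:
    """Rough upper bound on tokens/s when every weight is read once per generated token."""
    return hbm_bandwidth_tbs * 1000 / model_bytes_gb

# Example: 140 GB of FP16 weights served from ~5 TB/s of aggregate HBM bandwidth
print(f"~{decode_tokens_per_s(5.0, 140):.0f} tokens/s upper bound")  # ~36 tokens/s
```

Under those assumptions, adding more compute alone would not produce more tokens; only a faster data layer would.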
Micron provides a complete AI data layer solution through the combination of its high-performance HBM, DRAM, and SSDs. Additionally, Micron has demonstrated that it possesses the technology and resources to become a leader in this segment in a very short period, making it valuable to the entire electronics ecosystem and to the onshoring aspirations of the US government.