Ahead of Computex 2024 this week, Intel gave select members of the press and analyst community a deep-dive look at its forthcoming Lunar Lake mobile Core Ultra processor technology, which is set to reach volume production in September of this year. Lunar Lake represents a major architectural shift for Intel, manufactured on TSMC’s N3 and N6 chip fab process nodes in an effort to leap ahead of competition from both Qualcomm and AMD. And to say the media and analyst communities were waiting with bated breath for this level of detail on Intel’s new ultra-efficient processor architecture for the age of the AI PC would be a major understatement. The following are my high-level takeaways on what to expect from Intel Lunar Lake, a chip architecture that virtually everyone I’ve spoken to at Intel is wildly excited about.

Lunar Lake Brings An All New Efficient And Performance Core Approach

The first thing to know about Lunar Lake is that a number of key execs at Intel claim this new CPU architecture will redefine the x86-versus-Arm performance-per-watt paradigm that has been in place since Apple stepped out with its first round of M-series silicon, perhaps even before. In fact, technical marketing contacts I spoke with claim Lunar Lake’s CPU cores will be the most performant PC processor cores on the market when they arrive in September 2024, besting Qualcomm’s Snapdragon X Elite (on a per-core basis), and perhaps even AMD’s recently announced Ryzen AI 300 series, based on that company’s Zen 5 and Zen 5c architectures.

Intel achieved this via a completely revamped compute tile, consisting of 4 P-Cores and 4 E-Cores on a new low-power island. The interesting change here is that not only are there fewer total cores in Lunar Lake, but all of them are now single-threaded engines; Intel Hyper-Threading has been removed, with multi-threaded workloads spread across physical cores instead. That said, Intel’s new Skymont E-Core architecture is reportedly much more performant, delivering up to 2X the throughput of the Crestmont cores in its current-gen Meteor Lake chips. Skymont is a wider 8-wide allocation machine, up from 6-wide in the previous gen, with a 60% larger out-of-order execution window and a 4MB shared L2 cache for the quad E-Core cluster.

All in, Intel claims a massive 38% instructions-per-clock (IPC) uplift in integer performance and a whopping 68% IPC lift in floating-point calculations for Lunar Lake’s E-Cores versus its previous-gen chips. Intel’s Skymont E-Core design also incorporates quadruple the Vector Neural Network Instruction (VNNI) hardware on board for accelerating AI and machine learning, in addition to a much beefier Neural Processing Unit.
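To put those percentages in perspective, a quick back-of-the-envelope sketch: at a fixed clock speed, per-core throughput scales directly with IPC, so a claimed IPC uplift translates one-for-one into a throughput multiplier. The numbers below use only the IPC claims from this section; real-world gains will also depend on clocks and workloads.

```python
def relative_throughput(ipc_uplift_pct: float) -> float:
    """Convert a claimed IPC uplift (e.g. 38 for +38%) into a
    throughput multiplier at an equal clock speed."""
    return 1.0 + ipc_uplift_pct / 100.0

# Skymont vs. previous-gen E-Cores, per Intel's claims above
int_gain = relative_throughput(38)  # integer workloads
fp_gain = relative_throughput(68)   # floating-point workloads

print(f"Integer: {int_gain:.2f}x, FP: {fp_gain:.2f}x")
```

Note that the separate "2X throughput" Skymont figure cited earlier folds in frequency as well, which is why it exceeds either IPC number alone.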

Then we have Intel’s more powerful Lion Cove P-Core architecture which, though it’s now single-threaded, sports a much larger branch prediction engine, higher fetch and decode bandwidth, and a 576-deep instruction window versus Meteor Lake’s 512-deep P-Core instruction window. All of these gains reportedly result in a 14% IPC lift over the previous-gen Redwood Cove P-Cores on board Intel’s current-gen mobile processors.

Finally, Intel has also optimized its Thread Director technology not only to take advantage of this new architecture but also to be more communicative with the OS, with finer granularity of workload classification and more intelligent resource management. Thread Director is managed through the Lunar Lake E-Core cluster, and in some cases apps, such as Teams video conferencing, will be handled exclusively in the E-Core cluster. In short, these aren’t your Meteor Lake E-Cores, folks, and Thread Director will be sharing more data with the OS to provide more intelligent, controlled efficiency and performance.

Intel’s Beefier 48 TOPS NPU For Copilot+ PCs

Lunar Lake also sports a new, much more powerful NPU capable of 48 TOPS of AI processing, based on the INT8 data type the rest of the industry has standardized around. Intel notes Lunar Lake’s NPU comprises 3X the number of tiles, or neural compute engines, which are also clocked at a higher frequency but with a claimed 2X power efficiency. These engines also offer a 12X vector throughput lift over Intel’s legacy Meteor Lake NPU, for dramatically improved performance with transformers, the type of AI neural network widely used in large language models like ChatGPT.

Lunar Lake is also claimed to deliver 9X faster attention calculation, a method of processing in AI models that allows them to selectively “attend” to, or weight, different input data, ranking that input by relative importance. To demonstrate these claims, Intel showed a Stable Diffusion demo in which the AI was tasked with rendering an image of an aging human engineer looking excited about his new technology. Though the renderings were relatively similar in quality, it took a previous-generation Intel Meteor Lake laptop about 20 seconds to complete the image, while a new Intel Lunar Lake prototype machine completed it in a quarter of the time, around 5 seconds.
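For readers curious what that “attention” weighting actually does, here’s a minimal, illustrative sketch of scaled dot-product attention in plain Python. All names and toy data here are my own for illustration; this is the general mechanism used in transformer models, not anything specific to Intel’s hardware.

```python
import math

def softmax(scores):
    """Normalize raw scores into weights that sum to 1."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Weight each value by how relevant its key is to the query."""
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is the weighted blend of the value vectors
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query matches the first key most closely,
# so the output leans toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                [[10.0, 0.0], [0.0, 10.0]])
```

The dot products and the softmax here are exactly the kind of dense vector math that the NPU’s claimed 12X vector throughput lift is meant to accelerate.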

Amped Lunar Lake Xe2 Graphics, Memory On Package And Connectivity

There’s so much to cover with the major overhaul that is Intel Lunar Lake that we’ll have to turbo through these final blocks a bit for easier digestion. Starting with Intel’s new Xe2 graphics engine: once again Intel has ripped up the Meteor Lake spec sheet and dropped in a new Arc GPU block based on its Battlemage architecture. On board Lunar Lake’s Xe2 integrated GPU are 8 cores with 8 vector engines each, for a total of 64 vector engines, along with 8MB of cache and 8 second-generation ray tracing units. This GPU is capable of 67 peak INT8 TOPS which, along with the NPU’s 48 peak TOPS and the CPU’s 5 TOPS, brings total AI platform throughput to 120 TOPS.
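The platform figure is just the sum of each engine’s peak INT8 throughput, which is easy to sanity-check:

```python
# Peak INT8 TOPS per engine, as quoted by Intel for Lunar Lake
engine_tops = {"GPU": 67, "NPU": 48, "CPU": 5}

platform_tops = sum(engine_tops.values())
print(platform_tops)  # 120
```

Keep in mind that summed platform TOPS is a marketing convention; a real workload rarely saturates all three engines simultaneously.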

The bottom line for gaming is that Lunar Lake’s Xe2 integrated GPU delivers up to 1.5X the performance of Intel’s already strong previous-gen Alchemist integrated GPU in Meteor Lake. If this pans out in testing, I’d expect Lunar Lake’s graphics chops to be class-leading. With Xe2 we’re also treated to eDisplayPort 1.5, DisplayPort 2.1 and HDMI 2.1 support, as well as support for the new VVC (H.266) media codec, which is supposed to deliver 10% better compression than AV1 without any degradation in quality.

All of Lunar Lake’s various engines, connected across Intel’s tile-based architecture with its new Scalable Fabric Gen 2 and Foveros multi-die stacking technology, are now fed by up to 32GB of LPDDR5X memory residing on the same multi-chip package as the SoC. This allows significantly more memory bandwidth at up to 8,500 MT/s, a 40% reduction in physical interface power and major real estate savings in the average laptop design. Intel execs noted to me that 16GB would be the base Lunar Lake laptop config, with 32GB available as an option at the OEM level, depending on the model of machine.
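For a rough sense of what that transfer rate means in bandwidth terms, peak memory bandwidth is simply transfer rate times bus width. The 8,500 MT/s figure comes from Intel; the 128-bit bus width below is my assumption, typical for this class of thin-and-light laptop silicon, not a number Intel quoted in this briefing.

```python
# Peak bandwidth = transfers/sec x bytes transferred per cycle
transfers_per_sec = 8500e6    # 8,500 MT/s, per Intel's claim
bus_width_bytes = 128 / 8     # ASSUMED 128-bit memory bus

bandwidth_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(f"{bandwidth_gb_s:.0f} GB/s")
```

Under that assumption, the on-package LPDDR5X would top out around 136 GB/s of peak theoretical bandwidth.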

Lunar Lake’s Value Proposition Looks Strong For Thin And Light AI PCs

Intel is also making a Lunar Lake dev kit available to developers who want to get working on the platform, and the Intel NUC-sized mini PC is sure to be of interest to small form factor enthusiasts as well. In fact, I wouldn’t be surprised if ASUS steps out with a Lunar Lake NUC at some point. I’ll be poking around for signs of that on the Computex show floor this week, though maybe that’s just a pipe dream at this point.

Regardless, taking these Lunar Lake specs and features at face value tells me that Qualcomm and AMD both have a formidable premium thin-and-light laptop competitor to contend with here from Intel. The questions that remain are how Lunar Lake will match up in terms of power consumption and, perhaps, multi-threaded performance, where Qualcomm Snapdragon X Elite and AMD Ryzen AI 300 series CPUs will both offer up to 12 cores versus 8 for Intel Lunar Lake. In any event, those of us in tech are in for a very busy Q4 season, with virtually all of these mobile platform contenders hitting at roughly the same time. Get out the popcorn, folks. There’s a laptop slugfest about to ensue, and Intel’s Lunar Lake looks ready for action.
