How Apple will improve the next iPhone 16 and iPhone 16 Pro with artificial intelligence is one of 2024’s big questions. Now we know more about Apple’s plans to use AI in the iPhone, its approach, and how it will sell it to consumers.

Apple has submitted eight large language models to the Hugging Face hub, an online repository for open-source AI projects. LLMs are the models that power generative AI applications: trained on vast data sets, they process an input and iterate, token by token, until they arrive at a suitable response.
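That iterative, token-by-token loop can be sketched in miniature. The following is a toy illustration only, not Apple's code: `next_token` is a hypothetical stand-in for the neural network that a real LLM uses to predict the next token.

```python
# Toy sketch of autoregressive generation, the loop at the heart of
# every LLM. A real model predicts the next token with a neural
# network; here `next_token` is a hypothetical canned lookup.

def next_token(tokens: list[str]) -> str:
    # Trivial "model": fixed continuations keyed on the last token.
    continuations = {
        "The": "iPhone",
        "iPhone": "runs",
        "runs": "on-device",
        "on-device": "AI",
        "AI": "<end>",
    }
    return continuations.get(tokens[-1], "<end>")

def generate(prompt: list[str], max_steps: int = 10) -> list[str]:
    tokens = list(prompt)
    # Iterate until the model signals completion or a step limit is hit.
    for _ in range(max_steps):
        token = next_token(tokens)
        if token == "<end>":
            break
        tokens.append(token)
    return tokens

print(" ".join(generate(["The"])))  # The iPhone runs on-device AI
```

A production model does exactly this, except each `next_token` call runs billions of multiply-adds, which is why model size and hardware matter so much.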

The larger the LLM, the more parameters it has and the more capable it tends to be, so it should not be surprising that these models were originally built to run in the cloud and be accessed as an online service. More recently there has been a push to create LLMs with a small enough footprint to run on a mobile device.

This requires new software techniques, but it also demands hardware capable of more efficient processing. Android-focused chipset manufacturers such as Qualcomm, Samsung, and MediaTek already offer system-on-chip packages optimised for generative AI. It is expected that Apple will do the same with its next generation of A-series chips, allowing more AI routines to run on this year’s iPhone 16 family rather than in the cloud.

Running models on the device means user data would not need to be uploaded to the cloud for processing. As the public becomes more aware of AI privacy concerns, this will become a key marketing point.

Alongside the code of these open-source efficient language models, Apple has published a research paper on the techniques used and the rationale behind its choices, including the decision to open-source all of the training data, evaluation metrics, checkpoints, and training configurations.

This follows the release of another LLM research paper from Cornell University, working alongside Apple’s research and development team. That paper described Ferret-UI, an LLM designed to understand a device’s user interface and what is happening on screen, and to offer numerous interactions in response. Examples include using voice to navigate to a well-hidden setting, or describing what is shown on the display for those with impaired vision.

Three weeks after Apple released the iPhone 15 family in 2023, Google launched the Pixel 8 and Pixel 8 Pro. Proclaimed as the first smartphones with AI built in, the handsets signalled a rush to use and promote the benefits of generative AI in mobile devices. Apple has been on the back foot, at least publicly, ever since.

The steady release of research papers on new techniques has kept Apple’s AI plans visible to the industry, if not yet to consumers. By providing the open-source code for these efficient language models and emphasising on-device processing, Apple is quietly signalling how it hopes to stand out against the raft of Android-powered AI devices, even as it talks to Google about licensing Gemini to power some of the iPhone’s AI features.

Now take a closer look at the leaked design of the iPhone 16 and iPhone 16 Pro…
