It’s conference season for tech companies, and late last month I wrapped up a whirlwind of travel with a red-eye to Boston for Think 2024, IBM’s flagship annual conference. The trip was well worth it.

Think is always informative, but this year its themes aligned closely with my ongoing conversations with enterprise customers, confirming that we have reached an inflection point in AI. Last year was marked by build-out and construction; this year, enterprises are starting to see benefits from their AI investments. In a short period, AI—especially generative AI—has moved from a nascent field of study for most enterprises to a set of real-world technologies that positively affect company productivity and our lives in general.

Krishna’s Strategic Outlook Across Multiple Technologies

In his opening keynote, CEO Arvind Krishna highlighted Technology Atlas, a concept IBM introduced last year that looks at the multi-year confluence of hybrid cloud, artificial intelligence, quantum computing, automation, data and security through the lens of six technology roadmaps. These technologies are reaching a point where they can impact business by boosting productivity through automation, innovation and scale in a way that we couldn’t have imagined a few decades ago, or in some cases even a few years ago.

Krishna emphasized the intersection of hybrid cloud and AI as an underlying macro trend that creates an inflection point and opportunity for business improvement. Technology has always been used for productivity via automation, helping enterprises become lean. But today there is a shift from being lean to gaining revenue, scale and even more market share. That is a big, albeit subtle, shift.

I have to credit Krishna for focusing on hybrid cloud and AI. Four years ago, when Krishna took the helm at IBM, the hybrid cloud made a lot of sense, and I was personally embroiled in that debate then. Meanwhile, AI had been implemented for years with machine learning, neural networks and advanced analytics. But Krishna saw the trend line and was early out of the gate about a year ago with the IBM watsonx platform.

AI, The Topic On Everyone’s Mind

The keynote quickly zeroed in on AI, which was the focus of IBM’s announcements at the show. Krishna did not hold back from comparing AI with other fundamental technologies that have historically helped the world advance and increase GDP, such as the steam engine, electricity and the Internet. Krishna mentioned that AI will add an estimated $4 trillion annually to U.S. GDP by 2030, a staggering number that reflects the value likely to be accrued by all AI technology users. To get there, however, he also called on history to point out that experimentation is essential but insufficient. What is needed is a big shift to deployment, which is at the heart of IBM’s strategy.

Delivering real benefits from deployment requires working at scale—expanding from small projects to enterprise scale in a systemic way. For example, truly unlocking productivity inside an enterprise involves connecting everything from the supply chain to the front end to omnichannel marketing and distribution. In parallel, AI deployed across the enterprise will unlock productivity improvements using assistants that augment human skills and insights.

With these big themes as the backdrop, Krishna set the stage to drill into the new announcements.

Expansion Of IBM’s Automation Portfolio

As the technology estate increases across multiple public clouds, SaaS and in-house applications, it is time to bring AI to IT operations to better manage that sprawl. Fed with the right data, AI can help monitor the status of just about any aspect of operations and diagnose when something goes wrong.

The new IBM Concert, a generative AI-powered tool set, aims to identify, predict and resolve issues, significantly reducing risks and streamlining compliance processes. To put this tool in context, Concert can be thought of as an observability platform. Powered by watsonx.ai, IBM Concert integrates with existing systems and connects with data in the cloud, source repositories, CI/CD pipelines and other observability solutions. The result is a 360-degree view of connected applications. Concert eliminates unnecessary tasks and empowers IT teams to be more informed, faster and more responsive, in some cases tackling issues before they even arise. My colleague Will Townsend has written an analysis that goes into more detail about Concert’s capabilities and some of the high-potential use cases it addresses.

Accelerating The Deployment Of AI Solutions

Reacting to feedback from watsonx.ai users who said, “We need an assistant—we need something packaged up,” IBM has launched a suite of new AI Assistants designed to accelerate learning and productivity. These include the watsonx Code Assistant for Enterprise Java Applications (planned for release in October); the watsonx Assistant for Z, which allows people with no prior Z experience to quickly learn and become productive on that platform; and the watsonx Assistant for Code, which works with both COBOL and Ansible. And these are just the offerings on the technical side; there are other watsonx Assistants for business functions including customer service, human resources and marketing.

On top of all these domain-specific products, this week IBM launched the Assistant Builder, which enables companies to build their own custom assistants beyond those provided by IBM.

Advancing Open-Source Innovation Around LLMs

There is a lot of focus today on evaluating AI models and determining the right model for the right use case. From a data perspective, an enormous amount of public data has made its way into language models. Conversely, almost no enterprise data has made its way into models because, until recently, there was no way to incorporate enterprise data into an LLM safely.

With its Granite AI models, however, IBM provides enterprises with openness and transparency for their data, and even indemnification. (My colleagues and I have written extensively about these aspects of IBM’s approach to AI; see, for example, this overview I wrote early this year.) This week, IBM announced that it will open-source the Granite models, which, at 3 billion to 34 billion parameters, sit in the sweet spot in terms of size for custom enterprise uses. Open-sourcing these models, which outperform other open-source models on the HumanEvalPack benchmark, is a big win for developers. I’ll add that IBM deserves credit not only for open-sourcing the models, but also for being the first company to provide its customers with indemnification, a move that competitors have since copied.

Simplifying AI model training is a pressing issue for enterprises because it is difficult and expensive. The new InstructLab appears to be a novel open-source approach that streamlines training and potentially reduces costs. IBM is launching InstructLab with its Red Hat unit, enabling developers to quickly build models tailored to specific business domains or industries using in-house data. These open-source contributions will be integrated with watsonx.ai and the new Red Hat Enterprise Linux AI solution. RHEL AI will feature an enterprise-ready version of InstructLab as well as the open-source Granite models. While we are still researching InstructLab, it looks like a happy medium between creating your own models from scratch and endlessly fine-tuning them.

Expanded Ecosystem Access To Watsonx

IBM has also invested heavily in partnerships and integrations to offer customers the choice and flexibility to bring third-party models into watsonx and to enable software companies to embed watsonx capabilities. This reflects IBM’s understanding that, in the realm of generative AI, something more than single-vendor approaches is required. More than that, giving enterprise customers a real range of choices is necessary to help them drive innovation, customize models for specific business needs, optimize costs and mitigate model risks. IBM’s partners for this effort include AWS, Adobe, Meta, Microsoft, Mistral AI, Palo Alto Networks, SAP, Salesforce and SDAIA. Note that, as with its own models, the company indemnifies its clients if they use third-party models on its platform.

As AI evolves, governance, meaning the enforcement of policies and standard practices, has become a priority. This explains why IBM has partnered with AWS to integrate its AI governance service, watsonx.governance, with Amazon’s SageMaker service, which builds, trains and deploys machine learning models. The integration provides risk management and regulatory compliance for AI/ML models.

The longstanding link between IBM and Adobe is another example of IBM’s investment in partnerships and integrations. Recent output from this partnership is the integration of Adobe Experience Platform with IBM watsonx. Adobe Experience Platform, which enables businesses to centralize and standardize customer data from disparate sources, is now augmented with IBM watsonx’s AI capabilities to provide more accurate and actionable insights.

IBM’s strategy is to make watsonx ubiquitous across partners both large and small by making it easy to embed generative AI. It is achieving this by making the three components of watsonx available to ISVs and MSPs: watsonx.ai (tooling for foundation models), watsonx.data (an open data store for generative AI) and watsonx.governance (ensuring responsible implementation).

Making AI Easier To Implement For Enterprises

If I had to sum up Think 2024 in a phrase, I would say that IBM is intent on making AI real for enterprises. Under Krishna’s leadership, IBM was the first to market for enterprise AI and has kept on adding capabilities.

From an enterprise perspective, IBM could make an attractive starting point for implementing AI in a serious way, as opposed to the prevailing approach (favored by Microsoft and Google, among others) of turning to a hyperscaler to perform training and inferencing in the cloud. With IBM, an enterprise could start with Red Hat, Granite and InstructLab and do its own AI work on-premises. Considering that 75% of enterprise data is still on-premises or at the edge, bringing the model to the data rather than the data to the model makes a lot of sense—starting with an open-source model trained on public data (such as Granite) and then tuning it to internal enterprise data.

There is a crucial difference between IBM’s approach and the “mega-LLM” approach adopted by hyperscalers such as Microsoft and Google. The closed LLM is fundamentally flawed for enterprises because the training data and development process are not publicly available. This lack of transparency in how these models arrive at their outputs makes identifying and addressing errors or potential biases difficult.

In contrast, IBM promotes an open-source approach to LLM development in which the underlying code and data used to train the models are publicly accessible. The aim is to foster collaboration and promote transparency, leading to broader adoption and improvement of LLMs and the technology that supports them.

InstructLab appears to be a great middle ground between the two extremes available to enterprises today. One extreme is a mega-LLM with retrieval-augmented generation that gets you 90% of the way there, while the other is a proprietary smaller model that is expensive and complex to pull off. With InstructLab, you can create your own tailored proprietary model and perform RAG at the same time, which is super impressive.
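To make the “mega-LLM with RAG” extreme concrete, here is a minimal sketch of the retrieval-augmented generation pattern: retrieve the private documents most relevant to a question and prepend them to the prompt, so a model that was never trained on that data can still answer from it. Everything here is an illustrative stand-in (the word-overlap retriever, the toy corpus); a real system would use vector embeddings, a vector database and an actual LLM call, and none of this represents anything IBM or the hyperscalers ship.

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Toy relevance: count how many query words appear in the document."""
    return len(tokens(query) & tokens(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context, then the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # A stand-in for private enterprise data the base model never saw.
    corpus = [
        "Invoices over $10,000 require VP approval.",
        "The cafeteria closes at 3 pm on Fridays.",
        "Expense reports must be filed within 30 days.",
    ]
    print(build_prompt("Who must approve large invoices?", corpus))
```

The key design point is that RAG changes the prompt, not the model: the weights stay frozen, which is why it is cheap to adopt but caps out short of a model actually tuned on your domain—the gap InstructLab aims to close.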

Additionally, IBM’s developer-centric motion could unlock a new wave of innovation. By embedding open-source models into RHEL AI, InstructLab will meet the needs of millions of Linux users around the world. This is just one notable way that IBM gives developers a toolkit and capabilities that didn’t exist until now. As for what kind of creativity and breakthrough thinking will open up—the sky’s the limit.

I noted earlier that Krishna emphasized the intersection of hybrid cloud and AI as an underlying macro trend. AI is accelerating the execution of hybrid cloud because the enterprises implementing it find that you can go from pilot to production to scale only with a robust architecture. IBM’s open-source approach to AI, like its approach to Linux and OpenShift, should make AI more economical as the technology becomes ubiquitous.

When I visit any organization worldwide today, the technical community is enthusiastic and passionate about understanding and contributing to AI. IBM’s recent announcements tap into this energy, giving the developer community—and ultimately business users—a vehicle for making unique contributions and providing a continuous path to innovation.
