Software applications flow. While some application functions and data services sit prominently in the upper tier of a codebase, others form the middle-tier flowstream that connects and networks software components together… and then there are the embedded lower-tier parts of an IT stack.
While one software team might prefer to hard code and work at the upper level for the majority of its workflows, others may realize that they don't have the skills or expertise needed to bring certain technologies to bear. Crucially, some technology functions are platform-level entities in and of themselves, so many developer and data science departments simply don't have the technical breadth to incorporate those functionalities cost-efficiently so that they work effectively, safely and securely.
The realm of real-time data streaming is (arguably) one of the more complex and extended technology practices that IT teams might reasonably find challenging at the lower (let’s say deeper) end of the total technology spectrum. Confluent wants to make that part easier.
Independent Software Vendors
As a real-time data streaming platform specialist, Confluent has now launched its Confluent OEM Program as a service for original equipment manufacturers (an industry term that originated in PC hardware, but now extends to the virtualized world of cloud and artificial intelligence) and also for managed service providers, cloud service providers and independent software vendors.
The idea is that Confluent's DNA covers the A-Z of data streaming. The company offers a complete data streaming platform for Apache Kafka and Apache Flink (both open source real-time data technologies), so why not make it easier for the third-party business community to enhance software offerings with added data streaming services? It's not really a question, of course; that's exactly what this program is. With a license to globally redistribute or embed Confluent's enterprise-grade platform, partners can bring real-time products and Kafka offerings to market and monetize customer demand for data streaming. All with limited risk, says Confluent, because offering robust data streaming services is the company's bread and butter.
Apache Kafka & Apache Flink
Often used in concert to one degree or another, Apache Kafka offers an embeddable application programming interface known as the Kafka Streams API, designed to circumvent the need for a separate processing cluster, while Flink is an enterprise-grade data processing framework that embraces the cluster model. In a universe where both approaches are applicable, this is not as combative or counterintuitive as it may first sound. Confluent supports both of these methodologies, constructs and technologies.
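To make the distinction a little more concrete, the sketch below shows what the embeddable approach looks like in practice: a small Kafka Streams application that runs inside an ordinary Java process and filters one topic into another, with no separate processing cluster involved. The broker address and topic names are illustrative placeholders rather than anything specific to Confluent's program.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EmbeddedStreamsExample {
    public static void main(String[] args) {
        // Standard Kafka Streams configuration; the broker address and topic
        // names here are placeholders for illustration only.
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "embedded-filter-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Build a simple topology: read events, keep only the "important" ones
        // and write the result to an output topic. All of this runs inside the
        // application's own JVM, not on a separate processing cluster.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("input-events");
        events.filter((key, value) -> value != null && value.contains("important"))
              .to("important-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Shut down cleanly when the process exits.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

A Flink job doing the same filtering would instead be submitted to a Flink cluster (or a managed equivalent), which is precisely the trade-off the two projects represent.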
The Confluent OEM program seeks to make data streaming a high-margin part of a business with what the company promises is “expert implementation guidance and certification” to help partners launch enterprise-ready offerings. There are flexible commercial terms and ongoing technical support, with a possible offer of branded t-shirts for the luckiest involved, one imagines.
“As data-driven technologies like generative AI become essential to enterprise operations, the conversation has shifted from ‘if’ or ‘when’ a business will need data streaming to ‘what’s the fastest, most cost-effective way to get started these days,’” said Kamal Brar, senior VP for worldwide ISV activity across APAC, Confluent. “We help our partners unlock new revenue streams by meeting the growing demand for real-time data within every region they serve. Confluent offers the fastest route to delivering enterprise-grade data streaming, enabling partners to accelerate service delivery, reduce support costs and minimize overall complexity and risk.”
Data Streaming Is Coming
According to ISG Software Research, by 2026, more than three-quarters of enterprises’ standard information architectures will include streaming data and event processing.
When organizations need data streaming, they often turn to popular open source technologies such as Kafka and Flink. Their first projects in this space may involve experimental prototyping, and open source software provides easy on-ramps in terms of access and cost, while now also being backed by enterprise-grade services for all the established platforms.
"However, building and maintaining open source software, especially at scale, quickly becomes prohibitively expensive and time-consuming. On average, self-managing Kafka takes businesses more than two years to reach production scale, with ongoing platform development and operational costs exceeding millions of dollars per year," states Confluent, referencing a report from analyst house Forrester. Over time, solutions built with open source Kafka and Flink consume more and more engineering resources, which impacts a business's ability to focus on differentiation and maintain a competitive advantage.
“There’s been something of a legacy shift in terms of the whole OEM technology proposition,” explained Confluent’s Brar. “In legacy times, we may have seen OEMs build onto the stack of a parent ‘vendor’ – and even in modern times, if an OEM decides to use Kafka and Confluent, that’s still old school in a sense. In the modern age of cloud-native with embedded Software-as-a-Service technologies, it’s all about building real-time streaming ‘consumption services’ that take advantage of all the capabilities provided by an Apache Kafka based platform such as Confluent Cloud, that natively supports automated scaling and resilience, plus governance and stream processing as well.
“It’s worth remembering that other cloud services providers and hyperscalers do provide their own Kafka-based service layers for data streaming, so there is a choice out there. But Confluent’s engineering pedigree stems from a founding in the original source of Kafka [founder Jay Kreps built this technology], so we believe that we have the most performant and enterprise-ready Kafka offering possible,” added Brar. “If you break down how important that is, when a critical bug fix happens, it is immediately rectified in Confluent Cloud and while open source versioning will catch up, it can take weeks or months for this to manifest itself in other platform services.”
Brar wants to both validate and underline the whole argument here. Take an example like tiered cloud storage for Kafka (an approach that keeps lower-priority data on cheaper storage tiers while high-frequency data sits on faster, typically more expensive cloud infrastructure): he says Confluent offers these services ahead of the curve. This, crucially in the context of this story, means the company can offer real-time data streaming to OEMs in the most monetizable, enterprise-ready way possible. All of which, as Brar puts it, means Kafka for real-time data with the most appealing time-to-market and time-to-value possible.
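For readers who want a feel for what tiered storage means at the configuration level, here is a minimal sketch using the open source Apache Kafka AdminClient to create a topic that opts into tiered (remote) storage as standardized in the upstream project; Confluent Cloud handles this behind the scenes in its own way, so the topic name, partition counts and retention values below are purely illustrative assumptions.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TieredTopicExample {
    public static void main(String[] args) throws Exception {
        // The connection address is a placeholder; a real deployment would point
        // at a cluster whose brokers already have remote log storage enabled.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Keep roughly a day of data on local broker disks and let older
            // segments age out to the cheaper remote tier, retained for ~30 days.
            NewTopic topic = new NewTopic("clickstream-events", 6, (short) 3)
                    .configs(Map.of(
                            "remote.storage.enable", "true",   // opt this topic into tiered storage
                            "local.retention.ms", "86400000",  // ~1 day on local disk
                            "retention.ms", "2592000000"));    // ~30 days total retention

            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```

The effect is that only around a day of data has to live on local broker disks, while the rest of the retention window is served from cheaper object storage.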
Dongliang Guo, VP of international business and head of international products & solutions at Alibaba Cloud Intelligence, a Confluent customer, has said that customer demand for data streaming has skyrocketed as businesses strive for a competitive edge through application modernization, real-time analytics and AI. Guo says that with Confluent, his firm was able to deliver an enterprise-grade managed Apache Kafka service on Alibaba Cloud with minimal time and engineering effort.
Breaking Burdensome Binds
The Confluent OEM Program is designed to alleviate the “burden” of self-managing open source technologies (as the company likes to put it), while also going beyond just Kafka and Flink with additional services and functions. Confluent is adamant about its ability to give managed services providers a hassle-free solution for unlocking data streaming across AI, real-time analytics and application modernization projects. The company says it simplifies data streaming by “eliminating the operational complexities” of open source deployments, accelerating delivery times and providing expert support. Secure, governed data streams can be made available on premises, at the edge, and in the cloud.
This program of benefits includes design review and development support, so that customers can build a data streaming offering with architectural guidance and hands-on help from Confluent's team. It also provides a complete, ready-to-use data streaming platform including more than 120 Kafka connectors, Flink stream processing, enterprise-grade security and data quality controls and cloud-based monitoring. Confluent says it will provide certification and be there to bring committer-led Kafka and Flink support to a business to handle customer questions or issues.
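For context on what "Flink stream processing" looks like from a developer's point of view, here is a minimal, self-contained Flink DataStream job in Java; the sample readings and the alert threshold are invented for illustration and are not part of the Confluent program itself.

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkStreamExample {
    public static void main(String[] args) throws Exception {
        // A Flink job is defined against an execution environment and then
        // submitted to a cluster (or run locally during development).
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A tiny in-memory source stands in for a real Kafka topic here,
        // purely so the example is self-contained.
        env.fromElements(12.5, 99.1, 47.0, 150.3)
           .filter(reading -> reading > 50.0) // keep only "high" readings
           .map(new MapFunction<Double, String>() {
               @Override
               public String map(Double reading) {
                   return "ALERT: reading above threshold: " + reading;
               }
           })
           .print(); // write results to stdout

        env.execute("illustrative-threshold-alerts");
    }
}
```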
What Operational Complexities?
What are those operational complexities being eliminated here then?
This is all about services to pick up the latest bug fixes, manage the need for upgrades and work to uphold established delivery guarantees and service level agreements. That's a lot of backend work, so there is also proactive support that runs continuous analysis of cluster metadata (in real time, naturally) and alerts a customer to potential problems with cloud clusters before they escalate. It all means that Confluent support engineers have insight into the critical context of any given Kafka environment, something that would take a customer a lot longer to achieve.
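As a rough illustration of the kind of cluster metadata such analysis draws on (Confluent's own monitoring tooling is proprietary, so this is only a sketch using the standard Kafka AdminClient), the snippet below pulls the basic broker and topic information a support engineer would want visibility into; the connection details are placeholders.

```java
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DescribeClusterResult;

public class ClusterMetadataPeek {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details for illustration only.
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Basic cluster metadata: which brokers are alive and which is the controller.
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Cluster ID: " + cluster.clusterId().get());
            System.out.println("Controller: " + cluster.controller().get());
            System.out.println("Live brokers: " + cluster.nodes().get().size());

            // Topic inventory; a monitoring service would collect this continuously
            // and check partition and replica health rather than just printing it.
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("Topics: " + topics);
        }
    }
}
```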
If we accept that data streaming and the wider use of real-time data are a growing part of the total IT universe, then the pain points where an organization like Confluent works to (as it so often likes to say) eliminate operational complexities could help us see where other cloud-native technologies will start to be packaged and presented to the market in this way.