Technology is expensive. Enterprise-level software and hardware systems rarely come cheap, and while cloud-based Software-as-a-Service (SaaS) offerings continue to promise more flexible resource control, people rarely talk about cheap cloud, great-value cloud or discount cloud specials. This perennial truth has driven the tech industry to look for ways to control the cost of operations (Ops) more effectively using (guess what) dedicated technology services designed to analyze and manage the financial (Fin) side of modern computing.

This, of course, is FinOps.

The Linux Foundation’s creation of the FinOps Foundation has enabled us to enjoy its official definition of this practice. The foundation states that, “FinOps is an operational framework and cultural practice which maximizes the business value of cloud, enables timely data-driven decision making and creates financial accountability through collaboration between engineering, finance, and business teams.”

But if there is a paradox in FinOps, it is the fact that not every firm does it, or – if they do practice a degree of this process – they don’t do it enough, i.e. to the point where it actually starts paying for itself. So what’s next for FinOps?

“As teams start to integrate SaaS and licensing costs into their operational frameworks, procurement teams stand to gain a lot from the strategies that FinOps teams use today,” said Kyle Campos, CTPO at CloudBolt Software. “That said, while it’s inevitable that procurement can benefit from FinOps, it would be putting the cart before the horse to tell these teams what to do when the FinOps practice is still proving its value.”

The elusive paradox

Campos notes that his firm’s recent CII report on “The Real State of FinOps” found that while FinOps adoption is massive, its impact remains elusive. The paradox is starting to unfold and straighten, but the discussion is perhaps one that should be raised more often in boardrooms.

“What comes next for FinOps teams is the building of AI-native application artifacts [see directly below] by a central platform engineering team,” said Kevin Cochrane, chief marketing officer at Vultr. “Currently, enterprises are significantly reallocating budgets to expand into Graphics Processing Units (GPUs) and the development of new AI-powered applications without the same infrastructure and support to properly provision new cloud infrastructure and properly manage costs against revenue outcomes.”

As defined by TechTarget, an artifact is a byproduct of software development that helps describe the architecture, design and function of software. “Artifacts are like roadmaps that software developers can use to trace the entire software development process,” clarifies TechTarget technical writer Alexander Gillis.

Cochrane suggests that enterprises need a platform engineering team to integrate and product-manage a central hub of models and artifacts, and to pre-build Infrastructure-as-Code (IaC) templates that any downstream team can use to spin up and provision GPU resources for training, tuning and inference from those pre-built, pre-known models and artifacts. Only when this approach is taken, he says, can FinOps start controlling the seemingly unbounded costs associated with new generative Artificial Intelligence (gen-AI) initiatives that today are still often associated with promising but uncertain outcomes.
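To make the idea concrete, here is a minimal sketch of what such a pre-built template might look like, written in Python rather than any particular IaC tool; the class and field names (GpuTemplate, render and so on) are purely illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass, field

# Illustrative only: a pre-built, parameterized "template" that a platform
# engineering team might publish so downstream teams provision GPUs the same
# way every time, with cost controls and attribution tags baked in up front.

@dataclass
class GpuTemplate:
    name: str                       # e.g. "llm-fine-tune-small"
    gpu_type: str                   # e.g. "A100-80GB"
    gpu_count: int
    max_hourly_budget: float        # hard cost ceiling agreed with FinOps
    cost_center_tags: dict = field(default_factory=dict)

    def render(self, team: str, workload: str) -> dict:
        """Expand the template into a concrete provisioning request,
        stamping every resource with cost-attribution tags."""
        return {
            "instance_type": f"gpu.{self.gpu_type}.x{self.gpu_count}",
            "budget_ceiling_per_hour": self.max_hourly_budget,
            "tags": {**self.cost_center_tags, "team": team, "workload": workload},
        }

# Downstream usage: a data-science team asks for a known, pre-costed shape.
catalog = {
    "inference-small": GpuTemplate("inference-small", "L4", 1, 2.50,
                                   {"env": "prod", "owner": "platform-eng"}),
}
request = catalog["inference-small"].render(team="search-ml", workload="reranker")
print(request)
```

The point of the pattern is that the downstream team never chooses GPU shapes or budgets ad hoc; it can only consume shapes the platform team has already costed.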

Beyond old-school KPIs

One of the central problems with enterprise IT cost management is that traditional Key Performance Indicators (KPIs) and metrics require the same kind of reinvention that we’re seeing happening at the technology platform level. Monthly budget reviews and forecasts are starting to sound somewhat old-school in a world where we need to be able to look at continuous computing costs in real-time. This way, the software engineering team and operations team can work with the finance and procurement teams to optimize on the fly. Proactive and real-time cost management emerges as a critical component, replacing traditional cycles aligned with monthly invoicing, budgeting and forecasting.
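As a rough illustration of what “continuous” means here, the sketch below compares spend accrued so far today against a prorated daily allowance instead of waiting for a monthly review; fetch_spend_today is a hypothetical stand-in for whatever billing export or cost API an organization actually uses.

```python
import datetime as dt

MONTHLY_BUDGET = 120_000.00   # agreed with finance, in account currency

def fetch_spend_today() -> float:
    """Hypothetical: return spend accrued so far today from a cost feed."""
    return 4_950.00

def prorated_daily_budget(today: dt.date) -> float:
    # Last day of the current month gives the number of days to prorate over.
    last_day = (today.replace(day=28) + dt.timedelta(days=4)).replace(day=1) - dt.timedelta(days=1)
    return MONTHLY_BUDGET / last_day.day

def check_burn_rate() -> None:
    today = dt.date.today()
    spend = fetch_spend_today()
    allowance = prorated_daily_budget(today)
    if spend > allowance:
        # In practice this would page engineering and FinOps, not print.
        print(f"ALERT: {spend:.2f} spent today vs daily allowance {allowance:.2f}")
    else:
        print(f"OK: {spend:.2f} of {allowance:.2f} daily allowance used")

check_burn_rate()
```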

“Collaboration between engineering, finance and business teams becomes imperative to ensure the delivery of business value from cloud investments,” said Michael Christoforu, senior director, product development at SS&C Blue Prism. “FinOps serves as the bridge, facilitating this collaboration and enabling unified action between engineers and finance, dynamic communication of cloud costs to business units, agreement and reporting on new metrics such as cost efficiency ratios, variances and resource efficiency, predictability in budgets and forecasts through proactive cost management and direct linkage of cloud costs to business outcomes, such as the revenue generated.”
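For readers unfamiliar with the metrics Christoforu mentions, the short sketch below shows one plausible way to compute them; the figures and exact formulas are illustrative assumptions rather than a standard.

```python
# Illustrative calculations of cost efficiency ratio, budget variance and
# resource efficiency. All numbers are made up for the example.

cloud_cost = 85_000.00        # monthly cloud spend attributed to a product
revenue = 1_020_000.00        # revenue that product generated in the same period
budget = 90_000.00
provisioned_vcpu_hours = 50_000
used_vcpu_hours = 31_500

cost_efficiency_ratio = cloud_cost / revenue             # spend per unit of revenue
budget_variance_pct = (cloud_cost - budget) / budget * 100
resource_efficiency_pct = used_vcpu_hours / provisioned_vcpu_hours * 100

print(f"Cost efficiency ratio: {cost_efficiency_ratio:.3f}")
print(f"Budget variance: {budget_variance_pct:+.1f}%")
print(f"Resource efficiency: {resource_efficiency_pct:.1f}%")
```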

A proactive shift in culture

In essence, the success of FinOps lies in its ability to foster collaboration and enable a shift in culture towards proactive cost management between the participating teams. It involves ensuring accountability for cloud usage and costs at ‘the edge’, agreeing on the balance between cost, quality and performance/speed, and aligning cloud investments with business objectives. This, in turn, is argued to drive organizational success in the cloud era.

“In navigating the evolving landscape of FinOps and cloud cost management, we should recognize the foundational shifts occurring within organizations,” urged Christoforu. “At its core, cloud cost management revolves around understanding and optimizing usage, a seemingly simple concept that unfolds into a complex array of data. As organizations delve into the granular details of individual transactions, they are met with vast quantities of data.”

Because operating within a multi-cloud infrastructure adds another layer of complexity (different cloud platforms offer a variety of tools, features and functions), Christoforu also reminds us that it becomes crucial to achieve uniformity in cost analysis, particularly when spanning hyperscaler platforms. While the native FinOps tools offered by cloud providers are of interest, standardization of cost and usage data across platforms remains a priority, enabling organizations to visualize the benefits of cloud in a more consistent way.
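A simplified sketch of that normalization step might look like the following; the provider-side field names are deliberately generic assumptions, not exact billing export formats.

```python
# Map each provider's billing line items onto one common schema so costs can
# be compared consistently across platforms.

def normalize(provider: str, record: dict) -> dict:
    if provider == "aws_like":
        return {"service": record["product_name"],
                "cost": float(record["unblended_cost"]),
                "currency": record["currency"],
                "tags": record.get("resource_tags", {})}
    if provider == "gcp_like":
        return {"service": record["service_description"],
                "cost": float(record["cost"]),
                "currency": record["currency_code"],
                "tags": record.get("labels", {})}
    raise ValueError(f"unknown provider: {provider}")

raw = [
    ("aws_like", {"product_name": "object-storage", "unblended_cost": "412.07",
                  "currency": "USD", "resource_tags": {"team": "data"}}),
    ("gcp_like", {"service_description": "object-storage", "cost": "389.90",
                  "currency_code": "USD", "labels": {"team": "data"}}),
]
unified = [normalize(provider, record) for provider, record in raw]

total_by_service = {}
for row in unified:
    total_by_service[row["service"]] = total_by_service.get(row["service"], 0.0) + row["cost"]
print(total_by_service)
```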

As organizations now start to embrace FinOps, we have been through the shift from Capital Expenditure (CapEx) to Operational Expenditure (OpEx) and seen an increasing percentage of new applications deployed as cloud-native apps. Subbiah Sundaram, SVP of products at Hycu, says that the next phase for FinOps-centric organizations is to accurately predict, manage and forecast the costs of cloud computing and business applications consumed as SaaS.

“When running on-premises [private cloud] datacenters, no matter how capital-intensive, it was always comparatively straightforward to assess current costs and predict future costs in compute, storage and networking,” said Sundaram. “Cloud services, however, are decoupled and each service can be heavily configured – for cost savings or increased performance, scale etc. The challenge here is that FinOps must understand cloud architecture well and have a complete map of all services (from compute and serverless to networking and egress). This can be a challenge to map, let alone monitor and optimize initially. This is why we’ve seen such pressure on engineers themselves to build with cost-control in mind.”

Continuous FinOps evolution

What all this points to is the notion that FinOps itself will have to continually evolve to understand the individual costs of services and the variability of those costs. It will also need monitoring that is adaptable enough to catch spikes and avoid surprises caused by misconfiguration or misuse.
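One plausible (and deliberately simple) way to catch such spikes is to compare each day’s spend against a rolling baseline, as in the sketch below; the threshold and figures are illustrative assumptions.

```python
from statistics import mean, stdev

# Flag any day whose cost sits well above a rolling baseline, which is often
# how misconfigurations or misuse first become visible.

daily_costs = [3100, 3180, 2990, 3250, 3120, 3300, 7150]  # final value is the anomaly

def spiked(costs: list[float], window: int = 6, sigmas: float = 3.0) -> bool:
    baseline = costs[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return costs[-1] > mu + sigmas * sigma

if spiked(daily_costs):
    print("Cost spike detected: investigate recent deployments and configuration changes")
```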

Hycu’s Sundaram points to a FinOps future that champions real-time, accessible monitoring; without it, he argues, it is very difficult to track spend effectively. He also says it will be necessary to understand where costs come from, something that goes hand in hand with real-time monitoring.

“It’s not just about understanding total changes in spend, but where the spend is coming from – from user level to entire workloads and architectures. This is the only way for FinOps and engineering teams to continue working on reducing monthly costs and reduce burn rates,” said Sundaram. “Also here, integrating live cloud consumption in an accessible and integrated way via Enterprise Resource Planning (ERP) systems and accounting software is needed. These costs, and the understanding of where they came from, need to be embedded into standard operational processes. This is the only way for organizations to manage, control and optimize cloud spend with FinOps. If there is not a native integration, nor a real-time data feed, this will present significant challenges.”
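The sketch below illustrates the attribution and export idea in the simplest possible terms: roll tagged cost records up to team and workload, then emit a flat file that an ERP or accounting system could ingest. The tag names and CSV layout are assumptions for illustration only.

```python
import csv
import io

# Roll tagged cost records up from individual resources to teams and
# workloads, then export the result in a flat format for downstream systems.

records = [
    {"resource": "vm-0231", "cost": 112.40, "tags": {"team": "search-ml", "workload": "reranker"}},
    {"resource": "bucket-7", "cost": 58.10,  "tags": {"team": "search-ml", "workload": "training-data"}},
    {"resource": "vm-0488", "cost": 301.75, "tags": {"team": "payments", "workload": "fraud-scoring"}},
]

rollup: dict[tuple[str, str], float] = {}
for r in records:
    key = (r["tags"].get("team", "untagged"), r["tags"].get("workload", "untagged"))
    rollup[key] = rollup.get(key, 0.0) + r["cost"]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["team", "workload", "cost"])
for (team, workload), cost in sorted(rollup.items()):
    writer.writerow([team, workload, f"{cost:.2f}"])
print(out.getvalue())
```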

We may never start talking about ‘great value discount clouds’ in the realm of modern computing, but we may – with enough foundational FinOps fortitude – start to think about truly cost-effective cloud that leads us towards that fabled land called ‘outcomes-based IT pricing’ where an organization pays for technology based upon what it achieves with it. Okay, that last part really is mostly far-fetched for now, but at least FinOps might help us balance the books better along the way.
