Cloud is flexible. From that core reality a second truism follows: there is no R in cloud, except there is. Because the cloud computing model of Software-as-a-Service (SaaS) applications and data services is so inherently movable, malleable and manageable, we are offered an array of re-this and re-that options that give us the chance to reap the maximum business benefit possible (and environmental benefit too, we hope) from our cloud computing deployments.

But how many of these R-factor processes are there, how important are they and what do we need to know about them?

#1 Re-factoring

We talk about cloud refactoring when we need to move things around. If a cloud deployment could benefit from being moved closer to a new Database-as-a-Service (DBaaS) offering, then we might consider refactoring one or more cloud instances to enable them to scale up or outwards more easily.

This process works best when the cloud workload itself is typified by a high degree of internal decoupling and modularity i.e. the workload has clearly defined and separated areas for data storage, for big data analytics, for Artificial Intelligence (AI) functions and so on. Cloud workloads dedicated to tasks such as Content Management System (CMS) work might be good candidates for refactoring, but only if they still perform in the same way once refactored.
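
To make the decoupling point concrete, here is a minimal Python sketch (purely illustrative, not Google tooling; all class and endpoint names are hypothetical) of why a modular workload is easier to refactor: because the CMS code depends only on a storage interface, repointing it at a managed DBaaS is a one-line change at composition time.

```python
# Hypothetical sketch: when storage, analytics and AI concerns sit behind
# narrow interfaces, one component can be repointed at, say, a managed DBaaS
# without touching the rest of the workload.
from typing import Protocol


class DocumentStore(Protocol):
    def save(self, doc_id: str, body: str) -> None: ...
    def load(self, doc_id: str) -> str: ...


class LocalDiskStore:
    """Original on-instance storage used before refactoring."""
    def __init__(self):
        self._docs = {}

    def save(self, doc_id, body):
        self._docs[doc_id] = body

    def load(self, doc_id):
        return self._docs[doc_id]


class ManagedDBaaSStore:
    """Stand-in for a managed DBaaS client; the endpoint is illustrative."""
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self._docs = {}  # a real client would call the remote service here

    def save(self, doc_id, body):
        self._docs[doc_id] = body

    def load(self, doc_id):
        return self._docs[doc_id]


class ContentManagementSystem:
    """The CMS depends only on the DocumentStore interface, not a vendor."""
    def __init__(self, store: DocumentStore):
        self.store = store

    def publish(self, doc_id, body):
        self.store.save(doc_id, body)


# Refactoring the storage layer becomes a one-line change at composition time:
cms = ContentManagementSystem(ManagedDBaaSStore("dbaas.example.internal"))
cms.publish("home-page", "<h1>Hello, cloud</h1>")
```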

“There’s no one-size-fits-all approach to migrating applications to the cloud. Sometimes, it’s faster and more cost-effective to lift-and-shift… and sometimes – either because of longstanding technical debt or the opportunity to serve customers in a completely novel way – a complete re-imagining is required,” said Stephen Orban, VP of migrations, ISVs and marketplace at Google Cloud. “We see customers leverage both of these approaches and everything in between. The benefits of large-scale refactoring can be immense, but also take longer.”

Google’s stance on refactoring no doubt stems from its work in the space, which has resulted in the company’s Anthos hybrid and multi-cloud platform technology. Broader than a standalone refactoring tool per se and part of GKE Enterprise, Google’s cloud-centric container platform, Anthos is designed to provide a means of adopting ‘modern’ cloud container orchestration without requiring businesses to abandon their existing on-premises datacenter infrastructure for some applications and services.

#2 Re-secure

For all the many benefits of cloud-native computing, organizations should know that traditional security methods and tools will be disrupted, which is why (especially after cloud migration and refactoring) we need to re-secure.

“Assume everyone has access to everything, assume it is distributed and could be anywhere, now protect it. Network isolation is no longer sufficient and there are no locked doors,” said Brent Schroeder, global CTO at SUSE. “Access to every resource, application and API needs to be structured on the basis of ‘least privileged access’. The need to encrypt everything is also a mantra in this context: data at rest and data in flight must be encrypted. It is all the more critical that these strategies are always applied in the shared and distributed ecosystem of cloud environments. Still, many industries with highly sensitive data don’t trust utilizing the shared infrastructure of the cloud. The emergence of Confidential Computing, which can now encrypt and protect data in use to ensure clear data is never exposed, not even to the cloud provider themselves, opens the door for these more sensitive industries and applications.”

Schroeder also insists that we should assume everything is dynamic and will change, so no protection should be static. This drives numerous needs – some technological, but most importantly around people and processes. Security needs to become a part of all development, release and change control processes, not an afterthought.

“Assume new technologies will render traditional tools ineffective. Cloud-native introduces new communication between processes that goes undetected by traditional network monitoring tools, impacting the ability for deep packet inspection and data loss prevention techniques. Ensure you seek out new generation solutions that cover all the bases,” advised Schroeder. “Finally, even with all these threats protected against, assume you will be breached. A zero trust security method should be overlaid across all of the above: never trust, always verify. Zero trust helps prevent loss even when a breach occurs, providing that last line of defense.”
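
To illustrate Schroeder’s “never trust, always verify” and least-privilege points, here is a minimal, hypothetical Python sketch (not SUSE code; the identities, resources and policy are invented) in which every request is checked against an explicit allow-list and everything else is denied by default, regardless of where on the network it originated.

```python
# Hypothetical sketch of "never trust, always verify": every call is checked
# against an explicit least-privilege policy, regardless of network origin.
from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    identity: str   # verified identity, e.g. taken from a signed token
    resource: str   # what is being accessed
    action: str     # read / write / delete


# Least-privilege policy: grants are explicit and narrow; anything not listed
# here is denied by default.
POLICY = {
    ("billing-service", "invoices-db", "read"),
    ("billing-service", "invoices-db", "write"),
    ("reporting-job", "invoices-db", "read"),
}


def authorize(req: Request) -> bool:
    """Deny by default; allow only exact (identity, resource, action) grants."""
    return (req.identity, req.resource, req.action) in POLICY


# The reporting job may read invoices but never delete them:
print(authorize(Request("reporting-job", "invoices-db", "read")))    # True
print(authorize(Request("reporting-job", "invoices-db", "delete")))  # False
```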

#3 Re-skilling

The Confederation of British Industry (CBI) suggests that some 94% of UK workers will need reskilling by 2030 – and the figure would surely be similar in the USA or indeed any modern economy. This high figure is widely agreed to be predominantly due to technologies like AI – typically embedded in cloud applications – changing the way businesses operate… and, as a result, the skills they need. Recent research by Workday found that nearly three in four (72%) business decision-makers believe their organizations lack the skills to fully implement AI.

“Reskilling to be able to use and make the most of AI is imperative and mutually beneficial for companies and employees: companies can quickly adapt to change while employees can bolster their internal mobility. For both, it offers access to new areas of expertise which can drive business performance and efficiency – whether it’s utilising predictive analytics, or automating administrative tasks,” said Dan Pell, vice president and country manager for UKI region at Workday.

Pell explains that his team has carefully considered its own approach to employee reskilling by offering gigs (short-term assignments that align with employee interests and desired skill sets) to all staff members. He insists that the technique works: data suggests 95% of gig participants said they were able to build on existing skills or develop new ones. Furthermore, internal movement and progression through the company was nearly 50% greater for employees who participated in a gig than for those who did not.

#4 Re-building

As a business grows and its cloud infrastructure balloons, there comes a point when the CTO asks, ‘where’s our agility gone?’… and this is because early cloud-enabled flexibility and agility tend to bottleneck as the business scales. The combination of multi-cloud environments – sometimes segregated across a variety of increasingly complex digital services – adds up to increasing management time and engineering toil.

The trick to staying agile and staying up is to rebuild for operability, insists Mandi Walls, DevOps advocate at PagerDuty. “More specifically, when scaling cloud capabilities, this means building in automation as the business changes. The aim is a series of pivots, or rebuilds, to turn manual runbooks into delegated, self-service requests. To automate critical operational processes across cloud environments. To optimize security and compliance and strip back team toil as they let automation take the strain and the slowness out of planned rebuilding and post-incident repair,” said Walls.

She proposes that rebuilding is the foundation of another R word: resilience. This requires an AIOps layer so that the multitude of event-based triggers and running SaaS systems can be managed in real-time. Any ticket-based, purely human response can’t manage this at the speed of digital business.
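
As a purely illustrative sketch of the approach Walls describes (this is not PagerDuty’s API; the event names and services are hypothetical), the Python below turns a manual runbook step into code that an incoming alert can trigger automatically, falling back to a human only when no automated action exists.

```python
# Illustrative only (not PagerDuty's API): a manual runbook step expressed as
# code, wired to named events so remediation runs automatically and is logged.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runbook")


def restart_service(service):
    """The step a human used to perform by hand from a wiki page."""
    log.info("restarting %s at %s", service, datetime.now(timezone.utc).isoformat())
    # in a real environment this would call the platform's API or CLI


# Map alert names to automated runbook actions (all names are invented).
RUNBOOKS = {
    "checkout-latency-high": lambda: restart_service("checkout-cache"),
    "queue-depth-critical": lambda: restart_service("order-worker"),
}


def handle_event(event_name):
    """Route an incoming alert to its automated runbook, if one exists."""
    action = RUNBOOKS.get(event_name)
    if action is None:
        log.warning("no automated runbook for %s; paging a human", event_name)
        return
    action()


handle_event("checkout-latency-high")   # remediated automatically
handle_event("disk-full-on-legacy-vm")  # falls back to a human responder
```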

#5 Re-architect (aka cloud migration)

One of the benefits of migration of any type is that, in most cases, it requires an IT team to think about the reasoning and justification for the change. Cloud migration prompts this kind of thinking in many different ways, but the evaluation of architectural decisions is one that seems to come up less often. This is the paradox of cloud, in the view of Bill McLane, CTO of cloud at DataStax.

“Most of the early conversation around cloud migration focused on how to lift and shift existing applications and infrastructure to the cloud with as little change as possible. What this avoided, however, was the monumental value that a new, modern architecture might bring to the table in the process of moving to the cloud,” explained McLane. “Lift and shift makes a lot of sense on the surface, as it provides the ability to migrate existing applications with minimal effort, downtime and cost. But there is also a lost opportunity cost that occurs when migration doesn’t evaluate the decision around the existing architecture and doesn’t look for ways to modernize the foundational technology used by the applications.”

Because so many of our application design principles are tied to a given architectural decision, migration to the cloud provides a clear opportunity to modernize those decisions. Modernization initiatives are often the first thing to get cut when budgets are tightened, but we have seen very clearly over the last few years what neglecting modernization can cause.

“Cloud migration offers a special opportunity to evaluate the decisions of the past, validate those assumptions and modernize the approaches applications are being developed against,” explained DataStax’s McLane. “With the rapid introduction of generative AI, there is a clear need to modernize legacy infrastructures to enable agile data access, data distribution and application development. There is a need to support recent modern architectures like event-driven and microservices, but more importantly to adopt cutting-edge architectures like Retrieval Augmented Generation (RAG). Part of the cloud journey has to be an evaluation of the architectural decisions of the past, which can be used to feed the justification for and modernization of the architectures that will support the future.”
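
For readers unfamiliar with the Retrieval Augmented Generation pattern McLane mentions, here is a deliberately toy Python sketch: a naive keyword-overlap score stands in for a real embedding model and vector database, but the retrieve-then-augment flow is the same – fetch the most relevant documents, then prepend them to the prompt sent to a language model. The documents and query are invented.

```python
# Toy sketch of the Retrieval Augmented Generation (RAG) pattern: a naive
# keyword-overlap score stands in for a real embedding model and vector
# database, but the retrieve-then-augment flow is the same.
DOCUMENTS = [
    "Invoices are generated on the first business day of each month.",
    "Refunds are processed within five working days of approval.",
    "The legacy billing system is scheduled for retirement in Q4.",
]


def score(query, doc):
    """Crude relevance score: how many lowercase words the texts share."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


def retrieve(query, k=2):
    """Return the k documents most relevant to the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query):
    """Augment the user's question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


print(build_prompt("When are invoices generated each month?"))
```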

#6 Re-hosting

Let’s now take on a cloud R that typifies the reality of modern SaaS environments. As we have already suggested here on a number of levels, no one solitary, single, lonely cloud is generally enough; certainly, most enterprise organizations of any size will be running multiple databases across hundreds of applications in several world regions, so more than one cloud service is needed. This is why we have multi-cloud and why we need to be able to perform re-hosting functions.

The goal here is to perform re-hosting procedures without changing the way any software application is architected and constructed. Hybrid multi-cloud computing company Nutanix reminds us that its annual Enterprise Cloud Index survey now lists multi-cloud as the primary IT deployment model for most companies making any substantial use of cloud. This reality looks set to become even more pronounced according to Nutanix cloud architect Peter Chang, who says of cloud re-hosting and the process of getting applications to reside on cloud services that, “Only a small percentage of the workloads, perhaps 15% to 20%, have been migrated. [This is a challenge] for companies migrating multiple apps in parallel because it can take one to two years per migration, depending on the size of the application.”
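
Part of what makes re-hosting possible without re-architecting is keeping host-specific details out of the application code. The hypothetical Python sketch below (the variable names and defaults are invented) shows the familiar externalized-configuration approach: the same build runs on-premises or on any cloud because endpoints arrive via environment variables set by the deployment pipeline.

```python
# Minimal sketch of why re-hosting can leave application code untouched:
# host-specific details live in environment variables (the names and defaults
# below are invented), so the same build runs on-premises or on any cloud.
import os


def database_url():
    # On-premises this might point at a local server; after re-hosting the
    # deployment pipeline simply sets it to the cloud database endpoint.
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/app")


def object_store_location():
    return os.environ.get("OBJECT_STORE_URL", "file:///var/data/app")


if __name__ == "__main__":
    print("connecting to", database_url())
    print("storing assets in", object_store_location())
```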

Cloud computing technology consultant Dipti Parmar, writing in line with Nutanix’s Chang, explains that we also need to think of the cloud R selection pack as including the following elements:

#7 Retain

Keeping an application where it is, on-premises.

#8 Re-optimize

Undertaking processes which may include rewriting code in another programming language for greater efficiency and/or applying management processes to reduce over-provisioning of cloud clusters, containers and components.
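
As a rough illustration of the over-provisioning side of re-optimization (the workload names and figures below are invented sample data), a right-sizing check can be as simple as comparing what each container requests with what it actually uses:

```python
# Hypothetical right-sizing check: compare what each container requests with
# what it actually uses and flag over-provisioned candidates.
WORKLOADS = [
    # (name, requested CPU cores, average observed CPU cores)
    ("cms-frontend", 4.0, 0.6),
    ("analytics-batch", 8.0, 6.9),
    ("ai-inference", 2.0, 0.3),
]

HEADROOM = 1.5  # keep 50% spare capacity above observed usage


def recommend(requested, observed):
    """Suggest a new request: observed usage plus headroom, never above current."""
    return round(min(requested, observed * HEADROOM), 1)


for name, requested, observed in WORKLOADS:
    suggestion = recommend(requested, observed)
    if suggestion < requested:
        print(f"{name}: over-provisioned, reduce request {requested} -> {suggestion} cores")
```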

#9 Remediate

The same as re-platforming, often with a complete change of database.

#10 Repurchase

Replacing legacy applications that can’t be re-hosted.

#11 Re-observe

Applying some degree of Application Performance Management (APM) and wider observability techniques to the cloud estate to gain deeper insight into system stability, solidity and security.
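
As a minimal Python illustration of that observability idea (in practice these measurements would be shipped to an APM or observability platform rather than held in in-memory dictionaries; the handler is hypothetical), services can be wrapped so that latency and error counts are recorded on every call:

```python
# Minimal illustration of instrumenting handlers for latency and error counts;
# a real cloud estate would export these to an APM/observability platform.
import time
from collections import defaultdict
from functools import wraps

LATENCIES = defaultdict(list)
ERRORS = defaultdict(int)


def observed(func):
    """Record call latency and error counts for the wrapped function."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            ERRORS[func.__name__] += 1
            raise
        finally:
            LATENCIES[func.__name__].append(time.perf_counter() - start)
    return wrapper


@observed
def render_page(page):
    time.sleep(0.01)  # stand-in for real work
    return f"<html>{page}</html>"


render_page("home")
for name, timings in LATENCIES.items():
    avg = sum(timings) / len(timings)
    print(f"{name}: avg latency {avg:.4f}s, errors={ERRORS[name]}")
```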

#12 Retire

When a cloud application has reached the end of its life, or has met its objectives.

#13 re:Invent

It would be remiss to compile this inevitably unfinished and undoubtedly updatable list without mentioning what might be the most famous of all the Rs in the world of cloud computing – re:Invent.

The quirky name AWS gives to its annual cloud conference is probably read by most people as a sort of stylized, iPhone-generation, lowercase-uppercase amalgam designed to look quite cool and insert punctuation in the middle of a ‘word’ or term. In fact, AWS re:Invent is so named because (and this tale might be slightly apocryphal) a techie apparently sent an internal email about how the team was going to “Invent” new services on cloud, and the reply came back quite quickly with a positive vote under the subject line (you’ve already guessed it) re:Invent.

Somehow (so the story goes) this email made its way into the hands of the marketing team, who thought it looked snappy enough to one day use as the name for what is now a very big cloud conference – and the rest is history.
