Ivo Ivanov is the CEO of DE-CIX.
The crew aboard the Artemis II lunar flyby mission has traveled farther from Earth than any humans before them.
Since it was founded in 1958, NASA has been responsible for some of humankind’s biggest technological breakthroughs, from water purification systems to memory foam. Even the cameras in our smartphones are built on NASA’s CMOS image sensor technology, designed to miniaturize cameras for interplanetary missions.
And so it follows with digital twins, initially developed in the 1960s to provide a “living model” of the Apollo 13 spacecraft. More than half a century later, many industries, including the automotive industry, are continuing to develop and utilize high-fidelity digital models of physical systems.
In 2026, modern “smart” vehicles may have as much in common with computers as with mechanical machines, but they are still physical systems that must interact constantly with the physical world.
They generate vast, continuous streams of data from sensors, software systems and user interactions, which can be used to create virtual counterparts or “digital twins” that can simulate performance, anticipate failures and refine everything from safety systems to in-car experiences.
While space may seem like the ultimate test for digital twin technology, the environment in which cars operate is extremely dynamic and complex. Vehicles face a great number of challenging variables—like unpredictable traffic flows, adverse weather and changing surface conditions—and they need to handle those variables while interacting with infrastructure, networks and services that sit far beyond the car itself.
The effectiveness of any digital twin now depends on how quickly and reliably those worlds stay connected.
Cars are moving powerhouses of data.
There are already hundreds of millions of connected cars on our roads worldwide. Almost 95% of cars sold will have connected features by 2030, according to Salesforce research, and the average connected vehicle will generate roughly 25 gigabytes of data per hour.
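To get a feel for the scale those figures imply, here is a rough back-of-the-envelope calculation. The 25 GB/hour figure comes from the research cited above; the fleet size and daily driving hours are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope estimate of connected-vehicle data volume.
# The ~25 GB/hour figure is cited above; fleet size and driving
# hours are illustrative assumptions.
GB_PER_VEHICLE_HOUR = 25
fleet_size = 100_000          # hypothetical connected fleet
driving_hours_per_day = 2     # assumed average daily usage

daily_gb = GB_PER_VEHICLE_HOUR * fleet_size * driving_hours_per_day
daily_pb = daily_gb / 1_000_000  # 1 PB = 1,000,000 GB (decimal units)

print(f"Fleet generates ~{daily_gb:,} GB (~{daily_pb:.0f} PB) per day")
```

Even under these modest assumptions, a single mid-sized fleet produces petabytes of telemetry every day, which is why the pathways that carry the data matter so much.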
Modern vehicles are equipped with an array of sensors and systems that capture everything from engine performance and battery health to road conditions, driver behavior and situational context.
Advanced driver-assistance systems process inputs from cameras, radar and range sensors like LiDAR in real time, while onboard software platforms manage navigation, safety features and infotainment.
At the same time, vehicles are receiving over-the-air updates that refine performance, introduce new features and address emerging issues long after the car has left the factory floor.
If digital twins are to function as living models, their usefulness will depend on data exchange that is controlled, secure, resilient and very fast.
That’s why the twin technology itself isn’t the limiting factor here; connectivity infrastructure is.
Latency will make or break a model.
The effectiveness of digital twins can quickly erode when the data they rely on is delayed, incomplete or inconsistent. Generating the data is the easy part—moving it efficiently is where things get tricky.
In a typical connected vehicle scenario, information doesn’t travel along a single path. It passes between mobile networks, cloud providers, analytics platforms and third-party services, often crossing multiple administrative and geographic boundaries along the way.
Each transition introduces the potential for delay, packet loss or reduced visibility, particularly when traffic is routed across fragmented or congested pathways.
The impact of these inefficiencies accumulates quickly. A digital twin designed to model battery performance may begin to lose accuracy if telemetry lacks fidelity or arrives late. Systems trained to support advanced driver functions rely on consistent, high-quality data inputs, and even small disruptions can affect how those models evolve.
As feedback loops weaken, the link between real-world behavior and virtual simulation becomes less precise. That undermines the purpose of a digital twin, ultimately slowing development and the ability to respond to issues as they arise.
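The effect of delayed telemetry on a twin's accuracy can be illustrated with a deliberately simplified toy model (not a real vehicle or battery model): a "physical" battery drains continuously while its digital twin only resynchronizes when telemetry arrives. The staler the data, the larger the tracking error.

```python
# Toy illustration of feedback-loop decay: the physical battery
# drains every step, but the twin only sees telemetry every
# `delay_steps` steps. All numbers here are illustrative.
def twin_tracking_error(delay_steps: int, total_steps: int = 100,
                        drain_per_step: float = 0.5) -> float:
    physical = 100.0   # actual state of charge (%)
    twin = 100.0       # twin's last-known state of charge (%)
    worst_error = 0.0
    for step in range(1, total_steps + 1):
        physical -= drain_per_step          # real world moves on
        if step % delay_steps == 0:         # telemetry arrives
            twin = physical                 # twin resynchronizes
        worst_error = max(worst_error, twin - physical)
    return worst_error

for delay in (1, 5, 20):
    print(f"update every {delay:>2} steps -> worst error "
          f"{twin_tracking_error(delay):.1f}% of charge")
```

With telemetry every step the twin never drifts; stretch the update interval to 20 steps and the model's view of the battery is nearly 10 percentage points stale at its worst. Real twins are far more sophisticated, but the underlying dynamic is the same: accuracy degrades with the age of the data.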
In a world where vehicles are expected to learn, adapt and improve continuously, the integrity and timing of data flows become just as important as the models themselves.
Keeping the twins close.
Geography matters when it comes to connectivity, particularly with regard to AI inference, digital twins or anything that relies on real-time data. The closer the connection between a physical vehicle and its virtual counterpart, the stronger and more responsive that relationship becomes.
Low-latency, high-throughput data exchange allows telemetry to be processed, analyzed and reintegrated into models with minimal delay, supporting near real-time synchronization.
This has direct implications for how quickly insights can be generated and applied. In fleets, data captured when one vehicle encounters a system anomaly or navigates a specific road condition can inform updates across the rest of the fleet almost immediately.
Predictive maintenance models can operate with greater precision when fed with continuous, high-quality data streams, while software-defined features can be tuned and deployed with a level of responsiveness that aligns more closely with real-world usage.
So, what actually makes that level of responsiveness possible?
Smarter models and better software, yes, but more importantly—the data pathways that link them. As data flows expand across vehicles, cloud platforms and distributed compute environments, the architecture underpinning those flows matters as much as the systems themselves.
For example, highly interconnected data center hubs, where networks and platforms meet directly, can help enable lower-latency data movement, which supports the timing and fidelity that digital twins depend on.
Before we get there, leaders need to think more broadly about their connectivity, looking beyond raw speed and assessing how their data actually moves. They should start by mapping data flows end to end, identifying where latency, congestion or unnecessary handoffs are introduced, then weigh architecture decisions carefully—edge processing can reduce delay for time-sensitive workloads, but centralized cloud environments may be better suited for large-scale data storage and aggregation.
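One way to start the end-to-end mapping described above is with a simple latency budget: list each hop a telemetry record crosses, attach a measured (or estimated) delay, and compare the total against a target. The hop names and millisecond values below are illustrative assumptions, not measurements.

```python
# Sketch of an end-to-end latency budget for one telemetry path.
# Hops and millisecond values are illustrative assumptions.
hops = {
    "vehicle -> mobile network": 25.0,
    "mobile network -> regional interconnect": 8.0,
    "interconnect -> cloud analytics platform": 12.0,
    "analytics -> digital twin update": 15.0,
}

budget_ms = 50.0  # assumed target for near real-time sync
total_ms = sum(hops.values())

print(f"end-to-end latency: {total_ms:.0f} ms (budget {budget_ms:.0f} ms)")
for hop, ms in sorted(hops.items(), key=lambda kv: -kv[1]):
    print(f"  {ms:5.1f} ms  {hop}")  # biggest contributors first

if total_ms > budget_ms:
    print("over budget: consider edge processing or more direct interconnection")
```

Sorting the hops by contribution makes the architectural trade-off concrete: in this sketch the vehicle-to-network hop dominates, so moving time-sensitive processing toward the edge would do more than optimizing the cloud side.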
Cost, performance and control should now be evaluated together, because faster data alone won’t move the needle. Without consistency, governance and visibility, even the fastest networks will fall short of their potential.
The same thinking that once helped engineers mirror spacecraft from afar is now shaping how vehicles learn and improve in motion, with digital counterparts kept in sync by networks and data centers that are no longer confined to central hubs.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.

