While most people recognize Cisco as a cornerstone of the internet’s plumbing, the company is now looking far above the terrestrial fiber-optic cables that connect our homes. In a recent interview on Decoder, Cisco CEO Chuck Robbins revealed a surprising strategic pivot: the company is preparing for the possibility of data centers in orbit.
This isn’t just science fiction. As the demand for AI processing power explodes, the physical limitations of Earth are becoming a primary bottleneck for the tech industry.
The Problem: The “Unpleasant Neighbor” Effect
The rapid expansion of artificial intelligence requires massive amounts of computational power, which in turn demands gargantuan data centers. However, these facilities face three mounting challenges on the ground:
- Energy Demands: Data centers consume enormous amounts of electricity, often straining local grids and driving up costs for residents.
- Environmental & Social Friction: Data centers are loud, take up massive amounts of land, and increasingly face bipartisan political opposition from communities that do not want them in their “backyards.”
- Cooling Constraints: The heat generated by AI chips is immense, requiring sophisticated and resource-heavy cooling systems.
The Orbital Solution: Unlimited Power and No Neighbors
Chuck Robbins suggests that moving data centers into space could bypass these terrestrial headaches.
“Up there, [power] is unlimited and unimpeded,” Robbins noted, referring to the constant access to solar energy in orbit. “You don’t have to deal with… people who don’t want these data centers in or near their communities.”
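The scale of that advantage is easy to sketch. The back-of-envelope arithmetic below compares the solar energy a square metre of panel could collect in a continuously sunlit orbit against a good terrestrial site; the figures are rough public estimates (solar constant, typical ground capacity factor), not numbers from Cisco or the interview.

```python
# Back-of-envelope: solar energy collected per square metre per day,
# continuously lit orbit vs. a good terrestrial solar site.
# All inputs are rough public estimates, not Cisco's figures.

SOLAR_CONSTANT_W_M2 = 1361        # irradiance above the atmosphere
GROUND_PEAK_W_M2 = 1000           # clear-sky irradiance at the surface
GROUND_CAPACITY_FACTOR = 0.25     # night, weather, and sun angle combined

def daily_energy_kwh_per_m2(irradiance_w_m2: float, duty_cycle: float) -> float:
    """Energy collected by 1 m^2 of panel over 24 hours, in kWh."""
    return irradiance_w_m2 * duty_cycle * 24 / 1000

orbit = daily_energy_kwh_per_m2(SOLAR_CONSTANT_W_M2, 1.0)
ground = daily_energy_kwh_per_m2(GROUND_PEAK_W_M2, GROUND_CAPACITY_FACTOR)

print(f"orbit:  {orbit:.1f} kWh per m^2 per day")   # ~32.7
print(f"ground: {ground:.1f} kWh per m^2 per day")  # ~6.0
print(f"advantage: ~{orbit / ground:.1f}x")         # ~5.4x
```

Under these assumptions an orbital panel collects roughly five times the daily energy of the same panel on the ground, before accounting for launch mass or transmission losses.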
While experts like Sam Altman have expressed skepticism, calling space-based data centers a “pipe dream,” Robbins is placing his bets on the vision of Elon Musk. To prepare, Cisco has already tasked its product teams with analyzing how their networking hardware can survive the rigors of space—addressing issues like extreme temperatures and the lack of atmospheric cooling.
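The cooling problem Robbins alludes to is real physics: in vacuum there is no air or water to carry heat away, so waste heat can only be radiated. A highly idealized Stefan-Boltzmann sketch (a flat radiator at uniform temperature, ignoring absorbed sunlight and view factors; the 1 MW and 300 K inputs are illustrative assumptions, not figures from the interview) shows why radiator area becomes a dominant design constraint:

```python
# Rough sizing of the radiator needed to reject data-center heat in vacuum,
# where radiation is the only heat-rejection path.
# Idealized model: uniform-temperature radiator, no absorbed sunlight,
# no view-factor losses. Inputs are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to emit `heat_w` watts at temperature `temp_k`."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW of chip heat with a radiator running at 300 K (~27 C):
area = radiator_area_m2(1_000_000, 300)
print(f"~{area:,.0f} m^2 of radiator per megawatt")  # on the order of 2,400 m^2
```

Even under these generous assumptions, each megawatt of compute needs thousands of square metres of radiator, which is one reason hardware designed for terrestrial airflow cannot simply be launched as-is.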
Cisco’s Strategic Pivot: From “Internet Builder” to AI Infrastructure Leader
The conversation also shed light on how Cisco has repositioned itself to capitalize on the AI boom. Despite the company’s history of “boom and bust” cycles—most notably during the dot-com era—Cisco is seeing significant growth in its enterprise data center business.
The Silicon Advantage
A critical factor in Cisco’s current relevance was its 2016 acquisition of the Israeli silicon company Leaba. This move allowed Cisco to design its own custom networking chips, rather than relying on the “merchant silicon” (off-the-shelf parts) used by its competitors.
Today, Cisco is one of only three companies globally capable of building the specialized networking silicon required to connect high-performance GPUs. This capability has turned Cisco into a vital partner for “hyperscalers” (massive cloud providers like Amazon and Microsoft) who are racing to build out AI infrastructure.
“Coopetition” with Nvidia
The rise of Nvidia as a networking powerhouse—with revenues significantly outpacing Cisco’s in certain segments—raises questions about competition. Robbins describes the relationship as “coopetition.”
While Nvidia offers a highly integrated, “path of least resistance” stack for those who want a turnkey AI solution, Cisco holds its ground through:
1. Optionality: Large enterprises prefer to mix and match different vendors to avoid being locked into a single ecosystem.
2. Security: Cisco integrates security directly into the network layer, a critical requirement as the industry moves toward an era of autonomous AI agents.
Conclusion
As AI pushes the boundaries of what is computationally possible, the industry is hitting physical limits on Earth. Whether through custom silicon or the radical move to orbital data centers, Cisco is positioning itself to provide the essential connectivity required for the next era of computing.
