The evolution of computing has always been marked by swings between centralized and distributed models. First came the mainframe, a centralized platform that managed all computing resources and that users accessed through “dumb” terminals. Then came the client/server architecture, in which a powerful server housed in an on-premises data center hosts, manages, and delivers compute resources and services to multiple clients (i.e., PCs), each of which can run its own applications and store its own data.
The client/server architecture quickly moved into the mainstream, resulting in massive deployments of servers in data centers across the globe. Over time, these highly distributed data centers grew into huge complexes whose large numbers of servers consumed vast amounts of compute resources and drove up operational costs.
Cloud computing emerged as an alternative model aimed at reducing the burden of managing and maintaining on-premises data centers. By outsourcing IT infrastructure management to public cloud providers, organizations could easily scale on-demand, reduce capital expenses using a pay-per-use model, and also gain access to innovative technologies and services.
Cloud computing has taken us back to a centralized model, but this time with an important twist: while the client/server model was typically aimed at organizations with adequate IT resources and skills, the public cloud democratized computing power. Anyone with an internet connection could reap its benefits.
One Cloud Doesn’t Fit All
Although the cloud revolutionized the way computing resources are provisioned and consumed, it is not a panacea for all IT issues. This realization has been driving the continuous innovation and development of different cloud technologies.
Many organizations started their cloud journey with the implementation of internal, private clouds. In the early days of the cloud, when security concerns were a major barrier to adoption, these private clouds offered users the safety of the corporate firewall. Other benefits of the private cloud include unrestricted configuration and allocation of resources as well as performance and availability assurance.
Having a private cloud, however, does not require giving up the benefits of a public cloud. Users with private clouds can always turn to a public cloud for specific use cases – for example, the handling of peak loads or the hosting of standardized functionalities – while maintaining core applications and business processes that handle sensitive data in their local infrastructure.
Hybrid to the Rescue?
This flexibility is achieved through the hybrid cloud model, which combines aspects of both private and public cloud computing. A hybrid cloud enables users to run critical workloads in their on-premises data center or private cloud to achieve improved security and control and to comply with regulatory and/or data sovereignty laws that require data to be stored locally. Other workloads can be migrated to the public cloud to take advantage of its many benefits, such as cost, scalability, and built-in maintenance.
Is the hybrid cloud a sustainable model uniting the advantages of both worlds? It’s hard to ignore some significant technological challenges: For example, cloud management technologies need to build a bridge between the private and public clouds while addressing security, performance, and availability issues. In addition, policy engines need to automate and control the movement of workloads between the clouds.
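To make the idea of a policy engine concrete, here is a minimal sketch of how such placement rules might look in code. This is purely illustrative and not any specific product’s API; the workload attributes and thresholds are hypothetical examples.

```python
# Illustrative sketch of a hybrid-cloud placement policy engine.
# All attributes and thresholds below are hypothetical, not a real product's API.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    handles_sensitive_data: bool   # regulatory / data-sovereignty constraint
    peak_load: bool                # burst that exceeds private-cloud capacity
    max_latency_ms: int            # response-time requirement

def place(workload: Workload) -> str:
    """Return 'private' or 'public' based on simple, ordered policy rules."""
    # Rule 1: sensitive data must stay behind the corporate firewall.
    if workload.handles_sensitive_data:
        return "private"
    # Rule 2: burst traffic overflows to the public cloud ("cloud bursting").
    if workload.peak_load:
        return "public"
    # Rule 3: very tight latency budgets favor local infrastructure.
    if workload.max_latency_ms < 20:
        return "private"
    # Default: take advantage of public-cloud scalability and cost model.
    return "public"

print(place(Workload("billing", True, False, 200)))       # -> private
print(place(Workload("web-frontend", False, True, 100)))  # -> public
```

Real policy engines evaluate far richer criteria (cost, capacity, compliance zones) and must also automate the actual migration, but the core pattern, ordered rules mapping workload attributes to a placement decision, is the same.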
And the public cloud is not a monolithic creature: Many organizations distribute their compute loads between cloud vendors – a model known as multi-cloud – in order to avoid “cloud lock-in” and to take advantage of best-of-breed services. It’s a great business strategy, but it further highlights the need for effective management and control.
And things are only getting more complicated…
Distributed Computing Strikes Back at the Edge
The shifts between centralized and distributed computing occur when demand arises for new use cases that existing models cannot adequately support. As a computing paradigm is stretched to its limit trying to address use cases or operational requirements (e.g., performance, reliability, and scalability) it was not designed to meet, its shortcomings become apparent, spurring the development of alternatives. This is exactly what’s happening today as distributed computing makes a comeback.
Many latency-sensitive, mission-critical applications, such as autonomous vehicles, telemedicine, online gaming, and drones, are either available now or sitting on the immediate horizon. These resource-intensive applications depend on the ability to transfer and process large amounts of data in real time. Hence, they cannot rely on a centralized cloud located miles away to guarantee consistent performance and response times.
To address this challenge, a new cloud computing model has emerged, known as edge computing, that allows computational tasks to be offloaded to servers located at the network edge. The processing of data moves closer to end users and end devices, reducing latency and improving performance. Latency is certainly improved, but edge computing cannot provide the scalability, flexibility, and ease of use of the public cloud.
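The offloading decision described above can be sketched in a few lines: pick the lowest-latency edge site that meets the application’s budget, and fall back to the centralized cloud otherwise. This is an illustrative sketch, not a real edge platform’s API; the site names and latency figures are hypothetical.

```python
# Illustrative sketch of latency-driven edge offloading.
# Site names and latency values below are hypothetical examples.

CENTRAL_REGION = ("central-cloud", 120)  # (site name, round-trip latency in ms)

def choose_site(edge_latencies: dict, budget_ms: float) -> str:
    """Offload to the nearest edge site that meets the latency budget,
    otherwise fall back to the centralized cloud."""
    if edge_latencies:
        site, rtt = min(edge_latencies.items(), key=lambda kv: kv[1])
        if rtt <= budget_ms:
            return site
    return CENTRAL_REGION[0]

# A gaming session with a 30 ms budget lands on the closest edge site...
print(choose_site({"edge-nyc": 8, "edge-chi": 22}, budget_ms=30))  # -> edge-nyc
# ...but with no edge capacity in range, traffic falls back to the core cloud.
print(choose_site({}, budget_ms=30))  # -> central-cloud
```

The fallback path is the crux of the trade-off the article describes: the edge wins on latency when capacity is nearby, while the centralized cloud remains the always-available default.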
Taking a Ride on the Pendulum
The back and forth between centralized and distributed computing models has always been accompanied by trade-offs. For example, the move from mainframes to PCs has made computing accessible to a much larger population of businesses and users. On the flip side, it resulted in reduced control and exposure to security vulnerabilities. The enormous cybersecurity industry owes its existence to this evolution.
These trade-offs explain why the emergence of a new computing model doesn’t necessarily lead to the extinction of its predecessors. Even monolithic systems continue to coexist alongside modern cloud environments, for example, in facilities that require the highest level of security. Ever since the cloud became a viable alternative, organizations have had to carefully weigh cost, security, performance, reliability, and other trade-offs when deciding which IT infrastructure model to adopt. As described above, many of them are now using a hybrid model that combines on-premises and cloud deployments.
On the same note, it is obvious that edge computing will not render the cloud obsolete. These two approaches can live side by side, targeting different use cases. Moreover, new technologies are now emerging to provide more integrated models that leverage the inherent benefits of edge and cloud computing to tackle management complexities and operational inefficiencies.
A (New) Star is Born
The rapid adoption of new applications has led to the increased distribution of workloads and data across multiple locations and environments. To tame the sprawl of complex workloads across heterogeneous environments and maintain operational control, new cloud models are needed that extend a robust public cloud experience to the edge. They must enable organizations to leverage the ease of use, access to advanced services, scalability, and flexibility of the cloud while taking advantage of the low latency, high availability, performance, and regulatory compliance of on-premises data centers.
In response to this need, the major public cloud providers are offering solutions that enable organizations to deploy a cloud infrastructure in their own on-premises data centers while the public cloud provider maintains the management and operation of the cloud services in use. This model tackles some of the complexities involved in hybrid environments. Essentially, it provides a hybrid cloud as-a-service to facilitate the management of applications (e.g. latency-sensitive edge applications) that cannot be moved to the public cloud.
On the flip side, organizations that require a distributed cloud infrastructure spanning multiple locations may face significant connectivity costs as well as cloud lock-in concerns.
Introducing the Distributed Cloud
In response to these inherent challenges, a new cloud model has emerged with significant differences from both the centralized and hybrid cloud models: the Distributed Cloud. Ridge has developed a distributed cloud platform that enables application developers to deploy and scale their workloads locally and to significantly improve end-user experience in any region. The underlying infrastructure is created by federating data centers all over the world that have capacity, power, and connectivity.
So no, the cloud did not kill the data center star. Actually, just the opposite: By combining the agility of centralized public clouds with the geographic distribution and high performance of localized computing, the new distributed cloud has actually energized data centers. They can now offer a range of benefits otherwise not available, including enabling enterprises to deploy and infinitely scale their applications anywhere without investing in new infrastructure.
The Ridge distributed cloud ushers in a new era of cloud computing: cloud-native applications that run with the flexibility of the cloud and the low latency and high performance of on-premises computing.
Jonathan Seelig, Co-Founder & Executive Chairman | Ridge
Co-founder of Akamai (NASDAQ: AKAM), the first-ever CDN. Former Managing Director at Globespan Capital Partners, Chairman of the board at Zipcar, and EIR at Polaris Partners. Board Member of over a dozen companies and investor in dozens more. Stanford undergrad and MIT Sloan dropout.