I was raised on Dr. Seuss books, as were my own kids. And while Dr. Seuss never wrote about cloud computing or web infrastructure, “One Fish, Two Fish, Red Fish, Blue Fish” came to mind recently for its colorful and very specific descriptions. In speaking with hundreds of people about what we are building at Ridge, it has become clear to me that a lot of terms have different definitions depending on who is using them. So much so that I generally don’t use any of these terms when I tell people about Ridge -- because words don’t drive clarity unless we agree on what they mean.
Public Cloud

This is what comes to mind when most people talk about cloud computing. AWS, Azure, and GCP are the kings of the hill, but Oracle, Alibaba Cloud, and IBM have impressive offerings. And smaller players like DigitalOcean and Linode have very satisfied customers.
The hallmark of the public clouds is that they offer a full stack of on-demand services that a modern cloud application developer relies on to deploy an application at scale. To stay competitive over time, public clouds need to continually innovate and expand their services. Whereas five years ago a public cloud could compete by offering virtual data center services (what we are calling IaaS today), it now needs a full complement of managed services on top of that infrastructure (what we are calling PaaS).
Private Cloud

A private cloud is used exclusively by one organization. The organization has control over the infrastructure stack, which may be hosted in-house or at a third-party facility. Private clouds allow for flexibility and granular control.
Hybrid Cloud

Hybrid cloud architectures combine private and public clouds into a single infrastructure stack, allowing enterprises to move applications between their own environments and the public cloud. They let enterprises continue to make use of their existing infrastructure while retaining control over security and data sovereignty.
The big public cloud companies are intent on offering large enterprises the ability to bring their stack in house. The strategic thinking behind this is that if a cloud vendor can get a customer to use their tools, protocols, and technologies both on the public cloud AND in their own facilities, that customer becomes very, very unlikely to move to another provider.
Multi-Cloud

Multi-cloud deployments run the same application on multiple clouds at the same time. Popular games, applications that require a global footprint, and teams concerned about vendor lock-in all seek out multi-cloud deployments. A company that runs different applications on different clouds might be better described as “multiple cloud” rather than multi-cloud.
Application administration, orchestration, and management technologies such as Terraform, Rancher, and others are simplifying the path to multi-cloud deployment. One of the biggest challenges with multi-cloud is that application owners have had to develop and maintain a separate, proprietary integration for each cloud platform they use. Fortunately, modern container architectures and standards such as Kubernetes are making this unnecessary: Kubernetes becomes the only API that developers need to integrate with.
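To make the "single API" point concrete, here is a minimal sketch of a cloud-agnostic Kubernetes Deployment manifest, built in Python. The application name, image, and replica count are hypothetical placeholders; the point is that the exact same manifest can be submitted to a cluster on any provider.

```python
import json

def make_deployment(name: str, image: str, replicas: int) -> dict:
    """Build a cloud-agnostic Kubernetes Deployment manifest.

    Because the Kubernetes API is identical on every provider, this one
    dict could be applied unchanged to a cluster on AWS, Azure, or GCP.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

# One manifest serves every cloud; only the cluster endpoint differs.
manifest = make_deployment("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

In a multi-cloud deployment, the same manifest would simply be applied to each cluster in turn; no per-provider integration is needed.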
Serverless Computing

Serverless computing does, of course, require physical servers… It does allow application developers to completely ignore the physical underpinnings of their runtime environment and to simply pay per function invocation. However, we find that only relatively simple applications are embracing this technology stack. Serverless applications are, by definition, limited in what they can do. More sophisticated applications that need greater control over their underlying compute architecture are not well suited to serverless computing.
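A minimal sketch of what "ignoring the physical underpinnings" looks like in practice, written in the AWS Lambda handler style (the `event`/`context` signature is Lambda's convention; the greeting logic is a placeholder). The function is stateless and knows nothing about the server it runs on:

```python
def handler(event, context=None):
    """A stateless, serverless-style function.

    The platform handles provisioning, scaling, and per-invocation
    billing; the code sees only its input event and returns a result.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, an invocation is just a function call with an event dict.
print(handler({"name": "Ridge"})["body"])
```

The constraint the article describes falls out of this shape: everything the function needs must arrive in the event or come from external services, which is exactly why applications needing control over their compute architecture outgrow the model.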
Edge Cloud

Edge Cloud is a term that means many different things to different people. Regional data centers tend to think of the “edge” as the metro region in which they reside. CDNs tend to think of it as the “CDN edge”. Meanwhile, 5G operators are touting the cell tower as the 5G edge. The reality is that it is very hard to pick a single definition of “Edge Cloud” -- it depends on who you ask at any given point in time.
At Ridge, we describe the edge cloud as being “as close as you need to be” to end users. While this may seem like a cop-out or a “non-answer”, we think it actually makes the most sense. Edge is truly in the eye of the beholder… If your infrastructure is close enough to end users to provide the performance characteristics you require, you are at “your” edge.
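One way to operationalize “as close as you need to be” is to treat the edge as whatever deployment tier meets your latency budget. The sketch below, with made-up tier names and illustrative round-trip times, picks the most central tier that still satisfies a target latency:

```python
# Hypothetical deployment tiers, ordered from most central to most
# distributed, with illustrative round-trip latencies in milliseconds.
TIERS = [
    ("hyperscale-region", 80),
    ("metro-data-center", 20),
    ("cdn-pop", 10),
    ("5g-cell-site", 5),
]

def choose_edge(latency_budget_ms: float) -> str:
    """Return the most central tier whose latency fits the budget.

    "Your" edge is simply the first tier that is close enough
    for the performance your application requires.
    """
    for name, rtt_ms in TIERS:
        if rtt_ms <= latency_budget_ms:
            return name
    raise ValueError("no tier satisfies this latency budget")

print(choose_edge(25))  # a 25 ms budget is already met by a metro data center
```

A video-streaming service might be happy at a metro data center, while a cloud-gaming service with a single-digit-millisecond budget would land on the most distributed tier -- two different “edges” for two different applications.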
5G Edge Cloud
5G’s very low latency in moving traffic from the air interface to the terrestrial network will enable a whole new set of latency-sensitive applications. Mobile operators view 5G edge compute as an opportunity to build defensible infrastructure assets and to become significant players in the edge cloud market. We are in the very early stages of this market developing.
Advantages of 5G edge cloud: single-digit-millisecond proximity to the end user -- and, eventually, a global footprint.
Ridge is building a cloud service that uses existing data center infrastructure as its building blocks. This means that Ridge’s cloud will be massively distributed compared to current vendors’. It also means that Ridge’s own definition of “edge” will evolve with its relationships with underlying infrastructure providers.
One thing we know for sure is that tomorrow’s infrastructure will look different from yesterday’s. And that modern infrastructures will enable a whole new class of modern applications that will change how we work, play, and live.