What was your No. 1 lesson learned from the pandemic? I may live in a bubble, but for me, the answer is obvious: Always be ready with a flexible IT environment so you can quickly adapt to changing business needs, wherever they arise.
Look, stuff happens: social distancing, remote work, supply chain disruptions… there’s no vaccine that will ensure your technology stack keeps pace. What’s out there, however, still works pretty well: cloud-native deployments built upon a microservice architecture. Cloud-nativity helps you hit the ground running, as long as the ground you’re running on is solid. Let’s take a look:
We once thought that cloud-native architectures, built upon multiple containerized microservices, were an evolutionary pinnacle. All our past had led us to this point. But of course it wasn’t that simple: each microservice needed scheduling, resource allocation, and all kinds of other processes. Then came the magic of Kubernetes, and the rest is history. Thank you, Google, and thank you too, CNCF!
And now that we’ve dispensed with the formalities, a word or two about what Kubernetes is not: it’s not your 24/7 personal assistant. Maybe just the opposite: when Kubernetes, a.k.a. K8s, decides that you need new physical resources, and that they need to be provisioned, configured, and updated, you’d better be paying attention. If you’re not listening, your application will go off the rails.
That’s why organizations that want to develop cloud-native applications, not manage underlying infrastructure, rely upon managed Kubernetes services. You’re freed from provisioning, networking, load balancing, security patching, and all those other operational tasks. With a managed service handling the orchestration, they’re taken care of automatically, no worries.
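The model behind all of this is declarative: you describe the state you want, and Kubernetes reconciles the cluster to match it. As a minimal sketch (the app name and image below are illustrative, not from any real deployment), the desired state for a small web service might look like this:

```python
# Minimal sketch of the declarative model behind Kubernetes:
# you state a desired configuration (here, 3 replicas of a web
# container) and the orchestrator keeps reality in sync with it.
# The names and image are illustrative assumptions.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # Kubernetes keeps exactly 3 pods running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.25",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

# In practice this would be written as YAML and applied with kubectl;
# the JSON form is equivalent.
print(json.dumps(deployment, indent=2))
```

If a pod dies, the orchestrator notices the drift from the declared three replicas and spins up a replacement; with a managed service, the machines underneath get the same treatment.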
Of course! The public cloud is great, it’s easy to use, and we wouldn’t be having this discussion without it. All the major public cloud providers offer managed Kubernetes deployments. But their very scale can also be the reason they may not always provide the service you need and crave. They work out of mega data centers, and you deploy locally. Remember the three most important things in deployments? Location, location, location. So we may have a mismatch here.
We’re seeing a lot of innovations and cutting-edge applications, like autonomous vehicles, drones, telemedicine, robotics, AR/VR, and even cloud gaming. For them to really take off, they require super-fast response times, or in other words, hyper-low latency and high throughput. Lots of bandwidth is consumed, and huge amounts of data must be transferred and processed in real time.
Can a public cloud located many miles away support these deployments? Sometimes, but sometimes not. Actually, very often not. And with 5G here, we just don’t know what tomorrow’s newest applications will look like. But it’s a pretty safe bet that they’ll process more data, and require lower latencies, than anything we’ve seen so far.
Here’s another monkey wrench: data privacy & data sovereignty. They’re not just a passing phase. Most of the world now has laws that require personal information to reside in-country. And these laws apply even if the data is just next door, in the country right over your border.
From virtualization to containerization to cloud-native applications, one great technological achievement has always led to another, and it’s no different here. A new cloud model emerged to address the dual challenges of latency & throughput without surrendering the benefits of the public cloud. For lack of a better term, it’s called the distributed cloud. By connecting hundreds or even thousands of data centers and cloud providers, the distributed cloud brings computing resources to the edge, closer to where they are consumed by end users.
In a distributed cloud, enterprises get the best of both worlds, without the drawbacks: public cloud service together with the high performance of localized infrastructure. Normally, to get performance like that, you would have to make your own arrangements and maintain a dedicated staff of IT specialists in a data center of your choosing. A big hassle. But with a distributed cloud, it’s easily available through managed services.
A distributed cloud? It’s a great theory, but let's get practical. There are so many data centers, and they all have different technology stacks, whether they’re built on OpenStack, VMware, or some proprietary management platform. How can they be integrated into one functioning platform?
So here’s the next new thing: Ridge has taken all this heterogeneous infrastructure and created a homogeneous, massively distributed cloud platform. The Ridge platform, Ridge Cloud, federates data centers and cloud service providers into a unified network. When working on Ridge Cloud, developers are agnostic to each data center’s specific hypervisor and cloud management environment. They need only interact with a single API to deploy applications anywhere.
Ridge owns none of these data centers; instead, it partners with their owners to provide cloud services. Ridge’s distributed data center ecosystem can rapidly turn any partner into a Ridge Cloud Point of Presence (PoP). Using infrastructure they already have, partners can offer Ridge managed services, including Ridge Kubernetes Service (RKS), a fully certified managed Kubernetes service.
Developers deploy RKS by simply describing the type of resources, data center characteristics, target price, and geographic locations they wish to use. Ridge Cloud then automatically places the workloads in the data center(s) that will provide optimal performance, based on user-defined parameters.
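Ridge hasn’t published its request schema here, so the field names below are purely illustrative assumptions, but conceptually a deployment request of the kind just described might look like this:

```python
# Hypothetical sketch of an RKS-style cluster request.
# Every field name here is an illustrative assumption,
# not Ridge's actual published API.
cluster_request = {
    "resources": {"vcpus": 8, "memory_gb": 32, "nodes": 3},
    "data_center": {"certifications": ["ISO27001"], "tier": 3},
    "max_hourly_price_usd": 2.50,         # target price ceiling
    "locations": ["de-frankfurt", "fr-paris"],  # candidate regions
}

# The developer submits a declarative request like this and lets the
# platform pick the concrete data center(s) that satisfy it.
for key in ("resources", "max_hourly_price_usd", "locations"):
    assert key in cluster_request
```

The point is that the developer states constraints (size, price, compliance, geography) rather than choosing a specific facility.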
The Ridge Allocation Engine moves your workloads to a data center that ensures maximum performance, taking into account pricing, resource availability, server type, and compliance. Other features include one-click provisioning, load balancing, persistent storage, health monitoring, auto-healing, and IAM. Once you specify your cluster parameters, RKS takes care of configuration, installation, and maintenance. As a fully managed service, RKS adjusts to workloads by automatically spinning up computing instances for uninterrupted performance.
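The placement step described above boils down to a filter-and-rank pass over candidate sites. This is a minimal sketch of that idea, not Ridge’s actual algorithm; the fields, weights, and site names are all assumptions for illustration:

```python
# Illustrative filter-and-rank placement logic, in the spirit of an
# allocation engine. Fields, weights, and data are assumptions; a real
# engine would weigh many more factors (resource availability, server
# type, live telemetry, etc.).
def pick_data_center(candidates, max_price, required_compliance,
                     w_latency=0.7, w_price=0.3):
    """Drop over-budget or non-compliant sites, then rank the rest.

    Lower latency and lower price both score better."""
    eligible = [
        dc for dc in candidates
        if dc["price"] <= max_price
        and required_compliance <= set(dc["compliance"])
    ]
    if not eligible:
        return None  # no site satisfies the hard constraints
    return min(
        eligible,
        key=lambda dc: w_latency * dc["latency_ms"] + w_price * dc["price"],
    )

# Hypothetical candidate sites (latency as seen from the end users).
sites = [
    {"name": "fra-1",   "latency_ms": 8,  "price": 2.0, "compliance": ["GDPR"]},
    {"name": "us-east", "latency_ms": 95, "price": 1.2, "compliance": []},
    {"name": "par-2",   "latency_ms": 12, "price": 1.8, "compliance": ["GDPR"]},
]

best = pick_data_center(sites, max_price=2.5, required_compliance={"GDPR"})
print(best["name"])  # prints: fra-1
```

Note that the cheap but distant `us-east` site never gets considered: compliance and price are hard filters, while latency and cost are only traded off among the sites that pass them.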
As the de facto standard for container orchestration, Kubernetes enables cloud-native application deployments. It gives you incredible flexibility to move and update your workloads. But its full cloud-native potential cannot be realized until you can deploy whatever, whenever, and, most importantly, wherever.
As an extension to the public cloud, Ridge’s massively distributed cloud enables you to deploy & scale applications anywhere. And through managed Kubernetes services, even complex, resource-intensive applications can easily become cloud-native. No need to rely on the availability of resources in any specific location.
By powering the full potential of cloud-native applications, Ridge has changed the way businesses think about growth. Cloud-nativity now enables them to be flexible and innovative without being limited by their cloud infrastructure.
To learn more about how Ridge adds flexibility to the public cloud, click here.