Microsoft announced a raft of new Kubernetes-related projects at the KubeCon conference in Austin, Texas, this week, demonstrating its growing commitment to the technology.
It launched the Virtual Kubelet, a new version of its experimental Azure Container Instances connector for Kubernetes.
Microsoft also entered a collaboration with Heptio on a new disaster recovery solution.
The Virtual Kubelet builds on Microsoft’s original ACI announcement this summer, which established a serverless container runtime that provided per-second billing and required no virtual machine management.
Customers can use the new version to target ACI or any equivalent runtime. It includes a pluggable architecture supporting a variety of runtimes and uses existing Kubernetes primitives, which will make it easier to build upon, according to Microsoft.
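As a rough illustration of how those primitives come together, a workload can be pinned to the virtual node that the Virtual Kubelet registers with the cluster. The node name and toleration key below are assumptions that vary by deployment:

```yaml
# Hypothetical pod spec targeting a Virtual Kubelet node.
# "virtual-kubelet" and the toleration key are assumed values; check your deployment.
apiVersion: v1
kind: Pod
metadata:
  name: aci-demo
spec:
  containers:
  - name: web
    image: nginx
  nodeName: virtual-kubelet            # assumed name of the registered virtual node
  tolerations:
  - key: virtual-kubelet.io/provider   # assumed taint fencing off the virtual node
    operator: Exists
```

Because the workload is expressed as an ordinary pod, the same spec can be pointed at ACI or at any other runtime sitting behind the kubelet interface.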
“Hyper is very excited to support the Virtual Kubelet project as the first outside contributor,” said James Kulina, chief operating officer of Hyper. “Hyper’s vision from the start has been to make deploying and using containers as simple and easy as possible.”
The project will help platforms that support secure container technology enable seamless multicloud container deployment across Kubernetes-based “serverless container” platforms, Kulina said.
To connect containers to Azure services more easily, Microsoft open-sourced the Open Service Broker for Azure (OSBA), which was built using the Open Service Broker API.
The API provides a standard mechanism for developers to expose backing services to applications running in cloud-native platforms like Kubernetes and Cloud Foundry, according to Microsoft.
The OSBA exposes Azure services like Azure CosmosDB, Azure Database for PostgreSQL and Azure Blob Storage, the company said. Using OSBA and the Kubernetes Service Catalog, customers can manage the SLA-backed Azure data services via the Kubernetes API.
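A minimal sketch of what that looks like through the Service Catalog, assuming OSBA advertises a PostgreSQL class named `azure-postgresqldb` with a `basic` plan (both names are illustrative):

```yaml
# Hypothetical Service Catalog resources; class and plan names are assumptions.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: demo-postgres
spec:
  clusterServiceClassExternalName: azure-postgresqldb  # assumed OSBA class name
  clusterServicePlanExternalName: basic                # assumed plan name
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: demo-postgres-binding
spec:
  instanceRef:
    name: demo-postgres
  secretName: demo-postgres-secret   # credentials land in this Kubernetes Secret
```

Once the binding is ready, the connection credentials land in the named Secret, where application pods can consume them as environment variables or mounted files.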
Microsoft also announced its contribution of an alpha release of a command-line interface for the Kubernetes Service Catalog, a move designed to help cluster administrators and application developers request and use services through the catalog.
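With the CLI (published as `svcat`), the same provision-and-bind flow might look like the following sketch; the instance, class and plan names are assumptions for illustration:

```shell
# Illustrative svcat session; names are assumptions.
svcat get classes                             # list the service classes brokers advertise
svcat provision demo-postgres \
  --class azure-postgresqldb --plan basic     # create a ServiceInstance
svcat bind demo-postgres                      # create a ServiceBinding and credentials Secret
```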
The company also announced Kashti, a dashboard and visualization tool for Brigade pipelines. Brigade, a previously announced project, allows developers to script together multiple tasks and execute them inside containers.
Heptio announced its collaboration with Microsoft on Heptio Ark, a Kubernetes disaster recovery solution for Azure.
Under the agreement, Heptio will work with Microsoft to ensure that the Ark project offers an efficient way to move Kubernetes applications from on-premises environments to Azure, and that Azure applications remain secure.
Heptio launched the Ark project earlier this year to provide a production-grade solution for cluster disaster recovery. Working through the Kubernetes API, Ark takes a snapshot of the user’s intent rather than relying on low-level information recovered from etcd, the underlying Kubernetes data store.
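In practice that intent-level snapshot is driven from Ark’s command line. A minimal backup-and-restore cycle, with illustrative resource names, might look like this:

```shell
# Illustrative Ark session; the label selector and backup name are assumptions.
ark backup create nginx-backup --selector app=nginx   # capture matching API objects and volumes
ark restore create nginx-backup                       # recreate them in the same or a new cluster
```

Because the backup records API objects rather than raw etcd state, the restore can land in a cluster of a different size, shape or location than the original.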
Since Heptio’s launch just over a year ago, it has released five open source projects to help make Kubernetes easier to use, said CEO Craig McLuckie.
Heptio Ark helps manage disaster recovery for Kubernetes cluster resources and persistent volumes, he told LinuxInsider.
Unlike rival solutions, Kubernetes takes a declarative approach to running applications, McLuckie said.
“Other solutions are focused on copying the underlying cluster state and reproducing it in a cluster that has the same size, shape and location as the original cluster,” he noted.
“Ark is different,” McLuckie explained, “because it captures the user’s intent and provides a way to move workloads to new clusters without carrying forward the transient information that relates to the running cluster.”
Heptio Ark has two primary roles, said Paul Teich, principal analyst at Tirias Research.
One is providing an easy cloning service for the devtest environment, and the second is disaster recovery, he told LinuxInsider.
Disaster recovery is not the same as high availability for transaction processing, Teich noted.
“Disaster recovery is designed to bring a service back up to a previously known good state from a hardware failure or network outage and not to preserve all transactions in flight,” he said.
“This type of disaster recovery is important to many enterprises to deliver a baseline level of service robustness, so it will enable more enterprise customers to consider moving legacy applications to a modern container environment,” Teich explained.
Top-level cloud providers, like Microsoft’s Azure, AWS and IBM’s SoftLayer, all have demonstrated they can perform at enterprise levels, offering relatively robust, secure and flexible environments for their clients, observed Rob Enderle, principal analyst at the Enderle Group.
The competition in the sector has shifted to application and performance diversity, he told LinuxInsider.
“These announcements speak to the former,” Enderle said, “and to get to this application diversity requires a close relationship with the firms that create the applications to ensure end users can rely on the result.”