AZURE KUBERNETES SERVICE (AKS)
What is Kubernetes?
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Kubernetes Architecture
To understand how Kubernetes is able to provide these capabilities, it is helpful to get a sense of how it is designed and organized at a high level. Kubernetes can be visualized as a system built in layers, with each higher layer abstracting the complexity found in the lower levels.
At its base, Kubernetes brings together individual physical or virtual machines into a cluster using a shared network to communicate between each server. This cluster is the physical platform where all Kubernetes components, capabilities, and workloads are configured.
The machines in the cluster are each given a role within the Kubernetes ecosystem. One server (or a small group in highly available deployments) functions as the master server. This server acts as a gateway and brain for the cluster by exposing an API for users and clients, health checking other servers, deciding how best to split up and assign work (known as “scheduling”), and orchestrating communication between other components. The master server acts as the primary point of contact with the cluster and is responsible for most of the centralized logic Kubernetes provides.
The other machines in the cluster are designated as nodes: servers responsible for accepting and running workloads using local and external resources. To help with isolation, management, and flexibility, Kubernetes runs applications and services in containers, so each node needs to be equipped with a container runtime (like Docker or rkt). The node receives work instructions from the master server and creates or destroys containers accordingly, adjusting networking rules to route and forward traffic appropriately.
As mentioned above, the applications and services themselves are run on the cluster within containers. The underlying components make sure that the desired state of the applications matches the actual state of the cluster. Users interact with the cluster by communicating with the main API server either directly or with clients and libraries. To start up an application or service, a declarative plan is submitted in JSON or YAML defining what to create and how it should be managed. The master server then takes the plan and figures out how to run it on the infrastructure by examining the requirements and the current state of the system. This group of user-defined applications running according to a specified plan represents Kubernetes’ final layer.
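As a concrete sketch of such a declarative plan, the YAML below is a minimal Kubernetes Deployment manifest. The field names are standard Kubernetes API; the application name and image are illustrative placeholders, not anything from the case study:

```yaml
# Hypothetical Deployment manifest: declares the desired state of
# "two replicas of an nginx container, always running". The master
# server (control plane) schedules and maintains this state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
spec:
  replicas: 2                # desired state: two pod instances
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # any container image
        ports:
        - containerPort: 80
```

Submitting this plan (for example with `kubectl apply -f deployment.yaml`) hands it to the API server; the control plane then compares desired state against actual state and creates or destroys containers on the nodes to reconcile the two.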
What is Azure?
Azure is a public cloud computing platform with solutions including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) that can be used for services such as analytics, virtual computing, storage, networking, and much more. It can replace or supplement your on-premises servers.
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) is a fully managed service that lets you run Kubernetes in Azure without having to manage your own Kubernetes clusters. Azure handles the complex parts of operating Kubernetes, such as the control plane, so you can focus on your containers.
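For illustration, standing up a basic AKS cluster takes only a few Azure CLI commands. The resource group, cluster name, location, and node count below are placeholder choices, not prescriptions:

```
# Create a resource group to hold the cluster (location is an example)
az group create --name myResourceGroup --location eastus

# Create a two-node AKS cluster; Azure provisions and manages the control plane
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 2 \
  --generate-ssh-keys

# Download credentials so kubectl talks to the new cluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Verify the worker nodes are ready
kubectl get nodes
```

Note that with AKS you only see and pay for the worker nodes; the master/control-plane machines described earlier are run by Azure on your behalf.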
Case Study: WhiteSource simplifies deployments using Azure Kubernetes Service (AKS)
WhiteSource is a user-friendly, cloud-based, open-source management solution that automates the process for monitoring and documenting open-source dependencies. The WhiteSource platform continuously detects all the open-source components used in a customer’s software using a patent-pending Contextual Pattern Matching (CPM) Engine that supports more than 200 programming languages. It then compares these components against the extensive WhiteSource database. Unparalleled in its coverage and accuracy, this database is built by collecting up-to-date information about open-source components from numerous sources, including various vulnerability feeds and hundreds of community and developer resources. New sources are added on a daily basis and are validated and ranked by credibility.
Problem:
WhiteSource was looking for a way to deliver new services faster to provide more value for its customers. The solution required more agility and the ability to quickly and dynamically scale up and down, while maintaining the lowest costs possible.
Because WhiteSource is a security DevOps–oriented company, its solution required the ability to deploy fast and to roll back even faster. Focusing on an immutable approach, WhiteSource was looking for a built-in solution to refresh the environment upon deployment or restart, keeping no data on the app nodes.
This was the impetus to investigate containers. Containers make it possible to run multiple instances of an application on a single instance of an operating system, thereby using resources more efficiently. Containers also enable continuous deployment (CD), because an application can be developed on a desktop, tested in a virtual machine (VM), and then deployed for production in the cloud.
Solution:
The WhiteSource development team explored many vendors and technologies in its quest to find the right container orchestrator. The team knew that it wanted to use Kubernetes because it was the best-established container solution in the open-source community. The WhiteSource team was already using other managed alternatives, but it hoped to find an even better way to manage the process of building Kubernetes clusters in the cloud. The solution needed to scale quickly per work queue and to keep the application environment clean post-execution. However, the Kubernetes management solutions that the team tried were too cumbersome to deploy and maintain, and proper support was hard to obtain.
The services that run on AKS communicate with the front end through Application Gateway to receive requests from clients, process them, and return the answers to the application servers. When the WhiteSource application starts, it queries an Azure Database for MySQL instance for an open-source component to process. After finding the data, it processes it, sends the results back to the database, and exits. The process running in the container can then be scrubbed entirely from the environment: the container starts fresh, no data is saved, and the cycle begins again.
Benefits of AKS
The WhiteSource developers couldn’t help comparing AKS to their experience with Amazon Elastic Container Service for Kubernetes (Amazon EKS). They felt that the learning curve for AKS was considerably shorter. Using the AKS documentation, walk-throughs, and example scenarios, they ramped up quickly and created two clusters in less time than it took to get started with EKS. The integration with other Azure components provided a great operational and development experience, too.
Other benefits included:
- Automated scaling. With AKS, WhiteSource can scale its container usage according to demand. The workloads can change dynamically, enabling some background processes to run when the cluster is idle, and then return to running the customer-facing services when needed. In addition, more instances can run for a much lower cost than they could with the previous methods the company used.
- Faster updates. Security is a top priority for WhiteSource. The company needs to update its databases of open-source components as quickly as possible with the latest information. The team was accustomed to a more manual deployment process, so the ease of AKS surprised them. Its integration with Azure DevOps and its CD pipeline makes it simple to push updates as often as needed.
- Safer deployments. The WhiteSource deployment pipeline includes rolling updates. An update can be deployed with zero downtime by incrementally replacing pod instances with new ones. Even if an update includes an error and the application crashes, the other pods keep running and performance is not affected.
- Control over configurations. The entire WhiteSource environment is set up to use a Kubernetes manifest file that defines the desired state for the cluster and the container images to run. The manifest file makes it easy to configure and create all the objects needed to run the application on Azure.
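The rolling-update behavior and manifest-driven configuration described above can be sketched in a single Deployment manifest. The strategy fields below are standard Kubernetes API; the application name and image registry path are illustrative placeholders, not WhiteSource's actual configuration:

```yaml
# Hypothetical manifest combining a desired-state definition with an
# explicit rolling-update strategy for zero-downtime deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: component-worker          # illustrative name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # at most one pod taken down at a time
      maxSurge: 1                 # at most one extra pod created during the rollout
  selector:
    matchLabels:
      app: component-worker
  template:
    metadata:
      labels:
        app: component-worker
    spec:
      containers:
      - name: worker
        image: myregistry.azurecr.io/worker:1.0   # placeholder image
```

With this configuration, pushing a new image tag through the CD pipeline replaces pods one at a time; if the new version crashes, the remaining healthy pods continue serving traffic while the rollout can be paused or rolled back.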
Thank you.