
AKS

Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. As a hosted Kubernetes service, Azure handles critical tasks, like health monitoring and maintenance.

Who manages control plane?

When you create an AKS cluster, the control plane is automatically created and configured by Microsoft, and you pay only for the nodes you add to the cluster.

Core components

Core Kubernetes infrastructure components:

  • Control plane: Azure-managed nodes act as the masters of the cluster and are responsible for managing it. When you create an AKS cluster, the control plane is automatically provisioned and configured with all the management-layer components. Because this layer is managed by Azure, customers cannot access it or make configuration changes to it.
  • Nodes: The VMs that run your containerized applications.

  • Node pools: Nodes of the same configuration are grouped together into node pools. A Kubernetes cluster contains at least one node pool. The initial number of nodes and size are defined when you create an AKS cluster, which creates a default node pool. This default node pool in AKS contains the underlying VMs that run your agent nodes.

Example

You could create a pool with Windows VMs for running Windows containers and a pool with Linux VMs for running Linux containers.

Windows support

To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use the Azure CNI (advanced) network plugin. Windows containers also need their own node pool, as the default AKS node pool runs Linux containers.

  • Pods: The smallest execution unit in Kubernetes. A pod encapsulates one or more containers.
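A minimal Pod manifest illustrates the pod-to-container relationship described above (the pod name, labels, and image are illustrative, not part of any AKS default):

```yaml
# A single pod encapsulating one container (illustrative example).
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # hypothetical name
  labels:
    app: demo
spec:
  containers:
    - name: web
      image: nginx:1.25   # any container image works here
      ports:
        - containerPort: 80
```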

AKS Features

  1. IAM: You can authenticate, authorize, secure, and control access to Kubernetes clusters in 2 ways:

    • Using Kubernetes role-based access control (Kubernetes RBAC), you can grant users, groups, and service accounts access to only the resources they need. Here you can use:

      • Roles
      • ClusterRoles
      • RoleBindings
      • ClusterRoleBindings
      • Service Accounts

    • Using Azure Active Directory and Azure RBAC.

  2. Registry support
  3. Autoscaling of node pools
  4. Networking using the CNI plugin
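As a sketch of the Kubernetes RBAC objects listed above, a namespaced Role granting read-only access to pods could be bound to a user like this (role name and user are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader            # hypothetical role name
rules:
  - apiGroups: [""]           # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane@example.com    # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```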

AKS Namespaces

When you create an AKS cluster, the following namespaces are available:

  • default: Where pods and deployments are created when no namespace is specified. When you interact with the Kubernetes API, such as with kubectl get pods, the default namespace is used when none is given.

  • kube-system: Where core resources exist, such as network features like DNS and proxy, or the Kubernetes dashboard. You typically don't deploy your own applications into this namespace.

  • kube-public: Typically not used, but can be used for resources that should be visible across the whole cluster and viewable by any user.
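Beyond these built-in namespaces, you can create your own to separate workloads; a sketch (the namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a            # hypothetical namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: team-a       # deployed here instead of "default"
spec:
  containers:
    - name: web
      image: nginx:1.25
```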

Network Plugins

Kubenet

NAT is used in kubenet but not in CNI.

Kubenet is a very basic, simple network plugin, on Linux only. It does not, of itself, implement more advanced features like cross-node networking or network policy. It is typically used together with a cloud provider that sets up routing rules for communication between nodes, or in single-node environments.

  • Nodes receive an IP address from the Azure virtual network subnet.
  • Pods receive an IP address from a logically different address space than the nodes' Azure virtual network subnet.
  • Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network.
  • The source IP address of the traffic is translated to the node's primary IP address.

CNI

With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space.

Carefully plan the IP addresses

Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.
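As a sketch of that planning arithmetic: with Azure CNI, the subnet must provide one IP per node plus the up-front per-pod reservations. Assuming the reservation is (maxPods + 1) addresses per node and allowing headroom for a temporary surge node during upgrades (both assumptions, not AKS-documented constants here):

```python
def required_ips(node_count: int, max_pods_per_node: int, surge_nodes: int = 1) -> int:
    """Estimate the IP addresses an Azure CNI subnet must provide.

    Each node consumes 1 IP for itself plus max_pods_per_node IPs
    reserved up front for pods; surge_nodes accounts for the extra
    node(s) created temporarily during a cluster upgrade.
    """
    per_node = 1 + max_pods_per_node
    return (node_count + surge_nodes) * per_node

# 50 nodes, 30 pods per node, 1 surge node:
print(required_ips(50, 30))  # 51 * 31 = 1581
```

A /21 subnet (2046 usable addresses) would cover this example; a /24 (254) would not, which is why resizing often means rebuilding the cluster in a larger subnet.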

Network Policy

If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.

Dependency on Network Plugin

Network policies are implemented by the network plugin. To use network policies, you must be using a networking solution which supports NetworkPolicy. Creating a NetworkPolicy resource without a controller that implements it will have no effect.
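A sketch of a NetworkPolicy that only allows pods labelled app: frontend to reach pods labelled app: backend on TCP port 80 (all names and labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend                  # policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend         # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 80
```

Remember the dependency above: this resource has no effect unless the cluster's network plugin implements NetworkPolicy.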

Volumes

In AKS, data volumes can be created using Azure Files or Azure Disks

Disks

Azure Disks can be referenced in the YAML file as a DataDisk resource. Both Azure Standard storage (HDD) and Premium storage (SSD) are supported. Premium storage, backed by high-performance SSDs, is ideal for production workloads that demand high IOPS.

Warning

Azure Disk is mounted as ReadWriteOnce, so it is available to a single node only. If you are planning to implement shared storage, then Azure Files is the right choice.
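A sketch of a PersistentVolumeClaim that dynamically provisions an Azure Disk, assuming a Premium-SSD disk storage class is available in the cluster (the class name below is an assumption; check your cluster's storage classes):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
    - ReadWriteOnce                       # Azure Disk: one node at a time
  storageClassName: managed-csi-premium   # assumed AKS class name (Premium SSD)
  resources:
    requests:
      storage: 10Gi
```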

Files

Azure Files uses SMB 3.0 and lets you mount the same storage on multiple nodes and pods. Azure Files also supports Standard storage and Premium storage. Because the share can be accessed by multiple nodes and pods at the same time, it is ideal for shared storage scenarios.
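The equivalent claim for Azure Files uses ReadWriteMany, reflecting the shared-access behaviour described above (the storage class name is an assumption; check your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-files-pvc
spec:
  accessModes:
    - ReadWriteMany               # Azure Files: shared across nodes and pods
  storageClassName: azurefile-csi # assumed AKS class name
  resources:
    requests:
      storage: 5Gi
```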

Auto-Scaling

Cluster Autoscaling

Cluster autoscaler is typically used alongside the horizontal pod autoscaler. When combined, the horizontal pod autoscaler increases or decreases the number of pods based on application demand, and the cluster autoscaler adjusts the number of nodes as needed to run those additional pods accordingly.

HPA

Kubernetes uses the horizontal pod autoscaler (HPA) to monitor the resource demand and automatically scale the number of replicas.

By default, the horizontal pod autoscaler checks the Metrics API every 15 seconds for any required changes in replica count, but the Metrics API retrieves data from the Kubelet every 60 seconds. Effectively, the HPA is updated every 60 seconds.

When changes are required, the number of replicas is increased or decreased accordingly

Configuration required

When you configure the HPA, you can decide on the minimum number of instances, the maximum number of instances, and the metrics that need to be monitored.
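Those three settings map directly onto an HPA manifest; a sketch targeting a hypothetical Deployment named web at 70% average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical deployment to scale
  minReplicas: 2                   # minimum number of instances
  maxReplicas: 10                  # maximum number of instances
  metrics:
    - type: Resource
      resource:
        name: cpu                  # metric to monitor
        target:
          type: Utilization
          averageUtilization: 70
```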

