Networking
Service
- A Kubernetes Service is a resource you create to make a single, constant point of entry to a group of pods providing the same service.
- Each service has an IP address and port that never change while the service exists. Clients can open connections to that IP and port, and those connections are then routed to one of the pods backing the service. This way, clients of a service don’t need to know the location of the individual pods providing it, allowing those pods to be moved around the cluster at any time.
- If we curl the service IP from inside an existing pod (using kubectl exec), the request is handled by one of the pods backing the service.
Info
List the pods with the kubectl get pods command and choose one as your target for the exec command (in the following example, I've chosen the kubia-7nog1 pod as the target). You'll also need to obtain the cluster IP of your service (using kubectl get svc). When running the following commands yourself, be sure to replace the pod name and the service IP with your own:
kubectl exec kubia-7nog1 -- curl -s http://10.111.249.153
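For reference, a service like the one being curled above could be created from a manifest along these lines (a sketch modeled on the kubia example; the app label and target port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: kubia                # clients connect to this stable name / cluster IP
spec:
  selector:
    app: kubia               # assumed pod label; traffic is routed to pods matching it
  ports:
    - port: 80               # port the service exposes on its cluster IP
      targetPort: 8080       # assumed container port on the backing pods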
Various service types are:
- NodePort
- ClusterIP
- LoadBalancer
- ExternalName
NodePort¶
By creating a NodePort service, you make Kubernetes reserve a port on all its nodes (the same port number is used across all of them) and forward incoming connections to the pods that are part of the service.
- If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every node) into your Service.
- Every node in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service).
The same port is reserved on every node, so the service is accessible through any node in the cluster.
Remember
- A NodePort service can be accessed through the service's internal cluster IP and through any node's IP and the reserved node port.
- Specifying the node port isn’t mandatory; Kubernetes will choose a random port if you omit it.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
ClusterIP
Tldr
Creates internal IP addresses for communication within the AKS cluster. This is ideal for internal communication between components. This is the default service type.
To make a pod accessible from the outside, you need to expose it through a Service object. A regular service (a ClusterIP service) would, like the pod itself, be accessible only from inside the cluster, so you create a special service of type LoadBalancer instead. An external load balancer is then created, and you can connect to the pod through the load balancer’s public IP.
The ClusterIP provides a load-balanced IP address. Traffic sent to that IP address is forwarded to one or more pods that match the service's label selector. The ClusterIP service must define one or more ports to listen on, with target ports to forward TCP/UDP traffic to the containers. The IP address used for the ClusterIP is not routable outside the cluster.
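As a minimal sketch, a ClusterIP service only needs a selector and a port mapping; the name, label, and port numbers below are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service   # illustrative name
spec:
  type: ClusterIP             # optional; ClusterIP is the default when type is omitted
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80                # port exposed on the cluster IP
      targetPort: 8080        # illustrative container port on the backing pods
Inside the cluster, other pods can reach this service by the name my-internal-service (via cluster DNS) or by its cluster IP; from outside the cluster, that address is not reachable.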
LoadBalancer
Kubernetes clusters running on cloud providers usually support the automatic provisioning of a load balancer from the cloud infrastructure. All you need to do is set the service’s type to LoadBalancer instead of NodePort. The load balancer will have its own unique, publicly accessible IP address and will redirect all connections to your service. You can thus access your service through the load balancer’s IP address.
A more detailed LoadBalancer example is shown below. This works out of the box on cloud platforms such as GCP and Azure.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
Warning
Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load-balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the load balancer IP field that you set is ignored.
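For illustration, here is where that field would go (a sketch; the IP address is a placeholder, whether it is honored depends on your cloud provider, and newer Kubernetes versions deprecate spec.loadBalancerIP in favor of provider-specific annotations):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder requested static IP; ignored if unsupported
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376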
ExternalName
Creates a specific DNS entry for easier application access: the service maps to an external DNS name by returning a CNAME record instead of proxying traffic.
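A minimal sketch (the service name and the external hostname are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-database                       # placeholder name used inside the cluster
spec:
  type: ExternalName
  externalName: my.database.example.com   # external DNS name the service resolves to
Pods that look up my-database through cluster DNS receive a CNAME record for my.database.example.com; no proxying or port mapping is involved.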
Network Plugins
Kubenet
The kubenet networking option is the default configuration for AKS cluster creation. With kubenet:
- Nodes receive an IP address from the Azure VNet subnet.
- Pods receive an IP address from a logically different address space than the nodes' Azure virtual network subnet.
- NAT is then configured so that the pods can reach resources on the Azure virtual network.
- The source IP address of the traffic is translated to the node's primary IP address.
Why use kubenet
Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
CNI
With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space.
IP exhaustion
Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.