4. Kubernetes Deep Dive

Learning Objectives

By the end of this chapter, you will be able to:

  • Set up a local Kubernetes cluster on your computer
  • Access a Kubernetes cluster using the dashboard and the Terminal
  • Identify the fundamental Kubernetes resources, the building blocks of Kubernetes applications
  • Install complex applications on a Kubernetes cluster

In this chapter, we will explain the basics of the Kubernetes architecture, the methods of accessing the Kubernetes API, and fundamental Kubernetes resources. In addition to that, we will deploy a real-life application into Kubernetes.

Introduction to Kubernetes

In the previous chapter, we studied serverless frameworks, created serverless applications using these frameworks, and deployed these applications to the major cloud providers.

As we have seen in the previous chapters, Kubernetes and serverless architectures started gaining traction in the industry at around the same time. Kubernetes achieved widespread adoption and became the de facto container management system, thanks to its design principles of scalability, high availability, and portability. For serverless applications, Kubernetes provides two essential benefits: the removal of vendor lock-in and the reuse of services.

Kubernetes creates an infrastructure layer of abstraction to remove vendor lock-in. Vendor lock-in is a situation where transition from one service provider to another is very difficult or even infeasible. In the previous chapter, we studied how serverless frameworks make it easy to develop cloud-agnostic serverless applications. Let's assume you are running your serverless framework on an AWS EC2 instance and want to move to Google Cloud. Although your serverless framework creates a layer between the cloud provider and serverless applications, you are still deeply attached to the cloud provider for the infrastructure. Kubernetes breaks this connection by creating an abstraction between the infrastructure and the cloud provider. In other words, serverless frameworks running on Kubernetes are unaware of the underlying infrastructure. If your serverless framework runs on Kubernetes in AWS, it is expected to run on Google Cloud Platform (GCP) or Azure.

As the de facto container management system, Kubernetes manages most microservice applications in the cloud and in on-premise systems. Let's assume you have already converted your big monolith application into cloud-native microservices and you're running them on Kubernetes. And now you've started developing serverless applications or turning some of your microservices into serverless nanoservices. At this stage, your serverless applications will need to access the data and other services. If you can run your serverless applications in your Kubernetes clusters, you will have the chance to reuse the services and stay close to your data. Besides, it will be easier to manage and operate both microservices and serverless applications.

As a solution to vendor lock-in, and for potential reuse of data and services, it is crucial to learn how to run serverless architectures on Kubernetes. In this chapter, a Kubernetes recap is presented to introduce the origin and design of Kubernetes. Following that, we will install a local Kubernetes cluster, and you will be able to access the cluster by using a dashboard or a client tool such as kubectl. In addition to that, we will discuss the building blocks of Kubernetes applications, and finally, we'll deploy a real-life application to the cluster.

Kubernetes Design and Components

Kubernetes, which is also known as k8s, is a platform for managing containers. It is a complex system focused on the complete life cycle of containers, including configuration, installation, health checking, troubleshooting, and scaling. With Kubernetes, it is possible to run microservices in a scalable, flexible, and reliable way. Let's assume you are a DevOps engineer at a fintech company, focusing on online banking for your customers.

You can configure and install the complete backend and frontend of an online bank application to Kubernetes in a secure and cloud-native way. With the Kubernetes controllers, you can manually or automatically scale your services up and down to match customer demand. Also, you can check the logs, perform health checks on each service, and even SSH into the containers of your applications.

In this section, we will focus on how Kubernetes is designed and how its components work in harmony.

Kubernetes clusters consist of one or more servers, and each server is assigned a set of logical roles. There are two essential roles assigned to the servers of a cluster: master and node. If a server is in the master role, the control plane components of Kubernetes run on it. Control plane components are the primary set of services used to run the Kubernetes API, including REST operations, authentication, authorization, scheduling, and cloud operations. In recent versions of Kubernetes, four services run as the control plane:

  • etcd: etcd is an open source key/value store, and it is the database of all Kubernetes resources.
  • kube-apiserver: The API server is the component that runs the Kubernetes REST API. It is the most critical component for interacting with other parts of the control plane and with client tools.
  • kube-scheduler: A scheduler assigns workloads to nodes based on the workload requirements and node status.
  • kube-controller-manager: kube-controller-manager is the control plane component used to manage the core controllers of Kubernetes resources. Controllers are the primary life cycle managers of Kubernetes resources. For each Kubernetes resource, there are one or more controllers that work in the observe, decide, and act loop diagrammed in Figure 4.1. Controllers check the current status of the resources in the observe stage, then analyze and decide on the actions required to reach the desired state. In the act stage, they execute those actions and continue to observe the resources.
Figure 4.1: Controller loop in Kubernetes

Servers with the node role are responsible for running the workload in Kubernetes. Therefore, there are two essential Kubernetes components required in every node:

  • kubelet: kubelet is the management gateway of the control plane in the nodes. kubelet communicates with the API server and implements actions needed on the nodes. For instance, when a new workload is assigned to a node, kubelet creates the container by interacting with the container runtime, such as Docker.
  • kube-proxy: Containers run on separate server nodes, but they interact with each other as if they were running in a unified networking setup. kube-proxy makes it possible for containers to communicate even though they run on different nodes.

The control plane and the roles, such as master and node, are logical groupings of components. For high availability, it is recommended to run the control plane on multiple servers in the master role. In addition, servers with the node role connect to the control plane to form a scalable and cloud-native environment. The relationship and interaction of the control plane and the master and node servers are presented in the following figure:

Figure 4.2: The control plane and the master and node servers in a Kubernetes cluster

In the following exercise, a Kubernetes cluster will be created locally, and Kubernetes components will be checked. Kubernetes clusters are sets of servers with master or worker nodes. On these nodes, both control plane components and user applications are running in a scalable and highly available way. With the help of local Kubernetes cluster tools, it is possible to create single-node clusters for development and testing. minikube is the officially supported and maintained local Kubernetes solution, and it will be used in the following exercise.

Note

You will use minikube in the following exercise as the official local Kubernetes solution, and it runs the Kubernetes components on a hypervisor. Hence, you must install a hypervisor such as VirtualBox, Parallels, VMware Fusion, HyperKit, or VMware. Refer to this link for more information:

https://kubernetes.io/docs/tasks/tools/install-minikube/#install-a-hypervisor

Exercise 10: Starting a Local Kubernetes Cluster

In this exercise, we will install minikube and use it to start a one-node Kubernetes cluster. When the cluster is up and running, it will be possible to check the master and node components.

To complete the exercise, we need to ensure the following steps are executed:

  1. Install minikube to the local system by running these commands in your Terminal:

    # Linux
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

    # MacOS
    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

    chmod +x minikube
    sudo mv minikube /usr/local/bin

    These commands download the binary file of minikube, make it executable, and move it into the bin folder for Terminal access.

  2. Start the minikube cluster by running the following command:

    minikube start

    This command downloads the images and creates a single-node virtual machine. Following that, it configures the machine and waits until the Kubernetes control plane is up and running, as shown in the following figure:

    Figure 4.3: Starting a new cluster in minikube
  3. Check the status of the Kubernetes cluster:

    minikube status

    As the output in the following figure indicates, the host system, kubelet, and apiserver are running:

    Figure 4.4: Kubernetes cluster status
  4. Connect to the virtual machine of minikube by running the following command:

    minikube ssh

    You should see the output shown in the following figure:

    Figure 4.5: minikube virtual machine
  5. Check for the four control-plane components with the following command:

    pgrep -l etcd && pgrep -l kube-apiserver && pgrep -l kube-scheduler && pgrep -l controller

    This command lists the processes and captures the mentioned command names. There are a total of four lines, each corresponding to a control plane component and its process ID, as depicted in the following figure:

    Figure 4.6: Control plane components
  6. Check for the node components with the following command:

    pgrep -l kubelet && pgrep -l kube-proxy

    This command lists two components running in the node role, with their process IDs, as shown in the following figure:

    Figure 4.7: Node components
  7. Exit the terminal started in Step 4 with the following command:

    exit

    You should see the output shown in the following figure:

Figure 4.8: Exiting the minikube virtual machine

In this exercise, we installed a single-node Kubernetes cluster using minikube. In the next section, we will discuss using the official client tool of Kubernetes to connect to and operate the cluster from the previous exercise.

Kubernetes Client Tool: kubectl

The Kubernetes control plane runs a REST API server for accessing Kubernetes resources and undertaking operational activities. Kubernetes comes with an official open source command-line tool, kubectl, to consume this REST API. It is installed on the local system and configured to connect to remote clusters securely and reliably. kubectl is the primary tool for the complete life cycle of applications running in Kubernetes. For instance, say you deploy a WordPress blog in your cluster. First, you create your database passwords as secrets using kubectl. Following that, you deploy your blog application and check its status. In addition to that, you may trace the logs of your applications or even SSH into the containers for further analysis. Therefore, it is a powerful CLI tool that can handle both basic create, read, update, and delete (CRUD) actions and troubleshooting.
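
As a sketch of that WordPress scenario with standard kubectl commands (the secret name, manifest file, and pod name below are illustrative, not taken from a real deployment):

# Store the database password as a secret
kubectl create secret generic wordpress-db --from-literal=password=changeit

# Deploy the application manifests and check the status of the pods
kubectl apply -f wordpress.yaml
kubectl get pods

# Trace logs or open a shell inside a container for troubleshooting
kubectl logs wordpress-0
kubectl exec -it wordpress-0 -- /bin/bash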

In addition to application management, kubectl is also a powerful tool for cluster operations. It is possible to check the status of the Kubernetes API or of the servers in the cluster using kubectl. Let's assume you need to restart a server in your cluster and must move its workload to other nodes. Using kubectl commands, you can mark the node as unschedulable and let the Kubernetes scheduler move the workload to other nodes. When you complete the maintenance, you can mark the node as schedulable again and let the Kubernetes scheduler assign new workloads to it.
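
This maintenance flow maps to the cordon, drain, and uncordon commands, as in the following sketch (the node name node-1 is hypothetical):

# Mark the node as unschedulable and evict its workload to other nodes
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets

# After maintenance, mark the node as schedulable again
kubectl uncordon node-1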

kubectl is a vital command-line tool for daily Kubernetes operations. Therefore, learning the basics and getting hands-on experience with kubectl is crucial. In the following exercise, you will install and configure kubectl to connect to a local Kubernetes cluster.

Exercise 11: Accessing Kubernetes Clusters Using the Client Tool: kubectl

In this exercise, we aim to access the Kubernetes API using kubectl and explore its capabilities.

To complete the exercise, we need to ensure the following steps are executed:

  1. Download the kubectl executable by running these commands in the Terminal:

    # Linux
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl

    # MacOS
    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/darwin/amd64/kubectl

    chmod +x kubectl
    sudo mv kubectl /usr/local/bin

    These commands download the binary of kubectl, make it executable, and move it into the bin folder for Terminal access.

  2. Configure kubectl to connect to the minikube cluster:

    kubectl config use-context minikube

    This command configures kubectl to use the minikube context, which is the set of credentials used to connect to the minikube cluster, as shown in the following figure:

    Figure 4.9: kubectl context setting
  3. Check the available nodes with the following command:

    kubectl get nodes

    This command lists all the nodes connected to the cluster. As a single-node cluster, there is only one node, named minikube, as shown in the following figure:

    Figure 4.10: kubectl get nodes
  4. Get more information about the minikube node with the following command:

    kubectl describe node minikube

    This command lists all the information about the node, starting with its metadata, such as Roles, Labels, and Annotations. The role of this node is specified as master in the Roles section, as shown in the following figure:

    Figure 4.11: Node metadata

    Following the metadata, Conditions lists the health status of the node. It is possible to check available memory, disk, and process IDs in tabular form, as shown in the following figure.

    Figure 4.12: Node conditions

    Then, available and allocatable capacity and system information are listed, as shown in the following figure:

    Figure 4.13: Node capacity information

    Finally, the running workload on the node and allocated resources are listed, as shown in the following figure:

    Figure 4.14: Node workload information
  5. Get the supported API resources with the following command:

    kubectl api-resources -o name

    You should see the output shown in the following figure:

Figure 4.15: Output of kubectl api-resources

This command lists all the resources supported by the Kubernetes cluster. The length of the list indicates the power and comprehensiveness of Kubernetes in the sense of application management. In this exercise, the official Kubernetes client tool was installed, configured, and explored. In the following section, the core building block resources from the resource list will be presented.
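
As a quick aside, the long output can be counted or filtered with standard kubectl flags, as in this small sketch:

# Count all supported resources
kubectl api-resources -o name | wc -l

# List only the resources in the apps API group
kubectl api-resources --api-group=apps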

Kubernetes Resources

Kubernetes comes with a rich set of resources to define and manage cloud-native applications as containers. In the Kubernetes API, every container, secret, configuration, or custom definition is defined as a resource. The control plane manages these resources while the node components try to achieve the desired state of the applications. The desired state could be running 10 instances of the application or mounting disk volumes to database applications. The control plane and node components work in harmony to make all resources in the cluster reach their desired state.

In this section, we will study the fundamental Kubernetes resources used to run serverless applications.

Pod

The pod is the building block of computation in Kubernetes. A pod consists of containers scheduled onto the same node to run as a single application. Containers in the same pod share resources, such as networking and memory. In addition, the containers in the pod share life cycle events such as scaling up or down. A pod can be defined with an ubuntu image and the echo command as follows:

apiVersion: v1
kind: Pod
metadata:
  name: echo
spec:
  containers:
  - name: main
    image: ubuntu
    command: ['sh', '-c', 'echo Serverless World! && sleep 3600']

When the echo pod is created in the Kubernetes API, the scheduler will assign it to an available node. Then the kubelet on the corresponding node will create a container and attach networking to it. Finally, the container will start running the echo and sleep commands. Pods are the essential Kubernetes resource for creating applications, and Kubernetes uses them as building blocks for more complex resources. In the following resources, pods will be encapsulated to create more complex cloud-native applications.
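
Before moving on, here is a quick sketch of running this pod, assuming the definition above is saved as echo-pod.yaml:

# Create the pod and read the output of its container
kubectl apply -f echo-pod.yaml
kubectl logs echo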

Deployment

Deployments are the most commonly used Kubernetes resource to manage highly available applications. Deployments enhance pods by making it possible to scale up, scale down, or roll out new versions. The deployment definition looks similar to a pod with two important additions: labels and replicas.

Consider the following code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

The deployment named webserver defines five replicas of the application running with the label app: server. In the template section, the application is defined with the exact same label and one nginx container. The deployment controller in the control plane ensures that five instances of this application are running inside the cluster. Let's assume you have three nodes, A, B, and C, with one, two, and two instances of the webserver application running, respectively. If node C goes offline, the deployment controller will ensure that the two lost instances are recreated on nodes A and B. Kubernetes ensures that scalable and highly available applications run reliably as deployments. In the following section, Kubernetes resources for stateful applications such as databases will be presented.
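
Before moving on, here is a brief sketch of the scaling and rollout operations mentioned above, assuming the webserver deployment has been applied (the nginx:1.9.1 tag is only an example):

# Scale the deployment from 5 to 10 replicas
kubectl scale deployment webserver --replicas=10

# Roll out a new image version and watch its progress
kubectl set image deployment webserver nginx=nginx:1.9.1
kubectl rollout status deployment webserver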

StatefulSet

Kubernetes supports running both stateless ephemeral applications and stateful applications. In other words, it is possible to run database applications or disk-oriented applications in a scalable way inside your clusters. The StatefulSet definition is similar to a deployment, with volume-related additions.

Consider the following code snippet:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "root"
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The mysql StatefulSet creates a MySQL database with a 1 GB data volume. The volume is created by Kubernetes and attached to the container at /var/lib/mysql. With the StatefulSet controllers, it is possible to create applications that need disk access in a scalable and reliable way. In the following section, we'll discuss how to connect applications in a Kubernetes cluster.
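
The claims created from volumeClaimTemplates follow the <template>-<statefulset>-<ordinal> naming convention, so the claim here would appear as data-mysql-0; a quick check:

# List the PersistentVolumeClaims created for the StatefulSet
kubectl get pvc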

Service

In Kubernetes, multiple applications run in the same cluster and connect to each other. Since each application has multiple pods running on different nodes, connecting applications is not straightforward. In Kubernetes, a Service is the resource used to define a set of pods, and you access the pods by using the name of the Service. Service resources are defined using the labels of the pods.

Consider the following code snippet:

apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306

With the my-database service, the pods with the label app: mysql are grouped. When port 3306 of the my-database address is called, Kubernetes networking connects to port 3306 of a pod with the label app: mysql. Service resources thus create an abstraction layer between the applications running in the cluster and enable decoupling. Let's assume you have a three-instance backend and a three-instance frontend in your application. Frontend pods can easily connect to backend instances using the Service resource, without knowing where the backend instances are running. In the following section, resources focusing on tasks and scheduled tasks will be presented.
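
The service name is resolvable from any pod in the cluster; a one-off pod can verify this, as in the following sketch (the dns-test pod is temporary and hypothetical):

# Resolve the service name from inside the cluster
kubectl run dns-test --image=busybox -i -t --rm --restart=Never \
-- nslookup my-database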

Job and CronJob

Kubernetes resources such as deployments and StatefulSets focus on running applications and keeping them up and running. However, Kubernetes also provides Job and CronJob resources to run applications to completion. For instance, if your application needs to do one-time tasks, you can create a Job resource as follows:

apiVersion: batch/v1
kind: Job
metadata:
  name: echo
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: echo
        image: busybox
        args:
        - /bin/sh
        - -c
        - echo Hello from the echo Job!

When the echo Job is created, Kubernetes will create a pod, schedule it, and run it. When the container terminates after the echo command, Kubernetes will not try to restart it or keep it running.
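
The Job's progress and output can be checked with standard commands, as in this short sketch:

# Check Job completion and read the pod's output
kubectl get jobs
kubectl logs job/echo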

In addition to one-time tasks, it is possible to run scheduled jobs using the CronJob resource, as shown in the following code snippet:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hourly-echo
spec:
  schedule: "0 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo It is time to say echo!

With the hourly-echo CronJob, an additional schedule parameter is provided. The schedule "0 * * * *" follows the standard cron format (minute, hour, day of month, month, day of week), so Kubernetes creates a new Job instance of this CronJob at minute zero of every hour. Jobs and CronJobs are Kubernetes-native ways of handling the manual and automated tasks required by your applications. In the following exercise, Kubernetes resources will be explored using kubectl and a local Kubernetes cluster.
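
Before the exercise, here is a short sketch of watching a CronJob in action:

# List the CronJob and the Job instances it spawns over time
kubectl get cronjobs
kubectl get jobs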

Exercise 12: Installing a Stateful MySQL Database and Connecting inside Kubernetes

In this exercise, we will install a MySQL database as a StatefulSet, check its status, and connect to the database using a Job that creates a table.

To complete the exercise, we need to ensure the following steps are executed:

  1. Create a file named mysql.yaml on your local computer with the following content:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 1
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.7
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: "root"
            - name: MYSQL_DATABASE
              value: "db"
            - name: MYSQL_USER
              value: "user"
            - name: MYSQL_PASSWORD
              value: "password"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi

    Note

    mysql.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/mysql.yaml.

  2. Deploy the StatefulSet MySQL database with the following command in your Terminal:

    kubectl apply -f mysql.yaml

    This command submits the mysql.yaml file, which includes a StatefulSet called mysql and a 1 GB volume claim. The output will look like this:

    Figure 4.16: StatefulSet creation
  3. Check the pods with the following command:

    kubectl get pods

    This command lists the running pods, and we expect to see the one instance of mysql, as shown in the following figure:

    Figure 4.17: Pod listing

    Note

    If the pod status is Pending, wait a couple of minutes until it becomes Running before continuing to the next step.

  4. Check the persistent volumes with the following command:

    kubectl get persistentvolumes

    This command lists the persistent volumes, and we expect to see the one-volume instance created for the StatefulSet, as shown in the following figure:

    Figure 4.18: Persistent volume listing
  5. Create the service.yaml file with the following content:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-database
    spec:
      selector:
        app: mysql
      ports:
        - protocol: TCP
          port: 3306
          targetPort: 3306

    Note

    service.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/service.yaml.

  6. Deploy the my-database service with the following command in your Terminal:

    kubectl apply -f service.yaml

    This command submits the Service named my-database to group pods with the label app:mysql:

    Figure 4.19: Service creation
  7. Create the create-table.yaml file with the following content:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: create-table
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: create
            image: mysql:5.7
            args:
            - /bin/sh
            - -c
            - mysql -h my-database -u user -ppassword db -e 'CREATE TABLE IF NOT EXISTS messages (id INT)';

    Note

    create-table.yaml is available on GitHub at https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Exercise12/create-table.yaml.

  8. Deploy the job with the following command:

    kubectl apply -f create-table.yaml

    This command submits the Job named create-table, and within a couple of minutes the pod will be created to run the CREATE TABLE command, as shown in the following figure:

    Figure 4.20: Job creation
  9. Check for the pods with the following command:

    kubectl get pods

    This command lists the running pods, and we expect to see the one instance of create-table, as shown in the following figure:

    Figure 4.21: Pod listing

    Note

    If the pod status is Pending or Running, wait a couple of minutes until it becomes Completed before continuing to the next step.

  10. Run the following command to check the tables in the MySQL database:

    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never \
    -- mysql -h my-database -u user -ppassword db -e "show tables;"

    This command runs a temporary instance of the mysql:5.7 image and runs the mysql command, as shown in the following figure:

    Figure 4.22: Table listing

    In the MySQL database, a table with the name messages is available, as shown in the preceding output. It shows that the MySQL StatefulSet is up and the database is running successfully. In addition, the create-table Job has created a pod, connected to the database using the service, and created the table.

  11. Clean the resources by running the following command:

    kubectl delete -f create-table.yaml,service.yaml,mysql.yaml

    You should see the output shown in the following figure:

Figure 4.23: Cleanup

In the following activity, the database will be filled with the information retrieved by automated tasks in Kubernetes.

Note

You will need a Docker Hub account to push the images into the registry in the following activity. Docker Hub is a free service, and you can sign up to it at https://hub.docker.com/signup.

Activity 4: Collect Gold Prices in a MySQL Database in Kubernetes

The aim of this activity is to create a real-life serverless application that runs in a Kubernetes cluster using Kubernetes-native resources. The serverless function will get gold prices from the live market and push the data to the database. The function will run at predefined intervals to keep a history and enable statistical analyses. Gold prices can be retrieved from the CurrencyLayer API, which provides a free API for exchange rates. Once completed, you will have a CronJob running every minute:

Note

In order to complete the following activity, you need to have a CurrencyLayer API access key. It is a free currency and exchange rate service, and you can sign up to it on the official website.

Figure 4.24: Kubernetes Job for gold price

Finally, with each run of the Kubernetes Job, you will have a real-time gold price in the database:

Figure 4.25: Price data in the database

Execute the following steps to complete this activity:

  1. Create an application to retrieve the gold price from CurrencyLayer and insert it into the MySQL database. It is possible to implement this function in Go with the following structure in a main.go file:

    // only displaying the main function here
    func main() {
        db, err := sql.Open("mysql", ...
        r, err := http.Get(fmt.Sprintf("http://apilayer.net/api/...
        stmt, err := db.Prepare("INSERT INTO GoldPrices(price) VALUES(?)")
        _, err = stmt.Exec(target.Quotes.USDXAU)
        log.Printf("Successfully inserted the price: %v", target.Quotes.USDXAU)
    }

    In the main function, you first need to connect to the database and then retrieve the price from CurrencyLayer. Then you need to create a SQL statement and execute it on the database connection. The complete code for main.go can be found here: https://github.com/TrainingByPackt/Serverless-Architectures-with-Kubernetes/blob/master/Lesson04/Activity4/main.go.

  2. Build the application as a Docker container.
  3. Push the Docker container to the Docker registry.
  4. Deploy the MySQL database into the Kubernetes cluster.
  5. Deploy a Kubernetes service to expose the MySQL database.
  6. Deploy a CronJob to run every minute.
  7. Wait for a couple of minutes and check the instances of CronJob.
  8. Connect to the database and check for the entries.
  9. Clean the database and automated tasks from Kubernetes.

    Note

    The solution of the activity can be found on page 403.

Summary

In this chapter, we first described the origins and characteristics of Kubernetes. Following that, we studied the Kubernetes design and components with the details of master and node components. Then, we installed a local single-node Kubernetes cluster and checked the Kubernetes components. Following the cluster setup, we studied the official Kubernetes client tool, kubectl, which is used to connect to a cluster. We also saw how kubectl is used to manage clusters and the life cycle of applications. Finally, we discussed the fundamental Kubernetes resources for serverless applications, including pods, deployments, and StatefulSets. In addition to that, we also studied how to connect applications in a cluster using services. Kubernetes resources for one-time and automated tasks were presented using Jobs and CronJobs. At the end of this chapter, we developed a real-time data collection function using Kubernetes-native resources.

In the next chapter, we will be studying the features of Kubernetes clusters and using a popular cloud platform to deploy them.