Downward API

An application often needs information about the environment it runs in, including details about itself and about other applications in the same environment. For example, a web server needs the addresses of its database and Redis instances, a network application needs the name of the worker node it is deployed on, and a monitoring application needs information about the applications it monitors.

Information like database and Redis addresses is known in advance, so we can pass it through the Pod's env configuration. But the name of the worker node the Pod will be deployed to cannot be known in advance: if we have 10 worker nodes, we have no way of knowing which node the Pod will be scheduled to at the time we write the env config; we only find out once the Pod has actually been deployed. So how can we pass that node's name into the application?

Another example: an application needs to know the name of the Pod it belongs to, but if we deploy the Pod with a ReplicaSet, the Pod name is generated randomly, so we cannot know it until the Pod has been created. How, then, can we pass that value to the application through the Pod's env configuration? For cases like these, Kubernetes provides a feature called the Downward API.

Downward API

The Downward API lets us pass a Pod's metadata and environment information into its containers. We can expose this information through the Pod's env config as environment variables, or through the Pod's volume config as files inside the container. And don't let the name confuse you: the Downward API is not a REST endpoint, it is simply a mechanism for passing a Pod's metadata into its containers.

Pod metadata that Downward API supports

The Downward API allows us to pass the following information into the container:

  • The Pod's name

  • The Pod's IP address

  • The Pod's namespace

  • The name of the node the Pod is running on

  • The name of the Pod's ServiceAccount (discussed in a later article)

  • Each container's CPU and memory requests

  • Each container's CPU and memory limits

  • The Pod's labels

  • The Pod's annotations

All of the attributes above can be passed to the container via env, except labels and annotations, which can only be exposed as volume files.

Passing metadata using env

Now we will create a Pod and pass its metadata to the container. Create a file named downward-api-env.yaml with the following configuration:

apiVersion: v1
kind: Pod
metadata:
  name: downward
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "9999999"]
      resources:
        requests:
          cpu: 15m
          memory: 100Ki
        limits:
          cpu: 100m
          memory: 8Mi
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef: # using downward API
              fieldPath: metadata.name # the metadata.name field from the pod manifest
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace # the metadata.namespace field from the pod manifest
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP # access pod IP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName # access node name
        - name: CONTAINER_CPU_REQUEST_MILLICORES
          valueFrom:
            resourceFieldRef: # using resourceFieldRef instead of fieldRef.
              resource: requests.cpu
              divisor: 1m
        - name: CONTAINER_MEMORY_LIMIT_KIBIBYTES
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Ki

We use the fieldRef and resourceFieldRef properties in the Pod's env config to pass metadata to the container through the Downward API. With fieldRef we specify a fieldPath attribute: metadata.name to access the Pod's name, spec.nodeName to access the worker node's name, and so on.

For the env vars that access the container's resource requests and limits, we specify a divisor field: the requests and limits values are divided by this number to produce the value passed into the container. In the config above, requests.cpu is 15m and the divisor is 1m, so the CONTAINER_CPU_REQUEST_MILLICORES env value will be 15m / 1m = 15. limits.memory is 8Mi and the divisor is 1Ki, so the CONTAINER_MEMORY_LIMIT_KIBIBYTES env value will be 8Mi / 1Ki = 8192. (Fractional results are rounded up to an integer.)
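The divisor can be any quantity in the same unit family. As a sketch, here is a variant entry (the env name CONTAINER_MEMORY_LIMIT_MEBIBYTES is our own choice, not part of the manifest above) that exposes the same memory limit in mebibytes instead:

        - name: CONTAINER_MEMORY_LIMIT_MEBIBYTES
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Mi # 8Mi / 1Mi => the env value will be 8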

Create Pod and test:

$ kubectl apply -f downward-api-env.yaml
pod/downward created
$ kubectl exec downward -c main -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=downward
POD_NAME=downward
POD_NAMESPACE=default
POD_IP=10.1.11.166
NODE_NAME=docker-desktop
CONTAINER_CPU_REQUEST_MILLICORES=15
CONTAINER_MEMORY_LIMIT_KIBIBYTES=8192
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
HOME=/root

You will see the Pod's metadata in the container's environment variables, and our application can read whichever of them it needs.
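For example, any process in the container can read these variables directly. A minimal sketch, whose output reflects the values we saw above:

$ kubectl exec downward -c main -- sh -c 'echo "pod $POD_NAME is running on node $NODE_NAME"'
pod downward is running on node docker-desktop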

Passing metadata using volume files

Now we will pass the metadata through the volume config and mount it into the container as files. Create a file named downward-api-volume.yaml with the following configuration:

apiVersion: v1
kind: Pod
metadata:
  name: downward-volume
  labels:
    foo: bar # exposed through the downwardAPI volume below
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "9999999"]
      resources:
        requests:
          cpu: 15m
          memory: 100Ki
        limits:
          cpu: 100m
          memory: 8Mi
      volumeMounts:
        - name: downward
          mountPath: /etc/downward
  volumes:
    - name: downward
      downwardAPI: # using downward API
        items:
          - path: "podName" # file name mount to container
            fieldRef:
              fieldPath: metadata.name
          - path: "podNamespace"
            fieldRef:
              fieldPath: metadata.namespace
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
          - path: "containerCpuRequestMilliCores"
            resourceFieldRef:
              containerName: main
              resource: requests.cpu
              divisor: 1m
          - path: "containerMemoryLimitBytes"
            resourceFieldRef:
              containerName: main
              resource: limits.memory
              divisor: 1 # expose the limit in bytes, matching the file name

Here we declare the volume and use the downwardAPI property to pass metadata to the container as files under the /etc/downward directory. When using the volume form, each resourceFieldRef config needs an extra containerName attribute to select the container whose requests and limits we want to expose (a Pod can have several containers, so the volume must know which one we mean).

Create Pod and test:

$ kubectl apply -f downward-api-volume.yaml
pod/downward-volume created
$ kubectl exec downward-volume -- ls -lL /etc/downward
-rw-r--r-- 1 root root 134 May 25 10:23 annotations
-rw-r--r-- 1 root root 2 May 25 10:23 containerCpuRequestMilliCores
-rw-r--r-- 1 root root 7 May 25 10:23 containerMemoryLimitBytes
-rw-r--r-- 1 root root 9 May 25 10:23 labels
-rw-r--r-- 1 root root 8 May 25 10:23 podName
-rw-r--r-- 1 root root 7 May 25 10:23 podNamespace
$ kubectl exec downward-volume -- cat /etc/downward/labels
foo="bar"

As you can see, using the Downward API is not difficult: it lets us pass basic environment information and the Pod's own metadata into the container. A nice property of the volume form is that the labels and annotations files are updated automatically when the Pod's labels or annotations change, while environment variables cannot change after the container starts. Still, the data the Downward API supports is quite limited, and we cannot access information about other Pods with it. If we need more, we use the Kubernetes API server, a true REST API.

Kubernetes API server

The API server exposes a REST API that we can call to get the information we need about our cluster, but using it is not as easy as the Downward API.

Calling the API server requires authentication. Before talking about how to interact with it, let's first see what it looks like.

Explore the API server

To check the URL of the API server, we run the command:

$ kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443

Depending on your environment, this URL will be an IP address or a DNS name, with port 6443. Let's send a request to the API server:

$ curl https://kubernetes.docker.internal:6443
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

You will see it print an error because the URL uses HTTPS. To call it anyway, we pass the --insecure flag (or -k):

$ curl -k https://kubernetes.docker.internal:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

At this point we have reached the Kubernetes API server, but it returns a 403 error because we are not authenticated. There are many ways to authenticate to the server, but for a quick test we can use kubectl proxy, which exposes the API server locally and handles authentication for us.

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

Open another terminal.

$ curl 127.0.0.1:8001
{
  "paths": [
    "/.well-known/openid-configuration",
    "/api",
    "/api/v1",
    "/apis",
    "/apis/"
    ...
    ]
}

Yep, we have called the API server. The result it returns is the list of API server endpoints we can call.

Interact with API server

In the paths array you will see a path called /api/v1; this is the path containing the basic resources. Remember how, when declaring a config file, we always specify the apiVersion attribute first? That attribute maps to these paths.

For example, when declaring a Pod we specify apiVersion: v1, which means the Pod resource lives under /api/v1. Let's call that path:

$ curl 127.0.0.1:8001/api/v1
{
  "kind": "APIResourceList",
  "groupVersion": "v1",
  "resources": [
    ...
    {
      "name": "configmaps",
      "singularName": "",
      "namespaced": true,
      "kind": "ConfigMap",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "update",
        "watch"
      ],
      "shortNames": [
        "cm"
      ],
      "storageVersionHash": "qFsyl6wFWjQ="
    },
    ...
    {
      "name": "pods",
      "singularName": "",
      "namespaced": true,
      "kind": "Pod",
      "verbs": [
        "create",
        "delete",
        "deletecollection",
        "get",
        "list",
        "patch",
        "update",
        "watch"
      ],
      "shortNames": [
        "po"
      ],
      "categories": [
        "all"
      ],
      "storageVersionHash": "xPOwRZ+Yhw8="
    }
    ...
  ]
}

We see that it lists all the resources in this group, including Pod and ConfigMap.
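Resources that belong to a named API group live under /apis instead of /api. For example, Deployments are declared with apiVersion: apps/v1, so their resource list can be fetched like this (output elided):

$ curl 127.0.0.1:8001/apis/apps/v1
{
  "kind": "APIResourceList",
  "groupVersion": "apps/v1",
  "resources": [
    ...
  ]
}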

We can list all Pods in a namespace by calling the API path as follows:

$ curl 127.0.0.1:8001/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "1152707"
  },
  "items": []
}

If there were any Pods, they would appear in the items attribute. The API path structure is <api-server-url>/api/v1/namespaces/<namespace-name>/pods, which lists all Pods in the namespace you specify; above, we listed the Pods in the default namespace.
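The list endpoints also accept query parameters, matching the list and watch verbs we saw in the pods resource above. As a sketch (the label app=webservice is just an illustration):

$ curl "127.0.0.1:8001/api/v1/namespaces/default/pods?labelSelector=app%3Dwebservice"
$ curl "127.0.0.1:8001/api/v1/namespaces/default/pods?watch=true"

The first call returns only Pods carrying that label; the second keeps the connection open and streams changes as they happen.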

If we want to list Pods in another namespace, we specify that namespace's name in the path, like this:

$ curl 127.0.0.1:8001/api/v1/namespaces/gitlab/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "1153755"
  },
  "items": [
    {
      "metadata": {
        "name": "gitlab-webservice-default-ff4459cf5-bqklw",
        "namespace": "gitlab",
        ...
      }
      ...
    },
    {
      "metadata": {
        "name": "gitlab-workhorse-7d98fb5bd-ph75v",
        "namespace": "gitlab",
        ...
      }
      ...
    },
  ]
}

To get information about a Pod, we call the following link:

$ curl 127.0.0.1:8001/api/v1/namespaces/gitlab/pods/gitlab-webservice-default-ff4459cf5-bqklw
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "gitlab-webservice-default-ff4459cf5-bqklw",
    "generateName": "gitlab-webservice-default-ff4459cf5-",
    "namespace": "gitlab",
    "uid": "33dbeab0-f6d7-47e6-b61c-2feff4021b17",
    "resourceVersion": "1139441",
    "creationTimestamp": "2021-08-23T08:42:01Z",
    "labels": {
      "app": "webservice",
      "chart": "webservice-5.0.1",
      ...
    }
    ...
  }
  ...
}

The structure is <api-server-url>/api/v1/namespaces/<namespace-name>/pods/<pod-name>. At this point we know how to interact with the API server through kubectl proxy, but how do we interact with it from inside a container?

API server interaction inside the Pod container

To interact with the API server from inside a Pod, we need to know its URL. Kubernetes provides a default ClusterIP Service for the API server:

$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   103d

We have a Service named kubernetes, so inside a container we can call the API server at the URL https://kubernetes. Let's create a Pod and exec into it to test:

$ kubectl run curl --image=curlimages/curl --command -- sleep 9999999
pod/curl created
$ kubectl exec -it curl -- sh
/ $ curl https://kubernetes
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

We are inside the Pod; now we will send a request to the API server. Earlier we passed the -k option to curl, but in practice we should not skip certificate verification. Instead, we use the server's CA certificate to verify the HTTPS connection. The CA file, along with the information needed to authenticate to the API server, lives in the directory /var/run/secrets/kubernetes.io/serviceaccount/ inside the container. Three files are automatically mounted there when the container is created; they come from a resource called a ServiceAccount, which we will cover later. For now, we just need to understand that we use these files to authenticate to the API server.

Change into the /var/run/secrets/kubernetes.io/serviceaccount/ directory:

/ $ cd /var/run/secrets/kubernetes.io/serviceaccount/
/run/secrets/kubernetes.io/serviceaccount $ ls
ca.crt     namespace  token

Listing the directory, we see a ca.crt file; this is the CA certificate we use to verify the API server's HTTPS certificate. We use it as follows:

/run/secrets/kubernetes.io/serviceaccount $ curl --cacert ca.crt  https://kubernetes ; echo
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}

OK, we have called the API server over verified HTTPS. Now we need to authenticate. You will also see a file called token; we use it to authenticate with the API server:

/run/secrets/kubernetes.io/serviceaccount $ TOKEN=$(cat token)
/run/secrets/kubernetes.io/serviceaccount $ curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" https://kubernetes
{
  "paths": [
    "/.well-known/openid-configuration",
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    ...
  ]
}
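Putting the pieces together: a script inside any container can combine the three mounted files, using ca.crt to verify the server, token to authenticate, and namespace to target the Pod's own namespace. A minimal sketch, assuming the ServiceAccount is authorized to list Pods:

/ $ SA=/var/run/secrets/kubernetes.io/serviceaccount
/ $ curl --cacert $SA/ca.crt -H "Authorization: Bearer $(cat $SA/token)" \
    "https://kubernetes/api/v1/namespaces/$(cat $SA/namespace)/pods"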

Yep, we have interacted with the API server and called it without getting a 403. Inside a container, we use the three files automatically mounted via the ServiceAccount to authenticate to the API server.

On some clusters, RBAC is enabled (we will discuss it in the article about ServiceAccounts); there, authenticating is not enough, and you also need to grant the ServiceAccount permissions before it is authorized to call certain API paths.
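For a quick test on such a cluster, one blunt workaround is to bind the cluster-admin role to all ServiceAccounts (the binding name permissive-binding is our own choice). This is far too permissive for anything but a disposable test cluster:

$ kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --group=system:serviceaccounts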

Introducing the ambassador container pattern in Kubernetes

Instead of dealing with ca.crt and the token file ourselves, we can use a pattern called ambassador. This pattern deploys an additional container in the same Pod as the main container. Such a helper is called a sidecar container, and it provides supporting functionality to the main container. Here, the sidecar takes charge of authenticating to the API server: the main container simply calls the sidecar, and the sidecar forwards the main container's requests to the API server.

Using this pattern will make it easier for us to interact with the API server.
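A minimal sketch of the pattern (the ambassador image name kubectl-proxy is hypothetical; any image whose entrypoint runs kubectl proxy will do, since the proxy authenticates using the mounted ServiceAccount files):

apiVersion: v1
kind: Pod
metadata:
  name: curl-with-ambassador
spec:
  containers:
    - name: main
      image: curlimages/curl
      command: ["sleep", "9999999"]
    - name: ambassador
      image: kubectl-proxy # hypothetical image running: kubectl proxy --port=8001

Because containers in a Pod share the same network namespace, the main container can now reach the API server with a plain HTTP call, no CA file or token handling required:

$ kubectl exec curl-with-ambassador -c main -- curl -s localhost:8001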

Use the SDK to interact with the API server

If we only perform simple tasks with the API server, such as listing resources, calling the REST API directly keeps things simple. But for heavier interaction with the API server, we should use a client library. Kubernetes has official SDKs for a number of languages, including Go, Python, Java, and JavaScript, that we can use to work with the API server.

Example code that creates a namespace using the Node.js SDK:

const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
// Loads credentials from your kubeconfig, or from the in-cluster
// ServiceAccount files when running inside a Pod.
kc.loadFromDefault();

const k8sApi = kc.makeApiClient(k8s.CoreV1Api);

var namespace = {
    metadata: {
        name: 'test',
    },
};

k8sApi.createNamespace(namespace).then(
    (response) => {
        console.log('Created namespace');
        console.log(response);
        k8sApi.readNamespace(namespace.metadata.name).then((response) => {
            console.log(response);
            k8sApi.deleteNamespace(namespace.metadata.name, {} /* delete options */);
        });
    },
    (err) => {
        console.log('Error!: ' + err);
    },
);
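To try the example locally (the file name create-namespace.js is our own choice), install the library and run it:

$ npm install @kubernetes/client-node
$ node create-namespace.js

Note that this snippet uses the library's older positional-argument API; newer releases of @kubernetes/client-node may expect an object parameter instead, so check the version you have installed.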

As you can see, with the Kubernetes API server we can get information about other applications inside the cluster, and about the cluster itself, whenever we need it.
