Working with Kubernetes II

So, we have 3 images. Time to think about how these images should run inside a Kubernetes cluster. As mentioned in my previous post, we have a single web app which must communicate with 2 APIs.

To be more specific: we have a single publicly accessible web app which is powered by 2 APIs that are not accessible from the outside. This is important for our mini Kubernetes architecture. Of course every application could be made accessible from the outside, but that’s not how it should work here.

Designing the Kubernetes setup

To make applications run inside Kubernetes, we must provide a template, also called a manifest, which is a JSON or YAML file. This file tells Kubernetes what needs to be done; it’s a way to describe what our desired state must be. For me, this shows the real power of Kubernetes: the system is actually doing just a single job, maintaining the desired state as described by the manifest.

If the system detects an anomaly in the state, Kubernetes will try to recover to its desired state!

We only need to create a manifest 😉

The manifest tells Kubernetes what our desired state should be. We have 3 images; all of them must be running, and only one needs to be accessible from the outside. Time to set up a Deployment. A Deployment is our first description of how and what we would like to run; it is responsible for setting up one or more containers based upon an image.

These containers run inside a Pod.


Let’s first list the Kubernetes terms we are using:

  1. Namespace, this is used to group our setup. It makes it easier to filter or remove all our resources with a single command (see the example right after this list).
  2. Deployments, these describe our desired state.
  3. Services, we need to define a service to make our applications accessible.
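
For cleanup later, that single command simply deletes the namespace, which removes everything created inside it:

kubectl delete namespace kubernetes-demo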

A Pod can run multiple containers: it is the smallest deployable unit in Kubernetes and houses one or more containers. Our deployments will result in a single container per Pod. If we need to scale, we scale at the Pod level.

[Example of a Pod running multiple containers]

Deployment file

I’ve created 3 deployment files. This could be done in a single file, but it’s better to define these in separate files, as I’ll show later. Here is the deployment for the web app:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kubernetes-demo
  name: kubernetes-web-deployment
  labels:
    app: kubernetes-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes-web
  template:
    metadata:
      namespace: kubernetes-demo
      labels:
        app: kubernetes-web
    spec:
      containers:
      - name: container-kubernetes-web
        image: kubernetes.web:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: ProfileService
          value: "http://kubernetes-profile/api/profile"        
        - name: DictionaryService
          value: "http://kubernetes-dictionary/api/dictionary"

Important things to know:

  1. Kind, this tells Kubernetes that this is a Deployment
  2. Namespace, this groups our deployment
  3. Replicas, this tells Kubernetes how many Pods must be created
  4. Template, this describes what must be inside each Pod
  5. Containers, this is a list of containers inside the Pod; every container needs a reference to an image
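
Each of these fields is documented inside kubectl itself; for example, to look up what replicas does:

kubectl explain deployment.spec.replicas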

Namespace file

We could run this file, but it would fail because the namespace it uses doesn’t exist yet. Let’s create the namespace first:

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-demo

This is the YAML file which will create our namespace.

Creating a YAML file doesn’t bring us any further by itself, so we need to tell Kubernetes to do something with it. We do this by pointing kubectl at the file (saved here as namespace.yaml):

kubectl create -f namespace.yaml

This can also be done with a single kubectl command, without a manifest file:

kubectl create namespace kubernetes-demo

Both options will create a new namespace in our cluster.
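
Either way, we can quickly check that the namespace is there:

kubectl get namespaces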

Now we can start our first deployment:

kubectl create -f deployment_kubernetes_web.yaml

This will create the resources defined in our deployment_kubernetes_web.yaml file: Pods in our namespace, each running a container based on our MVC application image. We can repeat this task for the other 2 APIs as well, which results in 3 Deployments.

You’ll see 6 Pods, because we set the replicas to 2, remember? Right now we have:

  • 2 pods serving a webapp
  • 2 pods serving the profile api
  • 2 pods serving the dictionary api
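
We can verify this by listing the Pods in our namespace:

kubectl get pods -n kubernetes-demo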

These are running within seconds!!!
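
And because the desired state is just the replicas value in the manifest, scaling is a one-liner; for example, to run 4 web Pods instead of 2:

kubectl scale deployment kubernetes-web-deployment -n kubernetes-demo --replicas=4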

The next step is to:

  • make our web application accessible from the outside
  • make our APIs accessible from our MVC application

Within Kubernetes we can set up Services to make this happen. In our example we’ll need 3 services:

  1. MVC app service, which exposes its endpoint to the outside
  2. Profile API service, which exposes its endpoint internally
  3. Dictionary API service, which exposes its endpoint internally

We can set up our services YAML file like this:

kind: Service
apiVersion: v1
metadata:
  name: kubernetes-web
  namespace: kubernetes-demo
  labels:
    app: kubernetes-web
spec:
  type: NodePort
  selector:
    app: kubernetes-web
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31760
---
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-profile
  namespace: kubernetes-demo
  labels:
    app: kubernetes-profile
spec:
  type: ClusterIP
  selector:
    app: kubernetes-profile
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dictionary
  namespace: kubernetes-demo
  labels:
    app: kubernetes-dictionary
spec:
  type: ClusterIP
  selector:
    app: kubernetes-dictionary
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

The best post I could find about the different kinds of services is:

https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

Take note of the different types of services. I’ve used:

  1. NodePort, this is used for the web app and exposes the application to the outside
  2. ClusterIP, this exposes both APIs, but only within the cluster

Now we need to create the services with a kubectl command:

kubectl create -f services.yaml

After executing this command, our cluster will have 3 services.
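
We can check this by listing the services in our namespace:

kubectl get services -n kubernetes-demo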

When it’s all properly set up, take a look at the details of the web service.
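
kubectl can show those details directly:

kubectl describe service kubernetes-web -n kubernetes-demo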

You can see the cluster-internal IPs as well as the NodePort through which we can reach the web app from outside the cluster: 31760. So we can browse to http://localhost:31760 et voilà!

What you see is a running MVC app using both services!

There is only one thing left to explain, and that’s the way we access the APIs from within our MVC application. This was actually quite easy to set up, because all communication within the cluster is done by using the name of the resource.

In our MVC application we need a bit of configuration to read the environment variables we defined in the deployment manifest: ProfileService and DictionaryService.

Our application looks for these values, which can be set in the settings.json for local development or defined as environment variables in the container running our MVC image, which is exactly what the env section of our deployment manifest does.

Because the name of the profile service is ‘kubernetes-profile’, a request to this name is resolved internally to the Service fronting the profile API. This is all load balanced of course, so traffic is spread across all available Pods serving the profile API.
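
As a quick sketch of that name resolution (assuming curl is available in the web image), you can call the profile API by its service name from inside one of the web Pods:

kubectl exec -n kubernetes-demo <name-of-a-web-pod> -- curl -s http://kubernetes-profile/api/profile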
