K8s with Port-Forwarding and NodePort

Aman Arham
7 min read · Aug 5, 2020
Credits: Shea Rouda

Alright! It is time for the more advanced part of Kubernetes. Now that you are comfortable with the ideas of YAML files and Pods, it is time to dive deeper into the concepts of Kubernetes. There are many different features and services to explore, and in this blog you will learn about some of them. Two of the main components within Kubernetes are Services and Deployments, each of which has its own YAML code in this blog. We will also learn about a feature known as Port-Forwarding, used for Pod access.

Without further ado, it’s time to get into the content…

Deployments

Deployments are another very important aspect of Kubernetes when looking at the infrastructure and functionality that Kubernetes offers. In essence, a Deployment allows the user to run many replicas of the same Pod, improving efficiency and enabling the application to run with fault tolerance. Fault tolerance is the idea that the environment/system can tolerate Pod failures: because there are many replicas of the Pod present, when one goes down, the others take over the role of the missing one until a replacement Pod comes back up. In addition, Deployments are the most common way to create Pods, especially at large companies such as Google.

In this case, we will be using a Deployment YAML file. Here is the Deployment file which will be used for this blog:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

When looking at this definition file, you will see some aspects that are very similar to a Pod-definition file, though with different values. Do not feel overwhelmed! They are different for a reason: Deployments include more information, so the YAML file carries much more content.

Some things you may notice are the differences in the kind, labels, and replicas. The kind for this scenario is Deployment, so that must go in that specific key-value pair. Labels are present in order to organize and select specific Pods and replicas. The final thing that is significantly different is replicas. This key-value pair assigns the number of replicas. In this case, we only want 1 replica, meaning 1 Pod will be created from this Deployment.
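For example, if you wanted more fault tolerance, bumping that one value in the same file would make the Deployment keep three identical Pods running (a small sketch of just the relevant fragment):

```yaml
spec:
  replicas: 3   # the Deployment will keep 3 Pods of this template running
```

If one of the three Pods fails, the Deployment notices the shortfall and creates a replacement automatically.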

Now that we have gone over the YAML code, we need a file to place it in. Use this command to create the Deployment definition file:

vi deployment-def.yaml

Copy and paste the code in, then type :wq! to save and exit. Once done, create the Deployment with the kubectl create command as shown:

kubectl create -f deployment-def.yaml

Use kubectl get deployments to check whether the Deployment is running; you should see it listed. Then use kubectl get pods to see the Pod(s) created by the Deployment. The terminal should look something similar to this:
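As a rough illustration (the names, hash suffixes, and ages will differ on your machine), the output of the two commands typically looks like:

```shell
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           30s

$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-66b6c48dd5-abcde   1/1     Running   0          30s
```

Note that the Pod name is generated by the Deployment, which is why it carries a random-looking suffix.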

Testing and Experimenting with Port-Forwarding

Now that we have created the Deployment and Pod, how can we test that the application is working correctly? Is there a way to test it through your browser? The answer to both of those questions is YES! This is where Port-Forwarding comes into the picture. This may sound unfamiliar, but stick with me. Port-Forwarding is a feature in Kubernetes that forwards traffic from a port on your local machine to a port on a Pod. An example of a local address is 127.0.0.1:9079 — the local IP with a port of 9079. If that local IP address and forwarded port are entered in the browser, the application will be visible. First, however, we need to use the port-forward command to forward traffic to the Pod. I will show you how to do this.

Use this command:

kubectl port-forward [pod name] 9079:80

With this command, local port 9079 is forwarded to port 80 of the Pod. In the Deployment definition file, we specified that the container port would be 80, which is where the Nginx application is listening. Now we are forwarding local port 9079 to it.
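As a side note, kubectl can also resolve the Pod for you if you name the Deployment directly, which saves looking up the generated Pod name (Deployment name taken from the YAML above):

```shell
kubectl port-forward deployment/nginx-deployment 9079:80
```

This forwards to one Pod selected from the Deployment, which is convenient since generated Pod names change every time a Pod is replaced.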

The screen may look frozen, but the port is being forwarded and you can access it through the browser as shown below.

Copy and paste your local IP address along with the forwarded port into your browser. You should see a screen like this:

Yaay!!! Your Deployment is working and Nginx is up and running. This is one of the many ways you can check through the browser that the application in your Deployment is working.

Services

Another component that you must know in regard to Kubernetes is Services. In essence, a Service is an abstraction that defines a logical set of Pods and a way to access them in the cluster. It also allows Pods to be reached via DNS names.

As you can guess, this also has a YAML file. Here is the Service YAML File:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

All of the YAML definition files have aspects in common, and this Service file is no different. The components that differ are kind, type, and ports. Since this is a Service YAML file, the kind is Service. We are using NodePort in this case, so the type must be NodePort. Finally, the ports are provided: both port and targetPort. Based on this YAML file, the Service listens on port 80 and targets port 80 on the Pods.

Note: NodePort exposes the Service on each Node’s IP at a static port, in the range 30000–32767.
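By default, Kubernetes picks a free port from that range for you. If you want a predictable port instead, you can pin it yourself inside the allowed range — a sketch of the relevant fragment of the same Service (31892 here is just an example value):

```yaml
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31892   # must fall within 30000-32767
```

If you omit nodePort, check what was assigned with kubectl get services after creating the Service.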

Alright! Whew! Now that we have gone over the Service YAML file, it is time to create it. Use this command to create the Service file and paste the code above into it:

vi service.yaml

After, save and exit with :wq!, then create the Service with:

kubectl create -f service.yaml

The Service can then be inspected with kubectl get services -o wide for an in-depth view.

Testing with NodePort

Now that the Service is created, we are actually going to take a trip to the past: to “Docker Land”! It has been a while since we last talked about Docker. Docker plays a big role when we look at Services and NodePorts in a Kubernetes cluster.

If you did not know, this Kind cluster runs its node as a Docker container in the background. If you run the docker ps command, you will see a container, similar to this one, running:

Kind Docker Container helps manage cluster.

We will use this container to test whether the Service is working via the NodePort. To do this, we need to access the container by opening a bash shell inside it with this command:

docker exec -it [container ID] bash

Once completed, you will have shell access to the entire container. Cool, right? In this container, we will run this command:

curl http://127.0.0.1:31892 -v

With this command, we are attempting to curl the IP address and NodePort of the Service. If you do not know what curl is, it is a command-line tool for transferring data to and from a server. You may be wondering where I got the port number from. Well, if you run kubectl get services, you will see the PORT(S) column where the NodePort is displayed:

In my case, the NodePort is 31892.
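If you prefer to grab the port without reading the table, kubectl’s jsonpath output can extract it directly (Service name as in the YAML above):

```shell
kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
```

This prints just the assigned NodePort, which is handy inside scripts.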

After running this command in the Docker Container, you should be able to get an output similar to this:

If you receive this output, your Service is up and running. Yay! We did it!

Time for a Change

In these few blogs, you have read through a very high-level overview of Kubernetes and its many different features, starting with the building blocks of Kubernetes (Docker) and moving to more advanced topics, such as Deployments and Services. However, for the next few weeks, I would like to turn things around. The next topic that I will be covering is Edge Computing. Catch you in the next blog. Aman Arham out!


Aman Arham

Senior at Rick Reedy High School and aspiring Data Scientist; Writer for Better Programming.