Serverless Media Function on Amazon EKS
In the previous articles in this blog series I described how to build a simple media function that analyzes media files, and how to deploy it on a Kubernetes-managed container infrastructure. In this article we will look at how to deploy this function using Amazon Elastic Kubernetes Service (Amazon EKS), a managed service that makes it easy to run Kubernetes on AWS without maintaining your own Kubernetes control plane. This article is written by Jonas Rydholm Birmé, streaming video specialist at Eyevinn Technology.
What is Amazon EKS?
Kubernetes is an open-source system for automated orchestration of deployments, scaling and management of containers running on a cluster of servers. Instead of you running and managing the Kubernetes control plane yourself, Amazon EKS runs it across multiple availability zones to ensure high availability, and automatically replaces unhealthy control plane instances. To simplify a bit: Kubernetes ensures high availability and scaling of containerized applications, and Amazon EKS ensures high availability and scaling of the Kubernetes system itself. In addition, EKS integrates with other AWS services, such as Elastic Load Balancing for load distribution and Amazon ECR for container images. In this example we will use Docker Hub instead, as the media function image is available to the public.
Creating Cluster and Control Plane
We will create a cluster for our media functions. For the sake of this demonstration it is a small cluster of two t3.micro nodes. In practice, for these types of functions, we might want more performant instances, perhaps even GPU instances.
To create the Kubernetes control plane I use eksctl, the official command line tool for Amazon EKS. You can also create the cluster in the management console (web user interface), but I had some issues getting the IAM permissions right as I use a different user for CLI access to AWS. When creating the cluster you give it a name, specify in which region to place it, and choose what type of instances to use for the nodes (workers).
eksctl create cluster --name mediafunctions --version 1.14 --region eu-north-1 --nodegroup-name standard-workers --node-type t3.micro --nodes 2 --nodes-min 1 --nodes-max 3 --managed
This process takes about 10–15 minutes, and it took me a couple of retries as I initially specified an instance type that was not available in the region I had chosen. It was not easy to find out what was wrong, as the error I got after approximately 14 minutes was an Internal Failure while creating the node group. Verify that the cluster is ready with the Kubernetes command line utility.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 53m
Looking in the AWS EC2 console, we can also see that the instances for the nodes have been created and are running.
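The worker nodes can also be verified from the command line, without opening the console (the output will of course vary with your cluster):

```shell
# List the worker nodes that joined the cluster; -o wide also shows
# internal/external IPs and OS details for each instance
kubectl get nodes -o wide
```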
Deploying the Media Function
To deploy the media function we use the same configuration that we created in the previous article with a slight modification.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: function-probe-deployment
  labels:
    app: function-probe
spec:
  replicas: 2
  strategy:
    rollingUpdate:
      maxUnavailable: 50%
  selector:
    matchLabels:
      app: function-probe
  template:
    metadata:
      labels:
        app: function-probe
    spec:
      containers:
      - name: function-probe
        image: eyevinntechnology/function-probe:0.1.1
        ports:
        - containerPort: 8080
          protocol: TCP
What has changed is that we need to push our Docker image to a repository that our cluster can access. We could have used Amazon ECR if we wanted to restrict access to the image, but for the purpose of this case I published it on the publicly available Docker Hub. As we have only two nodes available, we have set the rollout strategy to allow 50% of the pods to be unavailable, which makes rollout of a new deployment possible. Deploying this to our cluster works the same way as previously demonstrated.
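Publishing the image to Docker Hub, as mentioned above, can look like this (a sketch; it assumes a Dockerfile in the current directory and that you are authorized to push to the eyevinntechnology organization):

```shell
# Build the image and tag it with the version referenced in the deployment
docker build -t eyevinntechnology/function-probe:0.1.1 .

# Log in to Docker Hub and push the tagged image
docker login
docker push eyevinntechnology/function-probe:0.1.1
```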
$ kubectl apply -f deployment.yaml
And we can verify that the Pods are up and running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
function-probe-deployment-5694694c6d-cmkbk 1/1 Running 0 48m
function-probe-deployment-5694694c6d-s6lwd 1/1 Running 0 48m
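With the rolling update strategy defined above, a new version of the function can later be rolled out and monitored from the command line. A sketch, assuming a hypothetical new image tag 0.1.2 has been pushed:

```shell
# Point the deployment at the new image tag (0.1.2 is hypothetical)
kubectl set image deployment/function-probe-deployment \
  function-probe=eyevinntechnology/function-probe:0.1.2

# Watch the rollout; with maxUnavailable: 50% one of the two pods
# is replaced at a time, so the function stays available
kubectl rollout status deployment/function-probe-deployment
```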
Creating the Service
Now that we have the containers up and running, we need to make them available by creating a Service. We will define a Service of type LoadBalancer to manage load and availability, with the following configuration:
apiVersion: v1
kind: Service
metadata:
  name: media-function-probe
spec:
  type: LoadBalancer
  selector:
    app: function-probe
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
Applying this configuration will create a Kubernetes Service and also an AWS Elastic Load Balancer.
$ kubectl apply -f service.yaml
And then verify that everything has succeeded.
$ kubectl get svc media-function-probe
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
media-function-probe LoadBalancer 10.100.196.112 adcd05e822edf11ea9cc70e90f0aadc6-1657283786.eu-north-1.elb.amazonaws.com 80:32230/TCP 53m
Here we see that to access this service we point our browser to http://adcd05e822edf11ea9cc70e90f0aadc6-1657283786.eu-north-1.elb.amazonaws.com/
After the service is created you might need to wait a couple of minutes for the DNS to propagate.
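Until the record has propagated, you can poll the load balancer from the command line instead of refreshing the browser. A sketch using the external hostname from the output above:

```shell
# Retry until the ELB hostname resolves and the service answers;
# curl exits non-zero while DNS is still propagating
until curl -sf http://adcd05e822edf11ea9cc70e90f0aadc6-1657283786.eu-north-1.elb.amazonaws.com/ > /dev/null; do
  echo "waiting for DNS..."
  sleep 10
done
echo "service is reachable"
```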
Creating an API Gateway for our Services
The default domain name for the load balancer is not that convenient to use, and we could simply create a DNS alias for it. However, as we intend to add more media functions, we might want to group them under one domain. For example, we could have all functions under the domain functions.eyevinn.technology with each service as a path. We want the media function probe that we have created to be accessible at http://functions.eyevinn.technology/probe/api.
To achieve this we will set up an API Gateway in AWS.
We start by creating the “root” endpoint for the media function probe: a resource called probe, defined to handle ANY method.
This endpoint will mainly act as an HTTP proxy for the media function's load balancer that we created. We also set up this endpoint to handle all paths below this root using the {proxy+} directive. I use a stage variable to specify the address of the load balancer, so I only need to change it in one place; I then specify the address when creating the stage to be deployed.
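These console steps can be sketched as AWS CLI calls. This is only an illustration of the setup, not the exact clicks; the stage variable name lbUrl is a hypothetical choice:

```shell
# Create the API and look up the id of its root resource
API_ID=$(aws apigateway create-rest-api --name mediafunctions \
  --query 'id' --output text)
ROOT_ID=$(aws apigateway get-resources --rest-api-id "$API_ID" \
  --query 'items[0].id' --output text)

# Create the /probe resource and a greedy child /probe/{proxy+}
PROBE_ID=$(aws apigateway create-resource --rest-api-id "$API_ID" \
  --parent-id "$ROOT_ID" --path-part probe --query 'id' --output text)
PROXY_ID=$(aws apigateway create-resource --rest-api-id "$API_ID" \
  --parent-id "$PROBE_ID" --path-part '{proxy+}' --query 'id' --output text)

# ANY method, proxied straight through to the load balancer, whose
# address is read from the stage variable lbUrl
aws apigateway put-method --rest-api-id "$API_ID" --resource-id "$PROXY_ID" \
  --http-method ANY --authorization-type NONE \
  --request-parameters 'method.request.path.proxy=true'
aws apigateway put-integration --rest-api-id "$API_ID" --resource-id "$PROXY_ID" \
  --http-method ANY --type HTTP_PROXY --integration-http-method ANY \
  --uri 'http://${stageVariables.lbUrl}/{proxy}' \
  --request-parameters 'integration.request.path.proxy=method.request.path.proxy'

# Deploy to a stage, providing the load balancer address once
aws apigateway create-deployment --rest-api-id "$API_ID" --stage-name prod \
  --variables lbUrl=adcd05e822edf11ea9cc70e90f0aadc6-1657283786.eu-north-1.elb.amazonaws.com
```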
I can then test this media function by pointing the browser to https://w5yey0p866.execute-api.eu-central-1.amazonaws.com/prod/probe/
Or use curl to try it out:
$ curl -X POST "https://pm9mspzd7c.execute-api.eu-north-1.amazonaws.com/prod/probe/api" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"medialocator\":\"https://testcontent.eyevinn.technology/stswe19/Fraunhofer_updated_v2.mp4\"}"
What now remains is to create a custom domain name for this API gateway. I will not go through each step in detail, but it involves creating a certificate for the domain name, creating a custom domain in the API gateway, creating a base path mapping for the stage, and adding a DNS alias pointing to the API gateway. We made the API gateway edge-optimized, which means that it will use the Amazon CloudFront edge network to improve the performance of the API for global use.
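The custom domain steps can be sketched roughly as follows; the certificate ARN and the API id are placeholders. Note that for an edge-optimized API the ACM certificate must be requested in us-east-1:

```shell
# Request a certificate for the domain (validated via DNS)
aws acm request-certificate --region us-east-1 \
  --domain-name functions.eyevinn.technology --validation-method DNS

# Create the custom domain in API Gateway using the issued certificate
aws apigateway create-domain-name \
  --domain-name functions.eyevinn.technology \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/example

# Map the prod stage of the API (placeholder id) to the domain
aws apigateway create-base-path-mapping \
  --domain-name functions.eyevinn.technology \
  --rest-api-id abc123defg --stage prod

# Finally, add a DNS alias (e.g. in Route 53) pointing the domain
# at the CloudFront distribution returned by create-domain-name
```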
We can now access the API documentation for the function on https://functions.eyevinn.technology/probe/ and use curl to try it out:
curl -X POST "https://functions.eyevinn.technology/probe/api" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"medialocator\":\"https://testcontent.eyevinn.technology/stswe19/Fraunhofer_updated_v2.mp4\"}"
The media function is now live and available on https://functions.eyevinn.technology/probe/ if you want to try it out.
To summarize, we have now over three articles described how to create a media function, use Kubernetes as the container orchestration layer, and use Amazon Web Services for the underlying server infrastructure.
Thank you for reading, and if you have any comments or questions, please join the conversation in the Streaming Tech Sweden Slack.
Eyevinn Technology is the leading independent consultant firm specializing in video technology and media distribution, and proud organizer of the yearly nordic conference Streaming Tech Sweden.