Overview
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes.
Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability. It is also integrated with many AWS services to provide scalability and security for your applications.
Prerequisites
To get started, you must first have:
- An AWS account with an Admin role, and
- The AWS CLI installed on your local machine to access nodes and services, along with kubectl and Helm.
Creating an Amazon EKS Cluster
- Before creating the EKS cluster, create an Amazon EKS IAM role. This role allows EKS to access the other AWS service resources required to operate any cluster managed by EKS.
- Go to IAM → Roles → Create Role
- Select the EKS service under "Service to view its use cases" → EKS Cluster under "Select your use cases" → click Next: Permissions.
- The AmazonEKSClusterPolicy policy should already be attached → click Next: Tags.
- Add the required Tags and click Next: Review.
- Assign a name to the role in Role name and click Create role.
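The console steps above can also be scripted with the AWS CLI. This is a hedged sketch, not the official procedure: the role name `eksClusterRole` and the trust-policy file name are arbitrary examples.

```shell
# Hypothetical CLI equivalent of the console steps above.
# "eksClusterRole" is an example name; choose your own.
cat > eks-cluster-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role with the EKS trust policy, then attach the managed policy.
aws iam create-role --role-name eksClusterRole \
  --assume-role-policy-document file://eks-cluster-trust-policy.json
aws iam attach-role-policy --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

The trust policy lets the EKS service itself assume the role, which is what the console configures for you behind the scenes.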
- Create IAM Role for the Node Group which will be used by worker nodes.
- Navigate to IAM → Roles → Create Role.
- Select EC2 under "Choose a use case" → click Next: Permissions.
- Choose the AmazonEKS_CNI_Policy, AmazonEKSWorkerNodePolicy, and AmazonEC2ContainerRegistryReadOnly policies from the policy filter, then click Next: Tags.
- Add the required Tags and click Next: Review.
- Assign a name to the role in Role name and click Create role.
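As with the cluster role, the node role can be created from the CLI. Again a hedged sketch: `eksNodeRole` and the file name are example values, and the trust policy here names EC2 because the worker nodes are EC2 instances.

```shell
# Hypothetical CLI equivalent for the worker-node role; "eksNodeRole" is an example name.
cat > eks-node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name eksNodeRole \
  --assume-role-policy-document file://eks-node-trust-policy.json

# Attach the three managed policies the guide lists.
for policy in AmazonEKS_CNI_Policy AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy --role-name eksNodeRole \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```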
- Go to Elastic Kubernetes Service in the AWS console and select Create from the Add Cluster tab.
- Configure the Cluster Configuration for EKS to create the cluster:
- Assign a Name to the cluster, select the default Kubernetes Version, and select the IAM role created in Step 1 of this section.
- Add the required tags as needed and click Next.
- Assign the required VPC, Subnets, and Security groups as per your infrastructure policy.
- Keep the Cluster endpoint access as Public.
- Keep the Default settings for the Networking add-ons, and click Next.
- Based on your requirements, enable the required Control Plane Logging, and click Next.
- Review the information entered or selected on the previous pages and then click Create.
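The same cluster can be created from the CLI. A hedged sketch under stated assumptions: the cluster name `countly-cluster` is an example, and all IDs and the ARN are placeholders you must substitute from your own account.

```shell
# Hypothetical CLI equivalent of the console wizard; the account ID,
# subnet IDs, and security group ID are placeholders.
aws eks create-cluster \
  --name countly-cluster \
  --role-arn arn:aws:iam::<account-id>:role/eksClusterRole \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>

# Check progress; the status changes from CREATING to ACTIVE.
aws eks describe-cluster --name countly-cluster --query cluster.status
```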
- Cluster creation usually takes 10-15 minutes to reach Active status. After that, add a Node Group to supply compute capacity to the EKS cluster:
- Click Add Node Group in the Compute section of the cluster you created.
- Assign a Name to the Node Group, select the IAM role created in Step 2 of this section, and add tag(s) as required in the Node Group tab. Then, click Next.
- Select the AMI Type and Capacity type based on your requirements.
- Select the Instance type and Disk Size based on your data-point estimation. This determines the size of your Kubernetes worker nodes and directly correlates with the initial capacity of your Kubernetes cluster. Use the table in the Change size UI to find a node type with the CPU and memory you need.
- Select the desired Node Group scaling configuration and the Node Group update configuration, and click Next.
- Specify the Networking, review the information entered and selected on the previous pages, and then click Create to create the Node Group.
- Verify the status of the instances in the Nodes tab and confirm they are in the Ready state.
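The node-group steps above also have a CLI form. A hedged sketch: the names, instance type, disk size, and scaling values are illustrative, and the subnet IDs and role ARN are placeholders.

```shell
# Hypothetical CLI equivalent of the Add Node Group wizard.
aws eks create-nodegroup \
  --cluster-name countly-cluster \
  --nodegroup-name countly-nodes \
  --subnets <subnet-1> <subnet-2> \
  --node-role arn:aws:iam::<account-id>:role/eksNodeRole \
  --instance-types t3.large \
  --disk-size 20 \
  --scaling-config minSize=1,maxSize=3,desiredSize=2
```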
Connecting to the Amazon EKS Cluster
After creating the cluster, you can view its details in the Clusters tab of the AWS console. Click the name of the cluster you created to see more details about the Kubernetes cluster and its configuration.
Before connecting to the cluster, configure the AWS CLI with your account credentials so that it can authenticate to your AWS account.
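If the AWS CLI is not yet configured, `aws configure` prompts for the credentials interactively; the values shown in the comments are placeholders for your own keys and region.

```shell
# Interactive one-time setup of AWS CLI credentials.
aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: <region>
# Default output format [None]: json
```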
1. Connect to the Kubernetes cluster using the AWS CLI command below:

```shell
aws eks --region <region> update-kubeconfig --name <cluster_name>
```

2. Verify the connection to your cluster using the kubectl get nodes command to return a list of the cluster nodes:

```shell
kubectl get nodes
```
Deploying Countly Application on Kubernetes Cluster
The following assumes you have already set up kubectl and helm. Service, Deployment, and Ingress resource configurations are available in our GitHub repository.
- First, create a namespace "countly" and set it as the default, so that the service and application pods are deployed in the countly namespace and the resources are isolated within the cluster:
```shell
kubectl create ns countly
kubectl config set-context --current --namespace=countly
```
- After creating the namespace, create a storage class with an AWS-specific provisioner and disk type.

storageclass.yaml:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongo-storageclass
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
```

Apply the manifest and verify the storage class:

```shell
kubectl apply -f storageclass.yaml
kubectl get storageclass
```

- Install MongoDB and set up a replica set configuration prior to installing Countly's API and Frontend pods, as plugin installation depends on MongoDB. Use the commands below:
```shell
cd countly/bin/docker/k8s
helm install mongo -f mongo/values.yaml stable/mongodb-replicaset
```
To verify the installation, check the pods generated for MongoDB, as shown below:
```shell
kubectl get pods
```
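Beyond checking that the pods are Running, you can confirm the replica set itself is healthy. This is a sketch under an assumption: with the release name `mongo`, the chart's pods are typically named `mongo-mongodb-replicaset-0` and so on; adjust to the names `kubectl get pods` actually shows.

```shell
# Pod name assumes the chart's default naming for release "mongo";
# rs.status().ok reports 1 when the replica set is initialized.
kubectl exec mongo-mongodb-replicaset-0 -- \
  mongo --quiet --eval "rs.status().ok"
```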
- Before deploying the Countly application containers, create a Kubernetes Secret to authenticate to and access the Enterprise Edition Docker images from our private Google Container Registry. To create the Secret, refer to this Guide. (This step applies only to Countly Enterprise Edition.)
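For orientation, an image-pull Secret for a Google Container Registry generally looks like the sketch below. The secret name, key file, and email are assumptions; the exact values for your Enterprise account come from the linked Guide.

```shell
# Sketch of a docker-registry Secret for GCR. "_json_key" is the standard
# GCR username when authenticating with a service-account JSON key.
kubectl create secret docker-registry countly-registry \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat service-account-key.json)" \
  --docker-email=<your-email>
```

The Secret is then referenced from the Deployment's `imagePullSecrets` so the kubelet can pull the private images.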
- Once the MongoDB pods are running, create Countly Deployments and Services for the API and the Frontend.
Both countly-frontend.yaml and countly-api.yaml need to be edited with key:value pairs in the env section to configure the pods with relevant values (refer to the env config guide):

```yaml
env:
  - name: COUNTLY_PLUGINS
    value: "mobile,web,desktop,some,more,plugins" # <Enterprise or Community plugins>
  - name: COUNTLY_CONFIG__FILESTORAGE
    value: "gridfs"
  - name: COUNTLY_CONFIG__MONGODB
    value: "mongodb://some.mongo.host/countly" # <MongoDB pod connection names>
  - name: COUNTLY_CONFIG_HOSTNAME
    value: countly.example.com # <Domain name required as URL>
  - name: COUNTLY_CONFIG_API_API_WORKERS
    value: "4" # <value can be the CPU core count>
  - name: NODE_OPTIONS
    value: "--max-old-space-size=2048"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_HOST
    value: "smtp.example.com"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_PORT
    value: "25"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_USER
    value: "example-user"
  - name: COUNTLY_CONFIG__MAIL_CONFIG_AUTH_PASS
    value: "example-password"
```

Note that Kubernetes environment variable values must be strings, so the mail port is quoted. Then apply the manifests:

```shell
cd countly/bin/docker/k8s
kubectl apply -f countly-frontend.yaml
kubectl apply -f countly-api.yaml
```
- Once the Countly Services and Deployments are up and running, you will also need to expose the setup so that it is publicly accessible. This is done by setting up an Ingress resource that forwards incoming requests either to the countly-api or to the countly-frontend Service, based on the route defined. To do this, enable the Application Load Balancer Ingress controller:
- Follow the steps in the AWS Load Balancer Controller guide to install and configure the Ingress controller.
- After configuring the Ingress controller, tag the subnets (for example, kubernetes.io/role/elb = 1 on public subnets) so that the Ingress resource can auto-discover them and assign the routes.
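Before creating the TLS Secret in the next step, you need a certificate and key pair on hand. If you only want to test the setup, a self-signed pair can be generated locally; the hostname `countly.example.com` is an example, and a CA-issued certificate should be used in production.

```shell
# Generate a throwaway self-signed certificate and key for testing only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout tls.key -out tls.crt -days 365 \
  -subj "/CN=countly.example.com"
```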
- After enabling the Ingress controller, create a Kubernetes TLS Secret to enable SSL for the URL mapped to your service. Use the commands below:

```shell
kubectl create secret tls <secret-name> --key <path-to-key> --cert <path-to-cert>
kubectl get secret # To view the secret created
```

- After generating the TLS Secret, create the Ingress resource to route traffic based on the paths configured for the Countly application:
countly-ingress.yaml:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: countly-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  tls:
    - hosts:
        - YOUR_HOSTNAME # countly.example.com
      secretName: countly-tls
  rules:
    - host: YOUR_HOSTNAME # countly.example.com
      http:
        paths:
          - path: /i
            pathType: ImplementationSpecific
            backend:
              service:
                name: countly-api
                port:
                  number: 3001
          - path: /i/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: countly-api
                port:
                  number: 3001
          - path: /o
            pathType: ImplementationSpecific
            backend:
              service:
                name: countly-api
                port:
                  number: 3001
          - path: /o/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: countly-api
                port:
                  number: 3001
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: countly-frontend
                port:
                  number: 6001
```

To view the Ingress created, use the command below:

```shell
kubectl get ingress
```
- After creating the Ingress resource, map a DNS A record to the IP address or alias name associated with the Ingress.
- To enable SSL in the AWS setup, request or import SSL certificates in AWS Certificate Manager (ACM), choosing the appropriate option based on whether you already have certificates.
- The final step is to add a listener on port 443 to the load balancer created by the Ingress controller, and associate the SSL certificates requested or imported in ACM to enable SSL for the domain.
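The ACM request can also be made from the CLI. A hedged sketch: the domain is an example, and DNS validation then requires you to add the CNAME record ACM returns before the certificate is issued.

```shell
# Request a public certificate for the Countly domain with DNS validation.
aws acm request-certificate \
  --domain-name countly.example.com \
  --validation-method DNS

# List certificates to retrieve the ARN needed for the 443 listener.
aws acm list-certificates
```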