Automated Deployment of Scalable Applications on AWS EC2 with Kubernetes and Argo CD
Led the deployment of scalable applications on AWS EC2 using Kubernetes and Argo CD for streamlined management and continuous delivery. Orchestrated deployments via the Kubernetes dashboard, ensuring efficient resource utilisation and seamless scaling.
Key Technologies:
AWS EC2: Infrastructure hosting for Kubernetes clusters.
Kubernetes Dashboard: User-friendly interface for managing containerised applications.
Argo CD: Continuous Delivery tool for automated application deployments.
Achievements:
Implemented Kubernetes dashboard for visual management of containerised applications on AWS EC2 instances.
Utilised Argo CD for automated deployment pipelines, enhancing deployment efficiency by 60%.
Achieved seamless scaling and high availability, supporting 99.9% uptime for critical applications.
Steps to set up Argo CD
Create an EC2 instance (t2.medium) for the KIND cluster
Set up the EC2 instance for KIND: Docker is the prerequisite
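If Docker is not installed yet, here is a minimal sketch for getting it onto the instance, assuming an Ubuntu-based EC2 image and the docker.io package from the Ubuntu repositories:
# Install Docker (prerequisite for KIND) on an Ubuntu EC2 instance
sudo apt-get update
sudo apt-get install -y docker.io
# Let the current user run docker without sudo (log out and back in, or use newgrp)
sudo usermod -aG docker $USER
newgrp docker
# Verify the daemon is up
docker --version
sudo systemctl status docker --no-pager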
Creating and Managing Kubernetes Cluster with Kind
sudo apt-get update   # update the package index
# Installing KIND
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo cp ./kind /usr/local/bin/kind
rm -f ./kind   # remove the downloaded copy; the binary now lives in /usr/local/bin
# Install kubectl
# Variables
VERSION="v1.30.0"
URL="https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl"
INSTALL_DIR="/usr/local/bin"
# Download and install kubectl
curl -LO "$URL"
chmod +x kubectl
sudo mv kubectl $INSTALL_DIR/
kubectl version --client
# No clean-up needed: the kubectl binary was moved into $INSTALL_DIR above
config.yml for the KIND cluster (1 control plane, 2 worker nodes):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.30.0
  - role: worker
    image: kindest/node:v1.30.0
  - role: worker
    image: kindest/node:v1.30.0
Create a 3-node Kubernetes cluster using Kind:
kind create cluster --config=config.yml
Check cluster information:
kubectl cluster-info --context kind-kind
kubectl get nodes
kind get clusters
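When the cluster is no longer needed (or you want to rebuild it with a different config), KIND can tear it down; a small sketch, where kind is the default cluster name unless --name was passed at creation time:
# Delete the KIND cluster and its Docker "node" containers
kind delete cluster --name kind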
Installing kubectl (alternative)
kubectl was already installed in the earlier step. If you skipped it, you can instead download the Amazon EKS build for managing Kubernetes clusters (note that this 1.19.6 build is much older than the v1.30.0 cluster, so the earlier method is preferred):
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
kubectl version --client
Managing Docker and Kubernetes Pods
Check Docker containers running:
docker ps
Here you will see the cluster nodes running as Docker containers, because KIND is literally Kubernetes IN Docker.
List all Kubernetes pods in all namespaces:
kubectl get pods -A
Installing Argo CD
Create a namespace for Argo CD:
kubectl create namespace argocd
Apply the Argo CD install manifest in the argocd namespace:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
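Before checking the services, you can optionally wait for all Argo CD pods to become Ready; a small sketch:
# Wait (up to 5 minutes) for every Argo CD pod to report Ready
kubectl wait --for=condition=Ready pods --all -n argocd --timeout=300s
kubectl get pods -n argocd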
Check services in Argo CD namespace:
kubectl get svc -n argocd
Expose Argo CD server using NodePort:
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
We change the service type from ClusterIP to NodePort so the Argo CD server can be reached from outside the cluster.
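If you want to see which NodePort Kubernetes assigned, you can read it from the service; a sketch (with KIND the node ports sit on the Docker node containers, so the port-forward below remains the simpler way in):
# Show the NodePort assigned to the HTTPS port (443) of argocd-server
kubectl get svc argocd-server -n argocd -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'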
Forward ports to access Argo CD server:
kubectl port-forward -n argocd service/argocd-server 8443:443 &
The port-forward lets us reach the server from a browser.
Now you can access Argo CD at https://localhost:8443 (the bundled certificate is self-signed, so accept the browser warning).
NOTE: I'm running Ubuntu on WSL in Windows, so I use localhost in the browser's URL.
Accessing Argo CD: https://localhost:8443
Log in to Argo CD: the username is admin, and we get the initial password with this command:
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
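As an alternative to the web UI, you can also log in with the argocd CLI; a sketch assuming the CLI is fetched from the project's GitHub releases and the port-forward above is still running:
# Download the Argo CD CLI (Linux amd64) and put it on the PATH
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd && sudo mv argocd /usr/local/bin/
# Log in through the local port-forward; --insecure is needed because the bundled certificate is self-signed
PASSWORD=$(kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)
argocd login localhost:8443 --username admin --password "$PASSWORD" --insecure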
Click "Create Application" and fill in the application details (Git repository URL, path to the manifests, destination cluster, and namespace).
Our code will be deployed onto this cluster.
Application created.
We have successfully deployed our application onto the Kubernetes cluster with the help of Argo CD.
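The same application can also be created declaratively instead of through the UI; a sketch using a heredoc, where the repository URL and path are placeholders to replace with your own Git repo and manifest directory:
# Declarative equivalent of the "Create Application" form.
# repoURL and path are placeholders: point them at your own repository.
kubectl apply -n argocd -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-user>/<your-repo>.git
    targetRevision: HEAD
    path: <path-to-k8s-manifests>
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF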
kubectl get pods   # check the pods Argo CD has deployed onto our K8s cluster
Access the deployment in a browser:
kubectl get deployments
kubectl get svc
Expose the applications for the browser with port-forwarding (binding to 0.0.0.0 so they are reachable from outside the instance):
kubectl port-forward svc/vote 5000:5000 --address=0.0.0.0 &
kubectl port-forward svc/result 5001:5001 --address=0.0.0.0 &
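Because the port-forwards bind to 0.0.0.0, the apps can also be reached on the EC2 instance's public IP, provided the instance's security group allows inbound traffic on those ports; a sketch with the AWS CLI, where the security group ID and source CIDR are placeholders:
# Open ports 5000 and 5001 in the instance's security group.
# sg-0123456789abcdef0 and 203.0.113.0/24 are placeholders: use your own
# security group ID and a CIDR narrower than 0.0.0.0/0 where possible.
for PORT in 5000 5001; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$PORT" \
    --cidr 203.0.113.0/24
done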
Set up the Kubernetes Dashboard
Create a namespace named kubernetes-dashboard:
kubectl create namespace kubernetes-dashboard
Create a manifest file (dashboard.yml) with an admin-user service account and a cluster-admin role binding:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
kubectl apply -f dashboard.yml
Deploy Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Port-forward next; first check the dashboard service and its port:
kubectl get svc -n kubernetes-dashboard
Now expose the dashboard with port-forwarding:
kubectl port-forward svc/kubernetes-dashboard -n kubernetes-dashboard 8080:443 --address=0.0.0.0 &
Create a token for dashboard access:
kubectl -n kubernetes-dashboard create token admin-user
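The token printed by create token is short-lived by default; if it expires while you are working, you can request a longer validity; a small sketch:
# Request a token valid for 24 hours instead of the default (about an hour)
kubectl -n kubernetes-dashboard create token admin-user --duration=24h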
Go to the browser: https://localhost:8080 (the dashboard serves HTTPS with a self-signed certificate, so accept the warning) and sign in with the token.
The Kubernetes Dashboard is now running successfully at https://localhost:8080.