Add a certificate to Nginx Ingress Controller

Having installed Nginx, it's now time to set up TLS for our Kubernetes cluster. First, we download the certificate from our certificate authority. Next, we apply the following Secret in our Kubernetes cluster, replacing the placeholder values with the base64-encoded certificate and key.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: podrunner-tls
  namespace: kube-system
type: kubernetes.io/tls
data:
  tls.crt: <REPLACE WITH BASE64 OF THE CERTIFICATE>
  tls.key: <REPLACE WITH BASE64 OF THE KEY>
```

If you need help getting the base64 of the key and certificate, run the following commands: ...
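For reference, the base64 values can be produced roughly like this (a sketch assuming GNU coreutils and that the files are named tls.crt and tls.key; on macOS, drop the -w0 flag):

```bash
# Base64-encode the certificate and key on a single line (no wrapping)
base64 -w0 tls.crt
base64 -w0 tls.key

# After pasting the values into the manifest, apply the Secret
kubectl apply -f podrunner-tls-secret.yaml
```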

March 11, 2025 · 1 min · Alex Popescu

Replace Traefik with Nginx ingress controller in Kubernetes

Is the default ingress controller (Traefik) not OK for you? Here are some simple instructions on how to replace it with Nginx.

Uninstall Traefik from an existing K3s instance:

```bash
sudo rm -rf /var/lib/rancher/k3s/server/manifests/traefik.yaml
helm uninstall traefik traefik-crd -n kube-system
sudo systemctl restart k3s
```

Or, the proper way to do it when installing K3s:

```bash
curl -sfL https://get.k3s.io | sh -s - --cluster-init --disable-traefik
```

Install the Nginx Ingress Controller:

```bash
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```

Setting a static IP for Nginx: if needed, you can provide a static IP to Nginx using the static-ip-svc.yaml file: ...
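Before touching the static IP Service, it's worth confirming the controller is actually up; a quick sketch, assuming the ingress-nginx namespace from the Helm command above:

```bash
# Wait until the ingress-nginx controller pod reports Ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

# Check which external IP the controller's LoadBalancer Service received
kubectl get svc -n ingress-nginx ingress-nginx-controller
```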

January 14, 2025 · 3 min · Alex Popescu

Single Server Kubernetes Installation

Do you want a simple single-server Kubernetes that is easy to install? Here is the guide for you.

Installing Kubernetes

First, we install K3s Kubernetes using the following command:

```bash
curl -sfL https://get.k3s.io | sh -s - --cluster-init --node-external-ip <external-ip>
```

Note: Replace <external-ip> with your server's external IP. This is needed in order to get the Traefik ingress controller working.

Next, we should copy the K3s Kubernetes configuration file /etc/rancher/k3s/k3s.yaml to the default location ~/.kube/config and change the KUBECONFIG environment variable: ...
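As a rough sketch of that step (assuming a default K3s install and a non-root user):

```bash
# Copy the K3s kubeconfig to the default location for kubectl
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config

# Point kubectl at it for the current shell (add to ~/.bashrc to persist)
export KUBECONFIG=~/.kube/config
kubectl get nodes
```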

January 7, 2025 · 4 min · Alex Popescu

Observability with Elastic APM and Flask

In our last article here, we explained how to send Docker and application logs to ELK. Now is the time to add some observability to our app using Elastic APM.

Configure Elastic APM

First, we add Elastic APM to our Docker Compose file from last time, using the same version as Elasticsearch.

```yaml
apm-server:
  image: docker.elastic.co/apm/apm-server:7.17.24
  container_name: apm-server
  user: apm-server
  ports:
    - "8200:8200"
  volumes:
    - ./apm-server.docker.yml:/usr/share/apm-server/apm-server.yml:ro
  command: >
    --strict.perms=false -e
    -E output.elasticsearch.hosts=["http://elasticsearch:9200"]
```

Next, run the containers using the docker compose up command and wait a few minutes. ...
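A quick sanity check once the stack is up (a sketch assuming the default port mapping above; the APM server answers plain HTTP on port 8200 with version info once it is ready):

```bash
# Start the stack in the background
docker compose up -d

# Confirm the APM server is reachable
curl -s http://localhost:8200
```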

December 23, 2024 · 1 min · Alex Popescu

Deploying docker image to AWS Fargate in GitHub

Let’s deploy a Docker image to AWS ECR and then on to AWS Fargate. If you don’t have Docker installed, now is the time to do it.

GitHub Action YML: create a workflow named main-aws.yml, for example:

```yaml
name: Deploy to Amazon ECS

on:
  workflow_dispatch:

env:
  AWS_REGION: eu-central-1          # set this to your preferred AWS region, e.g. us-west-1
  ECS_SERVICE: test-svc             # set this to your Amazon ECS service name
  ECS_CLUSTER: test-fargate-dev     # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: .aws/td.json
  CONTAINER_NAME: test              # set this to the name of the container in the
  ECS_TASK_NAME: test-task

defaults:
  run:
    shell: bash

jobs:
  Build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Get commit hash
        id: get-commit-hash
        run: echo "::set-output name=commit-hash::$(git rev-parse --short HEAD)"

      - name: Get timestamp
        id: get-timestamp
        run: echo "::set-output name=timestamp::$(date +'%Y%m%d-%H%M')"

      - name: Build, tag, and push the image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.AWS_REPO_NAME }}
          IMAGE_TAG: ${{ steps.get-commit-hash.outputs.commit-hash }}-${{ steps.get-timestamp.outputs.timestamp }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Create task definition in Amazon ECS
        run: aws ecs register-task-definition --cli-input-json file://${{ env.ECS_TASK_DEFINITION }}

      - name: Create svc in Amazon ECS
        run: aws ecs create-service --cluster ${{ env.ECS_CLUSTER }} --service-name ${{ env.ECS_SERVICE }} --task-definition ${{ env.ECS_TASK_NAME }} --desired-count 1 --capacity-provider-strategy '[{"capacityProvider":"FARGATE_SPOT","weight":100}]' --network-configuration "awsvpcConfiguration={subnets=[subnet-1,subnet-2,subnet-3],securityGroups=[sg-1,sg-2],assignPublicIp=ENABLED}" --region ${{ env.AWS_REGION }}

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          # same as container name in task def
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
```

Details for every step are below. ...
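Once the workflow has run, the service state can be checked from the AWS CLI; a sketch using the example cluster and service names from the env block above:

```bash
# Show desired vs. running task counts and the overall service status
aws ecs describe-services \
  --cluster test-fargate-dev \
  --services test-svc \
  --region eu-central-1 \
  --query 'services[0].{desired:desiredCount,running:runningCount,status:status}'
```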

December 9, 2024 · 5 min · Alex Popescu

Developing a Sample TODO API with Couchbase in Docker

Have you ever wanted to run Couchbase on Docker to freely build and test your app? Here’s a simple guide to help you set up Couchbase in Docker and build a sample TODO app using Python and FastAPI.

Developer Cluster Setup

First, let’s create a minimal docker-compose.yml file to spin up Couchbase:

```yaml
services:
  couchbase:
    image: couchbase:latest
    container_name: couchbase
    ports:
      - "8091:8091"   # Couchbase Web Console
      - "8092:8092"   # Views
      - "8093:8093"   # Query Service (N1QL)
      - "11210:11210" # Data Service
    environment:
      COUCHBASE_ADMINISTRATOR_USERNAME: admin
      COUCHBASE_ADMINISTRATOR_PASSWORD: password
    volumes:
      - couchbase_data:/opt/couchbase/var
      - ./init_bucket.sh:/init_bucket.sh
      - ./start-couchbase.sh:/start-couchbase.sh
    # get the image entry point using: docker inspect -f '{{.Config.Entrypoint}}' couchbase
    # for the current image it is /entrypoint.sh
    command: ["/bin/bash", "/start-couchbase.sh"]
    mem_limit: 1024m # limit memory usage (roughly 3100 MB would be needed for a full setup)

volumes:
  couchbase_data:
```

To make it work, we need two shell scripts: start-couchbase.sh and init_bucket.sh. ...
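As a rough idea of what init_bucket.sh might look like (a sketch using couchbase-cli; the bucket name todo and the RAM sizes are assumptions, not the article's actual values):

```bash
#!/bin/bash
# Wait for the Couchbase REST port to come up before configuring the cluster
until curl -sf http://localhost:8091/pools > /dev/null; do sleep 2; done

# Initialize a single-node cluster (credentials match the compose file above)
couchbase-cli cluster-init -c localhost \
  --cluster-username admin --cluster-password password \
  --services data,index,query --cluster-ramsize 512

# Create the bucket the TODO app will use ("todo" is an assumed name)
couchbase-cli bucket-create -c localhost -u admin -p password \
  --bucket todo --bucket-type couchbase --bucket-ramsize 256
```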

October 27, 2024 · 5 min · Alex Popescu

Almost Wrong Way To Send Docker Containers logs To ELK

In this article, we’ll walk through setting up a Docker-based ELK (Elasticsearch, Logstash, and Kibana) stack to collect, view, and send Docker logs.

```yaml
services:
  elasticsearch:
    image: elasticsearch:7.17.24
    environment:
      - discovery.type=single-node
    volumes:
      - ./elasticsearch_data/:/usr/share/elasticsearch/data
    mem_limit: "1g"

  redis-cache:
    image: redis:7.4.0

  logstash-agent:
    image: logstash:7.17.24
    volumes:
      - ./logstash-agent:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - 12201:12201/udp

  logstash-central:
    image: logstash:7.17.24
    volumes:
      - ./logstash-central:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:7.17.24
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
```

ElasticSearch

Just create a folder named elasticsearch_data for storing data. ...
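To actually ship a container's logs into the logstash-agent above, one option is Docker's gelf log driver; a sketch assuming the agent's logstash.conf listens for GELF on UDP 12201:

```bash
# Run any container with its stdout/stderr shipped to the logstash agent over GELF/UDP
docker run --rm \
  --log-driver gelf \
  --log-opt gelf-address=udp://localhost:12201 \
  alpine echo "hello from a test container"
```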

October 16, 2024 · 3 min · Alex Popescu

Backup PostgreSQL with Docker in S3 Object Storage

How can you back up your PostgreSQL database to Hetzner S3 object storage? Here is how it’s done.

Creating S3 Object Storage

First, let’s create our S3 Object Storage:

Next, we should create our S3 access key and secret:

Backup and restore

Let’s create a database with sample data:

```sql
-- Create the database
CREATE DATABASE music_streaming_platform;
\c music_streaming_platform;

-- Create users table
CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    date_joined DATE DEFAULT CURRENT_DATE,
    country VARCHAR(50)
);

-- Create artists table
CREATE TABLE artists (
    artist_id SERIAL PRIMARY KEY,
    artist_name VARCHAR(100) UNIQUE NOT NULL,
    genre VARCHAR(50),
    country VARCHAR(50),
    active_since DATE
);

-- Create albums table
CREATE TABLE albums (
    album_id SERIAL PRIMARY KEY,
    album_name VARCHAR(100) NOT NULL,
    release_date DATE,
    artist_id INT REFERENCES artists(artist_id) ON DELETE CASCADE,
    genre VARCHAR(50)
);

-- Create songs table
CREATE TABLE songs (
    song_id SERIAL PRIMARY KEY,
    song_name VARCHAR(100) NOT NULL,
    duration TIME NOT NULL,
    album_id INT REFERENCES albums(album_id) ON DELETE CASCADE,
    plays INT DEFAULT 0,
    release_date DATE
);

-- Create playlists table
CREATE TABLE playlists (
    playlist_id SERIAL PRIMARY KEY,
    playlist_name VARCHAR(100) NOT NULL,
    user_id INT REFERENCES users(user_id) ON DELETE CASCADE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create playlist_songs table (many-to-many relationship between playlists and songs)
CREATE TABLE playlist_songs (
    playlist_id INT REFERENCES playlists(playlist_id) ON DELETE CASCADE,
    song_id INT REFERENCES songs(song_id) ON DELETE CASCADE,
    PRIMARY KEY (playlist_id, song_id)
);

-- Insert data into users table
INSERT INTO users (username, email, country) VALUES
('musiclover_101', 'lover101@example.com', 'USA'),
('rockfanatic', 'rockfan@example.com', 'UK'),
('classicbuff', 'classicbuff@example.com', 'Germany');

-- Insert data into artists table
INSERT INTO artists (artist_name, genre, country, active_since) VALUES
('The Electric Beats', 'Rock', 'USA', '2005-06-15'),
('Synthwave Dream', 'Electronic', 'UK', '2012-04-20'),
('Classical Vibes', 'Classical', 'Germany', '1995-09-25');

-- Insert data into albums table
INSERT INTO albums (album_name, release_date, artist_id, genre) VALUES
('Rock Revolution', '2018-07-22', 1, 'Rock'),
('Synthwave Sunrise', '2020-03-15', 2, 'Electronic'),
('Timeless Classics', '2000-11-12', 3, 'Classical');

-- Insert data into songs table
INSERT INTO songs (song_name, duration, album_id, plays, release_date) VALUES
('Thunderstrike', '00:03:45', 1, 52340, '2018-07-22'),
('Neon Dreams', '00:04:12', 2, 78423, '2020-03-15'),
('Symphony No. 5', '00:07:30', 3, 152340, '2000-11-12'),
('Rock On', '00:03:55', 1, 62340, '2018-07-22'),
('Night Drive', '00:04:55', 2, 46450, '2020-03-15');

-- Insert data into playlists table
INSERT INTO playlists (playlist_name, user_id) VALUES
('Morning Rock Hits', 1),
('Relaxing Classical', 3),
('Electronic Vibes', 2);

-- Insert data into playlist_songs table
INSERT INTO playlist_songs (playlist_id, song_id) VALUES
(1, 1), -- Morning Rock Hits -> Thunderstrike
(1, 4), -- Morning Rock Hits -> Rock On
(2, 3), -- Relaxing Classical -> Symphony No. 5
(3, 2), -- Electronic Vibes -> Neon Dreams
(3, 5); -- Electronic Vibes -> Night Drive

-- Query to check all data
SELECT * FROM users;
SELECT * FROM artists;
SELECT * FROM albums;
SELECT * FROM songs;
SELECT * FROM playlists;
SELECT * FROM playlist_songs;
```

The database contains: ...
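The backup itself typically boils down to piping pg_dump into the S3-compatible endpoint; a sketch where the container name, bucket, and endpoint are placeholders, not the article's actual values:

```bash
# Dump the database from a running Postgres container and compress it
docker exec postgres pg_dump -U postgres music_streaming_platform | gzip > backup.sql.gz

# Upload the dump to an S3-compatible bucket (Hetzner uses a custom endpoint URL)
aws s3 cp backup.sql.gz s3://my-backup-bucket/backup.sql.gz \
  --endpoint-url https://fsn1.your-objectstorage.com
```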

October 12, 2024 · 5 min · Alex Popescu

Running a CLI Before Your Main App In Docker

Do you ever want to run a CLI before your main app in Docker? Here is a complicated way to do just that.

We need a folder named success_flag that will contain a flag file so the main app knows when the CLI has finished running. First, we delete any existing success flag that remains:

```yaml
base:
  build: .
  container_name: base
  volumes:
    - ./success_flag:/success_flag
  command: bash -c "rm -f /success_flag/cli_success"
  restart: "no"
```

Second, let’s run the database layer, in this case Postgres (but it can be any database engine). ...
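On the main app's side, the gate can be a small wait loop in its entrypoint script; a sketch assuming the flag path from the volume mount above:

```bash
#!/bin/bash
# Block until the CLI container has written its success flag, then start the app
until [ -f /success_flag/cli_success ]; do
  echo "waiting for CLI to finish..."
  sleep 2
done
exec "$@"
```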

September 19, 2024 · 2 min · Alex Popescu

Environment variables at build time with Docker

Introduction

Do you need a certain environment variable at build time in Docker, as opposed to runtime? There is an easy way to achieve this: use Docker build arguments in combination with environment variables. Below is an example of a React JS Docker build that needs the environment variable REACT_APP_BACKEND_API at build time, in the command npm run build.

```dockerfile
# Declare the build argument in the global scope
ARG REACT_APP_BACKEND_API_ARG=TEST

# Use an official Node runtime as a parent image
FROM node:20-alpine

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install the dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Consume the build argument in the build stage
ARG REACT_APP_BACKEND_API_ARG
ENV REACT_APP_BACKEND_API=${REACT_APP_BACKEND_API_ARG}
RUN echo $REACT_APP_BACKEND_API

# Build the React app
# We need the env variable to be available at BUILD TIME
RUN npm run build

# Serve the app using serve
RUN npm install -g serve

# Expose the port the app runs on
EXPOSE 3000

# Command to run the app
CMD ["serve", "-s", "build", "-l", "3000"]
```

The trick is to declare a build argument REACT_APP_BACKEND_API_ARG, re-declare it inside the build stage, and set the required environment variable (in this case REACT_APP_BACKEND_API) to the argument's value (${REACT_APP_BACKEND_API_ARG}). ...
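Passing the value in at build time then looks roughly like this (the URL is a placeholder):

```bash
# Supply the backend URL as a build argument; it becomes REACT_APP_BACKEND_API during the build
docker build \
  --build-arg REACT_APP_BACKEND_API_ARG=https://api.example.com \
  -t react-app .
```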

August 1, 2024 · 2 min · Alex Popescu