AWS Lambda and CORS preflight response

Are you struggling with a frontend application that calls a backend AWS Lambda API? Do you get the following CORS error?

Request header field content-type is not allowed by Access-Control-Allow-Headers in preflight response

The solution is simple: implement the HTTP OPTIONS method and respond with these access control headers: Access-Control-Allow-Origin, Access-Control-Allow-Headers and Access-Control-Allow-Methods. For example, in Python you can do something like this:

```python
import json

def return_200():
    return {
        'statusCode': 200,
        'headers': {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Methods': 'GET,HEAD,OPTIONS,POST,PUT',
            'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept, Authorization',
            'Content-Type': 'application/json'
        },
        'body': json.dumps({'message': 'ok'})
    }
```

Documentation and links: Request header field content-type - https://answers.netlify.com/t/request-header-field-content-type-is-not-allowed-by-access-control-allow-headers-in-preflight-response/54410
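Wiring this into the Lambda entry point can be sketched as below. This is a hypothetical handler, assuming API Gateway proxy integration (which passes the HTTP verb in `event['httpMethod']`); the handler name and responses are illustrative:

```python
import json

# Headers taken from the example above.
CORS_HEADERS = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Methods': 'GET,HEAD,OPTIONS,POST,PUT',
    'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept, Authorization',
    'Content-Type': 'application/json',
}

def lambda_handler(event, context):
    # Answer the CORS preflight before falling through to the real logic.
    if event.get('httpMethod') == 'OPTIONS':
        return {'statusCode': 200, 'headers': CORS_HEADERS,
                'body': json.dumps({'message': 'ok'})}
    # Real responses should carry the CORS headers too, otherwise the
    # browser rejects the actual response after a successful preflight.
    return {'statusCode': 200, 'headers': CORS_HEADERS,
            'body': json.dumps({'message': 'hello'})}
```

Note that the non-OPTIONS branch also returns the headers: the preflight only grants permission, the actual response still needs Access-Control-Allow-Origin.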

December 9, 2024 · 1 min · Alex Popescu

Deploying docker image to AWS Fargate in GitHub

Let’s deploy a docker image to AWS ECR and further to AWS Fargate. If you don’t have docker installed, now is the time to do it.

GitHub Action YML: Create a workflow named main-aws.yml, for example:

```yaml
name: Deploy to Amazon ECS

on:
  workflow_dispatch:

env:
  AWS_REGION: eu-central-1      # set this to your preferred AWS region, e.g. us-west-1
  ECS_SERVICE: test-svc         # set this to your Amazon ECS service name
  ECS_CLUSTER: test-fargate-dev # set this to your Amazon ECS cluster name
  ECS_TASK_DEFINITION: .aws/td.json
  CONTAINER_NAME: test          # set this to the name of the container in the
                                # containerDefinitions section of your task definition
  ECS_TASK_NAME: test-task

defaults:
  run:
    shell: bash

jobs:
  Build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Get commit hash
        id: get-commit-hash
        run: echo "commit-hash=$(git rev-parse --short HEAD)" >> $GITHUB_OUTPUT

      - name: Get timestamp
        id: get-timestamp
        run: echo "timestamp=$(date +'%Y%m%d-%H%M')" >> $GITHUB_OUTPUT

      - name: Build, tag, and push the image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.AWS_REPO_NAME }}
          IMAGE_TAG: ${{ steps.get-commit-hash.outputs.commit-hash }}-${{ steps.get-timestamp.outputs.timestamp }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

      - name: Create task definition in Amazon ECS
        run: aws ecs register-task-definition --cli-input-json file://${{ env.ECS_TASK_DEFINITION }}

      - name: Create svc in Amazon ECS
        run: >
          aws ecs create-service
          --cluster ${{ env.ECS_CLUSTER }}
          --service-name ${{ env.ECS_SERVICE }}
          --task-definition ${{ env.ECS_TASK_NAME }}
          --desired-count 1
          --capacity-provider-strategy '[{"capacityProvider":"FARGATE_SPOT","weight":100}]'
          --network-configuration "awsvpcConfiguration={subnets=[subnet-1,subnet-2,subnet-3],securityGroups=[sg-1,sg-2],assignPublicIp=ENABLED}"
          --region ${{ env.AWS_REGION }}

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ env.ECS_TASK_DEFINITION }}
          # same as the container name in the task def
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v2
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
```

Details for every step are below. ...

December 9, 2024 · 5 min · Alex Popescu

PostgreSQL Web UIs in docker

Let’s install and configure PgHero and pgAdmin 4 in docker to connect to the databases deployed in the previous post. If you don’t have docker installed, now is the time to do it.

PgHero: Create the config for PgHero, pghero.yml:

```yaml
databases:
  database1:
    url: postgres://dummy:123456@10.0.0.3:5432/postgres
  database2:
    url: postgres://dummy:123456@10.0.0.4:5432/postgres
```

And run the docker command to start it:

```shell
docker run -it -d -v $(pwd)/pghero.yml:/app/config/pghero.yml -p 8080:8080 ankane/pghero
```

If everything is ok, you should see:

PgAdmin 4: First we create pg-admin-servers.json: ...
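The exact pg-admin-servers.json follows in the post; as a sketch of pgAdmin's documented servers.json import format, reusing the connection details from the PgHero config above, such a file typically looks like:

```json
{
  "Servers": {
    "1": {
      "Name": "database1",
      "Group": "Servers",
      "Host": "10.0.0.3",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "dummy",
      "SSLMode": "prefer"
    }
  }
}
```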

November 25, 2024 · 1 min · Alex Popescu

PostgreSQL Replication and Loadbalancing

Create 3 servers on the cloud of your choice. This is a diagram of the servers:

Node 1: On Node 1 we install postgres, start the postgresql service and add the port to the firewall:

```shell
sudo apt install -y postgresql
sudo systemctl enable postgresql
sudo systemctl start postgresql
sudo systemctl status postgresql
sudo ufw allow from 10.0.0.0/16 to any port 5432
```

Next, we open psql to create the replication user:

```shell
sudo -u postgres psql
```

```sql
CREATE USER ruser REPLICATION LOGIN CONNECTION LIMIT 3 ENCRYPTED PASSWORD 'rpassword';
CREATE USER dummy WITH SUPERUSER PASSWORD '123456';
```

We should open /etc/postgresql/14/main/pg_hba.conf and add the users: ...
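The post lists the exact pg_hba.conf entries next; as a sketch, entries granting the two users access from the 10.0.0.0/16 private network used in the firewall rule typically look like this (PostgreSQL 14 defaults to scram-sha-256 authentication):

```
# TYPE  DATABASE      USER    ADDRESS        METHOD
host    replication   ruser   10.0.0.0/16    scram-sha-256
host    all           dummy   10.0.0.0/16    scram-sha-256
```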

November 17, 2024 · 10 min · Alex Popescu

Developing a Sample TODO API with Couchbase in Docker

Have you ever wanted to run Couchbase on Docker to freely build and test your app? Here’s a simple guide to help you set up Couchbase in Docker and build a sample TODO app using Python and FastAPI.

Developer Cluster Setup: First, let’s create a minimal docker-compose.yml file to spin up Couchbase:

```yaml
services:
  couchbase:
    image: couchbase:latest
    container_name: couchbase
    ports:
      - "8091:8091"   # Couchbase Web Console
      - "8092:8092"   # Query Service
      - "8093:8093"   # Full Text Search
      - "11210:11210" # Data Service
    environment:
      COUCHBASE_ADMINISTRATOR_USERNAME: admin
      COUCHBASE_ADMINISTRATOR_PASSWORD: password
    volumes:
      - couchbase_data:/opt/couchbase/var
      - ./init_bucket.sh:/init_bucket.sh
      - ./start-couchbase.sh:/start-couchbase.sh
    # get the image entry point using: docker inspect -f '{{.Config.Entrypoint}}' couchbase
    # for the current image it is /entrypoint.sh
    command: ["/bin/bash", "/start-couchbase.sh"]
    mem_limit: 1024m # limit memory usage to 1024 MB

volumes:
  couchbase_data:
```

To make it work, we need two shell scripts: start-couchbase.sh and init_bucket.sh. ...
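A startup script like start-couchbase.sh usually has to wait for the Couchbase REST API on port 8091 before any bucket initialization can run. A minimal Python sketch of that wait loop (the function name, URL and timings are illustrative, not from the post; `probe` is injectable so the loop can be exercised without a live server):

```python
import time
import urllib.error
import urllib.request

def wait_for_couchbase(url='http://localhost:8091/pools', attempts=30,
                       delay=1.0, probe=None):
    """Poll the Couchbase web console port until it answers.

    Returns the number of attempts it took; raises TimeoutError if the
    server never comes up.
    """
    if probe is None:
        def probe():
            try:
                urllib.request.urlopen(url, timeout=2)
                return True
            except (urllib.error.URLError, OSError):
                return False
    for attempt in range(1, attempts + 1):
        if probe():
            return attempt
        time.sleep(delay)
    raise TimeoutError('Couchbase did not come up in time')
```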

October 27, 2024 · 5 min · Alex Popescu

Almost Wrong Way To Send Docker Container Logs To ELK

In this article, we’ll walk through setting up a Docker-based ELK (Elasticsearch, Logstash, and Kibana) stack to collect, view, and send Docker logs.

```yaml
services:
  elasticsearch:
    image: elasticsearch:7.17.24
    environment:
      - discovery.type=single-node
    volumes:
      - ./elasticsearch_data/:/usr/share/elasticsearch/data
    mem_limit: "1g"

  redis-cache:
    image: redis:7.4.0

  logstash-agent:
    image: logstash:7.17.24
    volumes:
      - ./logstash-agent:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    depends_on:
      - elasticsearch
    ports:
      - 12201:12201/udp

  logstash-central:
    image: logstash:7.17.24
    volumes:
      - ./logstash-central:/etc/logstash
    command: logstash -f /etc/logstash/logstash.conf
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:7.17.24
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

ElasticSearch: Just create a folder named elasticsearch_data for storing data. ...
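The logstash-agent service listens on 12201/udp, the conventional port for GELF input (used by Docker's gelf log driver). Assuming logstash.conf configures a gelf input on that port, an application could also emit a GELF record by hand; a minimal sketch with only the standard library (host name and level are illustrative):

```python
import json
import socket
import time

def gelf_message(short_message, host='my-container', level=6):
    # Minimal GELF 1.1 payload; a gelf input can parse this shape.
    return json.dumps({
        'version': '1.1',
        'host': host,
        'short_message': short_message,
        'timestamp': time.time(),
        'level': level,  # syslog severity: 6 = informational
    }).encode('utf-8')

def send_gelf(message, address=('127.0.0.1', 12201)):
    # UDP is fire-and-forget: this does not block or fail if nothing listens.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(gelf_message(message), address)
```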

October 16, 2024 · 3 min · Alex Popescu

Backup PostgreSQL with Docker in S3 Object Storage

How can you backup your PostgreSQL to a Hetzner S3 object storage? Here is how it’s done.

Creating S3 Object Storage: First, let’s create our S3 Object Storage: Next, we should create our S3 access key and secret:

Backup and restore: Let’s create a database with sample data:

```sql
-- Create the database
CREATE DATABASE music_streaming_platform;
\c music_streaming_platform;

-- Create users table
CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    date_joined DATE DEFAULT CURRENT_DATE,
    country VARCHAR(50)
);

-- Create artists table
CREATE TABLE artists (
    artist_id SERIAL PRIMARY KEY,
    artist_name VARCHAR(100) UNIQUE NOT NULL,
    genre VARCHAR(50),
    country VARCHAR(50),
    active_since DATE
);

-- Create albums table
CREATE TABLE albums (
    album_id SERIAL PRIMARY KEY,
    album_name VARCHAR(100) NOT NULL,
    release_date DATE,
    artist_id INT REFERENCES artists(artist_id) ON DELETE CASCADE,
    genre VARCHAR(50)
);

-- Create songs table
CREATE TABLE songs (
    song_id SERIAL PRIMARY KEY,
    song_name VARCHAR(100) NOT NULL,
    duration TIME NOT NULL,
    album_id INT REFERENCES albums(album_id) ON DELETE CASCADE,
    plays INT DEFAULT 0,
    release_date DATE
);

-- Create playlists table
CREATE TABLE playlists (
    playlist_id SERIAL PRIMARY KEY,
    playlist_name VARCHAR(100) NOT NULL,
    user_id INT REFERENCES users(user_id) ON DELETE CASCADE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create playlist_songs table (many-to-many relationship between playlists and songs)
CREATE TABLE playlist_songs (
    playlist_id INT REFERENCES playlists(playlist_id) ON DELETE CASCADE,
    song_id INT REFERENCES songs(song_id) ON DELETE CASCADE,
    PRIMARY KEY (playlist_id, song_id)
);

-- Insert data into users table
INSERT INTO users (username, email, country) VALUES
('musiclover_101', 'lover101@example.com', 'USA'),
('rockfanatic', 'rockfan@example.com', 'UK'),
('classicbuff', 'classicbuff@example.com', 'Germany');

-- Insert data into artists table
INSERT INTO artists (artist_name, genre, country, active_since) VALUES
('The Electric Beats', 'Rock', 'USA', '2005-06-15'),
('Synthwave Dream', 'Electronic', 'UK', '2012-04-20'),
('Classical Vibes', 'Classical', 'Germany', '1995-09-25');

-- Insert data into albums table
INSERT INTO albums (album_name, release_date, artist_id, genre) VALUES
('Rock Revolution', '2018-07-22', 1, 'Rock'),
('Synthwave Sunrise', '2020-03-15', 2, 'Electronic'),
('Timeless Classics', '2000-11-12', 3, 'Classical');

-- Insert data into songs table
INSERT INTO songs (song_name, duration, album_id, plays, release_date) VALUES
('Thunderstrike', '00:03:45', 1, 52340, '2018-07-22'),
('Neon Dreams', '00:04:12', 2, 78423, '2020-03-15'),
('Symphony No. 5', '00:07:30', 3, 152340, '2000-11-12'),
('Rock On', '00:03:55', 1, 62340, '2018-07-22'),
('Night Drive', '00:04:55', 2, 46450, '2020-03-15');

-- Insert data into playlists table
INSERT INTO playlists (playlist_name, user_id) VALUES
('Morning Rock Hits', 1),
('Relaxing Classical', 3),
('Electronic Vibes', 2);

-- Insert data into playlist_songs table
INSERT INTO playlist_songs (playlist_id, song_id) VALUES
(1, 1), -- Morning Rock Hits -> Thunderstrike
(1, 4), -- Morning Rock Hits -> Rock On
(2, 3), -- Relaxing Classical -> Symphony No. 5
(3, 2), -- Electronic Vibes -> Neon Dreams
(3, 5); -- Electronic Vibes -> Night Drive

-- Query to check all data
SELECT * FROM users;
SELECT * FROM artists;
SELECT * FROM albums;
SELECT * FROM songs;
SELECT * FROM playlists;
SELECT * FROM playlist_songs;
```

The database contains: ...
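The backup step itself boils down to a pg_dump followed by an upload against the Hetzner S3 endpoint. A hedged Python sketch that only builds the two commands (the bucket name, endpoint URL and dump path are illustrative assumptions, not the post's actual values; run them with subprocess.run):

```python
import datetime
import os

def backup_commands(db='music_streaming_platform', bucket='my-pg-backups',
                    endpoint='https://fsn1.your-objectstorage.com'):
    """Build the dump and upload commands for a daily backup."""
    stamp = datetime.date.today().isoformat()
    dump_file = f'/tmp/{db}-{stamp}.dump'
    # -Fc = custom format, restorable (also selectively) with pg_restore
    dump = ['pg_dump', '-Fc', db, '-f', dump_file]
    key = os.path.basename(dump_file)
    # S3-compatible storage needs an explicit --endpoint-url
    upload = ['aws', 's3', 'cp', dump_file, f's3://{bucket}/{key}',
              '--endpoint-url', endpoint]
    return dump, upload
```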

October 12, 2024 · 5 min · Alex Popescu

Good Starter Cloud Init Config

Do you want a good starter cloud init config? Here is an example:

```yaml
#cloud-config
users:
  - name: user
    groups: users, admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
    ssh_authorized_keys:
      - <update with public ssh key>
chpasswd:
  list: |
    root:<secure-password-here>
  expire: False
packages:
  - fail2ban
  - ufw
package_update: true
package_upgrade: true
runcmd:
  - printf "[sshd]\nenabled = true\nbanaction = iptables-multiport" > /etc/fail2ban/jail.local
  - systemctl enable fail2ban
  - ufw default deny incoming
  - ufw default allow outgoing
  - ufw allow 2022/tcp
  - ufw enable
  - sed -i -e '/^\(#\|\)PermitRootLogin/s/^.*$/PermitRootLogin no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)Port 22/s/^.*$/Port 2022/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)PasswordAuthentication/s/^.*$/PasswordAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)KbdInteractiveAuthentication/s/^.*$/KbdInteractiveAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)ChallengeResponseAuthentication/s/^.*$/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)MaxAuthTries/s/^.*$/MaxAuthTries 2/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)AllowTcpForwarding/s/^.*$/AllowTcpForwarding no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)X11Forwarding/s/^.*$/X11Forwarding no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)AllowAgentForwarding/s/^.*$/AllowAgentForwarding no/' /etc/ssh/sshd_config
  - sed -i -e '/^\(#\|\)AuthorizedKeysFile/s/^.*$/AuthorizedKeysFile .ssh\/authorized_keys/' /etc/ssh/sshd_config
  - sed -i '$a AllowUsers user' /etc/ssh/sshd_config
  - systemctl enable ssh
  - reboot
```

What does it do? ...
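After the sed commands in runcmd have run, the relevant lines of /etc/ssh/sshd_config end up as:

```
Port 2022
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
MaxAuthTries 2
AllowTcpForwarding no
X11Forwarding no
AllowAgentForwarding no
AuthorizedKeysFile .ssh/authorized_keys
AllowUsers user
```

In short: SSH moves to port 2022 (matching the ufw rule), key-only login for the single allowed user, and root login disabled.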

October 1, 2024 · 2 min · Alex Popescu

Running a CLI Before Your Main App In Docker

Do you ever want to run a CLI before your main app in Docker? Here is a complicated way to do just that. We need a folder named success_flag that will contain a flag file telling the main app when the CLI has finished running. First, we delete any existing success flag that remains:

```yaml
base:
  build: .
  container_name: base
  volumes:
    - ./success_flag:/success_flag
  command: bash -c "rm -f /success_flag/cli_success"
  restart: "no"
```

Second, let’s run the database layer, in this case Postgres (but it can be any database engine). ...
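On the other side, the main app has to wait for that flag before starting. A minimal Python sketch of such a wait loop (the function name, timeout and polling interval are illustrative; the path matches the /success_flag volume mount above):

```python
import os
import time

def wait_for_flag(path='/success_flag/cli_success', timeout=60.0, poll=0.5):
    """Block until the CLI container drops its success flag.

    Returns True when the flag appears, False if the timeout elapses.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)
    return False
```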

September 19, 2024 · 2 min · Alex Popescu

Linux Login Update Notifier Script With Python

Introduction: Do you ever want to get a notification with the number of pending updates when logging into XFCE on Debian/Ubuntu-like systems? There is a simple Python script that can do just that. It uses the notify-send command and, paired with the magic command that returns the update count, we get the following Python script:

```python
#!/usr/bin/env python3
import os
import subprocess

# Run the apt-get command and grep the output
command = 'apt-get --simulate upgrade | grep "upgraded.*newly installed"'
output = subprocess.getoutput(command)

# If there's output, send it as a notification, otherwise send a default message
if output:
    os.system(f'notify-send "Upgrade Check" "{output}"')
else:
    os.system('notify-send "Upgrade Check" "No upgrades available or no packages to be installed."')

# Check if the file /var/run/reboot-required exists
reboot_file = '/var/run/reboot-required'
if os.path.exists(reboot_file):
    os.system('notify-send "System Update" "Reboot is required to complete updates."')
```

All you need is to make it executable: ...
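The post continues with how to hook it up; one common mechanism (an assumption, the post may use a different one) is an XDG autostart entry in ~/.config/autostart/, which XFCE honors at login. The script path below is a placeholder:

```
[Desktop Entry]
Type=Application
Name=Update Notifier
Exec=/home/user/bin/update-notify.py
```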

August 29, 2024 · 2 min · Alex Popescu