apt get life

Life around technology


Injecting inbound request headers with Traefik v2

2020/06/30 by sudo Leave a Comment

I’m using Traefik v2.2 as a reverse proxy for my Docker containers. Essentially all HTTP and HTTPS traffic is handled by Traefik as an ingress container and then routed, according to rules defined in my docker-compose file, to the appropriate internal container.

Something I’ve needed to do for a project is add a header to an inbound request in order to identify that the request has been processed by Traefik. I tried to follow the documentation but found it… lacking. If you’re interested, the official documentation for adding headers in Traefik is here: https://docs.traefik.io/middlewares/headers/

What I’ve ended up with is a service container (nginx in my case) that looks like this:

nginx:
    image: nginx:stable-alpine
    volumes:
        - ./src:/var/www
        - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
    labels:
        - "traefik.enable=true"
        - "traefik.http.routers.router1.rule=Host(`localhost`)"
        - "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"
        - "traefik.http.routers.router1.middlewares=testHeader"
php:
    image: php:7.4-fpm
    volumes:
        - ./src:/var/www

Now something that threw me was that the header “test-header” doesn’t appear under that name when you inspect the request from PHP. A linked PHP container simply running print_r($_SERVER) dumps all the variables to the page, and it’s only at this point I discovered that request headers show up in $_SERVER with an HTTP_ prefix, uppercased and with dashes converted to underscores (this is the PHP/FastCGI convention rather than anything Traefik does). So instead of:

test-header = new-header

you get:

HTTP_TEST_HEADER = new-header

I think this is one of those things you can waste a lot of time on if you don’t know the header names are being rewritten like this on their way into $_SERVER.
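
As an aside, here’s a minimal PHP sketch that reads the injected header out of $_SERVER. It assumes an index.php served by the php-fpm container above; the file name and the check are illustrative, not part of my actual setup:

<?php
// index.php – request headers arrive in $_SERVER prefixed with HTTP_,
// uppercased, and with dashes converted to underscores,
// so "test-header" appears as HTTP_TEST_HEADER.
$testHeader = $_SERVER['HTTP_TEST_HEADER'] ?? null;

if ($testHeader === 'new-header') {
    echo 'This request came through the Traefik middleware.';
} else {
    echo 'Header not present - the request bypassed the Traefik middleware.';
}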

Looking back at the Traefik configuration, to add a header you basically have two steps:

First, create a new header. This is done in the following format:

traefik.http.middlewares.{your header group name}.headers.customrequestheaders.{header key}={header value}

As an example:

- "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"

This creates a request header `test-header` (which shows up in PHP as `HTTP_TEST_HEADER`) and assigns it to the `testHeader` middleware group.

Secondly, assign this middleware group to the router you’re using for your service. In my case that’s an nginx container on a router `router1`. This takes the following format:

traefik.http.routers.{router name}.middlewares={middleware name}

As an example:

- "traefik.http.routers.router1.middlewares=testHeader"

If you’re not sure what this looks like in the context of the other containers check the first, more complete example or see the entire docker-compose file below (NOTE: this file exposes the Traefik API for debugging purposes, so don’t blindly deploy it to production):

version: '3'

services:
    proxy:
        image: traefik:v2.2
        command:
            - "--providers.docker=true"
            - "--providers.docker.exposedbydefault=false"
            - "--entrypoints.web.address=:80"
            - "--api.insecure=true"
        ports:
            - "80:80"
            - "8080:8080"
        volumes:
            - "/var/run/docker.sock:/var/run/docker.sock:ro"

    nginx:
        image: nginx:stable-alpine
        volumes:
            - ./src:/var/www
            - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        labels:
            - "traefik.enable=true"
            - "traefik.http.routers.router1.rule=Host(`localhost`)"
            - "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"
            - "traefik.http.routers.router1.middlewares=testHeader"
            - "traefik.http.middlewares.test-ratelimit.ratelimit.average=1"
            - "traefik.http.middlewares.test-ratelimit.ratelimit.period=1m"

    php:
        image: php:7.4-fpm
        volumes:
            - ./src:/var/www
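
To check the header is actually being injected, bring the stack up and request the site through Traefik. This is a rough sketch; it assumes the nginx default.conf proxies PHP requests to the php container and that something like the print_r($_SERVER) script mentioned above lives under ./src:

docker-compose up -d
curl http://localhost/
# the dumped $_SERVER output should include HTTP_TEST_HEADER => new-header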

Filed Under: Docker, Linux, Technology Tagged With: docker, traefik, ubuntu server

Setting up a bond and a bridge in Netplan on Ubuntu Server 20.04

2020/06/03 by sudo 2 Comments

I’m in the process of updating my KVM servers from Ubuntu 18.04 to Ubuntu 20.04. Along with the new version of Ubuntu there have been some changes in Netplan.

What I’ve done is edit the default file created by the Ubuntu Server installer, /etc/netplan/00-installer-config.yaml, and set up the following:

network:
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        mode: active-backup
  ethernets:
    eno1: {}
    eno2: {}
  version: 2
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - bond0
      mtu: 1500
      parameters:
        stp: false
        forward-delay: 4

This takes my two interfaces, eno1 and eno2, and creates bond0 as an active backup. There are a few different bonding modes you can choose from:

balance-rr – Round robin. Packets are sent in sequential order from the first connection listed, going down the chain.
active-backup – Only the first connection is used; if it fails, another connection takes over.
balance-xor – Uses a transmission policy to route between interfaces, providing both load balancing and fault tolerance.
broadcast – Sends data on all interfaces (not sure why you’d use this).
802.3ad – An IEEE link aggregation standard. It requires the switch to support the same protocol, and aggregates the connections to provide the combined bandwidth of all configured interfaces.
balance-tlb – Manages transmit load between the network adapters based on demand and availability.
balance-alb – Includes both transmit load balancing (balance-tlb) and receive load balancing.

Then, the bridge br0 connects to bond0. This is where you configure the network type – DHCP or static IP. In this case I’m using DHCP as the firewall I have in place manages IP address assignments and it has the server set to a static address. If you want to specify a static IP address in this configuration file, you can do it like below:

network:
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        mode: active-backup
  ethernets:
    eno1: {}
    eno2: {}
  version: 2
  bridges:
    br0:
      addresses:
        - 192.168.10.30/24
      dhcp4: false
      gateway4: 192.168.10.1
      nameservers:
        addresses:
          - 192.168.10.1
          - 192.168.10.2
        search: []
      interfaces:
        - bond0
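
Whichever variant you use, you can test the configuration before committing to it. netplan try applies the change and rolls it back automatically if you don’t confirm within the timeout, which is handy when you’re connected over SSH to a remote KVM host:

sudo netplan try
# or apply it straight away
sudo netplan apply

# check the bridge picked up an address and the bond is healthy
ip -br addr show br0
cat /proc/net/bonding/bond0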

You can find out more information here:
https://netplan.io/examples

There’s a version of this post for 18.04 here (see the comments with suggested fixes):
https://www.aptgetlife.co.uk/setting-up-a-bond-and-bridge-in-netplan-on-ubuntu-18-04/

Filed Under: Guides, Linux, Technology Tagged With: networking, ubuntu, ubuntu 20.04, ubuntu server

Handling Failure: What I’ve learned with a failed systems architecture change

2020/04/25 by sudo Leave a Comment

Docker. It’s everywhere. I use it at work and at home. It’s amazing for doing development in environments that more closely match production (such as running a full Laravel stack with queues, Redis, a database, and a local MailHog instance for catching test email).

In order to learn more about Docker, and improve my ability to roll out the sites I host, I decided I wanted to move a whole bunch of WordPress sites to Docker. The existing setup is based on Ubuntu 16.04 LTS running Nginx and PHP-FPM with per-user resource pools for better security and site resource allocation. PHP 7.0 has been end of life for some time and it’s definitely time to update. Ubuntu LTS likely won’t track current PHP versions due to the way PHP’s release cycle has changed to be effectively two years. That means before Ubuntu 20.04 (released just days ago) is end of life, PHP 7.4 will have been end of life for over a year and won’t have had active support for over two years! Docker, I think, will let me keep the environments in line with the PHP release cycle. Hopefully it also makes them easier to migrate to new operating systems later.

I’ve already had experience with Traefik as a reverse proxy and it’s fantastic for handling Let’s Encrypt SSL certificates for multiple sites out of the box. I can easily add Docker containers with labels and they’ll appear automatically in Traefik; magic!

So here’s what I had in my head when I started:

  • VM running Ubuntu Server 20.04 LTS
  • A docker user that manages and has permissions over everything
  • A docker-compose file in that user’s home folder. This file runs the core config for Traefik and any other main containers I need (maybe fail2ban too).
  • Each site lives in a subdirectory named after its domain name. Within that there’s a docker-compose file for that site, and any site files are stored there too.
  • Database instances either per site or provided by the host

So that gives you something that looks like:

/home/docker/docker-compose.yml # traefik

/home/docker/aptgetlife/docker-compose.yml # wordpress and MySQL

/home/docker/aptgetlife/public_html/ # site files

/home/docker/aptgetlife/mysql/ # database files

 

Problem: Docker’s apt repository for Ubuntu didn’t exist for 20.04!

Ubuntu’s own apt packages are often out of date, so by default I jump straight to the Docker documentation site to add Docker’s repository and get the latest possible version. Or not. There was no repository available for 20.04 yet!

The fallback was to use Ubuntu’s own apt packages. Since 20.04 is a brand new release, the packaged Docker version was up to date anyway. It may be worth switching to Docker’s repository once it’s available.
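
For reference, the fallback install looked roughly like this (a sketch using Ubuntu’s own docker.io and docker-compose packages; the usermod line assumes the dedicated docker user described above):

sudo apt update
sudo apt install docker.io docker-compose
# let the dedicated "docker" user run containers without sudo
sudo usermod -aG docker docker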

Problem: Packet loss on Ubuntu 20.04

While editing the master docker-compose file, my SSH connection kept hanging and dropping. A few pings later I discovered the network connection was intermittent. It isn’t clear whether this was caused by the Docker networking packages, the KVM drivers or Ubuntu 20.04 itself. The release has only been out a day, so there may well be issues with the OS itself.

The fallback was to go back to Ubuntu 18.04, which didn’t have any issues! I’ll jump back to 20.04 after it’s bedded in a little and hopefully the issue will go away.

Problem: The traefik network can’t be seen by individual sites

This was a new one to me; I’d never done this before so I didn’t have any experience of the setup. I have named my internal network traefik in my main docker-compose file. This works great: Traefik creates the network and the Traefik container connects to it fine. What didn’t work was the per-site docker-compose file connecting to that network; it wanted to create its own version named after the folder it was in. I discovered that as of docker-compose file format version 3.5 (https://github.com/docker/compose/issues/3736) you can reference a pre-existing network by name:

networks:
    aptgetlife: # network for this container and associated resources like MySQL
    traefik: # link to Traefik for inbound traffic
        external:
            name: traefik
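
To show how that fits together, a per-site docker-compose file using those networks might look roughly like this (image tags, service names and labels are illustrative, not my exact config):

version: '3.5'

services:
    wordpress:
        image: wordpress # illustrative; my real stack uses separate nginx/php containers
        networks:
            - aptgetlife
            - traefik
        labels:
            - "traefik.enable=true"
            - "traefik.http.routers.aptgetlife.rule=Host(`www.aptgetlife.co.uk`)"
    db:
        image: mysql:5.7
        networks:
            - aptgetlife # the database only needs the internal network

networks:
    aptgetlife: # private network for this site and its database
    traefik:
        external:
            name: traefik # join the pre-existing network created by the main compose file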

Problem: WordPress site URLs

Okay. This is one of my pet hates with WordPress. WordPress requires a site URL. This had apparently been set up with “www.” and I’d set the new system up without it. It turned out to be easier to change the configuration to use “www.” than to convince WordPress to change its URL. This worked. Except no style sheets or JavaScript would load. This, it turns out, was because WordPress was loading those assets over insecure (http://) URLs. I tried tools to update the SQL dump and edited the wp-config.php file, but neither solved the problem.

sed -i 's|http://www.aptgetlife.co.uk|https://www.aptgetlife.co.uk|g' wp_aptgetlife.sql

This actually defeated me. I really don’t understand WordPress and how insistent it is on loading resources from particular URLs.

What have I learned?

Well, I’ve learned a lot about docker-compose, networking and override files. I know my architecture will work. I’ve also learned that I dislike WordPress. A lot. I’m sure the site asset problem is fixable, but I don’t have the patience to deal with it; I’m not interested in fixing WordPress-related problems.

Even though this project failed, and it was something I wanted to use for moving my hosted sites, I’ve gained a lot of knowledge in the process. So instead of treating it as a straight failure (“I have not deployed my sites using Docker”), I’m trying to focus on the knowledge gained and reflect on the project as a learning experience. Hopefully a bit of a retrospective will embed some key technical details in my brain for future DevOps work! It’s also important to research what’s possible with a systems architecture up front; I’d have done that before starting if this were a work project, but because it was a personal project I didn’t feel the need. This was almost a playground, a trial run to see if the idea was even feasible. I think I learned more, and more quickly, through this make, fail, make, fail iterative approach: each failure gave me a definable problem to overcome and a learning experience along the way.

My big takeaway is that docker-compose has some great features in 3.5+ that I didn’t know existed. There’s some great information about networking (and in particular the section at the end about “external” or pre-existing docker networks) on their website https://docs.docker.com/compose/networking/

Filed Under: Misc, Technology Tagged With: DevOps, docker, learning, networking, traefik, ubuntu server

KVM converting virtual disks from raw img files to qcow2

2020/01/21 by sudo Leave a Comment

If you’re running QEMU/KVM on Ubuntu and want to take advantage of the qcow2 format’s snapshot support and sparse disk allocation, you can easily convert a raw image using the command line tool qemu-img convert.

First, make sure your virtual machine is turned off! Then navigate to the directory your virtual disks are stored in (usually /var/lib/libvirt/images). You’ll probably need to be root, or otherwise sudo, for the following command:

qemu-img convert -f raw -O qcow2 vm_hdd.img vm_hdd.qcow2

The -f flag tells qemu-img the format of the input file. If you don’t provide it, qemu-img will try to detect the format automatically.

The -O flag tells qemu-img the format to write out; here that’s qcow2.
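
You can sanity-check the result before touching the VM definition; qemu-img info reports the format it detects and the allocated (sparse) size on disk:

qemu-img info vm_hdd.qcow2
# look for "file format: qcow2" and compare the disk size with the virtual size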

Now you’ve got a qcow2 file, you’ll need to edit the VM configuration:

virsh edit vm_name

This opens your VM configuration in an editor. It’s an XML file, so it’s reasonably easy to follow. What you’re looking for is the disk section, so you can change the driver type and the file name:

<!-- before -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-2.img'/>

<!-- after -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-2.qcow2'/>

Note that both “raw” and “.img” have been changed to “qcow2” for this disk. Make sure you’ve picked the right disk to edit in the XML. It’s a good idea to keep the original .img file as a backup so you can fall back to it if needed!

That should be it, your VM should now boot with the new disk file. Once you’re sure it’s working you can delete the original (or keep it safe somewhere).

 

More information about KVM can be found on the RedHat website: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/index or the Ubuntu wiki https://help.ubuntu.com/community/KVM/Installation

Filed Under: Linux, Technology Tagged With: KVM, ubuntu 18.04, ubuntu server

Optimising Nginx for PHP & WordPress (Time To First Byte)

2019/04/06 by sudo

When running PageSpeed Insights, it seems that TTFB (Time To First Byte) is something it really doesn’t like when checking performance.

To improve this, we can use Nginx’s FastCGI cache to store rendered PHP responses. Even better, the cache can live on a RAM disk, making it very responsive.

First, create a directory for the RAM disk:

sudo mkdir -p /mnt/nginx-cache

Now create an entry in the fstab file so a tmpfs RAM disk is mounted there on boot:

sudo nano /etc/fstab

tmpfs /mnt/nginx-cache tmpfs rw,size=2048M 0 0

This creates a 2GB RAM disk. Edit the size as appropriate for your server. Then mount it:

sudo mount /mnt/nginx-cache

Now, create a cache configuration file for Nginx:

sudo nano /etc/nginx/conf.d/cache.conf

fastcgi_cache_path /mnt/nginx-cache levels=1:2 keys_zone=phpcache:512m inactive=2h max_size=1024m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

This creates a cache of up to 1GB, stored on the RAM disk mounted at /mnt/nginx-cache (note the path matches the mount point created above), with entries evicted after 2 hours of inactivity. Next, update the config file for your website; change the file name where appropriate.

/etc/nginx/sites-enabled/mysite.conf

Inside of the location ~ "^(.+\.php)($|/)" { section, add:

# ----------------------------------------------
# Caching
# ----------------------------------------------
# This selects which cache zone to use (defined in /etc/nginx/conf.d/cache.conf)
fastcgi_cache phpcache;
# Cache only 200 OK responses, for 2 hours
fastcgi_cache_valid 200 2h;
# Only cache GET and HEAD requests (never POST)
fastcgi_cache_methods GET HEAD;
# Optional. Add a header to prove it works
add_header X-Fastcgi-Cache $upstream_cache_status;
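
If the site has logged-in users or an admin area you probably don’t want to serve them cached pages. A common addition, sketched here (adjust the cookie names and paths to your install), sets a bypass flag based on WordPress cookies and URLs:

# Skip the cache for logged-in users, comment authors and admin/preview requests
set $skip_cache 0;
if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
    set $skip_cache 1;
}
if ($request_uri ~* "/wp-admin/|/wp-login.php|preview=true") {
    set $skip_cache 1;
}
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;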

Now you should be able to restart Nginx (sudo service nginx restart) and access the site via a web browser. Then use something like your browser’s developer tools to inspect the response headers. You should find a header:

X-Fastcgi-Cache: HIT

 

Filed Under: Linux, Technology, Uncategorized Tagged With: nginx, php, ubuntu server, wordpress


© Copyright 2015 apt get life