apt get life

Life around technology


MySQL in Docker for Local Development

2020/08/04 by sudo

Outside of my standard docker-compose setup for Laravel, I find myself needing one-off MySQL instances for short pieces of development, debugging or testing. Instead of building a whole container ecosystem for this, I simply run MySQL as a docker container where I can get away with it, binding it to the local machine’s ports.

The MySQL docker image is available on Docker Hub (https://hub.docker.com/_/mysql), which provides some basic usage instructions. I’m going to outline how I use MySQL in docker for this one-off work.

Almost everything I do is based on the older MySQL 5.7, so I need to use that image specifically when running docker. To make sure my local image is up to date and waiting for use, I tend to run a docker pull command on my dev machine as soon as it’s set up.

docker pull mysql:5.7

Version information is available on the MySQL Docker Hub page if you need a different version of MySQL. Now that the image is held locally it’s much easier to start up a container whenever you need one, without spending ages re-downloading.

Understanding the Basics of MySQL’s Docker Container

There are a few things I do with my containers. Firstly, I want to expose the container to the host machine. This allows database administration tools like MySQL Workbench or DataGrip to connect to the MySQL docker instance, and it allows code running on the host to talk to it, which is often what I want. It’s important that the host port isn’t already in use, but since I generally don’t run a temporary MySQL container alongside a development stack or local install, I bind to the default port (3306) by adding the -p 3306:3306 flag to the command. If you want to change the external port (the one you use to connect to MySQL inside docker), change the port number before the colon (:), like so: -p 1234:3306. This maps port 1234 on your machine to port 3306 inside the container.
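
As a quick check, assuming the mysql command line client is installed on the host, you can connect through the mapped port like this:

# 127.0.0.1 forces a TCP connection to the mapped port rather than a local socket
mysql -h 127.0.0.1 -P 3306 -u root -p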

Next, a root password and default database should be created. You could skip database creation and do it later with the management tool of your choice, but I find this easier. There are two environment variables to set, and I usually pick a short, insecure password for MySQL’s root account as this is only for local testing and is firewalled on my dev machine. -e MYSQL_ROOT_PASSWORD=toor sets the root password to “toor” (root backwards, which was a default on a few Linux distros for a while). Setting the default database is just as easy: -e MYSQL_DATABASE=dev. In this case it creates a database called “dev”.

Finally, I tend to name the docker container so I can run it again easily if required. I do this long-hand with --name mysql57, where “mysql57” is the name of the container I’m creating. You can name this per project if that makes more sense for you, but I regularly delete and recreate this container as it’s outside my usual dev workflow and usually just for debugging or fixing something once.
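
Naming the container means you can bring the same instance back later without recreating it:

# Start the existing (stopped) container again by name
docker start mysql57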

Creating a Named MySQL Docker Container Exposed to the Host

Rolling it all together, you can run this command to create a named MySQL 5.7 instance that runs in the background (-d):

docker run --name mysql57 -e MYSQL_ROOT_PASSWORD=toor -e MYSQL_DATABASE=dev -p 3306:3306 -d mysql:5.7
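
Once it’s up, two standard docker commands will confirm that the container is running and that MySQL has finished initialising:

# Check the container is running and the port binding is in place
docker ps
# Watch the startup logs; "ready for connections" means MySQL is usable
docker logs -f mysql57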

Restore Backups to a MySQL Docker Container

If you have a database backup you need to restore, it’s reasonably easy to feed it into MySQL, although a big database can take some time. This can be done by using cat to read the backup file and piping it into the MySQL docker container. If you’re a user who doesn’t have native docker permissions (like on Ubuntu, which requires sudo docker), it may be best to change to a user that does (sudo -i to switch to root, then run the restore command).

cat database_backup.sql | docker exec -i mysql57 /usr/bin/mysql -u root --password=toor dev
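
The same exec pattern, with -it added, gives you an interactive SQL prompt inside the container if you’d rather poke around by hand:

# Open an interactive MySQL shell in the running container
docker exec -it mysql57 mysql -u root --password=toor dev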

Backing up a Database Inside MySQL’s Docker Container

If you need to back up your MySQL docker database, you can do so by running the mysqldump command the container has installed by default, passing it the container name and the username, password and database you defined when creating the container, and redirecting the output to a file on the host to save the dump:

docker exec mysql57 /usr/bin/mysqldump -u root --password=toor dev > dev_backup.sql
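
For larger databases it may be worth compressing the dump as it’s written; piping through gzip on the host keeps the file small:

# Compress the dump on the fly (restore later with: zcat dev_backup.sql.gz | docker exec -i ...)
docker exec mysql57 /usr/bin/mysqldump -u root --password=toor dev | gzip > dev_backup.sql.gz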

Cleaning up

Once you’re done with your MySQL container, you can stop and delete it by running the following commands, making sure to replace the container name (“mysql57”) with the name of your container if you happened to change it:

docker stop mysql57
docker rm mysql57

That’s it! You’ve created a named docker container running MySQL 5.7, exposed it to the host machine using port binding, and learned how to restore a database backup to it. It’s not as useful as a full docker-compose stack for development; if you’re interested in a docker-compose dev environment, check out this article. It does, however, give you quick and easy MySQL access when you just need to poke around a database.

Filed Under: Development, Docker, Linux, Technology, Uncategorized Tagged With: docker, mysql, mysql-server

Laravel Docker Development Environment With PHP7

2020/07/27 by sudo

Running specific PHP versions for Laravel can be quite useful, especially when working with legacy applications. I work on a range of different solutions with different PHP and Laravel versions. To save time reconfiguring my local environment’s PHP version, and to better represent the live systems, I have opted for Docker-based development environments. Here’s what I’m aiming for:

  • Customisable PHP versions
    • Including libraries like Imagick and Xdebug to make dev easier
  • Self-contained database instance
  • A queue worker, so I can test that queues work locally
  • Email catching, so I can test email notifications locally
  • Redis, for queue management
  • The Laravel Scheduler working

In order to achieve this, I’ve opted for a docker-compose environment with a custom PHP Dockerfile. This defines the PHP version, as well as any extra libraries I need for the project. The project files (the source code of the Laravel application) are then mounted as a volume. By mounting the project’s source code, it’s available to an editor on the host machine while also being available for the PHP container to execute.

Let’s start by defining the project structure:

.
├── .docker
│   ├── Dockerfile.app
│   └── nginx
│       └── default.conf
├── docker-compose.yml
└── src

This structure tends to keep the Docker configuration and extra files neater, since they’re self-contained in a `.docker` directory. The custom PHP docker file (Dockerfile.app) is contained here, as is a subdirectory for Nginx, the webserver I’ll be using. Only the docker-compose file needs to be in the parent folder.

Let’s start with the Dockerfile. You’ll need to find your host user and group ID. On Linux (and presumably Mac) you can find these by running id -u and id -g. Normally they’re both 1000. Replace the ARG entries in the Dockerfile if your IDs are different.

If you’ve not created the directory structure already, do it now:

mkdir -p .docker/nginx

Now create the Dockerfile. I’m using nano, but you can use whatever editor you want: nano .docker/Dockerfile.app

FROM php:7.2-fpm

# Define the User and Group ID for this docker file. This should match your host system UID and GID.
ARG UID=1000
ARG GID=1000

# Set working directory for future docker commands
WORKDIR /var/www/html

# Install dependencies
RUN apt-get update && apt-get install -y --quiet ca-certificates \
    build-essential \
    mariadb-client \
    libpng-dev \
    libxml2-dev \
    libxrender1 \
    wkhtmltopdf \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    curl \
    libmcrypt-dev \
    msmtp \
    iproute2 \
    libmagickwand-dev

# Clear cache: keep the container slim
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Xdebug
# Note that "host.docker.internal" is not currently supported on Linux. This nasty hack tries to resolve it
# Source: https://github.com/docker/for-linux/issues/264
RUN ip -4 route list match 0/0 | awk '{print $3" host.docker.internal"}' >> /etc/hosts

# Install extensions: Some extensions are better installed using this method than apt in docker
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/ \
    && docker-php-ext-install \
        pdo_mysql \
        mbstring \
        zip \
        exif \
        pcntl \
        xml \
        soap \
        bcmath \
        gd

# Install Redis, Imagick and Xdebug (optional, but recommended) and clear temp files
RUN pecl install -o -f redis \
    imagick \
    xdebug \
&&  rm -rf /tmp/pear \
&&  docker-php-ext-enable redis \
    imagick \
    xdebug

# Install composer: This could be removed and run in its own container
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# xdebug.remote_connect_back = true does NOT work in docker
RUN echo '\n\
[Xdebug]\n\
xdebug.remote_enable=true\n\
xdebug.remote_autostart=true\n\
xdebug.remote_port=9000\n\
xdebug.remote_host=host.docker.internal\n'\
>> /usr/local/etc/php/php.ini

RUN echo "request_terminate_timeout = 3600" >> /usr/local/etc/php-fpm.conf
RUN echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini

# Xdebug
# Note that "host.docker.internal" is not currently supported on Linux. This nasty hack tries to resolve it
# Source: https://github.com/docker/for-linux/issues/264
#RUN ip -4 route list match 0/0 | awk '{print $3" host.docker.internal"}' >> /etc/hosts
RUN ip -4 route list match 0/0 | awk '{print "xdebug.remote_host="$3}' >> /usr/local/etc/php/php.ini

# Add user for laravel application
RUN groupadd -g $GID www
RUN useradd -u $UID -ms /bin/bash -g www www

# Make sure permissions match host and container
RUN chown www:www -R /var/www/html

#  Change current user to www
USER www

# Copy in a custom PHP.ini file
# INCOMPLETE/UNTESTED
#COPY source /usr/local/etc/php/php.ini

# We should do this as a command once the container is up.
# Leaving it here in case someone wants to enable it here...
#RUN composer install && composer dump-autoload -o

I’ve left in some commented commands, which can be uncommented and customised if needed. The file comments should also help you make any changes as needed, but the file should work for you as is.

Next, let’s create the nginx configuration file: nano .docker/nginx/default.conf

server {
    listen 80 default_server;

    root /var/www/html/public;

    index index.php index.html index.htm;

    charset utf-8;

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt  { log_not_found off; access_log off; }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_read_timeout 3600;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_page 404 /index.php;

    location ~ /\.ht {
        deny all;
    }
}

The most important part of this file is the fastcgi_pass php:9000; line. This tells nginx, in its container, where to find PHP running in its own container. You’ll see how that ties together in the docker-compose file.

Create the docker-compose.yml file: nano docker-compose.yml

version: '3'

services:

    # Nginx web server
    nginx:
        image: nginx:stable-alpine
        ports:
            # OPTIONAL: change the port number before the colon ":" to alter the web traffic port
            - "8080:80"
        volumes:
            - ./src:/var/www/html
            - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        depends_on:
            # for this container to run, wait until PHP and MYSQL are running
            - php
            - mysql
        networks:
            # OPTIONAL: change or remove the network name (do this for all containers)
            - laravel

    # MySQL database server
    mysql:
        image: mysql:5.7
        restart: unless-stopped
        tty: true
        ports:
            # OPTIONAL: Change the port number before the colon ":" to alter where MySQL binds on the host
            # Allow connections to MySQL from the host (MySQL Workbench, DataGrip, etc) on port 3306
            # WARNING: do not expose in production!
            - "3306:3306"
        environment:
            # OPTIONAL: Change MySQL credentials
            MYSQL_ROOT_PASSWORD: secret
            MYSQL_DATABASE: laravel
            MYSQL_USER: laravel
            MYSQL_PASSWORD: secret
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
        networks:
            - laravel
        volumes:
            # Persist MySQL data with a docker volume (see end of file)
            - mysql_data:/var/lib/mysql

    # Custom PHP image for Laravel
    php:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        volumes:
            - ./src:/var/www/html
            # Load a custom PHP.ini file
            #- ./.docker/php/php.ini:/usr/local/etc/php/php.ini
        #command: ip -4 route list match 0/0 | awk '{print $$3" host.docker.internal"}' >> /etc/hosts
        networks:
            - laravel

    # Redis, for caching and queues (Optional)
    redis:
        image: redis:5-alpine
        restart: unless-stopped        
        # OPTIONAL: change or open up Redis port binding.
        # Disabled by default for security. Redis should not be exposed to the world!
        # your other containers should still be able to access it without this enabled
        #ports:
            #- 6379:6379
        networks:
            - laravel

    # Laravel Horizon (Optional)
    # NOTE: if you're not running horizon, you should delete this stanza or you'll get errors
    horizon:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan horizon; sleep 60; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Laravel Scheduler (Optional)
    scheduler:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan schedule:run >> /dev/null 2>&1 ; sleep 60; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Default Queue Worker (Optional)
    worker-default:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan queue:work --tries=3 --timeout=90 --sleep=10; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Mailhog (Optional, mail-catcher)
    # Comment out or delete this if you don't want to use it
    mailhog:
        image: mailhog/mailhog
        networks:
            - laravel
        ports:
            # Uncomment to allow host access to SMTP (not sure why you'd want to?!)
            # your containers on the same network can still access this without the binding
            # - 1025:1025 # smtp server
            # OPTIONAL: Change the port number before the colon ":" to alter where the Mailhog UI can be accessed
            - 8025:8025 # web ui

networks:
    # A network for the laravel containers
    laravel:


# Persist the MySQL data
volumes:
    mysql_data:

This is quite a big file. Each container is defined inside the services block, and most use stock images from Docker Hub. There are a few important things to know (mostly commented in the file).

The Nginx container has ports exposed. I’ve set these to 8080 externally, mapping to port 80 internally, so to access the site in your browser navigate to http://localhost:8080. The container also mounts two volumes: the first is the source code of your application, the second is the default.conf nginx file written above.
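
Once the stack is up (docker-compose up -d, covered below), a quick smoke test from the host confirms nginx is answering on the mapped port:

# Expect an HTTP response (200, or Laravel's error page if the app isn't configured yet)
curl -I http://localhost:8080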

The MySQL container has port 3306 bound to the host, allowing access from a MySQL management tool such as MySQL Workbench, DataGrip or DBeaver. You absolutely should not run this on a production server without firewalling it. In fact this whole environment is designed for local development, but it particularly needs raising as a point for anyone adapting this for production: do not expose MySQL to the world! The other settings of interest here are the MYSQL_ environment variables, which define the username, password and database name. Additionally, the configuration mounts a volume on the MySQL data directory, which means the data persists until the volume is deleted. You can optionally remove this if you want volatile data that’s lost when the container is removed.

The PHP container’s name is important: it relates to the nginx configuration file, where the fastcgi_pass parameter was defined. If you change the container definition from php: to something else, you’ll need to update the nginx default.conf as well as elsewhere in this file. The PHP container also needs a volume for the source code, mounted at the same path as in the nginx container. Because this is a custom Dockerfile, it needs to be built by docker-compose instead of just pulling an image. You could of course build this image and push it somewhere like Docker Hub and include it from there, but I like to keep the environment customisable without messing around with external registries.

The other containers are entirely optional. If you’re not running Horizon, then just remove or comment out that block. Same with the other remaining containers.

Next thing to do is create a new Laravel install in the src directory, or copy in an existing Laravel repo. Generally I install a new Laravel instance using composer like this:

composer create-project --prefer-dist laravel/laravel src

Now all that’s left to do is run docker-compose up -d. It’ll build the PHP image, pull the MySQL and nginx images, and start your containers using the ports specified in the docker-compose file. To run composer or artisan commands, simply run docker-compose exec php bash and you’ll be dropped into the web directory of the PHP container. From here you can run commands such as php artisan key:generate, php artisan migrate and any of the php artisan make: commands.
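
For the containers to talk to each other, Laravel’s .env file should reference the compose service names rather than localhost. As a sketch, assuming the service names and credentials from the docker-compose file above (and noting the mail key is MAIL_MAILER on newer Laravel versions, MAIL_DRIVER on older ones):

# Database: host is the "mysql" service name, credentials from docker-compose.yml
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=laravel
DB_PASSWORD=secret

# Redis: the "redis" service name
REDIS_HOST=redis

# Mailhog catches outbound mail on its SMTP port
MAIL_HOST=mailhog
MAIL_PORT=1025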

It’s also possible to version control your src folder. Do this from the host, not inside a docker container. cd src to go into the source code directory (the repository lives in src rather than the project root, as it’d be unusual to store your dev environment with the application), then git init will initialise a new git repository for you to manage as you see fit.

Filed Under: Development, Docker, Guides, Laravel, Technology Tagged With: development, docker, docker-compose, Laravel, PHP development

Injecting inbound request headers with Traefik v2

2020/06/30 by sudo

I’m using Traefik v2.2 as a reverse proxy for my docker containers. Basically, all HTTP and HTTPS traffic is handled by Traefik as an ingress container and then routed, according to rules defined in my docker-compose file, to the appropriate internal container.

Something I’ve needed to do for a project is add a header to an inbound request in order to identify that the request has been processed by Traefik. I tried to follow the documentation but found it… lacking. For anyone interested, the official documentation for adding headers in Traefik is here: https://docs.traefik.io/middlewares/headers/

What I’ve ended up with is a service container (nginx in my case) that looks like this:

    nginx:
        image: nginx:stable-alpine
        volumes:
            - ./src:/var/www
            - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        labels:
            - "traefik.enable=true"
            - "traefik.http.routers.router1.rule=Host(`localhost`)"
            - "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"
            - "traefik.http.routers.router1.middlewares=testHeader"
    php:
        image: php:7.4-fpm
        volumes:
            - ./src:/var/www

Now, something that threw me: the header “test-header” doesn’t appear under that name in requests handled by nginx. A linked PHP container simply running print_r($_SERVER) dumps all the variables to the page. It’s only at this point I discovered that request headers reach PHP in $_SERVER with an HTTP_ prefix, uppercased and with dashes converted to underscores (the standard CGI convention). So instead of:

test-header = new-header

you get:

HTTP_TEST_HEADER = new-header

I think this is one of those things you can waste a lot of time on if you don’t know the header names get re-written in this way.
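
If you want to reproduce the test, a throwaway page that dumps $_SERVER is the quickest way to see exactly what arrives. A minimal sketch, assuming ./src is the web root served by the nginx container:

# Create a page that dumps all server variables, then load it and look for HTTP_TEST_HEADER
cat > src/index.php <<'PHP'
<?php print_r($_SERVER);
PHP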

Looking back at the Traefik configuration, adding a header takes two steps:

First, create a new header. This is done in the following format:

traefik.http.middlewares.{your header group name}.headers.customrequestheaders.{header key}={header value}

As an example:

- "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"

This creates a header `test-header` (which PHP sees as `HTTP_TEST_HEADER`) and assigns it to the `testHeader` middleware group.

Secondly, assign this middleware group to the router you’re using for your service. In my case that’s an nginx container on a router `router1`. This takes the following format:

traefik.http.routers.{router name}.middlewares={middleware name}

As an example:

- "traefik.http.routers.router1.middlewares=testHeader"

If you’re not sure what this looks like in the context of the other containers check the first, more complete example or see the entire docker-compose file below (NOTE: this file exposes the Traefik API for debugging purposes, so don’t blindly deploy it to production):

version: '3'

services:
    proxy:
        image: traefik:v2.2
        command:
            - "--providers.docker=true"
            - "--providers.docker.exposedbydefault=false"
            - "--entrypoints.web.address=:80"
            - "--api.insecure=true"
        ports:
            - "80:80"
            - "8080:8080"
        volumes:
            - "/var/run/docker.sock:/var/run/docker.sock:ro"

    nginx:
        image: nginx:stable-alpine
        volumes:
            - ./src:/var/www
            - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        labels:
            - "traefik.enable=true"
            - "traefik.http.routers.router1.rule=Host(`localhost`)"
            - "traefik.http.middlewares.testHeader.headers.customrequestheaders.test-header=new-header"
            - "traefik.http.routers.router1.middlewares=testHeader"
            - "traefik.http.middlewares.test-ratelimit.ratelimit.average=1"
            - "traefik.http.middlewares.test-ratelimit.ratelimit.period=1m"

    php:
        image: php:7.4-fpm
        volumes:
            - ./src:/var/www

Filed Under: Docker, Linux, Technology Tagged With: docker, traefik, ubuntu server

Handling Failure: What I’ve learned with a failed systems architecture change

2020/04/25 by sudo

Docker. It’s everywhere. I use it at work and at home. It’s amazing for doing development in environments that more closely match production (such as running a full Laravel stack with queues, Redis, a database, and a local MailHog for catching test email).

In order to learn more about docker, and improve my ability to roll out the sites I host, I decided to move a whole bunch of WordPress sites to docker. The existing setup is based on Ubuntu 16.04 LTS running Nginx and PHP-FPM with per-user resource pools for better security and site resource allocation. PHP 7.0 has been end of life for some time and it’s definitely time to update. Ubuntu LTS likely won’t track current PHP versions due to the way PHP’s release cycle has changed to be effectively two years. That means before Ubuntu 20.04 (released just days ago) is end of life, PHP 7.4 will have been end of life for over a year and won’t have had active support for over two years! Docker, I think, will allow me to better update the environments and keep them in line with the PHP release cycle. Hopefully it also makes them easier to migrate to new operating systems later.

I’ve already had experience with Traefik as a reverse proxy and it’s fantastic for handling Let’s Encrypt SSL certificates for multiple sites out of the box. I can easily add docker containers with labels and they’ll appear automatically in Traefik; magic!

So here’s what I had in my head when I started:

  • VM running Ubuntu Server 20.04 LTS
  • A docker user that manages and has permissions over everything
  • A docker-compose file in that user’s home folder. This file runs the core config for Traefik and any other main containers I need (maybe fail2ban too).
  • Each site exists in a subdirectory named after its domain name. Within that there’s a docker-compose file for that site, and any files are stored there too.
  • Database instances either per site or provided by the host

So that gives you something that looks like:

/home/docker/docker-compose.yml            # traefik
/home/docker/aptgetlife/docker-compose.yml # wordpress and MySQL
/home/docker/aptgetlife/public_html/       # site files
/home/docker/aptgetlife/mysql/             # database files

Problem: The docker PPA for ubuntu doesn’t exist for 20.04!

Ubuntu’s apt packages are often out of date, so by default I jump straight to the docker documentation site to get the latest possible version. Or not: there was no PPA available for 20.04 yet!

The fallback was to use apt. Since 20.04 is a new release, its docker package was up to date. It may be worth changing to the PPA once it’s available.

Problem: Packet loss on Ubuntu 20.04

While editing the master docker-compose file, my ssh connection kept hanging and dropping. Some pings revealed an intermittent network connection. It isn’t clear whether this was caused by the docker networking packages, KVM drivers or Ubuntu 20.04 itself. The release has only been out a day, so there are possibly issues with the OS itself.

The fallback was to go back to Ubuntu 18.04, which didn’t have any issues! I’ll jump back to 20.04 after it’s bedded in a little and hopefully the issue will go away.

Problem: The traefik network can’t be seen by individual sites

This is a new one to me; I’d never done this before, so I didn’t have any experience of the setup. I have named my internal network traefik in my main docker-compose file. This works great: Traefik will create the network and the traefik container will connect to it fine. What didn’t work was the per-site docker-compose file connecting to that network; it wanted to create its own version named after the folder it was in. I discovered that docker-compose file format version 3.5 (https://github.com/docker/compose/issues/3736) allows you to reference externally named networks.

networks:
    aptgetlife: # network for this container and associated resources like MySQL
    traefik: # link to Traefik for inbound traffic
        external:
            name: traefik
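
For reference, the matching stanza in the main docker-compose file might look like this (a sketch, assuming file format version 3.5+ so the network gets a fixed name instead of a folder-prefixed one):

networks:
    traefik:
        # Fix the network name so per-site compose files can reference it as "traefik"
        name: traefik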

Problem: WordPress site URLs

Okay. This is one of my pet hates with WordPress. WordPress requires a site URL. This apparently included “www.” and I’d set it up on the new system without it. It turned out to be easier to change the configuration to use “www.” than to convince WordPress to change its URL. This worked, except no style sheets or JavaScript would load. This, as it turns out, is due to WordPress loading insecure URLs for these assets. I attempted to use tools to update the SQL file and edited the wp-config.php file, but neither solved the problem.

sed -i 's|http://www.aptgetlife.co.uk|https://www.aptgetlife.co.uk|g' wp_aptgetlife.sql

This actually defeated me. I really don’t understand WordPress and how insistent it is on loading resources from particular URLs.

What have I learned?

Well, I’ve learned a lot about docker-compose, networking and override files, and I know my architecture will work. I have also learned that I dislike WordPress. A lot. I’m sure the site asset problem is fixable, but I don’t have the patience to deal with it; I’m not interested in fixing WordPress-related problems. Even though this project failed, and it is something I wanted to use for moving my hosted sites, I have gained a lot of knowledge in the process. So instead of taking away the failure “I have not deployed my sites using docker”, I am trying to look at the benefits of the knowledge I’ve gained and reflect on the project as a learning experience. Hopefully a little bit of a retrospective will embed some key technical details in my brain for future DevOps! It’s also important to research what’s possible with systems architecture; I’d have looked into this before starting if it were a work project, but because it’s a personal project I didn’t feel the need. This was almost a playground, a trial run to see if it was even feasible. I think I learnt more, and more quickly, through this make, fail, make, fail iterative approach. I had a definable success for each of the failures and a learning experience from each while overcoming them.

My big takeaway is that docker-compose has some great features in 3.5+ that I didn’t know existed. There’s some great information about networking (in particular the section at the end about “external”, pre-existing docker networks) on the docker website: https://docs.docker.com/compose/networking/

Filed Under: Misc, Technology Tagged With: DevOps, docker, learning, networking, traefik, ubuntu server

Install Docker on Linux Mint 17.2

2015/10/21 by sudo

This post provides practical steps to setting up docker to run on Linux Mint 17.2, which is what I’m using on my development machine at the moment.

Before beginning, make sure you have removed any existing version(s) of Docker from your system.

First, add the Docker repository key

sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Then create a new file for apt to find the Docker repository

sudo nano /etc/apt/sources.list.d/docker.list

Inside the docker.list file enter the following

# Ubuntu Trusty/Mint 17.2
deb https://apt.dockerproject.org/repo ubuntu-trusty main

Save and close the file. Now we want to get the latest updates from the new repository

sudo apt-get update

Once this finishes, you should now be able to install the latest Docker version

sudo apt-get install docker-engine

Once this has run, you can test the hello world docker image, which is tiny and quick to download

sudo docker run hello-world
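
To confirm the install beyond hello-world, these standard commands report the client and daemon version and environment details:

# Show client and daemon versions; errors here usually mean the daemon isn't running
sudo docker version
# Summarise the daemon's configuration, storage driver and container counts
sudo docker info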

 

Filed Under: Guides, Technology Tagged With: docker, linux mint 17.2
