apt get life

Life around technology


Solving Error on statfs() system call with cAdvisor and Docker

2021/01/14 by sudo

With a docker setup running the Prometheus node exporter and a docker container running cAdvisor, I’ve been seeing error messages similar to the following repeatedly appearing in syslog:

Jan 14 15:15:25 dockerserver-live prometheus-node-exporter[603]: time="2021-01-14T15:15:25Z" level=error msg="Error on statfs() system call for \"/var/lib/docker/containers/719fe4c20d2d274bb034e914006ecfe6760d8aec98efdc8010c85a01cf4059aa/mounts/shm\": permission denied" source="filesystem_linux.go:57"
Jan 14 16:23:25 dockerserver-live prometheus-node-exporter[28623]: time="2021-01-14T16:23:25Z" level=error msg="Error on statfs() system call for \"/var/lib/docker/overlay2/8c25cb3049b4cfc9bebfd4df0ea6104560155bed2c18a9bd75d21323931570f4/merged\": permission denied" source="filesystem_linux.go:57"

These errors are being generated by a Prometheus node exporter process running with the following args:

      --collector.diskstats.ignored-devices=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\\d+n\\d+p)\\d+$ \
      --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/) \
      --collector.netdev.ignored-devices=^lo$ \
      --collector.textfile.directory=/var/lib/prometheus/node-exporter

I’m not sure where this block originally came from, but in our case it’s in /etc/default/prometheus-node-exporter and easily edited to fix the regular expressions. Specifically, because there are permission problems with the shm and overlay mounts, they can be added to the ignored-mount-points regular expression as below:

      --collector.diskstats.ignored-devices=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\\d+n\\d+p)\\d+$ \
      --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run|var\/lib\/docker\/containers\/.*\/mounts\/shm|var\/lib\/docker\/overlay2\/.*\/merged)($|/) \
      --collector.netdev.ignored-devices=^lo$ \
      --collector.textfile.directory=/var/lib/prometheus/node-exporter

Following this change and restarting the prometheus-node-exporter process, new entries stopped appearing in our syslog, as both troublesome directories are now ignored.
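On a systemd-based system (an assumption here; adjust for your init system, though the unit name normally matches the package name), the restart is:

sudo systemctl restart prometheus-node-exporter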

Filed Under: Docker, Linux, Technology, Uncategorized

MySQL in Docker for Local Development

2020/08/04 by sudo

Outside of my standard docker-compose setup for Laravel, I find myself requiring one-off MySQL instances for short pieces of development, debugging or testing. Instead of building a whole container ecosystem to do this, if I can get away with it I simply run MySQL as a docker container, binding it to the local machine’s ports.

The MySQL docker image is available on Docker Hub at https://hub.docker.com/_/mysql, which provides some basic usage instructions. I’m going to outline how I use MySQL in docker for this one-off work.

Almost everything I do is based on the older MySQL 5.7, which means I need to use that image specifically when running docker. To make sure my local copy is up to date and ready for use, I tend to run a docker pull command on my dev machine as soon as it’s set up.

docker pull mysql:5.7

Version information is available on the MySQL Docker Hub page if you need a different version of MySQL. Now that the docker image is held locally, it’s much easier to start up a container whenever you need it, without waiting for a re-download.
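To confirm the image is cached locally, you can list it (a quick check, nothing more):

docker image ls mysql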

Understanding the Basics of MySQL’s Docker Container

There are a few things I do with my containers. Firstly, I want to expose the container to the host machine. This allows database administration tools like MySQL Workbench or DataGrip to connect to the MySQL docker instance, and it also allows code to talk to it, which is often what I want. It’s important not to overlap these ports, but generally I don’t run a temporary MySQL container alongside any development stacks or local installs, so I bind to the default port (3306). To do this I add the -p 3306:3306 flag to the command. If you want to change the external port (the one you use to connect to MySQL inside docker), change the port number before the colon (:), like so: -p 1234:3306. This maps port 1234 on your machine to port 3306 inside the container.

Next, a root password and default database should be created. You could skip database creation and do it later with the management tool of your choice, but I find this easier. There are two environment variables to set, and I usually pick a short, insecure password for MySQL’s root account as this is only a local test, firewalled on my dev machine. -e MYSQL_ROOT_PASSWORD=toor sets the root password to “toor” (“root” backwards, which was a default on a few Linux distros for a while). Setting the default database is just as easy: -e MYSQL_DATABASE=dev creates a database called “dev”.

Finally, I tend to name the docker container so I can start it again easily if required. I do this long-hand with --name mysql57, where “mysql57” is the name of the container I’m creating. You can name this per project if it makes more sense for you, but I regularly delete and recreate this container as it’s outside my usual dev workflow and usually just for debugging/fixing something once.
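Once the container exists (it’s created in the next section), bringing it back after a stop is then a single command:

docker start mysql57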

Creating a Named MySQL Docker Container Exposed to the Host

Rolling it all together, you can run this command to create a named MySQL 5.7 instance that runs in the background (-d):

docker run --name mysql57 -e MYSQL_ROOT_PASSWORD=toor -e MYSQL_DATABASE=dev -p 3306:3306 -d mysql:5.7
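To check it’s working, you can connect from the host with the regular mysql client, assuming one is installed locally. Using 127.0.0.1 forces a TCP connection to the bound port rather than the local socket:

mysql -h 127.0.0.1 -P 3306 -u root -ptoor dev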

Restore Backups to a MySQL Docker Container

If you have a database backup you need to restore, it’s reasonably easy to pass it into MySQL, although a big database can take some time. This can be done by using cat to read the backup file and piping it into the MySQL docker container. If you’re a user who doesn’t have native docker permissions (as on Ubuntu, which requires sudo docker), it may be best to switch to a user that does (sudo -i to switch to root, then run the backup restore command).

cat database_backup.sql | docker exec -i mysql57 /usr/bin/mysql -u root --password=toor dev

Backing up a Database Inside MySQL’s Docker Container

If you need to back up your MySQL docker database from the container, you can do so by running the mysqldump command the container has installed by default, passing it the container name, the username, password and database you defined when creating the container, and redirecting the output to the file you want the database dump saved to.

docker exec mysql57 /usr/bin/mysqldump -u root --password=toor dev > dev_backup.sql

Cleaning up

Once you’re done with your MySQL container, you can stop and delete it by running the following commands, making sure to replace the container name (“mysql57”) with the name of your container if you happened to change it:

docker stop mysql57
docker rm mysql57

That’s it! You’ve created a named docker container running MySQL 5.7, exposed it to the host machine using port binding, and learned how to restore a database backup to it. It’s not as useful as a full docker-compose stack for development; if you’re interested in a docker-compose dev environment, check out this article. It does, however, give you quick and easy MySQL access when you just need to poke around a database.

Filed Under: Development, Docker, Linux, Technology, Uncategorized Tagged With: docker, mysql, mysql-server

Docker WordPress Increase PHP Max File Size

2020/07/21 by sudo

I’ve recently been working with WordPress inside docker containers and discovered that file uploads are limited to 2MB, the PHP default when using the official WordPress docker image. I don’t know of any sites I’ve worked on that can actually work with only 2MB (possibly that is an indication of how abused WordPress is to turn it into something beyond a simple blogging platform). This brief guide provides you with one possible way of increasing the WordPress maximum file upload size in docker.

Firstly, create an uploads.ini file with the content defined below:

upload_max_filesize = 16M
post_max_size = 16M

Now mount the file into the container, either using volumes with docker run or using docker compose. Below is an example excerpt using docker-compose:

wordpress:
  image: wordpress:latest
  ports:
    - "80:80"
  volumes:
    - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    - ./wp-content:/var/www/html/wp-content/
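If you’re not using docker-compose, a roughly equivalent docker run command is sketched below (note that docker run bind mounts need absolute paths, hence the $(pwd)):

docker run -d -p 80:80 \
  -v "$(pwd)/uploads.ini":/usr/local/etc/php/conf.d/uploads.ini \
  -v "$(pwd)/wp-content":/var/www/html/wp-content/ \
  wordpress:latest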

Filed Under: Docker, Guides, Technology, Uncategorized

Optimising Nginx for PHP & WordPress (Time To First Byte)

2019/04/06 by sudo

When running PageSpeed Insights, it seems that TTFB (Time To First Byte) is something it really penalises when checking performance.

To solve this, we can use nginx’s FastCGI cache to store rendered PHP responses. Even better, the cache can live on a RAM disk, making it very responsive.

First, create a directory for the RAM disk:

sudo mkdir -p /mnt/nginx-cache

Now create an entry in the fstab file so it’s mounted to the RAM disk on boot:

sudo nano /etc/fstab

tmpfs /mnt/nginx-cache tmpfs rw,size=2048M 0 0

This creates a 2GB RAM disk. Edit the size as appropriate for your server. Then mount it:

sudo mount /mnt/nginx-cache
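You can verify the tmpfs is mounted as expected with:

df -h /mnt/nginx-cache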

Now, create a cache configuration file for Nginx:

sudo nano /etc/nginx/conf.d/cache.conf

fastcgi_cache_path /mnt/nginx-cache levels=1:2 keys_zone=phpcache:512m inactive=2h max_size=1024m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

This creates a cache capped at 1GB (max_size), with entries expiring after 2 hours of inactivity. Next update the config file for your website – change the file name where appropriate.

/etc/nginx/sites-enabled/mysite.conf

Inside of the location ~ "^(.+\.php)($|/)" { section, add:

# ----------------------------------------------
# Caching
# ----------------------------------------------
# This defines which cache to use (defined in /etc/nginx/conf.d/cache.conf)
fastcgi_cache phpcache;
# Cache only 200 OK responses for 2 hours
fastcgi_cache_valid 200 2h;
# Cache only GET and HEAD requests, never POST
fastcgi_cache_methods GET HEAD;
# Optional. Add a header to prove it works
add_header X-Fastcgi-Cache $upstream_cache_status;

Now you should be able to restart nginx (sudo service nginx restart) and access the site via a web browser. You can then use something like your browser’s developer tools to inspect the response headers of the web requests. You should find a header:

X-Fastcgi-Cache: HIT
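You can also check from the command line with curl (replace the URL with your own site; the first request will typically show MISS while the cache populates):

curl -sI https://example.com/ | grep -i x-fastcgi-cache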

Filed Under: Linux, Technology, Uncategorized Tagged With: nginx, php, ubuntu server, wordpress

Recommended starter hosting providers for 2014

2014/09/06 by sudo

These are my recommended hosting providers of 2014, all of whom offer Linux machines running Ubuntu or Debian:

Linode offer a wide range of VPS solutions in multiple locations. They have a good community behind them with some great tutorials. Their control panel makes things easy to manage too. There are pay-more additions like load balancers, so if you think you’re going to grow your sites quickly or get lots of traffic it’s worth going with them as a more mature hosting provider.

Digital Ocean have been making a big name for themselves in the past year. Their machines are much faster than a traditional VPS as they’re an SSD only hosting provider, so no old spinning disks to slow things down. They have a wide range of tutorials and Q&A sections on their site which are growing rapidly. The web interface is easy to use, but I’m not too keen on the spin-up process yet as they insist on providing you a password via email.

Bytemark offer a cloud platform similar to both Linode and Digital Ocean. They’re a smaller team based in the UK and the platform is still maturing, but Bytemark are always my first port of call for hosting services. They sponsor many open source events and projects, and even offer hosting to the Debian project.

Filed Under: Linux, Technology, Uncategorized Tagged With: hosting, Linux

© Copyright 2015 apt get life