apt get life

Life around technology


Setting up DisplayLink drivers on Ubuntu 20.04

2020/06/23 by sudo

I’ve purchased a Dell “universal” USB 3/USB-C docking station featuring DisplayLink, which essentially lets you use it to drive external displays. My intention is to use it with my laptop when sat at a desk with an external monitor, mouse and keyboard. Since I run Ubuntu or Linux Mint as my primary operating system, some extra steps were required to install the DisplayLink drivers.

The first thing you need to do is download the DisplayLink drivers from the official website: https://www.displaylink.com/downloads/ubuntu

Once downloaded, it’s a reasonably straightforward process to install the drivers. The first thing required is to install the prerequisites:

sudo apt-get install dkms libdrm-dev

Next, run the downloaded file, replacing the version number with that of the one you downloaded. You may need to mark the file executable first with chmod +x. (Note: this command assumes you’re already in the appropriate directory.)

sudo ./displaylink-driver-5.3.1.34.run

Once finished, you should see a message asking if you are running Xorg and whether you want to reboot. It’s actually best to reboot regardless of your display driver, so select Y for yes and let your machine reboot. Then you should be able to plug in your USB dock and your external devices and have them all working.
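If you want to confirm the install took, the driver builds its kernel component through DKMS, so a quick check after rebooting can be sketched like this (evdi as the module name is an assumption based on how the DisplayLink installer packages its kernel module):

```shell
# Check whether the DisplayLink kernel module (evdi) was registered with DKMS.
# If nothing matches, re-run the installer and check its log output.
dkms status 2>/dev/null | grep -i evdi || echo "evdi module not found - check the installer log"
```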

For more information, the DisplayLink website actually has some good resources. Check out their Ubuntu setup guide here: https://support.displaylink.com/knowledgebase/articles/684649

Filed Under: Guides, Linux, Technology Tagged With: DisplayLink, ubuntu, ubuntu 20.04

Setting up a bond and a bridge in Netplan on Ubuntu Server 20.04

2020/06/03 by sudo

I’m in the process of updating my KVM servers from Ubuntu 18.04 to Ubuntu 20.04. Along with the new version of Ubuntu, there have been some changes in Netplan.

What I’ve done is edit the default file created by the Ubuntu Server installation, /etc/netplan/00-installer-config.yaml, and set up the following:

network:
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        mode: active-backup
  ethernets:
    eno1: {}
    eno2: {}
  version: 2
  bridges:
    br0:
      dhcp4: true
      interfaces:
        - bond0
      mtu: 1500
      parameters:
        stp: false
        forward-delay: 4

This takes my two interfaces, eno1 and eno2, and creates bond0 as an active backup. There are a few different bonding modes you can choose from:

Bond Mode       Description
balance-rr      Round robin. Packets are sent in sequential order from the first connection listed, going down the chain.
active-backup   Only the first connection is used; if it fails, another connection takes over.
balance-xor     Uses a transmit hash policy to route between interfaces, providing both load balancing and fault tolerance.
broadcast       Sends data on all interfaces. Not sure why you’d use this.
802.3ad         An IEEE standard (LACP). Requires the switch to support the same protocol. Aggregates the connections to provide the combined bandwidth of all configured interfaces.
balance-tlb     Manages transmit load between the network adapters based on demand and availability.
balance-alb     Includes both transmit load balancing (balance-tlb) and receive load balancing.
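As an example of one of the other modes: if your switch supports LACP, an 802.3ad bond could be sketched like this in Netplan (the lacp-rate and transmit-hash-policy values are assumptions you’d adjust to match your switch configuration):

```yaml
bonds:
  bond0:
    interfaces: [eno1, eno2]
    parameters:
      mode: 802.3ad
      lacp-rate: fast
      transmit-hash-policy: layer3+4
```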

Then the bridge br0 connects to bond0. This is where you configure the network type: DHCP or static IP. In this case I’m using DHCP, as the firewall I have in place manages IP address assignments and has the server set to a static address. If you want to specify a static IP address in this configuration file, you can do it like below:

network:
  bonds:
    bond0:
      interfaces:
      - eno1
      - eno2
      parameters:
        mode: active-backup
  ethernets:
    eno1: {}
    eno2: {}
  version: 2
  bridges:
    br0:
      addresses:
        - 192.168.10.30/24
      dhcp4: false
      gateway4: 192.168.10.1
      nameservers:
        addresses:
          - 192.168.10.1
          - 192.168.10.2
        search: []
      interfaces:
        - bond0
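Before applying either config for real, it’s worth a dry run: netplan rejects tab indentation outright, and `netplan try` applies a config with an automatic rollback timer so a typo can’t cut off your SSH session. A minimal sketch, staged in /tmp for illustration (on the server the file is /etc/netplan/00-installer-config.yaml, and the values are the examples from this post):

```shell
# Stage a copy of the config and catch the most common netplan failure
# (tab characters in the indentation) before applying it.
cat > /tmp/00-installer-config.yaml <<'EOF'
network:
  version: 2
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: active-backup
  ethernets:
    eno1: {}
    eno2: {}
  bridges:
    br0:
      interfaces: [bond0]
      addresses: [192.168.10.30/24]
      gateway4: 192.168.10.1
EOF
grep -Pq '\t' /tmp/00-installer-config.yaml && echo "tabs found - fix indentation" || echo "indentation ok"
# On the server: sudo netplan try   (auto-rollback if you lose connectivity)
#                sudo netplan apply (make it permanent once happy)
```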

You can find out more information here:
https://netplan.io/examples

There’s a version of this post for 18.04 here (see the comments with suggested fixes):
https://www.aptgetlife.co.uk/setting-up-a-bond-and-bridge-in-netplan-on-ubuntu-18-04/

Filed Under: Guides, Linux, Technology Tagged With: networking, ubuntu, ubuntu 20.04, ubuntu server

Iiyama 34″ IPS Ultra-Wide Monitor XUB3493WQSU Review

2020/04/26 by sudo

The Iiyama XUB3493WQSU is a low cost, reasonable quality IPS screen with a few minor issues that shouldn’t detract from the ultrawide experience. If you’re looking for a good quality, budget ultrawide, this should be near the top of your list!

Background

Ultrawide monitors. They’re the current “big thing” in the monitor industry, with big brands producing 49″ super ultrawides. Much as I’d like to own such screens, I really don’t have that kind of money (or space)! So I spent many months investigating ultrawide monitors with just a few specifications:

  • It must have an IPS panel – in my mind they’re generally superior in terms of colour and quality
  • It must be at least 34″, otherwise it’s smaller than the current work monitor I’d be replacing
  • It must be at least 3440 by 1440, again to make it a practical replacement
  • It must be flicker-free – I’m really sensitive to some types of light flicker; it gives me migraines
  • Ideally, it shouldn’t make me regret the purchase too much afterwards! (because money)

After dreaming of a number of the LG ultrawides for some time, but not being able to justify £700+ for a screen, I discovered the Iiyama 34″ XUB3493WQSU, an IPS panel with flicker-free technology. I have to admit, when I found it I thought it was too good to be true and passed it by a few times. The listings on Ebuyer and Amazon were not clear about its flicker-free credentials and there were no reviews of it anywhere online (this has since changed). Having interrogated the Iiyama website about the screen, I decided to give it a go; below are my impressions.

Initial Impressions

I purchased the screen from Ebuyer, a UK-based company. They shipped the monitor in its original packaging (no extra box, so the shipping label was stuck straight onto the Iiyama box). It’s large, but as a result it’s well packaged, with polystyrene cutouts protecting the screen, and it’s reasonably easy to remove. A few cables and a power lead are included.

The first thing this screen reminded me of was a Dell Ultrasharp. It’s got a similar bezel to the 2015 models and has a stand that provides some tilt, pivot and height adjustment. Note: there is a sticker on the stand stating that the pivot function is not supported; this makes sense, since the monitor is so wide it can’t really go anywhere!

XUB3493WQSU

The size of this monitor doesn’t come across very well in the photos I’ve taken, but in person it certainly makes an impression of being big!

Iiyama XUB3493WQSU tilted as far as it allows

Iiyama XUB3493WQSU rear branding

Iiyama XUB3493WQSU and Dell u2514h

Iiyama XUB3493WQSU and Dell u2514h from above for size comparison

Iiyama XUB3493WQSU rear ports

Iiyama XUB3493WQSU rear input ports

The screen feels well built – as well built as the Dell monitor, which is impressive given they were almost the same price! The stand offers significant height adjustment: a little over 12 cm of total movement. The tilt and (unsupported) pivot range is also quite impressive.

Ultra-Wide, Ultra-Good?

The monitor is really easy to set up (with the slight exception of its sheer size on my desk). I did find my old DisplayPort and HDMI cables didn’t work – the screen wouldn’t display anything but black. It’s possible that, due to their age, they don’t support the modern standards used by this monitor. The included cables felt a little cheap, but the monitor worked perfectly after swapping the cables out.

I did find there’s a lot of light bleed with the screen – something that plagues IPS panels. This is hopefully visible in the pictures below; notice that the light bleed on the bottom left-hand side of the screen is significantly worse than on the right-hand side.

Iiyama XUB3493WQSU light bleed right hand side

Iiyama XUB3493WQSU light bleed left hand side

Iiyama XUB3493WQSU running Tomb Raider benchmark

Iiyama XUB3493WQSU running Orville simulator

As you can hopefully make out, the Tomb Raider screen is darker and shows a lot more light bleed than a brighter scene, such as the one from the Orville simulator. If this is the kind of thing that bothers you, then reconsider buying this screen! I actually find it’s not too noticeable when properly engaged in a game.

The actual gameplay on this screen is reasonably good (coming from the Dell u2514h, which is not a gaming monitor itself). I did get some screen tearing, but my graphics card is a reasonably old (seven years) Nvidia Zotac GTX 770, so it can’t take advantage of the FreeSync support this monitor provides. I’d suggest getting a better graphics card than mine for any serious gaming! It’s okay for older games such as Counter-Strike: Source, but the newer Tomb Raider games really struggle (as you may have noticed from my 17 frames per second). Gameplay is actually surprisingly immersive; I can’t fully describe the experience, but once you’ve used an ultrawide you likely won’t want to go back!

Working on the screen is reasonably easy, and I can comfortably have a LibreOffice Writer and a Firefox window open side by side. It’s also reasonably good for coding, although I would say window snapping suits a two or three screen setup better than a single display. This matters if, like me, you code with about five different windows in active use! The text is clear and sharp, to the extent that I would say it’s on par with the Dell Ultrasharp.

The main problems with the Iiyama XUB3493WQSU are reasonably superficial. First is, of course, the light bleed: it’s an IPS screen and to some extent you have to live with it. My next biggest complaint is the delay both in turning on and in switching inputs. The input-switch lag is crazy – we’re not talking a couple of seconds, it’s more like eight! There’s a lot of hesitation when using the input buttons too, and the menu can be almost impossible to use. Once you’ve got the monitor configured as you want, I’d advise never touching the input buttons again for fear of messing something up and getting stuck in the menu somewhere! One last complaint: there’s a noticeable polarization effect on the left and right edges of the screen. I couldn’t get a good picture of this, but I’ve noticed that with a web browser’s scroll bar on the right-hand side, I sometimes have to move my head to the left to stop the polarizing filter from “hiding” it behind blackness.

Overall, I think this screen has ticked all the boxes for me. I really like it. I like using lots of windows at once, and with the extra real estate to play with, my work layout is far easier to adapt to whatever I’m doing. I think its pitfalls are worth putting up with given the value of the screen, especially if you’re after an ultrawide!

Note: it does have picture-in-picture for multiple inputs at the same time, but I haven’t really used this feature. Given the clumsy on-screen menu input, I’ve set the monitor up how I wanted it and then dared not touch it again!

Iiyama XUB3493WQSU FireWatch desktop background

Filed Under: Review, Technology Tagged With: Iiyama, review, ultrawide

Handling Failure: What I’ve learned with a failed systems architecture change

2020/04/25 by sudo

Docker. It’s everywhere. I use it at work and at home. It’s amazing for doing development in environments that more closely match production (such as running a full Laravel stack with queues, Redis, a database and a local MailHog for catching test email).

In order to learn more about Docker, and to improve how I roll out the sites I host, I decided to move a whole bunch of WordPress sites to Docker. The existing setup is based on Ubuntu 16.04 LTS running Nginx and PHP-FPM, with per-user resource pools for better security and site resource allocation. PHP 7.0 has been end-of-life for some time and it’s definitely time to update. Ubuntu LTS likely won’t track current PHP versions, given PHP’s release cycle is now effectively two years. That means before Ubuntu 20.04 (released just days ago) is end-of-life, PHP 7.4 will have been end-of-life for over a year and without active support for over two years! Docker, I think, will let me better update the environments and keep them in line with the PHP release cycle. Hopefully it also makes them easier to migrate to new operating systems later.

I’ve already had experience with Traefik as a reverse proxy, and it’s fantastic for handling Let’s Encrypt SSL certificates across multiple sites out of the box. I can easily add Docker containers with labels and they’ll appear automatically in Traefik; magic!

So here’s what I had in my head when I started:

  • VM running Ubuntu Server 20.04 LTS
  • A docker user that manages and has permissions over everything
  • A docker-compose file in that user’s home folder. This file runs the core config for Traefik and any other main containers I need (maybe fail2ban too).
  • Each site exists in a subdirectory named after its domain name. Within that there’s a docker-compose file for that site, and any site files are stored there too.
  • Database instances either per site or provided by the host

So that gives you something that looks like:

/home/docker/docker-compose.yml # traefik

/home/docker/aptgetlife/docker-compose.yml # wordpress and MySQL

/home/docker/aptgetlife/public_html/ # site files

/home/docker/aptgetlife/mysql/ # database files

Problem: The Docker PPA for Ubuntu doesn’t exist for 20.04!

Ubuntu’s apt packages are often out of date, so by default I jump straight to the Docker documentation site to get the latest possible version. Or not – there was no PPA available for 20.04 yet!

The fallback was to use apt. Since 20.04 is a brand-new release, its Docker package was up to date anyway. It may be worth changing to the PPA once it’s available.

Problem: Packet loss on Ubuntu 20.04

While editing the master docker-compose file, my SSH connection kept hanging and dropping. Following some pings, I discovered that the network connection was intermittent. It isn’t clear whether this was caused by the Docker networking packages, the KVM drivers or Ubuntu 20.04 itself – the OS had only been out a day, so issues with it are entirely possible.

The fallback was to go back to Ubuntu 18.04, which didn’t have any issues! I’ll jump back to 20.04 once it’s bedded in a little, and hopefully the issue will have gone away.

Problem: The traefik network can’t be seen by individual sites

This was a new one to me – I’d never done this before, so I had no experience of the setup. I named my internal network traefik in my main docker-compose file. This works great: Traefik creates the network and the Traefik container connects to it fine. What didn’t work was the per-site docker-compose file connecting to that network; it wanted to create its own version named after the folder it was in. I discovered that as of docker-compose file format version 3.5 (https://github.com/docker/compose/issues/3736) you can reference pre-existing networks by name:

networks:
    aptgetlife: # network for this container and associated resources like MySQL
    traefik: # link to Traefik for inbound traffic
        external:
            name: traefik
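For context, a minimal per-site compose file using that networks block might look like the sketch below (the service name and image tag are illustrative, not my exact files):

```yaml
version: "3.5"
services:
  wordpress:
    image: wordpress:php7.4-fpm
    networks:
      - aptgetlife   # private network shared with this site's MySQL
      - traefik      # the pre-existing network Traefik listens on
networks:
  aptgetlife:
  traefik:
    external: true   # don't create it - join the one from the main compose file
```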

Problem: WordPress site URLs

Okay. This is one of my pet hates with WordPress: it requires a site URL. This apparently included “www.” and I’d set things up on the new system without it. It turned out to be easier to change my configuration to use “www.” than to convince WordPress to change its URL. This worked – except no style sheets or JavaScript would load. That, as it turns out, is due to WordPress loading insecure (http://) URLs for these assets. I attempted to use tools to update the SQL dump, and edited the wp-config.php file, but neither solved the problem.

sed -i 's|http://www.aptgetlife.co.uk|https://www.aptgetlife.co.uk|g' wp_aptgetlife.sql

This one actually defeated me. I really don’t understand WordPress and how insistent it is on loading resources from particular URLs.
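For anyone who hits the same wall: a common cause when TLS terminates at a proxy like Traefik is that WordPress sees plain HTTP and so generates http:// asset URLs. A hedged sketch of the standard wp-config.php tweak for this – trusting the X-Forwarded-Proto header – is below (shown against a scratch copy in /tmp; on a real site it goes near the top of wp-config.php, and I can’t promise it would have fixed my exact case):

```shell
# WordPress behind a TLS-terminating proxy sees HTTP and emits http://
# asset URLs; trusting X-Forwarded-Proto makes it generate https:// links.
cat >> /tmp/wp-config.php <<'EOF'
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
    $_SERVER['HTTPS'] = 'on';
}
EOF
grep -q HTTP_X_FORWARDED_PROTO /tmp/wp-config.php && echo "proxy-ssl snippet added"
```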

What have I learned?

Well, I’ve learned a lot about docker-compose, networking and override files, and I know my architecture will work. I have also learned that I dislike WordPress. A lot. I’m sure the site asset problem is fixable, but I don’t have the patience to deal with it; I’m not interested in fixing WordPress-related problems.

Even though this project failed, and it is something I wanted to use for moving my hosted sites to, I gained a lot of knowledge in the process. So instead of taking the failure as “I have not deployed my sites using Docker”, I’m trying to look at the benefits of the knowledge I’ve gained and reflect on the project as a learning experience. Hopefully a little retrospective will embed some key technical details in my brain for future DevOps!

It’s also important to research what’s possible with systems architecture. I’d have looked into this before starting if it were a work project, but because it’s a personal project I didn’t feel the need; this was almost a playground, a trial run to see if it was even feasible. I think I learnt more, and more quickly, through this make, fail, make, fail iterative approach. I had a definable success for each of the failures and a learning experience from overcoming each one.

My big takeaway is that docker-compose has some great features in file format 3.5+ that I didn’t know existed. There’s some great information about networking (and in particular the section at the end about “external”, pre-existing Docker networks) on the Docker website: https://docs.docker.com/compose/networking/

Filed Under: Misc, Technology Tagged With: DevOps, docker, learning, networking, traefik, ubuntu server

KVM converting virtual disks from raw img files to qcow2

2020/01/21 by sudo

If you’re running QEMU/KVM on Ubuntu and want to take advantage of the qcow2 file format’s snapshotting capabilities and sparse disk population, you can easily convert using the command-line tool qemu-img convert.

First, make sure your virtual machine is turned off! Then navigate to the directory your virtual disks are stored in (usually /var/lib/libvirt/images). It’s probably a good idea to be the root user, or otherwise sudo, for the following command:

qemu-img convert -f raw -O qcow2 vm_hdd.img vm_hdd.qcow2

The -f flag tells convert what format it’s reading. If you don’t provide it, it’ll guess based on the file extension.

The -O flag tells convert what file format to output to; again, if not provided, it’ll guess based on the file extension.
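If you’d like a risk-free rehearsal first, the same command can be tried on a throwaway image before touching a real VM disk (the file names here are just stand-ins):

```shell
# Create a small sparse raw image and convert it, mirroring the real command.
truncate -s 64M demo.img
if command -v qemu-img >/dev/null 2>&1; then
  qemu-img convert -f raw -O qcow2 demo.img demo.qcow2
  qemu-img info demo.qcow2   # should report the file format as qcow2
else
  echo "qemu-img not installed - on Ubuntu: sudo apt install qemu-utils"
fi
```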

Now that you’ve got a qcow2 file, you’ll need to edit the VM configuration:

virsh edit vm_name

This will open an editor for your VM configuration. It’s an XML file, so it’s reasonably easy to follow. What you’re looking for is the disk section, so you can change the file path and disk type.

Before:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-2.img'/>

After:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/rhel62-2.qcow2'/>

Note that both “raw” and “img” have been changed to “qcow2” for this disk. Make sure you’ve picked the right disk to edit in the XML. It may be a good idea to take a backup of the configuration first so you can fall back to the img file if needed!

That should be it: your VM should now boot with the new disk file. Once you’re sure it’s working, you can delete the original img file (or keep it safe somewhere).

More information about KVM can be found on the RedHat website: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/index or the Ubuntu wiki https://help.ubuntu.com/community/KVM/Installation

Filed Under: Linux, Technology Tagged With: KVM, ubuntu 18.04, ubuntu server



© Copyright 2015 apt get life