apt get life

Life around technology


MySQL in Docker for Local Development

2020/08/04 by sudo

Outside of my standard docker-compose setup for Laravel, I find myself requiring one-off MySQL instances for short pieces of development, debugging or testing. Instead of building a whole container ecosystem for this, I simply run MySQL as a docker container where I can get away with it, binding it to the local machine’s ports.

The MySQL docker container is available on docker hub (https://hub.docker.com/_/mysql), which provides some basic usage instructions. I’m going to outline how I use MySQL in docker for this one-off work.

Almost everything I do is based on the older MySQL 5.7, which means I need to use that image specifically when running docker. To make sure my local image is up to date and waiting for use, I tend to run a docker pull command on my dev machine as soon as it’s set up.

docker pull mysql:5.7

Version information is available on the MySQL docker hub page if you need a different version of MySQL. Now that the docker image is held locally, it’s much quicker to start a container whenever you need one, with no lengthy re-download.

Understanding the Basics of MySQL’s Docker Container

There are a few things I do with my containers. Firstly, I want to expose the container to the host machine. This allows database administration tools like MySQL Workbench or DataGrip to connect to the MySQL docker instance, and it also lets code talk to it, which is often what I want. It’s important not to overlap these ports, but since I generally don’t run a temporary MySQL container alongside any development stacks or local installs, I bind to the default port (3306). To do this I add the -p 3306:3306 flag to the command. If you want to change the external port (the one you use to connect to MySQL inside docker), change the port number before the colon (:), like so: -p 1234:3306. This maps port 1234 on your machine to port 3306 inside the container.
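
As a purely illustrative aside (not part of the workflow itself), the host side of the mapping is everything before the colon, which plain shell string splitting can demonstrate:

```shell
# -p HOST:CONTAINER -- host port before the colon, container port after it.
MAPPING="1234:3306"
echo "host port:      ${MAPPING%%:*}"   # everything before the first colon
echo "container port: ${MAPPING##*:}"   # everything after the last colon

# In practice this mapping would be used as (hypothetical remap):
# docker run -p 1234:3306 -d mysql:5.7
```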

Next, a root password and default database should be created. You could skip database creation and do it later with the management tool of your choice, but I find this easier. There are two environment variables to set, and I usually pick a short, insecure password for MySQL’s root account as this is only a local test, firewalled on my dev machine. -e MYSQL_ROOT_PASSWORD=toor sets the root password to “toor” (“root” backwards; this was a default on a few Linux distros for a while). Setting the default database is just as easy: -e MYSQL_DATABASE=dev creates a database called “dev”.

Finally, I tend to name the docker container so I can run it again easily if required. I do this long hand with --name mysql57, where “mysql57” is the name of the container I’m creating. You can name it per project if that makes more sense for you, but I regularly delete and recreate this container as it’s outside my usual dev workflow and usually just for debugging or fixing something once.
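
Because the container is named, it can be reused across sessions without recreating it. A minimal sketch, assuming the container was created with the docker run command shown below:

```shell
# Stop the named container when you're done, without deleting it:
docker stop mysql57

# Later, bring the same container (and its data) straight back:
docker start mysql57
```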

Creating a Named MySQL Docker Container Exposed to the Host

Rolling it all together you can run this command to create a named MySQL 5.7 instance that is running in the background (-d).

docker run --name mysql57 -e MYSQL_ROOT_PASSWORD=toor -e MYSQL_DATABASE=dev -p 3306:3306 -d mysql:5.7

Restore Backups to a MySQL Docker Container

If you have a database backup you need to restore, it’s reasonably easy to feed it into MySQL, although a big database can take some time. This can be done by using cat to read the backup file and piping it into the MySQL docker container. If you’re a user who doesn’t have native docker permissions (as on Ubuntu, where docker requires sudo), it may be best to switch to a user that does (sudo -i to switch to root, then run the restore command).

cat database_backup.sql | docker exec -i mysql57 /usr/bin/mysql -u root --password=toor dev
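
After a restore it’s worth a quick sanity check that the tables actually arrived. A hedged one-liner, reusing the container name and credentials from above:

```shell
# List the tables in the "dev" database inside the container:
docker exec mysql57 /usr/bin/mysql -u root --password=toor -e "SHOW TABLES;" dev
```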

Backing up a Database Inside MySQL’s Docker Container

If you need to back up the database inside the container, you can run the mysqldump command the container ships with, passing it the username, password and database you defined when creating the container, and redirecting the output to a file on the host.

docker exec mysql57 /usr/bin/mysqldump -u root --password=toor dev > dev_backup.sql
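
For large databases it can be worth compressing the dump on the way out. A sketch, assuming gzip is available on the host:

```shell
# Compress the dump on the host as it streams out of the container:
docker exec mysql57 /usr/bin/mysqldump -u root --password=toor dev | gzip > dev_backup.sql.gz

# And to restore it later, decompress back into the container:
gunzip < dev_backup.sql.gz | docker exec -i mysql57 /usr/bin/mysql -u root --password=toor dev
```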

Cleaning up

Once you’re done with your MySQL container, you can stop and delete it by running the following commands, making sure to replace the container name (“mysql57”) with the name of your container if you happened to change it:

docker stop mysql57
docker rm mysql57

That’s it! You’ve created a named docker container running MySQL 5.7, exposed it to the host machine using port binding, and learned how to restore a database backup to it. It’s not as useful as a full docker-compose stack for development; if you’re interested in a docker-compose dev environment, check out this article. It does, however, give you quick and easy MySQL access when you just need to poke around a database.

Filed Under: Development, Docker, Linux, Technology, Uncategorized Tagged With: docker, mysql, mysql-server

Laravel Docker Development Environment With PHP7

2020/07/27 by sudo

Running specific PHP versions for Laravel can be quite useful, especially when working with legacy applications. I work on a range of solutions with different PHP and Laravel versions. To save me time reconfiguring my local environment’s PHP version, and to better represent the live systems, I have opted for Docker based development environments. Here’s what I am aiming for:

  • Customisable PHP versions
    • Including libraries like Imagick and XDebug to make dev easier
  • Self contained database instance
  • Supporting queue worker, so I can test queues work locally
  • Email catching, so I can test email notifications locally
  • Redis, for queue management
  • The Laravel Scheduler working

In order to achieve this, I’ve opted to use a docker-compose environment with custom docker PHP file. This defines the PHP version as well as any extra libraries in it that I need for the project. Then the project files (source code of the Laravel application) can be mounted as a volume. By mounting the project’s source code, it’s available for an editor on the host machine, while also being available for the PHP code to execute.

Let’s start by defining the project structure:

.
├── .docker
│   ├── Dockerfile.app
│   └── nginx
│       └── default.conf
├── docker-compose.yml
└── src

This structure tends to keep the Docker configuration and extra files neater, since they’re self-contained in a `.docker` directory. The custom PHP docker file (Dockerfile.app) is contained here, as is a subdirectory for Nginx, the webserver I’ll be using. Only the docker-compose file needs to be in the parent folder.

Let’s start with the Dockerfile. You’ll need to find your host user and group IDs; on Linux (and presumably macOS) you can find them by running id -u and id -g. Normally they’re both 1000. Replace the ARG entries in the Dockerfile if your IDs are different.
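
The lookup itself is quick; as a hedged alternative to editing the file, the IDs could also be passed as build arguments (the docker build invocation below is an assumption, shown for illustration only):

```shell
# Print the host user and group IDs (typically both 1000 on a single-user Linux box):
id -u
id -g

# Hypothetical alternative: override the ARG defaults at build time instead of editing the file.
# docker build -f .docker/Dockerfile.app --build-arg UID=$(id -u) --build-arg GID=$(id -g) .
```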

If you’ve not created the directory structure already, do it now:

mkdir -p .docker/nginx

Now create the Dockerfile. I’m using Nano, but you can use whatever editor you want: nano .docker/Dockerfile.app

FROM php:7.2-fpm

# Define the User and Group ID for this docker file. This should match your host system UID and GID.
ARG UID=1000
ARG GID=1000

# Set working directory for future docker commands
WORKDIR /var/www/html

# Install dependencies
RUN apt-get update && apt-get install -y --quiet ca-certificates \
    build-essential \
    mariadb-client \
    libpng-dev \
    libxml2-dev \
    libxrender1 \
    wkhtmltopdf \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    curl \
    libmcrypt-dev \
    msmtp \
    iproute2 \
    libmagickwand-dev

# Clear cache: keep the container slim
RUN apt-get clean && rm -rf /var/lib/apt/lists/*

# Xdebug
# Note that "host.docker.internal" is not currently supported on Linux. This nasty hack tries to resolve it
# Source: https://github.com/docker/for-linux/issues/264
RUN ip -4 route list match 0/0 | awk '{print $3" host.docker.internal"}' >> /etc/hosts

# Install extensions: some extensions are better installed using this method than apt in docker
RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/ \
    && docker-php-ext-install \
        pdo_mysql \
        mbstring \
        zip \
        exif \
        pcntl \
        xml \
        soap \
        bcmath \
        gd

# Install Redis, Imagick and Xdebug (optional, but recommended) and clear temp files
RUN pecl install -o -f redis \
    imagick \
    xdebug \
&&  rm -rf /tmp/pear \
&&  docker-php-ext-enable redis \
    imagick \
    xdebug

# Install composer: This could be removed and run in its own container
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# xdebug.remote_connect_back = true does NOT work in docker
RUN echo '\n\
[Xdebug]\n\
xdebug.remote_enable=true\n\
xdebug.remote_autostart=true\n\
xdebug.remote_port=9000\n\
xdebug.remote_host=host.docker.internal\n'\
>> /usr/local/etc/php/php.ini

RUN echo "request_terminate_timeout = 3600" >> /usr/local/etc/php-fpm.conf
RUN echo "max_execution_time = 300" >> /usr/local/etc/php/php.ini

# Xdebug
# Note that "host.docker.internal" is not currently supported on Linux. This nasty hack tries to resolve it
# Source: https://github.com/docker/for-linux/issues/264
#RUN ip -4 route list match 0/0 | awk '{print $3" host.docker.internal"}' >> /etc/hosts
RUN ip -4 route list match 0/0 | awk '{print "xdebug.remote_host="$3}' >> /usr/local/etc/php/php.ini

# Add user for laravel application
RUN groupadd -g $GID www
RUN useradd -u $UID -ms /bin/bash -g www www

# Make sure permissions match host and container
RUN chown www:www -R /var/www/html

#  Change current user to www
USER www

# Copy in a custom PHP.ini file
# INCOMPLETE/UNTESTED
#COPY source /usr/local/etc/php/php.ini

# We should do this as a command once the container is up.
# Leaving here incase someone wants to enable it here...
#RUN composer install && composer dump-autoload -o

I’ve left in some commented commands, which can be uncommented and customised if needed. The comments should also help you make any changes, but the file should work for you as-is.
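
If you want to smoke-test the Dockerfile before wiring it into docker-compose, it can be built on its own; a sketch (the laravel-php-test tag is an arbitrary choice of mine):

```shell
# Build the image directly to surface any Dockerfile errors early:
docker build -f .docker/Dockerfile.app -t laravel-php-test .

# Confirm the PHP version baked into the image:
docker run --rm laravel-php-test php -v
```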

Next, let’s create the nginx configuration file: nano .docker/nginx/default.conf

server {
    listen 80 default_server;

    root /var/www/html/public;

    index index.php index.html index.htm;

    charset utf-8;

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt  { log_not_found off; access_log off; }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ ^/.+\.php(/|$) {
        fastcgi_pass php:9000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_read_timeout 3600;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS off;
    }

    error_page 404 /index.php;

    location ~ /\.ht {
        deny all;
    }
}

The most important part of this file is the fastcgi_pass php:9000; line. This tells nginx, in its container, where to find PHP running in its own container. You’ll see how that ties together in the docker-compose file.

Create the docker-compose.yml file: nano docker-compose.yml

version: '3'

services:

    # Nginx web server
    nginx:
        image: nginx:stable-alpine
        ports:
            # OPTIONAL: change the port number before the colon ":" to alter the web traffic port
            - "8080:80"
        volumes:
            - ./src:/var/www/html
            - ./.docker/nginx/default.conf:/etc/nginx/conf.d/default.conf
        depends_on:
            # for this container to run, wait until PHP and MYSQL are running
            - php
            - mysql
        networks:
            # OPTIONAL: change or remove the network name (do this for all containers)
            - laravel

    # MySQL database server
    mysql:
        image: mysql:5.7
        restart: unless-stopped
        tty: true
        ports:
            # OPTIONAL: Change the port number before the colon ":" to alter where MySQL binds on the host
            # Allow connections to MySQL from the host (MySQL Workbench, DataGrip, etc) on port 3306
            # WARNING: do not expose in production!
            - "3306:3306"
        environment:
            # OPTIONAL: Change MySQL credentials
            MYSQL_ROOT_PASSWORD: secret
            MYSQL_DATABASE: laravel
            MYSQL_USER: laravel
            MYSQL_PASSWORD: secret
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
        networks:
            - laravel
        volumes:
            # Persist MySQL data with a docker volume (see end of file)
            - mysql_data:/var/lib/mysql

    # Custom PHP image for Laravel
    php:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        volumes:
            - ./src:/var/www/html
            # Load a custom PHP.ini file
            #- ./.docker/php/php.ini:/usr/local/etc/php/php.ini
        #command: ip -4 route list match 0/0 | awk '{print $$3" host.docker.internal"}' >> /etc/hosts
        networks:
            - laravel

    # Redis, for caching and queues (Optional)
    redis:
        image: redis:5-alpine
        restart: unless-stopped        
        # OPTIONAL: change or open up Redis port binding.
        # Disabled by default for security. Redis should not be exposed to the world!
        # your other containers should still be able to access it without this enabled
        #ports:
            #- 6379:6379
        networks:
            - laravel

    # Laravel Horizon (Optional)
    # NOTE: if you're not running horizon, you should delete this stanza or you'll get errors
    horizon:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan horizon; sleep 60; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Laravel Scheduler (Optional)
    scheduler:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan schedule:run >> /dev/null 2>&1 ; sleep 60; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Default Queue Worker (Optional)
    worker-default:
        build:
            context: .
            dockerfile: ./.docker/Dockerfile.app
        restart: unless-stopped
        command: /bin/bash -c 'while [ 0 -lt 1 ] ; do php artisan queue:work --tries=3 --timeout=90 --sleep=10; done'
        networks:
            - laravel
        volumes:
            - ./src:/var/www/html

    # Mailhog (Optional, mail-catcher)
    # Comment out or delete this if you don't want to use it
    mailhog:
        image: mailhog/mailhog
        networks:
            - laravel
        ports:
            # Uncomment to allow host access to SMTP (not sure why you'd want to?!)
            # your containers on the same network can still access this without the binding
            # - 1025:1025 # smtp server
            # OPTIONAL: Change the port number before the colon ":" to alter where the Mailhog UI can be accessed
            - 8025:8025 # web ui

networks:
    # A network for the laravel containers
    laravel:


# Persist the MySQL data
volumes:
    mysql_data:

This is quite a big file. Each container is defined inside the services block; most use stock images from Docker Hub. There are a few important things to know (most of which are commented in the file).

The nginx container has ports exposed. I’ve set these to 8080 externally, mapping to port 80 internally, so to access the site in your browser navigate to http://localhost:8080. The container also mounts two volumes: the first is the source code of your application, the second is the default.conf nginx file written above.

The MySQL container has port 3306 bound to the host, allowing access from a MySQL management tool such as MySQL Workbench, DataGrip or DBeaver. You absolutely should not run this on a production server without firewalling it. In fact, this whole environment is designed for local development, but this particularly needs to be raised as a point for anyone adapting it for production: do not expose MySQL to the world! Other settings of interest here are the MYSQL_* environment variables, which define the username, password and database name. Additionally, the configuration mounts a volume to the MySQL data directory, which means the data will persist until the volume is deleted. You can remove this if you want volatile data that’s deleted on container restart.

The PHP container’s name is important: it relates to the nginx configuration file, where the fastcgi_pass directive was defined. If you change the container definition from php: to something else, you’ll need to update the nginx default.conf as well as the other references in this file. The PHP image also needs a volume for the source code, mounted at the same path as in the nginx container. Because this is a custom Dockerfile, it needs to be built by docker-compose instead of just pulling an image. You could of course build this image and upload it somewhere like Docker Hub and include it from there, but I like to keep the environment customisable without messing around with external registries.

The other containers are entirely optional. If you’re not running Horizon, then just remove or comment out that block. Same with the other remaining containers.

The next thing to do is create a new Laravel install in the src directory, or copy in an existing Laravel repo. Generally I install a new Laravel instance using composer like this:

composer create-project --prefer-dist laravel/laravel src

Now all that’s left to do is run docker-compose up -d. It’ll build the PHP image, pull the MySQL and nginx images, and start your containers using the ports specified in the docker-compose file. To run composer or artisan commands, simply run docker-compose exec php bash and you’ll be dropped into the web directory of the PHP container. From there you can easily run commands such as php artisan key:generate, php artisan migrate and any of the php artisan make: commands.
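
Gathered into one place, the commands above look like this:

```shell
# Build the custom PHP image, pull the rest, and start everything in the background:
docker-compose up -d

# Drop into a shell inside the PHP container (lands in /var/www/html):
docker-compose exec php bash

# Then, inside that shell:
#   php artisan key:generate
#   php artisan migrate
```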

It’s also possible to version control your src folder. Do this from the host, not inside a docker container, as it’d be unusual for you to store your dev environment with the application. cd src to go into the source code directory, then git init to initialise a new git repository for you to manage as you see fit.

Filed Under: Development, Docker, Guides, Laravel, Technology Tagged With: development, docker, docker-compose, Laravel, PHP development

Laravel 5.2 API Token Authentication

2016/04/30 by sudo

At work I’ve been tasked with improving an API recently, and I decided it would be a good opportunity to take Laravel out for a spin. I’ve been keen on learning more about Laravel and its API capabilities, which are supposedly very strong, although I have noted that there’s not much documentation around them. The existing API is flat PHP and uses token based authentication, which allows users to authenticate with a string “api_key” in the request URL, in the header or in the body of the JSON request. I decided that instead of trying to get existing users to upgrade to something like OAuth (for which there are some interesting plugins, e.g. https://packagist.org/packages/lucadegasperi/oauth2-server-laravel), I’d implement the same token based authentication model for the revised API in Laravel. There are already advantages to using Laravel for APIs: it highly encourages a restful approach, and as of Laravel 5.2 it includes rate limiting out of the box and allows route prefixing, so it is possible to have multiple endpoints in one Laravel application.

Setting up token based authentication in Laravel is so poorly documented that it took me a while to work out how it is achieved.

1. User API Tokens

Users need to have an API token associated with them for the authentication model to work. This is easy enough to add by editing the user migration in your Laravel installation.

// Store an API key for this user.
$table->string('api_token', 60)->unique();

This allows you to store a 60 character unique API Token for each user.
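
The article doesn’t prescribe how tokens get minted (Laravel examples of this era typically use str_random(60) in PHP); as a hedged shell-side equivalent, 30 random bytes hex-encoded give exactly 60 characters:

```shell
# 30 random bytes -> 60 hex characters, matching the api_token column length:
TOKEN=$(openssl rand -hex 30)
echo "$TOKEN"
```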

2. Setting up API Authentication

There are several ways you can now call API Token authentication for your application. Probably the best is to use middleware in your routes file:

Route::group([
    'prefix' => 'api',
    'middleware' => 'auth:api'
], function () {
    Route::resource('fruit', 'FruitController');
});

Now any time requests are made to the route group, the API authentication method will be called. This includes token based authentication (now defined in the users table) as well as the API rate limiting.

3. Making API Requests

You can now submit your API requests to see if the Laravel token authentication is working. To do this you can submit “api_token” as either a GET or POST parameter. There’s also a somewhat hidden option to set it as a header, though this requires an Authorization header:

Key: ‘Authorization’

Value: ‘Bearer [token]’
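
Putting the three options together, here are some hypothetical curl requests against a local php artisan serve instance (the /api/fruit route and YOURTOKEN value are placeholders):

```shell
# 1. Token as a GET parameter:
curl "http://localhost:8000/api/fruit?api_token=YOURTOKEN"

# 2. Token as a POST parameter:
curl -d "api_token=YOURTOKEN" "http://localhost:8000/api/fruit"

# 3. Token in the Authorization header:
curl -H "Authorization: Bearer YOURTOKEN" "http://localhost:8000/api/fruit"
```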

Check out the code here:

https://github.com/laravel/framework/blob/c04159dee4a47b0d7cd508ab720932121927b1b3/src/Illuminate/Http/Request.php#L815-L822

and here:

https://github.com/laravel/framework/blob/master/src/Illuminate/Auth/TokenGuard.php#L81-L94

Filed Under: Laravel Tagged With: API, Laravel, php

Getting started with Laravel 5 & Dingo API: 4. Transformers

2016/04/01 by sudo

Okay, so the last few lessons have got us up to the point where we’re able to send and receive data to the API, but there are some problems that need to be thought about:

  1. We’re exposing our database architecture – people can see orders have fields “order_ref”, “recipient_id”, etc.
  2. Our index functions are using “all()”, so they get all results from the database.
  3. We’re not validating our data before adding it to the database.
  4. We’re not authenticating users.

Let’s start addressing these.

Transformers

Transformers are often used in APIs to obscure and abstract the database layer from the responses provided to users. What this means is that we “transform” our database record field names into something else. Say our database stores a field “recipient_name”: instead of the API returning this to the user on a GET request, we could use a transformer to return a different field name, “name” for example. This obscures our database architecture, so we’re not giving away our field names. Additionally, the abstraction means that if we change our database architecture, we’re not relying on API users to change their tools or utilities as well. We can change the database field names without worrying about what users are doing.

Variants Transformer

Once again, I’m going to start with the Variants, as this is the smallest part of the API: all we do here is get variants; we don’t allow them to be added, updated or deleted. I’m going to start by looking at my project structure. At the moment we should have Http/Controllers/api with all of the controllers inside it. It doesn’t really make sense to put transformers here, as they’re not controllers. Instead, let’s make a new folder in the app directory, and version it too, in case different versions of the API use different transformers:

mkdir app/Transformers
mkdir app/Transformers/V1

Now let’s make a new VariantsTransformer.php file in that directory:

touch app/Transformers/V1/VariantsTransformer.php

Open that file in Atom and let’s make our transformer:

<?php

namespace App\Transformers\V1;

// We need to reference the Variants Model
use App\Variants;

// Dingo includes Fractal to help with transformations
use League\Fractal\TransformerAbstract;

class VariantsTransformer extends TransformerAbstract
{
    public function transform(Variants $variant)
    {
        // Specify what elements are going to be visible to the API
        return [
            'id' => (int) $variant->id,
            'size' => $variant->size,
            'brand' => $variant->brand,
            'type' => $variant->type,
            'color' => $variant->colour,
            'design' => $variant->design,
        ];
    }
}

All we’re doing here is transforming our database collection into an array and returning it. The left hand side of the array defines the keys that will be used for the JSON response; the right hand side gets the variant fields from the database. This empowers you to hide database fields, like created_at, so there’s no risk they’ll be visible to the API: only items in this array will be returned in API requests. The next key advantage is that the variant database field name doesn’t have to match the API field name, meaning that if a major database update needs to take place, you can update the transformer and not have to ask customers to re-map everything in their API tools.

In the VariantsController, we need to change our functions to adopt the new transformer

use App\Transformers\V1\VariantsTransformer;
public function index()
{
    // Return variants via the Variants Transformer.
    return $this->collection(Variants::all(), new VariantsTransformer);
}
public function show($id)
{
    return $this->item(Variants::find($id), new VariantsTransformer);
}

I’ve added the show function here to return an individual variant instead of a collection. You’ll also need to add the route for it to work:

$api->get('/variants/{id}', 'App\Http\Controllers\api\VariantsController@show');

Now when you call the variants controller in Postman, you should see the transformer’s keys in the JSON response, not the raw database field names.

Following this theme we need to update the Items controller, as well as creating an ItemsTransformer, and the orders controller along with an OrdersTransformer.

ItemsTransformer:

<?php

namespace App\Transformers\V1;

// We need to reference the Items Model
use App\Items;

// Dingo includes Fractal to help with transformations
use League\Fractal\TransformerAbstract;

class ItemsTransformer extends TransformerAbstract
{
    public function transform(Items $item)
    {
        // specify what elements are going to be visible to the API
        return [
            'id' => (int) $item->id,
            'item_ref' => $item->item_ref,
            'quantity' => (int) $item->quantity,
            'variant_id' => (int) $item->variant_id,
        ];
    }

    public function deform(Items $item)
    {
        // specify what elements are going to be visible to the API
        return [
            'id' => (int) $item->id,
            'item_ref' => $item->item_ref,
            'quantity' => (int) $item->quantity,
            'variant_id' => (int) $item->variant_id,
        ];
    }
}

ItemsController:

// At the top of the file, include the items transformer
use App\Transformers\V1\ItemsTransformer;

// Update the index function to use the ItemsTransformer
public function index()
{
    return $this->collection(Items::all(), new ItemsTransformer);
}

// Update the show function to use the items transformer
public function show($id)
{
    return $this->item(Items::find($id), new ItemsTransformer);
}

OrdersTransformer:

<?php

namespace App\Transformers\V1;

// We need to reference the Orders Model
use App\Orders;

// Dingo includes Fractal to help with transformations
use League\Fractal\TransformerAbstract;

class OrdersTransformer extends TransformerAbstract
{
    public function transform(Orders $order)
    {
        // specify what elements are going to be visible to the API
        return [
            'id' => (int) $order->id,
            'order_ref' => $order->order_ref,
            'recipient_id' => $order->recipient_id,
            'shipping_method' => $order->shipping_method,
        ];
    }

    public function deform(Orders $order)
    {
        // specify what elements are going to be visible to the API
        return [
            'id' => (int) $order->id,
            'order_ref' => $order->order_ref,
            'recipient_id' => $order->recipient_id,
            'shipping_method' => $order->shipping_method,
        ];
    }
}

OrdersController:

// Include the orders transformer
use App\Transformers\V1\OrdersTransformer;


// Change the index function to use the transformer
public function index()
{
    return $this->collection(Orders::all(), new OrdersTransformer);
}

// Change the show function to use the transformer
public function show($id)
{
    return $this->item(Orders::find($id), new OrdersTransformer);
}

Save all of your work, git commit it if you’re being safe (I won’t show you how to do that now; you know how by now!) and run php artisan serve to start the local webserver. Use Postman to send requests and check that your updates are working.

One thing to note is that this only transforms the output of our database; incoming requests are not handled by the transformer. This is due to the design of Fractal, and there is a discussion about it here if you’re interested. Instead of “transforming” input in this way, it’s suggested that models and validation are used. Next time we’ll look at running some validation on the input we’re sending to create new orders and order items, as well as working out how to apply the API architecture to the models.

Filed Under: Development, Laravel, Technology Tagged With: API, Dingo API, Getting started with Laravel 5 & Dingo API, Laravel, larvel 5, Transformers

Getting started with Laravel 5 & Dingo API: 3. Controllers

2016/03/24 by sudo

This is part 3 of the Laravel 5 & Dingo API series, in which we’re building an API to receive orders from 3rd parties and ship them to recipients. Last time we covered setting up the database, creating migrations and setting up models. This time we’re going to focus on Laravel’s controllers and how we get them working with Dingo API in order to create, read and update data stored in our database.

Controllers

I’m going to start with the variants controller, as it has more information that can be returned with a GET request and we wouldn’t allow third parties accessing the API to create any variants, which makes it much simpler to build.

With Dingo API, we should specify a base controller to pull all of the helper functions into our individual controllers using inheritance. To do this run:

php artisan make:controller api\\BaseController

Open the BaseController.php file in Atom and edit it to look like the following:

<?php

namespace App\Http\Controllers\api;

use Dingo\Api\Routing\Helpers;
use Illuminate\Routing\Controller;

class BaseController extends Controller
{
    use Helpers;
}

This simply inherits the Controller class, pulls in the Helpers trait, and creates a new BaseController to extend from in our API.

We’re going to create a VariantsController, and to keep things organised let’s also make it in a subfolder of the application. From the command line run:

php artisan make:controller api\\VariantsController --resource

Now open the controller in Atom and after the use statements at the top of the file add one for the Variants model:

use App\Variants;

You’ll notice that because we specified the --resource flag at the end of our command, the controller has been populated with a skeleton of RESTful functions. In the index function, let’s add some code to return all of our variants from the database:

public function index()
{
   return $this->response->array(Variants::all());
}

Before we can test this, let’s update our routes.php file to use the new controller:

$api = app('Dingo\Api\Routing\Router');

$api->version('v1', function ($api) {
    $api->get('/', function() {
        return ['test' => true];
    });
    $api->get('/variants', 'App\Http\Controllers\api\VariantsController@index');
});

Now if you navigate to /api/variants (http://localhost:8000/api/variants/) you should see Dingo API return all the variants in the database. (Note: you may need to run php artisan serve from the command line first, and if you didn’t seed the database you’ll get an empty response [].) If you didn’t seed your database, why not add some records and see what responses you get? For example, I’ve added a variant and the response is now:

{
  "variants": [
    {
      "id": 1,
      "size": "small",
      "brand": "fashion",
      "type": "hoodie",
      "colour": "black",
      "design": "blank",
      "created_at": "-0001-11-30 00:00:00",
      "updated_at": "-0001-11-30 00:00:00"
    }
  ]
}

At this point we know our routing is working and that we can connect to and query the database, as well as return a response to the user. Let’s create the Orders and Items controllers:

php artisan make:controller api\\ItemsController --resource
# Controller created successfully.
php artisan make:controller api\\OrdersController --resource
# Controller created successfully.

Let’s fill in the functionality to get all items first. Open up the ItemsController in Atom, add the use statement, update it to extend the Dingo base controller, and add the index function code:

use App\Items;

class ItemsController extends BaseController
{
    public function index()
    {
        return $this->response->array(Items::all());
    }

    // ...
}

While we’re in the items controller, let’s add the ability to find a single item in the show method:

public function show($id)
{
    return $this->response->array(Items::find($id));
}

As part of the URL, we will pass the item ID, which allows us to do a database search for that item and return the response. If no item is found it’ll return an empty array.
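If you’d prefer an explicit 404 over an empty body when nothing matches, the Helpers trait we pulled into BaseController exposes error responses. A minimal sketch, assuming Dingo’s errorNotFound() helper (the message text is my own, not from the original series):

```php
public function show($id)
{
    $item = Items::find($id);

    // find() returns null when no row matches the primary key
    if ($item === null) {
        // errorNotFound() is provided by the Dingo Helpers trait
        return $this->response->errorNotFound('Item not found');
    }

    return $this->response->array($item);
}
```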

We can save items like so:

public function store(Request $request)
{
    $item = new Items;

    $item->item_ref = $request->input('item_ref');
    $item->quantity = $request->input('quantity');
    $item->variant_id = $request->input('variant_id');

    $item->save();
}

We can also allow updates for an item:

public function update(Request $request, $id)
{
    $item = Items::find($id);

    $item->item_ref = $request->input('item_ref');
    $item->quantity = $request->input('quantity');
    $item->variant_id = $request->input('variant_id');

    $item->save();
}

And finally delete an item:

public function destroy($id)
{
   $item = Items::find($id);
   if ($item->delete()) {
            return $this->response->array(['id' => $id, 'status' => 'deleted']);
   }
}

In the routes.php file, add the routes we want for the items controller:

    // Items
    $api->get('/items', 'App\Http\Controllers\api\ItemsController@index');
    $api->post('/items', 'App\Http\Controllers\api\ItemsController@store');
    $api->get('/items/{id}', 'App\Http\Controllers\api\ItemsController@show');
    $api->patch('/items/{id}', 'App\Http\Controllers\api\ItemsController@update');
    $api->delete('/items/{id}', 'App\Http\Controllers\api\ItemsController@destroy');

Now you should be able to use Postman to create items:

POST | http://localhost:8000/api/items/

{
    "item_ref": "Test1",
    "quantity": 5,
    "variant_id": 1
}

select items:

GET | http://localhost:8000/api/items/

{
  "items": [
    {
      "id": 1,
      "item_ref": "Test1",
      "quantity": "100",
      "variant_id": "1",
      "created_at": "2016-03-09 16:00:52",
      "updated_at": "2016-03-09 16:07:34"
    }
  ]
}

update items:

PATCH | http://localhost:8000/api/items/1

{
    "item_ref": "Test1",
    "quantity": 100,
    "variant_id": 1
}

and delete items:

DELETE | http://localhost:8000/api/items/2

{
  "id": "2",
  "status": "deleted"
}

Have a play with the items controller and Postman to make sure your routes and actions work as expected before moving on. These implementations aren’t perfect, but they’re enough to get started. Later we’ll see how we can perform validation and even use transformers to alter requests.
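As a quick preview of the validation to come, here’s a hedged sketch of how the items store() method could check its input using Laravel 5’s built-in ValidatesRequests trait; the rule set is an assumption based on our schema, not code from the original series:

```php
public function store(Request $request)
{
    // Reject bad input up front; on failure Laravel responds with a
    // 422 Unprocessable Entity listing the validation errors.
    $this->validate($request, [
        'item_ref'   => 'required|string|max:255',
        'quantity'   => 'required|integer|min:1',
        'variant_id' => 'required|integer|exists:variants,id',
    ]);

    $item = new Items;
    $item->item_ref = $request->input('item_ref');
    $item->quantity = $request->input('quantity');
    $item->variant_id = $request->input('variant_id');
    $item->save();
}
```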

The Orders Controller

The orders controller starts out as a near duplicate of the items controller: all of the functionality we added there, we’ll add here too.

use App\Orders;

class OrdersController extends BaseController
{
    // index, store, show, update and destroy go here
}

Adding the index method to list all orders

public function index()
{
    return $this->response->array(Orders::all());
}

Adding the store method

public function store(Request $request)
{
    $order = new Orders;

    $order->order_ref = $request->input('order_ref');
    $order->recipient_id = $request->input('recipient_id');
    $order->shipping_method = $request->input('shipping_method');

    if ( $order->save() ) {
        return $this->response->created();
    } else {
        return $this->response->errorBadRequest();
    }
}

Adding the show method

public function show($id)
{
    return $this->response->array(Orders::find($id));
}

The update method

public function update(Request $request, $id)
{
    $order = Orders::find($id);
    $order->order_ref = $request->input('order_ref');
    $order->recipient_id = $request->input('recipient_id');
    $order->shipping_method = $request->input('shipping_method');

    $order->save();
}

and the destroy method

public function destroy($id)
{
    $order = Orders::find($id);
    if ($order->delete()) {
        return $this->response->array(['id' => $id, 'status' => 'deleted']);
    }
}

Finally adding the routes to routes.php

// Orders
$api->get('/orders', 'App\Http\Controllers\api\OrdersController@index');
$api->post('/orders', 'App\Http\Controllers\api\OrdersController@store');
$api->get('/orders/{id}', 'App\Http\Controllers\api\OrdersController@show');
$api->patch('/orders/{id}', 'App\Http\Controllers\api\OrdersController@update');
$api->delete('/orders/{id}', 'App\Http\Controllers\api\OrdersController@destroy');

Now is a good time to commit what we’ve done in git, before moving onto refactoring it.

git add -A
git commit -m "created items, orders and variants controllers with basic functionality"

Filed Under: Development, Laravel, Technology Tagged With: API, Dingo API, Getting started with Laravel 5 & Dingo API, Laravel, larvel 5

