Blog by Sandro Keil
https://sandro-keil.de/blog/

PHP Code Generator Redux (Tue, 16 Feb 2021)
https://sandro-keil.de/blog/php-code-generator-redux/

PHP is an imperative programming language and can also be used as a template engine. This allows programming language constructs and static text to be combined via placeholders or marked areas. In addition, several variants for string concatenation are available. It is also possible to write the generated code to different files. Since PHP version 7.0, the language is internally compiled via an Abstract Syntax Tree (AST). An AST represents a syntactically correct sentence of a programming language as a tree structure. PHP is therefore very well suited as a programming language for a code generator.

Model Driven Software Development (MDSD) deals with the automatic creation of software systems based on models. A model-to-text transformation generates code for a specific platform. In a model-to-model transformation, an existing model is enriched with information or an entirely new model can be created.

In order to be able to generate executable software, a domain-specific abstraction with formal modeling is required. Code is automatically generated from models, for instance based on the Unified Modeling Language (UML), a JSON schema, XML etc. Models thus not only serve to document the application software, but also represent the actual code. As many artifacts of a software system as possible should be derived from the formal models. Modeling provides a simplified representation of complex interrelationships. The model is an abstract representation of a software system to be developed. This can be represented both graphically and textually.

The following sections describe which open-source PHP libraries can be used to generate PHP code.

PHP Filter

The library open-code-modeling/php-filter provides common filters for PHP code generation. There are preconfigured filters to filter a name / label for class names, constants, properties, methods and namespaces. This library uses laminas/laminas-filter as a great foundation.
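
To give a rough idea of what such a preconfigured filter does, here is a minimal sketch built directly on laminas/laminas-filter, which the library uses under the hood. The chain and the example label are illustrative assumptions, not the php-filter API itself.

<?php

use Laminas\Filter\FilterChain;
use Laminas\Filter\StringTrim;
use Laminas\Filter\Word\SeparatorToCamelCase;

// build a chain that turns an arbitrary label into a valid class name
$classNameFilter = new FilterChain();
$classNameFilter->attach(new StringTrim());
$classNameFilter->attach(new SeparatorToCamelCase(' '));

echo $classNameFilter->filter(' building added '); // BuildingAdded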

PHP Code AST

It is a challenge to combine generated and handwritten code. The library open-code-modeling/php-code-ast ships with an easy-to-use, high-level object-oriented API and also supports reverse engineering of your PHP code. During code generation, previously generated code is analyzed using an AST-based approach. This makes it possible to distinguish between already generated and handwritten code. An approach such as protected areas can thus be dispensed with. Furthermore, the AST-based approach allows parts of the code to be modified specifically. This library uses nikic/php-parser as a great foundation.

Take a look at a straightforward example of generating a class using the ClassBuilder high level API.

use OpenCodeModeling\CodeAst\Builder;

$parser = (new PhpParser\ParserFactory())->create(PhpParser\ParserFactory::ONLY_PHP7);
$printer = new PhpParser\PrettyPrinter\Standard(['shortArraySyntax' => true]);

$code = ''; // or file_get_contents() of file
$ast = $parser->parse($code);

$classBuilder = Builder\ClassBuilder::fromScratch('TestClass', 'My\\Awesome\\Service');
$classBuilder
    ->setFinal(true)
    ->setExtends('BaseClass')
    ->setNamespaceImports('Foo\\Bar')
    ->setImplements('\\Iterator', 'Bar')
    ->addConstant(
        Builder\ClassConstBuilder::fromScratch('AWESOME', true)
    )
    ->addMethod(
        Builder\ClassMethodBuilder::fromScratch('sayHello')
            ->setBody("echo 'Hello World!';")
            ->setReturnType('void')
    );

$nodeTraverser = new PhpParser\NodeTraverser();

$classBuilder->injectVisitors($nodeTraverser, $parser);

print_r($printer->prettyPrintFile($nodeTraverser->traverse($ast)));

The code above will generate the following PHP code.

<?php

declare (strict_types=1);
namespace My\Awesome\Service;

use Foo\Bar;
final class TestClass extends BaseClass implements \Iterator, Bar
{
    public const AWESOME = true;
    public function sayHello() : void
    {
        echo 'Hello World!';
    }
}

JSON Schema to PHP

A JSON schema can be used as a model for code generation. For instance it can be used to create value objects. The library open-code-modeling/json-schema-to-php parses JSON schema files and provides an API to easily generate code from a JSON schema.

Consider you have this JSON schema.

{
    "type": "object",
    "required": ["buildingId", "name"],
    "additionalProperties": false,
    "definitions": {
        "name": {
            "type": ["string", "null"]
        }
    },
    "properties": {
        "buildingId": {
            "type": "string",
            "pattern": "^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$"
        },
        "name": {
            "$ref": "#/definitions/name"
        }
    }
}

You can create a TypeSet definition from the JSON schema above with the following code.

<?php
use OpenCodeModeling\JsonSchemaToPhp\Type;
use OpenCodeModeling\JsonSchemaToPhp\Type\TypeSet;
use OpenCodeModeling\JsonSchemaToPhp\Type\ObjectType;
use OpenCodeModeling\JsonSchemaToPhp\Type\StringType;

$decodedJson = \json_decode($jsonSchema, true); // $jsonSchema holds the JSON schema above as a string

$typeSet = Type::fromDefinition($decodedJson);

/** @var ObjectType $type */
$type = $typeSet->first();

$type->additionalProperties(); // false

$properties = $type->properties();

/** @var TypeSet $buildingIdTypeSet */
$buildingIdTypeSet = $properties['buildingId'];

/** @var StringType $buildingId */
$buildingId = $buildingIdTypeSet->first();

$buildingId->name(); // buildingId
$buildingId->type(); // string
$buildingId->pattern(); // ^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$
$buildingId->isRequired(); // true
$buildingId->isNullable(); // false
// ...

JSON Schema to PHP AST

The library open-code-modeling/json-schema-to-php-ast compiles a JSON schema to PHP classes / value objects via PHP AST. It provides factories to create nikic/php-parser node visitors or open-code-modeling/php-code-ast class builder objects from a JSON schema. It supports the JSON schema types string, enum, integer, boolean, number and array. The string type also supports the formats date-time, ISO 8601, uuid and BCP 47.
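
As a hedged sketch of the idea, the two libraries from the previous sections can also be glued together by hand: parse the schema with json-schema-to-php and feed the result into a php-code-ast class builder. The class name, namespace and getter-only shape below are illustrative assumptions; the dedicated json-schema-to-php-ast factories provide this kind of wiring for you.

<?php

use OpenCodeModeling\CodeAst\Builder;
use OpenCodeModeling\JsonSchemaToPhp\Type;

$typeSet = Type::fromDefinition(\json_decode($jsonSchema, true));
$type = $typeSet->first();

$classBuilder = Builder\ClassBuilder::fromScratch('Building', 'My\\Awesome\\ValueObject');
$classBuilder->setFinal(true);

foreach ($type->properties() as $name => $propertyTypeSet) {
    $property = $propertyTypeSet->first();

    // one simple getter per schema property, e.g. buildingId() : string
    $classBuilder->addMethod(
        Builder\ClassMethodBuilder::fromScratch($name)
            ->setBody(\sprintf('return $this->%s;', $name))
            ->setReturnType($property->type())
    );
}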

PHP Code Generator

For sophisticated PHP code generation workflows there is the library open-code-modeling/php-code-generator. It provides the runtime environment for various components, which can be connected to each other via a configuration. Thus, individual workflows can be defined and combined. Thanks to this modular structure, the code generator can be extended and configured individually by developers.

The code beneath shows a simple Hello World workflow.

use OpenCodeModeling\CodeGenerator;

$workflowContext = new CodeGenerator\Workflow\WorkflowContextMap();

// initialize workflow with some data
$workflowContext->put('hello', 'Hello ');

$config = new CodeGenerator\Config\Workflow(
    new CodeGenerator\Workflow\ComponentDescriptionWithSlot(
        function (string $input) {
            return $input . 'World';
        },
        'greetings', // output slot
        'hello' // input slot
    ),
    new CodeGenerator\Workflow\ComponentDescriptionWithInputSlotOnly(
        function (string $input) {
            echo $input; // prints Hello World
        },
        'greetings',
    )
);

$workflowEngine = new CodeGenerator\Workflow\WorkflowEngine();
$workflowEngine->run($workflowContext, ...$config->componentDescriptions());

Conclusion

PHP code generation using an AST-based approach allows already generated code to be matched against new code. Another advantage of this approach is that manually added code is not overwritten. Various detection mechanisms have been implemented for this purpose.

This article shows that PHP code can be generated very well using an AST-based approach and a model in PHP. If you find it useful please spread the word and star the libraries on GitHub.

Dockerized desktop apps - Development with GIT, PhpStorm and Postman (Sun, 13 Jan 2019)
https://sandro-keil.de/blog/dockerized-desktop-apps-phpstorm-git-postman-xdebug-remote-debugging/

This blog post shows how to set up a fully dockerized development suite with PhpStorm, GIT and Postman. But what is development without debugging? If all apps are dockerized, you need a workaround to be able to debug the application, e.g. with PhpStorm and Xdebug. This is also covered. If you are not familiar with Dockerized desktop apps, check out the Dockerized introduction. You should create your own Docker images, but for testing my Docker images should work too. The bash scripts refer to a folder data/x11docker in your home directory where x11docker stores the container config. Please create it beforehand, as shown below.
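
Both directories used by the scripts in this post can be created up front:

mkdir -p $HOME/data/x11docker $HOME/data/sources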

Setup GIT

To work properly with Dockerized GIT you need three things: a Docker container with GIT, the mounted sources of course, and your SSH credentials.

Docker Image

Let's start with the GIT Docker image. The minimal packages are git and openssh.
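A minimal sketch of such an Arch Linux based image could look like this; the base image tag, the cache cleanup and the git entrypoint are assumptions, not the exact Dockerfile behind sandrokeil/archlinux:git.

FROM archlinux/base

# git for the VCS itself, openssh so the SSH agent socket shared by x11docker can be used
RUN pacman -Syu --noconfirm git openssh \
    && pacman -Scc --noconfirm

ENTRYPOINT ["git"]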

Start script

One cool thing about x11docker is that it does all the heavy lifting. Don't worry about file permissions, how to share the SSH socket or how to mount directories. All my sources are stored under data/sources. This makes it easy to mount the root source directory. I set the working directory to the current directory where the git command is executed. With this, it feels like native GIT usage.

Create the following script named git and put it to a directory which is in your PATH variable e.g. ~/bin and make it executable.

#!/usr/bin/env bash
x11docker -q -t --sharessh --sharedir $HOME/data/sources --homedir=$HOME/data/x11docker/git --workdir $(pwd) -- sandrokeil/archlinux:git $@

Now you can use GIT as always and it works seamlessly. Ok, there are some small caveats. The start time is long compared to native GIT and you don't have bash completion. But I use PhpStorm mostly for VCS stuff.

If some SSH keys are not found, you can mount the ssh folder with --sharedir $HOME/.ssh:ro.

Setup PhpStorm

To work efficiently with Dockerized PhpStorm you need a Docker container with PhpStorm and the same packages as in the GIT Dockerfile.

Docker Image

PhpStorm can be downloaded and extracted to /opt. I use this method in my PhpStorm Docker image. You need the packages git, openssh, vim, gnome-keyring and libsecret for it to work properly. PhpStorm stores connection credentials in the Linux keyring.
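
A hedged sketch of such an image follows; the download URL, PhpStorm version and symlinked launcher are assumptions and have to be adjusted to the current release.

FROM archlinux/base

RUN pacman -Syu --noconfirm git openssh vim gnome-keyring libsecret curl tar \
    # download and extract PhpStorm to /opt (replace the URL with the current release)
    && curl -L -o /tmp/phpstorm.tar.gz https://download.jetbrains.com/webide/PhpStorm-2018.3.tar.gz \
    && tar xzf /tmp/phpstorm.tar.gz -C /opt \
    && rm /tmp/phpstorm.tar.gz \
    && ln -s /opt/PhpStorm-*/bin/phpstorm.sh /usr/local/bin/phpstorm

CMD ["phpstorm"]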

Start script

The PhpStorm script has some more options than the git one, because we need a clipboard for copy & paste and hostdbus for credentials. I also use hostdisplay, but you can try xpra as well. To debug applications with PhpStorm you must add PhpStorm to the network of the application which should be debugged. I use a trick in the PhpStorm startup script to add the phpstorm container to every default network.

#!/usr/bin/env bash

# add all networks which ends with "_default" to the PhpStorm Docker container
docker network ls | grep "_default" | awk '{print $2}' | while read line
do
  $(sleep 5 && docker network connect $line phpstorm) &
done

x11docker --hostdbus --name phpstorm -q --sharessh --sharedir $HOME/data/sources --homedir=$HOME/data/x11docker/phpstorm --hostdisplay --clipboard -- sandrokeil/archlinux:phpstorm

You have to set the Xdebug XDEBUG_CONFIG option to remote_host=phpstorm and ensure that xdebug.remote_connect_back is disabled. Read more about Docker PHP debugging.
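
For example, the relevant settings for the PHP container could look like this (Xdebug 2 syntax; where exactly you set them depends on your image):

# environment of the PHP container, picked up by Xdebug per request
XDEBUG_CONFIG="remote_host=phpstorm"

# php.ini of the PHP container
xdebug.remote_enable=1
xdebug.remote_connect_back=0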

Setup Postman

Postman is a popular tool for API development. It has many features for testing, debugging and documenting your APIs.

Docker Image

Simply install Postman for your distro. That's it.

Start script

To interact with other Docker containers via a local domain you have to add the add-host option with the IP of your Docker network. In this example it's 172.17.0.1, but it may vary on your host. You can also share your Downloads folder to import / export Postman collections. Debugging your APIs with Xdebug works like a charm.

#!/usr/bin/env bash
x11docker --name=postman -q --xpra --homedir=$HOME/data/x11docker/postman --clipboard -- --cpus="2" --memory="2G" --add-host=awesome.local:172.17.0.1 sandrokeil/archlinux:postman

Chromium

I use a dedicated Chromium for development with development plugins installed. Some plugins have access to all data of a webpage or can even manipulate the website data. To browse a development website which is served by a Docker container via a local domain you have to add the add-host option with the IP of your Docker network, as in the sketch below.
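
A start script for it can follow the same pattern as the Postman one; the image name and the Docker bridge IP are assumptions for your own setup.

#!/usr/bin/env bash
x11docker --name=chromium -q --hostdisplay --clipboard --homedir=$HOME/data/x11docker/chromium --sharedir $HOME/Downloads -- --add-host=awesome.local:172.17.0.1 sandrokeil/archlinux:chromium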

Conclusion

This blog post has shown how to set up a complete development environment with Docker. It's not very complicated, but you have to figure out a few things. It feels almost native and has many benefits, such as running different versions of the same application. The best thing is that you don't bloat your host system with other software.

YubiKey full disk encryption with UEFI secure boot for everyone (Wed, 12 Sep 2018)
https://sandro-keil.de/blog/yubikey-full-disk-encryption-uefi-secure-boot-for-everyone/

I've created a full disk encryption setup guide. If you complete this guide, you will have an encrypted root and home partition with YubiKey two factor authentication, an encrypted boot partition and UEFI secure boot enabled. Sounds complicated? No, it isn't!

It took me several days to figure out how to set up a fully encrypted machine with 2FA. This guide should help to get it done in a few hours (hopefully). There are plenty of tutorials, but none contains a step-by-step guide to get the following things done.

  • YubiKey encrypted root (/) and home (/home) folders on separate partitions
  • Encrypted boot (/boot) folder on a separate partition
  • UEFI Secure Boot with a self-signed boot loader
  • YubiKey authentication for user login and sudo commands
  • Hooks to auto sign the kernel after an upgrade

You should be familiar with Linux and you should be able to edit files with vi/vim. You need a USB stick for the Linux Live environment, and a second computer would be useful for lookups and to read this guide while preparing your fully encrypted Linux. And of course you will need a YubiKey.

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  BIOS boot partition
   2            4096         1232895   600.0 MiB   EF00  EFI System
   3         1232896         2461695   600.0 MiB   8300  Linux filesystem
   4         2461696      2000409230   952.7 GiB   8E00  Linux LVM

The disk partitions will look similar to the layout above and the GRUB boot loader will ask you to unlock the boot partition with a password. After that, you will be asked to unlock the root and home partition with a password and your YubiKey device (2FA). The BIOS will also be protected by a password, otherwise UEFI secure boot could be disabled. But even if this is the case, your root and home partition will still be encrypted. This is maximum security.

At the moment there is only a guide for Arch Linux, but it should be similar for other Linux distributions. If you want to write a guide for Debian/Ubuntu or any other Linux, don't hesitate to open an issue on GitHub or bring your pull request.

If you like this guide, please spread the word, so everyone can use it and don't forget to star this project on GitHub.

Ghost theme Casperion 2.0 (Tue, 28 Aug 2018)
https://sandro-keil.de/blog/ghost-theme-casperion-2-0/

It's been quite a while since the Ghost theme Casperion was released. Things have changed and it was time to bring it up to date with the latest Ghost 2.0. This time, I wanted to make minimal changes to simplify updates, as many features are planned for Ghost and the Casper Ghost theme. I've removed Google Analytics and Disqus and all resources are delivered from the theme. No external resources are used anymore. Thanks to the DSGVO (GDPR). Here are the features.

Full Ghost 2.0 support

The free Casperion Ghost theme supports the latest Ghost 2.x version.

GhostHunter

GhostHunter provides Casperion full text searching right in the blog without having to resort to any third-party solutions, by utilizing the Ghost API. The blog search uses an overlay and displays the blog post description, so it looks really nice.

Highlight.JS

Highlight.js highlights syntax in code examples of Casperion blog posts. It's very easy to use because it works automatically. It finds blocks of code, detects the language and highlights it. Highlight.js is only loaded if a code block was detected in the blog post.

You can download Casperion here. If you like it, please star this project on GitHub.

Dockerize desktop apps with NVIDIA GPU and audio support (Mon, 20 Aug 2018)
https://sandro-keil.de/blog/dockerize-desktop-apps-with-nvidia-gpu-audio-support/

It's 2018 and it's time to Dockerize all desktop applications, isn't it? Don't pollute your system with package dependencies which make system updates harder and add possible attack surface and vulnerabilities to your system. Run every application inside Docker, even GUIs and full desktops with Wayland or X11. You think that's not possible? Then read this post.

x11docker

Martin Viereck has created an awesome Docker project called x11docker - Run GUI applications in docker. This bash script is a wrapper around the Docker arguments to run GUI applications. Even more, it simplifies starting and managing GUI applications. It comes with a lot of security in place. The goal of x11docker is to provide isolation from the host as good as possible, and it also works on Windows. Hardware accelerated OpenGL rendering is supported, even for closed NVIDIA drivers and CUDA. Clipboard and sound can be activated. So much WOW!

See also how to install and tune Docker.

Dependencies

The following packages are for Arch Linux but should be similar for other operating systems. I can't use the x11docker wayland mode because of the NVIDIA proprietary driver, so I have to install some additional X11 packages. Please refer to the x11docker documentation; different renderers and terminals are supported.

xpra xorg-xinit xorg-xprop xorg-xsetroot xdotool xorg-server xorg-server-xephyr xorg-xhost xorg-server-xvfb xorg-server-xwayland weston xorg-xrandr xorg-xauth xorg-xdpyinfo gnome-terminal

Enable NVIDIA GPU

To leverage an NVIDIA GPU in the container there is an NVIDIA Docker container runtime. Please follow the install steps in their docs. As described in the x11docker docs, you have to ensure that the same version of the closed NVIDIA drivers is used on the host and in the Docker container. You also get CUDA support. That's really handy. The NVIDIA runtime is activated via the Docker argument --runtime nvidia.

Docker images for desktop apps

Jess Frazelle has various desktop Dockerfiles, but you should build your own like I have done. It's not complicated, you can optimize it for your needs and you can ensure the same package versions on the host and in the Docker container. This is useful for GPU acceleration and audio.

Let's take a look at how I start VLC with GPU acceleration and audio. Remember, I've built my own VLC NVIDIA Docker image. Use the following simple bash script which starts VLC via x11docker. Some additional Docker options are used to limit the resources to 2 CPUs and 4 GB RAM. The $@ at the end means that all arguments which are passed to the bash script are passed on to the Docker container. You will see in the next chapter why it's useful. Create a bash script called vlc and ensure that it is executable and in your environment path.

You may want to replace the Docker image with your own.

#!/usr/bin/env bash
x11docker --stderr --stdout --hostdisplay --sharedir=$HOME/Videos --gpu --pulseaudio -- --cpus="2" --memory="4G" --runtime=nvidia sandrokeil/archlinux-nvidia:vlc $@

Desktop icon and file association

Do you know that you can create a desktop icon for your Docker application and associate it with specific file types? I use a simple bash script for each of my desktop Docker apps which contains the necessary arguments. The advantage is that you can change it without recreating the desktop item. And you will do that at some point. This also makes it very easy to create a desktop icon.

To associate your Docker app with a file type you will need a desktop icon. This is done with the command gnome-desktop-item-edit --create-new ~/.local/share/applications if you use GNOME desktop. If this command is not available, please install the package gnome-panel. Write vlc as the command you want to execute. This points to the bash script above.

Now you can find out the specific file type with xdg-mime query filetype [your file] and link it to the desktop entry via xdg-mime default [desktop entry name].desktop [mime type name]. For instance, to automatically start VLC for mp4 files you would run xdg-mime default vlc.desktop video/mp4.

Conclusion

All of my daily used applications like Thunderbird, PhpStorm, AWS CLI, GIT (yes, even GIT), Rambox, Postman and Chromium are dockerized and what can I say: it works quite well. Maybe it's a bit less comfortable depending on the setup. For instance, if you share only some folders, then you have to copy files around, but this can be easily changed. I prefer minimal sharing of host files and share only folders that are needed for the current application.

x11docker makes dockerized desktop apps very easy and it works on Windows too. No more excuses. Start your dockerized desktop app journey today.

Let nginx start if upstream host is unavailable or down (Mon, 24 Jul 2017)
https://sandro-keil.de/blog/let-nginx-start-if-upstream-host-is-unavailable-or-down/

If you use proxy_pass or fastcgi_pass definitions in your nginx server config, then nginx checks the hostname during the startup phase. If one of these servers is not available, nginx will not start. This is not useful. If you use nginx as a gateway, why should all services be unreachable if only one service is down during the nginx start-up time? This blog post shows a trick to avoid this behaviour and also reveals the internal Docker DNS IP for the Docker DNS resolver.

Use nginx variables

The trick is to use variables for the upstream domain name. Maybe you don't even need an upstream definition, as this GIT diff shows. Let's take a look at a common PHP nginx location example.

location ^~ /api/ {
    # other config entries omitted for brevity

    # nginx start will fail if host is not reachable
    fastcgi_pass    api.awesome.com:9000; 
    fastcgi_index   index.php;
}

The next example replaces the fastcgi_pass host with a variable, so nginx will not check if the host is reachable on startup. This results in a 502 Bad Gateway message if the host is unavailable, and that's fine. As soon as the service is back, everything works as expected.

server {
    location ^~ /api/ {
        # other config entries omitted for brevity
    
        set $upstream api.awesome.com:9000;

        # nginx will now start if host is not reachable
        fastcgi_pass    $upstream; 
        fastcgi_index   index.php;
    }
}

Internal Docker DNS resolver IP

If the definition above is used, a resolver definition is needed. Because we use Docker, we have to use the internal Docker DNS resolver IP, which is 127.0.0.11. By the way, the internal AWS DNS resolver IP is the base of your AWS VPC network range plus two, e.g. 10.0.0.2 for a 10.0.0.0/16 VPC. To further reduce the downtime, lower the resolver cache time to 30 seconds instead of the default 5 minutes. Let's add the nginx resolver definition to the config above.

server {
    # this is the internal Docker DNS, cache only for 30s
    resolver 127.0.0.11 valid=30s;
    
    location ^~ /api/ {
        # other config entries omitted for brevity
    
        set $upstream api.awesome.com:9000;
 
        # nginx will now start if host is not reachable
        fastcgi_pass    $upstream; 
        fastcgi_index   index.php;
    }
}

Conclusion

In this blog post you have seen a small but subtle difference in the nginx upstream host definition. You will only notice the change if something is broken in your infrastructure, and then it's too late. With this nginx config you will deliver a more robust infrastructure. If you have some other handy tips, don't hesitate to leave a comment.

Asynchronous prooph messages via Amazon AWS SQS (Sun, 18 Jun 2017)
https://sandro-keil.de/blog/asynchronous-prooph-messages-via-amazon-aws-sqs/

Do you know that you can easily switch to async prooph messages for your commands, events and even queries? This blog post shows how to use this to produce asynchronous messages via the Amazon AWS Simple Queue Service (SQS). If you are not familiar with the prooph components, here is a short explanation: the prooph components are CQRS and Event Sourcing packages for PHP. They are enterprise ready, work with every PHP application, support the most famous PHP web frameworks (Zend, Symfony, Laravel) and, of course, play well with microservices, too! I recommend trying them out.

Enable prooph async switch

To enable the prooph async switcher, add the following config definition to your prooph config file for your specific service bus. This example illustrates it for the event bus. Don't forget to register a factory in your favorite dependency injection container.

<?php

declare(strict_types=1);

// prooph array config file
return [
    'prooph' => [
        'service_bus' => [
            'event_bus' => [
                'plugins' => [
                    \Prooph\ServiceBus\Plugin\InvokeStrategy\OnEventStrategy::class,
                ],
                'router' => [
                    // only one line, that's it
                    'async_switch' => Acme\SqsMessageProducer::class,
                    'routes' => [/**/],
                ],
            ],
        ],
    ],
];

Mark prooph message class as async

The following example illustrates how to define an asynchronous event. Implement the interface Prooph\ServiceBus\Async\AsyncMessage in your event class. That's easy, isn't it? There is nothing else you have to do. The prooph service bus handles all the stuff for you. If you are interested in some internals, read on or jump to the next headline. If a message occurs, the prooph AsyncSwitchMessageRouter enriches the message metadata with handled-async. If this field is not true, the message is sent to the async message producer. If it is true, the message is sent to the decorated router and handled by the service bus as usual. The sketch below shows such a marked event class; after that, let's go to the async message producer implementation.
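
A minimal sketch of such an event class. The event name and the payload handling are illustrative only; your events probably already extend a prooph DomainEvent, in which case only the AsyncMessage marker interface is new.

<?php

declare(strict_types=1);

namespace Acme;

use Prooph\Common\Messaging\DomainEvent;
use Prooph\ServiceBus\Async\AsyncMessage;

final class UserRegistered extends DomainEvent implements AsyncMessage
{
    /**
     * @var array
     */
    private $payload = [];

    public function payload(): array
    {
        return $this->payload;
    }

    protected function setPayload(array $payload): void
    {
        $this->payload = $payload;
    }
}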

Amazon AWS SQS async message producer

This is an example for the Amazon AWS Simple Queue Service (SQS). Be sure you have created an Amazon SQS queue and have the correct access rights. Then you should see the messages in the queue. Ok, the following code contains no fancy stuff and you are free to change it to your needs. But it should help to get started. The official Amazon AWS PHP library is used. Be sure you have installed it.

<?php

declare(strict_types=1);

namespace Acme;

use Aws\Sqs\SqsClient;
use Prooph\Common\Messaging\Message;
use Prooph\Common\Messaging\MessageConverter;
use Prooph\ServiceBus\Async\MessageProducer;
use Prooph\ServiceBus\Exception\RuntimeException;
use React\Promise\Deferred;

class SqsMessageProducer implements MessageProducer
{
    /**
     * AWS SQS client
     *
     * @var SqsClient
     */
    private $sqsClient;

    /**
     * Queue URL
     *
     * @var string
     */
    private $queueUrl;

    /**
     * Message converter
     *
     * @var MessageConverter
     */
    private $messageConverter;

    public function __construct(MessageConverter $messageConverter, SqsClient $sqsClient, string $queueUrl)
    {
        $this->sqsClient = $sqsClient;
        $this->messageConverter = $messageConverter;
        $this->queueUrl = $queueUrl;
    }

    public function __invoke(Message $message, Deferred $deferred = null)
    {
        if (null !== $deferred) {
            throw new RuntimeException('The SqsMessageProducer can not handle deferred messages.');
        }

        $promise = $this->sqsClient->sendMessageAsync(array(
            'QueueUrl'    => $this->queueUrl,
            'MessageBody' => json_encode($this->messageConverter->convertToArray($message)),
        ));

        $promise->wait();
    }
}

Now the event messages are sent to the Amazon SQS queue, but where is the message consumer, right? You can use AWS Lambda to read the messages from the queue and send them to a message box HTTP endpoint, which is responsible for the incoming messages. You can use the prooph PSR-7 middleware library or write your own implementation. The AWS Lambda consumer function is triggered via an AWS::Events::Rule with a rate of one minute. It's not realtime, but it works like a charm.
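
If you prefer to stay in PHP instead of using Lambda, a simple long-polling worker is an alternative. The following is only a hedged sketch: $sqsClient, $queueUrl, $messageFactory and $eventBus are assumed to be wired up elsewhere.

<?php

declare(strict_types=1);

// poll the queue with long polling enabled
$result = $sqsClient->receiveMessage([
    'QueueUrl'            => $queueUrl,
    'MaxNumberOfMessages' => 10,
    'WaitTimeSeconds'     => 20,
]);

foreach ($result->get('Messages') ?? [] as $sqsMessage) {
    $messageData = \json_decode($sqsMessage['Body'], true);

    // rebuild the prooph message and dispatch it on the local event bus
    // (depending on your message converter you may need to restore created_at
    // as a DateTimeImmutable before calling the factory)
    $message = $messageFactory->createMessageFromArray($messageData['message_name'], $messageData);
    $eventBus->dispatch($message);

    // remove the message from the queue once it was handled successfully
    $sqsClient->deleteMessage([
        'QueueUrl'      => $queueUrl,
        'ReceiptHandle' => $sqsMessage['ReceiptHandle'],
    ]);
}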

Conclusion

With a few lines of code, you can activate asynchronous prooph messages. This is really awesome. Don't miss checking out the prooph website to find out what else you can do with the prooph components.

Which asynchronous message producer do you use?

OpenResty (nginx) with auto generated SSL certificate from Let's Encrypt (Sun, 26 Feb 2017)
https://sandro-keil.de/blog/openresty-nginx-with-auto-generated-ssl-certificate-from-lets-encrypt/

I started with a startssl.com free SSL certificate to use encrypted connections for my website. This works fine, but I have to renew the SSL certificate manually every year. Let's Encrypt offers automatically (re)generated SSL certificates and there are different implementations. The only option for me was Docker of course, but not with an extra Docker container. nginx has support for Lua, so the lua-resty-auto-ssl plugin should be a perfect match. Unfortunately, I was not able to get it running with nginx. If someone wants to try it, you find my nginx Dockerfile here. OpenResty is an nginx drop-in replacement and has Lua built-in. That's pretty nifty. This blog post shows how to use OpenResty with the lua-resty-auto-ssl plugin to automatically and transparently issue SSL certificates from Let's Encrypt (a free certificate authority) as requests are received.

OpenResty lua-resty-auto-ssl Docker image

The openresty/openresty:alpine-fat Docker image is used as the base image, because LuaRocks is already included and this makes the installation of the lua-resty-auto-ssl plugin very easy. Some additional libraries are needed for it to work properly. My OpenResty Dockerfile looks like this.

FROM openresty/openresty:alpine-fat

RUN apk add --no-cache --virtual .run-deps \
    bash \
    curl \
    diffutils \
    grep \
    sed \
    openssl \
    && mkdir -p /etc/resty-auto-ssl \
    && addgroup -S nginx \
    && adduser -D -S -h /var/cache/nginx -s /sbin/nologin -G nginx nginx \
    && chown nginx /etc/resty-auto-ssl

RUN apk add --no-cache --virtual .build-deps \
        gcc \
        libc-dev \
        make \
        openssl-dev \
        pcre-dev \
        zlib-dev \
        linux-headers \
        gnupg \
        libxslt-dev \
        gd-dev \
        geoip-dev \
        perl-dev \
        tar \
        unzip \
        zip \
        unzip \
        g++ \
        cmake \
        lua \
        lua-dev \
        make \
        autoconf \
        automake \
    && /usr/local/openresty/luajit/bin/luarocks install lua-resty-auto-ssl \
    && apk del .build-deps \
    && rm -rf /usr/local/openresty/nginx/conf/* \
    && mkdir -p /var/cache/nginx

# use a self signed ssl certificate to start nginx
COPY ./ssl /etc/resty-auto-ssl

nginx config

The needed Docker image is ready, so we have to configure nginx. I will only show the important parts of the nginx config to use the lua-resty-auto-ssl plugin. Take also a look at the lua-resty-auto-ssl documentation to see which options are available. For instance, to test your config you can use the staging system of Let's Encrypt to not run into rate limits.

# Run as a less privileged user for security reasons.
user nginx;

error_log  /usr/local/openresty/nginx/logs/error.log warn;

# ...

http {
  # The "auto_ssl" shared dict should be defined with enough storage space to
  # hold your certificate data. 1MB of storage holds certificates for
  # approximately 100 separate domains.
  lua_shared_dict auto_ssl 1m;

  # Initial setup tasks.
  init_by_lua_block {
    auto_ssl = (require "resty.auto-ssl").new()

    -- Define a function to determine which SNI domains to automatically handle
    -- and register new certificates for. Defaults to not allowing any domains,
    -- so this must be configured.
    auto_ssl:set("allow_domain", function(domain)
      return ngx.re.match(domain, "(sandro-keil.de)$", "ijo")
    end)

    auto_ssl:set("dir", "/etc/resty-auto-ssl")

    auto_ssl:init()
  }

  init_worker_by_lua_block {
    auto_ssl:init_worker()
  }

  access_log /usr/local/openresty/nginx/logs/access.log main;

  # ...
}

nginx server definition

The last thing is to configure your nginx server definitions to auto (re)generate the SSL certificate and allow Let's Encrypt to access your server. This is also quite easy, but you should not expose port 8999. Put ssl_certificate_by_lua_block into your HTTPS server definition as shown below.

server {
    listen 443 ssl http2;

    # Dynamic handler for issuing or returning certs for SNI domains.
    ssl_certificate_by_lua_block {
      auto_ssl:ssl_certificate()
    }
}

The endpoint which is used for performing domain verification with Let's Encrypt is put into your HTTP server definition, and an extra server definition for handling certificate tasks is needed.

server {
    listen 80;
    server_name www.sandro-keil.de www.sandrokeil.de sandro-keil.de sandrokeil.de;

    # Endpoint used for performing domain verification with Let's Encrypt.
    location /.well-known/acme-challenge/ {
        content_by_lua_block {
            auto_ssl:challenge_server()
        }
    }

    location / {
        return 301 https://sandro-keil.de$request_uri;
    }
}

# Internal server running on port 8999 for handling certificate tasks.
server {
    listen 8999;
    location / {
        content_by_lua_block {
            auto_ssl:hook_server()
        }
    }
}

Conclusion

nginx with Lua is very powerful and OpenResty provides an easy way to use it. There are many more interesting OpenResty plugins based on Lua. The SSL certificate is now regenerated automatically, which saves me some work every year. I hope this blog post helps you too. I'm happy to see your comment.

If the IP of the server has changed, you can flush the DNS cache via Google Developers DNS flush, because Let's Encrypt uses Google's DNS. This was pretty handy for me. I've switched from a 1 vCore / 1 GB RAM server to a 1 vCore / 512 MB RAM server.

Docker Daemon tuning and JSON file configuration (Mon, 23 Jan 2017)
https://sandro-keil.de/blog/docker-daemon-tuning-and-json-file-configuration/

The default Docker config works, but there are some additional features which improve the overall experience with Docker. We will create a JSON config file with optimized options for the Docker Daemon, install bash completion for the Docker CLI commands with one line and increase security. But first things first.

Docker / Docker Compose installation

Please refer to the official Docker installation docs to install Docker on your specific system. To install Docker Compose, you can simply execute the following command, which downloads Docker Compose 1.24 and makes it executable. Make sure you are root, otherwise you get a permission denied error. Docker Compose simplifies Multi-Container apps. It is a tool for defining and running Multi-Container Docker applications and maintains a logical definition of a distributed application. You can then deploy this stack to your Docker Swarm Cluster with docker stack deploy --compose-file=docker-compose.yml my_stack. But this is another great story.

$ curl -L https://github.com/docker/compose/releases/download/1.24.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose

Docker Daemon configuration

You can modify the Docker Daemon to improve overall performance and make it more robust. Especially the storage filesystem driver is a key component. We will use the overlay2 storage driver, which can be used with Linux kernel >= 4.0 and Docker >= 1.12. So make sure it is available on your system. There are some security features like user namespaces which should be enabled.

Let's activate our own configuration file by running this command.

Warning: Your current Docker configuration will be overwritten.

There is no way to move data from one storage driver to another, so all your Docker containers and images will not be available anymore. You can delete everything before switching with the command docker system prune to save some disk space. This is optional of course, and you can switch back by using your previous storage driver again. Fasten your seatbelts and take off.

$ echo 'DOCKER_OPTS="--config-file=/etc/docker/daemon.json"' > /etc/default/docker

Create the file /etc/docker/daemon.json and put the following lines there. You find an excellent explanation of each configuration flag here. In short, we use the storage driver overlay2, enable JSON log files with log rotation and enable user namespaces. userns-remap uses a UID and GID, which is 1000 on my system. You can check these values for your user by executing the command id.

{
  "storage-driver": "overlay2",
  "graph": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "2"
  },
  "debug": false,
  "userns-remap": "1000:1000"
} 

Docker CLI Bash completion

Do you know that Docker also comes with bash completion? This is really helpful. Make sure you are root, otherwise you get a permission denied error. The following command downloads the bash completion file for the currently installed Docker version. You should also run this command after each Docker update.

curl -L https://raw.githubusercontent.com/docker/docker-ce/v$(docker -v | cut -d' ' -f3 | tr -d '\-ce,')/components/cli/contrib/completion/bash/docker > /etc/bash_completion.d/docker

Bash completion is also available for Docker Compose, which makes things easier. The following command downloads the bash completion file for the currently installed Docker Compose version. You should also run this command after each Docker Compose update.

curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

Now it's time to restart the Docker service with sudo service docker restart (Ubuntu), and with docker info you should get output similar to this. The bash completion will be available if you reopen your terminal. Let me know if you have other Docker config improvements.

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
 userns
Kernel Version: 4.8.12-040812-generic
Operating System: Ubuntu 16.10
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 19.54 GiB
Name: [MACHINE NAME]
ID: [A LONG ID]
Docker Root Dir: /home/[YOUR USERNAME]/docker/100000.100000
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Conclusion

This blog post has shown how to configure and optimize the Docker Daemon configuration. The Docker Daemon now performs better thanks to the overlay2 storage driver and is more secure thanks to user namespaces. The CLI bash completion for Docker and Docker Compose is very handy too.

My talk Docker for PHP Developers at PHP.RUHR (Tue, 18 Oct 2016)
https://sandro-keil.de/blog/my-talk-docker-for-php-developers-at-php-ruhr/

I'm very excited to give a talk at the PHP.RUHR conference in Dortmund. This conference takes place on November 10th for the third time in the Ruhr area, which is the largest metropolitan region in Germany. In addition to the programming language PHP, related topics such as IT security, databases and web hosting are also highlighted at the event. There are fifteen talks in all on this day, plus a workshop.

Docker for PHP Developers

Everyone talks about Docker and you might think Docker already belongs to the standard repertoire. In fact, Docker is revolutionizing web development. In minutes, a whole web server stack is set up to simulate the live environment. Why Docker is very good for PHP developers is what you can learn in this talk. I introduce the Docker ecosystem and how to create your own Docker images and multi-container applications. In addition to a classic PHP webserver stack, I also give a few tips on general problems and what new challenges are coming your way.

This is my third talk this year. I got a lot of feedback on the other talks, which helps me to improve them. If you will be an attendee at the PHP.RUHR conference, don't hesitate to drop some notes here about my talk.

My conference talks in September (Tue, 20 Sep 2016)
https://sandro-keil.de/blog/my-conference-talks-in-september/

I'm thrilled to give two talks at different conferences in September. This is my first time being a speaker on such a stage. I am very happy to have been accepted. Hopefully you can join and enjoy my talks. The first talk on 24th September in Dresden is about PHP Docker builds and the second one on 30th September in Hamburg is about PHP profilers.

The Way to Hassle Free Docker PHP Web Stack Deployments

The PHP Developer Day in Dresden is a free conference and brought to you by move:elevator and the PHP USERGROUP DRESDEN e.V.. Some of the great speakers are Bernhard Schussek, Sebastian Heuer and Benjamin Cremer. And of course, the members of PHPUGDD Holger Woltersdorf, Tommy Mühle and Patrick Pächnatz have awesome talks too. Don't hesitate to grab your free ticket now to join us on 24th September.

I will speak about The Way to Hassle Free Docker PHP Web Stack Deployments. Good things come in small containers. A typical PHP web server stack has at least three Docker containers. An nginx, PHP-FPM and a database Docker image is created quickly for development, but the way to deployment is longer than you think. This talk is about a single source of truth, rebuilding any version, any service, any time, and what's going on in my app aka logging. Data persistence is not left out either. If the build is ready, the deployment can be done, or maybe not?

A new era of PHP profiler

The code.talks conference in Hamburg has more than 1500 attendees and is one of the biggest conferences in Europe. There are more than eighty talks on the two days from 29th - 30th September. The level of the talks ranges from basic to expert. This is really awesome.

I will speak about A new era of PHP profiler. Xdebug and XHProf belong to the old generation, but they work properly. The new PHP profilers revolutionize the analysis of PHP applications. Bottlenecks or inefficient code are things of the past now. Why and how do you profile PHP code and what is the difference between profiling and benchmarking? This talk has not only answers to these questions. We take a closer look at SensioLabs Blackfire, Tideways and Zend Z-Ray.

I want to thank my employer prooph software GmbH, which gives me the free time to be at the conference.

Docker Compose with named Volumes and multiple Networks (Mon, 02 May 2016)
https://sandro-keil.de/blog/docker-compose-with-named-volumes-and-multiple-networks/

In Docker Compose 1.6 or higher, networks and volumes are first class citizens. This means you have more control and you can use individual services in one or more networks. Sharing volumes has been improved. It was never so easy. A new docker-compose.yml format was introduced. The Docker Compose config file must start with an entry version: "2" to use the new features. This blog post covers a typical web server stack with nginx, PHP-FPM and MariaDB with the new Docker Compose configuration format.

Docker Named Volumes

One really interesting point is to use named volumes. You can create new volumes with docker volume create my-volume or you can use Docker Compose too. The latter creates a default volume prefixed with the name of the project. With this, you don't need data-only containers anymore, which is a good benefit. The command docker-compose ps won't have extra dead entries, and docker volume ls will have more descriptive output, because volumes have names. I guess there is also a slight performance improvement if you have some data-only containers, because Docker Compose doesn't have to start them anymore.

So, when should I use named volumes? Every time, especially for persistent data like databases! One benefit is that you only have to change the volume driver and then you can use Flocker, for instance. This could be useful for production. For development, you can mount your PHP files directly into the container. You can read more about best practices in Docker for PHP Developers.

Docker with Multiple Networks

A DNS server is embedded in the Docker engine to auto discover services by name. No link definitions are needed, but you can still use them. You can access other containers by their name from another container in the same network. No linking means a faster start of the application, because of the asynchronous start. You can put Docker containers on different networks to increase security, for instance frontend and backend.

Typical web server stack

The new Docker Compose configuration format has three top level keys named services, volumes and networks. The following server stack contains an nginx, a PHP-FPM and a MariaDB (MySQL) container with a named data volume for the database data. Only the nginx Docker container is in the frontend network. The others are put into the backend network.

version: '2'
services:
  #
  # [ server stack ]
  #
  # - nginx
  # - php
  # - mysql
  #
  nginx:
    image: prooph/nginx:www
    restart: "always"
    links:
      # nginx access the php-fpm container with php 
      - php-fpm:php
    networks:
      - frontend
      # nginx must communicate with php-fpm from the backend network
      - backend

  php-fpm:
    image: prooph/php:7.0-fpm
    restart: "always"
    # see 12factor.net why env variables are used here
    environment:
      - MYSQL_HOST=${MYSQL_HOST}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    networks:
      - backend

  mysql:
    image: mariadb
    restart: "always"
    # named volumes come here into play
    volumes:
      - data-mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    networks:
      - backend

#
# [ volumes definition ]
#
# creates Docker volumes which can be mounted by other containers too e.g. for backup
#
volumes:
  data-mysql:
    driver: local

#
# [ networks definition ]
#
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

Conclusion

The new features of Docker Compose and Docker give you great benefits: faster Docker container starts, increased network security and better portability of the volumes. A further Docker daemon improvement is to use the OverlayFS storage driver. Remember that you can also use multiple Docker Compose configuration files and merge or extend from them. One example is to set up your production configuration and overwrite only the parts for development, e.g. ports or volume definitions, as in the example below.
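
For example (docker-compose.dev.yml is a hypothetical override file containing only the development-specific ports and volumes):

docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d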

PHP 7 Expectations / Assertions (Thu, 17 Mar 2016)
https://sandro-keil.de/blog/php-7-expectations-assertions/

PHP 7 has many new features. One of them is Expectations, maybe better known as assertions. It's a common practice to use an assertion library like beberlei/assert to ensure correct values and types on a low level, for instance in Value Objects or Aggregates. At first sight, the Expectations in PHP 7 can replace the assertion libraries and you have one dependency fewer and of course no static calls. assert() is a language construct in PHP 7, so it's quite a bit faster than (static) function calls. Sounds good, but not at second sight.

I'm surprised that PHP 7 Expectations should only be used for development. I know that assertions do not replace input filtering and validation, but are they a replacement for assertion libraries? You know, if you rely on an external dependency, you are dependent on the time of the maintainer or contributors. Things change from time to time, and what if your PR is not recognized? Sure, you can write all the stuff yourself, but that's not an option and in the end it's even worse than this.

Problem Details for HTTP APIs

Take a look at a typical problem if you work with APIs and server to server communication. You want to map the JSON data to an object. This example uses the API Problem from Apigility. The requirements are that the response object should be immutable and should not use reflection. A fixed set of data is provided and some optional information can be available. The only solution is constructor injection with an array. That's a nice solution, because the caller only has to put the JSON decoded data into the constructor. Note, we trust Apigility to provide the right data if the application/problem+json header is present. The response object can look like this:

declare(strict_types = 1);

class ApiProblem
{
    private $data;

    public function __construct(array $data)
    {
        $this->data = $data;
    }

    public function type() : string
    {
        return $this->data['type'];
    }

    public function title() : string
    {
        return $this->data['title'];
    }

    public function status() : int
    {
        return $this->data['status'];
    }

    public function detail() : string
    {
        return $this->data['detail'];
    }

    public function additionalDetails(string $name)
    {
        return $this->data[$name] ?? null;
    }
}

Nothing special here, but strict types are used and if one of the functions doesn't return the correct type, for instance null instead of string, an error is raised. Sure, you can use checks in the functions like return $this->data['detail'] ?? ''; but is this good code? You must be able to rely on some data definitions. Here is where assertion libraries and PHP 7 Expectations come into play.

With beberlei/assert the constructor function would look like this:

public function __construct(array $data)
{
    Assertion::keyExists($data, 'type', 'Not set');
    Assertion::keyExists($data, 'title', 'Not set');
    Assertion::keyExists($data, 'status', 'Not set');
    Assertion::keyExists($data, 'detail', 'Not set');

    Assertion::string($data['type'], '"type" wrong');
    Assertion::string($data['title'], '"title" wrong');
    Assertion::integer($data['status'], '"status" wrong');
    Assertion::string($data['detail'], '"detail" wrong');

    $this->data = $data;
}

To use PHP 7 Expectations you must set zend.assertions = 1 in your php.ini (it cannot be enabled at runtime if assertions were compiled out with -1) and enable exceptions with ini_set('assert.exception', '1');. The constructor function would look like this:

public function __construct(array $data)
{
    assert(isset($data['type']) && is_string($data['type']), '"type" not set/wrong');

    assert(isset($data['title']) && is_string($data['title']), '"title" not set/wrong');

    assert(isset($data['status']) && is_int($data['status']), '"status" not set/wrong');

    assert(isset($data['detail']) && is_string($data['detail']), '"detail" not set/wrong');

    $this->data = $data;
}

With PHP 7 Expectations you can also use your own exceptions, for instance assert(false, new MyDomainException('my message'));. This can be useful to catch exceptions by component, but in most cases, if something goes wrong on this level, you can only gracefully give up. But it's really good for debugging or analyzing the abnormal behaviour.

On the Pro side there is no new project dependency and you can use your own exceptions.

On the Contra side it's only useful for type checks. Otherwise it's too much boilerplate code and error prone, for instance for strlen() checks, and it depends on PHP ini settings.

Conclusion

You cannot rely on the Expectations throwing exceptions because it depends on PHP ini settings. That's a no-go for public code, but internal projects can use it. If your internal component has to do only some simple checks, then maybe it's good to avoid any assertion library and use assert() instead. It could also be useful to write your own assertion library which fits your needs and which doesn't rely on unneeded PHP extensions. If you use the PHP intl extension, why should you install the PHP mbstring extension too?

Don't hesitate to put a comment and share your experience with PHP 7 Expectations.

Docker for PHP Developers (Tue, 26 Jan 2016)
https://sandro-keil.de/blog/docker-for-php-developers/

Docker is a great way to emulate a live server environment. Sure, you don't have the same hardware, but you can have the same infrastructure stack with multiple web, PHP-FPM, database and CDN servers and so on. Another reason to use Docker for PHP development is that it's faster than Vagrant and needs much fewer resources than VirtualBox machines. But it's also possible to use Docker in a Vagrant box.

Looking for an extended German version? There are also slides from my PHP Usergroup Dresden talk available.

PHP webserver stack

A typical PHP webserver stack contains nginx, PHP-FPM and a MySQL database like MariaDB. How much time do you need to set up such a system? What if I told you that you only need 20 lines of configuration, Docker and a few minutes? Curious? ;-) We at prooph software have some Docker images for development which fit this webserver stack, and we have a cool example app which uses this webserver stack and some other cool features like CQRS, Service Bus and Event Sourcing with Snapshots.

If you have not already installed and optimized Docker, take a look at using Docker with OverlayFS on Ubuntu. Here is the Docker Compose YAML configuration which sets up the PHP webserver stack with nginx, PHP-FPM and MariaDB. Save this configuration to an empty directory or an example project with the name docker-compose.yml.

If you destroy the container, the data of the database will be lost in this example!

nginx:
  image: prooph/nginx:www
  ports:
      - "8080:80"
      - "443:443"
      # these ports are for Zend Z-Ray
      - "10081:10081"
      - "10082:10082"
  links:
    - php:php
  volumes_from:
    - dataphp

php:
  image: prooph/php:7.0-fpm
  links:
    - mariadb:mariadb
  volumes_from:
    - dataphp

dataphp:
  image: debian:jessie
  volumes:
    - .:/var/www

mariadb:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    - MYSQL_ROOT_PASSWORD=dev
    - MYSQL_USER=dev
    - MYSQL_PASSWORD=dev
    - MYSQL_DATABASE=proophessor

The nginx vHost is configured for the folder public. Create an index.php file with <?php echo 'Hello World!'; if it doesn't exist. Now start the Docker containers with docker-compose up -d and open the browser at http://localhost:8080. nginx is configured with HTTP/2 and SSL. Check this with https://localhost.

Use a database data container

To avoid losing database data if you destroy the container, you need a Docker data container for the database. The MariaDB Docker image has a volume /var/lib/mysql defined, so you can mount a Docker data container at this location. You don't want the database data to be available in your PHP Docker container. It's necessary to change the mounted paths of the PHP Docker data container to a list of folders needed by the webserver. This is an exercise for you.

What's about PHP debugging

With Docker you are free to switch your environment in seconds. If you want to use Xdebug, simply change line 15 to prooph/5.6-php-fpm-xdebug and rebuild your server stack with docker-compose stop && docker-compose rm -f && docker-compose up -d. Now you can debug your application. Ensure that your IDE is listening on port 10000, because port 9000 is used by PHP-FPM. If you need help, please check my blog post about remote PHP debugging.

Do you need a PHP profiler? No problem, change line 2 to prooph/nginx:zray and line 15 to prooph/php:5.6-fpm-zray to use the Zend Z-Ray profiler. You have to configure the Zend Z-Ray URL. This is why the ports 10081 and 10082 are exposed in the example above. Don't know which PHP profiler you should use? Read more about the PHP profilers Z-Ray, Blackfire and Tideways.

Production best practice

There were some questions about how to use a PHP Docker container in production. Well, put the PHP source code into the PHP Docker image. Your logs should be written to stdout or you use another Docker log driver. Product images or user generated content should be mounted into the container, so you can use a CDN and another Docker volume plugin driver like Flocker to support more than one server.

You don't need Ansible, Puppet or Chef. Settings like DB credentials should be defined as environment variables. This is really cool, because the image with your version of the source code runs on production, staging, testing and development without changing the configuration. No more environment checks or switches in your application.

It's easy to deploy a new version. Spin up the new container and stop the old container. Switching back to the old version is also easy. You don't have to maintain your server host system if you use a Docker hosting provider. Simply build a new image and ship it.

The nginx image doesn't know anything about your PHP application. The mount of the PHP data into the nginx container in the example above is used for the application assets (CSS, JS, images). This makes it easier to start, but it's not mandatory, especially when you use a CDN.

Conclusion

With Docker it's easy to build infrastructure for development that resembles a production environment. You can also override your production Docker images with an extended development environment where you have a PHP profiler running. And last but not least, you can check how your application works at scale with fewer resources than with virtual machines.

Docker with OverlayFS on Ubuntu (Sun, 20 Dec 2015)
https://sandro-keil.de/blog/docker-with-overlayfs-on-ubuntu/

Docker uses the DeviceMapper storage driver as default if no other driver is available. That's ok and it works, mostly. I sometimes ran into trouble because a container could not be started. In the end, I had to delete Docker containers and images and create them again. The OverlayFS driver is faster than DeviceMapper and AUFS. You can also read more about OverlayFS on Docker.com. In this blog post you will learn how to configure your Linux system and Docker to use the OverlayFS storage driver.

Check current Docker storage driver

If you don't have Docker installed yet, skip this section and go to Configure Docker with OverlayFS. Otherwise run the command docker info to see information about your Docker environment. Important is the storage driver. If you are not already using OverlayFS as the Docker storage driver, you see something like devicemapper or aufs. The following output shows that the OverlayFS storage driver is used.

Containers: 14
Images: 501
Server Version: 1.9.1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.19.8-031908-generic
Operating System: Ubuntu 15.04
CPUs: 8
Total Memory: 7.73 GiB

Check if OverlayFS is available

Depending on your Linux version, OverlayFS may not be in the Linux kernel upstream. However, to check if OverlayFS is already available, run the command lsmod | grep overlay. If you get an output containing overlay, you are ready to enable OverlayFS in your Docker config.

If you have no output, check your Linux kernel version with the command uname -a. You should see something like this: Linux ThinkPad 3.19.8-031908-generic #201505110938 SMP Mon May 11 13:39:59 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux. The Linux kernel version here is 3.19.8. OverlayFS is available since Linux kernel 3.18.

Upgrade Ubuntu kernel

If OverlayFS is not available on your system you can simply upgrade your Linux kernel to a newer version. The current example uses the kernel version 4.3 for a 64 Bit system, but you are free to use another kernel version >= 3.18.

$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-headers-4.3.0-040300-generic_4.3.0-040300.201511020949_amd64.deb

$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-image-4.3.0-040300-generic_4.3.0-040300.201511020949_amd64.deb

$ wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.3-wily/linux-headers-4.3.0-040300-generic_4.3.0-040300.201511020949_all.deb

$ sudo dpkg -i linux-headers-4.3.0*.deb linux-image-4.3.0*.deb

Now reboot your system and check if OverlayFS is available (see above).

Another possibility to use a different storage driver than devicemapper, if you can't upgrade your Linux kernel, is to install the linux-image-extra package with the command sudo apt-get -y install linux-image-extra-$(uname -r).

Configure Docker with OverlayFS

To enable OverlayFS for Docker, open /etc/default/docker and put the line DOCKER_OPTS="--storage-driver=overlay" at the end of the file, as shown below. Restart the Docker daemon with sudo service docker restart, or install Docker now, and check if Docker uses OverlayFS (see above).
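
The two steps as shell commands:

$ echo 'DOCKER_OPTS="--storage-driver=overlay"' | sudo tee -a /etc/default/docker
$ sudo service docker restart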

Conclusion

Using Docker with OverlayFS is easy and it offers better performance. You should also use the OverlayFS Docker storage driver for development. Now, I'm waiting for user namespaces to avoid the file permission issues in development; gosu is not an option. Don't forget to enable logrotate for the Docker log files.
