Oct 15, 2017

An Embarrassing Mistake: 'The "example" entity type does not exist'

Using the code from the article "Creating a configuration entity type in Drupal 8" to create a sample module demonstrating the use of configuration entities, I got the following log message when browsing to the path /admin/config/system/example.

Drupal\Component\Plugin\Exception\PluginNotFoundException: The "example" entity type does not exist. in Drupal\Core\Entity\EntityTypeManager->getDefinition() (line 133 of /var/www/html/opti/core/lib/Drupal/Core/Entity/EntityTypeManager.php).

This is embarrassing, but I'm going to write it up anyway.

I had copied and pasted the code verbatim from the article into new files using the filenames and paths that were indicated. It should have worked!

It turns out I had overlooked one little thing: at the top of each of the PHP files, I had neglected to insert

<?php

What eventually tipped me off was that my text editor, Sublime Text, was not providing the syntax coloring that it normally did.
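Another quick way to catch this kind of omission is to list any .php files that lack an opening tag. Assuming the module lives at modules/custom/example (a placeholder path), GNU grep can do it, since -L prints the files that contain no match:

$ grep -rL '<?php' --include='*.php' modules/custom/example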

Laughing my ass off ...


Sources:

Creating a configuration entity type in Drupal 8
https://www.drupal.org/docs/8/api/configuration-api/creating-a-configuration-entity-type-in-drupal-8

Configuration Entities in Drupal 8
https://wunder.io/blog/configuration-entities-in-drupal-8/2014-07-14

Sep 2, 2017

How to enable Vagrant SSH access into a Docker container

Vagrant seems to work really well with VirtualBox and some other virtualization providers, but I had a situation where I wanted to get it to run with Docker instead (on my local Linux Mint system) and to enable Vagrant to have ssh access into the Docker container.

I started with the Docker image wadmiraal/drupal:7.54, which I had been using for doing Drupal development via Docker alone. Several changes had to be made, some in the Docker image and some in the Vagrantfile, in order to satisfy Vagrant's expectations.

(1)  Create a user named vagrant in the guest system.

Running the image in a Docker container, I ssh'ed into it as its existing root user, then created a new user named "vagrant". I set the password to "vagrant", which is the conventional value, although it could be any password.
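On a Debian-based guest like this one, the commands look roughly like the following (the shell and home-directory flags are just my usual choices, not requirements):

# useradd -m -s /bin/bash vagrant
# echo "vagrant:vagrant" | chpasswd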

I tested this new user by logging in manually.

# ssh -p 2222 vagrant@127.0.0.1

where 2222 is the local host port that is forwarded to the ssh port on the guest.

(2)  Provide ssh key authentication in the guest system.

On the host / local system, I already had ssh keys. I used the following command to copy the public key into the guest system's authorized_keys.

# ssh-copy-id -p 2222 vagrant@127.0.0.1

Again, I tested this by logging in manually.

(3)  Enable sudo for the vagrant user in the guest system.

The Vagrant docs say, "Many aspects of Vagrant expect the default SSH user to have passwordless sudo configured. This lets Vagrant configure networks, mount synced folders, install software, and more."

The Docker image I started with did not even have sudo available, so I logged into the guest as root and installed it.

# apt-get update
# apt-get install sudo

Then I used visudo to edit /etc/sudoers, inserting two lines.

# visudo

vagrant ALL=(ALL) NOPASSWD:ALL
Defaults:vagrant !requiretty
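
To confirm that passwordless sudo works for the new user, one quick check from the host (assuming the same port forwarding as above) is:

# ssh -p 2222 vagrant@127.0.0.1 'sudo -n true && echo "passwordless sudo OK"'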


After making these changes to the guest, I used Docker to commit a new image. Let's call it earl/drupal:7.54
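The commit looks something like this, where drupal_base is just a placeholder for whatever name docker ps shows for the running container:

# docker commit drupal_base earl/drupal:7.54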

(4)  Add Docker settings in the Vagrantfile.

ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'

Vagrant.configure("2") do |config|

  config.vm.network "forwarded_port", guest: 22, host: 2222

  config.vm.provider "docker" do |dock|
    dock.image = "earl/drupal:7.54"
    dock.name = "drupal-7.54"
    dock.has_ssh = true
  end

end

Specifying the default provider in the Vagrantfile is just a convenience so that you don't have to use the --provider option for the vagrant up command.
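In other words, with that ENV line in place, a plain vagrant up does the same thing as:

$ vagrant up --provider=docker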

(5)  Set up password authentication for ssh in the Vagrantfile.

Vagrant.configure("2") do |config|

  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"

end

This is optional. If you provide config.ssh.password as above, Vagrant will use password authentication. Otherwise, Vagrant will default to key authentication, as in the following step. 

(6)  Set up key authentication for ssh in the Vagrantfile.

Vagrant.configure("2") do |config|

  config.ssh.keys_only = false
  config.ssh.private_key_path = "/home/earl/.ssh/id_rsa"

end

This is also optional. Do either step (5) or step (6). If you do both, Vagrant will use password authentication. 

config.ssh.keys_only must be set to false in order to use your own ssh keys, and you must also provide the path to those keys via config.ssh.private_key_path.

(7) Test that Vagrant is able to make a change in the guest.

For example, I added the following two lines to the Vagrantfile to execute a shell command.

Vagrant.configure("2") do |config|

  config.vm.provision "shell",
    inline: "touch /vagrant/hello-world" 
end

Running vagrant up should boot the container without errors and execute the shell command successfully. In the above snippet, because Vagrant automatically synchronizes the /vagrant directory on the guest with the host directory where the Vagrantfile is located, you can check for the hello-world file on the host.
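From the host, in the directory that contains the Vagrantfile:

$ vagrant up
$ ls -l hello-world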

Sources:

Creating a Base Box
https://www.vagrantup.com/docs/boxes/base.html

SSH Settings
https://www.vagrantup.com/docs/vagrantfile/ssh_settings.html

Docker Configuration
https://www.vagrantup.com/docs/docker/configuration.html

Aug 25, 2017

Moving Docker's storage to a different location

On a Linux Mint system, I was running low on disk space in the partition where Docker CE 17.06 had installed itself and was storing its files, such as images. So I thought I'd move the Docker storage directory to a different partition that had a lot more space, and create a symbolic link at the directory's original location pointing to where it had been moved.

That sounded pretty simple, but it turned out to be a vexing problem.

After moving the directory, I ran a Docker container that contained an instance of Drupal 7. When I browsed to the Drupal site I got the error:

Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'

If you do a web search on this error message, there are a lot of possible causes. In my case, the Docker container had been running fine just before the directory move, so the move was the obvious culprit.

I had typed in several cp commands before running the container again, but I didn't have a clear memory or record of the steps I had taken. After the first occurrence of the error, I ran further copy commands. The error kept occurring, but once in a while the site came up okay. This thrashing about included re-initializing the Docker directory at least once.

From my frustrating experience, here are a few suggestions about moving Docker's storage.

(a)  Be sure to stop the Docker service before making any changes.

(b)  Use Docker's configuration file and the -g option to indicate the location of storage. This provides both flexibility and safety in trying different locations.

(c)  If you use the cp command to copy storage contents, be sure to include the -p option to preserve owner, mode, and timestamp.



Here's an example of a sequence of steps that has worked for moving the Docker storage folder to a different location using cp.

(1) Stop the Docker service.

# service docker stop

(2) Copy Docker's storage folder to a different partition.

# cd /var/lib
# cp -r -p docker /home/earl
# mv docker docker-save

(3) Edit the Docker configuration file to add the -g option to point to the new location. The file may already contain a commented line you can use as a starting point.

# vim /etc/default/docker

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 -g /home/earl/docker"

(4) Re-start the Docker service. Run your container to test the change.

# service docker start
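One way to confirm that Docker picked up the new location is to check docker info:

# docker info | grep 'Docker Root Dir'
Docker Root Dir: /home/earl/docker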

(5) When you are confident that the change is correct, you can remove the original directory to recover the space.

# rm -r /var/lib/docker-save

Sources:

How do I change the Docker image installation directory?
https://forums.docker.com/t/how-do-i-change-the-docker-image-installation-directory/1169

Aug 2, 2017

VirtualBox, Vagrant, and KVM

I was exploring the possibility of using Vagrant with VirtualBox for doing development on my local system.

The system is running Linux Mint 17, and I was using Vagrant 1.9.7 with VirtualBox 5.1.24.

I immediately started having problems with some of the Vagrant boxes that I tried to run. (A "Vagrant box" is an initial machine image to be loaded into the virtual machine.)

For example, using the box hashicorp/precise64, the vagrant up command showed the error:

Stderr: VBoxManage: error: VT-x is not available (VERR_VMX_NO_VMX)
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component ConsoleWrap, interface IConsole

Later, after I had learned how to configure Vagrant to have VirtualBox display its own user window, I tried to run the box geerlingguy/ubuntu1604 and got this error from VirtualBox:

VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

And on the command line, kvm-ok showed:

INFO: Your CPU does not support KVM extensions
KVM acceleration can NOT be used
It turns out that there are two possible explanations for these error messages.

(1)  The processor hardware does not have the virtualization capabilities required by VirtualBox (and by Linux KVM, kernel-based virtual machines). These capabilities are either Intel's VT-x or AMD's AMD-V.

VT-x is sometimes encoded as vmx, and AMD-V as svm; that is how they appear among the CPU flags in /proc/cpuinfo, for example.
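So a quick check is to count the lines in /proc/cpuinfo that mention either flag; a count of 0 means the processor does not advertise either capability:

$ grep -E -c '(vmx|svm)' /proc/cpuinfo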

(2)  The processor hardware has the capabilities but they are not enabled.

Without this hardware, VirtualBox cannot run 64-bit operating systems. However, it can still run 32-bit operating systems.

In my case, it turned out that I have a lower-end processor that does not have the capabilities at all. I was able to check this by looking at the file /proc/cpuinfo to find the model number of the processor:

   model name : Intel(R) Pentium(R) CPU B960 @ 2.20GHz

Then, using the model number B960, I searched Intel's product specifications site to find its specs.

If the processor has the capabilities but they are not enabled, then you might be able to enable them by going into the BIOS and looking for a setting such as VT (virtualization technology).

And for my next system, I will be looking for the processor to have this.

Sources:

ERROR: VT-X is not available
https://forums.virtualbox.org/viewtopic.php?f=8&t=17090

PRODUCT SPECIFICATIONS
https://ark.intel.com/#@Processors

KVM/Installation
https://help.ubuntu.com/community/KVM/Installation

x86 virtualization
https://en.wikipedia.org/wiki/X86_virtualization

May 25, 2017

A Reason to Use the Drupal Coder / PHP_CodeSniffer utility

Drupal Coder includes the command line utility PHP_CodeSniffer, which parses source code to detect violations of a coding standard.

Actually, there are two utilities in the package: one to detect violations, and a second to automatically fix those violations that can be fixed.

I used the provided Drupal standard and ran it against all of the source code of the Optimizely module.
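The two command line tools are phpcs (detection) and phpcbf (automatic fixing). Assuming Coder is installed and its Drupal standard is registered with PHP_CodeSniffer, the invocations look roughly like this; the module path and the list of extensions are illustrative rather than required:

$ phpcs --standard=Drupal --extensions=php,module,inc,install optimizely/
$ phpcbf --standard=Drupal --extensions=php,module,inc,install optimizely/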

There are more than a hundred "sniffs" that are checked against. Individually, many are minor and relatively insignificant. A few are downright annoying. In the aggregate, though, I do feel that the resulting code was improved in terms of its readability.

The main benefit I've experienced so far is that I am being nudged into writing doc comments for all functions and classes. At first, I was resistant to doing so because good naming is often sufficient as documentation. As I edited file after file, though, I started to appreciate this kind of commenting as a desirable, consistent practice to adopt.

In one case, writing descriptions about a class and its methods helped me realize that the class was not entirely cohesive and maybe should have been written as two classes instead.

Retroactively applying the Drupal coding standard to the entire module was quite a bit of work. Moving forward, using PHP_CodeSniffer incrementally as a matter of habit should be much, much easier.

Sources:

Coder
https://www.drupal.org/project/coder

Installing Coder Sniffer
https://www.drupal.org/node/1419988

Apr 21, 2017

Using xDebug and Sublime Text with Docker

I've used xDebug with Sublime Text locally for quite some time but have started playing with Docker containers to run instances of Apache, PHP, MySQL, and Drupal 8.

The Docker image I use has xDebug enabled for PHP, but I wanted xDebug running in the container to communicate with Sublime running on my local host system. This is not complicated, but it still took quite a while for me to determine the correct settings.

Some of my confusion was due to the terms server and client as used in documentation and comments. Most of the time, server refers to xDebug running within PHP, and client refers to the IDE or text editor such as Sublime.

On the other hand, apparently it is xDebug that initiates the connection to the IDE, which makes xDebug act like a client. Also, xDebug has a setting called remote_host which sounds like a remote server that it is communicating with.

In the container I'm running, the xDebug settings are in

  /etc/php5/mods-available/xdebug.ini

Here are tips to get these components working together.

Run ifconfig in a local terminal to get the local IP address.

Working on my laptop connected to a home router, ifconfig shows eth0 with inet addr 10.0.0.3. I use that value as follows in xdebug.ini 

  xdebug.remote_host = 10.0.0.3

At a public library using their wi-fi, ifconfig shows wlan0 with inet addr 10.12.13.211, for example, but the address changes from session to session.

  xdebug.remote_host = 10.12.13.211

This is the key difference from running in a purely local way without Docker, where localhost is the value typically used for the remote_host setting.

Here are the settings that need to be present in xdebug.ini.

  xdebug.remote_enable = On
  xdebug.remote_host = 10.0.0.3
  xdebug.remote_port = 9000


If you have trouble getting your setup to work, use a log for xDebug to record errors and warnings. Enable the log by adding the following directive into your xdebug.ini file, e.g.

  xdebug.remote_log = /tmp/xdebug.log

This is useful for debugging. For example, at the beginning of the log there will probably be an indication of whether xDebug is even able to connect to the client, which is an important clue.

But use this key only as needed since it can generate a lot of log messages, some of which seem spurious.
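To watch the log from the host while reproducing a problem, docker exec works; drupal_xdebug below is just a placeholder for the container's actual name:

# docker exec -it drupal_xdebug tail -f /tmp/xdebug.log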

For the Sublime editor, the key setting is path_mapping. Its value is an object that indicates corresponding paths. For example,

  { "/var/www/modules/custom/optimizely/": "/var/www/html/opti/modules/contrib/optimizely/" }

The key (the left side) is a path in the Docker container where the xDebug server finds its source code. Its value (the right side) is the path in the local system where Sublime finds the corresponding code.

In my use case, I am only interested in the code for the Optimizely module, so I'm only providing a mapping between the two root directories of the module.

Here are the settings that need to be present in Sublime for its xDebug package.

    "port": 9000,
    "path_mapping": { "/var/www/modules/custom/optimizely/": "/var/www/html/opti/modules/contrib/optimizely/" },

If you need to do troubleshooting, you might add the following setting as well in order for Sublime to output messages to its own local xDebug log.

    "debug": true

Also use this key only as needed since it can generate a lot of log messages, some of which seem spurious.

No port forwarding is needed.

The Docker documentation states: "By default Docker containers can make connections to the outside world ..." Since it is xDebug within the container that initiates the connection to Sublime running outside, there is no need to use port forwarding for the port between them.

Once you've got the correct settings for xdebug.ini, there are different ways to persist them. In my case, some settings don't change, but I work in different locations where the IP address for remote_host does vary, so I took a hybrid approach.

First, I used the docker commit command to capture the current state of a container into an image. In that container I had edited xdebug.ini with the settings that remain the same.

# docker commit distracted_dijkstra drupal-xdebug

Second, I use the docker run command with the -e option to provide the IP address as an environment variable when instantiating the image in a new container.

# docker run -e XDEBUG_CONFIG="remote_host=10.0.0.3" drupal-xdebug


Sources:

Xdebug 2 | Remote Debugging
https://xdebug.org/docs/remote#browser_session

martomo / SublimeTextXdebug
https://github.com/martomo/SublimeTextXdebug/blob/master/Xdebug.sublime-settings

Debug your PHP in Docker with Intellij/PHPStorm and Xdebug
https://gist.github.com/chadrien/c90927ec2d160ffea9c4

Apr 17, 2017

Initial Thoughts on Using Docker

I have started to use the Docker images from wadmiraal/drupal for local Drupal development, for example, wadmiraal/drupal:8.1.0 to use Drupal 8.1.0. These images are very well documented at Use Docker to kickstart your Drupal development.

There are other images for working with Drupal, but I happened to choose this set since it incorporates the versions of PHP, MySQL, and Apache that are close to what I've been using.

Here are some random notes on what I have experienced initially as someone who is new to Docker.

(1) Containers are easy and really fast to spin up (at least on my Ubuntu-based system).

If you want to start completely fresh, run a new container, which means any changes you have made are lost. Sometimes that's exactly what I want, for example, a fresh install of the module I'm working on and a clean database.

On the other hand, if you want to preserve changes to the file system of the container, you can stop it and then start the same container again later. But be aware that it's really easy to accumulate clutter in the form of containers that you no longer want and have to remove manually. You can see all containers, running or not, with the command: docker ps -a
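In Docker 1.13 and later, one way to clear out all the stopped containers in a single step is:

# docker container prune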

(2) There are different ways to communicate into Docker containers.

The ways I've used are port forwarding and volume mounting.

With port forwarding, when you run a container you specify which ports on the local host are passed on to a corresponding port of the container. For example, port 8080 locally can be mapped to the default port 80 of the container for http. Browsing to an address such as localhost:8080 then sends the request to the instance of the web server running in the container.

Volume mounting is a way to make local directories visible to processes running in the container. I mount my local development directory for the Optimizely module to a directory path in the container's file system under the Drupal site. This allows me to edit code locally without having to do so inside the container. Nice!
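Both port forwarding and volume mounting are specified when the container is run. A rough example follows; the host path and the mount point inside the container are placeholders for my setup, and the image's actual Drupal root may differ:

# docker run -d -p 8080:80 -v /home/earl/dev/optimizely:/var/www/modules/custom/optimizely wadmiraal/drupal:8.1.0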

(3) Using ssh and scp

If you run the container with the appropriate port forwarding, you can ssh into the container. In the case of the image wadmiraal/drupal, once you have an ssh terminal the vi and vim editors are available.

However, other tools that you might want are not there. These can be added on the fly by using apt-get install, for example. But keep in mind that such changes will not necessarily persist, depending on how you manage the container.

If ssh works, then so does scp for copying files back and forth between host and container. I have a one-line PHP script that calls the function phpinfo(), which I scp from my local system into the web root of the container for troubleshooting.
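The copy looks something like this; the user, forwarded port, and destination path are placeholders that depend on how the container was run:

# scp -P 2222 info.php root@127.0.0.1:/var/www/html/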

(4) Using xDebug and Sublime Text with Docker

The Docker image has xDebug enabled for PHP, but I'd like to have xDebug running in the container communicate with Sublime on the local system so that I can edit and step through code locally.

So far, I am struggling to set this up. I expect to crack this nut eventually and will blog about it when I do.


Sources:

Docker Overview
https://docs.docker.com/engine/understanding-docker/

Docker Tutorial Series, Part 1: An Introduction | Docker Components
http://blog.flux7.com/blogs/docker/docker-tutorial-series-part-1-an-introduction

Use Docker to kickstart your Drupal development
http://wadmiraal.net/lore/2015/03/27/use-docker-to-kickstart-your-drupal-development/ 

Bind container ports to the host
https://docs.docker.com/engine/userguide/networking/default_network/binding/