Posts Tagged 'Containers'

June 3, 2016

Mount SoftLayer Object Storage in a Docker Container

The popularity of Docker containers has many organizations wanting to host containers in their cloud environments. They’re looking for ways to “marry” their existing cloud storage options with Docker containers, which offer application portability. SoftLayer provides persistent storage for structured or unstructured data with its object, file, and block storage offerings.

Of the three storage options, object storage is usually the most popular in the cloud world as a pay-as-you-go option. It provides persistent storage for workloads such as mobile and web applications that handle image, video, and audio files. Combine that persistence with the power of Docker containers, and the result is a highly portable and flexible application platform on the cloud. I’d like to showcase mounting SoftLayer object storage inside a Docker container using Cloudfuse. This example can, of course, be extended to further automate the mount process as needed.

The following are steps for mounting object storage to a Docker container:

  1. Know your SoftLayer object storage credentials, which can be retrieved from your SoftLayer account:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
  2. Install Docker on your host machine (see Docker’s installation documentation for instructions).
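For reference, one quick way to install Docker on an Ubuntu host is the convenience script below. This is only a sketch; Docker’s official installation documentation covers distribution-specific packages and is the authoritative source.

# On an Ubuntu host: fetch and run Docker's convenience install script
apt-get update && apt-get install -y curl
curl -fsSL https://get.docker.com | sh

# Verify the installation
docker version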

     
  3. Create a new folder named SLObjectStoreTest and make it your current directory.

     
  4. Copy the following into a file named Dockerfile and store it in the SLObjectStoreTest folder. You can also clone it from GitHub.
# Dockerfile : Mount SoftLayer Object Store inside a container
# Version 1.1
 
# Pull base images
FROM ubuntu
 
# Set working directory
WORKDIR /root
 
# Install Python
RUN apt-get update && \
apt-get -y upgrade
 
# Install pip
RUN apt-get install -y python-pip && \
pip install softlayer-object-storage
 
# Install cloudfuse
RUN apt-get install -y build-essential libcurl4-openssl-dev libxml2-dev libssl-dev libfuse-dev && \
apt-get install -y curl && \
curl -L https://github.com/redbo/cloudfuse/tarball/master > cloudfuse.tar && \
tar -xzvf cloudfuse.tar && \
apt-get install -y libjson0 libjson0-dev && \
cd redb* && \
./configure && \
make && \
make install
ENTRYPOINT ["/bin/bash"]
 
  5. Build the Docker image from the Dockerfile by running the following from the SLObjectStoreTest folder:

docker build .

You should see the Docker image being built. It will take a couple of minutes.

  6. Check that the image exists once it’s built by typing docker images.

     
  7. Use the following command to spin up a Docker container from this image:

docker run --cap-add SYS_ADMIN --privileged --device /dev/fuse:/dev/fuse:mrw -i -t <imageid>

You should see the bash command of the Docker container.

  8. Create a new folder where the SoftLayer object storage should be mounted, e.g.,

mkdir /storage

  9. Create a new file in the /root directory named .cloudfuse.
  10. Enter your SoftLayer object storage credentials (from Step 1) in the .cloudfuse file, like below:
username (Your SoftLayer Object Store Username)
api_key (Your SoftLayer API Key or password string)
authurl (Authorization URL of the data center where your object store is hosted)
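For reference, a filled-in .cloudfuse file might look like the following. The values are placeholders, and the key=value layout follows the cloudfuse project’s documented config format; double-check against the cloudfuse README and your own credentials and data center auth URL.

# /root/.cloudfuse - placeholder values only
username=SLOS123456-2:SL654321
api_key=abcdef0123456789abcdef0123456789
authurl=https://dal05.objectstorage.softlayer.net/auth/v1.0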
  11. Mount the SoftLayer object storage at /storage by running

cloudfuse /storage

You should see your SoftLayer object store mounted at /storage in your Docker container!
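A quick sanity check of the mount (a rough sketch; the container name mycontainer is a placeholder for one that already exists in your object store):

# Confirm the fuse mount is active
df -h /storage

# Top-level directories map to object storage containers
ls /storage

# Write a test object into an existing container
echo "hello from docker" > /storage/mycontainer/test.txt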

You can now configure this image to run your application, which can leverage this container, or use the container as a Docker volume container composed with other containers running your application (see the sketch below).
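As a minimal sketch of that volume-container pattern (the names storage-data and myapp are hypothetical, and note that a fuse mount made inside one container is not automatically visible to others, so treat this as illustrating the composition, not fuse propagation):

# Create a data container exposing /storage as a volume, then share it
docker create -v /storage --name storage-data <imageid>
docker run -d --volumes-from storage-data myapp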

If you want to experiment with an already-built Docker image, you can pull it from the softlayerobjectstore_mount repository.

-Sravan K Yallapragada

March 4, 2015

Docker: Containerization for Software

Before modern-day shipping, packing and transporting different shaped boxes and other oddly shaped items from ships to trucks to warehouses was difficult, inefficient, and cumbersome. That was until the modern-day shipping container was introduced to the industry. These containers could easily be stacked and organized onto a cargo ship, then easily transferred to a truck to be sent on to their final destination. Solomon Hykes, Docker founder and CTO, likens Docker to the modern-day shipping industry’s solution for shipping goods. Docker utilizes containerization for shipping software.

Docker, an open platform for distributed applications used by developers and system administrators, leverages standard Linux container technologies and some git-inspired image management technology. Users can create containers that have everything they need to run an application, just like a virtual server, but are much lighter to deploy and manage. Each container has all the binaries it needs, including libraries and middleware, configuration, and an activation process. The containers can be moved around [like containers on ships] and executed on any Docker-enabled server.

Container images are built and maintained using deltas, which can be shared by several other images. Sharing reduces the overall size and allows for easy image storage in Docker registries [like containers on ships]. Any user with access to the registry can download the image and activate it on any server with a couple of commands. Some organizations have development teams that build the images, which are then run by their operations teams.
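For example, pulling and running a shared image really is only a couple of commands (nginx is used here purely as an arbitrary public image):

# Download an image from a registry and start a container from it
docker pull nginx
docker run -d -p 8080:80 nginx

# Inspect the layered (delta-based) history of the image
docker history nginx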

Docker & SoftLayer

The lightweight containers can be used on both virtual servers and bare metal servers, making Docker a nice fit with a SoftLayer offering. You get all the flexibility of a re-imaged server without the downtime. You can create red-black deployments, and mix hourly and monthly servers, both virtual and bare metal.

While many people share images on the public Docker registry, security-minded organizations will want to create a private registry by leveraging SoftLayer object storage. You can create Docker images for a private registry that will store all its information with object storage. Registries are then easy to create and move to new hosts or between data centers.

Creating a Private Docker Registry on SoftLayer

Use the following information to create a private registry that stores data with SoftLayer object storage. [All the commands below were executed on an Ubuntu 14.04 virtual server on SoftLayer.]

Optional setup step: Change Docker backend storage AuFS

Docker has several options for an image storage backend. The default backend, DeviceMapper, was not very stable during testing, failing to start and export images, so the solution was to move to Another Union File System (AuFS). This step may not be necessary in your specific build, depending on updates to the operating system or Docker itself.
  1. Install the following package to enable AuFS:
    apt-get install linux-image-extra-3.13.0-36-generic
  2. Edit /etc/init/docker.conf, and add the following line or argument:
    DOCKER_OPTS="--storage-driver=aufs"
  3. Restart Docker, and check if the backend was changed:
    service docker restart
    docker info

The command should indicate AuFS is being used. The output should look similar to the following:
Containers: 2
Images: 29
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Dirs: 33
Execution Driver: native-0.2
Kernel Version: 3.13.0-36-generic
WARNING: No swap limit support

Step 1: Create image repo

  1. Create the directory registry-os in a work directory.
  2. Create a file named Dockerfile in the registry-os directory. It should contain the following code:
    # start from a registry release known to work
    FROM registry:0.7.3
    # get the swift driver for the registry
    RUN pip install docker-registry-driver-swift==0.0.1
    # SoftLayer uses v1 auth and the sample config doesn't have an option 
    # for it so inject one
    RUN sed -i '91i\    swift_auth_version: _env:OS_AUTH_VERSION' /docker-registry/config/config_sample.yml
  3. Execute the following command from the directory that contains the registry-os directory to build the registry container:
    docker build -t registry-swift:0.7.3 registry-os

Step 2: Start it with your object storage credential

The object storage credentials and the target container must be provided in order to start the registry image. The standard Docker way of doing this is to pass the credentials as environment variables.
docker run -it -d -e SETTINGS_FLAVOR=swift \
    -e OS_AUTH_URL='https://dal05.objectstorage.service.networklayer.com/auth/v1.0' \
    -e OS_AUTH_VERSION=1 \
    -e OS_USERNAME='<API-USER>' \
    -e OS_PASSWORD='<API_KEY>' \
    -e OS_CONTAINER='docker' \
    -e GUNICORN_WORKERS=8 \
    -p 127.0.0.1:5000:5000 \
    registry-swift:0.7.3

This example assumes we are storing images in DAL05 on a container called docker. API_USER and API_KEY are the object storage credentials you can obtain from the portal.
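If you want to confirm the credentials and create the target 'docker' container ahead of time, the OpenStack swift client works against SoftLayer’s v1 auth. This is just a sketch; install python-swiftclient first and substitute your own credentials:

pip install python-swiftclient

# Create the 'docker' container and confirm the credentials work (v1 auth)
swift -A https://dal05.objectstorage.service.networklayer.com/auth/v1.0 \
    -U '<API-USER>' -K '<API_KEY>' post docker

# Show the container's metadata to verify it exists
swift -A https://dal05.objectstorage.service.networklayer.com/auth/v1.0 \
    -U '<API-USER>' -K '<API_KEY>' stat docker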

Step 3: Push image

An image needs to be pushed to the registry to make sure everything works. The image push involves two steps: tagging an image and pushing it to the registry.
docker tag registry-swift:0.7.3 localhost:5000/registry-swift
 
docker push localhost:5000/registry-swift

You can ensure that it worked by inspecting the contents of the container in the object storage.
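One way to inspect it is with the swift client again (a sketch, using the same placeholder credentials as above):

# List the objects the registry wrote into the 'docker' container
swift -A https://dal05.objectstorage.service.networklayer.com/auth/v1.0 \
    -U '<API-USER>' -K '<API_KEY>' list docker | head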

Step 4: Get image

Once the image has been successfully pushed to object storage via the registry, it can be downloaded by issuing the following command:
docker pull localhost:5000/registry-swift
Images can be downloaded from other servers by replacing localhost with the IP address of the registry server, as sketched below.
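A rough sketch of a remote pull (10.0.0.5 is a placeholder for the registry host; this assumes the registry port is published on a reachable interface rather than only 127.0.0.1, and because this registry is served over plain HTTP, the client’s Docker daemon may need the --insecure-registry option added to its DOCKER_OPTS):

# In /etc/init/docker.conf on the client (keep the AuFS option from earlier), then restart Docker
DOCKER_OPTS="--storage-driver=aufs --insecure-registry 10.0.0.5:5000"

# Pull the image by registry address
docker pull 10.0.0.5:5000/registry-swift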

Final Considerations

Once you have created your private registry, Docker images can be pushed throughout your infrastructure. Failure of the machine that hosts the registry can be quickly mitigated by restarting the registry image on another node. Keeping the registry image available on more than one node makes that restart fast, and because the registry’s data lives in object storage, you get to leverage the SoftLayer platform and the high durability of object storage.

If you haven’t explored Docker, visit their site, and review the use cases.

-Thomas

November 13, 2008

Size Isn't Everything

A couple of days ago, I took my daughter to her favorite store. We picked up a fair amount, and on the way to the car she asked a simple question, or so I thought: “Why did they only fill these bags halfway?” Confused, I looked at the bags and realized she was holding a bag with a large stuffed bear in it and was looking at a bag less than half full of canned food.

Being the person I am, rather than attempt to explain this to her, I wanted to let her try to figure it out for herself so she would understand it better. When we got home, I filled the rest of the bag with cans and had her try to pick it up; as I expected, the bag broke in her hands. I explained to her that the cans were much heavier than the bag. She still doesn’t quite understand that the bag has two limits, size and weight, but she is starting to grasp the concept.

I thought about this story this morning when I started working on a project to determine how many containers a Virtuozzo server could handle based on its system requirements. Just like the bag, a Virtuozzo system has multiple limitations that need to be observed: the size of the containers as well as their “weight.” In this situation, “weight” is the drain on overall system resources. When attempting to determine how many containers a system can handle, you need to take into account not only how many will fit size-wise, but also how much of the overall system resources each container will require.

It turns out this question is much easier to ask than to answer. You can take a small server, such as a dual core with 4GB of RAM, put 20 or even 30 containers onto it, and have it run flawlessly when those containers are small and do not require much in the way of system resources. At the same time, however, I can take a quad-proc, quad-core server with 64GB of RAM and grind it to a halt with one or two containers.

At the end of the day, I have found that you can make just about anything work, but before you attempt to determine what hardware you will need to run a Virtuozzo server, it’s a good idea to have an estimate of what you expect the containers to be doing. What could be worse than spending hours configuring a server and getting it online, only to watch it grind to a halt because too many containers are saturating your system resources?

-Mathew
