Friday, October 9, 2015

Docker. Storage driver for CentOS 7.1

By default, when starting a Docker container, the following message appears:
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.

Reading the docker man page reveals the following:
The only backend which currently takes options is devicemapper.

It is strongly recommended to not use loopback devices in production. You can switch to using an LVM thin pool for devicemapper: use the option --storage-opt dm.thinpooldev to pass an LVM thin pool to the docker daemon. Read "man lvmthin" to figure out how to set up an LVM thin pool.
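Both options from that warning map directly onto daemon flags. A minimal sketch (the thin pool path matches the one created later in this post):

# keep loopback but silence the warning (not recommended for production)
docker daemon --storage-driver devicemapper --storage-opt dm.no_warn_on_loop_devices=true

# or hand the daemon a pre-built LVM thin pool (preferred)
docker daemon --storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/dockervg-dockerpool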

Why is the default devicemapper config bad? When Docker sets up loop devices for its thin pool, operations such as image deletion and container I/O can be slow. The strongly recommended alternative is to set up an LVM thin pool and use it as the storage back-end for Docker.

Theory of operation

The device mapper graphdriver uses the device mapper thin provisioning module (dm-thinp) to implement CoW snapshots. The preferred model is to have a thin pool reserved outside of Docker and passed to the daemon via the --storage-opt dm.thinpooldev option.

As a fallback if no thin pool is provided, loopback files will be created. Loopback is very slow, but can be used without any pre-configuration of storage. It is strongly recommended that you do not use loopback in production. Ensure your Docker daemon has a --storage-opt dm.thinpooldev argument provided.

In loopback, a thin pool is created at /var/lib/docker/devicemapper (devicemapper graph location) based on two block devices, one for data and one for metadata. By default these block devices are created automatically by using loopback mounts of automatically created sparse files.

The default loopback files used are /var/lib/docker/devicemapper/devicemapper/data and /var/lib/docker/devicemapper/devicemapper/metadata. Additional metadata required to map from docker entities to the corresponding devicemapper volumes is stored in the /var/lib/docker/devicemapper/devicemapper/json file (encoded as JSON).
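On a loopback host you can inspect these files directly (a sketch; sizes and loop numbers will vary):

ls -lsh /var/lib/docker/devicemapper/devicemapper/
# data has a 100 GB apparent size by default, but as a sparse file it only
# occupies what has actually been written
losetup -a
# /dev/loop0: ... (/var/lib/docker/devicemapper/devicemapper/data)
# /dev/loop1: ... (/var/lib/docker/devicemapper/devicemapper/metadata)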

In order to support multiple devicemapper graphs on a system, the thin pool will be named something like docker-0:33-19478248-pool, where the 0:33 part is the major:minor device nr and 19478248 is the inode number of the /var/lib/docker/devicemapper directory.
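Both numbers can be recovered with stat (a sketch; for common devices the decimal device number decodes as major*256+minor, so 64768 = 253:0):

stat -c '%d' /var/lib/docker/devicemapper   # device number, e.g. 64768 = 253:0
stat -c '%i' /var/lib/docker/devicemapper   # inode number, e.g. 67854726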

On the thin pool, docker automatically creates a base thin device, called something like docker-0:33-19478248-base of a fixed size. This is automatically formatted with an empty filesystem on creation. This device is the base of all docker images and containers. All base images are snapshots of this device and those images are then in turn used as snapshots for other images and eventually containers.
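Both the pool and the base device are visible to device-mapper tooling (a sketch using the pool name from the docker info output below; yours will differ):

dmsetup ls | grep docker
# docker-253:0-67854726-pool   (253:1)
# docker-253:0-67854726-base   (253:2)
dmsetup status docker-253:0-67854726-pool
# prints used/total blocks for both metadata and data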

If we use devicemapper with the default loopback setup, here is what docker info shows:
[user@localhost precise64]$ docker info
Containers: 39
Images: 67
Storage Driver: devicemapper
Pool Name: docker-253:0-67854726-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 4.414 GB
Data Space Total: 107.4 GB
Data Space Available: 10.24 GB
Metadata Space Used: 6.214 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.141 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 1.941 GiB
Name: localhost.localdomain
ID: ZROP:MO75:VNFS:ENRM:WJYC:ZY53:S7KU:HJAF:WIIF:BJ7Z:CJZG:WIPD


Each item in the indented section under Storage Driver: devicemapper is status information about the driver.
  • Pool Name: name of the devicemapper pool for this driver.
  • Pool Blocksize: the block size the thin pool was initialized with. This only changes on creation.
  • Data file: block device file used for the devicemapper data.
  • Metadata file: block device file used for the devicemapper metadata.
  • Data Space Used: how much of the Data file is currently used.
  • Data Space Total: the maximum size of the Data file.
  • Data Space Available: how much free space there is in the Data file. If you are using a loop device this will report the actual space available to the loop device on the underlying filesystem.
  • Metadata Space Used: how much of the Metadata file is currently used.
  • Metadata Space Total: the maximum size of the Metadata file.
  • Metadata Space Available: how much free space there is in the Metadata file. If you are using a loop device this will report the actual space available to the loop device on the underlying filesystem.
  • Udev Sync Supported: whether devicemapper is able to sync with Udev. Should be true.
  • Data loop file: file attached to the Data file, if a loopback device is used.
  • Metadata loop file: file attached to the Metadata file, if a loopback device is used.
  • Library Version: version of the libdevmapper used.
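Since docker info is plain text, a quick capacity check is easy to script (a sketch):

docker info 2>/dev/null | grep -E 'Pool Name|Space (Used|Total|Available)'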

Lvmthin to the rescue

1) Add a raw HDD; here it shows up as /dev/sdb.
2) Partition it with fdisk /dev/sdb. Inside fdisk, use m for help, p to print the existing partition table, n to create a new partition, t to change the partition type (8e is Linux LVM), w to write the changes, and q to quit. The result here is a single partition /dev/sdb1.
3) Create a PV:
pvcreate /dev/sdb1
4) Create a VG:
vgcreate dockervg /dev/sdb1
5) Create a thin pool:
lvcreate -L 19.50G -T dockervg/dockerpool
The pool appears as /dev/mapper/dockervg-dockerpool (device-mapper joins the VG and LV names with a dash). A verification sketch follows the docker info output below.
6) Edit /etc/sysconfig/docker-storage and set:
DOCKER_STORAGE_OPTIONS=--storage-driver devicemapper --storage-opt dm.thinpooldev=/dev/mapper/dockervg-dockerpool
7) systemctl stop docker
8) rm -rf /var/lib/docker (careful: this deletes all existing images and containers)
9) systemctl start docker
10) docker info
[root@localhost ~]# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: dockervg-dockerpool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 307.2 MB
Data Space Total: 20.94 GB
Data Space Available: 20.63 GB
Metadata Space Used: 262.1 kB
Metadata Space Total: 20.97 MB
Metadata Space Available: 20.71 MB
Udev Sync Supported: true
Deferred Removal Enabled: false
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 1.941 GiB
Name: localhost.localdomain
ID: ZROP:MO75:VNFS:ENRM:WJYC:ZY53:S7KU:HJAF:WIIF:BJ7Z:CJZG:WIPD
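To verify the pool from the LVM side (a sketch; Attr flags and percentages are illustrative, though Data% and Meta% line up with the Space Used figures above):

lvs -a dockervg
#  LV                 VG       Attr       LSize  Pool Data%  Meta%
#  dockerpool         dockervg twi-aotz-- 19.50g      1.47   1.25
#  [dockerpool_tdata] dockervg Twi-ao---- 19.50g
#  [dockerpool_tmeta] dockervg ewi-ao---- 20.00m

# the pool can be grown online later if it fills up
lvextend -L +10G dockervg/dockerpool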

Alternatives

One can use OverlayFS: it was merged into mainline kernel 3.18, so it will presumably become available in CentOS 7.2.
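If the host kernel supports it, the driver is selected the same way as the others (a sketch; in Docker of this era the driver name is overlay):

DOCKER_STORAGE_OPTIONS=--storage-driver overlay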

Also, you can set up Ubuntu Server 14.04.3 LTS as the host. Ubuntu supports AUFS out of the box.
