Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn 2023-05-19 01:42:52 +02:00
Parent 20923dfbc7
Commit 5d6612798a
No key found matching this signature
GPG key ID: 76698F39D527CE8C
5 changed files: 18 additions and 799 deletions

@@ -559,11 +559,12 @@ backported), there is no reason to keep maintaining the `overlay` storage driver
**Deprecated in Release: v18.09**
**Disabled by default in Release: v23.0**
**Removed in Release: v25.0**
The `devicemapper` storage driver is deprecated in favor of `overlay2`, and will
be removed in a future release. Users of the `devicemapper` storage driver are
recommended to migrate to a different storage driver, such as `overlay2`, which
is now the default storage driver.
The `devicemapper` storage driver is deprecated in favor of `overlay2`, and has
been removed in Docker Engine v25.0. Users of the `devicemapper` storage driver
must migrate to a different storage driver, such as `overlay2`, before upgrading
to Docker Engine v25.0.
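For illustration, a minimal `/etc/docker/daemon.json` selecting the new default
driver might look like this (a sketch; note that switching storage drivers makes
existing images and containers invisible to the daemon, so re-pull or back up
anything you still need first):

```json
{
  "storage-driver": "overlay2"
}
```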
The `devicemapper` storage driver facilitates running Docker on older (3.x) kernels
that have no support for other storage drivers (such as `overlay2` or `btrfs`).
@@ -572,24 +573,6 @@ Now that support for `overlay2` is added to all supported distros (as they are
either on kernel 4.x, or have support for multiple lowerdirs backported), there
is no reason to continue maintenance of the `devicemapper` storage driver.
#### Disabled by default in v23.0
Docker already prevented deprecated storage drivers from being automatically
selected on new installations, but continued to use these drivers when upgrading
existing installations. Starting with the v23.0 release, the Docker Engine will
fail to start if a deprecated storage driver is used (see [moby#43378](https://github.com/moby/moby/pull/43378)):
```console
failed to start daemon: error initializing graphdriver: prior storage driver
devicemapper is deprecated and will be removed in a future release; update the daemon
configuration and explicitly choose this storage driver to continue using it;
visit https://docs.docker.com/go/storage-driver/ for more information.
```
To continue using the storage driver, update the daemon configuration to
explicitly use the given storage driver. Users are encouraged to migrate to a
different storage driver.
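As a sketch, explicitly opting back in to the deprecated driver in
`/etc/docker/daemon.json` (assuming the daemon previously auto-selected it on
upgrade) would look like this:

```json
{
  "storage-driver": "devicemapper"
}
```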
### Use of reserved namespaces in engine labels
**Deprecated in Release: v18.06**

@@ -323,7 +323,7 @@ $ docker -H tcp://127.0.0.1:2375 pull ubuntu
### Daemon storage-driver
On Linux, the Docker daemon has support for several different image layer storage
drivers: `overlay2`, `fuse-overlayfs`, `btrfs`, `zfs`, and `devicemapper`.
drivers: `overlay2`, `fuse-overlayfs`, `btrfs`, and `zfs`.
`overlay2` is the preferred storage driver for all currently supported Linux distributions,
and is selected by default. Unless users have a strong reason to prefer another storage driver,
@@ -336,440 +336,8 @@ On Windows, the Docker daemon only supports the `windowsfilter` storage driver.
### Options per storage driver
A particular storage driver can be configured with options specified with
`--storage-opt` flags. Options for `devicemapper` are prefixed with `dm`,
options for `zfs` start with `zfs`, and options for `btrfs` start with `btrfs`.
#### Devicemapper options
This is an example of the configuration file for devicemapper on Linux:
```json
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/thin-pool",
"dm.use_deferred_deletion=true",
"dm.use_deferred_removal=true"
]
}
```
##### `dm.thinpooldev`
Specifies a custom block storage device to use for the thin pool.
If using a block device for device mapper storage, it is best to use `lvm`
to create and manage the thin-pool volume. This volume is then handed to Docker
to exclusively create snapshot volumes needed for images and containers.
Managing the thin-pool outside of Engine makes for the most feature-rich
method of having Docker utilize device mapper thin provisioning as the
backing storage for Docker containers. The highlights of the lvm-based
thin-pool management feature include: automatic or interactive thin-pool
resize support, dynamically changing thin-pool features, automatic thinp
metadata checking when lvm activates the thin-pool, etc.
As a fallback if no thin pool is provided, loopback files are
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is strongly recommended that you do
not use loopback in production. Ensure your Engine daemon has a
`--storage-opt dm.thinpooldev` argument provided.
###### Example:
```console
$ sudo dockerd --storage-opt dm.thinpooldev=/dev/mapper/thin-pool
```
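For reference, a minimal sketch of preparing such a thin pool with `lvm`
(the device `/dev/xvdf` and the `docker`/`thinpool` names are assumptions;
adapt them to your environment, and see `lvmthin(7)` for details):

```console
$ sudo pvcreate /dev/xvdf
$ sudo vgcreate docker /dev/xvdf
$ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
$ sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
$ sudo lvconvert -y --zero n -c 512K \
  --thinpool docker/thinpool \
  --poolmetadata docker/thinpoolmeta
```

The resulting pool can then be passed to the daemon as
`--storage-opt dm.thinpooldev=/dev/mapper/docker-thinpool`.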
##### `dm.directlvm_device`
As an alternative to providing a thin pool as above, Docker can set up a block
device for you.
###### Example:
```console
$ sudo dockerd --storage-opt dm.directlvm_device=/dev/xvdf
```
##### `dm.thinp_percent`
Sets the percentage of the passed-in block device to use for storage.
###### Example:
```console
$ sudo dockerd --storage-opt dm.thinp_percent=95
```
##### `dm.thinp_metapercent`
Sets the percentage of the passed-in block device to use for metadata storage.
###### Example:
```console
$ sudo dockerd --storage-opt dm.thinp_metapercent=1
```
##### `dm.thinp_autoextend_threshold`
Sets the value of the percentage of space used before `lvm` attempts to
autoextend the available space [100 = disabled]
###### Example:
```console
$ sudo dockerd --storage-opt dm.thinp_autoextend_threshold=80
```
##### `dm.thinp_autoextend_percent`
Sets the percentage value to increase the thin pool by when `lvm`
attempts to autoextend the available space [100 = disabled]
###### Example:
```console
$ sudo dockerd --storage-opt dm.thinp_autoextend_percent=20
```
##### `dm.basesize`
Specifies the size to use when creating the base device, which limits the
size of images and containers. The default value is 10G. Note, thin devices
are inherently "sparse", so a 10G device which is mostly empty doesn't use
10 GB of space on the pool. However, the larger the device is, the more space
the filesystem uses even while empty.
The base device size can be increased at daemon restart which will allow
all future images and containers (based on those new images) to be of the
new base device size.
###### Examples
```console
$ sudo dockerd --storage-opt dm.basesize=50G
```
This will increase the base device size to 50G. The Docker daemon will throw an
error if the existing base device size is larger than 50G. A user can use
this option to expand the base device size; however, shrinking is not permitted.
This value affects the system-wide "base" empty filesystem
that may already be initialized and inherited by pulled images. Typically,
a change to this value requires additional steps to take effect:
```console
$ sudo service docker stop
$ sudo rm -rf /var/lib/docker
$ sudo service docker start
```
##### `dm.loopdatasize`
> **Note**
>
> This option configures devicemapper loopback, which should not
> be used in production.
Specifies the size to use when creating the loopback file for the
"data" device which is used for the thin pool. The default size is
100G. The file is sparse, so it will not initially take up this
much space.
###### Example
```console
$ sudo dockerd --storage-opt dm.loopdatasize=200G
```
##### `dm.loopmetadatasize`
> **Note**
>
> This option configures devicemapper loopback, which should not
> be used in production.
Specifies the size to use when creating the loopback file for the
"metadata" device which is used for the thin pool. The default size
is 2G. The file is sparse, so it will not initially take up
this much space.
###### Example
```console
$ sudo dockerd --storage-opt dm.loopmetadatasize=4G
```
##### `dm.fs`
Specifies the filesystem type to use for the base device. The supported
options are "ext4" and "xfs". The default is "xfs"
###### Example
```console
$ sudo dockerd --storage-opt dm.fs=ext4
```
##### `dm.mkfsarg`
Specifies extra mkfs arguments to be used when creating the base device.
###### Example
```console
$ sudo dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"
```
##### `dm.mountopt`
Specifies extra mount options used when mounting the thin devices.
###### Example
```console
$ sudo dockerd --storage-opt dm.mountopt=nodiscard
```
##### `dm.datadev`
(Deprecated, use `dm.thinpooldev`)
Specifies a custom block device to use for data for the thin pool.
If using a block device for device mapper storage, ideally both `datadev` and
`metadatadev` should be specified to completely avoid using the loopback
device.
###### Example
```console
$ sudo dockerd \
  --storage-opt dm.datadev=/dev/sdb1 \
  --storage-opt dm.metadatadev=/dev/sdc1
```
##### `dm.metadatadev`
(Deprecated, use `dm.thinpooldev`)
Specifies a custom block device to use for metadata for the thin pool.
For best performance the metadata should be on a different spindle than the
data, or even better on an SSD.
If setting up a new metadata pool, it must be valid. This can be
achieved by zeroing the first 4k to indicate empty metadata, like this:
```console
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
```
###### Example
```console
$ sudo dockerd \
  --storage-opt dm.datadev=/dev/sdb1 \
  --storage-opt dm.metadatadev=/dev/sdc1
```
##### `dm.blocksize`
Specifies a custom blocksize to use for the thin pool. The default
blocksize is 64K.
###### Example
```console
$ sudo dockerd --storage-opt dm.blocksize=512K
```
##### `dm.blkdiscard`
Enables or disables the use of `blkdiscard` when removing devicemapper
devices. This is enabled by default (only) if using loopback devices and is
required to resparsify the loopback file on image/container removal.
Disabling this on loopback can lead to *much* faster container removal
times, but it prevents the space used in the `/var/lib/docker` directory from
being returned to the system for other use when containers are removed.
###### Examples
```console
$ sudo dockerd --storage-opt dm.blkdiscard=false
```
##### `dm.override_udev_sync_check`
Overrides the `udev` synchronization checks between `devicemapper` and `udev`.
`udev` is the device manager for the Linux kernel.
To view the `udev` sync support of a Docker daemon that is using the
`devicemapper` driver, run:
```console
$ docker info
<...>
Udev Sync Supported: true
<...>
```
When `udev` sync support is `true`, then `devicemapper` and udev can
coordinate the activation and deactivation of devices for containers.
When `udev` sync support is `false`, a race condition occurs between
the `devicemapper` and `udev` during create and cleanup. The race condition
results in errors and failures. (For information on these failures, see
[docker#4036](https://github.com/docker/docker/issues/4036))
To allow the `docker` daemon to start even when `udev` sync is not
supported, set `dm.override_udev_sync_check` to true:
```console
$ sudo dockerd --storage-opt dm.override_udev_sync_check=true
```
When this value is `true`, the `devicemapper` driver continues and simply warns
you that the errors are happening.
> **Note**
>
> The ideal is to pursue a `docker` daemon and environment that does
> support synchronizing with `udev`. For further discussion on this
> topic, see [docker#4036](https://github.com/docker/docker/issues/4036).
> Otherwise, set this flag for migrating existing Docker daemons to
> a daemon with a supported environment.
##### `dm.use_deferred_removal`
Enables use of deferred device removal if `libdm` and the kernel driver
support the mechanism.
Deferred device removal means that if a device is busy when it is being
removed or deactivated, a deferred removal is scheduled for the device. The
device is then removed automatically when its last user exits.
For example, when a container exits, its associated thin device is removed.
If that device has leaked into some other mount namespace and can't be
removed, the container exit still succeeds and this option causes the
system to schedule the device for deferred removal. It does not wait in a
loop trying to remove a busy device.
###### Example
```console
$ sudo dockerd --storage-opt dm.use_deferred_removal=true
```
##### `dm.use_deferred_deletion`
Enables use of deferred device deletion for thin pool devices. By default,
thin pool device deletion is synchronous. Before a container is deleted,
the Docker daemon removes any associated devices. If the storage driver
cannot remove a device, the container deletion fails and the daemon returns
an error:
```console
Error deleting container: Error response from daemon: Cannot destroy container
```
To avoid this failure, enable both deferred device deletion and deferred
device removal on the daemon.
```console
$ sudo dockerd \
  --storage-opt dm.use_deferred_deletion=true \
  --storage-opt dm.use_deferred_removal=true
```
With these two options enabled, if a device is busy when the driver is
deleting a container, the driver marks the device as deleted. Later, when
the device isn't in use, the driver deletes it.
In general, it should be safe to enable this option by default. It helps
when mount points are unintentionally leaked across multiple mount
namespaces.
##### `dm.min_free_space`
Specifies the minimum free space percentage in a thin pool required for new
device creation to succeed. This check applies to both free data space and
free metadata space. Valid values are from 0% to 99%. A value of 0% disables
the free space checking logic. If the user does not specify a value for this
option, the Engine uses a default value of 10%.
Whenever a new thin pool device is created (during `docker pull` or during
container creation), the Engine checks if the minimum free space is
available. If sufficient space is unavailable, then device creation fails
and any relevant `docker` operation fails.
To recover from this error, you must create more free space in the thin pool.
You can create free space by deleting some images and containers from the thin
pool. You can also add more storage to the thin pool.
To add more space to an LVM (logical volume management) thin pool, just add
more storage to the volume group containing the thin pool; this should automatically
resolve any errors. If your configuration uses loop devices, then stop the
Engine daemon, grow the size of loop files and restart the daemon to resolve
the issue.
###### Example
```console
$ sudo dockerd --storage-opt dm.min_free_space=10%
```
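As an illustration of the LVM recovery step above (a sketch, assuming the
volume group is named `docker` and `/dev/xvdg` is an unused device):

```console
$ sudo vgextend docker /dev/xvdg
$ sudo lvextend -l +100%FREE docker/thinpool
```

The second command is only needed if the thin pool is not configured to
autoextend.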
##### `dm.xfs_nospace_max_retries`
Specifies the maximum number of retries XFS should attempt to complete
IO when an ENOSPC (no space) error is returned by the underlying storage device.
By default, XFS retries infinitely for IO to finish, which can result
in an unkillable process. To change this behavior, set
`xfs_nospace_max_retries` to, for example, 0: XFS will then not retry IO
after getting ENOSPC and will shut down the filesystem.
###### Example
```console
$ sudo dockerd --storage-opt dm.xfs_nospace_max_retries=0
```
##### `dm.libdm_log_level`
Specifies the maximum `libdm` log level that will be forwarded to the
`dockerd` log (as specified by `--log-level`). This option is primarily
intended for debugging problems involving `libdm`. Using values other than the
defaults may cause false-positive warnings to be logged.
Values specified must fall within the range of valid `libdm` log levels. At the
time of writing, the following is the list of `libdm` log levels as well as
their corresponding levels when output by `dockerd`.
| `libdm` Level | Value | `--log-level` |
|---------------|------:|---------------|
| `_LOG_FATAL` | 2 | error |
| `_LOG_ERR` | 3 | error |
| `_LOG_WARN` | 4 | warn |
| `_LOG_NOTICE` | 5 | info |
| `_LOG_INFO` | 6 | info |
| `_LOG_DEBUG` | 7 | debug |
###### Example
```console
$ sudo dockerd \
  --log-level debug \
  --storage-opt dm.libdm_log_level=7
```
`--storage-opt` flags. Options for `zfs` start with `zfs`, and options for
`btrfs` start with `btrfs`.
#### ZFS options

@@ -198,15 +198,15 @@ $ docker run -it --storage-opt size=120G fedora /bin/bash
```
This (size) constrains the container filesystem size to 120G at creation time.
This option is only available for the `devicemapper`, `btrfs`, `overlay2`,
`windowsfilter` and `zfs` storage drivers.
This option is only available for the `btrfs`, `overlay2`, `windowsfilter`,
and `zfs` storage drivers.
For the `overlay2` storage driver, the size option is only available if the
backing filesystem is `xfs` and mounted with the `pquota` mount option.
Under these conditions, you can pass any size less than the backing filesystem size.
For the `windowsfilter`, `devicemapper`, `btrfs`, and `zfs` storage drivers,
you cannot pass a size less than the Default BaseFS Size.
For the `windowsfilter`, `btrfs`, and `zfs` storage drivers, you cannot pass a
size less than the Default BaseFS Size.
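One way to spot-check the `overlay2` prerequisite above (a sketch; the output
shown is illustrative and will vary per host) is to inspect the mount backing
`/var/lib/docker`:

```console
$ findmnt -no FSTYPE,OPTIONS --target /var/lib/docker
xfs    rw,relatime,attr2,inode64,prjquota
```

If the filesystem type is not `xfs`, or the options contain neither `pquota`
nor `prjquota`, the `overlay2` size option will not be available.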
### <a name="tmpfs"></a> Mount tmpfs (--tmpfs)

@@ -622,8 +622,8 @@ incompatible with any restart policy other than `none`.
$ docker run -it --storage-opt size=120G fedora /bin/bash
This (size) allows you to set the container rootfs size to 120G at creation time.
This option is only available for the `devicemapper`, `btrfs`, `overlay2` and `zfs` graph drivers.
For the `devicemapper`, `btrfs` and `zfs` storage drivers, user cannot pass a size less than the Default BaseFS Size.
This option is only available for the `btrfs`, `overlay2` and `zfs` graph drivers.
For the `btrfs` and `zfs` storage drivers, the user cannot pass a size less than the Default BaseFS Size.
For the `overlay2` storage driver, the size option is only available if the backing fs is `xfs` and mounted with the `pquota` mount option.
Under these conditions, the user can pass any size less than the backing fs size.

@@ -416,347 +416,15 @@ unix://[/path/to/socket] to use.
# STORAGE DRIVER OPTIONS
Docker uses storage backends (known as "graphdrivers" in the Docker
Docker uses storage backends (known as "storage drivers" in the Docker
internals) to create writable containers from images. Many of these
backends use operating system level technologies and can be
configured.
Specify options to the storage backend with **--storage-opt** flags. The
backends that currently take options are **devicemapper**, **zfs** and **btrfs**.
Options for **devicemapper** are prefixed with **dm.**, options for **zfs**
start with **zfs.**, and options for **btrfs** start with **btrfs.**.
Specifically for devicemapper, the default is a "loopback" model which
requires no pre-configuration, but is extremely inefficient. Do not
use it in production.
To make the best use of Docker with the devicemapper backend, you must
have a recent version of LVM. Use **lvm**(8) to create a thin pool; for
more information, see **lvmthin**(7). Then, use **--storage-opt
dm.thinpooldev** to tell the Docker engine to use that pool for
allocating images and container snapshots.
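For illustration only, a minimal sketch of creating such a pool (the volume
group name **docker** and device **/dev/xvdf** are placeholders):

    $ sudo pvcreate /dev/xvdf
    $ sudo vgcreate docker /dev/xvdf
    $ sudo lvcreate --wipesignatures y -n thinpool docker -l 95%VG
    $ sudo lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
    $ sudo lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

The resulting **/dev/mapper/docker-thinpool** can then be passed via
**--storage-opt dm.thinpooldev**.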
## Devicemapper options
#### dm.thinpooldev
Specifies a custom block storage device to use for the thin pool.
If using a block device for device mapper storage, it is best to use `lvm`
to create and manage the thin-pool volume. This volume is then handed to Docker
to exclusively create snapshot volumes needed for images and containers.
Managing the thin-pool outside of Engine makes for the most feature-rich
method of having Docker utilize device mapper thin provisioning as the
backing storage for Docker containers. The highlights of the lvm-based
thin-pool management feature include: automatic or interactive thin-pool
resize support, dynamically changing thin-pool features, automatic thinp
metadata checking when lvm activates the thin-pool, etc.
As a fallback if no thin pool is provided, loopback files are
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is strongly recommended that you do
not use loopback in production. Ensure your Engine daemon has a
`--storage-opt dm.thinpooldev` argument provided.
Example use:
    $ dockerd \
        --storage-opt dm.thinpooldev=/dev/mapper/thin-pool
#### dm.directlvm\_device
As an alternative to manually creating a thin pool as above, Docker can
automatically configure a block device for you.
Example use:
    $ dockerd \
        --storage-opt dm.directlvm_device=/dev/xvdf
#### dm.thinp\_percent
Sets the percentage of the passed-in block device to use for storage.
Example:
    $ sudo dockerd \
        --storage-opt dm.thinp_percent=95
#### dm.thinp\_metapercent
Sets the percentage of the passed-in block device to use for metadata storage.
Example:
    $ sudo dockerd \
        --storage-opt dm.thinp_metapercent=1
#### dm.thinp\_autoextend\_threshold
Sets the value of the percentage of space used before `lvm` attempts to
autoextend the available space [100 = disabled]
Example:
    $ sudo dockerd \
        --storage-opt dm.thinp_autoextend_threshold=80
#### dm.thinp\_autoextend\_percent
Sets the percentage value to increase the thin pool by when `lvm`
attempts to autoextend the available space [100 = disabled]
Example:
    $ sudo dockerd \
        --storage-opt dm.thinp_autoextend_percent=20
#### dm.basesize
Specifies the size to use when creating the base device, which limits
the size of images and containers. The default value is 10G. Note,
thin devices are inherently "sparse", so a 10G device which is mostly
empty doesn't use 10 GB of space on the pool. However, the filesystem
will use more space for base images the larger the device
is.
The base device size can be increased at daemon restart which will allow
all future images and containers (based on those new images) to be of the
new base device size.
Example use: `dockerd --storage-opt dm.basesize=50G`
This will increase the base device size to 50G. The Docker daemon will throw an
error if the existing base device size is larger than 50G. A user can use
this option to expand the base device size; however, shrinking is not permitted.
This value affects the system-wide "base" empty filesystem that may already
be initialized and inherited by pulled images. Typically, a change to this
value requires additional steps to take effect:
    $ sudo service docker stop
    $ sudo rm -rf /var/lib/docker
    $ sudo service docker start
Example use: `dockerd --storage-opt dm.basesize=20G`
#### dm.fs
Specifies the filesystem type to use for the base device. The
supported options are `ext4` and `xfs`. The default is `ext4`.
Example use: `dockerd --storage-opt dm.fs=xfs`
#### dm.mkfsarg
Specifies extra mkfs arguments to be used when creating the base device.
Example use: `dockerd --storage-opt "dm.mkfsarg=-O ^has_journal"`
#### dm.mountopt
Specifies extra mount options used when mounting the thin devices.
Example use: `dockerd --storage-opt dm.mountopt=nodiscard`
#### dm.use\_deferred\_removal
Enables use of deferred device removal if `libdm` and the kernel driver
support the mechanism.
Deferred device removal means that if a device is busy when it is being
removed or deactivated, a deferred removal is scheduled for the device. The
device is then removed automatically when its last user exits.
For example, when a container exits, its associated thin device is removed. If
that device has leaked into some other mount namespace and can't be removed,
the container exit still succeeds and this option causes the system to schedule
the device for deferred removal. It does not wait in a loop trying to remove a
busy device.
Example use: `dockerd --storage-opt dm.use_deferred_removal=true`
#### dm.use\_deferred\_deletion
Enables use of deferred device deletion for thin pool devices. By default,
thin pool device deletion is synchronous. Before a container is deleted, the
Docker daemon removes any associated devices. If the storage driver cannot
remove a device, the container deletion fails and the daemon returns an error:
`Error deleting container: Error response from daemon: Cannot destroy container`
To avoid this failure, enable both deferred device deletion and deferred
device removal on the daemon.
`dockerd --storage-opt dm.use_deferred_deletion=true --storage-opt dm.use_deferred_removal=true`
With these two options enabled, if a device is busy when the driver is
deleting a container, the driver marks the device as deleted. Later, when the
device isn't in use, the driver deletes it.
In general, it should be safe to enable this option by default. It helps
when mount points are unintentionally leaked across multiple mount
namespaces.
#### dm.loopdatasize
**Note**: This option configures devicemapper loopback, which should not be
used in production.
Specifies the size to use when creating the loopback file for the "data" device
which is used for the thin pool. The default size is 100G. The file is sparse,
so it will not initially take up this much space.
Example use: `dockerd --storage-opt dm.loopdatasize=200G`
#### dm.loopmetadatasize
**Note**: This option configures devicemapper loopback, which should not be
used in production.
Specifies the size to use when creating the loopback file for the "metadata"
device which is used for the thin pool. The default size is 2G. The file is
sparse, so it will not initially take up this much space.
Example use: `dockerd --storage-opt dm.loopmetadatasize=4G`
#### dm.datadev
(Deprecated, use `dm.thinpooldev`)
Specifies a custom block device to use for data for a Docker-managed thin pool.
It is better to use `dm.thinpooldev` - see the documentation for it above for
discussion of the advantages.
#### dm.metadatadev
(Deprecated, use `dm.thinpooldev`)
Specifies a custom block device to use for metadata for a Docker-managed thin
pool. See `dm.datadev` for why this is deprecated.
#### dm.blocksize
Specifies a custom blocksize to use for the thin pool. The default
blocksize is 64K.
Example use: `dockerd --storage-opt dm.blocksize=512K`
#### dm.blkdiscard
Enables or disables the use of `blkdiscard` when removing devicemapper devices.
This is disabled by default due to the additional latency, but as a special
case with loopback devices it will be enabled, in order to re-sparsify the
loopback file on image/container removal.
Disabling this on loopback can lead to *much* faster container removal times,
but it also prevents the space used in `/var/lib/docker` directory from being
returned to the system for other use when containers are removed.
Example use: `dockerd --storage-opt dm.blkdiscard=false`
#### dm.override\_udev\_sync\_check
By default, the devicemapper backend attempts to synchronize with the `udev`
device manager for the Linux kernel. This option allows disabling that
synchronization, to continue even though the configuration may be buggy.
To view the udev sync support of a Docker daemon that is using the
devicemapper driver, run:
    $ docker info
    [...]
    Udev Sync Supported: true
    [...]
When udev sync support is **true**, then devicemapper and udev can
coordinate the activation and deactivation of devices for containers.
When udev sync support is **false**, a race condition occurs between the
devicemapper and udev during create and cleanup. The race condition results
in errors and failures. (For information on these failures, see
[docker#4036](https://github.com/docker/docker/issues/4036))
To allow the docker daemon to start even when udev sync support is
**false**, set **dm.override_udev_sync_check** to **true**:
    $ dockerd --storage-opt dm.override_udev_sync_check=true
When this value is **true**, the driver continues and simply warns you that
the errors are happening.
**Note**: The ideal is to pursue a `docker` daemon and environment that does
support synchronizing with `udev`. For further discussion on this topic, see
[docker#4036](https://github.com/docker/docker/issues/4036).
Otherwise, set this flag for migrating existing Docker daemons to a daemon with
a supported environment.
#### dm.min\_free\_space
Specifies the minimum free space percentage in a thin pool required for new
device creation to succeed. This check applies to both free data space and
free metadata space. Valid values are from 0% to 99%. A value of 0% disables
the free space checking logic. If the user does not specify a value for this
option, the Engine uses a default value of 10%.
Whenever a new thin pool device is created (during `docker pull` or during
container creation), the Engine checks if the minimum free space is available.
If the space is unavailable, then device creation fails and any relevant
`docker` operation fails.
To recover from this error, you must create more free space in the thin pool.
You can create free space by deleting some images and containers from the thin
pool. You can also add more storage to the thin pool.
To add more space to an LVM (logical volume management) thin pool, just add
more storage to the volume group containing the thin pool; this should automatically
resolve any errors. If your configuration uses loop devices, then stop the
Engine daemon, grow the size of loop files and restart the daemon to resolve
the issue.
Example use: `dockerd --storage-opt dm.min_free_space=10%`
#### dm.xfs\_nospace\_max\_retries
Specifies the maximum number of retries XFS should attempt to complete IO when
an ENOSPC (no space) error is returned by the underlying storage device.
By default, XFS retries infinitely for IO to finish, and this can result in an
unkillable process. To change this behavior, set **xfs_nospace_max_retries**
to e.g. **0**: XFS will then not retry IO after getting ENOSPC and will shut
down the filesystem.
Example use:
    $ sudo dockerd --storage-opt dm.xfs_nospace_max_retries=0
#### dm.libdm\_log\_level
Specifies the maximum libdm log level that will be forwarded to the dockerd
log (as specified by **--log-level**). This option is primarily intended for
debugging problems involving libdm. Using values other than the defaults may
cause false-positive warnings to be logged.
Values specified must fall within the range of valid libdm log levels. At the
time of writing, the following is the list of libdm log levels as well as their
corresponding levels when output by dockerd.
| libdm Level | Value | --log-level |
| ------------- | -----:| ----------- |
| \_LOG\_FATAL | 2 | error |
| \_LOG\_ERR | 3 | error |
| \_LOG\_WARN | 4 | warn |
| \_LOG\_NOTICE | 5 | info |
| \_LOG\_INFO | 6 | info |
| \_LOG\_DEBUG | 7 | debug |
Example use:
    $ sudo dockerd \
        --log-level debug \
        --storage-opt dm.libdm_log_level=7
backends that currently take options are **zfs** and **btrfs**.
Options for **zfs** start with **zfs.**, and options for **btrfs** start
with **btrfs.**.
## ZFS options