In some cases, if a user specifies `-f` when disabling a plugin, mounts
can still exist on the plugin rootfs.
This can cause problems during upgrade, where the rootfs is removed, and
may lead to data loss.
To resolve this, ensure the rootfs is unmounted
before performing an upgrade.
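As an illustration only (not the actual manager code), the guard could look
roughly like this, assuming the plugin rootfs path is already known:

    package pluginutil

    import (
        "github.com/docker/docker/pkg/mount"
        "golang.org/x/sys/unix"
    )

    // ensureRootfsUnmounted lazily detaches anything still mounted on the
    // plugin rootfs so that removing the rootfs during upgrade cannot
    // destroy a live mount.
    func ensureRootfsUnmounted(rootfs string) error {
        mounted, err := mount.Mounted(rootfs)
        if err != nil || !mounted {
            return err
        }
        // MNT_DETACH: lazy unmount, so a busy mount does not block the upgrade
        return unix.Unmount(rootfs, unix.MNT_DETACH)
    }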
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
While restoring plugins during daemon restart, some plugins can fail to
respond to net.Dial. Such plugins should be explicitly set to disabled;
otherwise they retain their original state of enabled, which is
incorrect.
Tested with a plugin that fails to restart and observed that the state
was set to disabled.
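A minimal sketch of the restore-time check (the helper name is ours; the real
code lives in the plugin manager):

    package pluginutil

    import (
        "net"
        "time"
    )

    // pluginResponds dials the plugin's unix socket and reports whether the
    // plugin answered. A plugin that does not answer is recorded as disabled
    // by the caller instead of keeping its stale "enabled" state.
    func pluginResponds(sockAddr string) bool {
        conn, err := net.DialTimeout("unix", sockAddr, 5*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }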
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
When a plugin fails to start, we still incorrectly mark it as enabled.
This change verifies that we can dial the plugin socket to confirm that
the plugin is functional, and only then marks it as enabled. Also,
don't delete the plugin during install if only the enable step fails.
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
This persists the "propagated mount" for plugins outside the main
rootfs, so that `docker plugin upgrade` does not remove potentially
important data, and plugin authors are not forced to hard-code a host
path to persist data to.
It also migrates, on daemon startup, old plugins whose propagated mount
is still inside the rootfs.
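An illustrative sketch of the startup migration (the directory names used
here are hypothetical, not the real on-disk layout):

    package pluginutil

    import (
        "os"
        "path/filepath"
    )

    // migratePropagatedMount moves a propagated mount that still lives inside
    // the rootfs to a sibling directory that survives an upgrade.
    func migratePropagatedMount(pluginDir, propagatedMount string) error {
        oldPath := filepath.Join(pluginDir, "rootfs", propagatedMount)
        newPath := filepath.Join(pluginDir, "propagated-mount")
        if _, err := os.Stat(oldPath); os.IsNotExist(err) {
            return nil // nothing to migrate
        }
        return os.Rename(oldPath, newPath)
    }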
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This allows a plugin to be upgraded without having to uninstall and
reinstall it.
Since plugin resources (e.g. volumes) are tied to a plugin ID, this is
important to ensure resources aren't lost.
The plugin must be disabled while upgrading (errors out if enabled).
This does not add any convenience flags for automatically
disabling/re-enabling the plugin before/after the upgrade.
Since an upgrade may change requested permissions, the user is required
to accept permissions just like `docker plugin install`.
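For example, the expected flow is: `docker plugin disable <name>`, then
`docker plugin upgrade <name> <remote>` (accepting the permission prompt
again), then `docker plugin enable <name>`.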
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
The `digest` data type, used throughout docker for image verification
and identity, has been broken out into `opencontainers/go-digest`. This
PR updates the dependencies and moves uses over to the new type.
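For illustration, a minimal usage sketch of the new type (the helper names
are ours, not part of the PR):

    package pluginutil

    import "github.com/opencontainers/go-digest"

    // digestOf hashes content with the canonical algorithm (sha256) and
    // returns a value such as "sha256:...".
    func digestOf(blob []byte) digest.Digest {
        return digest.FromBytes(blob)
    }

    // verifyBlob checks content against an expected digest.
    func verifyBlob(blob []byte, want digest.Digest) (bool, error) {
        verifier := want.Verifier()
        if _, err := verifier.Write(blob); err != nil {
            return false, err
        }
        return verifier.Verified(), nil
    }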
Signed-off-by: Stephen J Day <stephen.day@docker.com>
Move plugins to shared distribution stack with images.
Create immutable plugin config that matches schema2 requirements.
Ensure data being pushed is same as pulled/created.
Store distribution artifacts in a blobstore.
Run init layer setup for every plugin start.
Fix breakouts from unsafe file accesses.
Add support for `docker plugin install --alias`.
Uses normalized references for default names to avoid collisions when using default hosts/tags.
Some refactoring of the plugin manager to support the change, such as removing the singleton manager and adding a manager config struct.
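A rough sketch of what normalization means for default names (the helper is
ours; it uses the distribution reference package):

    package pluginutil

    import "github.com/docker/distribution/reference"

    // normalizeName expands a bare name such as "myplugin" to
    // "docker.io/library/myplugin:latest", so default hosts/tags cannot
    // produce two different entries for the same short name.
    func normalizeName(name string) (string, error) {
        named, err := reference.ParseNormalizedNamed(name)
        if err != nil {
            return "", err
        }
        // add the default tag when no tag or digest was given
        return reference.TagNameOnly(named).String(), nil
    }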
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
Fixes an issue where, when starting the daemon with live-restore after
previously running without it, plugins are not running.
Also fixes an issue where, when starting the daemon with live-restore,
the plugin client (for interacting with the plugin's HTTP interface) is
not set, causing a panic when the plugin is called.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Legacy plugins expect host-relative paths (such as for Volume.Mount).
However, a containerized plugin cannot respond with a host-relative
path. Therefore, this commit modifies new volume plugins' paths in Mount
and List to prepend the container's rootfs path.
This introduces a new PropagatedMount field in the Plugin Config.
When it is set for volume plugins, RootfsPropagation is set to rshared
and the path specified by PropagatedMount is bind-mounted with rshared
prior to launching the container. This is so that the daemon code can
access the paths returned by the plugin from the host mount namespace.
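The path rewrite itself is simple; roughly (helper name is ours):

    package pluginutil

    import "path/filepath"

    // hostPath illustrates the rewrite: a new-model volume plugin answers
    // with a path inside its own rootfs, so the daemon prepends the plugin's
    // rootfs to obtain a path valid in the host mount namespace.
    func hostPath(pluginRootfs, pathFromPlugin string) string {
        return filepath.Join(pluginRootfs, pathFromPlugin)
    }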
Signed-off-by: Tibor Vass <tibor@docker.com>
v2/Plugin struct had fields that were
- purely used by the manager.
- unsafely exposed without proper locking.
This change fixes this by moving the relevant fields to the manager,
making the remaining fields private, and providing proper accessors for
them.
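Roughly the shape of the result (field and method names here are
illustrative, not the real ones):

    package pluginutil

    import "sync"

    // Fields are private and every read goes through an accessor that takes
    // the plugin's own lock.
    type Plugin struct {
        mu      sync.RWMutex
        name    string
        enabled bool
    }

    func (p *Plugin) Name() string {
        p.mu.RLock()
        defer p.mu.RUnlock()
        return p.name
    }

    func (p *Plugin) IsEnabled() bool {
        p.mu.RLock()
        defer p.mu.RUnlock()
        return p.enabled
    }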
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
A plugin has an `ExitChan` channel which is used to signal the exit of
the plugin process. In a recent change, the initialization was
incorrectly moved to the daemon Shutdown path.
Fix this by initializing the channel during plugin enable.
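An illustrative sketch of the fix (names and channel type are not the real
code): the exit channel belongs to the enable path, not the shutdown path,
so shutdown can reliably wait on it.

    package pluginutil

    type plugin struct {
        exitChan chan bool
    }

    func (p *plugin) enable() {
        p.exitChan = make(chan bool)
        // ... start the plugin process; its monitor signals exitChan on exit ...
    }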
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
During error cases, we don't clean up correctly. This commit takes care
of removing the plugin if there are errors after the pull has succeeded.
It also shuts down the plugin if there are errors later in the enable
path.
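An illustrative sketch of the cleanup pattern (the function arguments stand
in for the real pull/enable/remove steps):

    package pluginutil

    // installPlugin shows the rollback shape: once the pull has succeeded,
    // any later error removes the half-installed plugin.
    func installPlugin(pull, enable func() error, remove func()) (err error) {
        if err = pull(); err != nil {
            return err // nothing to clean up yet
        }
        defer func() {
            if err != nil {
                remove() // undo everything created after the pull
            }
        }()
        return enable()
    }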
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Legacy plugin (aka pluginv1) calls in libnetwork are replaced with
calls using the new plugin model (aka pluginv2). pkg/plugins is still
used for managing the http client connections to the plugin.
This commit makes the necessary changes in docker/docker. Part 2 will
take care of the libnetwork changes.
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Split plugin package into `store` and `v2/plugin`. Now the functionality
is clearly delineated:
- Manager: Manages the global state of the plugin sub-system.
- PluginStore: Manages a collection of plugins (in memory and on-disk)
- Plugin: Manages the single plugin unit.
This also facilitates splitting the global PluginManager lock into:
- PluginManager lock to protect global states.
- PluginStore lock to protect store states.
- Plugin lock to protect individual plugin states.
Importing "github.com/docker/docker/plugin/store" provides access to
plugins and pulls in fewer dependencies than importing the original
monolithic `plugin` package.
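An illustrative layering of the three locks described above (the type and
field names here are not the real ones):

    package pluginutil

    import "sync"

    type Plugin struct {
        mu sync.RWMutex // protects the state of this single plugin
    }

    type Store struct {
        sync.RWMutex                // protects the collection
        plugins map[string]*Plugin  // in-memory view, persisted on disk
    }

    type Manager struct {
        mu    sync.Mutex // protects global plugin sub-system state
        store *Store
    }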
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
The main intent of handling plugin exit is the graceful shutdown of
plugins during daemon shutdown. So avoid plugin lookup during plugin
exits caused by other reasons (e.g. force remove).
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Volumes and other content created under a bind mount should be
recursively propagated using rshared, not shared. This could be
the reason for EBUSY during removal. Override options with rbind,
rshared and see if CI errors are fixed.
May fix #25511
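An illustrative call with the overridden options (source/target are
placeholders), using the pkg/mount helper:

    package pluginutil

    import "github.com/docker/docker/pkg/mount"

    // bindPropagated bind-mounts with the options described above:
    // rbind binds the whole subtree, rshared propagates mounts both ways.
    func bindPropagated(source, target string) error {
        return mount.Mount(source, target, "none", "rbind,rshared")
    }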
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
Unix socket paths are limited to 108 bytes. As a result, we need to be
careful not to use exec-root as the parent directory for the pluginID
(which is already 64 bytes), since that can result in socket path names
longer than 108 bytes. Use /tmp instead. Before this change, setting:
- dockerd --exec-root=/go/src/github.com/do passes
- dockerd --exec-root=/go/src/github.com/doc fails
After this change, there's no failure.
Also, write a volume plugin test to verify that the plugin's socket
responds.
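A sketch of the constraint (the exact layout and file name here are
hypothetical): with /tmp as the parent directory, even a 64-byte plugin ID
keeps the path well under the limit.

    package pluginutil

    import (
        "fmt"
        "path/filepath"
    )

    // socketPath builds the plugin socket path and checks it against the
    // 108-byte sun_path limit.
    func socketPath(pluginID string) (string, error) {
        p := filepath.Join("/tmp", "plugins", pluginID, "plugin.sock")
        if len(p) > 108 {
            return "", fmt.Errorf("socket path %q exceeds the 108-byte limit", p)
        }
        return p, nil
    }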
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
This ensures that:
- The in-memory plugin store is populated with all the plugins.
- Plugins which were active before the daemon restart are active after it.
This uses the live-restore feature when available; otherwise it manually
starts the plugin.
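An illustrative sketch of that decision (the function arguments stand in
for the real reattach/start steps):

    package pluginutil

    func restorePlugin(wasActive, liveRestore bool, reattach, start func() error) error {
        if !wasActive {
            return nil // the plugin was not running before the restart
        }
        if liveRestore {
            return reattach() // the plugin process survived the daemon restart
        }
        return start() // no live-restore: start the plugin manually
    }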
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
This patch introduces a new experimental engine-level plugin management
with a new API and command line. Plugins can be distributed via a Docker
registry, and their lifecycle is managed by the engine.
This makes plugins a first-class construct.
For more background, have a look at issue #20363.
Documentation is in a separate commit. If you want to understand how the
new plugin system works, you can start by reading the documentation.
Note: backwards compatibility with existing plugins is maintained,
although they won't benefit from the advantages of the new system.
Signed-off-by: Tibor Vass <tibor@docker.com>
Signed-off-by: Anusha Ragunathan <anusha@docker.com>