If AutoRemove is set, wait until the client gets a `destroy` event, or
a `detach` event, which implies the container is detached but not
stopped.
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
`--rm` is a client-side flag, which has caused lots of problems:
1. If the client loses its connection to the daemon, including when the
client crashes or is killed, there is no way to clean up the garbage
container.
2. If you `docker stop` a `--rm` container, the container won't be
auto-removed.
3. If the docker daemon restarts, the container is also left over.
4. Bug: `docker run --rm busybox fakecmd` will exit without cleanup.
In short, the client-side `--rm` flag isn't sufficient for garbage
collection. Moving the `--rm` flag to the daemon side is more
reasonable.
What this commit does:
1. Implement `--rm` on the daemon side by adding an `AutoRemove` flag
to HostConfig.
2. Allow `run --rm -d`; `--rm` and `-d` no longer conflict, so
auto-remove works in detached mode.
3. `docker restart` of a `--rm` container will succeed, and the
container won't be auto-removed.
This commit makes it much easier for the daemon to garbage-collect
temporary containers.
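Below is a minimal client-side sketch of the new field, assuming the
docker/engine-api Go client of this era; the `AutoRemove` field on
HostConfig is what this commit adds, while the surrounding client calls
are just one way to exercise it.
```
// Sketch only: create a container that the daemon will clean up itself
// once it exits, via HostConfig.AutoRemove (added by this commit).
package main

import (
	"fmt"

	"github.com/docker/engine-api/client"
	"github.com/docker/engine-api/types/container"
	"golang.org/x/net/context"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	resp, err := cli.ContainerCreate(context.Background(),
		&container.Config{Image: "busybox", Cmd: []string{"true"}},
		&container.HostConfig{AutoRemove: true}, // daemon removes the container on exit
		nil, "")
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", resp.ID)
}
```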
Signed-off-by: Zhang Wei <zhangwei555@huawei.com>
There are currently problems with "swarm init" and "swarm join" when an
explicit --listen-addr flag is not provided. swarmkit defaults to
finding the IP address associated with the default route, and in cloud
setups this is often the wrong choice.
Introduce a notion of "advertised address", with the client flag
--advertise-addr, and the daemon flag --swarm-default-advertise-addr to
provide a default. The default listening address is now 0.0.0.0, but a
valid advertised address must be detected or specified.
If no explicit advertised address is specified, error out if there is
more than one usable candidate IP address on the system. This requires a
user to explicitly choose instead of letting swarmkit make the wrong
choice. For the purposes of this autodetection, we ignore certain
interfaces that are unlikely to be relevant (currently docker*).
The user is also required to choose a listen address on swarm init if
they specify an explicit advertise address that is a hostname or an IP
address that's not local to the system. This is a requirement for
overlay networking.
Also support specifying interface names to --listen-addr,
--advertise-addr, and the daemon flag --swarm-default-advertise-addr.
This will fail if the interface has multiple IP addresses (unless it has
a single IPv4 address and a single IPv6 address - then we resolve the
tie in favor of IPv4).
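For illustration, here is a rough sketch in Go of the tie-breaking rule
described above; it is not the code in this change, just the same logic
expressed standalone.
```
// Sketch: resolve an interface name to a single advertise address,
// failing on ambiguity except for the one-IPv4/one-IPv6 case, which
// resolves in favor of IPv4.
package main

import (
	"errors"
	"fmt"
	"net"
)

func resolveInterfaceAddr(name string) (net.IP, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return nil, err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return nil, err
	}

	var v4, v6 []net.IP
	for _, addr := range addrs {
		ipNet, ok := addr.(*net.IPNet)
		if !ok {
			continue
		}
		if ip4 := ipNet.IP.To4(); ip4 != nil {
			v4 = append(v4, ip4)
		} else {
			v6 = append(v6, ipNet.IP)
		}
	}

	switch {
	case len(v4) == 1 && len(v6) <= 1:
		return v4[0], nil // a single IPv4 wins the tie with a single IPv6
	case len(v4) == 0 && len(v6) == 1:
		return v6[0], nil
	default:
		return nil, errors.New("interface has multiple IP addresses; specify one explicitly")
	}
}

func main() {
	ip, err := resolveInterfaceAddr("eth0")
	fmt.Println(ip, err)
}
```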
This change also exposes the node's externally-reachable address in
docker info, as requested by #24017.
Make corresponding API and CLI docs changes.
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
This adds an `--oom-score-adjust` flag to the daemon so that the value
provided can be set for the docker daemon's process. The default value
for the flag is -500. This gives the docker daemon a lower chance of
being killed before containers are. The default value for
processes is 0 with a min/max of -1000/1000.
-500 is a good middle ground because it is less than the default for
most processes, yet still not -1000, which basically means never kill
this process in an OOM condition on the host machine. The only
processes on my machine with a score lower than -500 are dbus at -900
and sshd and xfce (my window manager) at -1000. I don't think docker
should be set lower, by default, than dbus or sshd, which is why I
chose -500.
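For reference, the mechanism behind the flag is simply writing the
value into procfs; a minimal sketch (not the daemon's actual code):
```
// Sketch: apply an OOM score adjustment by writing it to
// /proc/<pid>/oom_score_adj. Valid values range from -1000 to 1000.
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
)

func setOOMScoreAdjust(pid, score int) error {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	return ioutil.WriteFile(path, []byte(strconv.Itoa(score)), 0644)
}

func main() {
	// -500 is the daemon default proposed in this change.
	if err := setOOMScoreAdjust(os.Getpid(), -500); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```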
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Now that Windows base images can be loaded directly into docker via "docker load" of a specialized tar file (with docker pull support on the horizon), we no longer need the custom images code path that loads images from a shared central location. This removes that code and its call points.
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
Most modern distros set the limit for the maximum number of root keys
to 1000000, but some do not. Because we create a new key for each
container, we need to bump this up, as older distros set this limit to
200.
We use 1000000 as the limit because that is what most distros set this
to now. If someone has this value configured higher than that, we do
not change it.
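A minimal sketch of that idea (not the commit's code), assuming the
limit is read from and written to /proc/sys/kernel/keys/root_maxkeys:
```
// Sketch: raise root_maxkeys to 1000000 only if the configured value
// is lower; larger values are left untouched.
package main

import (
	"io/ioutil"
	"log"
	"strconv"
	"strings"
)

const (
	rootMaxKeysFile = "/proc/sys/kernel/keys/root_maxkeys"
	desiredMaxKeys  = 1000000
)

func bumpRootMaxKeys() error {
	data, err := ioutil.ReadFile(rootMaxKeysFile)
	if err != nil {
		return err
	}
	current, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return err
	}
	if current >= desiredMaxKeys {
		return nil // already at or above the target; do not change it
	}
	return ioutil.WriteFile(rootMaxKeysFile, []byte(strconv.Itoa(desiredMaxKeys)), 0644)
}

func main() {
	if err := bumpRootMaxKeys(); err != nil {
		log.Println("could not adjust root_maxkeys:", err)
	}
}
```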
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
It's a minor thing, but the daemon's output looks like this when there are
no containers to load:
```
INFO[0001] Loading containers: start.
INFO[0001] Loading containers: done.
```
That blank line (or more correctly, the \n) is unnecessary when there
are no containers to load, since there is no `.` printed.
Signed-off-by: Doug Davis <dug@us.ibm.com>
We store the active sandbox after daemon.containerd.Restore, but there
is a chance that `Restore` will set the container to exited (see
https://github.com/docker/docker/blob/master/libcontainerd/client_linux.go#L469),
so we should check that the container is really running before adding
it to the active sandboxes.
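The shape of the guard, as a standalone sketch with placeholder types
and helpers (not the daemon's real identifiers):
```
// Sketch: after restoring containers through containerd, only the ones
// that are still running are registered as active sandboxes.
package main

type container struct {
	ID        string
	SandboxID string
	running   bool
}

func (c *container) IsRunning() bool { return c.running }

// restore stands in for daemon.containerd.Restore, which may mark the
// container as exited while restoring it.
func restore(c *container) error { return nil }

func registerSandboxes(containers []*container) map[string]*container {
	activeSandboxes := make(map[string]*container)
	for _, c := range containers {
		if err := restore(c); err != nil {
			continue
		}
		// Restore may have set the container to exited; skip those.
		if c.IsRunning() {
			activeSandboxes[c.SandboxID] = c
		}
	}
	return activeSandboxes
}

func main() {
	_ = registerSandboxes(nil)
}
```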
Signed-off-by: Lei Jitang <leijitang@huawei.com>
This patch introduces a new experimental engine-level plugin management
with a new API and command line. Plugins can be distributed via a Docker
registry, and their lifecycle is managed by the engine.
This makes plugins a first-class construct.
For more background, have a look at issue #20363.
Documentation is in a separate commit. If you want to understand how the
new plugin system works, you can start by reading the documentation.
Note: backwards compatibility with existing plugins is maintained,
although they won't benefit from the advantages of the new system.
Signed-off-by: Tibor Vass <tibor@docker.com>
Signed-off-by: Anusha Ragunathan <anusha@docker.com>
As described in our ROADMAP.md, introduce new Swarm management API
endpoints relying on swarmkit to deploy services. It currently vendors
docker/engine-api changes.
This PR is fully backward compatible (joining a Swarm is an optional
feature of the Engine, and existing commands are not impacted).
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Victor Vieux <vieux@docker.com>
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
Signed-off-by: Madhu Venugopal <madhu@docker.com>
This flag enables full support of daemonless containers in docker. It
ensures that docker does not stop containers on shutdown or restore and
properly reconnects to the container when restarted.
This is not the default because of backwards compatibility, but it
should be the desired outcome for people running containers in
production.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
This is similar to network scopes where a volume can either be `local`
or `global`. A `global` volume is one that exists across the entire
cluster, whereas a `local` volume exists on a single engine.
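A toy sketch of the distinction, assuming a driver advertises its scope
through a capabilities-style response; the type and method names here
are illustrative, not the actual volume plugin protocol.
```
// Sketch: a volume driver reporting whether its volumes are local to
// one engine or global to the cluster.
package main

import "fmt"

const (
	LocalScope  = "local"  // volume exists on a single engine
	GlobalScope = "global" // volume exists across the entire cluster
)

type Capability struct {
	Scope string
}

type volumeDriver struct {
	name  string
	scope string
}

func (d *volumeDriver) Capabilities() Capability {
	return Capability{Scope: d.scope}
}

func main() {
	d := &volumeDriver{name: "example-driver", scope: GlobalScope}
	fmt.Printf("driver %s has scope %q\n", d.name, d.Capabilities().Scope)
}
```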
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
… and refactor some of the daemon code a little along the way.
- Move `SearchRegistryForImages` to a new file (`daemon/search.go`) as
`daemon.go` is getting pretty big.
- `registry.Service` is now an interface (allowing us to decouple it a
little bit and thus unit test easily).
- Add some unit tests for `SearchRegistryForImages`.
- Use UniqueExactMatch for search filters
- And use empty restore id for now in client.ContainerStart.
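A simplified sketch of the testing benefit of the interface change; the
interface and result type below are stand-ins, not the real
registry.Service definition.
```
// Sketch: with registry.Service as an interface, a fake implementation
// can back SearchRegistryForImages in unit tests.
package main

import "fmt"

type SearchResult struct {
	Name        string
	Description string
}

// Service is the narrow interface the daemon needs for search.
type Service interface {
	Search(term string) ([]SearchResult, error)
}

type fakeService struct {
	results []SearchResult
}

func (f *fakeService) Search(term string) ([]SearchResult, error) {
	return f.results, nil
}

func searchRegistryForImages(svc Service, term string) ([]SearchResult, error) {
	return svc.Search(term)
}

func main() {
	svc := &fakeService{results: []SearchResult{{Name: "busybox", Description: "tiny image"}}}
	res, _ := searchRegistryForImages(svc, "busybox")
	fmt.Println(res)
}
```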
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This fix tries to cover the issue raised in #22463 by adding a filter
for events emitted by the docker daemon so that users can use the
filter to receive events of interest.
Documentation has been updated for this fix.
Additional tests have been added to cover the changes in this fix.
This fix fixes #22463.
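A rough sketch of consuming filtered events over the engine's /events
endpoint on the local unix socket; the filter key used here ("type") is
only an example and not necessarily the filter this fix adds.
```
// Sketch: stream daemon events, filtered server-side via the JSON
// "filters" query parameter.
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			Dial: func(network, addr string) (net.Conn, error) {
				return net.Dial("unix", "/var/run/docker.sock")
			},
		},
	}

	// filters is a JSON-encoded map of filter name to values.
	filters := url.QueryEscape(`{"type":{"daemon":true}}`)
	resp, err := client.Get("http://docker/events?filters=" + filters)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	// Events arrive as a stream of JSON objects; echo the raw stream.
	io.Copy(os.Stdout, resp.Body)
}
```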
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
This fix tries to cover the issue raised in #22463 by emitting events
for the docker daemon so that users can be notified of scenarios like
config reload, etc.
This fix adds the `daemon reload` event, along with events for the
docker daemon.
Additional tests have been added to cover the changes in this fix.
This fix fixes #22463.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
The filtering is done server-side, and the following filters are
supported (see the sketch after the list):
* is-official (boolean)
* is-automated (boolean)
* has-stars (integer)
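A hedged client-side sketch of the new filters; the package paths and
the Filters option follow the docker/engine-api changes vendored around
this time and may differ elsewhere.
```
// Sketch: search for official images with at least 10 stars using the
// server-side filters listed above.
package main

import (
	"fmt"

	"github.com/docker/engine-api/client"
	"github.com/docker/engine-api/types"
	"github.com/docker/engine-api/types/filters"
	"golang.org/x/net/context"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	args := filters.NewArgs()
	args.Add("is-official", "true") // boolean filter
	args.Add("has-stars", "10")     // integer filter

	results, err := cli.ImageSearch(context.Background(), "nginx",
		types.ImageSearchOptions{Filters: args})
	if err != nil {
		panic(err)
	}
	for _, r := range results {
		fmt.Println(r.Name, r.StarCount)
	}
}
```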
Signed-off-by: Fabrizio Soppelsa <fsoppelsa@mirantis.com>
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This fix tries to address issues raised in #20936 and #22443 where
`docker pull` or `docker push` fails because concurrent connections
fail.
Currently, the maximum number of concurrent connections is controlled
by `maxDownloadConcurrency` and `maxUploadConcurrency`, which are
hardcoded to 3 and 5 respectively. Therefore, in situations where
network connections don't support multiple downloads/uploads, failures
may be encountered for `docker push` or `docker pull`.
This fix makes `maxDownloadConcurrency` and `maxUploadConcurrency`
adjustable by passing `--max-concurrent-uploads` and
`--max-concurrent-downloads` to the `docker daemon` command.
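Conceptually, the two flags bound how many layer transfers run at once;
a standalone sketch of that idea (not the distribution/xfer code) using
a semaphore:
```
// Sketch: limit concurrent transfers with a buffered-channel semaphore.
package main

import (
	"fmt"
	"sync"
)

func transferLayers(layers []string, maxConcurrent int) {
	sem := make(chan struct{}, maxConcurrent) // at most maxConcurrent in flight
	var wg sync.WaitGroup

	for _, layer := range layers {
		wg.Add(1)
		go func(layer string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			fmt.Println("transferring", layer)
		}(layer)
	}
	wg.Wait()
}

func main() {
	// e.g. docker daemon --max-concurrent-downloads=3
	transferLayers([]string{"a", "b", "c", "d"}, 3)
}
```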
The documentation related to docker daemon has been updated.
Additional test cases have been added to cover the changes in this fix.
This fix fixes #20936. This fix fixes #22443.
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>