- So that they comply with the docker inspect convention, which allows
camel case for JSON field names
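A minimal sketch of what the convention looks like in Go; the struct below is purely illustrative and not the actual API type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative only: JSON keys follow the docker inspect convention
// (camel case, e.g. "IPAddress") rather than snake_case ones such as
// "ip_address".
type endpointResource struct {
	EndpointID string `json:"EndpointID"`
	MacAddress string `json:"MacAddress"`
	IPAddress  string `json:"IPAddress"`
}

func main() {
	b, _ := json.Marshal(endpointResource{EndpointID: "abc123", IPAddress: "172.17.0.2"})
	fmt.Println(string(b))
}
```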
Signed-off-by: Alessandro Boch <aboch@docker.com>
The --cluster-advertise daemon option is enhanced to support <interface-name>
in addition to <ip-address>, in order to make it automation friendly when
using docker-machine.
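A rough sketch of how an interface name could be resolved to an advertise address (e.g. `--cluster-advertise=eth0:2376`); the helper below is hypothetical and omits the daemon's real flag parsing and port handling:

```go
package main

import (
	"errors"
	"fmt"
	"net"
)

// resolveAdvertiseAddr is a hypothetical helper: if the value is not an IP
// address, treat it as an interface name and pick the first IPv4 address
// configured on that interface.
func resolveAdvertiseAddr(value string) (string, error) {
	if ip := net.ParseIP(value); ip != nil {
		return value, nil // already an IP address
	}
	iface, err := net.InterfaceByName(value)
	if err != nil {
		return "", err
	}
	addrs, err := iface.Addrs()
	if err != nil {
		return "", err
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
			return ipnet.IP.String(), nil
		}
	}
	return "", errors.New("no IPv4 address found on interface " + value)
}

func main() {
	addr, err := resolveAdvertiseAddr("eth0")
	fmt.Println(addr, err)
}
```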
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Introduced the --subnet, --ip-range and --gateway options in the docker network
command. Also, the user can reserve driver-specific IP addresses, if any, using
the --aux-address option.
Supports multiple subnets per network, and sharing an IP range
across networks if the network driver and IPAM driver support it.
For example, the bridge driver does not support sharing the same IP range
across networks.
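A sketch of how such a pool configuration could be modeled; the struct below is hypothetical and simply mirrors the CLI options (e.g. `docker network create --subnet=... --ip-range=... --gateway=...`), not the actual libnetwork/IPAM types:

```go
package main

import "fmt"

// ipamPool is a hypothetical mirror of the CLI options: one entry per
// subnet, with an optional allocation range, gateway and reserved
// (auxiliary) addresses handed to the IPAM driver.
type ipamPool struct {
	Subnet       string            // --subnet
	IPRange      string            // --ip-range
	Gateway      string            // --gateway
	AuxAddresses map[string]string // --aux-address, e.g. "my-router=172.28.1.5"
}

func main() {
	pools := []ipamPool{ // multiple subnets per network are allowed
		{
			Subnet:       "172.28.0.0/16",
			IPRange:      "172.28.5.0/24",
			Gateway:      "172.28.5.254",
			AuxAddresses: map[string]string{"my-router": "172.28.1.5"},
		},
	}
	fmt.Printf("%+v\n", pools)
}
```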
Signed-off-by: Madhu Venugopal <madhu@docker.com>
* Moving Network Remote APIs out of experimental
* --net can now accept user-created networks using network drivers/plugins (see the sketch after this list)
* Removed the experimental services concept and --default-network option
* Necessary backend changes to accommodate multiple networks per container
* Integration Tests
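As a sketch of the client-facing change (the struct is a stand-in, not the exact runconfig type): the value of --net is no longer limited to the built-in modes and can name a user-created network:

```go
package main

import "fmt"

// hostConfig is a stand-in for the real host configuration; only the field
// relevant here is shown.
type hostConfig struct {
	NetworkMode string // "bridge", "host", "none", "container:<id>", or now a user-created network name
}

func main() {
	// e.g. a network created via `docker network create mynet`,
	// then `docker run --net=mynet ...`
	hc := hostConfig{NetworkMode: "mynet"}
	fmt.Println(hc.NetworkMode)
}
```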
Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Use `pkg/discovery` to provide nodes discovery between daemon instances.
The functionality is driven by two different command-line flags: the
experimental `--cluster-store` (previously `--kv-store`) and
`--cluster-advertise`. It can be used in two ways by interested
components:
1. Externally by calling the `/info` API and examining the cluster store
field. The `pkg/discovery` package can then be used to hit the same
endpoint and watch for appearing or disappearing nodes. That is the
method that will be used, for example, by Swarm (see the sketch after this list).
2. Internally by using the `Daemon.discoveryWatcher` instance. That is
the method that will be used, for example, by libnetwork.
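A sketch of the first (external) usage, assuming only that the `/info` response carries the cluster store address; the field name and endpoint shape below are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// infoResponse is a trimmed, hypothetical view of the /info payload; only
// the field of interest here is declared.
type infoResponse struct {
	ClusterStore string
}

func main() {
	resp, err := http.Get("http://localhost:2375/info")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var info infoResponse
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		panic(err)
	}
	// A consumer such as Swarm could now point pkg/discovery at this
	// backend (e.g. consul://<addr>) and watch for nodes joining/leaving.
	fmt.Println("cluster store:", info.ClusterStore)
}
```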
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
For now docker stats will sum the rxbytes, txbytes, etc. of all
the interfaces.
That is acceptable for the output of the CLI `docker stats`, but not for
the API response, especially when the container is attached to several
subnets.
It's better to expose the original per-interface data to the user.
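A sketch of the difference, with a hypothetical response shape: the API keeps the per-interface numbers, and the CLI can still sum them for display:

```go
package main

import "fmt"

// networkStats is a hypothetical per-interface counter set; the real API
// type may differ.
type networkStats struct {
	RxBytes uint64
	TxBytes uint64
}

func main() {
	// API response: one entry per interface, e.g. a container on several subnets.
	perInterface := map[string]networkStats{
		"eth0": {RxBytes: 1024, TxBytes: 2048},
		"eth1": {RxBytes: 512, TxBytes: 256},
	}

	// The CLI output of `docker stats` can still present a single summed value.
	var rx, tx uint64
	for _, s := range perInterface {
		rx += s.RxBytes
		tx += s.TxBytes
	}
	fmt.Println(rx, tx)
}
```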
Signed-off-by: Hu Keping <hukeping@huawei.com>
Some structures use int for sizes and UNIX timestamps. On some
platforms, int is 32 bits, so this can lead to the year 2038 issues and
overflows when dealing with large containers or layers.
Consistently use int64 to store sizes and UNIX timestamps in
api/types/types.go. Update related code accordingly (i.e. use
strconv.FormatInt instead of strconv.Itoa).
Use int64 in progressreader package to avoid integer overflow when
dealing with large quantities. Update related code accordingly.
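A small sketch of the change in practice; the field names are illustrative:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// imageSummary is illustrative only: sizes and UNIX timestamps are held as
// int64 so that 32-bit platforms neither overflow on large layers nor hit
// the year-2038 limit.
type imageSummary struct {
	Created     int64 // UNIX timestamp, seconds
	Size        int64 // bytes
	VirtualSize int64 // bytes
}

func main() {
	img := imageSummary{Created: time.Now().Unix(), Size: 3 << 30} // 3 GiB would overflow int32
	// strconv.FormatInt instead of strconv.Itoa, since the values are int64.
	fmt.Println(strconv.FormatInt(img.Size, 10), strconv.FormatInt(img.Created, 10))
}
```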
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
[pkg/archive] Update archive/copy path handling
- Remove unused TarOptions.Name field.
- Add new TarOptions.RebaseNames field.
- Update some of the logic around path dir/base splitting.
- Update some of the logic behind archive entry name rebasing.
[api/types] Add LinkTarget field to PathStat
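A sketch of the stat structure after the change; the field set is reproduced from memory and should be treated as illustrative:

```go
package types

import (
	"os"
	"time"
)

// ContainerPathStat, as sketched here, describes a resource inside a
// container; linkTarget is populated when the resource is a symlink.
type ContainerPathStat struct {
	Name       string      `json:"name"`
	Size       int64       `json:"size"`
	Mode       os.FileMode `json:"mode"`
	Mtime      time.Time   `json:"mtime"`
	LinkTarget string      `json:"linkTarget"`
}
```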
[daemon] Fix stat, archive, extract of symlinks
These operations *should* resolve symlinks that are in the path but if the
resource itself is a symlink then it *should not* be resolved. This patch
puts this logic into a common function `resolvePath` which resolves symlinks
of the path's dir in scope of the container rootfs but does not resolve the
final element of the path. Now archive, extract, and stat operations will
return symlinks if the path is indeed a symlink.
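A rough sketch of the idea behind `resolvePath`, using only the standard library; the real implementation resolves symlinks in the scope of the container rootfs, which is not reproduced here:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// resolvePathSketch resolves symlinks in the directory portion of the path
// but deliberately leaves the final element untouched, so that stat,
// archive and extract see the symlink itself rather than its target.
func resolvePathSketch(path string) (string, error) {
	dir, base := filepath.Dir(path), filepath.Base(path)
	resolvedDir, err := filepath.EvalSymlinks(dir) // real code scopes this to the container rootfs
	if err != nil {
		return "", err
	}
	return filepath.Join(resolvedDir, base), nil
}

func main() {
	p, err := resolvePathSketch("/var/log/app/current") // "current" may itself be a symlink
	fmt.Println(p, err)
}
```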
[api/client] Update cp path handling
[docs/reference/api] Update description of stat
Add the linkTarget field to the header of the archive endpoint.
Remove path field.
[integration-cli] Fix/Add cp symlink test cases
Copying a symlink should do just that: copy the symlink NOT
copy the target of the symlink. Also, the resulting file from
the copy should have the name of the symlink NOT the name of
the target file.
Copying to a symlink should copy to the symlink target and not
modify the symlink itself.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
In 1.6.2 we were decoding the inspect API response into interface{}.
time.Time fields were JSON encoded as RFC3339Nano in the response
and when decoded into interface{} they were just strings so the inspect
template treated them as just strings.
From 1.7 we are decoding into types.ContainerJSON and when the template
gets executed it now gets a time.Time and it's formatted as
2015-07-22 05:02:38.091530369 +0000 UTC.
This patch brings back the old behavior by typing time.Time fields
as string so they get formatted as they were encoded in JSON -- RFC3339Nano.
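A sketch of the behavioral difference; the struct is illustrative:

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
	"time"
)

func main() {
	now := time.Now().UTC()
	t := template.Must(template.New("t").Parse("{{.Created}}\n"))

	// Decoded into a typed struct, the template prints time.Time's default
	// format, e.g. "2015-07-22 05:02:38.091530369 +0000 UTC".
	typed := struct{ Created time.Time }{Created: now}
	_ = t.Execute(os.Stdout, typed)

	// Typed as string (the approach of this patch), the value stays exactly
	// as it was encoded in the JSON response: RFC3339Nano.
	b, _ := json.Marshal(typed)
	var stringly struct{ Created string }
	_ = json.Unmarshal(b, &stringly)
	_ = t.Execute(os.Stdout, stringly)
}
```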
Signed-off-by: Antonio Murdaca <runcom@linux.com>
Adds http handlers for new API endpoints:
GET ContainersArchivePath
Return a Tar Archive of the contents at the specified location in a
container. Deprecates POST ContainersCopy. Use a HEAD request to stat
the resource.
PUT ContainersExtractToDir
Extract the Tar Archive from the request body to the directory at the
specified location inside a container.
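A client-side sketch against these endpoints, assuming routes of the form `/containers/<id>/archive?path=<path>`; treat the exact URL shape as an assumption:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	url := "http://localhost:2375/containers/mycontainer/archive?path=/etc/hosts"

	// HEAD: stat the resource without transferring the tar archive.
	if resp, err := http.Head(url); err == nil {
		fmt.Println("stat status:", resp.Status)
		resp.Body.Close()
	}

	// GET: download a tar archive of the resource (replaces POST ContainersCopy).
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := os.Create("hosts.tar")
	defer out.Close()
	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
	// A PUT with a tar archive as the request body would extract it into the
	// directory named by ?path= (ContainersExtractToDir); omitted here.
}
```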
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
The following deprecates the Copy method and introduces two new,
well-behaved methods: one for creating a tar archive of a resource
in a container and one for extracting a tar archive into a directory in a
container.
Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
Export image/container metadata stored in graph driver. Right now 3 fields
DeviceId, DeviceSize and DeviceName are being exported from devicemapper.
Other graph drivers can export fields as they see fit.
This data can be used to mount the thin device outside of docker, so that tools
can look into the image/container and do some kind of inspection.
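A sketch of what the exported metadata could look like for devicemapper; the function and map shape are stand-ins for the graph driver hook, not a verified signature, and the values are made up:

```go
package main

import "fmt"

// getMetadata stands in for a graph driver hook that exports
// driver-specific metadata as key/value pairs; devicemapper would report
// the fields mentioned above, other drivers whatever they see fit.
func getMetadata(id string) map[string]string {
	return map[string]string{
		"DeviceId":   "24",
		"DeviceSize": "10737418240",
		"DeviceName": "docker-8:1-262623-" + id,
	}
}

func main() {
	// Tools outside of docker could use DeviceName/DeviceId to activate and
	// mount the thin device and inspect the image/container filesystem.
	fmt.Println(getMetadata("abc123"))
}
```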
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
The DOCKER_EXPERIMENTAL environment variable drives the activation of
the 'experimental' build tag.
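A minimal sketch of how a build tag gates code at compile time; the file below is illustrative, and the exact way the build scripts translate DOCKER_EXPERIMENTAL into `-tags experimental` is assumed:

```go
// +build experimental

// Package experimental is a sketch: this file is compiled only when the
// 'experimental' build tag is set, which is what DOCKER_EXPERIMENTAL is
// meant to drive through the build scripts.
package experimental

// Enabled reports whether experimental features were compiled in.
func Enabled() bool { return true }
```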
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
Prior to this patch, the response of
- GET /images/json
- GET /containers/json
- GET /images/(name)/history
displayed the Created time as a UNIX timestamp, which is not human readable.
These should be more readable, as the CLI command `docker inspect` shows.
To handle the case of an older client talking to a newer daemon, we
need a version check for now.
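A sketch of the conversion and the version gate; the cut-off version is a placeholder, not the one actually used:

```go
package main

import (
	"fmt"
	"time"
)

// formatCreated is illustrative: older clients keep receiving the raw UNIX
// timestamp, newer ones get a readable time, similar to `docker inspect`.
func formatCreated(created int64, apiVersion string) interface{} {
	if apiVersion < "1.20" { // placeholder cut-off; real code compares API versions properly
		return created
	}
	return time.Unix(created, 0).UTC().Format(time.RFC3339)
}

func main() {
	fmt.Println(formatCreated(1437541358, "1.19"))
	fmt.Println(formatCreated(1437541358, "1.20"))
}
```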
Signed-off-by: Hu Keping <hukeping@huawei.com>
Two main things:
- Create a real struct Info for all of the data with the proper types
- Add test for REST API get info
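A trimmed sketch of what a typed Info struct looks like; the field selection and types are illustrative, not the full set:

```go
package types

// Info is a sketch of the typed /info response; only a handful of fields
// are shown, so treat names and types as illustrative.
type Info struct {
	ID              string
	Containers      int
	Images          int
	Driver          string
	MemoryLimit     bool
	SwapLimit       bool
	NEventsListener int
	NGoroutines     int
	MemTotal        int64
	NCPU            int
	OperatingSystem string
}
```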
Signed-off-by: Hu Keping <hukeping@huawei.com>
This type is produced on the server side and is a type-safe struct that
can be encoded to JSON. It is consumed by the client.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
Move the stats structs from the api/stats package into a new package,
api/types, that will contain all the API structs for docker's API requests
and responses.
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>