Fix a repository creation issue when pushing a Compose OCI
artifact to Docker Hub for the first time.
Signed-off-by: Guillaume Lours <705411+glours@users.noreply.github.com>
Previously, HTTP requests were sent with the generic Go-http-client
user-agent, which made it hard to determine where requests were
coming from. It's important that we can identify clients so that
they can be updated if APIs change in the future.
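As a rough illustration (not the actual client code), a custom
`http.RoundTripper` is the idiomatic way to stamp every outgoing
request in Go; the header value below is hypothetical:

```go
package httputil

import "net/http"

// userAgentTransport sets an identifiable User-Agent on every
// request before delegating to the wrapped transport.
type userAgentTransport struct {
	inner     http.RoundTripper
	userAgent string
}

func (t *userAgentTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// RoundTrippers must not mutate the caller's request.
	r := req.Clone(req.Context())
	r.Header.Set("User-Agent", t.userAgent)
	return t.inner.RoundTrip(r)
}

func newClient() *http.Client {
	return &http.Client{Transport: &userAgentTransport{
		inner:     http.DefaultTransport,
		userAgent: "compose/v2", // hypothetical value
	}}
}
```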
Signed-off-by: David Scott <dave@recoil.org>
Unfortunately, the feature flag mechanism for experimental features
isn't adequate. To avoid some edge cases where Compose might try to
use Synchronized file shares with Desktop when the feature isn't
available, we need to query a new endpoint.
Before we move any of this out of experimental, we need to improve
the integration between Desktop & Compose so that we get all the
necessary feature/experiment state up-front and reduce the number
of IPC calls needed.
For now, there's some intentional redundancy so that we can skip
this extra call where possible. The actual endpoint is very cheap
and fast, but every IPC call is a potential point of failure, so
it's worth it.
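A hypothetical sketch of the guard; the endpoint path, response
shape, and helper name are all assumptions (the real code goes
through the Desktop client):

```go
package desktop

import (
	"context"
	"encoding/json"
	"net/http"
)

type featureState struct {
	Enabled bool `json:"enabled"`
}

// syncedFileSharesAvailable asks Desktop whether the feature can be
// used; any failure is treated as "unavailable" so Compose degrades
// gracefully instead of erroring out.
func syncedFileSharesAvailable(ctx context.Context, client *http.Client, baseURL string) bool {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		baseURL+"/features/synchronized-file-shares", nil)
	if err != nil {
		return false
	}
	resp, err := client.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	var state featureState
	if err := json.NewDecoder(resp.Body).Decode(&state); err != nil {
		return false
	}
	return state.Enabled
}
```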
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Use an environment variable for the global opt-out and Docker
Desktop (if available) to determine specific experiment states.
In the future, we'll allow per-feature opt-in/opt-out via env vars
as well, but currently there is a single `COMPOSE_EXPERIMENTAL` env
var that can be used to opt out of all experimental features
independently of Docker Desktop configuration.
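A minimal sketch of the opt-out check; `COMPOSE_EXPERIMENTAL` is
the real variable name, while the helper and its exact semantics
are illustrative:

```go
package experimental

import (
	"os"
	"strconv"
)

// enabled returns false only when the user explicitly opted out,
// e.g. COMPOSE_EXPERIMENTAL=0 or COMPOSE_EXPERIMENTAL=false.
func enabled() bool {
	v, ok := os.LookupEnv("COMPOSE_EXPERIMENTAL")
	if !ok {
		return true // default: experiments stay on
	}
	on, err := strconv.ParseBool(v)
	if err != nil {
		return true // unparseable values don't opt out
	}
	return on
}
```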
This has not been the default for quite a while and required
setting an environment variable to revert.
The tar implementation is more performant and addresses several
edge cases with the original `docker cp` version, so it's time
to fully retire it.
The scaffolding for multiple sync implementations remains to
support future experimentation here.
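For illustration, the remaining scaffolding amounts to a small
interface that alternative implementations can satisfy; names and
signatures here are assumptions, not the actual Compose API:

```go
package sync

import "context"

// PathMapping pairs a changed host path with its destination
// inside the container.
type PathMapping struct {
	HostPath      string
	ContainerPath string
}

// Syncer copies changed files into a running service's containers.
// Both the retired `docker cp`-based syncer and the tar-based one
// can satisfy this shape.
type Syncer interface {
	Sync(ctx context.Context, service string, paths []PathMapping) error
}
```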
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This was a bad configuration (my fault) that meant each span was
exported synchronously, as it ended. That can cause weird behavior
such as stuttering/blocking.
There's really no reason NOT to use the batch processor; it's the
recommended way to configure it. In the future, it might make sense
to tune the intervals based on the fact that Compose is a CLI vs
a long-running server app, but we already handle flushing on exit,
so it's not a huge deal.
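A sketch of the change using the OpenTelemetry Go SDK (exporter
setup elided):

```go
package tracing

import (
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func newTracerProvider(exporter sdktrace.SpanExporter) *sdktrace.TracerProvider {
	// Before (synchronous, can stutter/block as each span ends):
	//   sdktrace.NewTracerProvider(sdktrace.WithSyncer(exporter))

	// After: the batch processor queues spans and exports them in
	// the background; Shutdown/ForceFlush on exit delivers the rest.
	return sdktrace.NewTracerProvider(
		sdktrace.WithBatcher(exporter),
	)
}
```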
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This is a good place to start introducing (local) exclusivity
to Compose. Now, when `alpha watch` launches, it will check for
a PID file in the user's XDG runtime directory, and create one
if it is stale or does not exist. If the PID file exists and is
valid, an error is returned and Compose exits.
A slight tweak to the experimental remote Git loader has been
made to use the XDG package for consistency.
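A hypothetical sketch of the check (the liveness probe via signal 0
is Unix-specific; names are illustrative):

```go
package locker

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"syscall"
)

// acquirePIDFile claims the lock file, replacing it if the recorded
// process is gone, and errors out if another instance is running.
func acquirePIDFile(runtimeDir, name string) (string, error) {
	path := filepath.Join(runtimeDir, name)
	if data, err := os.ReadFile(path); err == nil {
		if pid, perr := strconv.Atoi(string(data)); perr == nil {
			// Signal 0 performs error checking only: success
			// means the process is alive, so we must exit.
			if proc, ferr := os.FindProcess(pid); ferr == nil {
				if proc.Signal(syscall.Signal(0)) == nil {
					return "", fmt.Errorf("watch is already running (pid %d)", pid)
				}
			}
		}
		// Stale file: fall through and overwrite it.
	} else if !errors.Is(err, os.ErrNotExist) {
		return "", err
	}
	if err := os.WriteFile(path, []byte(strconv.Itoa(os.Getpid())), 0o644); err != nil {
		return "", err
	}
	return path, nil
}
```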
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
This was left over from debugging, but we should not block.
OTel will handle the connection in the background.
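For example, with the OTLP gRPC exporter, the fix amounts to
dropping a blocking dial option (assuming the leftover was
`WithBlock`, which forces the dial to wait for a connection):

```go
package tracing

import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
)

func newExporter(ctx context.Context) (*otlptrace.Exporter, error) {
	// Before (debugging): otlptracegrpc.New(ctx, otlptracegrpc.WithBlock())
	// After: the exporter dials lazily and connects in the background.
	return otlptracegrpc.New(ctx)
}
```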
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Adjust the debouncing logic so that it applies to all inbound file
events, regardless of whether they match a sync or rebuild rule.
When the batch is flushed out, if any event for the service is a
rebuild event, then the service is rebuilt and all sync events for
the batch are ignored. If _all_ events in the batch are sync events,
then a sync is triggered, passing the entire batch at once. This
provides a substantial performance win for the new `tar`-based
implementation, as it can efficiently transfer the changes in bulk.
Additionally, this helps with jitter: it's not uncommon for a file
to receive double-writes in quick succession, so even if few files
are being modified at once, batching can still prevent some
unnecessary transfers.
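Illustrative flush logic for a debounced batch (the event and
handler types are stand-ins for the real watch plumbing):

```go
package watch

import "context"

type EventKind int

const (
	Sync EventKind = iota
	Rebuild
)

type FileEvent struct {
	Kind EventKind
	Path string
}

type Handler interface {
	Rebuild(ctx context.Context, service string) error
	Sync(ctx context.Context, service string, paths []string) error
}

// flush applies a debounced batch: one rebuild event wins over any
// number of sync events; otherwise the whole batch is synced at
// once so the tar-based syncer can transfer the changes in bulk.
func flush(ctx context.Context, h Handler, service string, batch []FileEvent) error {
	paths := make([]string, 0, len(batch))
	for _, ev := range batch {
		if ev.Kind == Rebuild {
			return h.Rebuild(ctx, service)
		}
		paths = append(paths, ev.Path)
	}
	return h.Sync(ctx, service, paths)
}
```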
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Support services with scale > 1 for the tar watch sync.
Add a "lossy" multi-writer specific to pipes that writes the
tar data to each `io.PipeWriter`, which is connected to `stdin`
for the `tar` process being exec'd in the container.
The data is written serially to each writer. This could be
adjusted to do concurrent writes, but that would rapidly increase
the I/O load, so it's not done here; in general, 99% of the time
you'll be developing (and thus using watch/sync) with a single
replica of a service.
If a write fails, the corresponding `io.PipeWriter` is removed
from the active set and closed with an error.
This means that a single container copy failing won't stop
writes to the others that are succeeding. Of course, they will
still be in an inconsistent state afterwards, but that's a
different problem.
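A sketch of the lossy multi-writer (names illustrative): each write
goes serially to every remaining pipe, and a failing pipe is closed
with its error and dropped from the set:

```go
package watch

import "io"

type lossyMultiWriter struct {
	writers []*io.PipeWriter
}

// Write copies p to every live pipe; a failed pipe is closed with
// the error and removed so the others keep receiving data.
func (w *lossyMultiWriter) Write(p []byte) (int, error) {
	for i, pw := range w.writers {
		if pw == nil {
			continue
		}
		if _, err := pw.Write(p); err != nil {
			pw.CloseWithError(err)
			w.writers[i] = nil
		}
	}
	return len(p), nil
}

func (w *lossyMultiWriter) Close() error {
	for _, pw := range w.writers {
		if pw != nil {
			pw.Close()
		}
	}
	return nil
}
```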
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Just moving some code around in preparation for an alternative
sync implementation that can do bulk transfers by using `tar`.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* Move all the initialization code out of `main.go`
* Ensure spans are reported when there's an error with the
command
* Attach the Compose version & active Docker context to the
resource instead of the span
* Name the root CLI span `cli/<cmd>` for clarity and grab
the full subcommand path (e.g. `alpha-viz` instead of just
`viz`); see the sketch below
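A sketch of the naming logic (the tracer setup and the exact helper
are assumptions):

```go
package cmd

import (
	"context"
	"strings"

	"github.com/spf13/cobra"
	"go.opentelemetry.io/otel"
)

// startRootSpan names the root span "cli/<full-subcommand-path>",
// e.g. "cli/alpha-viz" for `compose alpha viz`.
func startRootSpan(ctx context.Context, cmd *cobra.Command) (context.Context, func()) {
	// CommandPath() is e.g. "compose alpha viz"; drop the root
	// command name and join the rest with '-'.
	parts := strings.Fields(cmd.CommandPath())
	name := "cli/" + strings.Join(parts[1:], "-")
	ctx, span := otel.Tracer("compose").Start(ctx, name)
	return ctx, func() { span.End() }
}
```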
Signed-off-by: Milas Bowman <milas.bowman@docker.com>