* Eliminate direct dependency on gopkg.in/yaml.v2
* Add gopkg.in/yaml.v2 as a restricted import
* Add github.com/distribution/distribution as a restricted dependency in favor of distribution/reference, which is the subset of functionality that Compose needs
* Remove an unused exclusion
NOTE: This does change the `compose config` output slightly but does NOT change the semantics:
* YAML indentation is slightly different for lists (this is a `v2` / `v3` thing)
* JSON is now "minified" instead of pretty-printed (I think this is generally desirable and more consistent with other JSON command outputs)
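For illustration, here is a minimal Go sketch (not Compose code) of the list-indentation difference between `gopkg.in/yaml.v2` and `gopkg.in/yaml.v3` mentioned in the first bullet:

```go
package main

import (
	"bytes"
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	doc := map[string][]string{"ports": {"80:80", "443:443"}}

	var buf bytes.Buffer
	enc := yaml.NewEncoder(&buf)
	enc.SetIndent(2)
	_ = enc.Encode(doc)
	_ = enc.Close()

	// yaml.v3 indents sequence items under their key:
	//   ports:
	//     - 80:80
	//     - 443:443
	//
	// yaml.v2 emitted them flush with the key instead:
	//   ports:
	//   - 80:80
	//   - 443:443
	fmt.Print(buf.String())
}
```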
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
The `alpha watch` command currently "attaches" to an already-running
Compose project, so it's necessary to run something like
`docker compose up --wait` first.
Now, we'll do the equivalent of an `up --build` before starting the
watch, so that we know the project is up-to-date and running.
Additionally, unlike an interactive `up`, the services are not stopped
when `watch` exits (e.g. via `Ctrl-C`). This prevents the need to start
from scratch each time the command is run - if some services are already
running and up-to-date, they can be used as-is. A `down` can always be
used to destroy everything, and we can consider introducing a flag like
`--down-on-exit` to `watch` or changing the default.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
The big change here is to pass around an explicit `*BuildOptions` object
as part of Compose operations like `up` & `run` that may or may not do
builds. If the options object is `nil`, no builds whatsoever will be
attempted.
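As a rough sketch of that pattern, with hypothetical types that only illustrate the idea (the real Compose and compose-go APIs are richer):

```go
package main

import "fmt"

// Illustrative stand-ins only; the real Compose/compose-go types differ.
type Project struct{ Services []string }

type BuildOptions struct{ Services []string }

// up builds only when the caller passes a non-nil *BuildOptions, and never
// signals "don't build" by mutating the Project (the anti-pattern above).
func up(p *Project, build *BuildOptions) {
	if build != nil {
		fmt.Println("building:", build.Services)
	}
	fmt.Println("starting:", p.Services)
}

func main() {
	p := &Project{Services: []string{"web", "db"}}
	up(p, &BuildOptions{Services: []string{"web"}}) // partial rebuild for watch
	up(p, nil)                                      // no builds attempted at all
}
```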
The motivation is to allow for partial rebuilds in the context of an `up`
for watch. This was broken and tricky to accomplish because various parts
of the Compose APIs mutate the `*Project` for convenience in ways that
make it unusable afterwards. (For example, it might set `service.Build = nil`
because it's not going to build that service right _then_. But we might
still want to build it later!)
NOTE: This commit does not actually touch the watch logic. This is all
in preparation to make it possible.
As part of this, a fair amount of code moved around and I eliminated some
partially redundant logic, mostly around multi-platform handling. Several
edge cases have been addressed along the way:
* `DOCKER_DEFAULT_PLATFORM` was _overriding_ explicitly set platforms
in some cases; this is no longer true, and it now behaves like the Docker
CLI (see the sketch after this list)
* It was possible for Compose to build an image for one platform and
then try to run it for a different platform (and fail)
* Errors are no longer returned if a local image exists but for the
wrong platform - the correct platform will be fetched/built (if
possible).
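As an illustration of the first bullet, here is a hedged sketch of the assumed precedence; `resolvePlatform` is a hypothetical helper that demonstrates the intended behavior, not Compose's actual implementation:

```go
package main

import (
	"fmt"
	"os"
)

// resolvePlatform sketches the assumed precedence: an explicit service-level
// platform wins, DOCKER_DEFAULT_PLATFORM is only a fallback, mirroring the
// Docker CLI.
func resolvePlatform(servicePlatform string) string {
	if servicePlatform != "" {
		return servicePlatform
	}
	if env := os.Getenv("DOCKER_DEFAULT_PLATFORM"); env != "" {
		return env
	}
	return "" // let the engine pick its native platform
}

func main() {
	os.Setenv("DOCKER_DEFAULT_PLATFORM", "linux/amd64")
	fmt.Println(resolvePlatform("linux/arm64")) // explicit platform wins
	fmt.Println(resolvePlatform(""))            // falls back to the env var
}
```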
Because there's a LOT of subtlety and tricky logic here, I've also tried
to add an excessive amount of explanatory comments.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
If running `up` in foreground mode (i.e. not `-d`),
when exiting via `Ctrl-C`, Compose stops all the
services it launched directly as part of that `up`
command.
In one of the E2E tests (`TestUpDependenciesNotStopped`),
this was occasionally flaking because the stop
behavior was racy: the return might not block on
the stop operation because it gets added to the
error group in a goroutine. As a result, it was
possible for no services to get terminated on exit.
There were a few other related pieces here that
I uncovered and tried to fix while stressing this.
For example, the printer could cause a deadlock if
an event was sent to it after it stopped.
Also, an error group wasn't really appropriate here;
each goroutine is a different operation for printing,
signal-handling, etc. If one part fails, we don't
actually want printing to stop, for example. This has
been switched to a `multierror.Group`, which has the
same API but coalesces errors instead of canceling a
context the moment the first one fails and returning
that single error.
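As a minimal sketch of that difference, `multierror.Group` (from `github.com/hashicorp/go-multierror`) lets every goroutine finish and reports all of their errors together:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/hashicorp/go-multierror"
)

func main() {
	var group multierror.Group

	group.Go(func() error { return errors.New("signal handler failed") })
	group.Go(func() error { return nil }) // e.g. the printer keeps running
	group.Go(func() error { return errors.New("stop operation failed") })

	// Wait blocks for every goroutine and returns all errors coalesced,
	// rather than cancelling the others when the first error occurs.
	if err := group.Wait().ErrorOrNil(); err != nil {
		fmt.Println(err)
	}
}
```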
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
We can't assume we receive container logs line by line. Some frameworks won't buffer output and will send it character by character, and we can also receive very long lines, which get buffered to 32kb and then cut into multiple log messages.
This assumes we will catch container streams being closed before we receive a `die` event for the container, which could be subject to a race condition, but the impact here is minimal and the fix works for the reproduction examples provided in the linked issues.
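For illustration, a hedged sketch (not the actual Compose code) of reassembling lines from a stream that arrives in arbitrary chunks:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// consume buffers partial chunks until a newline is seen, then emits the
// complete line; whatever is left when the stream closes is flushed as-is.
func consume(r io.Reader, emit func(string)) {
	reader := bufio.NewReader(r)
	var pending strings.Builder
	for {
		chunk, err := reader.ReadString('\n')
		pending.WriteString(strings.TrimSuffix(chunk, "\n"))
		if err == nil {
			emit(pending.String())
			pending.Reset()
			continue
		}
		if pending.Len() > 0 {
			emit(pending.String()) // stream closed mid-line: flush the remainder
		}
		return
	}
}

func main() {
	consume(strings.NewReader("char-by-char output\ntruncated long line"), func(line string) {
		fmt.Println("log:", line)
	})
}
```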
Signed-off-by: Nicolas De Loof <nicolas.deloof@gmail.com>
By default, `compose up` attaches to all services (i.e.
shows log output from every associated container). If
a service is specified, e.g. `compose up foo`, then
only `foo`'s logs are tailed. The `--attach-dependencies`
flag can also be used, so that if `foo` depended upon
`bar`, then `bar`'s logs would also be followed. It's
also possible to use `--no-attach` to filter out one
or more services explicitly, e.g. `compose up --no-attach=noisy`
would launch all services, including `noisy`, and would
show log output from every service _except_ `noisy`.
Lastly, it's possible to use `up --attach` to explicitly
restrict to a subset of services (or their dependencies).
How these flags interact with each other is also worth
thinking through.
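Here is a hedged sketch of how that flag interaction can be thought of; `attachSet` and its parameters are hypothetical and only illustrate the intended semantics:

```go
package main

import "fmt"

// attachSet is hypothetical; it only illustrates the intended flag semantics.
// requested = positional services from `up`, attach = --attach values,
// noAttach = --no-attach values, attachDeps = --attach-dependencies.
func attachSet(requested, attach, noAttach []string, attachDeps bool, deps func(string) []string) map[string]bool {
	base := requested
	if len(attach) > 0 {
		base = attach // an explicit --attach overrides the positional selection
	}
	out := map[string]bool{}
	for _, svc := range base {
		out[svc] = true
		if attachDeps {
			for _, dep := range deps(svc) {
				out[dep] = true
			}
		}
	}
	for _, svc := range noAttach {
		delete(out, svc) // --no-attach always filters a service out
	}
	return out
}

func main() {
	deps := func(svc string) []string {
		if svc == "foo" {
			return []string{"bar"}
		}
		return nil
	}
	fmt.Println(attachSet([]string{"foo"}, nil, nil, false, deps)) // `up foo`
	fmt.Println(attachSet([]string{"foo"}, nil, nil, true, deps))  // `up foo --attach-dependencies`
	fmt.Println(attachSet([]string{"foo", "bar", "noisy"}, nil, []string{"noisy"}, false, deps)) // `up --no-attach=noisy`
}
```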
There were a few different connected issues here, but
the primary issue was that running `compose up foo` was
always attaching dependencies regardless of `--attach-dependencies`.
The filtering logic here has been updated so that it
behaves predictably both when launching all services
(`compose up`) or a subset (`compose up foo`) as well
as various flag combinations on top of those.
Notably, this required making some changes to how it
watches containers. The logic here between attaching
for logs and monitoring for lifecycle changes is
tightly coupled, so some changes were needed to ensure
that the full set of services being `up`'d are _watched_
and the subset that should have logs shown are _attached_.
(This does mean faking the attach with an event but not
actually doing it.)
While handling that, I adjusted the context lifetimes
here, which improves error handling that gets shown to
the user and should help avoid potential leaks by getting
rid of a `context.Background()`.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Refactor to use a consistent code path for determining the build
args for a service image regardless of whether BuildKit or the
classic builder is being used.
After recent changes, these code paths had diverged, so the classic
builder was missing the proxy variables from the Docker client
config.
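A hedged sketch of the shared code path, with hypothetical names; the real helper in the Compose build code looks different:

```go
package main

import "fmt"

// buildArgs is a hypothetical helper: it merges proxy settings taken from the
// Docker client config with the service's own build args, and the same result
// is handed to both the BuildKit and classic-builder code paths.
func buildArgs(serviceArgs, proxyConfig map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range proxyConfig { // e.g. HTTP_PROXY, HTTPS_PROXY, NO_PROXY
		merged[k] = v
	}
	for k, v := range serviceArgs { // service-level args take precedence
		merged[k] = v
	}
	return merged
}

func main() {
	fmt.Println(buildArgs(
		map[string]string{"VERSION": "1.2.3"},
		map[string]string{"HTTP_PROXY": "http://proxy.example.com:3128"},
	))
}
```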
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Swap the default implementation now that batching is merged.
Keeping the `docker cp` based implementation around for the
moment, but it needs to be _explicitly_ disabled now by setting
`COMPOSE_EXPERIMENTAL_WATCH_TAR=0`.
After the next release, we should remove the `docker cp`
implementation entirely.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
If an optional dependency with a service condition of
`service_completed_successfully` exits successfully (exit code 0),
don't log a warning.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
* When waiting for dependencies, `select` on the context as well
as the ticker (see the sketch after this list)
* Write multiple progress events "transactionally" (i.e. hold the
lock for the duration to avoid other events being interleaved)
* Do not change "finished" steps back to "in progress" to prevent
flickering
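A minimal sketch of the first bullet, assuming a simple `ready` predicate:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// waitForDependency polls on a ticker but also selects on the context, so the
// wait aborts promptly on cancellation instead of spinning until the next tick.
func waitForDependency(ctx context.Context, ready func() bool) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			if ready() {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	fmt.Println(waitForDependency(ctx, func() bool { return false })) // context deadline exceeded
}
```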
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Move the builder and nodes initialization code up, avoiding recreating/reloading them for every service build.
Signed-off-by: Silvin Lubecki <silvin.lubecki@docker.com>
Adjust the debouncing logic so that it applies to all inbound file
events, regardless of whether they match a sync or rebuild rule.
When the batch is flushed out, if any event for the service is a
rebuild event, then the service is rebuilt and all sync events for
the batch are ignored. If _all_ events in the batch are sync events,
then a sync is triggered, passing the entire batch at once. This
provides a substantial performance win for the new `tar`-based
implementation, as it can efficiently transfer the changes in bulk.
Additionally, this helps with jitter, e.g. it's not uncommon for
there to be double-writes in quick succession to a file, so even if
there aren't many files being modified at once, it can still prevent
some unnecessary transfers.
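A hedged sketch of the flush decision (illustrative only; the real watch code is more involved):

```go
package main

import "fmt"

type fileEvent struct {
	Path    string
	Rebuild bool // true if the path matched a rebuild rule, false for sync
}

// flush applies the rule described above: one rebuild event in the batch wins
// and the sync events are dropped; otherwise the whole batch is synced at once.
func flush(batch []fileEvent, rebuild func(), syncAll func([]string)) {
	paths := make([]string, 0, len(batch))
	for _, ev := range batch {
		if ev.Rebuild {
			rebuild()
			return
		}
		paths = append(paths, ev.Path)
	}
	syncAll(paths)
}

func main() {
	batch := []fileEvent{{Path: "app/a.py"}, {Path: "app/b.py"}}
	flush(batch,
		func() { fmt.Println("rebuild service") },
		func(paths []string) { fmt.Println("bulk sync:", paths) },
	)
}
```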
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
Just moving some code around in preparation for an alternative
sync implementation that can do bulk transfers by using `tar`.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
It's no longer used in docker/cli and doesn't do anything other than
create an empty struct, so it's being replaced (as we're planning to
deprecate that function).
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
When building, if images are being pushed, ensure that only
named images (i.e. services with a populated `image` field)
are attempted to be pushed.
Services without `image` get an auto-generated name, which
will be a "Docker library" reference since they're in the
format `$project-$service`, which is implicitly the same as
`docker.io/library/$project-$service`. A push for that is
never desirable / will always fail.
The key here is that we cannot overwrite the `<svc>.image`
field when doing builds, as we need to be able to check for
its presence to determine whether a push makes sense.
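A hedged sketch of the resulting push filter, with hypothetical types:

```go
package main

import "fmt"

// service is a stand-in for the real compose-go type.
type service struct {
	Name  string
	Image string // empty when the compose file has no image: field
}

// pushTargets keeps only services with an explicit image name; auto-named
// build-only services ($project-$service) are skipped, since pushing them
// to docker.io/library/... would always fail.
func pushTargets(services []service) []string {
	var out []string
	for _, svc := range services {
		if svc.Image != "" {
			out = append(out, svc.Image)
		}
	}
	return out
}

func main() {
	fmt.Println(pushTargets([]service{
		{Name: "web", Image: "registry.example.com/acme/web:latest"},
		{Name: "worker"}, // build-only, auto-named, never pushed
	}))
}
```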
Fixes #10813.
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
closes #10783
The Compose Spec states that the default file mode for secrets is `0444`, i.e. world-readable permissions. However, the value was previously set to `0400`.
Signed-off-by: Shan Desai <shantanoo.desai@gmail.com>
As part of the fix for #10668, the logic was adjusted so that the
default (highest-priority) network is used in the `ContainerCreate`,
and then the remaining networks are connected via calls to
`NetworkConnect` before starting the container.
Unfortunately, `ServiceConfig::NetworksByPriority` is neither
deterministic nor stable when networks have the same priority.
It's non-deterministic because the order of networks from parsing
YAML is random, since they are loaded into a Go map (which has a
randomized iteration order). Additionally, it's not using a `SortStable`
in `compose-go`, so even if the load order was predictable, it
still might produce different results.
While I look at improving `compose-go` here to prevent this from
tripping us up in the future, this fix looks at _all_ networks for
a service and ignores the "default" one now. Before, it would
always skip the first one in the slice since that _should_ have
been the "default".
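For illustration, one way to make the ordering deterministic (a sketch, not the `compose-go` implementation) is a stable sort by priority with a name tiebreak:

```go
package main

import (
	"fmt"
	"sort"
)

type network struct {
	Name     string
	Priority int
}

// ordered sorts networks by priority (highest first) with a stable name
// tiebreak, so iterating a Go map of networks no longer yields a random order.
func ordered(networks map[string]network) []network {
	out := make([]network, 0, len(networks))
	for _, n := range networks {
		out = append(out, n)
	}
	sort.SliceStable(out, func(i, j int) bool {
		if out[i].Priority != out[j].Priority {
			return out[i].Priority > out[j].Priority
		}
		return out[i].Name < out[j].Name
	})
	return out
}

func main() {
	nets := map[string]network{
		"backend":  {Name: "backend", Priority: 0},
		"frontend": {Name: "frontend", Priority: 0},
	}
	fmt.Println(ordered(nets)) // always [{backend 0} {frontend 0}]
}
```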
Signed-off-by: Milas Bowman <milas.bowman@docker.com>
The Engine API allows at most one network to be connected as
part of the ContainerCreate API request. Compose will pick the
highest priority network.
Afterwards, the remaining networks (if any) are connected before
the container is actually started.
The big change here is that, previously, the highest-priority
network was connected in the create, and then disconnected and
immediately reconnected along with all the others. This was
racy because evidently connecting the container to the network
as part of the create isn't synchronous, so sometimes when Compose
tried to disconnect it, the API would return an error like:
```
container <id> is not connected to the network <network>
```
To avoid needing to disconnect and immediately reconnect, the
network config logic has been refactored to ensure that it sets
up the network config correctly the first time.
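A hedged sketch of the resulting flow, using a hypothetical minimal engine interface rather than the real Docker SDK types:

```go
package main

import (
	"context"
	"fmt"
)

// engine is a hypothetical minimal interface; the real code uses the Docker
// Engine SDK client.
type engine interface {
	ContainerCreate(ctx context.Context, name, network string) (string, error)
	NetworkConnect(ctx context.Context, network, containerID string) error
	ContainerStart(ctx context.Context, containerID string) error
}

// createAndStart creates the container with only the highest-priority network,
// connects the remaining networks, then starts it. No disconnect/reconnect of
// the first network is needed.
func createAndStart(ctx context.Context, api engine, name string, networksByPriority []string) error {
	containerID, err := api.ContainerCreate(ctx, name, networksByPriority[0])
	if err != nil {
		return err
	}
	for _, net := range networksByPriority[1:] {
		if err := api.NetworkConnect(ctx, net, containerID); err != nil {
			return fmt.Errorf("connecting %s: %w", net, err)
		}
	}
	return api.ContainerStart(ctx, containerID)
}

type fakeEngine struct{}

func (fakeEngine) ContainerCreate(_ context.Context, name, network string) (string, error) {
	fmt.Printf("create %s attached to %s\n", name, network)
	return "cid-123", nil
}
func (fakeEngine) NetworkConnect(_ context.Context, network, containerID string) error {
	fmt.Printf("connect %s to %s\n", containerID, network)
	return nil
}
func (fakeEngine) ContainerStart(_ context.Context, containerID string) error {
	fmt.Printf("start %s\n", containerID)
	return nil
}

func main() {
	nets := []string{"frontend", "backend"} // assumed sorted, highest priority first
	_ = createAndStart(context.Background(), fakeEngine{}, "web-1", nets)
}
```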
Signed-off-by: Milas Bowman <milas.bowman@docker.com>