34 files changed, 1318 insertions, 175 deletions
@@ -15,7 +15,7 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [func ContainerExists(name: string) int](#ContainerExists) -[func ContainerInspectData(name: string) string](#ContainerInspectData) +[func ContainerInspectData(name: string, size: bool) string](#ContainerInspectData) [func ContainerRestore(name: string, keep: bool, tcpEstablished: bool) string](#ContainerRestore) @@ -43,9 +43,13 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [func GetContainerStats(name: string) ContainerStats](#GetContainerStats) +[func GetContainerStatsWithHistory(previousStats: ContainerStats) ContainerStats](#GetContainerStatsWithHistory) + [func GetContainersByContext(all: bool, latest: bool, args: []string) []string](#GetContainersByContext) -[func GetEvents(options: EventInput) Event](#GetEvents) +[func GetContainersLogs(names: []string, follow: bool, latest: bool, since: string, tail: int, timestamps: bool) LogLine](#GetContainersLogs) + +[func GetEvents(filter: []string, since: string, until: string) Event](#GetEvents) [func GetImage(id: string) Image](#GetImage) @@ -133,6 +137,8 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [func TagImage(name: string, tagged: string) string](#TagImage) +[func TopPod(pod: string, latest: bool, descriptors: []string) []string](#TopPod) + [func UnmountContainer(name: string, force: bool) ](#UnmountContainer) [func UnpauseContainer(name: string) string](#UnpauseContainer) @@ -169,8 +175,6 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [type Event](#Event) -[type EventInput](#EventInput) - [type IDMap](#IDMap) [type IDMappingOptions](#IDMappingOptions) @@ -199,6 +203,8 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [type ListPodData](#ListPodData) +[type LogLine](#LogLine) + [type MoreResponse](#MoreResponse) [type NotImplemented](#NotImplemented) @@ -237,8 +243,6 @@ in the [API.md](https://github.com/containers/libpod/blob/master/API.md) file in [error RuntimeError](#RuntimeError) -[error StreamEnded](#StreamEnded) - [error VolumeNotFound](#VolumeNotFound) [error WantsMoreRequired](#WantsMoreRequired) @@ -296,7 +300,7 @@ $ varlink call -m unix:/run/podman/io.podman/io.podman.ContainerExists '{"name": ### <a name="ContainerInspectData"></a>func ContainerInspectData <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> -method ContainerInspectData(name: [string](https://godoc.org/builtin#string)) [string](https://godoc.org/builtin#string)</div> +method ContainerInspectData(name: [string](https://godoc.org/builtin#string), size: [bool](https://godoc.org/builtin#bool)) [string](https://godoc.org/builtin#string)</div> ContainerInspectData returns a container's inspect data in string form. This call is for development of Podman only and generally should not be used. 
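A minimal illustration of the updated signature, assuming the same varlink invocation style as the ContainerExists example above; the container name "foobar" is hypothetical and the raw inspect JSON returned by the call is omitted:

~~~
$ varlink call -m unix:/run/podman/io.podman/io.podman.ContainerInspectData '{"name": "foobar", "size": false}'
~~~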
### <a name="ContainerRestore"></a>func ContainerRestore @@ -472,6 +476,12 @@ $ varlink call -m unix:/run/podman/io.podman/io.podman.GetContainerStats '{"name } } ~~~ +### <a name="GetContainerStatsWithHistory"></a>func GetContainerStatsWithHistory +<div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> + +method GetContainerStatsWithHistory(previousStats: [ContainerStats](#ContainerStats)) [ContainerStats](#ContainerStats)</div> +GetContainerStatsWithHistory takes a previous set of container statistics and uses libpod functions +to calculate the containers statistics based on current and previous measurements. ### <a name="GetContainersByContext"></a>func GetContainersByContext <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> @@ -479,10 +489,15 @@ method GetContainersByContext(all: [bool](https://godoc.org/builtin#bool), lates GetContainersByContext allows you to get a list of container ids depending on all, latest, or a list of container names. The definition of latest container means the latest by creation date. In a multi- user environment, results might differ from what you expect. +### <a name="GetContainersLogs"></a>func GetContainersLogs +<div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> + +method GetContainersLogs(names: [[]string](#[]string), follow: [bool](https://godoc.org/builtin#bool), latest: [bool](https://godoc.org/builtin#bool), since: [string](https://godoc.org/builtin#string), tail: [int](https://godoc.org/builtin#int), timestamps: [bool](https://godoc.org/builtin#bool)) [LogLine](#LogLine)</div> + ### <a name="GetEvents"></a>func GetEvents <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> -method GetEvents(options: [EventInput](#EventInput)) [Event](#Event)</div> +method GetEvents(filter: [[]string](#[]string), since: [string](https://godoc.org/builtin#string), until: [string](https://godoc.org/builtin#string)) [Event](#Event)</div> GetEvents returns known libpod events filtered by the options provided. ### <a name="GetImage"></a>func GetImage <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> @@ -714,7 +729,7 @@ See also [GetContainer](#GetContainer). method ListImages() [Image](#Image)</div> ListImages returns information about the images that are currently in storage. -See also [InspectImage](InspectImage). +See also [InspectImage](#InspectImage). ### <a name="ListPods"></a>func ListPods <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> @@ -976,6 +991,11 @@ $ varlink call -m unix:/run/podman/io.podman/io.podman.StopPod '{"name": "135d71 method TagImage(name: [string](https://godoc.org/builtin#string), tagged: [string](https://godoc.org/builtin#string)) [string](https://godoc.org/builtin#string)</div> TagImage takes the name or ID of an image in local storage as well as the desired tag name. If the image cannot be found, an [ImageNotFound](#ImageNotFound) error will be returned; otherwise, the ID of the image is returned on success. 
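As a hedged sketch of the new GetContainersLogs method documented above: the parameter values are illustrative only, the container name "foobar" is hypothetical, and an empty `since` is assumed (not confirmed by this diff) to mean no lower time bound:

~~~
$ varlink call -m unix:/run/podman/io.podman/io.podman.GetContainersLogs '{"names": ["foobar"], "follow": false, "latest": false, "since": "", "tail": 10, "timestamps": true}'
~~~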
+### <a name="TopPod"></a>func TopPod +<div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> + +method TopPod(pod: [string](https://godoc.org/builtin#string), latest: [bool](https://godoc.org/builtin#bool), descriptors: [[]string](#[]string)) [[]string](#[]string)</div> + ### <a name="UnmountContainer"></a>func UnmountContainer <div style="background-color: #E8E8E8; padding: 15px; margin: 10px; border-radius: 10px;"> @@ -1426,17 +1446,6 @@ status [string](https://godoc.org/builtin#string) time [string](https://godoc.org/builtin#string) type [string](https://godoc.org/builtin#string) -### <a name="EventInput"></a>type EventInput - -EventInput describes the input to obtain libpod events - -filter [[]string](#[]string) - -since [string](https://godoc.org/builtin#string) - -stream [bool](https://godoc.org/builtin#bool) - -until [string](https://godoc.org/builtin#string) ### <a name="IDMap"></a>type IDMap IDMap is used to describe user name spaces during container creation @@ -1636,6 +1645,19 @@ labels [map[string]](#map[string]) numberofcontainers [string](https://godoc.org/builtin#string) containersinfo [ListPodContainerInfo](#ListPodContainerInfo) +### <a name="LogLine"></a>type LogLine + + + +device [string](https://godoc.org/builtin#string) + +parseLogType [string](https://godoc.org/builtin#string) + +time [string](https://godoc.org/builtin#string) + +msg [string](https://godoc.org/builtin#string) + +cid [string](https://godoc.org/builtin#string) ### <a name="MoreResponse"></a>type MoreResponse MoreResponse is a struct for when responses from varlink requires longer output @@ -1793,9 +1815,6 @@ PodNotFound means the pod could not be found by the provided name or ID in local ### <a name="RuntimeError"></a>type RuntimeError RuntimeErrors generally means a runtime could not be found or gotten. -### <a name="StreamEnded"></a>type StreamEnded - -The Podman endpoint has closed because the stream ended. ### <a name="VolumeNotFound"></a>type VolumeNotFound VolumeNotFound means the volume could not be found by the name or ID in local storage. 
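With the EventInput wrapper type removed, the filter, since, and until values are passed directly to GetEvents. A minimal sketch of such a call, assuming empty values are accepted and treated as "no filtering" (that default is an assumption, not something this diff states):

~~~
$ varlink call -m unix:/run/podman/io.podman/io.podman.GetEvents '{"filter": [], "since": "", "until": ""}'
~~~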
diff --git a/cmd/podman/cliconfig/config.go b/cmd/podman/cliconfig/config.go index 1461c9f03..884bd7fdb 100644 --- a/cmd/podman/cliconfig/config.go +++ b/cmd/podman/cliconfig/config.go @@ -572,3 +572,9 @@ type SystemPruneValues struct { type SystemRenumberValues struct { PodmanCommand } + +type SystemDfValues struct { + PodmanCommand + Verbose bool + Format string +} diff --git a/cmd/podman/commands.go b/cmd/podman/commands.go index 810c5a6f6..875b2aec8 100644 --- a/cmd/podman/commands.go +++ b/cmd/podman/commands.go @@ -108,6 +108,7 @@ func getSystemSubCommands() []*cobra.Command { return []*cobra.Command{ _pruneSystemCommand, _renumberCommand, + _dfSystemCommand, } } diff --git a/cmd/podman/common.go b/cmd/podman/common.go index 8b42ed673..771738302 100644 --- a/cmd/podman/common.go +++ b/cmd/podman/common.go @@ -293,7 +293,7 @@ func getCreateFlags(c *cliconfig.PodmanCommand) { ) createFlags.String( "healthcheck-interval", "30s", - "set an interval for the healthchecks", + "set an interval for the healthchecks (a value of disable results in no automatic timer setup)", ) createFlags.Uint( "healthcheck-retries", 3, diff --git a/cmd/podman/ps.go b/cmd/podman/ps.go index ad942da2e..27774f95d 100644 --- a/cmd/podman/ps.go +++ b/cmd/podman/ps.go @@ -423,7 +423,7 @@ func generateContainerFilterFuncs(filter, filterValue string, runtime *libpod.Ru return false }, nil case "status": - if !util.StringInSlice(filterValue, []string{"created", "running", "paused", "exited", "unknown"}) { + if !util.StringInSlice(filterValue, []string{"created", "running", "paused", "stopped", "exited", "unknown"}) { return nil, errors.Errorf("%s is not a valid status", filterValue) } return func(c *libpod.Container) bool { @@ -431,6 +431,9 @@ func generateContainerFilterFuncs(filter, filterValue string, runtime *libpod.Ru if err != nil { return false } + if filterValue == "stopped" { + filterValue = "exited" + } state := status.String() if status == libpod.ContainerStateConfigured { state = "created" @@ -491,6 +494,14 @@ func generateContainerFilterFuncs(filter, filterValue string, runtime *libpod.Ru } return false }, nil + case "health": + return func(c *libpod.Container) bool { + hcStatus, err := c.HealthCheckStatus() + if err != nil { + return false + } + return hcStatus == filterValue + }, nil } return nil, errors.Errorf("%s is an invalid filter", filter) } diff --git a/cmd/podman/shared/create.go b/cmd/podman/shared/create.go index 55eb3ce83..5ce0b8865 100644 --- a/cmd/podman/shared/create.go +++ b/cmd/podman/shared/create.go @@ -868,21 +868,21 @@ func makeHealthCheckFromCli(c *cliconfig.PodmanCommand) (*manifest.Schema2Health hc := manifest.Schema2HealthConfig{ Test: cmd, } + + if inInterval == "disable" { + inInterval = "0" + } intervalDuration, err := time.ParseDuration(inInterval) if err != nil { return nil, errors.Wrapf(err, "invalid healthcheck-interval %s ", inInterval) } - if intervalDuration < time.Duration(time.Second*1) { - return nil, errors.New("healthcheck-interval must be at least 1 second") - } - hc.Interval = intervalDuration if inRetries < 1 { return nil, errors.New("healthcheck-retries must be greater than 0.") } - + hc.Retries = int(inRetries) timeoutDuration, err := time.ParseDuration(inTimeout) if err != nil { return nil, errors.Wrapf(err, "invalid healthcheck-timeout %s", inTimeout) diff --git a/cmd/podman/system_df.go b/cmd/podman/system_df.go new file mode 100644 index 000000000..183c5a7dd --- /dev/null +++ b/cmd/podman/system_df.go @@ -0,0 +1,639 @@ +package main + +import ( + "context" 
+ "fmt" + "os" + "path/filepath" + "strings" + "time" + + "github.com/containers/buildah/pkg/formats" + "github.com/containers/libpod/cmd/podman/cliconfig" + "github.com/containers/libpod/cmd/podman/libpodruntime" + "github.com/containers/libpod/libpod" + "github.com/containers/libpod/libpod/image" + units "github.com/docker/go-units" + "github.com/pkg/errors" + "github.com/sirupsen/logrus" + "github.com/spf13/cobra" +) + +var ( + dfSystemCommand cliconfig.SystemDfValues + dfSystemDescription = ` + podman system df + + Show podman disk usage + ` + _dfSystemCommand = &cobra.Command{ + Use: "df", + Short: "Show podman disk usage", + Long: dfSystemDescription, + RunE: func(cmd *cobra.Command, args []string) error { + dfSystemCommand.GlobalFlags = MainGlobalOpts + return dfSystemCmd(&dfSystemCommand) + }, + } +) + +type dfMetaData struct { + images []*image.Image + containers []*libpod.Container + activeContainers map[string]*libpod.Container + imagesUsedbyCtrMap map[string][]*libpod.Container + imagesUsedbyActiveCtr map[string][]*libpod.Container + volumes []*libpod.Volume + volumeUsedByContainerMap map[string][]*libpod.Container +} + +type systemDfDiskUsage struct { + Type string + Total int + Active int + Size string + Reclaimable string +} + +type imageVerboseDiskUsage struct { + Repository string + Tag string + ImageID string + Created string + Size string + SharedSize string + UniqueSize string + Containers int +} + +type containerVerboseDiskUsage struct { + ContainerID string + Image string + Command string + LocalVolumes int + Size string + Created string + Status string + Names string +} + +type volumeVerboseDiskUsage struct { + VolumeName string + Links int + Size string +} + +const systemDfDefaultFormat string = "table {{.Type}}\t{{.Total}}\t{{.Active}}\t{{.Size}}\t{{.Reclaimable}}" + +func init() { + dfSystemCommand.Command = _dfSystemCommand + dfSystemCommand.SetUsageTemplate(UsageTemplate()) + flags := dfSystemCommand.Flags() + flags.BoolVarP(&dfSystemCommand.Verbose, "verbose", "v", false, "Show detailed information on space usage") + flags.StringVar(&dfSystemCommand.Format, "format", "", "Pretty-print images using a Go template") +} + +func dfSystemCmd(c *cliconfig.SystemDfValues) error { + runtime, err := libpodruntime.GetRuntime(&c.PodmanCommand) + if err != nil { + return errors.Wrapf(err, "Could not get runtime") + } + defer runtime.Shutdown(false) + + ctx := getContext() + + metaData, err := getDfMetaData(ctx, runtime) + if err != nil { + return errors.Wrapf(err, "error getting disk usage data") + } + + if c.Verbose { + err := verboseOutput(ctx, metaData) + if err != nil { + return err + } + return nil + } + + systemDfDiskUsages, err := getDiskUsage(ctx, runtime, metaData) + if err != nil { + return errors.Wrapf(err, "error getting output of system df") + } + format := systemDfDefaultFormat + if c.Format != "" { + format = strings.Replace(c.Format, `\t`, "\t", -1) + } + generateSysDfOutput(systemDfDiskUsages, format) + return nil +} + +func generateSysDfOutput(systemDfDiskUsages []systemDfDiskUsage, format string) { + var systemDfHeader = map[string]string{ + "Type": "TYPE", + "Total": "TOTAL", + "Active": "ACTIVE", + "Size": "SIZE", + "Reclaimable": "RECLAIMABLE", + } + out := formats.StdoutTemplateArray{Output: systemDfDiskUsageToGeneric(systemDfDiskUsages), Template: format, Fields: systemDfHeader} + formats.Writer(out).Out() +} + +func getDiskUsage(ctx context.Context, runtime *libpod.Runtime, metaData dfMetaData) ([]systemDfDiskUsage, error) { + imageDiskUsage, err := 
getImageDiskUsage(ctx, metaData.images, metaData.imagesUsedbyCtrMap, metaData.imagesUsedbyActiveCtr) + if err != nil { + return nil, errors.Wrapf(err, "error getting disk usage of images") + } + containerDiskUsage, err := getContainerDiskUsage(metaData.containers, metaData.activeContainers) + if err != nil { + return nil, errors.Wrapf(err, "error getting disk usage of containers") + } + volumeDiskUsage, err := getVolumeDiskUsage(metaData.volumes, metaData.volumeUsedByContainerMap) + if err != nil { + return nil, errors.Wrapf(err, "error getting disk usage of volumess") + } + + systemDfDiskUsages := []systemDfDiskUsage{imageDiskUsage, containerDiskUsage, volumeDiskUsage} + return systemDfDiskUsages, nil +} + +func getDfMetaData(ctx context.Context, runtime *libpod.Runtime) (dfMetaData, error) { + var metaData dfMetaData + images, err := runtime.ImageRuntime().GetImages() + if err != nil { + return metaData, errors.Wrapf(err, "unable to get images") + } + containers, err := runtime.GetAllContainers() + if err != nil { + return metaData, errors.Wrapf(err, "error getting all containers") + } + volumes, err := runtime.GetAllVolumes() + if err != nil { + return metaData, errors.Wrap(err, "error getting all volumes") + } + activeContainers, err := activeContainers(containers) + if err != nil { + return metaData, errors.Wrapf(err, "error getting active containers") + } + imagesUsedbyCtrMap, imagesUsedbyActiveCtr, err := imagesUsedbyCtr(containers, activeContainers) + if err != nil { + return metaData, errors.Wrapf(err, "error getting getting images used by containers") + } + metaData = dfMetaData{ + images: images, + containers: containers, + activeContainers: activeContainers, + imagesUsedbyCtrMap: imagesUsedbyCtrMap, + imagesUsedbyActiveCtr: imagesUsedbyActiveCtr, + volumes: volumes, + volumeUsedByContainerMap: volumeUsedByContainer(containers), + } + return metaData, nil +} + +func imageUniqueSize(ctx context.Context, images []*image.Image) (map[string]uint64, error) { + imgUniqueSizeMap := make(map[string]uint64) + for _, img := range images { + parentImg := img + for { + next, err := parentImg.GetParent() + if err != nil { + return nil, errors.Wrapf(err, "error getting parent of image %s", parentImg.ID()) + } + if next == nil { + break + } + parentImg = next + } + imgSize, err := img.Size(ctx) + if err != nil { + return nil, err + } + if img.ID() == parentImg.ID() { + imgUniqueSizeMap[img.ID()] = *imgSize + } else { + parentImgSize, err := parentImg.Size(ctx) + if err != nil { + return nil, errors.Wrapf(err, "error getting size of parent image %s", parentImg.ID()) + } + imgUniqueSizeMap[img.ID()] = *imgSize - *parentImgSize + } + } + return imgUniqueSizeMap, nil +} + +func getImageDiskUsage(ctx context.Context, images []*image.Image, imageUsedbyCintainerMap map[string][]*libpod.Container, imageUsedbyActiveContainerMap map[string][]*libpod.Container) (systemDfDiskUsage, error) { + var ( + numberOfImages int + sumSize uint64 + numberOfActiveImages int + unreclaimableSize uint64 + imageDiskUsage systemDfDiskUsage + reclaimableStr string + ) + + imgUniqueSizeMap, err := imageUniqueSize(ctx, images) + if err != nil { + return imageDiskUsage, errors.Wrapf(err, "error getting unique size of images") + } + + for _, img := range images { + + unreclaimableSize += imageUsedSize(img, imgUniqueSizeMap, imageUsedbyCintainerMap, imageUsedbyActiveContainerMap) + + isParent, err := img.IsParent() + if err != nil { + return imageDiskUsage, err + } + parent, err := img.GetParent() + if err != nil { + return 
imageDiskUsage, errors.Wrapf(err, "error getting parent of image %s", img.ID()) + } + if isParent && parent != nil { + continue + } + numberOfImages++ + if _, isActive := imageUsedbyCintainerMap[img.ID()]; isActive { + numberOfActiveImages++ + } + + if !isParent { + size, err := img.Size(ctx) + if err != nil { + return imageDiskUsage, errors.Wrapf(err, "error getting disk usage of image %s", img.ID()) + } + sumSize += *size + } + + } + sumSizeStr := units.HumanSizeWithPrecision(float64(sumSize), 3) + reclaimable := sumSize - unreclaimableSize + if sumSize != 0 { + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(float64(reclaimable), 3), 100*reclaimable/sumSize) + } else { + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(float64(reclaimable), 3), 0) + } + imageDiskUsage = systemDfDiskUsage{ + Type: "Images", + Total: numberOfImages, + Active: numberOfActiveImages, + Size: sumSizeStr, + Reclaimable: reclaimableStr, + } + return imageDiskUsage, nil +} + +func imageUsedSize(img *image.Image, imgUniqueSizeMap map[string]uint64, imageUsedbyCintainerMap map[string][]*libpod.Container, imageUsedbyActiveContainerMap map[string][]*libpod.Container) uint64 { + var usedSize uint64 + imgUnique := imgUniqueSizeMap[img.ID()] + if _, isCtrActive := imageUsedbyActiveContainerMap[img.ID()]; isCtrActive { + return imgUnique + } + containers := imageUsedbyCintainerMap[img.ID()] + for _, ctr := range containers { + if len(ctr.UserVolumes()) > 0 { + usedSize += imgUnique + return usedSize + } + } + return usedSize +} + +func imagesUsedbyCtr(containers []*libpod.Container, activeContainers map[string]*libpod.Container) (map[string][]*libpod.Container, map[string][]*libpod.Container, error) { + imgCtrMap := make(map[string][]*libpod.Container) + imgActiveCtrMap := make(map[string][]*libpod.Container) + for _, ctr := range containers { + imgID, _ := ctr.Image() + imgCtrMap[imgID] = append(imgCtrMap[imgID], ctr) + if _, isActive := activeContainers[ctr.ID()]; isActive { + imgActiveCtrMap[imgID] = append(imgActiveCtrMap[imgID], ctr) + } + } + return imgCtrMap, imgActiveCtrMap, nil +} + +func getContainerDiskUsage(containers []*libpod.Container, activeContainers map[string]*libpod.Container) (systemDfDiskUsage, error) { + var ( + sumSize int64 + unreclaimableSize int64 + reclaimableStr string + ) + for _, ctr := range containers { + size, err := ctr.RWSize() + if err != nil { + return systemDfDiskUsage{}, errors.Wrapf(err, "error getting size of container %s", ctr.ID()) + } + sumSize += size + } + for _, activeCtr := range activeContainers { + size, err := activeCtr.RWSize() + if err != nil { + return systemDfDiskUsage{}, errors.Wrapf(err, "error getting size of active container %s", activeCtr.ID()) + } + unreclaimableSize += size + } + if sumSize == 0 { + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(0, 3), 0) + } else { + reclaimable := sumSize - unreclaimableSize + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(float64(reclaimable), 3), 100*reclaimable/sumSize) + } + containerDiskUsage := systemDfDiskUsage{ + Type: "Containers", + Total: len(containers), + Active: len(activeContainers), + Size: units.HumanSizeWithPrecision(float64(sumSize), 3), + Reclaimable: reclaimableStr, + } + return containerDiskUsage, nil +} + +func ctrIsActive(ctr *libpod.Container) (bool, error) { + state, err := ctr.State() + if err != nil { + return false, err + } + return state == libpod.ContainerStatePaused || state == 
libpod.ContainerStateRunning, nil +} + +func activeContainers(containers []*libpod.Container) (map[string]*libpod.Container, error) { + activeContainers := make(map[string]*libpod.Container) + for _, aCtr := range containers { + isActive, err := ctrIsActive(aCtr) + if err != nil { + return nil, err + } + if isActive { + activeContainers[aCtr.ID()] = aCtr + } + } + return activeContainers, nil +} + +func getVolumeDiskUsage(volumes []*libpod.Volume, volumeUsedByContainerMap map[string][]*libpod.Container) (systemDfDiskUsage, error) { + var ( + sumSize int64 + unreclaimableSize int64 + reclaimableStr string + ) + for _, volume := range volumes { + size, err := volumeSize(volume) + if err != nil { + return systemDfDiskUsage{}, errors.Wrapf(err, "error getting size of volime %s", volume.Name()) + } + sumSize += size + if _, exist := volumeUsedByContainerMap[volume.Name()]; exist { + unreclaimableSize += size + } + } + reclaimable := sumSize - unreclaimableSize + if sumSize != 0 { + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(float64(reclaimable), 3), 100*reclaimable/sumSize) + } else { + reclaimableStr = fmt.Sprintf("%s (%v%%)", units.HumanSizeWithPrecision(float64(reclaimable), 3), 0) + } + volumesDiskUsage := systemDfDiskUsage{ + Type: "Local Volumes", + Total: len(volumes), + Active: len(volumeUsedByContainerMap), + Size: units.HumanSizeWithPrecision(float64(sumSize), 3), + Reclaimable: reclaimableStr, + } + return volumesDiskUsage, nil +} + +func volumeUsedByContainer(containers []*libpod.Container) map[string][]*libpod.Container { + volumeUsedByContainerMap := make(map[string][]*libpod.Container) + for _, ctr := range containers { + + ctrVolumes := ctr.UserVolumes() + for _, ctrVolume := range ctrVolumes { + volumeUsedByContainerMap[ctrVolume] = append(volumeUsedByContainerMap[ctrVolume], ctr) + } + } + return volumeUsedByContainerMap +} + +func volumeSize(volume *libpod.Volume) (int64, error) { + var size int64 + err := filepath.Walk(volume.MountPoint(), func(path string, info os.FileInfo, err error) error { + if err == nil && !info.IsDir() { + size += info.Size() + } + return err + }) + return size, err +} + +func getImageVerboseDiskUsage(ctx context.Context, images []*image.Image, imagesUsedbyCtr map[string][]*libpod.Container) ([]imageVerboseDiskUsage, error) { + var imagesVerboseDiskUsage []imageVerboseDiskUsage + imgUniqueSizeMap, err := imageUniqueSize(ctx, images) + if err != nil { + return imagesVerboseDiskUsage, errors.Wrapf(err, "error getting unique size of images") + } + for _, img := range images { + isParent, err := img.IsParent() + if err != nil { + return imagesVerboseDiskUsage, errors.Wrapf(err, "error checking if %s is a parent images", img.ID()) + } + parent, err := img.GetParent() + if err != nil { + return imagesVerboseDiskUsage, errors.Wrapf(err, "error getting parent of image %s", img.ID()) + } + if isParent && parent != nil { + continue + } + size, err := img.Size(ctx) + if err != nil { + return imagesVerboseDiskUsage, errors.Wrapf(err, "error getting size of image %s", img.ID()) + } + numberOfContainers := 0 + if ctrs, exist := imagesUsedbyCtr[img.ID()]; exist { + numberOfContainers = len(ctrs) + } + var repo string + var tag string + if len(img.Names()) == 0 { + repo = "<none>" + tag = "<none>" + } + repopairs, err := image.ReposToMap([]string{img.Names()[0]}) + if err != nil { + logrus.Errorf("error finding tag/digest for %s", img.ID()) + } + for reponame, tags := range repopairs { + for _, tagname := range tags { + repo = reponame + 
tag = tagname + } + } + + imageVerbosedf := imageVerboseDiskUsage{ + Repository: repo, + Tag: tag, + ImageID: shortID(img.ID()), + Created: units.HumanDuration(time.Since((img.Created().Local()))) + " ago", + Size: units.HumanSizeWithPrecision(float64(*size), 3), + SharedSize: units.HumanSizeWithPrecision(float64(*size-imgUniqueSizeMap[img.ID()]), 3), + UniqueSize: units.HumanSizeWithPrecision(float64(imgUniqueSizeMap[img.ID()]), 3), + Containers: numberOfContainers, + } + imagesVerboseDiskUsage = append(imagesVerboseDiskUsage, imageVerbosedf) + } + return imagesVerboseDiskUsage, nil +} + +func getContainerVerboseDiskUsage(containers []*libpod.Container) (containersVerboseDiskUsage []containerVerboseDiskUsage, err error) { + for _, ctr := range containers { + imgID, _ := ctr.Image() + size, err := ctr.RWSize() + if err != nil { + return containersVerboseDiskUsage, errors.Wrapf(err, "error getting size of container %s", ctr.ID()) + } + state, err := ctr.State() + if err != nil { + return containersVerboseDiskUsage, errors.Wrapf(err, "error getting the state of container %s", ctr.ID()) + } + + ctrVerboseData := containerVerboseDiskUsage{ + ContainerID: shortID(ctr.ID()), + Image: shortImageID(imgID), + Command: strings.Join(ctr.Command(), " "), + LocalVolumes: len(ctr.UserVolumes()), + Size: units.HumanSizeWithPrecision(float64(size), 3), + Created: units.HumanDuration(time.Since(ctr.CreatedTime().Local())) + "ago", + Status: state.String(), + Names: ctr.Name(), + } + containersVerboseDiskUsage = append(containersVerboseDiskUsage, ctrVerboseData) + + } + return containersVerboseDiskUsage, nil +} + +func getVolumeVerboseDiskUsage(volumes []*libpod.Volume, volumeUsedByContainerMap map[string][]*libpod.Container) (volumesVerboseDiskUsage []volumeVerboseDiskUsage, err error) { + for _, vol := range volumes { + volSize, err := volumeSize(vol) + if err != nil { + return volumesVerboseDiskUsage, errors.Wrapf(err, "error getting size of volume %s", vol.Name()) + } + links := 0 + if linkCtr, exist := volumeUsedByContainerMap[vol.Name()]; exist { + links = len(linkCtr) + } + volumeVerboseData := volumeVerboseDiskUsage{ + VolumeName: vol.Name(), + Links: links, + Size: units.HumanSizeWithPrecision(float64(volSize), 3), + } + volumesVerboseDiskUsage = append(volumesVerboseDiskUsage, volumeVerboseData) + } + return volumesVerboseDiskUsage, nil +} + +func imagesVerboseOutput(ctx context.Context, metaData dfMetaData) error { + var imageVerboseHeader = map[string]string{ + "Repository": "REPOSITORY", + "Tag": "TAG", + "ImageID": "IMAGE ID", + "Created": "CREATED", + "Size": "SIZE", + "SharedSize": "SHARED SIZE", + "UniqueSize": "UNQUE SIZE", + "Containers": "CONTAINERS", + } + imagesVerboseDiskUsage, err := getImageVerboseDiskUsage(ctx, metaData.images, metaData.imagesUsedbyCtrMap) + if err != nil { + return errors.Wrapf(err, "error getting verbose output of images") + } + os.Stderr.WriteString("Images space usage:\n\n") + out := formats.StdoutTemplateArray{Output: systemDfImageVerboseDiskUsageToGeneric(imagesVerboseDiskUsage), Template: "table {{.Repository}}\t{{.Tag}}\t{{.ImageID}}\t{{.Created}}\t{{.Size}}\t{{.SharedSize}}\t{{.UniqueSize}}\t{{.Containers}}", Fields: imageVerboseHeader} + formats.Writer(out).Out() + return nil +} + +func containersVerboseOutput(ctx context.Context, metaData dfMetaData) error { + var containerVerboseHeader = map[string]string{ + "ContainerID": "CONTAINER ID ", + "Image": "IMAGE", + "Command": "COMMAND", + "LocalVolumes": "LOCAL VOLUMES", + "Size": "SIZE", + "Created": 
"CREATED", + "Status": "STATUS", + "Names": "NAMES", + } + containersVerboseDiskUsage, err := getContainerVerboseDiskUsage(metaData.containers) + if err != nil { + return errors.Wrapf(err, "error getting verbose output of containers") + } + os.Stderr.WriteString("\nContainers space usage:\n\n") + out := formats.StdoutTemplateArray{Output: systemDfContainerVerboseDiskUsageToGeneric(containersVerboseDiskUsage), Template: "table {{.ContainerID}}\t{{.Image}}\t{{.Command}}\t{{.LocalVolumes}}\t{{.Size}}\t{{.Created}}\t{{.Status}}\t{{.Names}}", Fields: containerVerboseHeader} + formats.Writer(out).Out() + return nil +} + +func volumesVerboseOutput(ctx context.Context, metaData dfMetaData) error { + var volumeVerboseHeader = map[string]string{ + "VolumeName": "VOLUME NAME", + "Links": "LINKS", + "Size": "SIZE", + } + volumesVerboseDiskUsage, err := getVolumeVerboseDiskUsage(metaData.volumes, metaData.volumeUsedByContainerMap) + if err != nil { + return errors.Wrapf(err, "error getting verbose ouput of volumes") + } + os.Stderr.WriteString("\nLocal Volumes space usage:\n\n") + out := formats.StdoutTemplateArray{Output: systemDfVolumeVerboseDiskUsageToGeneric(volumesVerboseDiskUsage), Template: "table {{.VolumeName}}\t{{.Links}}\t{{.Size}}", Fields: volumeVerboseHeader} + formats.Writer(out).Out() + return nil +} + +func verboseOutput(ctx context.Context, metaData dfMetaData) error { + if err := imagesVerboseOutput(ctx, metaData); err != nil { + return err + } + if err := containersVerboseOutput(ctx, metaData); err != nil { + return err + } + if err := volumesVerboseOutput(ctx, metaData); err != nil { + return err + } + return nil +} + +func systemDfDiskUsageToGeneric(diskUsages []systemDfDiskUsage) (out []interface{}) { + for _, usage := range diskUsages { + out = append(out, interface{}(usage)) + } + return out +} + +func systemDfImageVerboseDiskUsageToGeneric(diskUsages []imageVerboseDiskUsage) (out []interface{}) { + for _, usage := range diskUsages { + out = append(out, interface{}(usage)) + } + return out +} + +func systemDfContainerVerboseDiskUsageToGeneric(diskUsages []containerVerboseDiskUsage) (out []interface{}) { + for _, usage := range diskUsages { + out = append(out, interface{}(usage)) + } + return out +} + +func systemDfVolumeVerboseDiskUsageToGeneric(diskUsages []volumeVerboseDiskUsage) (out []interface{}) { + for _, usage := range diskUsages { + out = append(out, interface{}(usage)) + } + return out +} + +func shortImageID(id string) string { + const imageIDTruncLength int = 4 + if len(id) > imageIDTruncLength { + return id[:imageIDTruncLength] + } + return id +} diff --git a/cmd/podman/utils.go b/cmd/podman/utils.go index 4ec0f8a13..45d081512 100644 --- a/cmd/podman/utils.go +++ b/cmd/podman/utils.go @@ -200,35 +200,6 @@ func getPodsFromContext(c *cliconfig.PodmanCommand, r *libpod.Runtime) ([]*libpo return pods, lastError } -func getVolumesFromContext(c *cliconfig.PodmanCommand, r *libpod.Runtime) ([]*libpod.Volume, error) { - args := c.InputArgs - var ( - vols []*libpod.Volume - lastError error - err error - ) - - if c.Bool("all") { - vols, err = r.Volumes() - if err != nil { - return nil, errors.Wrapf(err, "unable to get all volumes") - } - } - - for _, i := range args { - vol, err := r.GetVolume(i) - if err != nil { - if lastError != nil { - logrus.Errorf("%q", lastError) - } - lastError = errors.Wrapf(err, "unable to find volume %s", i) - continue - } - vols = append(vols, vol) - } - return vols, lastError -} - //printParallelOutput takes the map of parallel worker results 
and outputs them // to stdout func printParallelOutput(m map[string]error, errCount int) error { diff --git a/cmd/podman/varlink/io.podman.varlink b/cmd/podman/varlink/io.podman.varlink index 517a7a2a1..ad2de56f8 100644 --- a/cmd/podman/varlink/io.podman.varlink +++ b/cmd/podman/varlink/io.podman.varlink @@ -1111,7 +1111,7 @@ method ContainerArtifacts(name: string, artifactName: string) -> (config: string # ContainerInspectData returns a container's inspect data in string form. This call is for # development of Podman only and generally should not be used. -method ContainerInspectData(name: string) -> (config: string) +method ContainerInspectData(name: string, size: bool) -> (config: string) # ContainerStateData returns a container's state config in string form. This call is for # development of Podman only and generally should not be used. @@ -1151,7 +1151,7 @@ method GetPodsByContext(all: bool, latest: bool, args: []string) -> (pods: []str method LoadImage(name: string, inputFile: string, quiet: bool, deleteFile: bool) -> (reply: MoreResponse) # GetEvents returns known libpod events filtered by the options provided. -method GetEvents(filter: []string, since: string, stream: bool, until: string) -> (events: Event) +method GetEvents(filter: []string, since: string, until: string) -> (events: Event) # ImageNotFound means the image could not be found by the provided name or ID in local storage. error ImageNotFound (id: string, reason: string) diff --git a/commands.md b/commands.md index 31a77c0c4..156a1cdf6 100644 --- a/commands.md +++ b/commands.md @@ -1,83 +1,85 @@ ![PODMAN logo](logo/podman-logo-source.svg) + # libpod - library for running OCI-based containers in Pods ## Podman Commands -| Command | Description | Demo| -| :------------------------------------------------------- | :------------------------------------------------------------------------ | :----| -| [podman(1)](/docs/podman.1.md) | Simple management tool for pods and images || -| [podman-attach(1)](/docs/podman-attach.1.md) | Attach to a running container |[![...](/docs/play.png)](https://asciinema.org/a/XDlocUrHVETFECg4zlO9nBbLf)| -| [podman-build(1)](/docs/podman-build.1.md) | Build an image using instructions from Dockerfiles || -| [podman-commit(1)](/docs/podman-commit.1.md) | Create new image based on the changed container || -| [podman-container(1)](/docs/podman-container.1.md) | Manage Containers || -| [podman-container-checkpoint(1)](/docs/podman-container-checkpoint.1.md) | Checkpoints one or more running containers || -| [podman-container-cleanup(1)](/docs/podman-container-cleanup.1.md) | Cleanup Container storage and networks || -| [podman-container-exists(1)](/docs/podman-container-exists.1.md) | Check if an container exists in local storage || -| [podman-container-prune(1)](/docs/podman-container-prune.1.md) | Remove all stopped containers || -| [podman-container-refresh(1)](/docs/podman-container-refresh.1.md) | Refresh all containers state in database || -| [podman-container-restore(1)](/docs/podman-container-restore.1.md) | Restores one or more running containers || -| [podman-container-runlabel(1)](/docs/podman-container-runlabel.1.md) | Execute Image Label Method || -| [podman-cp(1)](/docs/podman-cp.1.md) | Copy files/folders between a container and the local filesystem || -| [podman-create(1)](/docs/podman-create.1.md) | Create a new container || -| [podman-diff(1)](/docs/podman-diff.1.md) | Inspect changes on a container or image's filesystem 
|[![...](/docs/play.png)](https://asciinema.org/a/FXfWB9CKYFwYM4EfqW3NSZy1G)| -| [podman-events(1)](/docs/podman-events.1.md) | Monitor Podman events || -| [podman-exec(1)](/docs/podman-exec.1.md) | Execute a command in a running container -| [podman-export(1)](/docs/podman-export.1.md) | Export container's filesystem contents as a tar archive |[![...](/docs/play.png)](https://asciinema.org/a/913lBIRAg5hK8asyIhhkQVLtV)| -| [podman-generate(1)](/docs/podman-generate.1.md) | Generate structured output based on Podman containers and pods | | -| [podman-history(1)](/docs/podman-history.1.md) | Shows the history of an image |[![...](/docs/play.png)](https://asciinema.org/a/bCvUQJ6DkxInMELZdc5DinNSx)| -| [podman-image(1)](/docs/podman-image.1.md) | Manage Images|| -| [podman-image-exists(1)](/docs/podman-image-exists.1.md) | Check if an image exists in local storage|| -| [podman-image-prune(1)](/docs/podman-image-prune.1.md) | Remove all unused images|| -| [podman-image-sign(1)](/docs/podman-image-sign.1.md) | Create a signature for an image|| -| [podman-image-trust(1)](/docs/podman-image-trust.1.md) | Manage container registry image trust policy|| -| [podman-images(1)](/docs/podman-images.1.md) | List images in local storage |[![...](/docs/play.png)](https://asciinema.org/a/133649)| -| [podman-import(1)](/docs/podman-import.1.md) | Import a tarball and save it as a filesystem image || -| [podman-info(1)](/docs/podman-info.1.md) | Display system information |[![...](/docs/play.png)](https://asciinema.org/a/yKbi5fQ89y5TJ8e1RfJd4ivTD)| -| [podman-inspect(1)](/docs/podman-inspect.1.md) | Display the configuration of a container or image |[![...](/docs/play.png)](https://asciinema.org/a/133418)| -| [podman-kill(1)](/docs/podman-kill.1.md) | Kill the main process in one or more running containers |[![...](/docs/play.png)](https://asciinema.org/a/3jNos0A5yzO4hChu7ddKkUPw7)| -| [podman-load(1)](/docs/podman-load.1.md) | Load an image from a container image archive |[![...](/docs/play.png)](https://asciinema.org/a/kp8kOaexEhEa20P1KLZ3L5X4g)| -| [podman-login(1)](/docs/podman-login.1.md) | Login to a container registry |[![...](/docs/play.png)](https://asciinema.org/a/oNiPgmfo1FjV2YdesiLpvihtV)| -| [podman-logout(1)](/docs/podman-logout.1.md) | Logout of a container registry |[![...](/docs/play.png)](https://asciinema.org/a/oNiPgmfo1FjV2YdesiLpvihtV)| -| [podman-logs(1)](/docs/podman-logs.1.md) | Display the logs of a container |[![...](/docs/play.png)](https://asciinema.org/a/MZPTWD5CVs3dMREkBxQBY9C5z)| -| [podman-mount(1)](/docs/podman-mount.1.md) | Mount a working container's root filesystem |[![...](/docs/play.png)](https://asciinema.org/a/YSP6hNvZo0RGeMHDA97PhPAf3)| -| [podman-pause(1)](/docs/podman-pause.1.md) | Pause one or more running containers |[![...](/docs/play.png)](https://asciinema.org/a/141292)| -| [podman-pod(1)](/docs/podman-pod.1.md) | Simple management tool for groups of containers, called pods || -| [podman-pod-create(1)](/docs/podman-pod-create.1.md) | Create a new pod || -| [podman-pod-inspect(1)](/docs/podman-pod-inspect.1.md) | Inspect a pod || -| [podman-pod-kill(1)](podman-pod-kill.1.md) | Kill the main process of each container in pod. || -| [podman-pod-ps(1)](/docs/podman-pod-ps.1.md) | List the pods on the system || -| [podman-pod-pause(1)](podman-pod-pause.1.md) | Pause one or more pods. 
|| -| [podman-pod-restart](/docs/podman-pod-restart.1.md) | Restart one or more pods || -| [podman-pod-rm(1)](/docs/podman-pod-rm.1.md) | Remove one or more pods || -| [podman-pod-start(1)](/docs/podman-pod-start.1.md) | Start one or more pods || -| [podman-pod-stats(1)](/docs/podman-pod-stats.1.md) | Display a live stream of one or more pods' resource usage statistics || || -| [podman-pod-stop(1)](/docs/podman-pod-stop.1.md) | Stop one or more pods || -| [podman-pod-top(1)](/docs/podman-pod-top.1.md) | Display the running processes of a pod || -| [podman-pod-unpause(1)](podman-pod-unpause.1.md) | Unpause one or more pods. || -| [podman-port(1)](/docs/podman-port.1.md) | List port mappings for running containers |[![...](/docs/play.png)]()| -| [podman-ps(1)](/docs/podman-ps.1.md) | Prints out information about containers |[![...](/docs/play.png)](https://asciinema.org/a/bbT41kac6CwZ5giESmZLIaTLR)| -| [podman-pull(1)](/docs/podman-pull.1.md) | Pull an image from a registry |[![...](/docs/play.png)](https://asciinema.org/a/lr4zfoynHJOUNu1KaXa1dwG2X)| -| [podman-push(1)](/docs/podman-push.1.md) | Push an image to a specified destination |[![...](/docs/play.png)](https://asciinema.org/a/133276)| -| [podman-restart](/docs/podman-restart.1.md) | Restarts one or more containers |[![...](/docs/play.png)](https://asciinema.org/a/jiqxJAxcVXw604xdzMLTkQvHM)| -| [podman-rm(1)](/docs/podman-rm.1.md) | Removes one or more containers |[![...](/docs/play.png)](https://asciinema.org/a/7EMk22WrfGtKWmgHJX9Nze1Qp)| -| [podman-rmi(1)](/docs/podman-rmi.1.md) | Removes one or more images |[![...](/docs/play.png)](https://asciinema.org/a/133799)| -| [podman-run(1)](/docs/podman-run.1.md) | Run a command in a container || -| [podman-runlabel(1)](/docs/podman-container-runlabel.1.md) | Executes the command of a container image's label || -| [podman-save(1)](/docs/podman-save.1.md) | Saves an image to an archive |[![...](/docs/play.png)](https://asciinema.org/a/kp8kOaexEhEa20P1KLZ3L5X4g)| -| [podman-search(1)](/docs/podman-search.1.md) | Search a registry for an image || -| [podman-start(1)](/docs/podman-start.1.md) | Starts one or more containers -| [podman-stats(1)](/docs/podman-stats.1.md) | Display a live stream of one or more containers' resource usage statistics|[![...](/docs/play.png)](https://asciinema.org/a/vfUPbAA5tsNWhsfB9p25T6xdr)| -| [podman-stop(1)](/docs/podman-stop.1.md) | Stops one or more running containers |[![...](/docs/play.png)](https://asciinema.org/a/KNRF9xVXeaeNTNjBQVogvZBcp)| -| [podman-system(1)](/docs/podman-system.1.md) | Manage podman || -| [podman-tag(1)](/docs/podman-tag.1.md) | Add an additional name to a local image |[![...](/docs/play.png)](https://asciinema.org/a/133803)| -| [podman-top(1)](/docs/podman-top.1.md) | Display the running processes of a container |[![...](/docs/play.png)](https://asciinema.org/a/5WCCi1LXwSuRbvaO9cBUYf3fk)| -| [podman-umount(1)](/docs/podman-umount.1.md) | Unmount a working container's root filesystem |[![...](/docs/play.png)](https://asciinema.org/a/MZPTWD5CVs3dMREkBxQBY9C5z)| -| [podman-unpause(1)](/docs/podman-unpause.1.md) | Unpause one or more running containers |[![...](/docs/play.png)](https://asciinema.org/a/141292)| -| [podman-varlink(1)](/docs/podman-varlink.1.md) | Run the varlink backend || -| [podman-version(1)](/docs/podman-version.1.md) | Display the version information |[![...](/docs/play.png)](https://asciinema.org/a/mfrn61pjZT9Fc8L4NbfdSqfgu)| -| [podman-volume(1)](/docs/podman-volume.1.md) | Manage Volumes || -| 
[podman-volume-create(1)](/docs/podman-volume-create.1.md) | Create a volume || -| [podman-volume-inspect(1)](/docs/podman-volume-inspect.1.md) | Get detailed information on one or more volumes || -| [podman-volume-ls(1)](/docs/podman-volume-ls.1.md) | List all the available volumes || -| [podman-volume-rm(1)](/docs/podman-volume-rm.1.md) | Remove one or more volumes || -| [podman-volume-prune(1)](/docs/podman-volume-prune.1.md) | Remove all unused volumes || -| [podman-wait(1)](/docs/podman-wait.1.md) | Wait on one or more containers to stop and print their exit codes |[![...](/docs/play.png)](https://asciinema.org/a/QNPGKdjWuPgI96GcfkycQtah0)| + +Command | Description | Demo +:----------------------------------------------------------------------- | :------------------------------------------------------------------------- | :-------------------------------------------------------------------------- +[podman(1)](/docs/podman.1.md) | Simple management tool for pods and images | +[podman-attach(1)](/docs/podman-attach.1.md) | Attach to a running container | +[podman-build(1)](/docs/podman-build.1.md) | Build an image using instructions from Dockerfiles | +[podman-commit(1)](/docs/podman-commit.1.md) | Create new image based on the changed container | +[podman-container(1)](/docs/podman-container.1.md) | Manage Containers | +[podman-container-checkpoint(1)](/docs/podman-container-checkpoint.1.md) | Checkpoints one or more running containers | +[podman-container-cleanup(1)](/docs/podman-container-cleanup.1.md) | Cleanup Container storage and networks | +[podman-container-exists(1)](/docs/podman-container-exists.1.md) | Check if an container exists in local storage | +[podman-container-prune(1)](/docs/podman-container-prune.1.md) | Remove all stopped containers | +[podman-container-refresh(1)](/docs/podman-container-refresh.1.md) | Refresh all containers state in database | +[podman-container-restore(1)](/docs/podman-container-restore.1.md) | Restores one or more running containers | +[podman-container-runlabel(1)](/docs/podman-container-runlabel.1.md) | Execute Image Label Method | +[podman-cp(1)](/docs/podman-cp.1.md) | Copy files/folders between a container and the local filesystem | +[podman-create(1)](/docs/podman-create.1.md) | Create a new container | +[podman-diff(1)](/docs/podman-diff.1.md) | Inspect changes on a container or image's filesystem | +[podman-events(1)](/docs/podman-events.1.md) | Monitor Podman events | +[podman-exec(1)](/docs/podman-exec.1.md) | Execute a command in a running container | +[podman-export(1)](/docs/podman-export.1.md) | Export container's filesystem contents as a tar archive | +[podman-generate(1)](/docs/podman-generate.1.md) | Generate structured output based on Podman containers and pods | +[podman-history(1)](/docs/podman-history.1.md) | Shows the history of an image | +[podman-image(1)](/docs/podman-image.1.md) | Manage Images | +[podman-image-exists(1)](/docs/podman-image-exists.1.md) | Check if an image exists in local storage | +[podman-image-prune(1)](/docs/podman-image-prune.1.md) | Remove all unused images | +[podman-image-sign(1)](/docs/podman-image-sign.1.md) | Create a signature for an image | +[podman-image-trust(1)](/docs/podman-image-trust.1.md) | Manage container registry image trust policy | +[podman-images(1)](/docs/podman-images.1.md) | List images in local storage | [![...](/docs/play.png)](https://asciinema.org/a/133649) +[podman-import(1)](/docs/podman-import.1.md) | Import a tarball and save it as a filesystem image | 
+[podman-info(1)](/docs/podman-info.1.md) | Display system information | +[podman-inspect(1)](/docs/podman-inspect.1.md) | Display the configuration of a container or image | [![...](/docs/play.png)](https://asciinema.org/a/133418) +[podman-kill(1)](/docs/podman-kill.1.md) | Kill the main process in one or more running containers | +[podman-load(1)](/docs/podman-load.1.md) | Load an image from a container image archive | +[podman-login(1)](/docs/podman-login.1.md) | Login to a container registry | +[podman-logout(1)](/docs/podman-logout.1.md) | Logout of a container registry | +[podman-logs(1)](/docs/podman-logs.1.md) | Display the logs of a container | +[podman-mount(1)](/docs/podman-mount.1.md) | Mount a working container's root filesystem | +[podman-pause(1)](/docs/podman-pause.1.md) | Pause one or more running containers | [![...](/docs/play.png)](https://asciinema.org/a/141292) +[podman-play(1)](/docs/podman-play.1.md) | Play pods and containers based on a structured input file | +[podman-pod(1)](/docs/podman-pod.1.md) | Simple management tool for groups of containers, called pods | +[podman-pod-create(1)](/docs/podman-pod-create.1.md) | Create a new pod | +[podman-pod-inspect(1)](/docs/podman-pod-inspect.1.md) | Inspect a pod | +[podman-pod-kill(1)](podman-pod-kill.1.md) | Kill the main process of each container in pod. | +[podman-pod-ps(1)](/docs/podman-pod-ps.1.md) | List the pods on the system | +[podman-pod-pause(1)](podman-pod-pause.1.md) | Pause one or more pods. | +[podman-pod-restart](/docs/podman-pod-restart.1.md) | Restart one or more pods | +[podman-pod-rm(1)](/docs/podman-pod-rm.1.md) | Remove one or more pods | +[podman-pod-start(1)](/docs/podman-pod-start.1.md) | Start one or more pods | +[podman-pod-stats(1)](/docs/podman-pod-stats.1.md) | Display a live stream of one or more pods' resource usage statistics | | | +[podman-pod-stop(1)](/docs/podman-pod-stop.1.md) | Stop one or more pods | +[podman-pod-top(1)](/docs/podman-pod-top.1.md) | Display the running processes of a pod | +[podman-pod-unpause(1)](podman-pod-unpause.1.md) | Unpause one or more pods. 
| +[podman-port(1)](/docs/podman-port.1.md) | List port mappings for running containers | +[podman-ps(1)](/docs/podman-ps.1.md) | Prints out information about containers | +[podman-pull(1)](/docs/podman-pull.1.md) | Pull an image from a registry | +[podman-push(1)](/docs/podman-push.1.md) | Push an image to a specified destination | [![...](/docs/play.png)](https://asciinema.org/a/133276) +[podman-restart](/docs/podman-restart.1.md) | Restarts one or more containers | [![...](/docs/play.png)](https://asciinema.org/a/jiqxJAxcVXw604xdzMLTkQvHM) +[podman-rm(1)](/docs/podman-rm.1.md) | Removes one or more containers | +[podman-rmi(1)](/docs/podman-rmi.1.md) | Removes one or more images | +[podman-run(1)](/docs/podman-run.1.md) | Run a command in a container | +[podman-save(1)](/docs/podman-save.1.md) | Saves an image to an archive | +[podman-search(1)](/docs/podman-search.1.md) | Search a registry for an image | +[podman-start(1)](/docs/podman-start.1.md) | Starts one or more containers | +[podman-stats(1)](/docs/podman-stats.1.md) | Display a live stream of one or more containers' resource usage statistics | +[podman-stop(1)](/docs/podman-stop.1.md) | Stops one or more running containers | +[podman-system(1)](/docs/podman-system.1.md) | Manage podman | +[podman-tag(1)](/docs/podman-tag.1.md) | Add an additional name to a local image | [![...](/docs/play.png)](https://asciinema.org/a/133803) +[podman-top(1)](/docs/podman-top.1.md) | Display the running processes of a container | +[podman-umount(1)](/docs/podman-umount.1.md) | Unmount a working container's root filesystem | +[podman-unpause(1)](/docs/podman-unpause.1.md) | Unpause one or more running containers | [![...](/docs/play.png)](https://asciinema.org/a/141292) +[podman-varlink(1)](/docs/podman-varlink.1.md) | Run the varlink backend | +[podman-version(1)](/docs/podman-version.1.md) | Display the version information | +[podman-volume(1)](/docs/podman-volume.1.md) | Manage Volumes | +[podman-volume-create(1)](/docs/podman-volume-create.1.md) | Create a volume | +[podman-volume-inspect(1)](/docs/podman-volume-inspect.1.md) | Get detailed information on one or more volumes | +[podman-volume-ls(1)](/docs/podman-volume-ls.1.md) | List all the available volumes | +[podman-volume-rm(1)](/docs/podman-volume-rm.1.md) | Remove one or more volumes | +[podman-volume-prune(1)](/docs/podman-volume-prune.1.md) | Remove all unused volumes | +[podman-wait(1)](/docs/podman-wait.1.md) | Wait on one or more containers to stop and print their exit codes diff --git a/completions/bash/podman b/completions/bash/podman index 1976bff44..dfa673481 100644 --- a/completions/bash/podman +++ b/completions/bash/podman @@ -999,6 +999,24 @@ _podman_container() { esac } +_podman_system_df() { + local options_with_args=" + --format + --verbose + " + local boolean_options=" + -h + --help + --verbose + -v + " + case "$cur" in + -*) + COMPREPLY=($(compgen -W "$boolean_options $options_with_args" -- "$cur")) + ;; + esac +} + _podman_system_info() { _podman_info } @@ -1029,6 +1047,7 @@ _podman_system() { -h " subcommands=" + df info prune " diff --git a/docs/podman-build.1.md b/docs/podman-build.1.md index 42fa9a359..ccc8bd900 100644 --- a/docs/podman-build.1.md +++ b/docs/podman-build.1.md @@ -288,13 +288,21 @@ process. **--pull** -Pull the image if it is not present. If this flag is disabled (with -*--pull=false*) and the image is not present, the image will not be pulled. 
+When the flag is enabled, attempt to pull the latest image from the registries +listed in registries.conf if a local image does not exist or the image is newer +than the one in storage. Raise an error if the image is not in any listed +registry and is not present locally. + +If the flag is disabled (with *--pull=false*), do not pull the image from the +registry, use only the local version. Raise an error if the image is not +present locally. + Defaults to *true*. **--pull-always** -Pull the image even if a version of the image is already present. +Pull the image from the first registry it is found in as listed in registries.conf. +Raise an error if not found in the registries, even if the image is present locally. **--quiet, -q** diff --git a/docs/podman-logs.1.md b/docs/podman-logs.1.md index 8cd6ad5e7..ce5d890ce 100644 --- a/docs/podman-logs.1.md +++ b/docs/podman-logs.1.md @@ -1,9 +1,11 @@ -% podman-logs(1) +% podman-container-logs(1) ## NAME -podman\-logs - Fetch the logs of one or more containers +podman\-container\-logs (podman\-logs) - Fetch the logs of one or more containers ## SYNOPSIS +**podman** **container** **logs** [*options*] *container* [*container...*] + **podman** **logs** [*options*] *container* [*container...*] ## DESCRIPTION @@ -15,7 +17,11 @@ any logs at the time you execute podman logs **--follow, -f** -Follow log output. Default is false +Follow log output. Default is false. + +Note: If you are following a container which is removed `podman container rm` +or removed on exit `podman run --rm ...`, then there is a chance the the log +file will be removed before `podman logs` reads the final content. **--latest, -l** @@ -86,7 +92,7 @@ podman logs --since 10m myserver ``` ## SEE ALSO -podman(1) +podman(1), podman-run(1), podman-container-rm(1) ## HISTORY February 2018, Updated by Brent Baude <bbaude@redhat.com> diff --git a/docs/podman-pod-create.1.md b/docs/podman-pod-create.1.md index 06f962849..d913083d1 100644 --- a/docs/podman-pod-create.1.md +++ b/docs/podman-pod-create.1.md @@ -81,6 +81,8 @@ $ podman pod create --name test $ podman pod create --infra=false $ podman pod create --infra-command /top + +$ podman pod create --publish 8443:443 ``` ## SEE ALSO diff --git a/docs/podman-ps.1.md b/docs/podman-ps.1.md index 811fbbc2f..685a52bda 100644 --- a/docs/podman-ps.1.md +++ b/docs/podman-ps.1.md @@ -100,6 +100,7 @@ Valid filters are listed below: | before | [ID] or [Name] Containers created before this container | | since | [ID] or [Name] Containers created since this container | | volume | [VolumeName] or [MountpointDestination] Volume mounted in container | +| health | [Status] healthy or unhealthy | **--help**, **-h** diff --git a/docs/podman-system-df.1.md b/docs/podman-system-df.1.md new file mode 100644 index 000000000..f33523dd6 --- /dev/null +++ b/docs/podman-system-df.1.md @@ -0,0 +1,57 @@ +% podman-system-df(1) podman + +## NAME +podman\-system\-df - Show podman disk usage + +## SYNOPSIS +**podman system df** [*options*] + +## DESCRIPTION +Show podman disk usage + +## OPTIONS +**--format**="" + +Pretty-print images using a Go template + +**-v, --verbose**[=false] +Show detailed information on space usage + +## EXAMPLE + +$ podman system df +TYPE TOTAL ACTIVE SIZE RECLAIMABLE +Images 6 2 281MB 168MB (59%) +Containers 3 1 0B 0B (0%) +Local Volumes 1 1 22B 0B (0%) + +$ podman system df -v +Images space usage: + +REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNQUE SIZE CONTAINERS +docker.io/library/alpine latest 5cb3aa00f899 2 weeks ago 5.79MB 0B 5.79MB 5 + 
+Containers space usage: + +CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES +073f7e62812d 5cb3 sleep 100 1 0B About an hourago exited zen_joliot +3f19f5bba242 5cb3 sleep 100 0 5.52kB 4 hoursago exited pedantic_archimedes +8cd89bf645cc 5cb3 ls foodir 0 58B 2 hoursago configured agitated_hamilton +a1d948a4b61d 5cb3 ls foodir 0 12B 2 hoursago exited laughing_wing +eafe3e3c5bb3 5cb3 sleep 10000 0 72B 2 hoursago running priceless_liskov + +Local Volumes space usage: + +VOLUME NAME LINKS SIZE +data 1 0B + +$ podman system df --format "{{.Type}}\t{{.Total}}" +Images 1 +Containers 5 +Local Volumes 1 + +## SEE ALSO +podman-system(1) + +# HISTORY +March 2019, Originally compiled by Qi Wang (qiwan at redhat dot com) diff --git a/docs/podman-system.1.md b/docs/podman-system.1.md index 6d87648e8..32b3efdd9 100644 --- a/docs/podman-system.1.md +++ b/docs/podman-system.1.md @@ -13,7 +13,8 @@ The system command allows you to manage the podman systems | Command | Man Page | Description | | ------- | --------------------------------------------------- | ---------------------------------------------------------------------------- | -| info | [podman-info(1)](podman-info.1.md) | Displays Podman related system information. | +| df | [podman-system-df(1)](podman-system-df.1.md) | Show podman disk usage. | +| info | [podman-system-info(1)](podman-info.1.md) | Displays Podman related system information. | | prune | [podman-system-prune(1)](podman-system-prune.1.md) | Remove all unused data | | renumber | [podman-system-renumber(1)](podman-system-renumber.1.md)| Migrate lock numbers to handle a change in maximum number of locks. | diff --git a/libpod/container_internal.go b/libpod/container_internal.go index 13e660dc3..7a90bc7d4 100644 --- a/libpod/container_internal.go +++ b/libpod/container_internal.go @@ -833,6 +833,12 @@ func (c *Container) init(ctx context.Context) error { if err := c.save(); err != nil { return err } + if c.config.HealthCheckConfig != nil { + if err := c.createTimer(); err != nil { + logrus.Error(err) + } + } + defer c.newContainerEvent(events.Init) return c.completeNetworkSetup() } @@ -956,6 +962,15 @@ func (c *Container) start() error { c.state.State = ContainerStateRunning + if c.config.HealthCheckConfig != nil { + if err := c.updateHealthStatus(HealthCheckStarting); err != nil { + logrus.Error(err) + } + if err := c.startTimer(); err != nil { + logrus.Error(err) + } + } + defer c.newContainerEvent(events.Start) return c.save() @@ -1123,6 +1138,13 @@ func (c *Container) cleanup(ctx context.Context) error { logrus.Debugf("Cleaning up container %s", c.ID()) + // Remove healthcheck unit/timer file if it execs + if c.config.HealthCheckConfig != nil { + if err := c.removeTimer(); err != nil { + logrus.Error(err) + } + } + // Clean up network namespace, if present if err := c.cleanupNetwork(); err != nil { lastError = err diff --git a/libpod/container_internal_linux.go b/libpod/container_internal_linux.go index 2a7808bdf..c6c9ceb0c 100644 --- a/libpod/container_internal_linux.go +++ b/libpod/container_internal_linux.go @@ -203,7 +203,8 @@ func (c *Container) generateSpec(ctx context.Context) (*spec.Spec, error) { } // Check if the spec file mounts contain the label Relabel flags z or Z. // If they do, relabel the source directory and then remove the option. 
- for _, m := range g.Mounts() { + for i := range g.Config.Mounts { + m := &g.Config.Mounts[i] var options []string for _, o := range m.Options { switch o { @@ -219,6 +220,13 @@ func (c *Container) generateSpec(ctx context.Context) (*spec.Spec, error) { } } m.Options = options + + // If we are using a user namespace, we will use an intermediate + // directory to bind mount volumes + if c.state.UserNSRoot != "" && strings.HasPrefix(m.Source, c.runtime.config.VolumePath) { + newSourceDir := filepath.Join(c.state.UserNSRoot, "volumes") + m.Source = strings.Replace(m.Source, c.runtime.config.VolumePath, newSourceDir, 1) + } } g.SetProcessSelinuxLabel(c.ProcessLabel()) diff --git a/libpod/events/events.go b/libpod/events/events.go index 48bbbb00e..7db36653e 100644 --- a/libpod/events/events.go +++ b/libpod/events/events.go @@ -219,6 +219,8 @@ func StringToStatus(name string) (Status, error) { return Create, nil case Exec.String(): return Exec, nil + case Exited.String(): + return Exited, nil case Export.String(): return Export, nil case History.String(): diff --git a/libpod/healthcheck.go b/libpod/healthcheck.go index d2c0ea0fb..d8f56860b 100644 --- a/libpod/healthcheck.go +++ b/libpod/healthcheck.go @@ -3,13 +3,16 @@ package libpod import ( "bufio" "bytes" + "fmt" "io/ioutil" "os" + "os/exec" "path/filepath" "strings" "time" "github.com/containers/libpod/pkg/inspect" + "github.com/coreos/go-systemd/dbus" "github.com/pkg/errors" "github.com/sirupsen/logrus" ) @@ -47,6 +50,10 @@ const ( HealthCheckHealthy string = "healthy" // HealthCheckUnhealthy describes an unhealthy container HealthCheckUnhealthy string = "unhealthy" + // HealthCheckStarting describes the time between when the container starts + // and the start-period (time allowed for the container to start and application + // to be running) expires. 
+ HealthCheckStarting string = "starting" ) // hcWriteCloser allows us to use bufio as a WriteCloser @@ -68,17 +75,18 @@ func (r *Runtime) HealthCheck(name string) (HealthCheckStatus, error) { } hcStatus, err := checkHealthCheckCanBeRun(container) if err == nil { - return container.RunHealthCheck() + return container.runHealthCheck() } return hcStatus, err } -// RunHealthCheck runs the health check as defined by the container -func (c *Container) RunHealthCheck() (HealthCheckStatus, error) { +// runHealthCheck runs the health check as defined by the container +func (c *Container) runHealthCheck() (HealthCheckStatus, error) { var ( - newCommand []string - returnCode int - capture bytes.Buffer + newCommand []string + returnCode int + capture bytes.Buffer + inStartPeriod bool ) hcStatus, err := checkHealthCheckCanBeRun(c) if err != nil { @@ -111,12 +119,28 @@ func (c *Container) RunHealthCheck() (HealthCheckStatus, error) { returnCode = 1 } timeEnd := time.Now() + if c.HealthCheckConfig().StartPeriod > 0 { + // there is a start-period we need to honor; we add startPeriod to container start time + startPeriodTime := c.state.StartedTime.Add(c.HealthCheckConfig().StartPeriod) + if timeStart.Before(startPeriodTime) { + // we are still in the start period, flip the inStartPeriod bool + inStartPeriod = true + logrus.Debugf("healthcheck for %s being run in start-period", c.ID()) + } + } + eventLog := capture.String() if len(eventLog) > MaxHealthCheckLogLength { eventLog = eventLog[:MaxHealthCheckLogLength] } + + if timeEnd.Sub(timeStart) > c.HealthCheckConfig().Timeout { + returnCode = -1 + hcResult = HealthCheckFailure + hcErr = errors.Errorf("healthcheck command exceeded timeout of %s", c.HealthCheckConfig().Timeout.String()) + } hcl := newHealthCheckLog(timeStart, timeEnd, returnCode, eventLog) - if err := c.updateHealthCheckLog(hcl); err != nil { + if err := c.updateHealthCheckLog(hcl, inStartPeriod); err != nil { return hcResult, errors.Wrapf(err, "unable to update health check log %s for %s", c.healthCheckLogPath(), c.ID()) } return hcResult, hcErr @@ -145,8 +169,23 @@ func newHealthCheckLog(start, end time.Time, exitCode int, log string) inspect.H } } +// updatedHealthCheckStatus updates the health status of the container +// in the healthcheck log +func (c *Container) updateHealthStatus(status string) error { + healthCheck, err := c.GetHealthCheckLog() + if err != nil { + return err + } + healthCheck.Status = status + newResults, err := json.Marshal(healthCheck) + if err != nil { + return errors.Wrapf(err, "unable to marshall healthchecks for writing status") + } + return ioutil.WriteFile(c.healthCheckLogPath(), newResults, 0700) +} + // UpdateHealthCheckLog parses the health check results and writes the log -func (c *Container) updateHealthCheckLog(hcl inspect.HealthCheckLog) error { +func (c *Container) updateHealthCheckLog(hcl inspect.HealthCheckLog, inStartPeriod bool) error { healthCheck, err := c.GetHealthCheckLog() if err != nil { return err @@ -159,11 +198,13 @@ func (c *Container) updateHealthCheckLog(hcl inspect.HealthCheckLog) error { if len(healthCheck.Status) < 1 { healthCheck.Status = HealthCheckHealthy } - // increment failing streak - healthCheck.FailingStreak = healthCheck.FailingStreak + 1 - // if failing streak > retries, then status to unhealthy - if int(healthCheck.FailingStreak) > c.HealthCheckConfig().Retries { - healthCheck.Status = HealthCheckUnhealthy + if !inStartPeriod { + // increment failing streak + healthCheck.FailingStreak = healthCheck.FailingStreak + 1 + // 
if failing streak > retries, then status to unhealthy + if int(healthCheck.FailingStreak) >= c.HealthCheckConfig().Retries { + healthCheck.Status = HealthCheckUnhealthy + } } } healthCheck.Log = append(healthCheck.Log, hcl) @@ -199,3 +240,81 @@ func (c *Container) GetHealthCheckLog() (inspect.HealthCheckResults, error) { } return healthCheck, nil } + +// createTimer systemd timers for healthchecks of a container +func (c *Container) createTimer() error { + if c.disableHealthCheckSystemd() { + return nil + } + podman, err := os.Executable() + if err != nil { + return errors.Wrapf(err, "failed to get path for podman for a health check timer") + } + + var cmd = []string{"--unit", fmt.Sprintf("%s", c.ID()), fmt.Sprintf("--on-unit-inactive=%s", c.HealthCheckConfig().Interval.String()), "--timer-property=AccuracySec=1s", podman, "healthcheck", "run", c.ID()} + + conn, err := dbus.NewSystemdConnection() + if err != nil { + return errors.Wrapf(err, "unable to get systemd connection to add healthchecks") + } + conn.Close() + logrus.Debugf("creating systemd-transient files: %s %s", "systemd-run", cmd) + systemdRun := exec.Command("systemd-run", cmd...) + _, err = systemdRun.CombinedOutput() + if err != nil { + return err + } + return nil +} + +// startTimer starts a systemd timer for the healthchecks +func (c *Container) startTimer() error { + if c.disableHealthCheckSystemd() { + return nil + } + conn, err := dbus.NewSystemdConnection() + if err != nil { + return errors.Wrapf(err, "unable to get systemd connection to start healthchecks") + } + defer conn.Close() + _, err = conn.StartUnit(fmt.Sprintf("%s.service", c.ID()), "fail", nil) + return err +} + +// removeTimer removes the systemd timer and unit files +// for the container +func (c *Container) removeTimer() error { + if c.disableHealthCheckSystemd() { + return nil + } + conn, err := dbus.NewSystemdConnection() + if err != nil { + return errors.Wrapf(err, "unable to get systemd connection to remove healthchecks") + } + defer conn.Close() + serviceFile := fmt.Sprintf("%s.timer", c.ID()) + _, err = conn.StopUnit(serviceFile, "fail", nil) + return err +} + +// HealthCheckStatus returns the current state of a container with a healthcheck +func (c *Container) HealthCheckStatus() (string, error) { + if !c.HasHealthCheck() { + return "", errors.Errorf("container %s has no defined healthcheck", c.ID()) + } + results, err := c.GetHealthCheckLog() + if err != nil { + return "", errors.Wrapf(err, "unable to get healthcheck log for %s", c.ID()) + } + return results.Status, nil +} + +func (c *Container) disableHealthCheckSystemd() bool { + if os.Getenv("DISABLE_HC_SYSTEMD") == "true" { + return true + } + if c.config.HealthCheckConfig.Interval == 0 { + return true + } + return false +} diff --git a/libpod/oci_linux.go b/libpod/oci_linux.go index 2737a641e..f85c5ee62 100644 --- a/libpod/oci_linux.go +++ b/libpod/oci_linux.go @@ -106,6 +106,23 @@ func (r *OCIRuntime) createContainer(ctr *Container, cgroupParent string, restor if err != nil { return } + + if ctr.state.UserNSRoot != "" { + _, err := os.Stat(ctr.runtime.config.VolumePath) + if err != nil && !os.IsNotExist(err) { + return + } + if err == nil { + volumesTarget := filepath.Join(ctr.state.UserNSRoot, "volumes") + if err := idtools.MkdirAs(volumesTarget, 0700, ctr.RootUID(), ctr.RootGID()); err != nil { + return + } + if err = unix.Mount(ctr.runtime.config.VolumePath, volumesTarget, "none", unix.MS_BIND, ""); err != nil { + return + } + } + } + err = r.createOCIContainer(ctr, cgroupParent, 
restoreOptions) }() wg.Wait() diff --git a/libpod/runtime_ctr.go b/libpod/runtime_ctr.go index 3b74a65dd..f23dc86dd 100644 --- a/libpod/runtime_ctr.go +++ b/libpod/runtime_ctr.go @@ -186,8 +186,11 @@ func (r *Runtime) newContainer(ctx context.Context, rSpec *spec.Spec, options .. return nil, errors.Wrapf(err, "error creating named volume %q", vol.Source) } ctr.config.Spec.Mounts[i].Source = newVol.MountPoint() + if err := os.Chown(ctr.config.Spec.Mounts[i].Source, ctr.RootUID(), ctr.RootGID()); err != nil { + return nil, errors.Wrapf(err, "cannot chown %q to %d:%d", ctr.config.Spec.Mounts[i].Source, ctr.RootUID(), ctr.RootGID()) + } if err := ctr.copyWithTarFromImage(ctr.config.Spec.Mounts[i].Destination, ctr.config.Spec.Mounts[i].Source); err != nil && !os.IsNotExist(err) { - return nil, errors.Wrapf(err, "Failed to copy content into new volume mount %q", vol.Source) + return nil, errors.Wrapf(err, "failed to copy content into new volume mount %q", vol.Source) } continue } diff --git a/pkg/adapter/containers_remote.go b/pkg/adapter/containers_remote.go index a8146567a..2982d6cbb 100644 --- a/pkg/adapter/containers_remote.go +++ b/pkg/adapter/containers_remote.go @@ -22,7 +22,7 @@ import ( // Inspect returns an inspect struct from varlink func (c *Container) Inspect(size bool) (*inspect.ContainerInspectData, error) { - reply, err := iopodman.ContainerInspectData().Call(c.Runtime.Conn, c.ID()) + reply, err := iopodman.ContainerInspectData().Call(c.Runtime.Conn, c.ID(), size) if err != nil { return nil, err } diff --git a/pkg/adapter/runtime_remote.go b/pkg/adapter/runtime_remote.go index 01f774dbd..6c53d0c62 100644 --- a/pkg/adapter/runtime_remote.go +++ b/pkg/adapter/runtime_remote.go @@ -763,7 +763,11 @@ func (r *LocalRuntime) JoinOrCreateRootlessPod(pod *Pod) (bool, int, error) { // Events monitors libpod/podman events over a varlink connection func (r *LocalRuntime) Events(c *cliconfig.EventValues) error { - reply, err := iopodman.GetEvents().Send(r.Conn, uint64(varlink.More), c.Filter, c.Since, c.Stream, c.Until) + var more uint64 + if c.Stream { + more = uint64(varlink.More) + } + reply, err := iopodman.GetEvents().Send(r.Conn, more, c.Filter, c.Since, c.Until) if err != nil { return errors.Wrapf(err, "unable to obtain events") } diff --git a/pkg/varlinkapi/containers.go b/pkg/varlinkapi/containers.go index 3185ba0e9..7a6ae3507 100644 --- a/pkg/varlinkapi/containers.go +++ b/pkg/varlinkapi/containers.go @@ -519,12 +519,12 @@ func (i *LibpodAPI) ContainerArtifacts(call iopodman.VarlinkCall, name, artifact } // ContainerInspectData returns the inspect data of a container in string format -func (i *LibpodAPI) ContainerInspectData(call iopodman.VarlinkCall, name string) error { +func (i *LibpodAPI) ContainerInspectData(call iopodman.VarlinkCall, name string, size bool) error { ctr, err := i.Runtime.LookupContainer(name) if err != nil { return call.ReplyContainerNotFound(name, err.Error()) } - data, err := ctr.Inspect(true) + data, err := ctr.Inspect(size) if err != nil { return call.ReplyErrorOccurred("unable to inspect container") } diff --git a/pkg/varlinkapi/events.go b/pkg/varlinkapi/events.go index d3fe3d65f..47c628ead 100644 --- a/pkg/varlinkapi/events.go +++ b/pkg/varlinkapi/events.go @@ -10,13 +10,15 @@ import ( ) // GetEvents is a remote endpoint to get events from the event log -func (i *LibpodAPI) GetEvents(call iopodman.VarlinkCall, filter []string, since string, stream bool, until string) error { +func (i *LibpodAPI) GetEvents(call iopodman.VarlinkCall, filter []string, since 
string, until string) error { var ( fromStart bool eventsError error event *events.Event + stream bool ) if call.WantsMore() { + stream = true call.Continues = true } filters, err := shared.GenerateEventOptions(filter, since, until) @@ -52,5 +54,5 @@ func (i *LibpodAPI) GetEvents(call iopodman.VarlinkCall, filter []string, since break } } - return call.ReplyGetEvents(iopodman.Event{}) + return nil } diff --git a/test/e2e/attach_test.go b/test/e2e/attach_test.go index c728f482d..a843fe7ff 100644 --- a/test/e2e/attach_test.go +++ b/test/e2e/attach_test.go @@ -4,6 +4,8 @@ package integration import ( "os" + "syscall" + "time" . "github.com/containers/libpod/test/utils" . "github.com/onsi/ginkgo" @@ -73,4 +75,44 @@ var _ = Describe("Podman attach", func() { results.WaitWithDefaultTimeout() Expect(results.ExitCode()).To(Equal(125)) }) + + It("podman attach to a running container", func() { + session := podmanTest.Podman([]string{"run", "-d", "--name", "test", ALPINE, "/bin/sh", "-c", "while true; do echo test; sleep 1; done"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + results := podmanTest.Podman([]string{"attach", "test"}) + time.Sleep(2 * time.Second) + results.Signal(syscall.SIGTSTP) + Expect(results.OutputToString()).To(ContainSubstring("test")) + Expect(podmanTest.NumberOfContainersRunning()).To(Equal(1)) + }) + It("podman attach to the latest container", func() { + session := podmanTest.Podman([]string{"run", "-d", "--name", "test1", ALPINE, "/bin/sh", "-c", "while true; do echo test1; sleep 1; done"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + session = podmanTest.Podman([]string{"run", "-d", "--name", "test2", ALPINE, "/bin/sh", "-c", "while true; do echo test2; sleep 1; done"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + results := podmanTest.Podman([]string{"attach", "-l"}) + time.Sleep(2 * time.Second) + results.Signal(syscall.SIGTSTP) + Expect(results.OutputToString()).To(ContainSubstring("test2")) + Expect(podmanTest.NumberOfContainersRunning()).To(Equal(2)) + }) + + It("podman attach to a container with --sig-proxy set to false", func() { + session := podmanTest.Podman([]string{"run", "-d", "--name", "test", ALPINE, "/bin/sh", "-c", "while true; do echo test; sleep 1; done"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + results := podmanTest.Podman([]string{"attach", "--sig-proxy=false", "test"}) + time.Sleep(2 * time.Second) + results.Signal(syscall.SIGTERM) + results.WaitWithDefaultTimeout() + Expect(results.OutputToString()).To(ContainSubstring("test")) + Expect(podmanTest.NumberOfContainersRunning()).To(Equal(1)) + }) }) diff --git a/test/e2e/common_test.go b/test/e2e/common_test.go index 54b2cbec2..b20b3b37e 100644 --- a/test/e2e/common_test.go +++ b/test/e2e/common_test.go @@ -239,7 +239,7 @@ func PodmanTestCreateUtil(tempDir string, remote bool) *PodmanTestIntegration { ociRuntime = "/usr/bin/runc" } } - + os.Setenv("DISABLE_HC_SYSTEMD", "true") CNIConfigDir := "/etc/cni/net.d" p := &PodmanTestIntegration{ @@ -314,6 +314,14 @@ func (s *PodmanSessionIntegration) InspectImageJSON() []inspect.ImageData { return i } +// InspectContainer returns a container's inspect data in JSON format +func (p *PodmanTestIntegration) InspectContainer(name string) []inspect.ContainerData { + cmd := []string{"inspect", name} + session := p.Podman(cmd) + session.WaitWithDefaultTimeout() + return session.InspectContainerToJSON() +} + func 
processTestResult(f GinkgoTestDescription) { tr := testResult{length: f.Duration.Seconds(), name: f.TestText} testResults = append(testResults, tr) diff --git a/test/e2e/healthcheck_run_test.go b/test/e2e/healthcheck_run_test.go index f178e8ad5..cd2365ce7 100644 --- a/test/e2e/healthcheck_run_test.go +++ b/test/e2e/healthcheck_run_test.go @@ -83,4 +83,100 @@ var _ = Describe("Podman healthcheck run", func() { hc.WaitWithDefaultTimeout() Expect(hc.ExitCode()).To(Equal(125)) }) + + It("podman healthcheck should be starting", func() { + session := podmanTest.Podman([]string{"run", "-dt", "--name", "hc", "--healthcheck-retries", "2", "--healthcheck-command", "\"CMD-SHELL ls /foo || exit 1\"", ALPINE, "top"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + inspect := podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("starting")) + }) + + It("podman healthcheck failed checks in start-period should not change status", func() { + session := podmanTest.Podman([]string{"run", "-dt", "--name", "hc", "--healthcheck-start-period", "2m", "--healthcheck-retries", "2", "--healthcheck-command", "\"CMD-SHELL ls /foo || exit 1\"", ALPINE, "top"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + hc := podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + hc = podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + hc = podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + inspect := podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("starting")) + }) + + It("podman healthcheck failed checks must reach retries before unhealthy ", func() { + session := podmanTest.Podman([]string{"run", "-dt", "--name", "hc", "--healthcheck-retries", "2", "--healthcheck-command", "\"CMD-SHELL ls /foo || exit 1\"", ALPINE, "top"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + hc := podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + inspect := podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("starting")) + + hc = podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + inspect = podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("unhealthy")) + + }) + + It("podman healthcheck good check results in healthy even in start-period", func() { + SkipIfRootless() + session := podmanTest.Podman([]string{"run", "-dt", "--name", "hc", "--healthcheck-start-period", "2m", "--healthcheck-retries", "2", "--healthcheck-command", "\"CMD-SHELL\" \"ls\" \"||\" \"exit\" \"1\"", ALPINE, "top"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + hc := podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(0)) + + inspect := podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("healthy")) + }) + + It("podman healthcheck single healthy result changes failed to healthy", func() { + SkipIfRootless() + session := podmanTest.Podman([]string{"run", "-dt", "--name", "hc", "--healthcheck-retries", "2", "--healthcheck-command", 
"\"CMD-SHELL\" \"ls\" \"/foo\" \"||\" \"exit\" \"1\"", ALPINE, "top"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + hc := podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + inspect := podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("starting")) + + hc = podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(1)) + + inspect = podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("unhealthy")) + + foo := podmanTest.Podman([]string{"exec", "hc", "touch", "/foo"}) + foo.WaitWithDefaultTimeout() + Expect(foo.ExitCode()).To(BeZero()) + + hc = podmanTest.Podman([]string{"healthcheck", "run", "hc"}) + hc.WaitWithDefaultTimeout() + Expect(hc.ExitCode()).To(Equal(0)) + + inspect = podmanTest.InspectContainer("hc") + Expect(inspect[0].State.Healthcheck.Status).To(Equal("healthy")) + }) }) diff --git a/test/e2e/run_userns_test.go b/test/e2e/run_userns_test.go index c6c94d2f6..5c38a8950 100644 --- a/test/e2e/run_userns_test.go +++ b/test/e2e/run_userns_test.go @@ -69,6 +69,21 @@ var _ = Describe("Podman UserNS support", func() { Expect(ok).To(BeTrue()) }) + It("podman uidmapping and gidmapping with a volume", func() { + if os.Getenv("SKIP_USERNS") != "" { + Skip("Skip userns tests.") + } + if _, err := os.Stat("/proc/self/uid_map"); err != nil { + Skip("User namespaces not supported.") + } + + session := podmanTest.Podman([]string{"run", "--uidmap=0:1:70000", "--gidmap=0:20000:70000", "-v", "my-foo-volume:/foo:Z", "busybox", "echo", "hello"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + ok, _ := session.GrepString("hello") + Expect(ok).To(BeTrue()) + }) + It("podman uidmapping and gidmapping --net=host", func() { if os.Getenv("SKIP_USERNS") != "" { Skip("Skip userns tests.") diff --git a/test/e2e/system_df_test.go b/test/e2e/system_df_test.go new file mode 100644 index 000000000..92787f17c --- /dev/null +++ b/test/e2e/system_df_test.go @@ -0,0 +1,62 @@ +// +build !remoteclient + +package integration + +import ( + "fmt" + "os" + "strings" + + . "github.com/containers/libpod/test/utils" + . "github.com/onsi/ginkgo" + . 
"github.com/onsi/gomega" +) + +var _ = Describe("podman system df", func() { + var ( + tempdir string + err error + podmanTest *PodmanTestIntegration + ) + + BeforeEach(func() { + tempdir, err = CreateTempDirInTempDir() + if err != nil { + os.Exit(1) + } + podmanTest = PodmanTestCreate(tempdir) + podmanTest.RestoreAllArtifacts() + }) + + AfterEach(func() { + podmanTest.Cleanup() + f := CurrentGinkgoTestDescription() + timedResult := fmt.Sprintf("Test: %s completed in %f seconds", f.TestText, f.Duration.Seconds()) + GinkgoWriter.Write([]byte(timedResult)) + }) + + It("podman system df", func() { + session := podmanTest.Podman([]string{"create", ALPINE}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + session = podmanTest.Podman([]string{"volume", "create", "data"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + session = podmanTest.Podman([]string{"create", "-v", "data:/data", "--name", "container1", "busybox"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + + session = podmanTest.Podman([]string{"system", "df"}) + session.WaitWithDefaultTimeout() + Expect(session.ExitCode()).To(Equal(0)) + Expect(len(session.OutputToStringArray())).To(Equal(4)) + images := strings.Fields(session.OutputToStringArray()[1]) + containers := strings.Fields(session.OutputToStringArray()[2]) + volumes := strings.Fields(session.OutputToStringArray()[3]) + Expect(images[1]).To(Equal("2")) + Expect(containers[1]).To(Equal("2")) + Expect(volumes[2]).To(Equal("1")) + }) +}) diff --git a/transfer.md b/transfer.md index 998a0a9e7..df91cdf21 100644 --- a/transfer.md +++ b/transfer.md @@ -78,6 +78,7 @@ There are other equivalents for these tools | `docker volume prune` | [`podman volume prune`](./docs/podman-volume-prune.1.md) | | `docker volume rm` | [`podman volume rm`](./docs/podman-volume-rm.1.md) | | `docker system` | [`podman system`](./docs/podman-system.1.md) | +| `docker system df` | [`podman system df`](./docs/podman-system-df.1.md) | | `docker system prune` | [`podman system prune`](./docs/podman-system-prune.1.md) | | `docker system info` | [`podman system info`](./docs/podman-system-info.1.md) | | `docker wait` | [`podman wait`](./docs/podman-wait.1.md) | diff --git a/troubleshooting.md b/troubleshooting.md index 882afef0c..08d79723a 100644 --- a/troubleshooting.md +++ b/troubleshooting.md @@ -210,18 +210,17 @@ cannot find newuidmap: exec: "newuidmap": executable file not found in $PATH Install a version of shadow-utils that includes these executables. Note RHEL7 and Centos 7 will not have support for this until RHEL7.7 is released. -### 10) podman fails to run in user namespace because /etc/subuid is not properly populated. +### 10) rootless setup user: invalid argument Rootless podman requires the user running it to have a range of UIDs listed in /etc/subuid and /etc/subgid. #### Symptom -If you are running podman or buildah as a user, you get an error complaining about -a missing subuid ranges in /etc/subuid. +An user, either via --user or through the default configured for the image, is not mapped inside the namespace. ``` -podman run -ti fedora sh -No subuid ranges found for user "johndoe" in /etc/subuid +podman run --rm -ti --user 1000000 alpine echo hi +Error: container create failed: container_linux.go:344: starting container process caused "setup user: invalid argument" ``` #### Solution |