path: root/pkg/api/handlers/generic
author     Jhon Honce <jhonce@redhat.com>    2019-11-01 13:03:34 -0700
committer  baude <bbaude@redhat.com>         2020-01-10 09:41:39 -0600
commit     d924494f561bb878a2b3a7ce438d87ecb934b5fb (patch)
tree       29983e7411c8108e74e4286b90a535a114dee755 /pkg/api/handlers/generic
parent     6ed88e047579bd2d1eac99a6089cc617f0c4773d (diff)
Initial commit on compatible API
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Create service command

Use cd cmd/service && go build .
$ systemd-socket-activate -l 8081 cmd/service/service &
$ curl http://localhost:8081/v1.24/images/json

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Correct Makefile

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Two more stragglers

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Report errors back as http headers

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Split out handlers, updated output

Output aligned to docker structures

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactored routing, added more endpoints and types

* Encapsulated all the routing information in the handler_* files.
* Added more serviceapi/types, including podman additions. See Info

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Cleaned up code, implemented info content

* Move Content-Type check into serviceHandler
* Custom 404 handler showing the url, mostly for debugging
* Refactored images: better method names and explicit http codes
* Added content to /info
* Added podman fields to Info struct
* Added Container struct

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add a bunch of endpoints

containers: stop, pause, unpause, wait, rm
images: tag, rmi, create (pull only)

Signed-off-by: baude <bbaude@redhat.com>

Add even more handlers

* Add serviceapi/Error() to improve error handling
* Better support for API return payloads
* Renamed unimplemented to unsupported; these are generic endpoints we don't intend to ever support. Swarm broken out since it uses different HTTP codes to signal that the node is not in a swarm.
* Added more types
* API Version broken out so it can be validated in the future

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactor to introduce ServiceWriter

Signed-off-by: Jhon Honce <jhonce@redhat.com>

populate pods endpoints

/libpod/pods/.. exists, kill, pause, prune, restart, remove, start, stop, unpause

Signed-off-by: baude <bbaude@redhat.com>

Add components to Version, fix Error body

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add images pull output, fix swarm routes

* docker-py tests/integration/api_client_test.py pass 100%
* docker-py tests/integration/api_image_test.py pass 4/16
  + Test failures include services podman does not support

Signed-off-by: Jhon Honce <jhonce@redhat.com>

pods endpoint submission 2

add create and others; only top and stats is left.

Signed-off-by: baude <bbaude@redhat.com>

Update pull image to work from empty registry

Signed-off-by: Jhon Honce <jhonce@redhat.com>

pod create and container create

first pass at pod and container create. the container create does not quite work yet but it is very close. pod create needs a partial rewrite. also broken off the DELETE (rm/rmi) to specific handler funcs.

Signed-off-by: baude <bbaude@redhat.com>

Add docker-py demos, GET .../containers/json

* Update serviceapi/types to reflect libpod not podman
* Refactored removeImage() to provide non-streaming return

Signed-off-by: Jhon Honce <jhonce@redhat.com>

create container part2

finished minimal config needed for create container. started demo.py for upcoming talk

Signed-off-by: baude <bbaude@redhat.com>

Stop server after honoring request

* Remove casting for method calls
* Improve WriteResponse()
* Update Container API type to match docker API

Signed-off-by: Jhon Honce <jhonce@redhat.com>

fix namespace assumptions

cleaned up namespace issues with libpod.

Signed-off-by: baude <bbaude@redhat.com>

wip

Signed-off-by: baude <bbaude@redhat.com>

Add sliding window when shutting down server

* Added a Timeout rather than closing down service on each call
* Added gorilla/schema dependency for Decode'ing query parameters
* Improved error handling
* Container logs returned and multiplexed for stdout and stderr
  * .../containers/{name}/logs?stdout=True&stderr=True
* Container stats
  * .../containers/{name}/stats

Signed-off-by: Jhon Honce <jhonce@redhat.com>

Improve error handling

* Add check for at least one std stream required for /containers/{id}/logs
* Add check for state in /containers/{id}/top
* Fill in more fields for /info
* Fixed error checking in service start code

Signed-off-by: Jhon Honce <jhonce@redhat.com>

get rest of image tests for pass

Signed-off-by: baude <bbaude@redhat.com>

linting our content

Signed-off-by: baude <bbaude@redhat.com>

more linting

Signed-off-by: baude <bbaude@redhat.com>

more linting

Signed-off-by: baude <bbaude@redhat.com>

pruning

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 pods

migrate from using args in the url to using a json struct in body for pod create.

Signed-off-by: baude <bbaude@redhat.com>

fix handler_images prune

prune's api changed slightly to deal with filters.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]enabled base container create tests

enabling the base container create tests which allow us to get more into the stop, kill, etc tests. many new tests now pass.

Signed-off-by: baude <bbaude@redhat.com>

serviceapi errors: append error message to API message

I dearly hope this is not breaking any other tests but debugging "Internal Server Error" is not helpful to any user. In case it breaks tests, we can revert the commit - that's why it's a small one.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

serviceAPI: add containers/prune endpoint

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add `service` make target

Also remove the non-functional sub-Makefile.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add make targets for testing the service

* `sudo make run-service` for running the service.
* `DOCKERPY_TEST="tests/integration/api_container_test.py::ListContainersTest" \
   make run-docker-py-tests` for running a specific test. Run all tests by leaving the env variable empty.

Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

Split handlers and server packages

The files were split to help contain bloat. The api/server package will contain all code related to the functioning of the server, while api/handlers will have all the code related to implementing the endpoints. api/server/register_* will contain the methods for registering endpoints. Additionally, they will have the comments for generating the swagger spec file. See api/handlers/version.go for a small example handler; api/handlers/containers.go contains much more complex handlers.

Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]enabled more tests

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]libpod endpoints

small refactor for libpod inclusion and began adding endpoints.

Signed-off-by: baude <bbaude@redhat.com>

Implement /build and /events

* Include crypto libraries for future ssh work

Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]more image implementations

convert from using for to query structs among other changes including new endpoints.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add bindings for golang

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add volume endpoints for libpod

create, inspect, ls, prune, and rm

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 healthcheck enablement

wire up container healthchecks for the api.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add mount endpoints

via the api, allow ability to mount a container and list container mounts.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add search endpoint

add search endpoint with golang bindings

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]more apiv2 development

misc population of methods, etc

Signed-off-by: baude <bbaude@redhat.com>

rebase cleanup and epoch reset

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add more network endpoints

also, add some initial error handling and convenience functions for standard endpoints.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]use helper funcs for bindings

use the methods developed to make writing bindings less duplicative and easier to use.

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add return info for prereview

begin to add return info and status codes for errors so that we can review the apiv2

Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]first pass at adding swagger docs for api

Signed-off-by: baude <bbaude@redhat.com>
Diffstat (limited to 'pkg/api/handlers/generic')
-rw-r--r--  pkg/api/handlers/generic/containers.go          306
-rw-r--r--  pkg/api/handlers/generic/containers_create.go   243
-rw-r--r--  pkg/api/handlers/generic/containers_stats.go    198
-rw-r--r--  pkg/api/handlers/generic/images.go              363
-rw-r--r--  pkg/api/handlers/generic/info.go                196
-rw-r--r--  pkg/api/handlers/generic/ping.go                 25
-rw-r--r--  pkg/api/handlers/generic/system.go               18
-rw-r--r--  pkg/api/handlers/generic/version.go              74
8 files changed, 1423 insertions, 0 deletions
diff --git a/pkg/api/handlers/generic/containers.go b/pkg/api/handlers/generic/containers.go
new file mode 100644
index 000000000..5a0a51fd7
--- /dev/null
+++ b/pkg/api/handlers/generic/containers.go
@@ -0,0 +1,306 @@
+package generic
+
+import (
+ "context"
+ "encoding/binary"
+ "fmt"
+ "net/http"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/libpod/logs"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/util"
+ "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+func RemoveContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Force bool `schema:"force"`
+ Vols bool `schema:"v"`
+ Link bool `schema:"link"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if query.Link {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ utils.ErrLinkNotSupport)
+ return
+ }
+ utils.RemoveContainer(w, r, query.Force, query.Vols)
+}
+
+func ListContainers(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ containers, err := runtime.GetAllContainers()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain system info"))
+ return
+ }
+
+ var list = make([]*handlers.Container, len(containers))
+ for i, ctnr := range containers {
+ api, err := handlers.LibpodToContainer(ctnr, infoData)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ list[i] = api
+ }
+ utils.WriteResponse(w, http.StatusOK, list)
+}
+
+func GetContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+ api, err := handlers.LibpodToContainerJSON(ctnr)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, api)
+}
+
+func KillContainer(w http.ResponseWriter, r *http.Request) {
+ // /{version}/containers/(name)/kill
+ con, err := utils.KillContainer(w, r)
+ if err != nil {
+ return
+ }
+ // The kill behavior for docker differs from podman: docker appears to wait
+ // for the container to exit, so the exit code is accurate immediately after
+ // the kill is sent, while libpod does not. Add a wait here, only for the
+ // docker side of things, to mimic that behavior.
+ if _, err = con.Wait(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "failed to wait for Container %s", con.ID()))
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func WaitContainer(w http.ResponseWriter, r *http.Request) {
+ var msg string
+ // /{version}/containers/(name)/wait
+ exitCode, err := utils.WaitContainer(w, r)
+ if err != nil {
+ msg = err.Error()
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.ContainerWaitOKBody{
+ StatusCode: int(exitCode),
+ Error: struct {
+ Message string
+ }{
+ Message: msg,
+ },
+ })
+}
+
+func PruneContainers(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ containers, err := runtime.GetAllContainers()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ deletedContainers := []string{}
+ var spaceReclaimed uint64
+ for _, ctnr := range containers {
+ // Only remove stopped or exited containers.
+ state, err := ctnr.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ switch state {
+ case define.ContainerStateStopped, define.ContainerStateExited:
+ default:
+ continue
+ }
+
+ deletedContainers = append(deletedContainers, ctnr.ID())
+ cSize, err := ctnr.RootFsSize()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ spaceReclaimed += uint64(cSize)
+
+ err = runtime.RemoveContainer(context.Background(), ctnr, false, false)
+ if err != nil && errors.Cause(err) != define.ErrCtrRemoved {
+ utils.InternalServerError(w, err)
+ return
+ }
+ }
+ report := types.ContainersPruneReport{
+ ContainersDeleted: deletedContainers,
+ SpaceReclaimed: spaceReclaimed,
+ }
+ utils.WriteResponse(w, http.StatusOK, report)
+}
+
+func LogsFromContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ Follow bool `schema:"follow"`
+ Stdout bool `schema:"stdout"`
+ Stderr bool `schema:"stderr"`
+ Since string `schema:"since"`
+ Until string `schema:"until"`
+ Timestamps bool `schema:"timestamps"`
+ Tail string `schema:"tail"`
+ }{
+ Tail: "all",
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ if !(query.Stdout || query.Stderr) {
+ msg := fmt.Sprintf("%s: you must choose at least one stream", http.StatusText(http.StatusBadRequest))
+ utils.Error(w, msg, http.StatusBadRequest, errors.Errorf("%s for %s", msg, r.URL.String()))
+ return
+ }
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ var tail int64 = -1
+ if query.Tail != "all" {
+ tail, err = strconv.ParseInt(query.Tail, 0, 64)
+ if err != nil {
+ utils.BadRequest(w, "tail", query.Tail, err)
+ return
+ }
+ }
+
+ var since time.Time
+ if _, found := mux.Vars(r)["since"]; found {
+ since, err = util.ParseInputTime(query.Since)
+ if err != nil {
+ utils.BadRequest(w, "since", query.Since, err)
+ return
+ }
+ }
+
+ var until time.Time
+ if _, found := mux.Vars(r)["until"]; found {
+ since, err = util.ParseInputTime(query.Until)
+ if err != nil {
+ utils.BadRequest(w, "until", query.Until, err)
+ return
+ }
+ }
+
+ options := &logs.LogOptions{
+ Details: true,
+ Follow: query.Follow,
+ Since: since,
+ Tail: tail,
+ Timestamps: query.Timestamps,
+ }
+
+ var wg sync.WaitGroup
+ options.WaitGroup = &wg
+
+ logChannel := make(chan *logs.LogLine, tail+1)
+ if err := runtime.Log([]*libpod.Container{ctnr}, options, logChannel); err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain logs for Container '%s'", name))
+ return
+ }
+ go func() {
+ wg.Wait()
+ close(logChannel)
+ }()
+
+ w.WriteHeader(http.StatusOK)
+ var builder strings.Builder
+ for ok := true; ok; ok = query.Follow {
+ for line := range logChannel {
+ if _, found := mux.Vars(r)["until"]; found {
+ if line.Time.After(until) {
+ break
+ }
+ }
+
+ // Reset variables; we're ready to loop again.
+ builder.Reset()
+ header := [8]byte{}
+
+ switch line.Device {
+ case "stdout":
+ if !query.Stdout {
+ continue
+ }
+ header[0] = 1
+ case "stderr":
+ if !query.Stderr {
+ continue
+ }
+ header[0] = 2
+ default:
+ // Logging and moving on is the best we can do here. We may have already sent
+ // a Status and Content-Type to the client, so we can no longer report an error.
+ log.Infof("unknown Device type '%s' in log file from Container %s", line.Device, ctnr.ID())
+ continue
+ }
+
+ if query.Timestamps {
+ builder.WriteString(line.Time.Format(time.RFC3339))
+ builder.WriteRune(' ')
+ }
+ builder.WriteString(line.Msg)
+
+ // Build header and output entry
+ binary.BigEndian.PutUint32(header[4:], uint32(builder.Len()))
+ if _, err := w.Write(header[:]); err != nil {
+ log.Errorf("unable to write log output header: %q", err)
+ }
+ if _, err := fmt.Fprint(w, builder.String()); err != nil {
+ log.Errorf("unable to write builder string: %q", err)
+ }
+
+ if flusher, ok := w.(http.Flusher); ok {
+ flusher.Flush()
+ }
+ }
+ }
+}
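For reference, the logs handler above frames every log line with the Docker stream-multiplexing header: byte 0 is the stream type (1 for stdout, 2 for stderr) and bytes 4-7 hold the big-endian payload length. A minimal client-side sketch that demultiplexes such a stream is shown below; the address, the /v1.24 prefix and the container name are placeholders for illustration, not something defined by this commit.

package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Address, API prefix and container name are assumptions for illustration.
	url := "http://localhost:8081/v1.24/containers/mycontainer/logs?stdout=true&stderr=true"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	header := make([]byte, 8)
	for {
		// Each frame starts with the 8-byte header written by LogsFromContainer.
		if _, err := io.ReadFull(resp.Body, header); err != nil {
			if err != io.EOF {
				fmt.Fprintln(os.Stderr, err)
			}
			return
		}
		payload := make([]byte, binary.BigEndian.Uint32(header[4:]))
		if _, err := io.ReadFull(resp.Body, payload); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		switch header[0] {
		case 1: // stdout
			fmt.Fprintf(os.Stdout, "%s\n", payload)
		case 2: // stderr
			fmt.Fprintf(os.Stderr, "%s\n", payload)
		}
	}
}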
diff --git a/pkg/api/handlers/generic/containers_create.go b/pkg/api/handlers/generic/containers_create.go
new file mode 100644
index 000000000..056f7e95c
--- /dev/null
+++ b/pkg/api/handlers/generic/containers_create.go
@@ -0,0 +1,243 @@
+package generic
+
+import (
+ "encoding/json"
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+ "strings"
+
+ "github.com/containers/libpod/cmd/podman/shared"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ image2 "github.com/containers/libpod/libpod/image"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/namespaces"
+ createconfig "github.com/containers/libpod/pkg/spec"
+ "github.com/containers/storage"
+ "github.com/docker/docker/pkg/signal"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+ "golang.org/x/sys/unix"
+)
+
+func CreateContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ input := handlers.CreateContainerConfig{}
+ query := struct {
+ Name string `schema:"name"`
+ }{
+ // override any golang type defaults
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+
+ newImage, err := runtime.ImageRuntime().NewFromLocal(input.Image)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "NewFromLocal()"))
+ return
+ }
+ cc, err := makeCreateConfig(input, newImage)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "makeCreatConfig()"))
+ return
+ }
+
+ cc.Name = query.Name
+ var pod *libpod.Pod
+ ctr, err := shared.CreateContainerFromCreateConfig(runtime, &cc, r.Context(), pod)
+ if err != nil {
+ if strings.Contains(err.Error(), "invalid log driver") {
+ // this does not quite work yet and needs a little more massaging
+ w.Header().Set("Content-Type", "text/plain; charset=us-ascii")
+ w.WriteHeader(http.StatusInternalServerError)
+ msg := fmt.Sprintf("logger: no log driver named '%s' is registered", input.HostConfig.LogConfig.Type)
+ if _, err := fmt.Fprintln(w, msg); err != nil {
+ log.Errorf("%s: %q", msg, err)
+ }
+ //s.WriteResponse(w, http.StatusInternalServerError, fmt.Sprintf("logger: no log driver named '%s' is registered", input.HostConfig.LogConfig.Type))
+ return
+ }
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "CreateContainerFromCreateConfig()"))
+ return
+ }
+
+ type ctrCreateResponse struct {
+ Id string `json:"Id"`
+ Warnings []string `json:"Warnings"`
+ }
+ response := ctrCreateResponse{
+ Id: ctr.ID(),
+ Warnings: []string{}}
+
+ utils.WriteResponse(w, http.StatusCreated, response)
+}
+
+func makeCreateConfig(input handlers.CreateContainerConfig, newImage *image2.Image) (createconfig.CreateConfig, error) {
+ var (
+ err error
+ init bool
+ tmpfs []string
+ volumes []string
+ )
+ env := make(map[string]string)
+ stopSignal := unix.SIGTERM
+ if len(input.StopSignal) > 0 {
+ stopSignal, err = signal.ParseSignal(input.StopSignal)
+ if err != nil {
+ return createconfig.CreateConfig{}, err
+ }
+ }
+
+ workDir := "/"
+ if len(input.WorkingDir) > 0 {
+ workDir = input.WorkingDir
+ }
+
+ stopTimeout := uint(define.CtrRemoveTimeout)
+ if input.StopTimeout != nil {
+ stopTimeout = uint(*input.StopTimeout)
+ }
+ c := createconfig.CgroupConfig{
+ Cgroups: "", // podman
+ Cgroupns: "", // podman
+ CgroupParent: "", // podman
+ CgroupMode: "", // podman
+ }
+ security := createconfig.SecurityConfig{
+ CapAdd: input.HostConfig.CapAdd,
+ CapDrop: input.HostConfig.CapDrop,
+ LabelOpts: nil, // podman
+ NoNewPrivs: false, // podman
+ ApparmorProfile: "", // podman
+ SeccompProfilePath: "",
+ SecurityOpts: input.HostConfig.SecurityOpt,
+ Privileged: input.HostConfig.Privileged,
+ ReadOnlyRootfs: input.HostConfig.ReadonlyRootfs,
+ ReadOnlyTmpfs: false, // podman-only
+ Sysctl: input.HostConfig.Sysctls,
+ }
+
+ network := createconfig.NetworkConfig{
+ DNSOpt: input.HostConfig.DNSOptions,
+ DNSSearch: input.HostConfig.DNSSearch,
+ DNSServers: input.HostConfig.DNS,
+ ExposedPorts: input.ExposedPorts,
+ HTTPProxy: false, // podman
+ IP6Address: "",
+ IPAddress: "",
+ LinkLocalIP: nil, // docker-only
+ MacAddress: input.MacAddress,
+ // NetMode: nil,
+ Network: input.HostConfig.NetworkMode.NetworkName(),
+ NetworkAlias: nil, // docker-only now
+ PortBindings: input.HostConfig.PortBindings,
+ Publish: nil, // podman
+ PublishAll: input.HostConfig.PublishAllPorts,
+ }
+
+ uts := createconfig.UtsConfig{
+ UtsMode: namespaces.UTSMode(input.HostConfig.UTSMode),
+ NoHosts: false, // podman
+ HostAdd: input.HostConfig.ExtraHosts,
+ Hostname: input.Hostname,
+ }
+
+ z := createconfig.UserConfig{
+ GroupAdd: input.HostConfig.GroupAdd,
+ IDMappings: &storage.IDMappingOptions{}, // podman //TODO <--- fix this,
+ UsernsMode: namespaces.UsernsMode(input.HostConfig.UsernsMode),
+ User: input.User,
+ }
+ pidConfig := createconfig.PidConfig{PidMode: namespaces.PidMode(input.HostConfig.PidMode)}
+ for k := range input.Volumes {
+ volumes = append(volumes, k)
+ }
+
+ // Docker is more flexible about its input than podman, which throws away
+ // incorrectly formatted variables, so we cannot reuse podman's parsing of
+ // the env input. Example input:
+ // [Foo Other=one Blank=]
+ for _, e := range input.Env {
+ splitEnv := strings.Split(e, "=")
+ switch len(splitEnv) {
+ case 0:
+ continue
+ case 1:
+ env[splitEnv[0]] = ""
+ default:
+ env[splitEnv[0]] = strings.Join(splitEnv[1:], "=")
+ }
+ }
+
+ // format the tmpfs mounts into a []string from map
+ for k, v := range input.HostConfig.Tmpfs {
+ tmpfs = append(tmpfs, fmt.Sprintf("%s:%s", k, v))
+ }
+
+ if input.HostConfig.Init != nil && *input.HostConfig.Init {
+ init = true
+ }
+
+ m := createconfig.CreateConfig{
+ Annotations: nil, // podman
+ Args: nil,
+ Cgroup: c,
+ CidFile: "",
+ ConmonPidFile: "", // podman
+ Command: input.Cmd,
+ UserCommand: input.Cmd, // podman
+ Detach: false, //
+ // Devices: input.HostConfig.Devices,
+ Entrypoint: input.Entrypoint,
+ Env: env,
+ HealthCheck: nil, //
+ Init: init,
+ InitPath: "", // tbd
+ Image: input.Image,
+ ImageID: newImage.ID(),
+ BuiltinImgVolumes: nil, // podman
+ ImageVolumeType: "", // podman
+ Interactive: false,
+ // IpcMode: input.HostConfig.IpcMode,
+ Labels: input.Labels,
+ LogDriver: input.HostConfig.LogConfig.Type, // is this correct
+ // LogDriverOpt: input.HostConfig.LogConfig.Config,
+ Name: input.Name,
+ Network: network,
+ Pod: "", // podman
+ PodmanPath: "", // podman
+ Quiet: false, // front-end only
+ Resources: createconfig.CreateResourceConfig{},
+ RestartPolicy: input.HostConfig.RestartPolicy.Name,
+ Rm: input.HostConfig.AutoRemove,
+ StopSignal: stopSignal,
+ StopTimeout: stopTimeout,
+ Systemd: false, // podman
+ Tmpfs: tmpfs,
+ User: z,
+ Uts: uts,
+ Tty: input.Tty,
+ Mounts: nil, // we populate
+ // MountsFlag: input.HostConfig.Mounts,
+ NamedVolumes: nil, // we populate
+ Volumes: volumes,
+ VolumesFrom: input.HostConfig.VolumesFrom,
+ WorkDir: workDir,
+ Rootfs: "", // podman
+ Security: security,
+ Syslog: false, // podman
+
+ Pid: pidConfig,
+ }
+ return m, nil
+}
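For illustration, a small Go client exercising the create endpoint above might look like the following sketch. The address and /v1.24 prefix match the curl example from the commit message; the image name, command and container name are placeholders.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Minimal compat-style create payload; image, command and name are placeholders.
	body, err := json.Marshal(map[string]interface{}{
		"Image": "docker.io/library/alpine:latest",
		"Cmd":   []string{"echo", "hello"},
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	resp, err := http.Post("http://localhost:8081/v1.24/containers/create?name=demo",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// CreateContainer answers 201 with {"Id": "...", "Warnings": []}.
	var created struct {
		Id       string   `json:"Id"`
		Warnings []string `json:"Warnings"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created container", created.Id)
}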
diff --git a/pkg/api/handlers/generic/containers_stats.go b/pkg/api/handlers/generic/containers_stats.go
new file mode 100644
index 000000000..0c4efc1df
--- /dev/null
+++ b/pkg/api/handlers/generic/containers_stats.go
@@ -0,0 +1,198 @@
+package generic
+
+import (
+ "encoding/json"
+ "net/http"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/cgroups"
+ docker "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+)
+
+const DefaultStatsPeriod = 5 * time.Second
+
+func StatsContainer(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+
+ query := struct {
+ Stream bool `schema:"stream"`
+ }{
+ Stream: true,
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := ctnr.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ if state != define.ContainerStateRunning && !query.Stream {
+ utils.WriteJSON(w, http.StatusOK, &handlers.Stats{StatsJSON: docker.StatsJSON{
+ Name: ctnr.Name(),
+ ID: ctnr.ID(),
+ }})
+ return
+ }
+
+ var preRead time.Time
+ var preCPUStats docker.CPUStats
+
+ stats, err := ctnr.GetContainerStats(&libpod.ContainerStats{})
+ if err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain Container %s stats", name))
+ return
+ }
+
+ if query.Stream {
+ preRead = time.Now()
+ preCPUStats = docker.CPUStats{
+ CPUUsage: docker.CPUUsage{
+ TotalUsage: stats.CPUNano,
+ PercpuUsage: []uint64{uint64(stats.CPU)},
+ UsageInKernelmode: 0,
+ UsageInUsermode: 0,
+ },
+ SystemUsage: 0,
+ OnlineCPUs: 0,
+ ThrottlingData: docker.ThrottlingData{},
+ }
+ time.Sleep(DefaultStatsPeriod)
+ }
+
+ cgroupPath, _ := ctnr.CGroupPath()
+ cgroup, _ := cgroups.Load(cgroupPath)
+
+ for ok := true; ok; ok = query.Stream {
+ state, _ := ctnr.State()
+ if state != define.ContainerStateRunning {
+ time.Sleep(10 * time.Second)
+ continue
+ }
+
+ stats, _ := ctnr.GetContainerStats(stats)
+ cgroupStat, _ := cgroup.Stat()
+ inspect, _ := ctnr.Inspect(false)
+
+ net := make(map[string]docker.NetworkStats)
+ net[inspect.NetworkSettings.EndpointID] = docker.NetworkStats{
+ RxBytes: stats.NetInput,
+ RxPackets: 0,
+ RxErrors: 0,
+ RxDropped: 0,
+ TxBytes: stats.NetOutput,
+ TxPackets: 0,
+ TxErrors: 0,
+ TxDropped: 0,
+ EndpointID: inspect.NetworkSettings.EndpointID,
+ InstanceID: "",
+ }
+
+ s := handlers.Stats{StatsJSON: docker.StatsJSON{
+ Stats: docker.Stats{
+ Read: time.Now(),
+ PreRead: preRead,
+ PidsStats: docker.PidsStats{
+ Current: cgroupStat.Pids.Current,
+ Limit: 0,
+ },
+ BlkioStats: docker.BlkioStats{
+ IoServiceBytesRecursive: toBlkioStatEntry(cgroupStat.Blkio.IoServiceBytesRecursive),
+ IoServicedRecursive: nil,
+ IoQueuedRecursive: nil,
+ IoServiceTimeRecursive: nil,
+ IoWaitTimeRecursive: nil,
+ IoMergedRecursive: nil,
+ IoTimeRecursive: nil,
+ SectorsRecursive: nil,
+ },
+ NumProcs: 0,
+ StorageStats: docker.StorageStats{
+ ReadCountNormalized: 0,
+ ReadSizeBytes: 0,
+ WriteCountNormalized: 0,
+ WriteSizeBytes: 0,
+ },
+ CPUStats: docker.CPUStats{
+ CPUUsage: docker.CPUUsage{
+ TotalUsage: cgroupStat.CPU.Usage.Total,
+ PercpuUsage: []uint64{uint64(stats.CPU)},
+ UsageInKernelmode: cgroupStat.CPU.Usage.Kernel,
+ UsageInUsermode: cgroupStat.CPU.Usage.Total - cgroupStat.CPU.Usage.Kernel,
+ },
+ SystemUsage: 0,
+ OnlineCPUs: uint32(len(cgroupStat.CPU.Usage.PerCPU)),
+ ThrottlingData: docker.ThrottlingData{
+ Periods: 0,
+ ThrottledPeriods: 0,
+ ThrottledTime: 0,
+ },
+ },
+ PreCPUStats: preCPUStats,
+ MemoryStats: docker.MemoryStats{
+ Usage: cgroupStat.Memory.Usage.Usage,
+ MaxUsage: cgroupStat.Memory.Usage.Limit,
+ Stats: nil,
+ Failcnt: 0,
+ Limit: cgroupStat.Memory.Usage.Limit,
+ Commit: 0,
+ CommitPeak: 0,
+ PrivateWorkingSet: 0,
+ },
+ },
+ Name: stats.Name,
+ ID: stats.ContainerID,
+ Networks: net,
+ }}
+
+ utils.WriteJSON(w, http.StatusOK, s)
+ if flusher, ok := w.(http.Flusher); ok {
+ flusher.Flush()
+ }
+
+ preRead = s.Read
+ bits, err := json.Marshal(s.CPUStats)
+ if err != nil {
+ logrus.Errorf("unable to marshal cpu stats: %q", err)
+ }
+ if err := json.Unmarshal(bits, &preCPUStats); err != nil {
+ logrus.Errorf("unable to unmarshal previous stats: %q", err)
+ }
+ time.Sleep(DefaultStatsPeriod)
+ }
+}
+
+func toBlkioStatEntry(entries []cgroups.BlkIOEntry) []docker.BlkioStatEntry {
+ results := make([]docker.BlkioStatEntry, len(entries))
+ for i, e := range entries {
+ bits, err := json.Marshal(e)
+ if err != nil {
+ logrus.Errorf("unable to marshal blkio stats: %q", err)
+ }
+ if err := json.Unmarshal(bits, &results[i]); err != nil {
+ logrus.Errorf("unable to unmarshal blkio stats: %q", err)
+ }
+ }
+ return results
+}
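Since StatsContainer writes one JSON stats document per DefaultStatsPeriod while streaming, a client can read them back to back with a json.Decoder. A rough sketch follows, reusing the docker API types this handler already imports; the address, /v1.24 prefix and container name are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	docker "github.com/docker/docker/api/types"
)

func main() {
	// Address, API prefix and container name are assumptions for illustration.
	resp, err := http.Get("http://localhost:8081/v1.24/containers/mycontainer/stats?stream=true")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// One JSON document is written per period; decode until the stream ends.
	dec := json.NewDecoder(resp.Body)
	for {
		var s docker.StatsJSON
		if err := dec.Decode(&s); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Printf("%s cpu=%d mem=%d/%d\n", s.ID,
			s.CPUStats.CPUUsage.TotalUsage, s.MemoryStats.Usage, s.MemoryStats.Limit)
	}
}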
diff --git a/pkg/api/handlers/generic/images.go b/pkg/api/handlers/generic/images.go
new file mode 100644
index 000000000..8029ee861
--- /dev/null
+++ b/pkg/api/handlers/generic/images.go
@@ -0,0 +1,363 @@
+package generic
+
+import (
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "strconv"
+ "strings"
+
+ "github.com/containers/buildah"
+ "github.com/containers/image/v5/manifest"
+ "github.com/containers/libpod/libpod"
+ image2 "github.com/containers/libpod/libpod/image"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/util"
+ "github.com/containers/storage"
+ "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+)
+
+func ExportImage(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 server
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ name := mux.Vars(r)["name"]
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.ImageNotFound(w, name, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ tmpfile, err := ioutil.TempFile("", "api.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to create tempfile"))
+ return
+ }
+ if err := tmpfile.Close(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to close tempfile"))
+ return
+ }
+ if err := newImage.Save(r.Context(), name, "docker-archive", tmpfile.Name(), []string{}, false, false); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to save image"))
+ return
+ }
+ rdr, err := os.Open(tmpfile.Name())
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to read the exported tarfile"))
+ return
+ }
+ defer rdr.Close()
+ defer os.Remove(tmpfile.Name())
+ utils.WriteResponse(w, http.StatusOK, rdr)
+}
+
+func PruneImages(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 500 internal
+ var (
+ dangling bool = true
+ err error
+ )
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ filters map[string]string
+ }{
+ // This is where you can override the golang default value for one of the fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ // FIXME This is likely wrong due to it not being a map[string][]string
+
+ // until ts is not supported on podman prune
+ if len(query.filters["until"]) > 0 {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "until is not supported yet"))
+ return
+ }
+ // labels are not supported on podman prune
+ if len(query.filters["label"]) > 0 {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "labelis not supported yet"))
+ return
+ }
+
+ if len(query.filters["dangling"]) > 0 {
+ dangling, err = strconv.ParseBool(query.filters["dangling"])
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "processing dangling filter"))
+ return
+ }
+ }
+ idr := []types.ImageDeleteResponseItem{}
+ //
+ // This code needs to be migrated to libpod to work correctly. I could not
+ // work my way around the information docker needs with the existing prune in libpod.
+ //
+ pruneImages, err := runtime.ImageRuntime().GetPruneImages(!dangling, []image2.ImageFilter{})
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to get images to prune"))
+ return
+ }
+ for _, p := range pruneImages {
+ repotags, err := p.RepoTags()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to get repotags for image"))
+ return
+ }
+ if err := p.Remove(r.Context(), true); err != nil {
+ if errors.Cause(err) == storage.ErrImageUsedByContainer {
+ logrus.Warnf("Failed to prune image %s as it is in use: %v", p.ID(), err)
+ continue
+ }
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to prune image"))
+ return
+ }
+ // newImageEvent is not exported, therefore we cannot record the event. This
+ // will be fixed when prune is fixed in libpod.
+ // defer p.newImageEvent(events.Prune)
+ response := types.ImageDeleteResponseItem{
+ Deleted: fmt.Sprintf("sha256:%s", p.ID()), // I ack this is not ideal
+ }
+ if len(repotags) > 0 {
+ response.Untagged = repotags[0]
+ }
+ idr = append(idr, response)
+ }
+ ipr := types.ImagesPruneReport{
+ ImagesDeleted: idr,
+ SpaceReclaimed: 1, // TODO we cannot supply this right now
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.ImagesPruneReport{ImagesPruneReport: ipr})
+}
+
+func CommitContainer(w http.ResponseWriter, r *http.Request) {
+ var (
+ destImage string
+ )
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ author string
+ changes string
+ comment string
+ container string
+ // fromSrc string // fromSrc is currently unused
+ pause bool
+ repo string
+ tag string
+ }{
+ // This is where you can override the golang default value for one of the fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ rtc, err := runtime.GetConfig()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+ sc := image2.GetSystemContext(rtc.SignaturePolicyPath, "", false)
+ tag := "latest"
+ options := libpod.ContainerCommitOptions{
+ Pause: true,
+ }
+ options.CommitOptions = buildah.CommitOptions{
+ SignaturePolicyPath: rtc.SignaturePolicyPath,
+ ReportWriter: os.Stderr,
+ SystemContext: sc,
+ PreferredManifestType: manifest.DockerV2Schema2MediaType,
+ }
+
+ input := handlers.CreateContainerConfig{}
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+
+ if len(query.tag) > 0 {
+ tag = query.tag
+ }
+ options.Message = query.comment
+ options.Author = query.author
+ options.Pause = query.pause
+ options.Changes = strings.Fields(query.changes)
+ ctr, err := runtime.LookupContainer(query.container)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, err)
+ return
+ }
+
+ // I know mitr hates this ... but doing it for now
+ if len(query.repo) > 0 {
+ destImage = fmt.Sprintf("%s:%s", query.repo, tag)
+ }
+
+ commitImage, err := ctr.Commit(r.Context(), destImage, options)
+ if err != nil && !strings.Contains(err.Error(), "is not running") {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "CommitFailure"))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.IDResponse{ID: commitImage.ID()}) // nolint
+}
+
+func CreateImageFromSrc(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 repo does not exist or no read access
+ // 500 internal
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ fromSrc string
+ changes []string
+ }{
+ // This is where you can override the golang default value for one of the fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ // fromSrc – Source to import. The value may be a URL from which the image can be retrieved or - to read the image from the request body. This parameter may only be used when importing an image.
+ source := query.fromSrc
+ if source == "-" {
+ f, err := ioutil.TempFile("", "api_load.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to create tempfile"))
+ return
+ }
+ source = f.Name()
+ if err := handlers.SaveFromBody(f, r); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to write temporary file"))
+ return
+ }
+ }
+ iid, err := runtime.Import(r.Context(), source, "", query.changes, "", false)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to import tarball"))
+ return
+ }
+ tmpfile, err := ioutil.TempFile("", "fromsrc.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to create tempfile"))
+ return
+ }
+ if err := tmpfile.Close(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to close tempfile"))
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusOK, struct {
+ Status string `json:"status"`
+ Progress string `json:"progress"`
+ ProgressDetail map[string]string `json:"progressDetail"`
+ Id string `json:"id"`
+ }{
+ Status: iid,
+ ProgressDetail: map[string]string{},
+ Id: iid,
+ })
+
+}
+
+func CreateImageFromImage(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 repo does not exist or no read access
+ // 500 internal
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ fromImage string
+ tag string
+ }{
+ // This is where you can override the golang default value for one of the fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ /*
+ fromImage – Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed.
+ repo – Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image.
+ tag – Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled.
+ */
+ fromImage := query.fromImage
+ if len(query.tag) < 1 {
+ fromImage = fmt.Sprintf("%s:%s", fromImage, query.tag)
+ }
+
+ // TODO
+ // We are eating the output right now because we haven't talked about how to deal with multiple responses yet
+ img, err := runtime.ImageRuntime().New(r.Context(), fromImage, "", "", nil, &image2.DockerRegistryOptions{}, image2.SigningOptions{}, nil, util.PullImageMissing)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+
+ // Success
+ utils.WriteResponse(w, http.StatusOK, struct {
+ Status string `json:"status"`
+ Error string `json:"error"`
+ Progress string `json:"progress"`
+ ProgressDetail map[string]string `json:"progressDetail"`
+ Id string `json:"id"`
+ }{
+ Status: fmt.Sprintf("pulling image (%s) from %s", img.Tag, strings.Join(img.Names(), ", ")),
+ ProgressDetail: map[string]string{},
+ Id: img.ID(),
+ })
+}
+
+func GetImage(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 no such
+ // 500 internal
+ name := mux.Vars(r)["name"]
+ newImage, err := handlers.GetImage(r, name)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ inspect, err := handlers.ImageDataToImageInspect(r.Context(), newImage)
+ if err != nil {
+ utils.Error(w, "Server error", http.StatusInternalServerError, errors.Wrapf(err, "Failed to convert ImageData to ImageInspect '%s'", inspect.ID))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, inspect)
+}
+
+func GetImages(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ images, err := utils.GetImages(w, r)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed get images"))
+ return
+ }
+ var summaries = make([]*handlers.ImageSummary, len(images))
+ for j, img := range images {
+ is, err := handlers.ImageToImageSummary(img)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed transform image summaries"))
+ return
+ }
+ summaries[j] = is
+ }
+ utils.WriteResponse(w, http.StatusOK, summaries)
+}
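As a usage sketch, pulling through CreateImageFromImage above amounts to a POST with fromImage and tag query parameters. The address, /v1.24 prefix and image reference below are placeholders for illustration.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Address, API prefix and image reference are placeholders for illustration.
	q := url.Values{}
	q.Set("fromImage", "docker.io/library/alpine")
	q.Set("tag", "latest")

	resp, err := http.Post("http://localhost:8081/v1.24/images/create?"+q.Encode(),
		"application/json", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// The handler currently answers with a single JSON status object rather
	// than a progress stream, so just echo whatever comes back.
	if _, err := io.Copy(os.Stdout, resp.Body); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}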
diff --git a/pkg/api/handlers/generic/info.go b/pkg/api/handlers/generic/info.go
new file mode 100644
index 000000000..2bef8db4f
--- /dev/null
+++ b/pkg/api/handlers/generic/info.go
@@ -0,0 +1,196 @@
+package generic
+
+import (
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "io/ioutil"
+ "net/http"
+ "os"
+ goRuntime "runtime"
+ "strings"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/config"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/rootless"
+ "github.com/containers/libpod/pkg/sysinfo"
+ docker "github.com/docker/docker/api/types"
+ "github.com/docker/docker/api/types/swarm"
+ "github.com/google/uuid"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+func GetInfo(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain system memory info"))
+ return
+ }
+ hostInfo := infoData[0].Data
+ storeInfo := infoData[1].Data
+
+ configInfo, err := runtime.GetConfig()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain runtime config"))
+ return
+ }
+ versionInfo, err := define.GetVersion()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain podman versions"))
+ return
+ }
+ stateInfo := getContainersState(runtime)
+ sysInfo := sysinfo.New(true)
+
+ // FIXME: Need to expose if runtime supports Checkpoint'ing
+ // liveRestoreEnabled := criu.CheckForCriu() && configInfo.RuntimeSupportsCheckpoint()
+
+ info := &handlers.Info{Info: docker.Info{
+ Architecture: goRuntime.GOARCH,
+ BridgeNfIP6tables: !sysInfo.BridgeNFCallIP6TablesDisabled,
+ BridgeNfIptables: !sysInfo.BridgeNFCallIPTablesDisabled,
+ CPUCfsPeriod: sysInfo.CPUCfsPeriod,
+ CPUCfsQuota: sysInfo.CPUCfsQuota,
+ CPUSet: sysInfo.Cpuset,
+ CPUShares: sysInfo.CPUShares,
+ CgroupDriver: configInfo.CgroupManager,
+ ClusterAdvertise: "",
+ ClusterStore: "",
+ ContainerdCommit: docker.Commit{},
+ Containers: storeInfo["ContainerStore"].(map[string]interface{})["number"].(int),
+ ContainersPaused: stateInfo[define.ContainerStatePaused],
+ ContainersRunning: stateInfo[define.ContainerStateRunning],
+ ContainersStopped: stateInfo[define.ContainerStateStopped] + stateInfo[define.ContainerStateExited],
+ Debug: log.IsLevelEnabled(log.DebugLevel),
+ DefaultRuntime: configInfo.OCIRuntime,
+ DockerRootDir: storeInfo["GraphRoot"].(string),
+ Driver: storeInfo["GraphDriverName"].(string),
+ DriverStatus: getGraphStatus(storeInfo),
+ ExperimentalBuild: true,
+ GenericResources: nil,
+ HTTPProxy: getEnv("http_proxy"),
+ HTTPSProxy: getEnv("https_proxy"),
+ ID: uuid.New().String(),
+ IPv4Forwarding: !sysInfo.IPv4ForwardingDisabled,
+ Images: storeInfo["ImageStore"].(map[string]interface{})["number"].(int),
+ IndexServerAddress: "",
+ InitBinary: "",
+ InitCommit: docker.Commit{},
+ Isolation: "",
+ KernelMemory: sysInfo.KernelMemory,
+ KernelMemoryTCP: false,
+ KernelVersion: hostInfo["kernel"].(string),
+ Labels: nil,
+ LiveRestoreEnabled: false,
+ LoggingDriver: "",
+ MemTotal: hostInfo["MemTotal"].(int64),
+ MemoryLimit: sysInfo.MemoryLimit,
+ NCPU: goRuntime.NumCPU(),
+ NEventsListener: 0,
+ NFd: getFdCount(),
+ NGoroutines: goRuntime.NumGoroutine(),
+ Name: hostInfo["hostname"].(string),
+ NoProxy: getEnv("no_proxy"),
+ OSType: goRuntime.GOOS,
+ OSVersion: hostInfo["Distribution"].(map[string]interface{})["version"].(string),
+ OomKillDisable: sysInfo.OomKillDisable,
+ OperatingSystem: hostInfo["Distribution"].(map[string]interface{})["distribution"].(string),
+ PidsLimit: sysInfo.PidsLimit,
+ Plugins: docker.PluginsInfo{},
+ ProductLicense: "Apache-2.0",
+ RegistryConfig: nil,
+ RuncCommit: docker.Commit{},
+ Runtimes: getRuntimes(configInfo),
+ SecurityOptions: getSecOpts(sysInfo),
+ ServerVersion: versionInfo.Version,
+ SwapLimit: sysInfo.SwapLimit,
+ Swarm: swarm.Info{
+ LocalNodeState: swarm.LocalNodeStateInactive,
+ },
+ SystemStatus: nil,
+ SystemTime: time.Now().Format(time.RFC3339Nano),
+ Warnings: []string{},
+ },
+ BuildahVersion: hostInfo["BuildahVersion"].(string),
+ CPURealtimePeriod: sysInfo.CPURealtimePeriod,
+ CPURealtimeRuntime: sysInfo.CPURealtimeRuntime,
+ CgroupVersion: hostInfo["CgroupVersion"].(string),
+ Rootless: rootless.IsRootless(),
+ SwapFree: hostInfo["SwapFree"].(int64),
+ SwapTotal: hostInfo["SwapTotal"].(int64),
+ Uptime: hostInfo["uptime"].(string),
+ }
+ utils.WriteResponse(w, http.StatusOK, info)
+}
+
+func getGraphStatus(storeInfo map[string]interface{}) [][2]string {
+ var graphStatus [][2]string
+ for k, v := range storeInfo["GraphStatus"].(map[string]string) {
+ graphStatus = append(graphStatus, [2]string{k, v})
+ }
+ return graphStatus
+}
+
+func getSecOpts(sysInfo *sysinfo.SysInfo) []string {
+ var secOpts []string
+ if sysInfo.AppArmor {
+ secOpts = append(secOpts, "name=apparmor")
+ }
+ if sysInfo.Seccomp {
+ // FIXME: get profile name...
+ secOpts = append(secOpts, fmt.Sprintf("name=seccomp,profile=%s", "default"))
+ }
+ return secOpts
+}
+
+func getRuntimes(configInfo *config.Config) map[string]docker.Runtime {
+ var runtimes = map[string]docker.Runtime{}
+ for name, paths := range configInfo.OCIRuntimes {
+ runtimes[name] = docker.Runtime{
+ Path: paths[0],
+ Args: nil,
+ }
+ }
+ return runtimes
+}
+
+func getFdCount() (count int) {
+ count = -1
+ if entries, err := ioutil.ReadDir("/proc/self/fd"); err == nil {
+ count = len(entries)
+ }
+ return
+}
+
+// Just ignoring Container errors here...
+func getContainersState(r *libpod.Runtime) map[define.ContainerStatus]int {
+ var states = map[define.ContainerStatus]int{}
+ ctnrs, err := r.GetAllContainers()
+ if err == nil {
+ for _, ctnr := range ctnrs {
+ state, err := ctnr.State()
+ if err != nil {
+ continue
+ }
+ states[state] += 1
+ }
+ }
+ return states
+}
+
+func getEnv(value string) string {
+ if v, exists := os.LookupEnv(strings.ToUpper(value)); exists {
+ return v
+ }
+ if v, exists := os.LookupEnv(strings.ToLower(value)); exists {
+ return v
+ }
+ return ""
+}
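A hedged sketch of reading /info from Go follows. It assumes the default JSON field names of the docker Info type (plus the podman Rootless addition shown above) and a placeholder address and /v1.24 prefix.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Address and API prefix are assumptions for illustration.
	resp, err := http.Get("http://localhost:8081/v1.24/info")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	// Pick out a handful of the fields GetInfo populates; everything else is
	// ignored by the decoder.
	var info struct {
		ServerVersion   string
		OperatingSystem string
		Containers      int
		Images          int
		Rootless        bool
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%+v\n", info)
}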
diff --git a/pkg/api/handlers/generic/ping.go b/pkg/api/handlers/generic/ping.go
new file mode 100644
index 000000000..44a67d53f
--- /dev/null
+++ b/pkg/api/handlers/generic/ping.go
@@ -0,0 +1,25 @@
+package generic
+
+import (
+ "fmt"
+ "net/http"
+)
+
+func PingGET(w http.ResponseWriter, _ *http.Request) {
+ setHeaders(w)
+ fmt.Fprintln(w, "OK")
+}
+
+func PingHEAD(w http.ResponseWriter, _ *http.Request) {
+ setHeaders(w)
+ fmt.Fprintln(w, "")
+}
+
+func setHeaders(w http.ResponseWriter) {
+ w.Header().Set("API-Version", DefaultApiVersion)
+ w.Header().Set("BuildKit-Version", "")
+ w.Header().Set("Docker-Experimental", "true")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.Header().Set("Pragma", "no-cache")
+ w.WriteHeader(http.StatusOK)
+}
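To illustrate, a client can probe the ping handler and read the headers set by setHeaders(); the /_ping path and the address below are assumptions, since route registration is not part of this diff.

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// The /_ping path and address are assumptions for illustration.
	resp, err := http.Head("http://localhost:8081/_ping")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
	fmt.Println("API-Version:", resp.Header.Get("API-Version"))
}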
diff --git a/pkg/api/handlers/generic/system.go b/pkg/api/handlers/generic/system.go
new file mode 100644
index 000000000..254990b95
--- /dev/null
+++ b/pkg/api/handlers/generic/system.go
@@ -0,0 +1,18 @@
+package generic
+
+import (
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+
+ docker "github.com/docker/docker/api/types"
+)
+
+func GetDiskUsage(w http.ResponseWriter, r *http.Request) {
+ utils.WriteResponse(w, http.StatusOK, handlers.DiskUsage{DiskUsage: docker.DiskUsage{
+ LayersSize: 0,
+ Images: nil,
+ Containers: nil,
+ Volumes: nil,
+ }})
+}
diff --git a/pkg/api/handlers/generic/version.go b/pkg/api/handlers/generic/version.go
new file mode 100644
index 000000000..2c2283d10
--- /dev/null
+++ b/pkg/api/handlers/generic/version.go
@@ -0,0 +1,74 @@
+package generic
+
+import (
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+ goRuntime "runtime"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ docker "github.com/docker/docker/api/types"
+ "github.com/pkg/errors"
+)
+
+const (
+ DefaultApiVersion = "1.40" // See https://docs.docker.com/engine/api/v1.40/
+ MinimalApiVersion = "1.24"
+)
+
+func VersionHandler(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ versionInfo, err := define.GetVersion()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain system memory info"))
+ return
+ }
+ hostInfo := infoData[0].Data
+
+ components := []docker.ComponentVersion{{
+ Name: "Podman Engine",
+ Version: versionInfo.Version,
+ Details: map[string]string{
+ "APIVersion": DefaultApiVersion,
+ "Arch": goRuntime.GOARCH,
+ "BuildTime": time.Unix(versionInfo.Built, 0).Format(time.RFC3339),
+ "Experimental": "true",
+ "GitCommit": versionInfo.GitCommit,
+ "GoVersion": versionInfo.GoVersion,
+ "KernelVersion": hostInfo["kernel"].(string),
+ "MinAPIVersion": MinimalApiVersion,
+ "Os": goRuntime.GOOS,
+ },
+ }}
+
+ utils.WriteResponse(w, http.StatusOK, handlers.Version{Version: docker.Version{
+ Platform: struct {
+ Name string
+ }{
+ Name: fmt.Sprintf("%s/%s/%s", goRuntime.GOOS, goRuntime.GOARCH, hostInfo["Distribution"].(map[string]interface{})["distribution"].(string)),
+ },
+ APIVersion: components[0].Details["APIVersion"],
+ Arch: components[0].Details["Arch"],
+ BuildTime: components[0].Details["BuildTime"],
+ Components: components,
+ Experimental: true,
+ GitCommit: components[0].Details["GitCommit"],
+ GoVersion: components[0].Details["GoVersion"],
+ KernelVersion: components[0].Details["KernelVersion"],
+ MinAPIVersion: components[0].Details["MinAPIVersion"],
+ Os: components[0].Details["Os"],
+ Version: components[0].Version,
+ }})
+}
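Finally, a sketch of consuming the version endpoint above, decoding into the docker Version type that the handler wraps; the address and /v1.24 prefix are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"

	docker "github.com/docker/docker/api/types"
)

func main() {
	// Address and API prefix are assumptions for illustration.
	resp, err := http.Get("http://localhost:8081/v1.24/version")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	var v docker.Version
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("engine %s, API %s (min %s)\n", v.Version, v.APIVersion, v.MinAPIVersion)
	for _, c := range v.Components {
		fmt.Printf("  %s %s\n", c.Name, c.Version)
	}
}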