author     Jhon Honce <jhonce@redhat.com>    2019-11-01 13:03:34 -0700
committer  baude <bbaude@redhat.com>         2020-01-10 09:41:39 -0600
commit     d924494f561bb878a2b3a7ce438d87ecb934b5fb (patch)
tree       29983e7411c8108e74e4286b90a535a114dee755 /pkg/api/handlers
parent     6ed88e047579bd2d1eac99a6089cc617f0c4773d (diff)
Initial commit on compatible API
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Create service command
Use cd cmd/service && go build .
$ systemd-socket-activate -l 8081 cmd/service/service &
$ curl http://localhost:8081/v1.24/images/json
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Correct Makefile
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Two more stragglers
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Report errors back as http headers
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Split out handlers, updated output
Output aligned to docker structures
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactored routing, added more endpoints and types
* Encapsulated all the routing information in the handler_* files.
* Added more serviceapi/types, including podman additions. See Info
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Cleaned up code, implemented info content
* Move Content-Type check into serviceHandler
* Custom 404 handler showing the url, mostly for debugging
* Refactored images: better method names and explicit http codes
* Added content to /info
* Added podman fields to Info struct
* Added Container struct
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add a bunch of endpoints
containers: stop, pause, unpause, wait, rm
images: tag, rmi, create (pull only)
Signed-off-by: baude <bbaude@redhat.com>

Add even more handlers
* Add serviceapi/Error() to improve error handling
* Better support for API return payloads
* Renamed unimplemented to unsupported; these are generic endpoints we don't intend to ever support. Swarm broken out since it uses different HTTP codes to signal that the node is not in a swarm.
* Added more types
* API Version broken out so it can be validated in the future
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Refactor to introduce ServiceWriter
Signed-off-by: Jhon Honce <jhonce@redhat.com>

populate pods endpoints
/libpod/pods/.. exists, kill, pause, prune, restart, remove, start, stop, unpause
Signed-off-by: baude <bbaude@redhat.com>

Add components to Version, fix Error body
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Add images pull output, fix swarm routes
* docker-py tests/integration/api_client_test.py pass 100%
* docker-py tests/integration/api_image_test.py pass 4/16
  + Test failures include services podman does not support
Signed-off-by: Jhon Honce <jhonce@redhat.com>

pods endpoint submission 2
add create and others; only top and stats are left.
Signed-off-by: baude <bbaude@redhat.com>

Update pull image to work from empty registry
Signed-off-by: Jhon Honce <jhonce@redhat.com>

pod create and container create
first pass at pod and container create. the container create does not quite work yet but it is very close. pod create needs a partial rewrite. also broke off the DELETE (rm/rmi) to specific handler funcs.
Signed-off-by: baude <bbaude@redhat.com>

Add docker-py demos, GET .../containers/json
* Update serviceapi/types to reflect libpod not podman
* Refactored removeImage() to provide non-streaming return
Signed-off-by: Jhon Honce <jhonce@redhat.com>

create container part2
finished minimal config needed for create container. started demo.py for upcoming talk
Signed-off-by: baude <bbaude@redhat.com>

Stop server after honoring request
* Remove casting for method calls
* Improve WriteResponse()
* Update Container API type to match docker API
Signed-off-by: Jhon Honce <jhonce@redhat.com>

fix namespace assumptions
cleaned up namespace issues with libpod.
Signed-off-by: baude <bbaude@redhat.com>

wip
Signed-off-by: baude <bbaude@redhat.com>

Add sliding window when shutting down server
* Added a Timeout rather than closing down service on each call
* Added gorilla/schema dependency for Decode'ing query parameters
* Improved error handling
* Container logs returned and multiplexed for stdout and stderr
  * .../containers/{name}/logs?stdout=True&stderr=True
* Container stats
  * .../containers/{name}/stats
Signed-off-by: Jhon Honce <jhonce@redhat.com>

Improve error handling
* Add check for at least one std stream required for /containers/{id}/logs
* Add check for state in /containers/{id}/top
* Fill in more fields for /info
* Fixed error checking in service start code
Signed-off-by: Jhon Honce <jhonce@redhat.com>

get rest of image tests to pass
Signed-off-by: baude <bbaude@redhat.com>

linting our content
Signed-off-by: baude <bbaude@redhat.com>

more linting
Signed-off-by: baude <bbaude@redhat.com>

more linting
Signed-off-by: baude <bbaude@redhat.com>

pruning
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 pods
migrate from using args in the url to using a json struct in body for pod create.
Signed-off-by: baude <bbaude@redhat.com>

fix handler_images prune
prune's api changed slightly to deal with filters.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]enabled base container create tests
enabling the base container create tests which allow us to get more into the stop, kill, etc tests. many new tests now pass.
Signed-off-by: baude <bbaude@redhat.com>

serviceapi errors: append error message to API message
I dearly hope this is not breaking any other tests but debugging "Internal Server Error" is not helpful to any user. In case it breaks tests, we can revert the commit - that's why it's a small one.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

serviceAPI: add containers/prune endpoint
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add `service` make target
Also remove the non-functional sub-Makefile.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

add make targets for testing the service
* `sudo make run-service` for running the service.
* `DOCKERPY_TEST="tests/integration/api_container_test.py::ListContainersTest" make run-docker-py-tests` for running a specific test. Run all tests by leaving the env variable empty.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>

Split handlers and server packages
The files were split to help contain bloat. The api/server package will contain all code related to the functioning of the server, while api/handlers will have all the code related to implementing the endpoints.
api/server/register_* will contain the methods for registering endpoints. Additionally, they will have the comments for generating the swagger spec file.
See api/handlers/version.go for a small example handler; api/handlers/containers.go contains much more complex handlers.
Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]enabled more tests
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]libpod endpoints
small refactor for libpod inclusion and began adding endpoints.
Signed-off-by: baude <bbaude@redhat.com>

Implement /build and /events
* Include crypto libraries for future ssh work
Signed-off-by: Jhon Honce <jhonce@redhat.com>

[CI:DOCS]more image implementations
convert from using for to query structs, among other changes, including new endpoints.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add bindings for golang
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add volume endpoints for libpod
create, inspect, ls, prune, and rm
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]apiv2 healthcheck enablement
wire up container healthchecks for the api.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add mount endpoints
via the api, allow ability to mount a container and list container mounts.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]Add search endpoint
add search endpoint with golang bindings
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]more apiv2 development
misc population of methods, etc
Signed-off-by: baude <bbaude@redhat.com>

rebase cleanup and epoch reset
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add more network endpoints
also, add some initial error handling and convenience functions for standard endpoints.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]use helper funcs for bindings
use the methods developed to make writing bindings less duplicative and easier to use.
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]add return info for prereview
begin to add return info and status codes for errors so that we can review the apiv2
Signed-off-by: baude <bbaude@redhat.com>

[CI:DOCS]first pass at adding swagger docs for api
Signed-off-by: baude <bbaude@redhat.com>
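
For orientation, here is a minimal Go client sketch for poking the compatible API once the service is up, assuming the socket-activated address localhost:8081 used in the commit message; /v1.24/containers/json is one of the compat routes added in this series.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Assumes the service was started as described in the commit message:
	//   systemd-socket-activate -l 8081 cmd/service/service &
	resp, err := http.Get("http://localhost:8081/v1.24/containers/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n%s\n", resp.Status, body)
}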
Diffstat (limited to 'pkg/api/handlers')
-rw-r--r--  pkg/api/handlers/containers.go                 194
-rw-r--r--  pkg/api/handlers/containers_top.go              61
-rw-r--r--  pkg/api/handlers/events.go                      46
-rw-r--r--  pkg/api/handlers/generic/containers.go         306
-rw-r--r--  pkg/api/handlers/generic/containers_create.go  243
-rw-r--r--  pkg/api/handlers/generic/containers_stats.go   198
-rw-r--r--  pkg/api/handlers/generic/images.go             363
-rw-r--r--  pkg/api/handlers/generic/info.go               196
-rw-r--r--  pkg/api/handlers/generic/ping.go                25
-rw-r--r--  pkg/api/handlers/generic/system.go              18
-rw-r--r--  pkg/api/handlers/generic/version.go             74
-rw-r--r--  pkg/api/handlers/handler.go                     45
-rw-r--r--  pkg/api/handlers/images.go                     185
-rw-r--r--  pkg/api/handlers/images_build.go               239
-rw-r--r--  pkg/api/handlers/libpod/containers.go          186
-rw-r--r--  pkg/api/handlers/libpod/healthcheck.go          25
-rw-r--r--  pkg/api/handlers/libpod/images.go              165
-rw-r--r--  pkg/api/handlers/libpod/pods.go                465
-rw-r--r--  pkg/api/handlers/libpod/volumes.go             174
-rw-r--r--  pkg/api/handlers/types.go                      534
-rw-r--r--  pkg/api/handlers/unsupported.go                 17
-rw-r--r--  pkg/api/handlers/utils/containers.go           103
-rw-r--r--  pkg/api/handlers/utils/errors.go                86
-rw-r--r--  pkg/api/handlers/utils/handler.go               44
-rw-r--r--  pkg/api/handlers/utils/images.go                32
25 files changed, 4024 insertions, 0 deletions
diff --git a/pkg/api/handlers/containers.go b/pkg/api/handlers/containers.go
new file mode 100644
index 000000000..6b09321a0
--- /dev/null
+++ b/pkg/api/handlers/containers.go
@@ -0,0 +1,194 @@
+package handlers
+
+import (
+ "fmt"
+ "net/http"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+func StopContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+
+ // /{version}/containers/(name)/stop
+ query := struct {
+ Timeout int `schema:"t"`
+ }{
+ // override any golang type defaults
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ name := getName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := con.State()
+ if err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "unable to get state for Container %s", name))
+ return
+ }
+ // If the Container is stopped already, send a 304
+ if state == define.ContainerStateStopped || state == define.ContainerStateExited {
+ utils.Error(w, http.StatusText(http.StatusNotModified), http.StatusNotModified,
+ errors.Errorf("Container %s is already stopped", name))
+ return
+ }
+
+ var stopError error
+ if query.Timeout > 0 {
+ stopError = con.StopWithTimeout(uint(query.Timeout))
+ } else {
+ stopError = con.Stop()
+ }
+ if stopError != nil {
+ utils.InternalServerError(w, errors.Wrapf(stopError, "failed to stop %s", name))
+ return
+ }
+
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func UnpauseContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ // /{version}/containers/(name)/unpause
+ name := getName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ // the API does not error if the Container is not paused, so just attempt the unpause
+ if err := con.Unpause(); err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func PauseContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ // /{version}/containers/(name)/pause
+ name := getName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ // the API does not error if the Container is already paused, so just attempt the pause
+ if err := con.Pause(); err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func StartContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ DetachKeys string `schema:"detachKeys"`
+ }{
+ // Override golang default values for types
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if len(query.DetachKeys) > 0 {
+ // TODO - start does not support adding detach keys
+ utils.Error(w, "Something went wrong", http.StatusBadRequest, errors.New("the detachKeys parameter is not supported yet"))
+ return
+ }
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := getName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := con.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ if state == define.ContainerStateRunning {
+ msg := fmt.Sprintf("Container %s is already running", name)
+ utils.Error(w, msg, http.StatusNotModified, errors.New(msg))
+ return
+ }
+ if err := con.Start(r.Context(), false); err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func RestartContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ // /{version}/containers/(name)/restart
+ query := struct {
+ Timeout int `schema:"t"`
+ }{
+ // Override golang default values for types
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ name := getName(r)
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := con.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ // FIXME: This is not in the swagger.yml...
+ // If the Container is stopped already, send a 409
+ if state == define.ContainerStateStopped || state == define.ContainerStateExited {
+ msg := fmt.Sprintf("Container %s is not running", name)
+ utils.Error(w, msg, http.StatusConflict, errors.New(msg))
+ return
+ }
+
+ timeout := con.StopTimeout()
+ if _, found := mux.Vars(r)["t"]; found {
+ timeout = uint(query.Timeout)
+ }
+
+ if err := con.RestartWithTimeout(r.Context(), timeout); err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
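
The handlers above all follow the same pattern for query parameters: an anonymous struct with schema tags, decoded by the shared gorilla/schema decoder pulled from the request context. A standalone sketch of that pattern (stopQuery and the literal query string are illustrative, not taken from the source):

package main

import (
	"fmt"
	"net/url"

	"github.com/gorilla/schema"
)

// stopQuery mirrors the anonymous query structs above: each field is bound
// to a URL query parameter via its schema tag and keeps its Go zero value
// (or an explicit default) when the parameter is absent.
type stopQuery struct {
	Timeout int `schema:"t"`
}

func main() {
	decoder := schema.NewDecoder()
	// Equivalent to decoding r.URL.Query() for .../containers/{name}/stop?t=5
	values, _ := url.ParseQuery("t=5")

	q := stopQuery{}
	if err := decoder.Decode(&q, values); err != nil {
		panic(err)
	}
	fmt.Println(q.Timeout) // 5
}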
diff --git a/pkg/api/handlers/containers_top.go b/pkg/api/handlers/containers_top.go
new file mode 100644
index 000000000..03081372e
--- /dev/null
+++ b/pkg/api/handlers/containers_top.go
@@ -0,0 +1,61 @@
+package handlers
+
+import (
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+ "strings"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+func TopContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+
+ query := struct {
+ PsArgs string `schema:"ps_args"`
+ }{
+ PsArgs: "-ef",
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := ctnr.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ if state != define.ContainerStateRunning {
+ utils.ContainerNotRunning(w, name, errors.Errorf("Container %s must be running to perform top operation", name))
+ return
+ }
+
+ output, err := ctnr.Top([]string{})
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ var body = ContainerTopOKBody{}
+ if len(output) > 0 {
+ body.Titles = strings.Split(output[0], "\t")
+ for _, line := range output[1:] {
+ body.Processes = append(body.Processes, strings.Split(line, "\t"))
+ }
+ }
+ utils.WriteJSON(w, http.StatusOK, body)
+}
diff --git a/pkg/api/handlers/events.go b/pkg/api/handlers/events.go
new file mode 100644
index 000000000..267d552df
--- /dev/null
+++ b/pkg/api/handlers/events.go
@@ -0,0 +1,46 @@
+package handlers
+
+import (
+ "encoding/json"
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+
+ "github.com/pkg/errors"
+)
+
+func GetEvents(w http.ResponseWriter, r *http.Request) {
+ query := struct {
+ Since string `json:"since"`
+ Until string `json:"until"`
+ Filters string `json:"filters"`
+ }{}
+ if err := decodeQuery(r, &query); err != nil {
+ utils.Error(w, "Failed to parse parameters", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ }
+
+ var filters = map[string][]string{}
+ if found := hasVar(r, "filters"); found {
+ if err := json.Unmarshal([]byte(query.Filters), &filters); err != nil {
+ utils.BadRequest(w, "filters", query.Filters, err)
+ return
+ }
+ }
+
+ var libpodFilters = make([]string, 0, len(filters))
+ for k, v := range filters {
+ libpodFilters = append(libpodFilters, fmt.Sprintf("%s=%s", k, v[0]))
+ }
+
+ libpodEvents, err := getRuntime(r).GetEvents(libpodFilters)
+ if err != nil {
+ utils.BadRequest(w, "filters", query.Filters, err)
+ return
+ }
+
+ var apiEvents = make([]*Event, 0, len(libpodEvents))
+ for _, v := range libpodEvents {
+ apiEvents = append(apiEvents, EventToApiEvent(v))
+ }
+ utils.WriteJSON(w, http.StatusOK, apiEvents)
+}
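
GetEvents expects the docker-style filters parameter: a JSON-encoded map[string][]string that it flattens into libpod "key=value" filters. A small sketch of that conversion (the filter values are made up):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// e.g. GET /events?filters={"container":["mycontainer"],"event":["start"]}
	raw := `{"container":["mycontainer"],"event":["start"]}`

	filters := map[string][]string{}
	if err := json.Unmarshal([]byte(raw), &filters); err != nil {
		panic(err)
	}

	// Same flattening GetEvents performs before calling the runtime's GetEvents().
	libpodFilters := make([]string, 0, len(filters))
	for k, v := range filters {
		libpodFilters = append(libpodFilters, fmt.Sprintf("%s=%s", k, v[0]))
	}
	fmt.Println(libpodFilters)
}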
diff --git a/pkg/api/handlers/generic/containers.go b/pkg/api/handlers/generic/containers.go
new file mode 100644
index 000000000..5a0a51fd7
--- /dev/null
+++ b/pkg/api/handlers/generic/containers.go
@@ -0,0 +1,306 @@
+package generic
+
+import (
+ "context"
+ "encoding/binary"
+ "fmt"
+ "net/http"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/libpod/logs"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/util"
+ "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+func RemoveContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Force bool `schema:"force"`
+ Vols bool `schema:"v"`
+ Link bool `schema:"link"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if query.Link {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ utils.ErrLinkNotSupport)
+ return
+ }
+ utils.RemoveContainer(w, r, query.Force, query.Vols)
+}
+
+func ListContainers(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ containers, err := runtime.GetAllContainers()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain system info"))
+ return
+ }
+
+ var list = make([]*handlers.Container, len(containers))
+ for i, ctnr := range containers {
+ api, err := handlers.LibpodToContainer(ctnr, infoData)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ list[i] = api
+ }
+ utils.WriteResponse(w, http.StatusOK, list)
+}
+
+func GetContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+ api, err := handlers.LibpodToContainerJSON(ctnr)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, api)
+}
+
+func KillContainer(w http.ResponseWriter, r *http.Request) {
+ // /{version}/containers/(name)/kill
+ con, err := utils.KillContainer(w, r)
+ if err != nil {
+ return
+ }
+ // the kill behavior for docker differs from podman in that docker appears to wait
+ // for the Container to croak so the exit code is accurate immediately after the
+ // kill is sent. libpod does not, but we can add a wait here only for the docker
+ // side of things and mimic that behavior
+ if _, err = con.Wait(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "failed to wait for Container %s", con.ID()))
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func WaitContainer(w http.ResponseWriter, r *http.Request) {
+ var msg string
+ // /{version}/containers/(name)/wait
+ exitCode, err := utils.WaitContainer(w, r)
+ if err != nil {
+ msg = err.Error()
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.ContainerWaitOKBody{
+ StatusCode: int(exitCode),
+ Error: struct {
+ Message string
+ }{
+ Message: msg,
+ },
+ })
+}
+
+func PruneContainers(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ containers, err := runtime.GetAllContainers()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+
+ deletedContainers := []string{}
+ var spaceReclaimed uint64
+ for _, ctnr := range containers {
+ // Only remove stopped or exit'ed containers.
+ state, err := ctnr.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ switch state {
+ case define.ContainerStateStopped, define.ContainerStateExited:
+ default:
+ continue
+ }
+
+ deletedContainers = append(deletedContainers, ctnr.ID())
+ cSize, err := ctnr.RootFsSize()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ spaceReclaimed += uint64(cSize)
+
+ err = runtime.RemoveContainer(context.Background(), ctnr, false, false)
+ if err != nil && !(errors.Cause(err) == define.ErrCtrRemoved) {
+ utils.InternalServerError(w, err)
+ return
+ }
+ }
+ report := types.ContainersPruneReport{
+ ContainersDeleted: deletedContainers,
+ SpaceReclaimed: spaceReclaimed,
+ }
+ utils.WriteResponse(w, http.StatusOK, report)
+}
+
+func LogsFromContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ Follow bool `schema:"follow"`
+ Stdout bool `schema:"stdout"`
+ Stderr bool `schema:"stderr"`
+ Since string `schema:"since"`
+ Until string `schema:"until"`
+ Timestamps bool `schema:"timestamps"`
+ Tail string `schema:"tail"`
+ }{
+ Tail: "all",
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ if !(query.Stdout || query.Stderr) {
+ msg := fmt.Sprintf("%s: you must choose at least one stream", http.StatusText(http.StatusBadRequest))
+ utils.Error(w, msg, http.StatusBadRequest, errors.Errorf("%s for %s", msg, r.URL.String()))
+ return
+ }
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ var tail int64 = -1
+ if query.Tail != "all" {
+ tail, err = strconv.ParseInt(query.Tail, 0, 64)
+ if err != nil {
+ utils.BadRequest(w, "tail", query.Tail, err)
+ return
+ }
+ }
+
+ var since time.Time
+ if _, found := mux.Vars(r)["since"]; found {
+ since, err = util.ParseInputTime(query.Since)
+ if err != nil {
+ utils.BadRequest(w, "since", query.Since, err)
+ return
+ }
+ }
+
+ var until time.Time
+ if _, found := mux.Vars(r)["until"]; found {
+ since, err = util.ParseInputTime(query.Until)
+ if err != nil {
+ utils.BadRequest(w, "until", query.Until, err)
+ return
+ }
+ }
+
+ options := &logs.LogOptions{
+ Details: true,
+ Follow: query.Follow,
+ Since: since,
+ Tail: tail,
+ Timestamps: query.Timestamps,
+ }
+
+ var wg sync.WaitGroup
+ options.WaitGroup = &wg
+
+ logChannel := make(chan *logs.LogLine, tail+1)
+ if err := runtime.Log([]*libpod.Container{ctnr}, options, logChannel); err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain logs for Container '%s'", name))
+ return
+ }
+ go func() {
+ wg.Wait()
+ close(logChannel)
+ }()
+
+ w.WriteHeader(http.StatusOK)
+ var builder strings.Builder
+ for ok := true; ok; ok = query.Follow {
+ for line := range logChannel {
+ if _, found := mux.Vars(r)["until"]; found {
+ if line.Time.After(until) {
+ break
+ }
+ }
+
+ // Reset variables; we're ready to loop again
+ builder.Reset()
+ header := [8]byte{}
+
+ switch line.Device {
+ case "stdout":
+ if !query.Stdout {
+ continue
+ }
+ header[0] = 1
+ case "stderr":
+ if !query.Stderr {
+ continue
+ }
+ header[0] = 2
+ default:
+ // Logging and moving on is the best we can do here. We may have already sent
+ // a Status and Content-Type to the client, so we can no longer report an error.
+ log.Infof("unknown Device type '%s' in log file from Container %s", line.Device, ctnr.ID())
+ continue
+ }
+
+ if query.Timestamps {
+ builder.WriteString(line.Time.Format(time.RFC3339))
+ builder.WriteRune(' ')
+ }
+ builder.WriteString(line.Msg)
+
+ // Build header and output entry
+ binary.BigEndian.PutUint32(header[4:], uint32(builder.Len()))
+ if _, err := w.Write(header[:]); err != nil {
+ log.Errorf("unable to write log output header: %q", err)
+ }
+ if _, err := fmt.Fprint(w, builder.String()); err != nil {
+ log.Errorf("unable to write builder string: %q", err)
+ }
+
+ if flusher, ok := w.(http.Flusher); ok {
+ flusher.Flush()
+ }
+ }
+ }
+}
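
LogsFromContainer writes the docker multiplexed stream: an 8-byte header whose first byte is the stream type (1 for stdout, 2 for stderr) and whose last four bytes are the big-endian payload length, followed by the payload itself. A client-side sketch of demultiplexing that framing (the sample frames are fabricated):

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"io"
)

func demuxLogs(r io.Reader) error {
	header := make([]byte, 8)
	for {
		if _, err := io.ReadFull(r, header); err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
		// Bytes 4-7 carry the length of the payload that follows the header.
		payload := make([]byte, binary.BigEndian.Uint32(header[4:8]))
		if _, err := io.ReadFull(r, payload); err != nil {
			return err
		}
		switch header[0] {
		case 1:
			fmt.Printf("stdout: %s", payload)
		case 2:
			fmt.Printf("stderr: %s", payload)
		}
	}
}

func main() {
	// Two fabricated frames: "hi\n" on stdout and "oops\n" on stderr.
	frames := []byte{
		1, 0, 0, 0, 0, 0, 0, 3, 'h', 'i', '\n',
		2, 0, 0, 0, 0, 0, 0, 5, 'o', 'o', 'p', 's', '\n',
	}
	if err := demuxLogs(bytes.NewReader(frames)); err != nil {
		panic(err)
	}
}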
diff --git a/pkg/api/handlers/generic/containers_create.go b/pkg/api/handlers/generic/containers_create.go
new file mode 100644
index 000000000..056f7e95c
--- /dev/null
+++ b/pkg/api/handlers/generic/containers_create.go
@@ -0,0 +1,243 @@
+package generic
+
+import (
+ "encoding/json"
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+ "strings"
+
+ "github.com/containers/libpod/cmd/podman/shared"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ image2 "github.com/containers/libpod/libpod/image"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/namespaces"
+ createconfig "github.com/containers/libpod/pkg/spec"
+ "github.com/containers/storage"
+ "github.com/docker/docker/pkg/signal"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+ "golang.org/x/sys/unix"
+)
+
+func CreateContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ input := handlers.CreateContainerConfig{}
+ query := struct {
+ Name string `schema:"name"`
+ }{
+ // override any golang type defaults
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+
+ newImage, err := runtime.ImageRuntime().NewFromLocal(input.Image)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "NewFromLocal()"))
+ return
+ }
+ cc, err := makeCreateConfig(input, newImage)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "makeCreatConfig()"))
+ return
+ }
+
+ cc.Name = query.Name
+ var pod *libpod.Pod
+ ctr, err := shared.CreateContainerFromCreateConfig(runtime, &cc, r.Context(), pod)
+ if err != nil {
+ if strings.Contains(err.Error(), "invalid log driver") {
+ // this does not quite work yet and needs a little more massaging
+ w.Header().Set("Content-Type", "text/plain; charset=us-ascii")
+ w.WriteHeader(http.StatusInternalServerError)
+ msg := fmt.Sprintf("logger: no log driver named '%s' is registered", input.HostConfig.LogConfig.Type)
+ if _, err := fmt.Fprintln(w, msg); err != nil {
+ log.Errorf("%s: %q", msg, err)
+ }
+ //s.WriteResponse(w, http.StatusInternalServerError, fmt.Sprintf("logger: no log driver named '%s' is registered", input.HostConfig.LogConfig.Type))
+ return
+ }
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "CreateContainerFromCreateConfig()"))
+ return
+ }
+
+ type ctrCreateResponse struct {
+ Id string `json:"Id"`
+ Warnings []string `json:"Warnings"`
+ }
+ response := ctrCreateResponse{
+ Id: ctr.ID(),
+ Warnings: []string{}}
+
+ utils.WriteResponse(w, http.StatusCreated, response)
+}
+
+func makeCreateConfig(input handlers.CreateContainerConfig, newImage *image2.Image) (createconfig.CreateConfig, error) {
+ var (
+ err error
+ init bool
+ tmpfs []string
+ volumes []string
+ )
+ env := make(map[string]string)
+ stopSignal := unix.SIGTERM
+ if len(input.StopSignal) > 0 {
+ stopSignal, err = signal.ParseSignal(input.StopSignal)
+ if err != nil {
+ return createconfig.CreateConfig{}, err
+ }
+ }
+
+ workDir := "/"
+ if len(input.WorkingDir) > 0 {
+ workDir = input.WorkingDir
+ }
+
+ stopTimeout := uint(define.CtrRemoveTimeout)
+ if input.StopTimeout != nil {
+ stopTimeout = uint(*input.StopTimeout)
+ }
+ c := createconfig.CgroupConfig{
+ Cgroups: "", // podman
+ Cgroupns: "", // podman
+ CgroupParent: "", // podman
+ CgroupMode: "", // podman
+ }
+ security := createconfig.SecurityConfig{
+ CapAdd: input.HostConfig.CapAdd,
+ CapDrop: input.HostConfig.CapDrop,
+ LabelOpts: nil, // podman
+ NoNewPrivs: false, // podman
+ ApparmorProfile: "", // podman
+ SeccompProfilePath: "",
+ SecurityOpts: input.HostConfig.SecurityOpt,
+ Privileged: input.HostConfig.Privileged,
+ ReadOnlyRootfs: input.HostConfig.ReadonlyRootfs,
+ ReadOnlyTmpfs: false, // podman-only
+ Sysctl: input.HostConfig.Sysctls,
+ }
+
+ network := createconfig.NetworkConfig{
+ DNSOpt: input.HostConfig.DNSOptions,
+ DNSSearch: input.HostConfig.DNSSearch,
+ DNSServers: input.HostConfig.DNS,
+ ExposedPorts: input.ExposedPorts,
+ HTTPProxy: false, // podman
+ IP6Address: "",
+ IPAddress: "",
+ LinkLocalIP: nil, // docker-only
+ MacAddress: input.MacAddress,
+ // NetMode: nil,
+ Network: input.HostConfig.NetworkMode.NetworkName(),
+ NetworkAlias: nil, // docker-only now
+ PortBindings: input.HostConfig.PortBindings,
+ Publish: nil, // podman
+ PublishAll: input.HostConfig.PublishAllPorts,
+ }
+
+ uts := createconfig.UtsConfig{
+ UtsMode: namespaces.UTSMode(input.HostConfig.UTSMode),
+ NoHosts: false, //podman
+ HostAdd: input.HostConfig.ExtraHosts,
+ Hostname: input.Hostname,
+ }
+
+ z := createconfig.UserConfig{
+ GroupAdd: input.HostConfig.GroupAdd,
+ IDMappings: &storage.IDMappingOptions{}, // podman //TODO <--- fix this,
+ UsernsMode: namespaces.UsernsMode(input.HostConfig.UsernsMode),
+ User: input.User,
+ }
+ pidConfig := createconfig.PidConfig{PidMode: namespaces.PidMode(input.HostConfig.PidMode)}
+ for k := range input.Volumes {
+ volumes = append(volumes, k)
+ }
+
+ // Docker is more flexible about its env input than podman, which throws
+ // away incorrectly formatted variables, so we cannot reuse podman's
+ // parsing of the env input
+ // e.g. [Foo Other=one Blank=]
+ for _, e := range input.Env {
+ splitEnv := strings.Split(e, "=")
+ switch len(splitEnv) {
+ case 0:
+ continue
+ case 1:
+ env[splitEnv[0]] = ""
+ default:
+ env[splitEnv[0]] = strings.Join(splitEnv[1:], "=")
+ }
+ }
+
+ // format the tmpfs mounts into a []string from map
+ for k, v := range input.HostConfig.Tmpfs {
+ tmpfs = append(tmpfs, fmt.Sprintf("%s:%s", k, v))
+ }
+
+ if input.HostConfig.Init != nil && *input.HostConfig.Init {
+ init = true
+ }
+
+ m := createconfig.CreateConfig{
+ Annotations: nil, // podman
+ Args: nil,
+ Cgroup: c,
+ CidFile: "",
+ ConmonPidFile: "", // podman
+ Command: input.Cmd,
+ UserCommand: input.Cmd, // podman
+ Detach: false, //
+ // Devices: input.HostConfig.Devices,
+ Entrypoint: input.Entrypoint,
+ Env: env,
+ HealthCheck: nil, //
+ Init: init,
+ InitPath: "", // tbd
+ Image: input.Image,
+ ImageID: newImage.ID(),
+ BuiltinImgVolumes: nil, // podman
+ ImageVolumeType: "", // podman
+ Interactive: false,
+ // IpcMode: input.HostConfig.IpcMode,
+ Labels: input.Labels,
+ LogDriver: input.HostConfig.LogConfig.Type, // is this correct
+ // LogDriverOpt: input.HostConfig.LogConfig.Config,
+ Name: input.Name,
+ Network: network,
+ Pod: "", // podman
+ PodmanPath: "", // podman
+ Quiet: false, // front-end only
+ Resources: createconfig.CreateResourceConfig{},
+ RestartPolicy: input.HostConfig.RestartPolicy.Name,
+ Rm: input.HostConfig.AutoRemove,
+ StopSignal: stopSignal,
+ StopTimeout: stopTimeout,
+ Systemd: false, // podman
+ Tmpfs: tmpfs,
+ User: z,
+ Uts: uts,
+ Tty: input.Tty,
+ Mounts: nil, // we populate
+ // MountsFlag: input.HostConfig.Mounts,
+ NamedVolumes: nil, // we populate
+ Volumes: volumes,
+ VolumesFrom: input.HostConfig.VolumesFrom,
+ WorkDir: workDir,
+ Rootfs: "", // podman
+ Security: security,
+ Syslog: false, // podman
+
+ Pid: pidConfig,
+ }
+ return m, nil
+}
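
From the client side, CreateContainer consumes a docker-compatible JSON body whose fields makeCreateConfig maps onto podman's CreateConfig (Image, Cmd, Env, WorkingDir, HostConfig, and so on). A rough sketch of such a request; the URL, container name, and route path are assumptions for illustration, not taken from this diff:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Minimal docker-compatible create body; CreateContainer above decodes it
	// and makeCreateConfig maps the fields onto podman's CreateConfig.
	body, _ := json.Marshal(map[string]interface{}{
		"Image":      "alpine",
		"Cmd":        []string{"echo", "hello"},
		"Env":        []string{"FOO=bar"},
		"WorkingDir": "/",
	})
	resp, err := http.Post("http://localhost:8081/v1.24/containers/create?name=demo",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, out) // expect {"Id":"...","Warnings":[]}
}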
diff --git a/pkg/api/handlers/generic/containers_stats.go b/pkg/api/handlers/generic/containers_stats.go
new file mode 100644
index 000000000..0c4efc1df
--- /dev/null
+++ b/pkg/api/handlers/generic/containers_stats.go
@@ -0,0 +1,198 @@
+package generic
+
+import (
+ "encoding/json"
+ "net/http"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/cgroups"
+ docker "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+)
+
+const DefaultStatsPeriod = 5 * time.Second
+
+func StatsContainer(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+
+ query := struct {
+ Stream bool `schema:"stream"`
+ }{
+ Stream: true,
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ name := mux.Vars(r)["name"]
+ ctnr, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+
+ state, err := ctnr.State()
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ if state != define.ContainerStateRunning && !query.Stream {
+ utils.WriteJSON(w, http.StatusOK, &handlers.Stats{StatsJSON: docker.StatsJSON{
+ Name: ctnr.Name(),
+ ID: ctnr.ID(),
+ }})
+ return
+ }
+
+ var preRead time.Time
+ var preCPUStats docker.CPUStats
+
+ stats, err := ctnr.GetContainerStats(&libpod.ContainerStats{})
+ if err != nil {
+ utils.InternalServerError(w, errors.Wrapf(err, "Failed to obtain Container %s stats", name))
+ return
+ }
+
+ if query.Stream {
+ preRead = time.Now()
+ preCPUStats = docker.CPUStats{
+ CPUUsage: docker.CPUUsage{
+ TotalUsage: stats.CPUNano,
+ PercpuUsage: []uint64{uint64(stats.CPU)},
+ UsageInKernelmode: 0,
+ UsageInUsermode: 0,
+ },
+ SystemUsage: 0,
+ OnlineCPUs: 0,
+ ThrottlingData: docker.ThrottlingData{},
+ }
+ time.Sleep(DefaultStatsPeriod)
+ }
+
+ cgroupPath, _ := ctnr.CGroupPath()
+ cgroup, _ := cgroups.Load(cgroupPath)
+
+ for ok := true; ok; ok = query.Stream {
+ state, _ := ctnr.State()
+ if state != define.ContainerStateRunning {
+ time.Sleep(10 * time.Second)
+ continue
+ }
+
+ stats, _ := ctnr.GetContainerStats(stats)
+ cgroupStat, _ := cgroup.Stat()
+ inspect, _ := ctnr.Inspect(false)
+
+ net := make(map[string]docker.NetworkStats)
+ net[inspect.NetworkSettings.EndpointID] = docker.NetworkStats{
+ RxBytes: stats.NetInput,
+ RxPackets: 0,
+ RxErrors: 0,
+ RxDropped: 0,
+ TxBytes: stats.NetOutput,
+ TxPackets: 0,
+ TxErrors: 0,
+ TxDropped: 0,
+ EndpointID: inspect.NetworkSettings.EndpointID,
+ InstanceID: "",
+ }
+
+ s := handlers.Stats{StatsJSON: docker.StatsJSON{
+ Stats: docker.Stats{
+ Read: time.Now(),
+ PreRead: preRead,
+ PidsStats: docker.PidsStats{
+ Current: cgroupStat.Pids.Current,
+ Limit: 0,
+ },
+ BlkioStats: docker.BlkioStats{
+ IoServiceBytesRecursive: toBlkioStatEntry(cgroupStat.Blkio.IoServiceBytesRecursive),
+ IoServicedRecursive: nil,
+ IoQueuedRecursive: nil,
+ IoServiceTimeRecursive: nil,
+ IoWaitTimeRecursive: nil,
+ IoMergedRecursive: nil,
+ IoTimeRecursive: nil,
+ SectorsRecursive: nil,
+ },
+ NumProcs: 0,
+ StorageStats: docker.StorageStats{
+ ReadCountNormalized: 0,
+ ReadSizeBytes: 0,
+ WriteCountNormalized: 0,
+ WriteSizeBytes: 0,
+ },
+ CPUStats: docker.CPUStats{
+ CPUUsage: docker.CPUUsage{
+ TotalUsage: cgroupStat.CPU.Usage.Total,
+ PercpuUsage: []uint64{uint64(stats.CPU)},
+ UsageInKernelmode: cgroupStat.CPU.Usage.Kernel,
+ UsageInUsermode: cgroupStat.CPU.Usage.Total - cgroupStat.CPU.Usage.Kernel,
+ },
+ SystemUsage: 0,
+ OnlineCPUs: uint32(len(cgroupStat.CPU.Usage.PerCPU)),
+ ThrottlingData: docker.ThrottlingData{
+ Periods: 0,
+ ThrottledPeriods: 0,
+ ThrottledTime: 0,
+ },
+ },
+ PreCPUStats: preCPUStats,
+ MemoryStats: docker.MemoryStats{
+ Usage: cgroupStat.Memory.Usage.Usage,
+ MaxUsage: cgroupStat.Memory.Usage.Limit,
+ Stats: nil,
+ Failcnt: 0,
+ Limit: cgroupStat.Memory.Usage.Limit,
+ Commit: 0,
+ CommitPeak: 0,
+ PrivateWorkingSet: 0,
+ },
+ },
+ Name: stats.Name,
+ ID: stats.ContainerID,
+ Networks: net,
+ }}
+
+ utils.WriteJSON(w, http.StatusOK, s)
+ if flusher, ok := w.(http.Flusher); ok {
+ flusher.Flush()
+ }
+
+ preRead = s.Read
+ bits, err := json.Marshal(s.CPUStats)
+ if err != nil {
+ logrus.Errorf("unable to marshal cpu stats: %q", err)
+ }
+ if err := json.Unmarshal(bits, &preCPUStats); err != nil {
+ logrus.Errorf("unable to unmarshal previous stats: %q", err)
+ }
+ time.Sleep(DefaultStatsPeriod)
+ }
+}
+
+func toBlkioStatEntry(entries []cgroups.BlkIOEntry) []docker.BlkioStatEntry {
+ results := make([]docker.BlkioStatEntry, len(entries))
+ for i, e := range entries {
+ bits, err := json.Marshal(e)
+ if err != nil {
+ logrus.Errorf("unable to marshal blkio stats: %q", err)
+ }
+ if err := json.Unmarshal(bits, &results[i]); err != nil {
+ logrus.Errorf("unable to unmarshal blkio stats: %q", err)
+ }
+ }
+ return results
+}
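
StatsContainer streams one docker-style stats document per DefaultStatsPeriod while stream=true (the default), so a client can read them with a json.Decoder over the response body. A sketch under the same assumptions as the earlier examples (address and container name are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8081/v1.24/containers/demo/stats?stream=true")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Each iteration decodes one stats document as the server emits it.
	dec := json.NewDecoder(resp.Body)
	for {
		var sample map[string]interface{}
		if err := dec.Decode(&sample); err != nil {
			break // EOF or a malformed frame ends the stream
		}
		fmt.Println("read:", sample["read"], "id:", sample["id"])
	}
}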
diff --git a/pkg/api/handlers/generic/images.go b/pkg/api/handlers/generic/images.go
new file mode 100644
index 000000000..8029ee861
--- /dev/null
+++ b/pkg/api/handlers/generic/images.go
@@ -0,0 +1,363 @@
+package generic
+
+import (
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "strconv"
+ "strings"
+
+ "github.com/containers/buildah"
+ "github.com/containers/image/v5/manifest"
+ "github.com/containers/libpod/libpod"
+ image2 "github.com/containers/libpod/libpod/image"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/libpod/pkg/util"
+ "github.com/containers/storage"
+ "github.com/docker/docker/api/types"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ "github.com/sirupsen/logrus"
+)
+
+func ExportImage(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 server
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ name := mux.Vars(r)["name"]
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.ImageNotFound(w, name, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ tmpfile, err := ioutil.TempFile("", "api.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to create tempfile"))
+ return
+ }
+ if err := tmpfile.Close(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to close tempfile"))
+ return
+ }
+ if err := newImage.Save(r.Context(), name, "docker-archive", tmpfile.Name(), []string{}, false, false); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to save image"))
+ return
+ }
+ rdr, err := os.Open(tmpfile.Name())
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to read the exported tarfile"))
+ return
+ }
+ defer rdr.Close()
+ defer os.Remove(tmpfile.Name())
+ utils.WriteResponse(w, http.StatusOK, rdr)
+}
+
+func PruneImages(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 500 internal
+ var (
+ dangling bool = true
+ err error
+ )
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ filters map[string]string
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ // FIXME This is likely wrong due to it not being a map[string][]string
+
+ // until ts is not supported on podman prune
+ if len(query.filters["until"]) > 0 {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "until is not supported yet"))
+ return
+ }
+ // labels are not supported on podman prune
+ if len(query.filters["label"]) > 0 {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "labelis not supported yet"))
+ return
+ }
+
+ if len(query.filters["dangling"]) > 0 {
+ dangling, err = strconv.ParseBool(query.filters["dangling"])
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "processing dangling filter"))
+ return
+ }
+ }
+ idr := []types.ImageDeleteResponseItem{}
+ //
+ // This code needs to be migrated to libpod to work correctly. I could not
+ // work my way around the information docker needs with the existing prune in libpod.
+ //
+ pruneImages, err := runtime.ImageRuntime().GetPruneImages(!dangling, []image2.ImageFilter{})
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to get images to prune"))
+ return
+ }
+ for _, p := range pruneImages {
+ repotags, err := p.RepoTags()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to get repotags for image"))
+ return
+ }
+ if err := p.Remove(r.Context(), true); err != nil {
+ if errors.Cause(err) == storage.ErrImageUsedByContainer {
+ logrus.Warnf("Failed to prune image %s as it is in use: %v", p.ID(), err)
+ continue
+ }
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to prune image"))
+ return
+ }
+ // newImageEvent is not exported, therefore we cannot record the event. this will be fixed
+ // when the prune is fixed in libpod
+ // defer p.newImageEvent(events.Prune)
+ response := types.ImageDeleteResponseItem{
+ Deleted: fmt.Sprintf("sha256:%s", p.ID()), // I ack this is not ideal
+ }
+ if len(repotags) > 0 {
+ response.Untagged = repotags[0]
+ }
+ idr = append(idr, response)
+ }
+ ipr := types.ImagesPruneReport{
+ ImagesDeleted: idr,
+ SpaceReclaimed: 1, // TODO we cannot supply this right now
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.ImagesPruneReport{ImagesPruneReport: ipr})
+}
+
+func CommitContainer(w http.ResponseWriter, r *http.Request) {
+ var (
+ destImage string
+ )
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ Author string `schema:"author"`
+ Changes string `schema:"changes"`
+ Comment string `schema:"comment"`
+ Container string `schema:"container"`
+ // FromSrc string `schema:"fromSrc"` // fromSrc is currently unused
+ Pause bool `schema:"pause"`
+ Repo string `schema:"repo"`
+ Tag string `schema:"tag"`
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ rtc, err := runtime.GetConfig()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+ sc := image2.GetSystemContext(rtc.SignaturePolicyPath, "", false)
+ tag := "latest"
+ options := libpod.ContainerCommitOptions{
+ Pause: true,
+ }
+ options.CommitOptions = buildah.CommitOptions{
+ SignaturePolicyPath: rtc.SignaturePolicyPath,
+ ReportWriter: os.Stderr,
+ SystemContext: sc,
+ PreferredManifestType: manifest.DockerV2Schema2MediaType,
+ }
+
+ input := handlers.CreateContainerConfig{}
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+
+ if len(query.Tag) > 0 {
+ tag = query.Tag
+ }
+ options.Message = query.Comment
+ options.Author = query.Author
+ options.Pause = query.Pause
+ options.Changes = strings.Fields(query.Changes)
+ ctr, err := runtime.LookupContainer(query.Container)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, err)
+ return
+ }
+
+ // I know mitr hates this ... but doing for now
+ if len(query.Repo) > 1 {
+ destImage = fmt.Sprintf("%s:%s", query.Repo, tag)
+ }
+
+ commitImage, err := ctr.Commit(r.Context(), destImage, options)
+ if err != nil && !strings.Contains(err.Error(), "is not running") {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "CommitFailure"))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, handlers.IDResponse{ID: commitImage.ID()}) // nolint
+}
+
+func CreateImageFromSrc(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 repo does not exist or no read access
+ // 500 internal
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ FromSrc string `schema:"fromSrc"`
+ Changes []string `schema:"changes"`
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ // fromSrc – Source to import. The value may be a URL from which the image can be retrieved or - to read the image from the request body. This parameter may only be used when importing an image.
+ source := query.FromSrc
+ if source == "-" {
+ f, err := ioutil.TempFile("", "api_load.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to create tempfile"))
+ return
+ }
+ source = f.Name()
+ if err := handlers.SaveFromBody(f, r); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to write temporary file"))
+ }
+ }
+ iid, err := runtime.Import(r.Context(), source, "", query.Changes, "", false)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to import tarball"))
+ return
+ }
+ tmpfile, err := ioutil.TempFile("", "fromsrc.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to create tempfile"))
+ return
+ }
+ if err := tmpfile.Close(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to close tempfile"))
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusOK, struct {
+ Status string `json:"status"`
+ Progress string `json:"progress"`
+ ProgressDetail map[string]string `json:"progressDetail"`
+ Id string `json:"id"`
+ }{
+ Status: iid,
+ ProgressDetail: map[string]string{},
+ Id: iid,
+ })
+
+}
+
+func CreateImageFromImage(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 repo does not exist or no read access
+ // 500 internal
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ FromImage string `schema:"fromImage"`
+ Tag string `schema:"tag"`
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ /*
+ fromImage – Name of the image to pull. The name may include a tag or digest. This parameter may only be used when pulling an image. The pull is cancelled if the HTTP connection is closed.
+ repo – Repository name given to an image when it is imported. The repo may include a tag. This parameter may only be used when importing an image.
+ tag – Tag or digest. If empty when pulling an image, this causes all tags for the given image to be pulled.
+ */
+ fromImage := query.FromImage
+ if len(query.Tag) > 0 {
+ fromImage = fmt.Sprintf("%s:%s", fromImage, query.Tag)
+ }
+
+ // TODO
+ // We are eating the output right now because we haven't talked about how to deal with multiple responses yet
+ img, err := runtime.ImageRuntime().New(r.Context(), fromImage, "", "", nil, &image2.DockerRegistryOptions{}, image2.SigningOptions{}, nil, util.PullImageMissing)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+
+ // Success
+ utils.WriteResponse(w, http.StatusOK, struct {
+ Status string `json:"status"`
+ Error string `json:"error"`
+ Progress string `json:"progress"`
+ ProgressDetail map[string]string `json:"progressDetail"`
+ Id string `json:"id"`
+ }{
+ Status: fmt.Sprintf("pulling image (%s) from %s", img.Tag, strings.Join(img.Names(), ", ")),
+ ProgressDetail: map[string]string{},
+ Id: img.ID(),
+ })
+}
+
+func GetImage(w http.ResponseWriter, r *http.Request) {
+ // 200 no error
+ // 404 no such
+ // 500 internal
+ name := mux.Vars(r)["name"]
+ newImage, err := handlers.GetImage(r, name)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ inspect, err := handlers.ImageDataToImageInspect(r.Context(), newImage)
+ if err != nil {
+ utils.Error(w, "Server error", http.StatusInternalServerError, errors.Wrapf(err, "Failed to convert ImageData to ImageInspect '%s'", inspect.ID))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, inspect)
+}
+
+func GetImages(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ images, err := utils.GetImages(w, r)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed get images"))
+ return
+ }
+ var summaries = make([]*handlers.ImageSummary, len(images))
+ for j, img := range images {
+ is, err := handlers.ImageToImageSummary(img)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed transform image summaries"))
+ return
+ }
+ summaries[j] = is
+ }
+ utils.WriteResponse(w, http.StatusOK, summaries)
+}
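
CreateImageFromImage backs the docker-style image pull; the small status JSON it writes is shown in the handler above. A client sketch; the route shown here is an assumption (route registration lives in api/server, not in this diff):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// Pull alpine:latest through the compat API and print the handler's
	// status/progress JSON response.
	q := url.Values{"fromImage": {"alpine"}, "tag": {"latest"}}
	resp, err := http.Post("http://localhost:8081/v1.24/images/create?"+q.Encode(),
		"application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, out)
}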
diff --git a/pkg/api/handlers/generic/info.go b/pkg/api/handlers/generic/info.go
new file mode 100644
index 000000000..2bef8db4f
--- /dev/null
+++ b/pkg/api/handlers/generic/info.go
@@ -0,0 +1,196 @@
+package generic
+
+import (
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "io/ioutil"
+ "net/http"
+ "os"
+ goRuntime "runtime"
+ "strings"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/config"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/rootless"
+ "github.com/containers/libpod/pkg/sysinfo"
+ docker "github.com/docker/docker/api/types"
+ "github.com/docker/docker/api/types/swarm"
+ "github.com/google/uuid"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+func GetInfo(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain system memory info"))
+ return
+ }
+ hostInfo := infoData[0].Data
+ storeInfo := infoData[1].Data
+
+ configInfo, err := runtime.GetConfig()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain runtime config"))
+ return
+ }
+ versionInfo, err := define.GetVersion()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain podman versions"))
+ return
+ }
+ stateInfo := getContainersState(runtime)
+ sysInfo := sysinfo.New(true)
+
+ // FIXME: Need to expose if runtime supports Checkpoint'ing
+ // liveRestoreEnabled := criu.CheckForCriu() && configInfo.RuntimeSupportsCheckpoint()
+
+ info := &handlers.Info{Info: docker.Info{
+ Architecture: goRuntime.GOARCH,
+ BridgeNfIP6tables: !sysInfo.BridgeNFCallIP6TablesDisabled,
+ BridgeNfIptables: !sysInfo.BridgeNFCallIPTablesDisabled,
+ CPUCfsPeriod: sysInfo.CPUCfsPeriod,
+ CPUCfsQuota: sysInfo.CPUCfsQuota,
+ CPUSet: sysInfo.Cpuset,
+ CPUShares: sysInfo.CPUShares,
+ CgroupDriver: configInfo.CgroupManager,
+ ClusterAdvertise: "",
+ ClusterStore: "",
+ ContainerdCommit: docker.Commit{},
+ Containers: storeInfo["ContainerStore"].(map[string]interface{})["number"].(int),
+ ContainersPaused: stateInfo[define.ContainerStatePaused],
+ ContainersRunning: stateInfo[define.ContainerStateRunning],
+ ContainersStopped: stateInfo[define.ContainerStateStopped] + stateInfo[define.ContainerStateExited],
+ Debug: log.IsLevelEnabled(log.DebugLevel),
+ DefaultRuntime: configInfo.OCIRuntime,
+ DockerRootDir: storeInfo["GraphRoot"].(string),
+ Driver: storeInfo["GraphDriverName"].(string),
+ DriverStatus: getGraphStatus(storeInfo),
+ ExperimentalBuild: true,
+ GenericResources: nil,
+ HTTPProxy: getEnv("http_proxy"),
+ HTTPSProxy: getEnv("https_proxy"),
+ ID: uuid.New().String(),
+ IPv4Forwarding: !sysInfo.IPv4ForwardingDisabled,
+ Images: storeInfo["ImageStore"].(map[string]interface{})["number"].(int),
+ IndexServerAddress: "",
+ InitBinary: "",
+ InitCommit: docker.Commit{},
+ Isolation: "",
+ KernelMemory: sysInfo.KernelMemory,
+ KernelMemoryTCP: false,
+ KernelVersion: hostInfo["kernel"].(string),
+ Labels: nil,
+ LiveRestoreEnabled: false,
+ LoggingDriver: "",
+ MemTotal: hostInfo["MemTotal"].(int64),
+ MemoryLimit: sysInfo.MemoryLimit,
+ NCPU: goRuntime.NumCPU(),
+ NEventsListener: 0,
+ NFd: getFdCount(),
+ NGoroutines: goRuntime.NumGoroutine(),
+ Name: hostInfo["hostname"].(string),
+ NoProxy: getEnv("no_proxy"),
+ OSType: goRuntime.GOOS,
+ OSVersion: hostInfo["Distribution"].(map[string]interface{})["version"].(string),
+ OomKillDisable: sysInfo.OomKillDisable,
+ OperatingSystem: hostInfo["Distribution"].(map[string]interface{})["distribution"].(string),
+ PidsLimit: sysInfo.PidsLimit,
+ Plugins: docker.PluginsInfo{},
+ ProductLicense: "Apache-2.0",
+ RegistryConfig: nil,
+ RuncCommit: docker.Commit{},
+ Runtimes: getRuntimes(configInfo),
+ SecurityOptions: getSecOpts(sysInfo),
+ ServerVersion: versionInfo.Version,
+ SwapLimit: sysInfo.SwapLimit,
+ Swarm: swarm.Info{
+ LocalNodeState: swarm.LocalNodeStateInactive,
+ },
+ SystemStatus: nil,
+ SystemTime: time.Now().Format(time.RFC3339Nano),
+ Warnings: []string{},
+ },
+ BuildahVersion: hostInfo["BuildahVersion"].(string),
+ CPURealtimePeriod: sysInfo.CPURealtimePeriod,
+ CPURealtimeRuntime: sysInfo.CPURealtimeRuntime,
+ CgroupVersion: hostInfo["CgroupVersion"].(string),
+ Rootless: rootless.IsRootless(),
+ SwapFree: hostInfo["SwapFree"].(int64),
+ SwapTotal: hostInfo["SwapTotal"].(int64),
+ Uptime: hostInfo["uptime"].(string),
+ }
+ utils.WriteResponse(w, http.StatusOK, info)
+}
+
+func getGraphStatus(storeInfo map[string]interface{}) [][2]string {
+ var graphStatus [][2]string
+ for k, v := range storeInfo["GraphStatus"].(map[string]string) {
+ graphStatus = append(graphStatus, [2]string{k, v})
+ }
+ return graphStatus
+}
+
+func getSecOpts(sysInfo *sysinfo.SysInfo) []string {
+ var secOpts []string
+ if sysInfo.AppArmor {
+ secOpts = append(secOpts, "name=apparmor")
+ }
+ if sysInfo.Seccomp {
+ // FIXME: get profile name...
+ secOpts = append(secOpts, fmt.Sprintf("name=seccomp,profile=%s", "default"))
+ }
+ return secOpts
+}
+
+func getRuntimes(configInfo *config.Config) map[string]docker.Runtime {
+ var runtimes = map[string]docker.Runtime{}
+ for name, paths := range configInfo.OCIRuntimes {
+ runtimes[name] = docker.Runtime{
+ Path: paths[0],
+ Args: nil,
+ }
+ }
+ return runtimes
+}
+
+func getFdCount() (count int) {
+ count = -1
+ if entries, err := ioutil.ReadDir("/proc/self/fd"); err == nil {
+ count = len(entries)
+ }
+ return
+}
+
+// Just ignoring Container errors here...
+func getContainersState(r *libpod.Runtime) map[define.ContainerStatus]int {
+ var states = map[define.ContainerStatus]int{}
+ ctnrs, err := r.GetAllContainers()
+ if err == nil {
+ for _, ctnr := range ctnrs {
+ state, err := ctnr.State()
+ if err != nil {
+ continue
+ }
+ states[state] += 1
+ }
+ }
+ return states
+}
+
+func getEnv(value string) string {
+ if v, exists := os.LookupEnv(strings.ToUpper(value)); exists {
+ return v
+ }
+ if v, exists := os.LookupEnv(strings.ToLower(value)); exists {
+ return v
+ }
+ return ""
+}
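+
+// A sketch of how this handler could be exercised once the routes are wired up.
+// The /info path, host, and API prefix below are assumptions based on the Docker
+// endpoint this handler mirrors, not something defined in this file:
+//
+//   curl http://localhost:8080/v1.24/info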
diff --git a/pkg/api/handlers/generic/ping.go b/pkg/api/handlers/generic/ping.go
new file mode 100644
index 000000000..44a67d53f
--- /dev/null
+++ b/pkg/api/handlers/generic/ping.go
@@ -0,0 +1,25 @@
+package generic
+
+import (
+ "fmt"
+ "net/http"
+)
+
+func PingGET(w http.ResponseWriter, _ *http.Request) {
+ setHeaders(w)
+ fmt.Fprintln(w, "OK")
+}
+
+func PingHEAD(w http.ResponseWriter, _ *http.Request) {
+ setHeaders(w)
+ fmt.Fprintln(w, "")
+}
+
+func setHeaders(w http.ResponseWriter) {
+ w.Header().Set("API-Version", DefaultApiVersion)
+ w.Header().Set("BuildKit-Version", "")
+ w.Header().Set("Docker-Experimental", "true")
+ w.Header().Set("Cache-Control", "no-cache")
+ w.Header().Set("Pragma", "no-cache")
+ w.WriteHeader(http.StatusOK)
+}
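+
+// A sketch of hitting the ping handlers; the /_ping path and host are assumptions
+// borrowed from the Docker API this package mimics, not defined here:
+//
+//   curl -i http://localhost:8080/_ping
+//
+// PingGET answers 200 with "OK" in the body; PingHEAD answers 200 with the same
+// headers and an empty body.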
diff --git a/pkg/api/handlers/generic/system.go b/pkg/api/handlers/generic/system.go
new file mode 100644
index 000000000..254990b95
--- /dev/null
+++ b/pkg/api/handlers/generic/system.go
@@ -0,0 +1,18 @@
+package generic
+
+import (
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+
+ docker "github.com/docker/docker/api/types"
+)
+
+func GetDiskUsage(w http.ResponseWriter, r *http.Request) {
+ utils.WriteResponse(w, http.StatusOK, handlers.DiskUsage{DiskUsage: docker.DiskUsage{
+ LayersSize: 0,
+ Images: nil,
+ Containers: nil,
+ Volumes: nil,
+ }})
+}
diff --git a/pkg/api/handlers/generic/version.go b/pkg/api/handlers/generic/version.go
new file mode 100644
index 000000000..2c2283d10
--- /dev/null
+++ b/pkg/api/handlers/generic/version.go
@@ -0,0 +1,74 @@
+package generic
+
+import (
+ "fmt"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "net/http"
+ goRuntime "runtime"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ docker "github.com/docker/docker/api/types"
+ "github.com/pkg/errors"
+)
+
+const (
+ DefaultApiVersion = "1.40" // See https://docs.docker.com/engine/api/v1.40/
+ MinimalApiVersion = "1.24"
+)
+
+func VersionHandler(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ versionInfo, err := define.GetVersion()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+
+ infoData, err := runtime.Info()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to obtain system memory info"))
+ return
+ }
+ hostInfo := infoData[0].Data
+
+ components := []docker.ComponentVersion{{
+ Name: "Podman Engine",
+ Version: versionInfo.Version,
+ Details: map[string]string{
+ "APIVersion": DefaultApiVersion,
+ "Arch": goRuntime.GOARCH,
+ "BuildTime": time.Unix(versionInfo.Built, 0).Format(time.RFC3339),
+ "Experimental": "true",
+ "GitCommit": versionInfo.GitCommit,
+ "GoVersion": versionInfo.GoVersion,
+ "KernelVersion": hostInfo["kernel"].(string),
+ "MinAPIVersion": MinimalApiVersion,
+ "Os": goRuntime.GOOS,
+ },
+ }}
+
+ utils.WriteResponse(w, http.StatusOK, handlers.Version{Version: docker.Version{
+ Platform: struct {
+ Name string
+ }{
+ Name: fmt.Sprintf("%s/%s/%s", goRuntime.GOOS, goRuntime.GOARCH, hostInfo["Distribution"].(map[string]interface{})["distribution"].(string)),
+ },
+ APIVersion: components[0].Details["APIVersion"],
+ Arch: components[0].Details["Arch"],
+ BuildTime: components[0].Details["BuildTime"],
+ Components: components,
+ Experimental: true,
+ GitCommit: components[0].Details["GitCommit"],
+ GoVersion: components[0].Details["GoVersion"],
+ KernelVersion: components[0].Details["KernelVersion"],
+ MinAPIVersion: components[0].Details["MinAPIVersion"],
+ Os: components[0].Details["Os"],
+ Version: components[0].Version,
+ }})
+}
diff --git a/pkg/api/handlers/handler.go b/pkg/api/handlers/handler.go
new file mode 100644
index 000000000..1ea7dc60a
--- /dev/null
+++ b/pkg/api/handlers/handler.go
@@ -0,0 +1,45 @@
+package handlers
+
+import (
+ "github.com/containers/libpod/libpod"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ "net/http"
+)
+
+// Convenience routines to reduce boilerplate in handlers
+
+func getVar(r *http.Request, k string) string {
+ return mux.Vars(r)[k]
+}
+
+func hasVar(r *http.Request, k string) bool {
+ _, found := mux.Vars(r)[k]
+ return found
+}
+func getName(r *http.Request) string {
+ return getVar(r, "name")
+}
+
+func decodeQuery(r *http.Request, i interface{}) error {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+
+ if err := decoder.Decode(i, r.URL.Query()); err != nil {
+ return errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String())
+ }
+ return nil
+}
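+
+// A sketch of the pattern handlers use with decodeQuery; the struct and the
+// "force" parameter are illustrative only, not tied to any specific route:
+//
+//   query := struct {
+//           Force bool `schema:"force"`
+//   }{}
+//   if err := decodeQuery(r, &query); err != nil {
+//           // reply with http.StatusBadRequest
+//   }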
+
+func getRuntime(r *http.Request) *libpod.Runtime {
+ return r.Context().Value("runtime").(*libpod.Runtime)
+}
+
+func getHeader(r *http.Request, k string) string {
+ return r.Header.Get(k)
+}
+
+func hasHeader(r *http.Request, k string) bool {
+ _, found := r.Header[k]
+ return found
+}
diff --git a/pkg/api/handlers/images.go b/pkg/api/handlers/images.go
new file mode 100644
index 000000000..d4cddbfb2
--- /dev/null
+++ b/pkg/api/handlers/images.go
@@ -0,0 +1,185 @@
+package handlers
+
+import (
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "strconv"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/image"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+func HistoryImage(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ var allHistory []HistoryResponse
+
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+
+ }
+ history, err := newImage.History(r.Context())
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ for _, h := range history {
+ l := HistoryResponse{
+ ID: h.ID,
+ Created: h.Created.UnixNano(),
+ CreatedBy: h.CreatedBy,
+ Tags: h.Tags,
+ Size: h.Size,
+ Comment: h.Comment,
+ }
+ allHistory = append(allHistory, l)
+ }
+ utils.WriteResponse(w, http.StatusOK, allHistory)
+}
+
+func TagImage(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ // /v1.xx/images/(name)/tag
+ name := mux.Vars(r)["name"]
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.ImageNotFound(w, name, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ tag := "latest"
+ if len(r.Form.Get("tag")) > 0 {
+ tag = r.Form.Get("tag")
+ }
+ if len(r.Form.Get("repo")) < 1 {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.New("repo parameter is required to tag an image"))
+ return
+ }
+ repo := r.Form.Get("repo")
+ tagName := fmt.Sprintf("%s:%s", repo, tag)
+ if err := newImage.TagImage(tagName); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusCreated, "")
+}
+
+func RemoveImage(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ name := mux.Vars(r)["name"]
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.ImageNotFound(w, name, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+
+ force := false
+ if len(r.Form.Get("force")) > 0 {
+ force, err = strconv.ParseBool(r.Form.Get("force"))
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, err)
+ return
+ }
+ }
+ _, err = runtime.RemoveImage(r.Context(), newImage, force)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ // TODO
+ // This will need to be fixed for proper response, like Deleted: and Untagged:
+ m := make(map[string]string)
+ m["Deleted"] = newImage.ID()
+ response := []map[string]string{}
+ response = append(response, m)
+ utils.WriteResponse(w, http.StatusOK, response)
+}
+
+func GetImage(r *http.Request, name string) (*image.Image, error) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ return runtime.ImageRuntime().NewFromLocal(name)
+}
+
+func LoadImage(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+
+ query := struct {
+ //quiet bool # quiet is currently unused
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ var (
+ err error
+ writer io.Writer
+ )
+ f, err := ioutil.TempFile("", "api_load.tar")
+ if err != nil {
+  utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to create tempfile"))
+  return
+ }
+ defer os.Remove(f.Name())
+ if err := SaveFromBody(f, r); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to write temporary file"))
+ return
+ }
+ id, err := runtime.LoadImage(r.Context(), "", f.Name(), writer, "")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to load image"))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, struct {
+ Stream string `json:"stream"`
+ }{
+ Stream: fmt.Sprintf("Loaded image: %s\n", id),
+ })
+}
+
+func SaveFromBody(f *os.File, r *http.Request) error { // nolint
+ if _, err := io.Copy(f, r.Body); err != nil {
+ return err
+ }
+ return f.Close()
+}
+
+func SearchImages(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Term string `json:"term"`
+ Limit int `json:"limit"`
+ Filters map[string][]string `json:"filters"`
+ }{
+ // This is where you can override the golang default value for one of fields
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ // TODO filters are a bit undefined here in terms of what exactly the input looks
+ // like. We need to understand that a bit more.
+ options := image.SearchOptions{
+ Filter: image.SearchFilter{},
+ Limit: query.Limit,
+ }
+ results, err := image.SearchImages(query.Term, options)
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, results)
+}
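+
+// A sketch of querying SearchImages; the /images/search path, host, and API prefix
+// are assumptions based on the Docker endpoint this handler mirrors:
+//
+//   curl 'http://localhost:8080/v1.24/images/search?term=alpine&limit=5'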
diff --git a/pkg/api/handlers/images_build.go b/pkg/api/handlers/images_build.go
new file mode 100644
index 000000000..0ea480315
--- /dev/null
+++ b/pkg/api/handlers/images_build.go
@@ -0,0 +1,239 @@
+package handlers
+
+import (
+ "encoding/base64"
+ "encoding/json"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "net/http"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/containers/buildah"
+ "github.com/containers/buildah/imagebuildah"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/containers/storage/pkg/archive"
+ log "github.com/sirupsen/logrus"
+)
+
+func BuildImage(w http.ResponseWriter, r *http.Request) {
+ // contentType := r.Header.Get("Content-Type")
+ // if contentType != "application/x-tar" {
+ // Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest, errors.New("/build expects Content-Type of 'application/x-tar'"))
+ // return
+ // }
+
+ authConfigs := map[string]AuthConfig{}
+ if hasHeader(r, "X-Registry-Config") {
+ registryHeader := getHeader(r, "X-Registry-Config")
+ authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(registryHeader))
+  if err := json.NewDecoder(authConfigsJSON).Decode(&authConfigs); err != nil {
+   utils.BadRequest(w, "X-Registry-Config", registryHeader, err)
+ return
+ }
+ }
+
+ anchorDir, err := extractTarFile(r, w)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ // defer os.RemoveAll(anchorDir)
+
+ query := struct {
+ Dockerfile string `json:"dockerfile"`
+ Tag string `json:"t"`
+ ExtraHosts string `json:"extrahosts"`
+ Remote string `json:"remote"`
+ Quiet bool `json:"q"`
+ NoCache bool `json:"nocache"`
+ CacheFrom string `json:"cachefrom"`
+ Pull string `json:"pull"`
+ Rm bool `json:"rm"`
+ ForceRm bool `json:"forcerm"`
+ Memory int `json:"memory"`
+ MemSwap int `json:"memswap"`
+ CpuShares int `json:"cpushares"`
+ CpuSetCpus string `json:"cpusetcpus"`
+ CpuPeriod int `json:"cpuperiod"`
+ CpuQuota int `json:"cpuquota"`
+ BuildArgs string `json:"buildargs"`
+ ShmSize int `json:"shmsize"`
+ Squash bool `json:"squash"`
+ Labels string `json:"labels"`
+ NetworkMode string `json:"networkmode"`
+ Platform string `json:"platform"`
+ Target string `json:"target"`
+ Outputs string `json:"outputs"`
+ }{
+ Dockerfile: "Dockerfile",
+ Tag: "",
+ ExtraHosts: "",
+ Remote: "",
+ Quiet: false,
+ NoCache: false,
+ CacheFrom: "",
+ Pull: "",
+ Rm: true,
+ ForceRm: false,
+ Memory: 0,
+ MemSwap: 0,
+ CpuShares: 0,
+ CpuSetCpus: "",
+ CpuPeriod: 0,
+ CpuQuota: 0,
+ BuildArgs: "",
+ ShmSize: 64 * 1024 * 1024,
+ Squash: false,
+ Labels: "",
+ NetworkMode: "",
+ Platform: "",
+ Target: "",
+ Outputs: "",
+ }
+
+ if err := decodeQuery(r, &query); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest, err)
+ return
+ }
+
+ // Tag is the name with optional tag...
+ var name = query.Tag
+ var tag string
+ if strings.Contains(query.Tag, ":") {
+ tokens := strings.SplitN(query.Tag, ":", 2)
+ name = tokens[0]
+ tag = tokens[1]
+ }
+
+ var buildArgs = map[string]string{}
+ if found := hasVar(r, "buildargs"); found {
+ if err := json.Unmarshal([]byte(query.BuildArgs), &buildArgs); err != nil {
+ utils.BadRequest(w, "buildargs", query.BuildArgs, err)
+ return
+ }
+ }
+
+ // convert label formats
+ var labels = []string{}
+ if hasVar(r, "labels") {
+ var m = map[string]string{}
+ if err := json.Unmarshal([]byte(query.Labels), &m); err != nil {
+ utils.BadRequest(w, "labels", query.Labels, err)
+ return
+ }
+
+ for k, v := range m {
+ labels = append(labels, fmt.Sprintf("%s=%v", k, v))
+ }
+ }
+
+ buildOptions := imagebuildah.BuildOptions{
+ ContextDirectory: filepath.Join(anchorDir, "build"),
+ PullPolicy: 0,
+ Registry: "",
+ IgnoreUnrecognizedInstructions: false,
+ Quiet: query.Quiet,
+ Isolation: 0,
+ Runtime: "",
+ RuntimeArgs: nil,
+ TransientMounts: nil,
+ Compression: 0,
+ Args: buildArgs,
+ Output: name,
+ AdditionalTags: []string{tag},
+ Log: nil,
+ In: nil,
+ Out: nil,
+ Err: nil,
+ SignaturePolicyPath: "",
+ ReportWriter: nil,
+ OutputFormat: "",
+ SystemContext: nil,
+ NamespaceOptions: nil,
+ ConfigureNetwork: 0,
+ CNIPluginPath: "",
+ CNIConfigDir: "",
+ IDMappingOptions: nil,
+ AddCapabilities: nil,
+ DropCapabilities: nil,
+ CommonBuildOpts: &buildah.CommonBuildOptions{},
+ DefaultMountsFilePath: "",
+ IIDFile: "",
+ Squash: query.Squash,
+ Labels: labels,
+ Annotations: nil,
+ OnBuild: nil,
+ Layers: false,
+ NoCache: query.NoCache,
+ RemoveIntermediateCtrs: query.Rm,
+ ForceRmIntermediateCtrs: query.ForceRm,
+ BlobDirectory: "",
+ Target: query.Target,
+ Devices: nil,
+ }
+
+ id, _, err := getRuntime(r).Build(r.Context(), buildOptions, query.Dockerfile)
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+
+ // Find image ID that was built...
+ utils.WriteResponse(w, http.StatusOK,
+ struct {
+ Stream string `json:"stream"`
+ }{
+ Stream: fmt.Sprintf("Successfully built %s\n", id),
+ })
+}
+
+func extractTarFile(r *http.Request, w http.ResponseWriter) (string, error) {
+ var (
+ // length int64
+ // n int64
+ copyErr error
+ )
+
+ // build a home for the request body
+ anchorDir, err := ioutil.TempDir("", "libpod_builder")
+ if err != nil {
+ return "", err
+ }
+ buildDir := filepath.Join(anchorDir, "build")
+
+ path := filepath.Join(anchorDir, "tarBall")
+ tarBall, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
+ if err != nil {
+ return "", err
+ }
+ defer tarBall.Close()
+
+ // if hasHeader(r, "Content-Length") {
+ // length, err := strconv.ParseInt(getHeader(r, "Content-Length"), 10, 64)
+ // if err != nil {
+ // return "", errors.New(fmt.Sprintf("Failed request: unable to parse Content-Length of '%s'", getHeader(r, "Content-Length")))
+ // }
+ // n, copyErr = io.CopyN(tarBall, r.Body, length+1)
+ // } else {
+ _, copyErr = io.Copy(tarBall, r.Body)
+ // }
+ r.Body.Close()
+
+ if copyErr != nil {
+  utils.InternalServerError(w,
+   fmt.Errorf("failed request: unable to copy tar file from request body %s", r.RequestURI))
+  return "", copyErr
+ }
+ log.Debugf("Content-Length: %s", getVar(r, "Content-Length"))
+
+ // if hasHeader(r, "Content-Length") && n != length {
+ // return "", errors.New(fmt.Sprintf("Failed request: Given Content-Length does not match file size %d != %d", n, length))
+ // }
+
+ _, _ = tarBall.Seek(0, 0)
+ if err := archive.Untar(tarBall, buildDir, &archive.TarOptions{}); err != nil {
+ return "", err
+ }
+ return anchorDir, nil
+}
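+
+// A sketch of driving BuildImage; the /build path, host, and API prefix are
+// assumptions based on the Docker build endpoint this handler mirrors:
+//
+//   tar -C ./context -cf - . | \
+//     curl -X POST -H 'Content-Type: application/x-tar' --data-binary @- \
+//       'http://localhost:8080/v1.24/build?t=myimage:latest'
+//
+// extractTarFile writes the request body to a temporary directory, and the build
+// then uses <anchorDir>/build as its context directory.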
diff --git a/pkg/api/handlers/libpod/containers.go b/pkg/api/handlers/libpod/containers.go
new file mode 100644
index 000000000..bfb028b1b
--- /dev/null
+++ b/pkg/api/handlers/libpod/containers.go
@@ -0,0 +1,186 @@
+package libpod
+
+import (
+ "net/http"
+
+ "github.com/containers/libpod/cmd/podman/shared"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+func StopContainer(w http.ResponseWriter, r *http.Request) {
+ handlers.StopContainer(w, r)
+}
+
+func ContainerExists(w http.ResponseWriter, r *http.Request) {
+ // 404 no such container
+ // 200 ok
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ _, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func RemoveContainer(w http.ResponseWriter, r *http.Request) {
+ // 204 no error
+ // 400 bad param
+ // 404 no such container
+ // 409 conflict
+ // 500 internal error
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Force bool `schema:"force"`
+ Vols bool `schema:"v"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ utils.RemoveContainer(w, r, query.Force, query.Vols)
+}
+func ListContainers(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Filter []string `schema:"filter"`
+ Last int `schema:"last"`
+ Size bool `schema:"size"`
+ Sync bool `schema:"sync"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ opts := shared.PsOptions{
+ All: true,
+ Last: query.Last,
+ Size: query.Size,
+ Sort: "",
+ Namespace: true,
+ Sync: query.Sync,
+ }
+
+ pss, err := shared.GetPsContainerOutput(runtime, opts, query.Filter, 2)
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, pss)
+}
+
+func GetContainer(w http.ResponseWriter, r *http.Request) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Size bool `schema:"size"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ container, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+ data, err := container.Inspect(query.Size)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, data)
+}
+
+func KillContainer(w http.ResponseWriter, r *http.Request) {
+ // /{version}/containers/(name)/kill
+ _, err := utils.KillContainer(w, r)
+ if err != nil {
+ return
+ }
+ // Success
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func WaitContainer(w http.ResponseWriter, r *http.Request) {
+ _, err := utils.WaitContainer(w, r)
+ if err != nil {
+ utils.InternalServerError(w, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func PruneContainers(w http.ResponseWriter, r *http.Request) {
+ // TODO Needs rebase to get filters; also it would be handy to define
+ // an actual libpod container prune method.
+ // force
+ // filters
+}
+
+func LogsFromContainer(w http.ResponseWriter, r *http.Request) {
+ // follow
+ // since
+ // timestamps
+ // tail string
+}
+func StatsContainer(w http.ResponseWriter, r *http.Request) {
+ //stream
+}
+func CreateContainer(w http.ResponseWriter, r *http.Request) {
+
+}
+
+func MountContainer(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ conn, err := runtime.LookupContainer(name)
+ if err != nil {
+ utils.ContainerNotFound(w, name, err)
+ return
+ }
+ m, err := conn.Mount()
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, m)
+}
+
+func ShowMountedContainers(w http.ResponseWriter, r *http.Request) {
+ response := make(map[string]string)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ conns, err := runtime.GetAllContainers()
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ for _, conn := range conns {
+ mounted, mountPoint, err := conn.Mounted()
+  if err != nil {
+   utils.InternalServerError(w, err)
+   return
+  }
+ if !mounted {
+ continue
+ }
+ response[conn.ID()] = mountPoint
+ }
+ utils.WriteResponse(w, http.StatusOK, response)
+}
diff --git a/pkg/api/handlers/libpod/healthcheck.go b/pkg/api/handlers/libpod/healthcheck.go
new file mode 100644
index 000000000..0d7bf3ea7
--- /dev/null
+++ b/pkg/api/handlers/libpod/healthcheck.go
@@ -0,0 +1,25 @@
+package libpod
+
+import (
+ "net/http"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+)
+
+func RunHealthCheck(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ status, err := runtime.HealthCheck(name)
+ if err != nil {
+  if status == libpod.HealthCheckContainerNotFound {
+   utils.ContainerNotFound(w, name, err)
+   return
+  }
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, status)
+}
diff --git a/pkg/api/handlers/libpod/images.go b/pkg/api/handlers/libpod/images.go
new file mode 100644
index 000000000..0d4e220a8
--- /dev/null
+++ b/pkg/api/handlers/libpod/images.go
@@ -0,0 +1,165 @@
+package libpod
+
+import (
+ "io/ioutil"
+ "net/http"
+ "os"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+// Commit
+// author string
+// "container"
+// repo string
+// tag string
+// message
+// pause bool
+// changes []string
+
+// create
+
+func ImageExists(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+
+ _, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ utils.WriteResponse(w, http.StatusNoContent, "")
+}
+
+func ImageTree(w http.ResponseWriter, r *http.Request) {
+ // The tree logic currently lives in the adapter and is not callable from here; it needs rework before this endpoint can be implemented.
+
+ //name := mux.Vars(r)["name"]
+ //_, layerInfoMap, _, err := s.Runtime.Tree(name)
+ //if err != nil {
+ // Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "Failed to find image information for %q", name))
+ // return
+ //}
+ // It is not clear how to handle this while the image processing lives in main; we need to
+ // decide how it should work before this endpoint can return something useful.
+ handlers.UnsupportedHandler(w, r)
+}
+
+func GetImage(w http.ResponseWriter, r *http.Request) {
+ name := mux.Vars(r)["name"]
+ newImage, err := handlers.GetImage(r, name)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusNotFound, errors.Wrapf(err, "Failed to find image %s", name))
+ return
+ }
+ inspect, err := newImage.Inspect(r.Context())
+ if err != nil {
+ utils.Error(w, "Server error", http.StatusInternalServerError, errors.Wrapf(err, "failed in inspect image %s", inspect.ID))
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, inspect)
+}
+
+func GetImages(w http.ResponseWriter, r *http.Request) {
+ images, err := utils.GetImages(w, r)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed get images"))
+ return
+ }
+ var summaries = make([]*handlers.ImageSummary, len(images))
+ for j, img := range images {
+ is, err := handlers.ImageToImageSummary(img)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Failed transform image summaries"))
+ return
+ }
+ // libpod has additional fields that we need to populate.
+ is.CreatedTime = img.Created()
+ is.ReadOnly = img.IsReadOnly()
+ summaries[j] = is
+ }
+ utils.WriteResponse(w, http.StatusOK, summaries)
+}
+
+func PruneImages(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ All bool `schema:"all"`
+ Filters []string `schema:"filters"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ cids, err := runtime.ImageRuntime().PruneImages(r.Context(), query.All, query.Filters)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, cids)
+}
+
+func ExportImage(w http.ResponseWriter, r *http.Request) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Compress bool `schema:"compress"`
+ Format string `schema:"format"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ if len(query.Format) < 1 {
+ utils.InternalServerError(w, errors.New("format parameter cannot be empty."))
+ return
+ }
+
+ tmpfile, err := ioutil.TempFile("", "api.tar")
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to create tempfile"))
+ return
+ }
+ if err := tmpfile.Close(); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "unable to close tempfile"))
+ return
+ }
+ name := mux.Vars(r)["name"]
+ newImage, err := runtime.ImageRuntime().NewFromLocal(name)
+ if err != nil {
+ utils.ImageNotFound(w, name, err)
+ return
+ }
+ if err := newImage.Save(r.Context(), name, query.Format, tmpfile.Name(), []string{}, false, query.Compress); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest, err)
+ return
+ }
+ rdr, err := os.Open(tmpfile.Name())
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "failed to read the exported tarfile"))
+ return
+ }
+ defer rdr.Close()
+ defer os.Remove(tmpfile.Name())
+ utils.WriteResponse(w, http.StatusOK, rdr)
+}
diff --git a/pkg/api/handlers/libpod/pods.go b/pkg/api/handlers/libpod/pods.go
new file mode 100644
index 000000000..cde1fcd48
--- /dev/null
+++ b/pkg/api/handlers/libpod/pods.go
@@ -0,0 +1,465 @@
+package libpod
+
+import (
+ "encoding/json"
+ "fmt"
+ "net/http"
+ "strings"
+
+ "github.com/containers/libpod/cmd/podman/shared"
+ "github.com/containers/libpod/cmd/podman/shared/parse"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
+func PodCreate(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ options []libpod.PodCreateOption
+ err error
+ )
+ labels := make(map[string]string)
+ input := handlers.PodCreateConfig{}
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+ if len(input.InfraCommand) > 0 || len(input.InfraImage) > 0 {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError,
+ errors.New("infra-command and infra-image are not implemented yet"))
+ return
+ }
+ // TODO long term we should break the following out of adapter and into libpod proper
+ // so that the cli and api can share the creation of a pod with the same options
+ if len(input.CGroupParent) > 0 {
+ options = append(options, libpod.WithPodCgroupParent(input.CGroupParent))
+ }
+
+ if len(input.Labels) > 0 {
+ if err := parse.ReadKVStrings(labels, []string{}, input.Labels); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ }
+
+ if len(labels) != 0 {
+ options = append(options, libpod.WithPodLabels(labels))
+ }
+
+ if len(input.Name) > 0 {
+ options = append(options, libpod.WithPodName(input.Name))
+ }
+
+ if len(input.Hostname) > 0 {
+ options = append(options, libpod.WithPodHostname(input.Hostname))
+ }
+
+ if input.Infra {
+ // TODO infra-image and infra-command are not supported in the libpod API yet. Will fix
+ // when implemented in libpod
+ options = append(options, libpod.WithInfraContainer())
+ sharedNamespaces := shared.DefaultKernelNamespaces
+ if len(input.Share) > 0 {
+ sharedNamespaces = input.Share
+ }
+ nsOptions, err := shared.GetNamespaceOptions(strings.Split(sharedNamespaces, ","))
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ options = append(options, nsOptions...)
+ }
+
+ if len(input.Publish) > 0 {
+ portBindings, err := shared.CreatePortBindings(input.Publish)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ options = append(options, libpod.WithInfraContainerPorts(portBindings))
+
+ }
+ // always have containers use pod cgroups
+ // User Opt out is not yet supported
+ options = append(options, libpod.WithPodCgroups())
+
+ pod, err := runtime.NewPod(r.Context(), options...)
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusCreated, handlers.IDResponse{ID: pod.ID()})
+}
+
+func Pods(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ podInspectData []*libpod.PodInspect
+ )
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+  Filters []string `schema:"filters"`
+ }{
+ // override any golang type defaults
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ if len(query.Filters) > 0 {
+ utils.Error(w, "filters are not implemented yet", http.StatusInternalServerError, define.ErrNotImplemented)
+ return
+ }
+ pods, err := runtime.GetAllPods()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ for _, pod := range pods {
+ data, err := pod.Inspect()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ podInspectData = append(podInspectData, data)
+ }
+ utils.WriteResponse(w, http.StatusOK, podInspectData)
+}
+
+func PodInspect(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ podData, err := pod.Inspect()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, podData)
+}
+
+func PodStop(w http.ResponseWriter, r *http.Request) {
+ // 200
+ // 304 not modified
+ // 404 no such
+ // 500 internal
+ var (
+ stopError error
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+  Timeout int `schema:"t"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ allContainersStopped := true
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+
+ // TODO we need to implement a pod.State/Status in libpod internal so libpod api
+ // users don't have to run through all containers.
+ podContainers, err := pod.AllContainers()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+
+ for _, con := range podContainers {
+ containerState, err := con.State()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ if containerState == define.ContainerStateRunning {
+ allContainersStopped = false
+ break
+ }
+ }
+ if allContainersStopped {
+ alreadyStopped := errors.Errorf("pod %s is already stopped", pod.ID())
+ utils.Error(w, "Something went wrong", http.StatusNotModified, alreadyStopped)
+ return
+ }
+
+ if query.Timeout > 0 {
+  _, stopError = pod.StopWithTimeout(r.Context(), false, query.Timeout)
+ } else {
+ _, stopError = pod.Stop(r.Context(), false)
+ }
+ if stopError != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodStart(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 304 no modified
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ allContainersRunning := true
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+
+ // TODO we need to implement a pod.State/Status in libpod internal so libpod api
+ // users don't have to run through all containers.
+ podContainers, err := pod.AllContainers()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+
+ for _, con := range podContainers {
+ containerState, err := con.State()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ if containerState != define.ContainerStateRunning {
+ allContainersRunning = false
+ break
+ }
+ }
+ if allContainersRunning {
+ alreadyRunning := errors.Errorf("pod %s is already running", pod.ID())
+ utils.Error(w, "Something went wrong", http.StatusNotModified, alreadyRunning)
+ return
+ }
+ if _, err := pod.Start(r.Context()); err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodDelete(w http.ResponseWriter, r *http.Request) {
+ // 200
+ // 404 no such
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+  Force bool `schema:"force"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ if err := runtime.RemovePod(r.Context(), pod, true, query.Force); err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodRestart(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ _, err = pod.Restart(r.Context())
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodPrune(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ var (
+ err error
+ pods []*libpod.Pod
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+  Force bool `schema:"force"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ if query.Force {
+ pods, err = runtime.GetAllPods()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ } else {
+  // TODO We need a libpod pod-prune method (analogous to libpod.PruneVolumes) or this code
+  // will be a mess. Volumes already does this right. It will also help clean this code path
+  // up with fewer conditionals. We will do this when we integrate with libpod again.
+ utils.Error(w, "not implemented", http.StatusInternalServerError, errors.New("not implemented"))
+ return
+ }
+ for _, p := range pods {
+  if err := runtime.RemovePod(r.Context(), p, true, query.Force); err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodPause(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ _, err = pod.Pause()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodUnpause(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ _, err = pod.Unpause()
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodKill(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 409 has conflict
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+  Signal int `schema:"signal"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ name := mux.Vars(r)["name"]
+ pod, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ podStates, err := pod.Status()
+ if err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, err)
+ return
+ }
+ hasRunning := false
+ for _, s := range podStates {
+ if s == define.ContainerStateRunning {
+ hasRunning = true
+ break
+ }
+ }
+ if !hasRunning {
+ msg := fmt.Sprintf("Container %s is not running", pod.ID())
+ utils.Error(w, msg, http.StatusConflict, errors.Errorf("cannot kill a pod with no running containers: %s", pod.ID()))
+ return
+ }
+ // TODO How do we differentiate if a signal was sent vs accepting the pod/container default?
+ _, err = pod.Kill(uint(query.Signal))
+ if err != nil {
+ utils.Error(w, "Something went wrong", http.StatusInternalServerError, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
+
+func PodExists(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal (needs work)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ _, err := runtime.LookupPod(name)
+ if err != nil {
+ utils.PodNotFound(w, name, err)
+ return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
diff --git a/pkg/api/handlers/libpod/volumes.go b/pkg/api/handlers/libpod/volumes.go
new file mode 100644
index 000000000..ece59a4b6
--- /dev/null
+++ b/pkg/api/handlers/libpod/volumes.go
@@ -0,0 +1,174 @@
+package libpod
+
+import (
+ "encoding/json"
+ "net/http"
+
+ "github.com/containers/libpod/cmd/podman/shared"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/pkg/api/handlers"
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+func CreateVolume(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ var (
+ volumeOptions []libpod.VolumeCreateOption
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+ }{
+ // override any golang type defaults
+ }
+ input := handlers.VolumeCreateConfig{}
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+
+ // decode params from body
+ if err := json.NewDecoder(r.Body).Decode(&input); err != nil {
+ utils.Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrap(err, "Decode()"))
+ return
+ }
+
+ if len(input.Name) > 0 {
+ volumeOptions = append(volumeOptions, libpod.WithVolumeName(input.Name))
+ }
+ if len(input.Driver) > 0 {
+ volumeOptions = append(volumeOptions, libpod.WithVolumeDriver(input.Driver))
+ }
+ if len(input.Label) > 0 {
+ volumeOptions = append(volumeOptions, libpod.WithVolumeLabels(input.Label))
+ }
+ if len(input.Opts) > 0 {
+ parsedOptions, err := shared.ParseVolumeOptions(input.Opts)
+  if err != nil {
+   utils.InternalServerError(w, err)
+   return
+  }
+ volumeOptions = append(volumeOptions, parsedOptions...)
+ }
+ vol, err := runtime.NewVolume(r.Context(), volumeOptions...)
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, vol.Name())
+}
+
+func InspectVolume(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ name := mux.Vars(r)["name"]
+ vol, err := runtime.GetVolume(name)
+ if err != nil {
+  utils.VolumeNotFound(w, name, err)
+  return
+ }
+ inspect, err := vol.Inspect()
+ if err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, inspect)
+}
+
+func ListVolumes(w http.ResponseWriter, r *http.Request) {
+ //var (
+ // runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ // decoder = r.Context().Value("decoder").(*schema.Decoder)
+ //)
+ //query := struct {
+ // Filter string `json:"filter"`
+ //}{
+ // // override any golang type defaults
+ //}
+ //
+ //if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ // utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ // errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ // return
+ //}
+ /*
+  The listing logic currently lives in cmd (main) and needs to be extracted from there
+  before this endpoint can be implemented.
+ */
+
+}
+
+func PruneVolumes(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ pruned, errs := runtime.PruneVolumes(r.Context())
+ if len(errs) > 0 {
+  for _, err := range errs {
+   log.Infof("Request Failed(%s): %s", http.StatusText(http.StatusInternalServerError), err.Error())
+  }
+  utils.InternalServerError(w, errs[len(errs)-1])
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, pruned)
+}
+
+func RemoveVolume(w http.ResponseWriter, r *http.Request) {
+ // 200 ok
+ // 404 no such
+ // 500 internal
+ var (
+ runtime = r.Context().Value("runtime").(*libpod.Runtime)
+ decoder = r.Context().Value("decoder").(*schema.Decoder)
+ )
+ query := struct {
+ Force bool `schema:"force"`
+ }{
+ // override any golang type defaults
+ }
+
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ utils.Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest,
+ errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return
+ }
+ name := mux.Vars(r)["name"]
+ vol, err := runtime.LookupVolume(name)
+ if err != nil {
+  utils.VolumeNotFound(w, name, err)
+  return
+ }
+ if err := runtime.RemoveVolume(r.Context(), vol, query.Force); err != nil {
+  utils.InternalServerError(w, err)
+  return
+ }
+ utils.WriteResponse(w, http.StatusOK, "")
+}
diff --git a/pkg/api/handlers/types.go b/pkg/api/handlers/types.go
new file mode 100644
index 000000000..9edbbdccc
--- /dev/null
+++ b/pkg/api/handlers/types.go
@@ -0,0 +1,534 @@
+package handlers
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/containers/image/v5/manifest"
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/containers/libpod/libpod/events"
+ libpodImage "github.com/containers/libpod/libpod/image"
+ docker "github.com/docker/docker/api/types"
+ dockerContainer "github.com/docker/docker/api/types/container"
+ dockerEvents "github.com/docker/docker/api/types/events"
+ dockerNetwork "github.com/docker/docker/api/types/network"
+ "github.com/docker/go-connections/nat"
+ "github.com/pkg/errors"
+)
+
+type AuthConfig struct {
+ docker.AuthConfig
+}
+
+type ImageInspect struct {
+ docker.ImageInspect
+}
+
+type ContainerConfig struct {
+ dockerContainer.Config
+}
+
+type ImageSummary struct {
+ docker.ImageSummary
+ CreatedTime time.Time `json:"CreatedTime,omitempty"`
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+}
+
+type ContainersPruneReport struct {
+ docker.ContainersPruneReport
+}
+
+type Info struct {
+ docker.Info
+ BuildahVersion string
+ CPURealtimePeriod bool
+ CPURealtimeRuntime bool
+ CgroupVersion string
+ Rootless bool
+ SwapFree int64
+ SwapTotal int64
+ Uptime string
+}
+
+type Container struct {
+ docker.Container
+ docker.ContainerCreateConfig
+}
+
+type ContainerStats struct {
+ docker.ContainerStats
+}
+
+type Ping struct {
+ docker.Ping
+}
+
+type Version struct {
+ docker.Version
+}
+
+type DiskUsage struct {
+ docker.DiskUsage
+}
+
+type VolumesPruneReport struct {
+ docker.VolumesPruneReport
+}
+
+type ImagesPruneReport struct {
+ docker.ImagesPruneReport
+}
+
+type BuildCachePruneReport struct {
+ docker.BuildCachePruneReport
+}
+
+type NetworkPruneReport struct {
+ docker.NetworksPruneReport
+}
+
+type ConfigCreateResponse struct {
+ docker.ConfigCreateResponse
+}
+
+type PushResult struct {
+ docker.PushResult
+}
+
+type BuildResult struct {
+ docker.BuildResult
+}
+
+type ContainerWaitOKBody struct {
+ StatusCode int
+ Error struct {
+ Message string
+ }
+}
+
+type CreateContainerConfig struct {
+ Name string
+ dockerContainer.Config
+ HostConfig dockerContainer.HostConfig
+ NetworkingConfig dockerNetwork.NetworkingConfig
+}
+
+type VolumeCreateConfig struct {
+ Name string `json:"name"`
+ Driver string `schema:"driver"`
+ Label map[string]string `schema:"label"`
+ Opts map[string]string `schema:"opts"`
+}
+
+type IDResponse struct {
+ ID string `json:"id"`
+}
+
+type Stats struct {
+ docker.StatsJSON
+}
+
+type ContainerTopOKBody struct {
+ dockerContainer.ContainerTopOKBody
+ ID string `json:"Id"`
+}
+
+type PodCreateConfig struct {
+ Name string `json:"name"`
+ CGroupParent string `json:"cgroup-parent"`
+ Hostname string `json:"hostname"`
+ Infra bool `json:"infra"`
+ InfraCommand string `json:"infra-command"`
+ InfraImage string `json:"infra-image"`
+ Labels []string `json:"labels"`
+ Publish []string `json:"publish"`
+ Share string `json:"share"`
+}
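+
+// An illustrative request body for PodCreate matching the json tags above; all
+// values are examples only:
+//
+//   {
+//     "name": "mypod",
+//     "hostname": "mypod-host",
+//     "infra": true,
+//     "labels": ["env=dev"],
+//     "publish": ["8080:80"],
+//     "share": "net,ipc"
+//   }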
+
+type ErrorModel struct {
+ Message string `json:"message"`
+}
+
+type Event struct {
+ dockerEvents.Message
+}
+
+type HistoryResponse struct {
+ ID string `json:"Id"`
+ Created int64 `json:"Created"`
+ CreatedBy string `json:"CreatedBy"`
+ Tags []string `json:"Tags"`
+ Size int64 `json:"Size"`
+ Comment string `json:"Comment"`
+}
+
+type ImageLayer struct{}
+
+type ImageTreeResponse struct {
+ ID string `json:"id"`
+ Tags []string `json:"tags"`
+ Size string `json:"size"`
+ Layers []ImageLayer `json:"layers"`
+}
+
+func EventToApiEvent(e *events.Event) *Event {
+ return &Event{dockerEvents.Message{
+ Type: e.Type.String(),
+ Action: e.Status.String(),
+ Actor: dockerEvents.Actor{
+ ID: e.ID,
+ Attributes: map[string]string{
+ "image": e.Image,
+ "name": e.Name,
+ "containerExitCode": strconv.Itoa(e.ContainerExitCode),
+ },
+ },
+ Scope: "local",
+ Time: e.Time.Unix(),
+ TimeNano: e.Time.UnixNano(),
+ }}
+}
+
+func ImageToImageSummary(l *libpodImage.Image) (*ImageSummary, error) {
+ containers, err := l.Containers()
+ if err != nil {
+ return nil, errors.Wrapf(err, "Failed to obtain Containers for image %s", l.ID())
+ }
+ containerCount := len(containers)
+
+ var digests []string
+ for _, d := range l.Digests() {
+ digests = append(digests, string(d))
+ }
+
+ tags, err := l.RepoTags()
+ if err != nil {
+ return nil, errors.Wrapf(err, "Failed to obtain RepoTags for image %s", l.ID())
+ }
+
+ // FIXME: GetParent() panics
+ // parent, err := l.GetParent(context.TODO())
+ // if err != nil {
+ // return nil, errors.Wrapf(err, "Failed to obtain ParentID for image %s", l.ID())
+ // }
+
+ labels, err := l.Labels(context.TODO())
+ if err != nil {
+ return nil, errors.Wrapf(err, "Failed to obtain Labels for image %s", l.ID())
+ }
+
+ size, err := l.Size(context.TODO())
+ if err != nil {
+ return nil, errors.Wrapf(err, "Failed to obtain Size for image %s", l.ID())
+ }
+ dockerSummary := docker.ImageSummary{
+ Containers: int64(containerCount),
+ Created: l.Created().Unix(),
+ ID: l.ID(),
+ Labels: labels,
+ ParentID: l.Parent,
+ RepoDigests: digests,
+ RepoTags: tags,
+ SharedSize: 0,
+ Size: int64(*size),
+ VirtualSize: int64(*size),
+ }
+ is := ImageSummary{
+ ImageSummary: dockerSummary,
+ }
+ return &is, nil
+}
+
+func ImageDataToImageInspect(ctx context.Context, l *libpodImage.Image) (*ImageInspect, error) {
+ info, err := l.Inspect(context.Background())
+ if err != nil {
+ return nil, err
+ }
+ ports, err := portsToPortSet(info.Config.ExposedPorts)
+ if err != nil {
+ return nil, err
+ }
+ // TODO the rest of these still need wiring!
+ config := dockerContainer.Config{
+ // Hostname: "",
+ // Domainname: "",
+ User: info.User,
+ // AttachStdin: false,
+ // AttachStdout: false,
+ // AttachStderr: false,
+ ExposedPorts: ports,
+ // Tty: false,
+ // OpenStdin: false,
+ // StdinOnce: false,
+ Env: info.Config.Env,
+ Cmd: info.Config.Cmd,
+ // Healthcheck: nil,
+ // ArgsEscaped: false,
+ // Image: "",
+ // Volumes: nil,
+ // WorkingDir: "",
+ // Entrypoint: nil,
+ // NetworkDisabled: false,
+ // MacAddress: "",
+ // OnBuild: nil,
+ // Labels: nil,
+ // StopSignal: "",
+ // StopTimeout: nil,
+ // Shell: nil,
+ }
+ ic, err := l.ToImageRef(ctx)
+ if err != nil {
+ return nil, err
+ }
+ dockerImageInspect := docker.ImageInspect{
+ Architecture: l.Architecture,
+ Author: l.Author,
+ Comment: info.Comment,
+ Config: &config,
+ Created: l.Created().Format(time.RFC3339Nano),
+ DockerVersion: "",
+ GraphDriver: docker.GraphDriverData{},
+ ID: fmt.Sprintf("sha256:%s", l.ID()),
+ Metadata: docker.ImageMetadata{},
+ Os: l.Os,
+ OsVersion: l.Version,
+ Parent: l.Parent,
+ RepoDigests: info.RepoDigests,
+ RepoTags: info.RepoTags,
+ RootFS: docker.RootFS{},
+ Size: info.Size,
+ Variant: "",
+ VirtualSize: info.VirtualSize,
+ }
+ bi := ic.ConfigInfo()
+ // For docker images, we need to get the Container id and config
+ // and populate the image with it.
+ if bi.MediaType == manifest.DockerV2Schema2ConfigMediaType {
+ d := manifest.Schema2Image{}
+ b, err := ic.ConfigBlob(ctx)
+ if err != nil {
+ return nil, err
+ }
+ if err := json.Unmarshal(b, &d); err != nil {
+ return nil, err
+ }
+ // populate the Container id into the image
+ dockerImageInspect.Container = d.Container
+ containerConfig := dockerContainer.Config{}
+ configBytes, err := json.Marshal(d.ContainerConfig)
+ if err != nil {
+ return nil, err
+ }
+ if err := json.Unmarshal(configBytes, &containerConfig); err != nil {
+ return nil, err
+ }
+ // populate the Container config in the image
+ dockerImageInspect.ContainerConfig = &containerConfig
+ // populate parent
+ dockerImageInspect.Parent = d.Parent.String()
+ }
+ return &ImageInspect{dockerImageInspect}, nil
+
+}
+
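+// LibpodToContainer converts a libpod container into a docker-compatible Container summary.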
+func LibpodToContainer(l *libpod.Container, infoData []define.InfoData) (*Container, error) {
+ imageId, imageName := l.Image()
+ sizeRW, err := l.RWSize()
+ if err != nil {
+ return nil, err
+ }
+
+ sizeRootFs, err := l.RootFsSize()
+ if err != nil {
+ return nil, err
+ }
+
+ state, err := l.State()
+ if err != nil {
+ return nil, err
+ }
+
+ return &Container{docker.Container{
+ ID: l.ID(),
+ Names: []string{l.Name()},
+ Image: imageName,
+ ImageID: imageId,
+ Command: strings.Join(l.Command(), " "),
+ Created: l.CreatedTime().Unix(),
+ Ports: nil,
+ SizeRw: sizeRW,
+ SizeRootFs: sizeRootFs,
+ Labels: l.Labels(),
+ State: string(state),
+ Status: "",
+ HostConfig: struct {
+ NetworkMode string `json:",omitempty"`
+ }{
+ "host"},
+ NetworkSettings: nil,
+ Mounts: nil,
+ },
+ docker.ContainerCreateConfig{},
+ }, nil
+}
+
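+// LibpodToContainerJSON converts a libpod container inspect result into a
+// docker-compatible ContainerJSON by marshalling the state, host config, graph
+// driver and mounts through their JSON representations.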
+func LibpodToContainerJSON(l *libpod.Container) (*docker.ContainerJSON, error) {
+ _, imageName := l.Image()
+ inspect, err := l.Inspect(true)
+ if err != nil {
+ return nil, err
+ }
+ i, err := json.Marshal(inspect.State)
+ if err != nil {
+ return nil, err
+ }
+ state := docker.ContainerState{}
+ if err := json.Unmarshal(i, &state); err != nil {
+ return nil, err
+ }
+
+ // docker considers paused to be running
+ if state.Paused {
+ state.Running = true
+ }
+
+ h, err := json.Marshal(inspect.HostConfig)
+ if err != nil {
+ return nil, err
+ }
+ hc := dockerContainer.HostConfig{}
+ if err := json.Unmarshal(h, &hc); err != nil {
+ return nil, err
+ }
+ g, err := json.Marshal(inspect.GraphDriver)
+ if err != nil {
+ return nil, err
+ }
+ graphDriver := docker.GraphDriverData{}
+ if err := json.Unmarshal(g, &graphDriver); err != nil {
+ return nil, err
+ }
+
+ cb := docker.ContainerJSONBase{
+ ID: l.ID(),
+ Created: l.CreatedTime().Format(time.RFC3339Nano),
+ Path: "",
+ Args: nil,
+ State: &state,
+ Image: imageName,
+ ResolvConfPath: inspect.ResolvConfPath,
+ HostnamePath: inspect.HostnamePath,
+ HostsPath: inspect.HostsPath,
+ LogPath: l.LogPath(),
+ Node: nil,
+ Name: l.Name(),
+ RestartCount: 0,
+ Driver: inspect.Driver,
+ Platform: "linux",
+ MountLabel: inspect.MountLabel,
+ ProcessLabel: inspect.ProcessLabel,
+ AppArmorProfile: inspect.AppArmorProfile,
+ ExecIDs: inspect.ExecIDs,
+ HostConfig: &hc,
+ GraphDriver: graphDriver,
+ SizeRw: inspect.SizeRw,
+ SizeRootFs: &inspect.SizeRootFs,
+ }
+
+ stopTimeout := int(l.StopTimeout())
+
+ ports := make(nat.PortSet)
+ for p := range inspect.HostConfig.PortBindings {
+ // PortBindings keys are of the form "portNumber/protocol", e.g. "80/tcp"
+ splitp := strings.Split(p, "/")
+ if len(splitp) != 2 {
+ return nil, errors.Errorf("invalid port binding %q", p)
+ }
+ port, err := nat.NewPort(splitp[1], splitp[0])
+ if err != nil {
+ return nil, err
+ }
+ ports[port] = struct{}{}
+ }
+
+ config := dockerContainer.Config{
+ Hostname: l.Hostname(),
+ Domainname: inspect.Config.DomainName,
+ User: l.User(),
+ AttachStdin: inspect.Config.AttachStdin,
+ AttachStdout: inspect.Config.AttachStdout,
+ AttachStderr: inspect.Config.AttachStderr,
+ ExposedPorts: ports,
+ Tty: inspect.Config.Tty,
+ OpenStdin: inspect.Config.OpenStdin,
+ StdinOnce: inspect.Config.StdinOnce,
+ Env: inspect.Config.Env,
+ Cmd: inspect.Config.Cmd,
+ Healthcheck: nil,
+ ArgsEscaped: false,
+ Image: imageName,
+ Volumes: nil,
+ WorkingDir: l.WorkingDir(),
+ Entrypoint: l.Entrypoint(),
+ NetworkDisabled: false,
+ MacAddress: "",
+ OnBuild: nil,
+ Labels: l.Labels(),
+ StopSignal: string(l.StopSignal()),
+ StopTimeout: &stopTimeout,
+ Shell: nil,
+ }
+
+ m, err := json.Marshal(inspect.Mounts)
+ if err != nil {
+ return nil, err
+ }
+ mounts := []docker.MountPoint{}
+ if err := json.Unmarshal(m, &mounts); err != nil {
+ return nil, err
+ }
+
+ networkSettingsDefault := docker.DefaultNetworkSettings{
+ EndpointID: "",
+ Gateway: "",
+ GlobalIPv6Address: "",
+ GlobalIPv6PrefixLen: 0,
+ IPAddress: "",
+ IPPrefixLen: 0,
+ IPv6Gateway: "",
+ MacAddress: l.Config().StaticMAC.String(),
+ }
+
+ networkSettings := docker.NetworkSettings{
+ NetworkSettingsBase: docker.NetworkSettingsBase{},
+ DefaultNetworkSettings: networkSettingsDefault,
+ Networks: nil,
+ }
+
+ c := docker.ContainerJSON{
+ ContainerJSONBase: &cb,
+ Mounts: mounts,
+ Config: &config,
+ NetworkSettings: &networkSettings,
+ }
+ return &c, nil
+}
+
+// portsToPortSet converts libpod's exposed-port map into Docker's nat.PortSet.
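+// Each exposed port key is registered for both tcp and udp.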
+func portsToPortSet(input map[string]struct{}) (nat.PortSet, error) {
+ ports := make(nat.PortSet)
+ for k := range input {
+ npTCP, err := nat.NewPort("tcp", k)
+ if err != nil {
+ return nil, errors.Wrapf(err, "unable to create tcp port from %s", k)
+ }
+ npUDP, err := nat.NewPort("udp", k)
+ if err != nil {
+ return nil, errors.Wrapf(err, "unable to create udp port from %s", k)
+ }
+ ports[npTCP] = struct{}{}
+ ports[npUDP] = struct{}{}
+ }
+ return ports, nil
+}
diff --git a/pkg/api/handlers/unsupported.go b/pkg/api/handlers/unsupported.go
new file mode 100644
index 000000000..956d31f8b
--- /dev/null
+++ b/pkg/api/handlers/unsupported.go
@@ -0,0 +1,17 @@
+package handlers
+
+import (
+ "fmt"
+ "net/http"
+
+ "github.com/containers/libpod/pkg/api/handlers/utils"
+ log "github.com/sirupsen/logrus"
+)
+
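+// UnsupportedHandler logs the offending path and returns a JSON error body for
+// endpoints that are not supported.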
+func UnsupportedHandler(w http.ResponseWriter, r *http.Request) {
+ msg := fmt.Sprintf("Path %s is not supported", r.URL.Path)
+ log.Infof("Request Failed: %s", msg)
+
+ utils.WriteJSON(w, http.StatusInternalServerError,
+ utils.ErrorModel{Message: msg})
+}
diff --git a/pkg/api/handlers/utils/containers.go b/pkg/api/handlers/utils/containers.go
new file mode 100644
index 000000000..64d3d378a
--- /dev/null
+++ b/pkg/api/handlers/utils/containers.go
@@ -0,0 +1,103 @@
+package utils
+
+import (
+ "fmt"
+ "net/http"
+ "syscall"
+ "time"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/define"
+ "github.com/gorilla/mux"
+ "github.com/gorilla/schema"
+ "github.com/pkg/errors"
+)
+
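+// KillContainer looks up the named container and sends it the requested signal,
+// defaulting to SIGKILL. Stopped or exited containers produce a 409 Conflict.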
+func KillContainer(w http.ResponseWriter, r *http.Request) (*libpod.Container, error) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ query := struct {
+ Signal syscall.Signal `schema:"signal"`
+ }{
+ Signal: syscall.SIGKILL,
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return nil, err
+ }
+ name := mux.Vars(r)["name"]
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ ContainerNotFound(w, name, err)
+ return nil, err
+ }
+
+ state, err := con.State()
+ if err != nil {
+ InternalServerError(w, err)
+ return con, err
+ }
+
+ // If the Container is stopped already, send a 409
+ if state == define.ContainerStateStopped || state == define.ContainerStateExited {
+ err := errors.Errorf("cannot kill Container %s: it is not running", name)
+ Error(w, fmt.Sprintf("Container %s is not running", name), http.StatusConflict, err)
+ return con, err
+ }
+
+ err = con.Kill(uint(query.Signal))
+ if err != nil {
+ Error(w, "Something went wrong.", http.StatusInternalServerError, errors.Wrapf(err, "unable to kill Container %s", name))
+ }
+ return con, err
+}
+
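+// RemoveContainer looks up the named container and removes it, optionally forcing
+// removal and deleting associated volumes; 204 No Content is written on success.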
+func RemoveContainer(w http.ResponseWriter, r *http.Request, force, vols bool) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ name := mux.Vars(r)["name"]
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ ContainerNotFound(w, name, err)
+ return
+ }
+
+ if err := runtime.RemoveContainer(r.Context(), con, force, vols); err != nil {
+ InternalServerError(w, err)
+ return
+ }
+ WriteResponse(w, http.StatusNoContent, "")
+}
+
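+// WaitContainer blocks until the named container exits and returns its exit code.
+// The optional interval query parameter accepts a Go duration string (e.g. "250ms");
+// the condition parameter is not supported.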
+func WaitContainer(w http.ResponseWriter, r *http.Request) (int32, error) {
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ // /{version}/containers/(name)/wait
+ query := struct {
+ Interval string `schema:"interval"`
+ Condition string `schema:"condition"`
+ }{
+ // Override golang default values for types
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse parameters for %s", r.URL.String()))
+ return 0, err
+ }
+
+ if len(query.Condition) > 0 {
+ return 0, errors.Errorf("the condition parameter is not supported")
+ }
+
+ name := mux.Vars(r)["name"]
+ con, err := runtime.LookupContainer(name)
+ if err != nil {
+ ContainerNotFound(w, name, err)
+ return 0, err
+ }
+ if len(query.Interval) > 0 {
+ d, err := time.ParseDuration(query.Interval)
+ if err != nil {
+ Error(w, "Something went wrong.", http.StatusBadRequest, errors.Wrapf(err, "Failed to parse %s for interval", query.Interval))
+ return 0, err
+ }
+ return con.WaitWithInterval(d)
+ }
+ return con.Wait()
+}
diff --git a/pkg/api/handlers/utils/errors.go b/pkg/api/handlers/utils/errors.go
new file mode 100644
index 000000000..69d4e40f8
--- /dev/null
+++ b/pkg/api/handlers/utils/errors.go
@@ -0,0 +1,86 @@
+package utils
+
+import (
+ "fmt"
+ "net/http"
+
+ "github.com/containers/libpod/libpod/define"
+ "github.com/pkg/errors"
+ log "github.com/sirupsen/logrus"
+)
+
+var (
+ ErrLinkNotSupport = errors.New("Link is not supported")
+)
+
+// Error formats an API response for an error.
+//
+// apiMessage and code must match the container API, and are sent to the client;
+// err is logged on the system running the podman service.
+func Error(w http.ResponseWriter, apiMessage string, code int, err error) {
+ // Log detailed message of what happened to machine running podman service
+ log.Infof("Request Failed(%s): %s", http.StatusText(code), err.Error())
+ em := ErrorModel{
+ Because: (errors.Cause(err)).Error(),
+ Message: err.Error(),
+ }
+ WriteJSON(w, code, em)
+}
+
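+// The *NotFound helpers below map libpod "no such ..." errors onto 404 responses;
+// any other cause is reported as a 500 Internal Server Error.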
+func VolumeNotFound(w http.ResponseWriter, nameOrId string, err error) {
+ if errors.Cause(err) != define.ErrNoSuchVolume {
+ InternalServerError(w, err)
+ return
+ }
+ msg := fmt.Sprintf("No such volume: %s", nameOrId)
+ Error(w, msg, http.StatusNotFound, err)
+}
+
+func ContainerNotFound(w http.ResponseWriter, nameOrId string, err error) {
+ if errors.Cause(err) != define.ErrNoSuchCtr {
+ InternalServerError(w, err)
+ return
+ }
+ msg := fmt.Sprintf("No such container: %s", nameOrId)
+ Error(w, msg, http.StatusNotFound, err)
+}
+
+func ImageNotFound(w http.ResponseWriter, nameOrId string, err error) {
+ if errors.Cause(err) != define.ErrNoSuchImage {
+ InternalServerError(w, err)
+ return
+ }
+ msg := fmt.Sprintf("No such image: %s", nameOrId)
+ Error(w, msg, http.StatusNotFound, err)
+}
+
+func PodNotFound(w http.ResponseWriter, nameOrId string, err error) {
+ if errors.Cause(err) != define.ErrNoSuchPod {
+ InternalServerError(w, err)
+ return
+ }
+ msg := fmt.Sprintf("No such pod: %s", nameOrId)
+ Error(w, msg, http.StatusNotFound, err)
+}
+
+func ContainerNotRunning(w http.ResponseWriter, containerID string, err error) {
+ msg := fmt.Sprintf("Container %s is not running", containerID)
+ Error(w, msg, http.StatusConflict, err)
+}
+
+func InternalServerError(w http.ResponseWriter, err error) {
+ Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError, err)
+}
+
+func BadRequest(w http.ResponseWriter, key string, value string, err error) {
+ e := errors.Wrapf(err, "Failed to parse query parameter '%s': %q", key, value)
+ Error(w, http.StatusText(http.StatusBadRequest), http.StatusBadRequest, e)
+}
+
+type ErrorModel struct {
+ Because string `json:"cause"`
+ Message string `json:"message"`
+}
+
+func (e ErrorModel) Error() string {
+ return e.Message
+}
+
+func (e ErrorModel) Cause() error {
+ return errors.New(e.Because)
+}
diff --git a/pkg/api/handlers/utils/handler.go b/pkg/api/handlers/utils/handler.go
new file mode 100644
index 000000000..0815e6eca
--- /dev/null
+++ b/pkg/api/handlers/utils/handler.go
@@ -0,0 +1,44 @@
+package utils
+
+import (
+ "encoding/json"
+ "fmt"
+ "io"
+ "net/http"
+ "os"
+
+ log "github.com/sirupsen/logrus"
+)
+
+// WriteResponse encodes the given value as JSON or plain text and writes it to the http client
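+// Strings are written as text/plain, *os.File values are streamed as octet data,
+// and anything else is encoded as JSON via WriteJSON.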
+func WriteResponse(w http.ResponseWriter, code int, value interface{}) {
+ switch v := value.(type) {
+ case string:
+ w.Header().Set("Content-Type", "text/plain; charset=us-ascii")
+ w.WriteHeader(code)
+
+ if _, err := fmt.Fprintln(w, v); err != nil {
+ log.Errorf("unable to send string response: %q", err)
+ }
+ case *os.File:
+ w.Header().Set("Content-Type", "application/octet-stream")
+ w.WriteHeader(code)
+
+ if _, err := io.Copy(w, v); err != nil {
+ log.Errorf("unable to copy to response: %q", err)
+ }
+ default:
+ WriteJSON(w, code, value)
+ }
+}
+
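+// WriteJSON encodes value as a JSON document and writes it with the given status code.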
+func WriteJSON(w http.ResponseWriter, code int, value interface{}) {
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(code)
+
+ coder := json.NewEncoder(w)
+ coder.SetEscapeHTML(true)
+ if err := coder.Encode(value); err != nil {
+ log.Errorf("unable to write json: %q", err)
+ }
+}
diff --git a/pkg/api/handlers/utils/images.go b/pkg/api/handlers/utils/images.go
new file mode 100644
index 000000000..9445298ca
--- /dev/null
+++ b/pkg/api/handlers/utils/images.go
@@ -0,0 +1,32 @@
+package utils
+
+import (
+ "fmt"
+ "net/http"
+
+ "github.com/containers/libpod/libpod"
+ "github.com/containers/libpod/libpod/image"
+ "github.com/gorilla/schema"
+)
+
+// GetImages is a common helper used to fetch images for both the libpod and
+// Docker-compatible endpoints.
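+// When no filters are supplied an empty "reference=" filter is appended so that all
+// images are returned. Callers are expected to write the response themselves, for
+// example (illustrative only):
+//
+//   images, err := utils.GetImages(w, r)
+//   if err != nil {
+//       utils.InternalServerError(w, err)
+//       return
+//   }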
+func GetImages(w http.ResponseWriter, r *http.Request) ([]*image.Image, error) {
+ decoder := r.Context().Value("decoder").(*schema.Decoder)
+ runtime := r.Context().Value("runtime").(*libpod.Runtime)
+ query := struct {
+ // All bool `schema:"all"` // currently unused
+ Filters []string `schema:"filters"`
+ // Digests bool `schema:"digests"` // currently unused
+ }{
+ // This is where you can override the golang default value for one of the fields
+ }
+ if err := decoder.Decode(&query, r.URL.Query()); err != nil {
+ return nil, err
+ }
+ filters := query.Filters
+ if len(filters) < 1 {
+ filters = append(filters, fmt.Sprintf("reference=%s", ""))
+ }
+ return runtime.ImageRuntime().GetImagesWithFilters(filters)
+}