
dokku is also meant to build a custom image on deploy: rather than using Heroku's buildpacks, you can put a Dockerfile at the root of your project and it will be used instead.

So basically, you could put a Dockerfile containing just FROM and MAINTAINER, referencing the image you want to use in the FROM, and dokku will download and run it on `git push` (provided it can access the image registry).
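For instance, a minimal sketch of such a Dockerfile might look like this (the image reference and email are placeholders):

  # Hypothetical minimal Dockerfile pointing dokku at a pre-built image
  FROM registry.example.com/myorg/myapp:1.0
  MAINTAINER you@example.com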




Yes I do :)

You can see at the following line which image is being used when deploying an app:

https://github.com/progrium/dokku/blob/master/dokku#L33

So what I did was change that to an image of my own based on the progrium/buildstep one:

  sudo docker run -i -t progrium/buildstep /bin/bash
Now you can make changes inside it and, from another shell on the host (while the container is still running), save them into your own image:

  sudo docker commit <id> myownimage
OK, and inside this image I have referenced my own buildstep instead of the default Heroku one. The relevant file is in /build/.

And, to get to the point, I changed the npm install line to point at the EU registry mirror:

  npm install --registry "http://registry.npmjs.eu" --userconfig $build_dir/.npmrc --production 2>&1 | indent
That is one way to go about it.
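To tie it together, the change mentioned at the top (pointing the dokku script at the custom image) could be as simple as something like this, assuming dokku's main script lives at /usr/local/bin/dokku on your host; the path and exact line differ between versions:

  # Assumption: swap the default buildstep image for the one committed above
  sudo sed -i 's|progrium/buildstep|myownimage|' /usr/local/bin/dokku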

I've been using Dokku for a side-project, and it's a really nice tool! My only gripe with it is that it's not easy to deploy an existing docker image. You have to pull it, then transmit it over ssh with "docker save" and "docker load".[1]

Moving the Docker image build off the Dokku server and onto a CI system would be easier without this. On top of that, deploying existing software onto your machine would be easier too.

[1] http://dokku.viewdocs.io/dokku/deployment/methods/images/#de...
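For reference, the save/load dance looks roughly like this (image and host names are placeholders; the Dokku-side deploy command depends on your version, see [1]):

  docker pull myorg/myapp:1.0
  docker save myorg/myapp:1.0 | ssh dokku-host 'docker load'
  # then tell Dokku to deploy the app from that now-local image, per [1]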


There are actually two ways to do Docker with Heroku.

You can let Heroku build your image during the deployment phase by simply having a `Dockerfile` in your repository [1].

Or, you can build the image yourself on your CI service, push it to Heroku's registry and then trigger a release. This last option is useful if you want to run tests against the exact image that is going to be deployed, without having to rebuild it [2].

[1] https://blog.heroku.com/build-docker-images-heroku-yml
[2] https://devcenter.heroku.com/articles/container-registry-and...
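For the second approach, a minimal sketch of the CI side might look like this (the app name and process type are placeholders; see [2] for the full flow):

  heroku container:login
  docker build -t registry.heroku.com/my-app/web .
  docker push registry.heroku.com/my-app/web
  heroku container:release web --app my-app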


+1 for this -- requiring the docker image to be built/managed on the machine it's being deployed on is the simpler architectural choice (easier to debug, etc), but it doesn't necessarily make sense for production.

I wonder if there's a ticket about this on dokku already

[EDIT] - Couldn't find anything... There are some tickets about how the containers are built and about changing the base image, but not much about this specifically.

I wonder if you could jury-rig something like kraken[0] and make sure wherever you're building images is a peer or something... Of course, the simpler solution might be to add a CI step that just pushes the image (via the working `docker save` method) to the deployment machine(s). Maybe if you have a staging environment, let CI push there; then, if that machine is peered (via something like kraken) with production, production will get the image (though it may never run it).

[0]: https://github.com/uber/kraken


It really depends how you set it up, though-- for example, Digital Ocean provides their own image (http://www.andrewmunsell.com/blog/dokku-tutorial-digital-oce...) that provides a one-click deployment of Dokku, including all of the SSH key setup and stuff. Minimally, this makes it easier to start up a new Dokku instance on a new version and push your apps to the new instance before you destroy the old one. In my experience, Dokku has been fairly easy to use (minus a weird Nginx related issue I had that was a side effect of the length of my app names).

Neither Docker nor Dokku is at 1.0 or deemed production-ready by its authors yet, so you should still evaluate both before using them. But it is really cool to see products like Dokku and Flynn being created and maintained, since they make deploying apps so much easier.


The result of a Heroku buildpack is also a Docker image. Switching base layers is a feature of the Docker image design, though not one most Docker users rely on.

Exactly this. For the docker images we use in production, we fork the corresponding git repo, build our own image, push it to our own local docker registry and pull it from there. It's fairly easy to set up, in fact.
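As a rough sketch of that flow (registry host and image names are placeholders):

  # from your fork of the upstream image's repo
  docker build -t registry.internal.example:5000/myorg/someservice:1.0 .
  docker push registry.internal.example:5000/myorg/someservice:1.0
  # on the production hosts
  docker pull registry.internal.example:5000/myorg/someservice:1.0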

Could you set up a git hook on the repositories that listens for the docker_build task to complete and then redeploys the app, pulling the new images?

That's what I'm going to move to. It's part of the learning process for me: I want the image to build when I push to my repository.

I also think moving from npm to pnpm and leveraging caching will help. But the Dockerfile itself is very simple.
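For what it's worth, a hedged sketch of what a pnpm Dockerfile with BuildKit caching might look like (base image, store path and start command are all assumptions):

  # syntax=docker/dockerfile:1
  FROM node:20-alpine
  RUN corepack enable                 # activates the pnpm shim bundled with recent Node images
  WORKDIR /app
  COPY package.json pnpm-lock.yaml ./
  # keep the pnpm store in a BuildKit cache mount so installs reuse it across builds
  RUN --mount=type=cache,target=/root/.pnpm-store \
      pnpm config set store-dir /root/.pnpm-store && \
      pnpm install --frozen-lockfile
  COPY . .
  CMD ["pnpm", "start"]               # placeholder start command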


Thanks, I will update the readme to include more info.

To answer some: it pushes a zipped directory, which includes a Dockerfile, to the server. There, the server builds the image and deploys it.

Deploying pre-built images is probably best done by using a Dockerfile whose FROM points to the pre-built image.


They're specifically talking about the resulting image built from whatever Dockerfile you had. It can be exported to a file or pushed to a registry to be reused elsewhere.
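Concretely, either route works (image names are placeholders):

  docker save -o myapp-1.0.tar myorg/myapp:1.0   # export the built image to a tarball
  docker push myorg/myapp:1.0                    # or push it to a registry for reuse elsewhere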

If you use a third-party builder like Paketo, you can use buildpacks anywhere you can use Docker images:

https://paketo.io/docs/builders/
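For example, with the pack CLI (the image name is a placeholder; check the Paketo docs for current builder names):

  pack build myorg/myapp --builder paketobuildpacks/builder-jammy-base
  docker run --rm -p 8080:8080 myorg/myapp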


Yeah, if you use the Dockerfile; but pre-built images have tags and IDs that you can use to make sure you always get the same image.
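For instance, pinning can be done by tag or, for full immutability, by content digest (names and digest are placeholders):

  docker pull myorg/myapp:1.4.2              # pin by tag
  docker pull myorg/myapp@sha256:<digest>    # pin by digest, immune to tag re-pushes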

I see a few reasons to build your own from the Dockerfile:

  1) You don't trust the image and want to build your own.
  2) You want to build something slightly different.
  3) You want an up-to-date version.
2) is often solved by building your own image with the changes, and I think 1) is solved by the Automated Builds (?), but I haven't used them yet.

Docker Hub also has a mode called "Automated Builds", which will pull down the repository, build an image from the Dockerfile and push it. However, you do have to pick one or the other (you cannot push your own image to a repository configured as an Automated Build).

I like to use a simple makefile that runs `docker build` and `docker push` commands, automatically tagging images based on the current solution version and git head. That means I can simply do `make build|rebuild|release`, and get images all properly tagged.
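Roughly the kind of commands such a makefile wraps (version and image names are placeholders):

  VERSION=2.3.1
  GIT_SHA=$(git rev-parse --short HEAD)
  docker build -t myorg/myapp:$VERSION -t myorg/myapp:$GIT_SHA .
  docker push myorg/myapp:$VERSION
  docker push myorg/myapp:$GIT_SHA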

But yeah, all the actual work is in the Dockerfile.


You can always start from the generated image (which contains the steps used to build it) and extend it in your custom Dockerfile if your project gets so big, right?

The fact is, many open source software projects provide their own Dockerfile (and in many cases, images). Using these is akin to downloading and deploying a release tarball.

We have a setup that has been working out well for us:

1. We build docker images on every commit, in CI, and tag them with the git commit sha and branch (we don't actually use the branch tag anywhere, but we still tag it); a rough sketch is below. This is essentially our "build" phase in the 12factor build/release/run. Every git commit has an associated docker image.

2. Our tooling for deploying is heavily based around the GitHub Deployments API. We have a project called Tugboat (https://github.com/remind101/tugboat) that receives deployment requests and fulfills them using the "/deploys" API of Empire. Tugboat simply deploys a docker image matching the GitHub repo, tagged with the git commit sha that is being requested for deployment (e.g. "remind101/acme-inc:<git sha>").

We originally started maintaining our own base images based on alpine, but it ended up not being worth the effort. Now, we just use the official base images for each language we use (Mostly Go, Ruby and Node here). We only run a single process inside each container. We treat our docker images much like portable Go binaries.
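A hedged sketch of the build-phase tagging from step 1 (the exact CI commands are assumptions; the image name is taken from the example above):

  SHA=$(git rev-parse HEAD)
  BRANCH=$(git rev-parse --abbrev-ref HEAD)
  docker build -t remind101/acme-inc:$SHA -t remind101/acme-inc:$BRANCH .
  docker push remind101/acme-inc:$SHA
  docker push remind101/acme-inc:$BRANCH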


Yep: Depot runs BuildKit, so the same workflows work there, just with a remote builder. You can do one of three things:

1. Just build the image — in CI that would test that the image built successfully, and the build cache would be ready for future builds. So for instance if you ran a build on a branch, but only pushed the image on the main branch workflow, the second run could just re-use the cache.

2. Build and push the image to a registry — from there you could do anything with the image in the registry (pull it from any workflow, deploy it, etc)

3. Build and pull the image to the local Docker daemon. In CI, that might be ideal for integration testing the container, like you mention.

You can also use option (2) for integration testing the container, which is especially useful with multi-platform images. Docker doesn't (yet) support storing multi-platform images locally, but it will pull the correct platform from the registry.

tl;dr — Depot supports the same options as `docker buildx build`, where you can push, pull, or just build the image depending on your needs
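In `docker buildx build` terms, the three options map roughly onto the following (the image name is a placeholder):

  docker buildx build .                            # 1: build only, warming the cache
  docker buildx build --push -t myorg/myapp:ci .   # 2: build and push to a registry
  docker buildx build --load -t myorg/myapp:ci .   # 3: build and load into the local daemon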

