I've been using Docker a lot more recently, in order to figure out exactly what is going on. Using some Dockerfiles as templates, I've built a few Docker images of my own. Early on in this process I knew less about what I was doing, and had to figure out how to install something. The image I was using was based on Ubuntu, so I apt installed Vim to edit files, since I hadn't yet figured out how to transfer files to and from a container. While it taught me how to add things to my image automatically, it also pulled in a bunch of dependencies and shared objects, bloating the image.

Also, while you can gerfingerpoke containers while you are developing, I'd lean towards adjusting the way you create your images instead: you want changes that persist, while whatever you do inside a container is bespoke to just that instance.

Figuring out how to do file transfers between the host and the container was a big win. It lets me use all my Unix tools knowledge, and all of the host tools like vi, without relying on what is deployed in the container. It was a bit of a mindset shift, since I'm used to getting in remotely and tweaking things in place.

Logging is another bit of fun, since the main process usually dumps its logs as stdout/stderr into the Docker log mechanism. Once you get this, you'll know where to look.

It took me a while to figure out images and image management. My initial impression was that everything had to go through Docker Hub, which Docker makes very easy, but I was concerned about putting private builds on a public repository. Soon enough I figured out that Docker also makes it easy to run a local registry, even if it isn't quite as convenient as Docker Hub. There is enough to image management that it could be its own blog entry and/or vlog, but I will say for now that I learned a lot.

I've done a lot, but there is plenty left to do: multi-stage builds, persistent storage, and even tagging. Still, I have a decent understanding now, and that helps a great deal. A few rough sketches of the commands I've been talking about follow.
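For the record, the Vim install was a one-liner in the Dockerfile. This is a minimal sketch rather than my exact file; the base tag is an assumption, and the cleanup flags are what I'd use now to keep the bloat down:

```dockerfile
# Hypothetical minimal version; ubuntu:22.04 is an assumed base tag.
FROM ubuntu:22.04

# Installing an editor drags in dependencies and shared objects.
# --no-install-recommends and clearing the apt cache limit the bloat.
RUN apt-get update && \
    apt-get install -y --no-install-recommends vim && \
    rm -rf /var/lib/apt/lists/*
```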
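By gerfingerpoking I mean shelling into a running container. Roughly like this, with made-up container and image names:

```sh
# Poke around inside a running container; fine for exploration,
# but anything you change here dies with the container:
docker exec -it my_container bash

# Changes you want to keep belong in the Dockerfile; rebuild instead:
docker build -t my_image:dev .
```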
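The file transfer itself comes in two flavors, copying and mounting. A sketch, with made-up names and paths:

```sh
# Copy a file out of a container, edit it with host tools, copy it back:
docker cp my_container:/etc/app/app.conf ./app.conf
vi ./app.conf
docker cp ./app.conf my_container:/etc/app/app.conf

# Or skip the copying: bind-mount a host directory when starting the
# container, so vi on the host edits the same files the container sees:
docker run -v "$PWD/conf:/etc/app/conf" my_image
```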
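Once you know the logs land in Docker's own mechanism, finding them is just a matter of asking the daemon ("my_container" is again a placeholder):

```sh
# Everything the main process has written to stdout/stderr so far:
docker logs my_container

# Follow live, starting from the last 50 lines:
docker logs -f --tail 50 my_container
```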
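And the local registry really is easy to stand up, using Docker's own registry image. A sketch; the image being pushed is made up:

```sh
# Run the official registry image locally (it listens on port 5000):
docker run -d -p 5000:5000 --name registry registry:2

# Retag an image against the local registry, then push it there:
docker tag my_image:latest localhost:5000/my_image:latest
docker push localhost:5000/my_image:latest
```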