So, earlier I mentioned that I'd run into some backup weirdness. I think the root cause is that Kubernetes doesn't handle resources vaporizing between runs: the control plane doesn't cope well when a node, in this case the worker, disappears instead of being restarted. It tried to kill the pods, but since the node was gone, the termination just hung. (The usual workaround here is to force-delete the stuck pods with `kubectl delete pod <name> --grace-period=0 --force` and then `kubectl delete node` on the vanished node.)

I'd also like to check upstream for fixes and outstanding pull requests, to see if anything helpful is sitting in pending commits or PRs from the various forks.

I have some initial thoughts about adding persistence, too. Right now only the data about the Kubernetes cluster itself is backed up, which means your container images will be brought back, but not any of the data inside your containers. So if your app or service is truly stateless, you're in the clear; if you need to save anything, you're out of luck, at least currently. One strategy would be to simply use an externally running database. But I'd also like to tackle external resources, like exposing an AWS volume as something a pod could make a volume claim against, inside this space.

Lastly, I changed the volume type from gp2 to standard, but I'm still getting gp2 volumes. I'm not sure whether that's cached state, something hardcoded, or the setting just being ignored. Another thing to dig into. Fun.
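For the AWS volume idea above, the shape would probably be something like this: a pre-provisioned EBS volume exposed as a PersistentVolume, plus a claim a pod can bind to. This is just a sketch using the legacy in-tree `awsElasticBlockStore` plugin (newer clusters would use the EBS CSI driver instead), and the names, sizes, and volume ID are made-up placeholders.

```yaml
# A PersistentVolume backed by an existing EBS volume.
# vol-0abc1234567890 is a hypothetical volume ID.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0abc1234567890
    fsType: ext4
---
# The claim a pod would reference in its volumes section.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

A pod would then mount `app-data-claim` as a volume, and the container data would survive the pod (and, since EBS outlives the cluster, potentially a full rebuild).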
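On the gp2-vs-standard mystery: one place worth checking is which StorageClass the claims are actually resolving to. If the cluster's default StorageClass still says `gp2`, dynamically provisioned volumes will keep coming out as gp2 no matter what else is changed. A minimal sketch, assuming the in-tree AWS provisioner, with a placeholder name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-ebs
  annotations:
    # Make this the default class so new PVCs pick it up.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: standard   # EBS magnetic; gp2 is general-purpose SSD
```

`kubectl get storageclass` will show which class carries the `(default)` marker; if an old gp2 class still has the default annotation, that would explain the "ignored" setting.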