Passing file from worker container into new Docker container
I was planning to create a video describing how to create custom user interfaces in Windmill, for managing Kubernetes clusters.
I need to pass the `kubeconfig.yml` file, containing the authentication details, from the Windmill worker container into a new Docker container.
Previously, I've simply installed software (such as `kubectl`) into the worker, but using a container helps make things more reliable and less prone to conflicts.
The worker container doesn't appear to have any mounts to the host, except for the Docker unix socket.
I am struggling to think of any way to pass `kubeconfig.yml` as an environment variable or file.
Any ideas?
EDIT: I suppose one option would be to modify the `docker-compose.yml` to mount a named volume to the worker containers. I could then use the same named volume in my "child" containers from a Bash script. I would prefer to avoid modifying the default deployment, though.
Either mount it, or scp it into the container
Mount isn't an option, because the file is protected inside the worker container, not on the host filesystem.
I doubt most container images have OpenSSH server installed, so SCP probably isn't a great option.
However, you did just give me an idea .... instead of doing `docker run`, I could:
- Start with `docker create`,
- Copy the file from the worker container into the new container with `docker cp`, and then run `docker start`,
- Delete the container
Something like this .... testing it out now
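Sketched as a small helper, the steps above might look like this. The image name (`bitnami/kubectl`) and in-container destination path in the usage example are illustrative assumptions, not anything from the thread:

```shell
#!/bin/sh
# Sketch of the create / cp / start / rm sequence described above.

run_with_file() {
  image="$1"; src="$2"; dest="$3"; shift 3
  # 1. Create the container without starting it, so we can inject the file first.
  cid=$(docker create "$image" "$@") || return 1
  # 2. Copy the file from this (worker) container's filesystem into the new one.
  docker cp "$src" "$cid:$dest"
  # 3. Start it, attached so we see the output.
  docker start --attach "$cid"
  status=$?
  # 4. Clean up the stopped container.
  docker rm "$cid" >/dev/null
  return $status
}

# Example (requires Docker and a kubeconfig.yml in the current directory):
# run_with_file bitnami/kubectl ./kubeconfig.yml /tmp/kubeconfig.yml \
#   get pods --kubeconfig /tmp/kubeconfig.yml
```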
This worked!!!
All you have to do now is pipe the JSON result into `jq -c .` so that you get the entire response on a single line.
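For illustration, here's what `jq`'s `-c` (compact) flag does to pretty-printed JSON (the sample object is made up):

```shell
# jq's -c flag collapses pretty-printed JSON onto a single line,
# which is the shape you want when handing a result back as one line.
printf '{\n  "name": "worker",\n  "ready": true\n}\n' | jq -c .
# → {"name":"worker","ready":true}
```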
@pcgeek86 sorry for asking a potentially stupid question after watching your last video: why couldn't you simply mount the `kubeconfig` file and use `docker run`, just like your previous video on `ffmpeg`? BTW, really enjoying and learning a lot from your videos!

It's not a stupid question at all. There's actually a very good reason for it.
In the FFmpeg video, we recorded directly from a network stream to a file, by creating a new container with a filesystem mapping to the host. The second container running s5cmd also had a filesystem mapping to the host.
In this kubeconfig example, the variable is being captured to a file inside the worker container's filesystem. The worker doesn't have a filesystem mapping to the host's filesystem, so the kubeconfig file is "trapped" inside the worker's filesystem. There's no way to get the file onto the host, because there are no volume or filesystem mappings. Also, you can't mount a path from one container's filesystem directly into another container's.
The solution is to create a new container, copy the file into it, and then start the container.
It's very confusing to mentally visualize, because the Windmill worker is actually accessing the Docker Engine on the host. Any Docker commands executed "inside" the worker are actually spawning new containers on the host. They're siblings of the worker container, not children.
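One way to see the sibling relationship, assuming the host's socket is mounted at the default `/var/run/docker.sock` path inside the worker:

```shell
list_host_containers() {
  # Any docker command run from the worker goes to the HOST's engine via the
  # mounted unix socket, so this listing includes the worker container itself
  # alongside any containers it spawned -- all siblings on the host.
  docker -H unix:///var/run/docker.sock ps --format '{{.Names}}'
}
```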
Thank you so much! Now I get it.
I believe I found a workaround/hack (after finally understanding the issue thanks to your video and explanation above!):
`echo "secret key!" | docker run --rm -i -v /tmp:/root busybox sh -c 'cat > /root/key.txt'`

That's a brilliant idea, @tserafim !! I didn't even think about piping data as input to the Docker container. Great find!
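Applied to the kubeconfig case, the same stdin trick might look something like this. The image name, the `--entrypoint` override, and the in-container path are all illustrative assumptions:

```shell
pipe_kubeconfig() {
  # Stream kubeconfig.yml over stdin into a fresh container, write it to a
  # temp path inside that container, then run kubectl against it. No volume
  # mounts or docker cp needed.
  cat kubeconfig.yml | docker run --rm -i --entrypoint sh bitnami/kubectl \
    -c 'cat > /tmp/kubeconfig.yml && kubectl --kubeconfig /tmp/kubeconfig.yml get pods'
}
```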
Thanks! I was playing with ffmpeg a couple of months ago, and my script piped the video output to stdout by default, so it just worked when I tried it with Windmill. After understanding your thread and video, that came to mind and I decided to try it backwards.