Docker
I'm having difficulty figuring out how to handle a very simple scenario.
I have a flow where one step generates a file and pushes it to S3.
In a subsequent step I need to run a Docker image that downloads that file and runs a script on it. Since I cannot mount any files, how can I download a file and make it available to the code inside the Docker container? What is the general process for running a Docker image that needs access to a file?
1 Reply
I figured out the answer to this, and it seems like something other people should know.

If you are running Windmill (e.g. on ECS or Kubernetes), your Bash or Docker scripts are themselves running inside a Docker container. To launch new Docker containers, Windmill uses the standard Docker-in-Docker approach. What this means is that when you mount a volume (like `-v /tmp/foo:/inside_new_container/foo`), the left-hand side refers to the HOST machine's `/tmp/foo`, NOT the `/tmp/foo` inside the container running your Bash/Docker script.

So the only way to manage this is to mount a HOST directory into the Windmill worker containers. Your Bash script writes the file into that shared directory, and you then run the new container with `-v /tmp/MY_SHARED_DIRECTORY_FROM_HOST:/inside_new_container/foo`. Inside the new container, `/inside_new_container/foo` will then contain the data from `MY_SHARED_DIRECTORY_FROM_HOST`.

One caveat: data in this directory should be cleaned up after use, because it WILL be visible to other tasks that might be running on the same machine.
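To make this concrete, here is a minimal sketch of such a Bash step. It assumes a host directory (here hypothetically `/tmp/windmill_shared`) is already bind-mounted into the worker container at the same path, and it uses placeholder names for the bucket, file, and image; the S3 download and `docker run` lines are shown as comments since they depend on your environment:

```shell
#!/bin/sh
set -eu

# Hypothetical host directory, bind-mounted into the Windmill worker container
# at the SAME path, so host-side and container-side paths agree.
HOST_SHARED=/tmp/windmill_shared
JOB_DIR="$HOST_SHARED/job_$$"   # per-job subdir to avoid clashing with other tasks

mkdir -p "$JOB_DIR"

# 1. Fetch the file from S3 into the shared directory
#    (aws CLI assumed; bucket/key are placeholders):
# aws s3 cp "s3://my-bucket/my-file.csv" "$JOB_DIR/my-file.csv"
echo "example data" > "$JOB_DIR/my-file.csv"   # stand-in for the S3 download

# 2. Run the new container. Because of Docker-in-Docker, the -v source path is
#    resolved on the HOST, which is why the shared directory must exist there:
# docker run --rm -v "$JOB_DIR:/inside_new_container/foo" my-image \
#     process /inside_new_container/foo/my-file.csv

# 3. Clean up: anything left here is visible to other jobs on this machine.
rm -rf "$JOB_DIR"
```

Using a per-job subdirectory (rather than writing straight into the shared root) keeps concurrent tasks on the same worker from stepping on each other's files, and makes the cleanup in step 3 a single `rm -rf`.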