Daan · 2mo ago

Question on scaling strategy in Docker

We have defined a set of worker groups (small / medium / large) in our compose file, in line with our license, each with its own memory cap. We are running into issues where processes are sometimes killed for excessive memory use even though sufficient memory is still available on the system. We would see a big reduction in complexity and better use of resources if we could instead give Docker / Windmill a total memory pool (e.g. 8 GB) and define a single worker group that handles all jobs, from small to large, as long as total memory use across all workers stays within that 8 GB, in line with our EE license. Is this flexibility possible?
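For reference, a minimal sketch of the kind of per-group setup we mean; the service names, image, replica counts and limits below are illustrative assumptions, not our actual compose file:

```yaml
# Illustrative sketch only: memory-capped worker groups (a "medium" service would follow the same pattern).
# Service names, image tag, replica counts and limits are assumptions for this example.
services:
  windmill_worker_small:
    image: ghcr.io/windmill-labs/windmill-ee:main
    environment:
      - WORKER_GROUP=small          # these workers join the "small" group
    deploy:
      replicas: 4
      resources:
        limits:
          memory: 1024M             # hard cap per container; a job exceeding it gets killed

  windmill_worker_large:
    image: ghcr.io/windmill-labs/windmill-ee:main
    environment:
      - WORKER_GROUP=large
    deploy:
      replicas: 1
      resources:
        limits:
          memory: 4096M             # capacity stays reserved even while no large job is running
```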
rubenf · 2mo ago
You can't pay for less than 1 GB per worker, so if you run 16 workers against those 8 GB, we would count them as 16 small workers = 16 * $25.
Daan (OP) · 2mo ago
Hi Ruben, OK, that's clear. Our question is not so much about having less than 1 GB per worker, but about flexibly using the full capacity for whichever job is next in line on the server. I understand that we are responsible for deciding upfront which memory constraints are assigned to which worker pool, and which jobs are assigned to which worker. That is the part causing issues: some flows have a very small footprint most of the time but can occasionally use much more memory, and we would prefer not to have to reserve that capacity in a "large worker" pool.
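To illustrate the kind of upfront assignment we mean on the worker side (the tag name below is something we made up for this example; jobs would need the matching tag set on the Windmill side):

```yaml
# Illustrative sketch: restricting which jobs a worker group picks up via tags.
# The tag name "large_jobs" is an assumption for this example.
services:
  windmill_worker_large:
    image: ghcr.io/windmill-labs/windmill-ee:main
    environment:
      - WORKER_GROUP=large
      - WORKER_TAGS=large_jobs      # this group only pulls jobs tagged "large_jobs"
    deploy:
      replicas: 1
      resources:
        limits:
          memory: 4096M
```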
rubenf · 2mo ago
It's somewhat fine to use a global pool. We do not recommend it, because you then have no guarantee that one job won't starve the others, but that's up to you. With respect to our license, if you do not assign a memory limit per worker, we will have to base the pricing on the number of workers and the size of the global pool.
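For completeness, a sketch of the global-pool variant being discussed (names and replica count are illustrative). Note that Compose memory limits apply per container, so a shared 8 GB cap across a group of workers is not directly expressible; the "pool" is effectively the memory of the host or VM the workers run on:

```yaml
# Illustrative sketch: one generic worker group sharing the host's memory.
# No per-container memory limit is set, which is the case where (per the reply above)
# pricing falls back to the number of workers and the size of the global pool.
services:
  windmill_worker:
    image: ghcr.io/windmill-labs/windmill-ee:main
    environment:
      - WORKER_GROUP=default        # single group handling all jobs, small to large
    deploy:
      replicas: 8                   # 8 workers contending for the host's shared 8 GB
      # no resources.limits.memory: each worker may use whatever is free on the host,
      # so one heavy job can starve the others, as noted above
```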
Daan (OP) · 2mo ago
Great, thanks!
