Question on worker memory usage
We have configured a worker group in our compose file with a hard limit of 1024M. However, we still see successful flow executions whose memory peak is much higher than that. Is this expected behavior? My suspicion is that Docker sometimes decides not to kill the process when it temporarily exceeds the memory limit, but that we shouldn't rely on this behavior being predictable, and should instead scale the worker limit up to a safe percentage above the peak we observe. Is that assumption correct, or are we missing something about how this works?
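For context, here is roughly what the relevant part of our compose file looks like (a minimal sketch; the service name and image are placeholders, not our actual configuration, and we're assuming the Compose-spec `deploy.resources` form rather than the older `mem_limit` key):

```yaml
services:
  worker:
    image: our-worker-image  # placeholder for our actual worker image
    deploy:
      resources:
        limits:
          memory: 1024M  # the hard limit in question
```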