Daan•2w ago

Question on worker memory usage

We have configured a worker group in our compose file with a hard limit of 1024M. However, we still see successful flow executions with a peak much larger than that - is this expected behavior? I suspect Docker is deciding not to kill the process when it temporarily exceeds the memory limit, but that we should not rely on this behavior being predictable and should instead scale our worker limit up to a safe percentage above the peak we observe. Is that assumption correct, or are we missing something about how this works?
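For context, the limit is set roughly like this (a minimal sketch; the service name, image tag, and replica count are illustrative, not our exact config):

```yaml
  windmill_worker:
    image: ghcr.io/windmill-labs/windmill:main
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "1"
          # Hard memory cap for each worker container (the 1024M from the question)
          memory: 1024M
```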
rubenf•2w ago
that's 2.8MB
DaanOP•2w ago
😄 hahaha oh my, we missed that