We've added
TIMEOUT=1000
on workers
Hi @heyenter, what do you mean by workers in a vacuum, and what do you mean by deleting a worker?
Are you just referring to seeing them on the workers page even though they are off?
We have workflows in progress but they don't end in success or error!
can you open one of them and see the detail of the status?
@.atoo
That looks like a failure, can you show more of the flow so that I can see why it didn't end
This error is normal since all of our workers get stuck, so this specific code didn't get executed in time. I thought this was our fault since we set the timeout too high. But the error that we can't fix is this one:
It happens from time to time, but it starts happening more once we scale up to a high number of threads
Did you increase the max connections of your Postgres?
Yes, we looked into it. @heyenter changed this value but it didn't seem to have an effect
150 yes
Maybe more is needed?
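For reference, a minimal sketch of where this is tuned (values are illustrative, not a recommendation; the right number depends on your worker count and pool size):

```
# postgresql.conf (illustrative)
max_connections = 300   # default is 100; 150 was not enough here
```

Note that `max_connections` only takes effect after a Postgres restart, not a reload.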
how many servers and workers do you use?
1 server + 25 workers on the same server*
then yes you need more
Maybe add some calculations to the documentation?
so 25 replicas of the worker container with NUM_WORKERS=1 ?
For x workers, you need to have x CPU/RAM and x PG pool connections
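The rule of thumb above can be sketched as a back-of-the-envelope calculation. The pool size per worker and the server reserve below are assumed illustrative values, not actual settings from this thread:

```python
def required_max_connections(workers: int,
                             pool_per_worker: int = 5,
                             server_reserve: int = 20) -> int:
    """Estimate the Postgres max_connections needed:
    one small connection pool per worker, plus headroom for the server.
    pool_per_worker and server_reserve are assumptions for illustration."""
    return workers * pool_per_worker + server_reserve

# With 25 workers, 150 connections is already tight:
print(required_max_connections(25))  # 25 * 5 + 20 = 145
```

This is why bumping `max_connections` from the default helps once the worker count grows.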
1 docker container with
NUM_WORKERS=25
oh
then you wouldn't need more
but you shouldn't do that
Ok I understand
I'm going to add worker containers
do you have 1 server and 1 worker container right now?
Yes !
ok, then yes please create 25 replicas with NUM_WORKERS=1
I will actually make that the default
too many people are shooting themselves in the foot
and increase your max connection
also you might not need 25 workers
unless you have 25vCPU?
😂 haha no
4 cpu
We're going to review our installation
spawn 8 replicas at most
Ok
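Summing up the advice in this thread, a compose-style sketch of the recommended layout (assuming a Windmill-style deployment; the image name and env vars are illustrative): 8 single-worker replicas for a 4-vCPU host, instead of one container with NUM_WORKERS=25.

```
# docker-compose excerpt (illustrative)
worker:
  image: ghcr.io/windmill-labs/windmill:main
  environment:
    - MODE=worker
    - NUM_WORKERS=1
  deploy:
    replicas: 8
```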