I have a problem: when importing dependencies in Bun scripts, they don't run when testing. The spinner just keeps spinning and the last output is "bun install". The same problem occurs when deploying a script. I can resolve the issue by restarting the containers; the script then runs once correctly, then the error returns. None of the logs we checked help. Any ideas or hints on what the issue might be or where we should look? Thanks for the help!
Running an AI Agent in a flow, I'm getting an error like: ExecutionErr: execution error: Internal: AI agent reached max iterations, but there are still tool calls @ai_executor.rs:721:32
Hi, how can I reduce the number of compute units? If I move some of my scripts (which currently execute on the default worker) to a heavy worker, will that reduce the usage of the default worker?
Hi all, I tried the free self-hosted version installed via the Helm chart on my own Kubernetes cluster, but every time I restart it, I lose all my configuration (e.g. it asks me to change the password again, and workspaces are lost).
I'm looking for some help and advice regarding architecture design using Windmill.
We are a security company, so isolation is our topmost concern. We love the permission system built into each workspace, but we're having some difficulty finding the best infrastructure design.
We would like separate dev and prod environments. Our dev workers would not run flows or long-running tasks; they'd just be used for testing scripts and dev work. Our prod instance would auto-scale and use schedules for long-running jobs.
Some of our scripts are shared and common, but some are unique/customized per tenant of our apps. We'd want granular access so we can assign some of our users to some of these tenants.
Our main decisions are:
Should we stick to one Windmill instance, with separate workspaces (dev/prod), separate worker groups for each environment, and workers restricted to worker groups via tags, using folders in prod for each tenant and folder IAM for user permissions;
or deploy totally separate Windmill instances, use one Windmill workspace per tenant, and manage IAM at the workspace level?
Our main concerns are:
billing/license: if we go with one instance, will our dev workers be billed as prod? I noticed the license key is per instance, but I can run a worker with a dev license on the same instance, so I'm not sure how that would work for billing/pricing.
we would like fast CI/CD and a fast SDLC, and maintaining a workspace per client (approach 2) seems like a nightmare. I see the forking option, but I'm not sure whether it would make migrating/syncing code and changes across multiple workspaces any easier.
how to tie git sync into either of the architectures above in a way that scales and doesn't become a nightmare.
We are on the EE plan and are open to any infrastructure design. Currently our setup is on AWS with ECS.
I'd really appreciate any guidance or words of wisdom about this. Thanks!
Hi. How can I split a Golang script into multiple files? In Python I can import files using 'f.<path_to_folder>.script_name', but I cannot find an example of how to import another package (that is also a script in Windmill) in Golang.
I am using a cloud-hosted Windmill environment, and when I click on State under the State & Context panel on the left of the screen, the option doesn't function: I'm not able to add the variable to use in my app.
Can anyone guide me or help me resolve this problem?
That host (windmill-workers-default-85c5d6db75-2fx6z, IP 10.39.162.225) is the worker pod’s internal cluster address. Alpaca needs the public egress IP that the Windmill cluster uses to reach the internet—10.39.x.x won’t be routable (it’s RFC1918). Please ask Windmill support (or check your cluster’s NAT gateway) for the external IP/CIDR blocks associated with that worker group; that’s the value Alpaca must whitelist. Once you have the public IP and get it approved, the same flow should stop returning 401s.
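To illustrate the point above, here is a minimal sketch (using only Python's standard library) of checking whether an address is even whitelistable; the pod IP from the message falls in the RFC1918 10.0.0.0/8 range, so it can never be the egress IP an external service sees:

```python
import ipaddress

def is_publicly_routable(ip: str) -> bool:
    """Return False for RFC1918/private, loopback, or link-local addresses,
    which an external service like Alpaca cannot meaningfully whitelist."""
    addr = ipaddress.ip_address(ip)
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

# The worker pod's cluster IP is private, so it is not the egress IP.
print(is_publicly_routable("10.39.162.225"))  # -> False
```

The actual public egress IP still has to come from the cluster's NAT gateway configuration, as described above; this check only rules candidates out.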
I would like to write a small script that returns a PNG with a QR code. Is it possible to return the result in binary format rather than base64-encoded? I tried a custom return content type, but the image is still returned as a base64 string.
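Since job results are serialized as JSON, binary payloads generally come back base64-encoded. As a workaround, the consumer can decode the string back into raw PNG bytes; a minimal sketch (the sample payload below is just the PNG magic bytes standing in for a real QR-code image):

```python
import base64

def decode_png_result(result_b64: str) -> bytes:
    """Decode a base64-encoded script result back into raw PNG bytes."""
    return base64.b64decode(result_b64)

# Hypothetical result string; a real one would be the full QR-code image.
sample_result = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode()
png_bytes = decode_png_result(sample_result)
print(png_bytes[:4])  # -> b'\x89PNG'
```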
Following the instructions to set up OAuth here appears to automatically enable GitHub SSO. However, disabling the SSO clears out the OAuth configuration. Is it possible to decouple these two settings?
Ideally I'd like to be able to create GitHub resources without also allowing users to use GitHub as an authentication method.
Hi again, we have a production workspace with operators that are given read-only access to all of our folders.
Whenever those operators open a script/flow that uses a dynselect field, they get automatically disconnected from Windmill and are unable to access/start any of these flows. We had to give them the developer role to keep our operations running, but as you can imagine, this is not ideal.
We're currently using a self hosted instance (v1.525.0).
Were you aware of this issue, and has it been fixed in the latest versions? Thanks for your help!
Hi, we're facing issues integrating MS Teams with our self-hosted Windmill instance. We were able to create the bot via the Teams portal and configure it at the instance level in Windmill. We can select the Teams channel, and that works. However, once the default flow is selected to perform a test, we run the following command in the Teams channel: '@windmill /windmill echo test', and nothing happens. Looking at the Windmill runs history, it appears no script/flow was triggered.
Our OAuth/SSO setups are connected to different Azure tenants. Could that be the root cause?
Second question: does the integration also work in Teams chats (group chats)?