We have introduced a new pricing model: https://www.windmill.dev/pricing
Instead of vCPUs, pricing is now based on compute units (one compute unit corresponds to a 2GB worker used for a full month). Workers have a size (small, standard, large) based on their memory limit (<=1GB, (1, 2]GB, >2GB respectively), and each worker size maps to a compute-unit count (0.5, 1, 2).
With this new approach, you can stop setting vCPU limits, which are an anti-pattern, in favor of setting vCPUs only as requests, so you don't limit the compute power of your workers unnecessarily...
yeah!
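The tiering above can be sketched as a small function (the thresholds and unit counts come from the announcement; the function name is ours):

```python
def worker_compute_units(memory_limit_gb: float) -> float:
    """Map a worker's memory limit to its compute units (1 CU = a 2GB worker for a month)."""
    if memory_limit_gb <= 1:
        return 0.5  # small
    if memory_limit_gb <= 2:
        return 1.0  # standard
    return 2.0      # large
```

For example, a fleet of three 2GB workers and one 4GB worker comes to 3 × 1.0 + 2.0 = 5 compute units.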
Expandable subflows directly in flows
A long-requested feature: you can now expand flow steps directly inside the flows on all flow screens (flow editor, flow status, etc.), making it a LOT easier to work with nested flows...
Many UX improvements for the flow builder
We have lots of nice changes we haven't announced, but here is one bulk changelog for the flow editor:
- If you write a static string that looks like an expression, we will offer to convert it into an expression when you press TAB.
- You can now connect to nodes directly using a plug system, similar to the nodes for apps...
As we prepare for auto-scaling (EE), we needed more metrics to be tracked internally, such as occupancy rates over different time windows:
Occupancy rates at different timelines
You can now see occupancy rates for every worker over the 15s / 5min / 30min / all-time windows, giving you a glimpse of how busy your workers are right now...
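Occupancy over a window is simply busy time divided by window length. A minimal sketch of computing it from busy intervals (not Windmill's internal implementation):

```python
def occupancy_rate(busy_intervals, window_start, window_end):
    """Fraction of [window_start, window_end] during which the worker ran jobs.

    busy_intervals: list of (start, end) times the worker was executing a job.
    """
    window = window_end - window_start
    # Clip each busy interval to the window, then sum the overlaps
    busy = sum(
        min(end, window_end) - max(start, window_start)
        for start, end in busy_intervals
        if end > window_start and start < window_end
    )
    return busy / window
```

A worker busy for 10s of the last 20s has an occupancy of 0.5 over that window.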
You can now easily change the id of steps in flows, and every step input that depends on it will be updated automatically.
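Illustratively, the rewrite resembles renaming `results.<old_id>` references in downstream input expressions (the real editor works on the flow's parsed expressions; this regex version is only a sketch):

```python
import re

def rename_step_refs(expr: str, old_id: str, new_id: str) -> str:
    """Rewrite references to a renamed step inside an input expression."""
    # \b guards keep `results.a` from matching inside `results.ab`
    return re.sub(rf"\bresults\.{re.escape(old_id)}\b", f"results.{new_id}", expr)
```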
**Major**: See service logs directly in Windmill:
See the logs of any worker or server in the service logs of the search modal. Log counts are shown in the mini-graph, with error counts in red...
New option for flow error handlers: recover the flow if `recover: true` is returned as part of the error handler result. We also standardized `step_id` to be part of the default template.
Dynamic Select
We have a new helper function within scripts:
DynSelect_
Dynamic Select helps you create a select field with dynamic options:
- Options within the select field can dynamically change based on input arguments.
- Support for TypeScript and Python...
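The gist of a dynamic select is an options function that receives the other arguments and returns the choices. The snippet below is a schematic stand-in, not the actual `DynSelect_` signature:

```python
# Hypothetical data and function names; only the shape of the idea matches DynSelect_.
CITIES_BY_COUNTRY = {"FR": ["Paris", "Lyon"], "US": ["NYC", "Austin"]}

def city_options(country: str) -> list[str]:
    # Re-evaluated whenever the `country` argument changes, so the select stays in sync.
    return CITIES_BY_COUNTRY.get(country, [])
```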
The args in the runs page are now directly synced with the URL fragment/anchor, meaning you get shareable URLs with pre-filled args automatically just by sharing your current URL, e.g.:
https://app.windmill.dev/scripts/get/dae90ddc108ce257?workspace=windmill-labs#disable_automatic_renewal=false&send_invoice_directly=true&discount_duration=%221+year%22&discount=0&frequency=%22monthly%22&seats=1&vcpus=2&plan=%22selfhosted_ee%22&stripe_customer_data=%7B%22label%22%3A%22New+stripe+customer%22%2C%22email%22%3A%22%22%2C%22address%22%3A%7B%22line1%22%3A%22%22%2C%22postal_code%22%3A%22adsadas%22%2C%22state%22%3A%22%22%2C%22city%22%3A%22%22%7D%2C%22tax_info%22%3A%7B%22value%22%3A%22%22%7D%7D&is_quote=false&contact_email=%22bar%40acme%22&company_name=%22acme%22
It also works well with back/forward and browser history.
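Judging by the example URL, each arg value is JSON-encoded and then percent-encoded into the fragment (strings keep their quotes as `%22...%22`, booleans become `true`/`false`). A sketch of producing such a link, with the encoding inferred from the URL above rather than from Windmill's source:

```python
import json
from urllib.parse import urlencode

def args_to_fragment(args: dict) -> str:
    # JSON-encode each value so types survive round-tripping,
    # then let urlencode percent-encode the key=value pairs.
    return urlencode({k: json.dumps(v) for k, v in args.items()})
```

For example, `{"frequency": "monthly", "seats": 1}` becomes `frequency=%22monthly%22&seats=1`, matching the shape of the URL above.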
We also improved the way we deal with browser history overall, and back/forward should now work better in more places. We also made small improvements, such as: editing a script from a run detail page now prefills the test args with the job's args...
Retry #N
Retries are now easier to understand in the flow status viewer:
You can see their numbers as well as the details of the failed jobs at a glance...
__**All about logs**__
One of our large-scale customers noticed that their database disk usage was much higher than they anticipated. After investigation, we realized that our use of the database for log streaming was very suboptimal in a few ways, due to the nature of UPDATE in Postgres. When you update a row in Postgres, it keeps the prior row version around as a dead tuple until it is vacuumed. This doesn't matter in most cases, but it does if you're appending a few log lines to a 25MB log row every 500ms.
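Back-of-the-envelope, ignoring TOAST and compression: every append rewrites the whole row, leaving the previous ~25MB version dead until VACUUM reclaims it:

```python
def dead_mb_per_minute(row_mb: float, update_interval_s: float) -> float:
    # One dead row version of ~row_mb per UPDATE, (60 / interval) updates per minute
    return row_mb * (60 / update_interval_s)

# A 25MB log row appended to every 500ms leaves ~3000 MB of dead tuples per minute.
```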
We have completely refactored the way we deal with logs, and starting with v1.295.0 you should feel comfortable having extremely large logs on Windmill.
...
Workflows as Code
We are releasing Workflows as Code in beta for Python and TypeScript. No more excuses to use Airflow or Prefect: https://www.windmill.dev/docs/core_concepts/workflows_as_code
```python
from wmill import task
import pandas as pd
# ...
```
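To convey the shape of it without the wmill runtime, here is a plain-Python mimic: steps are ordinary decorated functions composed in `main`. The real `task` decorator additionally runs each step as a tracked Windmill job; this stand-in only records call order:

```python
executed = []  # stand-in for job tracking; the real runtime records jobs instead

def task(f):
    # Mimic of a task decorator: wraps the step and logs when it runs.
    def wrapper(*args, **kwargs):
        executed.append(f.__name__)
        return f(*args, **kwargs)
    return wrapper

@task
def extract() -> list[int]:
    return [1, 2, 3]

@task
def transform(xs: list[int]) -> int:
    return sum(xs)

def main():
    # The whole workflow is plain code: the call order defines the DAG.
    return transform(extract())
```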
(Sadly, I broke some eggs with the change above: if you upgraded in the last 2 days and were using the "No flow overlap" feature of schedules for flows, then your schedules were silently turned off.) Update to the latest release and re-enable those schedules to fix them.
I will write a blog post on it asap, but this weekend we achieved a very cool property for a distributed system:
- Scripts were always 100% reliable, in the sense that they would either execute to completion with a success or failure, or be retried if the worker crashed at ANY point (and I really mean any, even mid-transaction; that's the beauty of relying on the beast that is PostgreSQL). This was achieved using atomic statements for pulling jobs and writing back their progress timestamps, both regularly and on completion.
- Flows were 99% reliable but had some extremely ephemeral points in time where, if a crash happened, a flow could be stuck forever. Those events were so rare and unlikely on modern infra that we didn't prioritize improving them, but that is now done:
Flows are now guaranteed to complete once scheduled, given that enough workers are there to process them. This is done through a series of atomic statements in the right places of the finite state machine that runs the flows. If such a crash happens on the machine, the flow is guaranteed to progress in a finite amount of time and propagate the error back up, where it is then treated by error handlers if any, making Windmill 100% observable...
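The core trick described above is that claiming a job and recording its progress are single atomic statements, so a crash at any point leaves the queue in a recoverable state. A portable sketch using SQLite (Windmill's actual queries run on Postgres and differ, e.g. using `FOR UPDATE SKIP LOCKED`-style claims):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage txns
con.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, status TEXT)")
con.executemany("INSERT INTO queue (status) VALUES (?)", [("queued",), ("queued",)])

def pull_job(con):
    """Atomically claim one queued job, or return None if the queue is empty."""
    con.execute("BEGIN IMMEDIATE")  # take the write lock up front: the claim is all-or-nothing
    row = con.execute(
        "SELECT id FROM queue WHERE status = 'queued' LIMIT 1"
    ).fetchone()
    if row is None:
        con.execute("COMMIT")
        return None
    con.execute("UPDATE queue SET status = 'running' WHERE id = ?", (row[0],))
    con.execute("COMMIT")  # if the worker dies before this line, the job stays queued
    return row[0]
```

Because the claim commits as one unit, two workers can never run the same job, and a crash mid-claim simply leaves the job queued for the next worker.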
NEW [Major] 🔴 Flow & Metadata Copilot
Released on 15/02/2024 under v1.270.0
The Flow & Metadata Copilot is an assistant powered by an OpenAI resource that simplifies your script & flow building experience by populating fields (summaries, descriptions, step input expressions) automatically based on context and prompts...
NEW [Major] 🔴 Rich table display
Released on 23/01/2024 under v1.251.1
Display arrays of objects as interactive tables (search, filter, hide, view, download as csv)...