
Handling transactions in “Move Workflows (SQL)” action

Hi everyone,

I have a scenario where, in one of our workflows, we use the “Move Workflows (SQL)” action on a transition path to update multiple workflow instances—often dozens or even hundreds. Each of these updated workflows executes several actions upon transition, including making a REST API request to an external system.

The issue arises when, for example, out of 150 instances, 140 complete successfully, but the 141st encounters an error. In such a case, all changes within the workflow instances are rolled back, but the REST API calls have already been sent, potentially causing inconsistencies between systems.

From a business process perspective, a better approach would be if each instance update were handled as a separate transaction. This way, if 145 instances update successfully and 5 fail, the user would only need to address those 5 errors rather than losing all progress due to a single failure.
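To make the desired behavior concrete, here is a minimal sketch of per-instance error isolation in generic Python (the function names are hypothetical stand-ins, not the actual WEBCON engine API): each instance commits or fails on its own, and failures are collected for follow-up instead of rolling everything back.

```python
# Sketch: each instance is updated in its own try/except "transaction",
# so one failure does not undo the instances already processed.
# move_instance() is a hypothetical stand-in for the engine-side update.

def move_instance(instance_id: int) -> None:
    # Hypothetical: commits on success, raises on failure.
    if instance_id == 141:
        raise RuntimeError("external system rejected the update")

def move_all(instance_ids):
    failed = {}
    for iid in instance_ids:
        try:
            move_instance(iid)
        except Exception as exc:  # isolate each failure
            failed[iid] = str(exc)
    return failed

errors = move_all(range(1, 151))
# 149 instances succeed; only instance 141 is reported for manual follow-up.
```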

Has anyone dealt with a similar case? What potential solutions do you see that balance performance and business requirements?


Hi Damian,

Luckily, I haven't encountered a scenario where I had to execute REST requests that create/update data in an external system, but I have similar cases.

I don't know your scenario, but I may have a similar one:
- We need to update data in all our active claim workflows with data from the ERP system
- We don't know when the data is updated, therefore I created a technical workflow which is triggered via a cyclical action
- This technical workflow spawns subworkflows for each claim to get the data
- The technical workflow waits until all subworkflows are completed
- Creates an item list in which the current data in the claim workflow is compared with the retrieved data stored in the subworkflows
- Actions are defined for each row like "Update claim workflow", "No action"
- The claim workflows are updated in batches of 50 using a flow control that leads back to the previous step if any row still has the action "Update claim workflow"
- The technical workflow has a timeout on each step which "clicks" through the workflow.

While this is different from your case, the two approaches could be useful for you:
- Spawn subprocesses
- Update the workflow in batches
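The batch loop above can be sketched roughly like this in generic Python (all names are illustrative; in BPS this is the flow control looping back to the previous step):

```python
# Sketch of the batch pattern: take up to 50 pending rows per pass and
# loop back while any remain. update_one stands in for the per-row action.

BATCH_SIZE = 50

def process_in_batches(pending, update_one):
    passes = 0
    while pending:  # flow control: loop back while work remains
        batch, pending = pending[:BATCH_SIZE], pending[BATCH_SIZE:]
        for row in batch:
            update_one(row)
        passes += 1
    return passes

done = []
passes = process_in_batches(list(range(120)), done.append)
# 120 rows are processed in 3 passes of at most 50 rows each.
```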

Best regards,
Daniel

In reply to: Daniel Krüger (Cosmo Consult)


Hi Daniel,

When I was writing my post, I had a feeling you might share some valuable insights — and I wasn’t wrong. Thanks a lot for your detailed response. Subprocesses and batch updates are definitely approaches I’ve considered. While they may not be an ideal fit for my scenario, I’ll certainly keep them in mind if no better solution comes up.

I’m currently working with WEBCON BPS 2022 and was wondering — do you know if newer versions offer any native features that would allow asynchronous processing of such updates? Or perhaps someone from the WEBCON Team could share whether such a feature is planned for future releases?

I’ve also been exploring a few other ideas:
1. Using the SDK — is it possible to trigger such updates via the WEBCON REST API in a way that would give more control over individual transactions?
2. From a business process perspective, the ideal solution would work similarly to mass actions on reports — where processing is done in bulk, but a failure in one item doesn’t roll back changes already applied to others.
• Do you think embedding a report directly in the form could help achieve this?
• Or is there any way to trigger mass actions on a SQL Grid or similar control?

Any thoughts or experience around these ideas would be much appreciated.

Thanks again!

In reply to: Damian Paluch


Hi everyone,
Damian, could you share some of the causes of the error?
Are those REST API calls made in the same automation in which you run "Move workflows (SQL)", or are they made on path transition of the moved elements?

I'm wondering how a for-each loop over those instances would behave (I'm almost sure it would be slower, but maybe it would have some influence on the transactions).

In reply to: Damian Paluch


Hi Damian,

I'm not aware of any asynchronous functions or individual controls over transactions.

The idea of embedding a report in the form to trigger mass actions would surely work, but it could also mean long waiting times. I've used a report for something else in 2022, so I know it's possible.

I'm currently also working on another related area.
- We are getting data from a server which is stored in an item list with about 700 rows.
- Afterwards it's checked whether the row can be mapped to an existing workflow instance
- Depending on the result a new workflow instance is created or an existing workflow instance is updated.
- In both cases the workflow is moved to a step which doesn't have any actions.
- The target step has a timeout which will move the workflow forward and process the actual actions
- If this fails, the workflow is moved to an "Errored" state.
- After the error is reviewed, the workflow is moved back to the previous step. We couldn't use the back function in this case, because a path of the error step may have been triggered which led back to the error step.
- If the actions have been applied, the workflow is moved back to the "Active" step and is waiting for other updates.

This primary workflow is triggered at night, so it doesn't matter if it takes some time, even when the other workflows take a few minutes to complete. The biggest drawback is that we generate multiple versions for each instance whenever one is updated.
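The step flow described above is effectively a small state machine; a schematic sketch in generic Python (state names are illustrative, taken from the description):

```python
# Schematic of the error-handling flow: instances sit in a quiet step,
# a timeout moves them forward; failures go to "Errored" and return to
# the quiet step after review instead of using the back function.

def advance(state, actions_ok):
    transitions = {
        ("Waiting", True): "Active",    # timeout processed actions successfully
        ("Waiting", False): "Errored",  # actions failed
        ("Errored", True): "Waiting",   # reviewed, retry via previous step
    }
    return transitions.get((state, actions_ok), state)

s = advance("Waiting", False)  # actions fail -> "Errored"
s = advance(s, True)           # after review -> back to "Waiting"
s = advance(s, True)           # retry succeeds -> "Active"
```

Each transition here corresponds to one path in the technical workflow, which is also why every retry generates another instance version.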


Off topic:
Since you are running 2022 you should keep this post in mind:
https://daniels-notes.de/posts/2025/upgrade-2022-to-2025

Best regards,
Daniel

In reply to: Daniel Krüger (Cosmo Consult)


Hi guys!

Thank you both — your insights are extremely valuable! While none of the suggestions fully solve my issue out of the box, combining your ideas should definitely help me work out the best solution with the client — really appreciate it!

@Maksymilian — I know that for-each loop was used previously by another developer, and it actually caused even more serious performance issues at the time. However, I’ll double-check, because if you’re right that it allows breaking the transaction scope and processing each operation "asynchronously", then combining it with Daniel’s idea of offloading the actions into a subworkflow might actually be the best direction here.

I'll do some testing and will definitely come back with the results — might be useful for someone else in the future :)

Thanks again, guys!

Best regards,
Damian
