An Alternative for Supporting Long-Running Processes

There are a lot of scenarios where our applications need to support long-running processes. In this post, I will share a good alternative for implementing them.

I am talking about a lightweight library I am working on that provides a Workflow Execution Engine: TheFlow.

This post is also an invitation for collaboration. 😉 I would appreciate some help with the code and feedback. In a few months, this code is going to be in production (so it is not a demo project, and I will not stop working on this).

Starting simple

If you are adopting Microservices, probably the best example of a long-running process would be a Saga:

Anyway, I wouldn’t like to be that specific. So, let’s start with something simpler. Please, consider the following process:

Even if you don’t fully understand BPMN, I expect you can follow the previous diagram.

This process could be implemented as a simple procedure, or as a microservice. Depending on the business complexity, it could take anywhere from milliseconds to minutes to complete (consider an omnichannel scenario).
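To make "simple procedure" concrete, here is a minimal sketch of what that version might look like (the service and interface names are illustrative only, not part of any library mentioned in this post):

public interface IStockService
{
    bool IsInStock(string productCode);
    void Pick(string productCode);
}

public interface IShippingService
{
    void Ship(string productCode);
}

public class OrderFulfillmentService
{
    private readonly IStockService _stock;        // hypothetical dependency
    private readonly IShippingService _shipping;  // hypothetical dependency

    public OrderFulfillmentService(IStockService stock, IShippingService shipping)
    {
        _stock = stock;
        _shipping = shipping;
    }

    public void OnProductOrdered(string productCode)
    {
        // the whole process runs as a single, blocking call chain
        if (_stock.IsInStock(productCode))
        {
            _stock.Pick(productCode);
            _shipping.Ship(productCode);
        }
        else
        {
            // out of stock: notify the customer, trigger replenishment, etc.
        }
    }
}

This is fine while every step completes quickly; the trouble starts when steps take hours or days, or when the process has to survive a restart.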

How I would recommend implementing it

Implementing a long-running process is not a trivial task. There are a lot of challenges and, for sure, hidden difficulties. That is precisely what I am trying to solve with this project.

If you are writing software that needs to support long-running processes, I strongly recommend adopting a workflow execution engine like TheFlow.

Why a new Workflow Execution Engine?

To be direct and straightforward: there are plenty of options out there, but none of them work the way I expect. Besides that, I have been thinking about writing a Workflow Execution Engine for years. Some days ago, I concluded that I needed to get it out of my mind by writing it down, or I would go crazy.

Some criteria…

I want to create something handy. My first challenge was to create a Workflow Execution Engine that could work as a Saga Execution Coordinator (please read the previous post of this series for context), and it is almost there. Also, I firmly believe that one of the best ways to decompose an application into smaller components (even Microservices) is by analyzing its business processes.

The Workflow Execution Engine you adopt should support both Sagas and Business Processes (again, that is not trivial).

Here are some design goals that I am following while implementing my project:

  • To remain fully compatible with BPM and BPMN
  • Be easy to get and distribute, either embedded in a host application or running as a standalone server
  • To provide support for long-running (hours or days) processes
  • Be fault-tolerant, safely persisting execution data to disk (thanks, RavenDB)
  • Be easy to scale
  • Be flexible, easy to configure and expand

Show me the code!

The code that follows shows how the process expressed in the previous diagram would be written using my library:

var model = ProcessModel.Create()
  // start event, first activity and the decision gateway
  .AddEventCatcher<ProductOrdered>()
  .AddActivity<CheckStockActivity>()
  .AddExclusiveGateway("InStock?")
  .AddSequenceFlow("OnProductOrdered", "CheckStock", "InStock?")

  // path taken when the product is in stock
  .AddActivity<PickStockActivity>()
  .AddActivity<ShipOrderActivity>()
  .AddEventThrower<InStockEventThrower>()
  .AddSequenceFlow("PickStock", "ShipOrder", "InStock")
  .AddConditionalSequenceFlow("InStock?", "PickStock", true)

  // path taken when the product is out of stock
  .AddEventThrower<OutOfStockEventThrower>()
  .AddConditionalSequenceFlow("InStock?", "OutOfStock", false);

As you can see, the fluent interface describes each step of the process.
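For reference, the event and activity types referenced by the model (ProductOrdered, CheckStockActivity, and so on) are regular C# classes. The shapes below are only a hypothetical illustration; the actual contracts the library expects are defined in the TheFlow repository:

public class ProductOrdered              // the event that starts the process
{
    public string ProductCode { get; set; }
}

public class CheckStockActivity          // one of the activities in the model
{
    // hypothetical execution method; TheFlow defines the real activity contract
    public void Execute(string productCode)
    {
        // query the inventory system here
    }
}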

This model would be referenced by a “Process Manager”, which uses it to create a new instance whenever it receives a “product ordered” event.

var manager = new ProcessManager(
  new InMemoryProcessModelsStore(model),  // where the process models are kept
  new RavenDbInstancesStore(store)        // where running instances are persisted
);

manager.HandleEvent(new ProductOrdered { ProductCode = "001" });
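The store variable passed to RavenDbInstancesStore is not defined in the snippet above; it is assumed to be a regular RavenDB document store, created the usual way (the URL and database name below are placeholders, and the snippet assumes the RavenDB 4.x client):

using Raven.Client.Documents;

var store = new DocumentStore
{
    Urls = new[] { "http://localhost:8080" },  // placeholder server URL
    Database = "TheFlow"                       // placeholder database name
};
store.Initialize();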

The Process Manager is responsible for coordinating the execution of the activities while preserving the execution history (what is going on, which activities are done, which activities are pending, and so on).

Handling activity completions and failures

Each activity takes time to complete: for example, calling a remote service, waiting for human intervention, and so on.

After firing an activity, the process manager saves the state of the process instance and waits for a “wake up” notification. Whenever a new message arrives, the manager loads the related instance, updates the history, and moves on.
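As an illustration, a worker that finishes the stock check could report back through the same HandleEvent entry point shown earlier. The StockChecked event below is purely hypothetical and only shows the “wake up” idea:

public class StockChecked                 // hypothetical completion event
{
    public string ProductCode { get; set; }
    public bool InStock { get; set; }
}

// somewhere in the worker that performed the check:
manager.HandleEvent(new StockChecked { ProductCode = "001", InStock = true });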

If a failure is detected, the manager starts a compensating path (which is what enables Saga support).

What about the data?

No data is lost! Each process instance preserves all data related to the process execution (needless to say, RavenDB makes that simple).

For example, each event (start or intermediate) captured by the manager has the related data saved with the instance information in the execution history. Besides that, each activity can attach data through Data Objects or Data Stores.

What about parallel execution?

Some activities could (and should) run in parallel. Consider the following process:

My library supports multiple execution paths running in parallel for the same process instance. Each route is controlled by an execution token provided by the manager. In the example, the parallel gateway that splits the execution would generate two tokens. Then the parallel gateway that merges the execution waits for all tokens before continuing.

For example:

var model = ProcessModel.Create()
    // start event and the activity that runs before the split
    .AddEventCatcher("start")
    .AddActivity("msgBefore", () => { })
    .AddParallelGateway("split")
    .AddSequenceFlow("start", "msgBefore", "split")
    // left branch: an activity followed by an intermediate event
    .AddActivity("msgLeft", () => { })
    .AddEventCatcher("evtLeft")
    // right branch: a single activity
    .AddActivity("msgRight", () => { })
    .AddParallelGateway("join")
    .AddSequenceFlow("split", "msgLeft", "evtLeft", "join")
    .AddSequenceFlow("split", "msgRight", "join")
    // after the join, one last activity and the end event
    .AddActivity("msgAfter", () => { })
    .AddEventThrower("end")
    .AddSequenceFlow("join", "msgAfter", "end");

When instantiated and running, the process described in the previous code will wait for the intermediate event on the left branch (evtLeft) before completing.

Concluding…

There are a lot more features and things that I would love to share about this project. I will do it in future posts.

For now, I would like to invite you to check out the code and give me some feedback. Also, I am looking for partners who would like to use it in production.
