Utilizing a Rules Engine in Healthcare Software for Dynamic Workflows Using Drools Workbench & KIE Execution Server

As healthcare software becomes more complex, rules engines are gaining popularity as a way to direct users to different workflows based on dynamic rules. Hospitals and patients accumulate a massive amount of data every second, and IoHT, the Internet of Healthcare Things, is yet another source of massive data. All of this data needs to be processed, and decisions need to be made on anomalies in it. Classic conditional checks in code (if-then-else, switch-case, etc.) and simple applications of the Strategy design pattern no longer provide adequate means for applying rules to data at this scale. With that background in mind, this article illustrates how to use the Drools rules engine with the KIE server.

Drools Workbench

Drools Workbench, as its name suggests, is a workbench that facilitates the development of business rules by end users through a user interface. It offers a diverse set of functionality, including, but not limited to, deploying those rules as artifacts on a server. Drools itself, however, is a rules engine that can quickly process complex rules against the data it is asked to evaluate. A typical use case cited for needing a rules engine is one where the business logic for particular actions within a project either changes constantly as newer requirements arrive, or where the number of if-conditions in the codebase grows so high that it becomes too complex to handle in the imperative programming paradigm. Note that a rules engine must be very fast in order to process complex rules against the data it evaluates; the Drools rules engine fits that criterion, as it is highly optimized and evaluates rules quickly.

Below this paragraph, you will find references to the acronym KIE (Knowledge Is Everything). For the sake of clarity, we should mention how this term originated within the community. For a long time, most projects used Drools and jBPM together for complex rule processing and modeling business goals, so the community was fine with the term “drools-jbpm”, as it reflected the essence of a typical integration. However, as newer projects were integrated with Drools and jBPM, the acronym KIE was coined to highlight the similarities between the projects and to introduce a unified approach for building, deploying, and utilizing artifacts shared amongst all of them.

Workspaces and Projects

The projects built within the Workbench are stored locally in a Maven repository, from which the user can choose to deploy them as artifacts to another server. To make managing multiple projects easier, Drools offers workspaces, which it calls “spaces”. As many spaces can be created as the end user wants, and a project is created inside a space. Java classes and Drools-related files can be created through the user interface inside a project. The end user also has the option to import an existing project from a git repository.

A typical KIE project, also known as a KIE module (whether it belongs to Drools or jBPM), is a standard Maven project or module with an additional metadata file, META-INF/kmodule.xml. This file configures KIE bases and KIE sessions, defining how instances of the rule engine are created for us. A KIE base represents a group of assets, which in turn contains rules, data models, and functions. It has no runtime data associated with it; instead, KIE sessions are created from it, and data is inserted into those sessions to evaluate it against the rules defined in the KIE base. The simplest kmodule.xml file may have no content inside it, but it is still necessary to include it within the project. A KIE base is very expensive to create, but creating a KIE session out of it is a lightweight operation. Finally, a KIE container allows us to instantiate a given KIE module.
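To make this concrete, here is a minimal sketch of a kmodule.xml declaring one KIE base and one KIE session. The names and package below are purely illustrative; as noted above, an empty <kmodule/> element is also valid, in which case defaults apply:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <!-- One KIE base over an illustrative package, with one default stateless session -->
  <kbase name="exampleKBase" packages="org.example.rules">
    <ksession name="exampleSession" type="stateless" default="true"/>
  </kbase>
</kmodule>
```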

Users, Roles, and Credentials

As mentioned in our deployment instructions, two users have been pre-configured within Workbench, with the usernames ‘admin’ and ‘analyst’. Each belongs to a distinct role of the same name. As the names suggest, the admin role has the privilege to perform any operation within Workbench and has complete access to everything, including but not limited to deploying projects on remote servers. The analyst role is quite similar to that of a business analyst: it involves being able to view the code in the different repositories available on Workbench and creating new rules as newer business rules accumulate over time. Note, however, that the privileges for a specific role can be configured by an administrator.

Signing in

When the Workbench starts, end users are presented with a login page asking for credentials. Once an administrator has logged in, they can add a particular project to start working on. The projects view can be opened in either of the two following ways:

  1. Click “Design”.
  2. Click “Menu” -> Click “Projects”

Adding a new project

After that, the administrator has two options for adding a new project:

  1. Click “Add project”
    1. Enter the name for the project.
    2. Enter the description for the project.
    3. If the user is interested in having a custom group ID, artifact ID and/or version, they may click “Show Advanced Options”
    4. Click “Add”
  2. In this particular use case, the administrator already has an existing project that resides in a repository and would like to import it into the Workbench.
    1. Click the vertical ellipsis that appears on the top-left side of the page.
    2. Enter the repository’s URL.
    3. Follow either of the two steps as the next step:
      1. If no authentication is needed for the repository, the administrator can click on “Import”
      2. If authentication is needed for the repository, the administrator must provide credentials as they click on “Show Authentication Options” and click “Import” thereafter
    4. The project’s name and its description as they have been defined in the repository would appear as a card. Click on the card to select it and click “Ok” to import it.
    5. This would successfully import the project and the assets of the project would appear in front of the administrator.

Viewing Projects’ Files

To view all of a project’s files, the administrator should click on any given file in the project, then click the “>” symbol to show the “Project Explorer”. The gear icon allows for flexible views and other configuration changes inside the “Project Explorer”. The preferred configuration is to use the “Repository View” and enable “Show as Links” by clicking on them respectively.

Changing settings of an existing project

Adding Dependencies

The settings for a particular project can be changed by visiting the project’s “Settings” tab. As an example, if the administrator would like to add dependencies, either from Maven Central or from the local repository, they can do so by clicking on “Dependencies” in the left navigation bar. There are therefore two scenarios to cover:

  1. Dependency from Maven Central
    1. Click on “Add dependency”
    2. Enter the group ID.
    3. Enter the artifact ID.
    4. Enter the version.
    5. And click on “Ok”
  2. Dependency from Local Repository
    1. Click on “Add from Repository”
    2. Select an artifact to be used as a dependency.
    3. And click on “Select”

After adding dependencies, the administrator must click on “Save” to write the changes into the project. The same functionality can be achieved by clicking on any given file in the project and showing the “Project Explorer” by clicking on the “>” symbol. Visit the “/” folder, which contains a pom.xml file. An administrator can add as many dependencies as they want there and thereafter click on “Save”.
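For example, a dependency added through either method ends up in pom.xml as a standard Maven entry; the coordinates below are placeholder values:

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0.0</version>
</dependency>
```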

Adding KIE Bases and Sessions

The other most relevant change the administrator is likely to make in a Drools project is defining the knowledge bases that reside in the project and the associated sessions that are built at runtime whenever the KIE module is deployed into a KIE container. The administrator must click on “KIE bases” in the navigation bar and follow the procedure below:

  1. Click on “Add KIE base”.
  2. Enter a suitable name for the KIE Base and click on “Add”.
  3. Add the package name that maps to the KIE base.
  4. Click on “KIE Sessions (0)”
  5. A new modal dialog would open up where the administrator could define the KIE Sessions associated with this KIE Base.
  6. Click “Add KIE session”
  7. Add a name for the session.
  8. Depending upon the scenario, change the state of the session as either stateless or stateful. These are the only allowed values.
  9. Select the default session.
  10. After the administrator is done with adding session(s), they may click on “Done” to close the modal dialog. Click on “Save” to write those changes into the project.

The same functionality can be achieved by editing the kmodule.xml file that resides at “/src/main/resources/META-INF/kmodule.xml”. For manual editing, one must know how to use the following tags (without the “kie:” prefix):

  1. <kmodule>
  2. <kbase>
  3. <ksession>

Description of our rules’ repository

Our repository, at this point in time, consists of a models package, which in turn has another sub-package inside it. It also has three rules defined for the “Hemoglobin” class. The architecture of the repository that we adhere to is as follows:

The parent package, “org.cdshooks.rules.model”, consists of the models that other developers evaluate our rules against. For now, we only have a single class named “Hemoglobin”, which has a single data field named “amount”. A developer provides us an instance of this class with a specific amount set by consuming the API that the KIE Execution Server exposes (more detail on the server can be found below), and the server in turn evaluates that instance against the rules defined anywhere inside the project for that particular class. Rules for “Hemoglobin” objects have been defined in the DRL file at “/src/main/resources/org/cdshooks/rules/HemoglobinRules.drl”. The child package, on the other hand, is named “org.cdshooks.rules.model.cds”. It consists of the classes that conform to the CDS response object as defined in the CDS Hooks specification. The developer receives the CDS response object in return for consuming our API. This has already been done as an example in the rules file.

It must be noted that these model classes must also exist in the CDS provider as they are supposed to be passed on to the KIE Execution server. Therefore, before adding a new model class and its relevant rules inside Workbench, the administrator must ask the developer to integrate it within the CDS provider project.

Inside our kmodule.xml, which can be found in “/src/main/resources/META-INF/”, we have defined a single KIE base consisting of a single stateless KIE session. A session of this kind is created every single time someone requests the KIE container to process a “Hemoglobin” object (stateless sessions are disposed of immediately after rules have been evaluated in them). The consumer of the API must mention the session’s name in their request so that a session is created for the particular object and the object is processed against the rules present in the KIE base that maps to that session. If no name is mentioned, the default session is chosen for the evaluation of rules.
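A kmodule.xml matching this description could look roughly like the following sketch. The KIE base name here is illustrative; the session name matches the lookup used in the request example later in this article:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kmodule xmlns="http://www.drools.org/xsd/kmodule">
  <!-- Single KIE base covering the rules package, with one default stateless session -->
  <kbase name="rulesKBase" packages="org.cdshooks.rules">
    <ksession name="defaultStatelessKieSession" type="stateless" default="true"/>
  </kbase>
</kmodule>
```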

Adding new assets to a project

An administrator could add assets in the following two ways:

  1. Click on “Import Asset”, specify a suitable name for the asset, select the package in which the administrator would like to store the asset, and select the file they would like to import.
  2. Click on “Add Asset”
    1. For the creation of a simple POJO (plain old Java object) a.k.a a model class which contains data fields, select the card for “Data Object” and enter a suitable name for it. Choose the package that they would like to store the model class in. Click “Ok” afterward. Preferably, choose the package that consists of models’ classes.
    2. For the creation of a DRL file (which consists of rules), select the card for “DRL file” and enter a suitable name for it. Choose the package to store the rules file in. Click “Ok” afterward. Preferably, choose the package that consists of rules files.

Add/customize fields for a model class

To add and/or customize fields for a model class, the administrator must open the “Data Modeller Tool Windows” for the model class. This is done by clicking on the model class in the project’s view. To add a particular field to the class, follow the procedure mentioned below:

  1. Click on “+ add field” button.
  2. A new modal dialog would open up.
  3. Enter a suitable name for the field as per Java’s variable naming.
  4. Enter a proper label (description) for the field. (This is optional)
  5. Choose the type that is most relevant for the field by clicking on “Nothing selected” and choosing it from the drop-down list.
  6. Mark the checkbox as checked if this field is a list.
  7. If the administrator is interested in adding more fields, they must click on “Create and continue” and repeat the steps that were mentioned above. However, if this field was only needed, they must click on “Create” and thus, the modal dialog would close.
  8. Click on “Save” to persist the field(s).

These changes would be reflected in the source code of the model class. The administrator may see these changes by visiting the “Source” tab.
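As an illustration, the source generated for a model class like the “Hemoglobin” class described earlier might look roughly like the following sketch. The Workbench’s actual output may include additional annotations, and in the Workbench the class would carry a package declaration such as org.cdshooks.rules.model, which is omitted here:

```java
import java.io.Serializable;

// Sketch of a Workbench-generated model class; in the Workbench it would
// live in the org.cdshooks.rules.model package.
public class Hemoglobin implements Serializable {

    private static final long serialVersionUID = 1L;

    // The single data field described in the article.
    private float amount;

    public Hemoglobin() {
    }

    public Hemoglobin(float amount) {
        this.amount = amount;
    }

    public float getAmount() {
        return amount;
    }

    public void setAmount(float amount) {
        this.amount = amount;
    }
}
```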

Structure of a rules file

The administrator must adhere to the following structure in order to make a functional rule file (DRL file):

package package-name

imports

globals

functions

queries

rules

The package name must be replaced with the name of the package the DRL file resides in. The imports section must list the imports used in any of the code beneath it. The globals section consists of global variables, which are usually external services (such as DAOs), parameter values for rule configuration, or anything (lists, for example) that someone would like to populate with data after a particular rule’s consequence has executed. Functions are normal Java functions with slightly different syntax; they are useful where we would like to define some logic inside the knowledge base rather than keeping it in a separate external Java class. They may receive arguments and may or may not return data. Queries are used for retrieving objects that have become part of an existing session. Rules are statements that are triggered when the condition within them evaluates to true; a rule is composed of two parts, the condition upon which the rule triggers and the consequence executed if that condition evaluates to true. For more detailed information on how rule files must be written, you may have a look at the “Rule Language Reference” section of the documentation:


Sample rules file with explanation

A sample rules file from our repository has been mentioned below:

package org.cdshooks.rules;

import org.cdshooks.rules.model.Hemoglobin;
import org.cdshooks.rules.model.cds.Card;
import org.cdshooks.rules.model.cds.enums.IndicatorEnum;
import java.util.List;

global List cards;

// Lower normal limit for hemoglobin is 3.8
// Higher normal limit for hemoglobin is 6.4

rule "The percentage of hemoglobin is above normal range"
    when
        $hemoglobin : Hemoglobin( Float.compare(amount, 6.4) > 0 )
    then
        Card responseCard = new Card();
        responseCard.setSummary("This patient needs immediate attention because the amount of hemoglobin is above normal range");
        responseCard.setIndicator(IndicatorEnum.WARNING);
        cards.add(responseCard);
end

rule "The percentage of hemoglobin is below normal range"
    when
        $hemoglobin : Hemoglobin( Float.compare(amount, 3.8) < 0 )
    then
        Card responseCard = new Card();
        responseCard.setSummary("This patient needs immediate attention because the amount of hemoglobin is below normal range");
        responseCard.setIndicator(IndicatorEnum.WARNING);
        cards.add(responseCard);
end

rule "The percentage of hemoglobin lies within the normal range"
    when
        $hemoglobin : Hemoglobin( Float.compare(amount, 3.8) >= 0 && Float.compare(amount, 6.4) <= 0 )
    then
        Card responseCard = new Card();
        responseCard.setSummary("This patient's amount of hemoglobin lies within the normal range");
        cards.add(responseCard);
end

This file has three rules, which are triggered respectively when their conditions are met. Let us discuss the first rule:

rule "The percentage of hemoglobin is above normal range"
    when
        $hemoglobin : Hemoglobin( Float.compare(amount, 6.4) > 0 )
    then
        Card responseCard = new Card();
        responseCard.setSummary("This patient needs immediate attention because the amount of hemoglobin is above normal range");
        responseCard.setIndicator(IndicatorEnum.WARNING);
        cards.add(responseCard);
end

As can be seen, rules have a self-explanatory syntax. The name associated with a rule is written immediately after the keyword “rule”. The “when” portion consists of the conditions that must evaluate to true to trigger the rule, and the “then” portion consists of the consequence of the rule. In this particular rule, we have stated that whenever a Hemoglobin object with an amount greater than 6.4 is inserted into the session, a new CDS card is created with a custom summary and its indicator set to WARNING. This response card is then added to the “cards” global variable.

Executing the rule using KIE Execution Server

To execute this rule through the KIE Execution Server, we include the following in our request body:

  1. The session’s name that we would like to use for our particular hemoglobin object.
  2. A hemoglobin object that has an amount greater than 6.4.
  3. Set the “cards” list to empty.
  4. Fire all rules.
  5. Get the “cards” list.

The following JSON illustrates the request that has been made above:

{
  "lookup": "defaultStatelessKieSession",
  "commands": [
    {
      "insert": {
        "object": {
          "Hemoglobin": {
            "amount": 11.5
          }
        }
      }
    },
    {
      "set-global": {
        "identifier": "cards",
        "object": []
      }
    },
    {
      "fire-all-rules": {}
    },
    {
      "get-global": {
        "identifier": "cards"
      }
    }
  ]
}

And this is what is returned as a response:

{
  "type": "SUCCESS",
  "msg": "Container cds-hooks-rules successfully called.",
  "result": {
    "execution-results": {
      "results": [
        {
          "value": [
            {
              "org.cdshooks.rules.model.cds.Card": {
                "summary": "This patient needs immediate attention because the amount of hemoglobin is above normal range",
                "detail": null,
                "indicator": "WARNING",
                "source": null,
                "suggestions": [],
                "links": []
              }
            }
          ],
          "key": "cards"
        }
      ],
      "facts": []
    }
  }
}


As can be seen in the response, the card added to the global variable has been returned, because we requested the global variable in the JSON; we therefore know that the rule was executed. More rules covering other scenarios can be added to the existing file, or they can be added to a new rules file. A better approach is to establish separation of concerns and add/merge rules into files on the basis of some criterion, so that sets of similar rules live in the same file, rather than having a single rules file in the project holding all the rules. This approach considerably eases the management of rules.

KIE Workbench’s Embedded Server Controller

Whenever the Workbench is started, it also starts an embedded KIE Server Controller. The controller is responsible for managing servers and the containers associated with those servers. It provides a REST API that can be used to interact with KIE Server instances; the REST API also allows remote management of a KIE Server instance by adding/removing and starting/stopping containers. The complete REST API with all of its endpoints can be found here:


The REST API makes it possible to perform container-related functions from within the user interface of the Workbench. Furthermore, the controller persists its own configuration in a file in case it restarts. The controller is also able to respond to connect/disconnect requests from a KIE Execution Server.

Building and Deploying Projects

Projects can be built from the Workbench by clicking the “Build” button in the projects view. This validates the project’s files, and any errors that occur during an unsuccessful build are shown in the “Alerts” section of the page. If the administrator cannot see them, they can be displayed by clicking the “View Alerts” button. The status of a build is also shown in a small alert notification right after the administrator clicks the button; depending on the outcome, the notification is either red (unsuccessful) or green (successful). To deploy the project to a KIE Execution Server running beside the Workbench, the Workbench must first push the project’s artifact JAR to the local Maven repository it created on startup, as the execution server needs the project’s JAR to instantiate it. This two-step process is performed automatically by clicking the “Deploy” button. A KIE container instantiating the project’s JAR is then created on the KIE Execution Server; the alias of the KIE container is its artifact ID. This KIE container (termed a deployment unit within the Workbench) can be seen in either of the two following ways:

  1. Click “Menu” -> Click “Execution Servers”
  2. On the front page, click on “Deploy”

Every time a change occurs within the project and the administrator wants to deploy the project with those changes to the KIE Execution Server as a KIE container, the project must be deployed this way. The administrator might also face a conflicting-artifact issue during deployment; this is normal if the same version has been deployed earlier, and in this case the administrator must click “Override” to allow the deployment to occur. Since we also need to integrate this container with our CDS provider, and the Workbench may assign any ID to the container, we cannot allow that to happen: it would mean changing the container ID within the code of the CDS provider and re-deploying it as part of every rules change. To avoid that hassle, we have decided to go with a constant container ID (for example: 4ddcefd5-85e3-4131-bfb2-3b612831509d) and have used it within our CDS provider’s code to request the KIE container with that particular ID to execute the rules against the hemoglobin object. We therefore stop the deployment unit that has been added by the Workbench, remove it by clicking the “Remove” button, and add a deployment unit ourselves with the constant container ID. To add a new deployment unit, the administrator must follow this procedure:

  1. Click on the “Add Deployment Unit” that is available within the Server Management view.
  2. Insert 4ddcefd5-85e3-4131-bfb2-3b612831509d as the name of the deployment unit.
  3. Select the artifact that the administrator is interested in deploying with the changes.
  4. Check the checkbox that says, “Start Deployment Unit”
  5. Click “Finish”

This would successfully start the deployment unit as a KIE container on the KIE execution server.


An issue that might come up during sign-in is that the “Loading …” spinner does not go away and the browser never renders the homepage. The workaround for this issue is to add a new line containing the following text to /WEB-INF/classes/ErraiService.properties:


This issue has been documented in Drools Workbench documentation which may be found here:


Branching strategy and push/pull changes to/from the repository

Since there is no way to create pull requests within Workbench, and there are two repositories for the project (one that hosts the project online on GitHub, and one that exists within Workbench), an administrator would oversee the code changes that occur in Drools Workbench in the following way:

They would keep a copy of the online repository on their computer and run the following command to add a new remote:

git remote add git-in-kiewb {{SSH URL to git daemon}}


where {{SSH URL to git daemon}} must be replaced with the SSH URL found in the settings of the project within Workbench. Do not forget to prepend the username and an at sign (admin@, for example) before the server’s address. As an example, a complete SSH URL would be: ssh://admin@localhost:8001/MyRepo

This remote would enable the administrator to either pull the changes from various branches created on Workbench or push changes from our repository’s branches to Workbench’s repository’s branches.

As an example, if the administrator is interested in pulling all the changes that have happened in the dev branch of the Workbench’s repository and pushing them to the master branch of the online repository, they would run the following commands:

git checkout master
git pull git-in-kiewb dev
git push origin master

The set of commands above changes the current branch to master, pulls all the changes from the dev branch present in the Workbench’s repository, and pushes those changes to the master branch of the online repository.

On the other hand, if the administrator is interested in pushing the changes that exist in dev branch to the master branch of the Workbench’s repository, they would run the following commands:

git checkout -b git-in-kiewb-master --track git-in-kiewb/master
git pull
git pull origin dev
git push git-in-kiewb HEAD:master

The set of commands above creates a new branch named git-in-kiewb-master, set up to track the master branch in the Workbench’s repository, pulls changes from that branch, merges in the changes that exist in the dev branch of the online repository, and pushes the result to the master branch of the Workbench’s repository.

KIE Execution Server


The KIE Execution Server is a modular, stand-alone server component that can be used to execute rules and processes. It has been designed to provide a runtime environment for KIE modules and has a very low memory footprint, using as few resources as possible; this is what makes it easy to deploy in cloud environments. A particular KIE Server instance can create as many KIE containers as it needs and can run them in parallel. The KIE Execution Server exposes a set of functionality through a REST API, and KIE Server instances integrate with the Workbench seamlessly; more detail on this can be found in the “Managed Server” section below. The capabilities of a KIE Server can be enabled or disabled as required. They are added by KIE Server extensions, which are plugins that add capabilities; custom extensions can be written for specific needs. Two main extensions ship with the KIE Server by default:

  1. BRM: It provides a runtime environment for the execution of business rules.
  2. BPM: It provides a runtime environment for the execution of processes.

Furthermore, a noteworthy extension to mention is the Swagger extension, which allows the consumer of the KIE Server to glance at the APIs the server provides. We have disabled BPM, as we are only interested in executing rules at the moment. A complete list of the REST API endpoints the KIE Server offers can be found here:


KIE Containers and REST API endpoints

A KIE container, as defined above, is an instantiation of a KIE module. This is the only way a KIE module can actually receive requests and process them against the rules it contains. A few noteworthy REST API endpoints are listed below:

  1. GET /server

This endpoint retrieves the KIE Server’s information, including its ID, name, location, and capabilities.

  2. GET /containers

This endpoint retrieves all the KIE containers currently running on the KIE Execution Server.

  3. GET /containers/{id}

This endpoint retrieves the information for a particular container by passing its ID as a path parameter.

  4. PUT /containers/{id}

This endpoint creates a KIE container with a custom ID; the request body must contain the group ID, artifact ID, and version of the KIE module.

  5. DELETE /containers/{id}

This endpoint disposes of the container with the given ID.

  6. POST /containers/instances/{id}

This endpoint evaluates rules on a specific KIE container: the container’s ID is passed as a path parameter and a set of commands as the request body. An example of such a request and response can be found above.
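As an illustration, the request body for the PUT endpoint above typically carries the container ID and the Maven coordinates of the KIE module. The group ID and version below are placeholder values; the container ID is the constant one used throughout this article:

```json
{
  "container-id": "4ddcefd5-85e3-4131-bfb2-3b612831509d",
  "release-id": {
    "group-id": "org.example",
    "artifact-id": "cds-hooks-rules",
    "version": "1.0.0"
  }
}
```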

Server modes

A server could be set up in the two following ways:

1. Managed Server

It must be noted that Drools’ workbench only shows managed servers in its own user interface.

A managed server requires a controller to boot up. The server is configured with a property that points to a set of controller locations; when the instance runs, it attempts to connect to any of the controllers mentioned in that property. If, for any reason, it fails to connect to any of them, it will not start and will shut down instead.
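As a sketch, a managed KIE Server instance is usually started with system properties along these lines. The exact values depend on the deployment, and the controller URL shown assumes the Workbench’s embedded controller with a Workbench WAR named kie-wb; treat the hosts, ports, and server ID as illustrative:

```
-Dorg.kie.server.id=sample-kie-server
-Dorg.kie.server.location=http://localhost:8180/kie-server/services/rest/server
-Dorg.kie.server.controller=http://localhost:8080/kie-wb/rest/controller
```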

The KIE Controller offers its functionality through a REST API, which is capable of starting/stopping/adding/removing containers on the KIE Server. Any container-related change made in the Workbench causes a notification, and this notification results in a request to the controller, which in turn makes a request to the KIE Server instance. This kind of server is the best fit for many use cases, as it is configurable from an interface.

2. Unmanaged Server

In this case, there is no controller involved. Hence, this kind of server cannot be managed from the Workbench. The administrator must manage starting/stopping/adding/removing containers using the REST API that the KIE Execution Server exposes.


As you can see, it is not difficult to set up a rules engine to direct and decide software logic at run time for healthcare or any other business domain. The writer will be happy to answer any technical questions the reader may have. Please contact Technosoft should you decide to implement a rules-based engine in your environment.


Starting a Healthcare Integration Project? Get your questions answered in a free 30-minute consultancy!