
Achieving Thread Synchronization & Parallelized Execution in Java

Building multithreaded execution capabilities in a Java application

Synchronizers serve the application's thread synchronization needs: making tasks that execute in parallel threads merge at a synchronization point, letting two threads running in parallel exchange data at a point in time, and protecting critical sections of code that must remain thread-safe when parallel threads try to execute them simultaneously. In the library, the classes that implement these capabilities are CyclicBarrier, Exchanger, and Semaphore, respectively.
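As a quick illustration of these three synchronizers, the following minimal sketch (the class and thread names are illustrative, not taken from the BPMS code discussed below) has two worker threads pass through a Semaphore-guarded critical section, swap data through an Exchanger, and then merge at a CyclicBarrier:

import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.Exchanger;
import java.util.concurrent.Semaphore;

public class SynchronizerSketch {

    // CyclicBarrier: the two workers merge at a common synchronization point.
    static final CyclicBarrier barrier = new CyclicBarrier(2,
            () -> System.out.println("Both workers reached the barrier"));

    // Exchanger: two threads swap data objects once both have arrived.
    static final Exchanger<String> exchanger = new Exchanger<>();

    // Semaphore with one permit: only one thread may be in the critical section.
    static final Semaphore mutex = new Semaphore(1);

    static void criticalSection(String name) throws InterruptedException {
        mutex.acquire();
        try {
            System.out.println(name + " is inside the critical section");
        } finally {
            mutex.release();
        }
    }

    public static void main(String[] args) {
        Runnable worker = () -> {
            try {
                String name = Thread.currentThread().getName();
                criticalSection(name);                       // Semaphore-protected code
                String received = exchanger.exchange(name);  // swap data with the peer thread
                System.out.println(name + " received " + received);
                barrier.await();                             // merge with the other thread
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();
    }
}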

BPMS Implementation
The function of a BPMS is to help manage business processes through their lifecycle. Executing the processes at runtime to automate them is the primary responsibility of a BPMS server. At runtime, a typical BPMS server needs to take care of things such as executing processes that involve parallel paths, merging those paths, preserving process state along with its context, executing sub-processes synchronously or asynchronously, and handling inbound calls from outside to activities in the process.

To execute processes, the BPMS server creates runtime instances from the process definition. Each process instance represents a unique runtime occurrence of the process based on a specific process invocation (instantiation) request from the originator of the process. The originator could be a customer, a business user, another process, or any other role that can trigger process execution. For example, a customer could trigger the execution of an order fulfillment process by placing an order to buy some item. An executable process definition is typically in a standard format such as BPML (Business Process Modeling Language) or WS-BPEL (Business Process Execution Language for Web Services). Though we primarily mean BPML constructs here, the design and implementation apply as well to the equivalent WS-BPEL constructs, since WS-BPEL constructs can be transformed into their BPML equivalents and vice versa while retaining their semantics.

A BPMS server has to run each process instance independently of other process instances at runtime, since each process instance is unique and has its own context (i.e., the variables or parameters defined for the process) and state that have to be maintained throughout. The state and context of each process instance have to be stored in the process database at logical points during the process execution flow.

The process instance could be triggered by the originator synchronously (the process execution happens in the same thread as the caller's thread) or asynchronously (the process is executed in a separate thread and the caller's thread continues its own execution without waiting).

A process can contain a mix of system activities (that are executed automatically by the BPMS server) and user activities (that are performed by a user through a GUI). When the process is executed, its activities are executed by the BPMS server in the order specified in the process definition. The execution proceeds until a user activity is encountered, at which point the BPMS server makes the process wait for the user to perform the required activity.

Once the user completes the action, the user's GUI notifies the BPMS server to resume executing the rest of the process steps. Such a process would pause at every point where user interaction is involved. To save computing resources, such process instances can be hibernated/passivated (removed from runtime memory) after a preconfigured time period once they enter the waiting state, and later revived when the user's GUI indicates to the BPMS server that the user action is complete. In addition to these capabilities, BPML constructs such as "All" (Flow in WS-BPEL), which results in a split (fork) and join, "Call" of a sub-process, and "Spawn" of a sub-process have to be supported by the BPMS server.

All these functionalities require that the BPMS server be multithreaded, manage threads in a pool, and control them, i.e., a multithreaded execution model very much applies here.

Process Initiation Implementation
The execution model is to execute each process instance in an independent thread, because a process might be long-running and the caller would not want to wait that long for it to complete. The BPMS server uses the ThreadPoolExecutor class from the concurrent library to set up and manage a pool of threads that can be used to execute process instances. We set parameters such as the maximum pool size (from the configuration data), the thread keep-alive time (five minutes), and the core pool size (3) at server startup (see Listing 1).
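Listing 1 itself is not reproduced here; the following is a minimal sketch of how such a pool might be set up with the parameters mentioned above (core pool size 3, five-minute keep-alive, maximum pool size from configuration). The class name, field name, and queue choice are assumptions for illustration:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the pool setup described in the text;
// names and the work-queue choice are illustrative.
public class BpmsServer {

    private final ThreadPoolExecutor procPooledExecutor;

    public BpmsServer(int maxPoolSize /* read from configuration data */) {
        this.procPooledExecutor = new ThreadPoolExecutor(
                3,                                        // core pool size, per the text
                maxPoolSize,                              // maximum pool size from configuration
                5, TimeUnit.MINUTES,                      // keep-alive time for idle threads
                new LinkedBlockingQueue<Runnable>(100));  // bounded work queue (capacity illustrative)
    }

    public ThreadPoolExecutor getProcPooledExecutor() {
        return procPooledExecutor;
    }
}

Note that a ThreadPoolExecutor only grows beyond its core size once the work queue fills up, so the queue type and capacity are part of the sizing decision alongside the maximum pool size.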

Each process instance in memory is an instance of the Process class, which represents the runtime version of the process. This object is created the first time a request to execute the process arrives, by reading and translating the BPML process definition; it is subsequently cloned to create a new instance for each subsequent request for a new process instance execution. It contains a java.util.Vector of activities in a tree structure that mimics the activity order and nesting in the BPML process definition, and its run() method executes the activities sequentially by iterating through this vector. To make the ThreadPoolExecutor execute this process object, it needs to be wrapped in a java.lang.Runnable object and supplied to the procPooledExecutor's execute() method. So we create a ProcRunner class that implements Runnable, give it a reference variable that points to the process object, and have the run() method of ProcRunner invoke process.run() to execute the process (a sketch of this wrapper follows the snippet below).

ProcRunner procRunner = new ProcRunner(process);
// Start the process in a new thread.
procPooledExecutor.execute(procRunner);
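A minimal sketch of what such a ProcRunner wrapper might look like is shown here; the Process stand-in is only a placeholder for the article's runtime process class (not java.lang.Process), whose real run() method iterates the activity vector:

// Minimal stand-in for the article's Process class; the real run()
// executes the activities in its activity vector sequentially.
class Process {
    void run() { /* execute activities in order */ }
}

// The ProcRunner wrapper described in the text (details assumed).
public class ProcRunner implements Runnable {

    private final Process process; // the runtime process instance to execute

    public ProcRunner(Process process) {
        this.process = process;
    }

    @Override
    public void run() {
        // Delegate to the process object's own run() method.
        process.run();
    }
}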

In the case of synchronous process execution (typically a short-running process), the run() method of the process object is invoked directly in the caller's thread.

The ThreadPoolExecutor manages the pool of threads efficiently and effectively, relieving the programmer of that responsibility. Depending on the expected load on the system (i.e., the number of process instances expected to be triggered), the configuration setting for the maximum pool size can be changed so that the thread pool size is optimal.

All Activity Implementation
The "All" activity in BPML is a complex activity that means it is composed of one or more activities. An "All" activity executes all the activities that it is composed of in parallel. The equivalent construct for it in WS-BPEL is "flow." We can see "All" in terms of a fork and join, i.e., the main process execution path forks into parallel paths - as many paths as there are activities in All - executes them in parallel, and once the parallel paths complete execution, they join (merge) together with the main process path.

The join is a synchronization point that the parallel path threads converge on. The threads reaching this point first wait until the others join them, and they all synchronize at this point with the main process thread. Then their executions end, and the main path continues, with process execution moving to the next activity after the "All." From the concurrent utility, we use ThreadPoolExecutor to realize the fork and CyclicBarrier to implement the join (i.e., the synchronization).

For the fork, we have an ActivityRunner class (similar to the ProcRunner mentioned in the section above) that implements java.lang.Runnable and holds a reference to the activity object that it executes in its run() method. For each activity in the activity set of the "All" activity, we create an instance of ActivityRunner and give it a reference to the activity object. Then the ThreadPoolExecutor's execute() method is called to run it in an independent thread. To handle the join part, CyclicBarrier is used. It is a synchronization point that lets a set of threads wait for each other to reach a common barrier point. Once all the threads reach this barrier, a common task can be performed before each of them is released. The common task needs to implement java.lang.Runnable, and it is useful for updating shared state before any of the parties continue.
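A hedged sketch of how this fork/join wiring could look is shown below; the Activity type, the ActivityRunner fields, and the barrier action are illustrative assumptions rather than the article's actual code:

import java.util.List;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical sketch of fork/join for an "All" activity; type and
// method names are illustrative, not taken from the article's code.
public class AllActivityExecutor {

    // Minimal stand-in for a process activity.
    interface Activity {
        void execute() throws Exception;
    }

    static class ActivityRunner implements Runnable {
        private final Activity activity;       // the child activity to execute
        private final CyclicBarrier joinPoint; // shared barrier for the join

        ActivityRunner(Activity activity, CyclicBarrier joinPoint) {
            this.activity = activity;
            this.joinPoint = joinPoint;
        }

        @Override
        public void run() {
            try {
                activity.execute();   // run this parallel path
                joinPoint.await();    // wait for the sibling paths to finish
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    /** Forks one thread per child activity and blocks until all paths join. */
    public void executeAll(List<Activity> children, ThreadPoolExecutor pool)
            throws Exception {
        // Parties = child paths + the main process thread. The barrier action
        // runs once, after all parties arrive, and can update shared process
        // state before any of them are released.
        CyclicBarrier joinPoint = new CyclicBarrier(children.size() + 1,
                () -> System.out.println("All parallel paths merged"));

        for (Activity child : children) {
            pool.execute(new ActivityRunner(child, joinPoint)); // fork
        }
        joinPoint.await(); // main process path waits at the join point
        // Execution continues here with the next activity after the "All".
    }
}

Setting the barrier's party count to the number of child paths plus one lets the main process thread wait at the same join point, matching the behavior described above: the parallel paths and the main path all synchronize before execution moves to the activity after the "All."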

More Stories By Parameswaran Seshan

Parameswaran Seshan is an independent educator/trainer, architect, researcher, and architecture consultant in Information Technology (IT). He teaches architecture, design, and technology related courses. Prior to this, he worked as Principal (Education and Research) with E-Comm Research Lab, Infosys Technologies Limited, Bangalore, India. He has more than 15 years of work experience in the IT industry, spanning teaching, architecture, research, and programming. His areas of interest include enterprise architecture, process-centric architecture, intelligent software systems, intelligent agents, software architecture, Business Process Management systems, Web services, and Java. You can reach Parameswaran at contact {at} bitsintune [dot] com.
