Valery Silaev's Blog

if it ain't broken we'll break it

It’s hard to start this announcement post with the regular “I’m pleased to announce…” prologue. It was a painful process, without any pleasure. A lot of disappointments. A lot of frustration with the state of Java past version 9 and all the headaches Jigsaw (the module system) has introduced. The Java ecosystem is broken. This is a fact. Non-modular, outdated libraries. Broken build tools. Zero support from IDEs for multi-release projects. But nevertheless… Per aspera ad astra…

The new 2.5 version of the Tascalate JavaFlow is out! Now it’s fully compatible with Java 9+:

  1. Bytecode modification tools (Maven plugin, Ant task, Java agents, specialized classloader) now understand all the new bytecode features up to Java 11 inclusive (nest-based access for inner classes).
  2. All the artifacts are multi-release JAR-s: they still work with old Java 1.6-1.8 AND may be used as full-fledged Java 9 modules.

Besides this, the run-time Java agents were seriously reworked. Now they behave correctly when attached dynamically and may re-transform already loaded continuable classes without errors / JVM crashes. Additionally, all the hard-coded detection of the “core” Java classes is replaced with ClassLoader checks — anything loaded by a (recursive) parent of ClassLoader.getSystemClassLoader() is considered a “core” Java class. All the explicit marker interfaces’ names are externalized, and the mechanism is made extensible – these interfaces are used to skip enhancing classes that are “around advices” (like CDI / JEE interceptors). So there are no unnecessary “magic” constants left inside the code.
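For the curious, here is a minimal sketch of the “core class” check described above (it is not the library’s actual code):

// A minimal sketch, not the library's actual code: a class counts as a "core"
// Java class if its defining ClassLoader is the bootstrap loader (null) or a
// recursive parent of the system class loader.
public class CoreClassCheckSketch {
    static boolean isCoreJavaClass(Class<?> clazz) {
        ClassLoader defining = clazz.getClassLoader();
        if (defining == null) {
            return true; // loaded by the bootstrap class loader
        }
        for (ClassLoader cl = ClassLoader.getSystemClassLoader().getParent(); cl != null; cl = cl.getParent()) {
            if (cl == defining) {
                return true; // defining loader is a (recursive) parent of the system loader
            }
        }
        return false;
    }
}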

What will follow next? I’m planning to run several iterations past the 2.5.0 release to fix minor issues, like the strange behavior of continuable Java / CGLib proxies, the degraded performance of the proxy transformation agent, and alike. No major API changes are expected. Afterwards, release 3.0.0 should come out by the end of March 2019. There I will do a major renaming of packages and artifacts — all org.apache.commons.javaflow.* packages will be renamed to net.tascalate.javaflow.*. I can’t keep this legacy naming clash any longer – when artifact/module names are different from the package names, it’s a pretty unsafe combination in the post-Jigsaw world.

P.S. It’s the first time I have ever released a library that depends on another library that has only a beta-release version — SLF4J. I see no better option today. Moreover, the build depends on the ModiTect plugin — only a beta version is available as of now. And, finally, I was forced to use a snapshot (SNAPSHOT, Karl!!!) version of the Maven JavaDoc plugin – because the version released in May 2018 is total crap for multi-release projects, just awful and broken in each and every place. Impressed? Here is how the Java ecosystem looks after Jigsaw…

P.P.S. On a positive side, I’d like to say thanks to the team behind the ModiTect plugin. It saved me a lot of time and made the transition to Java 9+ possible at all. Seriously, I don’t know what I would do without it. Small, simple, focused — just an excellent tool for the task! My respects to the authors and contributors of this excellent project!

It has been long since I’ve posted anything to this blog, and even longer since my last post about the Tascalate JavaFlow library… Meanwhile, the library has steadily progressed up to release 2.4.0, and there are some interesting stories I’d like to share.

First, Tascalate JavaFlow is used as an engine for my new Tascalate Async / Await library. Yep, the same async / await you can find in C# since 5.0 and in ECMAScript (pronounced as “JavaScript”) since ECMAScript 2017, and somewhat similar functionality in Kotlin. And it doesn’t stop with raw copying of features! Scheduling is taken seriously (better than in C#), cancellation is addressed (more convenient than in C#), even asynchronous generators (C# 8, ECMAScript 2018) are implemented! If you are new to the subject, please start your exploration with the description of the colored function problem — it’s an excellent explanation of how async / await solves some of the issues of asynchronous callback hell.

The Tascalate Async / Await library deserves its own series of articles; for now, please read the documentation and examples available on the project home page. What is relevant to this article is that Tascalate Async / Await was started as a proof of concept that it’s possible to build something really useful with Tascalate JavaFlow and that the API design was done right. And that there are no critical bugs left, for sure…

Obviously, there were numerous ones! Eric Sink, in his book Eric Sink on the Business of Software, wrote, “I like the smell of a freshly killed bug.” Though I’m not a fan of Asian cuisine, I still remember not only the smell, but also the taste of all the bugs discovered on the Tascalate JavaFlow release path from version 2.0 through 2.4! And now I’m happy to announce that Tascalate Async / Await is pretty alive and well… erghhh… that Tascalate JavaFlow is mostly free from critical issues and can be used in development! So the “eat your own dog food” motto proved to be true once again.

Once I got confident with the quality of my library, I was ready to address performance issues… Well, I’m lying now. What I actually started to do was search whether anyone around is using Tascalate JavaFlow and what the feedback is (hoping to find a positive one). So I started to google whether or not my library is popular enough. And I found a pretty interesting document that surprised me badly — Effect Handlers for the Masses by Jonathan Brachthäuser, Philipp Schuster, and Klaus Ostermann, where my library was compared against the authors’ own implementation of continuations as well as numerous alternative continuations / coroutines libraries. Well, being the worst example out of five is somewhat disappointing… Being 7 times slower than the closest competitor is even worse! So I contacted the author for the tests used, and got back to the Tascalate JavaFlow runtime to see why it behaves thaaaat poorly. Pretty fast, two changes were made to the code:

  1. When porting code from Apache Commons JavaFlow I totally overlooked performance issues inside the org.apache.commons.javaflow.core.Stack class. There were separate stacks for each primitive type — int (covering byte, char, boolean), long, double, float. Combining them all into one stack significantly improves performance (see the sketch after this list)! Next, there were unconditional debug statements that ate CPU cycles for nothing, but with an excellent appetite! Addressing just this single issue gave a 250+% speedup!
  2. The Tascalate JavaFlow library used multi-shot continuations only. They are thread-safe and may be resumed multiple times. However, for the lion’s share of possible usage scenarios this is an overkill. Single-shot, single-thread continuations are enough! Hence the library now has an option to create either single-shot optimized Continuation-s or multi-shot Continuation-s, depending on requirements. The changes are documented in the generated JavaDoc-s, so please read the API docs for details.
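To illustrate the first fix, here is a rough sketch of the idea (it is not the actual org.apache.commons.javaflow.core.Stack code): all primitive values are packed into a single long[] array, so push / pop touches one array and one top index only.

// A rough sketch of the idea behind the Stack fix, not the library's actual class:
// every primitive value fits into a 64-bit slot of a single long[] array.
public class PrimitiveStackSketch {
    private long[] slots = new long[16];
    private int top;

    private void ensureCapacity() {
        if (top == slots.length) {
            slots = java.util.Arrays.copyOf(slots, slots.length * 2);
        }
    }

    public void pushInt(int v)       { ensureCapacity(); slots[top++] = v; }
    public void pushLong(long v)     { ensureCapacity(); slots[top++] = v; }
    public void pushFloat(float v)   { ensureCapacity(); slots[top++] = Float.floatToRawIntBits(v); }
    public void pushDouble(double v) { ensureCapacity(); slots[top++] = Double.doubleToRawLongBits(v); }

    public int popInt()       { return (int) slots[--top]; }
    public long popLong()     { return slots[--top]; }
    public float popFloat()   { return Float.intBitsToFloat((int) slots[--top]); }
    public double popDouble() { return Double.longBitsToDouble(slots[--top]); }
}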

Just to give you a rough picture of what this performance optimization brings, here is a quote from my reply to Jonathan Brachthäuser:

  1. Initial case (with expensive debug statements and multi-shot continuations):
    1043ms
  2. Keep expensive debug but use “optimized” single-resume continuation (no array copy):
    759ms
  3. Expensive calls to debug are removed but continuation is still multi-shot:
    260ms
  4. Both optimizations applied (expensive calls to debug are removed, single-resume continuations):
    124ms

So it’s almost 8.5 times faster than the original version! Even with multi-shot continuations it’s almost 4 times faster for the given tests, due to the fixes in the Stack class alone! All in all, Tascalate JavaFlow performance should now be on par with Quasar for the majority of cases.

Finally, the project was split into three parts — the Tascalate JavaFlow library itself, the Tascalate JavaFlow Extras extensions that use Java 8+ APIs, and the examples. This should help evolve all three separately, each with its own release cycle. And among the examples you may find one showing how the library can be used… drum roll… in a JEE project! For now it’s tested only with WildFly 9/10/11, but support for GlassFish / Payara is in progress!

Tascalate Concurrent library version 0.5.3 is released and available in the Maven Central Repository.

As promised, this release adds an explicit cancelRemaining parameter to the overloaded Promises combinator methods like all / any / atLeast and the corresponding *Strict variants. This parameter specifies whether it is necessary to cancel the remaining pending promises once the combined result is known to be resolved. When omitted, the default value is cancelRemaining = true.
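Here is a short usage sketch (p1, p2, p3 are placeholder Promise<String>-s obtained elsewhere; the List-typed result mirrors the combinator descriptions in the 0.5.2 announcement below):

// cancel the still-pending promises as soon as the combined result is resolved
Promise<List<String>> all = Promises.all(true, p1, p2, p3);
// when the flag is omitted, the behavior is the same as cancelRemaining = true
Promise<List<String>> sameAsAbove = Promises.all(p1, p2, p3);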

Besides some important bug fixes, this release introduces a new class: DependentPromise. Let’s review why you need it in your day-to-day asynchronous code development.

You should know that once you cancel a Promise, all Promise-s that depend on it are completed with a CompletionException wrapping a CancellationException. This is the standard behavior, and CompletableFuture works just like this.

However, when you cancel a derived Promise, the original Promise is not cancelled:

Promise<?> original = CompletableTask
  .supplyAsync(() -> someIoBoundMethod(), myExecutor);
Promise<?> derived = original
  .thenRunAsync(() -> someMethod() );
...
derived.cancel(true);

So if you cancel derived above, its Runnable (wrapping someMethod) is interrupted. However, the original promise is not cancelled and someIoBoundMethod keeps running. This is not always the desired behavior; consider the following method:

public Promise<DataStructure> loadData(String url) {
   return CompletableTask
          .supplyAsync( () -> loadXml(url) )
          .thenApplyAsync( xml -> parseXml(xml) ); 
}

...
Promise<DataStructure> p = loadData("http://someserver.com/rest/ds");
...
if (someCondition()) {
  // Only second promise is canceled, parseXml.
  p.cancel(true);
}

Clients of this method see only the derived promise, and once they decide to cancel it, they expect that whichever of loadXml and parseXml is not completed yet will be interrupted. To address this issue, the library provides the DependentPromise class:

public Promise<DataStructure> loadData(String url) {
   return DependentPromise
          .from(CompletableTask.supplyAsync( () -> loadXml(url) ))
          .thenApplyAsync( xml -> parseXml(xml), true ); 
}

...
Promise<DataStructure> p = loadData("http://someserver.com/rest/ds");
...
if (someCondition()) {
  // Now the whole chain is canceled.
  p.cancel(true);
}

DependentPromise overloads methods like thenApply / thenRun / thenAccept / thenCombine etc with additional argument:

  • if the method accepts no other CompletionStage, like thenApply / thenRun / thenAccept etc., then it’s a boolean flag enlistOrigin that specifies whether or not the original Promise should be enlisted for cancellation.
  • if the method accepts another CompletionStage, like thenCombine / applyToEither / thenAcceptBoth etc., then it’s a set of PromiseOrigin enum values that specifies whether or not the original Promise and/or the CompletionStage supplied as an argument should be enlisted for cancellation along with the resulting promise

For example:

public Promise<DataStructure> loadData(String url) {
   return DependentPromise
          .from(CompletableTask.supplyAsync( () -> loadXml(url + "/source1") ))
          .thenCombine( 
              CompletableTask.supplyAsync( () -> loadXml(url + "/source2") ), 
              (xml1, xml2) -> Arrays.asList(xml1, xml2),
              PromiseOrigin.ALL
           )
           .thenApplyAsync( xmls -> parseXmlsList(xmls), true ); 
}

Please note that in the planned release 0.5.4 there will be a new default method dependent in the Promise interface; it serves the same purpose and allows writing chained calls:

public Promise<DataStructure> loadData(String url) {
   return CompletableTask
          .supplyAsync( () -> loadXml(url) )
          .dependent()
          .thenApplyAsync( xml -> parseXml(xml), true ); 
}


Tascalate Concurrent library version 0.5.2 is released and available in the Maven Central Repository.

The library was created to overcome numerous shortcomings of the standard (and the only) implementation of the CompletionStage interface shipped with Java 8 — CompletableFuture.
First and foremost, the library provides an implementation of CompletionStage that supports long-running blocking tasks (typically, I/O bound) – unlike the Java 8 built-in implementation, CompletableFuture, which primarily supports computational tasks.

Why is CompletableFuture not enough?

There are several shortcomings associated with the CompletableFuture class implementation that complicate its usage for blocking tasks:

  1. The CompletableFuture.cancel() method does not interrupt the underlying thread; it merely puts the future into an exceptionally completed state. So if you use any blocking calls inside functions passed to thenApplyAsync / thenAcceptAsync / etc., these functions will run till the end and will never be interrupted. Please see CompletableFuture can’t be interrupted by Tomasz Nurkiewicz.
  2. By default, all *Async composition methods use ForkJoinPool.commonPool() (see here) unless an explicit Executor is specified. This thread pool is shared between all CompletableFuture-s, all parallel streams and all applications deployed on the same JVM. This hard-coded, unconfigurable thread pool is completely outside of our control, hard to monitor and scale. Therefore you should always specify your own Executor.
  3. Additionally, the built-in Java 8 concurrency classes provide a pretty inconvenient API to combine several CompletionStage-s. The CompletableFuture.allOf / CompletableFuture.anyOf methods accept only CompletableFuture as arguments; you have no mechanism to combine arbitrary CompletionStage-s without converting them to CompletableFuture first. Also, the return type of the aforementioned CompletableFuture.allOf is declared as CompletableFuture<Void> – hence you are unable to conveniently extract individual results of each future supplied (see the sketch right after this list). CompletableFuture.anyOf is even worse in this regard; for more details please read on here: CompletableFuture in Action (see Shortcomings) by Tomasz Nurkiewicz.
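To illustrate the last point, here is a small sketch in plain Java 8 (f1 and f2 are just sample futures) of the boilerplate CompletableFuture.allOf forces on you to get the individual results back:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AllOfBoilerplate {
    public static void main(String[] args) {
        CompletableFuture<String>  f1 = CompletableFuture.supplyAsync(() -> "first");
        CompletableFuture<Integer> f2 = CompletableFuture.supplyAsync(() -> 42);

        // allOf returns CompletableFuture<Void>, so individual results have to be
        // re-extracted manually via join() on the original futures
        CompletableFuture<List<Object>> both = CompletableFuture
            .allOf(f1, f2)
            .thenApply(ignored -> Arrays.asList(f1.join(), f2.join()));

        System.out.println(both.join()); // [first, 42]
    }
}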

How to use?

Add Maven dependency:

<dependency>
    <groupId>net.tascalate.concurrent</groupId>
    <artifactId>net.tascalate.concurrent.lib</artifactId>
    <version>0.5.2</version>
</dependency>

What is inside?

1. Promise interface

The interface net.tascalate.concurrent.Promise may be best described by the formula:

Promise == CompletionStage + Future

I.e., it combines both the blocking Future API, including the cancel(boolean mayInterruptIfRunning) method, AND the composition capabilities of the CompletionStage API. Importantly, all composition methods of the CompletionStage API (thenAccept, thenCombine, whenComplete etc.) are re-declared to return Promise as well.

You may notice that Java 8’s CompletableFuture implements both the CompletionStage AND Future interfaces as well.
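A short sketch of what this combination gives in practice (loadGreeting and myExecutor are placeholders, not part of the library):

Promise<String> greeting = CompletableTask
  .supplyAsync(() -> loadGreeting(), myExecutor);

// composition methods are re-declared to return Promise, not just CompletionStage
Promise<Integer> length = greeting.thenApply(String::length);

// the blocking Future API is available as well: length.get(), length.isDone(), ...
// and cancel(true) may actually interrupt the still-running task
length.cancel(true);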

2. CompletableTask

This is why this project was ever started. net.tascalate.concurrent.CompletableTask is the implementation of the net.tascalate.concurrent.Promise API for long-running blocking tasks. There are several options to create a CompletableTask:

  • CompletableTask.runAsync(Runnable runnable, Executor executor)
    You may submit a Runnable to the Executor:

    Promise<Void> p = CompletableTask.runAsync(
      this::someIoBoundMethod, myExecutor
    );
    

    You may notice similarities with CompletableFuture.runAsync(Runnable runnable, Executor executor) method.

  • CompletableTask.supplyAsync(Supplier<U> supplier, Executor executor)

    Alternatively, you may submit a Supplier to the Executor:

    Promise<SomeValue> p = CompletableTask.supplyAsync(() -> {
      return blockingCalculationOfSomeValue();
    }, myExecutor);
    

    Again, you can notice direct analogy with CompletableFuture.supplyAsync(Supplier<U> supplier, Executor executor) method

  • CompletableTask.asyncOn(Executor executor)
    This unit operation returns a resolved no-value Promise that is “bound” to the specified executor. I.e. any function passed to composition methods of Promise (like thenApplyAsync / thenAcceptAsync / whenCompleteAsync etc.) will be executed using this executor unless executor is overridden via explicit composition method parameter. Moreover, any recursively nested composition calls will use the same executor, if it’s not redefined via explicit composition method parameter:

    CompletableTask
      .asyncOn(myExecutor)
      .thenApplyAsync(myValueGenerator)
      .thenAcceptAsync(myConsumer)
      .thenRunAsync(myAction);
    

    All of myValueGenerator, myConsumer, myAction will be executed using myExecutor.

  • CompletableTask.complete(T value, Executor executor)
    Same as above, but the starting point is a resolved Promise with the specified value:

    CompletableTask
       .complete("Hello!", myExecutor)
       .thenApplyAsync(myMapper)
       .thenApplyAsync(myTransformer)   
       .thenAcceptAsync(myConsumer)
       .thenRunAsync(myAction);
    

    All of myMapper, myTransformer, myConsumer, myAction will be executed using myExecutor.

Most importantly, all composed promises support true cancellation (incl. interrupting thread) for the functions supplied as arguments:

Promise<?> p1 = CompletableTask
  .asyncOn(myExecutor)
  .thenApplyAsync(myValueGenerator)
  .thenAcceptAsync(myConsumer);
  
Promise<?> p2 = p1.thenRunAsync(myAction);
...
p1.cancel(true);

In the example above myConsumer will be interrupted if already in progress. Both p1 and p2 will be resolved faulty: p1 with a CancellationException and p2 with a CompletionException.

It is important to mention that CompletableTask supports interrupting the execution thread, but the actual behavior depends on the concrete Executor implementation: for example, ThreadPoolExecutor will truly interrupt the underlying thread when cancellation is requested, but ForkJoinPool will not.

3. Utility class Promises

First things first, the class provides several methods to conveniently create promises:

  • It’s possible to convert a ready value to a successfully resolved net.tascalate.concurrent.Promise via Promises.success(T value):
    Promise<String> p = Promises.success("Tascalate");
    
  • Similarly, the next method creates a faulty resolved net.tascalate.concurrent.Promise via Promises.failure(Throwable exception):
    Promise<?> err = Promises.failure(new IllegalStateException());
    
  • Naturally, there is a way to convert an arbitrary CompletionStage to net.tascalate.concurrent.Promise via Promises.from(CompletionStage stage):

    CompletionStage<String> stage = ...; // Get CompletionStage
    Promise<String> promise = Promises.from(stage);
    

But most importantly, the class provides convenient methods to combine several CompletionStage-s:

Promises.all([boolean cancelRemaining], CompletionStage<? extends T>... promises)
Promises.any([boolean cancelRemaining], CompletionStage<? extends T>... promises)
Promises.anyStrict([boolean cancelRemaining], CompletionStage<? extends T>... promises)
Promises.atLeast(int minResultsCount, [boolean cancelRemaining], CompletionStage<? extends T>... promises)
Promises.atLeastStrict(int minResultsCount, [boolean cancelRemaining], CompletionStage<? extends T>... promises)

These methods may (and I would say “should”) be used instead of the CompletableFuture.allOf and CompletableFuture.anyOf methods, and here is why:

  1. When a method returns a single result, its result type is Promise<T> where the type argument T is the most common supertype of the arguments’ type parameters — unlike the untyped CompletableFuture<Object> returned by CompletableFuture.anyOf.
  2. When a method returns multiple results, its result type is Promise<List<T>> where the type argument T is the most common supertype of the arguments’ type parameters AND the result of each successfully completed promise is available at the corresponding list index — unlike the CompletableFuture<Void> returned by CompletableFuture.allOf.
  3. There are several overloads for the atLeast* methods — a generalization of the any* methods, useful when you have to collect N out of M (N < M) results (see the sketch right after this list). No such functionality is available in Java 8 out of the box.
  4. Once the resulting Promise is resolved (either successfully or faulty), all remaining promises may be cancelled — either when the explicit cancelRemaining parameter is true, or by default, when this parameter is omitted.
    *The explicit cancelRemaining parameter is currently available only in the master branch; this functionality is planned for release 0.5.3
  5. There are separate non-strict vs strict overloads of the methods. The difference is in how errors are tolerated when you don’t need all of the passed promises to complete successfully: non-strict versions tolerate failures of individual promises as long as the required number of results may still be collected, while strict versions resolve the resulting Promise faulty on the first error — use whatever suits your application logic best. No such option is available in Java 8 out of the box.
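For instance, here is a small sketch of collecting 2 out of 3 results (p1, p2, p3 are placeholder Promise<String>-s; the List-typed result follows item 2 above):

// resolves as soon as any 2 of the 3 promises complete successfully
Promise<List<String>> twoOfThree = Promises.atLeast(2, p1, p2, p3);

// the strict variant resolves the result faulty on the very first error instead
Promise<List<String>> strictTwoOfThree = Promises.atLeastStrict(2, p1, p2, p3);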

4. Extensions to ExecutorService API

It’s not mandatory to use any specific subclass of java.util.concurrent.Executor with net.tascalate.concurrent.CompletableTask – you may use any implementation that supports thread interruption. However, some may find it beneficial to have a Promise-aware java.util.concurrent.ExecutorService API. Below is a list of related classes/interfaces:

  • Interface net.tascalate.concurrent.TaskExecutorService
    Specialization of the ExecutorService that uses net.tascalate.concurrent.Promise as the result type of its submit(...) methods:

    TaskExecutorService executor = ...; // Get concrete TaskExecutorService
    Promise<String> promise = executor
      .submit( () -> someLongRunningMethodWithStringResult() );
    
  • Class net.tascalate.concurrent.ThreadPoolTaskExecutor
    A subclass of the standard ThreadPoolExecutor that implements net.tascalate.concurrent.TaskExecutorService interface.

    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor(
      4, 4, 0L, TimeUnit.MILLISECONDS,
      new LinkedBlockingQueue<Runnable>()
    );
    Promise<Integer> promise = executor
      .submit( () -> someLongRunningMethodWithStringResult() )
      .thenApply(String::length);
    
  • Class net.tascalate.concurrent.TaskExecutors
    A drop-in replacement for the Executors utility class that returns various useful implementations of net.tascalate.concurrent.TaskExecutorService instead of the standard ExecutorService.

    TaskExecutorService e1 = TaskExecutors.newFixedThreadPool(4);
    TaskExecutorService e2 = TaskExecutors.newCachedThreadPool();
    TaskExecutorService e3 = TaskExecutors.newSingleThreadExecutor();
    ...
    @Resource
    ManagedExecutorService managedExecutorService; // CDI injection
    ...
    TaskExecutorService tes = TaskExecutors.adapt(managedExecutorService);
    

Acknowledgements

Internal implementation details are greatly inspired by the work done by Lukáš Křečan. I want to express my great gratitude to Lukáš for his easy-to-follow, clean and bullet-proof code that served as a blueprint for my implementation.

This post is the first one in a series dedicated to continuations support in JDK 1.8 – namely, continuations usage with lambdas (anonymous implementations of SAM interfaces) and the Stream API (java.util.stream).

When Oracle (and formerly Sun) develops the next Java version, backward compatibility is one of the primary concerns. API compatibility, bytecode compatibility, whatever. But for tool vendors, like the ones who develop compilers, IDEs, ad-hoc bytecode enhancers / generators, or run-time utilities relying on reflection, almost every new Java release is a wake-up call to keep their products up to date.

JDK 1.1 added inner classes and anonymous classes – a pretty convenient way to declare a class in place. However, this is a great example of a leaky abstraction – tool vendors had to cope with automatically generated class names, with automatically generated constructors of non-static inner classes, with hidden accessor methods generated to share private members (fields and methods) between inner/outer classes.

The next big move was JDK 1.5 – generics added a lot of fun to the daily routine of tool vendors. Generic signatures in class/method/field/parameter/variable declarations, type erasure, covariant method return types, automatically generated bridge methods… oh my! An overwhelming list of features to support! What can be better than upgrading thousands of lines of code to support all of this! However, this Java release brought us annotations – a real impulse to revisit all our dynamic code generation techniques anyway. And the community responded promptly – AOP-specific libraries and tools, (revisited) dependency injection techniques, mappings for JPA and XML, a fully refactored JEE and so on, and so on…

Then it was JDK 1.7 with INVOKEDYNAMIC. Not a ground-shaking change for the majority of existing Java tools, but of great value for authors of JVM-compiled languages. Just recall: JRuby (Ruby on Java) beats Ruby (Ruby on C) in terms of performance! Isn’t this amazing?! But… At that time I had not expected how this dynamic-invocation gun would shoot at the Tascalate JavaFlow library with the JDK 1.8 release…

So, now we are running our applications on JDK 1.8. And the number of additional features that should be taken into account by tool vendors is overwhelming. Default methods in interfaces, static methods in interfaces – supporting this is not a complex task per se, but the fact that method bodies are now possible inside an interface invalidates a lot of logic inside many tools (Tascalate JavaFlow was affected, too). However, the most ground-shaking addition was lambdas, or, to be precise, the way lambdas are created by the compiler and the Java runtime.

The key point in the phrase above is “by the compiler AND Java runtime”. This means that the related bytecode is not only generated at compile time BUT at run time as well. The LambdaMetafactory class, lesser known to the general public, is heavily involved in this magic. The API notes for the class describe the process pretty well: the compiler desugars the lambda function’s body into an auto-generated method of the enclosing class, puts an INVOKEDYNAMIC instruction wherever the SAM interface is used, and the LambdaMetafactory class links this dynamic invocation with an actual interface implementation generated at run time. And, in its own turn, this on-the-fly implementation delegates processing to the desugared lambda body (created at compile time). A thrilling process!
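Here is a minimal conceptual sketch of that translation (the method below only illustrates the compiler-generated code; the real synthetic method has a compiler-chosen name):

public class LambdaDesugaringSketch {

    public static void main(String[] args) {
        // What you write:
        Runnable r = () -> System.out.println("Hello");
        // At this point the compiler emits an INVOKEDYNAMIC instruction; at run time
        // LambdaMetafactory links it to a generated Runnable implementation that
        // delegates to a synthetic method holding the lambda body.
        r.run();
    }

    // Roughly what that compiler-generated method looks like (its real name is
    // something like lambda$main$0 and it is marked synthetic):
    private static void desugaredLambdaBody() {
        System.out.println("Hello");
    }
}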

So, we cannot rely on compile-time-only bytecode modification any longer. But the lack of lambdas support would be a serious omission. Fortunately, we, Java developers, have good old agents inside the JVM that play for us. Pun intended: this is namely the Java Agents technology that was introduced back in the JDK 1.5 days. The technology provides a facility to intercept the class loading mechanism and enhance bytecode before it’s seen by the JVM. It works even with bytecode generated at run time.
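For readers unfamiliar with the mechanism, here is a minimal sketch of such an agent (it is not the actual Tascalate JavaFlow agent; the agent JAR’s manifest must additionally declare the Premain-Class attribute):

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class SketchAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // a real agent would analyze and enhance classfileBuffer here;
                // returning null means "leave this class unchanged"
                return null;
            }
        });
    }
}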

So, to support continuable anonymous lambdas and method references, you first have to download the Tascalate JavaFlow Instrumentation Agent from the latest release on GitHub. Then please add the following argument to the Java command line:

java -javaagent:<path-to-jar>/javaflow.instrument-continuations.jar <rest-of arguments>

The agent JAR file includes all necessary dependencies and requires no additional CLASSPATH settings. It’s possible to use this agent in conjunction with either of the Maven or Ant build tools supplied. Moreover, it’s even the recommended option – it helps to minimize the overhead of instrumentation during the class-loading process at run time. However, using just the instrumentation agent has its own benefits when you are developing and debugging code within your IDE of choice. Just specify the same -javaagent option for your Run/Debug configuration (screenshot below) – and you are ready to execute quick “debug-fix” loops relying on the IDE’s incremental compilation + JavaAgent instrumentation – as opposed to a time-consuming full project rebuild with Maven/Ant.

[Screenshot: JavaAgent IDE Debug Settings]

Next time we will explore concrete examples with continuable Java 8 lambdas as well as additional Tascalate JavaFlow utility classes that simplify related tasks.

Tascalate JavaFlow continuations library version 2.0 is released. Now it’s published to the Maven Central Repository, so there is no need to build it from the GitHub sources any longer. Plus, additional binary resources are uploaded to the GitHub release page:

  • Ant project templates (1 and 2) with all necessary libraries and sample build.xml
  • Command-line JAR rewrite tool
  • Java Agent to instrument classes at run-time (during class-loading) – javaflow.instrument-continuations.jar — a MUST-have if continuable code is invoked within Java 8 lambdas and a real time-saver if you are debugging continuable code from an IDE *
  • Java Agent to instrument proxies of popular CDI containers (JBoss Weld and Apache OpenWebBeans) – javaflow.instrument-cdi-proxy.jar – to correctly support continuable methods in CDI managed beans

Additional information may be found on the project’s front-page on GitHub

*In the next several posts I will elaborate more on continuations with Java 8 lambdas

There is a well-defined naming convention in Java to start annotation names with an uppercase letter – the same as for class/interface names. Though this naming convention is widely accepted, I think it’s a bit dogmatic. If you take a look at the list of Java-specific annotations in Scala you may see that both lowercase (like @cloneable or @throws) and uppercase (like @SerialVersionUID or @BeanProperty) forms are used. Personally, I tend to agree with the decisions made by the architects of Scala’s standard library: some annotations are meta-data, for example javax.persistence.@Entity / javax.persistence.@Column or javax.xml.bind.annotation.@XmlElement / javax.xml.bind.annotation.@XmlAttribute; other ones look like a directive to a processing tool or like a syntax extension. In addition, directive-like annotations typically have no extra parameters, so they are very similar to @-prefixed keywords. Hence, I think it would be more natural to write @override rather than @Override in Java — it’s a directive to the compiler to check whether or not the method overrides/implements a method defined in a superclass/interface. Moreover, in some languages “override” is indeed a keyword, so it’s even more tempting to use the all-lowercase variant.

The above should explain why lowercase annotation names were chosen for the two annotations in Tascalate Javaflow: @continuable and @ccs. Both are directives to the bytecode instrumentation tools, both have no extra parameters, and both look like library-defined keywords:

public @continuable void execute() {
  ...
  final @ccs Runnable contRunnable = new MyContinuableRunnable(someArg);
  ...
}

What if you don’t agree with the arguments above and would prefer to follow the standard Java naming conventions for annotations? Not a problem at all with Tascalate Javaflow! There is a third annotation defined in the library – org.apache.commons.javaflow.api.ContinuableAnnotation – that allows you to define your own annotations instead of @continuable and @ccs. In fact, ContinuableAnnotation is a meta-annotation and it may/should be applied only to other annotations. Here is how to define a @ContinuableMethod annotation that you may use instead of @continuable in your code to strictly adhere to the Java naming conventions:

package mycompany.myapp.annotations;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.apache.commons.javaflow.api.ContinuableAnnotation;

@Documented                       // optional
@Retention(RetentionPolicy.CLASS) // mandatory, may be RetentionPolicy.RUNTIME as well
@Target({ElementType.METHOD})     // mandatory, only methods are examined
@ContinuableAnnotation            // mandatory
public @interface ContinuableMethod {
}

The rules are:

  1. Your custom @ContinuableMethod annotation must be annotated with @ContinuableAnnotation (obviously)
  2. It must be an annotation applicable to methods – any other targets have no effect and will only confuse a user of your annotation class. However, sometimes you need to have other targets and it’s ok to define more – a typical example is CDI interceptor binding annotations.
  3. It must have a retention policy defined either as RetentionPolicy.CLASS or as RetentionPolicy.RUNTIME; SOURCE-level annotations are not saved into the class bytes
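With the definition above in place, marking a method looks like this (a trivial usage sketch; MyAnnotatedService is just an illustrative class name):

package mycompany.myapp.services;

import mycompany.myapp.annotations.ContinuableMethod;

public class MyAnnotatedService {
  @ContinuableMethod
  public void execute() {
    // continuable code goes here, exactly as with @continuable
  }
}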

Similarly, you can re-define @ccs as @ContinuableTarget:

package mycompany.myapp.annotations;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.apache.commons.javaflow.api.ContinuableAnnotation;

// optional
@Documented
// mandatory, may be RetentionPolicy.RUNTIME as well
@Retention(RetentionPolicy.CLASS)
// mandatory, use exact syntax
@Target({ElementType.LOCAL_VARIABLE, ElementType.PARAMETER, ElementType.TYPE_USE})
// mandatory
@ContinuableAnnotation
public @interface ContinuableTarget {
}

The rules are similar to the method-level annotation, except for @Target – please use the exact syntax as above. In addition to the regular variable/parameter targets you must declare ElementType.TYPE_USE (a Java 8 feature) – otherwise the annotation is not saved in the class bytecode.

No other customization is necessary: you may use your own annotations right away thanks to the meta-annotation. By the way, there is another important use case when ContinuableAnnotation may be useful in your code — stereotype annotations. Imagine that your application has a method-level annotation @WorkflowTask with its own duties. Moreover, all @WorkflowTask methods must be @continuable. To mark some business methods as workflow tasks you have to use both annotations:

package mycompany.myapp.services;
import mycompany.myapp.annotations.WorkflowTask;
import org.apache.commons.javaflow.api.continuable;

public class MyService {
  ...
  @WorkflowTask(timeout="3d",name="SomeTask")
  @continuable
  public int myBusinessMethod() { ... }
  ...
}

However, if you mark your @WorkflowTask annotation with @ContinuableAnnotation, then you may use just one annotation in your code:

package mycompany.myapp.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.apache.commons.javaflow.api.ContinuableAnnotation;

@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD})
@ContinuableAnnotation
public @interface WorkflowTask {
  public String name();
  public String timeout();
  ...
}

//=====

package mycompany.myapp.services;
import mycompany.myapp.annotations.WorkflowTask;

public class MyService {
  ...
  @WorkflowTask(timeout="3d",name="SomeTask")
  public int myBusinessMethod() { ... }
  ...
}

Now the @WorkflowTask annotation is a stereotype that:

  • clearly defines specific business method role
  • captures this role at a conceptual level
  • encapsulates implementation details