de.velopmind | The Det about Programming

April 14, 2008

How not to use annotations

Filed under: Design, Java — de.velopmind @ 10:12 am

Since Java 5, the markup of code with so-called ‘annotations’ has been extended to much broader use.

While before Java 5 tagging was mainly used for javadoc and to mark a method as deprecated, since Java 5 everyone is able to define her own meta-language to annotate her code.

While annotations per se may be useful, the way they are created and used since Java 5 seems to have led to a form of usage that leaves the boundaries of OO style, turning Java programming into a kind of declarative programming.

To talk about annotations, we first have to divide them into different categories:

  • Compile time processed vs. Runtime processed
  • System defined (say: by Sun JDK or a tool vendor) vs. User defined (say: in application development)

Annotations are ‘meta tags’, which means they provide information about the code on a higher level. The problem is what counts as a ‘higher level’.

So let’s have a look at compile time processed annotations first:

Compile-time annotation means that the annotation, provided by the programmer in her source code, is recognised by the compiler when it transforms the source code into byte code.

The three most commonly known annotations are @Override, @Deprecated and @SuppressWarnings.

These three give a good example of where the problem in the meta-debate lies:

@Deprecated is a good example of meta-information which does not tell us anything about the code itself, the algorithms, the class hierarchy or other parts of our coded problem solution. Instead it informs us about planning and conception, specifically about a feature’s life-cycle.
It simply tells us that the marked feature will not be available in a future version, so using it is currently not wrong, but no longer recommended.
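A minimal Java sketch of this (class and method names are invented for illustration): a deprecated method still compiles and runs normally, and the annotation only warns its callers.

```java
public class DeprecatedDemo {
    /** @deprecated use {@link #newWay()} instead */
    @Deprecated
    public static String oldWay() { return "old"; }

    public static String newWay() { return "new"; }

    public static void main(String[] args) {
        // Calling oldWay() still works; the compiler merely emits a warning.
        System.out.println(oldWay());
    }
}
```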

This annotation does not change the created byte code in any way, neither the code which is annotated nor the code that calls the tagged feature. It is only a hint to an application programmer about what may happen in the future if she uses that feature.

@SuppressWarnings is also a good example of meta-information, as it does not say anything about the implementation, the algorithm or the structure, but is communication with the compiler. It tells the compiler that a warning triggered by the following feature is expected and assumed to be of no interest.

The annotation’s existence does not change the result of the compilation in any way, but only the way the compiler processes its input.
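A small sketch of the typical case (names invented): silencing an expected ‘unchecked’ warning when bridging legacy raw-type code. With or without the annotation, the byte code is the same.

```java
import java.util.ArrayList;
import java.util.List;

public class SuppressDemo {
    // The raw-type assignment and cast below would normally produce an
    // "unchecked" warning; the annotation tells the compiler we expect it.
    @SuppressWarnings("unchecked")
    public static List<String> legacyList() {
        List raw = new ArrayList();      // raw type, e.g. from legacy code
        raw.add("hello");
        return (List<String>) raw;       // unchecked cast, warning suppressed
    }

    public static void main(String[] args) {
        System.out.println(legacyList().get(0));
    }
}
```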

@Override is actually a bad example of such meta-information. It tells us that a specific method is assumed to override a method with the same signature in a superclass. While this may be considered meta-information, as it does not tell us anything about the method’s implementation but about our assumptions about that method, it is in fact an addition to the code, not meta-information about it. It is in the same category as the public or final modifiers, for example.
None of them changes the algorithm in any way; they tell the compiler our assumptions about its application. They communicate our software conception. That is what code is about. So public means: may be called from outside. Final means: may not be reassigned. And override means: shall correspond to a method with the same signature in a superclass.

Override gives concrete information about the inheritance structure.
I think Martin Odersky thought the same when he decided to make override a keyword in Scala.
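A tiny sketch of what @Override communicates (all names invented): misspelling the method name would no longer silently introduce a new method but would fail to compile.

```java
public class OverrideDemo {
    public static class Base {
        public String greet() { return "base"; }
    }

    public static class Sub extends Base {
        @Override            // the compiler verifies this really overrides Base.greet
        public String greet() { return "sub"; }
        // Had we typed "greeet()", @Override would turn the silent bug
        // (an accidental new method) into a compile-time error.
    }

    public static void main(String[] args) {
        Base b = new Sub();
        System.out.println(b.greet());
    }
}
```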

The same can be said about runtime annotations, i.e. annotations which can be analysed at runtime, thus changing or controlling the system’s behaviour.

While this may be a good idea, e.g. when writing test code with a testing framework (test methods get the annotation @Test instead of being detected by a name prefix), it can easily be abused to control application logic.

Even in the mentioned test framework, recognising the test methods by annotation is in my opinion more a hint at a design flaw in the underlying programming language (say: Java) than a feature. Obviously it is not possible to express this configuration with language-inherent features, at least not in a satisfying, clear and concise way, which makes the framework developers fall back on reflection and annotations.
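A hypothetical miniature of such a framework in plain Java shows the reflective machinery involved: a runtime-retained annotation plus a loop over Method objects. All names here are invented for illustration, not JUnit’s actual internals.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class MiniRunner {
    // A hypothetical stand-in for a test framework's @Test annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Test {}

    public static class MyTests {
        @Test public void addition() { if (1 + 1 != 2) throw new IllegalStateException(); }
        public void helper() {}      // not annotated, so never invoked by the runner
    }

    // Discover and invoke all @Test methods reflectively; return how many ran.
    public static int runTests(Class<?> c) throws Exception {
        Object instance = c.getDeclaredConstructor().newInstance();
        int ran = 0;
        for (Method m : c.getMethods()) {
            if (m.isAnnotationPresent(Test.class)) {
                m.invoke(instance);
                ran++;
            }
        }
        return ran;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTests(MyTests.class) + " test(s) ran");
    }
}
```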

At first, I was not unhappy with the convention-over-configuration approach of assuming all test methods start with the prefix ‘test’. But besides that, I suspect there are more elegant solutions in languages that provide functions or closures as first-class citizens.

Besides the example above, I stumbled over a question on the Groovy user list, where a validation solution in a Java program, based on annotations, was to be ported to Groovy. Groovy’s lack of anonymous inner classes at the time led me to doubt the proper use of annotations for that application in the first place.

The solution had a generic validation method which scanned a given object for annotated properties and called a specific validation method for each property based on its annotation.

The object to be validated was an anonymous inner object created especially for that purpose, so being a form of configuration.


someMethod() {
    Object checkObject = new Object() {
        @SomeCheck                        // hypothetical annotation naming the desired check
        String myProperty = someValue;
        public String getMyProperty() { return myProperty; }
        // other properties of that kind, with different annotations ...
    };
    validate(checkObject);
}

While this approach seems at first a good application of runtime annotations (as the user of the validate method can “easily” configure the concrete checks in a declarative way) and indeed OO style (as creating the specific configuration as an anonymous inner class seems OO), it actually hides the fact that the annotation provides no real meta-information about the properties, but is used for runtime flow control.

I do not know the validate() method, but assume it to be more or less imperative in style: reflective access to read the annotations, plus some if-else chain which calls the corresponding validation method for each annotation, with the respective data value to be validated.
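As a sketch of what such a reflective validate() might look like (the original method is unknown; the annotation @NotEmpty, the Form class and all names here are hypothetical):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ReflectiveValidator {
    // Hypothetical annotation; the original post does not name the real ones.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface NotEmpty {}

    // Example configuration object of the kind the post describes.
    public static class Form {
        @NotEmpty String name = "Alice";
        @NotEmpty String email = "";     // will be reported as invalid
    }

    // Scan the object's fields, dispatch on the annotation, collect errors.
    public static List<String> validate(Object o) throws Exception {
        List<String> errors = new ArrayList<String>();
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            if (f.isAnnotationPresent(NotEmpty.class)) {
                Object value = f.get(o);
                if (value == null || value.toString().isEmpty())
                    errors.add(f.getName() + " is empty");
            }
        }
        return errors;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(validate(new Form()));   // reports the empty field
    }
}
```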

So let’s take a look at the Groovy solution to the above problem.

First, we move the validate() method to a class Validator. This class also provides all the specific validation methods for each annotation (perhaps such a class already existed in the Java solution).

class Validator {
    def checkOne(data) { ... }
    def checkTwo(data) { ... }
    def validate(c) {
        c.delegate = this
        c()                 // run the configuration closure against this validator
    }
}

Now, assuming we have an instance validator of class Validator, we express the configuration, formerly done in the inner-class object, as a closure:

someMethod() {
    validator.validate { checkOne(someValue) ; checkTwo(someOtherValue) }
}

Voilà, the closure passed to the validate method is our configuration of which checks shall be applied to which values.

Instead of annotating properties in a class and then analysing the annotations to know which check method to call, we simply call those methods with the values. Instead of implementing what can be done, then declaring what shall be done, we simply do.

Back to our testing framework: instead of annotating test methods and accessing them reflectively afterwards, we could also create a test method and give it a test closure:

test("name of test") {
    // test code
}

The point is: we are totally inside the programming language, not on a pseudo meta-level.
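For comparison, the same closure-passing style can be approximated in Java itself once functions are first-class. With modern Java lambdas (a sketch, not part of the original post, and not available back in 2008), the test runner needs neither annotations nor reflection:

```java
public class ClosureTests {
    // The test body is just a value passed to an ordinary method.
    public static boolean test(String name, Runnable body) {
        try {
            body.run();
            System.out.println("ok: " + name);
            return true;
        } catch (AssertionError e) {
            System.out.println("FAIL: " + name);
            return false;
        }
    }

    public static void main(String[] args) {
        test("addition", () -> {
            if (1 + 1 != 2) throw new AssertionError("math is broken");
        });
    }
}
```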

