de.velopmind | The Det about Programming

November 17, 2013

Slides on Java 8 – new language features.

Filed under: FunctionalProgramming, Java, Language — de.velopmind @ 7:05 pm

February 16, 2011

Quicknote: Overcoming the Java-verbosity…

Filed under: Java, Language — de.velopmind @ 5:44 am

It's interesting what ideas people come up with to avoid the clutter of Java code:

public Type someMethod() {try{

       Some   x = getA().getB().getC();
       if (x.check())
          return x.getR().createT();
       else
          return new Type(x);

}catch(Exception e){ return null; }}

January 23, 2011

Guice 2.0 Multibinder + Java ServiceLoader = Plugin mechanism

Filed under: Java — de.velopmind @ 12:47 am

I currently have the task of extending an existing legacy Java Swing client with a plugin mechanism. As I hadn't looked into this topic before, I tried to get an idea of the possibilities and of what others do.

The first places to explore when talking about Java and plugins are surely the IDEs like Eclipse and NetBeans, as plugins are an elementary part of them, so they should show how to do it right.

While Eclipse has an elaborate but quite obscure extension mechanism, NetBeans uses a proprietary lookup mechanism, which is an advanced version of the standard ServiceLoader feature.

This ServiceLoader, a standard part of Java 6, is the simplest approach to extensibility. One part of the software (the consumer) defines an interface, which another part (the provider) must implement.

Then the consumer uses the ServiceLoader API to get all classes which implement that interface, and here we are.

The lookup is really simple: the provider's jar contains a directory named META-INF/services. In it there is a text file named after the fully qualified name of the interface (without any file extension). The file contains the fully qualified name of the implementing class in that jar.

So you only have to throw that jar somewhere into the classpath of the application, and the consumer will get an instance of the implementing class.
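
As a minimal sketch (with an invented SpellChecker interface, purely for illustration): the provider's jar would contain a text file META-INF/services/SpellChecker whose single line names the implementing class, and the consumer side would look roughly like this:

import java.util.ServiceLoader;

// Invented example interface -- in a real setup it would live in the
// consumer's jar, and providers would implement it in their own jars.
interface SpellChecker {
    boolean isCorrect(String word);
}

public class SpellCheckerConsumer {
    public static void main(String[] args) {
        // Scans the classpath for META-INF/services entries for SpellChecker
        // and instantiates each listed class via its no-arg constructor.
        for (SpellChecker checker : ServiceLoader.load(SpellChecker.class)) {
            System.out.println("Found provider: " + checker.getClass().getName());
        }
    }
}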

And just here are the limitations of this approach:

  • Every lookup retrieves a new instance.
  • Every instance is constructed via its parameterless default constructor.

That's it. As the name ServiceLoader suggests, this mechanism is meant for loading services, not plugins. And in its simplest form it is for stateless services. To overcome this last limitation, one would have to wrap the service loading in a singleton of a special service class which keeps the dependency on the implementing class hidden.
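
Such a wrapper might look like this sketch (again using the invented SpellChecker interface, and assuming at least one provider is on the classpath):

import java.util.ServiceLoader;

// Hypothetical sketch: caches the first provider found, so all callers
// share one stateful instance instead of getting fresh ones per lookup.
public final class SpellCheckerService {
    private static SpellChecker instance;

    private SpellCheckerService() {}

    public static synchronized SpellChecker get() {
        if (instance == null) {
            // Assumes a provider exists; otherwise next() throws.
            instance = ServiceLoader.load(SpellChecker.class)
                                    .iterator().next();
        }
        return instance;
    }
}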

This seems not to be helpful so far.

Now, when we talk about plugins, we talk about dependency injection, and there are already nice solutions for that, for example Google's Guice, currently at version 2.0 with 3.0 on the way; the latter will be based on the new JSR-330 specification, which is becoming the standard for DI in Javaland.

Guice is a very nice thing. DI is handled solely by annotations on classes and on the consuming constructors, methods or fields.

As injected instances can themselves depend on injections, and as we can declare Providers (say: factories), this system is very flexible and able to work with parameterised constructors, singleton instances and so on.

So even in complex situations Guice can provide valid object trees.
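
For readers new to Guice, here is a minimal, self-contained sketch of annotation-driven injection (all names invented for illustration, not from the project discussed here):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

interface Greeter { String greet(); }

class ConsoleGreeter implements Greeter {
    public String greet() { return "Hello from Guice"; }
}

class Frontend {
    private final Greeter greeter;

    @Inject  // Guice fills in the dependency when building Frontend
    Frontend(Greeter greeter) { this.greeter = greeter; }

    void start() { System.out.println(greeter.greet()); }
}

class GreeterModule extends AbstractModule {
    @Override protected void configure() {
        // Declare the binding from interface to implementation.
        bind(Greeter.class).to(ConsoleGreeter.class);
    }
}

public class GuiceDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new GreeterModule());
        injector.getInstance(Frontend.class).start();
    }
}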

Guice also provides a nice Java API to declare the bindings from interface to implementation. And here we have two problems:

  1. The standard approach is to inject one implementation for a required interface, not all of them.
  2. The configuration is provided in a Java class which implements Guice's Module interface and its configure() method. Reconfigure means: recompile.

The first problem is already addressed by the Multibinder, which was new in Guice 2.0.  With this binding approach, different providing jars can implement an interface, register the binding, and a consumer can get all the implementations. This allows for real extension points and extensions.

But a plugin is even more: it is the possibility to plug an extension in, and thus to configure the application at the customer's site, not at compile time.

As the Guice wiki states:

Finally we must register the plugins themselves. The simplest mechanism to do so is to list them programmatically:

public class PrettyTweets {
  public static void main(String[] args) {
    Injector injector = Guice.createInjector(
        new GoogleMapsPluginModule(),
        new BitlyPluginModule(),
        new FlickrPluginModule()
        ...
    );

    injector.getInstance(Frontend.class).start();
  }
}

If it is infeasible to recompile each time the plugin set changes, the list of plugin modules can be loaded from a configuration file.

Unfortunately the wiki forgot to say how. This seems to be left as an exercise for the reader.

Well, to allow for extensibility at the customer's site, a plugin jar would have to be put on the classpath and explored at startup. And yes, there is some work on the way for that: Guice Automatic Injection, by Daniel Manzke, which will bring the full-blown solution.

But wait, in the meantime there can be a simpler solution. If we look at the main method code above, which depicts the standard Guice startup, we see that all Module classes are created via their parameterless default constructors. But wasn't that exactly what ServiceLoader was useful for?

Indeed now we see a feasible architecture for our plugin mechanism.

  1. Create extension points by defining interfaces on the consumer side.
  2. Expect to get all implementations injected by Guice's multibinding mechanism.
  3. Create a marker interface to mark all Module classes that configure plugin implementations.
  4. Let different parties implement the interface from 1 and register their implementations via Multibinder in a special Module subclass which implements the marker interface from 3.
  5. Let these parties declare the Module subclass as implementation of the marker interface in the META-INF/services folder of their jar.
  6. In your application startup code (main method), use ServiceLoader to get all Module instances that implement that marker interface and use them to create the injector.

Here are some code examples from a little test workspace:

Let’s start with the thing that causes all this trouble: The component containing the extension point:

package modone;

import com.google.inject.Inject;
import java.util.HashSet;
import java.util.Set;

public class ModOneImpl implements ModOne {
    @Inject(optional=true) Set<ModOneExtension> mPlugins = new HashSet<ModOneExtension>();

    public String getSomeval() {
        String r = "Someval from ModOneImpl ";
        for (ModOneExtension p : mPlugins)  {
            r = p.extend(r);
        }
        return r;
    }
}

So this class has a method getSomeval which returns a String value, but gives extensions/plugins the chance to do some processing on the string before it is returned. The class ModOneImpl defines the extension point by expecting an injected Set of implementors of ModOneExtension, the interface that declares the extend method being called.

As there may be no plugin at all on the classpath, the injection is marked optional with an empty set as default initialisation.

Let's assume that our application has more than one such extension point, so we declare the application as a whole to be extensible by plugins by creating a marker interface:

package myapp;

import com.google.inject.Module;

public interface PluginModule extends Module {}

Each plugin now creates its specific implementation of the extension interface, e.g. like this:

package pluginone;

import modone.ModOneExtension;

public class PluginOne implements ModOneExtension {
     public String extend(String x) {
          return  "extended(one): "+x;
     }
}

Additionally, this implementation is bound to that interface in the module configuration class:

package pluginone;

import modone.ModOneExtension;
import myapp.PluginModule;
import com.google.inject.AbstractModule;
import com.google.inject.multibindings.Multibinder;

public class PluginOneModule extends AbstractModule implements PluginModule {
   public void configure() {
     Multibinder<ModOneExtension> mbinder = Multibinder.newSetBinder( binder(), ModOneExtension.class);
     mbinder.addBinding().to(PluginOne.class);
   }
}

That was the Guice part, and now the ServiceLoader part.

Create a file in the directory META-INF/services named: myapp.PluginModule

This file contains only one line: pluginone.PluginOneModule

Now what is left is the application’s startup code:

package myapp;

import modone.ModOneModule;
import modrep.ReportModule;

import com.google.inject.Guice;
import com.google.inject.Module;
import com.google.inject.Injector;

import myapp.PluginModule;
import java.util.ServiceLoader;

import java.util.List;
import java.util.ArrayList;

public class Main {
    public static void main(String[] args) {
        List<Module> modlist = new ArrayList<Module>();
        modlist.add( new ModOneModule() ); // declare app modules the standard way
        modlist.add( new ReportModule() );

        ServiceLoader<PluginModule> extensions = ServiceLoader.load( PluginModule.class );
        for (PluginModule ext : extensions) {
            modlist.add(ext);              // add each found plugin module
        }

        Injector inj = Guice.createInjector( modlist );
        ReportApp appl = inj.getInstance( ReportApp.class );
        appl.report();
    }
}

In this main method we see how the list of all modules is created and filled with the standard application modules. The plugin modules are then looked up by the marker interface PluginModule via the ServiceLoader mechanism and added to that list.

In the end the injector is created and the application started by getting the first managed instance from the injector, as usual.

To add more plugins, simply put their jar files on the classpath and they will be configured and managed by Guice automatically.

I hope that you enjoyed this little experiment.

PS: If you want to configure your injector with the loaded plugin modules only, you can pass the ServiceLoader instance directly into the createInjector() method, as there is an overload that accepts an Iterable, and ServiceLoader implements this interface.
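
The startup code then shrinks to something like:

Injector inj = Guice.createInjector( ServiceLoader.load( PluginModule.class ) );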

Edit: After I finished this article, I found a similar approach on Stack Overflow, with a nice alternative for module loading.

June 26, 2008

Verbose, concise and how Java relates to that

Filed under: Design, Java, Language — de.velopmind @ 11:28 am

Often when reading a forum discussion somewhere about the verbosity of Java and the conciseness of some language X, we meet people trying to convince us that Java is good as it is, because "I want to express everything clearly, and when reading I want to see what I get. No cryptics, no implicits".

And when debating that opinion, it often comes down to productivity, which is then answered with a hint to "good IDEs", so that Java expressions are typed almost as fast as code in more concise languages.

So what is this thing about expressiveness and conciseness?

Let’s have a look at other formal languages, say: mathematics.

Imagine the following term: 5*4*3*2*1

This is more than a chain of multiplications. It is a chain of multiplications starting from a given number (here: 5) counted down to 1 with step width 1. And this concept has a specific name: factorial.
So "factorial" is an unambiguous, single word for the concept described above in more verbose words.

Likewise in symbolic language, you write this concept as: 5! (say: five factorial)
That this symbol is far better than the multiplication expression can easily be seen when trying to write 100! in long form.

Now consider writing mathematical formulas without the symbolic abstraction of factorial, always expressing it only as a multiplication chain. You could argue that you can easily grok the pattern 5*4*3*2*1 as being a factorial by reflex. But be cautious. Could you really always and immediately say that it is the factorial of five?
Or isn't it too easy to mistake it for 5*4*3+2*1 (as a typo when writing, but more often when reading it in a more complex context)? Here you have a pattern-mismatch mistake.

The same now works for programming languages. When saying that one wants to "express everything clearly", it seems natural to write:

public static final

Indeed, it could be abbreviated to save typing and "verbosity", e.g. to:

pub sta fin

Easier to write, only slightly more cryptic to read.

It would be much more concise if we introduced symbols, thus leading to a more formal symbolic language.
Imagine + for public, ^ for static, _ for final; then the modifier chain above could be: +^_

Write this on a piece of paper and read it three days later. Do you still remember what the symbols mean?

So: symbols are much more concise, perhaps even more precise, but maybe more cryptic for a reader, especially when trying to learn a language. You must not only learn the concepts behind public, static and final, but also their symbols.

But here it comes: the muttering about Java being too verbose is not about abbreviations or the introduction of symbols. It is much more about the abstraction of concepts.

Look: combinations of symbols construct patterns. And such patterns express more abstract concepts.
So when seeing the above pattern (+^_), you can immediately think of it as a "word" with unknown letters, and understand it as the concept "constant".

And here we are! Factorial is a higher-order concept, compiled from and expressed by lower-level terms. Always using the lower-level constructs (the "how-to") instead of the concept is not as easily grokkable as having a symbol for the concept itself.

Likewise, when reading public static final in Java, you always have the effort of transforming it into the implicit higher-order concept of "constant", which is much more easily expressed by, for example: const.

This is what this verbose vs. concise debate is all about.

Could you imagine always writing "small house with big entrance to shelter cars" instead of "garage"?

That's what Java obliges you to do. And regarding IDE support: I do not believe you would really recommend a template-and-shortcut-enabled word processor as the best solution for writing texts, instead of creating better words for things.

In the end, Java expressions are not napkin-able and are not suited for whiteboard development.

I've seen various mailing list posts about this or that algorithm, and it was always a PITA to read Java code out of an email, perhaps wrapped at 80 characters.

Not so in really expressive languages!

Coming back to the pattern mismatch mistake above:

I detected this problem when I refactored a good bunch of code from Java pre-5 while loops to the Java 5 foreach loop. In many, many cases it was easy to exchange the loops. But I found more than one place where the loop construct was slightly different and the replacement did not work.

The point is: I was not aware of these differences, because when browsing over the code, all loops seemed to express the same standard iteration concept: get iterator, while hasNext, elem = next.

Only the refactoring revealed the locations where the looping concept indeed differed.
Expressive? Or truth hidden by a flood of words?
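
To illustrate, here is a hypothetical sketch (not the original code): the first loop converts mechanically to a foreach, the second one only looks like it does.

import java.util.Iterator;
import java.util.List;

class LoopExamples {
    // The standard iteration pattern -- converts cleanly to a foreach loop.
    static void printAll(List<String> items) {
        for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
            String elem = it.next();
            System.out.println(elem);
        }
    }

    // Superficially the same shape, but it removes elements through the
    // iterator -- something a foreach loop cannot do, so a mechanical
    // replacement would break the code.
    static void dropEmpty(List<String> items) {
        for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
            String elem = it.next();
            if (elem.isEmpty()) {
                it.remove();
            }
        }
    }
}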

So when talking about Java's verbosity and new concepts for Java 7 or 8 or whatever, first remember the factorial example and consider whether the new ideas don't express abstract concepts more explicitly, and thus more clearly, than the verbose forms you considered "clear and direct" before.

June 5, 2008

Types: Static, dynamic and the quirks of velocity

Filed under: Experiences, Java, Language, Other-lang — de.velopmind @ 11:24 am

Until recently I was someone who often felt annoyed by the verbosity of Java and the restrictions of the type system, which much too often seemed to stand in the way of fluid development.
Even Generics seemed at first glance to add to this pain, not alleviate it.
I blamed the principle of strong and static typing as well as the edit-compile-run cycle for that, and so I often called for a dynamic language, best of all an interpreted one, to gain more development speed and flexibility.

But recently I stumbled over the flaws of dynamic typing, interpreted code and multi-language mixup in our code generator.

It began with a method which returned a Collection (and I must say that everything worked somehow before I laid hands on it). Porting some code from 1.4 iterator syntax to the new for-loops, and thus introducing type parameters, revealed a design failure: a method in a subclass shadowed a superclass method far up the hierarchy with the same name but a different return type. Well, actually only different for the compiler after parameterising the returned Collection, so it was some sort of "duck typing" before.
This flaw was never detected, as the methods were called in different contexts, each expecting the type of items in the Collection which it indeed got. But after parameterising the Collection with the respective types, the IDE/compiler immediately uncovered the shadowing problem.
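
A hedged reconstruction (all names invented) of the kind of clash that raw collections hide:

import java.util.ArrayList;
import java.util.Collection;

class FooView {}
class BarView {}

class SuperElement {
    // Before generics this was simply: Collection getViews()
    // It actually held FooView items.
    Collection<FooView> getViews() { return new ArrayList<FooView>(); }
}

class SubElement extends SuperElement {
    // With raw types a same-named method returning a raw Collection of
    // BarView items compiled as an override, each caller getting the item
    // type it happened to expect. After parameterising, the incompatible
    // return type makes the compiler reject it:
    //
    // Collection<BarView> getViews() { ... }   // no longer compiles
}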

Now, in an IDE, renaming a method is a simple refactoring step. Much too easy, so it is done without hesitation even far along the development timeline.

Unfortunately I was not aware that this method was called from within a good bunch of Velocity templates, and the IDE was indeed not aware of the dependency between these templates and my Java class.
As Velocity templates aren't compiled but interpreted, there was no way to detect that mismatch at build time.

The application, as I just mentioned, was a code generator, so the problem first occurred during test runs, with some classes now generating less code than before (which generally is good, but not if it happens unintentionally 😉).
Tracing the phenomenon made me feel a bit … weary.

It was easily determined to be caused by my renaming action. The template code called the method under its old name, but instead of the expected collection it got an empty one. This was because, after the renaming refactoring, the actually called method was the one from the upper class.
This (un-)fortunately returned an empty collection; otherwise some other template code would have called the items in a way impossible for the type actually carried by that collection, and an exception would have been thrown.
But as it was, this code was simply bypassed due to the empty collection, resulting in a lack of output.

What to do? Well, the calls to the method using its old name had to be changed.

Unfortunately there was no way to determine where in the templates this method was called, other than doing a dumb grep.
After changing the mass of templates, all seemed well at first glance.
So I built the code generator again, deployed it, and did the next test run.

But what a surprise! Even more files changed, missing parts of formerly generated code!

It took a good while to detect what was going on.

The template was used in different contexts, where it produced the same lines of generated code.

But to do that, it called the mentioned method on a reference, expecting the retrieved Collection to always contain the same type of items. Only the type of the objects on which the method was called was not always the same! The template was also used in a context where another type of object was referenced, its class not being in the hierarchy mentioned before, and so not sharing a common type. A typical case of duck typing: as long as it responds to my message and I get what I want, all is fine. Forget about TYPES!

So what had happened so far was:

  • Class One: renaming the method after getting compiler errors due to the newly introduced type parameters.
  • Templates: renaming the method calls after detecting that the superclass method was mistakenly being called, leading to misbehaviour.
  • Now: getting even more misbehaviour due to wrong method calls on other types.

The latter misbehaviour resulted from the fact that Velocity does not complain if, in code like the following, the Dependencies property cannot be found.

#foreach ($dep in ${element.Dependencies})

So the overall consequence:

  1. Duck typing works by intentionally doing the right things at the right time. Knowing that, it becomes part of your development process.
  2. You can do it easily in Java too: simply use Object and the raw type Collection wherever you can. Seen this way, I don't know why people rant against duck typing. They did it in the pre-Generics era anyway.
  3. When intending to work with typing to get the most from the compiler, interpreted code and dynamic typing can stab you in the back, especially when your interpreter does you the favour of silently ignoring incorrect method calls and the like.

Finally, more seriously: after I had to maintain and, first of all, refactor some … ahem … not so well written foreign dynamic script code, and after getting in contact with Scala, I started to rethink my animosity against strong static type systems and compilers.
I recognise that it was driven more by Java's verbosity and implementation details than by principles.

Things change, me too.

Java: why the heck doesn't the compiler check?

Filed under: Experiences, Java — de.velopmind @ 8:21 am

Well, why do we use typing in our coding? To get type checks from the compiler, I thought.

But the compiler has to know what we want or expect. So we give it even more type information, namely by casting and parameterisation (Generics).

But why the heck is this “fine” Java compiler not able to detect such a simple typo?

Collection<AssociationEnd> lChildViews = lView.getChildViews();

for (AssociationEnd iFacade : lChildViews) {
    WidgetAssociationEnd lFacade = (WidgetAssociationEnd) lChildViews;
    // ...
}

Well, indeed the assignment should have taken and cast the reference in iFacade, not in lChildViews.

The code above resulted in a nice ClassCastException  … at runtime!

Now I am working with so much type information, and the compiler is not able to check that WidgetAssociationEnd is not a subtype of Collection? What the hell is this??

April 14, 2008

How not to use annotations

Filed under: Design, Java — de.velopmind @ 10:12 am

Since Java 5, the markup of code with so-called "annotations" has been expanded to further uses.

While before Java 5 tagging was mainly used for javadoc and to mark a method as deprecated, since Java 5 everyone has been able to define her own meta-language to annotate her code.

While annotations per se may be useful, the way they are created and used since Java 5 seems to have led to a form of usage which leaves the boundaries of OO style, changing Java programming into a kind of declarative programming style.

To talk about annotations, we first have to divide them into different categories:

  • Compile time processed vs. Runtime processed
  • System defined (say: by Sun JDK or a tool vendor) vs. User defined (say: in application development)

Annotations are "meta tags", which means they provide information about the code on a higher level. The problem is what counts as a "higher level".

So let’s have a look at compile time processed annotations first:

Compile-time annotation means the annotation provided by the programmer in her source code is recognised by the compiler when it does its work of transforming source code into byte code.

The three most commonly known annotations are: @Override, @Deprecated and @SuppressWarnings

These three just give a good example where the problem in the meta-debate is:

@Deprecated is a good example of meta-information which does not tell us anything about the code itself, the algorithms, the class hierarchy or other parts of our coded problem solution. Instead it informs us about planning, about conception, specifically about a feature's life cycle.
It simply tells us that the so-marked feature will not be available in a future version, so its usage in client code is currently not wrong, but is not recommended any more.

This annotation does not change the created byte code in any way, neither the code which is annotated nor the code that calls the tagged feature. It is only a hint to an application programmer about what may happen in the future if she keeps using that feature.

@SuppressWarnings is also a good example of meta-information, as it does not say anything about the implementation, the algorithm, the structure or anything else, but is communication with the compiler. It tells the compiler that a warning generated by the following feature is indeed expected and is assumed to be of no interest.

The annotation's existence does not change the result of the compilation in any way, only the way the compiler works with its input.

@Override, however, is a bad example of such meta-information. It tells us that a specific method is assumed to override a method with the same signature in a superclass. While this may be considered meta-information, as it does not tell anything about the method's implementation itself but about our assumptions about that method, it is in fact an addition to the code, not meta-information about it. It is in the same category as the public or final modifiers, for example.
They all do not change the algorithm in any way, but tell the compiler our assumptions about its application. They communicate our software conception. That is what code is about. So public means: may be called from outside. Final means: may not be reassigned. And override means: shall correspond to a method with the same signature in a superclass.

Override gives concrete information about the inheritance structure.
I think Martin Odersky thought the same when he decided to make override a keyword in Scala.

The same can be said about runtime annotations, i.e. annotations which can be analysed at runtime, thus changing or controlling the system's behaviour.

While this may be a good idea, e.g. when writing test code using a testing framework (test methods get the annotation @Test instead of being detected by a name prefix), it can easily be abused when applied to control application logic.

Even in the mentioned test framework, recognising the test methods by annotation is in my opinion more a hint at a design flaw in the underlying programming language (say: Java) than a feature. Obviously it is not possible to code this configuration with language-inherent features, at least not in a satisfying, clear and concise way, making the framework developers fall back on reflection and annotations.

At first, I was not unhappy with the convention-over-configuration approach, assuming all test methods to start with the prefix "test". But besides that, I suspect that there are more elegant solutions in languages that provide functions resp. closures as first-class citizens.

Besides the example above, I stumbled over a question on the Groovy user list, where a validation solution in a Java program, based on annotations, was to be transformed to Groovy. Groovy's current lack of anonymous inner classes led me to doubt the proper use of annotations for that application per se.

The solution had a generic validation method which analysed a given object for annotated properties and called a further validation method for each property based on its specific annotation.

The object to be validated was an anonymous inner object created especially for that purpose, so it was a form of configuration.

Example:

void someMethod() {
    Object checkObject = new Object() {
        @SomeAnnotation
        String myProperty = someValue;
        public String getMyProperty() { .... }
        // other properties of that kind, with different annotations ...
    };
    validate(checkObject);
}

While this approach seems at first a good application of runtime annotations (as the user of the validate method can "easily" configure the concretely applied checks in a declarative way) and indeed OO style (as creating the specific configuration as an anonymous inner class seems OO), it actually hides the fact that the annotation provides no real meta-information about the properties, but is used for runtime flow control.

I do not know the validate() method, but I assume it to be more or less in an imperative style, assembled from reflective access to get the annotations and some if-else chain which calls the corresponding validation method for each annotation with the respective data value to be validated.
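
A hypothetical reconstruction of such a validate() method might look like this (the annotation names are invented to match the example above):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

@Retention(RetentionPolicy.RUNTIME) @interface SomeAnnotation {}
@Retention(RetentionPolicy.RUNTIME) @interface OtherAnnotation {}

class ReflectiveValidator {
    void validate(Object obj) throws IllegalAccessException {
        // Inspect every field of the configuration object ...
        for (Field field : obj.getClass().getDeclaredFields()) {
            field.setAccessible(true);
            Object value = field.get(obj);
            // ... and dispatch on the annotation found there.
            if (field.isAnnotationPresent(SomeAnnotation.class)) {
                checkSome(value);
            } else if (field.isAnnotationPresent(OtherAnnotation.class)) {
                checkOther(value);
            }
        }
    }

    private void checkSome(Object value)  { /* concrete check ... */ }
    private void checkOther(Object value) { /* concrete check ... */ }
}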

So we take a look at the Groovy solution to the above problem.

First, we move the validate() method to a class Validator. This class also provides all the specific validation methods for each annotation (perhaps such a class already existed in the Java solution).

class Validator {
    def checkOne(data) { ... }
    def checkTwo(data) { ... }
    def validate(c) {
        c.delegate = this
        c()
    }
}

Now, assuming we have an instance validator of class Validator, we do the configuration, formerly done in the inner class object, with a closure:

someMethod() {
    validator.validate { checkOne(someValue) ; checkTwo(someOtherValue) }
}

Voilà, the closure passed into the validate method is our configuration of which checks shall be applied to which values.

Instead of annotating properties in a class and then analysing the annotations to know which check method to call, we simply call those methods with the values. Instead of implementing what can be done and then declaring what shall be done, we simply do.

Back to our testing framework: instead of annotating test methods and doing reflective access afterwards, we could also create a test method and give it a test closure:

test("name of test") {
    // test code
}

The point is: we are totally inside the programming language, not on a pseudo meta-level.

March 5, 2008

Programming languages – Contact: The first four minutes

Filed under: Java, Language, Scala — de.velopmind @ 3:10 pm

"Contact: The First Four Minutes" is the title of a book by Zunin and Zunin about the importance of the spontaneous impressions people get from one another in the very short first moments of an encounter.

From the book description:

In four minutes, you will know if the person you are talking to is someone you're interested in. Yes, it only takes about four minutes to decide. Yet in that brief time, you and your partner will have made an indelible impression on one another. Contact: The First Four Minutes shows you how to make that impression a positive one and to develop the skills that will make you an interesting conversationalist to anyone, from a casual acquaintance to your spouse.

I have found that the same is true for a programmer getting in contact with a new programming language. The first moments determine the acceptance of, and further effort put into, learning a new language.

There are some different factors which play a role.

1: The character of the programmer

Some programmers are very adventurous and curious, blessed with the spirit of a pioneer. They easily embrace new languages, even if (and especially when) these languages develop a completely new paradigm, at least one new to them.

At the other extreme we find the programmer who is very shy, easily scared by new languages hyped in the press, especially if they are far from his former experiences and current skills. He always wants to feel "at home".

Most programmers are somewhere between these extremes. In the centre you may find the pragmatic one, torn between the two sides, expressed by the statements "there's no The Tool, always use the best one for the job" and "another hype? Well, what shall that be good for? Where is it different? I can just do it with X".

Sometimes it may even be a matter of age. When you are young, the old stuff is uncool (COBOL?? psaw!!) and the new is sexy and hip. But as you age, the adventuresomeness ceases, learning new stuff is not as easy as it was before (even learning must be trained to stay fit at it), and continuous context switches are considered more and more cumbersome. The question Why becomes more important, and there is the desperate desire to eventually find The Language, which, once learned, would be the Swiss army knife for the rest of one's life. (This search for The Language, by the way, is also detectable in some less weary groups, driven more by an idea comparable to the Grand Unified Theory in physics.)

2: The character of the language

Languages are developed by different people. There are languages which come to life out of the necessity to have a tool for a specific task. They are practice-driven. Then there are languages developed in a scientific or engineering environment for specific purposes, providing powerful expressiveness for specific domains. And we should not forget the academic breeding ground, where languages evolve from scientific research about development per se.

As some languages are considered common by the overall developer community at a given time (most of them in enterprise contexts today), a new language is evaluated by its similarity and comparability to these languages. So a new programming language which mostly enhances the already known ones without bringing in too many new concepts will gain broader acceptance than others, which would be considered "esoteric" by the average programmer.

So it is understandable that we can observe a kind of "evolution" of PLs, where extreme variants can at most survive in niches, while the mainstream languages show more similarities in geno- and phenotype. The more a language can provide "some good improvements" while on the other hand being easily recognisable as a derivative of the already known stuff, the better its chance of being accepted by the developer community.

These "similarities" consist of different things:

a) Is the syntax easily understandable? When you see some code snippets, do they look even slightly "known"? (This can be with regard to already known programming languages or to languages outside the IT domain, like prose, mathematics or even specific domains. See Chef as an esoteric extreme of what I mean.)

b) Does the language provide features and concepts already known from the ones currently in use? In my opinion this is one main question of the current Java-with-closures debate. Those used to Ruby or Groovy can hardly imagine developing without closures. Those used only to Java can hardly imagine what this additional, complexity-enhancing feature is supposed to improve. We can already use inner classes, if such an application design is necessary; see Swing.

c) Does the new language target the problems I currently try to solve with a mainstream language? If you are a business developer, you will not be very attracted by a language whose main feature is easily representing chemical formulas.
If the new language is perfect for statistical calculation, but has no web framework and does not ease GUI development, it may be interesting for that specific component's development department only, but not for the company as a whole. Introduction of such a language may then become a political/strategic issue.

3: The character of the documentation

When a developer gets in contact with a new language, it is often due to some sort of marketing. You read about it somewhere, hear about it from a colleague, there is a session at a conference…

Then you start by visiting the respective home page(s), possibly even download and install the software, and almost always nowadays you will search for a ‘getting started’ document, accompanied by a ‘tutorial’.

These documents are now your first closer contact with the Realm of the New Power. Their quality is vital for your further interest in the new language. What you read here will be crucial for your (conscious or unconscious) decision whether the new language is considered "appealing" or "charming", or "irritating" and "weird", or simply "not relevant", or whatever words you would find for it.

In the same way as a person you meet seems likeable insofar as they address your interests in some way, a programming language and its documentation seem attractive if they address your needs or interests as a developer.

Example: Groovy and Scala

All these parts together explain why I liked Groovy from the first time I met it, and why there is some sort of love-hate relationship in my first encounter with Scala.

Groovy is conceptually based on Python and Ruby (which I encountered in this order and fell in love with in this order too), but due to the tight integration with Java-the-language and Java-the-platform, and due to the fact that it exactly addresses the everyday problems of an enterprise Java developer, it is currently my language of choice, used in as many situations as possible.

Scala, on the other hand, attracts with its conciseness in comparison to Java: it seems to be fast (regarding some benchmarks), fit for enterprise-size projects due to its static typing and its typing concepts in general, and it even runs on the Android mobile platform (which Groovy perhaps never will). I have read many things which indeed may make Scala imaginable as the Next Java. But the complexity of, e.g., the type system and other concepts of Scala is also a barrier for the everyday developer.

What I said about documentation and the first four minutes above is why I consider my encounter with Scala a mixed experience.

When you start, the first thing you will typically try out is what you find behind a link named "beginner's guide", in this case "A Scala Tutorial for Java Programmers". It is a nice start, shows some similarities (classes, generics) and improvements (traits) compared to Java, but leaves one perplexed, with the question: "How does this address my daily problems as a Java programmer? What can I do (better) with Scala now?" Especially the case classes may be a barrier, as they are furthest from the common Java developer's mindset.

Even the second quick introduction, to be found behind the link "a paper giving an overview", is not really a stumble-free way into Scala. At first it seems that this paper would address the considerations of an average programmer, but it stays on an academic and abstract level of speech, presenting the reader with short snippets rather than a continuous example that seems to come out of prototypical practical experience (or are local variables really meant to be one character long, and methods really never longer than four lines?).

Not until you come to the link "Scala by Example" (157 pages) do you find an introduction which covers the basics of the language. But after reading it, a Java developer will miss the next steps. Full of all the new language concepts, you do not get the feeling that you are familiar enough with the language to take on a simple challenge and write a little application doing file I/O, supported by a small GUI or something like that.

Now the questions arise: "How am I meant to read and write a file?", "What does GUI programming in Scala look like?", "Could I write a little servlet with it and start it in Jetty?", "What about JDBC connections or higher ORM support?"

Those are exactly the subjects directly addressed by the Groovy developers and authors, which in my opinion lifted Groovy to its current state of broad acceptance.

I'm also not sure whether the upcoming "Programming in Scala" book will address these questions. Perhaps we have to check out what we see on the "Papers" page to get a better picture? For now I have stopped with the bunch of papers I already have, and will try to gain some more understanding by a hands-on approach, taking the risk of applying Scala in a much too Javaesque way.

Appendix:

What I am mostly looking at when testing a new language:

  • Syntax constructs
  • What is the intended way to work with files, console in and out
  • What is the intended way of logging
  • How is unit testing integrated
  • What about database support
  • How to create GUIs
  • Does it simplify XML handling
  • Is rapid development possible
  • Would I write one-off quickies with it
  • Is there a modularisation concept
  • What about IDE integration

How not to use static class members

Filed under: Design, Java, Language — de.velopmind @ 1:24 pm

Recently I came across an experience which I consider a good example of "how not to use static class members".

Static class members are members (attributes and methods) which are shared between all instances of a class.

I.e.: if one instance changes the value of a static attribute, all other instances see the new value immediately.

Static members, particularly attributes, are surely useful for configuration purposes, where all instances genuinely share common knowledge with each other.

Static members are also indeed necessary for patterns like Singleton or Factory.

They are absolutely necessary to declare constants.

BUT: static members are fatal if their usage includes a runtime aspect.

I experienced that when I inherited some code which extended the JUnit framework to do validation on a loaded model. Each validation was declared as a test method of a so-called "ValidationCase". JUnit implicitly creates a new instance for each test method to be called as a test case, so when running a specific test class, there is a whole bunch of instances of this class. For the validation, they had to share the model element under test.

The quick answer to this problem was: put it into a static member, then all instances will know it. Advantage: the location where these instances were created was inside the JUnit framework and didn't need to be touched.

BUT:
The error in this notion was: not all instances of a test class share this model element (as more model elements are tested before and after by the same test class).

So the rule is: only the instances of the test class which exist at a given time share this knowledge. At other times the shared slot contains other information, shared by the instances existing then.

It worked well as long as only one model element was validated at a time.
The problem with this approach occurred once the validation system was extended to call the validation of one model element from within the validation of another.

Now there were two groups of instances, one for element A, one for element B. But while B was under test, all instances shared the reference to B. And the reference to A was lost, overwritten at some point in time.

Even this worked, as long as the validation of B happened to be the last action in the test of A and the reference to A was never needed afterwards.

But it was obvious that this house of cards would collapse one day….
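
A hypothetical sketch of the trap (all names invented, JUnit 3 style as described above; assume the validation framework sets 'current' before running the test class):

import java.util.Collections;
import java.util.List;
import junit.framework.TestCase;

class ModelElement {
    String getName() { return "element"; }
    List<ModelElement> getChildren() { return Collections.emptyList(); }
}

public class ModelElementValidation extends TestCase {
    // Shared by ALL instances of this class -- including the fresh
    // instances JUnit creates for a nested validation run.
    static ModelElement current;

    public void testNameIsSet() {
        assertNotNull(current.getName());
    }

    public void testChildrenRecursively() {
        for (ModelElement child : current.getChildren()) {
            current = child;   // clobbers the outer element!
            // ... trigger the nested validation of 'child' here ...
        }
        // From here on, 'current' no longer refers to the element
        // this instance was supposed to validate.
    }
}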

So: every time you introduce a static variable into a class, check carefully whether its value is really valid for all instances of this class, even over time.
Avoid using static members outside of well-known patterns (like constants, Singleton, Factory …).
