Good work summing up the discussion!
I looked into the three possibilities and wrote down the pros and cons.
First a *lot* of comments, then my detailed answer on the three options.
Chris, Antoni, I humbly ask you guys to read this, even if it takes
long... well, or just skip forward to the conclusions...
Just to get this right before you complain about my language: by POJO
jars I mean "plain old java jars",
the normal jars that we all use every day. POJO normally means "plain
old java object", but I hope you can see what I mean.
It was Antoni Mylka who said at the right time, 26.01.2009 14:13, the
following words:
Well, it's not going in circles, it's just taking ages to get there.
A reply to aperture-dev:
2009/1/26 Antoni Mylka <firstname.lastname@example.org>:
With the fixes provided by Antoni I managed to get the "bundleized"
Aperture to run in SMILA.
Leo, Christiaan, the discussion about coarse vs. fine grained bundles
has been going in circles for more than two years now.
IMHO the goal was always to be able to make fine-grained bundles for
Aperture, but we knew that it was too much work "now",
and nobody explicitly asked for it "back then", but we knew it might come.
That's why we did the coarse-grained bundles AND planned for fine-grained
activators, for "later".
So I think the discussion is not going on forever; the goal was always clear.
What takes forever is the bundling work,
which is bad but "the way": it is hard work, it is error-prone,
it's hard to test... but it is the same hell in OSGi or Maven.
The current solution, with selectors.xml, is a kludge.
Yep, a kludge, but it works ;-)
I would compare it to the (perl?-based) apt-get scripts,
a black art (to me) that somehow works...
Hehe, ok, more about the other things below...
Yeah, this is right.
We create some bundles, but
the only way to tell if they work is to deploy them in an app and see
what happens. That means hundreds of manifest entries (Import-Package,
Export-Package, versions, ids etc.). All this for three aperture
bundles, rdf2go, sesame, and third-party libs. I announced the first
version of the smila-prep-branch on 14.12.2008 - a month ago and it
started to work only now.
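To illustrate what those hundreds of manifest entries look like, here is a small, hypothetical excerpt of an OSGi bundle manifest; the package names and version ranges are invented for illustration, not taken from the actual Aperture bundles:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.semanticdesktop.aperture.core
Bundle-Version: 1.2.0
Export-Package: org.semanticdesktop.aperture.extractor;version="1.2.0"
Import-Package: org.ontoware.rdf2go.model;version="[4.6,5.0)",
 org.openrdf.repository;resolution:=optional
```

Every exported and imported package needs such an entry, with versions kept consistent across all bundles, which is exactly where the manual maintenance pain comes from.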
But it did work in OSGi already for a long time, based on the
hand-maintained scripts we have
for NEPOMUK OSGi.
Still, that is a long-term goal I think we should work towards.
I say that if we have 20 bundles instead of 3, it will be VERY
difficult to produce a working release at all with the current source layout.
But I am perfectly ok with "intermediate finer-grained releases" that
still have a coarser granularity than the end goal,
like the one we are discussing now with "core", "contrib" and "impl".
Generally, in proper OSGi, jars should never go into bundles.
I proposed a compromise on
http://www.dfki.uni-kl.de/~mylka/aperture-smila-test.tar.gz with three
bundles: core, impl and contrib. Impl is clear to use in eclipse,
contrib is not. All the libs are either in the 'inlib' subfolder of
each module or packaged into a target platform. The inlibs go INTO the
generated bundle jars.
It was always a bit ugly to put all the external jars into the bundles.
OSGi would give us the means to do it right with individual bundling,
and together with empolis and brox we are able to do it now.
The way I see to "do it right" is to repack the jars as OSGi bundles and stop
doing "lib" manifests altogether.
Btw, I see that the "inlib" of core contains applewrapper again, which
we had shouted "out!" at before.
One of my goals would be to have ONE aperture distribution which is
valid for both OSGi and POJO.
For the external jars that means: if we move everything to proper
OSGi bundles, we bring the two aperture releases closer to each other.
That does not solve the problem of the inlib jars not being exposed as
normal POJO jars.
the target platform goes directly into the
Huh, I would see it as easier to move a file than to change the ant script.
If a component is cleared with ESF, we can just move the
files, without changing anything in the ant scripts.
A slight advantage is also that we can document with <!-- comment -->
in the selector what the status of each component is.
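As a sketch of what that in-place documentation could look like (the element and attribute names here are hypothetical; I am not quoting the actual selectors.xml schema):

```xml
<!-- applewrapper: NOT yet cleared with ESF, keep out of the release -->
<!-- poifs: cleared with ESF, ok to ship -->
<selector name="impl">
  <include name="poifs.jar"/>
</selector>
```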
If we really want to make a proper division of concerns between
modules, we'd need something similar to the Maven setup Aduna did for
Sesame (many small modules AND the ability to build a onejar).
I would rather say:
together with the ability to generate OSGi manifests for each of them
and include them in a PDE workspace (I say generate, not store them
in SVN for manual maintenance),
and together with the ability to have the same Aperture jars work as OSGi
jars and as POJO jars.
That's good news, but I'll trust it once I've tried it myself :-)))
Two years ago the state of the OSGi tools
was different. Now bnd is nice enough; I use it to generate the
sesame bundle and it works.
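For reference, generating a manifest with bnd boils down to a small instruction file; this is a hedged sketch with illustrative names, not the actual file used for the Sesame bundle:

```
# sesame.bnd -- bnd instructions (names are illustrative)
Bundle-SymbolicName: org.openrdf.sesame
Bundle-Version: 2.2.4
Export-Package: org.openrdf.*;version=2.2.4
Import-Package: *;resolution:=optional
```

bnd computes the Import-Package list by inspecting the bytecode of the jar, which is exactly what would save us from maintaining hundreds of manifest entries by hand.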
Can you give me a hand sometime, or point me to the right pages to read?
I generally agree that we should go the maven way of life,
but even after two years of talking about this, we haven't yet
found the time to come up with a convincing mavenization of aperture.
And also, Maven causes about as many problems as it solves;
it's the same as with any other framework.
Look at the numerous problems people have with the latest RDF2Go
(not that this is especially a maven problem,
it's just: people will have the same issues with maven as with ant or
osgi, it's no silver bullet).
So there are three ways:
1. Aduna experience + research into maven-osgi problems = 45 bundles.
(I don't know if we'll be able to use PDE though; the concept of a
target platform requires all jars to be in a single folder, while
maven wants each jar to be in the repository - I don't know if they
sorted that out.)
Pro:
* clear expression of dependencies using pom.xml files
* automated headless scripting process
* automated build, test, deploy
* automated download of 3rd party jars
* excellent integration with Eclipse AND NetBeans
Con:
* not clear if the OSGi tools for pom.xml work perfectly
* not clear if all the 3rd party libs we need are in a maven repo
* we have to set up and maintain a public maven2 repo 24/7 (= a lot of work)
Pro and Con:
* everything needs to be separated into ~45 bundles (= 45 .pom files,
projects, etc). This is both good and bad.
- There were alternative models proposed for mavenization with fewer
than 45 bundles.
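To make the "clear expression of dependencies" point concrete, each module's pom.xml would declare its needs roughly like this (the groupIds and artifactIds are invented for illustration, not an existing Aperture layout):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.semanticdesktop.aperture</groupId>
  <artifactId>aperture-extractor-pdf</artifactId>
  <version>1.2.0</version>
  <dependencies>
    <!-- each extractor module states exactly what it needs -->
    <dependency>
      <groupId>org.semanticdesktop.aperture</groupId>
      <artifactId>aperture-core</artifactId>
      <version>1.2.0</version>
    </dependency>
  </dependencies>
</project>
```

This replaces both selectors.xml and the hand-maintained .classpath as the single statement of "who needs what".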
Still, I don't see anyone really taking responsibility for the maven setup,
and once someone picks it up, it will stick as a constant task for the
coming years.
- Also, "the nepomuk experience" showed us that working with a target
platform creates its own private hell.
You typically spend two hours a week tracking down
"why the hell does it not find java.lang.Object today - is the moon
in the wrong phase?"
Eclipse gives you no useful error messages about the target platform.
So, in theory good, it does solve a lot of problems,
but not as good as a proper mavenization that does NOT use PDE.
To sum up:
solution 1 would be the right thing in my eyes, but would take one full
person to care for it
for the coming years.
The compromise solution is well thought out; I'll try to list pros and cons:
2. My compromise solution -
http://www.dfki.uni-kl.de/~mylka/aperture-smila-test.tar.gz = 4
bundles (core, impl, contrib, example). It will take a day or two to
solve all minor issues, to make all tests pass and to prepare
everything. You can still work on it in eclipse as a single project,
just as you do now; if you don't want osgi, the only thing you need to
endure is 8 or 9 source folders instead of 4. If you want to work on
OSGi, set up another workspace according to the instructions and you can
work there.
* JUnit tests can be run from eclipse
* compile and dependency check from eclipse PDE dev environment
* is still a valid eclipse project
* it goes towards the "each extractor into one jar" solution.
* it still doesn't really sort out the 3rd party JAR problem - the "inlib"
folder is a hack. It should be OSGi dependencies using manifests, and
all should be in the target platform.
* breaks Ant compatibility - we have no headless build anymore (or, to
keep ant alive, we have to manage BOTH ant scripts and eclipse
configuration, the trouble which we ALREADY have)
* no headless build! It is a hellish endeavour to set up Eclipse's
headless build environment; the SMILA guys said it's a bit painful
* still the same bundlization we have today: four bundles. It goes a
slight step towards the "45 bundles" solution, but it does not clearly
show us how to get the "one jar" also.
* no clear way to build releases. Pressing "export" in eclipse is
dangerous and bad; we know how many times NEPOMUK broke because some
guys used different eclipse versions, different platforms, different
Java runtimes when they pressed this button.
To sum this up:
I don't like it, because it depends too much on the Eclipse way of building,
which is too complex compared to ant.
Also, it does not help to headlessly test it in an osgi platform.
We ALWAYS have problems with OSGi; it doesn't behave 101% the same way
in developer and in runtime mode (= heisenbugs happen here).
3. Status quo - selectors.xml, manual maintenance, more difficult
testing and lower quality
* it works today
* the java project remains simple: one big java folder with everything
in it; this is understandable by any passing java developer
* manual maintenance of both ant, selectors.xml, and .classpath for
every change
* no standardized way of writing down "who needs what"
The critique is right that testing and quality assurance is indeed BAD
for our OSGi release,
BUT!! - testing and quality assurance for the POJO release works!
(or at least should work, we have a "test" target in build.xml)
== My conclusion ==
The critique is right that we need automated testing of the OSGi
release to check
if we (or some automated framework like eclipse or maven bnd) screwed
up the build process.
Note that we already have a working test for the normal POJO release,
we have JUnit tests.
I see three things to achieve:
* have two releases: one with three jars (core, impl,
therestwithfunnylicenses) and one with ~45 jars (core, extractor1,
extractor2, ... therestwithfunnylicenses)
* all release jars must be both OSGi and POJO compatible (= the 3rd
party libs inside the jars must go, no jar-in-jar anymore)
* the OSGi and POJO releases must BOTH be JUnit-testable headlessly from ant
Moving to Maven (solution 1) will NOT increase the quality of the OSGi release.
From the last years there is one thing I really
learned the hard way:
merely using a framework does not increase the quality.
Once we have moved to Maven, we are in the same position as today:
we have no way to automatically test the OSGi packages.
So besides investing months, this will gain nothing.
Moving to the Eclipse-based way (solution 2) has similar caveats.
I do not trust the automated build process of eclipse;
it is very hard to set up headless, and it does not help with automated
testing.
It is totally unclear how we can build the onejar and the individual
jars automatically using eclipse.
The build process will be broken, and it will
take more than two days
to get this done.
I have no idea how to do the "onejar" with eclipse building!
I prefer version 3), because it's based on the things we have done in
the past years, and the work to do NOW is minimal.
If we stick with 3 though, we should invest the time to set up a headless
OSGi container, to have a headless build process that copies
freshly built releases into an OSGi container (= probably equinox,
because we can wrestle that beast already).
If we set up a way to run JUnit tests in a headless OSGi container,
then we could raise the quality and test-drive
the manually-maintained manifest files.
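As a sketch of what headless equinox could look like (bundle names and paths are illustrative, this is not a tested setup): a config.ini listing the freshly built bundles, launched from the command line:

```
# config/config.ini -- which bundles Equinox should install and start
osgi.bundles=aperture-core.jar@start, aperture-impl.jar@start, \
  aperture-test.jar@start
osgi.noShutdown=true
```

Started with something like "java -jar org.eclipse.osgi.jar -configuration config", where a hypothetical aperture-test bundle's activator runs the JUnit suites against the installed bundles and reports the result. If the tests pass there, the manifests resolve.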
From all the talking in the last years, my "preliminary" lesson learned is
that moving to maven will not fix the problem of quality assurance of
the osgi release;
it will just take a lot of time. But I hope that I will be proven wrong
in the coming years :-)
Note: we already have a "test" target in the build.xml; this is a good start.
Let's just add another goal "run the JUnit tests within an OSGi container"
and fix the existing build.xml.
(I just checked: the existing build.xml "test" target is a bit broken,
the junit task could not be found,
but I guess it's better to fix that than to start something new.)
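The "junit task could not be found" symptom is usually a classpath issue. A minimal sketch of a working target (the paths here are assumptions about our layout, not the actual build.xml):

```xml
<!-- assumes junit.jar is available in lib/ -->
<target name="test" depends="compile">
  <junit printsummary="yes" haltonfailure="yes" fork="yes">
    <classpath>
      <pathelement location="build/classes"/>
      <fileset dir="lib" includes="*.jar"/>
    </classpath>
    <formatter type="plain"/>
    <batchtest>
      <fileset dir="build/classes" includes="**/*Test.class"/>
    </batchtest>
  </junit>
</target>
```

If the task itself is still not found, the ant-junit jar needs to be on ant's own classpath (e.g. started with "ant -lib lib test") rather than only on the test classpath.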
DI Leo Sauermann, http://www.dfki.de/~sauermann
Deutsches Forschungszentrum fuer Kuenstliche Intelligenz DFKI GmbH
Trippstadter Strasse 122
P.O. Box 2080
D-67663 Kaiserslautern, Germany
Fon: +49 631 20575-116
Fax: +49 631 20575-102
Mail: email@example.com

Prof.Dr.Dr.h.c.mult. Wolfgang Wahlster (Vorsitzender)
Dr. Walter Olthoff
Vorsitzender des Aufsichtsrats:
Prof. Dr. h.c. Hans A. Aukes
Amtsgericht Kaiserslautern, HRB 2313