
Java version, et al

Developers
2016-02-02
2016-11-04
  • Nik Trevallyn-Jones

    Hi All (but mainly Peter),

    I am about to put a minor update into SPManager.
    What is the current (minimum) version of Java source and bytecode that we are compiling for these days?

    Cheers!
    Nik

     
  • Luke S

    Luke S - 2016-02-03

    The javac target in the standard ant script is 1.5. Whether this should be the target, though, is another question, about which I'm not an expert. I do note that even 1.6 has been EOLed for general public users. The question would be: would upping the target version translate to any advantages from a user perspective? Any features/options it might unlock from a development perspective?

     
  • Peter Eastman

    Peter Eastman - 2016-02-03

    It certainly would be reasonable to increase the minimum version. While there might conceivably be some ancient computers out there still running 1.5, I would guess the number is pretty tiny. But if you're asking what the current release was compiled for, then yes, it's 1.5.

    Peter

     
  • Nik Trevallyn-Jones

    Ok, then my various ant files are still fine. :)

    Cheers!
    Nik

     
  • Nik Trevallyn-Jones

    In case anyone is wondering:

    There are a couple of issues that I am aware of with the version of the Java byte-code.

    1. the byte-code modification fiasco
    2. dynamic method dispatch

    1. Oracle, in their "infinite wisdom", lobotomized the byte-code modification process in a non-backward-compatible way, such that old-format modified byte-code required an "enable backwards compatibility" flag (HotSpot's -XX:-UseSplitVerifier) on newer VMs, which Oracle then removed in Java 8 (and later versions of Java 7).

    This caused no end of problems with almost every Enterprise back-end Java app and app-server, since they all use byte-code modification to support core functionality - meaning that many sites could not upgrade to Java 8 (or even later Java 7) VMs because it broke their apps and app servers.

    AOI doesn't use byte-code modification (although I have considered it for at least 3 different situations), so we are (currently) immune to Oracle's lunacy on this.

    2. Java 1.7 implemented dynamic method dispatch (the invokedynamic instruction) in the byte-code. This enables dynamic methods - and in particular Groovy closures - to execute much faster.

    So in theory, since we are now using Groovy, we should investigate the possible benefits of enabling the dynamic dispatch operators for Groovy.

    I haven't investigated this, but compiling Groovy into 1.7 byte code and calling it from 1.5 byte-code could well work.

    So at this stage, it seems we are probably following a safe path by generating 1.5 bytecode, but we should probably investigate any possible benefits of enabling 1.7 bytecode for Groovy - possibly only for Groovy.
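
    For anyone curious, here is a minimal sketch of what enabling the dynamic dispatch backend could look like from Java - assuming Groovy 2.x with its invokedynamic-variant jars on the classpath; the "indy" optimization flag is Groovy's documented option, but the surrounding class and script are illustrative only:

        import groovy.lang.GroovyClassLoader;
        import org.codehaus.groovy.control.CompilerConfiguration;

        public class IndyDemo {
            public static void main(String[] args) throws Exception {
                // Ask the Groovy compiler to emit Java 7 invokedynamic call sites.
                CompilerConfiguration config = new CompilerConfiguration();
                config.getOptimizationOptions().put("indy", true);

                // Classes from this loader use dynamic dispatch in the byte-code;
                // the Java code that calls them can stay at an older target level.
                GroovyClassLoader loader =
                        new GroovyClassLoader(IndyDemo.class.getClassLoader(), config);
                Class<?> script = loader.parseClass("def twice = { x -> x * 2 }; twice(21)");
                System.out.println("Compiled " + script.getName() + " with indy enabled");
            }
        }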

    Cheers!
    Nik

     
  • Luke S

    Luke S - 2016-02-03

    Interesting thought. What sort of situations would you consider modifying bytecode for? Do you have a reference on how this has changed/what issues it might raise? If I understood you correctly, old-style modified bytecode, compiled to target 1.5, will not work properly on later Java 7/8?

    Java 1.7 implemented dynamic method dispatch in the byte-code. This enables dynamic methods - and in particular Groovy closures - to execute much faster.

    Also a built-in Lambda syntax, in 1.8, I believe. Brings up some memories of the Raytracer-restructure discussion...

     
    • Nik Trevallyn-Jones

      One of the classic use-cases for byte-code modification is Aspects:
      https://en.wikipedia.org/wiki/Aspect-oriented_programming
      https://en.wikipedia.org/wiki/AspectJ

      In modern Enterprise Java apps - EJB or otherwise - such features as declarative transactions, centralised exception handling, security, caching, auditing, ORM change-detection, logging, and much more, are implemented as some form of an Aspect - often using bytecode modification.

      Instrumentation systems such as NewRelic and Spring Insight also use byte-code modification.
      I used to see 3 or 4 such tools or frameworks in use in a typical Java app.
      https://blog.newrelic.com/2014/09/29/diving-bytecode-manipulation-creating-audit-log-asm-javassist/

      More advanced uses of Aspects include things like the Visitor Pattern.

      With AoI, I seriously considered using Aspects to implement the additional behaviour in the Raytracer that was not supported in the class design - for features such as the ShadowCatcher and, earlier, the Hologram plugins. Had I used Aspects, the ShadowCatcher code would have been significantly more maintainable than the current source-code overlay approach that I eventually chose - which needs to be updated every time the AoI source code changes.

      Of course, we might have been bitten by the Java 1.7 byte-code problem, but the AspectJ folks sorted that out fairly quickly, so overall Aspects would still have been a win in that situation.
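
      To make that concrete, here is a rough AspectJ-style sketch of what a ShadowCatcher overlay could have looked like - the pointcut and method names are hypothetical, not AoI's actual Raytracer API:

          import org.aspectj.lang.ProceedingJoinPoint;
          import org.aspectj.lang.annotation.Around;
          import org.aspectj.lang.annotation.Aspect;

          @Aspect
          public class ShadowCatcherAspect {
              // Intercept a (hypothetical) shading method on the stock Raytracer
              // and post-process its result, instead of maintaining a patched
              // copy of the Raytracer source.
              @Around("execution(* artofillusion.raytracer.Raytracer.shadeRay(..))")
              public Object catchShadow(ProceedingJoinPoint jp) throws Throwable {
                  Object color = jp.proceed();   // run the original implementation
                  // ...adjust 'color' for shadow-catcher surfaces here...
                  return color;
              }
          }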

      Also a built-in Lambda syntax, in 1.8, I believe.

      Yes, with all the politicking about the Java Closure (Lambda functions) feature in Java 1.7, we ended up with the absurd situation that every language that ran on the JVM could support closures - except the Java language itself. :/

      But the Java 1.8 closures are here now, and very nice. :)

      Brings up some memories of the Raytracer-restructure discussion...

      True - closures entered the discussion regarding Raytracers for the same reason that I contemplated Aspects: closures allow a finer-grained, method-level polymorphism than the coarser-grained, class-level polymorphism supported by inheritance. So closures would then be another way to allow behaviour in the Raytracer to be redefined without subclassing, or the cloning of existing source-code.
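
      As an illustrative sketch (the names are invented, not AoI's real classes), this is the kind of method-level injection closures allow:

          import java.util.function.DoubleUnaryOperator;

          // A single shading step is injected as a closure, so one variant of
          // behaviour no longer requires one Raytracer subclass or source copy.
          public class ShaderDemo {
              static double shade(double intensity, DoubleUnaryOperator attenuate) {
                  return attenuate.applyAsDouble(intensity);
              }

              public static void main(String[] args) {
                  System.out.println(shade(1.0, i -> i));        // stock behaviour
                  System.out.println(shade(1.0, i -> i * 0.75)); // fog-attenuated variant
              }
          }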

      I also contemplated closures as a way to access advanced CPU features such as SIMD.

       
  • Peter Eastman

    Peter Eastman - 2016-02-04

    I also contemplated closures as a way to access advanced CPU features such as SIMD.

    How would that work?

    Peter

     
  • Nik Trevallyn-Jones

    I also contemplated closures as a way to access advanced CPU features such as SIMD.

    How would that work?

    The approach I investigated was:
    • take one of the simple (and small) open-source byte-code interpreters - a number of them do a very simple mapping of byte-codes to internal library code;

    • modify that internal library code to make use of SIMD instructions under specific, reliably-detected circumstances - resulting in a SIMD-enabled interpreter;

    • write the raytracing code in which we wanted to use SIMD as one or more closures;

    • arrange for the bytecode of the entire closure(s) to be passed to the SIMD-enabled interpreter in a single JNI call.

    The problems I was attempting to solve were:
    1. ensure that SIMD instructions are being used on loops over arrays of values;
    2. reduce the JNI overhead to once per closure call (the outermost loop) rather than once per inner-loop iteration.

    More recent research implies that the modern JIT compilers are capable of generating SIMD instructions - but I haven't had a chance to test that yet.

    If they do, then it might be worth arranging AoI code to take advantage of that - after testing if it actually helps, of course.
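
    For anyone who wants to test it, the pattern the JIT's auto-vectorizer looks for is a simple counted loop over plain arrays, along these lines - whether SIMD instructions are actually emitted depends on the JVM version and the CPU:

        public class SimdProbe {
            // The shape of loop HotSpot's "superword" pass can auto-vectorize:
            // plain arrays, a simple counted index, and no calls or
            // data-dependent branches in the body.
            static void scaleAdd(float[] dst, float[] a, float[] b, float s) {
                for (int i = 0; i < dst.length; i++) {
                    dst[i] = a[i] + s * b[i];
                }
            }

            public static void main(String[] args) {
                float[] a = new float[1024], b = new float[1024], dst = new float[1024];
                for (int i = 0; i < 100_000; i++) scaleAdd(dst, a, b, 0.5f); // let the JIT warm up
            }
        }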

     
  • Luke S

    Luke S - 2016-02-05

    Thanks for the links/overview.

    It sounds, at least in the abstract, like a bird's-eye view of some of the re-arrangements you are considering would be similar to what would be needed to use GPGPU (OpenCL or similar) acceleration, i.e. extract all the heavy math so that it can be handled by a set of fairly simple, straightforward instructions that the hardware can run in parallel.

     
  • Peter Eastman

    Peter Eastman - 2016-02-08

    That's an interesting idea, and would be really cool if it could be made to work.

    There are two main approaches that could be used to support SIMD: the easy approach, and the fast approach. The easy way is to keep everything structured pretty much the way it is, but try to accelerate individual operations. An example is OctreeNode.findNextNode(). It's one of the main places time gets spent, and it's basically doing operations on 3-component vectors. It would be really easy to use AVX to make it close to three times as fast. This approach could produce a significant speedup, but not nearly as much as the second approach: trace rays in bundles so that instead of processing one ray at a time, you process four or eight at a time. That requires a complete redesign of the whole architecture, but has the most potential benefit.
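
    As a rough sketch of the data layout the second approach implies (all names invented for illustration), a structure-of-arrays bundle lets one loop advance four rays in lockstep - exactly the shape a SIMD unit can map onto single instructions:

        // Illustrative only: four rays stored component-by-component
        // (structure-of-arrays), so each line of the loop body is the same
        // operation applied across the whole bundle.
        public class RayBundle {
            final double[] ox = new double[4], oy = new double[4], oz = new double[4];
            final double[] dx = new double[4], dy = new double[4], dz = new double[4];

            void advance(double t) {
                for (int i = 0; i < 4; i++) {   // one step per ray in the bundle
                    ox[i] += t * dx[i];
                    oy[i] += t * dy[i];
                    oz[i] += t * dz[i];
                }
            }
        }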

    Peter

     
  • Stephen Parry

    Stephen Parry - 2016-10-06

    Picking up from 54a9b500 with @Luke S

    Our user base is presumably predominantly Windows / MacOSX / Linux. I think most of us can run JDK 8 on recent versions of these platforms. One of my machines is 10+ years old and running Linux; I think it can handle JDK 8. IIRC it has JDK 7 currently. For Linux, I would think the bigger issue would be Oracle versus OpenJDK. Has anyone tried AOI on OpenJDK?

    Does anyone know of any full (i.e. non-mobile) JDK platforms that can run JDK5 but cannot run JDK8?

    If most or all of us can run JDK 8, I think pushing forward to that release opens up so many possible improvements in codebase maintainability and performance that it seems a no-brainer.

     
  • Luke S

    Luke S - 2016-10-07

    I've been doing my recent builds on OpenJDK8. Most of my test running has been on Oracle JVM, though.

    Any Mac user who is still back at Lion or earlier will have a hard time getting anything more recent than Java 6. I'm not in the Mac world, but I suspect that is about as likely as a Windows user running Win98?

     
  • Peter Eastman

    Peter Eastman - 2016-10-07

    Oracle's Java does support Lion (10.7), so they'd have to be on 10.6 or earlier. Which isn't actually all that old: 10.7 was just released in 2011, so it's a little bit newer than Windows 98. :) But since the last four releases (10.9 and later) have all been free upgrades, the people still running such old versions are mainly going to be ones with very old hardware that can't support recent OS versions.

     
  • Stephen Parry

    Stephen Parry - 2016-10-07

    (Realised I posted on wrong thread).

     

  • Luke S

    Luke S - 2016-10-12

    I've run test builds with target versions 1.6, 1.7, and 1.8.

    So far, the only thing to be aware of in the source happened in the jump from 1.5 to 1.6. The standard changed such that all characters in .java source files are expected to be UTF-8, even in the comments. If your source does not comply, the compile process errors out. We've got a few non-ASCII characters in comments, mostly coded as ISO-8859-1, but with a couple of byte values that don't match any character in any common encoding. Options:

    • Convert the character set
    • Explicitly pass input format to javac in the ant builds
    • Pass an option to suppress the error

    There are only a couple of affected files, so I'd prefer to convert. That's open to discussion, though. (I already have it done on a local branch.)
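
    For the record, the conversion itself is tiny - here is a sketch of it in plain Java using only standard java.nio (the command-line handling is illustrative):

        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class Transcode {
            public static void main(String[] args) throws Exception {
                // Re-read a source file as ISO-8859-1, rewrite it as UTF-8.
                Path file = Paths.get(args[0]);
                String text = new String(Files.readAllBytes(file), StandardCharsets.ISO_8859_1);
                Files.write(file, text.getBytes(StandardCharsets.UTF_8));
            }
        }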

    I do note that more recent source/target values for javac tend to produce more -Xlint-type warnings, which may be worth a cleanup at some point.

    Any known downsides to bumping to at least 1.6?

     
  • Stephen Parry

    Stephen Parry - 2016-10-12

    François G. has a lot to answer for. How dare he be French!

    Character set is a slippery issue; everything you use potentially messes with the text in different ways, and you never know where the misinterpretation creeps in - HTML transfer, git, editor, OS, compiler. Netbeans assumes an odd Windows character set by default, so I had crazy issues with the source too, but switching the project default to UTF-8 seemed to make it OK. I'll try checking in more detail later. Can you identify the ISO-8859-1 files please, Luke?

    IMHO switching the encoding to UTF-8 is the way to go - even Windows itself seems to know what UTF-8 is nowadays - and anything else that matters does. I was trying to explain UTF-8 and UTF-16 (!) to my students yesterday - much brain boiling.

     
  • Peter Eastman

    Peter Eastman - 2016-10-12

    Agreed. Let's just eliminate the non-ASCII characters. Especially if they're just in comments, it's not worth adding complexity just to deal with them.

     
  • Luke S

    Luke S - 2016-10-12

    For the most part, that may be true. There are only four files that have them, though.

    They are:

    artofillusion.math.SimplexNoise
    artofillusion.ui.ThemeManager
    artofillusion.ui.ToolButton
    artofillusion.translators.POVExporter

    In two of these cases, the comment in question is the copyright notice for files contributed by François. I can see transcoding, but changing that to a standard 'c' might, theoretically, invite legal issues. I doubt François would say anything, but it would set a nasty precedent.

    Also, to be clear, the transcoding to UTF-8 is already done in one of my local branches. It took me about five minutes, and git does track such changes properly. The major question right now is whether we should bump our targeted JVM version, and which one to bump to.

     

  • Stephen Parry

    Stephen Parry - 2016-10-12

    Go 8! Go on you know you want to! :-)

    Seriously though, that's my vote. It means we don't take another hit for a while longer. It allows us to use all the nice new goodies. We could do another trial release of 3.03 compiled against 8, with a clear notice, and ask for feedback from anyone who takes a hit.

     
  • Stephen Parry

    Stephen Parry - 2016-10-12

    P.S. I concur: keep the cedillas (ç) and UTF-8 everything.

     
  • Stephen Parry

    Stephen Parry - 2016-10-12

    Looked at SimplexNoise - the culprit is a MacOS-encoded backquote in a comment on line 530. I think we can safely change that to a single quote!
    ThemeManager and ToolButton are François's.
    The POVExporter (lines 468, 498, 532) is more tricky, however. The code has already been 'lost in translation' somewhere, as the current version includes UTF-8 'I can't do this' replacement characters. The offending characters are in a debug statement, and as best as I can determine between Google Translate and my wife (who has a degree in German), the original word should have been Größe, meaning size or magnitude.
    These should be fairly easy to recreate in UTF-8.

     

  • Pete

    Pete - 2016-10-13

    Keep the cedillas (ç) and UTF-8 everything

    That'd be considered civilized by the rest of the world ;)

     
  • Luke S

    Luke S - 2016-10-13

    We should probably just translate the German comments into English. I suspect that they were leftovers from the original development. BTW, @Stephen, your text editor is missing something: those comments are encoded ISO-8859-1, and my conversion program (iconv) correctly transcoded them to 'Größe'.

    The simplex noise line must be a typo. Per context, that character should be an apostrophe.

    Most files need zero changes, since 7-bit ASCII characters are an exact match to their UTF-8 equivalents, and neither format specifies any file-type marker, etc.

     
  • Stephen Parry

    Stephen Parry - 2016-10-13

    I was looking at the German in raw hex, so either I saved over it inadvertently or it's a 'git up'.

     
