Mapping Java Enums in Slick

We had some Scala code that depended on a bunch of Java enums, and I was adding some support for mapping them to Slick columns. (Side note: so far I think Slick is a great technology.) One of the many handy features of Slick is that you can use a type class to define a mapping between some arbitrary type and a database-friendly type. For example, if you have a Java enum “SomeEnum”, defining a mapping is as simple as creating this implicit value:

// Assumes the Slick profile api is in scope, e.g. import slick.driver.H2Driver.api._ (the exact import varies by Slick version and driver)
implicit val SomeEnumMapper = MappedColumnType.base[SomeEnum, String](_.name, SomeEnum.valueOf _)

Now you can use it:

def myColumn = column[SomeEnum]("myColumn") // maps to a String column

But suppose you have lots of enums, where “lots of” is an integer greater than one. You can write a generic enum mapper “generator” that pops out an enum mapper on demand, i.e. whenever the compiler needs one. This is a great case for implicit def: as you would expect, val is for when you have just one instance, and def will generate a new instance for each invocation. The tricky bit is the reverse mapping (SomeEnum.valueOf _ above) since you can’t write A.valueOf _ for a type parameter A. Instead we can use a bit of Reflection black magic to summon a Class[A], then stuff that into the generic java.lang.Enum.valueOf(Class, String) method:

import scala.reflect.ClassTag

implicit def JavaEnumMapper[A <: java.lang.Enum[A]](implicit classTag: ClassTag[A]) =
  MappedColumnType.base[A, String](
    _.name,
    x => java.lang.Enum.valueOf(classTag.runtimeClass.asInstanceOf[Class[A]], x))

Or the sugary version using a context bound (about 5 characters shorter… yay?):

implicit def JavaEnumMapper[A <: java.lang.Enum[A] : ClassTag] =
  MappedColumnType.base[A, String](
    _.name,
    x => java.lang.Enum.valueOf(implicitly[ClassTag[A]].runtimeClass.asInstanceOf[Class[A]], x))

Now we can write column[AnyEnum] for any Java enum type.
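For example, a table definition can now mix enum columns freely. Here's a minimal sketch, assuming two hypothetical Java enums Suit and Rarity and the usual Slick profile import:

// Suit and Rarity are hypothetical Java enums; assumes the profile api import is in scope
class Cards(tag: Tag) extends Table[(String, Suit, Rarity)](tag, "cards") {
  def name = column[String]("name")
  def suit = column[Suit]("suit")       // the compiler summons JavaEnumMapper[Suit] here
  def rarity = column[Rarity]("rarity") // ...and a separate JavaEnumMapper[Rarity] here
  def * = (name, suit, rarity)
}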

Strange Loop 2015

I recently returned from the Strange Loop 2015 conference. Many people gave great talks there, and I was glad to hear speakers on a number of interesting topics. One I particularly enjoyed was “Evidence-Oriented Programming” by Andreas Stefik. Stefik has created Quorum, a programming language where features are included only if they prove useful in randomized controlled trials (RCTs). Fascinating! We also learned some tidbits from his research; for example, the RCTs suggest that “for” is an unfortunate keyword choice (compared to, say, “repeat”). He also found that static typing has a slight negative impact on productivity for beginning programmers, but a large positive impact for developers with some experience. So there is now experimental evidence that a static type system does help, at least once developers get past the beginner stage. There were many other great talks, so I recommend this conference for software engineers who want to keep up on current trends and research.

Creating video game sounds

I recently discovered some awesome tools for creating synthesized sounds. The sounds they generate are just like the sort of things you hear in an 8-bit Nintendo game.
First there is BFXR. You can easily get started by clicking the presets on the left. They aren’t “fixed” presets, but rather random distributions, so if you click “Explosion” repeatedly it generates several sounds that resemble explosions. Then you can manipulate the various synthesis parameters. The “Mutation” button jiggles the parameters a little bit. Way cool!
Then I found LabChirp, another fun tool with a similar purpose. LabChirp focuses on layering several synthesized waveforms. It’s not quite as freakishly easy to use as BFXR, but it’s powerful, and it also has a randomizer function to help you get going.
Two great programs for creating new sounds for your next homemade video game!

Java package structure

As I’ve built out several applications in Java and Scala, I’ve spent time musing on and researching different approaches to package structure.

The two main approaches are “package by layer” and “package by feature”. These are exclusive in the sense that you have to pick one of them at the top level of your structure, though you can apply the other at lower levels.

Package by layer

  • com.example.model.feature1
  • com.example.model.feature2
  • com.example.ui.feature1
  • com.example.ui.feature2

Package by feature

  • com.example.feature1.model
  • com.example.feature1.ui
  • com.example.feature2.model
  • com.example.feature2.ui

Like many architecture discussions, this one can get a little religious. We’re talking about a strategy for breaking down a very large and complex problem, and either approach is viable. But I do think that, in general, package by layer is the preferable approach. Here are some of the better references I found:

Package by layer: http://stepaheadsoftware.blogspot.in/2012/06/java-package-name-structure-and.html

Package by feature: http://www.javapractices.com/topic/TopicAction.do?Id=205 and http://shaunchilders.com/node/15

The main reason I prefer package by layer is this: layers are fundamental to most architectures, and package by layer puts the core architectural division at the first level. I agree with the “Directory structure is fundamental to your code” section of the javapractices.com article that the “first strokes” are important. But I disagree that the fundamental division is features rather than layers. That article says, “The fundamental flaw with package-by-layer style, on the other hand, is that it puts implementation details ahead of high level abstractions…” I think this is misleading. Someone looking at the codebase is already interested in the “implementation details” of how it is constructed. I use scare quotes because we are talking about the highest-level architectural decisions, so calling them “details” seems like a hand-waving way of dismissing the structure of the software. Abstractions let us divide a problem into manageable parts. Dividing a problem into mostly-independent, loosely-coupled layers is a common first step in architecture, and I think it is much more fundamental than dividing an application into features.

Features tend to come and go much more quickly than layers. Over time, the most stable part of the software architecture is likely to be the division of layers, so that should be the top-level structural division. The division runs so deep that a feature’s components may not map one-to-one across layers: you may have several UI components per model component, or vice versa. It is also very likely that there are fewer layers than features. And finally, specific layers are common to many software applications: you are very likely to see something like “model” and “UI” layers in most applications, whereas features are completely dependent on the domain. If I’m looking at a project for the first time and know very little about its structure, I am more likely to find what I need if the top-level division is similar to that of other applications. Once I’m more familiar with the application, I probably know something about each of the layers, but I may not have touched some of the features.

So while it may not be appropriate for every application, my tendency is to prefer package by layer, unless I have a compelling reason to do something else.

Java and Mac OS X

Trying to deploy my JavaFX application on Mac has been incredibly frustrating. But this is no accident; it is a direct result of Apple’s software philosophy. Specifically, Apple wants two things: 1) 100% iron-fisted control of the whole Apple-branded technology stack, and 2) to shift 100% of the pain of software problems from users to developers. You can’t argue with the success of this philosophy, in some sense: just check their stock price.

On the other hand, they have chosen to make life much more difficult for developers than necessary—especially Java developers. I’ll give a few concrete examples.

One of the most egregious examples of Apple’s software philosophy is how their products handle errors. The general idea seems to be to hide any remotely scary-sounding error message from the user. Back in the day, Windows was infamous for its “blue screen of death” (BSoD), an iconic example of the frightening error message. Windows errors today are friendlier, though still arcane; at least the user (or a developer) can do a quick web search for an error message and usually find the cause or solution. By contrast, Mac OS X tries to either hide the error and pretend nothing happened (even if it means your application appears to hang for no reason), or worse, give an outright lie as the message. For example, an application that is “improperly” digitally signed may generate a message saying that the application file is corrupted and the user should try downloading it again. Heaven forbid we mention anything that might disturb the user, like an untrusted cryptographic signature.

Signing an application with a well-known, trusted authority using a standard process is not enough for a Mac application. Instead, with the new Gatekeeper features, you need to sign your application using Apple’s process and Apple’s certificate, which they will happily sell you for $99 (no concern for the small-time developer who might not want to shell out $99 for a hobby project). If your application doesn’t jump through all of Apple’s hoops, the user might see anything from silent failure to a claim of a corrupted file.
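For reference, once you do have Apple’s Developer ID certificate, the Apple-side signing step itself is short. Here is a minimal sketch; the identity string is a placeholder for whatever name appears in your keychain:

codesign -s "Developer ID Application: Example Developer" MyApp.app
spctl --assess --type execute MyApp.app   # ask Gatekeeper whether it would accept the app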

Oracle is not blameless either. They should have made it a priority a long time ago to streamline deployment of rich client applications on any supported platform. The Java Web Start user experience is a travesty. Due to security issues, vendors like Apple and Mozilla have gone out of their way to make it difficult to run any Java applications. The result is a terrible user experience for launching Java-based applications, especially on Mac OS X. Worse yet, recent “bug fix” updates to Java 7 actually break some Web Start applications in the name of security.

But all is not lost. Enter JWrapper, a free and awesome solution for deploying Java apps as native applications on Windows, Mac, and Linux. Like other commercial solutions, JWrapper builds native installers that work regardless of any JRE being present. Unlike other solutions, JWrapper doesn’t cost thousands of dollars… in fact, the basic version (with a JWrapper logo) is free, and the paid license is only a few hundred dollars. Rock on!

I still had to work out a few deployment kinks. In particular, I had to include a magical JVM option, -Djavafx.macosx.embedded=true, to get JavaFX to run properly and consistently in a Java 7 + Mac environment. But the build process is relatively simple, and now my application runs on a Mac with a good installation user experience. You plug in your code signing certificate files, and JWrapper signs your application with both the standard jar signature and the Mac signature. I highly recommend JWrapper for developers of Java GUI applications.
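(If you are testing outside of JWrapper, the option just goes on the java command line; within JWrapper it goes in the app’s JVM options. The jar name below is a placeholder.)

java -Djavafx.macosx.embedded=true -jar myApp.jar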

Java Web Start and Code Signing

So Oracle has been steadily tightening the screws on Java security due to all the recent bad press. The latest version of Java suggests that it might block all unsigned applications in the future. I find this “nanny” approach to running code kind of annoying; shouldn’t I decide what code to execute on my own computer? In any case, authors of Java Web Start applications need a cheap way to sign code. Quick searches and StackOverflow suggest a few alternatives. Here I’ll quickly mention two that I tried.

StartSSL is a small business from Israel that wants to end the protection racket known as “certificate authorities” (CAs). They charge a reasonable price for the actual work they do (i.e. validating your identity), then let you create as many certificates as you need. Their SSL certificates are fine, and you can even get those for free. They advertise code/object signing certificates if you pay for identity validation, and several blog and forum posts indicate that you can use their certificates to sign Java applications. Their website says nothing to the contrary. But, unfortunately, you can’t. Long story short: I paid for identity validation, then found out after the fact that StartSSL certificates are not trusted by Java, and the company does not support Java code signing (confirmed by direct email correspondence). Too bad; I really wanted to like them. Ugh.

Back to the drawing board. Comodo offers code signing certificates that are trusted by Java. They also have a bunch of resellers that sell the certificates at “reasonable” prices (reasonable relative to the “brand name” CAs). I bought a certificate through KSoftware. They ask for a bunch of random stuff after you pay, like updating your phone number in some online directory (’cause no identity thief could ever do that?!), but overall it was a fast and good experience. Most importantly, the KSoftware/Comodo certificate works and is relatively cheap, currently less than $100 per year. Some CAs charge five or six times that much.

The certificate gets magically installed in your browser (I used Chrome). You then export the certificate to a .pfx file, making sure to “include the private key”. Java’s keytool can convert the .pfx file to a Java keystore:

keytool -importkeystore -srckeystore theCertYouBoughtIncludingPrivateKey.pfx -srcstoretype pkcs12 -destkeystore yourShinyNewJavaKeystore.jks -deststoretype JKS

Then use the jarsigner tool to sign your jar file during your automated build process. Note that the alias in the keystore will be a long string of letters, numbers, and hyphens.
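For example, a minimal signing sequence might look like the following; the keystore and jar names are placeholders, and embedding the password on the command line is for illustration only:

keytool -list -keystore yourShinyNewJavaKeystore.jks
jarsigner -keystore yourShinyNewJavaKeystore.jks -storepass yourPassword myApp.jar the-long-alias-printed-by-keytool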

Patterns of Software: Tales from the Software Community

I stumbled across Patterns of Software: Tales from the Software Community by Richard Gabriel. (You can download the book as a pdf.) Gabriel approaches software design using principles from architecture, especially the work of Christopher Alexander.

Like other good writing on software architecture, Patterns of Software focuses on the tradeoffs involved. Rather than encouraging always more and stronger abstractions, Gabriel is more pragmatic and discusses what actually seems to work in software projects and the human reasons why.

Alexander described why a detailed “master plan” usually fails to create an aesthetically coherent community, and Gabriel applies this to software: first, we cannot know all the details of how the construction will be used and fit into its environment up front. Those details become apparent only when it is actually used. Software designers recognize this as the principle of “use before reuse”: you can’t create a good abstraction before you have several concrete use cases, because you do not know which details are important. These ideas are related to the “agile” development approach. In addition, the “users” subjected to a master plan (in the case of software, the implementers) often cannot feel ownership in a plan where they have zero influence. The master plan is a monolith, stuck in the past and owned by the original designer. Lack of ownership by programmers, over time, will translate to a disconnect between the whole and the parts of a design.

Gabriel warns against trying to create “perfect” or ultimately “clear” software from the start, a dubious task that leads to software that is not flexible in the ways you need it to be. Why? Because humans cannot conceive all of the details and implications of a master plan all at once. It’s like trying to choose an opening chess move by thinking through all of the possible endgames. Given our limited ability to reason, we have to focus on the things we know and can reason about from the beginning, and on keeping the software understandable so we can change it down the road. Gabriel calls this quality “habitability”: “the characteristic of source code that enables programmers, coders, bug-fixers, and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently”. I like this metaphor for software design. Make your software a place where future programmers (including yourself) can feel at home, so they can take ownership of the “home” and grow it as needed.

Building an application with Scala, sbt, JavaFX, ProGuard, Assembly, and Jarsigner

So I’m building a desktop app in Scala with JavaFX. I need a few things from my build process. At first, I thought the build process would be the easiest part of the application. Boy, was I wrong. I’ll save my griping about sbt for another post. Here I’ll talk about how I actually got it to work.

One of the keys (cough) to using sbt effectively is knowing which “settings” or “tasks” to override. You look in Keys.scala or the relevant source for any plugin you’re using to find the key, then figure out how to change it to what you need. Unfortunately, you really have to understand a bit about how sbt works (i.e. read the docs) to get anything done (on the plus side, there are docs). In my case, I needed:

  1. Build the application with all its dependencies
  2. Obfuscate it to discourage piracy
  3. Stuff it, along with all its dependencies, into a single jar to make deployment less of a pain
  4. Sign the jar to reduce the scariness of the security warnings

ProGuard is the de facto standard for obfuscation in Java-land, and the sbt-assembly plugin (“Assembly”) is commonly used for creating the single jar. We use jarsigner from the JDK to sign the jar.

There are at least two major ProGuard plugins for sbt: sbt/sbt-proguard by Typesafe and xsbt-proguard-plugin. I went back and forth between them several times, trying to get either one to work with the flow described above. I finally settled on Typesafe’s plugin, because the other one doesn’t expose the output of ProGuard in a way that is easy to manipulate from sbt. Typesafe’s plugin does have a “merge” feature that appears to duplicate Assembly, but I couldn’t get it to work (I ran into what appeared to be a temp-directory naming problem), and I did get Assembly to work.
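For reference, both plugins get wired up in project/plugins.sbt. Here is a sketch from that era; the exact coordinates and versions may need adjusting:

addSbtPlugin("com.typesafe.sbt" % "sbt-proguard" % "0.2.2")

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.9.1")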

My final build flow in build.sbt looks like this:

  1. Normal compilation procedure
  2. Call ProGuard, using only the compilation output as the “input jar”; everything else (including the Scala runtime) is a “library jar”. I had to jump through some hoops here, because the sbt plugin authors seem to assume that the normal use case is to shrink/obfuscate the Scala runtime, perhaps for an Android app. But any code you run through ProGuard as input means more nasty ProGuard configuration/warnings/etc., and I couldn’t find a good standard ProGuard config for the latest Scala (2.10.2).
  3. Feed the output of ProGuard plus all of the library dependencies into Assembly
  4. Run the output of Assembly through jarsigner
  5. Fist pump

In case anyone finds it useful, I’m including some of my build.sbt file below. As a bonus, it also includes the trick for connecting a debugger from Eclipse or another IDE.

name := "myApp"

version := "1.0"

scalaVersion := "2.10.2"

libraryDependencies ++= Seq(
  "commons-io" % "commons-io" % "2.4" // ...
)

// Uncomment to see warnings
//scalacOptions ++= Seq("-unchecked", "-deprecation")

// Force sbt to run the application in a separate JVM (needed for JavaFX)
fork := true

// Uncomment to run with debugger: connect to port 5005 from your IDE
//javaOptions in run += "-Xdebug"

//javaOptions in run += "-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005"

//
// Proguard
//

proguardSettings

ProguardKeys.inputs in Proguard <<= exportedProducts in Compile map { _.files }

// Application-specific Proguard config
// dontusemixedcaseclassnames: workaround because Windows files are case-insensitive
ProguardKeys.options in Proguard += """
-dontusemixedcaseclassnames
-keepclassmembers class * { ** MODULE$; }
""" // Add ProGuard config for your application

//
// Assembly
//

assemblySettings

AssemblyKeys.jarName := "myApp.jar" // final output jar name

// Include the obfuscated jar from proguard, but exclude the original unobfuscated files
// Notice the dependency on ProguardKeys.proguard. This is to make sure it actually runs Proguard first;
// otherwise you can get an IOException. You would think ProguardKeys.outputs would be sufficient, but no.
fullClasspath in AssemblyKeys.assembly <<= (fullClasspath in AssemblyKeys.assembly, exportedProducts in Compile,
    ProguardKeys.outputs in Proguard, ProguardKeys.proguard in Proguard) map {
  (cp, unobfuscated, obfuscated, p) =>
    ((cp filter { !unobfuscated.contains(_) }).files ++ obfuscated).classpath
}

// If you have duplicate errors when Assembly does the merge, you need to tell it how to resolve them, for example:
//AssemblyKeys.mergeStrategy in AssemblyKeys.assembly <<= (AssemblyKeys.mergeStrategy in AssemblyKeys.assembly) { (old) =>
//  {
//    case PathList("org", "xmlpull", xs @ _*) => MergeStrategy.first
//    case x => old(x)
//  }
//}

//
// Jarsigner
//

// Here we redefine the "package" task to suck in the Assembly jar and sign it.
Keys.`package` in Compile <<= (Keys.`package` in Compile, AssemblyKeys.assembly, sourceDirectory, crossTarget) map {
  (p, a, s, c) =>
    val command = "jarsigner -storepass myKeystorePassword -keystore \"" + (s / "main/deploy/keystore.jks") + "\" \"" +
      (c / a.getName) + "\" myKeystoreAlias"
    println(command) // just for fun
    command !; // I love that ! executes the command in the shell
    a // return the Assembly jar, which is now signed
}

/////////////////////
// Result: run "package" command to generate and sign the "single jar" file called jarName
/////////////////////

And as it says, running “package” from the sbt shell now does all the ProGuard-Assembly-Jarsigner magic. Whew!

JavaFX / FXML Injection Unit Testing

I have some UI components that use FXML files to define the view (layout), while the controller (input handling) is defined in the component code. In Scala, it looks something like this:

import javafx.fxml.{FXML, FXMLLoader}
import javafx.scene.control.Button
import javafx.scene.layout.BorderPane

class OptionsUi extends BorderPane {
  private val fxml = new FXMLLoader(getClass().getResource("Options.fxml"))
  fxml.setRoot(this)
  fxml.setController(this)
  fxml.load()

  @FXML protected var okButton: Button = _
  // ...
}

The class “OptionsUi” serves as the controller, and the file “Options.fxml” is the view. One problem is that the controls defined in the FXML file need to match up with variables in the Scala class annotated with @FXML. For example, if the FXML file has a Button with the fx:id “okButton”, then the controller class needs the okButton variable listed above. The FXML API injects okButton during fxml.load() using reflection. I have to initialize the variable to something, so I use Scala’s ubiquitous underscore, which in this context means “I don’t care” and is interpreted by Scala as a null reference. (I use “protected” instead of “private”, because private variables generate different Java byte code that had confusing reflection-related interactions with JavaFX and FXML.)

The bad news: if I make a mistake, perhaps adding/removing/renaming a variable in one place and forgetting to do it in the other, I get a NullPointerException when the variable is accessed. Most of the time, NPEs are just a bad dream for Scala users, a memory of an earlier & uglier past, but there’s no way around them when using the FXML API this way.

However, I can write an assertion that checks that okButton is injected with a non-null value. One option is to write such an assertion for every @FXML variable in every controller. But we can go one step further:

test("UIs based on FXML have all @FXML members set from .fxml file") {
javafx.application.Application.launch(classOf[FxmlTestApplication])
}

class FxmlTestApplication extends javafx.application.Application {
def createUis(implicit context: ApplicationContext): List[_] = List(new MainMenu, new OptionsUi)

override def start(primaryStage: Stage) {
try {
val context = new ApplicationContext()
val uis = createUis(context)
for (
ui <- uis;
f <- ui.getclass.getDeclaredFields;
if (f.getDeclaredAnnotations.exists { _.isInstanceOf[FXML] })
) yield {
f.setAccessible(true)
val value = f.get(ui)
assert(value != null, s"${ui.getClass.getName} did not inject @FXML field ${f.getName}")
}
} finally {
// Workaround; can't avoid this
println("Please ignore exception printed by JavaFX: IllegalStateException: Attempt to call defer when toolkit not running")
Platform.exit()
}
}
}

Essentially, my unit test creates some components, looks for any @FXML fields, and checks that they were injected with non-null values. I left the “ApplicationContext” in the example to show that you do have to create the UI components for this method to work, which means instantiating or mocking their dependencies. Also, my test explicitly lists all the UI components in the application; with a little more effort you could have it scan the ClassLoader for relevant/annotated components. Finally, there is an exception printed by JavaFX on Platform.exit(), but it appears to be harmless and does not prevent the test from passing or failing appropriately.

Anyway, this provides the nice features of unit testing (more confidence in refactoring, catching typos much earlier, etc.) with respect to the @FXML-injected variables. It caught a mistake I made literally minutes after I wrote the test.

Back to an NPE-free Scala world!