| date: | 2006-08-29 17:57:26 |
|---|---|
| category: | Technology News |
Since most of these new-fangled dynamic languages are open source, we have a strange, artificial barrier to considering the right solution for our problems. This barrier is the “No Open Source” policy, usually framed as We’ll Use Open Source Over My Dead Body ™. It has to be examined carefully from several points of view.
If we can get past the “dynamic languages are open source” objection, we can move on to more interesting issues.
Taft gets to the issues, eventually, when he lifts up two objections that have a little more substance. Taft, however, ran out of words before getting to the real crux of these objections.
Dynamic Languages Don’t Perform Well.
This is a straw-man argument; it misses the point by attacking something irrelevant. Yes, dynamic languages are not optimized as well by the compiler as static languages are.
However, the point is to leverage this, and partition the architecture into the stable bits, and the flexible bits. I can see a spectrum of mutability based on the nature of the requirements. For relatively immutable requirements, use a static language, compile the heck out of it, and get it optimized so that it runs really fast. For the poorly-defined or inherently flexible requirements, use a dynamic language. Expect change, and implement it appropriately.
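Here’s a rough sketch of that partitioning, in Java. The stable core is compiled; the flexible rule is evaluated through the javax.script embedding API. The engine name and the discount rule are placeholders I invented for illustration, and the whole thing assumes a script engine is actually available on the classpath.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class PricingCore {
    public static void main(String[] args) throws ScriptException {
        // Stable, compiled core: the order total is computed in Java.
        double orderTotal = 120.0;

        // Flexible, poorly-defined requirement: the discount rule lives in a
        // dynamic language and can change without recompiling the core.
        ScriptEngineManager manager = new ScriptEngineManager();
        ScriptEngine engine = manager.getEngineByName("JavaScript");
        engine.put("total", orderTotal);

        // In practice the rule would be loaded from a file or a database.
        Object discount = engine.eval("total > 100 ? total * 0.10 : 0.0");

        System.out.println("Discount: " + discount);
    }
}
```

The compiled core expects change to be rare and cheap to re-optimize; the scripted rule expects change to be constant and cheap to redeploy.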
Dynamic Languages Don’t Prevent Enough Errors.
I’m not completely sold on the value of static type checking in languages like Java, C, C++ (and Ada, for that matter). Here’s what I observe: a programmer has a vague notion that a variable is numeric, but can’t identify the correct numeric subtype, so he spends hours doing an empirical survey of all the numeric types to see which one will compile.
This isn’t design in the usual sense of the word. The software will have the first numeric type which compiled, right, wrong or indifferent. We don’t know if that’s the right type, or merely an acceptable supertype of the right type. Static type checking hasn’t really prevented the eventual error from occurring.
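Here’s a tiny, made-up example of what I mean: the first type that compiled was `int`, the compiler is perfectly happy, and the answer is still wrong.

```java
public class TypeSurvey {
    public static void main(String[] args) {
        // "int" was the first numeric type that compiled -- but it is the
        // wrong type for this calculation, and it overflows silently.
        int unitPriceCents = 2500000;
        int quantity = 1000;

        int total = unitPriceCents * quantity;
        System.out.println(total);            // prints -1794967296

        // The "right" type was a design decision the type checker never made
        // for us; it only verified that the declarations were consistent.
        long correctTotal = (long) unitPriceCents * quantity;
        System.out.println(correctTotal);     // prints 2500000000
    }
}
```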
Another scenario, less common but more problematic, is the Type-Hierarchy-Meta-Framework Most General Declaration ™, where we have created a hierarchy of interfaces, abstract superclasses, and other declarative malarkey to try to unify two relatively dissimilar things into a common structure. Perhaps they should have used a Façade design, but instead they aimed for a too-complex static type declaration.
This is a case of too much design, rather than too little. Either way, static type checking has introduced problems rather than solved them.
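For contrast, here’s the kind of small Façade I have in mind: the two back ends stay dissimilar, and one thin class gives clients a single entry point. The services and their methods are entirely made up for the sketch.

```java
// Two dissimilar things that a meta-framework would force under a common
// interface hierarchy. Here they simply stay dissimilar.
class LegacyInvoiceSystem {
    String fetchInvoiceText(int invoiceId) { return "invoice #" + invoiceId; }
}

class ModernOrderService {
    double lookupOrderTotal(String orderKey) { return 99.95; }
}

// The Facade gives clients one simple entry point without pretending the
// two back ends share a type.
class BillingFacade {
    private final LegacyInvoiceSystem invoices = new LegacyInvoiceSystem();
    private final ModernOrderService orders = new ModernOrderService();

    String billingSummary(int invoiceId, String orderKey) {
        return invoices.fetchInvoiceText(invoiceId)
                + " / total " + orders.lookupOrderTotal(orderKey);
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        System.out.println(new BillingFacade().billingSummary(42, "A-17"));
    }
}
```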
Just to beat this topic to death, our SQL data bindings are always dynamic. After all, there’s no compile-time relationship between our statically compiled Java application and our relational database schema. The Java type checking, while internally consistent, doesn’t have to match the database. Indeed, seemingly innocuous DB changes can lead to type compatibility problems and run-time crashes in Java.
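Here’s a sketch of the kind of run-time surprise I mean, using plain JDBC. The connection URL, table, and column are placeholders, and it assumes a suitable JDBC driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class BindingDrift {
    public static void main(String[] args) throws SQLException {
        // The Java side compiled happily against last month's schema.
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:demo", "sa", "");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT qty FROM line_items");
        while (rs.next()) {
            // If a DBA renames or drops QTY, or changes it to something
            // getInt() can't convert, the compiler never hears about it;
            // the failure shows up right here as a run-time SQLException.
            int qty = rs.getInt("qty");
            System.out.println(qty);
        }
        conn.close();
    }
}
```

The binding between the query text and the Java types is checked only when the statement actually runs, which is exactly the dynamic behavior the “no dynamic languages” crowd claims to be avoiding.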
Introducing Dynamic Languages.
I think the upshot of Taft’s article is the following pieces of advice:
I think the most important thing we can do is to look at a dynamic language as the shell on steroids.