In this post I want to focus on features of statically typed languages which I consider interesting and useful, but still uncommon and rare among the most popular languages in use today. First of all, let's try to answer the question: why static typing? We need to realize which advantages of static typing we embrace, so we can look for further improvements.
- Performance – of course this is the first notable difference. Since static typing gives the compiler much more information about the generated program, heavier optimizations can be done at compile time and there is less need for type checking at runtime.
- Better tooling – if our compiler can reason more about our code, so can the other tools we use in application development.
- Program correctness – we don't have to be constantly aware of all of the application's functionality. Because we have an expert system in the form of the compiler, we can maintain larger code bases.
Within the scope of this post I want to point out features which may enhance programmer productivity, application logic correctness and overall code design.
Type unions and intersections
Languages: Ceylon
This feature takes the union and intersection operations as we know them from set theory and applies them to the types in our language. Type theory and set theory are very closely related, and both can benefit from many common theorems (for example, the roots of Hindley-Milner type inference lie in set theory).
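As a quick illustration, here is a minimal TypeScript sketch of both operations (the `Named`/`Aged` types are just hypothetical examples): a union accepts either shape, an intersection requires both at once.

```typescript
// Union: a value of either shape. Intersection: a value with all members of both.
type Named = { name: string };
type Aged = { age: number };

type NamedOrAged = Named | Aged;    // union: at least one of the two shapes
type NamedAndAged = Named & Aged;   // intersection: both shapes at once

const person: NamedAndAged = { name: "Grace", age: 85 };
const partial: NamedOrAged = { name: "Alan" };
```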
Type unions seem to be a more natural way to represent some relations than inheritance. Example: think about a type which could represent all numbers, let's call it Number. From a type-theory perspective the best fit would be (literally) the union of integer and floating point numbers, which gives us a representation of most of the domain. If this isn't sufficient, it's just as easy to extend our Number type with a union of more than two types. Now try to do it only by using inheritance or composition…
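A rough TypeScript analogue of this idea, assuming a compilation target that supports bigint, is a union of the two numeric representations the language already has, with no shared base class or wrapper hierarchy:

```typescript
// A union covering arbitrary-precision integers and IEEE-754 floating point numbers.
type Num = bigint | number;

function double(x: Num): Num {
  // The compiler forces us to handle both members of the union.
  return typeof x === "bigint" ? x * 2n : x * 2;
}

console.log(double(21));   // 42
console.log(double(21n));  // 42n
```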
Type unions allow us to do the most important thing that most non-functional languages do wrong – proper null handling. In Ceylon, Null has its own type (although it's optimized away during compilation) and is subject to all of the rules mentioned above. All reference variables and fields are non-nullable by default – thanks to unions this behavior can be changed by using the notation T|Null, or its simplified form T?, in the variable type declaration. This is actually a good thing, since it's what we expect from variables in our code 90% of the time. So why do we have to code under constant risk of null exceptions (unless we perform explicit checks), if we can tell the compiler to ensure that for us before our code even runs? On the other hand, there are nullable value types, which are also quite useful at times. Many languages let us use some kind of built-in nullable wrapper, surrounded with specific compiler magic for programmer convenience (e.g. C#). But in my opinion this is not the right way of doing things. If the compiler is allowed to use some nice tricks, then we as programmers should also be able to do so.
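TypeScript (with strictNullChecks enabled) handles nullability in a similar union-based way, so a minimal sketch looks like this; the `User`/`findUser` names are hypothetical:

```typescript
type User = { name: string };

// The return type makes the possibility of "no result" explicit, much like Ceylon's T|Null / T?.
function findUser(id: number): User | null {
  return id === 1 ? { name: "Ada" } : null;
}

const user = findUser(2);
// console.log(user.name);   // compile-time error: 'user' is possibly 'null'
if (user !== null) {
  console.log(user.name);    // ok: the compiler has narrowed User | null to User
}
```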
Type constraints / Value dependent types
Languages: Ada, Eiffel, (experimentally through extensions) all ML-family languages including Haskell, F*, Idris
Since we already have unions and intersections, it would be nice to allow other operations to be performed directly on types. What about conditions?
Let's say I've got an int type, so I can verify at compile time that all of the necessary values are restricted to be integers. Ok. Let's go further. What if I want to narrow those integers to omit negative numbers? This is actually still easy in most languages (Java, you have one job…), since they already have some kind of unsigned int type. But what if I want to go a step further: I want to ensure that my integers are only positive numbers, meaning they're neither negative nor 0. It should be easy, but it's not. Almost all languages fail at this point, requiring the programmer to manually check that condition in code at runtime. And this is where type constraints come into play.
They allow the programmer to define specific constraints on the values a type refers to. Let me make this precise with some pseudocode examples:
```
type Positive = x:int where x > 0
type Even = x:int where x % 2 == 0
type IPv4 = x:unsigned byte[] where x.Length == 4
```
So why do I consider this nice and useful, when some languages (e.g. C/C++, Go) already offer fixed-length arrays, and an equivalent of the example Even type could be created as a specialized class? First of all, it's easier, since you have one way to solve multiple problems. It is also fast to implement (think about implementing and working with a custom Even class in a language such as Java), which makes it more likely to be used in real life. Next, it's all available at the compiler level, so you can actually verify your code before running it and allow the compiler further optimizations. Constraints could also become more powerful when combined with the set-like type operations described in the previous paragraph. And finally, refinement types could be used to implement Design by Contract principles in your code, which is a nice feature by itself.
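For contrast, here is a hedged sketch of how a refinement like Positive is typically approximated in a language without value-dependent types (TypeScript here, using a hypothetical "branded" type and smart constructor): the constraint is visible in the signatures, but it is only enforced by a runtime check rather than by the compiler.

```typescript
// A branded type: structurally a number, but only obtainable through positive().
type Positive = number & { readonly __brand: "Positive" };

function positive(x: number): Positive {
  if (x <= 0) throw new RangeError(`${x} is not positive`);
  return x as Positive;
}

function reciprocal(x: Positive): number {
  return 1 / x; // safe by construction: zero cannot reach this point
}

reciprocal(positive(4));
// reciprocal(-4);   // compile-time error: number is not assignable to Positive
// positive(-4);     // type-checks, but the constraint is only enforced at runtime
```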
Implicit interfaces
Languages: Go, TypeScript
This one is quite controversial, especially when compared to the previously mentioned features. Many programmers consider implicit interfaces potentially unsafe, for example because they weaken the type safety of static typing by creating a risk of accidental interface implementation. Many examples are provided to support this thesis, but I found that most of them are the result of bad design decisions and broken conventions, or that explicit interfaces are simply not a solution to the presented problems either.
So in what cases could implicit interfaces be useful:
- Just like monkey patching, they are a way to provide additional features for existing types we have no access to. This may be especially useful for patching holes in stdlib implementations (no matter how hard you try, you will never foresee all of them). Example: the lack of an IParseable interface in the .NET common lib.
- They make it easier to work with various external libs, minimizing the need to construct adapters to bypass incompatibilities.
- They also make it very simple to create mocks.
They also encourage some good practices when working with interfaces. Since they're implicit, you will naturally try to keep them as simple and decoupled as possible. It's easy to maintain an implementation of an interface with dozens of methods when it is explicit; it's harder to do so implicitly. And since the purpose of an interface is to abstract and describe a contract between parts of your application, interfaces with numerous methods are often symptoms of God Objects in your code. On the other hand, it's easier to create a class implementing dozens of interfaces when they're implicit rather than explicit. This also promotes decoupling.
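TypeScript's structural typing gives a compact illustration of both the implicit satisfaction and the easy mocking mentioned above; the `Logger`/`FileLogger` names are just hypothetical examples:

```typescript
interface Logger {
  log(message: string): void;
}

class FileLogger {                      // note: no "implements Logger" clause
  log(message: string): void {
    // hypothetical file-writing logic, reduced to console output here
    console.log(`[file] ${message}`);
  }
}

function run(task: string, logger: Logger): void {
  logger.log(`running ${task}`);
}

run("backup", new FileLogger());                         // accepted implicitly
run("test", { log: (m) => console.log(`[mock] ${m}`) }); // trivial inline mock
```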
The great absentees
I deliberately didn't mention some of the most fundamental features which benefit all statically typed languages, such as types as first-class citizens, type inference or language macros/templates. The reason is that these things are well known and more or less present in many of the popular languages we have today. My goal was to describe the potential advantages of powerful, yet less known features, in the hope that they will be encountered and used more often in the future.