Type Inference

Deeply typed programming languages

• languages

Uncle Bob did it again. Some months ago he wrote Type Wars, a post arguing that static type systems are not really needed if you do TDD. Now he’s back. In his latest post, The Dark Path (I know: these titles sound like Star Wars sequels), he brings new weapons for the dynamic-language enthusiasts (argument from authority included) to tell everyone else that all these static type checks are useless.

In his new article, Bob tells us he has been in the playground with Swift and Kotlin. And he didn’t like the experience. In his opinion, these languages are going too far (or too deep) in introducing strong typing features. And (guess what?)… who needs such strong typing when you have TDD?

After Type Wars, I wrote my own post explaining why Uncle Bob was wrong. And now I feel the irresistible need to explain why he’s wrong again.

Algebraic Types and Rudimentary Coding

• languages

This week I tweeted a fragment of code showing how to declare an algebraic type in Scala to enumerate the operating systems supported by your application. Surprisingly, there were some negative replies to this practice. Some people pointed out that it was over-engineering, arguing the code should be simpler (IMHO, rudimentary).

Twitter is probably the worst format to discuss the proposal and explain why this is the right direction. I tried by email with no better results. Let’s try with a blog post, at least to leave a record of why I do (and will continue doing) things like that.
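For readers who don’t follow me on Twitter, the idea can be sketched in Java as well. This is not the tweeted code (that one used a Scala sealed trait), just a minimal analogue with hypothetical names, OsDemo and parse:

```java
// A hypothetical Java analogue of the tweeted Scala sealed trait:
// a closed set of supported operating systems instead of raw strings.
public class OsDemo {
    enum OperatingSystem { LINUX, MACOS, WINDOWS }

    // Invalid names fail loudly in exactly one place, and a switch
    // over the enum covers every supported system explicitly.
    static OperatingSystem parse(String name) {
        switch (name.toLowerCase()) {
            case "linux":   return OperatingSystem.LINUX;
            case "macos":   return OperatingSystem.MACOS;
            case "windows": return OperatingSystem.WINDOWS;
            default:
                throw new IllegalArgumentException("Unsupported OS: " + name);
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("Linux")); // prints LINUX
    }
}
```

The point is not the enum itself but the closed set: the compiler, not a code review, is what rejects an unsupported operating system.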

TDD vs Static Typing

• typing

Yesterday Uncle Bob posted an article about Type Wars. In summary, the post describes his own experience with different type systems over the years: how types appeared in mainstream languages and how they competed, from the early days to the dynamic vs. static typing debate of our days.

There is an interesting introduction in the post about what led Uncle Bob to consider that type systems provide a real benefit. In the early days, as an assembly coder writing software for ancient and simple hardware architectures, he found types pointless. As systems became more sophisticated (and so did their software), the benefits of type systems became obvious. Large codebases cannot be written without the aid of a type system.

In spite of this introduction, many people have interpreted that Uncle Bob is against static typing. Mainly because he affirmed that the most important thing to guarantee software quality is TDD.

Well. There is nothing more daring for a software developer than saying that Uncle Bob is not completely right. I know it. This is just a modest attempt to explain the reasons I have to consider that TDD is not a replacement for static typing. Just four facts about TDD that Uncle Bob left unsaid.

Covariance and Contravariance

• typing

Some weeks ago I gave an introductory talk on Scala for Java programmers. At some point I introduced algebraic data types in Scala using sealed traits, and I thought it was a nice moment to show how the language supports covariance and contravariance for generic types.

The audience didn’t agree it was a nice moment. They weren’t familiar with these terms at all, and we didn’t have enough time to discuss the point in depth. So I decided it was a very good topic to discuss on typeinference.com.

Any programmer is familiar with the subtyping relationship between two types. Let’s say we have Base and Derived classes, so that Derived inherits (or extends) Base, as in:

class Base { ... }
class Derived extends Base { ... }

The inheritance mechanism conveys a subtyping relationship of Derived with respect to Base. In other words, any instance of Derived is also an instance of Base. Because of that, the following statements are valid in Java.

Derived foo = new Derived(...);
Base bar = foo;

All right. Now let’s say we have List<Base> and List<Derived> instead. Is there any subtyping relationship between them? Is the following code valid?

List<Derived> foo = new ArrayList<Derived>();
List<Base> bar = foo;

I’m sure your brain says “Sure! Why not?” but your instinct prevents you from answering. For sure, things get complicated with generics, and this kind of subtyping relationship has non-trivial consequences.
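In fact, the Java compiler rejects that second assignment. To build an intuition of why, we can look at Java arrays, which the language does treat covariantly; the price is a check deferred to runtime. Here is a minimal sketch, with a hypothetical second subclass Other added to make the problem visible:

```java
public class CovarianceDemo {
    static class Base { }
    static class Derived extends Base { }
    static class Other extends Base { }

    // Returns true if storing an Other through a Base[] view of a
    // Derived[] fails at runtime, as it must to preserve type safety.
    static boolean arrayCovarianceIsUnsafe() {
        Derived[] foo = new Derived[1];
        Base[] bar = foo;             // allowed: Java arrays are covariant
        try {
            bar[0] = new Other();     // would put an Other inside foo!
            return false;
        } catch (ArrayStoreException e) {
            return true;              // the JVM has to check every store
        }
    }

    public static void main(String[] args) {
        // List<Base> bar = new ArrayList<Derived>(); // does not compile
        System.out.println(arrayCovarianceIsUnsafe()); // prints true
    }
}
```

Generics simply move that failure from runtime to compile time: by refusing the List<Derived>-to-List<Base> assignment, the compiler makes the unsafe store impossible to write.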

Type Inference

• typing

In a blog titled Type Inference, it’s mandatory to talk about type inference. For some time, I assumed the concept was well known to any programmer. That’s why I thought it was a great name for a blog about programming. But recently I realized I was wrong. Not about the blog name, which is absolutely cool. I was wrong about how widespread the concept of type inference is.

In the modern software industry, there are many developers tied to dynamically typed languages: Python, JavaScript, Ruby, Node.js, or any other trend that is cool just because everybody says so (fashion). When you show your doubts about the convenience of coding large-scale systems using scripting languages, they look at you as if you were a time traveler coming from the last century. After that, most of them say: but do you really want to code in such verbose languages as Java?

After blinking twice, you tell them about type inference. And then you realize they are not familiar with the term at all. After a short introduction to the concept, you point out that their idea of static typing does come from the last century. Every time you talk about dynamic typing, I’m not thinking of BASIC. So please do not equate static typing with terrible inventions like Java.
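Even Java itself has gained some inference over the years, which makes for a tiny demo of the concept. A minimal sketch with hypothetical names; the var form assumes Java 10 or later:

```java
import java.util.ArrayList;

public class InferenceDemo {
    // The diamond (Java 7) and var (Java 10) both let the compiler
    // deduce types that critics assume must always be spelled out.
    static int inferredLength() {
        var names = new ArrayList<String>(); // inferred: ArrayList<String>
        names.add("Ada");
        var first = names.get(0);            // inferred: String
        return first.length();               // still fully statically typed
    }

    public static void main(String[] args) {
        System.out.println(inferredLength()); // prints 3
    }
}
```

Note that nothing here is dynamic: first has a precise static type, String, and calling a method it lacks would be rejected at compile time. The annotations are gone, not the types.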

Anti-OOP Design Patterns

• oop

Any good programmer is familiar with design patterns. Even not-so-good ones have heard about singleton, strategy, decorator, observer, etc. What is not so common is to find programmers who are aware of the reasons that lead them to use such patterns. For most of us, patterns are just a tool. Something that works. We don’t even try to reason about where the problem they solve comes from. We apply them and continue coding.

But if you dare to break the conformism and reason about our good friends the design patterns, you might realise they are there to fight a surprising enemy: object-oriented programming itself. Don’t believe me? Let’s analyse some popular design patterns from this perspective.
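As a taste of the argument, consider the strategy pattern. In a language with function values, the whole interface-plus-implementations ceremony collapses into a plain function. A sketch with hypothetical names (PricingStrategy, BulkDiscount), not taken from the posts that follow:

```java
import java.util.function.IntBinaryOperator;

public class StrategyDemo {
    // Classic OOP strategy: an interface plus one class per behaviour.
    interface PricingStrategy { int price(int base, int qty); }

    static class BulkDiscount implements PricingStrategy {
        public int price(int base, int qty) {
            // 10% off for orders of ten or more units
            return qty >= 10 ? base * qty * 90 / 100 : base * qty;
        }
    }

    public static void main(String[] args) {
        PricingStrategy oop = new BulkDiscount();

        // The same behaviour as a plain function value: no interface,
        // no class, just the logic itself passed around.
        IntBinaryOperator fn =
            (base, qty) -> qty >= 10 ? base * qty * 90 / 100 : base * qty;

        System.out.println(oop.price(5, 10));     // prints 45
        System.out.println(fn.applyAsInt(5, 10)); // prints 45
    }
}
```

The pattern exists to smuggle a function into a language that, for a long time, only let you pass objects around. That is the sense in which it fights OOP rather than celebrates it.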

Rust Data Ownership, Part I

• Rust

Rust is one of the most interesting things I discovered this year. After a long time programming in C++, I ran into hundreds of situations where its memory management model drove me mad. Even using smart pointers, it is really hard to reason about who has the ownership of every object. If you have ever written code using Boost.Asio with C++11 lambdas, you know what I mean.

I also tried some other alternatives to C++. D was promising, but its garbage collector leaves no place for RAII (one of my favorite idioms). Go uses a garbage collector as well, and it also has poor integration with legacy systems. So I continued programming in C++ with a sigh of resignation.

One day a friend of mine told me about Rust. I took a look with caution. I read the book, coded some examples. The more I learned, the more convinced I was that I had something really great in front of me. Something that could lead me to stop writing any further line of code in C++.

I’ve been wanting for some time to write about how Rust solves the memory management problem in a very clever way that no other mainstream language has ever tried. And I’d like to do it by comparing this mechanism with the one provided by C++11. There is a lot to discuss! This is the first part of a series of posts on the topic. Today we are gonna see a basic introduction to data ownership and move semantics.