I was just reading Chenglie Hu’s important article in the Communications of the ACM under the title "Dataless Objects Considered Harmful" (ACM membership required to download.) I feel this is significant for two reasons.
First, on a practical level, I’ve also noticed that recently graduated programmers are not well educated in the classical theory of modular programming as described by Parnas, among others. Too many programmers seem to think that merely by writing classes and methods, and by the appropriate use of private and public attributes, they will make the program automatically modular. In fact, the opposite seems to be the case. Lacking an independent guiding principle for organization, the classes and public attributes become expressions of the programmers’ whims, mere packing crates for the required functionality. The OO programming languages, most notably Java but unfortunately also C#, do little to alleviate the situation: they prohibit the use of procedures outside of classes and provide no information-hiding boundary other than the class, which hides information but is also an implementation concept. Here we have again the old programming-language conundrum: to get the benefits of a specific abstraction, say information hiding, you have to buy into a specific implementation at the same time. Of course the whole idea of abstraction should have been to leave room for different implementations, that is, to factor complexity.
The other reason is that the article recalls the seminal letter to the editor from Dijkstra: “Go To statement considered harmful” which created a firestorm of controversy when it was published in 1968. I was maintaining an Algol compiler at the time for the CDC 6400 so it was not a big shock for me, but I recall that the Fortran users were quite upset. We now “know” that Dijkstra was advocating structured programming and that his objection had to do with the difficulty of proving correctness of programs with unstructured Go To statements. But re-reading the original text, I think that Dijkstra’s letter was quite obscure and I can now sympathize with the initial incomprehension of its readers. Yet the letter started a very important movement to improve Software Engineering.
In presentations on Intentional Software, I often ask the audience: why should we think, in general, that “X is considered harmful” for any programming language feature X? At this time we have X = “Go To statement” and X = “Dataless Objects” (for more examples see Hello world considered harmful or even Aspect-Oriented Programming Considered Harmful; for a rather harsh critique of the question itself, see here). I think the answer has to be situational in any case: it is not that an abstraction is harmful per se; rather, certain uses of an abstraction are counterproductive.
I hinted at the solution in my other posts here and here. To decide whether something is harmful or useful, we have to refer to the problem being solved. If we do not know what the problem is, a tool is devoid of meaning (have you ever had that feeling when looking at strange tools in a cabinetmaker’s or bookbinder’s shop?) Hu also hints at this in his article:
“[…the undesirable result is that there will be] many intermediate variables that don’t correspond to separate entities of the application domain” (emphasis added)
So to generalize the “harmfulness” theory, we need to refer to the degrees of freedom in the problem statement, in the domain intention. A programming feature is harmful in proportion to the excess of its degrees of freedom over those of the domain intention it is used to express. By degrees of freedom we mean the potential parametrizations (arguments, properties, attributes, or parameters) of the abstraction and the gamut of their values. The latter is determined by the parameter type: the range of possible parameter values can be small, as for enumerated types, or very large, as when the “parameter” is a statement list in a loop. Harmfulness is related to excess degrees of freedom: more parameters than necessary, or larger parameter types than necessary for the purposes of the problem intention.
The theory seems to work with the two key examples above. For example, a Go To that is used with a label parameter to implement the end of a loop has more degrees of freedom than necessary; it could go to any place in the scope when it needs to go to just a single, well-defined place, namely the test for loop completion. Similarly, the dataless object has too many degrees of freedom when it is encoding a modular unit and its procedures. The object type and the object instance are both superfluous for the purpose.
It is worth restating what is wrong and what is right with extra degrees of freedom.
The wrongs are twofold: work and errors. For any parameter the programmer has to choose a value – that is work. If the parameter is “extra”, or if the parameter type accepts a greater infinity (powerset) of values relative to the problem, this work does not get easier; in fact, if anything, it gets harder. Lacking proper motivation, the naming of extra quantities (such as the arbitrary label at the top of the loop) can be particularly vexing, as we discussed for loop constructs in an earlier post. The second problem, the errors, comes straight from communication theory: the larger the space of encoding, the larger the error rate. This is why voice recognition systems with limited vocabularies work much better than those with unlimited vocabularies. This is why the Palm Pilot’s Graffiti has lower error rates than other, more general, handwriting recognition systems, and keyboards have lower error rates still. Similarly, the error rate of Go To statements that encode loops will be greater than the error rate of a structured statement – which is, by the way, still not zero.
Here are a few things that can go wrong with a structured “while” statement. Consider the code:

    while (i < 0);
    {
        f(i);
        i++;
    }

The result is probably not what was intended due to the extra semicolon in the first line. What would be more intentional here? Suppose we could say:
“execute f(j) for all j in the interval starting with i, up to but not including 0 – in other words, in a half-closed, half-open interval; and yes, we do want to use the standard mathematical notation [a,b) for such an interval”
Using this abstraction there would be fewer degrees of freedom, there would be no possibility for the above error, and we would also eliminate the possibility of other errors in this simple while loop, like these:
i<0 or i<=0?
i++ or ++i?
i written in the comparison and some other variable written by mistake in the increment.
Of course this interval construct may not represent the absolute minimum in degrees of freedom; to go further we would have to know more about the problem being solved. However, the example shows that even with very local knowledge we can substantially reduce work and errors if we concentrate on the degrees of freedom.
Why do we still like degrees of freedom? Why were the Fortran programmers upset when the future of their GOTOs was threatened? Strangely enough, it had to do with the lack of choice.
What I mean by this is that if we have a limited choice of tools, for whatever reason, we will rationally choose the most general one so that we can cover a greater set of problems. So the Swiss Army knife is the choice of mountain climbers. It would not be the choice of the master bookbinder, not because the multi-purpose tool would not work in bookbinding, but because its degrees of freedom would mostly just get in the way: the right blade would have to be opened each time, and it could also cause “binding errors” when the general-purpose blade slipped while being used, say, for burnishing.
So the better question is how come we programmers have a limited choice of tools at the level of language features? After all we are not limited in creating procedure contents, names, comments, icons, error messages, and so on. Since we are more like master bookbinders than mountain climbers, we should be better equipped.
Footnote: Actually, even within these normal functions of programming languages we have some limitations: in comments I have always wanted to include sketches and diagrams, but I cannot. Names are also quite limited: the 50-year-old rule of “only the 26 upper-case letters in names” is breaking down only at the glacial speed of roughly one extra character per year. In programming we can’t have more than one name for something (name or cite or point to one thing or person that has only one name), and things in programs whose names somehow become similar to other names around them can lose their identity – another rule that would be utterly impractical in real life. This all may sound like griping, but in fact, as we are used to restrictions in the small things, we accept the restrictions in the large.
This is further illustrated in our earlier post about the long tail of programming languages. The languages at the top were all general-purpose, while we found more domain-specific languages further down. Today we live in the reality that you have to pick one language and you are stuck with the abstractions of that language. Tuning or extending a language to get closer to the domain we solve problems in is not yet possible. And of course mixing languages leads to all kinds of complications, because our tools do not work well across languages.
Just as we need a language and a run-time to create the procedures that we want, we could have a language and a run-time to create language features. Unfortunately, languages for syntax-directed compilation (such as Yacc) did not solve this problem: they described parsers, not language features; in effect, they had “too many degrees of freedom”, which made them very difficult to use. The main difficulty, of course, was the need to design the syntax itself – remember, what we set out to create was a language feature, not a syntax. The road toward creating solutions to the meta-problem is truly an arduous one. But until we increase the choice of language features to match the greater variation in the application domains we face, more and more of the features we now have will, at one time or another, have to be “considered harmful” as the degrees of freedom of the features exceed the degrees of freedom in the domains.