A case for 80-rule


The [in]famous 80-character line length limit, recommended and hated by so many, has caused thousands of man-hours of endless debate, lost productivity and missed deadlines. This text is yet another attempt to reason with the opponents by exploring the history of characters-per-line limits and how they incidentally correlate with what has been known in typography for decades: line length affects comprehension effectiveness, and especially so when we’re talking about code.

History

Go has no line length limit. Don’t worry about overflowing a punched card. If a line feels too long, wrap it and indent with an extra tab.

  • Effective Go

This quote is a recipe for disaster.

It encourages developers to use their feelings and emotions to make decisions that affect the productivity and effectiveness of their colleagues, who will be trying to read and comprehend the written code in a potentially entirely different state of mind - even if the next person reading it turns out to be the author themselves.

Moreover, it completely misses the important distinction between punched cards and terminals. A punched card contained an encoding of the data, while a terminal displayed the data itself. Here’s a picture of a punched card using thirty-six holes spanning ten lines to encode a single line of 15 characters:

Fortran punched card

It is true that electronic terminals, immediately succeeding punched card readers, were designed to match the number of columns on punched cards, but that doesn’t mean the line length limit originating from this circumstance should be dismissed as a historical wart with no relevance in modern times. In fact, it points us to a more important question, one that very much deserves a definitive answer: is there an optimal line length limit, and if so, what is it?

Optimal line length limit

Of course it’s not possible to answer any question about optimality without clarifying what is being optimized. After all, infinity is the only correct answer when one is optimizing to fit any text on a single line. For the purpose of exploring the question and some historical data, let’s assume that reading comprehension in general is being optimized. This assumption helps in two ways: one can consult the entire research body of typography for knowledge on the topic, and the productivity of writing is not compromised, since developers have plenty of tools that help them stay within any desired line length limit automatically.
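
Staying within a limit really is cheap for the writer. As a minimal sketch, assuming an editor that honors EditorConfig (support for the max_line_length property varies by editor and plugin, so this is illustrative rather than universal):

# .editorconfig: ask the editor to flag or wrap lines beyond 80 columns
[*]
max_line_length = 80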

As it turns out, reading comprehension is a very well researched topic in typography, partly because who doesn’t want their text to be well understood without unnecessary cognitive overhead and partly because typography is a business and business always optimizes the hell out of everything.

So what does typography have to say on line length limits? In typography, texts are organized in columns (also called measures), and for best legibility it is recommended that a column be wide enough to contain roughly 60 characters [1]. Others recommend 45-90 characters; some call a 65-character measure perfect. James Craig explains the reason for it in “Designing with Type”:

Reading a long line of type causes fatigue: the reader must move his head at the end of each line and search for the beginning of the next line.… Too short a line breaks up words or phrases that are generally read as a unit.

  • James Craig, Designing with Type

There’s plenty of reading on the topic with good advice [2][3][4], but it’s fair to say that among people whose job it is to make readers comprehend text well, the question of whether an optimal line length limit exists is a settled one. The research done by the Nielsen Norman Group [5] is probably the most enlightening piece of evidence for why limiting line length is important: when scanning content, readers spend most of their horizontal attention span on very few lines and skip over the rest, preferring vertical eye movements close to the left edge of the column - the so-called F-shaped reading pattern.

The critical word in the previous paragraph was scanning.

Reading vs scanning

Code is not prose, and the characteristics inherent to how people read code - with the purpose of understanding its function - make everything said about optimal line length in the previous section much more relevant. Prose is read sequentially and entirely, at least in theory. Revisiting someone’s code is an entirely different process, because of what code is.

Code is two-dimensional, and it is structured heavily around this dimensionality. All the programming languages used for the absolute majority of modern software have syntax that sooner or later gets transformed into an AST - an Abstract Syntax Tree. This tree directs compilers or interpreters to execute operations in the correct order and within the correct context. Understanding code means understanding the order and context of all the operations.
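
To make this concrete, here is a minimal sketch (my illustration, not from any of the cited sources) using Go’s own go/parser and go/ast packages to show that even a single line of code is, structurally, a tree:

package main

import (
	"go/ast"
	"go/parser"
	"go/token"
	"log"
)

func main() {
	// A one-line expression...
	expr, err := parser.ParseExpr(`(a + b) * f(c)`)
	if err != nil {
		log.Fatal(err)
	}
	// ...parses into a nested tree of operations; the indentation of
	// the printed AST is exactly the "context" a reader has to rebuild.
	ast.Print(token.NewFileSet(), expr)
}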

The two-dimensional nature of code is not a coincidence or some artifact of academia influencing the industry; it’s a simple consequence of the fact that human vision is also two-dimensional. Humans don’t read sequentially, word by word; instead, eyesight jumps and perceives text in chunks, and those chunks are two-dimensional as well. In a long line of code, this verticality of perception not only doesn’t help, it also “blows the cache”, forcing the reader to constantly and actively ignore what’s above and below, because it doesn’t belong in the context they are trying to comprehend. On the other hand, when context is organized vertically in short snippets of text, all related to each other in every dimension, the reader’s ability to pick up the signal shoots up.

To illustrate, consider a non-trivial example from Clojure’s source code (metadata and comments removed for brevity), changed to stay within a 160 characters per line limit:

(defn join
  ([xrel yrel]
   (if (and (seq xrel) (seq yrel))
     (let [ks (intersection (set (keys (first xrel))) (set (keys (first yrel))))
           [r s] (if (<= (count xrel) (count yrel)) [xrel yrel] [yrel xrel])
           idx (index r ks)]
       (reduce (fn [ret x] (let [found (idx (select-keys x ks))] (if found (reduce #(conj %1 (merge %2 x)) ret found) ret))) #{} s))
     #{}))
  ([xrel yrel km] ;arbitrary key mapping
   (let [[r s k] (if (<= (count xrel) (count yrel)) [xrel yrel (map-invert km)] [yrel xrel km])
         idx (index r (vals k))]
     (reduce (fn [ret x] (let [found (idx (rename-keys (select-keys x (keys k)) k))] (if found (reduce #(conj %1 (merge %2 x)) ret found) ret))) #{} s))))

And now the same with 80 characters per line:

(defn join
  "When passed 2 rels, returns the rel corresponding to the natural
  join. When passed an additional keymap, joins on the corresponding
  keys."
  {:added "1.0"}
  ([xrel yrel] ;natural join
   (if (and (seq xrel) (seq yrel))
     (let [ks (intersection (set (keys (first xrel))) (set (keys (first yrel))))
           [r s] (if (<= (count xrel) (count yrel))
                   [xrel yrel]
                   [yrel xrel])
           idx (index r ks)]
       (reduce (fn [ret x]
                 (let [found (idx (select-keys x ks))]
                   (if found
                     (reduce #(conj %1 (merge %2 x)) ret found)
                     ret)))
               #{} s))
     #{}))
  ([xrel yrel km] ;arbitrary key mapping
   (let [[r s k] (if (<= (count xrel) (count yrel))
                   [xrel yrel (map-invert km)]
                   [yrel xrel km])
         idx (index r (vals k))]
     (reduce (fn [ret x]
               (let [found (idx (rename-keys (select-keys x (keys k)) k))]
                 (if found
                   (reduce #(conj %1 (merge %2 x)) ret found)
                   ret)))
             #{} s))))

The second example is obviously easier to follow, even if one doesn’t understand the first thing about LISPs. A LISP was chosen for these examples specifically because it makes the concept of “context” in code trivial to demonstrate: one can, without much error, assume every pair of parentheses is a separate context. Notice how the second example breaks up the reducing function across several lines and uses vertical alignment to visually delineate each context to the reader. Compare with the first example, where that same function is squashed into a single line, because it fits.

Why does the second example read more easily than the first? Because to comprehend the code one must grasp the context, and if the context is stretched horizontally, the reader’s ability to maintain attention drops like a rock. Of course, one could argue that this example is contrived and nobody would ever squash lines like that, but recall the quote from the beginning of the article:

If a line feels too long, wrap it

Leaving this decision up to a feeling essentially means that anything goes, and the target audience is set up for failure whenever the feeling of the reader doesn’t match the feeling of the writer, and the task of comprehending a line of code fails because of it. It’s important to point out that failing to comprehend manifests itself in multiple ways. A reader may simply not understand what the code does, which prompts annoyance and a switch to a lower gear in the brain to call forth more attention capacity; a retry may then be successful. In other cases a reader will skip to the end, either assuming they’ve understood everything there was to understand, or that there’s nothing of importance left - a mistake with radically slower feedback that may cause failures at much higher levels of the reader’s work. Missing a crucial operation or, worse, “caching” in the brain a wrong understanding of a long line of code can lead to dozens of hours of lost productivity.

One doesn’t have to look for contrived examples; Go’s source code is full of long lines, dense with logic:

func (b *Reader) ReadRune() (r rune, size int, err error) {
	for b.r+utf8.UTFMax > b.w && !utf8.FullRune(b.buf[b.r:b.w]) && b.err == nil && b.w-b.r < len(b.buf) {
		b.fill() // b.w-b.r < len(buf) => buffer is not full
	}
	b.lastRuneSize = -1
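
For comparison, here is one way that loop condition could be wrapped to stay within 80 columns - a formatting sketch, not a claim about how the bufio authors should have written it:

	for b.r+utf8.UTFMax > b.w &&
		!utf8.FullRune(b.buf[b.r:b.w]) &&
		b.err == nil &&
		b.w-b.r < len(b.buf) {
		b.fill() // b.w-b.r < len(buf) => buffer is not full
	}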

Or long lines caused by deep nesting:

func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) {
	v0 := v
	haveAddr := false
	if v.Kind() != reflect.Ptr && v.Type().Name() != "" && v.CanAddr() {
		haveAddr = true
		v = v.Addr()
	}
	for {
		if v.Kind() == reflect.Interface && !v.IsNil() {
			e := v.Elem()
			if e.Kind() == reflect.Ptr && !e.IsNil() && (!decodingNull || e.Elem().Kind() == reflect.Ptr) {
				haveAddr = false
				v = e
				continue

Whether these are too long, or whether they are complex enough to be split, is subjective, but one thing is clear: the authors felt ok at the time of writing them. The reviewers giving a “shipit” also felt ok.
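
For what it’s worth, if one did want the nested condition to fit within 80 columns, naming the sub-conditions is one option. The following is a sketch that preserves the original short-circuit order, not a claim about how this code should have been written:

		if v.Kind() == reflect.Interface && !v.IsNil() {
			e := v.Elem()
			if e.Kind() == reflect.Ptr && !e.IsNil() {
				// Naming the last clause keeps the line short
				// and documents why we keep dereferencing.
				keepWalking := !decodingNull ||
					e.Elem().Kind() == reflect.Ptr
				if keepWalking {
					haveAddr = false
					v = e
					continue
				}
			}
		}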

Why enforce it?

The article is titled “A case for 80-rule”, so naturally my recommendation is to always enforce this rule.

But why should one demand that others behave in some way one has found beneficial? Why be authoritarian and “exert force”, whether by prescriptions or by automatic code formatters? Why not take a laissez-faire approach and hope that eventually the industry will converge on what’s most effective and beneficial? I don’t think that would work, and here’s why.

The absolute majority of developers write code with a single goal in mind: solve the problem at hand. It is insulting to their ego to balance that goal with another one - making sure the next person can easily comprehend what they’ve written - because it implies that there will be a next person, usually because they’ve screwed up by either misimplementing a feature or underimplementing it, or simply because the requirements have changed (but they mostly focus on the former, more insulting, explanations). It doesn’t often strike them that the next person may be themselves!

That’s why libertarian philosophy fails here - it dismisses the concept of negative externalities as something that shouldn’t even be considered [6] by economics.

But the negative externality of long lines of code exists: it is the slow ramp-up when adding features or addressing changed requirements, and the countless hours lost to debugging due to misunderstanding what the code does. This externality should not be ignored or left to feelings.
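
Enforcement itself is cheap: most linters can check line length mechanically. As a sketch, assuming a Go codebase checked by golangci-lint (the lll linter; any equivalent line-length checker works the same way):

# .golangci.yml: fail the lint run when a line exceeds 80 characters
linters:
  enable:
    - lll
linters-settings:
  lll:
    line-length: 80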

References