A Paean for the Proxy

These verses were composed in trying times. Upon a summer’s day in August, the Year of Our Lord Two Thousand and Sixteen, it suddenly became indispensable that I should write a small bit of boilerplate, many times scrawled carelessly upon my IDE window, and never once committed to memory, with the end of orchestrating that the Play Framework, on which firmament rested my code, accomplish some minute but needful end, I remember not what. Raising anchor and casting off into the wide seas of Tela Totius Terrae, I came upon a small inlet at Stack Overflow which seemed probable to house the lore I sought. Clicking upon it, I chanced to provoke the wrath of the Proxy, the gatekeeper which guards our office network. Three times I gave the pass-word, and three times the Proxy denied me, and at last, defeated, I turned to other matters.

In the proximal hours of this mortifying rebuff, I anathematized the Proxy thrice over, and thrice maledicted it, doubling the trials it had laid upon my back like three lashes of a cat-o-nine-tails. But in the following hours I reflected on the happenings, and it came to me that I was like a child before a good shepherd, squealing to be let beyond the pasture, and the Proxy was the good shepherd, the pass-word prompt his crook across the gates, an inexorable guardian, keeping my toe upon the path of truth and my ken upon the course of righteousness. And upon that realization I composed these verses, which I dedicate to the Proxy, in admiration, may it ever watch over us sinful software developers, and keep us from temptation.

Oh Proxy, wondrous, on the net
Above us like the moon, so silvery grey
By blocking sites you never let
We software developers go astray.

You keep us, and guard us, and ensure that we
are never tempted from our tasks
You maintain our morality
But sometimes relent, if we ask.

Oh Proxy, how glorious are thee
A shepherd for this software flock
Forgive us if we question thee
On why you chose Stack Overflow to block.


A Comparison of Four Algorithms Textbooks

At some point, you can’t get any further with linked lists, selection sort, and voodoo Big O, and you have to go get a real algorithms textbook and learn all that horrible math, at least a little. But which book? There are tons of them.

I haven’t read every algorithms book out there, but I have read four of them. Maybe my experience with these four can help guide your decision. The four books are Algorithms, by Dasgupta, Papadimitriou, and Vazirani (hereafter called Dasgupta); Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein (hereafter called CLRS); The Algorithm Design Manual, by Steven Skiena (hereafter called Skiena); and The Art of Computer Programming, Volumes 1-3, by Donald Knuth. I’ll do a five-point comparison, going over each book’s prose style, code use, mathematical heaviness, breadth and depth of topics, and position on the continuum between theoretical and practical.

There’s one thing you should know before I start: Dasgupta is available for free online, while Knuth and CLRS aren’t (well, they probably are, but not legally). So that might make your decision for you. I haven’t been able to determine if Skiena is legally available online. The book is posted in either PDF or HTML on a few legit-ish looking sites, but Skiena’s own page for it doesn’t mention anything about it being freely available, so proceed with caution.

Does it have to be one of these four?

Not at all. There are lots of good algorithms books that I’ve never read. You can even learn a lot by just reading CS.SE and Wikipedia. But these four are the ones I’ve personally used. Plenty of people like all four of them, but my picks do reflect my taste and biases. Even if you decide to go with a different book, this overview might at least tell you what features to notice.

Prose style

A good algorithms book will usually explain its topics in three ways: with a natural language prose explanation, with some kind of implementation code, and with mathematical notation. When you’re starting out, the prose is usually the most important part, because it takes you from “What the hell is a binary search?” to “Got it, how do I write actual code for a binary search?”
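To make that second question concrete, here is the kind of “actual code” a good prose explanation should leave you able to write; a Python sketch, since the books themselves stick to pseudocode:

```python
# Binary search over a sorted list: return the index of target,
# or -1 if it isn't present.

def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1
```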

For me, the winner on prose style has to be Knuth. He’s just a masterful writer. You might have to read some of his sentences twice, but that’s only because he gets across twice as much information as a lesser writer. Knuth’s prose explanations gave me an understanding of several topics I’d despaired of ever getting from other books, like B-trees and merge sort.

Skiena is also an excellent prose stylist. Where Knuth is elegant and flowing, like the John Milton of algorithms, Skiena is direct and sharp, like the Ernest Hemingway of algorithms. Knuth and Skiena are good at different things: Skiena is good at finding a simple, direct way to explain things which are traditionally made more complicated than they need to be. His overview of Big O is amazingly clear and simple, as long as you have at least some memory of calculus. On the other hand, some topics are inherently really complicated, and this is where Knuth shines: he’ll give you several different ways to view a complicated topic, and chances are at least one of them will work for you.

Knuth is much quicker to jump to math, while Skiena mostly eschews math and tries to keep everything intuitive. We’ll discuss that more in the section on mathematical heaviness.

Dasgupta has a pretty good prose style too, more patient and beginner-friendly than either Skiena or Knuth. Sometimes, I found Dasgupta too verbose and felt the authors belabored the obvious too much, but other times that was an asset rather than a weakness (e.g. in the material on Fast Fourier Transforms).

CLRS probably has the weakest prose style of these four books. It’s closer to the standard math text style, written with a formal and distant air, sometimes favoring precision over clarity. It’s strictly functional, but tolerable if you already have some understanding of the topic.

Code use

Knuth’s big weakness is code. He uses two different notations, pseudocode and MIX assembly language, his imaginary assembly for his imaginary computer. I find both of them extremely difficult to follow.

The problems with MIX should be pretty obvious: it’s assembly language, so it’s just hard to read. Not only that, it’s pretty different from the MIPS, ARM, or x86 assembly that modern readers might have seen. It’s designed to be run on either a decimal or binary architecture and assumes six-bit bytes. Knuth put a ton of effort into creating this imaginary machine and its assembly language. Since it’s made up, MIX assembly is still technically pseudocode; but MIX is to pseudocode as Quenya is to pseudo-languages. Newer editions of TAoCP use MMIX, which was redesigned to reflect modern assembly languages. I haven’t seen it yet, but I imagine it’s still hard to read since it’s assembly language.

Knuth also uses high-level pseudocode, but I find that hard to read too because it’s organized like an unstructured language with goto statements. If I were planning to implement the algorithms in an unstructured language, it would probably be fine, but I’ve always found that there’s a nontrivial translation process between Knuth’s unstructured pseudocode and structured pseudocode suitable for implementation in a modern language.

CLRS and Dasgupta both use high-level pseudocode that resembles Pascal, although not too slavishly. CLRS expresses some things as if they were methods or fields of an object, in a semi-object oriented way. Skiena also does some of this, but in addition he uses real C code.

A lot of modern books use C or Java throughout, which might be more to some people’s tastes. The four books here reflect my taste: I like my algorithms language-independent. I don’t mind how Skiena uses C—he uses it mainly to show how to implement something when there’s a significant gap between theory and practice. But I’m glad he stuck to pseudocode for the most part.

Mathematical Heaviness

In this category, going from heaviest to lightest, we have roughly Knuth > CLRS > Dasgupta > Skiena. Dasgupta and Skiena are pretty close, while there’s a significant gap between them and CLRS. There’s a minor gap between CLRS and Knuth, and it really depends on the topic, but CLRS just didn’t take on as many inherently mathematical topics as Knuth did (especially in Knuth’s Volume 2, Seminumerical Algorithms, where he deals with random number generation and efficient arithmetic).

In general, Dasgupta is geared towards people (like students) who just recently learned a little math and are going to learn a little more, but haven’t internalized it all yet. Skiena is geared towards people (like working programmers, or graduate students) who’ve learned some math and forgotten a lot of it. Skiena is also student friendly, but he stresses practicality more than anything, and sometimes, in practice, you gotta do some math.

CLRS is also for students, but it’s more at the graduate student level, especially the later “Selected Topics” chapters that can sometimes get pretty heavy. Knuth seems more geared towards graduate students near the end of their Ph.D.s and practicing researchers.

All four books require you to know some combinatorics and a little number theory, and to have at least a vague memory of limits and function growth from calculus. That’s about it for Skiena and Dasgupta, though Skiena seems to expect you to have internalized the math a little more than Dasgupta does. The basic chapters of CLRS are just a little beyond Skiena in math background, but some topics get quite heavily mathematical. Knuth gets pretty heavily mathematical in almost every topic.

I’ve been conflating two things in the discussion so far, since they’re sort of linked: actual knowledge of theorems and axioms in mathematics, and the ability to read mathematical writing and proofs and follow an author’s reasoning (what’s often called “mathematical maturity”). Knuth requires the most technical knowledge of mathematics of the four here, and he’ll also stretch your reading ability pretty far; but CLRS will also stretch your reading ability, and Dasgupta is no cakewalk either, although I do think Dasgupta qualifies as a picnic. Skiena doesn’t include formal proofs, but he sneaks in proof-like reasoning with his “war stories”, which read quite a bit like proofs, extended with discussion of some of the approaches he tried that didn’t work out.

If you’re getting your first algorithms book and you’re a CS undergrad, Dasgupta or Skiena is easiest to follow. CLRS will stretch you a little, but it’s manageable. If you’re a math or physics undergrad, CLRS shouldn’t be too hard and Knuth might be doable.

Breadth and Depth of Topics

Dasgupta is the loser here; not only does the book cover fewer topics than the others, the topics it chooses to cover are poorly organized and somewhat eccentric. I’m not sure why the authors threw in the awful chapter on quantum computing at the end; it’s totally incomprehensible unless you already understand quantum mechanics, which is no mean feat.

CLRS has the best balance of breadth and depth: it covers basic data structures; some more advanced ones like red-black trees, B-trees, and the union-find data structure; algorithmic paradigms like greediness and dynamic programming; graphs, shortest paths, and network flows; sorting; amortized analysis; and an assortment of other topics such as number theoretic algorithms, cryptography, linear programming, string matching, and NP completeness.

Skiena covers about the first third of CLRS, but he does a lot more with NP complete problems and how to design approximation schemes for them than CLRS does.

Knuth, of course, is the master of depth. Volume 1 covers math background and fundamental data structures; Volume 2 covers random number generation and arithmetic; Volume 3 covers searching and sorting, going through various sort routines and some more advanced data structures, such as B-trees, as well as developing the whole theory behind hash tables and how to choose hashing functions and table sizes; and Volume 4 covers combinatorial algorithms. Volume 4 is split into three subvolumes; right now, only Volume 4A has actually come out.

If you want to get just one book, I would get Skiena or CLRS. They include all the most important topics both in practice and for undergraduate classes. Skiena, as a bonus, even includes some computational geometry material, in case you’re doing video games or computer graphics.

Theoretical vs Practical

Skiena is the most relentlessly practical of this bunch. Knuth was actually pretty practical at the time Volume 1 came out, but he became less and less practical as time went on because people stopped writing things in assembly or using tape drives. Both give you implementation tips to make your code perform better, although Knuth’s can be hard to capitalize on since you’re not writing in assembly.

CLRS and Dasgupta are both theoretical in the sense that they mostly ignore the computer. Everything in CLRS will work, but sometimes their approach is too “straight from the theory”, as I discovered when I implemented Karp-Rabin according to their discussion and put it on Code Review.SE after struggling with numerous overflow-related bugs, only to have someone suggest a different approach that rectified all the performance issues and handled overflow elegantly.
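For the curious, the standard fix (and roughly the shape of what was suggested to me, though this is a sketch of the general technique rather than that specific Code Review answer) is to reduce the rolling hash modulo a prime at every step, so the intermediate values stay small instead of overflowing:

```python
# Rabin-Karp string search with the rolling hash kept reduced modulo
# a prime at every step. The base and modulus here are illustrative
# choices, not anything prescribed by CLRS.

def rabin_karp(text, pattern, base=256, mod=1_000_003):
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return -1
    high = pow(base, m - 1, mod)  # weight of the outgoing character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (p_hash * base + ord(pattern[i])) % mod
        t_hash = (t_hash * base + ord(text[i])) % mod
    for i in range(n - m + 1):
        # On a hash hit, compare the actual substrings to rule out
        # spurious collisions.
        if p_hash == t_hash and text[i:i + m] == pattern:
            return i
        if i < n - m:
            # Slide the window: drop text[i], bring in text[i + m].
            t_hash = ((t_hash - ord(text[i]) * high) * base
                      + ord(text[i + m])) % mod
    return -1
```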

Dasgupta and CLRS are both still good books, and if you’re just starting out learning algorithms, don’t get too caught up on this issue. Write your code in Python or Ruby, some quick and easy language that also ignores the metal. You’ll get the idea of the implementation, and you can deal with the issues around implementing it in some other language later. (Python even looks like the pseudocode in CLRS.)
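That last parenthetical is easy to check: CLRS’s INSERTION-SORT pseudocode transliterates into Python nearly line for line, the main adjustment being a shift to 0-based indexing:

```python
# CLRS's INSERTION-SORT, moved almost mechanically into Python
# (0-based indices instead of the book's 1-based ones).

def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        # Insert a[j] into the sorted sequence a[0..j-1].
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i = i - 1
        a[i + 1] = key
    return a
```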


I probably use CLRS the most, because of its breadth of coverage and because its pseudocode is so much easier to follow than Knuth’s. If I don’t understand the concept or the mathematical underpinnings, I’ll go to Knuth for the deep discussion, and the excellent prose style.

I just got Skiena recently, but in the future, I expect him to usurp CLRS somewhat. He covers most of the same material, but his prose style is more direct and his approach is more practical. Skiena is excellently organized and has a catalogue of problems and their algorithmic solutions in the second half of his book, good for browsing when you’re stumped.

I don’t use Dasgupta that much. Dasgupta was the text for my undergrad algorithms class, and while it was good in that capacity, it’s not really a book that continues to be useful once you’re past the first-semester course, mainly because of the lack of breadth in coverage and the eccentric organization and choice of topics.

From Java to Python in Pictures

I’ve been working mostly in Java for the past several months, for my job. It’s started to feel kind of like this:

[image]

[image]

And every time I have to whip out a design pattern or build yet another layer or deal with generics or interfaces or add 50 public setters for the benefit of some library, it kind of feels like this:

[image]

A few weekends ago I worked on a small hobby project in Python. After all that Java I’ve been doing recently, it felt kind of like this:


Your mileage may vary.

The Ballad of Leftpad: Or, Batteries Not Included

By now, you’ve probably heard about the leftpad debacle. If not, let me be the first to explain it. Some guy wrote an 11-line Javascript function that prepends padding characters to a string (if you need them on the right, you’re outta luck), and for some reason he published it on NPM. Then a bunch of people, including the React team at Facebook apparently, used it as a dependency in their projects. Then the guy who wrote leftpad got mad, ragequit NPM, and took down his package. This broke Node and Javascript projects all across Silicon Valley, and since some of those projects were making money, and their making money was inextricably tied to their not being broken, this was not very good at all.

I read a blog post about this incident in which the author wondered if we’d all forgotten how to program and couldn’t have written this function ourselves. A bunch of commenters responded that you shouldn’t have to write this function yourself and it was right of these projects to include an 11-line dependency, and everyone argued.

I happen to feel that both sides of this argument are wrong. Of course you don’t want to include a dependency for this stupid little crap function. And of course you don’t want to write it yourself, even if you’re perfectly capable of doing so. Something like this really belongs in the language’s standard library.
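For a sense of scale, here is roughly what such a function amounts to; a Python sketch for illustration, not the actual Javascript package:

```python
# A leftpad-style function: pad a string on the left with a fill
# character until it reaches the requested width. (Sketch only; the
# real leftpad was Javascript and handled its arguments differently.)

def left_pad(s, width, fill=" "):
    s = str(s)
    while len(s) < width:
        s = fill + s
    return s
```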

Languages used to come with these things called standard libraries. Remember those? You didn’t have to include them in a package file or download them from a repository somewhere. They were just there. If you had the language, you had all these libraries. They could do simple things you could have written yourself, like left-padding a string, joining the elements of a list, or even right-padding a string. They could also do more complicated things, like converting between character encodings, or parsing CSV files, or making HTTP calls, or creating threads. Sometimes they even had these things called data structures, for storing data.
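Every item on that list really is a one-liner or close to it in, say, Python’s standard library; none of this needs a package manager:

```python
# Each "simple thing" above, straight out of Python's standard library.
import csv
import io
import threading

padded  = "7".rjust(3, "0")          # left-pad a string
rpadded = "7".ljust(3, "0")          # right-pad a string
joined  = "-".join(["a", "b", "c"])  # join the elements of a list
encoded = "café".encode("utf-8")     # convert between character encodings
rows    = list(csv.reader(io.StringIO("a,1\nb,2")))  # parse a CSV file
worker  = threading.Thread(target=list)              # create a thread
# HTTP calls live in urllib.request (not exercised here).
```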

Presumably most of these people arguing in favor of leftpad were Javascript programmers. Javascript kind of has a standard library, but not really. It’s really spotty; compared with Python, Java, or C#, it’s missing a lot. Even when the library gets expanded, you have to wait for Safari and Edge to implement that part of it before you can use it. God forbid you need to support IE, even the reasonably standards-compliant IE10 and 11. So Javascript programmers will often use polyfills, which are NPM packages that implement whatever the browser is missing. AKA, dependencies. In the old days, the old days being approximately four years ago, there was no NPM; people would link in CDNs or just copy and paste the code into their current project. In that cultural context, you can see why Javascript programmers would argue in favor of leftpad: it kind of is the sort of thing you shouldn’t have to write yourself, even if you’re capable of doing so, but if you’re not going to write it yourself, you’ve got to get it from somewhere, and getting it from NPM sure is nicer than copying it off someone’s blog and pasting it into your codebase.

On the other hand, I have a lot of sympathy for this comment from the blog:

Prior to the emergence of jQuery, JavaScript development was [a] mess. Polyfills for different browsers and other abominations [were] all copy-pasted from project to project. It was a toy language that could be hacked into doing cool things (Google Maps and Gmail). The idea that a layer of abstraction that [sic] could hide browser complexity was a revelation. This idea took hold and grew till we got to the sitation [sic] where we are now. Simultaneously the “one language to rule them all” cargo cult and SPA trends emerged and JS ended up being in the right place, at the right time with the right toolsets.

Any edge cases and incompatible language implementations -including incorrect or missing operators(!) can be abstracted away and incorporated into the package and shared. I personally think that modules are a good way of working around the rickety mess that is JavaScript development. Perhaps the real answer it [sic] to stop fetishising a 20-year-old language that was thrown together in a week, and make a concerted effort to standardize on something better.

Javascript is not a great language. It’s definitely better than PHP, or COBOL. I’m gonna say it’s nicer to use than C++, too. It’s an okay language, and Brendan Eich making it as good as it is, given the constraints he was under, is a laudable achievement. But it’s not a great language, and there’s a ton of over-the-top love for it going around these days that doesn’t seem justified. But it’s what we’ve got, and we’re probably stuck with it; if it’s this hard and takes this long to get ES6, I can’t imagine a switch to a completely new language happening anytime this century. And lots of people have worked hard to make it better, and they’ve mostly done a good job.

However, despite these efforts, the Javascript ecosystem is definitely not first-class. I recently converted a project at work from Gulp to Webpack. The experience was not pleasant. It was a side project that I mostly pursued nights and weekends, and it took me over two months to finish, because Webpack is horribly complex. At the end of the day, there were still things I wasn’t satisfied with, things I had to hack in weird ways to make them work. And after those two and a half months of work, I could create modules. Webpack can do more, but I wasn’t doing more; I was doing what Python does right out of the box, effortlessly. While I appreciate the engineering effort it must have taken to create Webpack, I can’t call something a great tool if it makes basic shit like creating modules that difficult.

I’m not saying these things to insult Javascript or Javascript programmers. I’m telling you guys not to settle for less. And definitely don’t create a philosophy around settling for less. Don’t create a philosophy that it’s okay not to have a standard library because you can just include a million little dependencies from NPM. That’s silly. It’s like Java needing to download the BigInteger class from Maven Central. Javascript programmers deserve a first-class standard library, just like Python and Java and Ruby and C# have. You haven’t got one right now; you’ve got a bunch of tiny packages on NPM, of questionable quality, that some guy can pull whenever he feels like it. If you own the deficiencies of your ecosystem instead of arguing that they aren’t problems, you’ll be further on your way to fixing them.

Harry Potter and Bilbo Baggins: Two Approaches to Magic

There’s a lot of magic in modern programming.

When I say “magic”, I really mean “sufficiently-advanced technology”. And when I say “sufficiently-advanced technology”, I really mean “library that you have no fricking idea what the kowloon putonghua it’s doing, but you use it anyway because it makes your job so much easier.”

There are lots of libraries where I have no idea what the mugu gaipan they’re doing to accomplish what they do. Just today, I confronted yet another one: Mockito, the mock object library for Java. Some of what it does, I know it does with reflection. Some, I can’t even imagine how it does it without hacking the bytecode. (It doesn’t. PowerMock, on the other hand, does hack the bytecode. It’s like adding capabilities to C by building a library that opens up a binary as if it were its diary, writes in a bunch of bits, and closes it up again. It’s bionic programs.)

Some people probably wouldn’t be bothered by this. They would use the magic without understanding it, happy that they have these magical artifacts to help them do their job. On the other hand, all this magic drives me starkers. I complained in my post on web frameworks that I didn’t like Rails because, as Princess Jasmine said breathlessly, “It’s all so magical!” Then, later, I tried Rails again, and all of a sudden, it didn’t bother me. What I didn’t realize at the time was that I had spent several months between those two incidents reading the odd article on metaprogramming, usually in a Lisp/Clojure context but sometimes also pertaining to Python and Ruby. A lot of these articles would drop little hints about how RSpec and ActiveRecord were implemented. By the time I came back to Rails, it suddenly wasn’t so magical, because I could see the threads of fate behind the tapestry, weaving themselves into finders and schema definitions and migrations.
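For a taste of the kind of trick those articles reveal, here is a toy Python sketch (this is not how ActiveRecord is actually implemented; Ruby does the equivalent with method_missing, and the class and method names here are made up):

```python
# A toy version of the metaprogramming behind "magical" dynamic
# finders like ActiveRecord's find_by_name. Python's __getattr__ is
# only consulted for attributes that don't otherwise exist, so we can
# synthesize finder methods on demand instead of writing them.

class Table:
    def __init__(self, rows):
        self.rows = rows

    def __getattr__(self, name):
        if name.startswith("find_by_"):
            column = name[len("find_by_"):]
            def finder(value):
                return [r for r in self.rows if r.get(column) == value]
            return finder
        raise AttributeError(name)

users = Table([{"name": "ada", "role": "admin"},
               {"name": "bob", "role": "user"}])
admins = users.find_by_role("admin")   # a method nobody ever wrote
```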

I’m not going to pass judgment on people who aren’t bothered by the magic. I’m not going to say they’re ruining programming or whatever. Frankly, most of this magic is pretty well packaged up; if you ever do have problems with it, just Google whatever cryptic passage flashes on screen and you’ll find a tome of lore with the answer you seek. In practice, this is what I have to do most of the time because otherwise I would never get anything done. So I can’t blame anyone if this doesn’t bother them.

However, this does remind me of a difference in approach between two famous fantasy heroes, Bilbo Baggins and Harry Potter.

Bilbo Baggins is a hobbit. He has a mithril vest and a magic sword, Sting, that glows in the presence of orcs, and a magic ring that turns him invisible when he wears it. He doesn’t question how or why they work; they work, so he uses them to fight with the goblins, battle a troll, and escape in a barrel from the Elf-King’s hall. (If you’re not familiar with the story, you can find out all about it from the original work. Just Google “The Ballad of Bilbo Baggins”.)

Bilbo hangs out with Gandalf, a wizard, and Elrond, an elf lord. Wizards can do magic. So can elf lords. Bilbo doesn’t know or care why or how. They just can; he accepts it.

For the most part, this strategy works for Bilbo. He survives all sorts of things that would have killed a lesser man, becomes fantastically rich, and ends up sailing off into the West to be sponge-bathed by elf maidens in his dotage. He does run into a rather nasty edge case fifty years on, when it turns out the magic ring is actually the One Ring to rule them all, created by the Dark Lord Sauron in the fires of Mordor. But by now, he’s retired, so it’s really someone else’s problem.

Like Gandalf, Harry Potter is a wizard. Unlike Gandalf, Harry Potter didn’t spring into creation fully formed and able to do magic just because some cosmic guy played a harp. Harry Potter had to slave away for six years learning magic at Hogwarts. (It would have been seven years, but destroying Lord Voldemort’s Horcruxes gave him enough extracurricular credit to skip a year, sort of like doing a year-abroad in Muggle colleges, except with more death.) Harry had to take all kinds of crap to learn magic; he had to put up with Snape bullying him, and McGonagall riding him to win at Quidditch, and he had to stay up until 1 AM writing essays full of ludicrous made-up garbage for Professor Trelawney. He had to deal with Dumbledore, who refused to ever tell him anything straight out, instead making him engage in some insane version of the Socratic method where you die if you don’t guess right.

At the end of all this, Harry Potter still isn’t that powerful. He manages to do the Imperius Curse and Sectumsempra, but that doesn’t help him much since those are both illegal. After six years of slaving away, it’s hard to see how Harry Potter is more powerful than he was when he started.

In the Harry Potter world, magic is hard. There are some pre-packaged artifacts to help you, like the invisibility cloak, but to be really effective at magic, you have to learn it. You have to study hard. Harry Potter doesn’t really study hard. Maybe I should have used a different example, like His Dark Materials, or Edward Elric. But Harry Potter works; while he does benefit from pre-packaged artifacts like the Invisibility Cloak and the Elder Wand, there’s always the implication that anyone smart and hardworking can learn how those things work, and even reproduce or improve on them.

It bugs me to trust anything that can think if I can’t see where it keeps its brain. It bothers me if I can’t understand how something is implemented. It bothers me when I can’t understand how something is even possible without bytecode manipulation, and I want to know the answer, even if the answer is “it’s not possible without bytecode manipulation”. I take a Harry Potter approach to programming.

Well, here’s something else we can all fight about! Soon we’ll be insulting people by calling them hobbit programmers.

The Circle of Annoyance

A few months ago, I got a job. They pay me and everything! Specifically, they pay me to work on an internal tool for the customer engagement team. This tool is written in Java using the Play! Framework.

I might have more to say about Play in a future post. Then again, I might not. My feelings on it are pretty neutral. It’s an MVC framework. You got your models, you got your views, you got your controllers. It’s more verbose than Rails, being Java, but not hugely so. It’s rather like Django, but with batteries not included. The framework is actually implemented in Scala, and the template language is a weird Scala DSL that compiles to Scala functions that generate HTML. I’m not a huge fan of Scala. I’ve seen some incredibly nice libraries written in it, but you, and by you I mean I, have no chance of writing a library like that, because the implementation of that library is file after file of gobbledegook that looks like the horrific love child of the section of the Java language specification on generics and an academic paper on Hindley-Milner type theory. On the other hand, the arguments to Play templates are statically typed. This is nice, because templates are completely impossible to debug, and that goes double for these ones.

There was a time when Play was the only lean web framework for Java and all the other ones were eldritch monstrosities spawned in the moss-lined cesspits of unknown Kadath, but nowadays it’s a singularly unimpressive piece of work. What I’ve seen of Spring Boot really doesn’t look that bad, for instance. Play does seem more Java 8-ready than most of the other web frameworks, but given how watered down Java 8 turns out to be once you’ve spent a little time with it, I’m not sure how much I really care.

But that wasn’t the point of this little discussion. The tool I work on at my job, which I got, where they pay me, and in return I try not to force push my branch with three dumpy commits into the master branch on their GitHub repo, thereby destroying the entire app, was written by another developer who left the company before I joined. And his code drives me fucking crazy. I mean it. It drives me up the fricking wall. The formatting is a mess. The design is monolithic, impenetrable, and basically impossible to unit test. Some of the methods are 200 lines long. In one place, he checks if a string is empty or contains only whitespace using Apache Commons’s isNotBlank, while in the next he performs the exact same check using Spring’s hasText. (If Java had just included basic shit like that in its standard library, this wouldn’t be a problem.) Recently, I had to read the code that implements the tool’s Salesforce integration. While reading this code, I felt myself succumbing to the madness of the moss-lined cesspits of unknown Kadath. I started to jump at small noises, like the ringing of telephones or the questioning of dev-ops. When I came upon a coworker in the halls, I would low like a moose calf, or yap like an arctic fox, such was my shock at the sight of another being. I covered pages and pages with diagrams and flowcharts, trying to make sense of it all, finding clues to the purpose of a method in newspaper obituaries, seeing the organization of a class mirrored in a bowl of quinoa salad, muttering HTTP status codes under my breath.

So I rewrite the code, because it’s been made clear to me that this is now my code, and I can do whatever I want with it as long as the tool continues to work. (Well, the senior developers might protest if I decided to implement my own dialect of APL with an embedded MVC DSL on top of Play and rewrite the entire tool in that. But…but I can refactor all I want…so…so shut up! At least I have a job, where they pay me! They even let me write unit tests! So there!) And every few minutes, as I’m rewriting the code, I think about some future person, looking at this code. It might even be me, in the future. Or it might be some other junior developer who comes in to keep the tool running so our customer success team knows what our customers are up to. And I think about this future developer looking at this code, the very code I’m writing as I think this, and being driven fucking crazy by it.

I can’t even imagine what exactly this future developer will hate about the code I’m writing today. All I know is that he (or she; 50/50 by 2020) will hate it. It will drive him/her/it/them fucking insane. He/she/it/they will be madly scribbling on the walls trying to understand this squamous code. Luckily, the walls are IdeaPaint, so they’ll erase. But the pain won’t.

And then, maybe, some day in the even further future, a further future developer will read the future developer’s code, and it will drive them fucking crazy. Even their genetically enhanced cyborg brain won’t be able to penetrate such code. It will be like that one episode of Star Trek: The Next Generation where Data came up with a plan to kill the entire Borg Collective by making them look at a really weird shape that would blow their minds, man.

It’s all part of a great circle of crappy code. Simple crappy code becomes complex crappy code becomes complicated crappy code becomes simple crappy code again, and at each step, a developer’s mind is lost. Someone has to break the circle of crappy code. They have to write good code. They have to rewrite the entire app in Yesod. Only then will the clouds part, and the darkness pass, and the compiler sing, and the tail calls be optimized, and the gods smile upon this little blue binary. Only then will the circle be broken.

Actually, it will probably still be crap.

Teachers, let your students use the standard libraries!

This is not a blog.

No, that wasn’t some kind of Magritte reference.

Because this is not a blog, this is a plea. I am pleading with you, professors, to end this madness of forbidding your beginning students from using the standard libraries in the programming languages you’re teaching them. And not just college professors, though they are most at fault here. This is a plea to anyone who teaches beginning programming, at any level and in any capacity, to LET! YOUR! STUDENTS! USE! THE! LIBRARIES!

If you want to get really crazy, you could even teach your students to use the libraries. I don’t expect that, but it’s something to consider.

Teaching beginning programmers at an extremely low level of abstraction, which is usually the reason for banning the standard libraries, never causes anything but pain. The students hate it. Not only is it painful, not only does it force them to accept what seem to them like bizarre and arbitrary restrictions on what code they can write, it’s also boring. They can’t do anything cool, like make a game or put up a website. All they can do is write dorky little interactive command line interest rate calculators and Fibonacci printers. And it makes things harder on you, too. You have to help students debug their ridiculously complicated programs that would be so much simpler if they could just use a list, but they can’t, because you won’t let them, because you learned to program in Pascal and doggone it, Pascal doesn’t have lists!

I understand that you want your students to learn how to use arrays, but frankly, knowing how to use arrays is just not an important skill anymore. Besides, teaching arrays by forcing students to use them in Java, C#, or even C++ is just lame. I firmly believe that the best way to teach a programming concept is to teach it in a language that takes that concept to the max. Java 8 supports a weaksauce subset of functional programming, and Python supports a slightly stronger, but still pretty weaksauce, form of functional programming. But if you really want someone to learn functional programming, you don’t teach them Java 8 or Python. You teach them Haskell, or Clojure, or Erlang. Similarly, if you want them to learn arrays, teach them C, or assembly language. If you teach them C, they’ll learn not just arrays, but lots of other useful things about how real machines work. If you teach them Java, but force them to use arrays, they’ll learn that you suck.

Even more absurd than forbidding your students from using lists and maps is forbidding them from using built-in sorting. Don’t make your students write a freaking bubble sort just to sort the arrays that you made them use instead of lists or maps. Bubble sort is stupid, and your students aren’t going to get anything out of being forced to implement it after three months of programming experience. Instead, let them use built-in sorting. Later, when they take an algorithms course, they can learn a real sorting algorithm, like Quicksort. Then they can forget that as soon as the test is over and keep using built-in sorting.
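The entire "use the built-in sort" lesson fits in one line, sketched here in Java (assuming Java, since that's what most intro courses inflict); for primitive arrays, the library's sort is a tuned dual-pivot quicksort, so the students get a real algorithm for free:

```java
import java.util.Arrays;

public class SortDemo {
    public static void main(String[] args) {
        int[] grades = { 87, 42, 99, 63, 75 };
        Arrays.sort(grades);  // one call; no hand-rolled bubble sort required
        System.out.println(Arrays.toString(grades));  // [42, 63, 75, 87, 99]
    }
}
```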

There’s some room for forbidding, of course. If you’re teaching the students how to implement a linked list, you can forbid the standard library’s list. If you’re teaching functional programming using Common Lisp, you can forbid rplacd. (You should probably forbid rplacd just because it’s called rplacd.) If you’re assigning a self-hosting C compiler, you can forbid the following program:

#include <stdlib.h>
#include <string.h>

int main(int argc, char** argv) {
    char command[256];
    strcpy(command, "gcc ");
    strcat(command, argv[1]);
    return system(command);
}

But if your students are writing a dorky interactive command line app, for goodness’s sake, let them use the libraries. Stem the tide of students coming to Stack Overflow with programs that look like this:

#include <iostream>
using namespace std;

int main(void) {
    int var1;
    int var2;
    int var3;
    // . . .
    int var30;
    int var31;

    // Now calculate the monthly value
    int sum = 0;
    sum += var1;
    sum += var2;
    // . . .
    sum += var31;
    cout << sum << " is the sum." << endl;
}

And teach your students good habits, like using built-in sorting instead of writing their own crappy bubble sort.