The Tool Divergence
It used to be that the platform on which most software was developed was also the one on which most of it was deployed: the personal desktop computer. As a developer you only needed one machine.
Now the deployment target is an app on a phone or tablet. But you still need a big desktop machine to develop it.
The first effect of this development-deployment divergence is that it cements the new position of the desktop as a specialized factory machine.
But another effect is that it raises the barrier to casual hacking. In the PC age, a kid who wanted to write software just needed to download a few simple tools and was off and running. Today that kid would need to set up a complicated toolchain on a beefy desktop, in an age when they think of desktops as something quaint, and might very well not even own one, because their “primary device” is a phone or tablet.
Even the term “primary device” probably sounds funny to them. Growing up, if someone had asked me what my primary device was, I would have laughed, because my PC was my only device and it was all I needed or wanted.
So here’s my wish, and challenge: make it so that a young person who gets the urge can code and build something on their phone or tablet. I suspect it will be easier with the extra screen real estate of a tablet, but the core problem is the same. Make it so that some of the infinite interstitial time their phone soaks up can be filled with programming. What if, at the end of that Apple ad, the teenager presented his family with a cool app rather than a movie?
The Programming Language Consensus
If you look at features in mainstream programming languages over the last couple of decades, they seem to be converging around a common set of core features, surrounded by fashion, preference and whim. These core features are:
- first-class functions
- lists and maps as part of the language
- type inference
These features originated in dynamically typed scripting languages like Python. (Well, they originated in Lisp, but then everything originated in Lisp.) Then they gradually found their way into statically typed compiled languages like C++11 and Java. Some of the most anticipated features of Java 8 are lambda expressions and type inference.
There is a subtle distinction between succinctness and expressivity. Expressivity almost always leads to succinct code; my favorite example is list comprehensions. But succinctness is also about omitting unnecessary verbiage. A good example is type inference, which spares you from spelling out types.
Go had all three from the get-go.
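Here is a minimal sketch of what all three look like in Go (the names are invented for illustration):

```go
package main

import "fmt"

// apply returns a new slice with f applied to every element of xs.
func apply(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	// Lists and maps are part of the language: literal syntax, no imports.
	primes := []int{2, 3, 5, 7}
	names := map[int]string{2: "two", 3: "three"}

	// Type inference: := infers []int, map[int]string and func(int) int.
	square := func(n int) int { return n * n }

	// First-class functions: square is a value we can pass around.
	fmt.Println(apply(primes, square)) // [4 9 25 49]
	fmt.Println(names[3])              // three
}
```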
Two things that are not yet part of this common core are objects and concurrency. I almost added “object orientation” to the list above, but then Go’s model of defining structs and interfaces made me think otherwise (sketched below). It reminds me of Niklaus Wirth’s philosophy of understanding programs by looking at their data structures first. As for concurrency, there are wildly different ways of expressing it, but I have a hunch that message passing, as in Go’s CSP channels and Erlang’s actors, will prevail. Maybe in another few years there will be five items on the list of common language features.
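On the structs-and-interfaces point, a minimal sketch (the types are invented for illustration): behavior attaches to plain data after the fact, and the interface is satisfied implicitly.

```go
package main

import (
	"fmt"
	"math"
)

// The data structure comes first: a plain struct.
type Circle struct {
	Radius float64
}

// Behavior is attached to the data afterwards, as a method.
func (c Circle) Area() float64 {
	return math.Pi * c.Radius * c.Radius
}

// An interface names a behavior. Circle satisfies it implicitly;
// there is no "implements" declaration anywhere.
type Shape interface {
	Area() float64
}

func main() {
	var s Shape = Circle{Radius: 2}
	fmt.Println(s.Area()) // 12.566...
}
```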
This is a good thing, because it looks like we’re gradually finding our way toward abstractions and constructs that are proving widely useful and broadly applicable. It is also rendering irrelevant many of the PL community’s fiercest debates. The agitation over static vs dynamic types starts fading when a C++ or Java snippet with range-based for loops, lambdas and omitted type declarations looks very much like a Python or Ruby one. I was also surprised that, while Go is very much a systems language, in chatting with other programmers I found it eating more share from Java and Python than from C++.
“Most writers manage to get by because, as the deadline creeps closer, their fears of turning in nothing eventually surpasses their fears of turning in something terrible.”
– Why Writers Are the Worst Procrastinators, by Megan McArdle
I’ve followed Kevin Kelly’s work for a while, and in my humble opinion he is our foremost apologist for technology. If you want an introduction to his work, this Edge interview is a dense yet readable summary.
I call myself a protopian, not a utopian. I believe in progress in an incremental way where every year it’s better than the year before but not by very much—just a micro amount. I don’t believe in utopia where there’s any kind of a world without problems brought on by technology. Every new technology creates almost as many problems as it solves. For most people that statement would suggest that technology is kind of a wash. It’s kind of neutral, because if you’re creating as many problems as it solves, then it’s a 50/50 wash, but the difference in my protopian view versus, say, a neutral view is that all these new technologies bring new possibilities that did not exist before, including the new possibility of doing harm versus good.
Many years ago, before I had a GPS-enabled smartphone with maps, we went on a roadtrip along the coast of Maine, stopping at nearly a dozen lighthouses. When I look back, I myself am surprised that I was able to pull that off without clutching my phone the whole way. But it was a memorable roadtrip. We saw all the lighthouses we wanted to see, and got a glorious experience of the Maine coast.
How did I navigate? In the days (weeks?) before the trip, I drew maps on paper of our entire route, with all the places we wanted to see, using Google Maps as the source. It was probably tens of pages of hand-drawn maps. It was tedious, but it had its advantages.
I wish I had saved that paper, because it would be a memento as evocative as the pictures from that trip.
(In 2001, LineDrive tried to capture some of the advantages of hand-drawn maps by automatically generating driving directions in a similar style.)
I tell you this story not to reminisce about my travels (well, maybe a little bit) but to point out that analog artifacts still have affordances that give them advantages over digital devices. Even today, for complex multi-hop trips I prefer to hand-draw maps on paper.
I want my digital interfaces to be more organic and analog, and my analog experiences to make it into my digital corpus effortlessly.
“In publishing, meanwhile, the deal with the customer has always been dead simple, and the advent of digital has not changed it: You pay the asking price, and we give you the whole thing. It would make little sense to break novels or biographies into pieces, and they’re not dependent on the advertising that has kept journalism and television artificially inexpensive and that deceives the consumer into thinking the content is inexpensive to make.”
– The Publishing Industry is Thriving, by Evan Hughes
“There’s a German word for it, of course: Sehnsucht, which translates as “addictive yearning.” This is, I think, what these sites evoke: the feeling of being addicted to longing for something; specifically being addicted to the feeling that something is missing or incomplete. The point is not the thing that is being longed for, but the feeling of longing for the thing. And that feeling is necessarily ambivalent, combining both positive and negative emotions.”
– Pinterest, Tumblr and the Trouble With Curation
Star Trek vs Babylon 5
Star Trek and Babylon 5 are two space operas that offer very different views of humans, technology and the relationship between them. One is hugely popular: it has entire conventions dedicated to it, still gets reruns, and has spilled over into popular culture beyond geek circles. The other is likely to draw blank stares when mentioned. That contrast says a lot.
Star Trek is an expression of the idea that technology cleanses people. It is technology manifested outwards to people and society. And technology is clean and rational.
Star Trek depicts a clean world. Not just physically clean, but also emotionally and socially clean. There are no politics. (Did Picard ever have to contend with other Star Fleet captains for limited resources?) There is no heartbreak or jealousy. There is no class. There are no vices, no addictions, no crime. The Enterprise does not have a dark, seedy corner. There is no soul-searching. It’s telling that the non-human Data is often the most reflective character. Good and evil are clear. Star Fleet and everyone in it is good. All problems and all evil come from outside: Q, the Borg, other civilizations.
Babylon 5 is an expression of the idea that human nature is independent of technology. It is technology as one part of people and society. And people are messy and irrational.
Babylon 5 depicts a world of humans with all their failings. In it, technology has brought us far into space, but we are all still human. Life is still messy and unpredictable and hard. Politics, within humanity and among the space-faring races, is central. Society is stratified. There are haves and have-nots. It is not always clear which side is “right”. People experience joy, sorrow, heartbreak and the full range of human emotions. They battle demons and addictions.
Fat Client, Thin Client
Ben Thompson’s latest post is about Dropbox, Box and focusing on consumers vs enterprise as customers. I don’t have anything to say about that. I do, however, want to talk about his opening paragraphs:
The problem with the old thin client model was the assumption that processing power was scarce. In fact, Moore’s Law and the rise of ARM has made the exact opposite the case – processing is abundant…
…Thus, over the last few years as the number of fat clients has multiplied – phones, tablets, along with traditional computers – the idea of a thin client with processing on the server seems positively quaint; however, in the context of our data, that is the exact model more and more of us are using: centralized data easily accessible to multiple “fat” devices distinguished by their user experience.
Calling today’s phones and tablets “fat” is an easy trap to fall into. They are orders of magnitude more powerful than their counterparts of a decade ago, and if the Palm Pilot was a thin client, surely a modern iPhone cannot also be called “thin.”
But the thin-client model was never about the absolute processing power, or lack thereof, of the clients. The crucial factor was the relative difference in processing power and storage between client and server. No matter how powerful the device in our hands is, the warehouse-scale computer it depends on is many, many times more powerful. And even without knowing exactly what datacenters looked like two decades ago, I think it’s safe to say that the server-to-client power ratio has only grown over time, and is likely to keep growing.
A smartphone or tablet’s impressive computing power goes largely into pushing pixels around and giving us a great experience by responding quickly. It would be useless without the backend services it derives its usefulness from: app stores, cloud storage, social networks… the list is endless. A phone or tablet without net connectivity is pretty much a brick. And so, by definition, it “depends heavily on some other computer (its server) to fulfill its computational roles.”
The Contours of My Attention
Lately I’ve felt lost when in front of the computer. I feel like every time I turn to the screen, I’m at the bottom of a deep ravine and the sky is blocked off but for a thin strip between its high walls. I have to claw my way back to the top to see the lay of the land, and where I am located on it.
What was I doing? What do I need to do next? These are the most important questions when I’m working. And modern interfaces make it easy to lose track of them when I’m in the thick of it, and downright hard to answer them when I try to come back after a break, or an interruption.
The contours of my attention are varied and varying. They have peaks and valleys and plateaus. They are changing, rising and falling. Some things are central, some peripheral. Some things are important, some urgent, and some whimsical. Some peripheral things might be important. Some whimsical things might become central.
The modern UI does not know about the contours of my attention. It is a vast plain, with no distinguishing marks.
(Note that this is a different problem than that of being distracted, and being on a device where distraction is so easy. Not that I don’t suffer from that problem too but that’s a whole different story.)
I’ve taken to going analog. Writing things down. I got a couple of nice fountain pens that lightly tickle my nostalgia bone and that make me enjoy the sensual pleasure of putting wet ink on smooth paper. Underlines. Doodles. Boxes. Large writing. Small writing. They all tell a story. They all anchor thought, and recollect it.
Rather than the keyboard driving, I let the paper drive. I write tasks and designs and box and arrow diagrams. Then I go do one little thing on the computer. Then I come back to the paper. And so on. This is very similar to the idea of having an analog desk and a digital desk.
This is the next challenge in designing UIs: making them recognize and adjust to the contours of my attention.
“Being a computer science professional means more than just being a programmer. What does it take to become a successful professional in computing? Andy Begel and Beth Simon did a study of computer science graduates newly hired as software developers at Microsoft. They found that the demands of the job had very little to do with what the students learned in their classes. The students’ classes gave them technical skills to design and develop, but mostly what the new hires did was to collaborate and debug.”
– What’s Our Goal for a CS Degree, and How Do We Know We Got There?, by Mark Guzdial
So I finally got to use Go at work, which was a great excuse to learn the language. Here are some newbie impressions after all of three weeks with it.
- the documentation is fantastic. Having a tutorial where one can execute (and edit!) the examples on the webpage itself really helps.
- the syntax favors terseness. It is a relief after the chattering verbosity of something like Java. Types don’t always need to be declared, and are inferred wherever possible. There are a few quirky choices: I didn’t think I’d see “:=” again after I moved on from Pascal. And the biggest breakthrough of all: the automatic formatter gofmt saves Go programmers quintillions of hours in avoided debates about formatting style and micro-futzing with spacing in the editor.
- concurrency is part of the language, in the form of channels and goroutines (see the sketch after this list). About time too, since we’re in the 21st century and threads cannot be implemented as a library (or at least, it’s always clunky when they are).
- like Python, the cognitive size of the language is small enough to fit in one brain without constant thrashing. I write Go in an editor (Emacs), not an IDE, and that works just fine. With Java there was a heavy productivity hit to opting out of something like Eclipse and auto-complete; the IDE felt like an earth mover for code.
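To make the channels-and-goroutines point concrete, here is a toy sketch (not from any real codebase) that also shows “:=” inference at work:

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // := infers chan int

	// A goroutine is just a function call prefixed with `go`.
	go func() {
		for i := 1; i <= 3; i++ {
			ch <- i * i // send each square down the channel
		}
		close(ch) // signal that no more values are coming
	}()

	// range receives from the channel until it is closed.
	for v := range ch {
		fmt.Println(v) // 1, 4, 9
	}
}
```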
If one is writing a new system (especially a backend) from scratch that doesn’t have to call existing C++ or Java code (or only calls them via RPC), then Go is a pretty compelling option right now. That is great after a decade of no other viable options.
The Human Resolution Threshold
CES is upon us. And it is CEO after CEO blasting us with specs, specs, more specs, better specs, larger specs. More GB. More pixels. More more more!
I never thought I’d say this, but it’s exhausting. Even worse, it’s boring.
Seeing as I make my living in the field, this conundrum spurred some soul-searching in me. I don’t want to be the software engineer who bitches and moans about tech.
My first computer had a monochrome (green!) monitor with a resolution of 720x348 pixels. Moving up to 4-bit color was a huge step. Fast-forward to today, with millions of colors and resolution indistinguishable from natural surfaces: what remains for flat displays to accomplish? The next interesting thing is to take displays to places they haven’t been yet, like inside your eyes.
Once displays crossed the threshold of human resolution, they were essentially “done”.
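Where that threshold sits is back-of-the-envelope arithmetic. The sketch below assumes the textbook figure of about one arcminute of visual acuity for a 20/20 eye and a 12-inch viewing distance; both numbers are assumptions, not measurements:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// Assumption: a 20/20 eye resolves about one arcminute of angle.
	arcminute := math.Pi / (180.0 * 60.0) // one arcminute, in radians

	// Assumption: a phone is held about 12 inches from the eye.
	distance := 12.0 // inches

	// Smallest feature the eye can pick apart at that distance.
	feature := distance * math.Tan(arcminute) // ≈ 0.0035 inches

	fmt.Printf("resolution threshold ≈ %.0f ppi\n", 1.0/feature) // ≈ 286 ppi
}
```

That lands in the roughly 300 ppi ballpark, which phone displays have already crossed.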
Something similar has now happened with computers. For the vast majority of users, specs like the amount of RAM and CPU speed simply don’t matter any more. At some level, even individual computers don’t matter any more. What’s more interesting is what you can build with giant warehouses full of them. Capabilities like Google Now and Siri (powered by warehouse-scale computers) have pushed what we can do with phones further than any faster Snapdragon processor.
And that’s the source of my ennui with CES. A lot of what they’re showing is pushing past the human resolution threshold. More of more. We need to go to new places, even if it is with less.
“If the Ph.D. was always about self-enrichment and not about carving out a career path, I should have treated it differently. I didn’t need to turn my life upside-down to put lines on my C.V. because no one outside academe, and not even those inside academe who would eventually hire me as an adjunct, would ever care to see them… If you’re really on the passion track, or even on the alt-ac track, and not on the secretly-hoping-for-tenure track, then take the pressure off and treat your scholarship like you would any other serious hobby: as something you value and enjoy but wouldn’t set above other things like your family or your job.”
– Beware the Passion Track, by Deb Werrlein. The author is talking about humanities PhDs, but the point that one should mould their PhD experience according to what they want to do afterwards is true in general.
“The widely cited statistic that one in three women ages 35 to 39 will not be pregnant after a year of trying, for instance, is based on an article published in 2004 in the journal Human Reproduction. Rarely mentioned is the source of the data: French birth records from 1670 to 1830. The chance of remaining childless—30 percent—was also calculated based on historical populations.
In other words, millions of women are being told when to get pregnant based on statistics from a time before electricity, antibiotics, or fertility treatment. Most people assume these numbers are based on large, well-conducted studies of modern women, but they are not.”
– How Long Can You Wait To Have a Baby? by Jean Twenge.