“Most days, Nicholson Baker rises at 4 a.m. to write at his home in South Berwick, Maine. Leaving the lights off, he sets his laptop screen to black and the text to gray, so that the darkness is uninterrupted. After a couple of hours of writing in what he calls a dreamlike state, he goes back to bed, then rises at 8:30 to edit his work.”—How to write a great novel.
In an age of personal technological revolution, we all need a more explicit philosophy for adopting tools. Without this clarity, we run the risk of drowning in a sea of distracting apps and shiny web sites. My philosophy — to only adopt tools that solve a major pre-existing problem — has served me well. I use e-mail, for example, because the ability to communicate asynchronously with people around the world is quite important for my work. E-mail solves this problem. I don’t use Twitter, however, because the ability to have short, casual interactions with many people I don’t know well is not that important to my work.
On the face of it, that’s a reasonable position to take, especially for an academic whose job is to think deep thoughts and solve big problems. But it also strikes me as showing a lack of playfulness, which is important to cultivate, even if only in small doses. It might also miss out on entirely new ways to seed and develop academic ideas.
My experience with the germ of an idea shared as a Tweet at an academic conference that became a blog post, then a series of blog posts, and (eventually) a peer-reviewed article is just one example of the changing nature of scholarship. From where I sit, being a scholar now involves creating knowledge in ways that are more open, more fluid, and more easily read by wider audiences.
“The Amish have the undeserved reputation of being Luddites, of people who refuse to employ new technology. It’s well known the strictest of them don’t use electricity, or automobiles, but rather farm with manual tools and ride in a horse and buggy. In any debate about the merits of embracing new technology, the Amish stand out as offering an honorable alternative of refusal. Yet Amish lives are anything but anti-technological. In fact on my several visits with them, I have found them to be ingenious hackers and tinkers, the ultimate makers and do-it-yourselfers and surprisingly pro technology.
Ten years ago when I was editing Wired I sent Howard Rheingold to investigate the Amish take on cell phones. His report, published in January 1999, makes it clear that the Amish had not decided on cell phones yet. Ten years later they are still deciding, still trying it out. This is how the Amish determine whether technology works for them. Rather than employ the precautionary principle, which says, unless you can prove there is no harm, don’t use new technology, the Amish rely on the enthusiasm of Amish early adopters to try stuff out until they prove harm.”—Amish Hackers, Kevin Kelly
We all need a small budget of time for playing and evaluating tools with an open mind. We might discover uses we hadn’t thought of, and solve problems we didn’t know we had.
The word “polymath” teeters somewhere between Leonardo da Vinci and Stephen Fry. Embracing both one of history’s great intellects and a brainy actor, writer, director and TV personality, it is at once presumptuous and banal. Djerassi doesn’t want much to do with it. “Nowadays people that are called polymaths are dabblers—are dabblers in many different areas,” he says. “I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.”
“To me, promiscuity is a way of flitting around. Polygamy, serious polygamy, is where you have various marriages and each of them is important. And in the ideal polygamy I suspect there’s no number one wife and no number six wife. You have a deep connection with each person.”
“Weirdly, the less social authority a profession enjoys, the more restrictive the barriers to entry and the more rigid the process of producing new producers tend to become. You can become a lawyer in three years, an M.D. in four years, and an M.D.-Ph.D. in six years, but the median time to a doctoral degree in the humanities disciplines is nine years. And the more self-limiting the profession, the harder it is to acquire the credential and enter into practice, and the tighter the identification between the individual practitioner and the discipline.”—The Ph.D. Problem, Louis Menand
Software writers themselves don’t seem immune from the new de-skilling wave. His comments point to the essential tension that has always characterized technological de-skilling: the very real benefits of labor-saving technology come at the cost of a loss of human talent. The hard challenge is knowing where to draw the line—or just realizing that there is a line to be drawn.
The comment thread on his post has some great discussion. You should go read it. In it, Carr raises two more questions, which I will try to address here.
Climbing to higher levels of abstraction
re: “each advance opens up more new opportunities than it removes”
This is a common defense of automation in pretty much all fields. (See the quote from Alfred North Whitehead in my Atlantic piece.) And it’s a good defense, as it’s very often true. But there are also a couple of counterarguments:
At some point, as the capabilities of automation software advance, the software aid begins to take over essential tasks – sensing, analysis, diagnosis, judgment making – and the human shifts to more routine functions such as input and monitoring. In other words, with the automation of skilled work there’s a point at which there are no “higher-level tasks” for the human to climb to.
In the world of software there is always a higher level to climb to. But an individual software engineer has to have both foresight and will to do it.
For example, not so long ago, we were building regular CRUD web applications by hand every time. Soon common patterns emerged and were codified in web frameworks like Ruby on Rails and Django. The level of abstraction for coding web apps was lifted. The interval of interest is the period after web apps became common but before a large part of the boilerplate had been extracted into frameworks. If you were a web app developer during that period, the repetitive nature of coding web apps would have made you feel that a large part of your current job was ripe for automation.
As in every arena where automation takes over, in software engineering too the pattern is familiar: as an activity becomes widespread, the repetitive and “non-thinking” parts of it are recognized and extracted. In the case of industry, they are extracted into a machine or robot. In the case of software, they are extracted into a layer of abstraction or framework. There is an uncanny valley just before that happens when workers get the feeling that they’re just going through the motions.
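To make that extraction concrete, here is a minimal sketch of my own (illustrative names and schema, not the API of any real framework): the same record-saving task written by hand against the database, and then with the repetitive part pulled into a tiny generic helper, the same move that frameworks like Rails and Django perform at a much larger scale.

```python
# A toy illustration only: hand-rolled CRUD vs. the boilerplate extracted
# into a reusable layer. Table and helper names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

# By hand: every new model means rewriting the same INSERT/SELECT code.
conn.execute("INSERT INTO posts (title, body) VALUES (?, ?)", ("Hello", "First post"))
print(conn.execute("SELECT id, title FROM posts").fetchall())

# The repetitive part extracted into a generic helper: this is the lift
# in abstraction that frameworks perform at scale.
def create(table, **fields):
    cols, vals = zip(*fields.items())
    marks = ", ".join("?" for _ in vals)
    conn.execute(f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({marks})", vals)

def list_all(table):
    return conn.execute(f"SELECT * FROM {table}").fetchall()

create("posts", title="Second post", body="No hand-written SQL this time")
print(list_all("posts"))
```

The point is not the helper itself but the uncanny-valley moment just before it exists, when you are typing that INSERT statement for the fifteenth time.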
The alphas are those who extract common repetitiveness into automation. The betas are those who raise their skills to adding value on top of the newly automated tasks. Everyone else gets left behind.
Cutting across layers of abstraction
Lower-level tasks may be seen as mere drudge work by the experienced expert, but they can actually be essential to the development of rich expertise by a person learning the trade. Automation can, in other words, benefit the master, but harm the apprentice (as R. Carey suggests in the preceding comment). And in some cases even the master begins to experience skill loss by not practicing the “lower level” tasks. (This has been seen among veteran pilots depending on autopilot systems, for example.)
I think this is a real danger. Teaching new students high-level languages is great to get them started, but for true systems-building one needs to understand the whole stack.
The thing about computer science is that all abstractions are leaky. We can’t use our abstractions without understanding their underlying implementation and limits. This happens at every layer. An integer shows its implementation as a 32-bit word on the microprocessor when adding 1 to MAXINT wraps around to a negative number. A garbage collector in a VM frees us from manual memory management, but shows up when the app experiences unpredictable pauses. Even the CPU, the bedrock abstraction, leaks its implementation details when things like speculative execution and branch prediction affect performance. The examples are endless.
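To see the first of those leaks in action, here is a small sketch. Python’s own integers are arbitrary-precision and don’t overflow, so the snippet simulates a signed 32-bit word by masking, which is enough to show the wraparound at MAXINT that the hardware abstraction exhibits.

```python
# Simulating how a signed 32-bit integer behaves at its limits.
# (Python ints don't overflow, so we mask to 32 bits by hand.)
MAXINT = 2**31 - 1  # 2147483647, the largest signed 32-bit value

def as_int32(n):
    """Interpret an arbitrary integer as a signed 32-bit word (two's complement)."""
    n &= 0xFFFFFFFF                       # keep only the low 32 bits
    return n - 2**32 if n >= 2**31 else n

print(as_int32(MAXINT))       # 2147483647
print(as_int32(MAXINT + 1))   # -2147483648: the abstraction leaks
```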
The other thing is: as Bryan Cantrill points out, failing and pathological systems are the ones which truly teach us. They lay bare our tower of abstractions. One has to understand all the layers and their implementations to debug them.
So becoming comfortable and knowledgeable in just one layer is brittle, and that brittleness is exposed the moment one has to solve or debug a significant problem.
For the apprentice, this means learning and working with as many different layers of abstraction as possible. For the more experienced, it means not letting the day-to-day job in one layer make you rusty with the others. These are both challenges.
Peeking into the future of automating software engineering
So, more computing power and more sophisticated algorithms and more data allow you to build even more powerful computers and even more sophisticated algorithms. This is unlike physical industry. A tall building doesn’t help in building even taller buildings.
It is in this line of thinking—connecting the individual programmer to a datacenter’s worth of data, analysis and computational power—that I think the future of automating the process of writing software lies. My hope is that the combination of human and machine will be more potent than either alone, much like advanced chess.
“Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not been able to achieve a valid selective step. This work shows how bringing data sources from the creative domain and from hedonic psychophysics together with big data analytics techniques can overcome this shortcoming to yield a system that can produce novel and high-quality creative artifacts. Our data-driven approach is demonstrated through a computational creativity system for culinary recipes and menus we developed and deployed, which can operate either autonomously or semi-autonomously with human interaction.”—A Big Data Approach to Computational Creativity. Lav R. Varshney, Florian Pinel, Kush R. Varshney, Debarun Bhattacharjya, Angela Schoergendorfer, Yi-Min Chee. arXiv.
What keeps me coming back to the writing of Nicholas Carr is that he splits me in two, violently agreeing and disagreeing at the same time. And then I have to step back and tell myself, “don’t rush to agree or disagree, just try to understand what he’s saying.” He has rapidly become one of the most thoughtful and balanced critics of our current digital age.
Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.
How automation is creeping higher up the labor stack to so-called white-collar knowledge jobs, especially programming, has been on my mind for a while. Modern IDEs are getting “helpful” enough that at times I feel like an IDE operator rather than a programmer. They have support for advanced refactoring. Linters can now tell you about design issues and code smells. The behavior all these tools encourage is not “think deeply about your code and write it carefully”, but “just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.”
I’ve been on both sides. There are times when I fire up Eclipse just so that I can auto-complete my way to put out some code now. There are other times when I stare at the stark screen of a text editor (not IDE), the blinking cursor accosting me: “what the hell are you going to do now without all your fancy crutches?” But after the initial stab of fear and helplessness, I revel in the freedom of it, and savor the delicious rebellion of writing code without an IDE nagging me with suggestions and hints and corrections at every damn keypress like a 21st century Clippy from a horror movie.
Am I arguing for primitive tools? No. What I’ve described in the previous paragraph are my personal feelings while writing code. The tools (at least, the good ones) encode the knowledge and hard-won lessons of an entire army of programmers. They often point me to issues I’m blind to while writing code because I’m in such a rush to just make the damn thing work. In aggregate, they lead to a cleaner codebase.
So what am I saying then? I’m saying that we should let the tools help us without becoming crutches, to let us write sharp code without dulling our minds. That sounds paradoxical. It is a subtle mental stance one takes towards one’s work, tools, and output.
The most guru programmers I’ve seen have this paradoxical zen-like stance: they use the tools to the full extent possible (because, well, that’s what they’re there for), but understand enough about what those tools do and how they work to be perfectly productive even without them. It reminds me of the concept of moh, and non-attachment, while being an active participant.
“Isn’t the essence of the Apple product that you achieve coolness simply by virtue of owning it? It doesn’t even matter what you’re creating on your Mac Air. Simply using a Mac Air, experiencing the elegant design of its hardware and software, is a pleasure in itself, like walking down a street in Paris. Whereas, when you’re working on some clunky, utilitarian PC, the only thing to enjoy is the quality of your work itself. As Kraus says of Germanic life, the PC “sobers” what you’re doing; it allows you to see it unadorned. This was especially true in the years of DOS operating systems and early Windows.”—What’s wrong with the modern world, Jonathan Franzen.
“The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam’s conclusions through manual experiments. To describe Adam’s research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.”—The Automation of Science, King et al, Science. 2009 Apr 3;324(5923):85-9
“Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.”—Schmidt M., Lipson H. (2009) “Distilling Free-Form Natural Laws from Experimental Data,” Science, Vol. 324, no. 5923, pp. 81-85.
In the thousands of years of human history that predated our current moment of instantaneous communication, the very fabric of human understanding was woven to some extent out of delay, belatedness, waiting. … we need to give students an opportunity to understand the formative values of time and delay. The teaching of history has long been understood as teaching students to imagine other times; now, it also requires that they understand different temporalities. So time is not just a negative space, a passive intermission to be overcome. It is a productive or formative force in itself.
…I want to conclude with some thoughts about teaching patience as a strategy. The deliberate engagement of delay should itself be a primary skill that we teach to students. It’s a very old idea that patience leads to skill, of course—but it seems urgent now that we go further than this and think about patience itself as the skill to be learned. Granted—patience might be a pretty hard sell as an educational deliverable. It sounds nostalgic and gratuitously traditional. But I would argue that as the shape of time has changed around it, the meaning of patience today has reversed itself from its original connotations. The virtue of patience was originally associated with forbearance or sufferance. It was about conforming oneself to the need to wait for things. But now that, generally, one need not wait for things, patience becomes an active and positive cognitive state. Where patience once indicated a lack of control, now it is a form of control over the tempo of contemporary life that otherwise controls us. Patience no longer connotes disempowerment—perhaps now patience is power.
I admire the writing of Malcolm Gladwell because he picks eclectic topics to delve into, peppers his narrative with memorable anecdotes, and weaves it with just enough pop science to make me feel like I’m learning something new, without making it heavy enough to put down the book.
And when it’s finished, I feel somewhat tricked and uncomfortable. I can’t find one single glaring problem, but the final feeling I’m left with is that there just have to be a ton of simplifications and glossed-over complexities.
This is a problem with pop sci (or pop sociology) in general, but particularly with the specific strain as practiced by writers who themselves do not have a technical background.
In a recent Longform podcast (transcript) Gladwell says that his central concern is the richness of the story he is telling. He gushes with admiration for Michael Lewis because he can tell a story without citing papers.
My great hero as a writer is Michael Lewis. I just think Michael Lewis, believe it or not, is the most underrated writer of my generation. I think he is the one who will be read 50 years from now. And I think what he does is so extraordinary, from a kind of degree of difficulty standpoint. The Big Short is a gripping book, fascinating, utterly gripping book about derivatives. It blows me away how insanely hard that book was to do, and it’s brilliant. The Blind Side, I think, it might be the most perfect book I’ve read in 25 years. I don’t think there’s a single word in that that I would change. I just think it has everything. But he uses no science, right? Very little.
It’s all story. But he does more work in his stories, makes much more profound points than I do by dragging in all these sociologists and psychologists. He’s proved to me that, if you can tell a story properly, you don’t need this kind of scaffolding. You can just tell the story. And so, I’ve been trying, not entirely successfully, but trying to move in that direction over the last couple books.
That’s the classic narrative fallacy—seeing a connecting narrative where none exists.
Instead of solutions for problems, programmes for solutions… for no problem is there an absolute solution. Reason: the possibilities cannot be delimited absolutely. There is always a group of solutions, one of which is best under certain conditions.
For example, this logo for a music shop adapts to various paper sizes and layouts, while conveying an identity.
He also talks about how to think of design as choosing points in a space whose axes are things like color, layout, type, size, and contrast. That was the inspiration for my earlier post on automating web design.
Russ Roberts, the host, asked Bhagwati, “…What changed in 1991?”, to which he replied:
…the Prime Minister himself … said, look, I have lots of my own family abroad, sons and daughters and nephews and nieces, and this is true virtually of everybody in the upper classes in India—they have family abroad. They keep coming back and saying: How come we have such idiotic policies? With such enormous restrictions on diversification, on capacity expansion, etc., etc., which are driving us into the ground? And so, the diaspora effect is what I call it. A lot of young people coming back said: You really cannot have this. Because India is really losing rapidly its position in the world economy. Because if you are not performing well, nobody is going to pay attention to you. And the second thing I think was that increasing as people went … abroad and they would find that nobody took India seriously. So the Indian politicians and bureaucrats were increasingly running into situations where they were simply disregarded and looked down upon.
That might be a factor, but it strikes me as a weak one. The much larger factor was that the Gulf War brought cable TV to India, and with it, the culture and mores of the West. When you go from decades of one state-sponsored TV channel that broadcast eight hours a day to dozens that run 24 hours a day, the impact is enough to move a nation. If you grew up in India during the 90s, there is absolutely no question about the impact of cable TV. Like I wrote before:
And when we were not watching the military-industrial complex peacocking, we were soaking in its cultural counterpart. Baywatch. MTV. Moonlighting. We didn’t even have to filter the crap. Only the hits were re-broadcast. Suddenly our culture seemed parochial. Suddenly our things seemed archaic, clunky.
I’ve spent an inordinate amount of time fiddling with the theme and look of this blog, and so I decided to see what happens if I try to automate my experimentation.
You can play with the page here (you might have to allow the browser to run the embedded script). Every reload will give you a completely new, randomly generated look.
Yes, there are a few horrible-looking duds, but a surprising fraction of the generated styles don’t look half bad, a few even look very good.
One can imagine stretching this in many interesting directions. For example, you could perform sentiment/mood analysis on the body text, and match it with a typeface that has similar characteristics. Of course, there are also the million other CSS variables that I didn’t change.
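For the curious, here is a rough sketch of the kind of randomization involved. The actual page uses a script running in the browser and touches the live stylesheet; this Python stand-in of mine just prints one random rule, and the font stacks, palettes, and ranges are illustrative choices, not the ones the page really uses.

```python
# A toy generator of random blog "looks": pick random values for a few CSS
# properties and emit a stylesheet rule. All values here are illustrative.
import random

FONT_STACKS = [
    "Georgia, serif",
    "Palatino, 'Palatino Linotype', serif",
    "Helvetica, Arial, sans-serif",
]

def random_color():
    return "#{:06x}".format(random.randint(0, 0xFFFFFF))

def random_look():
    return "\n".join([
        "body {",
        f"  font-family: {random.choice(FONT_STACKS)};",
        f"  font-size: {random.choice([16, 17, 18, 19])}px;",
        f"  line-height: {round(random.uniform(1.3, 1.7), 2)};",
        f"  max-width: {random.choice([32, 36, 40])}em;",
        f"  color: {random_color()};",
        f"  background-color: {random_color()};",
        "}",
    ])

print(random_look())  # every run: a completely new, randomly generated look
```

The sentiment-matched typeface idea would slot in as a smarter replacement for that random.choice over font stacks.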
“The objects in use by the new generation suffer from the fatal compromise between a supposedly “artistic” intention and the dictates of technical manufacture; from a feeble turning back to historical parallels; from the conflict between essence and appearance. Instead of recognizing and designing for the laws of machine production, the previous generation contented itself with trying anxiously to follow a tradition that was in any case only imaginary. Before them stand the works of today, untainted by the past, primary shapes which identify the aspect of our time. Car Aeroplane Telephone Wireless Factory Neon-advertising New York! These objects, designed without reference to the aesthetics of the past have been created by a new kind of man: the engineer!”—The paragraph above could stand in for an opening salvo against skeuomorphism, or a plea for a new aesthetic, today. But it was published in 1928, written by Jan Tschichold in his landmark book, “The New Typography.”
Inspired by Alex Payne’s rules. This is a distillation of and update to a much longer description of my setup. This is from the point of view of my personal, not professional, usage. I want to frame this in terms of general principles, rather than specific prescriptive implementations, which will be different for everyone.
We’re in a transitory period now, which is why there are still people who like to use machines with tons of local storage. That is about to become irrelevant. The datacenter is your compute and storage. The device in your hand (be it a phone, a tablet, or a laptop) is your modern “dumb terminal” with pretty graphics that acts as a conduit to all that data. Local, on-device storage is just a cache.
Treat service providers like banks. When you store your money in a bank rather than under your mattress, you are trusting them, while buying yourself the convenience and flexibility of not lugging around sacks of cash. But there are also a number of safeguards and incentives in place for the banks to keep your money safe and liquid. It is the same with cloud services holding your data. Also, they know this stuff much better than you. Corollary: run from providers that make it hard to get your data out.
Data longevity is a hard problem, but you can improve your chances by using plain text as much as possible (preferably in a long-term-friendly encoding like ASCII or UTF-8), and the most-used file formats for images (I think that’s JPEG).
Over the long run, it is your data that matters, not the software. That’s why this goes last. That wedding photograph from 1999 is more important than the image editor you used to touch it up. Pick one for each of your common activities (writing, image management, browsing etc.) and stick with it. Don’t listen to the lifehackers and productivity gurus—moving to the latest shiny editor will not make your prose better.
Corollary of the above: never use software that locks you into proprietary formats, or if you must, make sure to export your files out to a more portable format. The chance that you will be able to run the same hardware/software/version snowflake in a decade to decode your data is close to zero.
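As a small illustration of the plain-text principle above (the file name and contents are made up), an explicit encoding costs one argument today and removes a whole class of guesswork decades from now:

```python
# Write and read notes as plain UTF-8 text: no proprietary container,
# nothing to decode beyond the bytes themselves. Paths are illustrative.
from pathlib import Path

note = Path("journal-2014-03-01.txt")
note.write_text("Met R. at the café. Remember: plain text survives.\n", encoding="utf-8")
print(note.read_text(encoding="utf-8"))
```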
The Kindle app is even worse. While its PMN Caecilia typeface is pretty readable, the reader has no option to change it. And Kindle does not support embedded fonts at all.
Embedded fonts can make an iBook look terrific. I’ve shown sample pages in earlier posts on this blog.
I’m beginning to develop an eBooks Design Manifesto. Here are my first three “demands”:
eBook readers should give designers and publishers complete control over what fonts to use in their books, and support the full range of typefaces available today by enabling font embedding.
Since typefaces are an integral and vital factor in design, the designer should have the ability to disable inbuilt font choices.
The font industry should make using fonts in eBooks identical to using them in a printed book: Provided you have purchased a legitimate copy of the fonts, you should be able to use them to create as many millions of copies of as many books as you wish.
We developers should revel in what we have. We get paid more than we’ll admit to our friends who work in other fields (even while we secretly believe we deserve more). Our jobs are consistently rated the best possible job based on work environment, stress, and salary. We are served by professional organizations online and offline, mostly free. We have no need to unionize because we are not exploited. There are jobs for us whether or not we keep up with the latest technology (COBOL developers in NYC command the same salary as Rubyists). There’s no external force destroying our industry and livelihood — in fact just the opposite, software is eating the world, and we’re the teeth.
Revel while you can. Even in 1990 no journalist thought the titans of their industry would be shrinking, bankrupt, sold for pennies on the dollar, demolished by free classified ads, the rise of laymen bloggers, and a world that (wrongly?) refuses to value the tenets of their profession.
What will be next to fall? When will it be our turn to be made redundant or, at least, unvalued?
Tim Hopper recently posted a series of interviews with folks from both academia and industry, posing this question to them. (I’ve written about the topic myself.) If you are pondering the question, you could do worse than read the entire series. 1, 2, 3, 4, 5, 6, 7.
The opening question to everyone:
A 22-year-old college student has been accepted to a Ph.D. program in a technical field. He’s academically talented, and he’s always enjoyed school and his subject matter. His acceptance is accompanied by 5 years of guaranteed funding. He doesn’t have any job offers but suspects he could get a decent job as a software developer. He’s not sure what to do. What advice would you give him, or what questions might you suggest he ask himself as he goes about making the decision?
Kudos to Tim for creating a great resource on this topic.
“In the time it took you to read the last paragraph some 48-year old was laid off by The Village Voice, and they’re smarter than you and have lived ten times what you’ve lived and can write so much better than you I actually almost feel bad for you, and now they’re on the same job market trying to scramble for the same shitty 10-cents-a-word gig recapping a show about couponing for the AV Club in the hopes that they can bang out some soul-destroying tedious bullshit so that a pack of talentless losers in the comments can pick their words apart from the safety of their beige plastic cubicles as they try to distract themselves with pop culture for long enough to keep their all-devouring self-hatred at bay.”—
I want to start by saying that I greatly admire Berkun’s work in general, and have read almost all his books (I think everything but Confessions of a Public Speaker). That said, I don’t think YWP is his best to date.
YWP is an excellent account of, well, Berkun’s time at Automattic, and its culture and techniques viewed through the eyes of a “traditional” project manager. There is a disconnect between the way the book was pitched and what it actually contains. It was pitched partly as a book about remote work being the future of work, and partly as a book about how to make work more fun. It is tangentially about both those things, but the primary thrust of the book is the personal narrative of the author working and shipping at Automattic.
What was missing was any discussion of how to take Automattic’s specific culture and tools and ways of working, and extrapolate them to other settings, such as a new company, or an existing traditional one. (He later wrote about this on his blog.)
Also, for a book that wants to sell the idea of a distributed company, a significant fraction of it ends up detailing the meet-ups where teammates physically met and worked together for a few days. All team-members would come to a given city for a week, camp out in a large hotel room or apartment, and basically code, eat and drink (and maybe sleep a little) the whole time. That might be exciting and fun, but it sounds like a horrible way to work (even for one week) for someone older or with a family.
So it was a great account of how Automattic has had success with the distributed team model, but left me wanting much more. Berkun particularly shines when talking about team dynamics, leadership and project management, and I did learn quite a few new ideas and techniques from that.
Some interesting questions that I think a book on distributed teams and remote working should address:
how can an existing company either accommodate remote work or make the transition to being distributed? This is the big one, because existing companies dwarf new ones. If one wants to change the way work works, one has to tackle this thorny problem.
how can distributed work scale? By now it is not controversial to claim that distributed teams and remote work is effective for small to medium companies. But how can we scale it to larger teams and companies, going from hundreds to thousands of employees?
learn from the Linux kernel. To me, the most successful large distributed team ever is the one that builds the Linux kernel. The artifact is complex and is now the substrate for the modern internet, and the team building it has thousands of contributors. And it has been delivering new releases and features and bugfixes steadily for more than a decade. And it has been doing this with almost no overarching oversight, other than Linus and the subsystem lieutenants weighing in on technical matters. The amount of physical proximity with other kernel hackers is small compared even to the case of Automattic—most kernel hackers will meet their fellow hackers once a year at a Linux conference. To those living on LKML and slinging patches around, this is a completely natural way to work and has become second nature. To anyone outside, it seems like an impossible feat, especially when it seems to be so hard to ship any software even with physically co-located teams and tons of oversight. Anyone who wants to understand distributed teams has to take a long, hard look at the Linux kernel community and try and learn from what they are doing.
But all the same, I do look forward to Berkun’s future work.
“My experience with the germ of an idea shared as a Tweet at an academic conference that became a blog post, then a series of blog posts, and (eventually) a peer-reviewed article is just one example of the changing nature of scholarship. From where I sit, being a scholar now involves creating knowledge in ways that are more open, more fluid, and more easily read by wider audiences.”—From Tweet to Blog Post to Peer-Reviewed Article: How to be a Scholar Now, by Jessie Daniels
The year is 2045, and my grandchildren (as yet unborn) are exploring the attic of my house (as yet unbought). They find a letter dated 1995 and a CD-ROM (compact disk). The letter claims that the disk contains a document that provides the key to obtaining my fortune (as yet unearned). My grandchildren are understandably excited, but they have never seen a CD before—except in old movies—and even if they can somehow find a suitable disk drive, how will they run the software necessary to interpret the information on the disk? How can they read my obsolete digital document?
From there the paper goes down a twisty maze of reasoning to lay out exactly how difficult it is to preserve digital documents for the long term, i.e. hundreds or even thousands of years. We have stone and paper documents dating back thousands of years, but it is extremely unlikely that today’s digital documents will make it that far into the future.
There are problems at every level of abstraction. The physical media on which the bits are stored will decay. The documents have funky encodings and metadata that can only be parsed and displayed by the programs that were used to author them. Those programs will only run on certain OSs. Those OSs will only run on certain hardware. If each of those hurdles is overcome, we have some hope of recovering the old document.
That’s like raising an already small probability to the fifth power.
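To put illustrative numbers on that (my own, not the paper’s): even if each layer independently survives with fairly good odds, the compounding is brutal.

```python
# Illustrative numbers only, not from the paper: suppose each of the five
# hurdles (media, encoding, authoring program, OS, hardware) is independently
# survivable with probability 0.8.
p_per_layer = 0.8
layers = 5
print(p_per_layer ** layers)  # ~0.33: five "pretty good" odds compound into likely loss
```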
Like I said, it’s depressing.
And the biggest issue is that digital preservation is an active, ongoing task. I could print something on archival paper, store it in a safe deposit box, and be reasonably sure it will be readable in a century. Trying to do the same for a digital document would require fiddling (copying to newer media, maybe changing formats and encodings) every 2-5 years.
Maybe the Library of Congress will go to all this trouble to preserve digital documents it deems significant to the record of our civilization and times. What hope do I as an individual have to carry forward my digital life to the point where I could hand it over to a grown child, or even grandchildren?
On the table where I write this, I have a black and white photograph of my mother when she was about twelve years old, with my then young grandparents. Will I be able to give my twenty-year old child photographs of the vacation we went on when he was a toddler?
I just finished reading this paper1 where the author embeds himself with a group of swimmers and tries to understand how they move up (or not) through the levels.
The overall message has many similarities to Ericsson et al’s famous paper about deliberate practice (of which I wrote a summary), in that “innate talent” as something that one is born with is a useless concept used to mystify the systematic, methodical and disciplined practice of otherwise mundane habits. Hence the title of the paper.
The author concludes that:
The levels of achievement (juniors, seniors, nationals, Olympics) are discrete rather than continuous, and the practices and attitudes within each are different enough to make them disjoint parallel worlds.
The jump from one level to the next one comes not from quantitative improvements (i.e. doing more of the same thing, or doing it faster), but from qualitative shifts such as new and different techniques, attitudes and practices.
Other than that, there is no secret to achieving excellence, and indeed, the primary psychological barrier to achieving it is to get over the sheer mundanity of it.
“Talent” is indistinguishable from its effects. We only call someone “talented” if their achievements already make it obvious. The cult of talent is actually harmful because it obscures the true, achievable (but mundane) path to excellence.
The Mundanity of Excellence: An Ethnographic Report on Stratification and Olympic Swimmers. Daniel F. Chambliss. Sociological Theory, Vol. 7, No. 1 (Spring, 1989), pp. 70-86. (PDF) ↩
As director of DARPA in the 1970s, George H. Heilmeier developed a set of questions that he expected every proposal for a new research program to answer. He referred to them as the “Heilmeier Catechism”. These questions still survive at DARPA and provide high-level guidance for what information a proposal should provide. It’s important to answer these questions for any individual research project, both for yourself and for communicating to others what you hope to accomplish. These questions are:
What are you trying to do? Articulate your objectives using absolutely no jargon. What is the problem? Why is it hard?
How is it done today, and what are the limits of current practice?
What’s new in your approach and why do you think it will be successful?
If you’re successful, what difference will it make? What impact will success have? How will it be measured?
What are the risks and the payoffs?
How much will it cost?
How long will it take?
What are the midterm and final “exams” to check for success? How will progress be measured?
I feel all cheesy and content-market-ish just writing that headline, but hear me out.
I read a lot, because I don’t believe minimalism is a great intellectual strategy. I have thoughts and opinions about what I’ve read, and sometimes about other stuff that springs unprompted in my head. I want to post those on the great wide internet for all to see, because, well, that’s what it’s for, isn’t it?
I also want to keep a high bar for posts on this blog. That means I polish each post (as much as one can polish a blog post) and try to be coherent and express a complete thought. That takes time, and often the latency between the initial spark and hitting “publish” is weeks or months.
However, in my own narcissistic way I do believe all that raw material (sometimes just a list of links, sometimes half-baked thoughts) might be valuable and interesting in and of itself.
And that’s where the newsletter comes in. I want it to be a more intimate channel to share thoughts that are not yet ready for the harsh glare of the public web, and maybe even start a conversation and learn something.
If you subscribe, you can expect around 1-2 emails a week, on much the same set of topics that I write about on this blog: programming, technology, modern work, and occasional side-trips into culture and criticism.
“The most potentially interesting, challenging, and profound change implied by the ubiquitous computing era is a focus on calm. If computers are everywhere they better stay out of the way, and that means designing them so that the people being shared by the computers remain serene and in control. Calmness is a new challenge that UC brings to computing. When computers are used behind closed doors by experts, calmness is relevant to only a few. Computers for personal use have focused on the excitement of interaction. But when computers are all around, so that we want to compute while doing something else and have more time to be more fully human, we must radically rethink the goals, context and technology of the computer and all the other technology crowding into our lives. Calmness is a fundamental challenge for all technological design of the next fifty years.”—The coming age of Calm Technology, Mark Weiser and John Seely Brown, Xerox PARC, October 5, 1996
In no particular order, brief summaries of some papers I’ve been reading lately on the topic of programming and distributed/remote teams. Note that this is not opinion, but backed up by empirical data.
I’d love to hear of any other good references on the topic.
“Conventional wisdom … holds that distributed software development is riskier and more challenging than collocated development… [We compare] the post-release failures of components that were developed in a distributed fashion with those that were developed by collocated teams. We found a negligible difference in failures. This difference becomes even less significant when controlling for the number of developers working on a binary. Furthermore, we also found that component characteristics (such as code churn, complexity, dependency information, and test code coverage) differ very little between distributed and collocated components.” The paper goes on to detail some of the practices that contributed to making this difference small. Among other things, “developers made heavy use of synchronous communication daily. Employees took on the responsibility of staying at work late or arriving early for a status conference call on a rotating basis, changing the site that needed to keep odd hours every week. Keeping in close and frequent contact increases the level of awareness and the feeling of ‘teamness’. This also helps to convey status and resolve issues quickly before they escalate. In addition, engineers also regularly traveled between remote sites during development for important meetings.” 1
A decade ago, distance mattered, a lot. In spite of that, since then, distributed work has only grown, fueled by both economic and other considerations (e.g. the talent may not be where you want them to be). A range of technologies has made this easier: shared documents and calendars, instant messaging, video conferencing over the net; but also a realization that distributed, virtual leadership is a distinct and valuable skill.2
Distributed development was “considered harmful”, but recent rechecking of those results has revealed that “the effect size of the differences seen in the collocated and distributed software was so small that it need not concern industrial practitioners. Our conclusion is that … distributed development is not considered harmful.” 3
Among the behaviors that have the biggest impact on effectively working remotely: communicating changes and status to downstream dependencies, clear communication of who owns what, responsiveness to emails and IMs, and co-operation (which means actually helping and unblocking someone). 4
“The truth is, it’s pretty much impossible to write anything that matters safely. Art, even the stuff produced purely for entertainment purposes, is meant to push boundaries. If it doesn’t, it’s only reinforcing the same old status quo, which is a problem in and of itself. But just because it’s pretty much impossible not to step on toes when you’re herding sacred cows doesn’t mean we should just let them be. These are powerful stories, and they need to be told. The key, though, is to always be sure to allow these topics the room and depth they deserve within your narrative, and, most important of all, to be sure you actually know what you’re trying to say and feel proud standing behind it.”—How to Write the Hard Stuff, Rachel Aaron
“Dark Side was not an album of hits, though. It was a concept album, something to be listened to from start to finish with the only pause being the twenty seconds it took you to flip from side one to two. While other people saw its meaning in other things – such as syncing it up to The Wizard of Oz – the true appeal and wonderment of Dark Side of the Moon was in the entirety of the story within. No, it’s not a story in the truest sense, not like The Wall, but there is a deeper story within the songs, one that each listener interpreted differently or related to their own lives and mindsets in different ways.”—40 Years Of Pink Floyd’s Dark Side Of The Moon, by Michele Catalano
You must have seen the warning a thousand times: Too few young people study scientific or technical subjects, businesses can’t find enough workers in those fields, and the country’s competitive edge is threatened.
It pretty much doesn’t matter what country you’re talking about—the United States is facing this crisis, as is Japan, the United Kingdom, Australia, China, Brazil, South Africa, Singapore, India…the list goes on. In many of these countries, the predicted shortfall of STEM (short for science, technology, engineering, and mathematics) workers is supposed to number in the hundreds of thousands or even the millions…
And yet, alongside such dire projections, you’ll also find reports suggesting just the opposite—that there are more STEM workers than suitable jobs. One study found, for example, that wages for U.S. workers in computer and math fields have largely stagnated since 2000. Even as the Great Recession slowly recedes, STEM workers at every stage of the career pipeline, from freshly minted grads to mid- and late-career Ph.D.s, still struggle to find employment as many companies, including Boeing, IBM, and Symantec, continue to lay off thousands of STEM workers.
The flaw is in the assumption that every STEM degree holder is a qualified candidate for a STEM job. Carried through to its logical conclusion, this implies that simply presenting a STEM degree should be enough to get one placed in a STEM job, which is absurd.
This becomes readily apparent if one has ever been on the evaluation side — grading students, or conducting interviews for jobs. I’ve been a TA for a number of CS classes while in grad school, and I’ve conducted many interviews for software engineer positions. Just from my narrow anecdotal window, it is amazing how many CS students just want to figure out the bare minimum to pass the class; and how many grads do not have a decent grasp of elementary algorithms and data structures, and are not comfortable with code.
When CEOs of tech companies complain that there are not enough STEM workers to fill their open positions, what they’re really saying is that it is very hard to find the right calibre candidate, the glut of credentialed STEM graduates notwithstanding.
“This past weekend I had an amazing experience. I went to the 1st OpenSimulator Community Conference, which I helped organize (besides helping write the server that made it possible, too). The term “went” is not quite right, but it’s not wrong either. Physically, I didn’t go anywhere other than my home office: the conference was held in a virtual environment. Mentally, however, I went to this conference, almost as intensely as any other conference that I’ve been before. I was an organizer, a speaker and an attendee, and I felt myself in those roles just as strongly as I did in other conferences I helped organize. I stressed with technical and organizational glitches, focused when giving my talks, and was inspired by some of the talks I attended. I enjoyed visiting the sponsor displays and finding out more about them. I “came back” from the conference tired, but energized, feeling that the people who develop and use OpenSimulator have just gone through a transformational shift — from a bunch of unrelated individuals to an actual community. Of the dozens of conferences and workshops I have attended in my life, I had this feeling exactly once before, at a workshop we held at PARC back in 1996.”—The Future of Conferences, Crista Lopes
I am an unabashed fan of the American National Park Service, not just because of the precious beauty of the parks themselves, but because of the rangers who work there.
I’ve lost count of how many national parks and monuments I’ve seen over more than a decade, but at each and every one of them, without fail, the rangers were just flat out brimming with pride and joy.
They wanted me to see, truly see, not just the visual scenic beauty, but the deeper connections in the ecosystem, and the geological history that made the place what it is.
Some shared personal stories about how and why they became rangers.
Some quoted John Muir and Thoreau extensively.
Some spoke of the simple grandeur and solitude of living miles from civilization for months.
All of them spoke of the delicate preciousness of their park, looking over it with loving watchfulness.
All of them were humble, emphasizing how much they don’t understand about nature, and how little they do.
Surely they have challenges and frustrations in their line of work. It’s just that I’ve never seen it as a visitor.
There is so much ink spilt on how to be happy at work. But what I see in these rangers makes the word “happy” seem flat. It must be what one feels when one is tasked with looking after something precious and lovely, yet delicate and needing protection.
Clearly there has been a renaissance lately in long-form publishing on the web. Just take a look at longform.org every morning.
But even though the web is the delivery mechanism for long-form pieces, I almost never actually read them in the browser. It just doesn’t feel right. First of all, I’m not in the frame of mind to read long-form when browsing. Secondly, no matter how clean and tasteful the page design, I prefer something even more stripped down, like Readability.
When I come by a long-form piece I want to read, I add it to my Pocket queue, and usually read it on my tablet, laid back, without any other distractions on the screen.
So the Web has become an excellent delivery mechanism for long-form content, but its actual enjoyment happens elsewhere.
tl;dr: whether you know it or not, a non-trivial fraction of your work is remote. So, build remote working skills.
Who is a remote worker? What qualifies a piece of work as remote, as opposed to situated? The answer is more subtle than it appears at first glance. The first criterion that comes to mind is physical proximity to collaborators. But access drops off rapidly with distance: it works well if you can turn your chair around and talk to a colleague, but if they’re on another floor or in another building, then they might as well be in another city. Add to that the cult of “respect the headphones”, especially for programmers. More often than not programmers in the same room will have giant ear muffler headphones on and communicate with each other via chat and email.
No system is an island, and modern code will typically build upon a number of other services or libraries. If you’re writing code against a number of other services and libraries, the chances that the people you need to collaborate with for all of them are physically close to you are infinitesimally small.
Hence, a good fraction of your work is likely “remote”. So whether you’re a remote worker or not, it will help you if you build skills for effectively working remotely.
The most important thing I learned, though, was that there is no such thing as “standard English” with a capital E. Instead there are many “englishes” with a lower case E. There is the english of the Caribbean and the english of the southern United States and the english of Oxbridge and the english rappers use in their music. Traditionally we’re taught that one of these is better than the rest, but in this class I learned that that’s an arbitrary distinction and not necessarily the case.
This piece by Redfin CEO Glenn Kelman did the rounds recently. He thinks engineers in the Valley are getting spoilt by high salaries and lavish perks. They’re too comfortable, too soft.
Before most computer science graduates ever walk across a stage to get their diplomas, they’re set for life. This is especially true in 2013, which will be the first year in which most companies pay top engineering graduates in Silicon Valley $100,000 or more per year in salary.
For the companies, for Redfin, the engineers are worth every penny. And for the engineers, the money is nice to have. But how many engineers hired from Stanford or Berkeley in the past year will ever feel the savage need to make something happen, to bust out of the matrix, to push the limits of their abilities?
The problem is that the young engineers earning that much become well-fed farm animals at the very moment in their lives when they should be running like wild horses.
The piece plays into several stereotypes:
The Startup is the apogee of all possible corporate forms.
The Startup is the only place where an engineer can do hard, challenging work.
Above benefits are great enough that one should take on significant risk, financial and personal.
If you are young, you have nothing to lose, so roll the dice with a (preferably my) startup!
This is the Overarching Valley Myth. But reality is much more varied, much more complex, and not as disappointing.
If hard technical problems in the real world are what you seek—and that is indeed what you should seek if you’re a fresh CS grad—then a startup is possibly the worst place for you. Startups are about product definition and finding a business model. They are only tangentially about technology. Given all the risk that a startup takes on the business side, the last thing it needs is to take on more risk on the technology side. Corollary: startups prefer middle-of-the-road, mature technology.
On the other hand, large tech behemoths are good places to dwell on hard technical problems. They operate at a scale where such problems actually manifest themselves. They also have enough resources that they can comfortably push the envelope on new and risky fundamental technologies without worrying about whether they have enough money to make payroll next month. That’s why they make a much better training ground than startups.
For example, there is no doubt that you will learn much more about planet-scale distributed systems at places like Amazon, Facebook, Twitter and Google, than any startup under the Sun.
Kelman is also downplaying the significant risk one takes on with an early stage startup. Financially, a big chunk of compensation at a startup is equity, which of course will be worthless if the startup fails. The flip side is that you could hit the jackpot and become wildly rich. But you know the odds. Also know that if successful, rewards go disproportionately to investors and founders. So why not make a startup instead of joining one?
I’ve seen firsthand the damage that startups can do to relationships. I’ve watched marriages and friendships fall apart, seen children and partners pushed aside, and failed those in my life in all kinds of ways when work came to the fore. I’ve listened as people who are the very picture of startup success – visible in the press and social media, headlining conferences, forever founding and exiting – have confided their utter loneliness despite being seemingly at the social center of the entrepreneurial community.
There are two sides to every story. The truth is probably somewhere in the middle.
Kelman is also reinforcing the Valley age stereotype, what with all the imagery and metaphors around young, wild horses. Yes, when you’re fresh out of school, with no family and no mortgage, you have nothing to lose. But I think he’s completely missing the other side of the age/risk curve, which is engineers who have spent a decade or so at a large company, and built a comfortable financial cushion. They’ve built up an appetite for risk by then, and they’re experienced. They won’t run around like wild horses. They don’t need to. They know exactly how to build your system right the very first time, with a minimum of wildness.
Last but not least, Kelman has a massive bias: he’s complaining because he runs a startup, and startups are finding it hard to hire and retain engineers because all these other large companies in the Valley have soaked them up with pumped-up salaries and cushy perks.
Buried deep in Kelman’s piece is a sentiment I agree with: don’t get comfortable. Keep stretching yourself. Being showered with perks doesn’t mean you cannot keep seeking out hard problems. Even while burping from your gourmet meal during a mid-afternoon massage.
[Full Disclosure of Personal Bias: I happen to be happily ensconced in a large tech company. That’s partly why Kelman’s piece pricked me. Whether it has made me comfortable and soft time will tell.]
“The Pew Research Center released a study called “Modern Parenthood” in March, well after either Sandberg or Slaughter could refer to it, which is unfortunate. When it comes to work-life conflict, the study found, about half of all working parents say it is difficult to balance career and family responsibilities, with “no significant gap in attitudes between mothers and fathers.” Perhaps this is not surprising, given that mothers’ and fathers’ roles have converged dramatically in the past half century. Since 1965, Pew reports, fathers have tripled the time they spend with their children. Fathers’ attitudes about mothers’ roles are changing quickly, too: In 2009, 54 percent of men with kids younger than 17 believed that young children should have a mother who didn’t work. Just four years later, that number has dropped to 37 percent. Finally, although stay-at-home dads are still very much in the minority, their numbers have doubled in just a decade’s time.”—Home Economics: The Link Between Work-Life Balance and Income Equality, Stephen Marche
“Rock has a certain joy to it, a celebratory feel beneath the heavy bass and driving beats. Rock always feels like you’re holding something with a pulse in your hands and forever feels like it needs to feed off your enjoyment of it in order to stay alive. Rock is not necessarily an emotional music, but it is an emotional experience.
Rock is not things. Rock is not afraid. Rock is not radio friendly or hit driven. Rock is not what the band plays during the last dance at your sister’s wedding. Rock is not complacent, cooperative or content to just be. It exists for a reason and that reason is not to sell you a t-shirt. It exists to make you feel, and whether that feeling comes out in a fist pump, a head bang or power drumming on the steering wheel, it’s there beneath every sweeping note, every tortured lyric, every awkward time change.”—Rock isn’t dead, by Michele Catalano