The word “polymath” teeters somewhere between Leonardo da Vinci and Stephen Fry. Embracing both one of history’s great intellects and a brainy actor, writer, director and TV personality, it is at once presumptuous and banal. Djerassi doesn’t want much to do with it. “Nowadays people that are called polymaths are dabblers—are dabblers in many different areas,” he says. “I aspire to be an intellectual polygamist. And I deliberately use that metaphor to provoke with its sexual allusion and to point out the real difference to me between polygamy and promiscuity.”
“To me, promiscuity is a way of flitting around. Polygamy, serious polygamy, is where you have various marriages and each of them is important. And in the ideal polygamy I suspect there’s no number one wife and no number six wife. You have a deep connection with each person.”
“Weirdly, the less social authority a profession enjoys, the more restrictive the barriers to entry and the more rigid the process of producing new producers tend to become. You can become a lawyer in three years, an M.D. in four years, and an M.D.-Ph.D. in six years, but the median time to a doctoral degree in the humanities disciplines is nine years. And the more self-limiting the profession, the harder it is to acquire the credential and enter into practice, and the tighter the identification between the individual practitioner and the discipline.”
Automating software engineering
Software writers themselves don’t seem immune from the new de-skilling wave. Nicholas Carr’s comments point to the essential tension that has always characterized technological de-skilling: the very real benefits of labor-saving technology come at the cost of a loss of human talent. The hard challenge is knowing where to draw the line—or just realizing that there is a line to be drawn.
The comment thread on his post has some great discussion. You should go read it. In it, Carr raises two more questions, which I will try to address here.
Climbing to higher levels of abstraction
re: “each advance opens up more new opportunities than it removes” This is a common defense of automation in pretty much all fields. (See the quote from Alfred North Whitehead in my Atlantic piece.) And it’s a good defense, as it’s very often true. But there are also a couple of counterarguments:
At some point, as the capabilities of automation software advance, the software aid begins to take over essential tasks – sensing, analysis, diagnosis, judgment making – and the human shifts to more routine functions such as input and monitoring. In other words, with the automation of skilled work there’s a point at which there are no “higher-level tasks” for the human to climb to.
In the world of software there is always a higher level to climb to. But an individual software engineer has to have both the foresight and the will to do it.
For example, not so long ago we were building every CRUD web application by hand. Soon common patterns emerged and were codified in web frameworks like Ruby on Rails and Django, and the level of abstraction for coding web apps was lifted. The interval of interest is the one after web apps became common but before a large part of the boilerplate had been extracted into frameworks. If you were a web app developer during that period, the repetitive nature of coding web apps would have made you feel that a large part of your job was ripe for automation.
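To make that extraction concrete, here is a hypothetical TypeScript sketch. The `Router`, `Store`, and `crudResource` names are my own illustrative inventions (neither Rails nor Django exposes this exact API), but the shape of the move is the same: the handlers that were once written by hand for every resource collapse into one generic call.

```typescript
// Hypothetical sketch: extracting repeated CRUD boilerplate into a framework.
// All names here are illustrative, not any real framework's API.

type Handler = (req: { params: Record<string, string>; body?: unknown }) => unknown;

interface Router {
  get(path: string, h: Handler): void;
  put(path: string, h: Handler): void;
  delete(path: string, h: Handler): void;
}

interface Store<T> {
  list(): T[];
  find(id: string): T | undefined;
  save(id: string, value: T): void;
  remove(id: string): void;
}

// Before frameworks: these routes were written by hand for every resource.
// After: the repetition is recognized and extracted, once.
function crudResource<T>(router: Router, name: string, store: Store<T>): void {
  router.get(`/${name}`, () => store.list());
  router.get(`/${name}/:id`, (req) => store.find(req.params.id));
  router.put(`/${name}/:id`, (req) => store.save(req.params.id, req.body as T));
  router.delete(`/${name}/:id`, (req) => store.remove(req.params.id));
}
```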
As in every arena where automation takes over, in software engineering too the pattern is familiar: as an activity becomes widespread, the repetitive and “non-thinking” parts of it are recognized and extracted. In the case of industry, they are extracted into a machine or robot. In the case of software, they are extracted into a layer of abstraction or framework. There is an uncanny valley just before that happens, when workers get the feeling that they’re just going through the motions.
The alphas are those who extract common repetitiveness into automation. The betas are those who raise their skills to adding value on top of the newly automated tasks. Everyone else gets left behind.
Cutting across layers of abstraction
Lower-level tasks may be seen as mere drudge work by the experienced expert, but they can actually be essential to the development of rich expertise by a person learning the trade. Automation can, in other words, benefit the master, but harm the apprentice (as R. Carey suggests in the preceding comment). And in some cases even the master begins to experience skill loss by not practicing the “lower level” tasks. (This has been seen among veteran pilots depending on autopilot systems, for example.)
I think this is a real danger. Teaching new students high-level languages is great for getting them started, but for true systems-building one needs to understand the whole stack.
The thing about computer science is that all abstractions are leaky. We can’t use our abstractions without understanding their underlying implementation and limits. This happens at every layer. An integer shows its implementation as a 32-bit word on the microprocessor when adding 1 to MAXINT wraps around. A garbage collector in a VM frees us from manual memory management, but shows up when the app experiences unpredictable pauses. Even the CPU, the bedrock abstraction, leaks its implementation details when things like speculative execution and branch prediction affect performance. The examples are endless.
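Here is a minimal TypeScript sketch of that first leak. JavaScript numbers are 64-bit floats, so to watch the classic 32-bit wraparound we view the value through an Int32Array, which stores true 32-bit words:

```typescript
// The integer abstraction leaking: a 32-bit signed word cannot hold
// MAXINT + 1, so the store silently wraps around to a negative number.

const word = new Int32Array(1); // one true 32-bit signed integer
word[0] = 2147483647;           // MAXINT, i.e. 2^31 - 1
word[0] += 1;                   // read, add, store back as a 32-bit word
console.log(word[0]);           // prints -2147483648, not 2147483648
```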
The other thing is: as Bryan Cantrill points out, failing and pathological systems are the ones which truly teach us. They lay bare our tower of abstractions. One has to understand all the layers and their implementations to debug them.
So becoming comfortable and knowledgeable in just one layer is brittle, and that brittleness is exposed the moment one has to solve or debug a significant problem.
For the apprentice, this means learning and working with as many different layers of abstraction as possible. For the more experienced, it means not letting the day-to-day job in one layer make you rusty with the others. These are both challenges.
Peeking into the future of automating software engineering
The peculiar thing about computing and software is that it can recursively coil around to build upon itself. Large amounts of code are used as data to gain insight into the process of writing and changing software. Sophisticated algorithms can learn from this code/data to automatically generate changes and fix bugs in existing software. And the fledgling field of computational creativity can ingest the mountains of now-digitized human creative output to guide the “creativity” of an algorithm. Leon Osterweil’s 1987 paper was prescient when it claimed that software processes are software too.
So, more computing power and more sophisticated algorithms and more data allow you to build even more powerful computers and even more sophisticated algorithms. This is unlike physical industry. A tall building doesn’t help in building even taller buildings.
It is in this line of thinking—connecting the individual programmer to a datacenter’s worth of data, analysis and computational power—that I think the future of automating the process of writing software lies. My hope is that the combination of human and machine will be more potent than either alone, much like advanced chess.
“Computational creativity is an emerging branch of artificial intelligence that places computers in the center of the creative process. Broadly, creativity involves a generative step to produce many ideas and a selective step to determine the ones that are the best. Many previous attempts at computational creativity, however, have not been able to achieve a valid selective step. This work shows how bringing data sources from the creative domain and from hedonic psychophysics together with big data analytics techniques can overcome this shortcoming to yield a system that can produce novel and high-quality creative artifacts. Our data-driven approach is demonstrated through a computational creativity system for culinary recipes and menus we developed and deployed, which can operate either autonomously or semi-autonomously with human interaction.”
Sharp tools, dull minds
What keeps me coming back to the writing of Nicholas Carr is that he splits me in two, violently agreeing and disagreeing at the same time. And then I have to step back and tell myself, “don’t rush to agree or disagree, just try to understand what he’s saying.” He has rapidly become one of the most thoughtful and balanced critics of our current digital age.
Psychologists have found that when we work with computers, we often fall victim to two cognitive ailments—complacency and bias—that can undercut our performance and lead to mistakes. Automation complacency occurs when a computer lulls us into a false sense of security. Confident that the machine will work flawlessly and handle any problem that crops up, we allow our attention to drift. We become disengaged from our work, and our awareness of what’s going on around us fades. Automation bias occurs when we place too much faith in the accuracy of the information coming through our monitors. Our trust in the software becomes so strong that we ignore or discount other information sources, including our own eyes and ears. When a computer provides incorrect or insufficient data, we remain oblivious to the error.
How automation is creeping higher up the labor stack to so-called white-collar knowledge jobs, especially programming, has been on my mind for a while. Modern IDEs are getting “helpful” enough that at times I feel like an IDE operator rather than a programmer. They have support for advanced refactoring. Linters can now tell you about design issues and code smells. The behavior all these tools encourage is not “think deeply about your code and write it carefully”, but “just write a crappy first draft of your code, and then the tools will tell you not just what’s wrong with it, but also how to make it better.”
I’ve been on both sides. There are times when I fire up Eclipse just so that I can auto-complete my way to getting some code out now. There are other times when I stare at the stark screen of a text editor (not an IDE), the blinking cursor accosting me: “what the hell are you going to do now without all your fancy crutches?” But after the initial stab of fear and helplessness, I revel in the freedom of it, and savor the delicious rebellion of writing code without an IDE nagging me with suggestions and hints and corrections at every damn keypress, like a 21st-century Clippy from a horror movie.
Am I arguing for primitive tools? No. What I’ve described in the previous paragraph is just my personal feeling while writing code. The tools (at least, the good ones) encode the knowledge and hard-won lessons of an entire army of programmers. They often point me to issues I’m blind to while writing code because I’m in such a rush to just make the damn thing work. In aggregate, they lead to a cleaner codebase.
So what am I saying, then? I’m saying that we should let the tools help us without letting them become crutches: let them help us write sharp code without dulling our minds. That sounds paradoxical, and it is. It is a subtle mental stance one takes towards one’s work, tools, and output.
The most guru-like programmers I’ve seen have this paradoxical, zen-like stance: they use the tools to the fullest extent possible (because, well, that’s what they’re there for), but understand enough about what those tools do and how they work to be perfectly productive even without them. It reminds me of the concept of moh (attachment), and of practicing non-attachment while remaining an active participant.
“Isn’t the essence of the Apple product that you achieve coolness simply by virtue of owning it? It doesn’t even matter what you’re creating on your Mac Air. Simply using a Mac Air, experiencing the elegant design of its hardware and software, is a pleasure in itself, like walking down a street in Paris. Whereas, when you’re working on some clunky, utilitarian PC, the only thing to enjoy is the quality of your work itself. As Kraus says of Germanic life, the PC “sobers” what you’re doing; it allows you to see it unadorned. This was especially true in the years of DOS operating systems and early Windows.”
“The basis of science is the hypothetico-deductive method and the recording of experiments in sufficient detail to enable reproducibility. We report the development of Robot Scientist “Adam,” which advances the automation of both. Adam has autonomously generated functional genomics hypotheses about the yeast Saccharomyces cerevisiae and experimentally tested these hypotheses by using laboratory automation. We have confirmed Adam’s conclusions through manual experiments. To describe Adam’s research, we have developed an ontology and logical language. The resulting formalization involves over 10,000 different research units in a nested treelike structure, 10 levels deep, that relates the 6.6 million biomass measurements to their logical description. This formalization describes how a machine contributed to scientific knowledge.”
“Despite the prevalence of computing power, the process of finding natural laws and their corresponding equations has resisted automation. A key challenge to finding analytic relations automatically is defining algorithmically what makes a correlation in observed data important and insightful. We propose a principle for the identification of nontriviality. We demonstrated this approach by automatically searching motion-tracking data captured from various physical systems, ranging from simple harmonic oscillators to chaotic double-pendula. Without any prior knowledge about physics, kinematics, or geometry, the algorithm discovered Hamiltonians, Lagrangians, and other laws of geometric and momentum conservation. The discovery rate accelerated as laws found for simpler systems were used to bootstrap explanations for more complex systems, gradually uncovering the “alphabet” used to describe those systems.”
“In the thousands of years of human history that predated our current moment of instantaneous communication, the very fabric of human understanding was woven to some extent out of delay, belatedness, waiting. … we need to give students an opportunity to understand the formative values of time and delay. The teaching of history has long been understood as teaching students to imagine other times; now, it also requires that they understand different temporalities. So time is not just a negative space, a passive intermission to be overcome. It is a productive or formative force in itself.
…I want to conclude with some thoughts about teaching patience as a strategy. The deliberate engagement of delay should itself be a primary skill that we teach to students. It’s a very old idea that patience leads to skill, of course—but it seems urgent now that we go further than this and think about patience itself as the skill to be learned. Granted—patience might be a pretty hard sell as an educational deliverable. It sounds nostalgic and gratuitously traditional. But I would argue that as the shape of time has changed around it, the meaning of patience today has reversed itself from its original connotations. The virtue of patience was originally associated with forbearance or sufferance. It was about conforming oneself to the need to wait for things. But now that, generally, one need not wait for things, patience becomes an active and positive cognitive state. Where patience once indicated a lack of control, now it is a form of control over the tempo of contemporary life that otherwise controls us. Patience no longer connotes disempowerment—perhaps now patience is power.”
Malcolm Gladwell and the narrative fallacy
I admire the writing of Malcolm Gladwell because he picks eclectic topics to delve into, peppers his narrative with memorable anecdotes, and weaves it with just enough pop science to make me feel like I’m learning something new, without making it so heavy that I put the book down.
And yet when I finish, I feel somewhat tricked and uncomfortable. I can’t find one single glaring problem, but the final feeling I’m left with is that there just have to be a ton of simplifications and glossed-over complexities.
This is a problem with pop sci (or pop sociology) in general, but particularly with the specific strain as practiced by writers who themselves do not have a technical background.
In a recent Longform podcast (transcript), Gladwell says that his central concern is the richness of the story he is telling. He gushes with admiration for Michael Lewis because Lewis can tell a story without citing papers.
My great hero as a writer is Michael Lewis. I just think Michael Lewis, believe it or not, is the most underrated writer of my generation. I think he is the one who will be read 50 years from now. And I think what he does is so extraordinary, from a kind of degree of difficulty standpoint. The Big Short is a gripping book, fascinating, utterly gripping book about derivatives. It blows me away how insanely hard that book was to do, and it’s brilliant. The Blind Side, I think, it might be the most perfect book I’ve read in 25 years. I don’t think there’s a single word in that that I would change. I just think it has everything. But he uses no science, right? Very little. It’s all story. But he does more work in his stories, makes much more profound points than I do by dragging in all these sociologists and psychologists. He’s proved to me that, if you can tell a story properly, you don’t need this kind of scaffolding. You can just tell the story. And so, I’ve been trying, not entirely successfully, but trying to move in that direction over the last couple books.
That’s the classic narrative fallacy—seeing a connecting narrative where none exists.
As I’ve said before, researchers and scientists themselves are too shy to write in this manner. But popular authors want to believe. They want to find the parts of a dense technical paper that help them build a colorful story and support a somewhat related generalization.
I’d like to think that I can understand something by reading summaries of it or narratives about it, but time and again I keep realizing that there is no substitute for the original source material.
The Internet Participation Pyramid
“…my career is in books. There I have an unlimited special effects budget. And I can cast however I want.”
The 1964 origins of responsive design
The book Designing Programmes, by Karl Gerstner, was published in 1964. It lays out many of the same ideas that form the basis for responsive design on the web today. The author talks about how to construct a system of design that can adapt to different conditions, while retaining its core aesthetic.
The opening paragraph of the book puts it best:
Instead of solutions for problems, programmes for solutions… for no problem is there an absolute solution. Reason: the possibilities cannot be delimited absolutely. There is always a group of solutions, one of which is best under certain conditions.
For example, this logo for a music shop adapts to various paper sizes and layouts, while still conveying the same identity.
He also talks about how to think of design as choosing points in a space of axes, where the axes are things like color, layout, type, size, and contrast. That was the inspiration for my earlier post on automating web design.
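Here is a minimal TypeScript sketch of that idea. The axis names and ranges are my own illustrative choices, not Gerstner’s:

```typescript
// Gerstner's move: a design is a point in a space whose axes are the
// design parameters. The axes and ranges below are illustrative only.

interface DesignPoint {
  hue: number;        // base color, 0-360 degrees
  fontSizePx: number; // body type size
  lineHeight: number; // vertical rhythm
  contrast: number;   // 0 (muted) to 1 (stark)
}

function randomPoint(): DesignPoint {
  const rand = (lo: number, hi: number) => lo + Math.random() * (hi - lo);
  return {
    hue: rand(0, 360),
    fontSizePx: rand(14, 22),
    lineHeight: rand(1.2, 1.8),
    contrast: rand(0, 1),
  };
}

// A "programme for solutions": generate many candidate designs, then
// select the one that is best under the conditions at hand.
const candidates: DesignPoint[] = Array.from({ length: 10 }, randomPoint);
```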
What changed in 1991 in India?
On an episode of EconTalk, host Russ Roberts asked the economist Jagdish Bhagwati, “…What changed in 1991?”, to which he replied:
…the Prime Minister himself … said, look, I have lots of my own family abroad, sons and daughters and nephews and nieces, and this is true virtually of everybody in the upper classes in India—they have family abroad. They keep coming back and saying: How come we have such idiotic policies? With such enormous restrictions on diversification, on capacity expansion, etc., etc., which are driving us into the ground? And so, the diaspora effect is what I call it. A lot of young people coming back said: You really cannot have this. Because India is really losing rapidly its position in the world economy. Because if you are not performing well, nobody is going to pay attention to you. And the second thing I think was that increasing as people went … abroad and they would find that nobody took India seriously. So the Indian politicians and bureaucrats were increasingly running into situations where they were simply disregarded and looked down upon.
That might be a factor, but it strikes me as a weak one. The much larger factor was that the Gulf War brought cable TV to India, and with it, the culture and mores of the West. When you go from decades of one state-sponsored TV channel that broadcast eight hours a day to dozens that run 24 hours a day, the impact is enough to move a nation. If you grew up in India during the 90s, there is absolutely no question about the impact of cable TV. Like I wrote before:
And when we were not watching the military-industrial complex peacocking, we were soaking in its cultural counterpart. Baywatch. MTV. Moonlighting. We didn’t even have to filter the crap. Only the hits were re-broadcast. Suddenly our culture seemed parochial. Suddenly our things seemed archaic, clunky.
(A tiny step towards) Automating Web Design
I’ve spent an inordinate amount of time fiddling with the theme and look of this blog, and so I decided to see what happens if I try to automate my experimentation.
You can play with the page here (you might have to allow the browser to run the embedded script). Every reload will give you a completely new, randomly generated look.
Yes, there are a few horrible-looking duds, but a surprising fraction of the generated styles don’t look half bad, and a few even look very good.
One can imagine stretching this in many interesting directions. For example, you could perform sentiment/mood analysis on the body text, and match it with a typeface that has similar characteristics. Of course, there are also the million other CSS variables that I didn’t change.
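For the curious, here is a TypeScript sketch of the basic trick. The CSS custom property names and value ranges are my own stand-ins; the actual script embedded in the page may differ:

```typescript
// Sample a random design point and write it into CSS custom properties
// that the stylesheet references, e.g.
//   body { background: var(--bg); color: var(--fg); font-size: var(--font-size); }
// Property names and ranges here are illustrative stand-ins.

function applyRandomStyle(): void {
  const rand = (lo: number, hi: number) => lo + Math.random() * (hi - lo);
  const hue = rand(0, 360);
  const root = document.documentElement.style;
  root.setProperty("--bg", `hsl(${hue}, ${rand(10, 40)}%, ${rand(85, 98)}%)`);
  root.setProperty("--fg", `hsl(${(hue + 180) % 360}, ${rand(20, 60)}%, ${rand(10, 35)}%)`);
  root.setProperty("--font-size", `${rand(14, 20)}px`);
  root.setProperty("--line-height", `${rand(1.3, 1.8)}`);
}

// Every page load rolls the dice again.
applyRandomStyle();
```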
I’ve argued previously that a lot of what today’s creative professionals do is variation within a theme, and that can be easily replaced with an algorithmic system that incorporates enough randomness to make the output look organic. This is another small example along those lines.
Here are some screenshots, starting with the vanilla unstyled page. I used the page from the CSS Zen Garden.