“Real Artists Ship”

Colin Johnson’s blog


Archive for the ‘Changes of Perspective’ Category

Variations on Folk Sayings (20)

Sunday, August 27th, 2017

“The early worm gets caught by the bird.”

int Considered Harmful; or, Are Computer Languages Too General?

Friday, August 25th, 2017

The flexibility of computer languages is considered to be one of their sources of power. The ability for a computer to do, within limits of tractability and Turing-completeness, anything with data is considered one of the great distinguishing features of computer science. Something that surprises me is that we fell into this very early on in the history of computing; very early programmable computer systems were already using languages that offered enormous flexibility. We didn’t have a multi-decade struggle in which we developed various domain-specific languages, with the invention of Turing-complete generic languages then arriving as a key turning point in the development of computer programming. As-powerful-as-dammit languages were—by accident, or because languages were already building on a strong tradition in mathematical logic etc.—there from the start.

Yet, in practice, programmers don’t use this flexibility.

How often have we written a loop such as for (int i=0;i<t;i++)? Why, given the vast flexibility to put any expression from the language in those three slots, do we hardly ever put anything other than a couple of different things in there? I used to feel that I was an amateurish programmer for falling into these clichés all the time—surely, real programmers used the full expressivity of the language, and it was just me with my paucity of imagination who wasn’t doing this.
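
To make the contrast concrete, here is a minimal sketch (the class name and values are invented for illustration): the first loop is the cliché that fills real codebases; the second is perfectly legal Java of a kind almost nobody ever writes.

```java
public class LoopCliche {
    public static void main(String[] args) {
        int t = 10;
        int[] items = new int[t];

        // The cliché: a stepper counting from 0 to t-1.
        for (int i = 0; i < t; i++) {
            items[i] = i * i;
        }

        // Perfectly legal, almost never written: arbitrary expressions in all three slots.
        for (int i = t - 1; i > 0 && items[i] % 3 != 1; i = i / 2 - 1) {
            System.out.println(items[i]);
        }
    }
}
```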

But, it isn’t just me. Perhaps, indeed, the clichés are a sign of maturity of thinking, a sign that I have learned some of the patterns of thought that make a mature programmer?

The studies of Roles of Variables put some meat onto these anecdotal bones. Over 99% of variable usages in a set of programs from a textbook were found to be doing just one of around 10 roles. An example of a role is most-wanted holder, where the variable holds the value that is the “best” value found so far, for some problem-specific value of “best”. For example, it might be the current largest in a program that is trying to find the largest number in a list.
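
A minimal sketch of the most-wanted holder role (the example and names are invented), sitting alongside the ubiquitous stepper:

```java
public class LargestInList {
    public static void main(String[] args) {
        int[] values = {3, 17, 5, 42, 9};

        int largest = Integer.MIN_VALUE;          // "most-wanted holder": best value found so far
        for (int i = 0; i < values.length; i++) { // i is a "stepper"
            if (values[i] > largest) {
                largest = values[i];
            }
        }
        System.out.println("Largest: " + largest);
    }
}
```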

There is a decent argument that we should make these sorts of things explicit in programming languages. Rather than saying “int” or “string” in variable declarations we should instead/additionally say “stepper” or “most recent holder”. This would allow additional pragmatic checks to see whether the programmer was using the variable in the way that they think they are.
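
As a rough illustration only—the @Role annotation below is not part of Java or, as far as I know, any existing library; it is just a sketch of what declaring roles on the example above might look like:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Hypothetical: a sketch of a role declaration, not an existing language feature or API.
@Target(ElementType.LOCAL_VARIABLE)
@interface Role {
    String value();
}

public class RoleAnnotatedSearch {
    public static int largest(int[] values) {
        @Role("most-wanted holder") int best = Integer.MIN_VALUE;
        for (@Role("stepper") int i = 0; i < values.length; i++) {
            if (values[i] > best) {
                best = values[i];
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(largest(new int[]{3, 17, 5, 42, 9}));  // prints 42
    }
}
```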

Perhaps there is a stronger argument though. Is it possible that we might be able to reason about such a restricted language more powerfully than we can a general language? There seems to be a tension between the vast Turing-complete capability of computer languages, and the desire to verify and check properties of programs. Could a subset of a language, where the role-types had much more restricted semantics, allow more powerful reasoning systems? There is a related but distinct argument that I heard a while ago that we should develop reasoning systems that verify properties of Turing-incomplete fragments of programs (I’ll add a reference when I find it, but I think the idea was at very early stages).

Les Hatton says that software is “cursed with unconstrained creativity”. We have just about got to a decent understanding of our tools when trends change, and we are forced to learn another toolset—with its own distinctive set of gotchas—all over again. Where would software engineering have got to if we had focused not on developing new languages and paradigms, but on becoming master-level skilled with the sufficiently expressive languages that already existed? There is a similar flavour here. Are we using languages that allow us to do far more than we ever need to, and in doing so limiting the reasoning and support tools we can provide?

The Origins of (Dis)order

Friday, August 11th, 2017

I think that where I get into dispute with the social scientists and literary theorists about whether the world is “ordered” is basically down to the counterfactuals we are each thinking of. To them, the fact that sometimes some people can’t quite manage to agree that some words mean the same thing means that the world is fundamentally disordered and truth uncertain and subjective. Whereas to me, I’m constantly gobsmacked that the world isn’t just some isotropic soup of particles and energy, and regard it as amazing that we can even write down some equations that describe at least some aspects of the world to a reasonable level of accuracy, and that by some amazing happenstance the most compact description of the world isn’t just a rote list of particles and their position and momentum.

3D vs. 2D Worlds

Tuesday, May 16th, 2017

What is the habitable surface of the world? Actually, that is the wrong question. The right question is “What is the habitable volume of the world?”. It is easy to think that the ratio of marine habitat to land habitat is about 2:1—that is what we see when we look at the globe. But, this ignores the fact that, to a first approximation, the oceans are habitable in three dimensions, whereas the surface of the earth is only habitable in two. This makes the habitable volume of the seas vastly larger than our surface-biased eyes first intuit.
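
A back-of-the-envelope sketch of the point (the ocean figures are rough public estimates, and the 100 m “habitable layer” for the land surface is a crude assumption of mine):

```java
// Back-of-the-envelope only: rough figures, and an assumed land "habitable layer".
public class HabitableVolume {
    public static void main(String[] args) {
        double oceanAreaKm2 = 3.6e8;    // ~360 million km^2 of ocean surface
        double oceanMeanDepthKm = 3.7;  // mean ocean depth, roughly 3.7 km
        double landAreaKm2 = 1.5e8;     // ~150 million km^2 of land
        double landLayerKm = 0.1;       // assume life occupies ~100 m above and below ground

        double oceanVolumeKm3 = oceanAreaKm2 * oceanMeanDepthKm;
        double landVolumeKm3 = landAreaKm2 * landLayerKm;

        System.out.printf("Ocean habitable volume ~ %.0f million km^3%n", oceanVolumeKm3 / 1e6);
        System.out.printf("Land habitable volume  ~ %.0f million km^3%n", landVolumeKm3 / 1e6);
        System.out.printf("Ratio ~ %.0f : 1%n", oceanVolumeKm3 / landVolumeKm3);
    }
}
```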

Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better.

Wednesday, April 26th, 2017

Here’s something interesting. It is common for people in entrepreneurship and startup culture to fetishise failure—”you can’t be a proper entrepreneur until you’ve risked enough to have had a couple of failed businesses”. There’s some justification for this—new business ventures need to try new things, and it is difficult to predict in advance whether they will work. Nonetheless, it is not an unproblematic stance—I have written elsewhere about how this failure culture makes problematic assumptions about having the financial resources and life circumstances to be able to fail without disastrous consequences.

But, the interesting point is this. No-one ever talks like this about jobs, despite the reality that a lot of people are going to try out a number of careers before finding the ideal one, or simply switch from career to career as the work landscape changes around them during their lifetime. In years of talking to students about their careers, I’ve never come across students adopting this “failure culture” about employeeship. Why is it almost compulsory for a wannabe entrepreneur to say that, try as they might, they’ll probably fail with their first couple of business ventures; yet it is deep defeatism to say “I’m going into this career; I’ll probably fail, but it’ll be a learning experience which’ll make me better in my next career”?

Interesting/Plausible

Friday, September 30th, 2016

A useful thought-tool that I learned from Tassos Stevens: “It is easier to make the interesting plausible, than the plausible interesting.”

The Fallacy of Formal Representations

Friday, September 9th, 2016

I went to an interesting talk by Jens Krinke earlier this week at UCL (the video will eventually be on that page). The talk was about work by him and his colleagues on observation-based program slicing. The general idea of program slicing is to take a variable value (or, indeed any state description) at a particular point in a program, and remove parts of the program that could not affect that particular value. This is useful, e.g. for debugging code—it allows you to look at just those statements that are influential on a statement that is outputting an undesirable value—and for other applications such as investigating how closely-coupled code is, helping to split code into meaningful sub-systems, and code specialisation.
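
A minimal invented example of the idea: slicing on the value of sum at the final print keeps only the lines that can affect sum, and drops the product computation entirely.

```java
// An invented example: the slicing criterion is the value of `sum` at the final
// print statement. The slice keeps only the lines that can affect that value.
public class SliceExample {
    public static void main(String[] args) {
        int[] data = {4, 7, 1, 9};                // kept: sum depends on the data
        int sum = 0;                              // kept: initialises the criterion variable
        int product = 1;                          // dropped: cannot affect sum
        for (int i = 0; i < data.length; i++) {   // kept: controls the updates to sum
            sum += data[i];                       // kept: updates sum
            product *= data[i];                   // dropped: cannot affect sum
        }
        System.out.println(sum);                  // slicing criterion: the value of sum here
    }
}
```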

The typical methods used in slicing are to use some formal model of dependencies in a language to eliminate statements. A digraph of dependencies is built, and paths that don’t eventually lead to the node of interest are eliminated. This has had some successes, but as Jens pointed out in his talk, progress on this has largely stalled for the last decade. The formal models of dependency that we currently have only allow us to discover certain kinds of dependency, and using a slicer on a particular program also needs a particular model of the language’s semantics to be available. This latter point is particularly salient in the contemporary computing environment, where “programs” are typically built up from a number of cooperating systems, each of which might be written in a different language or framework. In order to slice the whole system, a consistent, multi-language framework would need to be available.
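
A toy sketch of that dependency-graph style (the statement labels and “depends on” edges below are written by hand, not derived from real code): keep everything that the slicing criterion transitively depends on, and eliminate the rest.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A toy sketch, not a real slicer: statements are just labels with hand-written edges.
public class DependencySliceSketch {

    // Collect everything the criterion statement transitively depends on;
    // statements outside this set can be eliminated from the slice.
    public static Set<String> backwardSlice(Map<String, List<String>> dependsOn, String criterion) {
        Set<String> slice = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(criterion);
        while (!work.isEmpty()) {
            String stmt = work.pop();
            if (slice.add(stmt)) {
                for (String dep : dependsOn.getOrDefault(stmt, List.of())) {
                    work.push(dep);
                }
            }
        }
        return slice;
    }

    public static void main(String[] args) {
        Map<String, List<String>> dependsOn = Map.of(
            "print(sum)", List.of("sum += data[i]"),
            "sum += data[i]", List.of("int sum = 0", "for i"),
            "product *= data[i]", List.of("int product = 1", "for i")
        );
        // The product computation never reaches print(sum), so it falls out of the slice.
        System.out.println(backwardSlice(dependsOn, "print(sum)"));
    }
}
```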

As a contrast to this, he proposed an empirical approach. Rather than taking the basic unit as being a “statement” in the language, take it as a line of code; in most languages these are largely coincident. Then, work through the program, deleting lines one-by-one, recompiling, and checking whether the elimination of that line makes a difference in practice to the output on a large, comprehensive set of inputs (this over-simplifies the process of creating that input test set: programs can be complex entities where producing a thorough set of input examples is difficult, as sometimes a very specific set of inputs is needed to trigger a specific behaviour later in the execution; nonetheless, techniques exist for building such sets). This process is repeated until a fixed point is reached—i.e. none of the eliminations in the current round made a difference to the output behaviour for that input set. Because there is no dependency on a model of the language’s semantics—all that is needed is access to the source code and a compiler—this can be applied to a wide variety of different languages, and to many different kinds of computer system. For example, in the talk, an example was given of using it to slice a program in a graphics-description language, asking the question “what parts of the code are used in producing this sub-section of the diagram?”.
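
A sketch of that delete-recompile-compare loop as I understand it from the description above (the compileAndRun helper is a placeholder standing in for a real harness over the chosen input set, not an existing API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch only: compileAndRun() is a placeholder, not an existing API.
public class ObservationBasedSlicer {

    public static List<String> slice(List<String> lines, List<String> expectedOutputs) {
        List<String> current = new ArrayList<>(lines);
        boolean changed = true;
        while (changed) {                            // repeat until a fixed point is reached
            changed = false;
            for (int i = 0; i < current.size(); i++) {
                List<String> candidate = new ArrayList<>(current);
                candidate.remove(i);                 // tentatively delete one line
                if (expectedOutputs.equals(compileAndRun(candidate))) {
                    current = candidate;             // behaviour unchanged on the test inputs: keep the deletion
                    changed = true;
                    i--;                             // re-examine the line that has shifted into this slot
                }
            }
        }
        return current;                              // what remains is the observation-based slice
    }

    // Placeholder: compile `candidate`, run it on the fixed input set, and return its
    // outputs (returning null, say, if it fails to compile, so the deletion is rejected).
    private static List<String> compileAndRun(List<String> candidate) {
        throw new UnsupportedOperationException("stand-in for a real compile-and-run harness");
    }
}
```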

Of course, there is a cost to pay for this. That cost is the lack of a formal guarantee of correctness across the input space. By using only a sample of the inputs, there is a possibility that some behaviour was missed. By contrast, methods that work with a formal model of dependencies make a conservative guarantee that, regardless of inputs, the slice will be correct. Clearly, this is better. But, there are limits to what can be achieved using those methods too; by using a model that only allows the elimination of a statement if it is guaranteed under that model never to have a dependency, it ignores two situations. The first of these is that the model may not be powerful enough to recognise that a particular dependency cannot arise, even though this is formally true (this kind of thing crops up all over the place; I remember getting frustrated with the Java compiler, which used to complain that a particular variable value “might not have been initialised” when it was completely obvious that it must have been; e.g. in the situation where a variable was declared before a conditional and then given a value in every possible branch, and then used after that statement). The second—and it depends on the application as to whether this matters—is that perhaps a formal dependency might crop up so infrequently as to not matter in practice. By taking an empirical approach, we observe programs as they are being run, rather than how they could be run, and perhaps therefore find a more rapid route to e.g. bug-finding.
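
The Java complaint is from memory, so the example below is only the flavour of the thing: the two conditions provably cover every possible value of x, but the compiler’s definite-assignment analysis cannot see that, and rejects the code.

```java
// This will not compile -- that is the point. javac reports
// "variable y might not have been initialized" at the print statement.
public class DefiniteAssignment {
    public static void main(String[] args) {
        int x = args.length;
        int y;
        if (x > 0) {
            y = 1;
        }
        if (x <= 0) {
            y = 2;
        }
        System.out.println(y);   // rejected by the compiler, despite y always being set
    }
}
```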

In the question session after the talk, one member of the audience (sorry, I didn’t notice who it was) declared that they found this approach “depressing”. Not “wrong” (though other people may have thought that). The source of the depression, I would contend, is what I will call the fallacy of formal representations. There is a sense that permeates computer science that, because we have an underlying formal representation for our topic of study, we ought to be doing nothing other than producing tools and techniques that work on that formal representation. Empirical techniques are seen as both dangerous—they produce results that cannot be guaranteed, mathematically, to hold—and a waste of time—we ought to be spending our time producing better techniques that formally analyse the underlying representation, and it is a waste of time to piss around with empirical techniques, because eventually they will be supplanted by formal ones.

I would disagree with this. “Eventually” is a long time, and some areas have just stalled—for want of better models, or just in terms of practical application to programs/systems of a meaningful size. There is a lot of code that doesn’t require the level of guarantee that the formal techniques provide, and we are holding ourselves back as a useful discipline if we focus purely on techniques that are appropriate for safety-critical systems, and dismiss techniques that are appropriate for, say, the vast majority of the million-plus apps in the app store.

Other areas of study—let’s call them “science”—are not held up by the same mental blockage. Biology and physics, for example, don’t throw their hands up in the air and say “nothing can be done”, “we’ll never really understand this”, just because there isn’t an underlying, complete set of scientific laws available a priori. Instead, a primary subject of study in those areas is the discovery of those laws, or at least useful approximations thereto. Indeed, the development of empirical techniques to discover new things about the phenomena under study is an important part of these subject areas, to the extent that Nobel Prizes have been won (e.g. 1977; 2003; 1979; 2012; 2005) for the development of various measurement and observation techniques to get a better insight into physical or biological phenomena.

We should be taking—alongside the more formal approaches—an attitude similar to this in computer science. Yes, many times we can gain a lot by looking at the underlying formal representations that produce e.g. program behaviour. But in many cases, we would be better served by taking these behaviours as data and applying the increasingly powerful data science techniques that we have to develop an understanding of them. We are good at advocating the use of data science in other areas of study; less good at taking those techniques and applying them to our own area. I would contend that the fallacy of formal representations is exactly the reason behind this: because we have access to that underlying level, we convince ourselves that, with sufficient thought and care, we could extract the information that we need by ratiocination about that material, rather than “resorting” to looking at the resulting behaviour in an empirical way. This also prevents the development of good intermediate techniques, e.g. those that use ideas such as interval arithmetic and qualitative reasoning to analyse systems.
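
As one concrete illustration of such an intermediate technique, here is a minimal interval-arithmetic sketch (the class and method names are invented): values are tracked as [lo, hi] ranges, giving sound—if sometimes loose—bounds on a computation without enumerating its inputs.

```java
// A minimal interval-arithmetic sketch: track values as [lo, hi] ranges.
public class Interval {
    final double lo, hi;

    Interval(double lo, double hi) { this.lo = lo; this.hi = hi; }

    Interval add(Interval o) { return new Interval(lo + o.lo, hi + o.hi); }

    Interval mul(Interval o) {
        double a = lo * o.lo, b = lo * o.hi, c = hi * o.lo, d = hi * o.hi;
        return new Interval(Math.min(Math.min(a, b), Math.min(c, d)),
                            Math.max(Math.max(a, b), Math.max(c, d)));
    }

    @Override public String toString() { return "[" + lo + ", " + hi + "]"; }

    public static void main(String[] args) {
        Interval x = new Interval(-2, 3);              // "the input is somewhere in [-2, 3]"
        Interval y = x.mul(x).add(new Interval(1, 1)); // a sound, though loose, enclosure of x*x + 1
        System.out.println(y);                         // prints [-5.0, 10.0]
    }
}
```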

Mathematics has a similar problem. We are accustomed to working with proofs—and rightly so, these are the bedrock of what makes mathematics mathematics—and also with informal, sketched examples in textbooks and talks. But, we lack an intermediate level of “data rich mathematics”, which starts from formal definitions, and uses them to produce lots of examples of the objects/processes in question, to be subsequently analysed empirically, in a data-rich way, and then used as the inspiration for future proofs, conjectures and counterexamples. We have failed, again due to the fallacy of formal representations, to develop a good experimental methodology for mathematics.
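
A small sketch of the sort of thing meant (the particular experiment is just an illustration): start from a formal definition—coprimality—generate a large number of instances, and look at the resulting data, which here points towards the known 6/π² result and could equally have suggested it as a conjecture before any proof existed.

```java
import java.math.BigInteger;
import java.util.Random;

// A tiny "data rich mathematics" experiment: generate many instances of a
// formally-defined object (coprime pairs) and inspect the data empirically.
public class CoprimeExperiment {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int trials = 1_000_000;
        int coprime = 0;
        for (int t = 0; t < trials; t++) {
            BigInteger a = BigInteger.valueOf(1 + rng.nextInt(1_000_000));
            BigInteger b = BigInteger.valueOf(1 + rng.nextInt(1_000_000));
            if (a.gcd(b).equals(BigInteger.ONE)) {
                coprime++;
            }
        }
        System.out.println("Empirical proportion of coprime pairs: " + (double) coprime / trials);
        System.out.println("6/pi^2: " + 6 / (Math.PI * Math.PI));
    }
}
```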

It is interesting to wonder why empirical techniques are so successful in the natural sciences, yet are treated with at best a feeling of depressed compromise, at worst complete disdain, in computer science. One issue seems to be the brittleness of computer systems. We resort (ha!) to formal techniques because there is a feeling that “things could slip through the net” if we use empirical techniques. This seems to be much less the case in, say, the biological sciences. Biologists will, for example, be confident that they have mapped out a signalling pathway fairly accurately having done experiments on, say, a few hundred cells. Engineers will feel that they understand the behaviour of some material having carefully analysed a few dozen samples. There isn’t the same worry that, for example, there is some critical, narrow temperature range, environmental condition, or similar, that could cause the whole system to behave in a radically different way. Something about programs feels much more brittle; you just need the right (wrong!) state to be reached for the whole system to change its behaviour. This is the blessing and the curse of computer programming: you can do anything, but you can also do anything, perhaps by accident. A state that is more-or-less the same as another state can be transformed into something radically different by a single line of code, which might leave the first state untouched (think about a divide-by-zero error).
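
A tiny invented example of that brittleness: two inputs that are “more-or-less the same”, and a single line that treats them in radically different ways.

```java
// Invented example: n = 1 and n = 0 look almost identical as states,
// but one line of code treats them radically differently.
public class Brittle {
    static int perItemCost(int totalCost, int n) {
        return totalCost / n;    // fine for n = 1; throws ArithmeticException for n = 0
    }

    public static void main(String[] args) {
        System.out.println(perItemCost(100, 1));  // prints 100
        System.out.println(perItemCost(100, 0));  // the whole program dies here
    }
}
```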

Perhaps, then, the fault is with language design, or programming practice. We are stuck with practices from an era where every line of code mattered (in memory cost or execution time), so we feel the need to write very tight, brittle code. Could we redesign languages so that they don’t have this brittleness, thus obviating the need for the formal analysis methods that are there primarily to capture the behaviours that don’t occur with “typical” inputs? What if we could be confident—even, perhaps, mathematically sure—that there were no weird pathological routes through code? Alternatively, what if throwing more code at a problem actually made us more confident of it working well: rather than having tight single paths through code, have the same “behaviour” carried out and checked by a large number of concurrent processes that interact in a way that doesn’t have the dependencies of traditional concurrency models (when was the last time that a biosystem deadlocked, or a piece of particle physics, for that matter?)? What if, each time we added a new piece of code to a system, we felt that we were adding something of value that interacted in only a positive way with the remainder of the code, rather than fearing that we had opened up some kind of potential interaction or dependency that will cause the system to fail? What if a million lines of code couldn’t be wrong?

Personal Practice (1)

Tuesday, April 26th, 2016

My colleague Sally Fincher has pointed out that one interesting aspect of architecture and design academics is that the vast majority of them continue with some kind of personal practice in their discipline alongside carrying out their teaching and research work. This contrasts with computer science, where such a combination is rather unusual. It might be interesting to do a pilot scheme that gave some academic staff a certain amount of time to do this in their schedule, and see what influence it has on their research and teaching.

Interestingly, a large proportion of computer science students do have a personal practice in some aspect of computing/IT; it is striking quite how many of our students are running a little web design business or similar on the side, alongside their studies.

There’s no F in Strategy (and usually doesn’t need to be)

Thursday, February 11th, 2016

A while ago I read a little article whilst doing a management course that was very influential on me (I’ll find the reference and add it here soon). It argued that the process of building a team—in the strict sense a group of people who could really work closely and robustly together on a complex problem—was difficult, time-consuming and emotionally fraught, and that actually, for most business processes, there isn’t really any need to build a team as such. Instead, just a decently managed group of people with a well-defined goal was all that was needed for most activities. Indeed, this goes further; because of the stress and strain needed to build a well-functioning team in the strong sense of the word, it is really unproductive to do this, and risks fomenting a “team-building fatigue” in people.

I’m wondering if the same is true for the idea of strategy. Strategy is a really important idea in organisations, and the idea of strategic change is really important when a real transformation needs to be made. But, I worry that the constant demands to produce “strategies” of all sorts, at all levels of organisations, run the danger of causing “strategy fatigue” too. We have to produce School strategies, Faculty strategies, University strategies, all divided un-neatly into research, undergraduate, and postgraduate, and then personal research Strategies, and Uncle Tom Cobleigh and all strategies. Describing some pissant objective to increase the proportion of one category of students from 14.1% to 15% isn’t a strategy; it’s almost a rounding error. We ought to reserve the word “strategy”—and the activity—for when it really matters.

Basic Income (1)

Saturday, November 28th, 2015

It only struck me a few months ago that there is a decent minority of the political/business establishment who seem to believe that a large proportion of the population can live at a basic level without the need for any income—presumably supported by some nebulous kind of family wealth. That’s not “to live well”, but the idea that the basics of housing, food, transport and basic personal care are just somehow “taken care of” in some vague way. You see this on Dragons’ Den, where entrepreneurs are urged to quit their job and show “real commitment” to their business idea. I’d always been rather bemused by statements such as this, but in light of the idea that the basics are “covered”, it makes sense—they are asking people to give up, as they see it, luxuries, not the basics of living.

A Wild Idea for Treating Infectious Diseases

Monday, July 13th, 2015

Engineer a variant of a disease which spreads more quickly within the organism, so that it drives the standard variant out of its niche. Engineer this variant with a genetic “self-destruct” switch which can be triggered by a standard drug. Then superinfect the patient with the new variant, wait until the new variant has taken over, and then apply the drug to remove the infection from the system.

Seeming more Specialised than you Actually Are

Monday, June 29th, 2015

Sometimes it is important to present yourself as more specialised than you actually are. This can be true for individuals and for businesses. Take, for example, apparently successful businesses built around offerings as specific as websites for doctors’ surgeries, or handling payments from parents for school activities.

Woaah there! What’s happening here? Surely any decent web design company can provide a website for a doctor’s surgery? The specific company might provide a tiny little bit more knowledge, but surely the knowledge required to write a decent website is around 99 percent of the knowledge required to write a doctor’s surgery website. Surely, handling payments from parents for school activities is just the same as, well, umm, handling payments, and there are plenty of companies that do that perfectly well.

This, of course, misses the point. The potential customers don’t know that. They are likely to trust the over-specialised presentation rather than the generic one. Indeed, the generic one might sound a little bit shady, evasive or amateurish: “What kind of web sites do you make?”, “Well, all kinds really.”, “Yes, but what are you really good at?”, “Well, it doesn’t really matter, websites are all basically the same once you get into the code.”. Contrast that with “we make websites for doctors.” Simples, innit.

So that’s my business startup advice. Find an area that uses your skills, find some specialised application of those skills, then market the hell out of your skills in that specific area. You will know that your skills are transferrable—but, your potential customers won’t, and they will trust you more as a result.

I’ve noticed the same with trying to build academic collaborations. Saying “we do optimisation and data science and visualisation and all that stuff” doesn’t really cut it. I’ve had much more success starting with a specific observation—we can provide a way of grouping your data into similar clusters, for example—than trying to describe the full range of what contemporary data science techniques can do.

Similarly with courses. Universities have done well out of providing “MBA in Marketing for XX” or whatever, when the vast majority of the course might be generic marketing skills. Again, the point here is more one of trust than one of content.

“Who Bought you That?”

Monday, December 16th, 2013

I’ve noticed a communication difference between people like me, who grew up in small families without much of a tradition of present-giving, and people who grew up in big, richly-connected families where dozens of people exchange presents at Christmas and birthdays.

People in the latter group often ask the question “Who bought you that?” when enquiring about some day-to-day object—a scarf, a watch, a pen—that I have. I always thought that this was a weird question—why on earth would you imagine that someone bought it for me? But, of course, to people from such a background, the idea that you would ever need to buy such day-to-day tchotchkes is weird. For their whole lives they’ve never had any need to buy all these little bits and pieces; ever since childhood they’ve had an endless supply of little day-to-day objects in the form of presents from cousins and great-aunts. Of course, they are in an economically neutral position, as they have had to keep up their part of the exchange.

For Real (1)

Wednesday, June 19th, 2013

It’s odd when something turns out to be real. In the back of my head I have a vague idea that brand names like “Dolce and Gabbana” are the invention of a chap called Toby working in a Soho ad agency in the ’70s. It is weird to see that they are actually a couple of real blokes in their 50s:

BBC News: Dolce and Gabbana sentenced to jail for tax evasion

We Don’t Take Comedy Seriously Enough

Monday, April 8th, 2013

Despite the rise and rise of complex, richly engaged comedy, people in other artforms still don’t have any respect for it. For the last few years I’ve been interested—in a rather inchoate way—in how comedy and contemporary classical music might interact, in particular whether the forms and structures of comedy provide an interesting and novel analogy for the structuring of a piece of music, or whether music-theatre can learn from comedy performance practices. I’d be interested, for example, if the tension inherent in a Stewart Lee performance, and the sophisticated use of reference and callbacks, could provide an emotional flavour that could be delivered in a musical way, or whether the emotional trajectory of Daniel Kitson’s storytelling performances could give us an idea of how to hold an audience for an extended period of time.

Very few people take this seriously. When someone raised a point like this with Larry Goves at a tutorial last summer, the response was incomprehending. What could the mere stimulus-response of joke-laugh have to do with a sophisticated artform such as composing a string quartet? Similarly, at a meeting the other day, the idea that a contemporary music group might put on a joint event with a comedy group was treated with some distaste—”I don’t know if we want to be associated with that sort of thing”—whereas collaborations with poetry and art groups were greeted with enthusiasm.

I don’t want to suggest that all comedy is deep and profound—there is a big place for “summat as meks yer ears laff”. But, when some people in contemporary comedy are making a rich and distinctive contribution to new ways of taking an audience on an emotional trajectory, it is a pity that this is ignored by other artforms.

“You turn if you want to…”

Thursday, March 21st, 2013

Interesting attempt by Labour to shift the use of the phrase “U-turn” in politics:

I’ve never really liked the aversion to “U-turns” in politics. I can see that we don’t want people flip-flopping between decisions, but too strong an aversion to changing your mind in light of changing situations and new evidence can leave politics very unagile and leaden. It would be great if politicians could say “in response to overwhelming public pressure / new evidence … / the shift towards … /etc. we have decided to …” without opening themselves to accusations of U-turning.

The Revolution will be Computerised (1)

Tuesday, December 11th, 2012

I wonder how long things like “competence in IT and familiarity with a computerised environment” are going to continue to be listed as job requirements (and this for a lectureship post in Computer Science, natch). Surely, “computerised” is the default now, and the specific skill that you might be looking for on odd occasions is familiarity with the opposite? And, for that matter, how long are information-systems-type courses going to keep using the case study of a paper-based system being “computerised”?

Variations on Folk Sayings (13)

Sunday, May 27th, 2012

“If God had intended us to have wings, He would have made us fly.”

Zeitgeist (3)

Friday, May 4th, 2012

An interesting shift in perspective (from the middle of a recent discussion on Ask Metafilter):

met in a bar, but parents say that they met online

Zeitgeist (3)

Saturday, December 31st, 2011

Interesting perspective from a forum post that I read somewhere t’other day: someone (presumably someone growing up since the ubiquity of mobile phones) who saw landlines as a premium service (because the company had to “put all the wires in” to the house) and mobile service as the basic service (because you just issue someone with a handset and then you are done with it), and didn’t understand why the mobile service was the more expensive one. I wonder how much this is true: is mobile only a premium-price service because historically it was, or is it still more expensive to maintain and run the mobile infrastructure?