Archive for September, 2016
Interesting/Plausible
Friday, September 30th, 2016
A useful thought-tool that I learned from Tassos Stevens: “It is easier to make the interesting plausible than the plausible interesting.”
Family Stories (3)
Monday, September 26th, 2016
When I was around 10 or 11 years old, my parents made a shed at the back of the garden by putting a door and roof on a small space at the back of the garage. This was used to store gardening supplies—compost, plant pots and the like—and bottles of the dubious home-made wine and beer that were popular at the time.
One summer day I decided, on a whim, that this needed a label putting on it. So, using a chisel and hammer from the garage, I gouged the words “TOOL SHED” into the paint and wood, fairly deeply. Then, realising that the shed wasn’t actually used to store tools, I panicked; but a simple solution came to mind, and I carved the word “NOT” above “TOOL SHED”, with an asterisk added to retain the symmetry of four characters on each line. As a result, the shed had (and retained for several years) the label:
*NOT
TOOL
SHED
and was thus referred to in my family for many years subsequently.
I believe that I am the only person alive who remembers this.
Verboten (1)
Thursday, September 22nd, 2016
Public/Private
Monday, September 12th, 2016
In the public sector, we are often urged to emulate the supposed good practices of the private sector. According to Liam Fox, these include being fat, lazy, and playing golf on Friday afternoons. Should I be writing these practices into our latest strategy document?
“How are you?”
Monday, September 12th, 2016
It sometimes surprises me quite how formulaic the small talk at the beginning of a conversation is. I know that it isn’t acceptable to respond to the question “How are you?” with a list of your latest ailments and insecurities, but it is still striking how much that part of a conversation is a cognitive readymade, without any ready deviation. I remember a couple of incidents in the days after my father died.
- Meeting a colleague a few days after my father had died. Wanting, gradually, to let people know what had happened, I responded to his “How are you?” with an “Actually, not so good”, expecting to get a query back about what had happened. Instead, I just got the response “Great, I’m fine”, as if I had said (as I would 99.9999% of the time) “I’m fine, how are you?”. My response quite literally hadn’t been processed at all. If you want some evidence for hearing being a process of anticipation, then you’ve got it there. There’s no response in the “repertoire” to “How are you?” other than minor variants on “Fine, how are you?”, so the brain doesn’t really bother processing what has been said: any response is treated as the standard one.
- Speaking to my uncle a day or two after my father had died (I had already told my uncle). This time, he asked first: “How are you?”. My response, understandably: “Not too good.” My uncle’s response—no criticism intended, this is just a point about how deeply embedded these language structures are—“Oh, why is that then?”. I was, very unusually, struck dumb for a few seconds. For a moment I thought “Perhaps I didn’t tell him that Dad had died?”; surely someone wouldn’t say something so crass to someone who had just lost a parent—surely it would be obvious why I “wasn’t too good”. Eventually, I managed to stutter out “Well, you know, Dad died yesterday.” It is bizarre how fixed our linguistic patterns are: even after one of the worst things that can happen to you, saying that you are anything other than “fine” causes the whole language-generation system to collapse.
Doppelganger (1)
Friday, September 9th, 2016
On this week’s Only Connect, there was a beardy chap with glasses who described himself as “having a maths degree and playing the bassoon and cello”, on a team defined by their liking for Indian food. I had to check for a moment that I hadn’t accidentally appeared on the programme and forgotten all about it.
The Fallacy of Formal Representations
Friday, September 9th, 2016
I went to an interesting talk by Jens Krinke earlier this week at UCL (the video will eventually be on that page). The talk was about work by him and his colleagues on observation-based program slicing. The general idea of program slicing is to take a variable value (or, indeed, any state description) at a particular point in a program, and remove the parts of the program that could not affect that particular value. This is useful for debugging code—it allows you to look at just those statements that influence a statement that is outputting an undesirable value—and for other applications such as investigating how closely coupled code is, splitting code into meaningful sub-systems, and code specialisation.
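As a purely illustrative example of what a slice is (mine, not taken from the talk), here is a tiny Python program and the slice of it with respect to the value of `total` at its final line:

```python
# A toy program, and its slice with respect to the value of `total`
# at the final print statement.

def original(numbers):
    total = 0
    count = 0                          # only affects `average`
    for n in numbers:
        total += n
        count += 1                     # no effect on `total`
    average = total / max(count, 1)    # no effect on `total`
    print(average)
    print(total)                       # slicing criterion: value of `total` here

def sliced(numbers):
    # Only the statements that can affect `total` at the criterion remain.
    total = 0
    for n in numbers:
        total += n
    print(total)

original([1, 2, 3])   # prints 2.0 then 6
sliced([1, 2, 3])     # prints 6: same value at the criterion
```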
The typical approach to slicing is to use some formal model of dependencies in the language to eliminate statements: a digraph of dependencies is built, and any statement with no dependency path connecting it to the node of interest is eliminated. This has had some successes, but as Jens pointed out in his talk, progress has largely stalled for the last decade. The formal models of dependency that we currently have only allow us to discover certain kinds of dependency, and using a slicer on a particular program requires a model of that language’s semantics to be available. This latter point is particularly salient in the contemporary computing environment, where “programs” are typically built up from a number of cooperating systems, each of which might be written in a different language or framework. In order to slice the whole system, a consistent, multi-language framework would need to be available.
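For concreteness, a minimal sketch of the graph-based idea (my own illustration, not the speaker's tooling), in which edges point from a statement to the statements it depends on and the slice is everything reachable from the criterion:

```python
# A minimal sketch of dependency-graph slicing: `deps` maps each statement id
# to the ids of the statements it depends on (data or control dependencies).
# The slice for a criterion statement is everything reachable from it by
# following those dependency edges; anything unreachable is eliminated.

from collections import deque

def graph_slice(deps, criterion):
    keep = {criterion}
    queue = deque([criterion])
    while queue:
        node = queue.popleft()
        for dep in deps.get(node, ()):
            if dep not in keep:
                keep.add(dep)
                queue.append(dep)
    return keep

# Statement 5 (the criterion) depends on 3, which depends on 1;
# statements 2 and 4 cannot influence it, so they drop out of the slice.
deps = {1: [], 2: [], 3: [1], 4: [2], 5: [3]}
print(sorted(graph_slice(deps, 5)))   # -> [1, 3, 5]
```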
As a contrast to this, he proposed an empirical approach. Rather than taking the basic unit to be a “statement” in the language, take it to be a line of code; in most languages these largely coincide. Then work through the program, deleting lines one by one, recompiling, and checking whether the elimination of each line makes any difference in practice to the output on a large, comprehensive set of inputs. (This over-simplifies the process of creating that input test set: programs can be complex entities for which producing a thorough set of input examples is difficult, because sometimes a very specific combination of inputs is needed to trigger a specific behaviour later in the execution; nonetheless, techniques exist for building such sets.) The process is repeated until a fixed point is reached—that is, until none of the deletions attempted in the current round makes a difference to the output behaviour on that input set. The approach can therefore be applied to a wide variety of languages: there is no dependency on a model of the language’s semantics; all that is needed is access to the source code and a compiler. This enables its use on many different kinds of computer system. For example, the talk included an example of slicing a program in a graphics-description language, asking the question “what parts of the code are used in producing this sub-section of the diagram?”.
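Here is a much-simplified sketch of that deletion loop, just to show its shape. I assume the “program” is a self-contained Python source string, the observed behaviour is whatever it prints for each test input, and deletion is attempted one line at a time; none of this is the actual tooling described in the talk, which works over arbitrary languages via their build systems.

```python
# Simplified observation-based slicing: repeatedly try deleting single lines,
# keeping a deletion only if the observed output on every test input is
# unchanged, until a fixed point is reached.

import io, contextlib

def behaviour(source, test_inputs):
    """Run the candidate program on each input, capturing printed output;
    a program that fails to compile or run gets a distinct marker."""
    results = []
    for x in test_inputs:
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(source, {"INPUT": x})
            results.append(buf.getvalue())
        except Exception:
            results.append("<error>")
    return results

def observation_slice(lines, test_inputs):
    reference = behaviour("\n".join(lines), test_inputs)
    changed = True
    while changed:                      # repeat until a fixed point
        changed = False
        for i in range(len(lines)):
            if lines[i] is None:
                continue
            candidate = [l for j, l in enumerate(lines)
                         if l is not None and j != i]
            if behaviour("\n".join(candidate), test_inputs) == reference:
                lines[i] = None         # deletion made no observable difference
                changed = True
    return [l for l in lines if l is not None]

program = [
    "total = 0",
    "count = 0",
    "for n in INPUT:",
    "    total += n",
    "    count += 1",
    "print(total)",
]
# The two `count` lines are sliced away; everything affecting the printed
# `total` survives.
print(observation_slice(program, [[1, 2, 3], [], [5]]))
```

In reality one would invoke the actual compiler and test harness as subprocesses rather than exec-ing strings, and (as I understand it) try deleting runs of several consecutive lines at a time to keep the process tractable.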
Of course, there is a cost to pay for this: the lack of a formal guarantee of correctness across the input space. By using only a sample of the inputs, there is a possibility that some behaviour was missed. By contrast, methods that work with a formal model of dependencies make a conservative guarantee that, regardless of inputs, the slice will be correct. Clearly, this is better. But there are limits to what can be achieved using those methods too; by using a model that only eliminates a statement if it is guaranteed under that model never to have a dependency, they ignore two situations. The first is that the model may not be powerful enough to recognise that a particular dependency cannot arise, even though that is formally true (this kind of thing crops up all over the place; I remember getting frustrated with the Java compiler, which used to complain that a particular variable “might not have been initialised” when it was completely obvious that it must have been—for example, where a variable was declared before a conditional, given a value in every possible branch, and then used after that statement). The second—and it depends on the application as to whether this matters—is that a formal dependency might crop up so infrequently as not to matter in practice. By taking an empirical approach, we observe programs as they are being run, rather than as they could be run, and perhaps therefore find a more rapid route to, e.g., bug-finding.
In the question session after the talk, one member of the audience (sorry, I didn’t notice who it was) declared that they found this approach “depressing”. Not “wrong” (though other people may have thought that). The source of the depression, I would contend, is what I will call the fallacy of formal representations. There is a sense that permeates computer science that, because we have an underlying formal representation for our topic of study, we ought to be doing nothing other than producing tools and techniques that work on that formal representation. Empirical techniques are seen as both dangerous—they produce results that cannot be guaranteed, mathematically, to hold—and a waste of time—we ought to be spending our time producing better techniques that formally analyse the underlying representation, rather than pissing around with empirical ones that will eventually be supplanted by formal techniques anyway.
I would disagree with this. “Eventually” is a long time, and some areas have simply stalled—for want of better models, or in terms of practical application to programs and systems of a meaningful size. There is a lot of code that doesn’t require the level of guarantee that the formal techniques provide, and we are holding ourselves back as a useful discipline if we focus purely on techniques that are appropriate for safety-critical systems, and dismiss techniques that are appropriate for, say, the vast majority of the million-plus apps in the app store.
Other areas of study—let’s call them “science”—are not held back by the same mental blockage. Biology and physics, for example, don’t throw their hands up in the air and say “nothing can be done”, “we’ll never really understand this”, just because there isn’t an underlying, complete set of scientific laws available a priori. Instead, a primary subject of study in those areas is the discovery of those laws, or at least useful approximations to them. Indeed, developing empirical techniques to discover new things about the phenomena under study is an important part of these subjects, to the extent that Nobel Prizes have been won (e.g. 1977; 2003; 1979; 2012; 2005) for measurement and observation techniques that give better insight into physical or biological phenomena.
We should be taking—alongside the more formal approaches—a similar attitude in computer science. Yes, many times we can gain a lot by looking at the underlying formal representations that produce, e.g., program behaviour. But in many cases we would be better served by treating those behaviours as data and applying the increasingly powerful data science techniques that we have in order to develop an understanding of them. We are good at advocating the use of data science in other areas of study; less good at taking those techniques and applying them to our own. I would contend that the fallacy of formal representations is exactly the reason for this: because we have access to that underlying level, we cannot shake the conviction that, with sufficient thought and care, we could extract the information we need by ratiocination about that material, rather than “resorting” to looking at the resulting behaviour empirically. This also prevents the development of good intermediate techniques, such as those that use ideas like interval arithmetic and qualitative reasoning to analyse systems.
Mathematics has a similar problem. We are accustomed to working with proofs—and rightly so; these are the bedrock of what makes mathematics mathematics—and also with informal, sketched examples in textbooks and talks. But we lack an intermediate level of “data-rich mathematics”, which starts from formal definitions and uses them to produce lots of examples of the objects or processes in question, to be analysed empirically and then used as the inspiration for future proofs, conjectures and counterexamples. We have failed, again due to the fallacy of formal representations, to develop a good experimental methodology for mathematics.
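To give a deliberately trivial toy example of the sort of thing I mean: start from a formal definition (permutations of {0, …, n−1}, say), generate lots of instances, and look at the resulting data for patterns worth proving.

```python
# Generate many random permutations and look at the number of fixed points.
# The data suggests a conjecture: "the average number of fixed points is 1,
# independent of n", which one would then go away and try to prove.

import random

def fixed_points(p):
    return sum(1 for i, x in enumerate(p) if i == x)

for n in (5, 10, 50, 200):
    trials = 20000
    total = 0
    for _ in range(trials):
        p = list(range(n))
        random.shuffle(p)
        total += fixed_points(p)
    print(n, total / trials)   # hovers around 1.0 for every n
```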
It is interesting to wonder why empirical techniques are so successful in the natural sciences, yet are treated with at best a feeling of depressed compromise, at worst complete disdain, in computer science. One issue seems to be the brittleness of computer systems. We resort (ha!) to formal techniques because there is a feeling that “things could slip through the net” if we use empirical techniques. This seems to be much less the case in, say, the biological sciences. Biologists will, for example, be confident that they have mapped out a signalling pathway fairly accurately having done experiments on, say, a few hundred cells. Engineers will feel that they understand the behaviour of some material having carefully analysed a few dozen samples. There isn’t the same worry that, for example, there is some critical tight temperature range, environmental condition, or similar, that could cause the whole system to behave in a radically different way. Something about programs feels much more brittle: you just need the right (wrong!) state to be reached for the whole system to change its behaviour. This is the blessing and the curse of computer programming; you can do anything, but you can also do anything, perhaps by accident. A state that is more-or-less the same as another state can be transformed into something radically different by a single line of code, which might leave the first state untouched (think about a divide-by-zero error).
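A contrived little illustration of that brittleness (my example, nothing more): two calls whose inputs are more-or-less the same state, one of which sails through and one of which takes the whole computation down on a single line.

```python
def percent_change(readings, baseline):
    # Percentage change of each reading relative to a baseline value.
    return [100.0 * (r - baseline) / baseline for r in readings]

print(percent_change([10.0, 20.0, 30.0], 10.0))  # fine: [0.0, 100.0, 200.0]
print(percent_change([10.0, 20.0, 30.0], 0.0))   # ZeroDivisionError: a "nearby" state, radically different behaviour
```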
Perhaps, then, the fault is with language design, or with programming practice. We are stuck with practices from an era when every line of code mattered (in memory cost or execution time), so we feel the need to write very tight, brittle code. Could we redesign languages so that they don’t have this brittleness, thus obviating the need for the formal analysis methods that exist primarily to capture the behaviours that don’t occur with “typical” inputs? What if we could be confident—even, perhaps, mathematically sure—that there were no weird pathological routes through code? Alternatively, what if throwing more code at a problem actually made us more confident of it working well: rather than having tight single paths through the code, have the same “behaviour” carried out and checked by a large number of concurrent processes that interact in a way that doesn’t have the dependencies of traditional concurrency models (when was the last time that a biosystem deadlocked, or a piece of particle physics, for that matter?). What if, each time we added a new piece of code to a system, we felt that we were adding something of value that interacted in only a positive way with the remainder of the code, rather than fearing that we had opened up some potential interaction or dependency that would cause the system to fail? What if a million lines of code couldn’t be wrong?
Sofa-ry, so good-y
Thursday, September 8th, 2016
Design Failures (1)
Thursday, September 8th, 2016
Here is an interesting design failure. A year or two ago, the entry gates at my local stations carried a message from a charity with the slogan “no-one in Kent should face cancer alone”. A good message, and basically well thought out. The problem is that it was printed across the two sides of the entry gates, which open when you put your ticket in: as a result, one side of the gate says “face cancer alone”, and that part of the message is separated out on its own when the gates open:
Interestingly, someone clearly noticed this. When a repeat of the campaign ran this year, with more-or-less the same message, it had been modified so that one side of the gate now says “don’t face cancer alone”:
There’s a design principle in here somewhere, along the lines of thinking through the whole lifetime of a user’s interaction with the system, rather than relying on a static snapshot of the design to envision what it is like.