“Real Artists Ship”

Colin Johnson’s blog


Archive for the ‘Changes of Perspective’ Category

3D vs. 2D Worlds

Tuesday, May 16th, 2017

What is the habitable surface of the world? Actually, that is the wrong question. The right question is “What is the habitable volume of the world?”. It is easy to think that the ratio of marine habitat to land habitat is about 2:1—that is what we see when we look at the globe. But, this ignores the fact that, to a first approximation, the oceans are habitable in three dimensions, whereas the surface of the earth is only habitable in two. This makes the habitable volume of the seas vastly larger than our surface-biased eyes first intuit.

Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better.

Wednesday, April 26th, 2017

Here’s something interesting. It is common for people in entrepreneurship and startup culture to fetishise failure—”you can’t be a proper entrepreneur until you’ve risked enough to have had a couple of failed businesses”. There’s some justification for this—new business ventures need to try new things, and it is difficult to predict in advance whether they will work. Nonetheless, it is not an unproblematic stance—I have written elsewhere about how this failure culture makes problematic assumptions about whether people’s finances and life circumstances actually allow them to fail without disastrous consequences.

But, the interesting point is this. No-one ever talks like this about jobs, despite the reality that a lot of people are going to try out a number of careers before finding the ideal one, or simply switch from career to career as the work landscape changes around them during their lifetime. In years of talking to students about their careers, I’ve never come across students adopting this “failure culture” about employeeship. Why is it almost compulsory for a wannabe entrepreneur to say that, try as they might, they’ll probably fail with their first couple of business ventures; yet it is deep defeatism to say “I’m going into this career; I’ll probably fail, but it’ll be a learning experience which’ll make me better in my next career”?

Interesting/Plausible

Friday, September 30th, 2016

A useful thought-tool that I learned from Tassos Stevens: “It is easier to make the interesting plausible, than the plausible interesting.”

The Fallacy of Formal Representations

Friday, September 9th, 2016

I went to an interesting talk by Jens Krinke earlier this week at UCL (the video will eventually be on that page). The talk was about work by him and his colleagues on observation-based program slicing. The general idea of program slicing is to take a variable value (or, indeed any state description) at a particular point in a program, and remove parts of the program that could not affect that particular value. This is useful, e.g. for debugging code—it allows you to look at just those statements that are influential on a statement that is outputting an undesirable value—and for other applications such as investigating how closely-coupled code is, helping to split code into meaningful sub-systems, and code specialisation.
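
As a toy illustration (my own, not taken from the talk), consider slicing this little Python fragment with respect to the value of total at its return statement; the statements that only affect count can be dropped without changing that value:

```python
# Original program: computes both a running total and a count.
def original(xs):
    total = 0
    count = 0
    for x in xs:
        total += x               # affects the slicing criterion
        count += 1               # does not affect `total`
    print("count:", count)       # irrelevant to the criterion
    return total                 # slicing criterion: value of `total` here

# Slice with respect to `total` at the return statement:
def sliced(xs):
    total = 0
    for x in xs:
        total += x
    return total

assert original([1, 2, 3]) == sliced([1, 2, 3]) == 6
```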

The typical method used in slicing is to use some formal model of dependencies in a language to eliminate statements. A digraph of dependencies is built, and paths that don’t eventually lead to the node of interest are eliminated. This has had some successes, but as Jens pointed out in his talk, progress on this has largely stalled for the last decade. The formal models of dependency that we currently have only allow us to discover certain kinds of dependency, and using a slicer on a particular program also needs a particular model of the language’s semantics to be available. This latter point is particularly salient in the contemporary computing environment, where “programs” are typically built up from a number of cooperating systems, each of which might be written in a different language or framework. In order to slice the whole system, a consistent, multi-language framework would need to be available.
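
In rough terms (a minimal sketch of my own, with made-up statement labels rather than the output of any real dependence analysis), the graph-based approach amounts to keeping only those statements from which the slicing criterion is reachable in the dependency digraph:

```python
from collections import deque

# Hypothetical dependency digraph: statement -> statements it depends on.
# (In a real slicer these edges come from data- and control-dependence analysis.)
deps = {
    "s4_return_total": ["s2_total_update"],
    "s2_total_update": ["s1_init_total", "s0_loop_header"],
    "s3_count_update": ["s0_loop_header"],
    "s1_init_total": [],
    "s0_loop_header": [],
}

def slice_statements(criterion, deps):
    """Keep every statement reachable from the criterion via dependency edges."""
    keep, frontier = {criterion}, deque([criterion])
    while frontier:
        stmt = frontier.popleft()
        for d in deps.get(stmt, []):
            if d not in keep:
                keep.add(d)
                frontier.append(d)
    return keep

# Statements not in this set (here, the count update) can be eliminated.
print(slice_statements("s4_return_total", deps))
```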

As a contrast to this, he proposed an empirical approach. Rather than taking the basic unit as being a “statement” in the language, take it as a line of code; in most languages these largely coincide. Then, work through the program, deleting lines one by one, recompiling, and checking whether the elimination of each line makes a difference in practice to the output on a large, comprehensive set of inputs. (This over-simplifies the process of creating that input test set: programs can be complex entities for which producing a thorough set of input examples is difficult, as sometimes a very specific input is needed to trigger a specific behaviour later in the execution; nonetheless, techniques exist for building such sets.) This process is repeated until a fixed point is found—i.e. none of the eliminations in the current round made a difference to the output behaviour for that specific input set. Because there is no dependency on a model of the language’s semantics (all that is needed is access to the source code and a compiler), this can be applied to a wide variety of different languages, and hence to many different kinds of computer system. For example, the talk included an example of using it to slice a program in a graphics-description language, asking the question “what parts of the code are used in producing this sub-section of the diagram?”.
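
A minimal sketch of that deletion loop as I understood it (the compiles and same_observed_output helpers stand in for whatever build-and-run harness is actually used, and the real work deletes windows of lines and includes many refinements not shown here):

```python
def observation_based_slice(lines, compiles, same_observed_output):
    """Repeatedly try deleting single lines; keep a deletion only if the
    program still compiles and behaves identically on the chosen test inputs.
    Stop at a fixed point where no further deletion survives."""
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + 1:]
            if compiles(candidate) and same_observed_output(candidate):
                lines = candidate   # the deletion made no observable difference
                changed = True
            else:
                i += 1              # keep the line and move on
    return lines
```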

Of course, there is a cost to pay for this. That cost is the lack of a formal guarantee of correctness across the input space. By using only a sample of the inputs, there is a possibility that some behaviour was missed. By contrast, methods that work with a formal model of dependencies make a conservative guarantee that, regardless of inputs, the slice will be correct. Clearly, this is better. But there are limits to what can be achieved using those methods too; by using a model that only allows the elimination of a statement if it is guaranteed under that model to never have a dependency, they ignore two situations. The first is that the model may not be powerful enough to recognise that a statement is in fact independent, even though that independence formally holds (this kind of thing crops up all over the place; I remember getting frustrated with the Java compiler, which used to complain that a particular variable value “might not have been initialised” when it was completely obvious that it must have been; e.g. in the situation where a variable was declared before an if statement, given a value in both possible branches, and then used after that statement). The second—and it depends on the application as to whether this matters—is that a formal dependency might crop up so infrequently as to not matter in practice. By taking an empirical approach, we observe programs as they are being run, rather than as they could be run, and perhaps thereby find a more rapid route to, e.g., bug-finding.

In the question session after the talk, one member of the audience (sorry, I didn’t notice who it was) declared that they found this approach “depressing”. Not “wrong” (though other people may have thought that). The source of the depression, I would contend, is what I will call the fallacy of formal representations. There is a sense that permeates computer science that, because we have an underlying formal representation for our topic of study, we ought to be doing nothing other than producing tools and techniques that work on that formal representation. On this view, empirical techniques are both dangerous—they produce results that cannot be guaranteed, mathematically, to hold—and a waste of time—we ought instead to be spending our time producing better techniques that formally analyse the underlying representation, rather than pissing around with empirical techniques that will eventually be supplanted by formal ones.

I would disagree with this. “Eventually” is a long time, and some areas have simply stalled—for want of better models, or in terms of practical application to programs/systems of a meaningful size. There is a lot of code that doesn’t require the level of guarantee that the formal techniques provide, and we hold ourselves back as a useful discipline if we focus purely on techniques that are appropriate for safety-critical systems, and dismiss techniques that are appropriate for, say, the vast majority of the million+ apps in the app store.

Other areas of study—let’s call them “science”—are not held up by the same mental blockage. Biology and physics, for example, don’t throw their hands up in the air and say “nothing can be done”, “we’ll never really understand this”, just because there isn’t an underlying, complete set of scientific laws available a priori. Instead, a primary subject of study in those areas is the discovery of those laws, or at least useful approximations thereto. Indeed, the development of empirical techniques to discover new things about the phenomena under study is an important part of these subject areas, to the extent that Nobel Prizes have been won (e.g. 1977; 2003; 1979; 2012; 2005) for the development of various measurement and observation techniques to get a better insight into physical or biological phenomena.

We should be taking—alongside the more formal approaches—a similar attitude in computer science. Yes, many times we can gain a lot by looking at the underlying formal representations that produce e.g. program behaviour. But in many cases we would be better served by taking these behaviours as data, and applying the increasingly powerful data science techniques that we have in order to develop an understanding of them. We are good at advocating the use of data science in other areas of study; less good at taking those techniques and applying them to our own area. I would contend that the fallacy of formal representations is exactly the reason for this: because we have access to that underlying level, we convince ourselves that, with sufficient thought and care, we could extract the information we need by ratiocination about that material, rather than “resorting” to looking at the results in an empirical way. This also prevents the development of good intermediate techniques, e.g. those that use ideas such as interval arithmetic and qualitative reasoning to analyse systems.

Mathematics has a similar problem. We are accustomed to working with proofs—and rightly so, these are the bedrock of what makes mathematics mathematics—and also with informal, sketched examples in textbooks and talks. But, we lack an intermediate level of “data rich mathematics”, which starts from formal definitions, and uses them to produce lots of examples of the objects/processes in question, to be subsequently analysed empirically, in a data-rich way, and then used as the inspiration for future proofs, conjectures and counterexamples. We have failed, again due to the fallacy of formal representations, to develop a good experimental methodology for mathematics.
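
A small example of the sort of thing I mean (again, my own illustration): start from the formal definition of a permutation, generate lots of random instances, and look at the data on fixed points. The empirical means hover around 1, inviting the (true, and then provable) conjecture that the expected number of fixed points is 1 regardless of the size of the permutation.

```python
import random

def count_fixed_points(perm):
    """Number of positions i with perm[i] == i."""
    return sum(1 for i, p in enumerate(perm) if p == i)

# Generate many random permutations of each size and look at the data.
for n in (5, 20, 100):
    samples = []
    for _ in range(10_000):
        perm = list(range(n))
        random.shuffle(perm)
        samples.append(count_fixed_points(perm))
    print(f"n={n}: mean fixed points ~ {sum(samples) / len(samples):.3f}")
# The empirical means sit close to 1.0, suggesting a conjecture and then a proof.
```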

It is interesting to wonder why empirical techniques are so successful in the natural sciences, yet are treated with at best a feeling of depressed compromise, at worst complete disdain, in computer science. One issue seems to be the brittleness of computer systems. We resort (ha!) to formal techniques because there is a feeling that “things could slip through the net” if we use empirical techniques. This seems to be much less the case in, say, biological sciences. Biologists will, for example, be confident that they have mapped out a signalling pathway fairly accurately having done experiments on, say, a few hundred cells. Engineers will feel that they understand the behaviour of some material having carefully analysed a few dozen samples. There isn’t the same worry that, for example, there is some critical tight temperature range, environmental condition, or similar, that could cause the whole system to behave in a radically different way. Something about programs feels much more brittle; you just need the right (wrong!) state to be reached for the whole system to change its behaviour. This is the blessing and the curse of computer programming; you can do anything, but you can also do anything, perhaps by accident. A state that is more-or-less the same as another state can be transformed into something radically different by a single line of code, which might leave the first state untouched (think about a divide-by-zero error).
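
A trivial sketch of the sort of brittleness I mean, using nothing more than the divide-by-zero case mentioned above: two inputs that are more-or-less the same produce radically different behaviour, one of them losing the whole computation.

```python
def normalise(values, scale):
    # For almost every value of `scale` this behaves smoothly...
    return [v / scale for v in values]

print(normalise([1, 2, 3], 0.001))    # a state much like many others: fine
try:
    print(normalise([1, 2, 3], 0.0))  # ...but one nearby state changes everything
except ZeroDivisionError as e:
    print("whole computation lost:", e)
```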

Perhaps, then, the fault is with language design, or programming practice. We are stuck with practices from an era where every line of code mattered (in memory cost or execution time), so we feel the need to write very tight, brittle code. Could we redesign languages so that they don’t have this brittleness, thus obviating the need for the formal analysis methods that are there primarily to capture the behaviours that don’t occur with “typical” inputs? What if we could be confident—even, perhaps, mathematically sure—that there were no weird pathological routes through code? Alternatively, what if throwing more code at a problem actually made us more confident of it working well: rather than having tight single paths through code, have the same “behaviour” carried out and checked by a large number of concurrent processes that interact in a way that doesn’t have the dependencies of traditional concurrency models (when was the last time that a biosystem deadlocked, or a piece of particle physics, for that matter?). What if each time we added a new piece of code to a system, we felt that we were adding something of value that interacted in only a positive way with the remainder of the code, rather than fearing that we had opened up some kind of potential interaction or dependency that would cause the system to fail? What if a million lines of code couldn’t be wrong?

Personal Practice (1)

Tuesday, April 26th, 2016

My colleague Sally Fincher has pointed out that one interesting aspect of architecture and design academics is that the vast majority of them continue with some kind of personal practice in their discipline alongside carrying out their teaching and research work. This contrasts with computer science, where such a combination is rather unusual. It might be interesting to do a pilot scheme that gave some academic staff a certain amount of time to do this in their schedule, and see what influence it has on their research and teaching.

Interestingly, a large proportion of computer science students have a personal practice in some aspect of computing/IT. It is interesting to note quite how many of our students are running a little web design business or similar on the side, alongside their studies.

There’s no F in Strategy (and usually doesn’t need to be)

Thursday, February 11th, 2016

A while ago I read a little article whilst doing a management course that was very influential on me (I’ll find the reference and add it here soon). It argued that the process of building a team—in the strict sense a group of people who could really work closely and robustly together on a complex problem—was difficult, time-consuming and emotionally fraught, and that actually, for most business processes, there isn’t really any need to build a team as such. Instead, just a decently managed group of people with a well-defined goal was all that was needed for most activities. Indeed, this goes further; because of the stress and strain needed to build a well-functioning team in the strong sense of the word, it is really unproductive to do this, and risks fomenting a “team-building fatigue” in people.

I’m wondering if the same is true for the idea of strategy. Strategy is a really important idea in organisations, and the idea of strategic change is really important when a real transformation needs to be made. But, I worry that the constant demands to produce “strategies” of all sorts, at all levels of organisations, runs the danger of causing “strategy fatigue” too. We have to produce School strategies, Faculty strategies, University strategies, all divided un-neatly into research, undergraduate, and postgraduate, and then personal research Strategies, and Uncle Tom Cobleigh and all strategies. Really, we ought to be keeping the word and concepts around “strategy” for when it really matters; describing some pissant objective to increase the proportion of one category of students from 14.1% to 15% isn’t a strategy, it’s almost a rounding error. We really need to retain the term—and the activity—for when it really matters.

Basic Income (1)

Saturday, November 28th, 2015

It only struck me a few months ago that there is a decent minority of the political/business establishment who seem to believe that a large proportion of the population can live at a basic level without the need for any income, i.e. from some nebulous kind of family wealth. That’s not “to live well”, but the idea that the basics of housing, food, transport and basic personal care are just somehow “taken care of” in some vague way. You see this on Dragon’s Den, where entrepreneurs are urged to quit their job and show “real commitment” to their business idea. I’d always been rather bemused by statements such as this, but in light of the idea that the basics are “covered”, it makes sense—they are asking people to give up, as they see it, luxuries, not just the basics of living.

A Wild Idea for Treating Infectious Diseases

Monday, July 13th, 2015

Engineer a variant on a disease which spreads quicker within the organism, so that it drives out the standard variant in its niche. Engineer this variant with a genetic “self-destruct” switch which can be triggered by a standard drug. Then superinfect the patient with the new variant, wait until the new variant has taken over, then apply the drug to remove the infection from the system.

Seeming more Specialised than you Actually Are

Monday, June 29th, 2015

Sometimes it is important to present yourself as more specialised than you actually are. This can be true for individuals and for businesses. Take, for example, apparently successful businesses such as a company that builds websites specifically for doctors’ surgeries, or one that handles payments from parents for school activities.

Woaah there! What’s happening here? Surely any decent web design company can provide a website for a doctor’s surgery? The specific company might provide a tiny little bit more knowledge, but surely the knowledge required to write a decent website is around 99 percent of the knowledge required to write a doctor’s surgery website. Surely, handling payments from parents for school activities is just the same as, well, umm, handling payments, and there are plenty of companies that do that perfectly well.

This, of course, misses the point. The potential customers don’t know that, and they are likely to trust the over-specialised presentation rather than the generic one. Indeed, the generic one might sound a little bit shady, evasive or amateurish: “What kind of web sites do you make?”, “Well, all kinds really.”, “Yes, but what are you really good at?”, “Well, it doesn’t really matter, websites are all basically the same once you get into the code.”. Contrast that with “we make websites for doctors.” Simples, innit.

So that’s my business startup advice. Find an area that uses your skills, find some specialised application of those skills, then market the hell out of your skills in that specific area. You will know that your skills are transferrable—but, your potential customers won’t, and they will trust you more as a result.

I’ve noticed the same with trying to build academic collaborations. Saying “we do optimisation and data science and visualisation and all that stuff” doesn’t really cut it. I’ve had much more success starting with a specific observation—we can provide a way of grouping your data into similar clusters, for example—than trying to describe the full range of what contemporary data science techniques can do.

Similarly with courses. Universities have done well out of providing “MBA in Marketing for XX” or whatever, when the vast majority of the course might be generic marketing skills. Again, the point here is more one of trust than one of content.

“Who Bought you That?”

Monday, December 16th, 2013

I’ve noticed a communication difference between people like me, who grew up in small families without much of a tradition of present-giving, and people who grew up in big, richly-connected families where dozens of people exchange presents for Christmas and birthdays.

People in the latter group often ask the question “Who bought you that?” when enquiring about some day-to-day object—a scarf, a watch, a pen that I have. I always thought that this was a weird question—why on earth would you imagine that someone bought it for me? But, of course, to people from such a background, the idea that you would ever need to buy such day-to-day tchotchkes is weird. For their whole lives they’ve never had any need to buy all these little bits and pieces; ever since childhood they’ve had an endless supply of little day-to-day objects in the form of presents from cousins and great-aunts. Of course, they are in an economically neutral position, as they have had to keep up their part of the exchange.

For Real (1)

Wednesday, June 19th, 2013

It’s odd when something turns out to be real. In the back of my head I have a vague idea that brand names like “Dolce and Gabbana” are the invention of a chap called Toby working in a Soho ad agency in the ’70s. It is weird to see that they are actually a couple of real blokes in their 50s:

BBC News: Dolce and Gabbana sentenced to jail for tax evasion

We Don’t Take Comedy Seriously Enough

Monday, April 8th, 2013

Despite the rise and rise of complex, richly engaged comedy, people in other artforms still don’t have any respect for it. For the last few years I’ve been interested—in a rather inchoate way—in how comedy and contemporary classical music might interact, in particular whether the forms and structures of comedy provide an interesting and novel analogy for the structuring of a piece of music, or whether music-theatre can learn from comedy performance practices. I’d be interested, for example, if the tension inherent in a Stewart Lee performance, and the sophisticated use of reference and callbacks, could provide an emotional flavour that could be delivered in a musical way, or whether the emotional trajectory of Daniel Kitson’s storytelling performances could give us an idea of how to hold an audience for an extended period of time.

Very few people take this seriously. When someone raised a point like this with Larry Goves at a tutorial last summer, the response was uncomprehending. What could the mere stimulus-response of joke-laugh have to do with a sophisticated artform such as composing a string quartet? Similarly, at a meeting the other day, the idea that a contemporary music group might put on a joint event with a comedy group was treated with some distaste—”I don’t know if we want to be associated with that sort of thing”, whereas collaborations with poetry and art groups were greeted with enthusiasm.

I don’t want to suggest that all comedy is deep and profound—there is a big place for “summat as meks yer ears laff”. But, when some people in contemporary comedy are making a rich and distinctive contribution to new ways of taking an audience on an emotional trajectory, it is a pity that this is ignored by other artforms.

“You turn if you want to…”

Thursday, March 21st, 2013

Interesting attempt by Labour to shift the use of the phrase “U-turn” in politics:

I’ve never really liked the aversion to “U-turns” in politics. I can see that we don’t want people flip-flopping between decisions, but too strong an aversion to changing your mind in light of changing situations and new evidence can leave politics very unagile and leaden. It would be great if politicians could say “in response to overwhelming public pressure / new evidence … / the shift towards … /etc. we have decided to …” without opening themselves to accusations of U-turning.

The Revolution will be Computerised (1)

Tuesday, December 11th, 2012

I wonder how long things like “competence in IT and familiarity with a computerised environment” are going to continue to be listed as job requirements (and this for a lectureship post in Computer Science, natch). Surely “computerised” is the default now, and the odd, specific skill that you might be looking for on occasion is familiarity with the opposite? And, for that matter, I wonder how long information-systems-type courses are going to keep using the case study of a paper-based system being “computerised”.

Variations on Folk Sayings (13)

Sunday, May 27th, 2012

“If God had intended us to have wings, He would have made us fly.”

Zeitgeist (3)

Friday, May 4th, 2012

An interesting shift in perspective (from the middle of a recent discussion on Ask Metafilter):

met in a bar, but parents say that they met online

Zeitgeist (3)

Saturday, December 31st, 2011

Interesting perspective from a forum post that I read somewhere t’other day: someone (presumably someone growing up since the ubiquity of mobile phones) who saw landlines as a premium service (because the company had to “put all the wires in” to the house) and mobile service as the basic service (because you just issue someone with a handset and then you are done with it), and didn’t understand why the mobile service was the more expensive one. I wonder how much this is true: is mobile only a premium-price service because historically it was, or is it still more expensive to maintain and run the mobile infrastructure?

Personal Progress

Wednesday, September 21st, 2011

In today’s Guardian there is an article about a dispute between Julian Assange and publisher Canongate about whether the publisher is right to publish the early draft of an autobiography that they had commissioned and paid for, but where no final version has been received nor the advance returned to the publisher.

There are lots of complex legal and moral issues here, but I want to use this to raise an issue that I’ve been thinking about for a while. We always assume that the current state of a person is the one that is allowed to make decisions that override the wishes of the person at other times. Let’s just introduce some informal notation: let person_{time} represent the mental state of person at time, and person_{time1–time2} the generalisation to a range of times (to do this properly requires a lot more complexity, which would detract from the argument).

So, we can phrase our argument thus: why should Assange_{Sept 2011} be able to unmake a decision that Assange_{Dec 2010} made? That this is a meaningful question seems at first doubtful—he has changed his mind, and the current state of mind is the one that we universally accept as completely dominant over all other previous states of mind (except, perhaps, in cases of temporary “out of mindness” such as mania or drunkenness).

And yet, and yet…this seems to throw up some bizarre consequences. Consider for example the case of Alice, who from 1980 to 2010 wanted to will her money to her children. At the beginning of 2011 she converts to the Church of the Flying Spaghetti Monster, and changes her will to leave her money to the Church. A few months later, in a tragic incident involving a pasta strainer, Alice dies and all of her money gets left to the Church, in accordance with the will of Alice_{2011}. The much longer lasting will of Alice_{1980–2010} is completely ignored. Why should Alice_{1980–2010} not have some—indeed, most—of the rights over what happens at the end?
Note that we are not talking about the case where Alice_{2011} discovers that the beliefs Alice_{1980–2010} held were false—e.g. that the children to whom she was to leave her money were acting in some way that she disapproved of; we are just talking about the case where she “merely” changed her mind.

This gets even more complex when we think about future states too. Imagine depressive Bob, who has had quite a good time of it from 2005 to 2010, but in 2011 decides that it is too much and that Bob_{2011} wants to commit suicide, against the will of Bob_{2005–2010}. Perhaps this doesn’t matter—but what about the will of Bob_{2020}, who has successfully gone through therapy and is now basically happy with his life? Should Bob_{2020} not get some consideration, even some legal protection, from the murderous intentions of Bob_{2011}?

A somewhat lighter example concerns phrases like “their marriage failed”. If Charlie_{1990–2005} and David_{1990–2005} are both in a happy marriage, which falls apart in 2006–07, ending in divorce during 2008, why should the opinions of Charlie_{2008} and David_{2008} be the definitive opinion on the marriage as a whole? If, in a counterfactual world, David had had a sudden heart attack in 2004, the eulogy would have talked about his “happy marriage”; why is it suddenly rendered unhappy by the events of 2006–08?

Underpinning all of this seems to be some notion of “progress” throughout life. We have worked hard to be critical of naive notions of progress in other domains; we are, on the whole, critical of accounts of, say, politics or technology as being a universal progress towards better states. Yet in terms of an individual’s personal life, we are uncritical about this. There is the occasional exception—the temporary loss of mind discussed earlier, the person who loses decision-making capability due to mental illness, or the individual who is painted as “throwing away” their previous rationality for short-term gain (as in the Anna Nicole Smith case). But, on the whole, the idea that the current person has valid dominion over that person at other points in their life, particularly the past person, seems to dominate, and supports the notion that a person just makes “progress” throughout their life.

“Like an express train, with me standing at the platform…”

Thursday, September 15th, 2011

In universities we teach programming at breakneck speed, even if we think that we are doing it fairly gently. Contrast the teaching of programming with the teaching of mathematics. If we taught programming the way that we teach basic mathematics, we would spend a few hours a week for several years doing basic drill-and-practice on programming constructs: a few hundred for loops, a few thousand if statements, and so on, all in isolation, before we even thought of bringing them together. Why on earth should we believe that we can teach something of comparable complexity in a couple of terms of a-few-hours-a-week courses?

Why do we have this irrational exuberance in our ability to teach programming so quickly? Do we believe that it builds on transferrable skills that have been learned by that point? That students at that age (and who have elected, and been selected, to do computer science or a similar subject) are capable of coping with the pace? That programming is vastly easier than these mathematical skills? That we try to teach the mathematical skills at much too early an age?

I wonder if this is why we occasionally get student feedback comments that are like the one we got many years ago: “the course is like an express train, with me still standing at the platform”.

The Unbelievable Effectiveness of Public Services (1)

Thursday, October 21st, 2010

Imagine first-class mail being pitched as a new idea on Dragon’s Den.

You pick up letters locally for delivery across the country; sensible, something a lot of customers need. And you deliver them the next day; a good, if potentially expensive, service. How does collection get arranged: do you book a slot online for it to be collected, or take it to a central location in your local city or market town? No, you just put it in a box, and we will make one of these available in every settlement of significant size in the country. Okay…but what about the staff needed to collect and charge for the letters? How do you pay for them to be near the boxes? Or perhaps people need to pay online and then print out an e-ticket that they attach to the letter? No, what, you just sell stamps in small shops around the country. Good idea, but getting buy-in from lots of small businesses could be challenging.

Sounds like a good idea, well thought through. What is your price point? Ten pounds per letter? Five pounds? Two pounds; but what about the infrastructure costs? What…41p…? That’s ludicrous. I’m out.