“Real Artists Ship”

Colin Johnson’s blog


Archive for the ‘Computing and IT’ Category

Machine Learning with Context (1)

Friday, March 3rd, 2017

Two interesting machine learning/AI challenges (emerging from a chat with my former PhD student Lawrence Beadle yesterday):

  1. Devise a system for automatically doing substitutions in online grocery shopping, including handling the case where a Manchester City-themed birthday cake is recognised as not being an adequate substitute for a Manchester United-themed birthday cake, despite both being birthday cakes of the same weight and price, and both having the word “Manchester” in the name.
  2. Devise a forecasting system that will not predict that demand for turkeys will be enormous on December 27th, or for flowers on February 15th (a toy sketch of this failure mode follows below).

Both of these need some notion of context, and perhaps even explanation.
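
Here is a minimal sketch, in Python, of how the second challenge goes wrong for a context-free forecaster. The numbers, the names and the model are invented purely for illustration, not taken from any real system:

```python
# A minimal sketch of the post-holiday forecasting failure. All numbers,
# names, and the model itself are made up purely for illustration.

def naive_forecast(history, window=3):
    """Forecast the next day's demand as the mean of the last `window` days."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Daily turkey sales in the run-up to Christmas (fabricated figures).
sales = {
    "Dec 21": 120,
    "Dec 22": 300,
    "Dec 23": 900,
    "Dec 24": 1500,  # the spike: everyone buys their turkey now
}

prediction = naive_forecast(list(sales.values()))
print(f"Predicted demand just after Christmas: {prediction:.0f} turkeys")
# -> roughly 900, when the true demand is close to zero. The model has no
#    notion of *why* the spike happened, so it cannot know that the cause
#    (Christmas dinner) has already passed.
```

Anything that extrapolates from the recent past alone will do something like this; avoiding it needs some representation of why the spike happened—context—and arguably an explanation that the cause has now passed.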

Contradiction in Law

Saturday, February 11th, 2017

Why aren’t more legal-regulatory systems in conflict? A typical legal decision involves a number of different legal, contractual, and regulatory systems, each of which consists of thousands of statements of law and precedents, the latter only fuzzily fitting the current situation, with little meta-law to describe how these different systems and statements interact. Why, then, do court cases and other legal decisions so rarely, if ever, end with the judgement-makers throwing up their hands and saying “this says this, this says this, they contradict, therefore we cannot come to a well-defined decision”? Somehow, we avoid this situation—decisions are reached fairly definitively, albeit sometimes controversially. I cannot imagine that the people framing laws and regulations have a sufficiently wide knowledge of the entire system to enable them to add new rules without contradiction. Perhaps something else is happening: the “frames” (in the sense of the frame problem in AI) are sufficiently constrained and non-interacting that it is possible to make statements without running the risk of contradiction elsewhere.

If we could understand this, could we learn something useful about how to build complex software systems?

Kruft (1)

Sunday, November 20th, 2016

I often refer to the process of taking the content that I want to communicate and putting it into the 200-by-300 pixel box reserved for content in the middle of our University’s webpages as “putting the clutter in”. I get the impression that my colleagues on the Marketing and Communication team don’t quite see it this way.

Mistypings

Thursday, November 17th, 2016

Been doing quite a bit of Python programming this week. So far I have managed to mistype python as:

  • pyhton
  • phythion
  • phtyton
  • phtyon
  • phytion
  • pthyon
  • phyton
  • pytohn

Feedback (1)

Sunday, November 6th, 2016

Bought an album called Sex from Amazon a few days ago (by the excellent jazz trio The Necks). Inevitably, this caused the following request for feedback to appear in my inbox a few days later:

"Colin, did 'Sex' meet your expectations? Review it on  amazon.co.uk

Followed, when I next went onto the Amazon website, by this:

You purchased: Sex (Used)

The Fallacy of Formal Representations

Friday, September 9th, 2016

I went to an interesting talk by Jens Krinke earlier this week at UCL (the video will eventually be on that page). The talk was about work by him and his colleagues on observation-based program slicing. The general idea of program slicing is to take a variable value (or, indeed, any state description) at a particular point in a program, and remove parts of the program that could not affect that particular value. This is useful, e.g. for debugging code—it allows you to look at just those statements that are influential on a statement that is outputting an undesirable value—and for other applications such as investigating how closely-coupled code is, helping to split code into meaningful sub-systems, and code specialisation.

The typical approach to slicing is to use some formal model of dependencies in the language to eliminate statements. A digraph of dependencies is built, and paths that don’t eventually lead to the node of interest are eliminated. This has had some successes, but as Jens pointed out in his talk, progress has largely stalled for the last decade. The formal models of dependency that we currently have only allow us to discover certain kinds of dependency, and using a slicer on a particular program requires a model of that language’s semantics to be available. This latter point is particularly salient in the contemporary computing environment, where “programs” are typically built up from a number of cooperating systems, each of which might be written in a different language or framework. In order to slice the whole system, a consistent, multi-language framework would need to be available.

As a contrast to this, he proposed an empirical approach. Rather than taking the basic unit to be a “statement” in the language, take it to be a line of code; in most languages the two largely coincide. Then, work through the program, deleting lines one by one, recompiling, and checking whether the elimination of each line makes a difference in practice to the output on a large, comprehensive set of inputs (this over-simplifies the creation of that input test set: producing a thorough set of input examples can be difficult, since sometimes a very specific combination of inputs is needed to trigger a specific behaviour later in the execution; nonetheless, techniques exist for building such sets). This process is repeated until a fixed point is reached—i.e. none of the eliminations in the current round makes a difference to the output behaviour on that input set. Because there is no dependency on a model of the language’s semantics—all that is needed is access to the source code, a compiler, and a way to run the program—this can be applied to a wide variety of languages, and to many different kinds of computer system. For example, the talk included an example of using it to slice a program in a graphics-description language, asking the question “what parts of the code are used in producing this sub-section of the diagram?”.
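
In outline, the loop looks something like the following sketch—a much-simplified caricature rather than the actual tool. The run_program helper, which would write the candidate out, compile it, and run it on one input (returning None on failure), is an assumption of mine for the sake of illustration:

```python
# A much-simplified sketch of observation-based slicing. `run_program` is a
# stand-in for "write the candidate out, compile it, run it on one input,
# and return its output (or None if it fails to build or run)"; its real
# implementation depends entirely on the language being sliced.

def observation_based_slice(lines, test_inputs, run_program):
    # Record the reference behaviour of the full program on the chosen inputs.
    reference = [run_program(lines, t) for t in test_inputs]

    changed = True
    while changed:  # repeat until a fixed point: no single line can be removed
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]  # delete line i
            outputs = [run_program(candidate, t) for t in test_inputs]
            if outputs == reference:  # behaviour preserved on these inputs
                lines = candidate     # keep the deletion
                changed = True
                break                 # re-scan the now-smaller program
    return lines
```

Everything rests on test_inputs: any behaviour those inputs never exercise is invisible to the procedure, which is precisely the trade-off discussed below.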

Of course, there is a cost to pay for this. That cost is the lack of a formal guarantee of correctness across the input space. By using only a sample of the inputs, there is a possibility that some behaviour was missed. By contrast, methods that work with a formal model of dependencies make a conservative guarantee that, regardless of inputs, the slice will be correct. Clearly, this is better. But there are limits to what can be achieved using those methods too; by only allowing the elimination of a statement if it is guaranteed under the model never to have a dependency, they ignore two situations. The first is that the model may not be powerful enough to recognise that a statement has no dependency, even though that independence formally holds (this kind of thing crops up all over the place; I remember getting frustrated with the Java compiler, which used to complain that a particular variable “might not have been initialised” when it was completely obvious that it must have been—e.g. where a variable was declared before an if statement, given a value in both possible branches, and then used after that statement). The second—and whether this matters depends on the application—is that a formal dependency might crop up so infrequently as not to matter in practice. By taking an empirical approach, we observe programs as they are actually run, rather than as they could be run, and perhaps thereby find a more rapid route to, e.g., bug-finding.

In the question session after the talk, one member of the audience (sorry, I didn’t notice who it was) declared that they found this approach “depressing”. Not “wrong” (though other people may have thought that). The source of the depression, I would contend, is what I will call the fallacy of formal representations. There is a sense that permeates computer science that, because we have an underlying formal representation for our topic of study, we ought to be doing nothing other than producing tools and techniques that work on that formal representation. On this view, empirical techniques are both dangerous—they produce results that cannot be guaranteed, mathematically, to hold—and a waste of time: we ought instead to be producing better techniques that formally analyse the underlying representation, rather than pissing around with empirical techniques that will eventually be supplanted by formal ones.

I would disagree with this. “Eventually” is a long time, and some areas have just stalled—for want of better models, or of practical application to programs/systems of a meaningful size. There is a lot of code that doesn’t require the level of guarantee that the formal techniques provide, and we hold ourselves back as a useful discipline if we focus purely on techniques that are appropriate for safety-critical systems and dismiss techniques that are appropriate for, say, the vast majority of the million-plus apps in the app store.

Other areas of study—let’s call them “science”—are not held up by the same mental blockage. Biology and physics, for example, don’t throw their hands up in the air and say “nothing can be done”, “we’ll never really understand this”, just because there isn’t an underlying, complete set of scientific laws available a priori. Instead, a primary subject of study in those areas is the discovery of those laws, or at least useful approximations thereto. Indeed, the development of empirical techniques to discover new things about the phenomena under study is an important part of these subject areas, to the extent that Nobel Prizes have been won (e.g. 1977; 2003; 1979; 2012; 2005) for the development of various measurement and observation techniques to get a better insight into physical or biological phenomena.

We should be taking—alongside the more formal approaches—a similar attitude in computer science. Yes, many times we can gain a lot by looking at the underlying formal representations that produce, e.g., program behaviour. But in many cases we would be better served by taking these behaviours as data and applying the increasingly powerful data science techniques that we have in order to develop an understanding of them. We are good at advocating the use of data science in other areas of study; less good at taking those techniques and applying them to our own. I would contend that the fallacy of formal representations is exactly the reason for this: because we have access to that underlying level, we convince ourselves that, with sufficient thought and care, we ought to be able to extract the information we need by ratiocination about that material, rather than “resorting” to looking at the resulting behaviour in an empirical way. This also prevents the development of good intermediate techniques, e.g. those that use ideas such as interval arithmetic and qualitative reasoning to analyse systems.

Mathematics has a similar problem. We are accustomed to working with proofs—and rightly so, these are the bedrock of what makes mathematics mathematics—and also with informal, sketched examples in textbooks and talks. But we lack an intermediate level of “data-rich mathematics”, which starts from formal definitions, uses them to produce lots of examples of the objects/processes in question, analyses those examples empirically, and then uses them as the inspiration for future proofs, conjectures and counterexamples. We have failed, again due to the fallacy of formal representations, to develop a good experimental methodology for mathematics.
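
As a toy illustration of the kind of workflow I have in mind (my own example, not anyone’s established methodology): start from the definition of a permutation, generate a few thousand random ones, and count their fixed points. The data quickly suggests the conjecture—which happens to be a true, provable theorem—that a random permutation has on average exactly one fixed point, whatever its size:

```python
# A tiny experiment in "data-rich mathematics": generate many random
# permutations, count their fixed points, and look for a pattern worth
# conjecturing (and then, separately, proving).
import random

def count_fixed_points(perm):
    return sum(1 for i, p in enumerate(perm) if i == p)

samples = 10_000
for n in (5, 10, 100):
    total = sum(
        count_fixed_points(random.sample(range(n), n)) for _ in range(samples)
    )
    print(f"n = {n:3d}: average number of fixed points ≈ {total / samples:.3f}")

# The averages all hover around 1.0, suggesting the (true, provable)
# conjecture that a uniformly random permutation of any size has exactly
# one fixed point in expectation.
```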

It is interesting to wonder why empirical techniques are so successful in the natural sciences, yet are treated with at best a feeling of depressed compromise, at worst complete disdain, in computer science. One issue seems to be the brittleness of computer systems. We resort (ha!) to formal techniques because there is a feeling that “things could slip through the net” if we use empirical techniques. This seems to be much less the case in, say, the biological sciences. Biologists will, for example, be confident that they have mapped out a signalling pathway fairly accurately having done experiments on, say, a few hundred cells. Engineers will feel that they understand the behaviour of some material having carefully analysed a few dozen samples. There isn’t the same worry that, for example, there is some critical, tight temperature range, environmental condition, or similar, that could cause the whole system to behave in a radically different way. Something about programs feels much more brittle; you just need the right (wrong!) state to be reached for the whole system to change its behaviour. This is the blessing and the curse of computer programming: you can do anything, but you can also do anything, perhaps by accident. A state that is more-or-less the same as another state can be transformed into something radically different by a single line of code, which might leave the first state untouched (think about a divide-by-zero error).

Perhaps, then, the fault is with language design, or programming practice. We are stuck with practices from an era where every line of code mattered (in memory cost or execution time), so we feel the need to write very tight, brittle code. Could we redesign languages so that they don’t have this brittleness, thus obviating the need for the formal analysis methods that are there primarily to capture the behaviours that don’t occur with “typical” inputs? What if we could be confident—even, perhaps, mathematically sure—that there were no weird pathological routes through code? Alternatively, what if throwing more code at a problem actually made us more confident of it working well: rather than having tight single paths through code, have the same “behaviour” carried out and checked by a large number of concurrent processes that interact in a way that doesn’t have the dependencies of traditional concurrency models (when was the last time a biosystem deadlocked, or a piece of particle physics, for that matter?). What if, each time we added a new piece of code to a system, we felt that we were adding something of value that interacted in only a positive way with the remainder of the code, rather than fearing that we had opened up some potential interaction or dependency that would cause the system to fail? What if a million lines of code couldn’t be wrong?

Indirect Remembering

Tuesday, August 23rd, 2016

Here’s an interesting phenomenon about memory. I sometimes remember things in an indirect way, that is, rather than remembering something directly, I remember how it deviates from the default. Two examples:

  • On my father’s old car, I remembered how to open the window as “push the switch in the opposite way to what seems like the right direction.”
  • On my computer, I remember how to find things about PhD vivas as “really these ought to be classified under research, but there’s already a directory called ‘external examining’ under teaching, so go in there and look for the directory called extExams and then the sub-directory called PhD”.

It makes me wonder what other things I do have a similarly convoluted story behind them in my memory, but where the process all happens pre-consciously.

BCI (1)

Saturday, May 21st, 2016

Brain-computer interfaces are currently at about the same stage of development as a system where a moderately skilled person throws golf balls at a keyboard from around 30 feet away.

Personal Practice (1)

Tuesday, April 26th, 2016

My colleague Sally Fincher has pointed out that one interesting aspect of architecture and design academics is that the vast majority of them continue with some kind of personal practice in their discipline alongside carrying out their teaching and research work. This contrasts with computer science, where such a combination is rather unusual. It might be interesting to do a pilot scheme that gave some academic staff a certain amount of time to do this in their schedule, and see what influence it has on their research and teaching.

Interestingly, a large proportion of computer science students have a personal practice in some aspect of computing/IT. It is striking quite how many of our students are running a little web design business or similar on the side, alongside their studies.

Creative (1)

Wednesday, April 6th, 2016

An interesting challenge for computational creativity research. Build a system which takes in a large dataset, and which builds an interesting and informative infographic from that data.

The Diversity of My Interests (1)

Thursday, March 10th, 2016

What TripAdvisor thinks I should be interested in; an interesting combination:

Berghain, Berlin; Abbey Farm Bed and Breakfast, Hinckley

Tech is Blue

Saturday, November 28th, 2015

Here’s an interesting and unexpected result. Do a Google image search for “tech”. You will, at the time of writing, get something like this:

[Screenshot: Google image search results for “tech”—almost entirely blue images]

Tech is clearly blue. The same is true for “digital”:

[Screenshot: Google image search results for “digital”—again, overwhelmingly blue]

and for “cyber”:

[Screenshot: Google image search results for “cyber”—overwhelmingly blue once more]

I had to make sure that the search-by-colour filter was turned off. This is really surprising to me. I have seen lots of these kinds of images before, but I am gobsmacked at how dominant this colour scheme is as a way of depicting technology. Where does it come from? Some vague notion of “computers are made of electricity, and electricity looks something like a lightning bolt going across a twilit sky”? The second choice seems to be some kind of green-screen terminal green, which is vaguely comprehensible; but, even so, odd. I am in my forties and probably of the youngest generation to have used a terminal for real, and even then only for a few years whilst I was at university.

I wonder what other hidden colour schemes there are out there?

Aside: our university timetable still calls classes held in a computer room “terminal” classes. I wonder what proportion of the students would have any idea why they have this name? I suspect that the vast majority just take it as an arbitrary signifier, and have no idea of its origins.

Training (1)

Thursday, September 10th, 2015

If I could harness the faith that administrators have in “training” to compensate for crappy user-focused software and bad user interfaces, I would be able to start the most powerful religion in the world.

Agility 17, Wisdom 8

Wednesday, September 9th, 2015

Software engineering education needs to give students a more nuanced understanding of software development processes than one which causes students to say, in effect, “There are two kinds of software development: waterfall, which is noisy and old-fashioned and so we won’t use it, and agile, which we will use because it means that we can do what we like.”

Abbrvs (2)

Sunday, August 30th, 2015

Sent this perfectly reasonably titled file as an email attachment to some colleagues:

00 A fullPack.png

Unfortunately, it ended up in the email attachments window looking like this:

00 A Fu...ck.pdf

…I’ll get me coat…

How? (1)

Sunday, July 12th, 2015

If we are writing a program that takes four numbers, a, b, c, and d, and adds the four of them together, how do we know that (for example) a+b is a good program fragment to write? If we could understand that sort of question in general, we would be a long way towards building a scalable system that could write code automatically.
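
To make the gap concrete, here is a deliberately dumb sketch of the brute-force alternative—my own illustration, not a real synthesis system, with made-up examples. Nothing in this loop can tell us that the partial fragment a+b was “on the right track” before a whole candidate happens to match, which is exactly the judgement a scalable code-writing system would need to make:

```python
# A deliberately naive synthesiser for this toy task: enumerate "sum of a
# subset of the inputs" as candidate programs and keep the first one that
# matches the examples. It only ever judges *complete* candidates -- nothing
# here can tell that the fragment a+b was "on the right track" before a
# whole matching expression happens to be assembled.
from itertools import combinations

names = ("a", "b", "c", "d")

# Input/output examples for "add all four numbers together" (made-up values).
examples = [((1, 2, 3, 4), 10), ((0, 5, 5, 0), 10), ((2, 2, 2, 2), 8)]

def synthesise(examples):
    for size in range(1, len(names) + 1):
        for subset in combinations(range(len(names)), size):
            # Candidate program: the sum of the chosen variables.
            if all(sum(args[i] for i in subset) == out for args, out in examples):
                return "+".join(names[i] for i in subset)
    return None

print(synthesise(examples))  # -> "a+b+c+d", recognised only once complete
```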

You Didn’t Need to Do That

Wednesday, May 20th, 2015

This epitomises the idea “if you don’t have anything to say, you don’t have to say anything”. I think some people genuinely think that if there is a box on a web page for comments then they have been singled out from all the people on the web to make that comment, and so feel obliged to reply. Or, they were just being facetious ;-)

"Can the new owners re-invent BHS?" "Don't know depends on who you ask someone connected with business mabe."

Autocomplete Fail (1)

Monday, March 9th, 2015

I regularly use Skype to discuss technical issues with colleagues. As part of this we sometimes post code to each other using the Skype chat window. Something that I had forgotten is that certain strings of text—particularly those contained in parentheses—get automatically converted into smilies. As a result, occasionally this happens:

[Screenshot: a snippet of code pasted into Skype, with a smiley rendered in the middle of it]

Not what Radio-buttons are for

Wednesday, January 28th, 2015

How long ago was it when you FIRST visited us?

  • In the last year
  • More than a year ago
  • More than 2 years ago
  • More than 5 years ago
  • Before then
  • Never been
  • Not sure

Uh-oh (1)

Tuesday, November 11th, 2014

From a colleague’s email: “SharePoint is very precise and there is plenty of room for human error to interfere with the workflows.” Uh-oh.