One of the points where mathematics and day-to-day intuition jar is in estimating numbers of combinations and similar combinatorial quantities. I’ve just made a booking on Eurostar, and my confirmation code is a 6-letter code. Surely, my intuitive brain says, this isn’t enough: all of those people going on all of those journeys on those really long trains, day-in, day-out. Yet there are a vast number of possibilities; with one of the 26 letters of the alphabet in each of the 6 positions, there are 26^6 = 308,915,776 possible codes. Given that there are around 10 million Eurostar passengers each year, that is enough to allocate a unique code to every passenger for about thirty years. It then makes you wonder why some codes are so long, like the 90-digit MATLAB registration code that I had to type in by hand a couple of years ago.
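The arithmetic above is easy to check for yourself; a minimal sketch (the passenger figure is the round number quoted above, not an official statistic):

```python
# Back-of-the-envelope check of the booking-code arithmetic:
# assume codes draw from the 26-letter alphabet, repetition allowed.
ALPHABET_SIZE = 26
CODE_LENGTH = 6
PASSENGERS_PER_YEAR = 10_000_000  # rough figure quoted in the post

total_codes = ALPHABET_SIZE ** CODE_LENGTH
years_of_unique_codes = total_codes / PASSENGERS_PER_YEAR

print(total_codes)            # 308915776
print(years_of_unique_codes)  # just under 31 years
```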
I really really really wish we hadn’t settled on the term “statistically significant”. There’s just too much temptation to elide from “these results show that situation X is statistically significantly different to situation Y” to “the difference between X and Y is significant” to “the difference between X and Y is important”.
Statistical significance is about deciding whether it is reasonable to say that the difference between two things is not due to sampling error. Two things can be statistically significantly different and the magnitude of the difference of no “significance” (in the day-to-day sense) to the situation at hand.
We really should have gone for a term like “robustly distinguishable” or something that doesn’t convey the idea that the difference is important or large in magnitude.
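The distinction is easy to see with simulated data. Below is a sketch (all the numbers are made up for illustration): two huge samples whose true means differ by a hundredth of a standard deviation. The difference is "statistically significant" by any conventional threshold, yet negligible in magnitude.

```python
import math
import random
import statistics

random.seed(42)

n = 1_000_000
true_difference = 0.01  # tiny relative to the spread of the data (sd = 1)

x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [random.gauss(true_difference, 1.0) for _ in range(n)]

# Two-sample z-test: is the observed difference distinguishable from
# sampling error?
diff = statistics.fmean(y) - statistics.fmean(x)
se = math.sqrt(statistics.variance(x) / n + statistics.variance(y) / n)
z = diff / se
p = 2 * (1 - statistics.NormalDist().cdf(abs(z)))

print(f"observed difference: {diff:.4f}")  # trivially small in magnitude
print(f"z = {z:.1f}, p = {p:.2g}")         # yet comfortably "significant"
```

With a million observations per group, the standard error shrinks far below the true difference, so the test flags it robustly; whether a 0.01-standard-deviation difference matters is a question the p-value cannot answer.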
When I first started to meet humanities academics, it surprised me how many defined their interests in terms of nations: they were “Scottish historians”, their subject was “British cinema”, etc. There were even things like “American philosophy”—the idea that something as abstract as this can be influenced by something as concrete as nationhood still discombobulates me.
This struck me as rather odd. I’d assumed that this sort of characterisation would be super-naive. I would not have dreamed of asking a historian which country they specialised in: I assumed this would be like asking a mathematician which of the four basic arithmetic operations they specialised in, or (more controversial, this) a computer scientist which programming language they use.
I suppose I thought that at the research level, humanities would be characterised by larger, more abstract problems: the relationship between expressiveness and language, the common features of political systems throughout world history, the interplay between economic forces and art produced, etc., etc.
Of course, the humanities do deal with questions at this level of abstraction, but largely through the lens of particular examples. There is a similarity here with biology. Biologists will characterise themselves as experts in fruitflies or large primates or whatever; I have just about got over a sense of mild amusement at seeing signs on campuses for things like the “British Yeast Symposium”. Of course, they are using these organisms as a means of investigating deeper issues about gene expression, development, virus transmission, or whatever. It is easier to focus on one organism, as the techniques vary so much between organism types. The same holds for history, though perhaps the issue there is less one of techniques than one of accumulated knowledge.
Why did I make this assumption that this characterisation was naive? I suppose I am used to this from studying mathematics, where we leave behind concreteness at a dizzying rate. But, then, it is possible to study mathematics in abstraction; once you have defined a mathematical concept formally, you can deal with it as a formal object, rather than through concrete examples. This isn’t so possible in the humanities; theoretical points are usually argued via the concrete examples. Perhaps there is scope, in some areas, for “big data” methods to change this—for example, having tools that allow historians to take a concept and a database of its realisations in different historical periods and ask questions about that mass of realisations, rather than give a couple of examples and a vague hint that this is a large scale phenomenon.
Multiscale modelling is a really interesting scientific challenge that is important in a number of areas. Basically, the issue is how to create models of systems in which activities and interactions at a large number of different (temporal and/or spatial) scales are happening at the same time. Due to computational costs and complexity constraints we cannot just model everything at the smallest scale; yet, sometimes, small details matter.
I wonder if there is a role for some kind of machine learning here? This is a very vague thought, but I wonder if somehow we can use learning to abstract simple models from more detailed models, and use those simple models as proxies for the more detailed model, with the option to drop back into the detailed model when and only when it is specifically needed?
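A toy sketch of that vague thought, with everything hypothetical: an "expensive" fine-scale model, a cheap proxy learned from it (here just piecewise-linear interpolation over precomputed samples, standing in for something grander), and a fallback into the fine model whenever a query falls outside the region the proxy was trained on.

```python
import bisect
import math

def fine_model(x: float) -> float:
    """Stand-in for a costly fine-scale simulation."""
    return math.exp(-x) * math.sin(5 * x)

# "Train" the proxy by sampling the fine model on a coarse grid.
xs = [i * 0.05 for i in range(41)]        # trusted region: [0, 2]
ys = [fine_model(x) for x in xs]

def proxy_model(x: float) -> float:
    """Cheap surrogate: linear interpolation over the samples.
    Drops back into the detailed model outside the trusted region."""
    if x < xs[0] or x > xs[-1]:
        return fine_model(x)              # small details matter here
    i = bisect.bisect_right(xs, x)
    if i == len(xs):                      # x is exactly the last sample
        return ys[-1]
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

A real version would replace the interpolation with a learned model and, crucially, need some estimate of the proxy's own uncertainty to decide when to drop back into the detailed simulation.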
In a recent Guardian article, Bonnie Greer suggests that Kurt Gödel “had shown the world years before that nothing can be 100% proven” (“Me and Sister Carmela”, 20th September). In fact, what he showed was the subtly different notion that not 100% of true statements (of a particular, broad class of mathematical statements) can be proven.
This is not just a pedantic factual correction. Frequently, mathematicians (and practitioners of other rigorous reasoning systems) are attacked in the media for their arrogance. This is often characterised as an assumption that “everything” can be shown to be true or false with 100% certainty. By contrast, only specific types of statements are amenable to mathematical methods; furthermore, even within that domain, not everything will be provable!
In particular, the elision of words used in some specific technical way (“proven”, “statement”) to imply that these narrow technical results magically mean something about the day-to-day meaning of these words is ubiquitous. It is not the mathematicians who are at fault in such situations, as they are precise about the narrowness of the applicability of their results.
It could be argued that it is the practitioners of the literary arts who are guilty of the arrogant over-reach that mathematicians are frequently blamed for: consider the slapdash use of metaphor to extend the reach of statements, overinterpretation of the meaning of technical notions based on mere coincidence of words, and drawn-out discussions that amount to little more than extended puns. This is ultimately destructive both to the understanding of science and literature and to attempts to create a meaningful dialogue between the disciplines.
Lotteries are often described as “a tax on people who are innumerate”. The idea is that any rationalist would not play a lottery, because the return on investment is shoddy—negative, indeed, and stunningly so. Back to the post office savings account, then.
But hang on there! Is this really why people play lotteries? Often the driving force is the remote chance of a truly life-transforming event, which is not adequately measured by the ROI.
The interesting observation is that this argument also works for events with negative consequences. Indeed, we are accustomed to this kind of reasoning about negative events. For example, people will readily argue that, whilst they know that the chances of a plane crash are minuscule, nonetheless they aren’t going anywhere near a damn plane—because the consequences of being in a plane crash, however remote, are horrifying. Again, a life-transforming event, but one with bad rather than good consequences. The behaviour seems to be controlled by the same mechanism—I wonder if a carefully controlled experiment would show that the underlying structure of thought is basically the same?
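The symmetry can be made concrete with a small calculation. All the figures below are made up for illustration (they are not real lottery or aviation statistics), but they show the shape of the reasoning: in both cases the probability-weighted value says one thing, while attention to the extreme outcome itself says another.

```python
# Lottery: the expected return on a ticket is stunningly negative...
ticket_price = 2.0
jackpot = 10_000_000.0          # hypothetical prize
p_jackpot = 1 / 14_000_000      # hypothetical odds

expected_return = p_jackpot * jackpot - ticket_price
roi = expected_return / ticket_price

print(f"expected return per ticket: {expected_return:.2f}")  # negative
print(f"return on investment: {roi:.0%}")                    # deeply so

# ...yet the buyer reasons about the life-transforming outcome itself,
# not its probability-weighted value. The plane-avoider does the mirror
# image with a tiny probability of a catastrophic outcome:
p_crash = 1 / 10_000_000        # hypothetical per-flight risk
print(f"chance of a crash on one flight: {p_crash:.1e}")
```

In expected-value terms both decisions look "irrational"; in both cases the tail outcome dominates the thinking regardless of how hard the probability weighting discounts it.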