“Oi!” (Whole bus goes silent) (parent shouting at child) “You’ve got a tissue in your pocket—don’t wipe your bogies on other people.” (collective “ugh” from the remainder of the bus).
The major social media companies have basically been providing the same, largely unchanging product, for the last decade. Yes—they are doing it very well, managing to scale numbers of users and amounts of activity, and optimising the various conflicting factors around usability, advertising, etc. But, basically, Twitter has been doing the same schtick for the last decade. Yet, if media and government were looking to talk to an innovative, forward-looking company, they might well still turn to such companies.
By contrast, universities, where there is an enormous, rolling programme of change and updating, keeping up with research, innovating in teaching, all in the context of a regulatory and compliance regime that would be seen as mightily fuckoffworthy if imposed on such companies, are portrayed as the lumbering, conservative forces. Why is this? How have the social media companies managed to convey that impression—and how have we in higher education failed?
I’ve been on a lot of student disciplinary panels over the years—examining students for plagiarism, etc.—and something that comes up over and over again is that some weaker students just can’t imagine that students are able to produce work of high quality without some amount of copying, patch-writing, or similar processes. The idea that you could sit down and produce from your head a fluent piece of fully referenced writing just isn’t what they imagine “ordinary people” are capable of. Writing comes from elsewhere—a mysterious world of books and articles that is somehow disjoint from the day-to-day world of ordinary people.
I once came across a maths version of this—a student who, when asked to solve simple algebra problems, was just plucking numbers from the air. They couldn’t imagine that other students in the class were actually solving the problems as quickly as they were. Instead, they assumed that the other students were somehow getting there by some kind of mysterious intuitive process, and that the way to get to that was just to start by “saying the first number that comes into your head” and then, over time, their subconscious would start to work things out and after a while the numbers that emerged would start to coincide with the solutions to the problems.
I think I had a similar problem with singing once upon a time (though at least I was conscious that there was something I wasn’t getting). People who had had no problem with grokking how to sing in tune with others would just say “you listen to the note and then you sing along with it”, which put me in the same position as our maths friend above—it just seemed to be something that you did until some pre-conscious process gradually learned how to do it. It doesn’t work like that. Eventually, thanks to a very careful description from the wonderful Sarah Leonard of exactly what the head/mouth/ears feel like when you are making the same note as others, I was able to improve that skill in a rational way. Before that, I just couldn’t imagine that other people were managing to do this in anything other than a mysterious, pre-conscious way. Somehow I had failed to pick up what that “in tune” feeling was like as a child, and carried this a decent way into adulthood.
For a while I wondered what these benches were all about:
They appear at a number of London and South-East railway stations, and when I first saw them I thought they were a bizarre and out-of-keeping design decision. Why choose something in such bright, primary-ish colours against a generally muted design scheme? They wouldn’t be out of keeping somewhere—but, not here! And after a couple of years it suddenly struck me—they are the Olympic rings that hung at St. Pancras during the games, sliced up and turned into benches! My supposition is confirmed by Londonist here.
What’s going on here?
This is the back of the packaging of my protein bar. What’s with the white stripe across the top left? It reads, basically, “# _____ DAY, fuelled by 12g of PRIMAL PROTEIN”. Presumably the # is a hashtag marker, and there is meant to be some text between that and “DAY”. Is this some kind of fill-in-the-blank exercise? I don’t think so; it seems rather obscure without any further cue. Did it at one point say something that they had to back away from for legal reasons: “# TWO OF YOUR FIVE A DAY”, perhaps? If so, why redesign it with a white block? Does packaging work on such a tight timescale that they were all ready to go when someone emailed from legal to say “uh, oh, better drop that”, and so someone fired up InDesign and put a white block there? Surely it can’t be working on such a timescale that there wasn’t time enough to make it the same shade of red as the rest, or rethink it, or just blank out the whole thing. Is it just a production error? At first I thought it was a post-hoc sticker to cover up some unfortunate error, but it is a part of the printed packaging. A minor mystery indeed.
Here is a graph that purports to be a summary of numbers of divorces per 1000 married people from 2009 to 2016, i.e. the first part of the graph, up to 2014, is before same-sex marriage became legal.
My immediate thought is that this must be wrong—if every marriage is between a man and a woman, then the numbers of divorces must be equal between men and women. So, could the “per 1000 married people” be the gotcha here? Again, no. It doesn’t say “per 1000 people”, but “per thousand married people”, and so in the era that this is referring to, the number of married men and the number of married women would be identical. This suggests that there is an error in the calculation here. Oddly, the graph has identical numbers from 2013 onwards; we might expect some divergence if the graph were continued, because even simple statistical fluctuation would make the number of same-sex male divorces differ from the number of same-sex female divorces.
So, what is happening during the 2009-2012 part of the graph? I suspected initially that they had mistakenly used “per 1000 people” on those entries in the graph, rather than “per thousand married people”. But, this is at odds with the numbers from 2013-2016, where the graph is as expected—numbers “per thousand people” will be a lot less than “per thousand married people”, and this huge leap isn’t apparent between the figures for 2012 and 2013. So, what explains it?
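The arithmetic behind my objection can be sketched in a few lines (the figures below are made up for illustration, not the real ONS data):

```python
# Illustrative, invented numbers -- not the actual statistics.
# When every marriage is between one man and one woman, the married
# populations of men and women are equal by definition, and each divorce
# frees exactly one husband and one wife.
married_men = married_women = 11_000_000
divorces = 100_000

rate_men = divorces / married_men * 1000
rate_women = divorces / married_women * 1000

# The two lines of the graph must therefore coincide exactly;
# any gap between them before 2014 indicates a calculation error.
assert rate_men == rate_women
```

Whatever numbers you plug in, the two rates cannot diverge until same-sex marriages (and divorces) enter the data.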
I’ll restrain myself from ranting about the heinous sin of connecting discrete values with lines.
Here’s another graph (from this Daily Mail article (ugh!)) that seems to be from the same source and shows a similar error:
Sometimes I find myself making an apology in the following form: “Sorry, but I assumed…”. I’ve occasionally been upbraided for this with a response like “Well, you shouldn’t have assumed in the first place, you should have asked.”. There is perhaps something reasonable here—it isn’t good to be presumptuous, and it isn’t good to offer a glossed apology—but, I usually leave such an encounter with a feeling of “Well, that all sounds very reasonable, but in practice we can’t go around constantly questioning and digging into every detail of an interaction; at some point we have to make a pragmatic choice to use background knowledge and assumptions built on our knowledge of social rules and norms, the particular person, and the particular situation.”
Then I realised. When A says to B “I’m sorry, but I assumed…” it is actually a subtle upbraiding of B by A. The less polite version of this is A saying to B “Sorry, but I perfectly reasonably assumed that we were working in our regular framework of norms of communication and our mutual knowledge of each other and the situation, and you unreasonably did something that didn’t fit into those norms and now you seem to be blaming me for making a perfectly reasonable assumption rather than what should have happened which is that you were doing something that was socially or individually uncharacteristic and so you should have proactively given me reasonable information so that I could understand the situation in which we were interacting (innit).”. Of course, this is complicated—one of the reasons that these misunderstandings occur is when A and B think that they are on common ground (what Wittgenstein calls “agreement not in opinions, but rather in form of life”), but actually are working with different frameworks.
In his book The English Constitution, Walter Bagehot describes two components of government. The first are the “efficient” components, such as the cabinet, that get on with the actual business of government, making decisions about the nation. The second are the “dignified” components, such as the monarchy, that have little decision making power (either de jure or de facto) but which play a role in serving as a, largely uncontroversial, locus for patriotism and the stability of the nation. England is a key example of a polity where these two components are largely separate; in some countries, largely to their detriment, the components blur. Clearly, this can change through time; at one time the king’s very word was law, now the role of the queen in the day-to-day business of politics is minimal.
I would like to speculate that the US presidency is on its way from being an efficient institution to becoming a dignified one. The election of Trump has provided us with a figure whom other components of the government have openly said they will ignore—a military leader, being interviewed about the US nuclear capability, has argued that they would make a considered decision about an order from Trump to make a nuclear strike, despite this being formally an uncomplicated order from a superior officer (commander-in-chief, natch!) to a more junior one. Whilst this has probably been the truth throughout nuclear history—there are reports of various cold-war nuclear command officers deciding to take a “watch and wait” approach when the preconditions for a nuclear strike had already been met—this is probably the first time that this has been discussed so openly. This marks the beginning of the presidency being regarded as a ceremonial, “dignified” institution; I would assume that a command from Queen Elizabeth II would be taken with similar cynicism by the UK military.
So, is this just an aberration? A one-off, to be replaced in 2020 by a return to business-as-usual? This is entirely possible; a nation weary of celebrity posturing could return to the model of the politically experienced leader as the ideal candidate. But, there is hunger from different directions for another celebrity-POTUS. Even if the US tires of isolationist nationalism, there is a decent chance that the Democrats won’t be willing to field another explicitly large-P Political figure against the celebrity of Trump in 2020 (especially as, by that point, their store of public-profile figures will be running thin; Obama timed out, figures such as Clinton and Kerry tainted by previous unsuccessful runs). Would you really put up a governor of a flyover state when you have an Oprah or Zuckerberg? So, let’s say that Oprah wins in 2020, and serves two successful terms of office, taking us to 2028. Already, we’re reaching a stage where the idea of electing some competent former ambassador seems so boring and 20th century. After four years of President Zuck struggling to control the growing power of the BRICS and some crisis yet to be imagined, we reach a point where a shadow system of efficient institutions is starting to sweep in underneath to take on the substantive job of executive government. By 2032, Will Smith and Ellen DeGeneres are the sort of people who are the serious, establishment candidates, fighting not to be seen as boring establishment figures against the candidacy of Katy Perry. By 2050, the Presidency is a ribbon-cutting, “dignified” institution, as much a sign of faded-celebrity-trying-to-raise-their-profile as I’m a Celebrity… is today. A young Turk in the present day would be better studying which institution will rise to take the place of the efficient powers of the President, than plotting a 40-year route to the role itself.
I’d wondered for a while if celebrity would one day take the Presidential role—after all, there is a system of (more-or-less) direct election, both at the primaries and the final vote, that provides a way to circumvent the slog of e.g. UK national politics. But, I always thought that this would come about from an independent candidate standing on a largely youth-oriented platform. I had assumed that at some point some cocky chancer like Jay-Z might decide to go for it as a mid-life crisis thing, taking around 15% of the vote as an anti-politics third candidate, Nadering-out a decent Democratic candidate in favour of a Dubya-like Republican due to demographics, earning the ire of mainstream politicians en route. I was blindsided by Trump’s candidacy—playing a role as an anti-politics candidate whilst remaining within a party structure (thus getting the automatic votes of the always-Republican rump) was a stroke of genius. That canny move may well have re-configured the Presidential role for the next century—Swift 2052 for the win!
An odd contradiction on the economic right of politics:
- There is objection to ideas such as basic income, unemployment benefits, etc. on the grounds that once people have basic needs catered for, their motivation to carry out additional economic activity for the marginal benefits it provides is minimal. A person who has basic housing costs paid for and a few hundred quid per month living expenses is assumed to be unmotivated to work further.
- There is objection to ideas of increasing tax take at the higher end, on the grounds that it will reduce motivation to work. Even though someone might be earning £100k or more, the idea is that they will be significantly demotivated if they have to pay another few hundred quid per year in taxes.
This seems contradictory. Either people are willing to work harder for more money, or there is a level where the marginal monetary benefit will not produce additional motivation. If anything, you might expect it to be the other way round—the marginal benefit of a small amount of additional income gives a larger lifestyle change to the person in desperate economic circumstances than to the person on a large income. I suspect that at the heart of the contradiction is a belief that there are two sorts of people—the lazy, who wouldn’t care, and the motivated, who will always be willing to do more for a larger benefit. I think motivation is more complex than that.
That A-team, eh? They really liked making quiches, yes? They loved it when a flan came together.
Every cloud computer has a very expensive data centre lining.
Firms selling things have a dilemma. Price something too low, and, whilst it will sell well, it won’t make enough money to be worth doing (leading to the old joke: “We’re selling each item at a loss; but, don’t worry, we’ll make up on it in volume.”). Price something too high, and you won’t sell enough widgets to make enough money. The traditional view on this is that it is a tradeoff; find a mid-range price where you sell enough widgets at a high enough price. If you can’t do this, then the business isn’t viable.
This is finessed by the notion of adaptive pricing: selling the same widget to different people at different prices, which makes more businesses financially viable. Firms adjust prices based on some information that they can observe, or on some structuring of how/when/where/to whom the products are sold:
- Selling to different demographics based on broad ability to pay. Discounts for students or retired people, who are likely to have a lower income. Changing prices at different times of the day, based on the demographic that is around (e.g. a price premium for buying a coffee at the station at peak commuter time; or, more simply, the idea of peak time tickets).
- Rewarding time/organisation: tickets come on sale at a particular date/time, but there are only a finite number at that price. People who are time rich/cash poor can spend time to be organised to buy at the cheaper price, whereas people who have more money don’t have to spend the time, they just buy at the higher price later.
- Selling at different prices in different locations. This has a dark side too; some firms have exploited the lack of transport options of poor people living in cut-off areas by selling at a higher price.
- Auctions, where items are sold for a bespoke price based on demand.
- Secondary markets, where a firm sells widgets cheaply and efficiently, but a secondary retailer (such as a ticket tout) buys up some of them and sells them on to the final purchaser at an inflated price.
- Hiding prices. Rather than a price being given up-front, you have to go through some intermediary system that judges your ability to pay, or your need for the product, and adjusts prices accordingly. The watch shop that judges whether you are a middle-income watch enthusiast or a rich person who wants to brag about the cost of their watch; the retailer of tools who judges whether you will be using the tool day-in-day out or are an occasional user who would buy it for a sufficiently low price.
- Similarly, making use of your purchasing history to adjust prices on an online system.
- Micropayments. Rather than paying up-front to purchase something, you pay by the number of minutes/hours that you use it, or what you use it for.
- Time-adjusted pricing. You show an interest, and if you want it right now you pay the price; the price goes down with time, but if you wait too long you run the risk (perhaps entirely artificially generated) that stock will run out. The TV-based retailer PriceDrop is canonical here.
- Rewards. You all pay the same price up front, but more price-sensitive customers are given some of that money back as vouchers so that their average spend per widget is lower in the long run.
- Direct demand-adjusted pricing. Uber’s entirely-up-front “surge pricing”, for example. Again, speaks to the time/money tradeoff; someone who needs a lower price might be prepared to wait for half-an-hour to see if surge pricing goes away.
- Artificial hobbling. You all buy the same product, making manufacturing easy, but some features are turned off on the lower product range. Tesla cars work like this; you can buy a cheaper version, which has a lower distance range; but, the hardware is the same as the premium product, the distance is just limited by a software switch in the cheaper version.
- Things that seem more different. The same object sold with changes to the branding. Surplus stock sold to a poundshop on the condition that they repackage it. Cheap train tickets sold through a different brand, but when you show up you are on the same train in the same seats as people who paid a lot more.
- Superficial benefits. Exploiting that some people will pay for “the best” regardless. First-class train travel is probably a decent example here; a slightly more comfortable seat and free tea/coffee, but sometimes at a price premium which seems irrationally larger.
I would make an educated guess that cracking adaptive pricing will be one of the big innovations in business in this century. It is increasingly used, but there is still a huge amount of finesse to do here. Already, supermarkets are experimenting with systems such as electronic price displays, allowing dynamic adjusting of price during the day, either by broad demographic shifts, or by minute-by-minute demand. And there are already critiques: the transport company that (algorithmically) increases its prices following a natural disaster, the company that (algorithmically) sells the music of a recently-dead star at a premium.
Interestingly, there is a weird potential consequence to all of this. Will this mean that differences in income become less pronounced? If I had an ideal adaptive pricing system, where, say, I charged people not a price, but a proportion of their income, for my product, then that would have the outcome that people would de facto have the same income. Clearly, the systems above are not at that level yet; but, each adaptive pricing innovation brings us closer to that.
A long time ago, as a wet-behind-the-ears English person coming to Scotland for the first time, I was intrigued/surprised/amused to see a copy of The New Testament in Scots in a bookshop (the old James Thin on South Bridge, now a branch of Blackwells).
I was vaguely aware that there was a Gaelic language, which not many people used, and had a basic knowledge that there was a Scots accent and vocabulary, albeit largely gleaned from watching Russ Abbot’s “see u Jimmy” character on TV:
…but the idea of treating this as a language was alien to me. I’ve developed my knowledge of this world over the years, and can appreciate the literary qualities of it, particularly through the thoughtful work of Hugh MacDiarmid. But, what explains my initial sense that this sort of thing is a bit ludicrous, a little trying-too-hard:
…a little too close to the clearly humorous (though perhaps not evangelically purposeless) Ee by Gum, Lord!: The Gospels in Broad Yorkshire.
Why did I, 25 years ago, think that its description as “a translation” was odd? I wouldn’t have regarded a translation into French or Japanese or Guarani strange—so, why Scots? This touches, I suppose, on the language vs. dialect debate; when does a dialect become a separate language? This seems to be an ill-defined question; there is clearly a continuum, and whilst groups of language-users cluster at certain points thereon, this doesn’t happen cleanly enough to be a series of isolated clumps.
One idea that might help to explain this is the uncanny valley; here’s one of its inhabitants, a rather realistic looking humanoid robot:
This sort of thing—not far off being human, but not close enough to “pass”—is said to be uncanny, and this is backed up by a number of empirical studies. People are freaked out by this, much more than by something really realistic or something more cartoony and obviously unrealistic. There is a point on the similarity scale, close to full realism, where suddenly people’s familiarity and comfort with the thing rockets downward:
I think the same is true for languages. Sufficiently far away—English to French, say, or Sanskrit—and the language is dissimilar, clearly different. Close enough—Nottinghamshire to Yorkshire, say—and the similarities are unremarkable. But the distance from RP English to Scots sits just at the right distance of unfamiliarity; like enough to be familiar, far enough away to seem different. Interestingly, the reaction is one of amusement rather than unsettledness; but, the idea of an emotional reaction being triggered by something that is close to the familiar, yet not quite close enough, is still there.
Slack—like email, but somehow with a lot less guilt about ignoring it.
Every time we have an open day at Kent, the University of Essex (hello to my dear friends there!) pays someone to drive a bloody great van with a mahoosive “University of Essex” poster on it and park it all day opposite the main entrance to our campus.
I can’t imagine that 20-30 years ago, when we first started to talk about having some kind of competitive ethos between universities, we would ever have predicted ending up in a situation like this. And it seems to be a systematic inefficiency baked into the system. Unlike the often-talked-about “inefficiencies” of public sector management, which seem to be just a matter of motivation and management skill, there are real, ongoing, impossible-to-avoid inefficiencies at the core of a competition-based system.
This is a few hundred pounds that could be going into students’ education or research or, goddamn it, on nicer port for the vice-chancellor’s summer party. Is there any way in which we can get out of this kind of arms race that is consuming vast amounts of money, time, and attention?
It’s surprising to me, in a world where social media is generally assumed to be ubiquitous, how many people have minimal-to-no online presence. Whilst I was sorting through piles of stuff from my Dad’s house (well, sorting out in the sense of looking at it and then putting it in a box in a storage unit), I came across a lot of things with names on—old school photos, programmes from concerts and plays at school with lists of pupils and teachers, lists of people who were involved in societies at University, details of distant family members, etc. Looking up some people online, I was surprised how often there was no online trace. I understand that some people might have changed names, gone to ground, died, or whatever, but a good third of people, I would say, had no or close-to-no online presence. I don’t quite know what to make of this, but it shows the idea that we are a completely online community to be unreliable.
When I hear about the gun debate in the USA, it sounds to me like this:
Alice: “So, in your workplace, how do they make sure that people do their work well?”
Bob: “Well, it’s straightforward really. It’s written into our contracts—which we’re all very respectful of—that our bosses can hit us over the head with a large piece of wood if we are even a little bit slacking. So, each of the bosses has this piece of wood, and they walk around with it all day,…”
Alice: “But that sounds terrible. Why do people put up with it?”
Bob: “Well, actually it’s not too bad. You see, we have a very strong union, and they’ve agreed that we can all have large pieces of wood too, and so we can hit back and defend ourselves.”
Alice: “But, wouldn’t it be easier for you to all agree not to have the pieces of wood in the first place?”
Bob: “I’m not quite too sure I get you there…”
The flexibility of computer languages is considered to be one of their sources of power. The ability for a computer to do, within limits of tractability and Turing-completeness, anything with data is considered one of the great distinguishing features of computer science. Something that surprises me is that we fell into this very early on in the history of computing; very early programmable computer systems were already using languages that offered enormous flexibility. We didn’t have a multi-decade struggle where we developed various domain-specific languages, and then the invention of Turing-complete generic languages was a key point in the development of computer programming. As-powerful-as-dammit languages were—by accident, or by the fact of languages already building on a strong tradition in mathematical logic etc.—there from the start.
Yet, in practice, programmers don’t use this flexibility.
How often have we written a loop such as for (int i=0;i<t;i++)? Why, given the vast flexibility to put any expression from the language in those three slots, do we hardly ever put anything other than a couple of different things in there? I used to feel that I was an amateurish programmer for falling into these clichés all the time—surely, real programmers used the full expressivity of the language, and it was just me with my paucity of imagination that wasn’t doing this.
But, it isn’t just me. Perhaps, indeed, the clichés are a sign of maturity of thinking, a sign that I have learned some of the patterns of thought that make a mature programmer?
The studies of Roles of Variables put some meat onto these anecdotal bones. Over 99% of variable usages in a set of programs from a textbook were found to be doing just one of around 10 roles. An example of a role is most-wanted holder, where the variable holds the value that is the “best” value found so far, for some problem-specific value of “best”. For example, it might be the current largest in a program that is trying to find the largest number in a list.
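In code, the most-wanted holder looks something like this (a minimal sketch of my own, not an example drawn from the Roles of Variables corpus):

```python
def largest(numbers):
    # "most-wanted holder" role: best holds the best value found so far,
    # where "best" here means largest.
    best = numbers[0]
    for n in numbers[1:]:  # n simply steps through the remaining values
        if n > best:
            best = n       # a better candidate displaces the held value
    return best

print(largest([3, 9, 2, 7]))  # -> 9
```

The point is how stereotyped this is: the initialisation from the first element, the conditional update, the final return of the held value—almost every "find the best X" loop has exactly this shape.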
There is a decent argument that we should make these sorts of things explicit in programming languages. Rather than saying “int” or “string” in variable declarations we should instead/additionally say “stepper” or “most-recent holder”. This would allow additional pragmatic checks to see whether the programmer was using the variable in the way that they think they are.
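As a toy illustration of what such a check might feel like (the `Stepper` class and its monotonicity rule are my invention; a real proposal would presumably check roles statically rather than at runtime):

```python
class Stepper:
    """A variable declared to play the 'stepper' role: its value may
    only move monotonically upward, never backwards."""

    def __init__(self, value):
        self.value = value

    def step_to(self, new_value):
        # The pragmatic check: using the variable against its declared
        # role is flagged as an error.
        if new_value <= self.value:
            raise ValueError("role violation: a stepper must increase")
        self.value = new_value

i = Stepper(0)
i.step_to(1)        # fine: steppers go up
i.step_to(5)        # fine
try:
    i.step_to(3)    # the variable isn't being used as declared
except ValueError as e:
    print(e)        # role violation is caught
```

Declaring the role buys the compiler (or here, the runtime) information that an `int` declaration simply doesn’t carry.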
Perhaps there is a stronger argument though. Is it possible that we might be able to reason about such a restricted language more powerfully than we can a general language? There seems to be a tension between the vast Turing-complete capability of computer languages, and the desire to verify and check properties of programs. Could a subset of a language, where the role-types had much more restricted semantics, allow more powerful reasoning systems? There is a related but distinct argument that I heard a while ago that we should develop reasoning systems that verify properties of Turing-incomplete fragments of programs (I’ll add a reference when I find it, but I think the idea was at very early stages).
Les Hatton says that software is “cursed with unconstrained creativity”. We have just about got to a decent understanding of our tools when trends change, and we are forced to learn another toolset—with its own distinctive set of gotchas—all over again. Where would software engineering have got to if we had focused not on developing new languages and paradigms, but on becoming master-level skilled with the sufficiently expressive languages that already existed? There is a similar flavour here. Are we using languages that allow us to do far more than we ever need to, and thereby limiting the reasoning and support tools we can provide?
Old joke: A scientist has a good-luck horseshoe hanging over the door to their lab. A visitor to the lab says to them “Surely you don’t believe in superstitious nonsense like that?”; the scientist replies “Of course not; but, I am told it works even if you don’t believe in it.”
New joke: An atheist goes to church and joins in enthusiastically with the hymns and prayers. Their friend says to them “I thought that you didn’t believe in all of that religious stuff?”; the atheist replies “I don’t; but, I am told it doesn’t work even if you believe in it.”