An extremely vivid memory from childhood—probably about seven or eight years old. Waking up and coming downstairs with an absolute, unshakable conviction that what I wanted to do with all of my spare time for the next few months was to build near-full-sized fairground rides in our back garden. I don’t know where this came from; prior to that point I had no especial interest in fairground rides, beyond the annual visit to the Goose Fair. I wanted to go into the garage immediately and start measuring pieces of wood, making designs, etc. It took my parents a couple of hours to persuade me, against my deep, passionate protestations, that doing this was utterly impractical. Truly I cannot think of anything before or since that I wanted to do with such utter conviction.
To historians, “history” basically means the (complex, disputed) knowledge that contemporary people have about what happened in the past. To the general public, “history” is the stuff that happened—about which contemporary people might have limited evidence, disputes of interpretation, etc. This can lead to confusion in communicating ideas about the methodology and ontology of history. For example, when I first came across people saying things along the lines of “historical facts change over time”, I thought that they were embracing a much more radical vision of history than they were. They were making the (important) point that what we call “facts” are based on incomplete evidence and coloured by political/social/religious views and biases coming from the contemporary world. I thought that they were making the much more radical claim that the subjective experience of people in the past changed due to our contemporary interpretations—a kind of reverse causality.
Do KPIs encourage a culture of making small improvements to stuff that we know how to measure well, rather than disruptive changes in areas where we haven’t even thought about how to measure things yet?
Bus driver (paraphrased): “Since the new big-businessman owner took over, [my local football club]’s been run like a profitable business.” “Sounds good.” “No, it’s crap. When rich people have taken over other clubs, they’ve done it for a hobby, and put loads of money into paying top players; our man wants to run it like a proper business.”
Contemporary governments typically like competition, and also want to allow companies to act in a free market. Unfortunately, the free market also means that companies are free to purchase other companies, and regularly do so, usually in cognate areas to their current areas of business. This ends up creating uncompetitive situations where there are few buyers and sellers in a single area of business. To combat this, an interventionist scheme is usually put in place, whereby mergers and takeovers have to be approved by some governmental body. One of the occasions when that body will typically exercise that power is when the merger would leave too few firms to compete effectively.
This is clumsy. It creates single, complex decision points and is prone to political intervention and bias. Perhaps instead, we could have a system that delegates this choice to the companies. For example, let’s imagine a graded scale of costs to register annually as a limited company. If you are registering in a business area where there are lots of players competing, then the cost is minimal—say, close to the cost of administering the registration. As the number of viable players gets smaller, the cost artificially ramps up very rapidly; if you are looking to merge two out of the last three remaining supermarket chains, then the annual registration cost is millions.
If, like me, you believe that hypothecation of taxes isn’t automatically to be avoided, you might even dedicate the sums earned from this to a fund to support startup/disruptor businesses in business areas with little competition.
The details are tricky. How do you set the cost, and the ramping? How do you define “the same business area”? How do you prevent formally distinct entities actually being controlled by the same entity in practice? But, these might not be insurmountable.
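As a minimal sketch of how such a ramp might work (every number and name here is invented for illustration, not a proposal for actual figures):

```python
# Hypothetical graded registration-cost schedule. While a market has plenty
# of viable competitors, the annual fee is just the administrative cost;
# below a threshold, the fee doubles for every competitor lost, so merging
# two of the last three chains costs millions. All parameters are invented.

def registration_cost(num_competitors: int,
                      admin_cost: float = 500.0,
                      threshold: int = 10,
                      ramp_base: float = 10_000.0) -> float:
    """Annual registration cost for a company operating in a business area
    with `num_competitors` viable players (including itself)."""
    if num_competitors >= threshold:
        return admin_cost  # plenty of competition: charge admin cost only
    shortfall = threshold - num_competitors
    return admin_cost + ramp_base * (2 ** shortfall)

# With these invented parameters, a market of 9 players costs £20,500 a
# year, while one of the last 2 remaining players would pay £2,560,500.
```

Whether the ramp should be exponential, and how a “viable player” is counted, are exactly the tricky details noted above.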
The sorites (Greek for “heap”) paradox is a puzzle about language. We unambiguously use the word “heap” to represent a large pile of, say, stones—say a few hundred. If we remove one, that is still, uncomplicatedly, a heap. Yet, we cannot do this indefinitely. Once we have, say, two stones, everyone agrees that this is clearly not a heap. The usual resolution to this is to argue that concepts such as “heap” are irreducibly vague; there will always be a fuzzy middle ground between “heap” and “non-heap”.
Interestingly, there are still examples of this at very small scales. There is currently a proposal to merge two of the small number of supermarket chains in the UK. At present, most people would agree that the current system is decently competitive. Reduce it by one and—well, is it still a competitive system? This shows that a sorites-like situation can exist with small numbers of objects, and so perhaps isn’t a problem of fine-grainedness as much as we might first think.
Graduation ceremonies should have credits, in the same way that films do. This would emphasise to students and a wider set of stakeholders the scale of the support and the hidden activity that goes into providing the environment in which students can flourish.
“Oi!” (Whole bus goes silent) (parent shouting at child) “You’ve got a tissue in your pocket—don’t wipe your bogies on other people.” (collective “ugh” from the remainder of the bus).
The major social media companies have basically been providing the same, largely unchanging product for the last decade. Yes—they are doing it very well, managing to scale the number of users and amounts of activity, and optimising the various conflicting factors around usability, advertising, etc. But, basically, Twitter has been doing the same schtick for the last decade. Yet, if media and government were looking to talk to an innovative, forward-looking company, they might well still turn to such companies.
By contrast, universities, where there is an enormous, rolling programme of change and updating, keeping up with research, innovating in teaching, all in the context of a regulatory and compliance regime that would be seen as mightily fuckoffworthy if imposed on such companies, are portrayed as the lumbering, conservative forces. Why is this? How have the social media companies managed to convey that impression—and how have we in higher education failed?
I’ve been on a lot of student disciplinary panels over the years—examining students for plagiarism, etc.—and something that comes up over and over again is that some weaker students just can’t imagine that students are able to produce work of high quality without some amount of copying, patch-writing, or similar processes. The idea that you could sit down and produce from your head a fluent piece of fully referenced writing just isn’t what they imagine “ordinary people” are capable of. Writing comes from elsewhere—a mysterious world of books and articles that is somehow disjoint from the day-to-day world of ordinary people.
I once came across a maths version of this—a student who, when asked to solve simple algebra problems, was just plucking numbers from the air. They couldn’t imagine that other students in the class were actually solving the problems as quickly as they were. Instead, they assumed that the other students were somehow getting there by some kind of mysterious intuitive process, and that the way to get to that was just to start by “saying the first number that comes into your head” and then, over time, their subconscious would start to work things out and after a while the numbers that emerged would start to coincide with the solutions to the problems.
I think I had a similar problem with singing once upon a time (though, at least I was conscious that there was something I wasn’t getting). People who had had no problem with grokking how to sing in tune with others would just say “you listen to the note and then you sing along with it”, which put me in the same position as our maths friend above—it just seemed to be something that you did until some pre-conscious process gradually learned how to do it. It doesn’t work like that. Eventually, thanks to a very careful description from the wonderful Sarah Leonard of exactly what the head/mouth/ears feel like when you are making the same note as others, I was able to improve that skill in a rational way. Before that, I just couldn’t imagine that other people were managing to do this in anything other than a mysterious, pre-conscious way. Somehow I had failed to pick up what that “in tune” feeling was like as a child, and carried this a decent way into adulthood.
For a while I wondered what these benches were all about:
They appear at a number of London and South-East railway stations, and when I first saw them I thought they were a bizarre and out-of-keeping design decision. Why choose something in such bright, primary-ish colours against a generally muted design scheme? They wouldn’t be out of keeping somewhere—but, not here! And after a couple of years it suddenly struck me—they are the Olympic rings that hung at St. Pancras during the games, sliced and turned into benches! My supposition is confirmed by Londonist here.
What’s going on here?
This is the back of the packaging of my protein bar. What’s with the white stripe across the top left? It reads, basically, “# _____ DAY, fuelled by 12g of PRIMAL PROTEIN”. Presumably the # is a hashtag marker, and there is meant to be some text between that and “DAY”. Is this some kind of fill-in-the-blank exercise? I don’t think so; it seems rather obscure without any further cue. Did it at one point say something that they had to back away from for legal reasons: “# TWO OF YOUR FIVE A DAY”, perhaps? If so, why redesign it with a white block? Does packaging work on such a tight timescale that they were all ready to go, when someone emailed from legal to say “uh, oh, better drop that” and so someone fired up InDesign and put a white block there? Surely it can’t be working on such a timescale that there wasn’t time enough to make it the same shade of red as the rest, or rethink it, or just blank out the whole thing. Is it just a production error? At first I thought it was a post-hoc sticker to cover up some unfortunate error, but it is part of the printed packaging. A minor mystery indeed.
Here is a graph that purports to be a summary of numbers of divorces per 1000 married people between 2009 and 2016; the first part of the graph, up to 2014, is before same-sex marriage became legal.
My immediate thought is that this must be wrong—if every marriage is between a man and a woman, then the numbers of divorces must be equal between men and women. So, could the “per 1000 married people” be the gotcha here? No: it doesn’t say “per 1000 people”, but “per 1000 married people”, and in the era that this refers to, the number of married men and married women would be identical. This suggests that there is an error in the calculation. Oddly, the graph has identical numbers from 2013 onwards, where we might actually expect some divergence, since even from statistical fluctuations alone the number of same-sex male divorces and same-sex female divorces is likely to differ.
So, what is happening during the 2009-2012 part of the graph? I suspected initially that they have mistakenly used “per 1000 people” on those entries in the graph, rather than “per thousand married people”. But, this is at odds with the numbers from 2013-2016, where the graph is as expected—numbers “per thousand people” will be a lot less than “per thousand married people”, and this huge leap isn’t apparent between the figures for 2012 and 2013. So, what explains it?
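The core arithmetic can be checked with a toy calculation (all figures invented for illustration): under opposite-sex-only marriage, every divorce removes exactly one man and one woman from equal-sized pools of married people, so the two per-1,000 rates are forced to coincide.

```python
# Toy check: with opposite-sex-only marriage, the male and female divorce
# rates "per 1,000 married people" are identical by construction.
# Figures below are invented for illustration.

married_couples = 2_000_000      # hypothetical opposite-sex marriages
divorces = 18_000                # hypothetical divorces in one year

married_men = married_couples    # one man per couple
married_women = married_couples  # one woman per couple

male_rate = divorces / married_men * 1000      # divorces per 1,000 married men
female_rate = divorces / married_women * 1000  # divorces per 1,000 married women

assert male_rate == female_rate  # equal by construction: 9.0 per 1,000
```

Any graph showing the two lines diverging in the opposite-sex-only era must therefore contain a calculation error somewhere.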
I’ll restrain myself from ranting about the heinous sin of connecting discrete values with lines.
Here’s another graph (from this Daily Mail article (ugh!)) that seems to be from the same source and shows a similar error:
Sometimes I find myself making an apology in the following form: “Sorry, but I assumed…”. I’ve occasionally been upbraided for this with a response like “Well, you shouldn’t have assumed in the first place, you should have asked.”. There is perhaps something reasonable here—it isn’t good to be presumptuous, and it isn’t good to offer a glossed apology—but, I usually leave such an encounter with a feeling of “Well, that all sounds very reasonable, but in practice we can’t go around constantly questioning and digging into every detail of an interaction; at some point we have to make a pragmatic choice to use background knowledge and assumptions built on our knowledge of social rules and norms, the particular person, and the particular situation.”
Then I realised. When A says to B “I’m sorry, but I assumed…” it is actually a subtle upbraiding of B by A. The less polite version of this is A saying to B “Sorry, but I perfectly reasonably assumed that we were working in our regular framework of norms of communication and our mutual knowledge of each other and the situation, and you unreasonably did something that didn’t fit into those norms, and now you seem to be blaming me for making a perfectly reasonable assumption, rather than what should have happened, which is that, since you were doing something socially or individually uncharacteristic, you should have proactively given me reasonable information so that I could understand the situation in which we were interacting (innit).” Of course, this is complicated—one of the reasons that these misunderstandings occur is that A and B think that they are on common ground (what Wittgenstein calls “agreement not in opinions, but rather in form of life”), but are actually working with different frameworks.
In his book The English Constitution, Walter Bagehot describes two components of government. The first are the “efficient” components, such as the cabinet, that get on with the actual business of government, making decisions about the nation. The second are the “dignified” components, such as the monarchy, that have little decision making power (either de jure or de facto) but which play a role in serving as a, largely uncontroversial, locus for patriotism and the stability of the nation. England is a key example of a polity where these two components are largely separate; in some countries, largely to their detriment, the components blur. Clearly, this can change through time; at one time the king’s very word was law, now the role of the queen in the day-to-day business of politics is minimal.
I would like to speculate that the US presidency is on its way from being an “efficient” institution to becoming a “dignified” one. The election of Trump has provided us with a figure whom other components of the government have openly said they will ignore—a military leader, being interviewed about the US nuclear capability, has argued that they would make a considered decision about an order from Trump to make a nuclear strike, despite this being formally an uncomplicated order from a superior officer (commander-in-chief, natch!) to a more junior one. Whilst this has probably been the truth throughout nuclear history—there are reports of various cold-war nuclear command officers deciding to take a “watch and wait” approach when the preconditions for a nuclear strike had already been met—this is probably the first time that it has been discussed so openly. This marks the beginning of the presidency being regarded as a ceremonial, “dignified” institution; I would assume that a command from Queen Elizabeth II would be treated with similar scepticism by the UK military.
So, is this just an aberration? A one-off, to be replaced in 2020 by a return to business-as-usual? This is entirely possible; a nation weary of celebrity posturing could return to the model of the politically experienced leader as the ideal candidate. But, there is hunger from different directions for another celebrity-POTUS. Even if the US tires of isolationist nationalism, there is a decent chance that the Democrats won’t be willing to field another explicitly large-P Political figure against the celebrity of Trump in 2020 (especially as by that point, their store of public-profile figures is running thin; Obama timed out, figures such as Clinton and Kerry tainted by previous unsuccessful runs). Would you really put up a governor of a flyover state when you have an Oprah or Zuckerberg? So, let’s say that Oprah wins in 2020, and serves two successful terms of office, taking us to 2028. Already, we’re reaching a stage where the idea of electing some competent former ambassador seems so boring and 20th century. After four years of President Zuck struggling to control the growing power of the BRICS and some crisis yet to be imagined, we reach a point where a shadow system of efficient institutions is starting to sweep in underneath to take on the substantive job of executive government. By 2032, Will Smith and Ellen DeGeneres are the sort of people who are the serious, establishment candidates, fighting not to be seen as boring establishment figures against the candidacy of Katy Perry. By 2050, the Presidency is a ribbon-cutting, “dignified” institution, as much a sign of faded-celebrity-trying-to-raise-their-profile as I’m a Celebrity… is today. A young turk in the present day would be better studying which institution will rise to take the place of the efficient powers of the President, than plotting a 40-year route to the role itself.
I’d wondered for a while if celebrity would one day take the Presidential role—after all, there is a system of (more-or-less) direct election, both at the primaries and the final vote, that provides a way to circumvent the slog of e.g. UK national politics. But, I always thought that this would come about from an independent candidate standing on a largely youth-oriented platform. I had assumed that at some point some cocky chancer like Jay-Z might decide to go for it as a mid-life crisis thing, taking around 15% of the vote as an anti-politics third candidate, Nadering-out a decent Democratic candidate in favour of a Dubya-like Republican due to demographics, earning the ire of mainstream politicians en route. I was blindsided by Trump’s candidacy—playing a role as an anti-politics candidate whilst remaining within a party structure (thus getting the automatic votes of the always-Republican rump) was a stroke of genius. That canny move may well have re-configured the Presidential role for the next century—Swift 2052 for the win!
An odd contradiction on the economic right of politics:
- There is objection to ideas such as basic income, unemployment benefits, etc. on the grounds that once people have basic needs catered for, their motivation to carry out additional economic activity for the marginal benefits it provides is minimal. A person who has basic housing costs paid for and a few hundred quid per month living expenses is assumed to be unmotivated to work further.
- There is objection to ideas of increasing tax take at the higher end, on the grounds that it will reduce motivation to work. Even though someone might be earning £100k or more, the idea is that they will be significantly demotivated if they have to pay another few hundred quid per year in taxes.
This seems contradictory. Either people are willing to work harder for more money, or there is a level beyond which the marginal monetary benefit will not produce additional motivation. If anything, you might expect it to be the other way round—a small amount of additional income gives a larger lifestyle change to the person in desperate economic circumstances than to the person on a large income. I suspect that at the heart of the contradiction is a belief that there are two sorts of people—the lazy, who wouldn’t care, and the motivated, who will always be willing to do more for a larger benefit. I think motivation is more complex than that.
That A-team, eh? They really liked making quiches, yes? They loved it when a flan came together.
Every cloud computer has a very expensive data centre lining.
Firms selling things have a dilemma. Price something too low, and, whilst it will sell well, it won’t make enough money to be worth doing (leading to the old joke: “We’re selling each item at a loss; but, don’t worry, we’ll make up on it in volume.”). Price something too high, and you won’t sell enough widgets to make enough money. The traditional view on this is that it is a tradeoff; find a mid-range price where you sell enough widgets at a high enough price. If you can’t do this, then the business isn’t viable.
This is finessed by the notion of adaptive pricing, where the same widget is sold to different people at different prices, making more businesses financially viable. Firms adjust prices based on some information that they can observe, or some structuring of how/when/where/to whom the products are sold:
- Selling to different demographics based on broad ability to pay. Discounts for students or retired people, who are likely to have a lower income. Changing prices at different times of the day, based on the demographic that is around (e.g. a price premium for buying a coffee at the station at peak commuter time; or, more simply, the idea of peak time tickets).
- Rewarding time/organisation: tickets come on sale at a particular date/time, but there are only a finite number at that price. People who are time rich/cash poor can spend time to be organised to buy at the cheaper price, whereas people who have more money don’t have to spend the time, they just buy at the higher price later.
- Selling at different prices in different locations. This has a dark side too; some firms have exploited the lack of transport options of poor people living in cut-off areas by selling at a higher price.
- Auctions, where items are sold for a bespoke price based on demand.
- Secondary markets, where a firm sells widgets cheaply and efficiently, but a secondary retailer (such as a ticket tout) buys up some of them and sells them on to the final purchaser at an inflated price.
- Hiding prices. Rather than a price being given up-front, you have to go through some intermediary system that judges your ability to pay, or your need for the product, and adjusts prices accordingly. The watch shop that judges whether you are a middle-income watch enthusiast or a rich person who wants to brag about the cost of their watch; the retailer of tools who judges whether you will be using the tool day-in-day out or are an occasional user who would buy it for a sufficiently low price.
- Similarly, making use of your purchasing history to adjust prices on an online system.
- Micropayments. Rather than paying up-front to purchase something, you pay by the number of minutes/hours that you use it, or what you use it for.
- Time-adjusted pricing. You show an interest, and if you want it right now you pay the price; the price goes down with time, but if you wait too long you run the risk (perhaps entirely artificially generated) that stock will run out. The TV-based retailer PriceDrop is canonical here.
- Rewards. You all pay the same price up front, but more price-sensitive customers are given some of that money back as vouchers so that their average spend per widget is lower in the long run.
- Direct demand-adjusted pricing. Uber’s entirely-up-front “surge pricing”, for example. Again, this speaks to the time/money tradeoff; someone who needs a lower price might be prepared to wait for half an hour to see if surge pricing goes away.
- Artificial hobbling. You all buy the same product, making manufacturing easy, but some features are turned off on the lower product range. Tesla cars work like this; you can buy a cheaper version, which has a lower distance range; but, the hardware is the same as the premium product, the distance is just limited by a software switch in the cheaper version.
- Things that seem more different. The same object sold with changes to the branding. Surplus stock sold to a poundshop on the condition that they repackage it. Cheap train tickets sold through a different brand, but when you show up you are on the same train in the same seats as people who paid a lot more.
- Superficial benefits. Exploiting that some people will pay for “the best” regardless. First-class train travel is probably a decent example here; a slightly more comfortable seat and free tea/coffee, but sometimes at a price premium which seems irrationally larger.
I would make an educated guess that cracking adaptive pricing will be one of the big innovations in business in this century. It is increasingly used, but there is still a huge amount of finesse to do here. Already, supermarkets are experimenting with systems such as electronic price displays, allowing dynamic adjusting of price during the day, either by broad demographic shifts, or by minute-by-minute demand. And there are already critiques: the transport company that (algorithmically) increases its prices following a natural disaster, the company that (algorithmically) sells the music of a recently-dead star at a premium.
Interestingly, there is a weird potential consequence to all of this. Will this mean that differences in income become less pronounced? If I had an ideal adaptive pricing system, where, say, I charged people not a price, but a proportion of their income, for my product, then that would have the outcome that people would de facto have the same income. Clearly, the systems above are not at that level yet; but, each adaptive pricing innovation brings us closer to that.
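That limiting case can be sketched as a toy calculation (the figures and the price fraction are invented for illustration): if every unit costs a fixed fraction of the buyer’s income, then the number of units anyone can afford is independent of their income.

```python
# Toy sketch of perfectly income-proportional pricing: each unit costs a
# fixed fraction of the buyer's income, so purchasing power is the same
# regardless of income. All figures are invented for illustration.

def units_affordable(income: float, price_fraction: float = 0.001) -> float:
    """Number of units a buyer can purchase if each unit costs a fixed
    fraction of their income."""
    price = income * price_fraction
    return income / price  # = 1 / price_fraction, independent of income

# A buyer on £20,000 and a buyer on £200,000 can afford the same number
# of units under this scheme.
assert units_affordable(20_000) == units_affordable(200_000)
```

Real adaptive pricing is nowhere near this extreme, but each of the mechanisms listed above moves a step in this direction.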
A long time ago, as a wet-behind-the-ears English person coming to Scotland for the first time, I was intrigued/surprised/amused to see a copy of The New Testament in Scots in a bookshop (the old James Thin on South Bridge, now a branch of Blackwells).
I was vaguely aware that there was a Gaelic language, which not many people used, and had a basic knowledge that there was a Scots accent and vocabulary, albeit largely gleaned from watching Russ Abbot’s “see u Jimmy” character on TV:
…but the idea of treating this as a language was alien to me. I’ve developed my knowledge of this world over the years, and can appreciate its literary qualities, particularly through the thoughtful work of Hugh MacDiarmid. But, what explains my initial sense that this sort of thing is a bit ludicrous, a little trying-too-hard:
…a little too close to the clearly humorous (though perhaps not evangelically purposeless) Ee by Gum, Lord!: The Gospels in Broad Yorkshire.
Why did I, 25 years ago, think that its description as “a translation” was odd? I wouldn’t have regarded a translation into French or Japanese or Guarani as strange—so, why Scots? This touches, I suppose, on the language vs. dialect debate: when does a dialect become a separate language? This seems to be an ill-defined question; there is clearly a continuum, and whilst groups of language-users cluster at certain points thereon, this doesn’t happen cleanly enough to be a series of isolated clumps.
One idea that might help to explain this is the uncanny valley; here’s one of its inhabitants, a rather realistic looking humanoid robot:
This sort of thing—not far off being human, but not close enough to “pass”—is said to be uncanny, and this is backed up by a number of empirical studies. People are freaked out by this, much more than by something really realistic or something more cartoony and obviously unrealistic. There is a point on the similarity scale, close to full realism, where suddenly people’s familiarity and comfort with the thing rockets downward:
I think the same is true for languages. Sufficiently far away—English to French, say, or Sanskrit—and the language is dissimilar, clearly different. Close enough—Nottinghamshire to Yorkshire, say—and the similarities are unremarkable. But the distance from RP English to Scots sits at just the right distance of unfamiliarity; like enough to be familiar, far enough away to seem different. Interestingly, the reaction is one of amusement rather than unsettledness; but, the idea of an emotional reaction being triggered by something that is close to, but not quite, something familiar is still there.