In B&Q yesterday there were two parents and a child (around 5–6 years old) pushing a trolley out to their car. The child kept insistently declaring an interest in helping to move the large boxes of tiles from the trolley to the car; the father insisted each time that it was pointless, that it would take two adults to move them, and that there was no point in the child helping.
One thing that helped me to develop a “growth mindset”—the view that skills and intelligence are largely not fixed or innate but the result of the right kind of study and development—was that my parents found lots of ways to involve me, at a level appropriate to my knowledge, skills, and development, in so many areas of life. I have no idea whether this was a deliberate strategy or whether they just fell into it, but it was very helpful in instilling a positive view of the value of productive work.
A side note: I have often wondered if being a (to a first approximation) only child helped with my learning a wide range of skills, in particular not having a gender-stereotyped pattern of skills. Because I was the only child around, I would be co-opted into helping with a lot of things, whether cooking or washing, car repair or plumbing. Perhaps in a larger family with a mixture of genders in the children, the girls might go off to help with “women’s stuff” from female relatives, whilst the boys do “men’s stuff” with male relatives.
An idea that I got from Colin Runciman. When marking student work, and you come across a bad answer, ask yourself “is this blank-equivalent, i.e. does this show the same level of insight into the problem as if the student had written nothing?”. In many cases, the answer is “no”. We frequently fail to use points on the marking scale that are between zero and pass, particularly when marking short-answer questions in exams. Thinking about “blank equivalence” gives us a tool to distinguish answers that genuinely show insufficient knowledge or skill to be worth any marks from those that are still fails, but nonetheless show some insight.
Perhaps the idea of “blank-equivalence” is valuable elsewhere. Perhaps a work of art is not good enough to be worthy of critical attention and positive aesthetic judgement—but, it is still not sufficiently devoid of skill and imagination to make the same impact on the world as doing nothing.
An extremely vivid memory from childhood—probably about seven or eight years old. Waking up and coming downstairs with an absolute, unshakable conviction that what I wanted to do with all of my spare time for the next few months was to build near-full-sized fairground rides in our back garden. I don’t know where this came from; prior to that point I had no especial interest in fairground rides, beyond the annual visit to the Goose Fair. I wanted to go into the garage immediately and start measuring pieces of wood, making designs, etc. It took my parents a couple of hours, against my deep, passionate protestations, to persuade me that doing this was utterly impractical. Truly, I cannot think of anything before or since that I wanted to do with such utter conviction.
To historians, “history” basically means the (complex, disputed) knowledge that contemporary people have about what happened in the past. To the general public, “history” is the stuff that happened—about which contemporary people might have limited evidence, disputes of interpretation, etc. This can lead to confusion in communicating ideas about the methodology and ontology of history. For example, when I first came across people saying things along the lines of “historical facts change over time”, I thought that they were embracing a much more radical vision of history than they were. They were making the (important) point that what we call “facts” are based on incomplete evidence and shaped by political, social, and religious views and by biases coming from the contemporary world. I had thought that they were making the much more radical claim that the subjective experience of people in the past changed due to our contemporary interpretations—a kind of reverse causality.
Do KPIs encourage a culture of making small improvements to stuff that we know how to measure well, rather than disruptive changes in areas where we haven’t even thought about how to measure things yet?
Bus driver (paraphrased): “Since the new big-businessman owner took over, [my local football club]’s been run like a profitable business.” “Sounds good.” “No, it’s crap. When rich people have taken over other clubs, they’ve done it for a hobby, and put loads of money into paying top players; our man wants to run it like a proper business.”
Contemporary governments typically like competition, and also want to allow companies to act in a free market. Unfortunately, the free market also means that companies are free to purchase other companies, and regularly do so, usually in cognate areas to their current areas of business. This ends up creating uncompetitive situations where there are few buyers and sellers in a single area of business. To combat this, an interventionist scheme is usually put in place, whereby mergers and takeovers have to be approved by some governmental body. One of the occasions when that body will typically exercise that power is when the merger would leave too few firms for effective competition.
This is clumsy. It makes single, complex decision points and is prone to political intervention and bias. Perhaps instead, we could have a system that delegates this choice to the companies. For example, let’s imagine a graded scale of costs to register annually as a limited company. If you are registering in a business area where there are lots of players competing, then the cost is minimal—say, close to the cost of administering the registration. As the number of viable players gets smaller, the cost artificially ramps up very rapidly; if you are looking to merge two out of the last three remaining supermarket chains, then the annual registration cost is millions.
If, like me, you believe that hypothecation of taxes isn’t automatically to be avoided, you might even dedicate the sums earned from this to a fund to support startup/disruptor businesses in business areas with little competition.
The details are tricky. How do you set the cost, and the ramping? How do you define “the same business area”? How do you prevent formally distinct entities actually being controlled by the same entity in practice? But these problems might not be insurmountable.
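To make the ramping idea concrete, here is a toy sketch of one possible fee schedule. Everything here is a made-up illustration, not a proposal: the base cost, the threshold, and the multiplier per “missing” competitor are all hypothetical numbers chosen so that a crowded sector costs roughly the admin fee while merging down to three players costs millions.

```python
def registration_fee(competitors: int,
                     base: float = 100.0,
                     threshold: int = 10,
                     ramp: float = 5.0) -> float:
    """Annual registration cost as a function of how many viable
    competitors would remain in the business area.

    Above `threshold` players, the fee is just the (hypothetical)
    administrative base cost; below it, the fee multiplies by `ramp`
    for each missing competitor, so it grows exponentially as the
    sector concentrates.
    """
    if competitors >= threshold:
        return base
    return base * ramp ** (threshold - competitors)

# A crowded sector: fee is close to the cost of administration.
print(registration_fee(25))  # 100.0
# Merging down to three remaining chains: the fee is in the millions.
print(registration_fee(3))   # 7812500.0
```

An exponential ramp is only one choice; the real design question flagged above—how steep, and measured over what definition of “business area”—is exactly where the scheme gets hard.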
The sorites (Greek for “heap”) paradox is a puzzle about language. We unambiguously use the word “heap” to represent a large pile of, say, stones—say a few hundred. If we remove one, that is still, uncomplicatedly, a heap. Yet, we cannot do this indefinitely. Once we have, say, two stones, everyone agrees that this is clearly not a heap. The usual resolution to this is to argue that concepts such as “heap” are irreducibly vague; there will always be a fuzzy middle ground between “heap” and “non-heap”.
Interestingly, there are still examples of this at very small scales. There is currently a proposal to merge two of the small number of supermarket chains in the UK. At present, most people would agree that the current system is decently competitive. Reduce it by one and—well, is it still a competitive system? Interestingly, this shows that a sorites-like situation can exist with small numbers of objects, and so perhaps isn’t a problem of fine-grainedness as much as we might first think.
Graduation ceremonies should have credits, in the same way that films do. This would emphasise to students and a wider set of stakeholders the scale of the support and the hidden activity that goes into providing the environment in which students can flourish.
“Oi!” (Whole bus goes silent) (parent shouting at child) “You’ve got a tissue in your pocket—don’t wipe your bogies on other people.” (collective “ugh” from the remainder of the bus).
The major social media companies have basically been providing the same, largely unchanging product for the last decade. Yes—they are doing it very well, managing to scale the number of users and amounts of activity, and optimising the various conflicting factors around usability, advertising, etc. But, basically, Twitter has been doing the same schtick for the last decade. Yet, if media and government were looking to talk to an innovative, forward-looking company, they might well still turn to such companies.
By contrast, universities, where there is an enormous, rolling programme of change and updating, keeping up with research, innovating in teaching, all in the context of a regulatory and compliance regime that would be seen as mightily fuckoffworthy if imposed on such companies, are portrayed as the lumbering, conservative forces. Why is this? How have the social media companies managed to convey that impression—and how have we in higher education failed?
I’ve been on a lot of student disciplinary panels over the years—examining students for plagiarism, etc.—and something that comes up over and over again is that some weaker students just can’t imagine that students are able to produce work of high quality without some amount of copying, patch-writing, or similar processes. The idea that you could sit down and produce from your head a fluent piece of fully referenced writing just isn’t what they imagine “ordinary people” are capable of. Writing comes from elsewhere—a mysterious world of books and articles that is somehow disjoint from the day-to-day world of ordinary people.
I once came across a maths version of this—a student who, when asked to solve simple algebra problems, was just plucking numbers from the air. They couldn’t imagine that other students in the class were actually solving the problems as quickly as they were. Instead, they assumed that the other students were somehow getting there by some kind of mysterious intuitive process, and that the way to get to that was just to start by “saying the first number that comes into your head” and then, over time, their subconscious would start to work things out and after a while the numbers that emerged would start to coincide with the solutions to the problems.
I think I had a similar problem with singing once upon a time (though at least I was conscious that there was something I wasn’t getting). People who had had no problem with grokking how to sing in tune with others would just say “you listen to the note and then you sing along with it”, which put me in the same position as our maths friend above—it just seemed to be something that you did until some pre-conscious process gradually learned how to do it. It doesn’t have to be that way. Eventually, thanks to a very careful description from the wonderful Sarah Leonard of exactly what the head/mouth/ears feel like when you are making the same note as others, I was able to improve that skill in a rational way. Before that, I just couldn’t imagine that other people were managing to do this in anything other than a mysterious, pre-conscious way. Somehow I had failed to pick up what that “in tune” feeling was like as a child, and carried this a decent way into adulthood.
For a while I wondered what these benches were all about:
They appear at a number of London and South-East railway stations, and when I first saw them I thought they were a bizarre and out-of-keeping design decision. Why choose something in such bright, primary-ish colours against a generally muted design scheme? They wouldn’t be out of keeping somewhere—but not here! And after a couple of years it suddenly struck me—they are the Olympic rings that hung at St. Pancras during the games, sliced up and turned into benches! My supposition is confirmed by Londonist here.
What’s going on here?
This is the back of the packaging of my protein bar. What’s with the white stripe across the top left? It reads, basically, “# _____ DAY, fuelled by 12g of PRIMAL PROTEIN”. Presumably the # is a hashtag marker, and there is meant to be some text between that and “DAY”. Is this some kind of fill-in-the-blank exercise? I don’t think so; it seems rather obscure without any further cue. Did it at one point say something that they had to back away from for legal reasons: “# TWO OF YOUR FIVE A DAY”, perhaps? If so, why redesign it with a white block? Does packaging work on such a tight timescale that they were all ready to go, when someone emailed from legal to say “uh oh, better drop that”, and so someone fired up InDesign and put a white block there? Surely it can’t be working on such a timescale that there wasn’t time enough to make it the same shade of red as the rest, or rethink it, or just blank out the whole thing. Is it just a production error? At first I thought it was a post-hoc sticker to cover up some unfortunate error, but it is part of the printed packaging. A minor mystery indeed.
Here is a graph that purports to be a summary of the number of divorces per 1000 married people between 2009 and 2016; i.e. the first part of the graph, up to 2014, is before same-sex marriage became legal.
My immediate thought is that this must be wrong—if every marriage is between a man and a woman, then the numbers of divorces must be equal between men and women. So, could the “per 1000 married people” be the gotcha here? No. It doesn’t say “per 1000 people”, but “per 1000 married people”, and in the era that this is referring to, the number of married men and married women would be identical. This suggests that there is an error in the calculation here. Oddly, the graph has identical numbers from 2013 onwards; we might expect some divergence if we carry on with the graph, since, even simply due to statistical fluctuations, the number of same-sex male divorces and same-sex female divorces is likely to differ.
So, what is happening during the 2009-2012 part of the graph? I suspected initially that they have mistakenly used “per 1000 people” on those entries in the graph, rather than “per thousand married people”. But, this is at odds with the numbers from 2013-2016, where the graph is as expected—numbers “per thousand people” will be a lot less than “per thousand married people”, and this huge leap isn’t apparent between the figures for 2012 and 2013. So, what explains it?
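The equality argument above can be spelled out in a line or two of arithmetic. The figures below are invented purely for illustration, not taken from the graph: the point is that with opposite-sex-only marriage, each divorce contributes one man and one woman to the numerator, and each married couple contributes one man and one woman to the denominator, so the two rates are forced to be identical.

```python
# Hypothetical, made-up figures for one year in the opposite-sex-only era.
divorces = 100_000            # each divorce involves one man and one woman
married_couples = 12_000_000  # each couple is one married man + one married woman

# Divorces per 1000 married people, computed separately for each sex.
rate_men = 1000 * divorces / married_couples
rate_women = 1000 * divorces / married_couples  # same numerator, same denominator

# The two rates cannot differ, whatever numbers you plug in.
assert rate_men == rate_women
print(round(rate_men, 2))  # 8.33
```

Any male/female divergence in the pre-2014 part of such a graph therefore has to be an artefact of the calculation, not a real feature of the data.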
I’ll restrain myself from ranting about the heinous sin of connecting discrete values with lines.
Here’s another graph (from this Daily Mail article (ugh!)) that seems to be from the same source and shows a similar error:
Sometimes I find myself making an apology in the following form: “Sorry, but I assumed…”. I’ve occasionally been upbraided for this with a response like “Well, you shouldn’t have assumed in the first place, you should have asked.”. There is perhaps something reasonable here—it isn’t good to be presumptuous, and it isn’t good to offer a glossed apology—but, I usually leave such an encounter with a feeling of “Well, that all sounds very reasonable, but in practice we can’t go around constantly questioning and digging into every detail of an interaction; at some point we have to make a pragmatic choice to use background knowledge and assumptions built on our knowledge of social rules and norms, the particular person, and the particular situation.”
Then I realised. When A says to B “I’m sorry, but I assumed…” it is actually a subtle upbraiding of B by A. The less polite version of this is A saying to B “Sorry, but I perfectly reasonably assumed that we were working in our regular framework of norms of communication and our mutual knowledge of each other and the situation, and you unreasonably did something that didn’t fit into those norms and now you seem to be blaming me for making a perfectly reasonable assumption rather than what should have happened which is that you were doing something that was socially or individually uncharacteristic and so you should have proactively given me reasonable information so that I could understand the situation in which we were interacting (innit).”. Of course, this is complicated—one of the reasons that these misunderstandings occur is when A and B think that they are on common ground (what Wittgenstein calls “agreement not in opinions, but rather in form of life”), but actually are working with different frameworks.
In his book The English Constitution, Walter Bagehot describes two components of government. The first are the “efficient” components, such as the cabinet, that get on with the actual business of government, making decisions about the nation. The second are the “dignified” components, such as the monarchy, that have little decision making power (either de jure or de facto) but which play a role in serving as a, largely uncontroversial, locus for patriotism and the stability of the nation. England is a key example of a polity where these two components are largely separate; in some countries, largely to their detriment, the components blur. Clearly, this can change through time; at one time the king’s very word was law, now the role of the queen in the day-to-day business of politics is minimal.
I would like to speculate that the US presidency is on its way from being an efficient institution to becoming a dignified one. The election of Trump has provided us with a figure whom other components of the government have openly said they will ignore—a military leader, being interviewed about the US nuclear capability, has argued that they would make a considered decision about an order from Trump to make a nuclear strike, despite this being formally an uncomplicated order from a superior officer (commander-in-chief, natch!) to a more junior one. Whilst this has probably been the truth throughout nuclear history—there are reports of various cold-war nuclear command officers deciding to take a “watch and wait” approach when the preconditions for a nuclear strike had already been met—this is probably the first time that this has been discussed so openly. This marks the beginning of the presidency being regarded as a ceremonial, “dignified” institution; I would assume that a command from Queen Elizabeth II would be taken with similar cynicism by the UK military.
So, is this just an aberration? A one-off, to be replaced in 2020 by a return to business-as-usual? This is entirely possible; a nation weary of celebrity posturing could return to the model of the politically experienced leader as the ideal candidate. But, there is hunger from different directions for another celebrity-POTUS. Even if the US tires of isolationist nationalism, there is a decent chance that the Democrats won’t be willing to field another explicitly large-P Political figure against the celebrity of Trump in 2020 (especially as by that point, their store of public-profile figures is running thin; Obama timed out, figures such as Clinton and Kerry tainted by previous unsuccessful runs). Would you really put up a governor of a flyover state when you have an Oprah or Zuckerberg? So, let’s say that Oprah wins in 2020, and serves two successful terms of office, taking us to 2028. Already, we’re reaching a stage where the idea of electing some competent former ambassador seems so boring and 20th century. After four years of President Zuck struggling to control the growing power of the BRICS and some crisis yet to be imagined, we reach a point where a shadow system of efficient institutions is starting to sweep in underneath to take on the substantive job of executive government. By 2032, Will Smith and Ellen DeGeneres are the sort of people who are the serious, establishment candidates, fighting not to be seen as boring establishment figures against the candidacy of Katy Perry. By 2050, the Presidency is a ribbon-cutting, “dignified” institution, as much a sign of faded-celebrity-trying-to-raise-their-profile as I’m a Celebrity… is today. A young turk in the present day would be better studying which institution will rise to take the place of the efficient powers of the President, than plotting a 40-year route to the role itself.
I’d wondered for a while if celebrity would one day take the Presidential role—after all, there is a system of (more-or-less) direct election, both at the primaries and the final vote, that provides a way to circumvent the slog of e.g. UK national politics. But I always thought that this would come about from an independent candidate standing on a largely youth-oriented platform. I had assumed that at some point some cocky chancer like Jay-Z might decide to go for it as a mid-life crisis thing, taking around 15% of the vote as an anti-politics third candidate, Nadering-out a decent Democratic candidate in favour of a Dubya-like Republican due to demographics, earning the ire of mainstream politicians en route. I was blindsided by Trump’s candidacy—playing a role as an anti-politics candidate whilst remaining within a party structure (thus getting the automatic votes of the always-Republican rump) was a stroke of genius. That canny move may well have re-configured the Presidential role for the next century—Swift 2052 for the win!
An odd contradiction on the economic right of politics:
- There is objection to ideas such as basic income, unemployment benefits, etc. on the grounds that once people have basic needs catered for, their motivation to carry out additional economic activity for the marginal benefits it provides is minimal. A person who has basic housing costs paid for and a few hundred quid per month living expenses is assumed to be unmotivated to work further.
- There is objection to ideas of increasing tax take at the higher end, on the grounds that it will reduce motivation to work. Even though someone might be earning £100k or more, the idea is that they will be significantly demotivated if they have to pay another few hundred quid per year in taxes.
This seems contradictory. Either people are willing to work harder for more money, or there is a level at which the marginal monetary benefit will not produce additional motivation. If anything, you might expect it to be the other way round—the marginal benefit of a small amount of additional income gives a larger lifestyle change to the person in desperate economic circumstances than to the person on a large income. I suspect that at the heart of the contradiction is a belief that there are two sorts of people—the lazy, who wouldn’t care, and the motivated, who will always be willing to do more for a larger benefit. I think motivation is more complex than that.
That A-team, eh? They really liked making quiches, yes? They loved it when a flan came together.