Gresham’s Law for the 21st Century: “Bad cultural relativism drives out good cultural relativism.”
Places with a pretension to being high-end often put a human in the loop in the belief that this makes for a better service. This is particularly the case in countries where basic labour is cheap. The idea, presumably, is that you can ask for exactly what you want, and get it, rather than muddling through understanding the system yourself. But this can sometimes make for a worse service, by putting a social barrier in the loop. For example, I have just gone to a coffee machine at a conference, where there was someone standing by it waiting to operate it. As a result, I got a worse outcome than if I had been able to operate it myself. Firstly, I was too socially embarrassed to ask for what I would have done myself—press the espresso button twice—because that seems like an “odd” thing to do. Secondly, I got some side-eye from the server when I didn’t take the saucer; as a northerner I don’t really believe in them. So, by trying to make this more of a “service” culture, the outcome was worse for me, both socially and in terms of the product that I received.
My (hard right-wing) mother used to go on about how people in some countries needed a strong dictator to keep them under control. It is one of the remarkable features of the last few years that politicians in democratic countries have managed to persuade their own populations that it is in their interests to vote for near-dictators to keep them under control.
Is the current state of computer science education analogous to a situation where there were no business schools, and everyone who wanted to do “business studies” had to do economics instead?
I’ve sometimes joked that I only have hobbies because they are necessary for me to indulge my meta-hobbies of project management, product design, and logistics. Sometimes, I worry that I get more pleasure from the planning that goes around an activity than from doing the activity itself: planning the travel and activities for a trip, assembling a well-organised and well-chosen set of accessories or tools for doing some craft, preferring to be the person who organises the meetings and makes up the groups rather than being a participant in the activity.
I wonder where this comes from? I think part of it is from growing up in a household where there wasn’t much money to spend on leisure stuff. As a result, I spent a lot of my childhood planning what I would do when I had things, making tables and catalogues of things, and endlessly going over the same small number of resources. I remember planning in great detail things like model railway layouts, devising complex electrical circuits, and filling notebook-after-notebook with code in anticipation of the day when I might finally have access to a computer to run it on—a computer which would be chosen not on a whim, but from detailed comparison tables I had drawn up from catalogues and ads so as to get the very best one for the limited money we had.
The intellectual resources I had access to were interesting. We had some books, bought from W.H. Smith, brought home from the school where my father taught, bought from a catalogue of discount improving educational books which was available at school (which introduced me to the excellent Usborne books, which I still think are a model for the exposition of complex concepts), or bought from the eccentric selection available at remainder shops (I particularly remember three random volumes of an encyclopaedia that I had bought from one such shop). The local library was a good resource too, but I rapidly exhausted the books on topics of relevance to me, and just started reading my way through everything; one week I remember bringing home a haul of books on Anglicanism, resulting in my mother’s immortal line “You’re not going to become a bloody vicar, are you?”. Catalogues and the like were an endless source of information too; I remember poring endlessly over detailed technical catalogues such as the Maplin one, and spec sheets from computer shops, compiling my own lists and tables of electrical components, details of how different computers worked, etc. I remember really working through what limited resources I had, endlessly reading through the couple of advanced university-level science books that a colleague of my mother’s had given to her via a relative who had done some scientific studies at university.
There’s something to be said for trying damn hard to understand something that is just too difficult. I remember working for hours at a complex mathematical book from the local library about electrical motors, just because it was there and on an interesting topic, and learning linear and dynamic programming, university-level maths topics, again because there happened to be a good book on them in the local library. These days, with access to a vast university library, books at cheap prices on Amazon, and talks on almost every imaginable topic available on YouTube, I think I waste a lot of time trying to find some resource that is just at my level, rather than really pushing myself to make my own meaning out of something that is on the very fringe of my level of possible understanding. I remember the same with courses at university—I got a crazily high mark (88% or something) in a paper on number theory, where I had struggled to understand and the textbooks were pretty ropey, whereas the well-presented topics with nice neatly presented textbooks were the golden road to a 2:1 level of achievement.
Talking of lectures and YouTube etc., another thing that is now near impossible to get a feel for is the ephemerality of media. There were decent TV and radio programmes on topics I was interested in, science and technology and the like, but it seems incomprehensibly primitive that these were shown once, at a specific time, and then probably not repeated for months. How bizarre that I couldn’t just revisit them. But, again, it made them special; I had to be there at a specific time. I think this is why lecture courses remain an important part of university education. About 20 years ago I worked with someone called Donald Bligh, who wrote an influential book called What’s the Use of Lectures?, which anticipated lots of the later developments in the flipped classroom etc. He couldn’t understand why, with the technology available to deliver focused, reviewable, breakable-downable, indexable online material, we still obsessed about the live lecture. I have a lot of sympathy for that point of view, but I think lecture courses deliver pace and, at their best, model “thinking out loud”—particularly for technical and mathematical subjects. When everything is available at hand, we just get stuck in focus paralysis; I do that with things I want to learn: there are too many things, and it is too easy, when something gets hard, not to persevere, and to turn to something else instead; or I spend endless amounts of time in search of the perfect resource, one that is just at my level. This is what I wasn’t able to do, 30 years ago, in my little room with limited resources, and so I got on with the task at hand.
How can we regain this focus in a world of endless intellectual resource abundance? Some approaches are just to pace stuff out—even MOOCs, where the resources are at hand and could be released, box-set-like, all at once, nonetheless spoon them out bit-by-bit in an attempt to create a cohort and a sense of pace. Another approach is pure self-discipline; I force myself to sit down with a specific task for the day, and use techniques such as the Pomodoro technique to pace out my time appropriately. Others use technologies to limit the amount of time spent online, such as web-blockers that limit the amount of time spent either on the web in general, or specifically on distractors such as social media. But, I still think that we don’t have a really good solution to this.
When I was around 12 years old, we went for one of our regular family trips into the Derbyshire countryside. After lunch, I went off for a bike ride. I thought that I had communicated this to my parents, but they thought I had meant that I was going to ride my bike through the woods for 5-10 minutes, whereas I meant that I was going for an hour or two of riding.
When I got back, my family were worried sick about where I had got to. Later, I found out that my grandmother had at some point during my absence uttered the immortal line: “If he’s gone and cycled off a cliff, I’ll bloody well kill him!”.
At York University in the 1990s, there was a lane called “Retreat Lane” which was the start of the main route from campus into town. It was somewhat sketchy, and we were warned not to use it at night; it is good to see that proper lighting was installed a while back. There were three prominent pieces of graffiti on the walls and gates:
- The words “WATFROD F.C. RULES OK” (yes, that spelling) in huge letters.
- The words “Ah good the sea!” in chalk. That seemed to have been there for years; it was still there a few years ago, so people must re-chalk it from time-to-time (I would, I suppose, if I noticed it was fading).
- The words “Meat is Murder” written at the top of a gate to a field that sometimes had cows in it. Later joined by various other (rather less sincerely meant) slogans, such as “Veg is Vomit” and “Fish is Foul”.
Computational thinking is the idea of a set of skills that should be promoted as part of a broad education. The term originates with work by Jeannette Wing (e.g. this CACM article) over a decade ago. Computational thinking has developed to mean two slightly different things. Firstly, the use of ideas coming out of computing for a wide variety of tasks, not always concerned with implementing solutions on computers. Systematic descriptions of processes, clear descriptions of data, ideas of data types, etc. are seen as valuable mental concepts for everyone to learn and apply. As a pithy but perhaps rather tone-deaf saying has it: “coding is the new Latin”.
A second, related, meaning is the kind of thinking required to convert a complex real-world problem into something that can be solved on a computer. This requires a good knowledge of coding and of the capabilities of computer systems, but it isn’t exactly the coding process as such: it is the process required to get to the point where the task is obvious to an experienced coder. These are the kinds of tasks that are found in the Bebras problem sets, for example. We have found these very effective in checking whether people have the skills in abstraction and systematisation that are needed before attempting to learn to code; they test the kinds of things that are needed in computational thinking without requiring actual computing knowledge.
A thought that occurred to me today is that these problems provide a really good challenge for artificial intelligence. Despite being described as “computational thinking” problems, they are actually problems that test the kind of things that computers cannot do—the interstitial material between the messy real world and the structured problems that can be tackled by computer. This makes them exactly the sort of things that AI ought to be working towards and where we could gain lots of insight about intelligence. One promising approach is the “mind’s eye” visual manipulation described by Maithilee Kunda in this paper about visual mental imagery and AI.
As we approach the beginning of term, and a new cohort of students joining our universities, it is worth remembering that a decent number of our new students are arriving frightened of us, or assuming that we will look down on them. I think that the comment here, from a student admissions forum, is not untypical:
It is important, in our first few interactions with them, to make it clear that this isn’t the case.
In B&Q yesterday there were two parents and a child (around 5-6 years old) pushing a trolley out to their car. The child was insistently declaring an interest in helping to move the large boxes of tiles from the trolley to the car; the father insisting each time that it was pointless, that it would take two adults to move it, and that there wasn’t any point in helping.
One thing that helped me to develop a “growth mindset”—the view that skills and intelligence are largely not fixed or innate but the result of the right kind of study and development—was that my parents found lots of ways to involve me, at a level appropriate to my knowledge, skills, and development, in so many areas of life. I have no idea whether this was a deliberate strategy or that they just fell into it, but it was very helpful in instilling a positive view of the value of productive work.
A side note: I have often wondered if being a (to a first approximation) only child helped with my learning a wide range of skills, in particular not having a gender-stereotyped pattern of skills. Because I was the only child around, I would be co-opted into helping with a lot of things, whether cooking or washing, car-repair or plumbing. Perhaps in a larger family with a mixture of genders in the children, the girls might go off to help with “women’s stuff” from female relatives, whilst the boys do “men’s stuff” with males.
An idea that I got from Colin Runciman. When marking student work, and you come across a bad answer, ask yourself “is this blank-equivalent, i.e. does this show the same level of insight into the problem as if the student had written nothing?”. In many cases, the answer is “no”. We frequently fail to use points on the marking scale that are between zero and pass, particularly when marking short answer questions in exams. Thinking about “blank equivalence” gives us a tool to decide which answers genuinely show insufficient knowledge or skill to be worth any marks, from those that are still fails, but nonetheless show some insight.
Perhaps the idea of “blank-equivalence” is valuable elsewhere. Perhaps a work of art is not good enough to be worthy of critical attention and positive aesthetic judgement—but, it is still not sufficiently devoid of skill and imagination to make the same impact on the world as doing nothing.
An extremely vivid memory from childhood—probably about seven or eight years old. Waking up and coming downstairs with an absolute, unshakable conviction that what I wanted to do with all of my spare time for the next few months was to build near-full sized fairground rides in our back garden. I don’t know where this came from; prior to that point I had no especial interest in fairground rides, beyond the annual visit to the Goose Fair. I wanted to go into the garage immediately and start measuring pieces of wood, making designs, etc. It took my parents a couple of hours to dissuade me that doing this was utterly impractical, against my deep, passionate protestations. Truly, I cannot think of anything before or since that I wanted to do with such utter conviction.
To historians, “history” basically means the (complex, disputed) knowledge that contemporary people have about what happened in the past. To the general public, “history” is the stuff that happened—about which contemporary people might have limited evidence, disputes of interpretation, etc. This can lead to confusion in communicating ideas about the methodology and ontology of history. For example, when I first came across people saying things along the lines of “historical facts change over time”, I thought that they were embracing a much more radical vision of history than they were. They were making the (important) point that what we call “facts” are based on incomplete evidence and biased by political/social/religious views and our biases coming from the contemporary world. I thought that they were making the much more radical claim that the subjective experience of people in the past changed due to our contemporary interpretations—a kind of reverse causality.
Do KPIs encourage a culture of making small improvements to stuff that we know how to measure well, rather than disruptive changes in areas where we haven’t even thought about how to measure things yet?
Bus driver (paraphrased): “Since the new big-businessman owner took over, [my local football club]’s been run like a profitable business.” “Sounds good.” “No, it’s crap. When rich people have taken over other clubs, they’ve done it for a hobby, and put loads of money into paying top players; our man wants to run it like a proper business.”
Contemporary governments typically like competition, and also want to allow companies to act in a free market. Unfortunately, the free market also means that companies are free to purchase other companies, and regularly do so, usually in areas cognate to their current areas of business. This ends up creating uncompetitive situations where there are few buyers and sellers in a single area of business. To combat this, an interventionist scheme is usually put in place, whereby mergers and takeovers have to be approved by some governmental body. One of the occasions when that body will typically exercise its power is when a merger would leave too few firms for effective competition.
This is clumsy. It makes single, complex decision points and is prone to political intervention and bias. Perhaps instead, we could have a system that delegates this choice to the companies. For example, let’s imagine a graded scale of costs to register annually as a limited company. If you are registering in a business area where there are lots of players competing, then the cost is minimal—say, close to the cost of administering the registration. As the number of viable players gets smaller, the cost artificially ramps up very rapidly; if you are looking to merge two out of the last three remaining supermarket chains, then the annual registration cost is millions.
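As a sketch, the ramp could be something as simple as an exponential in the number of remaining competitors. Everything below is a hypothetical illustration of the idea rather than a worked-out policy: the base fee, the threshold, and the multiplier are invented numbers.

```python
def registration_fee(num_competitors, base=100.0, threshold=10, multiplier=10.0):
    """Hypothetical annual registration fee for a limited company.

    Above `threshold` viable competitors in the business area, the fee
    is just the flat administrative cost (`base`). Below it, each lost
    competitor multiplies the fee by `multiplier`, so the cost ramps up
    very rapidly as the market concentrates.
    """
    shortfall = max(0, threshold - num_competitors)
    return base * multiplier ** shortfall

# A crowded market costs next to nothing to register in...
print(registration_fee(25))
# ...but registering as one of the last three players is punitive,
# and a merger down to two would cost ten times more again.
print(registration_fee(3))
print(registration_fee(2))
```

One attraction of a smooth schedule like this is that it removes the single, politically exposed decision point: firms can price the regulatory cost of a proposed merger in advance, just as they price any other cost of doing business.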
If, like me, you believe that hypothecation of taxes isn’t automatically to be avoided, you might even dedicate the sums earned from this to a fund to support startup/disruptor businesses in business areas with little competition.
The details are tricky. How do you set the cost, and the ramping? How do you define “the same business area”? How do you prevent formally distinct entities actually being controlled by the same entity in practice? But, these might not be insurmountable.
The sorites (Greek for “heap”) paradox is a puzzle about language. We unambiguously use the word “heap” to represent a large pile of, say, stones—say a few hundred. If we remove one, that is still, uncomplicatedly, a heap. Yet, we cannot do this indefinitely. Once we have, say, two stones, everyone agrees that this is clearly not a heap. The usual resolution to this is to argue that concepts such as “heap” are irreducibly vague; there will always be a fuzzy middle ground between “heap” and “non-heap”.
Interestingly, there are still examples of this at very small scales. There is currently a proposal to merge two of the small number of supermarket chains in the UK. At present, most people would agree that the current system is decently competitive. Reduce it by one and—well, is it still a competitive system? This shows that a sorites-like situation can exist with small numbers of objects, and so perhaps isn’t a problem of fine-grainedness as much as we might first think.
Graduation ceremonies should have credits, in the same way that films do. This would emphasise to students and a wider set of stakeholders the scale of the support and the hidden activity that goes into providing the environment in which students can flourish.