“Real Artists Ship”

Colin Johnson’s blog


Variations on Folk Sayings (22) (Beatles special)

December 1st, 2020

“The future is dying in the wrong order.”

Variations on Folk Sayings (21)

December 1st, 2020

“Nice world if you can get it.”

Exit Questionnaires and Interviews

December 1st, 2020

Organisations like to do exit questionnaires and interviews with people who are leaving the organisation voluntarily. They want to understand why people have chosen to leave their jobs, whether there are any problems, and whether there are ways in which they can improve their talent development processes or pipelines.

But, there is no upside to this for the (former) employee. They are leaving or have left—they don’t owe the labour of the questionnaire or interview to the former employer in any contractual sense. Also, there is a considerable downside risk. If someone says something damning or (perhaps unintentionally) disruptive at such an interview, it can burn bridges for future partnerships or a future return to that organisation.

The risk is stacked against the employee and in favour of the employer. So, it seems only reasonable that a sensible employee would refuse such a request. Perhaps, therefore, there needs to be some motivation to compensate for the risk. I don’t think that it is unreasonable for the former employer to pay the former employee a non-token amount to do this.

We baulk at this. Why should we pay for this? Well, if we value the information, we should be able to work out a reasonable monetary value for it—how much would our organisation gain from knowing that piece of information? We seem very reluctant to quantify the value of information in monetary terms, probably because (unlike a physical thing) it is literally intangible, and so ought, surely, to cost nothing. There are exceptions: companies subscribe to market intelligence briefings, for example. But, overall, we are reluctant to do this. Another exception is in management accounting, which has a well-developed idea of doing a cost-benefit analysis of gathering information. Sometimes, information just isn’t worth knowing—the difference it would make to our decision making is outweighed by the cost of finding it out. This still jars with a very human understanding of information.
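
A minimal sketch of the accounting idea, with entirely made-up numbers of my own: the value of a piece of information is the improvement it makes to our decisions, net of the cost of obtaining it.

\[
\text{net value of information} = \big( E[\text{payoff with the information}] - E[\text{payoff without it}] \big) - \text{cost of gathering it}
\]

If, say, knowing why leavers leave would improve our retention decisions by an expected £2,000 a year, but paying for and analysing the interviews would cost £3,000 a year, the information is not worth gathering; reverse the figures and it is.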

Why is Funny Funny?

November 16th, 2020

Occasionally, I hear the opinion that topical TV panel shows such as Have I Got News for You and Mock the Week are “scripted”. Clearly, this is meant pejoratively, not merely descriptively. A scripted programme would not be presenting itself to us honestly.

I don’t believe this (I have seen a couple of recordings of similar shows, and there isn’t any evidence of scripting to my eye), but equally they aren’t simply a handful of people going into a studio for half an hour and chatting off the top of their heads. My best guess for what is happening is a mixture of genuinely off-the-cuff chat, lines prepared in advance by the performers themselves, lines suggested by programme associates, material workshopped briefly before the performance, and some pre-agreed topics so that performers can work in material that they use in their live performances. All this is, of course, topped off by the fact that a lot of material is recorded, and the final programme is a selective edit of this material.

But, if it were to be scripted from end to end, and the performers essentially actors reading off an autocue, why would that be a problem? Like Pierre Menard’s version of Don Quixote, we wouldn’t know the difference. Why would the knowledge that these programmes were scripted actually make them less funny? That is, that knowledge would make us laugh less at them—this isn’t just some piece of contextual information, where we would still find it just as funny but feel slightly cheated that it wasn’t as spontaneous as we were led to believe. We would, I would imagine, actually find it less funny.

There’s something about the human connection here. Even though we don’t know the performers personally, there is still some idea of it being “contextually funny”. Perhaps in some odd way it is “funny enough” to be funny if we believe it to be spontaneous, but not funny enough if we believe it to be scripted. Perhaps we are admiring the skill of being able to come up with the lines “on the fly”—but admiration doesn’t usually cash out in laughter. Somehow, it seems to be to do with the human connection that we have with these people. We find it genuinely funny because of the context.

I’ve often wondered why I can’t find other countries’ political satire funny. I can work out the wordplay in Le Canard enchaîné, but I don’t chuckle at it. I might admire it, but the subjects of the satire are just too distant; perhaps I don’t have a stake in the subjects in the same way that I do in the people that I read about in Private Eye.

When I used to lecture on the Computational Creativity module at Kent, I would talk about the Joking Computer system, an NLP system that could generate competent puns such as “What do you get if you cross a frog with a street? A main toad.” I used to say that we would find that joke funny—genuinely funny—if it were told to us by a six-year-old child, say your younger brother or sister, even though it isn’t a hilarious joke. Similarly, perhaps, we might give the computer some leeway—it isn’t going to produce an amazingly funny joke, but it is funny for a computer. But, this argument always felt a bit flat. Perhaps it is the human connection—we don’t care that the (soul-less) computer has “managed” to make a joke; we lack that human connection.

My drama teacher at school used to say about the performances that we took part in that he wanted people to say that they had seen a “good play”, not a “good school play”. There is something in that. Perhaps, the same is true for computational creativity. It needs to be “creative enough” to be essentially acontextual before we start to find it genuinely creative.

Acceptability of Deepfakes for Trivial Corrections: The Thin End of a Wedge?

June 17th, 2020

Clearly deepfakes are unacceptable, yes? It is morally unsound to create a fake video of someone saying or doing something, and to play that off as a real recording of that person doing it.

But, what about a minor correction? I recently saw a video about personal development, talking about how people move through various stages of life, and making a number of very positive points and offering pieces of advice. I thought that this might be useful to show to a group of students as part of a professional development session. But, there was a problem. At some point, the speaker talks about life changes, including adolescence, with a reference to “when people start to get interested in the opposite sex”. The heteronormativity of this made me flinch, and I certainly wouldn’t want this to be presented, unadorned, to a group of students. This is both because of the content as such, and because I wouldn’t want the session to be derailed onto a discussion of this specific point, when it was a minor and easily replaceable example, not core to the argument.

I suppose what I would typically do would be to use it, but offer a brief comment at the beginning that there is something in it which is not germane to the main argument but is problematic, and that on balance I thought the resource was worth using despite the problematic phrase. I might even edit it out. Certainly, if I were handing out a transcript rather than using the video, I would cut the phrase and mark the omission with an ellipsis […]. But, these solutions might still focus attention on it.

So—would it be acceptable to use a deepfake here? To replace “when people start to get interested in the opposite sex” with “when people start to develop an awareness of sexuality”, for example? There seems something dubious about this—we are putting words into someone’s mouth (well, more accurately, putting their mouth around some words). But, we aren’t really manipulating the main point. It’s a bit like how smoking has been edited out of some films, particularly when they are to be shown to children—the fact of the character smoking isn’t a big plot point, it was just what a character happened to be doing.

So, is this acceptable? Not acceptable? Just about okay, but the thin end of the wedge?

Big Scary Words

May 19th, 2020

I once saw a complaint about a crowdfunded project that was going awry. The substance of the complaint was that, in addition to their many other failings, the people funded by the project had used some of the money to set up a company. Basically: “I paid you to make a widget, not to waste my money setting up a company”. There’s an interesting contrast in the view of the word “company” here. To someone in business, spending a few hundred pounds to register a company is a basic starting point, providing a legal entity that can take money, hold the legal rights to inventions in a safe way, provide protection from personal bankruptcy, etc. But to the person making the complaint, “setting up a company” no doubt meant buying a huge office building, employing piles of accountants and HR departments, and whatnot.

We see a similar thing with other terms—some things that are part of normal business processes sound like something special and frightening to people who aren’t engaging with these processes as part of their day-to-day life. For example, your data being put “on a database” can sound like a big and scary process, something out-of-the-ordinary, rather than just how almost all data is stored in organisations of any substantial size. Similarly, “using an algorithm” can sound like your data is being processed in a specific way (perhaps rather anonymous and deterministic—the computer “deciding your fate”), rather than being a word used to describe any computer-based process.

We need to be wary of such misunderstandings in describing our processes to a wider public.

The Diversity of my Interests (2)

April 29th, 2020

(no, I don’t know where “boxing gloves and pads” came from either)

Repetition and Communication

April 17th, 2020

As I so often say, repetition is a key point in communication.

I’ve been in endless meetings about, for example, student induction, where we have a futile discussion about how to present lots of information. On the one hand, should we present it all at once – the single, long induction event, where we try to tell everyone everything? No, we shouldn’t! People will get bored, they won’t take much in, they’ll be frightened by the amount of information. But no! If we don’t tell everyone everything up front, they’ll be confused and anxious. They won’t know what’s what, and before we know it, we’ll have people picking up random wrong information here and there. Better to get it out of the way at the beginning.

Why not both? Start with the big, comprehensive presentation, but recognise (and be clear) that people won’t be taking everything in. There’ll be reminders! There’s a reference here where you can look things up! If you don’t know, ask this person! That way, we give people a framework from which they can take the gist, and then we remind them, and repetition makes for a stronger memory (“stay home, protect the NHS, save lives”).

I think a lot of people have internalised an idea that (one-to-many) communication of information/procedures/policies should be a one-shot thing. If you’re not communicating everything, perfectly, at your first attempt, then you’d damn well better make it better so that it does come across. I don’t know where this pernicious idea comes from.

Perhaps I’ve had it squeezed out of me through years of studying complex maths and similar topics. When I was at university, it was clear that you weren’t going to get the topics right away. You’d go to a lecture, and perhaps get the broad idea, but then you’d need to spend ages reading the book – over and over again – trying problems, working out your own examples, before you really grokked the idea. Indeed, there was a useful piece of guidance about reading maths textbooks in our student handbook – “sometimes it’ll take an hour or two to understand how to go from one line to the next”.

As I said earlier, let’s embrace repetition. Again, and again.

A Theory of Stuff (1)

August 12th, 2019

What underpins the broad shift in (broadly Western) design from the highly decorated things of the 19th century and earlier to the eschewal of decoration in the 20th century and beyond? Here is a rough cut of an economic theory of decoration.

Prior to the industrial era, individual things were expensive. The cost of making the stuff was in material—not necessarily raw material, but getting material from its raw state to a state where it could be used. A lot of this is the cost of semi-skilled labour: nothing very specialised, but a great deal of it. There is an interesting argument that a shirt in mediaeval times cost around 3,500 USD in modern money. For example, spinning the thread by hand cost around 500 hours of work, and weaving the cloth another 72. Each shirt was therefore a highly valued object: worn to exhaustion, frequently repaired, and repurposed when it was no longer viable in its original form (there is a nice discussion of this in Susannah Walker’s recent book The Life of Stuff).
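
As a rough check on that figure, assuming (my assumption) a modern minimum-wage rate of somewhere around 6 to 7 USD an hour:

\[
(500 + 72)\ \text{hours} \times \$6\text{–}7\ \text{per hour} \approx \$3{,}400\text{–}\$4{,}100
\]

which is the right order of magnitude for the 3,500 USD claim, before counting any of the other steps such as sewing.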

Similarly for other materials. Transport costs, in an era when horse and human muscle provided the prime motive power, were huge. The cost of getting material to a building site—a minor element of the cost of a modern building—might have been a huge proportion of the total.

The marginal cost of adding some decoration to such an object is therefore small. If you have paid for hundreds of hours of labour to get your basic shirt, adding a few more days of work for some decoration is a minor marginal cost.

By contrast, with 20th century manufacturing techniques, the cost of producing the object is much less: the core materials can be produced and shipped at low cost, and a lot of the cost is coordinating the various low-cost steps and delivering the object to the final consumer. The relative labour cost of adding elaborate decoration is high. This doesn’t fully stand up—after all, modern techniques of manufacture can add some decoration very cheaply and easily. But perhaps in some cases it holds—I can see this particularly in the case of architecture, where the logistical cost of coordinating lots of special decorative components will be high.

Legacy Code (1)

June 24th, 2019

It’s fascinating what hangs around in code-bases for decades. Following a recent update, Microsoft Excel files in a certain format (the old style .xls files rather than the .xlsx files) started showing up with this icon:

(old excel icon)

Which I haven’t seen for a couple of decades. More interestingly, the smaller version of the icon was this one:

(old resedit icon)

What has this to do with Excel? It looked vaguely familiar, but I couldn’t place it. After a bit of thought and Googling around, I realised that this was the icon for a program called ResEdit, which was an editor for binary files that I remember using back in the pre-OS X days. Looking at this further, I realised that the last version of ResEdit was made in 1994.

How did these suddenly appear? There are occasional references to this happening on various Mac forums from the last few years. I suspect that somehow they are in collections of visual assets in codebases that have been under continuous development for the last 30 years or more, and that somehow some connection to the contemporary icon has been deleted or mis-assigned. I’m particularly surprised that Excel wasn’t completely written from scratch for OS X.

What do people think coding is like?

April 22nd, 2019

I wonder what activity non-coders think coding is like? I remember having a conversation with a civil servant a few years ago, where he struggled to understand why we were talking about coding being “creative” etc. I think that his point of view is not uncommon—seeing coding as something that requires both intellectual vigilance and slog, but is fairly “flat” as an activity.

Perhaps people think of it as like indexing a book? Lots of focus and concentration is needed, and you need some level of knowledge, and it is definitely intellectual, “close work”. But, in the end, it doesn’t have its ups and downs, and isn’t typically that creative; it’s just a job that you get on with.

Perhaps they think it is like what they think mathematics is like? Lots of pattern-matching, finding which trick fits which problem, working through lots of line-by-line stuff that kinda rolls out, albeit slowly and carefully, once you know what to do. This isn’t entirely absent from the coding process, but that picture misses the ups and downs that doing maths, or doing coding, actually has.

If people have a social science background, perhaps they think of “coding” in the sense of “coding an interview”—going through, step by step, assigning labels to text (and often simultaneously coming up with or modifying that labelling scheme). Again, this has the focus that we associate with coding, but again it is rather “flat”.

Perhaps it would be interesting to do a survey on this?

Differentiation in the Lecture Room

February 14th, 2019

Students come to university with a wide range of ability and prior knowledge, and take to different subjects with different levels of engagement and competence. This spread isn’t as wide as in other areas of education—after all, students have chosen to attend, been selected in a particular grade boundary, and are doing a subject of their choice—but, there is still a decent amount of variation there.

How do we deal with this variation? In school education, they talk a lot about differentiation—arranging teaching and learning activities so that students of different levels of ability, knowledge, progress, etc. can work on a particular topic. I think that we need to do more of this at university; so much university teaching is either aimed at the typical 2:1 student, or is off-the-scale advanced. How can we make adjustments so that our teaching recognises the diversity of students’ knowledge and experience?

In particular, how can we do this in lectures? If we have a canonical, non-interactive lecture, can we do this? I think we can. Here are some ideas:

Asides. I find it useful to give little parenthetical asides as part of the lecture. Little definitions, bits of background knowledge. I do this particularly for the cultural background knowledge in the Computational Creativity module, often introduced with the phrase “as you may know”. For example: “Picasso—who, as you may know, was a painter in the early-mid 20th century who invented cubism, which plays with multiple perspectives in the same painting—was…”. This is phrased so that it more-or-less washes over those who don’t need it, but is there as a piece of anchoring information for those that do. Similarly for mathematical definitions: “Let’s represent this as a matrix—which, you will remember from your maths course, is a grid of numbers—…”. Again, the reinforcement/reminder is there, without patronising or distracting the students who have this knowledge by having a “for beginners” slide.

Additional connections. Let’s consider the opposite—those students who are very advanced and have a good knowledge of the area broadly. I differentiate for these by making little side-comments that connect to the wider course or to other background knowledge, sometimes introduced with a phrase such as “if you have studied…” or “for those of you that know about…”. For example: “for those of you who have done an option in information retrieval, this might remind you of tf-idf.” Again, this introduces the connection without putting it on a slide and making it seem big and important for those students who are struggling to manage the basics, but gives some additional information and a spark of a connection for the students who are finding the material humdrum. (I am reminded of an anecdote from John Maynard Smith, who talked about a research seminar where the speaker had said “this will remind you of a phase transition in statistical physics”: “I can’t imagine a time in my life when anything will remind me of a phase transition”.)

Code examples. A computing-specific one, this. I’ve found that a lot of students click into something once they have seen a code example. These aren’t needed for the high-flying coding ninjas, who can go from a more abstract description to working out how the code is put together. But, for many students, the code example is the point where all the abstract waffle from the previous few minutes clicks into place. The stronger students can compare the code that they have been writing in their heads to mine. I sometimes do the coding live, but I’ve sometimes chickened out and used a screencap video (this also helps me to talk over the coding activity). A particularly clear example of this was where I showed a double-summation in sigma notation to a group, to largely blank looks, followed by the same process on the next slide as a nested loop, where most students seemed to be following clearly.
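
For illustration, here is a made-up example of the kind of translation I mean (not the actual slide): the double summation, “add up every entry of an m-by-n grid of numbers, summing over rows i and columns j”, becomes a nested loop.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Nested-loop version of a double summation: add up every entry of an
    // m-by-n grid of numbers, outer loop over rows, inner loop over columns.
    double doubleSum(const std::vector<std::vector<double>>& a) {
        double total = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) {          // outer sigma: rows
            for (std::size_t j = 0; j < a[i].size(); ++j) {   // inner sigma: columns
                total += a[i][j];
            }
        }
        return total;
    }

    int main() {
        std::vector<std::vector<double>> a = {{1, 2, 3}, {4, 5, 6}};
        std::cout << doubleSum(a) << "\n";  // prints 21
    }

The abstract description and the code say exactly the same thing; it is just that, for many students, the loop form is the one that clicks.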

Any other thoughts for differentiation tricks and tips specifically in the context of giving lectures?

Microtrends (1)

February 7th, 2019

Noticeable recent microtrend—people walking around, holding a phone about 40cm from their face, having a video chat on FaceTime/Skype. Been possible for years, but I’ve noticed a real uptick in this over the last few weeks.

On Bus Drivers and Theorising

February 7th, 2019

Why are bus drivers frequently almost aggressively literal? I get a bus from campus to my home most days (about a 2 kilometre journey), and there are two routes. Route 1 goes about every five minutes from campus, takes a fairly direct route into town, and stops at a stop about 100 metres from the West Station before turning off and going to the bus station. Route 2 goes about every half hour, takes a convoluted route through campus before passing the infrequently-used West Station bus-stop, then goes on to the bus station.

Most weeks—it has happened twice this week—someone gets on at a route 1 stop, asks for a “ticket to the West Station”, and is told “this bus doesn’t go there”. About half the time they then get off; about half the time they manage to weasel out the information that the bus goes near-as-dammit there. I appreciate that the driver’s answer is literally true—there is a “West Station” stop and route 1 buses don’t stop there. But, surely the reasonable answer isn’t a bluff “the bus doesn’t go there” but instead to say “the bus stops about five minutes’ walk away, is that okay?”. Why are they—in what seems to me to be a kind of flippant, almost aggressive way—not doing that?

I realised a while ago that I have a tendency towards theorising. When I get information, I fit it into some—sometimes mistaken—framework of understanding. I used to think that everyone did this but plenty of people don’t. When I hear “A ticket to the West Station, please” I don’t instantly think “can’t be done” but I think “this person wants to go to the West Station; this bus doesn’t go there, but the alternative is to wait around 15 minutes on average, then take the long route around the campus; but, if they get on this bus, it’ll go now directly to a point about five minutes from where they want to get to, so they should get this one.” It is weird to think that lots of people just don’t theorise in that way much at all. And I thought I was the non-neurotypical one!

Coke, Pepsi, and Universities

February 2nd, 2019

Why does Coca-Cola still advertise? For most people in most of the world, it is a universal product—everyone knows about it, and more advertising doesn’t give you more information to help you make a purchasing decision. After a while, advertising spend and marketing effort is primarily about maintaining public awareness, keeping the product in the public eye, rather than giving people more information on which to make a decision. There is something of the “Red Queen” effect here; if competitors are spending a certain amount to keep their product at the forefront of public attention, then you are obliged to do so, even though the best thing for all of the companies involved, and for the public, would be to scale it down. (This is explained nicely in an old documentary called Burp! Pepsi vs. Coke: the Ice Cold War.) There’s a certain threshold where advertising/marketing/promotion tips over from informative to merely awareness-raising.

This is true for Universities as much as for other organisations. A certain amount of promotional material is useful for prospective students, giving a feel of the place and the courses that are available. But, after a while, a decent amount of both students’ own fee money and public investment goes into spend over this threshold: mere spend for the purpose of maintaining awareness. However, in this case, we do have a mechanism available to stop it. Perhaps universities should have a cap on the proportion of their turnover that they can spend on marketing activities, enforced by the withdrawal of (say) loan entitlements if they exceed this threshold.

On Exponential Growth and Fag Ends

January 9th, 2019

I have often been confused when people talking about family history—often people with good genealogical knowledge—talk about their family “coming from” a particular location in the distant past. Don’t they know anything about exponential growth? When you talk about your family living in some small region of north Norfolk 400 years ago, what does that mean? That’s (inbreeding aside) over 32,000 people! Surely they didn’t all live in a few local villages.
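
The back-of-the-envelope arithmetic, assuming (my assumption) something like 27 years per generation:

\[
\frac{400\ \text{years}}{\approx 27\ \text{years per generation}} \approx 15\ \text{generations}, \qquad 2^{15} = 32{,}768\ \text{ancestors in that generation alone.}
\]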

Now, I appreciate that this is a bit of an exaggeration. Over a few hundred years there will be some (hopefully fairly distant) inbreeding, and so each person won’t have tens of thousands of distinct ancestors. I appreciate, too, that people travelled less in the past, and that even if you are genuinely descended from thousands of distinct people, those people will have been more geographically concentrated. But, still, the intuition that “your family” (by which they are imagining, I think, a few dozen people at a time) “comes from somewhere” still seems a little off.

The naïve explanation is that they just don’t realise the scale of this growth. I would imagine that most people, asked for an intuitive stab at how many great-great-···-grandparents they had 400 years ago, would guess at a few dozen, not a number in the tens of thousands. Perhaps they have some cultural bias that a particular part of the family tree is the “main line”, perhaps that matrilineal or patrilineal lines are the important ones, and that other parts of the family are just other families merging in. Or, perhaps they recognise that in practice main lines emerge in families when there are particularly fecund sub-families, and other branches fade out.

Overall, these “fag ends” are not very well acknowledged. Most people depicted in fiction, e.g. in the complex family interconnections of soap operas, have a rich, involved family. There isn’t much depiction of the sort of family that I come from, which sits on a ragged, grinding-to-a-halt twig of the family tree.

Let’s think about my family as an example. Both of my parents were somewhat isolated within their families. My mother had three siblings, two of whom died in infancy. The other, my uncle, went on to have three children, two of whom in turn have had children and grandchildren, and the one who didn’t married into a vast family (his wife has something like ten siblings). By contrast, my mother had only me, who hasn’t had any children, and didn’t get on particularly well with her brother, so we were fairly isolated from her side of the family after my grandmother died. So, from the point of view of my grandmother’s position in the family tree, it is clear that my uncle’s line is the “main line” of the family.

On my father’s side, he was similarly at a ragged end. He had three sisters. One died fairly young (having had Down’s syndrome). The one he was closest to went to Australia and had a large family—four children, lots of grandchildren, etc.; but they were rather geographically isolated. The one who lived a few miles from us he wasn’t particularly close to, and she had only one child, who remained child-free. He had one child from his first marriage (who had children and grandchildren and great-grandchildren, which bizarrely meant that by the age of 44 I was a great-great-uncle), and had only me from his marriage to my mother. Again, there are big branches and fag ends: the branches of the family tree that dominate hugely are the Australian one and the one starting from my half-brother, whereas mine (no children) and my aunt’s (only one child) are minor twigs.

So, perhaps there is some truth in the genealogist’s intuition after all. A small number of branches in the tree become the “main lines”, and others become “fag ends”, and there isn’t much in between. It would be interesting to formalise this using network science ideas, and test whether the anecdotal example that I have in my own family is typical when we look at lots of family trees.

On Responsibility

December 30th, 2018

When people collaborate on a codebase to build complex software systems, one of the purported advantages is that fixes spread. It is good to fix or improve something at a high level of abstraction, because then that fix not only helps your own code, but also redounds to improvements in code across the codebase.

However, people often don’t do this. Rather than fixing a problem with some class high up in the class hierarchy, or adding some behaviour to a well-used utility function, they instead write their own, local, often over-specialised version of it.
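
A made-up C++ illustration of the pattern (the names and the gap in the utility are hypothetical, not from any real codebase): a shared trim() strips spaces but not tabs, and rather than fixing it for everyone, someone writes a private, over-specialised copy next to their own code.

    #include <iostream>
    #include <string>

    // Shared utility, used across the codebase. Suppose it has a known gap:
    // it strips spaces but not tabs. The fix that would spread belongs here.
    std::string trim(const std::string& s) {
        const auto first = s.find_first_not_of(' ');
        if (first == std::string::npos) return "";
        const auto last = s.find_last_not_of(' ');
        return s.substr(first, last - first + 1);
    }

    // What often happens instead: a local, over-specialised copy that quietly
    // handles tabs as well, but only for this one caller, duplicating the logic.
    std::string trimForMyParser(const std::string& s) {
        const std::string whitespace = " \t";
        const auto first = s.find_first_not_of(whitespace);
        if (first == std::string::npos) return "";
        const auto last = s.find_last_not_of(whitespace);
        return s.substr(first, last - first + 1);
    }

    int main() {
        const std::string raw = "\tpadded\t";
        std::cout << "[" << trim(raw) << "]\n";             // tabs left in place
        std::cout << "[" << trimForMyParser(raw) << "]\n";  // prints [padded]
    }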

Why does this happen? One theory is about fear of breaking things. The fix you make might be right for you, but who knows what other effects it will have? The code’s intended functionality might be very well documented, but perhaps people are using abstruse features of a particular implementation to achieve something in their own code. In theory this shouldn’t happen, but in practice the risk:reward ratio is skewed towards not making the fix.

Another reason—first pointed out to me by Hila Peleg—is that once you have fixed it, your name is in the version control system as the most recent modifier of the code. This often means that the code becomes your de facto responsibility, and questions about it then come to you. Particularly with a large code base and a piece of code that is well used, you end up taking on a large job that you hadn’t asked for, just for the sake of fixing a minor problem in your code. Better to write your own version and duck that responsibility.

Learning what is Unnecessary

December 28th, 2018

Learning which steps in a process are unnecessary is one of the hardest things to learn. Steps that are unnecessary yet harmless can easily be worked into a routine, and because they cause no problems apart from the waste of time, don’t readily appear as problems.

An example. A few years ago a (not very technical) colleague was demonstrating something to me on their computer at work. At one point, I asked them to google something, and they opened the web browser, typed the URL of the University home page into the browser, went to that page, then typed the Google URL into the browser, went to the Google home page, and then typed their query. This was not a trivial time cost; they were a hunt-and-peck typist who took a good 20-30 seconds to type each URL.

Why did they do the unnecessary step of going to the University home page first? Principally because when they had first seen someone use Google, that person had been at the University home page, and then gone to the Google page; they interpreted being at the University home page as some kind of precondition for going to Google. Moreover, it was harmless—it didn’t stop them from doing what they set out to do, and so it wasn’t flagged up to them that it was a problem. Indeed, they had built a vague mental model of what they were doing—by going to the University home page, they were somehow “logging on”, or “telling Google that this was a search from our University”. It was only on demonstrating it to me that it became clear that it was redundant, because I asked why they were doing it.

Another example. When I first learned C++, I put a semicolon after the closing curly bracket at the end of each block. Again, this is harmless: all it does is insert some null statements into the code, which I assume the compiler strips out at optimisation. Again, I had a decent mental model for this: a vague notion that “you put semicolons at the end of meaningful units to mark the end”. It was only when I started to look at other people’s code in detail that I realised that this was unnecessary.
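
Something like this reconstruction of the habit (not my actual code from the time): the extra semicolons after the closing brackets are just empty statements, which the compiler accepts without complaint.

    #include <iostream>

    int main() {
        for (int i = 0; i < 3; ++i) {
            std::cout << i << "\n";
        };  // unnecessary semicolon: an empty statement that does nothing

        int total = 10;
        if (total > 0) {
            std::cout << "total is positive\n";
        };  // same habit again: harmless, so nothing ever flagged it up

        return 0;
    }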

Learning these is hard, and usually requires us to either look carefully at external examples and compare them to our behaviour, or for a more experienced person to point them out to us. In many cases it isn’t all that important; all you lose is a bit of time. But, sometimes it can mark you out as a rube, with worse consequences than wasting a few seconds of time; an error like this can cause people to think “if they don’t know something as simple as that, then what else don’t they know?”.

Gresham’s Law (1)

December 7th, 2018

Gresham’s Law for the 21st Century: “Bad cultural relativism drives out good cultural relativism.”

Human in the Loop (1)

November 19th, 2018

Places with a pretension to being high-end often put a human in the loop in the belief that it makes for a better service. This is particularly the case in countries where basic labour is cheap. The idea, presumably, is that you can ask for exactly what you want, and get it, rather than muddling through understanding the system yourself. But, this can sometimes make for a worse service, by putting a social barrier in the loop. For example, I have just gone to a coffee machine at a conference, where there was someone standing by it waiting to operate it. As a result, I got a worse outcome than if I had been able to operate it myself. Firstly, I was too socially embarrassed to ask for what I would have done myself—press the espresso button twice—because that seems like an “odd” thing to do. Secondly, I got some side-eye from the server when I didn’t take the saucer; as a northerner, I don’t really believe in them. So, by trying to make this more of a “service” culture, the outcome was worse for me, both socially and in terms of the product that I received.