One weird consequence of being more honest and direct than the average person in your culture is that people accuse you of dishonesty. It’s like “you were being unreasonable in not telling me the usual lies that I was expecting”.
Autistics are usually characterised as having a weak “theory of mind”. But when it comes to writing instructions and guidance, I’ve found that autistics are much better at imagining themselves into the position of the target audience, thinking carefully about what needs to be said, diagnosing what assumptions are missing, and working out how to set things out in a step-by-step way.
By contrast, neurotypical people write guidance that is full of missed assumptions and absent steps, and then blame the target audience for being thick or ignorant when they fail to follow the shoddily written guidance.
Occasionally, I hear the opinion that topical TV panel shows such as Have I Got News for You and Mock the Week are “scripted”. Clearly, this is meant pejoratively, not merely descriptively. A scripted programme would not be presenting itself to us honestly.
I don’t believe this (I have seen a couple of recordings of similar shows, and there isn’t any evidence of word-by-word scripting to my eye), but equally they aren’t simply a handful of people going into a studio for half an hour and chatting off the top of their heads. My best guess for what is happening is a mixture of genuinely off-the-cuff chat, lines prepared in advance by the performers themselves, lines suggested by programme associates, material workshopped briefly before the performance, and some pre-agreed topics so that performers can work in material that they use in their live performances. All this, of course, topped off by the fact that a lot of material is recorded, and the final programme is a selective edit of this material.
But, if it were to be scripted from end-to-end, and the performers essentially actors reading off an autocue, why would that be a problem? Like Pierre Menard’s version of Don Quixote, we wouldn’t know the difference. Why would knowledge that these programmes were scripted actually make them less funny? That is, that knowledge would make us laugh less at them—this isn’t just some contextual information, where we would still find it just as funny, but feel slightly cheated that it wasn’t as spontaneous as we are led to believe. We would, I would imagine, actually find it less funny.
There’s something about the human connection here. Even though we don’t know the performers personally, there is still some idea of it being “contextually funny”. Perhaps in some odd way it is “funny enough” to be funny if we believe it to be spontaneous, but not funny enough if we believe it to be scripted. Perhaps we are admiring the skill of being able to come up with the lines “on the fly”—but admiration doesn’t usually cash out in laughter. Somehow, it seems to have to do with the human connection that we have with these people. We find it genuinely funny because of the context.
I’ve often wondered why I can’t find other countries’ political satire funny. I can work out the wordplay in Le Canard Enchaîné, but I don’t chuckle at it. I might admire it, but the subjects of the satire are just too distant; perhaps I don’t have a stake in the subjects in the same way that I do in the people that I read about in Private Eye.
When I used to lecture on the Computational Creativity module at Kent, I would talk about the Joking Computer system, an NLP system that could generate competent puns such as “What do you get if you cross a frog with a street? A main toad.”. I used to say that we would find that joke funny—genuinely funny—if it was told to us by a six-year-old child, say your younger brother or sister, even though it isn’t a hilarious joke. Similarly, perhaps, we might give the computer some leeway—it isn’t going to produce an amazingly funny joke, but it is funny for a computer. But, this argument always felt a bit flat. Perhaps it is the human connection—we don’t care that the (soul-less) computer has “managed” to make a joke, we lack that human connection.
My drama teacher at school used to say about the performances that we took part in that he wanted people to say that they had seen a “good play”, not a “good school play”. There is something in that. Perhaps, the same is true for computational creativity. It needs to be “creative enough” to be essentially acontextual before we start to find it genuinely creative.
Clearly deepfakes are unacceptable, yes? It is morally unsound to create a fake video of someone saying or doing something, and to play that off as a real recording of that person doing it.
But, what about a minor correction? I recently saw a video about personal development, talking about how people move through various stages of life, and making a number of very positive points and pieces of advice. I thought that this might be useful as part of a professional development session to show to a group of students. But, there was a problem. At some point, the speaker talks about life changes, and talks about adolescence, including a reference to “when people start to get interested in the opposite sex”. The heteronormativity of this made me flinch, and I certainly wouldn’t want this to be presented, unadorned, to a group of students. This is both because of the content as such, and because I wouldn’t want the session to be derailed onto a discussion of this specific point, when it was a minor and easily replaceable example, not core to the argument.
I suppose what I would typically do would be to use it, but to offer a brief comment at the beginning that there was something not germane to the main argument, but which was problematic, but on balance I thought it would be good to use this resource despite the problematic phrase. I might even edit it out. Certainly if I was handing out a transcript rather than using the video, I would cut it out using an […] ellipsis. But, these solutions might still focus attention on it.
So—would it be acceptable to use a deepfake here? To replace “when people start to get interested in the opposite sex” with “when people start to develop an awareness of sexuality”, for example? There seems something dubious about this—we are putting words into someone’s mouth (well, more accurately, putting their mouth around some words). But, we aren’t really manipulating the main point. It’s a bit like how smoking has been edited out of some films, particularly when they are to be shown to children—the fact of the character smoking isn’t a big plot point, it was just what a character happened to be doing.
So, is this acceptable? Not acceptable? Just about okay, but the thin end of the wedge?
As I so often say, repetition is a key point in communication.
I’ve been in endless meetings about, for example, student induction, where we have a futile discussion about how to present lots of information. On one hand, should we present it all at once – the single, long induction event, where we try to tell everyone everything. No, we shouldn’t! People will get bored, they won’t take much in, they’ll be frightened by the amount of information. But no! If we don’t tell everyone everything up front, they’ll be confused and anxious. They won’t know what’s what, and before we know it, we’ll have people picking up random wrong information here and there. Better to get it out the way at the beginning.
Why not both? Start with the big, comprehensive presentation, but recognise (and be clear that) people won’t be taking everything in. There’ll be reminders! There’s a reference here where you can look things up! If you don’t know, ask this person! That way, we give people a framework from which they can take the gist, and then we remind them, and repetition makes for a stronger memory (“stay home, protect the NHS, save lives”).
I think a lot of people have internalised an idea that (one-to-many) communication of information/procedures/policies should be a one-shot thing. If you’re not communicating everything, perfectly, at your first attempt, then you’d damn well better make it better so that it does come across. I don’t know where this pernicious idea comes from.
Perhaps I’ve had it squeezed out of me through years of studying complex maths and similar topics. When I was at university, it was clear that you weren’t going to get the topics right away. You’d go to a lecture, and perhaps get the broad idea, but then you’d need to spend ages reading the book – over and over again – trying problems, working out your own examples, before you really grokked the idea. Indeed, there was a useful piece of guidance about reading maths textbooks in our student handbook – “sometimes it’ll take an hour or two to understand how to go from one line to the next”.
As I said earlier, let’s embrace repetition. Again, and again.
I wonder what activity non-coders think coding is like? I remember having a conversation with a civil servant a few years ago, where he struggled to understand why we were talking about coding being “creative” etc. I think that his point of view is not uncommon—seeing coding as something that requires both intellectual vigilance and slog, but is fairly “flat” as an activity.
Perhaps people think of it as like indexing a book? Lots of focus and concentration is needed, and you need some level of knowledge, and it is definitely intellectual, “close work”. But, in the end, it doesn’t have its ups and downs, and isn’t typically that creative; it’s just a job that you get on with.
Perhaps they think it is like what they think mathematics is like? Lots of pattern-matching, finding which trick fits which problem, working through lots of line-by-line stuff that kinda rolls out, albeit slowly and carefully, once you know what to do. This isn’t entirely absent from the coding process, but it doesn’t have the ups and downs that doing maths or doing coding has.
If people have a social science background, perhaps they think of “coding” in the sense of “coding an interview”—going through, step by step, assigning labels to text (and often simultaneously coming up with or modifying that labelling scheme). Again, this has the focus that we associate with coding, but again it is rather “flat”.
Perhaps it would be interesting to do a survey on this?
Students come to university with a wide range of ability and prior knowledge, and take to different subjects with different levels of engagement and competence. This spread isn’t as wide as in other areas of education—after all, students have chosen to attend, been selected in a particular grade boundary, and are doing a subject of their choice—but, there is still a decent amount of variation there.
How do we deal with this variation? In school education, they talk a lot about differentiation—arranging teaching and learning activities so that students of different levels of ability, knowledge, progress, etc. can work on a particular topic. I think that we need to do more of this at university; so much university teaching is either aimed at the typical 2:1 student, or is off-the-scale advanced. How can we make adjustments so that our teaching recognises the diversity of students’ knowledge and experience?
In particular, how can we do this in lectures? If we have a canonical, non-interactive lecture, can we do this? I think we can: here are some ideas:
Asides. I find it useful to give little parenthetical asides as part of the lecture. Little definitions, bits of background knowledge. I do this particularly for the cultural background knowledge in the Computational Creativity module, often introduced with the phrase “as you may know”. For example: “Picasso—who, as you may know, was a painter in the early-to-mid 20th century who invented cubism, which plays with multiple perspectives in the same painting—was…”. This is phrased so that it more-or-less washes over those who don’t need it, but is there as a piece of anchoring information for those that do. Similarly for mathematical definitions: “Let’s represent this as a matrix—which, you will remember from your maths course, is a grid of numbers—…”. Again, the reinforcement/reminder is there, without patronising or distracting the students who have this knowledge by having a “for beginners” slide.
Additional connections. Let’s consider the opposite—those students who are very advanced, and have a good broad knowledge of the area. I differentiate for these by making little side-comments that connect to the wider course or other background knowledge, sometimes introduced with a phrase such as “if you have studied…” or “for those of you that know about…”. For example: “for those of you who have done an option in information retrieval, this might remind you of tf-idf.”. Again, this introduces the connection without putting it on a slide and making it seem big and important for those students who are struggling to manage the basics, but gives some additional information and a spark of a connection for the students who are finding the material humdrum. (I am reminded of an anecdote from John Maynard Smith, who talked about a research seminar where the speaker had said “this will remind you of a phase transition in statistical physics”: “I can’t imagine a time in my life when anything will remind me of a phase transition”.)
Code examples. A computing-specific one, this. I’ve found that a lot of students click into something once they have seen a code example. These aren’t needed for the high-flying coding ninjas, who can go from a more abstract description to working out how the code is put together. But, for many students, the code example is the point where all the abstract waffle from the previous few minutes clicks into place. The stronger students can compare the code that they have been writing in their heads to mine. I sometimes do the coding live, but I’ve sometimes chickened out and used a screencast video (this also helps me to talk over the coding activity). A particularly clear example of this was where I showed a double summation in sigma notation to a group, to largely blank looks, followed by the same process on the next slide as a nested loop, where most students seemed to be following clearly.
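To give a flavour of that sigma-notation-versus-nested-loop example, here is a minimal sketch (the function name and the sample data are mine, for illustration): the double summation S = Σᵢ Σⱼ aᵢⱼ written as the loop that, in my experience, students follow much more readily.

```python
def double_sum(a):
    """Compute S = sum over i, sum over j, of a[i][j],
    where 'a' is a 2D grid given as a list of rows."""
    total = 0
    for row in a:          # outer sigma: iterate over rows (index i)
        for value in row:  # inner sigma: iterate within a row (index j)
            total += value
    return total

print(double_sum([[1, 2], [3, 4]]))  # → 10
```

The two `for` lines correspond one-to-one with the two sigmas, which is exactly the mapping that the slide was trying to make visible.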
Any other thoughts for differentiation tricks and tips specifically in the context of giving lectures?
Noticeable recent microtrend—people walking around, holding a phone about 40cm from their face, having a video chat on FaceTime/Skype. Been possible for years, but I’ve noticed a real uptick in this over the last few weeks.
When people collaborate on a codebase to build complex software systems, one of the purported advantages is that fixes spread. It is good to fix or improve something at a high level of abstraction, because then that fix not only helps your own code, but also redounds to improvements in code across the codebase.
However, people often don’t do this. Rather than fixing a problem with some class high up in the class hierarchy, or adding some behaviour to a well-used utility function, they instead write their own, local, often over-specialised version of it.
Why does this happen? One theory is about fear of breaking things. The fix you make might be right for you, but who knows what other effects it will have? The code’s intended functionality might be very well documented, but perhaps people are relying on abstruse features of a particular implementation to achieve something in their own code. In theory this shouldn’t happen, but in practice the risk:reward ratio is skewed towards not doing the fix.
Another reason—first pointed out to me by Hila Peleg—is that once you have fixed it, your name is in the version control system as the most recent modifier of the code. This often means that the code becomes your de facto responsibility, and questions about it then come to you. Particularly with a large codebase and a piece of code that is well used, you end up taking on a large job that you hadn’t asked for, just for the sake of fixing a minor problem that was getting in the way of your own code. Better to write your own version and duck that responsibility.
To historians, “history” basically means the (complex, disputed) knowledge that contemporary people have about what happened in the past. To the general public, “history” is the stuff that happened—about which contemporary people might have limited evidence, disputes of interpretation, etc. This can lead to confusion in communicating ideas about the methodology and ontology of history. For example, when I first came across people saying things along the lines of “historical facts change over time”, I thought that they were embracing a much more radical vision of history than they were. They were making the (important) point that what we call “facts” are based on incomplete evidence and biased by political/social/religious views and our biases coming from the contemporary world. I thought that they were making the much more radical claim that the subjective experience of people in the past changed due to our contemporary interpretations—a kind of reverse causality.
First law of Exciting News: Inevitably, when you get an email from some company entitled “exciting news” it is going to contain an announcement that they have “merged with” (been taken over by) a “major partner” (a larger, rather more anonymous company), and that they are “looking forward to the opportunities that are offered by this exciting new development” (ready to make some more money from you by offering you a slightly diminished service level).
I often refer to the process of taking the content that I want to communicate and putting it into the 200-by-300 pixel box reserved for content in the middle of our University’s webpages as “putting the clutter in”. I get the impression that my colleagues on the Marketing and Communication team don’t quite see it this way.
It sometimes surprises me quite how formulaic the smalltalk at the beginnings of conversations is. I know that it isn’t acceptable to respond to the question “How are you?” with a list of your latest ailments and insecurities, but it is still sometimes surprising how much that part of a conversation is a cognitive readymade, without any ready deviation. I remember a couple of incidents in the days after my father died.
- Meeting a colleague a few days after my father had died. Wanting, gradually, to let people know what had happened, I responded to his “How are you?” with a “Actually, not so good.”, expecting to get a query back about what had happened. Instead, I just got the response “Great, I’m fine.”, as if I had said (as I would 99.9999% of the time) “I’m fine, how are you?”. Literally, my response hadn’t been processed at all. If you want some evidence for hearing being a process of anticipation then you’ve got it there. There’s no other response in the “repertoire” to “How are you?” other than minor variants on “Fine, how are you?”, so the brain doesn’t even really bother processing what has been said. Any response is just treated as the standard one.
- Speaking to my uncle a day or two after my father had died (I had already told my uncle). This time, he asked first: “How are you?”. My response, understandably: “Not too good.”. My uncle’s response—no criticism intended, this is just a point about how deeply embedded language structures are—“Oh, why is that then?”. I was, very unusually, struck dumb for a few seconds. For a moment I thought “Perhaps I didn’t tell him that Dad had died?”; for surely, someone wouldn’t say something so crass to someone who had just lost a parent—surely it would be obvious why I “wasn’t too good”. Eventually, I managed to stutter out “Well, you know, Dad died yesterday.” It is bizarre how fixed our linguistic patterns are that, even after one of the worst things that can happen to you, saying that you are anything other than “fine” causes our whole language generation system to collapse.
A while ago I read a little article whilst doing a management course that was very influential on me (I’ll find the reference and add it here soon). It argued that the process of building a team—in the strict sense a group of people who could really work closely and robustly together on a complex problem—was difficult, time-consuming and emotionally fraught, and that actually, for most business processes, there isn’t really any need to build a team as such. Instead, just a decently managed group of people with a well-defined goal was all that was needed for most activities. Indeed, this goes further; because of the stress and strain needed to build a well-functioning team in the strong sense of the word, it is really unproductive to do this, and risks fomenting a “team-building fatigue” in people.
I’m wondering if the same is true for the idea of strategy. Strategy is a really important idea in organisations, and the idea of strategic change is really important when a real transformation needs to be made. But, I worry that the constant demands to produce “strategies” of all sorts, at all levels of organisations, runs the danger of causing “strategy fatigue” too. We have to produce School strategies, Faculty strategies, University strategies, all divided un-neatly into research, undergraduate, and postgraduate, and then personal research Strategies, and Uncle Tom Cobleigh and all strategies. Really, we ought to be keeping the word and concepts around “strategy” for when it really matters; describing some pissant objective to increase the proportion of one category of students from 14.1% to 15% isn’t a strategy, it’s almost a rounding error. We really need to retain the term—and the activity—for when it really matters.
There is a wonderful subreddit called Dear Reddit, Today I Fucked Up… in which people post (usually fairly lighthearted) accounts of how they erred during the current day, beginning with the abbreviation “TIFU”. Here is my post there from today.
TIFU by starting to ask someone the question ‘So, where are you from?’, realising as I opened my mouth that it often sounds a little bit racist (with its implication of ‘So, where are you from *really*?’), deciding to draw attention to the fact that I know that it’s a stupid and clichéd question by putting it in air quotes, but then not really starting to move my fingers until the last word of the question, which made it look like I was saying ‘So, where are you “from”?’, which made the question even worse.
I am a fairly informal person, but occasionally even I get a surprise, like this recent email from HMRC:
In just a generation we have gone from addressing each other as “Sir” and “Madam” to the point where one of the stuffiest parts of government says “Hi!” to me. To people of my father’s generation, who struggled with their doctor referring to them by their first name, this shift would have been almost incomprehensible.
Sometimes it is important to present yourself as more specialised than you actually are. This can be true for individuals and for businesses. Take, for example, the following apparently successful businesses:
- www.mysurgerywebsite.co.uk—this is a business that builds websites for doctor’s surgeries.
- www.parentpay.com—this is a business that describes itself as the “market leader in online payment for schools”
Woaah there! What’s happening here? Surely any decent web design company can provide a website for a doctor’s surgery? The specific company might provide a tiny little bit more knowledge, but surely the knowledge required to write a decent website is around 99 percent of the knowledge required to write a doctor’s surgery website. Surely, handling payments from parents for school activities is just the same as, well, umm, handling payments, and there are plenty of companies that do that perfectly well.
This, of course, misses the point. The potential customers don’t know that. To them, they are likely to trust the over-specialised presentation rather than the generic one. Indeed, the generic one might sound a little bit shady, evasive or amateurish: “What kind of web sites do you make?”, “Well, all kinds really.”, “Yes, but what are you really good at.”, “Well, it doesn’t really matter, websites are all basically the same once you get into the code.”. Contrast that with “we make websites for doctors.” Simples, innit.
So that’s my business startup advice. Find an area that uses your skills, find some specialised application of those skills, then market the hell out of your skills in that specific area. You will know that your skills are transferable—but, your potential customers won’t, and they will trust you more as a result.
I’ve noticed the same with trying to build academic collaborations. Saying “we do optimisation and data science and visualisation and all that stuff” doesn’t really cut it. I’ve had much more success starting with a specific observation—we can provide a way of grouping your data into similar clusters, for example—than trying to describe the full range of what contemporary data science techniques can do.
Similarly with courses. Universities have done well out of providing “MBA in Marketing for XX” or whatever, when the vast majority of the course might be generic marketing skills. Again, the point here is more one of trust than one of content.
This epitomises the idea “if you don’t have anything to say, you don’t have to say anything”. I think some people genuinely think that if there is a box on a web page for comments then they have been singled out from all the people on the web to make that comment, and so feel obliged to reply. Or, they were just being facetious 😉
When we are learning creative writing at school, we learn that it is important to use a wide variety of terms to refer to the same thing. To refer to something over and over again using the same word is seen as “boring” and something to be avoided.
It is easy to think that this is a good rule for writing in general. However, in areas where precision is required—technical and scientific writing, policy documents, regulations—it is the wrong thing to be doing. Instead, we need to be very precise about what we are saying, and using different terminology for the sake of making the writing more “interesting” is likely to damn the future reader of the document to hours of careful analysis of whether you meant two different-but-overlapping words to refer to the same thing or not.
A while ago I had a conversation with a colleague, that went something like this:
Me: “I’ve come across a new book that would be really useful to you for the module you’re teaching next term.”
Colleague: “I don’t really think I need that.”
Me: “No, it’s really good, you will find it really useful.”
Colleague (rather angry): “I appreciate your suggestions, but I REALLY DON’T NEED A BOOK ON THE SUBJECT.”
It eventually transpired that my colleague was interpreting “you will find this book useful” as “Because you don’t know the subject of the course very well, you will need a book to help you learn the subject before you teach it to the students.”. By contrast, I was meaning “you will find it useful as a book to recommend to your students”.
This subtle elision between “you” being taken literally and being used in a slightly elided way to mean “something you are responsible for” is easily misunderstood. Another example that comes up frequently is when I am discussing with students some work that they have to do on a project. I will say something like “you need to make an index of the terms in the set of documents”, using the common elision in software development of “you need to” to mean “you need to write code to”, not “you need to do this by hand”. Most of the time the students get this, but on a significant minority of occasions there is a look of incomprehension on the students’ faces as they think I have asked them to do the whole damn tedious thing by themselves.
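For the avoidance of doubt, what the elided “you need to make an index” actually asks for is something like the following minimal sketch of an inverted index (the function name and the toy documents are mine, purely for illustration): a mapping from each term to the set of documents it appears in.

```python
def build_index(documents):
    """Build an inverted index from {doc_id: text} to {term: set of doc_ids}."""
    index = {}
    for doc_id, text in documents.items():
        for term in text.lower().split():
            # setdefault creates an empty set the first time a term is seen
            index.setdefault(term, set()).add(doc_id)
    return index

docs = {"d1": "the cat sat", "d2": "the dog ran"}
print(sorted(build_index(docs)["the"]))  # → ['d1', 'd2']
```

Ten lines of code—rather than an afternoon of going through the documents “by hand”, which is exactly the misreading the elision invites.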