Really think that Jacques Derrida’s children should start a podcast called “My dad wrote a pomo”.
Wrote a…
September 28th, 2022
Harder than you Think
September 28th, 2022
Here’s an interesting example of where technology that tries to be helpful makes something really difficult when you step just a tiny bit outside the intended use case.
A colleague sent me a document with a QR code for an event—a link to Eventbrite. There was no URL for the event, and I wanted to send the link to a group of students via email. OK—let’s see if we can scan the QR code from the document somehow. No dice. There isn’t, as far as I can tell, any way to make my laptop QR scanner look at something on the screen rather than through the camera.
OK, let’s look at it on my phone. Taking a photo of my laptop screen with my phone detected the QR code, but the phone automatically forwarded it to the Eventbrite app, so I still didn’t have a URL. I tried searching for the name of the event on Eventbrite, but it is a private, unlisted event, so it didn’t show up in search.
Next I tried to see if there was a website that converted a QR code to a URL. Hard to find—most searches for “QR to URL” took me to sites that created QR codes from URLs, even with “QR to URL”, “URL from QR” etc. in quotation marks. Most of the “successful” links were not services to do it, but code fragments for coders wanting to build this functionality into a program. I found two sites that claimed to do it, took a screengrab of the QR code, and uploaded it to both: one didn’t recognise it, the other just hung with “processing…” for several minutes.
Eventual solution: I downloaded a new QR reader app on my phone which, thankfully, didn’t automatically open the Eventbrite app. I copied and pasted the URL into the Notes app, which synced with my laptop, and from there I could paste it into an email.
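For what it’s worth, the “code fragments for coders” route turns out to be only a few lines if you happen to have Python and OpenCV to hand. This is just a rough sketch (the filename is made up, and the detector can be fussy about small or blurry screenshots):

```python
# Minimal sketch: decode a QR code from a screenshot file using OpenCV.
# "qr_screenshot.png" is a hypothetical filename; any reasonably clear image of the code should work.
import cv2

img = cv2.imread("qr_screenshot.png")
if img is None:
    raise FileNotFoundError("Could not read the screenshot")

detector = cv2.QRCodeDetector()
url, points, _ = detector.detectAndDecode(img)

if points is None or not url:
    print("No QR code found in the image")
else:
    print("Decoded URL:", url)
```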
Dans le Noir
August 5th, 2021
A game that I invented, out of whole cloth, in a dream last night.
There is a group of players in a house. The house is in darkness. Each player takes it in turn to draw a card (call them the “active player”), and they read that card (with the aid of a torch). This gives them a task that they have to do, e.g.:
- Find ten pairs of socks and return them to a base point
- Find a piece of out-of-date technology and return it to a base point
- Find a hat and “crown” another person by putting it on their head
The active player is allowed to use the torch whilst carrying out the task, whilst other players are not allowed any light source. The active player wins by completing the task before the other players work out what they are doing, whilst the other players win by saying what the active player is doing before they complete the task.
A harder form of the game involves the active player choosing an “adjective” card, e.g.:
- “red”
- “German”
and they have to relate their solution to the task to this in some way.
For something that I invented when I was fast asleep, it isn’t awful, and I’m sure could be developed into something playable—health and safety considerations notwithstanding!
Dishonest Honesty
February 23rd, 2021
One weird consequence of being more honest and direct than the average person in your culture is that people accuse you of dishonesty. It’s like “you were being unreasonable in not telling me the usual lies that I was expecting”.
Furniture Form in Contemporary Composition
January 13th, 2021
I once watched a TV programme about the furnishing of historic aristocratic houses. One thing that was commented on is that there wasn’t typically a unified interior design idea behind the furnishing of such houses. Instead, each house had accumulated a collection of furniture and accessories over the years, in various different styles. They are not placed willy-nilly—some thought has typically gone into what goes where—but things are retained for their own aesthetic value, rather than being chosen because they belong to an overall design concept.
And—this is the important point—the expert fronting the programme made the point that as long as the individual pieces are of high quality, the assembly doesn’t matter that much. Each “stands alone” in its context, and has sufficient heft to act as an independent aesthetic object as part of a collection of furniture in a room.
We are perhaps more used to this in architecture. It is seen as aesthetically naive to demand that buildings “fit in” with their neighbours. If a building is seen to have sufficient quality, then it will make an impact—a “statement building”—in a context of buildings of many different styles. A number of such “statement buildings” in different styles can fit together to provide a meaningful whole composition. Again, placement and look are not arbitrary—there is some consideration, for example, to the overall massing of a group of buildings, and a fine building can still look bad in the wrong place—but, we rarely demand the same kind of surface aesthetic coherence that we might demand of a contemporary interior design.
I’ve been influenced a lot by this idea in thinking about musical composition. How do we put together “musical material”? Styles of music are characterised to some extent by “form”. Classical music has ideas of “sonata form”, where a couple of pieces of (melodic+harmonic) material are introduced, then varied/developed/combined, and then re-presented at the end. Traditional pop songs have a structure that alternates verses and choruses, perhaps occasionally also including an instrumental interlude. Much electronic dance music is based on a layered form: various pieces of music that “fit together” are placed on top of each other (a drum track, a vocal, a piano break, a sustained synth part), and tension and dynamics work by introducing and dropping these layers. Jazz is often structured around alternations of a melody and solos that are grounded in the underlying harmony of that melody.
Typically, all of these forms rely on some relation between the different components in the form. Could there be a “furniture form”, where different strong pieces are presented in the same environment, without a strong relation between the different pieces? There is a resonance here with ideas such as happenings, John Cage’s Musicircuses, and collage. Perhaps a piece such as Michael Finnissy’s Molly-House is a good example.
In that piece, we see various different sub-groups in a large ensemble presenting different kinds of music. The different musics have been carefully created so that performing them in the same space at the same time makes sense—this isn’t an arbitrary pile of musics. Indeed, the composer describes it as an “assemblage”. Nonetheless, the relationship between them isn’t really anything to do with traditional musical form and structure—they aren’t related harmonically, or developed out of each other, or anything like that. Like the pieces of furniture, or the buildings around the town square, they make sense alone, but reinforce each other by dint of being in the same space.
(A)battoir Worker to (Z)oologist
January 13th, 2021
When I was at school, there was a set of filing cabinets in the corner of the school library containing information about careers. It was arranged alphabetically. The first entry was “abattoir worker”, which was an ongoing source of amusement for us. I think it went all the way through to “zoologist”.
What I liked about this was that it presented a “flat” view of possible careers. There was no notion that some careers were more important, prestigious, or favoured compared to others. As pointed out in Judith Flanders’s fascinating recent book A Place for Everything: The Curious History of Alphabetical Order (ISBN 978-1509881567), the fact that alphabetical order removes hierarchy from collections was seen as radical, uncomfortable, or just “wrong” when it was first introduced. Surely, an organisational system that ranks “angels” before “gods” must be deeply flawed?
Of course, we don’t have any such qualms about alphabetical order these days. But, I think that we also neglect the power of listing things in this flat, neutral, arbitrary way. One thing that our school did very well was to present the whole gamut of careers as a set of possibilities, and to treat the role of careers advice as helping us to think about the career that suited our skills, aptitudes, and temperament.
It is easy to criticise this as naive. Surely, we should be warning people from a comprehensive school in the middle of a council estate that they will face certain challenges if they choose a career that is “inappropriate” for them socially. Is it really right to encourage such people to think that they could be barristers or whatever? Surely, the social barriers are too great.
But, I think that this gave us the naive confidence to bluster through those barriers. Because we hadn’t been warned, we blundered—I’m sure somewhat naively—into a huge variety of careers. A certain naivety can give you confidence: if no one has told you that you will face barriers, you can blunder through them.
Perhaps we should give alphabetical order more credit!
Autistics and Theory of Mind
January 7th, 2021
Autistics are usually characterised as having a weak “theory of mind”. But when it comes to writing instructions and guidance, I’ve found that autistics are much, much better at imagining themselves into the position of the target audience, thinking carefully about what needs to be said, diagnosing which assumptions are missing, and working out how to set things out in a step-by-step way.
By contrast, neurotypical people write guidance that is full of missed assumptions and absent steps, and then blame the target audience for being thick or ignorant when they fail to follow the shoddily written guidance.
Accentuate the Positive
December 31st, 2020
A few years ago I went to a talk about software project management. The first half of the talk was a description of a way of assessing progress on software projects, which resulted in a grade on a three-point scale (red-amber-green) being allocated to each part of the project. So far, so good—the ideas were decent. Then, the speaker turned to the audience, and said:
“So, now we have assigned a grade to each of these aspects of the project, where should management prioritise its attention?”
Here, I thought, is going to be the nub of the talk. The speaker is going to assume that we will think that attention should be given to the red zone—the danger zone—and then dramatically reveal that actually it is the green zone that we should be paying attention to.
Of course, disappointment hit with the next sentence. After some vague mumblings from the crowd, the speaker said something along the lines of “…of course, the red zone, because this grading has allowed us to identify those aspects of the project that are struggling and need support.”
Of course, I don’t deny this. Part of the point of doing an exercise like this is to identify areas where there are problems. But, I would argue for at least equal attention to the green zone. As managers, we need to understand why things go well when they do. Going well isn’t a default state that “just happens”, leaving us free to attend only to the things that are going badly.
I see this in many aspects of higher education management and leadership. Focus on survey results such as the NSS almost inevitably turns to those aspects of provision that are lowest graded. Staff evaluations focus on identifying people who have problems.
What does this look like in practice? One practice I have put in place is for regular committee meetings to have a section where we talk about good practices in a particular area—what do we do well in e.g. student recruitment, research grant applications, mental health, etc.? Another is for each school (or equivalent) represented at such a meeting to present what they think goes particularly well in their area of work. Sometimes this is successful, but sometimes people have difficulty recognising what counts as good practice in their area—the smooth, well-thought-out processes are just that: smooth, well thought out, and as a result invisible. As a result, I’ve turned to simply asking people to articulate what they do. This can easily end up throwing up surprises, as one unit is amazed at how easily something that they struggle with is done elsewhere.
Similarly, when looking at surveys and the like, there is a lot to be gained from asking why a particular well-rated aspect is well rated. Can we learn from that to promote excellence elsewhere? Can we partner a unit that is successful in one area with a unit that is struggling in that area, so that the struggling one can learn? Can we develop a repository of good practices that work well in an area, so that people coming new to doing or managing that area can start from a high baseline?
This seems to me to be where management transitions into leadership. It is important for short-term success to understand problems and manage them up to competence or success. But, for longer term success, we need a wider understanding of the successes in our organisation.
Exit Questionnaires and Interviews
December 1st, 2020
Organisations like to do exit questionnaires and interviews with people who are leaving the organisation voluntarily. They want to understand why people have chosen to leave their job, whether there are any problems or any way in which they can improve their talent development processes or pipelines.
But, there is no upside to this for the (former) employee. They are leaving or have left—they don’t owe the labour of the questionnaire or interview to the former employer in any contractual sense. Also, there is a considerable downside risk. If someone says something damning or (perhaps unintentionally) disruptive at such an interview, it can burn bridges for future partnerships or a future return to that organisation.
The risk is stacked against the employee and in favour of the employer. So, it seems only reasonable that a sensible employee would refuse such a request. Perhaps, therefore, there needs to be some motivation to compensate for the risk. I don’t think that it is unreasonable for the former employer to pay the former employee a non-token amount to do this.
We baulk at this. Why should we pay for it? Well, if we value the information, we should be able to work out a reasonable monetary value for it—how much would our organisation gain from knowing that piece of information? We seem very reluctant to quantify the cost of information in monetary terms, probably because (unlike a physical thing) it is literally intangible, and so ought, surely, to cost nothing. There are exceptions—companies subscribe to market intelligence briefings, for example—but, overall, we are reluctant to do this. One field that does do it is management accounting, which has a well-developed idea of doing a cost-benefit analysis of gathering information. Sometimes, information just isn’t worth knowing: the difference it would make to our decision making is outweighed by the cost of getting to know it. This still jars with a very human understanding of information.
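To make the management-accounting point concrete, here is a toy calculation of the sort I have in mind (all of the numbers are invented purely for illustration): the expected value of the information is roughly the probability that it changes a decision multiplied by the benefit of making the better decision, and if that is less than the cost of gathering it (the payment to the former employee plus staff time), the interview isn’t worth doing.

```python
# Toy value-of-information calculation (all figures invented for illustration).
prob_changes_decision = 0.10   # chance the exit interview reveals something we would act on
benefit_if_acted_on = 20_000   # e.g. value of retaining a member of staff we would otherwise lose
cost_of_gathering = 500        # payment to the former employee plus staff time

expected_value = prob_changes_decision * benefit_if_acted_on  # 2,000 here

if expected_value > cost_of_gathering:
    print(f"Worth doing: expected value {expected_value:,.0f} vs cost {cost_of_gathering:,.0f}")
else:
    print(f"Not worth doing: expected value {expected_value:,.0f} vs cost {cost_of_gathering:,.0f}")
```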
Why is Funny Funny?
November 16th, 2020
Occasionally, I hear the opinion that topical TV panel shows such as Have I Got News for You and Mock the Week are “scripted”. Clearly, this is meant pejoratively, not merely descriptively: a scripted programme would not be presenting itself to us honestly.
I don’t believe this (I have seen a couple of recordings of similar shows, and there isn’t any evidence of word-by-word scripting to my eye), but equally they aren’t simply a handful of people going into a studio for half-an-hour and chatting off the top of their head. My best guess for what is happening is a mixture of genuinely off-the-cuff chat, lines prepared in advance by the performers themselves, lines suggested by programme associates, material workshopped briefly before the performance, and some pre-agreed topics so that performers can work in material that they use in their live performances. All this, of course, topped by the fact that a lot of material is recorded, and the final programme is a selective edit of this material.
But, if it were to be scripted from end-to-end, and the performers essentially actors reading off an autocue, why would that be a problem? Like Pierre Menard’s version of Don Quixote, we wouldn’t know the difference. Why would knowledge that these programmes were scripted actually make them less funny? That is, that knowledge would make us laugh less at them—this isn’t just some contextual information, where we would still find it just as funny, but feel slightly cheated that it wasn’t as spontaneous as we are led to believe. We would, I would imagine, actually find it less funny.
There’s something about the human connection here. Even though we don’t know the performers personally, there is still some idea of it being “contextually funny”. Perhaps, in some odd way, it is “funny enough” to be funny if we believe it to be spontaneous, but not funny enough if we believe it to be scripted. Perhaps we are admiring the skill of being able to come up with the lines “on the fly”—but admiration doesn’t usually cash out in laughter. Somehow, it seems to be to do with the human connection that we have with these people. We find it genuinely funny because of the context.
I’ve often wondered why I can’t find other countries’ political satire funny. I can work out the wordplay in Le Canard Enchaîné, but I don’t chuckle at it. I might admire it, but the subjects of the satire are just too distant; perhaps I don’t have a stake in them in the same way that I do in the people I read about in Private Eye.
When I used to lecture on the Computational Creativity module at Kent, I would talk about the Joking Computer system, an NLP system that could generate competent puns such as “What do you get if you cross a frog with a street? A main toad.”. I used to say that we would find that joke funny—genuinely funny—if it was told to us by a six-year-old child, say a younger brother or sister, even though it isn’t a hilarious joke. Similarly, perhaps, we might give the computer some leeway—it isn’t going to produce an amazingly funny joke, but it is funny for a computer. But, this argument always felt a bit flat. Perhaps it is the human connection again: we don’t care that the (soul-less) computer has “managed” to make a joke, because we lack that human connection with it.
My drama teacher at school used to say about the performances that we took part in that he wanted people to say that they had seen a “good play”, not a “good school play”. There is something in that. Perhaps, the same is true for computational creativity. It needs to be “creative enough” to be essentially acontextual before we start to find it genuinely creative.
Acceptability of Deepfakes for Trivial Corrections: The Thin End of a Wedge?
June 17th, 2020
Clearly deepfakes are unacceptable, yes? It is morally unsound to create a fake video of someone saying or doing something, and to pass that off as a real recording of that person doing it.
But, what about a minor correction? I recently saw a video about personal development, talking about how people move through various stages of life, and making a number of very positive points and pieces of advice. I thought that this might be useful as part of a professional development session to show to a group of students. But, there was a problem. At some point, the speaker talks about life changes, and talks about adolescence, including a reference to “when people start to get interested in the opposite sex”. The heteronormativity of this made me flinch, and I certainly wouldn’t want this to be presented, unadorned, to a group of students. This is both because of the content as such, and because I wouldn’t want the session to be derailed onto a discussion of this specific point, when it was a minor and easily replaceable example, not core to the argument.
I suppose what I would typically do is use it, but offer a brief comment at the beginning: there is something in it that is not germane to the main argument but is problematic, and on balance I thought it would be good to use this resource despite the problematic phrase. I might even edit it out. Certainly, if I were handing out a transcript rather than using the video, I would cut it out with a […] ellipsis. But, these solutions might still focus attention on it.
So—would it be acceptable to use a deepfake here? To replace “when people start to get interested in the opposite sex” with “when people start to develop an awareness of sexuality”, for example? There seems something dubious about this—we are putting words into someone’s mouth (well, more accurately, putting their mouth around some words). But, we aren’t really manipulating the main point. It’s a bit like how smoking has been edited out of some films, particularly when they are to be shown to children—the fact of the character smoking isn’t a big plot point, it was just what a character happened to be doing.
So, is this acceptable? Not acceptable? Just about okay, but the thin end of the wedge?
Big Scary Words
May 19th, 2020
I once saw a complaint about a crowdfunded project that was going awry. The substance of the complaint was that, in addition to their many other failings, the people funded by the project had used some of the money to set up a company. Basically: “I paid you to make a widget, not to waste my money setting up a company”. There’s an interesting contrast in the view of the word “company” here. To someone in business, spending a few hundred pounds to register a company is a basic starting point, providing a legal entity that can take money, hold the legal rights to inventions in a safe way, provide protection from personal bankruptcy, etc. But to the person making the complaint, “setting up a company” no doubt meant buying a huge office building, employing piles of accountants and HR departments, and whatnot.
We see a similar thing with other terms—some things that are part of normal business processes sound like something special and frightening to people who aren’t engaging with these processes as part of their day-to-day life. For example, your data being put “on a database” can sound like a big and scary process, something out-of-the-ordinary, rather than just how almost all data is stored in organisations of any substantial size. Similarly, “using an algorithm” can sound like your data is being processed in a specific way (perhaps rather anonymous and deterministic—the computer “deciding your fate”), rather than being a word used to describe any computer-based process.
We need to be wary of such misunderstandings in describing our processes to a wider public.
The Diversity of my Interests (2)
April 29th, 2020
(no, I don’t know where “boxing gloves and pads” came from either)
Repetition and Communication
April 17th, 2020
As I so often say, repetition is a key point in communication.
I’ve been in endless meetings about, for example, student induction, where we have a futile discussion about how to present lots of information. On one hand, should we present it all at once – the single, long induction event, where we try to tell everyone everything? No, we shouldn’t! People will get bored, they won’t take much in, they’ll be frightened by the amount of information. But no! If we don’t tell everyone everything up front, they’ll be confused and anxious. They won’t know what’s what, and before we know it, we’ll have people picking up random wrong information here and there. Better to get it out of the way at the beginning.
Why not both? Start with the big, comprehensive presentation, but recognise (and be clear) that people won’t be taking everything in. There’ll be reminders! There’s a reference here where you can look things up! If you don’t know, ask this person! That way, we give people a framework from which they can take the gist, and then we remind them, and repetition makes for a stronger memory (“stay home, protect the NHS, save lives”).
I think a lot of people have internalised an idea that (one-to-many) communication of information/procedures/policies should be a one-shot thing. If you’re not communicating everything, perfectly, at your first attempt, then you’d damn well better make it better so that it does come across. I don’t know where this pernicious idea comes from.
Perhaps I’ve had it squeezed out of me through years of studying complex maths and similar topics. When I was at university, it was clear that you weren’t going to get the topics right away. You’d go to a lecture, and perhaps get the broad idea, but then you’d need to spend ages reading the book – over and over again – trying problems, working out your own examples, before you really grokked the idea. Indeed, there was a useful piece of guidance about reading maths textbooks in our student handbook – “sometimes it’ll take an hour or two to understand how to go from one line to the next”.
As I said earlier, let’s embrace repetition. Again, and again.
A Theory of Stuff (1)
August 12th, 2019
What underpins the broad shift in (broadly Western) design from highly decorated things in times up to the 19th century to the eschewal of decoration later in the 20th century and beyond? Here is a rough-cut of an economic theory of decoration.
Prior to the industrial revolution, individual things were expensive. The cost of making stuff was in material—not necessarily raw material, but getting material from its raw state to a state where it could be used. A lot of this was semi-skilled labour: cheap by the hour, but there was a great deal of it. There is an interesting argument that a shirt in mediaeval times cost around 3500 USD in modern money. For example, spinning the thread (by hand) took around 500 hours of work, and weaving the cloth another 72. Each shirt was therefore a very valued object: worn to exhaustion, frequently repaired, and repurposed if it was no longer viable in its original form (there is a nice discussion of this in Susannah Walker’s recent book The Life of Stuff).
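As a back-of-the-envelope check (the hourly rate below is my own assumption, not a figure from the argument above), the hours alone get you into that sort of territory:

```python
# Rough check of the "mediaeval shirt" figure; the wage rate is an assumption for illustration.
spinning_hours = 500
weaving_hours = 72
hourly_rate_usd = 6.0  # a nominal modern low wage; substitute your own figure

total_hours = spinning_hours + weaving_hours   # 572 hours
labour_cost = total_hours * hourly_rate_usd    # about 3,432 USD
print(f"{total_hours} hours of labour is roughly ${labour_cost:,.0f}")
```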
Similarly for other materials. Transport costs, in an era where horse and human power were the prime motive forces, were huge. The cost of getting material to a building site—a minor cost in a modern building project—might have been a huge proportion of the total cost.
The marginal cost of adding some decoration to such an object was therefore small. If you have already paid for hundreds of hours of labour to get your basic shirt, adding a few more days of work for some decoration is a minor marginal cost.
By contrast, with 20th century manufacturing techniques, the cost of producing the object is much less: the core materials can be produced and shipped at low cost, and a lot of the cost is coordinating the various low-cost steps and delivering the object to the final consumer. The relative labour cost of adding elaborate decoration is high. This doesn’t fully stand up—after all, modern techniques of manufacture can add some decoration very cheaply and easily. But perhaps in some cases it holds—I can see this particularly in the case of architecture, where the logistical cost of coordinating lots of special decorative components will be high.
Legacy Code (1)
June 24th, 2019
It’s fascinating what hangs around in code-bases for decades. Following a recent update, Microsoft Excel files in a certain format (the old-style .xls files rather than the .xlsx files) started showing up with this icon:
It was one I hadn’t seen for a couple of decades. More interestingly, the smaller version of the icon was this one:
What has this to do with Excel? It looked vaguely familiar, but I couldn’t place it. After a bit of thought and Googling around, I realised that this was the icon for a program called ResEdit, which was an editor for binary files that I remember using back in the pre-OS X days. Looking at this further, I realised that the last version of ResEdit was made in 1994.
How did these suddenly appear? There are occasional references to this happening on various Mac forums over the last few years. I suspect that they are sitting in collections of visual assets in codebases that have been under continuous development for 30 years or more, and that somehow the link to the contemporary icon has been deleted or mis-assigned. I’m particularly surprised that Excel wasn’t completely rewritten from scratch for OS X.
What do people think coding is like?
April 22nd, 2019
I wonder what activity non-coders think coding is like? I remember having a conversation with a civil servant a few years ago, where he struggled to understand why we were talking about coding being “creative” etc. I think that his point of view is not uncommon—seeing coding as something that requires both intellectual vigilance and slog, but is fairly “flat” as an activity.
Perhaps people think of it as like indexing a book? Lots of focus and concentration is needed, and you need some level of knowledge, and it is definitely intellectual, “close work”. But, in the end, it doesn’t have its ups and downs, and isn’t typically that creative; it’s just a job that you get on with.
Perhaps they think it is like what they think mathematics is like? Lots of pattern-matching, finding which trick fits which problem, working through lots of line-by-line stuff that kinda rolls out, albeit slowly and carefully, once you know what to do. This isn’t entirely absent from the coding process, but it misses the ups and downs that actually doing maths—or coding—has.
If people have a social science background, perhaps they think of “coding” in the sense of “coding an interview”—going through, step by step, assigning labels to text (and often simultaneously coming up with or modifying that labelling scheme). Again, this has the focus that we associate with coding, but again it is rather “flat”.
Perhaps it would be interesting to do a survey on this?
Differentiation in the Lecture Room
February 14th, 2019
Students come to university with a wide range of ability and prior knowledge, and take to different subjects with different levels of engagement and competence. This spread isn’t as wide as in other areas of education—after all, students have chosen to attend, have been selected within a particular range of grades, and are doing a subject of their choice—but there is still a decent amount of variation there.
How do we deal with this variation? In school education, they talk a lot about differentiation—arranging teaching and learning activities so that students of different levels of ability, knowledge, progress, etc. can work on a particular topic. I think that we need to do more of this at university; so much university teaching is either aimed at the typical 2:1 student, or is off-the-scale advanced. How can we make adjustments so that our teaching recognises the diversity of students’ knowledge and experience?
In particular, how can we do this in lectures? If we have a canonical, non-interactive lecture, can we do this? I think we can: here are some ideas:
Asides. I find it useful to give little parenthetical asides as part of the lecture. Little definitions, bits of background knowledge. I do this particularly for the cultural background knowledge in the Computational Creativity module, often introduced with the phrase “as you may know”. For example: “Picasso—who, as you may know, was a painter in the early-to-mid 20th century who invented cubism, which plays with multiple perspectives in the same painting—was…”. This is phrased so that it more-or-less washes over those who don’t need it, but is there as a piece of anchoring information for those who do. Similarly for mathematical definitions: “Let’s represent this as a matrix—which, you will remember from your maths course, is a grid of numbers—…”. Again, the reinforcement/reminder is there, without patronising or distracting the students who have this knowledge by having a “for beginners” slide.
Additional connections. Let’s consider the opposite—those students who are very advanced, and have a good broad knowledge of the area. I differentiate for these by making little side-comments that connect to the wider course or other background knowledge, sometimes introduced with a phrase such as “if you have studied…” or “for those of you who know about…”. For example: “for those of you who have done an option in information retrieval, this might remind you of tf-idf.” Again, this introduces the connection without putting it on a slide and making it seem big and important to those students who are struggling to manage the basics, but gives some additional information and a spark of a connection to the students who are finding the material humdrum. (I am reminded of an anecdote from John Maynard Smith, who talked about a research seminar where the speaker had said “this will remind you of a phase transition in statistical physics”: “I can’t imagine a time in my life when anything will remind me of a phase transition”.)
Code examples. A computing-specific one, this. I’ve found that a lot of students click into something once they have seen a code example. These aren’t needed for the high-flying coding ninjas, who can go from a more abstract description to working out how the code is put together. But, for many students, the code example is the point where all the abstract waffle from the previous few minutes clicks into place. The stronger students can compare the code that they have been writing in their heads to mine. I sometimes do the coding live, but I’ve sometimes chickened out and used a screencap video (this also helps me to talk over the coding activity). A particularly clear example of this was where I showed a double-summation in sigma notation to a group, to largely blank looks, followed by the same process on the next slide as a nested loop, where most students seemed to be following clearly.
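To give a flavour of that last example (this is a reconstruction for this post, not the actual slide), here is the double summation, adding up every entry a[i][j] of a grid of numbers over the rows i and the columns j, written out as a nested loop:

```python
# Reconstruction of the lecture example (not the original slide):
# the double summation over i and j of a[i][j], written as a nested loop.
a = [
    [1, 2, 3],
    [4, 5, 6],
]

total = 0
for i in range(len(a)):          # outer sigma: over the rows
    for j in range(len(a[i])):   # inner sigma: over the columns
        total += a[i][j]

print(total)  # 21
```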
Any other thoughts for differentiation tricks and tips specifically in the context of giving lectures?