That A-team, eh? They really liked making quiches, yes? They loved it when a flan came together.
Every cloud computer has a very expensive data centre lining.
Firms selling things have a dilemma. Price something too low, and, whilst it will sell well, it won’t make enough money to be worth doing (leading to the old joke: “We’re selling each item at a loss; but, don’t worry, we’ll make it up in volume.”). Price something too high, and you won’t sell enough widgets to make enough money. The traditional view is that this is a tradeoff: find a mid-range price where you sell enough widgets at a high enough price. If you can’t do this, then the business isn’t viable.
This is finessed by the notion of adaptive pricing: selling the same widget to different people at different prices, which makes more businesses financially viable. Firms adjust prices based on some information that they can observe, or on some structuring of how/when/where/to whom the products are sold:
- Selling to different demographics based on broad ability to pay. Discounts for students or retired people, who are likely to have a lower income. Changing prices at different times of the day, based on the demographic that is around (e.g. a price premium for buying a coffee at the station at peak commuter time; or, more simply, the idea of peak time tickets).
- Rewarding time/organisation: tickets come on sale at a particular date/time, but there are only a finite number at that price. People who are time rich/cash poor can spend time to be organised to buy at the cheaper price, whereas people who have more money don’t have to spend the time, they just buy at the higher price later.
- Selling at different prices in different locations. This has a dark side too; some firms have exploited the lack of transport options of poor people living in cut-off areas by selling at a higher price.
- Auctions, where items are sold for a bespoke price based on demand.
- Secondary markets, where a firm sells widgets cheaply and efficiently, but a secondary retailer (such as a ticket tout) buys up some of them and sells them on to the final purchaser at an inflated price.
- Hiding prices. Rather than a price being given up-front, you have to go through some intermediary system that judges your ability to pay, or your need for the product, and adjusts prices accordingly. The watch shop that judges whether you are a middle-income watch enthusiast or a rich person who wants to brag about the cost of their watch; the retailer of tools who judges whether you will be using the tool day-in-day out or are an occasional user who would buy it for a sufficiently low price.
- Similarly, making use of your purchasing history to adjust prices on an online system.
- Micropayments. Rather than paying up-front to purchase something, you pay by the number of minutes/hours that you use it, or what you use it for.
- Time-adjusted pricing. You show an interest, and if you want it right now you pay the price; the price goes down with time, but if you wait too long you run the risk (perhaps entirely artificially generated) that stock will run out. The TV-based retailer PriceDrop is canonical here.
- Rewards. You all pay the same price up front, but more price-sensitive customers are given some of that money back as vouchers so that their average spend per widget is lower in the long run.
- Direct demand-adjusted pricing. Uber’s entirely-up-front “surge pricing”, for example. Again, speaks to the time/money tradeoff; someone who needs a lower price might be prepared to wait for half-an-hour to see if surge pricing goes away.
- Artificial hobbling. You all buy the same product, making manufacturing easy, but some features are turned off on the lower product range. Tesla cars work like this; you can buy a cheaper version, which has a lower distance range; but, the hardware is the same as the premium product, the distance is just limited by a software switch in the cheaper version.
- Things that seem more different. The same object sold with changes to the branding. Surplus stock sold to a poundshop on the condition that they repackage it. Cheap train tickets sold through a different brand, but when you show up you are on the same train in the same seats as people who paid a lot more.
- Superficial benefits. Exploiting that some people will pay for “the best” regardless. First-class train travel is probably a decent example here; a slightly more comfortable seat and free tea/coffee, but sometimes at a price premium which seems irrationally larger.
I would make an educated guess that cracking adaptive pricing will be one of the big innovations in business in this century. It is increasingly used, but there is still a huge amount of finessing to do here. Already, supermarkets are experimenting with systems such as electronic price displays, allowing dynamic adjustment of prices during the day, either by broad demographic shifts, or by minute-by-minute demand. And there are already critiques: the transport company that (algorithmically) increases its prices following a natural disaster, the company that (algorithmically) sells the music of a recently-dead star at a premium.
Interestingly, there is a weird potential consequence to all of this. Will this mean that differences in income become less pronounced? If I had an ideal adaptive pricing system, where, say, I charged people not a price, but a proportion of their income, for my product, then that would have the outcome that people would de facto have the same income. Clearly, the systems above are not at that level yet; but, each adaptive pricing innovation brings us closer to that.
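To see why that limiting case would equalise purchasing power, here is a toy model (the numbers and the `affordable_widgets` helper are mine, purely for illustration): if every widget costs a fixed proportion of the buyer’s income rather than a fixed price, everyone can afford exactly the same number of widgets, whatever they earn.

```python
# Toy model of perfectly adaptive pricing: each widget costs a fixed
# proportion of the buyer's income (hypothetical numbers throughout).

def affordable_widgets(income, price_fraction=0.01):
    """How many widgets the buyer can afford if each costs a fraction of income."""
    price = income * price_fraction
    return income / price  # = 1 / price_fraction, independent of income

incomes = [20_000, 45_000, 250_000]
print([affordable_widgets(i) for i in incomes])  # the same answer for everyone
```

Everyone gets `1 / price_fraction` widgets, so, with respect to this product at least, incomes are de facto identical.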
A long time ago, as a wet-behind-the-ears English person coming to Scotland for the first time, I was intrigued/surprised/amused to see a copy of The New Testament in Scots in a bookshop (the old James Thin on South Bridge, now a branch of Blackwells).
I was vaguely aware that there was a Gaelic language, which not many people used, and had a basic knowledge that there was a Scots accent and vocabulary, albeit largely gleaned from watching Russ Abbot’s “see u Jimmy” character on TV:
…but the idea of treating this as a language was alien to me. I’ve developed my knowledge of this world over the years, and can appreciate its literary qualities, particularly through the thoughtful work of Hugh MacDiarmid. But, what explains my initial sense that this sort of thing is a bit ludicrous, a little trying-too-hard:
…a little too close to the clearly humorous (though perhaps not evangelically purposeless) Ee by Gum, Lord!: The Gospels in Broad Yorkshire.
Why did I, 25 years ago, think that its description as “a translation” was odd? I wouldn’t have regarded a translation into French or Japanese or Guarani as strange—so, why Scots? This touches, I suppose, on the language vs. dialect debate: when does a dialect become a separate language? This seems to be an ill-defined question; there is clearly a continuum, and whilst groups of language-users cluster at certain points thereon, this doesn’t happen cleanly enough to form a series of isolated clumps.
One idea that might help to explain this is the uncanny valley; here’s one of its inhabitants, a rather realistic looking humanoid robot:
This sort of thing—not far off being human, but not close enough to “pass”—is said to be uncanny, and this is backed up by a number of empirical studies. People are freaked out by this, much more than by something really realistic or something more cartoony and obviously unrealistic. There is a point on the similarity scale, close to full realism, where suddenly people’s familiarity and comfort with the thing rockets downward:
I think the same is true for languages. Sufficiently far away—English to French, say, or Sanskrit—and the language is dissimilar, clearly different. Close enough—Nottinghamshire to Yorkshire, say—and the similarities are unremarkable. But the distance from RP English to Scots sits just at the right distance of unfamiliarity: like enough to be familiar, far enough away to seem different. Interestingly, the reaction is one of amusement rather than unsettledness; but, the idea of an emotional reaction being triggered by something that is close to, but not quite, the familiar is still there.
Slack—like email, but somehow with a lot less guilt about ignoring it.
Every time we have an open day at Kent, the University of Essex (hello to my dear friends there!) pays someone to drive a bloody great van with a mahoosive “University of Essex” poster on it and park it all day opposite the main entrance to our campus.
I can’t imagine that, 20-30 years ago, when we first started to talk about having some kind of competitive ethos between universities, we would ever have imagined ending up in a situation like this. And it seems to be a systematic inefficiency baked into the system. Unlike the often-talked-about “inefficiencies” of public sector management, which seem to be just a matter of motivation and management skill, there are real, ongoing, impossible-to-avoid inefficiencies at the core of a competition-based system.
This is a few hundred pounds that could be going into students’ education or research or, goddamn it, on nicer port for the vice-chancellor’s summer party. Is there any way in which we can get out of this kind of arms race, which is consuming vast amounts of money, time, and attention?
It’s surprising to me, in a world where social media is generally assumed to be ubiquitous, how many people have minimal-to-no online presence. Whilst I was sorting through piles of stuff from my Dad’s house (well, sorting out in the sense of looking at it and then putting it in a box in a storage unit), I came across a lot of things with names on—old school photos, programmes from concerts and plays at school with lists of pupils and teachers, lists of people who were involved in societies at University, details of distant family members, etc. Looking up some people online, I was surprised how often there was no online trace. I understand that some people might have changed names, gone to ground, died, or whatever; but a good third of people, I would say, had no or close-to-no online presence. I don’t quite know what to make of this, but it shows the idea that we are a completely online community to be unreliable.
When I hear about the gun debate in the USA, it sounds to me like this:
Alice: “So, in your workplace, how do they make sure that people do their work well?”
Bob: “Well, it’s straightforward really. It’s written into our contracts—which we’re all very respectful of—that our bosses can hit us over the head with a large piece of wood if we are even a little bit slacking. So, each of the bosses has this piece of wood, and they walk around with it all day,…”
Alice: “But that sounds terrible. Why do people put up with it?”
Bob: “Well, actually it’s not too bad. You see, we have a very strong union, and they’ve agreed that we can all have large pieces of wood too, and so we can hit back and defend ourselves.”
Alice: “But, wouldn’t it be easier for you to all agree not to have the pieces of wood in the first place?”
Bob: “I’m not quite too sure I get you there…”
The flexibility of computer languages is considered to be one of their sources of power. The ability for a computer to do, within limits of tractability and Turing-completeness, anything with data is considered one of the great distinguishing features of computer science. Something that surprises me is that we fell into this very early on in the history of computing; very early programmable computer systems were already using languages that offered enormous flexibility. We didn’t have a multi-decade struggle where we developed various domain-specific languages, and then the invention of Turing-complete generic languages was a key point in the development of computer programming. As-powerful-as-dammit languages were—by accident, or by the fact of languages already building on a strong tradition in mathematical logic etc.—there from the start.
Yet, in practice, programmers don’t use this flexibility.
How often have we written a loop such as for (int i=0;i<t;i++)? Why, given the vast flexibility to put any expression from the language in those three slots, do we hardly ever put anything other than a couple of different things in there? I used to feel that I was an amateurish programmer for falling into these clichés all the time—surely, real programmers used the full expressivity of the language, and it was just me, with my paucity of imagination, who wasn’t doing this.
But, it isn’t just me. Perhaps, indeed, the clichés are a sign of maturity of thinking, a sign that I have learned some of the patterns of thought that make a mature programmer?
The studies of Roles of Variables put some meat onto these anecdotal bones. Over 99% of variable usages in a set of programs from a textbook were found to be doing just one of around 10 roles. An example of a role is most-wanted holder, where the variable holds the value that is the “best” value found so far, for some problem-specific value of “best”. For example, it might be the current largest in a program that is trying to find the largest number in a list.
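To make those roles concrete, here is a minimal sketch of my own (not an example from the Roles of Variables studies) of a find-the-largest loop, with each variable’s role noted in a comment:

```python
def largest(numbers):
    """Find the largest number in a non-empty list."""
    best = numbers[0]  # role: most-wanted holder -- the "best" value found so far
    for i in range(1, len(numbers)):  # i -- role: stepper, walking a known sequence
        if numbers[i] > best:
            best = numbers[i]  # only ever replaced by a "better" value
    return best

print(largest([3, 17, 5, 12]))  # 17
```

Almost every short program is built from a handful of variables playing roles like these two.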
There is a decent argument that we should make these sorts of things explicit in programming languages. Rather than saying “int” or “string” in variable declarations we should instead/additionally say “stepper” or “most recent holder”. This would allow additional pragmatic checks to see whether the programmer was using the variable in the way that they think they are.
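No mainstream language offers such declarations, but as a sketch of the idea (a hypothetical wrapper of my own devising, not a real library), a most-wanted holder could be wrapped so that assigning a “worse” value is caught at run time:

```python
class MostWantedHolder:
    """Hypothetical role-checked variable: its value may only ever improve.

    "Better" is defined by the comparison function passed in; the default
    models "largest so far".
    """
    def __init__(self, value, better=lambda new, old: new > old):
        self.value = value
        self.better = better

    def update(self, new):
        if not self.better(new, self.value):
            raise ValueError(f"{new!r} is not better than {self.value!r}")
        self.value = new

best = MostWantedHolder(3)
best.update(17)       # fine: 17 beats 3
# best.update(5)      # would raise ValueError: violates the role
print(best.value)
```

A compiler armed with role declarations could make the equivalent check statically, flagging code where a “most-wanted holder” is overwritten with something worse.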
Perhaps there is a stronger argument though. Is it possible that we might be able to reason about such a restricted language more powerfully than we can a general language? There seems to be a tension between the vast Turing-complete capability of computer languages, and the desire to verify and check properties of programs. Could a subset of a language, where the role-types had much more restricted semantics, allow more powerful reasoning systems? There is a related but distinct argument that I heard a while ago that we should develop reasoning systems that verify properties of Turing-incomplete fragments of programs (I’ll add a reference when I find it, but I think the idea was at very early stages).
Les Hatton says that software is “cursed with unconstrained creativity”. We have just about got to a decent understanding of our tools when trends change, and we are forced to learn another toolset—with its own distinctive set of gotchas—all over again. Where would software engineering have got to if we had focused not on developing new languages and paradigms, but on becoming master-level skilled with the already sufficiently expressive languages that existed? There is a similar flavour here. Are we using languages that allow us to do far more than we ever need to, and subsequently limiting the reasoning and support tools we can provide?
Old joke: A scientist has a good-luck horseshoe hanging over the door to their lab. A visitor to the lab says to them “Surely you don’t believe in superstitious nonsense like that?”; the scientist replies “Of course not; but, I am told it works even if you don’t believe in it.”
New joke: An atheist goes to church and joins in enthusiastically with the hymns and prayers. Their friend says to them “I thought that you didn’t believe in all of that religious stuff?”; the atheist replies “I don’t; but, I am told it doesn’t work even if you believe in it.”
I have a colleague who is a non-native speaker of English, but who speaks basically fluent English. One gotcha is that he refers to “scrap paper” as “crap paper”—which, when you think about it, isn’t too unreasonable. It’s not unreasonable that “crap paper” could be a commonly-used term for paper that doesn’t have any focused use. I’ve been procrastinating for years about whether to mention this infelicity; it is probably too late now.
Bigger lesson—it is hard, when learning a language, to hoover up that final 0.01% of erroneous knowledge.
There was an interesting question on AskMe a little while ago—what “about us are we oblivious to, but is totally obvious to others?”. There are a number of excellent responses there. My response was that there are lots of people who go through life oblivious to how disorganised they are. There are people who are frightfully disorganised, and don’t realise the amount of picking up/reminding/pre-emptive care/doing stuff that the people around them are taking on to ensure that their life/work/whatever doesn’t collapse in on them. They just think that they are doing the norm, and that somehow the world works at the level of organisation that they have.
This has subsequently provoked in me one of my long dark night of the soul moments, where I worry about what I am doing that doesn’t fit in, that irritates people, etc. I consider myself to be fairly relaxed and laid back, and I often deal with things in a way that is organised but not obviously rushed. I think I am calm but on top of the situation—but, to other people, am I the undercommunicative person who is causing hassle for other people by being too relaxed? Or, have I got the balance right? Perhaps this is the sort of thing that would be interesting to discuss at a 360° review or similar.
I think that where I get into dispute with the social scientists and literary theorists about whether the world is “ordered” is basically down to the counterfactuals we are each thinking of. To them, the fact that sometimes some people can’t quite manage to agree that some words mean the same thing means that the world is fundamentally disordered and truth uncertain and subjective. Whereas to me, I’m constantly gobsmacked that the world isn’t just some isotropic soup of particles and energy, and regard it as amazing that we can even write down some equations that describe at least some aspects of the world to a reasonable level of accuracy, and that by some amazing happenstance the most compact description of the world isn’t just a rote list of particles and their position and momentum.
Recently, I spent an hour sitting in a room with around 30 of my colleagues, writing a 100 word description of one of our research papers, sharing it with colleagues, and working together to improve the description. Next month, we will have another session like this—another 30 person-hours of effort. Another university with which I am familiar employed a creative writing tutor to come in for the afternoon and facilitate a similar exercise.
Why were we doing this? Because one of the requirements of the Research Excellence Framework (REF)—the national assessment of university research quality—requires the submission of research papers to an evaluation panel, each accompanied by a 100 word summary. Even though the next REF isn’t likely to happen until 2021 at the earliest, we are committing a reasonable amount of effort and attention to this; not just to writing our 100 word summaries, but to various mock REF exercises, external evaluations, consulting with evaluators from previous rounds, reading exemplars from previously successful universities, etc. If every university is asking its staff to commit a few hours this year to this kind of activity, this mounts up to about 70 person-years of academic staff effort just this year across the country, not counting the REF officers etc. that the universities employ.
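The back-of-the-envelope arithmetic behind that figure goes something like this (the staff count and hours are my rough assumptions, not official statistics):

```python
# Rough check of the ~70 person-years figure; all inputs are guesses.
academic_staff = 125_000         # assumed UK academic staff doing REF prep
hours_each = 1                   # assumed hours per person this year
working_hours_per_year = 1_800   # ~37.5 hours/week x ~48 weeks

person_years = academic_staff * hours_each / working_hours_per_year
print(round(person_years))  # about 69 -- roughly the 70 person-years quoted
```

Change the assumptions and the figure moves, of course; the point is that even one hour each, nationwide, is a startling total.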
As I have noted elsewhere, I can’t imagine that the politicians and civil servants who devised this scheme had any idea that it would be acted on with this amount of diligence. I imagine that they think that, come 2021, we will look at what we have been doing over the last few years, spend an hour or so writing the summaries, and that would be that. The idea that we are practising for this four years in advance wouldn’t even have crossed their minds (despite the fact that, I’m sure, they are equally driven to do vast amounts of similar exercises—mock elections, draft manifestos, etc.).
Why do we do this? Why don’t we just stick to our core business and do good research, then when it comes to the REF just do the summaries etc. and be done with it? Largely, because of the importance of these results; they are fairly granular, last a long time, and are financially and reputationally important, so a minor screwup could result in bad consequences for a long time. Also, perhaps, because of the sense of needing to be doing something—we have absorbed some idea that managed is better than unmanaged. And also, because everyone else is doing it. If somehow we could all agree to hold back on this and be equally shoddy, we would be in the same position; but, we are in a “red queen” position where we all must run just to stay in the same place. Such are the structural inefficiencies of a competition-based system.
Here’s a thought, which came from a conversation with Richard Harvey t’other week. Is it possible for a degree to harm your job prospects? The example that he came up with was a third class degree in some vocational or quasi-vocational subject such as computer science. If you have a third class degree in CS, what does that say to prospective employers? Firstly, that you are not much of a high-flyer in the subject—that is a no-brainer. But, it also labels you as someone who is a specialist—and not a very good one! The holder of a third in history, unless they are applying specifically for a job relating to history, isn’t too much harmed by their degree. Someone sufficiently desperate will take them on to do something generic (this relates to another conversation I had about careers recently—what are universities doing to engage with the third-class employers that will take on our third-class graduates? Perhaps we need to be more proactive in this area, rather than just dismissive, but this requires a degree of tact beyond most people.). But a third-class computing/architecture/pharmacy student is stuck in the bind that they have declared a professional specialism, and so employers will not consider them for a generic role; whilst at the same time evidencing that they are not very good in the specialism that they have identified with. Perhaps we need to do more for these students by emphasising the generic skills that computer science can bring to the workplace—“computing is the new Latin” as a rather tone-deaf saying goes.
What is the habitable surface of the world? Actually, that is the wrong question. The right question is “What is the habitable volume of the world?”. It is easy to think that the ratio of marine habitat to land habitat is about 2:1—that is what we see when we look at the globe. But, this ignores the fact that, to a first approximation, the oceans are habitable in three dimensions, whereas the surface of the earth is only habitable in two. This makes the habitable volume of the seas vastly larger than our surface-biased eyes first intuit.
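A back-of-the-envelope comparison makes the point (the ocean figures are widely-quoted approximations; the “habitable depth” on land is my arbitrary assumption):

```python
# Rough comparison of habitable volumes; all figures approximate.
ocean_area_km2 = 361e6     # ~361 million km^2 of ocean surface
ocean_mean_depth_km = 3.7  # mean ocean depth ~3.7 km
land_area_km2 = 149e6      # ~149 million km^2 of land
land_habitable_km = 0.05   # assume life occupies ~50 m above/below the surface

ocean_volume = ocean_area_km2 * ocean_mean_depth_km   # ~1.3 billion km^3
land_volume = land_area_km2 * land_habitable_km       # ~7.5 million km^3
print(ocean_volume / land_volume)  # oceans win by a couple of orders of magnitude
```

Even granting land life a generous vertical slice, the oceans come out more than a hundred times larger by volume, not the 2:1 that the map suggests.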
It is depressing, yet informative, that the end result of no-doubt endless meetings and careful planning and strategy documents and analyses of employability results in the NSS and all that woffle was the following fragment of conversation from two students on the bus t’other week, discussing the assessments that they had to finish by the end of term:
“…and then there’s [whatever it was], but it’s just that employability shit, so it doesn’t matter.”
(Meta-lesson. You learn a lot by getting the bus up to campus.)