Places with a pretension to being high-end often put a human in the loop, in the belief that this makes for a better service. This is particularly the case in countries where basic labour costs are cheap. The idea, presumably, is that you can ask for exactly what you want, and get it, rather than muddling through understanding the system yourself. But this can sometimes make for a worse service, by putting a social barrier in the loop. For example, I have just gone to a coffee machine at a conference, where there was someone standing by it waiting to operate it. As a result, I got a worse outcome than if I had been able to operate it myself. Firstly, I was too socially embarrassed to ask for what I would have done myself (press the espresso button twice) because that seems like an “odd” thing to do. Secondly, I got some side-eye from the server when I didn’t take the saucer; as a northerner I don’t really believe in them. So, by trying to make this more of a “service” culture, the outcome was worse for me, both socially and in terms of the product that I received.
Archive for the ‘Business’ Category
Bus driver (paraphrased): “Since the new big-businessman owner took over, [my local football club]’s been run like a profitable business.” “Sounds good.” “No, it’s crap. When rich people have taken over other clubs, they’ve done it for a hobby, and put loads of money into paying top players; our man wants to run it like a proper business.”
Contemporary governments typically like competition, and also want to allow companies to act in a free market. Unfortunately, the free market also means that companies are free to purchase other companies, and regularly do so, usually in areas cognate to their current areas of business. This ends up creating uncompetitive situations where there are few buyers and sellers in a single area of business. To combat this, an interventionist scheme is usually put in place, whereby mergers and takeovers have to be approved by some governmental body. One of the occasions when that body will typically exercise that power is when the merger would leave too few firms to compete effectively.
This is clumsy: it creates single, complex decision points and is prone to political intervention and bias. Perhaps instead, we could have a system that delegates this choice to the companies. For example, let’s imagine a graded scale of costs to register annually as a limited company. If you are registering in a business area where there are lots of players competing, then the cost is minimal—say, close to the cost of administering the registration. As the number of viable players gets smaller, the cost artificially ramps up very rapidly; if you are looking to merge two out of the last three remaining supermarket chains, then the annual registration cost is millions.
If, like me, you believe that hypothecation of taxes isn’t automatically to be avoided, you might even dedicate the sums earned from this to a fund to support startup/disruptor businesses in business areas with little competition.
The details are tricky. How do you set the cost, and the ramping? How do you define “the same business area”? How do you prevent formally distinct entities actually being controlled by the same entity in practice? But, these might not be insurmountable.
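As a toy sketch of one answer to the cost-and-ramping question, here is a hypothetical fee schedule. All of the numbers, and the choice of a geometric ramp, are invented for illustration; they are not a proposal for actual figures:

```python
# Toy sketch (all numbers invented) of an annual registration fee that ramps
# as the number of independent competitors in a business area falls.

def registration_fee(competitors: int,
                     base_fee: float = 100.0,
                     threshold: int = 10,
                     ramp: float = 4.0) -> float:
    """Fee is roughly the admin cost while the market is crowded,
    then grows geometrically as the field thins out."""
    if competitors >= threshold:
        return base_fee
    # each lost competitor below the threshold multiplies the fee by `ramp`
    return base_fee * ramp ** (threshold - competitors)

# A crowded market pays only the admin cost...
print(registration_fee(25))   # 100.0
# ...but merging down to three remaining players gets very expensive.
print(registration_fee(3))    # 1638400.0
```

The shape matters more than the constants: registration stays near the admin cost in a crowded market, and explodes as the field thins, which is what would make merging two of the last three players prohibitively expensive.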
An odd contradiction on the economic right of politics:
- There is objection to ideas such as basic income, unemployment benefits, etc. on the grounds that once people have basic needs catered for, their motivation to carry out additional economic activity for the marginal benefits it provides is minimal. A person who has basic housing costs paid for and a few hundred quid per month living expenses is assumed to be unmotivated to work further.
- There is objection to ideas of increasing tax take at the higher end, on the grounds that it will reduce motivation to work. Even though someone might be earning £100k or more, the idea is that they will be significantly demotivated if they have to pay another few hundred quid per year in taxes.
This seems contradictory. Either people are willing to work harder for more money, or there is a level where the marginal monetary benefit will not produce additional motivation. If anything, you might expect it to be the other way round—the marginal benefit of a small amount of additional income gives a larger lifestyle change for the person in desperate economic circumstances than for the person on a large income. I suspect that at the heart of the contradiction is a belief that there are two sorts of people—the lazy, who wouldn’t care, and the motivated, who will always be willing to do more for a larger benefit. I think motivation is more complex than that.
Firms selling things have a dilemma. Price something too low and, whilst it will sell well, it won’t make enough money to be worth doing (leading to the old joke: “We’re selling each item at a loss; but, don’t worry, we’ll make it up in volume.”). Price something too high, and you won’t sell enough widgets to make enough money. The traditional view is that this is a tradeoff; find a mid-range price where you sell enough widgets at a high enough price. If you can’t do this, then the business isn’t viable.
This is finessed by the notion of adaptive pricing: selling the same widget to different people at different prices, which makes more businesses financially viable. Firms adjust prices based on some information that they can observe, or on some structuring of how/when/where/to whom the products are sold:
- Selling to different demographics based on broad ability to pay. Discounts for students or retired people, who are likely to have a lower income. Changing prices at different times of the day, based on the demographic that is around (e.g. a price premium for buying a coffee at the station at peak commuter time; or, more simply, the idea of peak time tickets).
- Rewarding time/organisation: tickets come on sale at a particular date/time, but there are only a finite number at that price. People who are time rich/cash poor can spend time to be organised to buy at the cheaper price, whereas people who have more money don’t have to spend the time, they just buy at the higher price later.
- Selling at different prices in different locations. This has a dark side too; some firms have exploited the lack of transport options of poor people living in cut-off areas by selling at a higher price.
- Auctions, where items are sold for a bespoke price based on demand.
- Secondary markets, where a firm sells widgets cheaply and efficiently, but a secondary retailer (such as a ticket tout) buys up some of them and sells them on to the final purchaser at an inflated price.
- Hiding prices. Rather than a price being given up-front, you have to go through some intermediary system that judges your ability to pay, or your need for the product, and adjusts prices accordingly. The watch shop that judges whether you are a middle-income watch enthusiast or a rich person who wants to brag about the cost of their watch; the retailer of tools who judges whether you will be using the tool day-in-day out or are an occasional user who would buy it for a sufficiently low price.
- Similarly, making use of your purchasing history to adjust prices on an online system.
- Micropayments. Rather than paying up-front to purchase something, you pay by the number of minutes/hours that you use it, or what you use it for.
- Time-adjusted pricing. You show an interest, and if you want it right now you pay the price; the price goes down with time, but if you wait too long you run the risk (perhaps entirely artificially generated) that stock will run out. The TV-based retailer PriceDrop is canonical here.
- Rewards. You all pay the same price up front, but more price-sensitive customers are given some of that money back as vouchers so that their average spend per widget is lower in the long run.
- Direct demand-adjusted pricing. Uber’s entirely-up-front “surge pricing”, for example. Again, speaks to the time/money tradeoff; someone who needs a lower price might be prepared to wait for half-an-hour to see if surge pricing goes away.
- Artificial hobbling. You all buy the same product, making manufacturing easy, but some features are turned off on the lower product range. Tesla cars work like this; you can buy a cheaper version, which has a lower distance range; but, the hardware is the same as the premium product, the distance is just limited by a software switch in the cheaper version.
- Things that seem more different. The same object sold with changes to the branding. Surplus stock sold to a poundshop on the condition that they repackage it. Cheap train tickets sold through a different brand, but when you show up you are on the same train in the same seats as people who paid a lot more.
- Superficial benefits. Exploiting that some people will pay for “the best” regardless. First-class train travel is probably a decent example here; a slightly more comfortable seat and free tea/coffee, but sometimes at a price premium which seems irrationally larger.
I would make an educated guess that cracking adaptive pricing will be one of the big innovations in business in this century. It is increasingly used, but there is still a huge amount of finesse to do here. Already, supermarkets are experimenting with systems such as electronic price displays, allowing dynamic adjusting of price during the day, either by broad demographic shifts, or by minute-by-minute demand. And there are already critiques: the transport company that (algorithmically) increases its prices following a natural disaster, the company that (algorithmically) sells the music of a recently-dead star at a premium.
Interestingly, there is a weird potential consequence to all of this. Will this mean that differences in income become less pronounced? If I had an ideal adaptive pricing system, where, say, I charged people not a price, but a proportion of their income, for my product, then that would have the outcome that people would de facto have the same income. Clearly, the systems above are not at that level yet; but, each adaptive pricing innovation brings us closer to that.
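That limiting case can be made concrete with a toy calculation. The incomes, the widget, and the fixed-fraction pricing rule below are all invented for illustration:

```python
# Toy illustration (numbers invented): if every widget costs each buyer a fixed
# fraction of their income, then every buyer can afford the same number of
# widgets, so nominal income differences stop mattering for purchasing power.

def widgets_affordable(income: float, fraction_per_widget: float = 0.001) -> int:
    """Each widget is priced at `fraction_per_widget` of this buyer's income."""
    price = fraction_per_widget * income
    return round(income / price)

for income in (15_000, 40_000, 250_000):
    # every income level affords the same 1000 widgets
    print(income, widgets_affordable(income))
```

The arithmetic is deliberately circular: income divided by (fraction × income) is always 1/fraction, regardless of income, which is exactly the “de facto equal income” outcome described above.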
Every time we have an open day at Kent, the University of Essex (hello to my dear friends there!) pays someone to drive a bloody great van with a mahoosive “University of Essex” poster on it and park it all day opposite the main entrance to our campus.
I can’t imagine that, 20–30 years ago, when we first started to talk about having some kind of competitive ethos between universities, we would ever have imagined ending up in a situation like this. And it seems to be a systematic inefficiency baked into the system. Unlike the often-talked-about “inefficiencies” of public sector management, which seem to be just a matter of motivation and management skill, there are real, ongoing, impossible-to-avoid inefficiencies at the core of a competition-based system.
This is a few hundred pounds that could be going into students’ education or research or, goddamn it, on nicer port for the vice-chancellor’s summer party. Is there any way in which we can get out of this kind of arms race that is consuming vast amounts of money, time, and attention?
Here’s a thought, which came from a conversation with Richard Harvey t’other week. Is it possible for a degree to harm your job prospects? The example that he came up with was a third class degree in some vocational or quasi-vocational subject such as computer science. If you have a third class degree in CS, what does that say to prospective employers? Firstly, that you are not much of a high-flyer in the subject—that is a no-brainer. But, it also labels you as someone who is a specialist—and not a very good one! The holder of a third in history, unless they are applying specifically for a job relating to history, isn’t too much harmed by their degree. Someone sufficiently desperate will take them on to do something generic (this relates to another conversation I had about careers recently—what are universities doing to engage with the third-class employers that will take on our third-class graduates? Perhaps we need to be more proactive in this area, rather than just dismissive, but this requires a degree of tact beyond most people.). But a third-class computing/architecture/pharmacy student is stuck in the bind that they have declared a professional specialism, and so employers will not consider them for a generic role; whilst at the same time evidencing that they are not very good in the specialism that they have identified with. Perhaps we need to do more for these students by emphasising the generic skills that computer science can bring to the workplace—”computing is the new Latin” as a rather tone-deaf saying goes.
Here’s something interesting. It is common for people in entrepreneurship and startup culture to fetishise failure—”you can’t be a proper entrepreneur until you’ve risked enough to have had a couple of failed businesses”. There’s some justification for this—new business ventures need to try new things, and it is difficult to predict in advance whether they will work. Nonetheless, it is not an unproblematic stance—I have written elsewhere about how this failure culture makes problematic assumptions about the financial and life-circumstances ability to fail without disastrous consequences.
But, the interesting point is this. No-one ever talks like this about jobs, despite the reality that a lot of people are going to try out a number of careers before finding the ideal one, or simply switch from career to career as the work landscape changes around them during their lifetime. In years of talking to students about their careers, I’ve never come across students adopting this “failure culture” about employeeship. Why is it almost compulsory for a wannabe entrepreneur to say that, try as they might, they’ll probably fail with their first couple of business ventures; yet, it is deep defeatism to say “I’m going into this career, but I’ll probably fail but it’ll be a learning experience which’ll make me better in my next career.”?
It only struck me a few months ago that there is a decent minority of the political/business establishment who seem to believe that a large proportion of the population can live at a basic level without the need for any income, i.e. from some nebulous kind of family wealth. That’s not “to live well”, but the idea that the basics of housing, food, transport and basic personal care are just somehow “taken care of” in some vague way. You see this on Dragon’s Den, where entrepreneurs are urged to quit their job and show “real commitment” to their business idea. I’d always been rather bemused by statements such as this, but in light of the idea that the basics are “covered”, it makes sense—they are asking people to give up, as they see it, luxuries, not just the basics of living.
Sometimes it is important to present yourself as more specialised than you actually are. This can be true for individuals and for businesses. Take, for example, the following apparently successful businesses:
- www.mysurgerywebsite.co.uk—this is a business that builds websites for doctors’ surgeries.
- www.parentpay.com—this is a business that describes itself as the “market leader in online payment for schools”.
Woaah there! What’s happening here? Surely any decent web design company can provide a website for a doctor’s surgery? The specific company might provide a tiny little bit more knowledge, but surely the knowledge required to write a decent website is around 99 percent of the knowledge required to write a doctor’s surgery website. Surely, handling payments from parents for school activities is just the same as, well, umm, handling payments, and there are plenty of companies that do that perfectly well.
This, of course, misses the point. The potential customers don’t know that. To them, they are likely to trust the over-specialised presentation rather than the generic one. Indeed, the generic one might sound a little bit shady, evasive or amateurish: “What kind of web sites do you make?”, “Well, all kinds really.”, “Yes, but what are you really good at.”, “Well, it doesn’t really matter, websites are all basically the same once you get into the code.”. Contrast that with “we make websites for doctors.” Simples, innit.
So that’s my business startup advice. Find an area that uses your skills, find some specialised application of those skills, then market the hell out of your skills in that specific area. You will know that your skills are transferrable—but, your potential customers won’t, and they will trust you more as a result.
I’ve noticed the same with trying to build academic collaborations. Saying “we do optimisation and data science and visualisation and all that stuff” doesn’t really cut it. I’ve had much more success starting with a specific observation—we can provide a way of grouping your data into similar clusters, for example—than trying to describe the full range of what contemporary data science techniques can do.
Similarly with courses. Universities have done well out of providing “MBA in Marketing for XX” or whatever, when the vast majority of the course might be generic marketing skills. Again, the point here is more one of trust than one of content.
When organisations become confused about their mission, they drop their full title and insist on people referring to them by just their initials. This way, the public is confused about their mission, too, and so all is well and good because everyone is on the same page.
So, let me get this right. The company that sent this letter used a private mail provider, which has been encouraged because it is assumed that such providers would be able to undercut the publicly run mail service due to “private sector efficiencies”. Then, having taken its admin costs and profit from that service, the provider was able to subcontract the work out to the publicly-run Royal Mail, who were able to do it at break-even or better for whatever money was left. Who’s efficient now?
This has somewhat of the same flavour as James Meek’s piece for the London Review of Books, in which he points out that one of the completely unexpected consequences of electricity privatisation was that the privatised industries would, to a large extent, be bought up by nationalised companies elsewhere in Europe: “Why was it that we had to lose our nationalised industries in order to hand them over to nationalised industries from other countries?”
Why is it almost an axiom of debate on public service provision that the gain in efficiency through outsourcing will outweigh the inevitable additional cost of the service provider’s profit, whilst it is also assumed that, however hard they try, public organisations will not be able to learn these efficiency gains?
Isn’t there a long-term advantage for public organisations in learning these more efficient means of practice, as that would, after the initial cost of doing so, pay off indefinitely, whilst paying an outside organisation to realise these gains is an ongoing cost?
There is a flavour of “give someone a fish, and they’ll have a meal; teach them to fish, and they’ll have meals for the rest of their life” about this.
Will the rise in internet-based distribution of media content mean that increasing numbers of presents end up being random knick-knacks rather than books, CDs, DVDs? It would be a pity; but there isn’t a particularly elegant way to present a gift of an e-book or MP3 album. Perhaps there is a business opportunity here?
More times than I’d like to think, when I talk to someone in a call centre, or fill out an online form, nothing happens. For example, a few weeks ago I had a perfectly clear and polite conversation with a call centre person from O2 about increasing my phone data allowance. They explained very clearly what the options were, and what the costs were, I chose one, they confirmed when it would start, and then…nothing happened!
This isn’t snark. I’m just interested to know what happens within the business logic of the organisation that leads from this seemingly clear conversation to no actual action. Do these requests get lost immediately after the request has been made, e.g. the person makes some notes and then gets another call and loses the notes, or the context of the notes, when they get a chance to return to them? Surely large organisations can’t be relying on such a half-arsed system?
Do requests get added to some kind of queue or ticket based system to be actioned elsewhere in the organisation, and then somehow time out after a while, or get put in a permanent holding position whilst more urgent queries are dealt with? Or, are the requests that I am making too unreasonable or complex, so that the company policy is to make sympathetic noises to the customer and then just ignore them once they have got them off the phone? I can imagine that this might occasionally be the case, but surely not for a request like the one above, which must be one of the simplest pieces of business logic for an organisation to execute.
Or, are there people in the organisation who are just being lazy and ticking off a lot of their work without actually doing it, like my schoolfriend who, for months on end, got all of the advantages of having a paper round without any of the actual work by systematically collecting a bag of papers every morning, then setting fire to them in a ditch in the local park?
This strikes me as something that would be almost impossible to research, and indeed very difficult even for companies to discover the cause of internally. Yet, this must be a massive issue; I would reckon that around 20% of interactions of this kind have resulted in the agreed action not happening. What can organisations do about this?
When we can afford to innovate, we don’t. We are happy. We are making lots of money, our staff are happy and busy, we have too much damn stuff to do to worry about the Next Big Thing. Besides, the current Big Thing will be around for ever, surely?
When we most need to innovate, we can’t. We are in straitened times, struggling to do what we need to do with the current staffing levels, trying desperately to hang on to our current activities and make them viable. We just don’t have the time or resources to risk on something that might not work out.
I’ve seen this happen in universities. Departments that are doing well see little (strategic) need to consider delivering new degree programmes: the current programmes are recruiting well, staff are enjoying teaching them, the students are enthusiastic and there doesn’t seem to be any shift in the supply of new applicants year on year. We could easily put together something new, risky and exciting; but, who cares? When student numbers dry up, we flail around for new courses to deliver, and end up putting on untried and cobbled-together courses without the staff effort to do it properly.
I would imagine that this same cycle holds in many kinds of organisations.
What can we do, as a management strategy, to handle this cycle better?
One piece of advice that is commonly given to candidates in job interviews and similar situations is to evidence what they say. If asked “Why should I believe that you are capable of doing X?” the suggested response is to find an example of where they did X in the past, or something that is similar to X in some way, and build an argument around that. This seems very sensible to me; it prevents waffle and generic assertive statements.
Watching The Apprentice recently, though, makes me wonder about this. Often this kind of evidence-based response is dismissed by Lord Sugar using phrases along the lines of “don’t tell us your bloody life story”. It seems like he is expecting a generic, assertive statement and that providing evidence from what the candidate has done before is looking backwards rather than looking forwards.
What should a candidate do if they encounter such an attitude in a real interview? I can’t decide whether it is best just to retreat to the generic statements that seem to be being demanded, or to try to turn it around by being very explicit about how the evidence relates to the question being asked.
Online organisations usually have a choice between two ways of making their information available. One is what we will call info-stream, where the information is made available as a stream of machine-readable information that people can view and process in different forms. Twitter is a good example of this: whilst it does provide a fallback option of viewing through the Twitter website, many users use a different way of interfacing with it, such as an app on a computer or phone, or an alternative web interface. By contrast, other organisations choose to provide the information through a specific graphical interface. An example here is Facebook, who clearly expect all users to interface with the content through the Facebook website (or Facebook-provided mobile app). People wanting a different view of the content can get alternative interfaces (e.g. Social Fixer or Facebook Purity), but these appear to work by a screenscraping-style approach that Facebook’s information provision is not designed to support.
What is the business argument (in the broadest sense) for making one or the other of these choices? Clearly one argument for the interface approach is concerned with advertising. One problem with providing an info-stream is that this makes it very easy to filter out advertising. Organisations that have adopted an info-stream approach tend to have a very tight integration between their advertising and their content. For example, advertising in Twitter is in the form of promoted Tweets or Trends, which are Tweets or Trends in their own right; by contrast, the content delivered by Facebook has advertising, but not as part of the main News Feed content.
A more complex example is provided by the choices made by travel, insurance, banking and energy companies. In the early days of the web, much was made of the idea that online commerce would be a purer form of commerce because aggregators would be able to draw a direct comparison between different providers. Clearly, this vision has been realised—up to a point. A number of firms, for example the insurance firms Direct Line and Aviva and some of the discount airlines, have largely avoided being on comparison sites. What is the business case for this? Possibly, to avoid the commission fees charged by the sites; possibly, to create a channel of direct negotiation with the customer, akin to the old print-advertising strategy of not listing prices but saying “call us for our best price”. Again, this is an info-stream versus interface decision: the “not on comparison sites” firms are pursuing an interface strategy, where they want to control the interface between the information and the customer in their own way; by contrast, the firms that supply information to comparison sites are providing it in an info-stream fashion.
This is clearly not something that was anticipated in the early discussions about e-commerce. It was assumed that organisations would be falling over themselves to provide information for aggregation and comparison. Clearly, though, it is possible for firms to adopt a strategy of opting out of such comparisons. This does not bode well for the development of the semantic web, which (rather naively) assumes that any organisation online will want to readily provide information in a computer-readable fashion. Instead, the choice for a firm is more complex: to provide an info-stream and work on an objective (as far as the measures used) comparison as a strategy, or to provide an interface and rely on more traditional advertising and marketing strategies that leverage the lack of ability to compare directly.
Are there other organisational/business arguments about the info-stream/interface choice?
Over the last few years lots of money has been spent building automated ticket barriers at stations. However, I wonder if this is all going to be rather wasted, as train companies are gradually moving towards e-ticketing, and the barriers are designed to take a very specific form factor of printed ticket. At the moment, e-tickets have to be checked by a human operator, which kind of defeats the point. I wonder if this is why the new barriers at King’s Cross have some kind of barcode scanner thingy on them?
The “Valley of Death” is the rather overwrought term used in technology transfer for the difficulty of getting technology developed in a university research environment into commercial use. This is a big concern for governments—for example, the UK government Science and Technology Committee recently held a consultation on this very issue.
Thinking about how the university world operates compared to other areas, I wonder if one problem is the ready availability of people at the university end to do medium scale pieces of work. The university research workforce breaks down into largely two categories of people: the lecturing/professorial staff, who have lots of expertise but also lots of calls on their time, and the PhD students and postdocs, who have specific expertise and whose time is largely taken up by the project that they are working on. It is easy enough for a commercial organisation to get a little piece of consultancy, e.g. running a few ideas past a professor for a day or two; similarly, a firm that is happy to make a larger commitment, e.g. to sponsor a postdoc, PhD student or KTP associate for two or three years fits into the system readily.
The difficulty is the middle ground. What about a project that requires specific expertise in a particular area, but which also requires a substantial commitment of time, say three to six months? In many other industries—say, product design—a designer would be available from the pool of designers employed permanently by a consultancy to work on projects. One initially attractive proposition, therefore, would be for a university to retain a number of such “consultants” to work on projects as needed. However, this fails; the expertise required in a research-driven project is rather specific, and it would be impossible for such a consultant to have the breadth of knowledge required to work immediately on projects.
I wonder if a very low-ceremony secondment scheme for postdocs and PhD students would work here. I am sure it is possible for, e.g., a research council project to be extended by three months to allow a postdoc to work for three months as such a consultant; but, I would be put off investigating this, as I would be concerned that the amount of admin overhead in extending the project etc. would be large. What we need is a simple way to do this; a one-page web form where a PI can request a small number of months extension to a project so that a postdoc or student currently employed in a cognate area could take some time off their project and be paid by the firm to do a medium-term project of a few months. This would provide both the flexibility and the expertise, and would mean that universities could respond more rapidly to such requests. If sufficiently well remunerated by firms, I can see this being appealing to the secondees, with the opportunity to work on something relevant and probably earn a little more money for a while than they normally would do.