"Real Artists Ship"

Colin Johnson’s blog


Archive for the ‘Science’ Category

Old Joke, New Joke (1)

Wednesday, August 23rd, 2017

Old joke: A scientist has a good-luck horseshoe hanging over the door to their lab. A visitor to the lab says to them “Surely you don’t believe in superstitious nonsense like that?”; the scientist replies “Of course not; but, I am told it works even if you don’t believe in it.”

New joke: An atheist goes to church and joins in enthusiastically with the hymns and prayers. Their friend says to them “I thought that you didn’t believe in all of that religious stuff?”; the atheist replies “I don’t; but, I am told it doesn’t work even if you believe in it.”

The Origins of (Dis)order

Friday, August 11th, 2017

I think that where I get into dispute with the social scientists and literary theorists about whether the world is “ordered” is basically down to the counterfactuals we are each thinking of. To them, the fact that sometimes some people can’t quite manage to agree that some words mean the same thing means that the world is fundamentally disordered and truth uncertain and subjective. Whereas I’m constantly gobsmacked that the world isn’t just some isotropic soup of particles and energy, and regard it as amazing that we can even write down some equations that describe at least some aspects of the world to a reasonable level of accuracy, and that by some happy happenstance the most compact description of the world isn’t just a rote list of particles and their positions and momenta.

100 Words

Saturday, July 29th, 2017

Recently, I spent an hour sitting in a room with around 30 of my colleagues, where we spent the time writing a 100 word description of one of our research papers, sharing it with colleagues, and working together to improve the description. Next month, we will have another session like this, another 30 person hours of effort spent. Another university with which I am familiar employed a creative writing tutor to come in for the afternoon and facilitate a similar exercise.

Why were we doing this? Because the Research Excellence Framework (REF)—the national assessment of university research quality—requires the submission of research papers to an evaluation panel, each accompanied by a 100 word summary. Even though the next REF isn’t likely to happen until 2021 at the earliest, we are committing a reasonable amount of effort and attention to this; not just to writing our 100 word summaries, but to various mock REF exercises, external evaluations, consulting with evaluators from previous rounds, reading exemplars from previously successful universities, etc. If every university is asking its staff to commit a few hours this year to this kind of activity, this mounts up to about 70 person-years of academic staff effort across the country this year alone, not counting the REF officers etc. that the universities employ.
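
A back-of-envelope version of that estimate, with assumed round numbers; the staff count, time per person and length of a working year below are guesses rather than official figures:

```python
# Rough sanity check of the "70 person-years" figure; every number is an assumption.
research_active_staff = 50_000     # order of magnitude of staff submitted to a REF
hours_per_person = 2               # "a few hours" per person this year
working_hours_per_year = 1_600     # roughly one full-time working year

person_years = research_active_staff * hours_per_person / working_hours_per_year
print(f"~{person_years:.0f} person-years")   # ~62, the same ballpark as 70
```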

As I have noted elsewhere, I can’t imagine that the politicians and civil servants who devised this scheme had any idea that it would be acted on with this amount of diligence. I imagine that they think that, come 2021, we will look at what we have been doing over the last few years, spend an hour or so writing the summaries, and that will be that. The idea that we are practising for this four years in advance wouldn’t even have crossed their minds (despite the fact that, I’m sure, they are equally driven to do vast amounts of similar exercises—mock elections, draft manifestos, etc.).

Why do we do this? Why don’t we just stick to our core business and do good research, and then, when it comes to the REF, just write the summaries and be done with it? Largely, because of the importance of these results: they are fairly granular, they last a long time, and they matter financially and reputationally, so a minor screwup could have bad consequences for years. Also, perhaps, because of the sense of needing to be doing something—we have absorbed some idea that managed is better than unmanaged. And also because everyone else is doing it. If somehow we could all agree to hold back on this and be equally shoddy, we would end up in the same relative position; but we are in a “Red Queen” situation where we all must keep running just to stay in the same place. Such are the structural inefficiencies of a competition-based system.

A Wild Idea for Treating Infectious Diseases

Monday, July 13th, 2015

Engineer a variant on a disease which spreads quicker within the organism, so that it drives out the standard variant in its niche. Engineer this variant with a genetic “self-destruct” switch which can be triggered by a standard drug. Then superinfect the patient with the new variant, wait until the new variant has taken over, then apply the drug to remove the infection from the system.
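
As a toy illustration of the intended dynamics, here is a crude two-strain competition model; the growth rates, clearance rate and carrying capacity are arbitrary placeholders rather than anything biological:

```python
# Toy sketch: an engineered strain with a higher growth rate displaces the
# wild-type strain from a shared niche, and is then removed by triggering its
# self-destruct switch. All parameters are made up for illustration.

def step(wild, engineered, r_wild=1.2, r_eng=2.0, clearance=0.1, capacity=1.0):
    """One generation of logistic competition for a shared niche."""
    crowding = max(0.0, 1.0 - (wild + engineered) / capacity)
    wild += r_wild * wild * crowding - clearance * wild
    engineered += r_eng * engineered * crowding - clearance * engineered
    return wild, engineered

wild, engineered = 0.5, 0.01      # existing infection, plus a small superinfection
for generation in range(300):
    wild, engineered = step(wild, engineered)

print(f"before trigger drug: wild={wild:.4f}, engineered={engineered:.4f}")
engineered = 0.0                  # the trigger drug flips the genetic kill switch
print(f"after trigger drug:  wild={wild:.4f}, engineered={engineered:.4f}")
```

In this caricature the engineered strain wins simply because it grows faster in the shared niche; the scheme then stands or falls on the kill switch being reliably triggerable.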

Language (1)

Monday, October 27th, 2014

When we are taught creative writing at school, we learn that it is important to use a wide variety of terms to refer to the same thing. To refer to something over and over again using the same word is seen as “boring” and something to be avoided.

It is easy to think that this is a good rule for writing in general. However, in areas where precision is required—technical and scientific writing, policy documents, regulations—it is the wrong thing to be doing. Instead, we need to be very precise about what we are saying, and using different terminology for the sake of making the writing more “interesting” is likely to damn the future reader of the document to hours of careful analysis of whether you meant two different-but-overlapping words to refer to the same thing or not.

A Plan for Antibiotic Resistance

Wednesday, July 2nd, 2014

Antibiotic resistance is in the news again today. The focus for a solution is primarily an economic one. The economic incentive for pharma companies to discover new antibiotics is minimal, because the market for new antibiotics should, rationally, be small; unlike new drugs in most areas of medicine, which will rapidly displace older drugs through a mixture of greater efficacy, fewer side-effects and canny marketing, there is a motivation for new antibiotics to be held back as a “drug of last resort”, and only used when existing antibiotics fail. This offers the most sensible way to avoid bacteria evolving to become resistant to the new antibiotics. As a result, the focus is on the creation of artificial market incentives to support pharma companies in the development of those antibiotics, despite the small market that they will have.

Here is an alternative suggestion. Instead of routinely using all currently available antibiotics, have a schedule where, every few years, a subset of antibiotics is authorised for use. Then, after a few years, that subset is withdrawn from use and another subset authorised. The idea is that during the “fallow years” when a particular antibiotic is not being used, the bacteria have no evolutionary pressure to maintain resistance to it, and pay an energetic cost for keeping it, and so over time they would lose that resistance. The rotation period would need to be worked out from the timescales over which antibiotic resistance is gained and lost.
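
A toy sketch of what such a rotation might look like; the gain and decay rates are arbitrary placeholders, and whether the scheme works hinges entirely on their real-world values and on the rotation period:

```python
# Toy model: resistance to a drug rises while it is authorised for use and
# decays, through the fitness cost of carrying it, during its fallow years.
antibiotics = ["A", "B", "C"]
resistance = {drug: 0.01 for drug in antibiotics}   # fraction of resistant bacteria

GAIN = 0.30    # assumed yearly rise in resistance while a drug is in use
DECAY = 0.15   # assumed yearly decay while a drug is rested

def simulate(years, rotation_period):
    for year in range(years):
        in_use = antibiotics[(year // rotation_period) % len(antibiotics)]
        for drug in antibiotics:
            if drug == in_use:
                resistance[drug] += GAIN * (1.0 - resistance[drug])
            else:
                resistance[drug] *= (1.0 - DECAY)
        print(year, in_use, {d: round(r, 2) for d, r in resistance.items()})

simulate(years=18, rotation_period=3)
```

Whether the troughs drop low enough for a rested drug to become useful again is exactly the empirical timescale question.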

Critical to whether this would work is the question of whether antibiotic resistance would be lost by the bacteria on a short enough timescale. There is an informal discussion of this here, which suggests that it can be retained for a long time. But it would be interesting to know whether this has been verified experimentally.

Journals (1)

Friday, May 31st, 2013

What could possibly go wrong?

Publish your paper in a week without peer review at free of cost

Multiscale Modelling (1)

Monday, December 31st, 2012

Multiscale modelling is a really interesting scientific challenge that is important in a number of areas. Basically, the issue is how to create models of systems in which activities and interactions at a large number of different (temporal and/or spatial) scales are happening at the same time. Due to computational costs and complexity constraints we cannot just model everything at the smallest scale; yet, sometimes, small details matter.

I wonder if there is a role for some kind of machine learning here? This is a very vague thought, but I wonder if somehow we can use learning to abstract simple models from more detailed models, and use those simple models as proxies for the more detailed model, with the option to drop back into the detailed model when and only when it is specifically needed?
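
A minimal sketch of one way this vague thought could be made concrete, using a Gaussian-process surrogate (via scikit-learn) fitted to a handful of runs of a made-up stand-in for a fine-scale model; the surrogate’s own uncertainty estimate decides when to drop back to the detailed model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fine_scale_model(x):
    """Stand-in for an expensive detailed simulation."""
    return np.sin(3 * x) + 0.1 * x ** 2

# Fit the cheap surrogate to a small number of expensive runs over [0, 5].
X_train = np.linspace(0.0, 5.0, 15).reshape(-1, 1)
y_train = fine_scale_model(X_train).ravel()
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
surrogate.fit(X_train, y_train)

def evaluate(x, uncertainty_tolerance=0.1):
    """Use the surrogate when it is confident; otherwise call the detailed model."""
    mean, std = surrogate.predict(np.array([[x]]), return_std=True)
    if std[0] < uncertainty_tolerance:
        return float(mean[0]), "surrogate"
    return float(fine_scale_model(x)), "fine-scale"

for x in (1.0, 2.5, 7.0):     # 7.0 lies well outside the training range
    value, source = evaluate(x)
    print(f"x={x}: {value:.3f} (from {source})")
```

This is just standard surrogate modelling; the genuinely multiscale question is what plays the role of the uncertainty threshold when the “fine-scale model” is itself a simulation at a different scale.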

Out of the Valley of Death

Monday, February 13th, 2012

The “Valley of Death” is the rather overwrought term used in technology transfer for the difficulty of getting technology developed in a university research environment into commercial use. This is a big concern for governments—for example, the UK government Science and Technology Committee recently held a consultation on this very issue.

Thinking about how the university world operates compared to other areas, I wonder if one problem is the ready availability of people at the university end to do medium-scale pieces of work. The university research workforce breaks down largely into two categories of people: the lecturing/professorial staff, who have lots of expertise but also lots of calls on their time, and the PhD students and postdocs, who have specific expertise and whose time is largely taken up by the project that they are working on. It is easy enough for a commercial organisation to get a little piece of consultancy, e.g. running a few ideas past a professor for a day or two; similarly, a firm that is happy to make a larger commitment, e.g. to sponsor a postdoc, PhD student or KTP associate for two or three years, fits into the system readily.

The difficulty is the middle ground. What about a project that requires specific expertise in a particular area, but which also requires a substantial commitment of time, say three to six months? In many other industries—say, product design—a designer would be available from the pool of designers employed permanently by a consultancy to work on projects. One initially attractive proposition, therefore, would be for a university to retain a number of such “consultants” to work on projects as needed. However, this fails: the expertise required in a research-driven project is rather specific, and it would be impossible for such a consultant to have the breadth of knowledge required to work immediately on such projects.

I wonder if a very low-ceremony secondment scheme for postdocs and PhD students would work here. I am sure it is possible for, e.g., a research council project to be extended by three months to allow a postdoc to work for three months as such a consultant; but I would be put off investigating this, as I would be concerned that the amount of admin overhead in extending the project etc. would be large. What we need is a simple way to do this: a one-page web form where a PI can request a few months’ extension to a project, so that a postdoc or student currently employed in a cognate area could take some time off their project and be paid by the firm to do a medium-term project of a few months. This would provide both the flexibility and the expertise, and would mean that universities could respond more rapidly to such requests. If sufficiently well remunerated by firms, I can see this being appealing to the secondees, with the opportunity to work on something relevant and probably earn a little more money for a while than they normally would do.

Linda McCartney vs. the Laws of Physics

Thursday, September 29th, 2011

Linda McCartney sausages for tea tonight (not had them for a long time; still don’t really rate them relative to Quorn). Here are the cooking instructions on the back:

[Photo of the cooking instructions, which quote a longer time for grilling six sausages than for grilling two, plus an oven option.]

Here’s what puzzles me. I don’t understand why grilling 6 sausages takes longer than grilling 2 sausages. Surely the grilling process is about radiative heat transfer, and any heat that goes where the “missing” sausages are in the 2-sausage case is just wasted, heating up the grill pan or whatever. I can understand the situation more for the oven, because the (cold) sausages are in a contained thermodynamic system with the heating element, and so the total amount of energy required is larger as there is more stuff to be heated up for the same output from the heating element—but, the grilling situation I don’t get at all.
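
A back-of-envelope way of putting the puzzle, under idealised assumptions (uniform radiative flux under the grill, a well-insulated oven with a fixed-power element):

```latex
% n sausages, each of mass m and specific heat c, heated through a rise \Delta T.
% Under the grill each sausage intercepts a fixed flux density q over area A;
% the oven element delivers a fixed total power P.
\[
  t_{\mathrm{grill}} \approx \frac{m\,c\,\Delta T}{q\,A}
  \quad \text{(independent of $n$)},
  \qquad
  t_{\mathrm{oven}} \approx \frac{n\,m\,c\,\Delta T}{P}
  \quad \text{(grows with $n$)}.
\]
```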

The Two Cultures (1)

Wednesday, October 13th, 2010

In a recent Guardian article, Bonnie Greer suggests that Kurt Gödel “had shown the world years before that nothing can be 100% proven” (“Me and Sister Carmela”, 20th September). In fact, what he showed was the subtly different notion that not 100% of true statements (of a particular, broad class of mathematical statements) can be proven.
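
For reference, the result in question (the first incompleteness theorem) can be stated, roughly, as:

```latex
\begin{quote}
For any consistent, effectively axiomatised formal theory $T$ capable of
expressing elementary arithmetic, there is a sentence $G_T$ in the language
of arithmetic such that $T$ proves neither $G_T$ nor $\neg G_T$; under the
standard interpretation of arithmetic, $G_T$ is true.
\end{quote}
```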

This is not just a pedantic factual correction. Frequently, mathematicians (and practitioners of other rigorous reasoning systems) are attacked in the media for their arrogance. This is often characterised as an assumption that “everything” can be shown to be true or false with 100% certainty. In reality, only specific types of statement are amenable to mathematical methods; furthermore, even within that domain, not everything will be provable!

In particular, it is ubiquitous to take words that are used in a specific technical sense (“proven”, “statement”) and imply that these narrow technical results magically say something about the day-to-day meanings of those words. It is not the mathematicians who are at fault in such situations, as they are precise about the narrowness of the applicability of their results.

It could be argued that it is the practitioners of the literary arts who are guilty of the arrogant over-reach that mathematicians are frequently blamed for: consider the slapdash use of metaphor to extend the reach of statements, the overinterpretation of technical notions based on mere coincidence of words, and drawn-out discussions that amount to little more than extended puns. This is ultimately destructive both to the understanding of science and literature and to attempts to create a meaningful dialogue between the disciplines.

Department of the Bleeding Obvious

Wednesday, September 8th, 2010

A difficult challenge for people involved in science popularisation is deciding how to choose examples for use in articles, talks, et cetera. One approach is to take examples of day-to-day phenomena and discuss how a scientific approach might shed light on them—the “physics of biscuit dunking” approach. The opposite alternative is to focus on complex phenomena that are incomprehensible to the average reader/listener—science as “mindbogglingly complex”.

Both of these approaches readily attract criticism. The first approach is open to the criticism that science is all about “proving the bleeding obvious”; what gets missed out in reports of this kind is that such examples are meant as just that—easily comprehensible examples intended to illustrate the methods of science, not to be representative of the actual results of scientific research. Nonetheless, the second approach is also unsatisfactory, as it attracts a different kind of criticism: that scientists are only interested in obscure things that they can’t explain and that are of no interest to ordinary people, and that the scientific enterprise is just an exercise in self-indulgent cliquiness.

Ironically, in light of the ongoing discussions about how industry should be involved more in science funding, it is precisely the “department of the bleeding obvious” type studies that often end up having been sponsored by industry. I am thinking of the kinds of studies where some gullible scientist has taken a few hundred quid from Poppleton Pork Products to come up with an equation for the shape of the perfect sausage.

So, what are we, as scientists and science popularisers, to do? How can we come up with examples that are easy enough to communicate quickly and pithily, without resorting to trivia? Or should we be trying to convince people that science is interesting but not reducible to simple examples, and that effort and time are required to understand it?

A Modest Suggestion concerning Scientific Conferences

Wednesday, January 6th, 2010

Typically, a scientific conference works like this (certainly in the areas in which I have worked): authors submit papers with a strict page limit n, struggle to fit their exciting work into n pages, get reviews, and then, if the paper is accepted, have to address a pile of reviewers’ comments about the paper and squeeze all of this into the same number of pages. Of course, reviewers rarely ask for things to be taken out: most comments are “clarify this”, “explain this in more detail”, “give the parameters/pseudocode for this”, “do some more experiments on whatever and include the results”. This is, of course, impossible, and so we end up taking out more of the stuff that made the paper comprehensible in the first place.

Suggestion: instead of having the same page limit for both phases, have a page limit of n+1 or n+2 for the final version; then the authors have a hope of addressing the reviewers’ comments properly. I’ve come across this idea discussed a couple of times, but I can’t remember it being used for real.

And whilst we’re on the subject, why do we care about strict page limits in an age where the ink is mostly bits anyway…but that is probably an argument for another day.

Collective Intelligence of Intelligences

Tuesday, June 9th, 2009

We are increasingly happy with the idea of collective intelligence—the idea that intelligence can emerge from the collective action of a number of individually unintelligent agents. A canonical example is the ant colony, which acts collectively in an intelligent fashion despite the individual components working according to simple rules. However, we are still somewhat wary of the idea of something made up of things that are in themselves intelligent being described as intelligent. It is as if we want to have a single source of intelligence to “blame” for the actions of the system.