Archive for September, 2014
Oh for God’s sake Grauniad, learn the difference between a “physician” and a “physicist”:
Next time I get theoretically ill I’m certainly going to a theoretical physician.
What will eventually replace programming? As computing technology gets more advanced, will there be a point at which (most) tasks that are currently carried out by programming get carried out by some other method? I can think of two inter-related ideas.
The first is that more and more of the tasks that we currently do by programming get done by some kind of machine learning, some kind of abstraction from examples. We are already beginning to see this. Take, for example, the FlashFill feature in Microsoft Excel. This is a system where you highlight a number of columns, and then fill in a further column with examples of the calculation/transformation that you want to see achieved. As you fill in examples, a machine learning algorithm works behind the scenes to learn a macro that matches the examples, and fills in the remaining columns automatically. If there are still errors, you can keep on feeding it more examples until it works. What is the analogue of this in other areas (e.g. database report generation)?
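The core idea behind FlashFill-style systems can be sketched very simply: search a space of candidate transformations for one consistent with every (input, output) example the user has supplied. Here is a toy illustration in Python; the candidate operations and function names are invented for this sketch, and real systems search a far richer domain-specific language rather than a fixed menu.

```python
# A toy programming-by-example engine: given (input, output) pairs,
# find a transformation consistent with every example, then apply it
# to the rest of the column. The candidate set below is invented for
# illustration only.

CANDIDATES = {
    "upper": str.upper,
    "lower": str.lower,
    "title": str.title,
    "first_word": lambda s: s.split()[0],
    "last_word": lambda s: s.split()[-1],
    "initials": lambda s: "".join(w[0].upper() for w in s.split()),
}

def learn(examples):
    """Return the name of the first candidate matching all examples, or None."""
    for name, fn in CANDIDATES.items():
        try:
            if all(fn(inp) == out for inp, out in examples):
                return name
        except IndexError:
            continue  # candidate not applicable to some input
    return None

def fill(examples, remaining):
    """Apply the learned transformation to the un-filled rows."""
    name = learn(examples)
    if name is None:
        raise ValueError("no consistent transformation; supply more examples")
    return [CANDIDATES[name](s) for s in remaining]

examples = [("ada lovelace", "AL"), ("alan turing", "AT")]
print(fill(examples, ["grace hopper", "donald knuth"]))  # ['GH', 'DK']
```

The "keep feeding it examples" loop in the post corresponds to adding pairs until `learn` narrows down to a single consistent candidate.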
The second is that code generation becomes so good that we don’t program any more, we just give specifications, test cases etc. and the programs “write themselves”. We are unlikely to see this happen wholesale. But, it should be possible using current technologies to create an additional kind of tab in an IDE—not one that contains user-written code, but one that contains examples, and the code to realise them gets generated behind the scenes with machine learning and data mining from vast code-bases.
A lot of work in machine learning of programs has foundered on the problem of “how do you specify a whole system so that it can be learned”. We might eventually get there, but I think we are more likely to see more fine-grained gains at the method/function/transformation level first.
Of course, there is still a need for some programming in these scenarios. But it would play a role rather like the one that operating systems, or firmware, play for the average programmer these days—usually something that you don’t have to explicitly worry about at all.
University research often works best when there is a critical mass of researchers in some area; university degrees, by contrast, usually aim to give a balanced coverage of the different topics within the subject. This is usually seen as a problem—how can a set of staff with narrow research specialities deliver such a broad programme of studies?
One solution to this is to encourage staff to develop teaching specialities. That is, to develop a decent knowledge of some syllabus topic that is (perhaps) completely contrasted with their research interests.
One problem is that we are apologetic with staff about asking them to teach outside of their research area. Perhaps a little bit of first year stuff? Okay, but teaching something elsewhere in the syllabus? We tend to say to people “would you possibly, in extenuating circumstances, just for this year, pretty, pretty, please teach this second year module”. This is completely the wrong attitude to be taking. By making it sound like an exception, we are encouraging those staff to treat it superficially. A better approach would be to be honest about the teaching needs in the department, and to say something more like “this is an important part of the syllabus, no-one does research in this area, but if you are prepared to teach this area then we will (1) give you time in the workload allocation to prepare materials and get up to a high level of knowledge in the subject and (2) commit, as much as is practical, to making this topic a major part of your teaching for the next five years or more”.
In practice, this just makes honest the practice that ends up happening anyway. You take a new job, and, as much as the university would like to offer you your perfect teaching, you end up taking over exactly what the person who retired/died/got a research fellowship/moved somewhere else/got promoted to pro vice chancellor/whatever was doing a few weeks earlier. Teaching is, amongst other things, a pragmatic activity, and being able to teach anything on the core syllabus seems a reasonable expectation for someone with pretensions to being a university lecturer in a subject.
Is this an unreasonable burden? Hell no! Let’s work out what the “burden” of learning material for half a module is. Let’s assume—super-conservatively—that the person hasn’t any knowledge of the subject; e.g. they have changed disciplines between undergraduate studies and their teaching career, or didn’t study it as an option in their degree, or it is a new topic since their studies. We expect students, who are coming at this with no background, and (compared to a lecturer) comparatively weak study skills, to be able to get to grips with four modules each term. So, half a module represents around a week-and-a-half of study. Even that probably exaggerates the amount of time a typical student spends on the module; a recent study has shown that students put about 900 hours each year into their studies, in contrast to universities’ assertion that 1200 hours is a sensible figure. So, we are closer to that half-module representing around a week’s worth of full-time work.
Would it take someone who was really steeped in the subject that long to get to grips with it? Probably not; we could probably halve that figure. On the other hand, we are expecting a level of mastery considerably higher than the student, so let’s double the figure. We are still at around a week of work; amortised over five years, around a day per year. Put this way, this approach seems very reasonable, and readily incorporable into workload allocation models.
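The back-of-envelope arithmetic above can be laid out explicitly. The figures (1200 nominal versus roughly 900 actual study hours, half a module as about a week and a half of nominal study) come from the post; the 40-hour working week and the five-working-day week are my assumptions.

```python
# Back-of-envelope check of the "burden" arithmetic in the post.
nominal_hours = 1200   # university's nominal annual study hours
actual_hours = 900     # hours students actually put in (per the study cited)

# Half a module is about a week and a half of nominal study time...
nominal_half_module_weeks = 1.5
# ...scaled down by the ratio of actual to nominal effort.
actual_half_module_weeks = nominal_half_module_weeks * actual_hours / nominal_hours
print(f"half a module ≈ {actual_half_module_weeks:.2f} weeks of full-time work")

# Halve it (a lecturer learns faster than a student), then double it
# (we expect a higher level of mastery): back to roughly a week.
prep_weeks = actual_half_module_weeks / 2 * 2

# Amortised over five years, in working days per year
# (assuming a five-working-day week):
days_per_year = prep_weeks * 5 / 5
print(f"amortised burden ≈ {days_per_year:.1f} working days per year")
```

Which lands, as the post says, at around a day per year once amortised.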
The problem of keeping track of institutional wisdom is well known. It is very easy for knowledge about how an organisation works, tips and tricks of solving common problems, or just basic knowledge about people and resources to be “stored” in the head of a single individual or small group. Many years ago I worked at a university where one person, and only that person, knew everything about how the university timetable worked. He was about to retire, and it was a great effort for him to formalise 40 years of informal rules and casual knowledge to pass onto his successors.
Nonetheless, there are ways of eliciting this information. A hard problem, perhaps, is that of recording institutional ignorance. What don’t we know? What issues are disputed? What issues are handled in different ways? Clearly, if such issues can be resolved quickly, then they can be put onto agendas of meetings, and ways found to resolve them. But, what about the longer-term issues? Those that don’t have a quick fix, or where there is deep-seated disagreement about them, or where there is just no resource to sort them out.
It isn’t easy to record this sort of thing. Some people like to think that everything is clean and well-run within an organisation, and that even the act of recording these issues is an admission of failure. In other organisations, there is a lack of willingness to accept that the “official line” isn’t being kept, that people are running shadow systems because the official system isn’t working for the task at hand, or that the only reason for the dispute is lack of knowledge about the “proper” way of doing things. Other organisations are “solution focused”, and believe that anything that is problematic should be sorted out as soon as possible, regardless of the constraints of time and resources. All of these belie the complexity of large organisations. It would be shocking to find an organisation in which there were not at least a few unresolved or disputed issues.
One technology which has a lot of potential for this is the shared wiki-style document. These have the flexibility to act as a repository for information of this kind, without the commitment to “sort it out” that a meeting agenda has. Furthermore, different people can contribute, and, if the organisation is confident enough, different approaches can be included and discussed, putative solutions recorded, etc. Another advantage is that things are grouped by topic rather than by time.
More generally, there is a need for a repository in-between meeting minutes and policies. A place to store the casual wisdom and notes on practice that are not important or authoritative enough to be recorded in institutional regulations and policies.
On the day of the Scottish independence referendum, it is interesting to think about how large collections of people should make decent decisions on big issues. Voting isn’t a bad way forward, but when issues are big and likely to be irreversible (at least for a while), there is a fear that a bad decision might be made. In particular, there is always a fear that some minor slip-up, or some temporary surge of feeling, might distort the result.
One approach to this is to require a “supermajority”. That is, the change needs the approval of more than 50%, for example needing 66% support or 80% support. Surely, the argument goes, if a decision is that important, it oughtn’t to depend on the whims of a few people around the borderline. This approach brings a bias towards the status quo—it sees the change as the problem, whereas we might want to say that the decision not to change might be just as momentous a decision. Put another way, once something has been fixed one way, it means that a small minority can keep it that way.
Instead, I propose multiple votes over a reasonable time scale. One of the problems with the single vote, even with a supermajority, is the “morning after” effect; a rush of enthusiasm for one side or the other, or a single screwup by one side, can mean that people might make a capricious decision on the day. By repeating the vote a number of times and averaging in some way, these effects could be smoothed out.
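A toy simulation shows how averaging repeated votes smooths out the “morning after” effect. All of the numbers here are invented for illustration: underlying support for the change sits just above 50%, but each individual vote is perturbed by short-term events (a campaign screwup, a surge of enthusiasm).

```python
import statistics

# Invented figures: steady underlying support of 52%, perturbed on
# each of five polling days by transient swings (percentage points).
underlying_support = 52.0
perturbations = [-4.0, +1.0, +6.0, -2.0, +0.5]

votes = [underlying_support + p for p in perturbations]
print("individual results:", votes)

# A single vote held on a bad day would fail...
print("worst single vote:", min(votes))

# ...but the average over repeated votes tracks the underlying view.
print("averaged result: %.1f%%" % statistics.mean(votes))
```

In this run the worst single day falls to 48% (a “no” under one-shot voting), while the average across the five votes stays above 52%, reflecting the steady underlying support.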
My heart sinks whenever I speak to a student who says “I thought that something was wrong (e.g. with marking), but I didn’t want to offend the lecturers by suggesting it.”. Sometimes the implication is worse—“I don’t want to bias lecturers against me in future classes by being seen to be a troublemaker.”, or “I didn’t want to challenge the accusation of plagiarism, even though I had a good explanation, because I don’t want the lecturers to mark me down on future assessments.”.
My impression is that universities are, on the whole, not like this. Indeed, the idea that we have time to pursue grudges like this, even if we had the inclination (and we don’t), seems risible from where I sit. Nonetheless, we have a genuine problem here; one of “justice being seen to be done” as well as justice being done.
Universities try to deal with complaints, plagiarism cases, problems with marking, etc. by having a clear, unbiased system—as much as there is a model at all, it is the judicial system. But, some students don’t see it like that. However much we emphasise that the process is neutral, there is always a fear of those exhortations being seen as a smokescreen to hide an even deeper bias. The same, of course, is true in the broader world—disadvantaged groups believe (in some cases correctly) that the justice system is set up against them, and no amount of exhortation that it is a neutral system will help.
What can we do? Firstly, I wonder if we need to explain more. In particular, we need to explain that things are different from school, that students are treated as adults at university, and that a university review process consists of a neutral part of the university making a fair judgement between the part of the university that is making the accusation and the student. Students entering the university system have only the school system to base their idea of an educational disciplinary/judicial system on, and that is a very different model. Certainly when I was at school, it was a rather whimsical system, which could have consequences for other aspects of school life. In particular, something which wound me up at the time was the reluctance of teachers to treat issues as substantive; if someone hit you over the head, and you put your hands up to stop them, then you were both seen as “fighting” and had to do detention. Universities are not like this, and perhaps we need to emphasise this difference more.
A second thing is to recruit student unions to play a greater role in the process. I’ve been on dozens of student disciplinary and appeal panels over the years, and the number of students who exercise their right to bring someone with them is tiny. If I were in their shoes, I’d damn well want a hard-headed union representative sat next to me. Speaking as someone who wants the best for everyone in these situations, I’d like them to be as nonconfrontational as possible; but, I wonder if making them slightly more adversarial would give a stronger reassurance that they were working fairly.
Thirdly, I wonder about the role of openness in these systems. One way that national judicial systems increase confidence in their workings is by transacting their business in public; only the rarest of trials are held behind closed doors. There is clearly a delicate issue around student and staff privacy here. Nonetheless, I wonder if there is some way in which suitably anonymised cases could be made public; or, whether we might regard the tradeoff of a little loss of privacy to be worth it in the name of justice being seen to be done. Certainly, the cases that go as far as the Office of the Independent Adjudicator are largely public.