PhD and Fellowship Ideas

Here are some ideas for PhD projects and postdoc research fellowship projects. If you are interested in any of these, email me and we can discuss the ideas further.

The University of Nottingham sometimes has funded PhD scholarships; when these are available, they are listed in this list of current scholarship opportunities. Due to restrictions placed on us by funders, these are not usually available to overseas students. For people who have completed their PhD, there are a number of sources of funding for postdoctoral work, though these are very competitive. Some possibilities include:

If you are interested in applying, then do get in touch and we will see if there is sufficient common interest to put in the (often not insubstantial) effort to put together an application.

Program Execution as Big Data

One perspective on a running computer program is that it is a sequence of changes of state of a computer’s memory: each time an instruction is executed, the memory moves into a different state. We can think of these successive memory states as “frames” in the “movie” that is the execution of the program. As with visual movies, these will largely consist of small changes from frame to frame. Inputs can be represented by (partially complete) frames, as can a specific desired state such as the expected output from a specific test.

The aim of this project is to take these sequences of frames and apply machine learning methods such as deep learning to understand more about them. For example, this might be to identify regions of code that have logic errors relative to a given specification or test set, or to identify problematic situations such as memory leaks, by creating a large number of examples and using machine learning to identify common patterns between them. This approach could also be used to look for regions of similarity between programs, or parts of programs.
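As a concrete illustration of the frame idea, here is a minimal Python sketch using a toy “memory” of a few cells; the write sequence and helper names are invented for illustration, not part of any existing tool:

```python
# Record a "movie" of a toy program's memory: one frame per write,
# then diff consecutive frames to find the cells that changed.

def trace_frames(memory, steps):
    """Run a sequence of (index, value) write operations, recording a
    snapshot ("frame") of memory after each one."""
    frames = [list(memory)]
    for index, value in steps:
        memory[index] = value
        frames.append(list(memory))
    return frames

def frame_diffs(frames):
    """For each pair of consecutive frames, list the cells that changed.
    As in a movie, most cells are unchanged from frame to frame."""
    diffs = []
    for before, after in zip(frames, frames[1:]):
        diffs.append([i for i, (b, a) in enumerate(zip(before, after)) if b != a])
    return diffs

frames = trace_frames([3, 1, 2, 0], [(0, 1), (1, 3), (3, 2), (2, 0)])
print(frame_diffs(frames))  # each step here touches exactly one cell
```

The sequence of diffs (rather than the raw frames) is the sparse, movie-like representation that machine learning methods could then mine for patterns.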

I am particularly interested in the application of these ideas to finding the location of faults in software, and subsequently fixing those faults.

Giving Artificial Intelligence a Mind’s Eye

"If you dangle a rat by the tail, which is closer to the ground, its nose or its ears?" (Shanahan, 2015). This is a trivial question for a person, but a very difficult one for a computer. A person would solve the problem by visualising the scenario in their mind’s eye. The aim of this project would be to give this capacity for mind’s-eye visualisation to an artificial intelligence system, and to test the effectiveness of that system.

One way to represent this mind’s eye representation would be for the system to represent what it learns as objects in a game engine such as Unity. The idea would be that we train a system not just to associate labels with objects, but to associate rich, active models, including physical models and models of behaviour. Then, we could answer questions about the world by setting up those models in the game engine, and letting them interact.
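As a toy illustration of answering the dangling-rat question by simulation rather than by text reasoning, here is a hedged Python sketch; the stick-figure body model and its measurements are invented stand-ins for the rich game-engine models described above:

```python
# Answer "nose or ears?" by instantiating a crude body model and
# simulating suspension, instead of reasoning over text.

RAT_MODEL = {           # distance (cm) of each part from the tail tip
    "tail": 0.0,
    "ears": 22.0,
    "nose": 25.0,
}

def dangle_by(part, model):
    """Suspend the model from `part`: under gravity the body hangs
    straight down, so a part's height falls with its distance along
    the body from the suspension point."""
    anchor = model[part]
    return {name: -abs(pos - anchor) for name, pos in model.items()}

heights = dangle_by("tail", RAT_MODEL)
closer = min(["nose", "ears"], key=lambda p: heights[p])
print(closer)  # prints "nose": it hangs lower, so is closer to the ground
```

The point of the project is that the model and the simulation would be learned, not hand-coded as here; but even this toy shows how the answer falls out of the simulated scene rather than out of linguistic inference.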

There are a number of possible domains that could be used to test such a system. One might be computational thinking tasks such as those in the Bebras challenge, because these exemplify exactly what we want AI systems to do—bridge the gap between informal descriptions of tasks and computational descriptions.

The recent papers by Kunda (2018) and Ha and Schmidhuber (2018) give useful background for this project and are a good starting point for understanding more.


Deep Genetic Programming

In the last few years, deep learning using neural networks has become the most prominent technique in machine learning (see the Goodfellow/Bengio/Courville book). It could be argued that the success of deep learning has little to do with neural networks as such; it is the development of deep, layered representations that is important, together with means of training these. In particular, there are powerful ideas such as stacked autoencoders, where layers of unsupervised representation extract increasingly complex representations in a layered fashion before finally applying a supervised process to solve a particular problem; these are concerned with training regimes rather than neural networks as such.

This opens an interesting area for investigation, which is the development of deep learning algorithms that use representations other than neural networks. One idea would be to use code fragments, in the tradition of methods such as genetic programming (see the GP Field Guide for an overview). The aim of this project would be to investigate the power of combining GP-like representations with deeply layered representations and training regimes from the deep learning tradition. We would aim to answer questions such as whether these approaches can produce results on standard test problems that are as accurate or more than neural-based representations, whether they produce more comprehensible outputs, and whether the models learned are smaller than the neural models.
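The layered training regime can be sketched in a few lines of Python. This is a deliberately crude stand-in: each “layer” is a population of random arithmetic feature functions over the previous layer’s outputs, filtered by correlation with the target rather than by the unsupervised autoencoder criterion discussed above. All names and parameters are invented for illustration:

```python
# Greedy layer-by-layer construction of symbolic (GP-style) features.

import random

OPS = [("add", lambda a, b: a + b),
       ("mul", lambda a, b: a * b),
       ("sub", lambda a, b: a - b)]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def grow_layer(rows, target, width=20, keep=4, rng=random):
    """Generate `width` random two-input features over the previous
    layer's outputs; keep the `keep` best-correlated with the target."""
    n_in = len(rows[0])
    candidates = []
    for _ in range(width):
        i, j = rng.randrange(n_in), rng.randrange(n_in)
        name, op = rng.choice(OPS)
        values = [op(r[i], r[j]) for r in rows]
        candidates.append((abs(correlation(values, target)), name, values))
    best = sorted(candidates, key=lambda c: -c[0])[:keep]
    new_rows = [[c[2][k] for c in best] for k in range(len(rows))]
    return new_rows, [c[1] for c in best]

rng = random.Random(0)
data = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
target = [x0 * x1 + x2 for x0, x1, x2 in data]
layer1, ops1 = grow_layer(data, target, rng=rng)   # features over raw inputs
layer2, ops2 = grow_layer(layer1, target, rng=rng) # features over features
print(ops1, ops2)
```

A real project would replace the correlation filter with a proper reconstruction objective and the random binary operations with evolved expression trees, but the stacked, greedily trained structure is the same.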

This leads onto a deeper investigation, which is whether this approach can be used to learn models that combine pattern recognition and reasoning methods. Current deep learning methods are good at learning to recognise patterns in e.g. visual images, but weak at problems that require e.g. arithmetical or logical reasoning. One approach would be to create deep learning methods that have “recognition” layers and “reasoning” layers; the challenge then is to devise appropriate training regimes. For the philosophically inclined, we can see this as a challenge of how to combine different kinds of knowledge: how to combine a posteriori knowledge (from experience) with a priori knowledge (independent of experience), drawing in particular on Kant’s notion of synthetic a priori concepts. There might be some useful clues in the philosophical literature about how to represent and combine these ideas.

Some related ideas can be found in the recent paper by Lino Rodriguez-Coayahuitl, Alicia Morales-Reyes and Hugo Jair Escalante, “Structurally Layered Representation Learning: Towards Deep Learning through Genetic Programming”, Proceedings of the 2018 European Conference on Genetic Programming.

Machine Learning of Adjectives

Recent advances in machine learning have focused on two topics. One is classification problems—associating labels with data. Another is learning behaviours through reinforcement learning. Looked at from a linguistic perspective, we can see these as being the nouns and verbs in language. However, a nuanced view of the world requires other components of language, in particular adjectives and adverbs; and ways of composing these components.

There has been some work in AI and machine learning on specific aspects of adjectival and adverbial descriptions. For example, there is work in computer vision on learning the concept of “uprightness”, and on ideas of texture recognition; in audio processing there is some work on timbre spaces. However, there hasn’t been a systematic attempt to investigate the capacity of machine learning to detect adjectives, or to build new learning approaches that are particularly suited to adjectival learning. To do so would add much richness to AI systems.

Therefore, the aim of this project is to build an appropriate dataset of examples with adjectival descriptions, to test the capacity of existing machine learning methods to learn them, and by reflecting on the successes and failures of those tests to devise new machine learning methods that are specifically adapted to adjectives and adverbs.

One particularly interesting set of continuum adjectives consists of those that describe distance from a desirable state: for example, “blurriness” in visual images, or “clarity” in sound. If a system could be devised that could learn these concepts, then it could be used to build fast learning systems that move from a less desired to a more desired state (e.g. cleaning up a noisy sound recording). There is a sketch of how this might work in this paper.
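To make the “distance from a desirable state” idea concrete, here is a minimal Python sketch for blurriness of a 1-D signal; the data generator, the single roughness feature, and the blur levels are all invented for illustration:

```python
# Synthesise signals at known blur levels and check that a simple
# roughness feature tracks the blur continuum, which is the first step
# towards learning such an adjective from examples.

import random

def blur(signal, width):
    """Moving-average blur with the given window half-width."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def roughness(signal):
    """Mean absolute difference between neighbours: high for sharp
    signals, low for blurred ones."""
    return sum(abs(a - b) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

rng = random.Random(1)
base = [rng.uniform(-1, 1) for _ in range(500)]
levels = [0, 1, 2, 4, 8]
features = [roughness(blur(base, w)) for w in levels]
# roughness should fall monotonically as the blur level increases
print(all(a > b for a, b in zip(features, features[1:])))
```

A learner fitted to pairs of (feature, blur level) would then have a usable ordering over states, which is exactly what a system needs to move step by step from “blurry” towards “sharp”.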

Machine Learning for Polymorphism

As object-oriented programming has matured, an increasingly sophisticated set of ways to express IS-A relationships has developed: subtype, parametric and ad hoc polymorphism. One problem with these is that much of the information required to keep the polymorphic relationship consistent is not formally expressed in the language, and so it is left to the mental model of the programmer to make sure that the polymorphic relation is correct. More difficult still is maintaining this relationship as the code is developed and functionality is added, particularly if the people adding the functionality are not the original developers.

The aim of this project would be to use machine learning to support and partially automate this process. This could include ideas such as heuristic search to flag up potential inconsistencies between different classes in the polymorphic hierarchy, automatic synthesis of potential higher-level abstractions, and automated refactoring of code to make use of well-known efficient techniques at higher levels of abstraction. There are also interesting questions about how we avoid the problem of identifying trivial, useless abstractions, which could draw on interestingness measures from data mining. This paper gives a little more background.
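As a very small example of the kind of inconsistency a tool might flag mechanically, here is a hedged Python sketch; the classes are invented, and a real system would analyse whole hierarchies and learn which mismatches actually matter:

```python
# Flag a subclass that overrides a method with a different arity --
# something Python itself will not complain about until runtime.

import inspect

class Shape:
    def scale(self, factor):
        raise NotImplementedError

class Circle(Shape):
    def scale(self, factor):
        return f"circle scaled by {factor}"

class Box(Shape):
    def scale(self, fx, fy):          # arity differs from Shape.scale
        return f"box scaled by {fx}x{fy}"

def arity_mismatches(base):
    """Return (subclass, method) pairs whose signature differs from the
    base class's version of the same method."""
    issues = []
    for sub in base.__subclasses__():
        for name, fn in vars(sub).items():
            if callable(fn) and name in vars(base):
                if inspect.signature(fn) != inspect.signature(vars(base)[name]):
                    issues.append((sub.__name__, name))
    return issues

print(arity_mismatches(Shape))  # [('Box', 'scale')]
```

Signature mismatches are the easy, syntactic end of the problem; the interesting machine learning questions begin with the semantic inconsistencies that no signature check can see.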

Simulating Adaptive Pricing

Adaptive pricing is the practice by firms of changing the price of goods according to information that they have about the potential buyer, rather than by declaring a fixed public price. This exists in various forms: travel tickets are sold at different prices according to how many have been already sold and the closeness to departure times; the same item is sold in different branches of a shop depending on the ability and need of customers for that item; items are advertised without a price and then a sales representative judges a price based on discussions with the potential buyer; prices for a service depend on current levels of demand; etc.

There is evidence that this practice is increasing. For example, supermarkets are experimenting with interactive price displays, which can adjust the price of each item dynamically. This could be used, for example, to set a higher price at a time when time-poor/cash-rich commuters are buying food for the journey, then adjusted lower during the daytime when more price-sensitive family customers are purchasing.

The aim of this project is to simulate the effects of this on the broader economy, perhaps using agent-based computational economics approaches. In particular, adaptive pricing potentially has huge consequences for the meaning of wages: for example, in a "perfect" adaptive pricing system, prices are tied very strongly to ability to pay, meaning that differences in income make minimal differences in purchasing capacity. There are also interesting questions with a game-theoretic flavour about when a firm should adopt adaptive pricing versus public pricing. Another interesting aspect is how to implement adaptive pricing effectively; this opens up a rich stream of work in the applications of machine learning and data mining to this area.
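A minimal agent-based sketch of the fixed-versus-adaptive comparison is below. It is a hedged illustration only: the adaptive firm is assumed to observe a noisy estimate of each buyer’s willingness to pay and price just below it, and all the distributions and parameters are invented:

```python
# Compare revenue under one public price versus per-buyer adaptive
# pricing, over a population of buyers with private willingness to pay.

import random

def simulate(buyers, price_fn, rng):
    """Each buyer purchases if the quoted price is at or below their
    willingness to pay; return total revenue and number of sales."""
    revenue, sales = 0.0, 0
    for wtp in buyers:
        price = price_fn(wtp, rng)
        if price <= wtp:
            revenue += price
            sales += 1
    return revenue, sales

rng = random.Random(42)
buyers = [rng.uniform(0, 100) for _ in range(10_000)]

fixed = lambda wtp, r: 50.0                          # one public price
adaptive = lambda wtp, r: 0.9 * wtp + r.gauss(0, 5)  # noisy per-buyer price

rev_f, sold_f = simulate(buyers, fixed, rng)
rev_a, sold_a = simulate(buyers, adaptive, rng)
print(f"fixed: revenue {rev_f:.0f}, sold {sold_f}")
print(f"adaptive: revenue {rev_a:.0f}, sold {sold_a}")
```

Even this toy exhibits the qualitative effect discussed above: the adaptive firm extracts more revenue and serves buyers a fixed price would exclude, which is the starting point for the wage and game-theoretic questions.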

New Challenges for Computational Creativity

Computational creativity is the application of artificial intelligence to areas that require creative thought: coming up with ideas that are novel, valuable, demonstrate skill, and where the machine can reflect on its creations to evaluate and explain what it has created. Research into computational creativity has focused on a broad set of artistic areas, producing programs that can act creatively in music, visual arts, games, and literature, and in criticism of those areas; but there has also been activity in building systems to exhibit mathematical and scientific creativity.

There are a number of interesting avenues for exploration in this area. The first is creative systems that combine different media and approaches. One example would be the automated creation of infographics, which requires not only the effective visualisation of data, but also the use of visual analogy to convey the ideas in an engaging way. Another example is the use of analogy across creative systems: for example, musical improvisers often discuss their performance in terms of analogy with visual imagery or texture. How could we build a computer system that carried out that kind of creative thought and action? How do we evaluate such systems?

A second area that is of interest is the intersection of computational creativity and conceptual art. A number of computer systems have been built that make visual art, and demonstrate ideas such as transferring visual style from one set of images to another, creating visual works that aim to express (or at least represent) certain emotions, or which make connections between the art produced and the outside world. However, no exploration has been done of computers creating conceptual art—and engaging with the discourse that makes conceptual art possible.

There is also the possibility of more (humanities) theory work in this area. For example, almost all of the critical work in computational creativity is grounded in rather early, somewhat naïve art-theory ideas. Exploring the body of computational creativity from the art-critical perspectives of the last few decades could be an interesting area of study. Another area would be to examine computational creativity and AI-based art from the point of view of philosophical theories of aesthetics.