Here are some ideas for PhD projects and postdoc research fellowship proposals. If you are interested in any of these, email me (C.G.Johnson@kent.ac.uk) and we can discuss the ideas further.
In the last few years, deep learning using neural networks has become the most prominent technique in machine learning (see the Goodfellow/Bengio/Courville book). It could be argued that the success of deep learning has little to do with neural networks as such; what matters is the development of deep, layered representations, together with the means of training them. In particular, there are powerful ideas such as stacked autoencoders, where layers of unsupervised learning extract increasingly complex representations in a layered fashion before a supervised process is finally applied to solve a particular problem; these are concerned with training regimes rather than with neural networks per se.
This opens an interesting area for investigation: the development of deep learning algorithms that use representations other than neural networks. One idea would be to use code fragments, in the tradition of methods such as genetic programming (see the GP Field Guide for an overview). The aim of this project would be to investigate the power of combining GP-like representations with the deeply layered representations and training regimes of the deep learning tradition. We would aim to answer questions such as whether these approaches can produce results on standard test problems that are as accurate as, or more accurate than, neural representations; whether they produce more comprehensible outputs; and whether the learned models are smaller than the neural models.
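As a rough illustration of the idea, the sketch below trains a two-layer stack in the greedy layer-wise style of stacked autoencoders, but each layer is a set of small code fragments (here, depth-one arithmetic expressions over the layer's inputs) rather than a neural encoder. The fragment representation, the fitness measure (linear-decoder reconstruction error, standing in for an autoencoder's reconstruction loss), and the selection scheme are all assumptions made for the sketch, not a fixed proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "code fragment" here is a tiny expression tree over input features,
# standing in for a GP individual (a hypothetical, simplified representation).
OPS = [np.add, np.subtract, np.multiply, np.maximum]

def random_fragment(n_inputs):
    """Build a minimal expression combining two random input features."""
    i, j = rng.integers(0, n_inputs, size=2)
    op = OPS[rng.integers(0, len(OPS))]
    return lambda X, i=i, j=j, op=op: op(X[:, i], X[:, j])

def reconstruction_error(H, X):
    """Fitness proxy: how well a linear decoder recovers X from features H
    (analogous to an autoencoder's reconstruction loss)."""
    W, *_ = np.linalg.lstsq(H, X, rcond=None)
    return np.mean((H @ W - X) ** 2)

def train_layer(X, n_fragments=8, pool=64):
    """Greedy layer-wise step: sample a pool of fragments and keep those
    whose outputs best reconstruct the layer's input."""
    frags = [random_fragment(X.shape[1]) for _ in range(pool)]
    scores = [reconstruction_error(f(X).reshape(-1, 1), X) for f in frags]
    best = [frags[k] for k in np.argsort(scores)[:n_fragments]]
    H = np.column_stack([f(X) for f in best])  # this layer's representation
    return best, H

# Stack two layers, as in greedy pre-training of stacked autoencoders;
# a supervised stage would then be trained on the top-layer features H2.
X = rng.normal(size=(100, 5))
layer1, H1 = train_layer(X)
layer2, H2 = train_layer(H1)
print(H2.shape)  # one feature column per retained fragment
```

A real system would evolve the fragments with crossover and mutation rather than sampling them once, but the layered, greedy training structure would be the same.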
This leads on to a deeper investigation: whether this approach can be used to learn models that combine pattern recognition and reasoning. Current deep learning methods are good at learning to recognise patterns in, for example, visual images, but weak at problems that require arithmetical or logical reasoning. One approach would be to create deep learning methods that have “recognition” layers and “reasoning” layers; the challenge then is to devise appropriate training regimes. For the philosophically inclined, we can see this as a challenge of how to combine different kinds of knowledge: in particular, how to combine a posteriori knowledge (from experience) with a priori knowledge (independent of experience), including Kant’s notion of synthetic a priori concepts. There might be some useful clues in the philosophical literature about how to represent and combine these ideas.
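The division of labour between the two kinds of layer can be made concrete with a toy sketch: a learned recognition stage maps noisy inputs to symbols, and a separate reasoning stage manipulates those symbols exactly. Everything here is a stand-in (the prototypes play the role of a trained recogniser, and the reasoning layer is ordinary arithmetic); the research question is how to train such a stack end to end.

```python
import numpy as np

rng = np.random.default_rng(1)

PROTOTYPES = np.eye(10)  # idealised "image" of each digit 0-9

def recognition_layer(x):
    """A posteriori component: nearest-prototype matching, standing in
    for a recognition network learned from data."""
    return int(np.argmax(PROTOTYPES @ x))

def reasoning_layer(a, b):
    """A priori component: exact arithmetic, not learned from examples."""
    return a + b

def answer(img_a, img_b):
    """Recognise each input as a symbol, then reason over the symbols."""
    return reasoning_layer(recognition_layer(img_a), recognition_layer(img_b))

# Noisy "images" of the digits 3 and 4
img3 = PROTOTYPES[3] + 0.1 * rng.normal(size=10)
img4 = PROTOTYPES[4] + 0.1 * rng.normal(size=10)
print(answer(img3, img4))  # 7
```

A pure pattern-recognition model would have to learn addition from examples and would generalise poorly; here the arithmetic is exact by construction, and the open problem is devising a training regime that learns the interface between the two stages.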
Some related ideas can be found in the recent paper by Lino Rodriguez-Coayahuitl, Alicia Morales-Reyes and Hugo Jair Escalante (Structurally Layered Representation Learning: Towards Deep Learning through Genetic Programming, Proceedings of the 2018 European Conference on Genetic Programming).
"If you dangle a rat by the tail, which is closer to the ground, its nose or ears?" (Shanahan, 2015). This is a trivial question for a person, but a very difficult question for a computer. A person would solve the problem by visualising the scenario in their mind’s eye. The aim of this project would be to give this capacity for mind’s eye visualisation to an artificial intelligence system, and to test the effectiveness of that system.
One way to represent this mind’s eye representation would be for the system to represent what it learns as objects in a game engine such as Unity. The idea would be that we train a system not just to associate labels with objects, but to associate rich, active models, including physical models and models of behaviour. Then, we could answer questions about the world by setting up those models in the game engine, and letting them interact.
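The rat question above shows the flavour of what such a system would do. Below is a deliberately minimal sketch: instead of a full game engine, it sets up a tiny geometric model of the dangling rat and reads the answer off the simulated scene. The landmark distances are rough assumptions invented for the example, and the rigid-body simplification stands in for what a physics engine such as Unity would compute.

```python
# Distances from the base of the tail along the rat's body axis
# (centimetres; assumed values for illustration only).
LANDMARKS = {"ears": 16.0, "nose": 20.0}

def dangle_by_tail(hold_height=50.0):
    """'Set up the scene': hang the rat from its tail, so each landmark
    ends up below the holding point by its distance from the tail base
    (a rigid-body simplification of what a physics engine would simulate)."""
    return {name: hold_height - d for name, d in LANDMARKS.items()}

heights = dangle_by_tail()
closest = min(heights, key=heights.get)
print(closest)  # nose
```

The point is not the geometry, which is trivial, but the workflow: the question is answered by instantiating a learned model of the object in a simulated scene and inspecting the result, rather than by retrieving a stored fact.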
There are a number of possible task sets that could be used to test such a system. One might be computational thinking tasks such as those from the Bebras challenge; these exemplify exactly what we want AI systems to do: bridge the gap between informal descriptions of tasks and computational descriptions.
The recent papers by Kunda (2018) and Ha and Schmidhuber (2018) give useful background for this project and are a good starting point for understanding more.