The University of Manchester, UK
Biography: Ross D. King is Professor of Machine Intelligence at the University of Manchester, UK, and one of the most experienced machine learning researchers in the UK. His main research interest is the interface between computer science and science. He originated the idea of a ‘Robot Scientist’: integrating AI and laboratory robotics to physically implement closed-loop scientific discovery. His Robot Scientist ‘Adam’ was the first machine to autonomously discover scientific knowledge, and ‘Eve’ is currently searching for drugs against neglected tropical diseases and cancer. This research has been published in top scientific journals, including Science and Nature, and has received wide publicity. His other core research interest is DNA computing: he developed the first DNA-based nondeterministic universal Turing machine, and is now working on ‘DNA supremacy’, a DNA computer that can solve larger NP-complete problems than conventional or quantum computers can. He is also very interested in computational economics and aesthetics.
The Automation of Science
The application of Artificial Intelligence (AI) to science has a distinguished history. Recent progress in AI and laboratory automation has made it possible to fully automate simple forms of scientific research. A Robot Scientist is a physically implemented robotic system that applies techniques from AI to execute cycles of automated scientific experimentation: hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. The motivation for Robot Scientists is both to better understand science and to make science more efficient. Our Robot Scientist ‘Adam’ was the first machine to autonomously discover novel scientific knowledge. Our Robot Scientist ‘Eve’ was originally designed to automate drug discovery, with a focus on neglected tropical diseases. We are now adapting Eve to work with and learn about yeast systems biology and cancer. Just as in chess there is a continuum of ability from novice to Grandmaster, we argue there is a continuum of scientific ability, and that advances in AI and lab automation are likely to drive the development of ever-smarter Robot Scientists. The physics Nobel laureate Frank Wilczek is on record as saying that in 100 years’ time the best physicist will be a machine. If this comes to pass, it will transform not only technology, but also our understanding of science and the Universe.
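The closed-loop cycle described above can be sketched in miniature. This is a toy illustration, not Adam's actual reasoning system (which used abduction over a logical model of yeast metabolism); the hypothesis space, the simulated assay, and the naive experiment-selection rule are all invented for the example.

```python
# Toy closed-loop scientific discovery cycle:
# form hypotheses -> select experiment -> execute -> analyse -> repeat.

# Hypothetical hypothesis space: which gene encodes a missing enzyme.
hypotheses = {f"gene_{i}" for i in range(8)}
true_gene = "gene_3"  # hidden ground truth, known only to the simulated lab

def run_experiment(gene):
    # Simulated growth assay: does this gene encode the missing enzyme?
    return gene == true_gene

cycles = 0
while len(hypotheses) > 1:
    # Select an experiment to discriminate between hypotheses; a real
    # Robot Scientist weighs expected information gain against cost,
    # here we simply test the candidates in order.
    gene = sorted(hypotheses)[0]
    result = run_experiment(gene)
    # Analyse the result and discard hypotheses inconsistent with it.
    hypotheses = {g for g in hypotheses if (g == gene) == result}
    cycles += 1

print(hypotheses)  # the single surviving hypothesis
```

Each pass through the loop is one cycle of automated experimentation; the loop terminates when the evidence singles out one hypothesis.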
Transformative Machine Learning: Explicit is Better than Implicit
The key to success in machine learning (ML) is the use of effective data representations. Formerly, ML was applied only to isolated problems. Now, with the ever-increasing availability of data, ML is being applied to large sets of related problems. In multi-task ML and transfer ML, related problems are exploited to improve ML performance. My colleagues and I have developed transformative learning (TL): a novel and general ML representation for sets of related problems. TL has the dual advantages of improving ML performance and enabling explainable predictions. The fundamental new idea is to transform standard data representations into an explicit representation based on the predictions of pre-trained models. We have evaluated TL using the four most important non-linear ML methods: random forests, support vector machines, k-nearest neighbour, and neural networks; on three real-world scientific problem areas: drug design, predicting gene expression, and meta-machine-learning. TL significantly improved the predictive performance of all four ML methods in all three areas. A valuable side-product of TL is the large-scale production of prediction models. We applied these models to cluster drug targets/genes and drugs by functional similarity. We also used them to make large-scale drug-activity and gene-activity predictions.
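The core idea of the explicit representation can be sketched as follows. This is a minimal NumPy sketch under invented data, not the published TL pipeline: the five "related tasks" are synthetic, and ordinary least squares stands in for whichever base learner is pre-trained on each task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 related regression tasks over a shared 10-feature space.
n_tasks, n_samples, n_features = 5, 200, 10
tasks = []
for _ in range(n_tasks):
    X = rng.normal(size=(n_samples, n_features))
    w = rng.normal(size=n_features)
    y = X @ w + 0.1 * rng.normal(size=n_samples)
    tasks.append((X, y))

# Step 1: pre-train one model per related task on the standard representation
# (here, an ordinary-least-squares weight vector per task).
pretrained = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks]

def tl_transform(X):
    # Explicit TL representation: each new feature of an example is the
    # prediction one pre-trained model makes for that example.
    return np.column_stack([X @ w for w in pretrained])

# Step 2: represent target-task examples by the pre-trained models' predictions,
# then train any standard learner on this transformed representation.
X_target = rng.normal(size=(100, n_features))
Z = tl_transform(X_target)  # shape (100, n_tasks): one column per pre-trained model
```

Because each transformed feature is the output of a named pre-trained model, a prediction on the target task can be explained in terms of which related-task models it agrees with, which is the source of the explainability claimed above.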