Learning to learn: will machines acquire knowledge as naturally as children do?

Watching a child learn is an extraordinary experience. As a proud dad, it delights and inspires me, and as an artificial intelligence (AI) professional, it reminds me that our journey into machine learning (ML) has only just begun. What is particularly striking about babies and young children, of course, is that they learn incredibly quickly – drawing on building blocks of information and astounding us by picking things up naturally and incrementally. Is that too much to ask of machines?

For now, the answer is yes. But the extraordinary progress I’m seeing in ML convinces me that the ultimate goal of meta-learning – the ability of machines to learn how to learn – is inching ever closer. The implications are profound, of course. Commercial opportunities for business will be propelled to new levels, society will evolve, and ethical, philosophical and moral questions will be high on the agenda, in a world where AI and human behaviour mirror each other much more closely.

But just what is the current state of play in machine learning, and what exciting recent developments are coming down the pipeline that could take us closer to the notion of ‘learning to learn’?

Where are we now?

For the moment, let me stay with my human child analogy. We create new-to-the-world machines with sophisticated specifications that are hugely capable. But to reach their potential, we have to expose them to hundreds of thousands of training examples for every single task. They just don’t ‘get’ things the way humans do.

One way to get machines to learn more naturally is to help them learn from limited data. We can use generative adversarial networks (GANs) to create new examples from a small core of training data, rather than having to capture every situation in the real world. The approach is ‘adversarial’ because one neural network is pitted against another to generate new synthetic data. Then there’s synthetic data rendering – using gaming engines or computer graphics to render new scenarios. Finally, there are algorithmic techniques such as domain adaptation, which transfers knowledge across settings (using data collected in winter to work in summer, for example), and few-shot learning, which makes predictions from a limited number of samples.
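To make the adversarial idea concrete, here is a minimal sketch in PyTorch. One network (the generator) learns to turn random noise into synthetic samples, while another (the discriminator) learns to tell them apart from real data. The network sizes, stand-in data and training schedule are illustrative assumptions, not any particular production recipe.

```python
# Minimal GAN sketch: sizes, data and schedule are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a sample looks (raw logit).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

# Stand-in for a small core of real training data.
real_data = torch.randn(256, data_dim) * 0.5 + 2.0

for step in range(2000):
    # Discriminator step: real samples labelled 1, generated fakes labelled 0.
    real = real_data[torch.randint(0, real_data.size(0), (64,))]
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The trained generator now emits new synthetic samples resembling the data.
synthetic = G(torch.randn(100, latent_dim)).detach()
print(synthetic.mean(dim=0))  # should drift towards the real data's mean (~2.0)
```

In practice the ‘real’ data would be your small core of training examples, and the trained generator becomes a source of extra synthetic samples for downstream training.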

Taking a different limited-data route is multi-task learning, where commonalities and differences across tasks are exploited to solve several of them simultaneously. ML is generally supervised (with inputs paired to target labels), but strides are being made in unsupervised, semi-supervised and self-supervised learning – that is, learning without a human teacher having to label all the examples. With clustering, for instance, an algorithm might group things by similarities that may or may not then be identified and labelled by a human. Examining the clusters will reveal the system’s thinking.
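As a concrete illustration of the clustering idea, here is a minimal sketch using scikit-learn’s k-means on synthetic, unlabelled data. The data and the choice of three clusters are assumptions for demonstration only.

```python
# Unsupervised clustering sketch: no labels are given to the algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three unlabelled blobs standing in for un-annotated real-world data.
data = np.vstack([rng.normal(loc=c, scale=0.4, size=(50, 2))
                  for c in [(0, 0), (3, 3), (0, 4)]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)

# A human can now examine each group to see what the system "thinks".
for k in range(3):
    members = data[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} samples, centre {members.mean(axis=0).round(2)}")
```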

Transforming the Transformer

But now to the Transformer, one of the new kids on the block. Most neural network architectures have to be adapted to perform a single job. The Transformer architecture makes fewer assumptions about the format of the input and output data, and so can be applied to different tasks – similar to the idea of machines exploiting building blocks of learning. The Transformer initially used self-attention mechanisms as the building block for machine translation in natural language processing, but it is now being applied to other tasks, such as image recognition and 3D point cloud understanding.
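For the curious, here is a minimal NumPy sketch of the scaled dot-product self-attention at the heart of the Transformer. Real models learn the projection weights, stack many layers and use multiple heads; the random weights and shapes here are purely illustrative.

```python
# Scaled dot-product self-attention sketch; weights here are random,
# whereas a real Transformer learns them during training.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_k) projections."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # scaled pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v                              # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
tokens = rng.normal(size=(seq_len, d_model))  # words, image patches, 3D points...
out = self_attention(tokens,
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)),
                     rng.normal(size=(d_model, d_k)))
print(out.shape)  # (5, 8): one context-aware vector per input element
```

Nothing in this computation cares whether the input rows are word embeddings, image patches or 3D points – which is exactly why the same building block transfers across tasks.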

This brings me to the obvious question: where next for the Transformer? Recent academic work has looked at applying it, alongside data-efficient training techniques, to protein applications. The team here at Cambridge Consultants built on this research to create an AI model that can optimise protein function for a specific task. We applied this to fluorescent proteins, asking whether it is possible to recommend protein structures that fluoresce more brightly. There’s no time to go into detail here, but I can say that the results are very encouraging: the model predicted variants with six amino acid changes across the length of the sequence that improve the fluorescence of the protein.

This is just a glimpse of an exciting future. Protein manipulation has potential across a range of applications, including medicine, where it could improve cancer treatments or reduce organ rejection rates. New and more effective antibiotics could also be created. In the materials space, it could help remove plastic waste more efficiently, and the technique could also be used to create better-performing textiles.

Looping into the future

And what of the process of training AI models in an experimentation loop? Essentially, this turns the traditional data-first approach on its head. Instead of saying ‘here’s the data, what will it solve?’, the idea is to start with the problem and then create the data sets you need. You ask the AI what it would like to know, then run a lab experiment to find the information, which you feed back into the neural network. Knowledge gaps start to get filled. This is at a fairly early stage of development, with the aim of closing the loop and automating the whole experimentation process.
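Here is a minimal sketch of such a loop, with a simple classifier standing in for the AI model and a hypothetical oracle function standing in for the lab experiment. Uncertainty sampling is one common way to pick the next experiment; the article does not specify the strategy used in practice.

```python
# Problem-first loop sketch: query the most uncertain candidate, "run the
# experiment", feed the result back. oracle() is a hypothetical stand-in
# for the real lab experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
pool = rng.uniform(-3, 3, size=(500, 2))  # candidate experiments

def oracle(x):
    """Stand-in for the lab: returns the measured outcome of an experiment."""
    return (x[:, 0] + x[:, 1] > 0).astype(int)

# Seed with two labelled examples (one from each class, for a valid first fit).
labelled = [int(np.argmin(pool.sum(axis=1))), int(np.argmax(pool.sum(axis=1)))]

for _ in range(20):
    model = LogisticRegression().fit(pool[labelled], oracle(pool[labelled]))
    # Ask the model what it would most like to know: the candidate whose
    # predicted probability is closest to 0.5, i.e. maximum uncertainty.
    uncertainty = -np.abs(model.predict_proba(pool)[:, 1] - 0.5)
    uncertainty[labelled] = -np.inf               # don't repeat experiments
    labelled.append(int(np.argmax(uncertainty)))  # "run" the chosen experiment

print("accuracy after 20 queried experiments:",
      round(model.score(pool, oracle(pool)), 3))
```

Each pass through the loop fills a knowledge gap the model itself has identified, which is the essence of the problem-first approach.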

As part of the recent AI Summit London 2021, I had a fascinating fireside chat on the subject with Kim Branson, senior vice-president, global head of artificial intelligence and machine learning at GSK. He and his team are applying this experimentation concept to drug discovery. Their approach – ‘let’s ask the question first then go and get the data we need to answer it’ – is enabling them to build unique sets of data to target the problems they are trying to solve. This is powerful stuff, and indicative of the point I made at the beginning: the better machines become at learning, the better the outcomes for business, society and the world. Watch this space.

Written by Tim Ensor, director of artificial intelligence at Cambridge Consultants (part of Capgemini Invent)
