“I would guess that by 2050 a computer will be able to simulate the human brain fully. This will have huge social, economic and ethical impact. Once you exceed human capacity you have to teach ethical responsibility,” said Christian Van den Broeck of the Department of Theoretical Physics, Hasselt University. Van den Broeck was addressing STIAS fellows at the first seminar of the 2018 academic year.
Van den Broeck outlined the process followed in his project, which used a deep-learning network to identify and recognise South African birdsong, and drew on this experience to discuss the future of deep learning and of artificial and human intelligence.
The project initially developed an operational convolutional neural network able to distinguish the songs of 170 different South African bird species. The dataset consisted of 3228 recordings extracted from Xeno-Canto, a user-generated database. Most recordings, however, were of low quality, containing multiple birdsongs and background noise.
“This problem was solved by pre-training using the larger and higher-quality BirdCLEF2016 dataset, which contained 24 607 recordings but no South African birds,” said Van den Broeck. The team then trained the network further on 80% of the South African recordings and obtained high recognition accuracy.
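The article does not give implementation details, but the workflow it describes – pre-train on a large, clean dataset, then adapt to a small, noisy target set held out 80/20 – is standard transfer learning. The following is a minimal toy sketch of that idea, assuming a typical setup: a frozen "pre-trained" feature extractor and a new classifier head trained on synthetic data. All sizes, names and data here are illustrative, not from the project.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden, n_classes = 20, 8, 3

# Stand-in for a pre-trained feature extractor: in the real project this
# would be a CNN trained on BirdCLEF2016; here it is a fixed random layer.
W_pre = rng.normal(size=(n_features, n_hidden))

def extract_features(x):
    """Frozen 'pre-trained' layer: fixed weights, tanh nonlinearity."""
    return np.tanh(x @ W_pre)

# Small, noisy target dataset: class means plus noise (illustrative only).
means = rng.normal(size=(n_classes, n_features)) * 2.0
X = np.vstack([m + rng.normal(scale=0.5, size=(40, n_features)) for m in means])
y = np.repeat(np.arange(n_classes), 40)

# 80/20 train/test split, mirroring the 80% of South African recordings
# used for training in the article.
idx = rng.permutation(len(y))
split = int(0.8 * len(y))
train, test = idx[:split], idx[split:]

# Train only the new classification head (softmax regression) by gradient
# descent; the feature extractor stays frozen throughout.
W_head = np.zeros((n_hidden, n_classes))
F = extract_features(X)
for _ in range(300):
    logits = F[train] @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(train)), y[train]] -= 1.0  # softmax cross-entropy gradient
    W_head -= 0.1 * F[train].T @ p / len(train)

accuracy = (np.argmax(F[test] @ W_head, axis=1) == y[test]).mean()
```

Freezing the pre-trained weights and fitting only the head is what lets a small, noisy target set suffice: the expensive representation learning has already been paid for on the large dataset.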
“The fact that the database is able to make a complicated recognition on the basis of a limited and noisy training set reveals key information about recognition and intelligence.”
“This made me question what human intelligence is about,” he added.
“Intelligence is a question of ability – we develop it by learning and copying. I believe intelligence is a question of ability to distinguish, identify and relate patterns at various levels of complexity. We do so in an initial phase by exposure to sensory input, and later on also by the internal dynamics in the brain,” continued Van den Broeck. “Human intelligence is overrated – it may arise naturally in a large enough and properly connected system.”
“Computers already influence us, albeit unwillingly so,” he added. “They talk to each other. As we allow them to do so more and more, and as their capabilities increase, they could take over control before we realise it.”
“Already deep learning can extract more data than any human brain can. For example, using deep learning in medicine means we can extract more data more quickly than any human can.”
“If we consider quantum computers – these are still in their infancy,” said Van den Broeck. “However, one day the quantum computer could be merged with self-training deep-learning technologies. They would be able to ‘comprehend’ the world and its quantum aspects in a way that is hardly conceivable to us right now.”
“The human development timescale is much slower,” he continued. “I therefore believe we need to have humility.”
Right now the main difference between a human brain and a computer lies in the realm of cognitive functions – this includes ‘human-like’ emotions, socialisation and human contact, ethical responsibility and obligation. Humans are able to communicate about abstract ideas and are even prepared to die for ideals “which doesn’t always make much sense from a biological point of view,” explained Van den Broeck. “Computers have not yet developed cognitive functions. However, when they do, we expect that these other human-like characteristics may appear and co-develop.”
Using the example of Tesla’s self-driving cars, Van den Broeck pointed to situations where the car would have to make ethical choices – for example, between hitting a pedestrian and killing the car’s occupants.
“Can computers be socialised? Can they understand and be socialised into the concept of obligation?” he asked. “I believe they will get there soon if they are not there already. Someone will do it – develop a system that will have cognitive functions acquired by experience – emotions like anger and even something that looks like free will.”
Van den Broeck also pointed out that the artificial intelligence development process is mostly in the hands of a small number of companies – like Amazon, Apple, Facebook, Google and Microsoft.
“Powerful regulation and codes of conduct will be needed,” he said.
“Artificial intelligence won’t replace humans but it is unstoppable – it’s too convenient.”
Michelle Galloway: Part-time media officer at STIAS
Photograph: Christoff Pauw