Juergen Schmidhuber, AI, DLD 2018
Dominik Gigler for DLD

The Feeling Machine


The Latin phrase “Cogito, ergo sum” means “I think, therefore I am.” For decades, authors and directors have dreamed of the self-aware machine, letting their imaginations run wild. With AI technology developing fast, could it actually happen? Can a machine become conscious of its own existence?

In the 17th century, long before the first computer was developed, the French philosopher René Descartes formulated this first principle as proof of a being’s existence. Could “Cogito, ergo sum” soon apply not only to humans but to machines as well?

This much-debated and highly sensitive question of our times was put to discussion on Sunday at the annual DLD Munich 2018 conference. Ina Fried, Chief Technology Correspondent for Axios and Editor of its daily tech newsletter Login, interviewed Alexander Del Toro Barba and Prof. Jürgen Schmidhuber. Del Toro Barba is Head of Product at VisualVest, a digital banking platform founded by the German asset manager Union Investment, which uses robo-advisors. Schmidhuber, named the Godfather of AI by Bloomberg Businessweek, developed early prototypes of today’s “Deep Learning” machine learning algorithms with his research group at the Swiss AI Lab IDSIA and TU Munich.


Can a machine or computer feel emotions? 

Feeling emotions, according to Jürgen Schmidhuber, is no thing of the future: “We already have emotions as a side-product of learning processes.” He and his team equip their robots with pain sensors; if the robots bump into something, the sensors send a negative signal. “How do you know if someone feels pain or just pretends to feel pain?”, Ina Fried asked. Both Alexander Del Toro Barba and Jürgen Schmidhuber acknowledged that it is hard to distinguish between the two. The key, Jürgen Schmidhuber said, is that robots can act emotionally, and they do. At the beginning of a learning process they do not know how to avoid certain situations, but over time they learn to do so. The goal is to maximize positive feelings and minimize pain.
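The learning scheme described above — a negative signal for painful events, and behavior adjusted over time to avoid it — is the basic idea of reinforcement learning. The following is a minimal, purely illustrative sketch (not Schmidhuber’s actual system; the actions, rewards, and parameters are invented) of an agent learning to prefer the painless action:

```python
import random

# Toy reinforcement-learning sketch: an agent learns, via a simple
# value-update rule, to avoid an action that produces a "pain" signal.
# All names and numbers here are invented for illustration.

ACTIONS = ["approach", "avoid"]
REWARD = {"approach": -1.0, "avoid": +0.1}  # bumping into an obstacle hurts

def train(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best estimate, sometimes explore
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        # nudge the estimate toward the observed reward
        q[a] += alpha * (REWARD[a] - q[a])
    return q

q = train()
# after training, the agent prefers the painless action
print(max(q, key=q.get))  # → avoid
```

The agent initially tries the painful action, its value estimate turns negative, and the policy shifts to avoidance — a bare-bones version of “maximize positive feelings and minimize pain.”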

In his office, Jürgen Schmidhuber said, there is a robot that gets hit on the head by a co-worker every morning. Over time, the machine developed behavior that humans would read as fear: avoiding and hiding from that person.



Is it possible for a machine to develop a consciousness?

“There are different levels of consciousness”, said Alexander Del Toro Barba. When talking about it, one therefore has to differentiate between human consciousness and whatever robots might experience. Jürgen Schmidhuber agreed: “Consciousness has no universal definition.” Full machine consciousness still seems to be a thing of the future. Yet, he added: “It is easy to create a system that […] becomes a prediction machine of what is going to happen next”, which, according to Schmidhuber, can evolve to the point where a machine thinks about itself.

Where are we today? 

“95 percent of commercial AI are now assistants for users and companies”, Prof. Schmidhuber said. Artificial intelligence algorithms like Long Short-Term Memory (LSTM) units enable devices like smartphones to make your life easier, for example by understanding speech or translating languages.
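An LSTM unit processes a sequence one step at a time, using learned “gates” to decide what to remember and what to forget. A minimal sketch of a single LSTM cell step, with toy scalar weights chosen purely for illustration (a real model would use trained weight matrices over vectors):

```python
import math

# Illustrative single LSTM cell step in plain Python. The weights below
# are toy scalars, not a trained model.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # each gate sees the current input x and the previous hidden state h_prev
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])    # input gate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])    # output gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2])  # candidate
    c = f * c_prev + i * g        # cell state: gated memory update
    h = o * math.tanh(c)          # hidden state passed to the next step
    return h, c

# run the cell over a short input sequence
w = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, w)
print(h, c)
```

The gated cell state is what lets LSTMs carry information across long sequences — the property that makes them useful for speech and translation.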

The same technology, Alexander Del Toro Barba noted, powered Tay, an AI chatbot developed by Microsoft that was supposed to learn on Twitter how young people communicate. Within 24 hours, Tay began tweeting racist statements. This raises, as Del Toro Barba pointed out, the question of whether we should introduce the concept of ethics to machines.

Should AI be introduced to ethics? And if so, which ones? 

Ethics and moral ideas vary depending on region, religion, and social and political views. Alexander Del Toro Barba: “Will there be AI with ethical views of the Vatican, ones with Chinese and some with Islamic worldviews?” He went even further: “Will a machine one day question its values that were programmed into it?”

Jürgen Schmidhuber was rather optimistic: “If you want to build a smart machine, you’d better give it the freedom to invent its own problems.” This also includes values. Since there is no single uniform set of values even within a nation, but competing ideas, and since the AI business is structured rather democratically, he is sure there will be various sets of values as well.


Copyright © 2005 – 2021 Hubert Burda Media Holding KG / DLD Media GmbH – All rights reserved.
