In the history of the quest for human-level artificial intelligence, a number of rival paradigms have vied for supremacy. Symbolic artificial intelligence was dominant for much of the 20th century, but currently a connectionist paradigm is in the ascendant, namely machine learning with deep neural networks. However, both paradigms have strengths and weaknesses, and a significant challenge for the field today is to effect a reconciliation. A central tenet of the symbolic paradigm is that intelligence results from the manipulation of abstract compositional representations whose elements stand for objects and relations. If this is correct, then a key objective for deep learning is to develop architectures capable of discovering objects and relations in raw data, and learning how to represent them in ways that are useful for downstream processing. This short review highlights recent progress in this direction.
Deep learning is an approach to machine learning that involves training neural networks with many layers on large datasets. Over the past ten years, it has become established as one of the most impactful research areas within artificial intelligence (AI). Its notable successes include commercially important applications, such as image captioning and machine translation, and it has been fruitfully combined with reinforcement learning in the context of robotics, video games, and the game of Go. Notwithstanding its undeniable success, critics have recently drawn attention to a number of shortcomings in contemporary deep learning.
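As a minimal illustration of the kind of model described above, the following sketch (using NumPy; the layer sizes and random initialization are arbitrary choices for illustration, not drawn from the text) stacks several feed-forward layers, each applying a learned linear map followed by a nonlinearity:

```python
import numpy as np

def relu(x):
    # Standard rectified-linear nonlinearity applied between layers.
    return np.maximum(0.0, x)

def init_layer(rng, n_in, n_out):
    # Small random weights and zero biases for one feed-forward layer.
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(params, x):
    # Apply each layer in turn; ReLU on all but the final layer.
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = relu(x)
    return x

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # input, two hidden layers, output (arbitrary)
params = [init_layer(rng, a, b) for a, b in zip(sizes[:-1], sizes[1:])]
out = forward(params, rng.standard_normal((5, 8)))  # batch of 5 inputs
print(out.shape)  # (5, 4)
```

In practice such networks are trained by gradient descent on a loss computed over large datasets; the sketch shows only the layered forward computation that the term "deep" refers to.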