There is a long tradition of philosophers, cognitive scientists and computer scientists discussing the philosophical implications of artificial intelligence. Since von Neumann-style computation has been the dominant paradigm for decades, most of this discourse has centered on standard digital computation. With the advent of neuromorphic hardware, however, many philosophical answers need to be reconsidered. Are implementations of AI algorithms on neuromorphic hardware mere simulations, or rather replications of brain structures? Does the use of biologically inspired components raise new ethical questions? Can neuromorphic chips be used to build embodied robots? This workshop attempts to answer these and further questions by bringing together scientists from different areas and career phases.
Thursday, 30th of May: Brain-like Hardware
| TIME | TYPE | SPEAKER | TOPIC |
| --- | --- | --- | --- |
| 14:00 – 15:15 | Talk | Pascal Nieters | Neural Computation, AGI and Consciousness: Does the hardware matter? |
| 15:15 – 15:30 | Break | | |
| 15:30 – 16:45 | Talk | Herbert Jaeger | It has taken 2350 years to understand the logic of computing – the processing of symbolic structures. How long will it take until we understand the physics of computing – the structuring of processes? |
| 16:45 – 17:15 | Break | | |
| 17:15 – 18:15 | Talk | Johannes Brinz | From Simulating Towards Replicating the Brain |
| 18:15 – 18:30 | Break | | |
| 18:30 – 20:00 | Keynote | Chiara Bartolozzi | Embodied Neuromorphic Intelligence |
Friday, 31st of May: Neuromorphic Consciousness
| TIME | TYPE | SPEAKER | TOPIC |
| --- | --- | --- | --- |
| 14:00 – 15:15 | Talk | Wanja Wiese | Is it plausible to assume that consciousness requires neuromorphic processing? |
| 15:15 – 15:30 | Break | | |
| 15:30 – 16:45 | Talk | Gualtiero Piccinini | Neural Hardware for the Language of Thought |
| 16:45 – 17:15 | Break | | |
| 17:15 – 18:15 | Panel Discussion | | Neuromorphic Systems – Tools or Individuals? |
| 18:15 – 18:30 | Break | | |
| 18:30 – 20:00 | Keynote | Paul Thagard | Consciousness and Neuromorphic AI |
As AI and large language model capabilities continue to advance, driven by the increasing computational power of digital von Neumann computers, some argue that artificial general intelligence (AGI) is within the reach of current technology. Conversely, neuromorphic computing, which is inspired by the brain’s neural computational systems, aims to enable a new generation of intelligent behavior in technical systems with purpose-built hardware modeled after its natural inspiration. This approach stands in contrast to the traditional von Neumann architectures that have powered AI advancements thus far. However, key questions remain unanswered: Will neuromorphic hardware significantly impact AGI and, by extension, our understanding of consciousness? What level of fidelity to the brain’s wetware complexity is required to achieve meaningful results? And, if neuromorphic hardware does prove impactful, is its role primarily in enabling even bigger AI models, or is it more fundamental in nature? Does the hardware matter? In addressing these uncertainties, this talk provides an overview of neuromorphic computing. We will attempt to glean what we can on the big questions outlined above from current technology and open questions in the field.
For digital computing we possess a formal theory foundation that is deeply rooted in Western philosophical history, is mathematically transparent, has been worked out, stabilized and codified into a standard textbook format, and has obviously changed and shaped our modern world. For information processing in neuromorphic microchips, in other new and yet-to-be-found hardware substrates based on unconventional physical effects, or in biological brains and other natural systems, we do not have anything like a unifying formal theory foundation. But we need one, and it should not only be academically acceptable but genuinely useful in practice. In my talk I will draw a quick overall picture of this situation, and then present my own approach toward formulating such a general formal theory for information processing in non-digital, non-symbolic, physical dynamical systems.
Computational neuroscience has spent vast amounts of resources on developing large-scale brain models. With the advent of neuromorphic engineering, however, researchers may gain the tools necessary to move from simulating towards replicating the brain. First, I give a working definition of what a model is. Based on that definition, I distinguish between simulations and replications. A simulation only shares a common mathematical structure with the system it simulates, whereas a replication must additionally be the sort of entity the model is about (assignment) and work according to the causal laws described by the model (explanation). The last section then explores neuromorphic hardware. I argue that neuromorphic chips share aspects of the causal structure of biological brains and thus move closer to replicating the brain.
Biological sensory systems have evolved to best capture those properties of surrounding objects and environments that are useful for acting in the world. The physical properties of tactile, visual and auditory sensory organs, and the way neurons encode the characteristics of each stimulus, allow our brain to make sense of the world and take appropriate decisions on how to behave. This is done by a highly efficient system that economizes on every bit of information, to avoid consuming too much energy for each single action. As such, artificial systems have much to learn from biology in developing cheap solutions that can run on a very small device at minimum energy cost. Since the first prototypes of neuromorphic vision sensors and computing devices, part of the community has focused its efforts on deploying neuromorphic devices in practical applications, to exploit their intrinsic compression, low latency, high temporal resolution and high dynamic range. The quest to find the best strategy to exploit neuromorphic engineering is still open, but a lot of progress has been made. In this talk, I will describe possible approaches towards the development of neuromorphic robots and their relevance.
Which artificial systems could be conscious? Theories of consciousness do not provide an unequivocal answer to this question. Liberal theories (e.g., global workspace theory) suggest that performing the right high-level computations is sufficient for being conscious. More restrictive theories (e.g., integrated information theory) predict that computers with classical hardware will never be conscious, because they do not instantiate the right low-level (non-computational) properties.

A point of contention is to what extent, and how, artificial systems must replicate the mechanisms underlying consciousness in living creatures in order to be conscious. Perhaps consciousness requires neuromorphic processing? If so, what does that mean?

This talk explores different senses in which computational and other processes can be regarded as neuromorphic. I distinguish between more general (unspecific) notions of neuromorphic processing (e.g., certain kinds of causal flow) and more specific notions (e.g., processing in certain neuromorphic microchips). I argue that only neuromorphic processing in a relatively unspecific sense should be regarded as a constraint on implementing consciousness in artificial systems.
The Language of Thought (LOT) hypothesis posits that at least some important cognitive processes involve language-like representations. These representations must be processed by appropriate hardware. Since the organ of cognition is the nervous system, whether these representations and the hardware needed to process them are there depends on how nervous systems work. We distinguish between different versions of LOT, articulate their hardware requirements, and consider which versions of LOT are supported by neuroscientific evidence.
I maintain that consciousness in humans and other animals results from four brain mechanisms: neural representation, binding, coherence, and competition. Computers generally lack these mechanisms, so we should not expect them to be conscious. But neuromorphic computers are inspired by brain operations, so they are better candidates for becoming conscious. ChatGPT and other new generative AI models are surprisingly intelligent and are neuromorphic to some extent, in that they are inspired by neural networks and use highly parallel algorithms. This talk examines whether these models are approaching consciousness as rapidly as they are approaching human-level intelligence. I will also consider whether models that are more intensively neuromorphic, for example in energy consumption, might qualify as conscious.
The workshop will take place online.
Link to the Workshop
Johannes Brinz & Nikola Kompa