David Slater
Wrong Question?
Contemporary discussion of artificial intelligence is dominated by questions about intelligence: whether current systems are intelligent, whether they approach general intelligence, or whether scaling alone might eventually produce it (Russell & Norvig, 2021). These questions, however, rest on an assumption that is rarely examined—that intelligence is the primary phenomenon of interest, and that consciousness, if it appears at all, comes later as an emergent by-product. This paper argues that this assumption is likely inverted. Evidence from evolutionary biology, neuroscience, and dynamical systems theory suggests that consciousness emerges before intelligence, and that intelligence can only arise once a system already possesses a unified internal orientation to the world (Panksepp, 1998; Pessoa, 2017).
How did it Evolve?
To make this claim precise, evolution must be understood in a broader sense than its familiar biological formulation. In non-equilibrium thermodynamics, driven open systems can spontaneously self-organise into stable dynamical regimes—often described as dissipative structures—that persist by channelling energy flows and exporting entropy (Prigogine, 1977; Nicolis & Prigogine, 1989). Under sustained driving, some regimes prove more stable than others, introducing a form of differential persistence that does not depend on genes, replication, or selection in the biological sense (England, 2015). Biological evolution by natural selection can therefore be understood as a powerful special case of a more general phenomenon: the selection of stable dynamical regimes under constraint (Kauffman, 1993).
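The idea of "selection of stable dynamical regimes under constraint" can be illustrated with a deliberately minimal sketch: an overdamped particle in a tilted double-well potential, driven by noise. The two wells stand in for two candidate regimes; the deeper well is the more stable one, and over time the system spends most of its time there. All names and parameter values here (the tilt, the noise level, the step counts) are illustrative choices, not quantities from the sources cited above.

```python
import math
import random

def potential_grad(x, tilt=0.3):
    # V(x) = x**4/4 - x**2/2 - tilt*x: a double-well potential whose
    # right-hand minimum (near x = +1) is deeper than the left-hand one.
    # The gradient dV/dx is x**3 - x - tilt.
    return x**3 - x - tilt

def simulate(steps=200_000, dt=0.01, noise=0.5, seed=0):
    """Overdamped Langevin dynamics: noise plays the role of sustained
    driving, and each well is a candidate dynamical regime. Returns the
    fraction of time spent in the deeper (right-hand) well."""
    rng = random.Random(seed)
    x = 0.0
    time_right = 0
    for _ in range(steps):
        # Euler-Maruyama step: drift down the gradient plus noise.
        x += -potential_grad(x) * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        if x > 0:
            time_right += 1
    return time_right / steps

occupancy = simulate()
```

Nothing here replicates or adapts; the deeper regime simply persists longer under the same driving, which is the sense of "differential persistence" intended in the text.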
As such systems accumulate constraints—feedback loops, memory, boundary conditions, and separations of timescale—their attractor landscapes become richer and more structured (Mitchell, 2009). At this point, new functional properties can arise. One of the earliest and most consequential is awareness: the capacity for internal dynamics to covary reliably with environmental regularities in ways that stabilise behaviour. When this internal orientation becomes globally integrated into a single, metastable dynamic that continuously binds perception, salience, value, and action readiness, a system crosses a functional threshold that is usefully described as consciousness (Tononi et al., 2016; Mashour et al., 2018).
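The operational sense of "awareness" used above, internal dynamics that covary reliably with environmental regularities, can also be sketched in a toy form: a leaky internal state integrates noisy observations of a slowly drifting environmental variable, and ends up strongly correlated with it. This is an assumed illustration, not a model of consciousness; the function name, leak rate, and noise levels are all invented for the example.

```python
import math
import random

def track_environment(steps=20_000, leak=0.1, obs_noise=1.0, seed=1):
    """Toy 'awareness': a leaky internal state that integrates noisy
    observations of a slowly drifting latent environmental variable.
    Returns the Pearson correlation between the two trajectories."""
    rng = random.Random(seed)
    env, internal = 0.0, 0.0
    envs, states = [], []
    for _ in range(steps):
        env += 0.01 * rng.gauss(0, 1)            # slow environmental drift
        obs = env + obs_noise * rng.gauss(0, 1)  # noisy sensory channel
        internal += leak * (obs - internal)      # leaky integration
        envs.append(env)
        states.append(internal)
    # Pearson correlation between latent environment and internal state.
    n = len(envs)
    me, ms = sum(envs) / n, sum(states) / n
    cov = sum((e - me) * (s - ms) for e, s in zip(envs, states)) / n
    ve = sum((e - me) ** 2 for e in envs) / n
    vs = sum((s - ms) ** 2 for s in states) / n
    return cov / math.sqrt(ve * vs)

corr = track_environment()
```

The point of the sketch is the covariation itself: a single internal variable that reliably tracks an external regularity stabilises behaviour, which is the functional property the text identifies as the precursor to global integration.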
What is Consciousness?
Consciousness is defined here operationally rather than phenomenologically. The argument does not depend on resolving the “hard problem” of subjective experience, nor does it require commitment to any particular theory of qualia. Instead, consciousness is treated as a system-level dynamical property: the maintenance of a unified internal state that orients the system to the world under uncertainty. This definition is consistent with integrative and dynamical accounts in contemporary neuroscience, although it remains theoretically contested, particularly with respect to minimal …


