It is curious, and I believe not previously noticed, that something very similar is essential to high-energy physics. (Physics also needs normal perception, of course.) At this point alarms ring in the minds of my colleagues, since we are all too familiar with books on the profound connection between ``the new physics'' and consciousness and various sophomoric distortions of Asian mysticism.
The authors of this school are seldom discussed, save by graduate students who laugh at the errors and covet the royalties. Rest assured, I shall not discuss the torture of cats, Buddhist puns, interpretive dance, the Tao of the relativistic Euler-Lagrange equations, the maya-aspects of renormalizable gauge field theories, or even how to find a cheap Chinese restaurant in Copenhagen without a Danish interpreter.
My subject is, instead, rather more massive and solid and sweaty: the detectors attached to particle accelerators. A word or three of reminder about these, too, may not be out of place.
Particle physicists are interested in what the smallest discoverable bits of matter are, and how they behave. They are especially interested in how they behave at very high energies, since these let them probe very short distances and lead to unusual (and hence informative) events, like the creation of new kinds of particles.
The only practical way to give elementary particles lots of energy is to accelerate them to very high speeds; the electromagnetic machines which do this are called, imaginatively enough, ``accelerators''.
Some accelerators send a beam of particles into a fixed target of more normal matter, say, gold foil. The really high-energy ones collide two beams of particles moving in opposite directions. There are all sorts of fascinating technical issues, on which I may well end up writing a dissertation --- but another time.
More interesting for us than the accelerators are the detectors, the machines which sense what happens when the particles collide. The need for such machines is quite real. The events happen far too quickly (over 10^-23 to, at the most lackadaisical, 10^-10 seconds) and in too small a region (on the order of 10^-18 meters) for human perception.
I come at last to the heart of the matter. Most of the oceans of data from detectors are uninteresting and worthless. Recall that physicists want to learn about unusual, hard-to-achieve or anomalous events; everything else is noise.
But common, easily occurring events are by definition the majority; therefore most events are uninteresting. Sturgeon's Law states that ``ninety percent of everything is crap.''
For particle physics, this is wildly optimistic; interesting events can be outnumbered by billions or trillions to one. In theory, combing haystacks for needles is what professors have graduate students for. In practice, not even an army would suffice.
What does suffice is very high speed electronics, working on time-scales of under a microsecond. The lowest level, known as the trigger, scans the signals from the detector for an interesting pattern, usually something very simple, like ``two diametrically opposed detectors activated.'' The data is recorded only if the trigger is (for want of a better word) triggered. Once it is recorded, the computers set to work on it, attempting a more and more detailed reconstruction of the event.
At each stage in the reconstruction there are ``cuts'', i.e. some events are selected for their interesting characteristics and the rest discarded. (For instance, we might want events where all the outgoing particles concentrate into two back-to-back jets, and so cut those where lots of other detectors got triggered, along with a diametrically opposed pair.)
Great care is lavished on both the design of the cuts and the reconstruction, for figuring out what to ignore is, practically, as important as figuring out what happened. What bubbles up, in the end, are a handful of reconstructions selected --- elected? --- for conscious, human attention.
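The trigger-and-cuts cascade just described can be caricatured in a few lines of code. Everything below is invented for illustration: real triggers run in custom hardware on sub-microsecond time-scales, and an event is vastly richer than a list of hit angles. The point is only the shape of the logic, a cheap fast filter followed by progressively fussier cuts.

```python
# Toy sketch of a trigger-and-cuts pipeline. An "event" here is just a
# list of angles (in degrees) at which detector elements fired; the
# thresholds and the back-to-back test are all invented for illustration.

def trigger(hits):
    """Level-0 trigger: record an event only if two detector elements
    roughly 180 degrees apart both fired."""
    return any(abs(abs(a - b) - 180) < 5 for a in hits for b in hits)

def cut_two_jets(hits, max_extra=2):
    """A later 'cut': keep only quiet events, discarding those where
    many detectors fired besides the opposed pair."""
    return len(hits) <= 2 + max_extra

def pipeline(events):
    recorded = [e for e in events if trigger(e)]         # fast trigger
    selected = [e for e in recorded if cut_two_jets(e)]  # offline cut
    return selected

events = [
    [10, 190],               # clean back-to-back pair: survives both stages
    [10, 190, 45, 300, 77],  # opposed pair plus extra activity: cut
    [30, 90],                # no opposed pair: never even recorded
]
```

Here `pipeline(events)` keeps only the first event; the second is recorded but cut, and the third never makes it past the trigger, which is the whole economy of the scheme.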
A few areas cooperate and compete in outlining the framework of this new field: 1) embodiment, 2) simulation and depiction, 3) environmentalism or externalism, and 4) extended control theory. None of them is completely independent of the others; together they strive toward a higher level of integration.
Embodiment tries to address the issues of symbol grounding, anchoring, and intentionality. Recent work emphasizing the role of embodiment in grounding conscious experience goes beyond the insights of Brooksian embodied AI and discussions of symbol grounding (Harnad, 1990; Harnad, 1995; Ziemke, 2001; Holland, 2003; Bongard, Zykov et al., 2006). On this view, a crucial role for the body in an artificial consciousness will be to provide the unified, meaning-giving locus required to support and justify attributions of coherent experience in the first place.
Simulation and depiction deal with synthetic phenomenology, developing models of mental imagery, attention, and working memory. Progress has been made in understanding how imagination- and simulation-guided action (Hesslow, 2003), along with the "virtual reality metaphor" (Revonsuo, 1995), are crucial components of being a system that is usefully characterized as conscious. Correspondingly, a significant part of the recent resurgence of interest in machine consciousness has focused on giving such capacities to robotic systems (e.g., Cotterill, 1995; Stein and Meredith, 1999; Chella, Gaglio et al., 2001; Ziemke, 2001; Hesslow, 2002; Taylor, 2002; Haikonen, 2003; Holland, 2003; Aleksander and Morton, 2005; Shanahan, 2005).
Environmentalism focuses on the integration between the agent and its environment. The problem of situatedness can be addressed by adopting the externalist view, on which the vehicles enabling consciousness extend into parts of the environment (Dretske, 2000; O'Regan and Noë, 2001; Noë, 2004; Manzotti, 2006).
Finally, there is a strong overlap between current control theory for very complex systems and the role played by a conscious mind. A fruitful approach could be the study of artificial consciousness as a kind of extended control loop (Chella, Gaglio et al., 2001; Sanz, 2005; Bongard, Zykov et al., 2006).
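As an illustration only, the "extended control loop" idea might be caricatured as a controller that acts through an internal self-model rather than directly on raw sensor readings. All names, gains, and dynamics below are invented for this sketch and are not drawn from the cited works.

```python
# Hypothetical sketch of an extended control loop: an inner loop keeps a
# self-model in register with perception, and the outer control action is
# computed from that model, so the loop runs "through" the model.

class ExtendedController:
    def __init__(self, target):
        self.target = target
        self.self_model = 0.0  # the agent's estimate of its own state

    def step(self, sensed_state):
        # Inner loop: nudge the self-model toward what is sensed.
        self.self_model += 0.5 * (sensed_state - self.self_model)
        # Outer loop: act on the discrepancy between goal and *model*,
        # not the raw reading.
        return 0.1 * (self.target - self.self_model)

# A trivial "world" that simply accumulates the controller's actions.
world = 0.0
ctrl = ExtendedController(target=1.0)
for _ in range(200):
    world += ctrl.step(world)
```

With these (arbitrary) gains the coupled model-and-world system settles near the target; the design point is only that monitoring oneself and controlling the world become a single loop.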
There have also been proposals that AI systems may be well-suited or even necessary for the specification of the contents of consciousness (synthetic phenomenology), which is notoriously difficult to do with natural language (Chrisley, 1995).
One line of thought (Dennett, 1991; McDermott, 2001; Sloman, 2003) sees the primary task in explaining consciousness to be the explanation of consciousness talk, or representations of oneself and others as conscious. On such a view, the key to developing artificial consciousness is to develop an agent that, perhaps due to its own complexity combined with a need to self-monitor, finds a use for thinking of itself (or others) as having experiential states.
Adami, C. (2006). “What Do Robots Dream Of?” Science 314 (5802): 1093-1094.
Aleksander, I. (2000). How to Build a Mind. London, Weidenfeld & Nicolson.
Aleksander, I. (2001). “The Self 'out there'.” Nature 413: 23.
Aleksander, I. and H. Morton (2005). “Enacted Theories of Visual Awareness, A Neuromodelling Analysis”. in BVAI 2005, LNCS 3704.
Atkinson, A. P., M. S. C. Thomas, et al. (2000). “Consciousness: mapping the theoretical landscape.” Trends in Cognitive Sciences 4 (10): 372-382.
Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge, Cambridge University Press.
Baars, B. J. (2002). “The Conscious Access Hypothesis: origins and recent evidence.” Trends in Cognitive Sciences 6 (1): 47-52.
Bongard, J., v. Zykov, et al. (2006). “Resilient Machines Through Continuous Self-Modeling.” Science 314 (5802): 1118-1121.
Chella, A., S. Gaglio, et al. (2001). “Conceptual representations of actions for autonomous robots.” Robotics and Autonomous Systems 34 (4): 251-264.
Chella, A. and R. Manzotti (2007). Artificial Consciousness. Exeter (UK), Imprint Academic.
Chrisley, R. (1995). “Non-conceptual Content and Robotics: Taking Embodiment Seriously”. in Android Epistemology. K. Ford, C. Glymour and P. Hayes. Cambridge, AAAI/MIT Press: 141-166.
Chrisley, R. (2003). “Embodied artificial intelligence.” Artificial Intelligence 149: 131-150.
Cotterill, R. M. J. (1995). “On the unity of conscious experience.” Journal of Consciousness Studies 2: 290-311.
Dennett, D. C. (1991). Consciousness explained. Boston, Little Brown and Co.
Dretske, F. (2000). Perception, Knowledge and Belief. Cambridge, Cambridge University Press.
Edelman, G. M. and G. Tononi (2000). A Universe of Consciousness. How Matter Becomes Imagination. London, Allen Lane.
Franklin, S. (2003). “IDA: A Conscious Artefact?” in Machine Consciousness. O. Holland. Exeter (UK), Imprint Academic.
Haikonen, P. O. (2003). The Cognitive Approach to Conscious Machines. Exeter (UK), Imprint Academic.
Harnad, S. (1990). “The Symbol Grounding Problem.” Physica D (42): 335-346.
Harnad, S. (1995). “Grounding symbolic capacity in robotic capacity”. in "Artificial Route" to "Artificial Intelligence": Building Situated Embodied Agents. L. Steels and R. A. Brooks. New York, Erlbaum.
Hesslow, G. (2002). “Conscious thought as simulation of behaviour and perception.” Trends in Cognitive Sciences 6 (6): 242-247.
Hesslow, G. (2003). Can the simulation theory explain the inner world? Lund (Sweden), Department of Physiological Sciences.
Holland, O. (2003). Machine Consciousness. Exeter (UK), Imprint Academic.
Jennings, C. (2000). “In Search of Consciousness.” Nature Neuroscience 3 (8): 1.
Kuipers, B. (2005). “Consciousness: drinking from the firehose of experience”. in National Conference on Artificial Intelligence (AAAI-05).
Manzotti, R. (2005). “The What Problem: Can a Theory of Consciousness be Useful?” in Yearbook of the Artificial. P. Lang. Berna.
Manzotti, R. (2006). “An alternative process view of conscious perception.” Journal of Consciousness Studies 13 (6): 45-79.
Manzotti, R. (2007). “From Artificial Intelligence to Artificial Consciousness”. in Artificial Consciousness. A. Chella and R. Manzotti. London, Imprint Academic.
McCarthy, J. (1995). “Making Robot Conscious of their Mental States”. in Machine Intelligence. S. Muggleton. Oxford, Oxford University Press.
McDermott, D. (2001). Mind and Mechanism. Cambridge (Mass), MIT Press.
Minsky, M. (1991). “Conscious Machines”. in Machinery of Consciousness, National Research Council of Canada.
Minsky, M. (2006). The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York, Simon & Schuster.
Noë, A. (2004). Action in Perception. Cambridge (Mass), MIT Press.
O'Regan, J. K. and A. Noë (2001). “A sensorimotor account of vision and visual consciousness.” Behavioral and Brain Sciences 24 (5).
Revonsuo, A. (1995). “Consciousness, dreams, and virtual realities.” Philosophical Psychology 8: 35-58.
Rockwell, T. (2005). Neither ghost nor brain. Cambridge (Mass), MIT Press.
Sanz, R. (2005). “Design and Implementation of an Artificial Conscious Machine”. in IWAC2005, Agrigento.
Shanahan, M. P. (2005). “Global Access, Embodiment, and the Conscious Subject.” Journal of Consciousness Studies 12 (12): 46-66.
Sloman, A. (2003). “Virtual Machines and Consciousness.” Journal of Consciousness Studies 10 (4-5).
Stein, B. E. and M. A. Meredith (1999). The merging of the senses. Cambridge (Mass), MIT Press.
Taylor, J. G. (2002). “Paying attention to consciousness.” Trends in Cognitive Sciences 6 (5): 206-210.
Ziemke, T. (2001). “The Construction of 'Reality' in the Robot: Constructivist Perspectives on Situated Artificial Intelligence and Adaptive Robotics.” Foundations of Science 6 (1-3): 163-233.