Consciousness as Integrated Information: A Provisional Philosophical Critique

Anthony Peressini
Department of Philosophy, Marquette University
Email: anthony.peressini@marquette.edu

Journal of Consciousness Studies, 20, No. 1–2, 2013, pp. ??–??

Abstract

Giulio Tononi (2008) has offered his integrated information theory of consciousness (IITC) as a ‘provisional manifesto’. I critically examine how the approach fares. I point out some (relatively) internal concerns with the theory and then more broadly philosophical ones; finally I assess the prospects for IITC as a fundamental theory of consciousness. I argue that the IITC’s scientific promise does carry over to a significant extent to broader philosophical theorizing about qualia and consciousness, though not as directly as Tononi suggests, since the account is much more focused on the qualitative character of experience than on consciousness itself. I propose understanding it as ‘integrated information theory of qualia’ (IITQ), rather than of consciousness.

1. Consciousness as Integrated Information

Giulio Tononi (2008) has recently offered his integrated information theory of consciousness (IITC) as a ‘provisional manifesto’. I critically examine how the approach fares. I point out some (relatively) internal concerns in section 2 and then in section 3 some more broadly philosophical ones, and finally in section 4 I assess the prospects for integrated information (II) as a fundamental theory of consciousness. I argue that the IITC’s scientific promise does carry over to a significant extent to broader philosophical theorizing about qualia and consciousness, though not as directly as Tononi suggests, since the account seems much more focused on the qualitative character of experience than on consciousness itself.

The formal definition for the amount of integrated information in a system, φ, depends on the notion of relative entropy from modern information theory. Given a system, X, characterized by mechanism mech, where mech consists of n discrete states, x1, x2, ..., xn, one considers the probability distribution of its possible states, p(X(mech)) = {p1, p2, ..., pn}. One important distribution for X(mech) is its maximum entropy distribution (equivalent to the uniform distribution in many simple systems): p(X0(maxH)), where the subscript 0 indicates time t = 0; it would, for example, look like {1/4, 1/4, 1/4, 1/4} for n = 4. The maximum entropy distribution is the ‘zero point’ from which one measures information as ‘distance’. The distance measure between two probability distributions, p and q, is given by the Kullback-Leibler divergence H (also known as relative entropy), defined as: H[p || q] = Σi pi log(pi / qi). Tononi then defines the effective information (ei) generated by a mechanism in a particular state x1 at t = 1 as ei(X(mech, x1)) = H[p(X0(mech, x1)) || p(X0(maxH))], where this is to be understood as the information generated by the system’s mechanism and state at t = 1 about the system’s prior state at t = 0.[1] Finally, the integrated information, φ, for a system X in state x1 is the difference (measured by relative entropy) between the probability distribution generated by the system as a whole, p(X0(mech, x1)), and the probability distribution generated by X’s decomposition into the parts that leave the minimal information unaccounted for, denoted MIP: φ(X(mech, x1)) = H[p(X0(mech, x1)) || Πk p(M^k0(mech, μ^k1))], where the product ranges over the parts M^k of the MIP and μ^k1 is the state of part M^k at t = 1. Heuristically, this difference can be thought of as the information of the system not accounted for by its parts.

[1] In applying information theory, Tononi (2008) sets it up so that the informational focus is always backwards in time. Given a mechanism and current state, ‘the system’s mechanism and state constitute information (about the system’s previous state), in the classic sense of reduction of uncertainty or ignorance’ (ibid., p. 220). This decision seems both substantive and arbitrary with respect to consciousness. Why not frame the application of information theory in a forward direction, so that the mechanism and current state give information about the system’s next state? Perhaps this has something to do with how information theory is typically developed, but given that a finite state mechanism is assumed, it seems ‘forward probabilities’ would work as well. I return to this question below.
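By way of illustration, the following is a minimal numerical sketch of these quantities for an assumed toy system of two binary units that simply copy one another’s previous state. The repertoires are worked out by hand, only the single bipartition into the two units is considered, and any normalization used when searching over candidate partitions is omitted; the example is mine, not Tononi’s.

```python
import math

def kl(p, q):
    """Relative entropy H[p || q] = sum_i p_i log2(p_i / q_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy (assumed) system: two binary units A and B that copy each other's
# previous state, i.e. A at t+1 equals B at t, and B at t+1 equals A at t.
# Prior states are ordered (A0, B0) = 00, 01, 10, 11.
max_ent = [0.25, 0.25, 0.25, 0.25]   # p(X0(maxH)): the maximum-entropy repertoire

# Observing x1 = (A1, B1) = (1, 0) pins the prior state down to (0, 1),
# so the repertoire p(X0(mech, x1)) is a point mass on state 01.
whole = [0.0, 1.0, 0.0, 0.0]

# Effective information: distance of the generated repertoire from maxH.
ei = kl(whole, max_ent)              # 2.0 bits

# Under the bipartition {A} | {B}, each unit's input lies outside its part,
# so each part's state at t = 1 says nothing about its own prior state;
# both part repertoires are uniform and so is their product.
parts = [0.25, 0.25, 0.25, 0.25]

phi = kl(whole, parts)               # 2.0 bits: none of the information is
                                     # accounted for by the parts taken alone
print(f"ei = {ei} bits, phi = {phi} bits")
```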
Having set out this formal machinery, Tononi expresses his provisional manifesto as follows: the IITC proposes that consciousness is II, specifically that (1) the quantity of consciousness corresponds to the amount of II generated by a complex of elements, φ, and (2) the quality of experience is specified by the set of informational relationships generated within that complex.

The informational relationships are formally characterized by the properties of solids (polytopes) in the appropriate 2^n-dimensional space, where n is the number of elements in the mechanism (so that 2^n is the number of its possible states). This space is called Q-space, and the 2^n-dimensional solids are generated by how the probability distributions for the system change as a function of the connections/transitions in the mechanism. Tononi (2008, esp. pp. 224ff) develops many formal aspects of Q-space and shows how these neatly mirror pre-theoretic conceptions of the qualitative aspect of experience. For example, he develops a property of the q-arrows (sides) of solids in Q-space he calls entanglement. This formal property is argued to capture the notion of modes in qualitative experience (sight, sound, etc.). Another important feature is that qualia so construed in Q-space are context dependent in a way that parallels our sense that the subjective qualities of experience (qualia) depend on the broader ‘qualia context’: e.g. the particular red quale a subject experiences when viewing the red of a stop sign may well be different if it is part of a visual field that includes a high percentage of reds of varying shades. Finally, some shapes in Q-space seem to be elementary in that they cannot be further decomposed (they do not contain any more densely entangled sub-sub-modes), and these would seem to correspond to ‘what philosophers call a “quale” in the narrow sense — say a pure color like red, or a pain, or an itch...’ (ibid., p. 230).
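To give a concrete (and deliberately simplified) picture of what such a shape might look like, here is a hand-worked sketch for the same assumed toy copy system as in the earlier example, with observed state x1 = (1, 0). Each engaged subset of connections is paired with the repertoire it generates over the 2^2 = 4 prior states, and the collection of these points is the sort of object the quale-as-shape proposal describes; the connection labels and bookkeeping are my own illustration, not Tononi’s construction.

```python
# Each point of Q-space here is a probability distribution over the four prior
# states (A0, B0) = 00, 01, 10, 11; one point is generated per engaged subset
# of connections, given the observed state x1 = (A1, B1) = (1, 0).
q_shape = {
    # no connections engaged: nothing is specified (maximum entropy)
    (): (0.25, 0.25, 0.25, 0.25),
    # only "A reads B" engaged: A1 = 1 forces B0 = 1, A0 stays open
    ("B->A",): (0.0, 0.5, 0.0, 0.5),
    # only "B reads A" engaged: B1 = 0 forces A0 = 0, B0 stays open
    ("A->B",): (0.5, 0.5, 0.0, 0.0),
    # full mechanism engaged: the prior state is pinned down to 01
    ("B->A", "A->B"): (0.0, 1.0, 0.0, 0.0),
}

# On the quale-as-shape reading, the experience corresponds to the geometry of
# these points (and the q-arrows between them), not to any single point alone.
for connections, point in q_shape.items():
    print(list(connections), point)
```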
To sum up the account (in Tononi’s words): ‘Perhaps the most important notion emerging from this approach is that an experience is a shape in Q. According to the IITC, this shape completely and univocally specifies the quality of the experience’ (ibid., p. 228, original italics). I move on now to an appraisal of the IITC approach.

2. Internal Concerns

In this section I present concerns with the II approach that have to do with the internal cogency of II as a theory of a fundamental property of the brain. These concerns are, first, whether the notion of II is well defined in a formal sense for an arbitrary system and, second, how the (notorious) difficulty of interpreting probability affects the II account.

2.1 Is Integrated Information Well Defined?

The notion of a good definition in mathematics (a term being well defined) means roughly that the defined operation or concept or entity is unambiguous. For example, with cosets of a group relative to a subgroup, an operation on the cosets is well defined only if its result does not depend on which representatives of the cosets are chosen.
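A toy check of this idea (my own illustration): addition of cosets of 3Z in Z, defined via representatives, yields the same coset no matter which representatives are chosen, whereas an operation whose output varies with the representative fails to be well defined.

```python
def coset(a, n=3):
    """Identify the coset a + nZ by its canonical representative a mod n."""
    return a % n

def coset_sum(a, b, n=3):
    """'Add' two cosets using arbitrarily chosen representatives a and b."""
    return coset(a + b, n)

# Different representatives of [1] and [2] always yield the same coset sum,
# so the operation is well defined.
reps_of_1 = [1, 4, 7, -2]
reps_of_2 = [2, 5, 8, -1]
assert len({coset_sum(a, b) for a in reps_of_1 for b in reps_of_2}) == 1

# By contrast, mapping [a] to a // 3 gives different answers for different
# representatives of the same coset, so it defines nothing on the cosets.
assert len({a // 3 for a in reps_of_1}) > 1
print("coset addition is well defined; a // 3 is not")
```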
