Yoshua Bengio on Integrated Information Theory
At NeurIPS this year Yoshua Bengio gave a great talk on research directions towards general intelligence.
While the deep learning paradigm has made major progress this century beyond classical symbolic AI, it has not accomplished its original goal: high-level semantic representations grounded in lower-level ones. Such representations would enable higher-level cognitive tasks like systematic generalization of concepts and properties, reasoning about causality, and factorizing knowledge into small exchangeable pieces.
Bengio thinks that there are pathways from current deep learning to high-level semantics that do not require a return to, or an interleaving with, classical symbolic approaches. One element of the picture he paints is that if we want to get to high-level representations, we should drop the “disentangled factors” goal (the assumption that each variable should be independent) and instead think of thoughts as best represented by a sparse factor graph.
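To make the sparsity concrete, here is a minimal sketch (my own toy illustration, not Bengio’s formulation; the variable and factor names are hypothetical) of the difference between fully disentangled factors, a sparse factor graph, and a fully entangled model, measured by how many variable pairs the factors couple:

```python
# Toy contrast between disentangled factors, a sparse factor graph,
# and a fully entangled model. Variable/factor names are hypothetical.
import itertools

variables = ["agent", "action", "object", "location"]

# Sparse factor graph: each factor's scope is a small subset of the
# variables, so knowledge factorizes into small exchangeable pieces.
sparse = {
    "agent_does_action": ("agent", "action"),
    "action_on_object": ("action", "object"),
    "object_at_location": ("object", "location"),
}

# Fully entangled: one factor whose scope is every variable at once.
dense = {"everything": tuple(variables)}

# Fully disentangled: every variable independent, no coupling factors.
disentangled = {}

def pair_density(graph):
    """Fraction of variable pairs coupled by at least one factor."""
    linked = set()
    for scope in graph.values():
        linked.update(itertools.combinations(sorted(scope), 2))
    total = len(list(itertools.combinations(variables, 2)))
    return len(linked) / total

print(pair_density(disentangled))  # 0.0 -> independence assumption
print(pair_density(sparse))        # 0.5 -> a few local couplings
print(pair_density(dense))         # 1.0 -> everything interacts
```

The sparse regime sits between the two extremes: variables do interact, but each interaction involves only a handful of them, which is what lets pieces of knowledge be swapped in and out independently.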
This led a questioner to ask how this model can be reconciled with IIT, which takes integrated information as the measure of consciousness.
Questioner:
The other major theory of consciousness is of course IIT, which measures consciousness by this phi quantity, essentially a measure of the mutual information of the parts of a system: the higher the mutual information, the more consciousness you have. That seems like the polar opposite of your sparse factor graph hypothesis. How do you reconcile the two?
Yoshua Bengio:
I don’t. I think the IIT theory is more on the mystical side of things and attributes consciousness to any atom in the universe. I’m more interested in the kind of consciousness that we can actually see in brains. […] There is a quantity that is being measured [in IIT] but I don’t think that it is related to the kind of computational abilities that I’ve been talking about.
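For what it’s worth, the questioner’s gloss of phi as mutual information between the parts of a system is itself a simplification; actual IIT defines phi over cause-effect structures and minimum-information partitions. But the simplified reading is easy to compute, and a rough sketch may clarify what kind of quantity is at stake (the toy joint distribution and the two-part split below are my own assumptions):

```python
# Mutual information between two "parts" A and B of a toy system,
# the simplified reading of phi used in the question above.
# Real IIT phi is defined differently; this is only an illustration.
import math

# Hypothetical joint distribution over two binary parts: the parts
# are strongly correlated, so their mutual information is high.
p_joint = {
    (0, 0): 0.45, (0, 1): 0.05,
    (1, 0): 0.05, (1, 1): 0.45,
}

def mutual_information(p):
    """I(A;B) in bits for a joint distribution over pairs (a, b)."""
    pa, pb = {}, {}
    for (a, b), pr in p.items():
        pa[a] = pa.get(a, 0.0) + pr
        pb[b] = pb.get(b, 0.0) + pr
    return sum(
        pr * math.log2(pr / (pa[a] * pb[b]))
        for (a, b), pr in p.items()
        if pr > 0
    )

print(f"I(A;B) = {mutual_information(p_joint):.3f} bits")  # ~0.531
```

On this simplified reading, high coupling between parts means high phi, which is exactly the tension the questioner points at: a sparse factor graph is valuable precisely because its parts are mostly decoupled.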