Suppose that while you walked and talked, you could watch the signals that traverse your brain. Would they make any sense to you? Many experimenters have used biofeedback devices to make such signals audible and visible. This often helps a person learn to control various muscles and glands that are not usually under conscious control. But it never leads to comprehending how those hidden circuits work.
Scientists encounter similar problems when they use electronic instruments to tap into brain signals. This has led to a good deal of knowledge about how nervous systems work — but those insights and understandings never came from observation by itself. One cannot use data without having at least the beginnings of some theory or hypothesis. Even if we could directly sense all the interior details of mental life, it wouldn't tell us how to understand them. It might even make that enterprise more difficult, by overwhelming our capacity to interpret what we see. The causes and functions of what we observe are not themselves things we can observe.
Where do we get the ideas we need? Most of our concepts come from the communities in which we're raised. Even the ideas we get for ourselves come from communities — this time, the ones inside our heads. Brains don't manufacture thoughts in the direct ways that muscles exert forces or ovaries make estrogens; instead, to get a good idea, one must engage huge organizations of submachines that do a vast variety of jobs. Each human cranium contains hundreds of kinds of computers, developed over hundreds of millions of years of evolution, each with a somewhat different architecture. Each specialized agency must learn to call on other specialists that can serve its purposes. Certain sections of the brain distinguish the sounds of voices from other sorts of sounds; other specialized agencies distinguish the sights of faces from other types of objects. No one knows how many different such organs lie in our brains. But it is almost certain that they all employ somewhat different types of programming and forms of representation; they share no common language code.
If a mind whose parts use different languages and modes of thought attempted to look inside itself, few of those agencies would be able to comprehend one another. It is hard enough for people who speak different human languages to communicate, and the signals used by different portions of the mind are surely even less similar. If agent P asked any question of an unrelated agent Q, how could Q sense what was asked, or P understand its reply? Most pairs of agents can't communicate at all.
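The predicament of agents P and Q might be pictured, very loosely, as two software modules whose message formats share no common code. The sketch below is purely illustrative; the agent names and encodings are invented, not anything from the text:

```python
# Illustrative sketch (invented names): two "agents" whose internal
# representations are incompatible, so neither can parse the other.

def agent_p_ask(topic):
    # P encodes a query in its own convention: a positional tuple.
    return ("QUERY", topic)

def agent_q_answer(message):
    # Q only understands keyword-style messages of the form {"ask": ...}.
    if not isinstance(message, dict) or "ask" not in message:
        return None  # Q cannot even sense that a question was asked.
    return {"answer": f"data about {message['ask']}"}

# P's question is well-formed by P's own conventions, yet opaque to Q:
result = agent_q_answer(agent_p_ask("faces"))
# result is None — the two agents fail to communicate at all.
```

The point of the sketch is only that each module's message is perfectly meaningful under its own conventions; the failure lies in the absence of any shared representation between them.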