I've been fairly positive on the advent of AI and have said openly, here and in other forum threads, that I believe it is coming fast. But I've recently become a bit pessimistic about the possibility of self-driving cars in the near future. The issue is that recent work in neuropsychology on consciousness suggests AI driving is unlikely to meet the promise many imagine, myself included as recently as a few months ago.
We've long thought of consciousness as a kind of 'ongoing record of our awareness of the world'. But that description was difficult to test and didn't fit a good bit of the evidence. A revised theory has emerged relatively recently: consciousness appears to emerge from the integration of multi-sensory inputs into an interpretable whole. That whole is built from our experience of the world, and by comparing current input against that experience we can infer the near future, hidden information, likely reactions to our actions, etc.* Consciousness is the ongoing, continual meta-experience of comparing our current experience with our past experiences and our inferences about the future. That meta-experience is, essentially, our personal history and sense of self... our conscious knowledge of who we are, in this moment, extending back in time and into the future. These new theories are being put to the test by blocking certain senses and noting how that interrupts conscious memory.
We also have 'theory of mind': we make inferences about other people's behavior based on the situation and what we know of their dispositions, with the situation being a strong driver of behavior. We do a lot of communicating with, and inferring from, other drivers through non-verbal behavior. We see the person in the car ahead on their cell phone, or in animated discussion with a passenger, and we know, implicitly, that their attention is not fully on the road and that we should be a bit more alert to the possibility of an unexpected movement from them. We keep a closer eye on that 'point of risk'. Or we have a moment of eye contact with the person walking across the street. We know they saw us in the car, and they know they are safe to cross because they know we saw them.
Therefore, we are not simply looking at the world as a collection of objects, but as a collection of linked, hidden, and interacting objects that includes ourselves. We don't just see a car at the crossing street's stop sign at a 4-way stop, having arrived a couple of seconds after we arrived at ours. We also see the driver's facial expression and interpret whether they appear 'aware' of us, intend to move immediately, intend to wait, seem to want to 'break the 4-way rule', etc. We 'know'. We are conscious of all this in a way that we are unlikely to be able to code into software and hardware, at least for a long time yet to come.
That largely rules out the essential transition phase, in which automated and human-driven vehicles must share the road. If ALL vehicles were AI and communicating with each other, and all of us were wearing smart glasses that told us a car has 'seen' us and is attending to our crossing the street, etc., then an AI world of vehicles becomes possible, because the machines would be doing explicitly the communication that we now do non-verbally. Until then, we are stuck behind meat brains that do certain things much better than computer brains, but that also make mistakes others must account for.
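To make that machine-side handshake concrete, here is a minimal sketch in Python of what an explicit 'I've seen you' message might look like as data. Everything in it is hypothetical: the message name, fields, and IDs are invented for illustration and don't come from any real vehicle-to-pedestrian standard.

```python
# Hypothetical sketch only: message names and fields are invented for
# illustration and reflect no real V2X/V2P standard.
from dataclasses import dataclass
import time

@dataclass
class SeenAcknowledgment:
    vehicle_id: str      # which car is 'speaking'
    pedestrian_id: str   # which pedestrian it has detected
    intent: str          # what the car plans to do, e.g. "yielding"
    timestamp: float     # when the detection was made

def acknowledge_pedestrian(vehicle_id: str, pedestrian_id: str) -> SeenAcknowledgment:
    """The car's half of the 'eyes meeting eyes' handshake: broadcast that
    it has seen the pedestrian and intends to yield."""
    return SeenAcknowledgment(vehicle_id, pedestrian_id, "yielding", time.time())

def render_on_glasses(ack: SeenAcknowledgment) -> str:
    """The pedestrian's smart glasses turn the message into the reassurance
    we currently get from a glance: 'that car saw me'."""
    return f"Car {ack.vehicle_id} has seen you and is {ack.intent}."

ack = acknowledge_pedestrian("car-42", "pedestrian-7")
print(render_on_glasses(ack))  # -> Car car-42 has seen you and is yielding.
```

The point of the sketch is that this is a whole new explicit protocol standing in for something humans get for free from a half-second of eye contact.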
Yes, I know they are training computers to recognize human facial expressions... but while there are six major universal facial expressions, we express tremendously more intent and emotion than that. That tech lags human ability by leaps and bounds, and its integration into automobile driving tech has probably barely been considered as of yet.
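As a toy illustration of how coarse that vocabulary is, here is a sketch of a recognizer limited to the six Ekman categories. The function and labels are invented for illustration, not any real facial-coding API; the point is simply that the driving-relevant states described above don't even appear in the label set.

```python
# Toy sketch of the limitation described above: a recognizer whose entire
# output vocabulary is the six 'universal' Ekman expressions. Everything
# here is invented for illustration; real systems and their APIs differ.
from enum import Enum

class BasicExpression(Enum):
    ANGER = "anger"
    DISGUST = "disgust"
    FEAR = "fear"
    HAPPINESS = "happiness"
    SADNESS = "sadness"
    SURPRISE = "surprise"

def classify_expression(face_image) -> BasicExpression:
    """Stand-in for a trained model: whatever it sees, it can only answer
    with one of six labels."""
    raise NotImplementedError("illustrative stub, no model behind it")

# The driving-relevant mental states discussed in this thread have no
# corresponding label at all:
driving_relevant = [
    "aware of me at the 4-way stop",
    "about to break the 4-way rule",
    "attention down on a phone",
    "waving me through",
]
labels = {e.value for e in BasicExpression}
for state in driving_relevant:
    assert state not in labels  # the classifier has no category for these
```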
And even then, the thinking is that linearly designed programming (even when it runs on parallel processors, the essence is still sequential) seems unlikely to capture the simultaneous processing of integrated information into a useful whole: one that filters out the irrelevant (the tree behind the fence) and pulls in the tiny but essential (the eyes of the driver in the other car directed downward to their phone).
*For instance, in child development we know that infants go through a stage in which an object that moves out of their field of vision is treated as having literally vanished. Take a toy and move it behind your back, and it is GONE to that child. They don't yet infer that it still exists, merely out of view. That shift, to seeing the toy as out of view but still there, is likely an essential step in the development of consciousness according to these new theories. (This is called object permanence, and it develops between about 4 and 7 months of age.)
Ever noticed that you don't experientially remember anything from before about age 3? Anything you 'remember' from before that is really just stuff you were told by others; you have no 'experience' of it to call on. That's also connected to this notion of consciousness. Before about age 3, we simply have not yet stored enough information about the world, developed sufficient recognition of self as distinct from other (that happens around age 18 months), etc., to begin to form a continual 'history of self', the essence of consciousness.