What Isaac Asimov’s Robbie Teaches About AI and How Minds ‘Work’


In Isaac Asimov’s classic science fiction story “Robbie,” the Weston family owns a robot who serves as a nursemaid and companion for their precocious preteen daughter, Gloria. Gloria and the robot Robbie are friends; their relationship is affectionate and mutually caring. Gloria regards Robbie as her loyal and dutiful caretaker. However, Mrs. Weston becomes concerned about this “unnatural” relationship between the robot and her child and worries about the possibility of Robbie causing harm to Gloria (despite its being explicitly programmed not to do so); it is clear she is jealous. After several failed attempts to wean Gloria off Robbie, her father, exasperated and worn down by the mother’s protestations, suggests a tour of a robot factory—there, Gloria will be able to see that Robbie is “just” a manufactured robot, not a person, and fall out of love with it. Gloria must come to learn how Robbie works, how he was made; then she will understand that Robbie is not who she thinks he is. This plan does not work. Gloria never learns how Robbie “really works,” and in a plot twist, Gloria and Robbie become even better friends. Mrs. Weston, the spoilsport, is foiled yet again. Gloria remains “deluded” about who Robbie “really is.”

What is the moral of this tale? Most importantly, that those who interact and socialize with artificial agents, without knowing (or caring) how they “really work” internally, will develop distinctive relationships with them and ascribe to them those mental qualities appropriate for their relationships. Gloria plays with Robbie and loves him as a companion; he cares for her in return. There is an interpretive dance that Gloria engages in with Robbie, and Robbie’s internal operations and constitution are of no relevance to it. When the opportunity to learn such details arises, further evidence of Robbie’s functionality (after he saves Gloria from an accident) distracts Gloria and prevents her from learning any more.

Philosophically speaking, “Robbie” teaches us that in ascribing a mind to another being, we are not making a statement about the kind of thing it is, but rather, revealing how little we understand about how it works. For instance, Gloria thinks Robbie is intelligent, but her parents think they can reduce its seemingly intelligent behavior to lower-level machine operations. To see this more broadly, note the converse case, where we ascribe mental qualities to ourselves that we are unwilling to ascribe to programs or robots. These qualities, like intelligence, intuition, insight, creativity, and understanding, have this in common: we do not know what they are. Despite the extravagant claims often bandied about by practitioners of neuroscience and empirical psychology, and by sundry cognitive scientists, these self-directed compliments remain undefinable. Any attempt to characterize one employs the others (“true intelligence requires insight and creativity,” or “true understanding requires insight and intuition”) and engages in, nay requires, extensive hand-waving.

But even if we are not quite sure what these qualities are or what they bottom out in, whatever the mental quality, the proverbial “educated layman” is sure that humans have it and machines like robots do not—even when machines act as we do, produce the same products that humans do, and occasionally replicate human feats said to require intelligence, ingenuity, or whatever else. Why? Because, like Gloria’s parents, we know (thanks to being informed by the system’s creators in popular media) that “all they are doing is [table lookup / prompt completion / exhaustive search of solution spaces].” Meanwhile, the mental attributes we apply to ourselves are so vaguely defined, and our ignorance of our mental operations (at present) so profound, that we cannot say “human intuition (insight or creativity) is just [fill in the blanks with banal physical activity].”

Current debates about artificial intelligence, then, proceed the way they do because whenever we are confronted with an “artificial intelligence,” one whose operations we (think we) understand, it is easy to respond quickly: “All this artificial agent does is X.” This reductive description demystifies its operations, and we are therefore sure it is not intelligent (or creative or insightful). In other words, those beings or things whose internal, lower-level operations we understand and can point to and illuminate are merely operating according to known patterns of banal physical operations, while those seemingly intelligent entities whose internal operations we do not understand are capable of insight and understanding and creativity. (Resemblance to humans helps too; we more readily deny intelligence to animals that do not look like us.)

But what if, like Gloria, we did not have such knowledge of what some system or being or object or extraterrestrial is doing when it produces its apparently “intelligent” answers? What qualities would we ascribe to it to make sense of what it is doing? This level of incomprehensibility is perhaps rapidly approaching. Witness the perplexed reactions of some ChatGPT developers to its supposedly “emergent” behavior, where no one seems to know just how ChatGPT produced the answers it did. We could, of course, insist that “all it’s doing is (some kind of) prompt completion.” But then, we could just as well say of humans, “It’s all just neurons firing.” Neither ChatGPT nor humans would make sense to us that way.

The evidence suggests that if we were to encounter a sufficiently complicated and interesting entity that appears intelligent, but whose workings we do not know and about which we cannot utter our usual dismissive line, “All x does is y,” we would start using the language of “folk psychology” to govern our interactions with it, to understand why it does what it does, and, importantly, to try to predict its behavior. By historical analogy, when we did not know what moved the ocean and the sun, we granted them mental states. (“The angry sea believes the cliffs are its mortal foes.” Or “The sun wants to set quickly.”) Once we knew how they worked, thanks to our growing knowledge of the physical sciences, we demoted them to purely physical objects. (A move with disastrous environmental consequences!) Similarly, once we lose our grasp on the internals of artificial intelligence systems, or grow up with them without knowing how they work, we might ascribe minds to them too. This would be a matter of pragmatic decision, not discovery, for that might be the best way to understand what they do and why.

