This is the first post in the “Perspectives” series, which will be a collection of shorter blogs sharing Enrique’s experiences or perspectives on the world. This post is more of an opinion, but it reflects Enrique’s often analytical approach to the world, which I understand is not atypical for individuals on the autistic spectrum with higher cognitive abilities. That analytical approach, combined with Enrique’s quite genuine care and concern for other people and for the human species in general, comes together in the assessment below of whether we, humans, should be pursuing the development of true artificial intelligence. This is a particularly intriguing discussion because developing artificial intelligence is something Enrique would be greatly interested in being involved in personally, if it weren’t for the issues he has identified through his analysis. I often wonder: how much better would our world be if there were more people like Enrique, who really did look at situations before they waded in, and restrained themselves appropriately?
What follows comes from our discussions of whether humans should continue working on the development of artificial intelligence, after reading “Do Androids Dream of Electric Sheep?” by Philip K. Dick as part of Enrique’s Language Arts program. In Enrique’s words….
Androids are stronger, smarter, and harder to destroy than humans. So far humans have not figured out how to put in emotions, or how to manage them. And we won’t, not soon, because we don’t understand how emotions work in people. Which means androids, or beings with artificial intelligence, given humans’ current abilities and understanding, cannot have empathy for others.
Empathy is what manages our society so we can all live together. In fact, empathy is so important that we have a name for people without it, who as a result act outside the control of our society. Psychopaths, or something like that? Right. And we are afraid of these people and put them in jail, because they lack the internal controls to be reliably safe for the rest of society.
So… we’re asking whether we should populate our world with a bunch of entities who would essentially be super-smart, super-strong, really-hard-to-destroy psychopaths. When we can’t even manage the psychopaths we already have. Really? It’s a wonder our species has survived.
And further…
We (humans) have a moral responsibility to treat everything around us with respect. Androids, having been created by humans, present us with a dilemma we are not yet capable of solving. Are they alive, or are they not? Since we created them, do we need to treat them as beings in their own right? We treat babies that way. We do not treat toasters that way. Or those fridges that do your grocery shopping for you. So are androids like babies, or like machines? Hmmmm…. This seems to be a debate that we, humans, are not yet capable of resolving. Which means we are in danger of treating living creatures as machines. Until we can conclusively resolve this question and behave accordingly, it would be irresponsible of us (humans) to bring these potentially living creatures into the world. We need to wait until we (humans) have evolved enough to take responsibility for our actions before engaging in those actions.
Well. There you go. A pretty in-depth analysis, I think. And just one of many examples I have seen from Enrique of how having a “disability” in one area, or many, does not preclude the ability to think, or perform, in other areas. It also shows how the ability to think or perform in particular areas does not necessarily correspond with ability in other areas, including those essential for everyday living. Something for the rest of us, who don’t experience such extremes, to think about….
