“A Pragmatic View of Paul Bloom and Sam Harris on Conscious Robots”

The article “What is Wrong with Cruelty to Robots?,” published in The New York Times on April 23, 2018, was written by Paul Bloom, a psychologist, and Sam Harris, a philosopher and neuroscientist (Bloom & Harris, 2018). It is an excellent analysis of the issues humanity will face in developing conscious robots, a prospect that is becoming a concern for the scientific community around the world. Though building a conscious robot is a majestic scientific and technological challenge (Chella et al., 2019), Bloom and Harris argued that it seems only a matter of time before we either emulate the workings of the human brain in our computers or build conscious minds of another sort. Advances in artificial intelligence therefore shape our society by raising concerns among scientists and modern philosophers about the moral issue that conscious robots could be used as modern enslaved people.

Even though there is no universally accepted definition of consciousness, an overview of the different meanings of the word reveals a distinction between consciousness as experience and consciousness as function (Chella et al., 2019). From the point of view of experience, a subject is conscious when it has visual experiences, bodily sensations, mental images, and emotions; from the perspective of function, a conscious subject can process globally available information, is introspectively aware of itself, generates inner speech, and can anticipate perceptual and behavioral activities (Chella et al., 2019).

The philosophical question elaborated on in the article is whether, one day, we can create thinking robots that are living beings with beliefs, desires, basic needs, and, most notably, the potential to suffer; and, if so, what would hinder us from treating them however we please. The authors referred readers to the popular HBO show Westworld, in which conscious robots are built through a combination of artificial intelligence and genetic engineering. They invoked the show not to illustrate how our future life will look with conscious robots around us, but to elaborate on the danger humanity could face while intermingling with artificially generated consciousness. In the show, people let loose their darkest impulses, partaking in the torture, rape, and murder of robots, including robots indistinguishable from human children, in acts of pure sadism and without risk of retaliation. Bloom and Harris argued that this issue extends beyond sadism: as AI improves, humanity could run the moral peril of building machines that only brutes would use as they please.

To predict our future relationship with conscious machines, Bloom and Harris cited Kant’s odd views about animals, which he saw as mere things empty of moral value; still, he insisted on their proper treatment, arguing that “For he who is cruel to animals becomes hard also in his dealings with men” (Gruen, 2017). Bloom and Harris suggested the same treatment for lifelike robots: even if humans could be confident that such robots were not conscious and could not suffer, torturing them would likely harm the torturer and, ultimately, the other people in his life. However, recent research from Northeastern University published in the journal Society & Animals showed that we humans are more likely to feel empathy for victims we consider helpless and unable to defend themselves, much like an infant or toddler; we view dogs in the same way, as ultimately defenseless and in need of assistance (Coren, 2017). From this perspective, it is logical to reason that if we do not see robots as helpless creatures, conscious or not, we may not feel the empathy to treat them like other humans. Moreover, the claim that enacting violence in a virtual environment desensitizes people to violence in the real world has proved to be weak (Kühn et al., 2018), since research suggests that even children as young as three years old can reliably distinguish fictional from factual events in at least a fundamental sense (Ferguson, 2010). In addition, as video games have become progressively more realistic, the rate of violent crime has dropped, though other scholars’ work (Engelhardt et al., 2011) suggested otherwise.

Bloom and Harris (2018) wrote the article on the assumption that humans could, in the future, own robots perfectly identical to men, women, and children, under the premise that the law would permit us to interact with them however we please; together, the assumption and the premise frame a justifiably pessimistic view of the problem in a diminishing narrative. This combination created the perfect scenario for the authors to speculate within the unconfined academic landscape of the theory of morality, focusing on the rightness or wrongness of our actions and their consequences for humans and robots, which is somewhat idealistic. However, as the marvels of technological advance challenge us with new real or perceived risks, whether about individual autonomy and privacy or concerns relating to community or moral values, the legal system must respond (Mandel, 2007). Just as societies have created traditionally authoritative institutions to regulate human behavior toward animals, the climate, nuclear power, and each other, they could develop institutions capable of controlling our actions toward conscious robots and vice versa. Mandel (2007) argued that new legal issues created by technological advances often raise questions at the forefront of scientific knowledge and hence may not only be incomprehensible to the average person but not even well understood by scientific experts in the related field. Nevertheless, in the face of limited knowledge and understanding of technologies and the disagreements they generate, legislative, executive, administrative, and judicial actors must continue to establish and rule on the laws that govern such uncharted disputes (Mandel, 2007), as they have done in the past. In the end, against all odds, we humans befriend, love, and protect one another, and we can risk our own lives to save our loved ones (Georgieva, 2019). If a robot wins our hearts, we will risk our lives to protect it.

References

Bloom, P., & Harris, S. (2018, April 23). It’s Westworld. What’s wrong with cruelty to robots? The New York Times. https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots

Chella, A., Cangelosi, A., Metta, G., & Bringsjord, S. (2019). Editorial: Consciousness in humanoid robots. Frontiers in Robotics and AI, 6. https://doi.org/10.3389/frobt.2019.00017

Coren, S. (2017, November 7). Why do people sometimes care more about dogs than humans … Canine Corner. Psychology Today. Retrieved January 1, 2022, from https://www.psychologytoday.com/us/blog/canine-corner/201711/why-people-sometimes-care-more-about-dogs-humans

Engelhardt, C. R., Bartholow, B. D., Kerr, G. T., & Bushman, B. J. (2011). This is your brain on violent video games: Neural desensitization to violence predicts increased aggression following violent video game exposure. Journal of Experimental Social Psychology, 47(5), 1033–1036.

Ferguson, C. J. (2010). Blazing angels or resident evil? Can violent video games be a force for good? Review of General Psychology, 14(2), 68–81.

Georgieva, N. (2019). Robots as modern enslaved people. Papeles: Revista de la Facultad de Educación Universidad Antonio Nariño, 11(21), 68–74.

Gruen, L. (2017, August 23). The moral status of animals. Stanford Encyclopedia of Philosophy. Retrieved December 30, 2022, from https://plato.stanford.edu/entries/moral-animal/

Kühn, S., Kugler, D. T., Schmalen, K., Weichenberger, M., Witt, C., & Gallinat, J. (2018). Does playing violent video games cause aggression? A longitudinal intervention study. Molecular Psychiatry, 24(8), 1220–1234. https://doi.org/10.1038/s41380-018-0031-7

Mandel, G. N. (2007). History lessons for a general theory of law and technology. Minnesota Journal of Law, Science & Technology, 8, 551.