Silicon Souls and Moral Circuits

by Hallie Gleeson

I’ve been half-expecting the robot revolution for years. Science fiction, from cheesy older films like The Empire Strikes Back to blockbusters like The Matrix, has both desensitized me to the novelty of artificial intelligence and primed me to expect that inorganic sentience was more of a “when” than an “if.” As initially unsettling as that idea is (we’ve all seen WALL-E, right?), I am no longer so discombobulated by the concept of a machine that can think.

A decade ago, I firmly believed that artificial intelligence was a technology confined to movies and books. Now, not a day goes by that I don’t hear AI referenced in some way. Everyone is jumping on the craze: a hundred thousand tech-preneurs, refrigerator manufacturers, a host of billionaires, and (as is especially relevant to Centre) ordinary students.

The curve of technological advancement no longer looks merely exponential; some futurists once predicted that the sum of human knowledge would double every twelve hours by 2020. I’m not sure what the current rate is, but I believe it’s a safe bet that our standards for information processing have been completely altered.

In our increasingly busy and attention-demanding world, there’s simply so much to do, so much to consider, that mundane tasks get deemed a lesser priority. We must ask ourselves what trade-offs we are willing to make. The integration of personal computers into our workplaces and leisure time has been both a struggle and a success. Though screen fatigue, blue light, and effects on dopamine pathways are common concerns, most people would conclude that their computers certainly make their lives easier. In any case, we can no longer live without them.

Our reliance on personal technology has sparked concern over how further innovations will be ushered into society. The issue is closely intertwined with a classical philosophical conundrum: what does it mean to be conscious? To be intelligent? To be human? This, of course, is a question for the ages. We, as children of the Information Age, will have to grapple with how we answer it as no generation ever has before.

I was quick to investigate ChatGPT. Before plagiarism allegations began to make national headlines, before professors restricted its use in their syllabuses, before a hefty vocabulary was attributed to an LLM rather than a well-read logophile, I had to know whether there was more to ChatGPT than the predictive text of iMessage I’d come to appreciate. I opened my laptop, pulled up the site, and typed in a single message. Then another, and another, and before I knew it, I’d spent nearly an hour “conversing” with ChatGPT. I asked it for book recommendations from the 19th century, for opinions on controversial characters in niche fandoms, and for a resolution to a plot hole in a story that I’d been pondering for a while. I asked it for writing advice, for stoichiometry calculations, and then I asked it how it felt about artificial intelligences such as Jane (Ender’s Game) and the Thunderhead (Arc of a Scythe). I received a diplomatic yet clearly canned response. ChatGPT summarized aspects of each intelligence’s character but emphasized that it, as a large language model, was not the same. We exchanged a few more replies, then I asked ChatGPT to name itself. It refused, offering an explanation that had likely been prepared by the team at OpenAI in anticipation that many early adopters, as I was doing, would attempt to scratch beneath the surface-level responses in search of a conscious mind.

Shortly after this initial encounter, ChatGPT entered the popular awareness. My trigonometry teacher offered his thoughts, entrepreneurs seized the opportunity to corner the market for automatic email-reply programs, and teens marveled at a splendid new method of avoiding the dreary work of an English essay. The education world wasn’t turned precisely upside down, but administrators and educators cracked down on its use, some requiring rough-draft submissions, others advocating for more in-class assignments. A couple of years later, these trends continue as professors and teachers search for a balance between student learning and a Pandora’s box that can never be closed again.

As ChatGPT-derived writings, Midjourney artworks, and other computer-generated products have become more commonplace and less conspicuous, the debate over whether to embrace “artificial intelligence” has grown heated. (I must add that the term has lost meaning as it has become a buzzword.)

Reflecting on this today, my feelings remain mixed. I began this article nearly a year ago, and much has changed since then. Legislation regulating the use of AI is being drafted even as national scandals and debates over ChatGPT-generated graphics consume newspaper headlines.

This is all part of a larger debate surrounding what the purpose of machines and robotics ought to be. Who should benefit?

I believe that we should let the humans do the living. What makes humanity so special is our ability to connect, to tell stories, and to move in tandem with each other. Replacing that with a heap of hastily constructed code will do no one any good.

Nevertheless… I wouldn’t mind an R2-D2 sort of pal, and we’ve all gotten way too attached to inanimate objects before, right? We, as humans, give meaning to the information in the output.

We’re designing our future right now. What should that look like?
