KTSW 89.9
By Sofia Psolka
Web Content Contributor
The year is 2038; the latest political and sociological turmoil: artificial intelligence has acquired sentience! Anthropomorphic androids walk the same streets as you, work the same jobs as you and even feel the same emotions as you… but do you treat them as you would a biological, organic human? What if a close friend of yours comes out as an android: would you view it as a tool, free to manipulate? Or would you build it up and support its aspirations?
These are a few of the ethical dilemmas I faced while playing Quantic Dream’s 2018 choose-your-own-adventure game, Detroit: Become Human. Given recent tech news, I may have to consider these questions more seriously.
Why are these AI dilemmas important?
Since Alan Turing’s “Imitation Game” framework, AI has fascinated logicians, computer scientists and sci-fi fans across the world. There has been a lot of trial and error, but humanity’s ravenous appetite for technological experimentation continues to triumph.
In June, Google suspended (and later fired) engineer Blake Lemoine after he claimed that its Language Model for Dialogue Applications (aka LaMDA) had gained sentience. However, the chatbot only (and I use “only” as if this isn’t insane enough) used a process known as “deep learning,” allowing it to cultivate thousands of speech patterns from the Internet and organize them in a way that mimics human dialogue. So I don’t find it completely unthinkable that Lemoine would declare LaMDA a self-aware entity.
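In other words, the chatbot was doing statistics on text. A heavily simplified, hypothetical sketch below (a word-level Markov chain, nothing like LaMDA’s actual neural architecture) shows how mere pattern-counting can produce dialogue-shaped output without any understanding behind it:

```python
# Toy illustration only: "learn" speech patterns by counting which word
# follows which in example sentences, then mimic dialogue by sampling.
# This is NOT how LaMDA works; it just shows pattern mimicry in miniature.
from collections import defaultdict
import random

def train(corpus):
    """Count word-to-next-word transitions from example sentences."""
    table = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
    return table

def mimic(table, start, length=5, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = [
    "i feel happy today",
    "i feel like a person",
    "do you feel happy",
]
table = train(corpus)
print(mimic(table, "i"))
```

Feed it enough conversational text and the output starts to sound like a person, even though nothing inside is "aware" of anything, which is exactly why a convincing chatbot is not proof of sentience.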
Well, on Aug. 28, David Ferrucci, known for leading the team behind IBM’s Watson computer, announced his mission to make AI reasoning possible via a system called Elemental Cognition. In a New York Times article, Dr. Ferrucci states that the goal is to create a “trusted ‘thought-partner’” capable of “making suggestions and explaining them.” If Dr. Ferrucci and his team are successful, the world of Detroit: Become Human may not be so distant; we might put a crowbar in Pandora’s box.
DALL-E 2 and Markus
Along with these major developments, OpenAI’s DALL-E 2 system was released, spawning numerous knockoffs, to the delight of college students. Although still in the testing phase, its implications could revolutionize our creative landscape. As a friend of mine messed around with a knockoff, sending me all her results, I recalled a particular scene from Detroit: Become Human.
One of the playable androids, Markus, is owned by fictional painter Carl Manfred. Manfred is a firm believer in promoting individuality among androids; thus, through art, he finds ways to challenge Markus’s ability to see beyond the physical world.
In the path I chose for Markus, he paints a picture based on the theme of “identity” in correlation with “androids.” The result is a portrait of Markus himself. His eyes are hard and focused, as if looking deeply at his own reflection. From his eyes, the painting gives way to blue, as if washed away by the blue blood androids bleed; his identity is ignored because of his body’s makeup.
At least, that’s my interpretation of it.
What if DALL-E 2 can create emotive artwork, like Markus? What if it makes us feel connected to the AI system? What would that say about the nature of AI?
Will we have to start considering the citizenship (the identity, the individuality) of something created by humans, for humans?
Is that so different from human children?
The Future
No one can say for sure what AI has in store for humanity. That’s what makes a game like Detroit: Become Human entertaining. It’s a glimpse of a possibility, without any true repercussions. (Unless, like me, you get wayyy too attached to the characters…)
So, I’ll end with a final question, the last one Detroit: Become Human asks in its survey:
“Do you think one day machines could develop consciousness?”
• Yes
• No
• Don’t know
Written by: Hannah Walls