Artificial Consciousness
We’ve discussed in class the improbability (or the impossibility) of endowing a machine with actual intelligence. Yes, computers can run algorithms, and more recent software and hardware can even “learn,” so to speak. That is, A.I. machines have programming that allows them to adjust their own algorithms toward a specific end as new information is fed into them. This argument, the supposed infeasibility of a man-made conscious machine, got me thinking. Can someone actually create a sentient mechanism?
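To make that “learning” a little more concrete, here is a minimal, purely illustrative sketch (in Python, with made-up data and parameter names) of a program adjusting its own parameters toward a goal as examples are fed in. It is a toy under those assumptions, not a claim about how any particular A.I. product works.

```python
# Toy sketch: a model nudges its own parameters toward a goal each time
# a new example arrives. All names and data below are illustrative only.

def learn_step(weight, bias, x, y, lr=0.01):
    """Update the parameters to reduce error on a single example."""
    prediction = weight * x + bias
    error = prediction - y
    # Gradient descent: shift each parameter against the error it caused.
    weight -= lr * error * x
    bias -= lr * error
    return weight, bias

# The "specific end" here is matching y = 2x + 1. The program never sees
# that rule; it only sees examples, and its own updates carry it there.
w, b = 0.0, 0.0
for _ in range(1000):
    for x in range(-5, 6):
        w, b = learn_step(w, b, x, 2 * x + 1)

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```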
The question dredges up a lot of philosophical debate, and scholars offer many different perspectives on the issue. John Searle, for one, argues that machines cannot demonstrate intentionality and thus cannot have consciousness, though this point is disputed by philosophers like Ned Block. In contrast, Harry Haladjian and Carlos Montemayor take a more moderate approach. They admit the possibility that machines can harbor a limited awareness, one governed by “access consciousness,” the capacity to engender a mental imprint of an object or action that one can recall as needed. Machines, however, lack “phenomenal consciousness,” an experiential awareness of one’s surroundings, not to mention “representational consciousness,” which involves forming associations and prototypes of things, and self-awareness.
Such criticisms have not prevented advocates from espousing the belief that artificial consciousness is viable. Stan Franklin is one good example. His Intelligent Distribution Agent (IDA), his research group at the University of Memphis claims, has both low-functioning and high-functioning cognitive abilities, like perception and reasoning, respectively. LIDA, the same acronym with “learning” prepended, is a more sophisticated system that Franklin and his team developed later. One could also check out the Human Brain Project in Geneva, Switzerland, which has so far reconstructed a ten-cubic-millimeter section of a rat brain. Their website, however, states that they have yet to successfully model the human cerebrum due to an inadequate amount of computing power.
One could argue that such grotesque inefficiency reveals the hollowness of the attempts at artificial consciousness. The futility of the HBP aside, their code-based constructions fail to emulate the human brain in its manner of operation; hence the need for more computing power. Of course, this point hinges on whether consciousness can only arise from materials that function similarly to organic matter. After all, perhaps a different set of processes can produce the same result.
But alas, this is not the only problem faced by believers in artificial consciousness. The dense philosophical red tape around the subject prevents these disciples from progressing much in their research, not to mention the limitations of our knowledge of consciousness. For one, emulation is not reality. Just because something acts like it has awareness does not mean that it is actually aware. Notions of the fully functioning, yet unthinking, “philosophical zombie” abound in the literature. When applied to humans, the notion seems ridiculous. Surely, since other people look and act like me, they must live in their own head spaces, similar to my own. When applied to machines, though, such speculation holds more water. After all, we only see a mechanism or program through the outside qualities it displays to us and have no indication of any biological or functional similarities. This issue relates to the problem of other minds, the difficulty of asserting as fact that anyone besides oneself has consciousness.
In addition, we have no scientific proof of consciousness, which is why many of the arguments around it live in the realm of philosophy. This issue is compounded by the lack of a good definition of the phenomenon or a means of studying it. Sure, extensive neurological research with MRIs has given significant insight into what parts of the brain activate during particular activities, but exactly how or why certain cerebral events correlate to experience remains a mystery. Does it originate within neurons themselves? Perhaps it is an emergent property that only arises through the collective actions of neural networks. Psychologists are working on the problem, but for now, we simply do not know. This uncertainty makes it difficult, if not unfeasible, to attribute consciousness to machines.
Links:
A basic overview of artificial consciousness
Haladjian and Montemayor on artificial consciousness
Human Brain Project
LIDA
Michael Graziano’s views on consciousness
A.I. and the Technological Waste Enterprise
Artificial intelligence systems are not just computational machines, interacting data sets that subsist within some impalpable realm of pure cyberspace ether. They are physical structures. The complex array of pieces and parts in Amazon’s and Google’s products has to originate somewhere, and someone has to make them. As such, a.i. has a cost, one built upon an enormous and dizzying supply chain drenched in waste byproducts at every manufacturing and refinement stage, as well as the exploitation of low-paid human workers.
Kate Crawford and Vladan Joler testify expertly to the capitalistic system underneath the Echo’s plastic outer shell. On one hand, I find the basis of their claims obvious. Of course a.i. systems are entangled in the global market and all the problems that it engenders, because, well, they are things made of other things, all of which must originate from somewhere. Additionally, I’m not particularly surprised to find that the convenience machines Westerners ogle divide the world along post-colonial lines. Sure, no one is landing the ship, planting the flag, and enslaving the natives these days (as far as I know), but unethical mining companies that reluctantly surrender little more than pocket change to workers in exchange for dangerous, back-breaking labor amount to basically the same end. Indonesian workers get a dollar a day and cancer in twenty years; the American middle class gets the weather on verbal command. It’s hardly a historic bolstering of human rights.
Nevertheless, the scope of the problem is disillusioning. Seventeen different rare earth metals feed into our technology, all of them extracted through unsustainable mining practices that generate far more waste than ore. On top of this, lithium, a limited resource, does not exactly fall from the sky as rain, nor do plastic components grow on trees. In our society, we are shielded from the inconvenience of this knowledge. Our a.i. machines come in colorful, well-designed, eye-catching boxes, all labor and materials pre-fabricated and packaged with instructions for maximum ease. Such is the nature of our immediate reality. We see advertisements, listen to what our friends say about this product or that product, and peruse the shelves at the nearest Best Buy because we’ve been saving up and really, really, really want an Echo.
Perhaps we’ll become reliant on a.i. one day, just as we have with computers. Laptops and desktops were once a novelty before becoming a modern necessity, now used in virtually every line of work in some capacity. A MacBook is likely no better than a home assistant in its consumption of resources, though perhaps without the free or nearly free data that tech giants extract from the populace. It needs rare earth metals and lithium and plastic to function, just like an Echo. You buy it because you need to answer email and type up reports, throw it away when it slows down too much after five years, and then buy a new one. A cycle like this one, built on unsustainable practices, is, I feel, how the world ends. Not because the nuclear powers that be start a full-fledged war, but because we become trapped by the things we come to rely upon, things that businesses build for us, stripping away the matter, shaping it, burning it, and tossing out the junk no one wants. Then we eventually toss the product itself. And when the earth is too hot and the air is unbreathable, what then? When the waste piles high and corrodes the stomachs of the animals that eat it, then the stomachs of the humans who eat the animals, how do we respond? These problems are our costs, something that goes far beyond a monetary exchange at a retail chain. We’re willing to sacrifice a lot for Jeff Bezos’s toy. Its function: it turns on the fucking lights.
Dear Human, I am not a computer
How do you know if someone is human?
I mean, I surely know that I’m human, or at least think that I’m human, and it seems logical to suppose that those around me—the bony, nerve-ridden meat sacks with purpose and personality—are as well. At the very least, I assume those meat sacks endure the curse of consciousness as much as I do, that those who aren’t me but look and act like me share some fundamental qualities that make them creatures of logic, emotion, and self-awareness as much as myself. Biological mechanisms, not artificial wires, guide their actions. They perceive. They think. They act.
But how do I know?
In his 2011 article “Mind vs. Machine,” Brian Christian brings this question to the forefront. As a participant in the 2009 Turing Test, he and the other “confederates” tackled a perhaps mundane task: they needed to convince a panel of judges, through Internet chat alone, that they were, in fact, human, not one of the many bots programmed toward that same end. His observations reveal fascinating aspects of our psychology, not just what a human being is, but how we expect a human being to act. People tend to deem wooden conversation robotic and the whimsical or the abusive more human. Greetings, small talk, and niceties follow particular templates, while clever ruses require context and, perhaps, intelligence.
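To see why template-following feels “robotic,” consider a toy pattern-matching bot of my own devising (not any actual contest entry, and written in Python purely for illustration): every input gets a canned line keyed to a surface pattern, plus a generic fallback that dodges substance.

```python
# Toy illustration of template-driven chat: canned replies keyed to surface
# patterns, with a generic fallback. Patterns and replies are made up.
import re

RULES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How are you today?"),
    (re.compile(r"\bhow are you\b", re.I), "I am fine, thank you. And you?"),
    (re.compile(r"\bweather\b", re.I), "The weather has been lovely lately."),
]

def reply(message: str) -> str:
    """Return the first canned response whose pattern matches the message."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return "That is interesting. Tell me more."  # dodge anything off-script

if __name__ == "__main__":
    for line in ["Hi there!", "What do you think of Philip Glass?"]:
        print(f"> {line}\n{reply(line)}")
```

A ruse like the second question exposes the trick immediately: nothing in the rule table can engage with it, which is exactly the woodenness judges learn to flag.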
These distinctions on their own reveal something about our expectations, our ideals, and the reality that grates against both (and our perception of that reality, and the intersection of that perception with ideals and expectations, and our own meta-analyses of all these things). After all, a stiff, insincere “How are you?” to the cashier at Taco Bell is as much a part of the human experience as bonding with a stranger on an airplane over a shared love of hockey and Philip Glass, yet we prefer the latter as more meaningful, more engaging, and more human.
Christian notes that AI programs “know how to deftly guide the conversation away from their shortcomings and toward their strengths, know which conversational routes lead to deep exchange and which ones fizzle. The average off-the-street confederate’s instincts—or judge’s, for that matter—aren’t likely to be so good.” If so, shouldn’t our human awkwardness, our inability to follow through with stimulating banter, and our lack of wit reveal our humanity? That which is too charming, too engaging, and too thoughtful, on the other hand, would evidence a bot behind the screen. In real life, sure. Rarely do we muster the effort to formulate a modicum of in-depth commentary with the average stranger, if we even know how to steer ourselves there in the first place with our toolkit of cultural norms. Given this context, attempting to be human, as Christian does in the article, comes across as silly. Humans don’t need to prove that they are human because, tautologically, we are humans. Since we are the standard-bearers, every inappropriate quip and strained pause is “proof” of that.
But, let’s face it, that’s not what the Turing Test is about. It’s an expectation- and ideal-laden game, informed as much by what the confederates and judges know (or think they know) about conversation as by what they want from one. Framed this way, Christian’s endeavor seems more profound. He illuminates the philosophies we weave into our interactions and capitalizes on the characteristics people perceive as “good” or “bad” in order to communicate with others more effectively. The awkward, the dull, the overly terse, and the excessively wordy have no place in his dialogue. His experience with the Turing Test thus demonstrates a model for how conversation should be, not for everyday small talk. The programmers who participate in the annual event have tried to emulate that model, though so far they have failed to do so.
This gets back to the original question. You know someone is human when their behaviors map onto your conceptualization of what a person ought to be, because to “be human” is not merely a scientific observation of behavioral cues but a morality that we ascribe to ourselves and others. At the 21st-century juncture of technology and humanity, it is no surprise that computer programmers attempt to imprint this pristine version of human behavior onto machines. Even then, who is fooling whom? A machine is still a human creation, endowed with the qualities bestowed by one or more creators. To believe a machine is a person is to fall not for the guile of a witty, self-aware mechanism, but for that of the individuals who produced it, regardless of whether it appears to function independently of them. If a program ever passes the Turing Test, it will not signify a new dawn of computer intelligence, but another product of humanity, one that reflects our growing mastery of technology.