Nugget #2

For the purposes of real-time cooperation between men and computers, it will be necessary, however, to make use of an additional and rather different principle of communication and control. The idea may be highlighted by comparing instructions ordinarily addressed to intelligent human beings with instructions ordinarily used with computers. The latter specify precisely the individual steps to take and the sequence in which to take them. The former present or imply something about incentive or motivation, and they supply a criterion by which the human executor of the instructions will know when he has accomplished his task. In short: instructions directed to computers specify courses; instructions directed to human beings specify goals.

Something that I found quite interesting about this article, beyond the adorable predictions that the author made about things such as speech recognition and artificial reasoning (shoutout to Siri), was the charmingly optimistic and eager view that the author held of technological advancement. For the author, there was an umbilical line between technological advancement and a forthcoming human renaissance; there was no falter in this radiant world view, not a passing thought of a Matrix-like dystopia. To Licklider, computers are not only benign, but benevolent. More than just machines, he anthropomorphised them as help-minded pseudo-spirits. How, if not for a highly idealised world-view, could he insist on using the term “symbiosis” in reference to human-computer interactions? Personally, I don’t think that they have much to “gain” per se from our continued interventions to allow them to evolve.

Within this excerpt in particular, I thought it was interesting how the author called upon humans to alter our ways of thinking, or at least how we convey our thoughts. We, unlike machines, can only be finitely rewired. The point of interest in my previous nugget also focused less on the specific forecasts that the author devised for technology and more on the human-technology dichotomy. I don’t think that a machine will be able to think as intuitively as a human being, or at least not for some time. Beyond sheer intelligence and computing power, ethics, idioms, culture, and experience shape our ideas and actions. I mean, even if my phone has an extremely expansive vocabulary, it still doesn’t have the know-how to intuit that when I’m texting someone angrily, I am in fact not talking about ducks or ducking.

More interesting than my lack of belief/faith in an artificially-synthesised ethical conscience is the opposite viewpoint. This antithetical viewpoint, the doomsday scenario of total human annihilation or (best-case scenario) enslavement by the technologies that we will inevitably allow to become too smart, is what I kept thinking about while reading the article. In fact, one author in particular, perhaps the most fervent believer in the apocalyptic-singularity point of view, came to mind. Suicide Note by Mitchell Heisman is an expansive work, weighing in at 1,905 pages. It is a thesis. It is a work he released in 2010 before killing himself during a campus tour at Harvard University. It is also an ebook (the link goes to the PDF). Within the work, of which I read about 500 pages before finally becoming bored with its redundancy, he asserts that the singularity is inevitable and will begin with the systematic destruction of the human race by machines. Licklider actually serves as a perfect example of foolishness according to Heisman, who stated: “An AI (artificial intelligence) would not sustain the same faith in science that exists among many humans. It is precisely the all-too-human sloppiness of thinking…that is so often responsible for the belief that intelligence is automatically correlated with certain values.” The values, in this case, being the desire for human success that Licklider assumes and that Heisman refutes. To Heisman, machines are destined to betray humanity: “The singularity is the scientific redemption of the God Hypothesis.” Which is to say, the singularity will be the culmination of human advancement: an artificial intelligence will be created that so vastly surpasses our own that we personally give birth to God. Once this God is born, though, it will have neither mercy toward nor place any inherent value upon biological life, and so its arrival will immediately usher in the End of Days.
To Heisman, God has not yet evolved, but humans, through our increasingly-omnipotent advancements such as nanobots and the internet, are the mechanism at work to create it.

This is one of the most intriguing parts of his thesis:

“A time may come when instead of taking comfort in the belief in God, the overweening pride of the human race may lead many to take comfort in not believing in God. A time may come when people such as biologist Richard Dawkins may wish to take comfort in not believing in God because the scientific evidence will be so utterly overwhelming.”

EDIT

Tiffani addressed the possibility of competition, rather than harmony, arising from continued human reliance upon computers. She ultimately concluded that humans will always remain the ones in control, though she cited some blogs that offered differing viewpoints.

In Mariah’s Nugget, she spoke of the all-important distinction between acquired knowledge and intelligence. The Turing Test and other programs of the sort serve fairly novel purposes, really, but reveal no actual conscious thought within a computer’s circuitry.

Sarah’s Nugget interested me in that it pointed not to a symbiosis, or even to a destructive relationship between man and machine, but to a simultaneous regression of the human mind as the computer propels forward. Indeed, since smartphones and other devices have allowed humans to externalize our personal memories onto cloud storage and hard drives, many aspects of attention and short-term memory have been markedly altered. Nowadays, people are more capable of multitasking, but many know few (if any) phone numbers by heart.


2 thoughts on “Nugget #2”

  1. Pingback: Another Delicious Nugget | The KahnQuest

  2. Very briefly: Heisman and Licklider have different premises, and therefore arrive at different assumptions about the future of human-robot interactions. I don’t think I’m wrong to suggest that Licklider was attempting a formulative view of consciousness, whereas Heisman found it impossible to think of consciousness except in human terms, whence arises his dystopian pessimism.
