We know your time is important. Please take a few moments to fill out this questionnaire…

Posted on August 23, 2008. Filed under: Layman's AI, Self-deception |

A couple of years ago, I saw a reference to a new book in New Scientist magazine: The Singularity Is Near, by Ray Kurzweil. My leisure reading interests had turned to physics, evolutionary biology, and the quest for the Theory of Everything in recent years (I know that doesn’t sound like “leisure”, but one man’s trash is a sow’s ear, as the saying goes), and Kurzweil’s tome seemed to be about a curiously related issue. I bought the book, and read it. I haven’t been the same since.

Kurzweil discusses the almost certain (in his mind) upcoming emergence of the technological Singularity: the development of smarter-than-human intelligence. Among my friends, and apparently people in general, this is a topic that, once broached, causes severe polarization. I admit, it’s not sweeping the country with polarization; most people have never heard of the concept, except in movies and sci-fi books. But once they become aware that serious scientists with ninja-brainpower are working on it, most reactions that I have seen fall into one of two categories: 

  1. reject it out-of-hand, or
  2. think about it carefully, and then reject it.

I am not aware of the reason for my unpopular reaction to the concept: immediate acceptance. An “Aha!” moment. You’d think I’d be wary of the genre, having lost half of my portfolio in the tech market. Certainly it’s not because of my knowledge of, or skills in, the field of computer science: I learned how to make a hyperlink only last week. And it wasn’t because of admiration for the people doing the research; I had never heard of any of them. It just seemed logical, and inevitable. The name most often mentioned as likely to play an influential role in the development of this technology is Eliezer Yudkowsky. He is a very interesting character (from my reading; I’ve never met him), apparently with no formal education and no degrees. The significance of that situation is fodder for another post. If you have the time, read about him. He is a fascinating product of evolution.

Mr. Yudkowsky is one of the two regular posters on the Overcoming Bias blog, sponsored by the University of Oxford’s Future of Humanity Institute. He is truly a student of human nature, in a much more literal sense than the normal use of that phrase. He is trying to determine the exact nature of morality and other important human qualities so that he can program them into the source code of what he calls a “Friendly Artificial Intelligence”; that is, a super-human intelligence that becomes recursively smarter and smarter, while keeping in “mind” the necessity of avoiding harm to humanity. Oh, did I mention that there will be no need for death? Cool.

So, how would one go about determining which characteristics are the basis for “desirable humanity”? One way would be a questionnaire. I’m certain you’ve received many requests to “just take a few minutes of your time, and, for free, give us the info we can use to make a whole bunch more money.” Like me, you probably never comply (has a physicist ever won the Publishers Clearing House prize? Just asking.). Mr. Yudkowsky has come up with a remarkably innovative (natch) way to do this. He creates complicated, sometimes brilliant posts, frequently in the form of parables, and always controversial. Then he sits back to see what a self-selected group of incredibly brilliant people (along with some dweebs like me) has to say about it, and about each other. It is the same technique Jane Goodall used to determine what chimps were all about, with far fewer biting insects and no threat of cannibalism. A lot of information is exchanged by the participants, which is very educational for me, but there is a paucity of participation by Yudkowsky himself beyond the main post. It’s not that he isn’t following the comments; write something snarky and it’s deleted in a flash. And he certainly responds more to known posters than to lurkers, but there seems to be a pattern to it.

Since all of the math-based posts, and many of the philosophical ones, are beyond my comfort level of understanding, I have no way to know whether the brainy participants get them either. I have come to believe that Mr. Yudkowsky deliberately stirs the pot by being mysterious, selectively confrontational, or even misstating his own previously espoused views, in order to see how bright humans react. I get the impression that most of the participants feel they are part of a blog environment; the “Aha” is that instead, they are part of an experiment. The blog is an ant-hill, and EY’s posts are flexible twigs. Jane Goodall Yudkowsky is watching to see whether the chimps will realize how to use the tool, hopefully to keep his impending creation “Friendly” and useful. On the other hand, if I’ve noticed something like that, it’s probably so obvious that the rest of the chimps have already compensated for it.

Hey, that’s just my opinion; I could be wrong. Actually, I was wrong, back in 1976, pertaining to an issue where I thought I was wrong, but I turned out to be right. 🙂

UPDATE 8/25/08: I am wrong. Mr. Yudkowsky recently responded that he learned long ago that such shenanigans have no value.



2 Responses to “We know your time is important. Please take a few moments to fill out this questionnaire…”


Eliezer Yudkowsky: I’m sometimes deliberately mysterious. I don’t lie.

EY: I don’t lie.

Nor did I mean to imply that you do. I think your technique is fabulous. Being deliberately mysterious is playing with people, and at the least, amusing. While neither of us endorses the Bible as a source of fact, it can be an interesting source of insight into the human condition. The Genesis 37 story of Joseph and his coat of many colors makes the point that giving evidence intended to be misleading, and then failing to correct the incorrect conclusion of the observer, is the equivalent of lying. Could be some meta-ethics there. Hope to see you in San Jose this week.



    The director of the Sexual Medicine Center leaves penile implants behind, and launches a quest for knowledge about Artificial Intelligence, extended life, and the issues inside the health-care industry.
