Layman’s AI

Singularity Summit ’08 from a non-nerd

Posted on October 27, 2008. Filed under: Layman's AI, The Singularity |

San Jose, California: the most expensive place to live in the USA. The international airport is only three miles from the historic downtown district, but my imaginative taxi driver was still able to get the meter up to $25. On Sunday morning, city employees are out power-washing the sidewalks and pedestrian alleyways. The “light train” seemingly goes everywhere, including right in front of the Hotel Montgomery, my home in Silicon Valley. Everything is clean, and the FBI says it’s the safest large town in the country. With only my orthotically-enhanced shoes for transportation, I easily enjoyed wonderful meals, great jazz, and mind-popping intellectual stimulation at the Singularity Summit.

Had he not passed on four days earlier (was he signed up for cryonics?), "Mr. Blackwell" could have found limitless fodder at the Summit for his worst-dressed lists. Singularitarians, for the most part, seem to practice the fashion credo: "I'm not like you, at all." None were barefoot, but the footwear was imaginative, to say the least, including one young man whose "shoes" had ten individual toe compartments. As the age of the participants increased, there was a general decline in the volume of hair gel and spike points, with a trend toward less of what one might consider a "statement", and more of a mere lack of good taste. It was reminiscent of a ghetto yard sale, except the clothes were moving around. As to the presenters themselves, there was a fashion spectrum: from one who easily could have passed for a hitch-hiker over from a '60s Big Sur commune, with hair and beard that had never been near a sharp object, to others in custom-tailored suits with Clintonesque coifs. Some were signaling "g" more than others, but all those I heard and met (stage and audience) were clearly quite sharp.

Singularity Institute volunteer Michael Vassar has written that non-nerds may be seen as “defective nerds”. I am definitely in the “defective” category, so my impressions of the Summit may differ significantly from other versions.

Perhaps the most all-encompassing concept about the Singularity that I gained from the Summit is this: millions of people are working on things that will cause the occurrence of the Singularity, even if they do not realize it. This idea is reminiscent of the human brain itself, in that there are millions of non-conscious interactive parts that somehow sum to create consciousness. This “wisdom of crowds” theme was injected into several presentations, and if accurate, confirms for me the inevitability of the Singularity, although not necessarily the pace. Ben Goertzel’s OpenCog project approaches AI through open sourcing (and is the only official entry of the Singularity Institute). Radar Networks CEO Nova Spivack envisions organizations increasing in intelligence, as opposed to individuals. He suggests that the most likely candidate for super-human intelligence already exists: the Internet. As it grows, he predicts the Web will “wake up”. In the view of Dr. Pete Estep, this and other developments would be part of a cooperative human/computer interface, described here. (more…)

Read Full Post | Make a Comment ( 4 so far )

Do you know the way to San Jose?

Posted on October 24, 2008. Filed under: Layman's AI, Personal philosophy, Self-deception, The Singularity |

Long-time reader, first-time visitor to Silicon Valley. I just arrived for the Singularity Summit. It will be interesting to see how out-of-place a redneck sex doctor will be in this sea of geniuses. More to follow.

Add-On: see today’s Summit summary.


What, me worry?

Posted on October 1, 2008. Filed under: Layman's AI, Personal philosophy |

Today, on Less Wrong, one of the commenters used the word "sinecure", sending me rushing to the dictionary: an office or position that requires little or no work and usually provides an income. How did I reach this stage of life without benefit of this marvelous word?! Its Latin roots suggest an even deeper meaning: sine cura, "without cure", referring to a Middle-Ages ecclesiastical appointment, but without the power to "cure souls". In my part of the US, we'd say it was someone who was "all hat and no cattle". And, it is easily rearranged to "insecure". Significantly, the commenter was referring to the position of "Research Fellow" at the Singularity Institute for Artificial Intelligence, I presume tongue-in-cheek. This Fellow has identified a threat to mankind of which few others (mankindly-speaking) are aware: unfriendly general artificial intelligence. The perceived level of the threat is absolute: total annihilation of humanity and the world as we have come to know it. The timing of the threat is soon: perhaps three decades or less. The perceived likelihood is 100%, save some intervention from a ninja code-writer. To make it a story easily publishable, and on the fast-track to moviedom, there's this twist: the Fellow is the (potential) ninja code-writer, and only he can save us. His version of friendly general artificial intelligence would not only prevent annihilation, but also provide a paradisiacal existence for all. His position is funded by donations. He hasn't produced anything so far, but he thinks about it (and writes about it) all the time.

Sounds like this is going to be a Fellow Roast, eh? It's not. I'm one of his admirers (at least of what he represents), albeit a Johnny-come-lately. I've written positively about him before in this blog, as well as having been inspired to cover the fictionalized human aftershock of his ideas in a short novel. No, it's no roast. Instead, a reality check. For the past year or so, I've spent perhaps several hours of each retired day reading and ruminating about the technological Singularity. There's the media-friendly version (see The Singularity is Near by Ray Kurzweil), which is all happy and inspiring, but weak on nuts and bolts. Then there's the blog and mailing-lists version, full of competition, snarking, and predictions of doom, complete with such-high-level-that-only-they-can-understand-it nerdism arguments apparently confirming both the enormity of the task and the misconceptions of everyone save him who is doing the writing. Beneath it all, I am fascinated that a topic of such perceived enormity, described as the greatest event since the appearance of the first replicating chemicals (read: life), is almost unknown to the public, especially since no one is attempting to keep it a secret. To the contrary, fund-raising and publicity efforts are in full swing, as evidenced by the upcoming Singularity Summit.

Suppose the Deep Impact scenario occurred, but starting now, with 30 years' warning. In the movie, the US government's first reaction was to maintain secrecy while beginning survival measures. Once outed, what would be mankind's reaction? In general, that scenario has been playing for millennia, with the time-frame being less predictable, and the comet being Death. Under those circumstances, there has been little extravagant reaction at all, other than to live until it happens. But death-as-a-part-of-life has always been around, and mankind is accustomed to it. True comet-type death (or on the flip side, elimination of death) is a different animal. So what is the US government's response to the possibility (inevitability?) of a mankind-altering Singularity, be it friendly or unfriendly? A well-placed employee at the Department of Defense says here: "I don't know a *soul* in DoD or any of the services off the top of my head that has any *inkling* of the very existence of trans-H (trans-humanism) or of the various technical/scientific lanes of approach that are leading to a trans/post-human future of some sort. Zip. Zero. Nada." OK, assume there are no world-class AGI (artificial general intelligence) experts, unknown to the rest of the AGI community, in cahoots with our government, or that of other nations, with a near-solution leading to the Singularity. And suppose that these AGI guys, in all nations, all know one another, and are familiar with one another's skills. And suppose that none of them has any idea how to write code for a Friendly AGI, and our Fellow stands alone thinking he may be able to do it, eventually. Now, throw in the kicker that a significant number of AI experts think they can write code for AGI soon, leaving the "friendliness" aspect aside. If they are right, and if "undesignated" AGI becomes "unfriendly" AGI (as the Fellow assures us it will), it seems nearly inevitable that the comet is on the way.

There is another, perhaps much larger, community of experts who do not give any type of AGI much hope for existence. These mostly claim either that mankind will destroy itself before the Singularity, or that the possibility of the Singularity is exaggerated. That may be why AGI is a fairly well-kept secret (or just ignored?). Let’s set this group aside as we look at the strength of the AGI group’s convictions. They know the Singularity is coming. They assign various time-frames and modes to it, but their conviction is compelling. There is apparently a common belief among them that those over the age of forty years are unlikely either to have or to retain the math and other technical intellectual skills to be partner to the project, so almost all the go-getters are in their twenties and thirties. I have a few questions for them:

  • Are you enrolled in a financial retirement plan, assuming you need 30 years of service to qualify?
  • Are you saving any money for the future, or are you spending as you go, enjoying life to the fullest?
  • Are you planning to educate your children with the goal of them having a career?
  • Would you buy a 30-year bond at the right price that has no redemption value before 30 years?
  • Lots of other long-term considerations, perhaps more subtle than I can readily identify

I suppose any answers of “yes” could fall into the category of “wearing a belt and suspenders”, “erring on the cautious side”, “go by what I tell you, not by what I do”, et cetera. For those not familiar with the advantages of the “good” Singularity: none of the things listed would have value post-Singularity. In the case of the “bad” Singularity, no one will be around to worry about it. Either way, it is a list of useless activities. Unless, of course, there either is not going to be a Singularity, or it’s not going to happen for at least two generations.

Without some remarkable non-Singularity breakthroughs, I won’t be here to judge, as the optimistic time-frames put me well into my nineties. What should I do? I’ve got lots of spare time. Hopefully, I have enough money. I’m smart enough to realize that the problem is one of dire importance, and I read enough to detect the urgency in the messages of those involved. One solution is a classic approach when encountering difficult problems, as Bluto advised Flounder: drink heavily. Unless one was dealt the required one-in-a-million brainage, and has subsequently used it to develop the appropriate technical, mathematical, and philosophical skills to approach friendly AGI, heavy drinking (or whatever hedonistic pursuit appeals) seems reasonable. The solution is out of my hands, and most likely out of yours as well. One thing for sure: I’m not going to worry about it.


Vanity, thy name is “expert”

Posted on September 29, 2008. Filed under: Everything you wanted to know about doctors, Layman's AI, Personal philosophy, Self-deception |

As my medical school years drew to a close, each of us faced the choice of residency that would determine how we spent our professional lives. A close friend and member of AOA, the medical honor society comparable to Phi Beta Kappa or Law Review, declared that he had chosen OB/GYN. He and I had shared what I felt was a miserable experience as “acting interns” on the obstetrics service our senior year, so his choice astounded me.


His answer was seminal: “Have you noticed the size of the textbook?” Indeed, the OB/GYN text was far smaller than that of any other subject we studied. “I think it’s possible to learn everything there is to know about OB/GYN. I can be an expert.” Perhaps he was citing the mental comfort associated with mastery of a skill, and the unlikelihood that he would find himself in a situation beyond his capabilities, akin to a world-class martial arts expert walking alone at night. I suspect the knowledge that one’s work was done as well as could be done would provide substantial comfort, especially if one were well-paid, and the importance of that work were protected and promoted by a guild system. [NOTE: in those days, there was little concept of medical malpractice, a scourge which subsequently would hit the OB/GYN specialty harder than any other.]

Yet, I think his answer (and his career choice) may have been more instinctive, and perhaps outside his conscious awareness: the possibility of being an expert may have been subsumed by the possibility of being recognized as an expert. Dr. Robin Hanson, on the Overcoming Bias blog, initiated a discussion of a similar concept, referring to "expert at" versus "expert on", in which the former could perform successfully and the latter could talk about it successfully. I'm referring to a third entity: an expert on a topic who also is an expert at that topic. He is an expert by all practical considerations, and he is well-remunerated. Is that enough? Perhaps not.

I have observed a distinct change in attitude when the expert-aspirant is exposed to his peers. In my own field, I wanted to be, planned to be, and worked to be the best in the world. In my own mind, I achieved that (male surgical sexual medicine is a very small pond for any size frog), and I was compensated financially in adequate fashion. I want to be satisfied with the knowledge that my work was of superior technical and ethical quality. But it’s a self-edited summary; often (not always) at the highest levels of anything, self-satisfaction seems overrated, and inadequate. At a conference of IPP (inflatable penile prosthesis) technical experts, early in my career, I was seated at dinner next to a surgeon who was prolific in numbers of successful operations. In fact, studying his methods had caused me to take a number of steps that benefitted both my technical skills and my practice success. Because of his influence, and my subsequent personal experience, he and I both used the same brand of IPP in our patients. Neither of us was in academics, so our “fame” came only from our patients and from the recognition of the manufacturer. He mentioned that he had performed “3- or 4-hundred” procedures that year. Unlike some areas of surgery, the number of IPP surgeons who ever perform more than 100 procedures in a year can be counted on two hands. My pride was piqued, and I replied, “I did 201, and Mr. X (the manufacturer CEO) told me that was tops in the world.” When I was just starting, this same surgeon had asked me to join his practice; after the dinner encounter, he was never friendly to me again. It was vanity versus vanity. Of note, I am very unpopular with the “experts on” in my field, those I call the “thought leaders”, none of whom are “experts at”. It’s the recognition, stupid.

Lest you think that the self-satisfied expert at/expert on doctor is immune to this vanity, give him a chance for recognition. Pharmaceutical and device manufacturers have caught on to this weakness in spades. The opportunity to be the star at doctor-to-peer lectures and presentations has changed the attitude of many a current physician, and strongly influenced his practice habits. Even when one has reached the pinnacle of both actual and recognized expertise, the vanity drive remains strong. Dr. Michael DeBakey gave the AOA visiting professor lecture during my junior year. I don’t remember much of what he said, but one quote has stayed with me: “I could make a career simply correcting the mistakes of other vascular surgeons.” Probably a true statement, especially at the time, but of what value was this knowledge to junior medical students? Could there be any doubt that recognition was the driving force?

Recently on Overcoming Bias, the smartest of the smart have shown not only that they are not immune to the vanity of the experts, but that they actually are as pedestrian as the rest of us when it comes to this human frailty. In the posts and discussions here, here, and here, it's all about who is the smartest, who is the best qualified, and who is the leading expert. One would think pride in one's intelligence is severely misplaced. As one of the main posters, Eliezer Yudkowsky, has said, "We are the cards we are dealt, and intelligence is the unfairest of all those cards." Yet note the ego-involvement. One would think that accomplishment was a far better source of pride. And if that accomplishment has not yet occurred? Such encounters as this are the result. I choose Mr. Yudkowsky as an example only because he is a dedicated student of the human thought process, and one of two main writers on a blog dedicated to eradicating bias. If it can happen to such as him, perhaps it's innate.

*Pro tip*: The ultimate goal is not only that I succeed, but also that you fail.


We know your time is important. Please take a few moments to fill out this questionnaire…

Posted on August 23, 2008. Filed under: Layman's AI, Self-deception |

A couple of years ago, I saw a reference to a new book in New Scientist magazine: The Singularity Is Near, by Ray Kurzweil. My leisure reading interests had turned to physics, evolutionary biology, and the quest for the Theory of Everything in recent years (I know that doesn’t sound like “leisure”, but one man’s trash is a sow’s ear, as the saying goes), and Kurzweil’s tome seemed to be about a curiously related issue. I bought the book, and read it. I haven’t been the same since.

Kurzweil discusses the almost certain (in his mind) upcoming emergence of the technological Singularity: the development of smarter-than-human intelligence. Among my friends, and apparently people in general, this is a topic that, once broached, causes severe polarization. I admit, it’s not sweeping the country with polarization; most people have never heard of the concept, except in movies and sci-fi books. But once they become aware that serious scientists with ninja-brainpower are working on it, most reactions that I have seen fall into one of two categories: 

  1. reject it out-of-hand, or
  2. think about it carefully, and then reject it. (more…)


    The director of the Sexual Medicine Center leaves penile implants behind, and launches a quest for knowledge about Artificial Intelligence, extended life, and the issues inside the health-care industry.

