What, me worry?

Posted on October 1, 2008. Filed under: Layman's AI, Personal philosophy

Today, on Less Wrong, one of the commenters used the word “sinecure”, sending me rushing to the dictionary: an office or position that requires little or no work and usually provides an income. How did I reach this stage of life without benefit of this marvelous word?! Its Latin roots suggest an even deeper meaning: sine cura, “without cure”, referring to a Middle-Ages ecclesiastical appointment that carried no power to “cure souls”. In my part of the US, we’d say it was someone who was “all hat and no cattle”. And it is easily rearranged to “insecure”. Significantly, the commenter was referring, tongue-in-cheek I presume, to the position of “Research Fellow” at the Singularity Institute for Artificial Intelligence. This Fellow has identified a threat to mankind of which few others (mankindly speaking) are aware: unfriendly artificial general intelligence. The perceived level of the threat is absolute: total annihilation of humanity and the world as we have come to know it. The timing of the threat is soon: perhaps three decades or less. The perceived likelihood is 100%, save some intervention from a ninja code-writer. To make it a story easily publishable, and on the fast track to moviedom, there’s this twist: the Fellow is the (potential) ninja code-writer, and only he can save us. His version of friendly artificial general intelligence would not only prevent annihilation, but also provide a paradisiacal existence for all. His position is funded by donations. He hasn’t produced anything so far, but he thinks about it (and writes about it) all the time.

Sounds like this is going to be a Fellow Roast, eh? It’s not. I’m one of his admirers (at least of what he represents), albeit a Johnny-come-lately. I’ve written positively about him before in this blog, and I was inspired to cover the fictionalized human aftershock of his ideas in a short novel. No, it’s no roast. Instead, a reality check. For the past year or so, I’ve spent perhaps several hours of each retired day reading and ruminating about the technological Singularity. There’s the media-friendly version (see The Singularity is Near by Ray Kurzweil), which is all happy and inspiring, but weak on nuts and bolts. Then there’s the blog and mailing-list version, full of competition, snarking, and predictions of doom, complete with such-high-level-that-only-they-can-understand-it nerdish arguments apparently confirming both the enormity of the task and the misconceptions of everyone save whoever is doing the writing. Beneath it all, I am fascinated that a topic of such perceived enormity, described as the greatest event since the appearance of the first replicating chemicals (read: life), is almost unknown to the public, especially since no one is attempting to keep it a secret. To the contrary, fund-raising and publicity efforts are in full swing, as evidenced by the upcoming Singularity Summit.

Suppose the Deep Impact scenario occurred, but starting now, with 30 years’ warning. In the movie, the US government’s first reaction was to maintain secrecy while beginning survival measures. Once outed, what would be mankind’s reaction? In a sense, that scenario has been playing out for millennia, with the time-frame being less predictable, and the comet being Death. Under those circumstances, there has been little extravagant reaction at all, other than to live until it happens. But death-as-a-part-of-life has always been around, and mankind is accustomed to it. True comet-type death (or, on the flip side, elimination of death) is a different animal. So what is the US government’s response to the possibility (inevitability?) of a mankind-altering Singularity, be it friendly or unfriendly? A well-placed employee at the Department of Defense says here: “I don’t know a *soul* in DoD or any of the services off the top of my head that has any *inkling* of the very existence of trans-H (trans-humanism) or of the various technical/scientific lanes of approach that are leading to a trans/post-human future of some sort. Zip. Zero. Nada.” OK, assume there are no world-class AGI (artificial general intelligence) experts, unknown to the rest of the AGI community, in cahoots with our government, or that of other nations, with a near-solution leading to the Singularity. And suppose that these AGI guys, in all nations, all know one another and are familiar with one another’s skills. And suppose that none of them has any idea how to write code for a Friendly AGI, and our Fellow stands alone in thinking he may be able to do it, eventually. Now throw in the kicker that a significant number of AI experts think they can write code for AGI soon, leaving the “friendliness” aspect aside. If they are right, and if “undesignated” AGI becomes “unfriendly” AGI (as the Fellow assures us it will), it seems nearly inevitable that the comet is on the way.

There is another, perhaps much larger, community of experts who do not give any type of AGI much hope of ever existing. These mostly claim either that mankind will destroy itself before the Singularity, or that the possibility of the Singularity is exaggerated. That may be why AGI is a fairly well-kept secret (or just ignored?). Let’s set this group aside as we look at the strength of the AGI group’s convictions. They know the Singularity is coming. They assign various time-frames and modes to it, but their conviction is compelling. There is apparently a common belief among them that those over the age of forty are unlikely either to have or to retain the math and other technical intellectual skills needed to be a partner in the project, so almost all the go-getters are in their twenties and thirties. I have a few questions for them:

  • Are you enrolled in a financial retirement plan, assuming you need 30 years of service to qualify?
  • Are you saving any money for the future, or are you spending as you go, enjoying life to the fullest?
  • Are you planning to educate your children with the goal of them having a career?
  • Would you buy a 30-year bond at the right price that has no redemption value before 30 years?
  • Lots of other long-term considerations, perhaps more subtle than I can readily identify

I suppose any answers of “yes” could fall into the category of “wearing a belt and suspenders”, “erring on the cautious side”, “go by what I tell you, not by what I do”, et cetera. For those not familiar with the advantages of the “good” Singularity: none of the things listed would have value post-Singularity. In the case of the “bad” Singularity, no one will be around to worry about it. Either way, it is a list of useless activities. Unless, of course, there either is not going to be a Singularity, or it’s not going to happen for at least two generations.
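
To put rough numbers on the bond question: here is a minimal back-of-the-envelope sketch, in Python, of what a 30-year zero-coupon bond held to maturity is worth to someone who assigns probability p that the Singularity, good or bad, arrives first and makes the payoff irrelevant. The figures and the little valuation function are mine, purely for illustration, not anyone's actual financial model.

```python
# Hypothetical sketch: value, to the holder, of a 30-year zero-coupon bond
# held to maturity, if the holder assigns probability p_singularity that the
# payoff will never matter (the Singularity, good or bad, arrives first).
# All numbers are made up for illustration.

def value_to_holder(face_value, annual_rate, years, p_singularity):
    """Expected present value of the maturity payoff to this particular holder."""
    discounted = face_value / (1 + annual_rate) ** years
    return (1 - p_singularity) * discounted

# A $1,000 face-value bond discounted at 5% over 30 years:
print(round(value_to_holder(1000, 0.05, 30, 0.0), 2))   # 231.38: worth buying at the right price
print(round(value_to_holder(1000, 0.05, 30, 0.9), 2))   # 23.14: barely worth the bother
print(round(value_to_holder(1000, 0.05, 30, 1.0), 2))   # 0.0: worthless to a true believer
```

Answering “yes” to the bond question, in other words, amounts to pricing p well below one.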

Without some remarkable non-Singularity breakthroughs, I won’t be here to judge, as the optimistic time-frames put me well into my nineties. What should I do? I’ve got lots of spare time. Hopefully, I have enough money. I’m smart enough to realize that the problem is one of dire importance, and I read enough to detect the urgency in the messages of those involved. One solution is a classic approach when encountering difficult problems, as Bluto advised Flounder: drink heavily. Unless one was dealt the required one-in-a-million brainage, and has subsequently used it to develop the appropriate technical, mathematical, and philosophical skills to approach friendly AGI, heavy drinking (or whatever hedonistic pursuit appeals) seems reasonable. The solution is out of my hands, and most likely out of yours as well. One thing for sure: I’m not going to worry about it.

Cheers!

6 Responses to “What, me worry?”

Signing up for cryonics seems like a good first step.

# Are you enrolled in a financial retirement plan, assuming you need 30 years of service to qualify?

This can be rational even with early withdrawal fees if you do the tax math.

# Are you saving any money for the future, or are you spending as you go, enjoying life to the fullest?

Probably best to spend anything beyond a safety margin of a few months’ income for immediate economic dislocation, but not many singularitarians even have that. The median savings for a 40-year-old are negative. More educated people are in an even worse situation.

# Are you planning to educate your children with the goal of them having a career?

“HELL NO!”, and that answer means something coming from someone who does plan to have children.

# Would you buy 30-year bonds at the right price?

HELL YEAH! Bonds are worth something now. You don’t have to hold them till maturity.

Thanks for your comment, Michael. I doubt that you missed the spirit of my questions, in spite of your answers:

To suggest cryonics to a group you describe as lacking even “a few months’ income” in savings seems unrealistic. I priced cryonics, and I could not afford it myself without making some sacrifices, and I am one with far more assets than a few months’ income.

Concerning savings, you suggest: “More educated people have an even worse situation.” It may seem so, but a quick Google refutes this, especially in this large study, “Savings and education”, but also in these: “stats on asset accumulation”, “The role of higher education to economic development”, and “tightwad or spendthrift”.

While retirement plans in general may be a “rational” (but hardly well-chosen) investment for those who plan early withdrawal, Singularitarians as a group frequently cite shortages of operating capital, so it’s not “very rational” to divert money that could be used for reaching one’s goals to an activity that one believes will never become necessary.

Buying 30-year bonds because “they are worth something now” is reminiscent of the commercial in which the man buys a painting at auction, and then immediately puts it up for resale at the same auction: it makes no sense. The question of “30-year bonds” means “are you concerned that you will need money 30 years from now?”, as I’m sure you knew.

I do not expect the Singularity to occur in my lifetime.

I do expect it to occur within 500 years, barring such disasters as supervolcanoes, ecological collapse, or large scale nuclear war.

I also expect that whatever Eliezer accomplishes in his own lifetime will probably end up being as important as, well, any other advance made by a single person in the field of computing since 1970.

Doug S.:

What proportion of the trans-humanism community would agree with your assessment of the timing of the Singularity vs. total catastrophe? Are the active writers on the fringe?

I have no idea who agrees with me or not about timing; it’s mostly just a guess based on my subjective impressions of the pace of research.

Software is damn hard. You can’t write software to do something unless you understand it very, very well; programming is basically the art of figuring out what you want so precisely that even a machine can do it. We don’t understand intelligence, and if the history of AI research is any guide, we probably won’t understand it for many years to come.

(Something other than AGI could certainly cause a less awe-inspiring Singularity, with a second, AGI-fueled, Singularity to follow afterward. I like to joke that the Singularity occurred in 1876, when Thomas Edison invented the industrial research laboratory.)

@Doug S: I like to joke that the Singularity occurred in 1876, when Thomas Edison invented the industrial research laboratory.

Here is an excerpt of an email sent to me by Dr. Bruno Marchal (and he’s not joking):

My current opinion is that the singularity is behind us. The deep discovery is the discovery of the Universal Machine, alias the computer, but we have our noses so close to it that we don’t really realize what is happening. From this, by adding more and more competence to the universal machine, we move it away from its initial “natural” intelligence. I even believe that the Greek theologians were conceptually ahead of us on what intelligence is. Intelligence is confused with competence today. It is correct that competence needs intelligence to develop, but competence cannot be universal, and it makes intelligence fade away: it has a negative feedback on intelligence.

So my opinion is that the singularity has already occurred, and for a long time now we have abandoned the conceptual tools needed to really appreciate that recent revolution. We are somehow already less smart than the universal machine, even before it is programmed.

