One Singular Sensation: The Geek Rapture

As a writer who has worked intermittently in the information technology industry, and as a science enthusiast, I have long been aware of the ever more rapid advances taking place in certain areas of technology. I first heard of Moore’s Law — which describes the trend whereby the number of transistors that can be placed inexpensively on an integrated circuit roughly doubles every two years — in the mid-1990s, while working for the leading PC manufacturer of the time, Compaq. And like a lot of people, I’ve watched that trend unfold with remarkable speed in the ever-expanding marketplace of machines and gadgets we use for work and communication. In addition, I know people involved in the Human Genome Project, I maintain close friendships with a handful of engineers currently contracting with NASA, and, simply because they interest me, I keep peripheral tabs on the biotechnology, nanotechnology, and artificial intelligence fields.

So, if I’m not a little more informed than the average person when it comes to the progress of technology and the processes that control it, I at least like to think I am.

I must admit, however, that until about a month ago, I was completely ignorant of something known as the technological Singularity. I was likewise ignorant of the people and institutions that espouse it. But recently, a friend of mine mentioned the Singularity to me, almost in passing, and being that I’m blessed/cursed with a penchant for letting my curiosity guide me, I began to look into it a bit more deeply, and soon discovered it to be one of the most interesting targets on which to turn a critical eye that I’ve ever come across.

Now, if you are in the same spot I was a month ago, the technological Singularity is defined as an “event horizon” in human technological development beyond which humans (at least in our current incarnation) will cease to be the drivers of technological progress; the technology itself will take over that task. Basically, ultra-intelligent machines will surpass the intellectual capabilities of any human, no matter how clever. And since the design of machines is itself a product of intellect, such an ultra-intelligent machine could design even better machines, leaving human intelligence far behind. The first ultra-intelligent machine would thus be the last invention that humanity ever needs to make, and its creation would mark the Singularity.

Not only that, but people like inventor Ray Kurzweil, one of the leading thinkers championing the coming Singularity, believe it will entail a radical transformation of our minds and bodies, thanks to accelerating advances in AI, nanotech, biotech, computer science, and neuroscience, and their integration with humankind on a mass scale. We’re talking about cyborgs, digitized psyches, human consciousness uploaded onto machines, and immortality here.

Of course, this is the point at which pop culture favorites like the Terminator and Matrix movies come to mind, not to mention hundreds of science fiction novels that deal with similar themes. And in fact, there are other supporters of the Singularity, aside from Kurzweil, who see the event resulting in the dystopian or utopian landscapes found in fiction. Brilliant men and women in cutting-edge scientific and technological fields demonstrate a passionate belief that this “event horizon” will be reached, and that the world after will look like nothing we can imagine.

But before we get to the question of what our world will be like, we have to put on our critical thinking shoes to determine, “Is the Singularity going to take place at all?”

The Indicators

One thing about the idea of the Singularity that I found very disturbing when I first started reading papers, blogs, and various other materials on the subject was the fervency with which many very rational, scientific people believed in its certainty. There are religious overtones to what some of the more staunch supporters say online, and the dissonance of hearing the language of faith come from otherwise rational minds was shocking.

In a June 2010 piece in Scientific American, one of the Singularity’s most high-profile detractors, science writer John Horgan, called Kurzweil and his followers a cult.

And Kurzweil does nothing to dispel that image. In fact, throughout a documentary detailing his life and beliefs called “Transcendent Man,” he is presented almost as a mystic, sitting in a chair with a shimmering, circular light floating around his head as he explains his philosophy’s basic tenets. (See trailer for “Transcendent Man” below.)

Images and attitudes like these are why Horgan and others refer to the Singularity as the “rapture of the geeks”.

Now, I actually think “rapture of the geeks” is a great line, and I recognize why some people would use that terminology, but I wanted to avoid having the conclusions of my research tainted by my impressions of the people involved and their behavior. I felt it more appropriate to examine instead the nuts and bolts behind the idea, and to let that examination inform my conclusions free of any preconception about the players.

After all, this is a different type of problem from most of the claims and phenomena that skeptics and critical thinkers address. It’s one thing to examine a phenomenon that has already occurred (or that is occurring) and determine its validity or formulate an explanation for it, but it’s something else entirely to try to determine whether something predicted will indeed come to pass.

Depending on the subject matter and the available information, it may very well be impossible to proceed with any measure of confidence, but if we’re going to try, one way to do it might be to look at the major indicators that the prediction will come true, and weigh those against similar indicators from past predictions that failed.

In other words:

What might one point to as an indicator that the Singularity will take place, and what might one point to as an indicator that it will not?

And as it turns out, there are a lot of indicators on both sides of the question. To keep this post from becoming even longer, I’ll just mention a few of them.

Kurzweil goes to great lengths in his talks to point out the exponential progress in information technology (doubling: 1 to 2, 2 to 4, 4 to 8, 8 to 16, and so on). At that rate, we can reach a billion bits of information in merely 30 doublings rather than a billion single steps. And it is a fact that advances in information technology are taking place at an exponential rather than a linear rate.
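To make that arithmetic concrete, here is a minimal sketch (in Python, purely illustrative and not from Kurzweil’s material) comparing repeated doubling with counting one step at a time: thirty doublings already take you past a billion.

    # Compare exponential doubling with linear counting.
    # After n doublings the exponential process has reached 2**n,
    # while linear counting has only reached n.
    def doublings_to_reach(target):
        """Number of doublings needed for a value starting at 1 to reach target."""
        steps, value = 0, 1
        while value < target:
            value *= 2   # one doubling per step
            steps += 1
        return steps

    billion = 1_000_000_000
    print(doublings_to_reach(billion))  # 30 doublings
    print(billion)                      # versus 1,000,000,000 single steps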

The human genome has been mapped. The brain is being mapped. All of human biology is being deciphered and recorded as bits of information, and the stockpiles of information we have in all these areas are constantly doubling. The more information we have about something, the better we understand it, and the better we can manipulate it. It’s this kind of speed in the progress of understanding that leads many to think that the Singularity will happen very soon.

But will data collection and analysis allow us to do things like reverse engineer nervous systems or the brain, so that we can build one that will be more intelligent and more efficient than our own?

The information technology advancements also impact areas like AI and robotics. Machines are becoming more intricate and precise, and are able to perform exponentially more calculations in the same amount of time. In some cases, engineers are actually finding it challenging to design and manufacture devices that keep pace with these growing capabilities.

If nanotechnology follows this trend, and we achieve a complete understanding of life on an informational level in a short time based on the exponential data collection, it is not inconceivable that we could use machines to fix every little thing that plagues our mortal bodies, thereby extending our lives to however long we see fit. We could ourselves become the machines that mark the Singularity.

But will that be the case?

Of course it remains to be seen, but these are but a few of the indicators that something extraordinary in technological advancement could actually occur.

But are there areas of advancement that have leveled off or collapsed completely?

Well, in the 1970s some epidemiologists predicted that infectious diseases would soon be eliminated entirely. Yet we still fight colds and STDs, and the occasional new strain of the flu virus never fails to strike fear into the hearts of the general public.

In the late 1990s and early 2000s, several publications began reporting that diseases like Alzheimer’s were on the verge of being eradicated. Advances in our understanding of the disorder and in treatment options were supposedly about to explode, and there was a great deal of fanfare suggesting that a diagnosis of Alzheimer’s would soon be no more worrying than a diagnosis of the flu. Yet here we are more than a decade later, and hardly any advances in treating the disease have been made.

As early as the 1980s, there was great excitement over nuclear fusion. The PR at the time held that fusion energy would be “too cheap to meter” (basically free) by the 2000s. Yet the fusion energy industry in the US has all but collapsed.

And one need not say more than the word “cancer” when talking of problems we just can’t seem to get a handle on.

(See a video of Horgan and Kurzweil debating these points here)

So, clearly there are indicators that something like a Singularity could take place. But there are also many points in history showing that our best guesses about how things will turn out don’t always come to pass.

The Futurists

And it seems another obstacle preventing the critical thinker from climbing aboard the Singularity train lies in the nature of the futurist, the predictor of things to come. I don’t know whether there is a default psychological make-up among futurists, but there is a long history of futurists letting their ideology get the better of their reason. How often have we read about and commented on the exploits of doomsday cults on blogs such as this one? We’ve pitied and ridiculed the people who were so sure the end of the world was coming that they gave away their material possessions, only to be left standing in a field or on a mountain somewhere (or worse, lying dead in a heap), wondering what they were going to eat and wear the next day because their prophet somehow picked the wrong date.

And if you’re wondering whether I’m comparing adherents of the Singularity to the likes of the Millerites, I absolutely am, at least in the sense that some Singularitarians don’t seem to want to exercise due caution about some very bold claims.

Hey, if you think you can fly, don’t jump off a 20-story rooftop. Try taking off from the ground first.

Of course, making precise predictions is nearly always problematic. There are so many unknowns that even an enormous set of strong indicators that an event will happen doesn’t necessarily mean it will.

When I ask you whether the sun will rise tomorrow (or, more precisely, whether the Earth will rotate on its axis in its orbit around a medium-sized star that continues to burn, such that the inhabitants of the Earth experience what those with sufficient communication techniques call daylight), the correct answer is still “probably”, even though there are extremely strong indicators that it absolutely will.

And the answer is “probably” and not “I don’t know” only because there is precedent. It has happened before.

In the case of the Singularity, there is no precedent.
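One classical way to formalize “probably, because there is precedent” is Laplace’s rule of succession, which estimates the chance of an event recurring as (successes + 1) / (trials + 2). The sketch below (Python, an illustration of my own rather than anything from the Singularity literature) shows how a long run of precedent pushes the estimate toward near-certainty, while no precedent at all leaves it at an uninformative 50/50.

    # Laplace's rule of succession: after s successes in n trials,
    # estimate the probability of success on the next trial as (s + 1) / (n + 2).
    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    # Millions of observed sunrises: the estimate is overwhelmingly "probably".
    print(rule_of_succession(1_000_000, 1_000_000))  # ~0.999999

    # No precedent at all (the Singularity case): the formula gives 0.5,
    # which is just a formal way of saying "I don't know".
    print(rule_of_succession(0, 0))  # 0.5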

Conclusions

There are a few final thoughts I’ll relate to you about the subject of the Singularity:

Will it happen?

Well, I’m going to go with “I don’t know”. I know it’s not a very sexy answer, but it’s one that thinking about the subject critically leads me to, and it’s one that I’m okay with. And “I don’t know” doesn’t mean something won’t come to light that would make me change my mind. Indeed the subject fascinates me, and I will continue to look into it, and keep an open mind about what the future holds. But I’m not going to make any hard and fast predictions, nor am I going to adjust my life or my thinking as though the Singularity is coming.

I’ll also add that I had the great pleasure of attending the Singularity Summit in San Francisco this past weekend. And while I went into the conference honestly expecting to be awash in the cultish fervency I had seen demonstrated by some Singularitarians on the Internet, I was pleasantly surprised to have those fears allayed. The overly enthusiastic types seem to be the exception, not the rule. Everyone I met was brilliant, sober, and extremely rational in their thinking. It was an awesome intellectual and philosophical experience.

On top of that, the subject matter for each presentation was just so cool. It was AI, and nanotech, and biotech, and robotics, and animal intelligence, etc., etc. And with the ardent futurist approach mostly absent from the program, it was a science enthusiast’s/technophile’s dream.

And ultimately, I came away from it thinking that it might just be best for all the folks involved in developing these amazing technologies to continue their work as always, at whatever pace their abilities and funding dictate, and let the future unfold as it will.

And when people ask what the world will be like after so much technological advancement, be comfortable saying, “Well, I think it’s going to be really, really cool, but I don’t know for sure”.

Sam Ogden

Sam Ogden is a writer, beach bum, and songwriter living in Houston, Texas, but he may be found scratching himself at many points across the globe. Follow him on Twitter @SamOgden

Comments

  1. @plaws: I might be a tad biased, but PZ seems a bit narrow-sighted in this area, though his objections are probably valid.

    From what I’ve seen of Kurzweil, he would be better sticking to his role as a populariser and stop formulating his own hypotheses in fields he does not appear formally trained in.

    The idea of a singularity was around before Kurzweil, and his specific conception (superhuman AI bootstrapping to an intelligence explosion) is not the only interpretation of the singularity out there.

    IEEE Spectrum gave us a chart of people with other ideas: http://spectrum.ieee.org/images/jun08/images/swho_full.pdf

    (And I hear James Randi was a speaker at the Singularity Summit – I’m eagerly awaiting the video for his talk)

  2. The whole thing seems so poorly defined to me.

    ” Basically, ultra-intelligent machines will surpass all the intellect of any human, no matter how clever.”

    When you get right down to it, what does this statement mean? We don’t even know what intellect is, let alone have a way to measure when our machines ‘surpass us’ in it.

    This has been a problem with AI since I was a student in the 80s – we can’t even define the problem. And you know what? We still can’t. People are still arguing about the same old chestnuts – free will vs. determinism, brains in a vat, the chinese room, ‘what is it like to be a bat’, etc… Kurzweil is a self-hyping self-hyper.

  3. @MnemosyneAH:

    Randi was a speaker at the conference, and he was quite entertaining, as always. The parts of his talk specific to the Singularity were somewhat along the lines of my thoughts in the post (if I could be so bold). He was obviously taken with the science and technology, but stopped short of embracing a singularity.

    @jblumenfeld:

    I mentioned being pleased with the attitude of cautious optimism at the Summit, and one of the best examples was the deep discussion and realization by the speakers and many of the attendees that we can’t really define intelligence.

  4. Thanks for the article, Sam – the increasing presence of “Singularity” talk in skeptical circles has been worrying me. Notably, we’ve got D.J. Grothe declaring himself a trans-humanist, and looking like a complete Kurzweil fanboy – but barely acknowledging that Kurzweil has pretty much abandoned science in his quest for a longer life. The guy pretty much adopts any supposed life extension modality that strikes his fancy.

  5. @sowellfan:

    Yeah, I don’t know if I’ve ever read or listened to one thing D.J. has written or recorded, so I can’t comment, but you may be right about Kurzweil. I immediately became suspicious when I found out he keeps Tony Robbins as an adviser.

  6. Kurzweil seems to have fallen victim to a ‘need to believe’, and this compromises his judgement. Although he cautions in his books (I think correctly) that technological progress is frequently over-estimated in the short term and under-estimated in the long term, he himself seems to be quite ready to extrapolate technologies that barely exist or are likely to saturate fairly soon.

    That said I don’t think there’s much doubt that something pretty dramatic is likely to happen in the next 50-100 years; but it’s hubris to pretend to know what it will be.

  7. I am glad that the Singularity Summit was not what I would have expected it to be based on what I have seen written by Singularitarians.

    I don’t know what to expect in the future but I would bet quite a bit that both Kurzweil and I will be long dead before it happens.

  8. @dahduh:
    I feel confident predicting that first person shooters will continue to get better, at least graphically. Beyond that, I’m hesitant.

  9. I just watched it on my computer w/o sound, and I feel like I need to go sell all my belongings, don gossamer clothing, shave my head, and take a sacrament.

    I personally feel like the Singularity is a concept we will always approach, but never quite reach. There’s a term for that in mathematics, but I forget what it is. If you graph y=tan(x), you’ll get what looks like a heart beat monitor, because the curve keeps approaching a vertical line but never quite touches it.

    In short, the closer we get, the more we will find out there’s even more to do before we get there.

  10. I stopped reading H+ Magazine when they posted what was essentially a paid advertisement for unproven dietary supplements as an ‘interview’. It was an utterly disgusting display of unedited sales pitch along the lines of:
    “Reporter: How AWESOME is your product?”
    “Sleazy Supplement Guy: SO AWESOME!!!!”

    This utter violation of journalistic ethics, as well as some comments on the SGU about Kurzweil, have made me quite skeptical of the Transhumanist movement.
    I wholeheartedly support what they’re trying to do, I’m just not sure I’m part of that crowd.

    As for the singularity… whatever. As long as the hyper intelligent machines let me hitch along for the ride, I’m fine with it.

  11. There are a few problems I have with this singularity stuff.

    Firstly, it ignores the fact that reality imposes limits on what we can do. Yes, we can keep getting processors and data storage that are smaller and faster and cheaper. But eventually, we have to hit a wall. We can’t keep condensing information forever. If nothing else, the speed of light will stop us – and it’s entirely reasonable to suppose we’ll hit a practical wall long before that theoretical one.

    Secondly, there’s the fact that the main driver of technological innovation is commerce. Diminishing marginal returns haven’t kicked in yet – but in every other human endeavor, they always have. Eventually, we’ll get to the point where we won’t need to buy more data storage or a faster array of CPUs.

    Thirdly, human brains and computers are very good at different things. Human brains are brilliant at pattern recognition. Computers? Not so much. On the other hand, human brains are crap at doing fast math. Computers, on the other hand, rock at it. So I think it’s foolish and typically anthropocentric to suppose that the cutting edge will be when computers can do exactly what humans can do. Why should computers do what we can do? Why should computers be like us? Other than our own sense of self-importance, why should that be the signifier of change?

    Fifthly – PZ Myers makes a very good point when he indicates that the genome is the data, and development is the algorithm. It’s a very different kind of algorithm compared to how computers work – two very different worlds. Simulating a procedural, computer-ish world in biology has been done by millions of years of evolution. Going the other way poses different challenges. It’s not going to be as easy – or as inevitable – as the Singularity crowd think.

    Sixthly – Why do we suppose that even if we did create an AI that was capable of everything a human is capable of, that the AI would react in the all-too-human manner of pursuing power and control over the world? That’s a human way of doing things. We shouldn’t assume that machines will behave just like us. We should expect that machines will be better and worse than us in unexpected ways.

  12. Most of the Singularitarians not only don’t understand development, they also don’t understand manufacturing very well.

    Start with current technology: Consider that building computers requires a huge industrial base, and an assortment of exotic materials. These things currently exist, but they belong to us humans — and they’re connected primarily by our own web of commerce. For super-AIs to go von Neumann, they’d not only need to control and maintain chip foundries and assembly plants, but all the factories that feed into them — the factories that make everything from conveyor belts to machine parts to lasers, production of high-purity silicon blanks, clean-room equipment… And then there’s materials — not just silicon, but an assortment of metals, some rare enough that wars have been, and still are, being fought over them. And then there’s the power plants… more machines, more raw materials, more consumable resources.

    OK, some would say, how about nanotech? Thing is, when you’re building from the small, you have a whole new set of challenges. You’ll still probably need exotic resources, and you still need to protect your nanostuff from environmental hazards. Plus, now the “development” issues come into play — when you’ve got self-reproducing units, errors are also self-reproducing, and “the more complicated the plumbing, the easier it is to stop up the works”. Large-scale nanodevices would not only be insanely complex, but would forfeit much of the consistency and reliability that we normally associate with machinery — because those features come from the machines being simpler and more durable than humans. (*And* a fair bit of trial-and-error to boot — even non-biological stuff usually needs some evolution before we get a gadget that “just works”.) Life as we know it has megayears worth of experience with those issues, and what it came up with was massive redundancy, multiple variations on a few themes, tolerance of some nasty failure rates, and often-drastic specializations to specific environments. Certainly, “designed” nanoware could shortcut past some of the stuff that life’s evolution had to brute-force — but not all of it, and it’s the stuff the designer didn’t think of beforehand that will be toughest. (Consider that decades after the original PC, modern Windows is still afflicted by limitations and flaws that derive from its own evolutionary history, right back to MS-DOS and the 8086!)

    Another issue: We already have creatures around that think far faster, and usually better, than the usual run of humanity. They’re called “geniuses”, provided by our own developmental variation. Somehow, they don’t magically end up in charge of things… often enough, they end up in the gutter, because boosting intellectual capability brings its own problems, and they’re still dealing with the world around them. If we do manage to build an AI that’s “really really smart”… it’s going to have to prove itself to us before it can gain any influence, much less take over the world. To riff off a classic saying — no matter how intelligent and knowledgeable the AI, a bazooka shell is going to interfere with its operations.

  13. PS: The whole “Rapture of the Geeks” thing evolved from the original meaning of a social Singularity, which is just a point beyond which you can’t forecast. Say, like Imperial Roman writers trying to work out the effects of the Industrial Revolution, or a 19th-century writer trying to imagine our world of computers. As you can see, we’ve already been through a number of such changes to human society, and we did survive, even though quite a few former livelihoods and industries have been reduced to niches or hobbies. The point isn’t that everything ends, it’s just that you can’t predict what will happen.

  14. Two sides of my personality were desperately fighting each other while I read this post. The geek side, as well as the side that wants to live, wants to believe that nanobots can heal my every ailment, thereby extending my life indefinitely. But then that pesky rational side kicks in.

    I’m not an expert in any kind of technology, so I can’t say anything about the technological problems (or lack thereof) with the singularity occurring. I can, however, say that predictions about the distant future are, more often than not, wrong.

    When my mom was in college she worked with one of the first computers. They were massive things, the size of buildings. She programmed them by feeding in a couple of meters of cards with holes punched in them. The most complicated thing they did was 7+3=10. Engineers at the time couldn’t imagine another way to program a computer. They thought all future computers would just be more complicated expansions of this model. It is amusing seeing science fiction from that time: the futuristic, advanced, alien computers have reels of punch cards running through them.

    I have no doubt that technology will advance, maybe even to the point where we don’t recognize it. But I also have no doubt that most of the technologies that seem promising now will be dead ends. And avenues we’ve never even thought of will change the world.

    I’m excited for the future. But I don’t entertain any notion that I can predict it.

  15. One thing I’d just like to point out is that Kurzweil is not the be-all and end-all of Singularitarianism. For those who find Kurzweil too woo-heavy I’d recommend Eliezer Yudkowsky, a senior research fellow at the Singularity Institute for Artificial Intelligence.

    You can find a lot of his writing at Less Wrong, a community blog that looks at improving rationality more generally. He also pops up on Bloggingheads occasionally.

  16. Even making the leap and assuming for the sake of argument that the Singularity is likely or possible, I just don’t see why I would want it to happen, particularly the live-forever part. This sack of flesh and bones is Me. I enjoy learning things, experiencing things, pushing myself physically, hiking until I’m dog-tired and ready for that cheeseburger and milkshake. I know that someday my body is going to be done, and while I’m no big fan of that thought, I just can’t see downloading my consciousness into an artificial body and mind so that I can just coast along endlessly into the future.

  17. If we ever reach the situation where artificial intelligence makes my decisions for me please promise that Microsoft will not be involved with the programming. I already have enough problems with Word deciding that I did not really mean to type one word and really meant another. I remember years ago writing a report where we had used heat shrink tubing to make a repair and being advised that “shrink” was not a polite word to use and should consider therapist instead. I still use heat therapist tubing to this day.

  18. Thanks for the post, Sam, it got me thinking.

    So far, I had only felt uncomfortable with the idea behind the singularity, but I could not quite pin it down. After reading your list of counter-arguments, I thought: no strong points, only proof that it might be hard and that we have failed to meet earlier expectations in dealing with e.g. cancer. That would only affect the When, not the If.

    But in my opinion the question is more fundamental: Will we eventually cross that event horizon by continuing to do what we do – or is there a threshold similar to the speed of light?

    I tend to assume the latter, because so far we have only created things less complex than ourselves (please allow me to exclude my kids from the list). As long as there are no indicators that this threshold either does not exist or can be overcome by human means, the singularity proponents face similar problems as creationists. Again – my opinion, and I may have fallen into a fallacy trap. Everybody, provide arguments why I might be wrong, that would be very interesting!

    The singularity is a good focal point for a broad discussion, and the fallout from it might be quite beneficial. Given the media attention, we could end up with a better popular understanding of a number of topics, e.g. what evolution is. Or am I too optimistic now?

  19. Sam,

    If you’re really interested in Singularity, and especially in ‘predictions’ thereof, maybe you should acquire some historical background: try reading up on Professor Herbert Simon and also on the Japanese 5th Generation Computer project.

    We really have been at this game for a long time, and the only reliable prediction is that however few, or many, years in the future the ‘predictions’ specify, we never actually get any closer. Indeed you might want to add Alvin Toffler to the above lot.

    You can find ’em all, for starters anyway, in Wikipedia, but of course.

  20. @Kai:

    The singularity is a good focal point for a broad discussion, and the fallout from it might be quite beneficial. Given the media attention, we could end up with a better popular understanding of a number of topics, e.g. what evolution is. Or am I too optimistic now?

    No, I think that’s a pretty fair assessment. It’s one reason I decided to make the post. I wanted to bring the discussion here. And already, some folks who, like me, had no prior exposure to the subject are talking about it and looking deeper into it.

    @GrueBleen:

    Yes, I’m slowly discovering these things piece by piece. Trying to fit in extra reading when I get a chance.

    Thanks for the info.

  21. I always thought Rapture of the Geeks was a zombie apocalypse.

    I’m preparing for both.

    The end.

    (In all honesty, I haven’t read the entire post and I am going to get back to it but I saw “Rapture of Geeks” and pretty much *squeed* my pants.)

  22. @Daniel Schealler:
    1. The theoretical wall definitely exists – thing is, the human brain falls short of it by a long run. If we “just” build an artificial brain and let that one improve itself, that might be impressive enough (if someone’s not sure what “more intelligent” could mean, the easiest way to imagine it is to take an amazingly smart person, say Terry Tao, and speed his thoughts up a thousand times – but there could be vastly more). On the practical limits, you’re right, there could well be something that makes the project fail – and I’d be really interested to know what.

    2. I’m not so sure that there’s a limit that’s so low it will prevent us from obtaining AGI (the G is for general, and one *should* use the distinction because most modern research on the field of AI focuses on something entirely different). Basically, your brain runs on computation, humans are useful, and the more we’ll be able to reproduce human behavior (or different, interesting things) by a computer, the more we’ll want more computational power.

    3. They shouldn’t. The only reason that one comes up every time is that in order to explain such a weird idea as the singularity to someone who isn’t familiar with it, you have to dumb it down, to make clear why exactly we would expect to be able to build intelligent machines *at all*. But “artificial (i.e. not human) intelligence” is, as a definition, a bit like “an animal that isn’t a starfish”, so you’re right in that sense. That doesn’t mean it won’t be useful.

    4. You somehow skipped over the fourth point, so I’ll use this space to apologize for my English.

    5. Either we rebuild a human brain, or you’re right that it probably won’t look much like a human mind.
    It won’t be easy, you’re right. I think it’s inevitable only if humanity doesn’t get killed and progress continues in a meaningful way, but if it goes on as it does now, I think it will happen, if only after we get to brain simulations. I just don’t think we’ll be able to go on like now, what with all the global warming and destabilizing world and whatnot.

    6. This is where I just link to the Singularity Institute, the organizers of the mentioned Singularity Summit. They (Eliezer Yudkowsky to be precise) have this idea of a “Friendly AI” (see page 11, but read the rest too, it’s good stuff), that means an AI that stays “friendly” to humans even as it reprograms itself to become better and better, beyond imaginable limits (as it hits the singularity).
    The point of this idea is that if you don’t make sure that the AI is friendly, it probably isn’t, and will just maximize an odd or ill-stated goal (the name coined by the Less Wrong page is “paper clip maximizer”, an AI that wants to transform as much of the universe as possible into paper clips).

    And while I don’t agree with the singularitarians on their assessment of the state of the world, that there is nothing that’s more important right now (like global warming), I do think this idea is worth exploring, and that *if* we can build a friendly self-improving AI, that will be the most important, most amazing thing that’s ever happened to the universe. I just fear it’s not that likely.

    This may now look like a specific refutation of Daniel’s post, but I just took that one because it helped me organize my thoughts.
