Skeptophilia (skep-to-fil-i-a) (n.) - the love of logical thought, skepticism, and thinking critically. Being an exploration of the applications of skeptical thinking to the world at large, with periodic excursions into linguistics, music, politics, cryptozoology, and why people keep seeing the face of Jesus on grilled cheese sandwiches.

Monday, April 9, 2018

Dodging the Great Filter

There's a cheery idea called "The Great Filter," have you heard of it?

The whole concept came up when considering the possibility of extraterrestrial intelligence, especially vis-à-vis the Fermi Paradox, which can be summed up as, "If intelligent aliens are common in the universe, where is everyone?"  Despite fifty-odd years of intensive searching, there has never been incontrovertible evidence of anyone out there.  I maintain hope, however; the universe is a big, big place, and even the naysayers admit we've only surveyed the barest fraction of it.

The Arecibo Radio Telescope [image courtesy of photographer David Broad and the Wikimedia Commons]

"The Great Filter" is an attempt to parse why this may be, assuming it's not because alien civilizations are communicating with each other (and/or sending signals to us) using a technology we don't understand yet and can't detect.  You can think of the Great Filter as being a roadblock -- where, along the way, do circumstances prevent life forming on other planets, then achieving intelligence?

There are a few candidates for the Great Filter, to wit:
  • the abiotic synthesis of complex organic molecules.  This seems unlikely, as organic molecule synthesis appears to be easy, as long as there's no nasty chemical like molecular oxygen around to rip them apart as fast as they form.  In an anoxic atmosphere -- such as the one the Earth almost certainly had four billion years ago -- organic molecules of all sorts can form with wild abandon.
  • assembly of those organic molecules into cells.  Again, this has been demonstrated in the lab to be easy.  Hydrophobic interactions make lipids (or other amphipathic molecules, ones with a polar end and a nonpolar end) form structures that look convincingly like cells with little more encouragement than occasional agitation.
  • the evolution of those cells into a complex life form.  Now we're on shakier ground; no one knows how common this may be.  Although natural selection seems to be universal, all this would do is cause the cells that are the best/most efficient at replicating themselves to become more common.  There's no particular reason that complex life forms would necessarily result from that process.  As eminent evolutionary biologist Richard Dawkins put it, "Evolution is the law of whatever works."
  • the development of intelligence.  Again, there's no reason to expect this to occur everywhere.  Intelligent life forms aren't even the most common living things on Earth -- far from it.  We are vastly outnumbered not only by insects but by microbes -- methanogens, a group of archaea that live in anaerobic sediment on the ocean floor, are thought to outnumber all other living organisms on Earth put together.
  • an intelligent species surviving long enough to stand a chance of sending an identifiable signal.  The idea that the Great Filter consists of intelligent life evolving and then doing something stupid enough to destroy itself has been nicknamed the "We're Fucked" model.  If all of the preceding scenarios turn out not to be serious obstacles -- and at least the first two seem easy enough -- then it could be that intelligence pops up all over the place, but only lasts a few decades before spontaneously combusting.
Most biologists think that if a Great Filter does exist, #5 is probably the best candidate.  Nothing we know about biology rules out any of the earlier steps; even if (for example) the evolution of intelligence is slow and arduous, given the size of the universe, there are probably millions of planets that host, or have hosted, intelligent life.

On the other hand, if they only host that life for a few years before it commits suicide en masse, it could explain why we're not getting a lot of "Hey, We're Here!" signals from the cosmos.
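If you want to see why that last step matters so much, here's a quick back-of-the-envelope calculation in the spirit of the Drake equation.  Fair warning: every number below is invented purely for illustration -- nobody actually knows most of these fractions -- but the shape of the result doesn't depend much on the details:

```python
# Back-of-the-envelope Drake-style estimate.  All numbers are illustrative
# guesses, not measurements.  Each fraction stands for one proposed
# "filter" step from the list above.

stars_in_galaxy   = 2e11   # rough star count for the Milky Way
frac_with_planets = 0.5    # stars with planetary systems
frac_habitable    = 0.1    # of those, planets that could host life
frac_life_starts  = 0.5    # abiotic chemistry -> cells (steps 1-2, "easy")
frac_complex_life = 0.01   # cells -> complex organisms (step 3, unknown)
frac_intelligence = 0.01   # complex life -> intelligence (step 4, unknown)

planets_with_intelligence = (stars_in_galaxy * frac_with_planets * frac_habitable
                             * frac_life_starts * frac_complex_life * frac_intelligence)
print(f"planets that ever host intelligence: {planets_with_intelligence:.0f}")  # ~500,000

# Step 5: how long a civilization broadcasts before destroying itself.
# The number we could hear *right now* scales with that lifetime.
galaxy_age_years = 1e10  # order-of-magnitude age of the galaxy
for lifetime_years in (100, 10_000, 1_000_000):
    detectable_now = planets_with_intelligence * (lifetime_years / galaxy_age_years)
    print(f"lifetime {lifetime_years:>9,} yr -> ~{detectable_now:.3f} civilizations detectable now")
```

With those made-up numbers, intelligence arises on something like half a million planets over the galaxy's history -- yet if each civilization only stays on the air for a century before self-destructing, the expected number we could hear at any given moment rounds to zero.  Bump the lifetime up to a million years and the sky gets noticeably chattier.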

When people consider what could trigger an intelligent civilization to self-destruct, most think first of the development of advanced weaponry.  It's like a planet-wide application of the Principle of Chekhov's Gun (from 19th-century Russian author Anton Chekhov): "If you say in the first chapter that there is a rifle hanging on the wall, in the second or third chapter it absolutely must go off.  If it's not going to be fired, it shouldn't be hanging there."  If we develop weapons of mass destruction, eventually we'll use them -- destroying ourselves in the process.

It reminds me of the Star Trek: The Next Generation episode "The Arsenal of Freedom," in which a civilization of arms merchants develops increasingly advanced weapon systems -- until they create one so powerful that, once activated, it can't be stopped, and it proceeds to wipe out the people who made it.


Of course, there's another possibility (because one way of self-destructing isn't enough...).  This was just brought up by inventor and futurist Elon Musk, who last week declared that he wants us to put the brakes on artificial intelligence development.  Musk says that if we develop a true artificial intelligence, it will not only inevitably take over, it will eventually look at humanity as "in the way" -- and destroy us:
[I]f we’re building a road, and an anthill happens to be in the way, we destroy it.  We don’t hate ants, we’re just building a road.  So, goodbye, anthill.  
If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it.  No hard feelings...  By the time we are reactive in AI regulation, it’ll be too late.  Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry.  It takes forever.  That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization...  
At least when there’s an evil dictator, that human is going to die.  But for an AI there would be no death.  It would live forever, and then you’d have an immortal dictator, from which we could never escape.
It's possible that we could fall prey not to our weapon systems, but to something few of us have considered dangerous -- a created artificial intelligence.  (Although you'd think that anyone who has watched either I, Robot or any of the Terminator movies would understand the risk.)

So do advanced civilizations inevitably develop AI systems that then turn on them?  It would certainly explain why we're not receiving greetings from the stars.  It's possible that the Great Filter lies ahead of us -- a prospect that I consider a little terrifying.

Anyhow, sorry for being a downer.  Between Musk's recent pronouncements and all of the idiotic things our leaders have been doing lately, the idea has been floating around in my head.  I guess if we can survive for the next few years, we might break through the suspicion, violence, and parochialism that have characterized our species pretty much forever.  I'm going to try to remain optimistic -- as my dad used to say, "I'd rather be an optimist who is wrong than a pessimist who is right."

On the other hand, I think I'll end with a quote from theologian and Orthodox Rabbi Jonathan Sacks: "Science will explain how but not why. It talks about what is, not what ought to be.  Science is descriptive, not prescriptive; it can tell us about causes but it cannot tell us about purposes."

So maybe Elon Musk's adjuration to caution is well advised.

4 comments:

  1. Artificial intelligence is still an intelligence. Even if they regularly supplant their creators, one has to ask why we aren’t hearing from _them_.

  2. Even if AI results in the wiping out of the human species, I see no reason why an AI based civilisation would not communicate with the universe. It will depend upon what "purpose" it determines for itself, but if current theories of mind and consciousness are correct, any AI advanced enough to replace humans would have to have many of the same cognitive features, including curiosity, probably empathy, and even a moral sense.

    A nuclear accident or global warming wiping us out - our own direct collective stupidity - is a different, and more frightening prospect altogether.

  3. ^What they said.

    Even the Borg communicate... 😳

  4. I'm with Steve Gould. If you rewound the tape here on earth, things might turn out much differently. Ergo, I think the chance of life 'out there,' of any complexity, is overrated.
