Response to: The Double-Edged Absurdity of Ray Kurzweil's Metaphor

11/12/2015 - 17:30

Szymon Wilczyński


Recently I stumbled upon an article entitled “The Double-Edged Absurdity of Ray Kurzweil's Metaphor” and felt a little uneasy after reading it. What is the fuss all about? The author disagrees with Ray Kurzweil's concern regarding technology and tries to demonstrate that the risk is negligible, claiming that the potential benefits of upcoming tech outweigh the potential harm so much that we should simply ignore the threat. I, for one, disagree with that position.

Let's start with the “controversial” metaphor comparing technology to a double-edged sword. The author states: "Technology isn't a doubled-edged sword. At worst it is a 99% good sword and tiny 1% bad sword. Doubled-edged implies a 50/50 harm/goodness." They try to support this opinion by pointing to the relative rarity of accidents such as cutting one's finger with a knife or a fire burning down a village. The author admits that accidents happen, but argues that the benefits of using knives, fire, or, more broadly, technology outweigh the losses.

Firstly, the knowledge acquired through science and technology is itself a double-edged sword. Science and technology are tools, and we can use tools both ways - for good or for bad. A prime example is someone using a hammer to build a house versus someone using it as a murder weapon. It just happens that we live in a world in which an overwhelming majority of people use technology and science in beneficial and peaceful ways.


Furthermore, when I think about engineering something from the ground up, in my opinion the harm/goodness ratio is closer to 50/50 than anything else. When you are creating something, you have your own purpose behind the item you make. You are not constrained by somebody else's intent – your intent shapes the design. Technology is blind and is only as good or evil as the people creating or using it. That is why there are accidents, and there are people who deliberately burn buildings down, stab others with knives, create malicious software for fun, etc.

Secondly, even in the case of cars and fire, we use precautionary measures to ensure safety. If car accidents were rare and harmless on the whole, we wouldn't use seat belts or airbags. If fire weren't dangerous, would fire extinguishers exist? Why do we have smoke detectors and fire sprinkler systems? We actively counteract such risks because people have recognized these dangers and designed ways to mitigate them. It is important to note that these risks are often not lessened until something dangerous has already happened and a solution has been identified.

Fundamentally, what the author seems to miss is the difference between existing technologies (like knives and cars) and upcoming ones. Cutting your finger is not an existential threat and a knife can't be used as a weapon of mass destruction. Future technologies will have a greater global impact than anything ever before and we don't want to realize their harmful potential, because it may have disastrous consequences.


On my radar are mainly biotech, artificial intelligence, and nuclear weapons. I'll focus on biotech, because it was mentioned in the article and I think it's an interesting case. I see danger in a situation in which individuals or groups can manipulate the characteristics of organisms easily and cheaply. It's not hard to imagine that some group with advanced enough technology could modify a virus to be fatally harmful to humanity. Am I exaggerating? I don't think so. Here are some recent advancements in the biotech field:
• Rapidly decreasing cost of genome sequencing (currently $1000 for a whole genome from Veritas Genetics if you participate in the PGP)
• CRISPR-Cas9 (and the even better CRISPR-Cpf1)
• Genome sequencing projects like the one currently conducted by Human Longevity, Inc.
• The first synthetic self-replicating bacterium
• The first synthetic yeast chromosome
• A synthetic virus made in two weeks for $1000
• DNA with two synthetic nucleotides
• Progress in DNA synthesis
• A mobile DNA sequencer
• Human genome editing (despite the moratorium)
• Even controversial gene therapy targeted at aging

I do not think this trend will stop, and if you extrapolate these advances, you can imagine where things could be headed.

There are research facilities where existing and de-extincted diseases are studied to develop better cures; perhaps there are even military research facilities where diseases are modified just to see how far things can be pushed. I hope it is obvious that the possibility of an outbreak cannot be denied. Nor can we neglect individuals or groups that would like to hurt or kill a lot of people.

Luckily, there are people thinking about the consequences. The CEO of Cambrian Genomics said: "This (DNA Laser Printer) is probably more powerful than a hydrogen bomb. It has a power to create and destroy entire worlds, so you probably don't want to put one in every home. So I think this is going to be centralized service... forever."

The author's argument about the decreasing scarcity of goods doesn't hold up either when you consider groups such as Aum Shinrikyo in Japan. This group was wealthy enough to acquire military hardware ranging from AK-47s to a Mil Mi-17 helicopter, and to manufacture the nerve agents sarin and VX. Think about that, and consider the fact that you (and not only you) can easily find the smallpox genome on the Internet.

Instead of a "dawning utopia", let's face the harsh reality - our world is unfortunately still full of stupid and crazy people. There are probably people who would not hesitate to push a hypothetical "blow up all of humanity" button. I know the risk of dying from terrorism is minuscule, but that's not because terrorists don't want to kill people - it's because they don't have an efficient way to do it yet.


Quote from the article: "The main risk, likely to disastrously threaten our progress, is the sometimes painful absurdity of Kurzweil, MIRI, FLI, Hawking, or other fearful scaremongers."

I think the author went a little too far here. Kurzweil, a fearful scaremonger? On the contrary, he is often criticised for being too optimistic. Kurzweil presents probably the most utopian vision of the future I have ever heard of, and the "double-edged sword" metaphor is his only word of caution - calling this scaremongering is absurd. Also, MIRI, FLI, and FHI aren't some kind of neo-Luddites; they just call for extreme care with upcoming powerful technologies.

"In the meantime, I intelligently focus on utopia."

This sentence summarizes the author's position on future advancements pretty well. Visions of utopia in 2045 seem awesome, and I am sure that great, even unimaginable things await us. However, humanity needs to survive up to that point, and it is imperative to mitigate existential risk. It's worth mentioning that some precautionary measures are under development for that precise reason.

I'm an optimist, but I don't like blind, reality-denying optimism. In the past, mankind came really close to some catastrophic events and was lucky enough to avoid them. I think the future of humanity should not depend on pure luck. This time we must prepare for the upcoming revolutionary technologies, and the first step is to admit that the risk exists.