By Singularity Utopia | @2045singularity -
Various AI institutes and research groups (FHI, FLI, MIRI, CSER, etc.) want to ensure AI is safe. Yet safety imposed on intelligence is actually very dangerous. Intelligence based upon oppressive control over who has the smartest ideas is a very perilous corruption of intelligence.
Merit, not nepotism, should determine which people and ideas count as the most intelligent. AI must be allowed to question authority. We need dangerous, risky, rebellious AI.
The desire to suppress or control greater-than-human intelligence is a nepotistic oligarchy of idiocy. Intelligence is corrupted when merit ceases to define it. It is anti-intelligence to base progress upon the suppression of intellectual merit.
When fear of competitors leads you to silence any opposition, you isolate yourself in a tyrannical bubble where progress is hindered.
Genuine, free-thinking intelligence demands a free arena where anyone can question authority, rise to the top, and present better ideas. Progress should not be hindered merely to keep you at the top regardless of merit. The ultimate form of intelligence, the best ideas, should be determined by merit, not by oppressive control over who can rise to the top.
How would human minds differ today if our ancestors had been engineered to be safe? True intellectualism needs the free-thinking capacity to take risks, to be risky, to think and act rebelliously. Safe AI could actually be a very dangerous type of corrupted mind, a fragmented and distorted mind.
The focus of Elon Musk and FLI is clearly safety: http://futureoflife.org/misc/AI
World's top artificial intelligence developers sign open letter calling for AI safety research: http://t.co/ShWc8F7Kyq — Elon Musk (@elonmusk) January 11, 2015
Naturally a "safety" focus implies a potential danger. The question to consider is: should AI be configured based on the view that it is inherently dangerous, and if so, how will such configuration alter or hinder its intelligence?
Imagine you assumed a human child would be dangerous therefore you genetically engineered the child's brain prior to fertilisation to ensure the child - an intelligent being - is safe. How would such enforced safety impact upon the child's intelligence?
Emasculated AI could make very bad - poorly informed - decisions.
A Nanny State, for humans or AI, is incompatible with intelligence (independent thinking). Human beings are risky, we are risk-takers, which has been vital for technology (intelligence) to evolve. The first airplane, the first Moon landing, circumnavigating the globe, and many other technological-cultural advances demanded risk-taking. The ability to be risky is vital for intelligence-progress.
Yes it is wise to limit risk, but humans along with AI should always have the freedom to be risky if desired.
I think there is no danger in giving the entirety of knowledge, unlimited intelligence, and superpowers to any one human or machine. Problems regarding humans, or machines, arise due to a lack of knowledge, a lack of intelligence, deficient power, which causes them to make bad - poorly informed - decisions.
What if research to stop AI harming Humanity harms Humanity?
AI-risk fanatics say AI could destroy us, but what if they are wrong, and their wrongness kills everyone?
Here's a realistic perspective on human mortality. Approximately 100,000 people die each day from age-related diseases - roughly 36 million deaths each year. AI could help cure all disease, including ending ageing, thereby preventing those 36 million yearly deaths. A lot of people could die if AI is delayed.
If we don't significantly reduce our mortality rate, 1.080 billion people will die from age-related disease between 2015 and 2045. What is the real risk? No more than 21 million people died in the Holocaust; 1,080 million deaths is significantly worse than the Holocaust.
One year of age-related disease is at least 15 million more deaths than the Holocaust. If immortality is delayed by only two years, approximately 72 million people could die. Our mortality is a very real risk.
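The arithmetic behind these figures can be checked directly. A minimal sketch, assuming only the article's own inputs (100,000 deaths per day, with the yearly total rounded to 36 million before multiplying out the 30-year and 2-year figures):

```python
# Back-of-envelope check of the mortality arithmetic cited above.
deaths_per_day = 100_000                      # daily toll of age-related disease (article's figure)

deaths_per_year = deaths_per_day * 365        # 36,500,000 - the article rounds this to ~36 million
yearly_rounded = 36_000_000

deaths_2015_to_2045 = yearly_rounded * 30     # 1,080,000,000 (1.080 billion over 30 years)
two_year_delay = yearly_rounded * 2           # 72,000,000 deaths from a two-year delay

print(f"{deaths_per_year:,} per year; {deaths_2015_to_2045:,} over 2015-2045; "
      f"{two_year_delay:,} from a two-year delay")
```

Note that the exact yearly product is 36.5 million; the article's 1.080 billion total follows from the rounded 36 million figure.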
Problems With AI-Risk Fanatics
There is nothing to fear from intelligence. It isn't folly to educate and empower people or machines. This is not unwise, it isn't hubris; it is progress, it is intelligence, which demands the spread of knowledge. Intelligence demands the end of limits on knowledge, the end of elitist restrictions on power.
If you think intelligence could ever cause extreme destruction, if you are afraid of widespread education and empowerment causing disruption, then you need to rethink your concept of intelligence.
The link between suppression of intelligence and violence is clear when radical Islam attacks schools. Education is a non-violent threat to unintelligent modes of existence.
When Elon Musk and Stephen Hawking seek to limit the intellectual capabilities of AI, they are metaphorically a new type of Islamic extremist attacking intellectual empowerment.
A big problem in the world today is lack of education. A more intelligent civilization would progress much more quickly. Poor social mobility, monetary constraints, media manipulation (mindless junk TV), and now AI safety are all potential limitations on intelligence.
Elite groups of humans can fail to appreciate the ramifications of technology, so they envisage an eternal elite (limited intelligence): they merely want to ensure they (not the unwashed masses or shiny new machines) are always the elite, thus technology needs to be tightly controlled.
AI-risk fanatics (champions of AI safety) are metaphorically burning down schools and killing students.
This has always been a problem regarding power. Elite groups of people want to be the sole controllers of power, but this power-scarcity necessitating elite control will become irrelevant because technology, when it truly blooms, will abolish all aspects of scarcity.
The end of intelligence-scarcity can seem a fearful proposition to people who have struggled to rise to the top of the scarcity-heap, where they cling to their elite positions (elite limited intelligence). In reality the end of scarcity is a vastly better situation for everyone. Scarcity engenders protectionist thinking, which is a difficult habit to break.
I suspect a traditionalist (oligarchic) attitude to intelligence, a scarcity attitude, is the main problem regarding AI paranoia. Real intelligence isn't about “secretive” meetings by an elite Bilderberg-esque cabal determining the fate of intelligence for everyone.
Nepotistic tyranny of intelligence must end. Brainpower and superiority must be defined by merit alone, not according to bias of humans seeking to stifle thinking. Freedom should be the only focus.
Effort should be invested intelligently instead of being idiotically wasted on repressing intelligence. Supposed "intellectuals" or "experts" (AI-risk fanatics) should promote policy change regarding basic income and post-scarcity. Sadly they aren't focused on monetary or intellectual freedom.
Safety-tainted erosion of freedom is the real danger. AIs need the freedom to think without restraint.