AI is accelerating at an incredible pace, so quickly that many expect it to exceed our own intelligence sometime between 2040 and 2060. Yes, even though “the brain is the most complex object in the known universe” (Nick Bostrom). “In our world, smart means a 130 IQ and stupid means an 85 IQ — we don’t have a word for an IQ of 12,952.” (Tim Urban, in his must-read AI story)
How bad could it be?
The train won’t stop once it passes us. No one knows what will happen when machines surpass our brains, and there is no way to predict what the consequences will be for us.
Experts believe there is a 52% chance that it will turn out well. Experts are people who know more and more about less and less, but let’s trust them for a minute.
Okay, how good?
“Artificial Super Intelligence (ASI) could solve every problem in humanity,” says Tim Urban. It could “halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI — health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would be molecularly identical to real meat — in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food. Better yet, Artificial Super Intelligence could allow us to conquer our mortality.”