This talk on AI reminds me of the Munk Debate on “Be it resolved: humankind’s best days lie ahead” here.
Notes taken (mostly directly) from the video above.
- Famine is not fun. Science fiction is fun.
- Danger of AI: we’re unable to marshal an emotional response to the dangers that lie ahead.
- Analogy: 2 doors.
- One: Something stops us from continuing to improve AI, and we’re permanently unable to develop our technology (e.g. global famine, etc.).
- Two: We continue to develop smarter and smarter machines, eventually setting off an intelligence explosion. (We’re headed through this door.)
- Even a slight divergence between our goals and theirs (humans vs. AI) could destroy us. Cf. ants and how we treat them.
- Concern: we build machines that treat us with similar disregard.
- Intelligence is a matter of information processing. Mere matter, suitably arranged, can process information; AI is building systems of atoms that process information. As long as we keep going, they will get smarter and smarter.
- We will keep improving our intelligent machines, because we want their help with climate change, disease, etc.
- We are not near the summit of possible intelligence.
- If we build machines that process information faster and faster, they will surpass us by virtue of speed alone.
- Electronic circuits are roughly 1,000,000× faster than biochemical ones, so a machine thinking at human level would do about 20,000 years of human intellectual work week over week.
- Machine design is important
- Researchers say we shouldn’t worry, but appeals to timelines aren’t reassuring: we have no idea how long it will take to create the conditions to build superintelligence safely. SAFELY is the key word.
- Blending neuroscience and AI is important, but it’s often easier to build the AI first. (Challenge: folks involved in building AI are in their own race to get there first.)
- Re: Manhattan project
- It seems we have only one chance to get the initial conditions right for building superintelligent AI.
- Admit that information processing is the source of intelligence.
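The “20,000 years of work per week” figure above is just arithmetic on the talk’s assumed 1,000,000× speedup; a quick back-of-envelope check (the speedup number is the talk’s assumption, not a measured value):

```python
# If a machine thinks at human level but ~1,000,000x faster (the talk's
# assumption), one week of machine thought equals 1,000,000 human-weeks.
speedup = 1_000_000
weeks_per_year = 52

human_weeks_per_machine_week = 1 * speedup
years_of_human_work = human_weeks_per_machine_week / weeks_per_year

# ~19,231 years, i.e. roughly the "20,000 years" quoted in the talk.
print(round(years_of_human_work))
```

So the quoted figure is a rounded version of 1,000,000 / 52 ≈ 19,231 years of human-equivalent intellectual work for every week of machine runtime.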