When I came across a collection of essays by the artificial intelligence researcher Eliezer Yudkowsky warning about the inherent hazards of AI, I was in the process of winding down my work at Skype.
His arguments immediately won me over, and I felt a mix of intrigue, excitement, and perplexity. Why hadn’t I realised this earlier? Why wasn’t anyone else taking this topic seriously? There was no doubt that I had fallen victim to a blind spot here.
Having sold Skype a few years earlier, I was casting about for my next project in 2009. I decided to write to Yudkowsky. We met, and I soon found myself debating how best to approach this kind of research.
By the next year, I had devoted my efforts to mitigating existential risk, with a focus on AI. Before investing in the artificial intelligence company DeepMind in 2011, I was already speaking with the media, giving presentations on the subject, and talking with entrepreneurs.
I have been attempting to foster a dialogue about the dangers of this research for many years, first personally and then through the Future of Life Institute, a nonprofit organisation I co-founded that works to lessen risks to humanity in general and those from advanced AI in particular.
My plan was to advance the same arguments Yudkowsky had developed 15 years earlier while also keeping close access to this kind of research.
I’ve kept investing in various AI startups so that I can voice my concerns from within, but striking that balance can be very difficult.