It was in March 2016 that Google’s AI AlphaGo beat the high-profile Go player Lee Sedol at his own game. What was a surely disappointing moment for Sedol, losing 4-1 to the AI, was a revelation for the tech community. Media organisations quickly picked up the story, proclaiming AlphaGo’s success a demonstration of AI’s superiority to humans.
But while it may seem that we’re headed for a Skynet scenario, artificial intelligence has yet to live up to our expectations of what “intelligence” really is. My interest was piqued when I first heard about Kimera Systems’ latest algorithm, Nigel, billed as the first example of artificial general intelligence.
Nigel, conceived by Mounir Shita and his late partner (after whom Nigel was named), is a single-algorithm, machine-based artificial general intelligence (AGI). Most contemporary AI is restricted to a single domain (driving an autonomous vehicle, say, or recognising patterns in images), whereas AGI is not bound by these restrictions and can, within reason, accomplish or at least try to understand any task it is given.
I spoke with Mounir Shita, co-founder of Kimera and co-creator of Nigel, to discover more about what makes Nigel such a monumental step for the future of AI, and about the overblown stories of AI’s capabilities in much of our media.
“I think if you came to earth from a different planet without knowing how advanced we were and saw the AI news you’d think we were 200 years ahead of where we actually are,” Shita tells me, in what I soon learn is a typically bullish approach towards AI. “It doesn’t matter if it’s IBM, Google or Facebook, the most advanced thing that conventional AI can do today is recognise a cat from an image.”
Pattern recognition is the cornerstone of artificial intelligence and, many would argue, of intelligence itself. It is through noticing patterns that AlphaGo could beat the world’s best player, that autonomous vehicles can make sense of stop signs and that AI can even compose a pop song. But Shita feels the only way for us to truly create an advanced AI is to redefine our humanised view of intelligence.
“I have always been puzzled by why we want to build human intelligence,” he tells me. “Why don’t we focus on intelligence that can expand our abilities; if you replicate humans you just get a machine that can do what a human can do.”
It was this dissatisfaction with the human-centric view of intelligence that led Shita down the path of quantum mechanics, to create a theory of general intelligence, one that sees intelligence as being part of the fabric of space and time.
“From there it was a lot easier to start thinking: now we have an idea of what intelligence is, how do we create an algorithm of it?” Shita says.
From these questions came Kimera’s AGI Nigel. Nigel operates without relying on a centralised brain or knowledge bank; instead it utilises a neural network of nodes that are compatible with any connected device and can work in any domain. What’s interesting is that Nigel’s understanding of the world comes from its user, which feeds back into its system. Shita tells me that Nigel’s goal is to help you achieve your goals, but he has set his aspirations for the AGI very high, hoping that Nigel can contribute to solving some of the global issues that have stumped humans: global poverty, energy consumption and the search for a cure for cancer, to name just a few.
He tells me, “What we really wanted to figure out was how do we create something that can take on the biggest issues in life, and immediately the assumption was you don’t build a piece of technology then figure out how to make money off it; that’s not how you change the planet.”
“The second part of our research we called future economy. We assumed that the economic model we live in today cannot work with ubiquitous intelligence, so we focused on how earth would operate economically 100 years from now, assuming the planet was full of thinking machines. When we came up with a model we took the algorithms that we had developed and constructed them in a way that allowed us to evolve how things work today into an idea of how they should work.
“If you look at all these big problems they’re not just solved by technology, there has to be a relationship between technology and the global system as a whole.”
Let’s not assume, however, that the world is ready to embrace Nigel. Elon Musk and Stephen Hawking have both expressed their fears of a future where killer robots enslave humanity and our species is rendered obsolete. But Shita remains positive in his outlook on AI.
“If Elon Musk reads this, I would like to ask him on behalf of every AI scientist I have talked to, to please stay out of this,” he chuckles, though he is clearly tired of hearing this claim. “I think, in common with many AI scientists I’ve talked to, that we like to focus on the good. We want to get people out of global poverty, we want to reduce global energy consumption, we want AI to help us host a great dinner party and to interact. Trying to regulate something that we’ve never seen before just doesn’t seem like the best way of achieving that.”
Though it’s clear that AI’s capabilities have been vastly exaggerated by the media, there are very valid fears around the rise of artificial intelligence, most notably where it concerns jobs. It is this future of AI jobs that has seen Uber become the most heavily funded tech start-up in history, and even seen an insurance firm in Japan replace its office workers with IBM’s Watson Explorer AI.
“I believe that machines will eventually take over every job that exists,” Shita tells me. “I think the reason why people are so scared of losing their jobs to machines is because we live in a system that expects people to work.
“But machines will take over, and you can either fight it or accept it and then come up with a system that allows people to live happy lives.”
Shita describes a future without jobs, where we humans can enjoy our lives to the fullest, travelling the world and dedicating time to our passions. This proposed future of freedom may sound idealistic, but change is inevitable, and millions of jobs around the world will be taken over by machines. Shita feels that rather than fight this, we must make it work in our best interests.
“You don’t have to fear progress because you can’t imagine what things will look like later. No one really knows how things are going to look later,” Shita says. “But we can create a system that supports us. I truly believe that when you start presenting ideas of how life without work could be in the future fear is reduced. That fear of losing jobs is rooted in the fear of the unknown.”
While there are legitimate concerns regarding the rise of AI, perhaps we have allowed media scare stories to dominate the debate. As our conversation comes to a close, Shita insists that the real danger doesn’t come from the machines, but from our leaders.
“The biggest threat to general AI is not what Elon Musk is talking about, it is government,” he tells me. “Vladimir Putin even said recently that the nation that leads in AI will ‘rule the world’.
“A quote that really resonates with me is something Vinod Khosla, one of the best-known venture capitalists in Silicon Valley, wrote:
“Long before AI goes uncontrollable or takes over jobs, there lurks a much larger danger: AI in the hands of governments and/or bad actors used to push self-interested agendas against the greater good.”