This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.
Microsoft announced on July 22 that it plans to invest $1 billion in OpenAI, a research organization working on artificial general intelligence (AGI). The announcement mentioned the co-development of Azure AI supercomputing technologies, spanning both software and hardware, to enable Microsoft Azure to scale to AGI.
AGI, or strong AI, is AI that generalizes across domains. In contrast, the current form of AI solves problems only in narrow domains. For example, AI trained on medical images cannot be used for self-driving cars, and AI trained to recognize Chinese speech will falter at recognizing French speech. An AGI for vision, by contrast, would be able to solve all vision tasks; similarly, an AGI for language would be able to solve all language tasks. AGI could also be built for subcategories like object recognition, speech translation, or even autonomous driving.
What Does OpenAI Gain?
The Microsoft announcement looks more favorable to OpenAI than to Microsoft. In addition to the investment toward expanding its R&D budget, OpenAI essentially gets access to Azure cloud and hardware resources, likely at favorable prices. The organization published a paper on AI compute growth last year, which showed a 300,000x increase in the compute used by the largest AI training runs from AlexNet in 2012 to AlphaGo Zero in 2017. The paper suggested that compute requirements for AI are growing roughly 8x every year, driven by increasing AI model complexity. This trend held in 2019 as well: OpenAI Five, the organization's Dota 2 system, used roughly 8x more training compute than its 2018 predecessor. Therefore, it's clear that for OpenAI to continue advancing AI, it needs access to hardware, preferably at lower cost.
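The growth figures above can be sanity-checked with simple compound-growth arithmetic. The sketch below (the function name is ours, not OpenAI's) shows that an 8x annual increase over the roughly six years between AlexNet and AlphaGo Zero lands in the same ballpark as the 300,000x total the paper reports:

```python
# Back-of-the-envelope check of the compute-growth figures cited above.
# Assumes a constant 8x year-over-year growth rate, per the article;
# OpenAI's analysis framed this as a doubling time of a few months.

def total_compute_growth(annual_factor: float, years: int) -> float:
    """Cumulative growth in compute after `years` at a fixed annual factor."""
    return annual_factor ** years

# AlexNet (2012) to AlphaGo Zero (2017) spans roughly 6 years of this trend.
growth = total_compute_growth(8, 6)
print(f"8x/year over 6 years -> {growth:,.0f}x total growth")
# Roughly 262,144x -- the same order of magnitude as the ~300,000x reported.
```

The point of the arithmetic is that even small differences in the assumed annual factor compound into enormous differences in hardware spend within a few years, which is why subsidized access to Azure matters so much to OpenAI.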
Unlike DeepMind, which is owned by Google and has pushed the boundaries of AI with AlphaGo, backed by unrivaled support from Google's tensor processing unit (TPU) and data center infrastructure, OpenAI has largely been an independent research body. It has been left to fend for itself on hardware, either purchasing hardware from NVIDIA or renting public cloud resources. OpenAI now gains access to Microsoft's AI hardware infrastructure (i.e., Azure cloud and data centers). Without access to high-performance hardware, OpenAI is essentially a racing bike with one wheel missing. With the Microsoft deal, it has secured the missing wheel and can now speed along toward its goal of developing AGI.
What Does Microsoft Gain?
In return for the investment in OpenAI, Microsoft plans to offer OpenAI's current technologies as "pre-AGI" cloud services and, eventually, AGI cloud services. It's not clear what Microsoft's pre-AGI services will look like, since the road to AGI cloud services is a long one, possibly 5-10 years away, if not more. Most of what OpenAI works on is still many years from commercialization, leaving a few candidates, such as the well-known language generation model GPT-2, as possible components of the pre-AGI services. In fact, there could be a link between OpenAI's language model advances and Maluuba, which Microsoft acquired in 2017 with the idea of developing AGI for language. At the time, Tractica called Maluuba Microsoft's DeepMind and bet that some of the technology might be commercialized. However, that hasn't come to pass.
We could consider this announcement a stake in the ground for Microsoft, setting forth a bold strategy that builds toward AGI in the cloud, something none of the other major players, such as Google or Amazon, has embarked on. However, it might be too early to venture into offering AGI cloud services when most enterprises still struggle to implement narrow AI.
Where’s the Spotlight?
Stepping away from what this means for Microsoft or OpenAI, the deal puts the spotlight on access to high-end AI compute hardware. It also highlights how a handful of companies like Google, Facebook, Microsoft, Amazon, and their Chinese equivalents now control access to AI hardware. For the rest of the companies developing AI, a point comes where they must either be acquired by one of the hyperscalers or go out of business, because the cost of accessing AI compute grows at an exponential rate, as noted by OpenAI.
DeepMind is one of the foremost companies working in the field of AGI, and its explicit goal is to solve AGI. Its efforts in game play are getting closer to AGI: DeepMind's latest creation, AlphaZero, has superhuman capabilities in the game of Go and has also beaten the top chess and shogi programs. DeepMind's access to Google's TPU and cloud infrastructure, not to mention the high performance computing (HPC) engineering teams within Google, gives it an upper hand and ensures that the AI models it develops will always be matched with equivalent hardware resources. The hardware behind AlphaGo Zero is estimated to have cost $25 million, which gives a sense of the advantage of having hardware resources in-house.
Eventually, as we move toward AGI and start to see generalized AI being offered in specific areas like language or vision – or subcomponents within those – the hardware requirements will grow at the rate of 8x or more every year. As a result, most AI companies will not be able to develop AGI unless they have in-house access to hardware resources. This creates a massive gap in the market that will widen quickly, as the AI capabilities of the hyperscalers are orders of magnitude more advanced than those of the rest of the AI companies.
This gap is already starting to emerge in areas such as meta-learning, reinforcement learning, and the development of AutoML. Google and a handful of others, including Facebook, have far more advanced capabilities in those areas thanks to their in-house AI hardware and data center resources. The questions are: How soon will this monopoly by the hyperscalers start to affect the AI ecosystem? Are we likely to see regulation to pull this AI monopolization back? And might we see the emergence of third-party decentralized AI data centers?
Research Director, Tractica