Intel has, unsurprisingly, announced that it is plunging head-first into the artificial intelligence and machine intelligence (AI and MI) market with what it somewhat inaccurately claims to be the 'industry's first neural network processor (NNP)' for deep-learning acceleration.
That Intel has been working on AI and MI projects is no secret, of course, and neither is the technology behind what Intel chief Brian Krzanich proclaims to be an industry first: Intel's 'Lake Crest' Neural Network Processor (NNP) is based on the work of Nervana, a start-up acquired by Intel last year in a deal estimated at around $400 million. Using that technology, Intel's Nervana NNP promises, in Krzanich's words, to 'revolutionise AI computing across myriad industries.
'Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximise the amount of data processed and enable customers to find greater insights – transforming their businesses. Examples include:

'Health care: AI will allow for earlier diagnosis and greater accuracy, helping make the impossible possible by advancing research on cancer, Parkinson's disease, and other brain disorders.

'Social media: Providers will be able to deliver a more personalized experience to their customers and offer more targeted reach to their advertisers.

'Automotive: The accelerated learning delivered in this new platform brings us another step closer to putting autonomous vehicles on the road.

'Weather: Consider the immense data required to understand movement, wind speeds, water temperatures, and other factors that decide a hurricane's path. Having a processor that takes better advantage of data inputs could improve predictions on how subtle climate shifts may increase hurricanes in different geographies.'
Unlike some of Intel's other research projects, such as the company's recent announcement of a 17-qubit quantum processor, the Nervana NNP is on the roadmap for a commercial launch, with what Krzanich claims are 'multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models.'
More technical details of the NNP are available from the official website, which describes architectural features including software-defined on-chip memory management, and true model parallelism that allows multiple chips to operate together as a single large virtual chip, along with the promise of a roadmap to a two-orders-of-magnitude performance increase by 2020. Pricing, however, has not been disclosed.
October 18 2019 | 17:00