Microsoft, Nvidia work to speed up AI platform powering Cortana

Microsoft and Nvidia collaborate to boost the deep-learning capabilities of CNTK, the underlying AI technology behind Cortana and Skype language translator

Thanks to artificial intelligence, we have autonomous cars, chatbots, and speech recognition. Microsoft's Cognitive Toolkit (CNTK) is one of many platforms for training computers to learn, and it's getting an upgrade.

CNTK drives the Microsoft services Cortana and Skype language translation, and it boasts more than 90 percent accuracy in speech recognition tasks. Microsoft will soon release an upgraded CNTK toolkit, and one hardware maker wants to ensure the toolkit works best on its hardware.

Nvidia is partnering with Microsoft to optimize its GPU development tools for CNTK. The companies have created a set of deep-learning algorithms and libraries that will speed up CNTK to perform AI tasks like image and speech recognition on GPUs.

Deep-learning tools like CNTK are sandboxes in which developers can create a model for computers to solve a particular problem. The ultimate objective is to build a well-trained model that can accurately perform a specific task, such as shuffling through loads of medical data to diagnose a disease.
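
To make that concrete, here is a minimal sketch of what building and training a model looks like in CNTK's Python API. It assumes the cntk 2.x package; the toy network and synthetic data are invented for illustration, not taken from Microsoft's examples.

```python
# A minimal CNTK 2.x sketch: define a tiny classifier and train it on
# synthetic data. The network shape and data are illustrative only.
import numpy as np
import cntk as C

features = C.input_variable(2)   # two input features
labels = C.input_variable(2)     # one-hot labels for two classes

# A small feed-forward network built from CNTK's layers library.
model = C.layers.Sequential([
    C.layers.Dense(16, activation=C.relu),
    C.layers.Dense(2)
])(features)

loss = C.cross_entropy_with_softmax(model, labels)
metric = C.classification_error(model, labels)

lr = C.learning_rate_schedule(0.1, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, metric), [C.sgd(model.parameters, lr)])

# Feed random minibatches; a real model would read a prepared dataset.
for _ in range(100):
    x = np.random.rand(32, 2).astype(np.float32)
    y = np.eye(2, dtype=np.float32)[(x.sum(axis=1) > 1).astype(int)]
    trainer.train_minibatch({features: x, labels: y})
```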

As results come in, researchers continually modify their models and tweak parameters. One such tweak optimizes a neural network's connections, both to improve its accuracy and to scale training across more GPUs and servers.
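
Scaling the same training loop over multiple GPUs and machines goes through CNTK's distributed learners. The sketch below assumes CNTK 2.x's data-parallel distributed learner and an MPI launcher such as mpiexec; the shard logic is illustrative.

```python
# Hedged sketch of data-parallel training in CNTK 2.x: each MPI worker
# trains on its own data shard while gradients are aggregated across
# workers. Launch with, e.g.: mpiexec -n 4 python train.py
import cntk as C
from cntk.train.distributed import Communicator, data_parallel_distributed_learner

def make_distributed_trainer(model, loss, metric):
    local_learner = C.sgd(model.parameters,
                          C.learning_rate_schedule(0.1, C.UnitType.minibatch))
    # Wrap the ordinary learner; num_quantization_bits=1 would enable
    # CNTK's 1-bit SGD gradient compression.
    dist_learner = data_parallel_distributed_learner(
        local_learner, num_quantization_bits=32, distributed_after=0)
    return C.Trainer(model, (loss, metric), [dist_learner])

# Each worker picks its slice of the data by rank.
rank = Communicator.rank()
num_workers = Communicator.num_workers()
# ... feed this worker minibatches from shard `rank` of `num_workers` ...

# Required so all MPI workers shut down cleanly.
Communicator.finalize()
```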

The training of computer models can run for days and require intense computing horsepower. GPUs power deep learning for companies like Google and Facebook, and Microsoft is allowing some customers to test GPUs as part of its Azure cloud service. The updated CNTK tools will run faster on Microsoft's Azure N Series cloud offerings, which run on GPUs based on Nvidia's older Kepler and Maxwell architectures.
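
On the developer side, pointing CNTK at a GPU is a one-line device selection. A small sketch, assuming CNTK 2.x's device API:

```python
# Hedged sketch: select the first GPU for CNTK computation (cntk 2.x),
# falling back to the CPU when no GPU is present.
import cntk as C

if C.device.try_set_default_device(C.device.gpu(0)):
    print("Training will run on GPU 0")
else:
    C.device.try_set_default_device(C.device.cpu())
    print("No usable GPU; training will run on the CPU")
```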

The updated CNTK tools will also be available for on-premises use on Nvidia's DGX-1 supercomputer, which costs US$129,000 and packs eight Tesla P100 GPUs based on the latest Pascal architecture.

Nvidia has been working closely with Microsoft to optimize its GPU deep-learning libraries for the upcoming CNTK release, said Ian Buck, vice president of accelerated computing at Nvidia.

Nvidia's GPUs already work with CNTK through existing libraries, but the new framework will deliver a big performance upgrade: a 7.4-fold improvement in deep-learning training times across eight GPUs in a system, Buck said.

Buck declined to comment on when the new tools would be released.

Nvidia has made improvements to tools like cuDNN, which provides the libraries and algorithms for GPU-based deep learning. cuDNN is built on top of CUDA, Nvidia's parallel programming framework.

There are other deep-learning frameworks, such as Google's TensorFlow, Theano, and the open source Caffe, which excels at image recognition. Companies like Intel and IBM have forked such frameworks to work best with their own hardware. Nvidia's GPUs already support most of the deep-learning frameworks, but CNTK now has an edge.

Google has built its own inferencing chip, the Tensor Processing Unit, which is designed to speed up deep-learning results. Google can afford to build its own hardware because it deploys deep learning at a much larger scale, but deploying deep-learning systems could be expensive for smaller companies; they can instead turn to cloud services like Azure.
