Nvidia's chips rule the AI training chip market, where enormous amounts of data help algorithms "learn" tasks such as recognizing the human voice. But deploying computers that execute those "learned" tasks will be one of the biggest growth areas, and Intel Corp (INTC.O) dominates that field.
"For the next 18 to 24 months, it's very hard to envision anyone challenging Nvidia on training," said Jon Bathgate, an analyst and tech sector co-lead at Janus Henderson Investors.
Intel processors, however, are already widely used to put trained algorithms to work, a task known as "inference." Scanning incoming audio and transcribing it into text-based requests, for instance, is inference.
Bruno Fernandez-Ruiz, chief technology officer of Nexar Inc., said that Intel's chips, when paired with a large amount of memory, can still work just fine for such tasks.
The inference market could end up bigger than the training market, says Abhinav Davuluri, an analyst at Morningstar, who forecasts an inference market of $11.8 billion by 2021, against $8.2 billion for training. Intel estimates the current market for AI chips at $2.5 billion, evenly split between training and inference.
Nvidia, which posted an 89 percent rise in profit on Thursday, has not given a specific estimate for the inference chip market. But Chief Executive Jensen Huang told analysts on an earnings call that day that he believes it "is going to be a very large market for us."
Nvidia's inference chip sales have already grown substantially. Though it did not give a baseline, the company said in May that its shipments of inference chips to big data center customers had doubled year over year. Earlier this month, Alphabet Inc's Google Cloud unit adopted Nvidia's inference chips and announced plans to rent them out to customers.
Despite this, Nvidia faces a headwind selling inference chips because the data center market is saturated with the Intel CPUs the company has sold for 20 years. Intel is now working to convince customers to keep using its products, for both business and technical reasons.
Consider Taboola, a New York-based company that helps web publishers recommend related content to readers, and one that Intel has touted as an example of how its chips remain competitive.
Taboola uses Nvidia's training chips to teach its algorithm what to suggest, and it considered Nvidia's inference chips for making the recommendations themselves. Speed matters because users abandon slow-loading pages.
But for reasons of cost and speed, Taboola ended up sticking with Intel, said Ariel Pisetzky, the company's vice president of information technology.
Nvidia's chips were faster, Pisetzky said, but the time spent shuttling data back and forth to the chips negated those gains. Second, Intel sent its engineers to help Taboola tweak its computer code so that the same servers could handle more than twice as many requests.
Highlighting Intel's advantages, he said, "My options were, you already have the servers. They're in the racks. Working with Intel, I can now cut back my new server purchases by a factor of two because I can use my existing servers two times better."
On Thursday's analyst call, Huang said, "We are actively working with just about every single Internet service provider in the world to incorporate inference acceleration into their stack." Offering voice recognition as an example, he added that it "is only useful if it responds in a relatively short period of time. And our platform is just really, excellent for that."