Semiconductor and cloud service provider (CSP) companies have been recent and eager entrants in the artificial intelligence (AI) cloud infrastructure market. Original equipment manufacturers (OEMs) may be feeling the pressure.
Cloud computing has been one of the hottest innovations in enterprise information technology (IT). In essence, the cloud lets enterprises move their IT infrastructure off-premises, gaining flexibility, extensibility, and maintainability while reducing costs. Noticing the potential, many top technology companies, such as Microsoft, Amazon, and IBM, have jumped into the cloud services business and become CSPs.
During the past couple of years, all prominent CSPs have announced AI cloud products. Given the broad nature of AI, hardware, software, and algorithm needs vary widely, and many CSPs have capitalized on this by offering a wide variety of products. Cloud companies have traditionally purchased their hardware from established OEMs, such as Dell EMC (Dell and EMC recently merged) and Cisco, and built their clouds on top. However, with recent AI infrastructure offerings, a couple of very interesting changes are happening in the AI cloud market, and they do not seem to bode well for AI infrastructure OEMs.
Cloud Service Providers Offering Lower-Priced Performance, Chipsets, and Hardware
First, CSPs have become big purchasers of graphics processing unit (GPU) cards rather than servers. A low-cost AI server built from an x86 chassis and GPU cards can offer performance comparable to an OEM AI server at a much lower cost. There is also talk of CSPs putting out requests for quotation (RFQs) that specify price points for cards and buying them in bulk. Purchasing cards rather than complete AI servers works well for CSPs: their capital expenditure is lower, and they can ramp up capacity as needed by adding GPU cards to generic x86 machines. This is opening up a new opportunity for original design manufacturers (ODMs) that are good at designing cards to spec at a specified price.
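The capex argument above can be made concrete with a back-of-the-envelope comparison. The sketch below is purely illustrative: the chassis, card, and server prices are hypothetical assumptions chosen for the example, not real quotes from any vendor.

```python
from math import ceil

# Hypothetical capex comparison: scaling AI capacity by adding GPU cards
# to generic x86 chassis versus buying integrated OEM AI servers.
# All prices below are illustrative assumptions, not real market figures.

def capex_cards(num_gpus, chassis_cost=5_000, gpus_per_chassis=8,
                card_cost=3_000):
    """Cost of hosting num_gpus on generic x86 chassis filled with GPU cards."""
    chassis_needed = ceil(num_gpus / gpus_per_chassis)
    return chassis_needed * chassis_cost + num_gpus * card_cost

def capex_oem(num_gpus, server_cost=40_000, gpus_per_server=8):
    """Cost of the same GPU count bought as integrated OEM AI servers."""
    servers_needed = ceil(num_gpus / gpus_per_server)
    return servers_needed * server_cost

for gpus in (8, 64, 512):
    print(f"{gpus:4d} GPUs: cards ${capex_cards(gpus):,} "
          f"vs OEM ${capex_oem(gpus):,}")
```

Under these assumed prices, the card-based route costs less at every scale, and the gap widens as capacity grows, which is the incremental-scaling advantage the CSPs are said to be chasing.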
Second, CSPs have started developing their own chipsets and hardware. Google has its own tensor processing unit (TPU) chipset, IBM has developed its Power9 chipset, and Microsoft is building its own field-programmable gate array (FPGA) engine to accelerate AI applications. By offering AI infrastructure based on their own chipsets, CSPs can give their customers better performance at a similar cost.
Semiconductor Companies Are Moving Up the Value Chain
On the other side, semiconductor companies are trying to move up the value chain to maximize their revenue potential. They have started offering their own AI servers, or even cloud services, based on their own chipsets. NVIDIA has announced that it will provide an AI cloud based on its GPUs, and Intel is planning to build its own cloud via Nervana. Wave Computing, a Silicon Valley-based startup, plans to sell AI servers rather than chipsets.
OEMs Need to Find Their Niche
AI infrastructure presents a huge market opportunity for OEMs, but barring IBM, they have been slow to announce differentiated products. This also comes at a time when the OEM industry is going through tough times. HPE recently announced that it missed expectations and that its results will remain weak for the next few quarters. Dell and EMC have merged, and Cisco has said that it will explore offering services in the future to improve its bottom line.
If semiconductor companies move up the value chain and cloud companies move down, OEMs will have to find room to maneuver between them. Given the dynamic nature of the AI market, OEMs have their work cut out for them in maximizing their revenue potential.