WUZHEN, China, May 5, 2017 /PRNewswire/ -- Inspur and Baidu jointly launched the hyper-scale AI computing platform "SR-AI Rack" (SR-AI), built for ultra-large data sets and deep neural networks, at Inspur Partner Forum 2017 (IPF2017).
Inspur SR-AI Rack Computing Module
Compatible with China's latest Scorpio 2.5 rack standard, Inspur SR-AI is the world's first AI solution built on a PCIe Fabric interconnect architecture. By coordinating the PCIe switch with the I/O Box, and physically decoupling and pooling GPUs and CPUs, a single node can be extended to 16 GPUs. The solution supports a maximum of 64 GPUs with a peak processing capability of 512 TFLOPS, 5-10 times that of regular AI solutions, making it possible to train models with hundreds of billions of samples and trillions of parameters.
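The release's headline figures imply a simple back-of-the-envelope relationship: 64 GPUs at roughly 8 TFLOPS each yields the quoted 512 TFLOPS peak. A minimal sketch of that arithmetic (the per-GPU figure is an assumption inferred from 512 / 64, not stated in the release):

```python
def peak_tflops(num_gpus: int, tflops_per_gpu: float) -> float:
    """Aggregate peak throughput of a GPU pool, ignoring interconnect overhead."""
    return num_gpus * tflops_per_gpu

max_gpus = 64             # maximum pool size quoted for SR-AI
assumed_gpu_tflops = 8.0  # assumption: per-GPU peak implied by 512 / 64

print(peak_tflops(max_gpus, assumed_gpu_tflops))  # 512.0
```

Real sustained training throughput would fall well below this peak once interconnect and memory bottlenecks are accounted for.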
Shaking off the tight GPU/CPU coupling of traditional servers, Inspur SR-AI connects the uplink CPU computing/scheduling nodes to the downlink GPU Box through PCIe switch nodes. This arrangement allows CPU and GPU resources to be expanded independently and avoids the excessive component redundancy of traditional architecture upgrades. As a result, more than 5% of the cost can be saved, and the advantage grows with scale, since GPU expansion requires no additional high-cost IT resources.
Meanwhile, Inspur SR-AI is also a 100G RDMA GPU cluster. Its RDMA (Remote Direct Memory Access) technology lets nodes exchange GPU and memory data directly, without involving the CPU, achieving ns-level network latency within the cluster, 50% faster than traditional GPU expansion methods.
Through continuous exploration of AI in recent years, Inspur has built strong computing platforms and innovation capabilities. Inspur currently supplies the most diversified range of GPU servers (2U2/4/8) and accounted for more than 60% of the AI computing market share in 2016. Thanks to deep system- and application-level cooperation with Baidu, Alibaba, Tencent, iFLYTEK, Qihoo 360, Sogou, Toutiao, Face++, and other leading AI companies, Inspur helps customers achieve substantial improvements in application performance in voice, image, video, search, and networking workloads.
Inspur provides users and partners with advanced computing platforms, system management tools, performance optimization tools, and basic algorithm integration platform software, including regular algorithm components such as face and voice recognition, the Caffe-MPI deep learning framework, and the AI-Station deep learning management tool. In addition, Inspur offers integrated solutions for scientific research institutions and other general users; the integrated deep learning machine D1000, released in 2016, is a multi-GPU server cluster system running Caffe-MPI.
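Multi-GPU cluster frameworks in the Caffe-MPI mold typically rely on data-parallel gradient averaging: each worker computes gradients on its own data shard, then an allreduce step averages them before the shared model is updated. A toy, pure-Python sketch of that averaging step (the worker count and gradient values below are illustrative, not from the release):

```python
from typing import List

def allreduce_average(worker_grads: List[List[float]]) -> List[float]:
    """Average per-worker gradient vectors elementwise, as an MPI allreduce
    (sum) followed by division by the worker count would do."""
    num_workers = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / num_workers for i in range(dim)]

# Illustrative gradients from 4 hypothetical workers on different data shards.
grads = [
    [0.25, -0.5],
    [0.75,  0.0],
    [0.0,  -1.0],
    [0.5,   0.5],
]
print(allreduce_average(grads))  # [0.375, -0.25]
```

In a real cluster this averaging is done collectively over the network (e.g. via MPI's allreduce), which is where RDMA interconnects pay off.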
The annual Inspur Partner Forum is an important event for Inspur's partners. The IPF in 2017 was held in the Wuzhen Internet International Conference & Exhibition Center of Zhejiang province, China. The forum attracted around 2000 partners across the nation, including ISVs, SIs, and distributors from various sectors.
To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/inspur-launched-16-gpu-capable-ai-computing-box-300452165.html
SOURCE Inspur Electronic Information Industry Co., Ltd