Inspur Open-Sources TF2, a Full-Stack FPGA-Based Deep Learning Inference Engine

PR Newswire

Highlights

  • Inspur has announced the open-source release of TF2, the world’s first FPGA-based AI framework to offer a complete solution spanning model pruning, compression, and quantization together with a general FPGA-based DNN inference computing architecture; the open-source project can be found at https://github.com/TF2-Engine/TF2
  • The TF2 community will promote open-source collaboration and the development of AI technology based on customizable FPGAs, lowering the barriers to high-performance AI computing.

SAN JOSE, Calif., Sept. 17, 2019 /PRNewswire/ — Inspur has announced the open-source release of TF2, an efficient FPGA-based AI computing framework. The framework’s inference engine employs the world’s first DNN shift computing technology, combined with a number of the latest optimization techniques, to achieve high-performance, low-latency deployment of general deep learning models on FPGAs. TF2 is also the world’s first open-source FPGA-based AI framework to offer a complete solution spanning model pruning, compression, and quantization together with a general FPGA-based DNN inference computing architecture. The open-source project can be found at https://github.com/TF2-Engine/TF2. Companies and research institutions including Kuaishou, Shanghai University, and MGI have joined the TF2 open-source community, which will jointly promote open-source collaboration and the development of AI technology based on customizable FPGAs, lowering the barriers to high-performance AI computing and shortening development cycles for AI users and developers.

Customizable FPGAs, with their low latency and high performance-per-watt, have become the choice of many AI users for deploying inference applications. However, FPGA development is difficult and its cycles are long, making it hard to keep pace with the rapid iteration of deep learning algorithms. TF2 can quickly turn deep neural network (DNN) models trained with mainstream AI frameworks into FPGA inference implementations, enabling users to maximize FPGA computing power and achieve high-performance, low-latency FPGA deployment. The TF2 computing architecture also allows chip-level AI designs to be built and performance-verified quickly.

TF2 consists of two parts. The first is TF2 Transform Kit, a model optimization and conversion tool that performs compression, pruning, and 8-bit quantization on network models trained with frameworks such as PyTorch, TensorFlow, and Caffe, reducing the model’s computational load. For example, by quantizing a 32-bit floating-point model to 4-bit integers and pruning channels, a ResNet50 model can be reduced by 93.75% with virtually no loss of precision, while the basic computational structure of the original model is retained.

The second is TF2 Runtime Engine, an intelligent FPGA runtime that automatically converts optimized model files into FPGA executables and, through the innovative DNN shift computing technology, greatly improves the performance of FPGA inference while effectively reducing its actual power consumption. TF2 has been tested and verified on mainstream DNN models such as ResNet50, FaceNet, GoogLeNet, and SqueezeNet. In a TF2 test with the FaceNet model on the Inspur F10A FPGA card (batch size = 1), computing a single image took 0.612 ms, a 12.8-fold increase in speed.
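To illustrate the idea behind DNN shift computing, here is a minimal sketch assuming power-of-two weight quantization; the code and its names are hypothetical illustrations, not TF2’s actual implementation (see the GitHub repository for that). Each weight is rounded to the nearest signed power of two, so applying it in hardware becomes a bit shift rather than a full multiplication:

    # Illustrative sketch only; names and parameters are hypothetical,
    # not taken from the TF2 codebase.
    import numpy as np

    def quantize_power_of_two(weights, min_exp=-8, max_exp=0):
        """Round each weight to the nearest signed power of two.

        A weight of the form sign * 2**e can then be applied with a bit
        shift (by |e| positions in fixed-point hardware) instead of a
        multiplier, which is the essence of shift computing.
        """
        signs = np.sign(weights)
        mags = np.abs(weights)
        # Avoid log2(0): zero weights stay zero (pruning removes them anyway).
        exps = np.clip(np.round(np.log2(np.where(mags > 0, mags, 1.0))),
                       min_exp, max_exp)
        quantized = np.where(mags > 0, signs * 2.0 ** exps, 0.0)
        return quantized, exps.astype(np.int8)

    # A dot product in which every multiply could become a shift-and-add:
    w = np.array([0.24, -0.51, 0.06, 0.0])
    x = np.array([3.0, 1.5, 2.0, 7.0])
    wq, e = quantize_power_of_two(w)
    print(wq)      # [ 0.25  -0.5    0.0625  0.   ] -- each a power of two
    print(wq @ x)  # each product is now shift-friendly on FPGA fabric

On an FPGA, replacing multipliers with shifters frees up DSP resources and reduces power, which is consistent with the performance and power benefits the release attributes to shift computing.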

At the same time, Inspur’s open-source project also includes TF2’s software-defined, reconfigurable chip design architecture, which fully supports the development of current CNN models and can be quickly ported to support network models such as Transformer and LSTM. Based on this architecture, prototype designs for ASIC chip development can also be implemented.

As part of its plan to establish an open-source community around TF2, Inspur will continue to invest in the project, developing new functions such as automatic model analysis, structural pruning, arbitrary-bit quantization, and AutoML-based pruning and quantization, and adding support for sparse computing, the Transformer network model, and general NLP models. The community will also hold regular developer conferences and online open classes to share the latest technological advances and experience, train developers through college education programs, and run user migration programs with technical support for development.

“The deployment of AI applications covers the cloud, the edge, and mobile devices, with highly diverse requirements. TF2 can greatly improve the efficiency of application deployment across these environments and quickly adapt to the model inference requirements of different scenarios,” said Liu Jun, AI & HPC General Manager of Inspur Group. “AI users and developers are welcome to join the TF2 open-source community to jointly accelerate the deployment of AI applications and facilitate the implementation of more AI applications.”

About Inspur
Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world’s top three server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address key technology arenas such as open computing, cloud data centers, and AI and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please visit www.inspursystems.com.

View original content: http://www.prnewswire.com/news-releases/inspur-open-sources-tf2-a-full-stack-fpga-based-deep-learning-inference-engine-300919692.html

SOURCE Inspur Electronic Information Industry Co., Ltd