Binarized image segmentation network and hardware accelerator for on-device AI
Existing image-processing neural networks are typically trained and deployed at 32-bit full precision, so processing real images demands large amounts of memory and computation. Image segmentation networks in particular have large models, and their activations (the intermediate operation results) consume a great deal of memory. On-device AI therefore requires both model compression and dedicated acceleration hardware. This presentation introduces a lightweight deep learning model and a hardware accelerator for on-device AI: using neural network quantization, one of the model compression techniques, we build a network with 1-bit weights and activations and accelerate it in hardware. Through this work, we hope to contribute to the hardware-aware deep learning open source ecosystem and the open source hardware ecosystem. We would also like to share our experience solving problems through collaboration across deep learning software and hardware.
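As a rough illustration of the idea behind the talk (not code from the presenters), the sketch below shows the standard binarization used in binary neural networks: weights and activations are mapped to {-1, +1} with the sign function, after which a dot product reduces to XNOR plus popcount, the operation that binary accelerators implement in hardware. The function names `binarize` and `xnor_popcount_dot` are illustrative.

```python
import numpy as np

def binarize(x):
    """Map a float tensor to {-1, +1} with the sign function
    (the usual forward pass of a binarized neural network; 0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_dot(a_bits, b_bits):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.
    Encoding -1 as bit 0 and +1 as bit 1 gives:
        dot = 2 * popcount(XNOR(a, b)) - n
    since each agreeing position contributes +1 and each
    disagreeing position contributes -1."""
    n = a_bits.size
    a = (a_bits > 0).astype(np.uint8)  # {-1, +1} -> {0, 1}
    b = (b_bits > 0).astype(np.uint8)
    xnor = 1 - (a ^ b)                 # 1 where the bits agree
    return 2 * int(xnor.sum()) - n

# The XNOR-popcount result matches the ordinary {-1, +1} dot product.
rng = np.random.default_rng(0)
w = binarize(rng.standard_normal(64))
x = binarize(rng.standard_normal(64))
assert xnor_popcount_dot(w, x) == int(w @ x)
```

In hardware, the XNOR and popcount operate on packed bit words, so a 64-element multiply-accumulate collapses into a single logic operation plus a bit count, which is the source of the memory and compute savings the abstract refers to.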
- Jeonghoon Kim / -
- Jeonghoon Kim majored in Control & Robotics Systems at Korea University. His intellectual pursuit is sensor data processing, and he studied state estimation and deep learning in graduate school. After completing his master's degree, he began working as a deep learning engineer as part of his alternative military service. Discovering new problems and finding solutions has been his great pleasure in research and development, and the growth he gained by sharing experiences with colleagues has been a big boost as well. The keywords for his research goals are therefore "observation", "analysis", and "collaboration". His recent interests are deep learning applications, neural network model compression, and robotics perception.
- Hyunwoo Kim / Hanyang University, Department of Electronics and Computer Engineering
- He is a doctoral student in the Department of Electronics and Computer Engineering at Hanyang University, where he mainly studies computer architecture, FPGA design, and chip design. During his master's course he studied ASIP design customized for HEVC, a video codec, and received his master's degree for that research. In his doctoral course he turned to deep learning hardware accelerators, and he is currently developing hardware acceleration for quantized neural networks, especially binary/ternary networks.