The AI International Partnership Office will host an AI keynote talk on Thursday, 11/22!
Details are below — everyone is welcome to attend!
#Topic: AI Chip Design Challenges at the Edge – from Deep Learning Model to Hardware
#Speaker: Bike Xie, Director of Engineering, Kneron Inc.
#Time: 2018/11/22 (Thu) 10:00–11:20 AM
#Venue: National Tsing Hua University, Delta Hall, Room 106
#Registration & details: https://reurl.cc/yEv58
#Abstract: Since AlexNet's remarkable success in the 2012 ImageNet competition, deep learning models, and CNN models in particular, have become the architecture of choice for many computer vision tasks. However, inference with a CNN model can be highly computationally expensive, especially on end-user devices such as Internet of Things (IoT) devices, which have very limited computing capability and low-precision arithmetic operators. A typical CNN model may require billions of multiply-accumulate operations (MACs), load millions of weights, and draw several watts of power for a single inference. Limited computing resources and storage are the major obstacles to running computation-hungry CNNs on IoT devices.
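To get a rough sense of the "billions of MACs" figure, a short sketch that counts multiply-accumulates for a single standard convolution layer (the layer shapes below are illustrative assumptions, not examples from the talk):

```python
# Rough MAC count for one standard convolution layer, illustrating why
# CNN inference is expensive on IoT-class hardware.

def conv_macs(out_h, out_w, out_c, in_c, k_h, k_w):
    """MACs for a standard conv layer: one multiply-accumulate per
    (output position, output channel, input channel, kernel tap)."""
    return out_h * out_w * out_c * in_c * k_h * k_w

# A single 3x3, 256-in/256-out conv on a 56x56 feature map
# (VGG-style shapes, chosen here only for illustration):
macs = conv_macs(56, 56, 256, 256, 3, 3)
print(f"{macs:,} MACs for one layer")  # about 1.85 billion
```

A network stacks many such layers, so per-inference totals in the billions of MACs follow directly.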
Many design techniques in model structure, compilers, and hardware architecture are making it possible to deploy CNN models on edge devices. This talk discusses the design challenges for AI chips at the edge and briefly introduces these techniques. A well-designed compact model can require far less storage and computation, so model compression techniques, including pruning, quantization, and model distillation, are essential for deploying CNN models on edge devices. Compiling CNN models into hardware instructions is another critical step: operation fusion, partitioning, and ordering can significantly improve memory efficiency and inference speed. Finally, hardware architecture for AI chips is currently one of the hottest topics in circuit design. Dedicated AI accelerators provide an opportunity to optimize data movement in order to minimize memory access and maximize MAC efficiency.
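Of the compression techniques the abstract names, magnitude-based pruning is the simplest to illustrate: weights whose magnitude falls below a threshold are zeroed so the layer can be stored and computed sparsely. A minimal NumPy sketch (the 50% sparsity level and layer shape are illustrative assumptions, not Kneron's method):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with the smallest
    absolute value; the survivors keep their original values."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))          # toy "layer" of 16 weights
pw = magnitude_prune(w, sparsity=0.5)
print(f"zeros after pruning: {np.count_nonzero(pw == 0)}/16")
```

In practice pruning is followed by fine-tuning to recover accuracy, and quantization then shrinks the surviving weights to low-precision integers.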
Host: International Partnership Program, NTHU AI Innovation Research Center Project
Contact:
Ms. Tian, 03-5715131 ext. 34908
Ms. Huang, 03-5715131 ext. 34905