RIPS2017 AMD Project Wiki Page
Deep Neural Networks (DNNs) achieve near-human performance on a number of learning tasks, including image classification, speech recognition, and language understanding. Determining the values of the parameters of a deep neural network, called “training” the network, and using a trained network to perform a specific recognition task, called “inference”, are both computationally demanding. This project explores the numerical precision requirements of these computations, with the goal of using reduced-precision arithmetic to improve hardware utilization: achieving the desired results sooner, at lower hardware and power cost.
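As a minimal illustration of the trade-off the project studies (this sketch is not from the project itself), the snippet below computes a dot product twice: once in float64, and once with every operand and every partial sum rounded to float16, mimicking a low-precision accumulator:

```python
import numpy as np

# Illustrative sketch: error introduced by a reduced-precision (float16)
# dot product relative to a float64 reference. The data and sizes here are
# arbitrary choices for demonstration.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = rng.standard_normal(10_000)

exact = float(np.dot(x, y))  # float64 reference

# Naive accumulation with rounding to float16 after every multiply and add,
# as a low-precision hardware unit would behave.
x16, y16 = x.astype(np.float16), y.astype(np.float16)
acc = np.float16(0.0)
for a, b in zip(x16, y16):
    acc = np.float16(acc + np.float16(a * b))

print(f"float64 result: {exact:.6f}")
print(f"float16 result: {float(acc):.6f}")
print(f"absolute error: {abs(exact - float(acc)):.6f}")
```

Whether an error of this size is acceptable depends on the task; the question for DNN training and inference is how far precision can be reduced before accuracy degrades.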
Sponsor: Advanced Micro Devices
Industry Mentors: Allen Rush, Ph.D., AMD Fellow, and Nicholas Malaya, Ph.D.
Team members: Zhaoqi Li, Yu Ma, Catalina Vajiac, Yunkai Zhang
Academic mentor: Hangjie Ji, Ph.D.
Schedule:
- Week 1 (6.19 - 6.23) The draft of the work statement is due on Friday
- Week 2 (6.26 - 6.30)
- Week 3 (7.3 - 7.7)
- Week 4 (7.10 - 7.14)
- Week 5 (7.17 - 7.21) Midterm presentations on Thursday; Summary progress report due on Friday
- Week 6 (7.24 - 7.28)
- Week 7 (7.31 - 8.4)
- Week 8 (8.7 - 8.11)
- Week 9 (8.14 - 8.18) Projects Day on Thursday
References:
- Q. Le et al., On Optimization Methods for Deep Learning (2011)
- Bousquet et al., The tradeoffs of large scale learning (2008)
- Gupta et al., Deep Learning with Limited Numerical Precision (2015)
- Courbariaux et al., Training Deep Neural Networks with Low Precision Multiplications (2014)
- Yamanaka et al., A parallel algorithm for accurate dot product (2008)
- Higham, Accuracy and Stability of Numerical Algorithms (2002)
- Trefethen and Bau, Numerical Linear Algebra (1997)