{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,10,30]],"date-time":"2024-10-30T20:58:16Z","timestamp":1730321896896,"version":"3.28.0"},"publisher-location":"New York, NY, USA","reference-count":19,"publisher":"ACM","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,8,10]]},"DOI":"10.1145\/3370748.3407002","type":"proceedings-article","created":{"date-parts":[[2020,8,7]],"date-time":"2020-08-07T16:10:32Z","timestamp":1596816632000},"page":"103-108","update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Multi-channel precision-sparsity-adapted inter-frame differential data codec for video neural network processor"],"prefix":"10.1145","author":[{"given":"Yixiong","family":"Yang","sequence":"first","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"given":"Zhe","family":"Yuan","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"given":"Fang","family":"Su","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"given":"Fanyang","family":"Cheng","sequence":"additional","affiliation":[{"name":"University of Pittsburgh"}]},{"given":"Zhuqing","family":"Yuan","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"given":"Huazhong","family":"Yang","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]},{"given":"Yongpan","family":"Liu","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2020,8,10]]},"reference":[{"key":"e_1_3_2_2_1_1","first-page":"609","volume-title":"MICRO","author":"Chen Y.","year":"2014","unstructured":"Y. Chen et al. 
Dadiannao: A machine-learning supercomputer. In MICRO, pages 609--622. IEEE, 2014."},{"issue":"1","key":"e_1_3_2_2_2_1","first-page":"127","article-title":"Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks","volume":"52","author":"Chen Y. H.","year":"2016","unstructured":"Y. H. Chen et al. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. JSSC, 52(1):127--138, 2016.","journal-title":"JSSC"},{"issue":"4","key":"e_1_3_2_2_3_1","first-page":"903","article-title":"An energy-efficient precision-scalable ConvNet processor in 40-nm CMOS","volume":"52","author":"Moons B.","year":"2016","unstructured":"B. Moons and M. Verhelst. An energy-efficient precision-scalable ConvNet processor in 40-nm CMOS. JSSC, 52(4):903--914, 2016.","journal-title":"JSSC"},{"key":"e_1_3_2_2_4_1","first-page":"33","volume-title":"Symposium on VLSI Circuits","author":"Yuan Z.","year":"2018","unstructured":"Z. Yuan, et al. Sticker: A 0.41-62.1 tops\/w 8bit neural network processor with multi-sparsity compatible convolution arrays and online tuning acceleration for fully connected layers. In Symposium on VLSI Circuits, pages 33--34. IEEE, 2018."},{"key":"e_1_3_2_2_5_1","first-page":"10","volume-title":"ISSCC","author":"Horowitz M.","year":"2014","unstructured":"M. Horowitz. 
Computing's energy problem (and what we can do about it). In ISSCC, pages 10--14. IEEE, 2014."},{"key":"e_1_3_2_2_6_1","first-page":"1135","volume-title":"NIPS","author":"Han S.","year":"2015","unstructured":"S. Han et al. Learning both weights and connections for efficient neural network. In NIPS, pages 1135--1143, 2015."},{"key":"e_1_3_2_2_7_1","first-page":"2074","volume-title":"NIPS","author":"Wen W.","year":"2016","unstructured":"W. Wen et al. Learning structured sparsity in deep neural networks. NIPS, pages 2074--2082, 2016."},{"key":"e_1_3_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.502"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00166"},{"key":"e_1_3_2_2_10_1","first-page":"57","volume-title":"ISCA","author":"Riera M.","year":"2018","unstructured":"M. Riera et al. Computation reuse in dnns by exploiting input similarity. In ISCA, pages 57--68. IEEE, 2018."},{"key":"e_1_3_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/MICRO.2018.00020"},{"key":"e_1_3_2_2_12_1","first-page":"28","volume-title":"ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning. In Proceedings of the 52nd Annual IEEE\/ACM International Symposium on Microarchitecture","author":"Lascorz A. D.","year":"2019","unstructured":"A. D. Lascorz, S. Sharify and I. Edo et al. ShapeShifter: Enabling Fine-Grain Data Width Adaptation in Deep Learning. 
In Proceedings of the 52nd Annual IEEE\/ACM International Symposium on Microarchitecture, pages 28--41. IEEE, 2019."},{"key":"e_1_3_2_2_13_1","first-page":"134","volume-title":"51st Annual IEEE\/ACM International Symposium on Microarchitecture","author":"Buckler M.","year":"2018","unstructured":"M. Buckler, P. Bedoukian and S. Suren et al. EVA2: Exploiting Temporal Redundancy in Live Computer Vision. In 51st Annual IEEE\/ACM International Symposium on Microarchitecture, pages 134--147. IEEE, 2018."},{"volume-title":"ISSCC, Accepted","year":"2020","author":"Yuan Z.","key":"e_1_3_2_2_14_1","unstructured":"Z. Yuan et al. A 65nm 24.7 uJ\/Frame 12.3 mW Activation Similarity Aware Convolutional Neural Network Video Processor Using Hybrid Precision Inter Frame Data Reuse and Mixed-Bit-Width Difference Frame Data Codec. In ISSCC, Accepted. IEEE, 2020."},{"volume-title":"End to end learning for self-driving cars. arXiv preprint, arXiv:1604.07316","year":"2016","author":"Bojarski M.","key":"e_1_3_2_2_15_1","unstructured":"M. Bojarski, et al. End to end learning for self-driving cars. arXiv preprint, arXiv:1604.07316, 2016."},{"volume-title":"ICLR","year":"2016","author":"Gysel P.","key":"e_1_3_2_2_16_1","unstructured":"P. Gysel, M. Motamedi and S. Ghiasi. 
Hardware-oriented approximation of convolutional neural networks. ICLR, 2016."},{"key":"e_1_3_2_2_17_1","first-page":"4510","volume-title":"CVPR","author":"Sandler M.","year":"2018","unstructured":"M. Sandler et al. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, pages 4510--4520. IEEE, 2018."},{"key":"e_1_3_2_2_18_1","first-page":"265","volume-title":"OSDI","author":"Abadi M.","year":"2016","unstructured":"M. Abadi et al. Tensorflow: A system for large-scale machine learning. In OSDI, pages 265--283, 2016."},{"key":"e_1_3_2_2_19_1","unstructured":"Arm Cortex-M7 implementation data. https:\/\/developer.arm.com\/ipproducts\/processors\/cortex-m\/cortex-m7"}],"event":{"name":"ISLPED '20: ACM\/IEEE International Symposium on Low Power Electronics and Design","sponsor":["SIGDA ACM Special Interest Group on Design Automation","IEEE CAS"],"location":"Boston Massachusetts","acronym":"ISLPED '20"},"container-title":["Proceedings of the ACM\/IEEE International Symposium on Low Power Electronics and 
Design"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3370748.3407002","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,12]],"date-time":"2023-01-12T02:52:14Z","timestamp":1673491934000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3370748.3407002"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,8,10]]},"references-count":19,"alternative-id":["10.1145\/3370748.3407002","10.1145\/3370748"],"URL":"http:\/\/dx.doi.org\/10.1145\/3370748.3407002","relation":{},"subject":[],"published":{"date-parts":[[2020,8,10]]},"assertion":[{"value":"2020-08-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}