Link to original content: https://api.crossref.org/works/10.1145/3639051
Title: Application-level Validation of Accelerator Designs Using a Formal Software/Hardware Interface

Authors:
- Bo-Yuan Huang, Intel Corporation, USA (ORCID 0000-0001-7069-4069)
- Steven Lyubomirsky, OctoML, USA (ORCID 0009-0003-6747-7014)
- Yi Li, Princeton University, USA (ORCID 0009-0000-4837-2282)
- Mike He, Princeton University, USA (ORCID 0009-0002-0843-8413)
- Gus Henry Smith, University of Washington, USA (ORCID 0000-0001-9754-233X)
- Thierry Tambe, Harvard University, USA (ORCID 0000-0002-6411-9620)
- Akash Gaonkar, Princeton University, USA (ORCID 0000-0001-5565-2581)
- Vishal Canumalla, University of Washington, USA (ORCID 0009-0001-5418-1279)
- Andrew Cheung, University of Washington, USA (ORCID 0009-0006-0661-2640)
- Gu-Yeon Wei, Harvard University, USA (ORCID 0000-0001-5730-9904)
- Aarti Gupta, Princeton University, USA (ORCID 0000-0001-6676-9400)
- Zachary Tatlock, University of Washington, USA (ORCID 0000-0002-4731-0124)
- Sharad Malik, Princeton University, USA (ORCID 0000-0002-0837-5443)

Journal: ACM Transactions on Design Automation of Electronic Systems (ACM Trans. Des. Autom. Electron. Syst.), Volume 29, Issue 2, pp. 1-25
Publisher: Association for Computing Machinery (ACM)
DOI: 10.1145/3639051 (https://dx.doi.org/10.1145/3639051)
Published online: February 14, 2024; print issue: March 31, 2024
Cited by (Crossref): 1

Abstract: Ideally, accelerator development should be as easy as software development. Several recent design languages/tools are working toward this goal, but actually testing early designs on real applications end-to-end remains prohibitively difficult due to the costs of building specialized compiler and simulator support. We propose a new first-in-class, mostly automated methodology termed “3LA” to enable end-to-end testing of prototype accelerator designs on unmodified source applications. A key contribution of 3LA is the use of a formal software/hardware interface that specifies an accelerator’s operations and their semantics. Specifically, we leverage the Instruction-level Abstraction (ILA) formal specification for accelerators that has been successfully used thus far for accelerator implementation verification. We show how the ILA for accelerators serves as a software/hardware interface, similar to the Instruction Set Architecture for processors, that can be used for automated development of compilers and instruction-level simulators. Another key contribution of this work is to show how ILA-based accelerator semantics enables extending recent work on equality saturation to auto-generate basic compiler support for prototype accelerators in a technique we term “flexible matching.” By combining flexible matching with simulators auto-generated from ILA specifications, our approach enables end-to-end evaluation with modest engineering effort. We detail several case studies of 3LA, which uncovered an unknown flaw in a recently published accelerator and facilitated its fix.
Language: English
ISSN: 1084-4309 (print), 1557-7309 (electronic)
Full text: https://dl.acm.org/doi/10.1145/3639051
References: 90
Publication history: received April 1, 2023; accepted December 15, 2023; published online February 14, 2024
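
To make the abstract's key idea more concrete, below is a minimal, illustrative sketch of what "flexible matching" accomplishes: a compound operator pattern in an unmodified application expression is rewritten into a single accelerator-level operation whose executable semantics play the role of an ILA-derived instruction-level simulator. This is not the 3LA toolchain or its API; all names here (accel.linear, rewrite, eval_expr) are hypothetical, and greedy bottom-up rewriting stands in for the equality-saturation search the paper actually describes.

    import numpy as np

    # Expressions are nested tuples: (op, arg0, arg1, ...); leaves are numpy arrays.

    def rewrite(expr):
        """Greedy, bottom-up rewriting of a compound pattern into an accelerator op."""
        if not isinstance(expr, tuple):
            return expr
        op, *args = expr
        args = [rewrite(a) for a in args]
        # Pattern: add(matmul(x, w), b)  ->  accel.linear(x, w, b)
        if op == "add" and isinstance(args[0], tuple) and args[0][0] == "matmul":
            _, x, w = args[0]
            return ("accel.linear", x, w, args[1])
        return (op, *args)

    def eval_expr(expr):
        """Executable semantics; the accelerator op mirrors an ILA-style instruction."""
        if not isinstance(expr, tuple):
            return expr
        op, *args = expr
        vals = [eval_expr(a) for a in args]
        if op == "matmul":
            return vals[0] @ vals[1]
        if op == "add":
            return vals[0] + vals[1]
        if op == "accel.linear":  # offloaded op: x @ w + b in one "instruction"
            x, w, b = vals
            return x @ w + b
        raise ValueError(f"unknown op: {op}")

    x, w, b = np.ones((2, 4)), np.ones((4, 3)), np.ones(3)
    src = ("add", ("matmul", x, w), b)      # expression from the unmodified application
    acc = rewrite(src)                      # ("accel.linear", x, w, b) after matching
    assert np.allclose(eval_expr(src), eval_expr(acc))  # application-level equivalence

In the flow the abstract describes, the matched accelerator operation would instead be executed by a simulator auto-generated from the accelerator's ILA specification, so the full application can run end-to-end; the final assertion above mimics that application-level check in miniature.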