59th DAC 2022: San Francisco, CA, USA
- Rob Oshana:
DAC '22: 59th ACM/IEEE Design Automation Conference, San Francisco, California, USA, July 10 - 14, 2022. ACM 2022, ISBN 978-1-4503-9142-9
- Frontmatter.
- Hanrui Wang, Jiaqi Gu, Yongshan Ding, Zirui Li, Frederic T. Chong, David Z. Pan, Song Han:
QuantumNAT: quantum noise-aware training with noise injection, quantization and normalization. 1-6
- Cynthia Chen, Bruno Schmitt, Helena Zhang, Lev S. Bishop, Ali Javadi-Abhari:
Optimizing quantum circuit synthesis for permutations using recursion. 7-12
- Sunghye Park, Daeyeon Kim, Minhyuk Kweon, Jae-Yoon Sim, Seokhyeong Kang:
A fast and scalable qubit-mapping method for noisy intermediate-scale quantum computers. 13-18
- Hongxiang Fan, Ce Guo, Wayne Luk:
Optimizing quantum circuit placement via machine learning. 19-24
- Huanrui Yang, Xiaoxuan Yang, Neil Zhenqiang Gong, Yiran Chen:
HERO: hessian-enhanced robust optimization for unifying and improving generalization and quantization performance. 25-30
- Mohsen Imani, Ali Zakeri, Hanning Chen, Taehyun Kim, Prathyush Poduval, Hyunsei Lee, Yeseong Kim, Elaheh Sadredini, Farhad Imani:
Neural computation for robust and holographic face detection. 31-36
- Rishikanth Chandrasekaran, Kazim Ergun, Jihyun Lee, Dhanush Nanjunda, Jaeyoung Kang, Tajana Rosing:
FHDnn: communication efficient and robust federated learning for AIoT networks. 37-42
- Ruixuan Wang, Xun Jiao, X. Sharon Hu:
ODHD: one-class brain-inspired hyperdimensional computing for outlier detection. 43-48
- Nan Wu, Hang Yang, Yuan Xie, Pan Li, Cong Hao:
High-level synthesis performance prediction using GNNs: benchmarking, modeling, and advancing. 49-54
- Atefeh Sohrabizadeh, Yunsheng Bai, Yizhou Sun, Jason Cong:
Automated accelerator optimization aided by graph neural networks. 55-60
- Ziyi Wang, Chen Bai, Zhuolun He, Guangliang Zhang, Qiang Xu, Tsung-Yi Ho, Bei Yu, Yu Huang:
Functionality matters in netlist representation learning. 61-66
- Liancheng Jia, Yuyue Wang, Jingwen Leng, Yun Liang:
EMS: efficient memory subsystem synthesis for spatial accelerators. 67-72
- Jiliang Zhang, Lin Ding, Zhuojun Chen, Wenshang Li, Gang Qu:
DA PUF: dual-state analog PUF. 73-78
- Haocheng Ma, Qizhi Zhang, Ya Gao, Jiaji He, Yiqiang Zhao, Yier Jin:
PathFinder: side channel protection through automatic leaky paths identification and obfuscation. 79-84
- Gaurav Kolhe, Tyler David Sheaves, Kevin Immanuel Gubbi, Soheil Salehi, Setareh Rafatirad, Sai Manoj P. D., Avesta Sasan, Houman Homayoun:
LOCK&ROLL: deep-learning power side-channel attack mitigation using emerging reconfigurable devices and logic locking. 85-90
- Xiangren Chen, Bohan Yang, Yong Lu, Shouyi Yin, Shaojun Wei, Leibo Liu:
Efficient access scheme for multi-bank based NTT architecture through conflict graph. 91-96
- Yintao He, Songyun Qu, Ying Wang, Bing Li, Huawei Li, Xiaowei Li:
InfoX: an energy-efficient ReRAM accelerator design with information-lossless low-bit ADCs. 97-102
- Yinyi Liu, Jiaqi Liu, Yuxiang Fu, Shixi Chen, Jiaxu Zhang, Jiang Xu:
PHANES: ReRAM-based photonic accelerator for deep neural networks. 103-108
- He Zhang, Linjun Jiang, Jianxin Wu, Tingran Chen, Junzhan Liu, Wang Kang, Weisheng Zhao:
CP-SRAM: charge-pulsation SRAM marco for ultra-high energy-efficiency computing-in-memory. 109-114
- Liukai Xu, Songyuan Liu, Zhi Li, Dengfeng Wang, Yiming Chen, Yanan Sun, Xueqing Li, Weifeng He, Shi Xu:
CREAM: computing in ReRAM-assisted energy and area-efficient SRAM for neural network acceleration. 115-120
- Yinxiao Feng, Kaisheng Ma:
Chiplet actuary: a quantitative cost model and multi-chiplet architecture exploration. 121-126
- Dhananjaya Wijerathne, Zhaoying Li, Thilini Kaushalya Bandara, Tulika Mitra:
PANORAMA: divide-and-conquer approach for mapping complex loop kernels on CGRA. 127-132
- Zheng Zhang, Tinghuan Chen, Jiaxin Huang, Meng Zhang:
A fast parameter tuning framework via transfer learning and multi-objective bayesian optimization. 133-138
- Nicholas Wendt, Todd M. Austin, Valeria Bertacco:
PriMax: maximizing DSL application performance with selective primitive acceleration. 139-144
- Pierpaolo Morì, Manoj Rohit Vemparala, Nael Fasfous, Saptarshi Mitra, Sreetama Sarkar, Alexander Frickenstein, Lukas Frickenstein, Domenik Helms, Naveen Shankar Nagaraja, Walter Stechele, Claudio Passerone:
Accelerating and pruning CNNs for semantic segmentation on FPGA. 145-150
- Rachmad Vidya Wicaksana Putra, Muhammad Abdullah Hanif, Muhammad Shafique:
SoftSNN: low-cost fault tolerance for spiking neural network accelerators under soft errors. 151-156
- Chun-Feng Wu, Carole-Jean Wu, Gu-Yeon Wei, David Brooks:
A joint management middleware to improve training performance of deep recommendation systems with SSDs. 157-162
- Yi Sheng, Junhuan Yang, Yawen Wu, Kevin Mao, Yiyu Shi, Jingtong Hu, Weiwen Jiang, Lei Yang:
The larger the fairer?: small neural networks can achieve fairness for edge devices. 163-168
- Mihaela Damian, Julian Oppermann, Christoph Spang, Andreas Koch:
SCAIE-V: an open-source SCAlable interface for ISA extensions for RISC-V processors. 169-174
- Subhash Sethumurugan, Shashank Hegde, Hari Cherupalli, John Sartori:
A scalable symbolic simulation tool for low power embedded systems. 175-180
- Ran Wei, Zhe Jiang, Xiaoran Guo, Haitao Mei, Athanasios Zolotas, Tim Kelly:
Designing critical systems with iterative automated safety analysis. 181-186
- Amrit Nagarajan, Jacob R. Stevens, Anand Raghunathan:
Efficient ensembles of graph neural networks. 187-192
- Feijie Wu, Shiqi He, Song Guo, Zhihao Qu, Haozhao Wang, Weihua Zhuang, Jie Zhang:
Sign bit is enough: a learning synchronization framework for multi-hop all-reduce with ultimate compression. 193-198
- Jiaqi Li, Min Peng, Qingan Li, Meizheng Peng, Mengting Yuan:
GLite: a fast and efficient automatic graph-level optimizer for large-scale DNNs. 199-204
- Yonggan Fu, Qixuan Yu, Meng Li, Xu Ouyang, Vikas Chandra, Yingyan Lin:
Contrastive quant: quantization makes stronger contrastive learning. 205-210
- Linghao Song, Yuze Chi, Licheng Guo, Jason Cong:
Serpens: a high bandwidth memory based accelerator for general-purpose sparse matrix-vector multiplication. 211-216
- Jiahao Liu, Zirui Zhong, Yong Zhou, Hui Qiu, Jianbiao Xiao, Jiajing Fan, Zhaomin Zhang, Sixu Li, Yiming Xu, Siqi Yang, Weiwei Shan, Shuisheng Lin, Liang Chang, Jun Zhou:
An energy-efficient seizure detection processor using event-driven multi-stage CNN classification and segmented data processing with adaptive channel selection. 217-222
- Behnam Khaleghi, Uday Mallappa, Duygu Yaldiz, Haichao Yang, Monil Shah, Jaeyoung Kang, Tajana Rosing:
PatterNet: explore and exploit filter patterns for efficient deep neural networks. 223-228
- Zhuoran Song, Zhongkai Yu, Naifeng Jing, Xiaoyao Liang:
E2SR: an end-to-end video CODEC assisted system for super resolution acceleration. 229-234
- Lei Jiang, Qian Lou, Nrushad Joshi:
MATCHA: a fast and energy-efficient accelerator for fully homomorphic encryption over the torus. 235-240
- Jianqiang Wang, Pouya Mahmoody, Ferdinand Brasser, Patrick Jauernig, Ahmad-Reza Sadeghi, Donghui Yu, Dahan Pan, Yuanyuan Zhang:
VirTEE: a full backward-compatible TEE with native live migration and secure I/O. 241-246
- Gregor Haas, Aydin Aysu:
Apple vs. EMA: electromagnetic side channel attacks on apple CoreCrypto. 247-252
- Jaekang Shin, Seungkyu Choi, Jongwoo Ra, Lee-Sup Kim:
Algorithm/architecture co-design for energy-efficient acceleration of multi-task DNN. 253-258
- Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang:
EBSP: evolving bit sparsity patterns for hardware-friendly inference of quantized deep neural networks. 259-264
- Dongwoo Lew, Kyungchul Lee, Jongsun Park:
A time-to-first-spike coding and conversion aware training for energy-efficient deep spiking neural network processor design. 265-270
- Fan Zhang, Li Yang, Jian Meng, Jae-sun Seo, Yu Kevin Cao, Deliang Fan:
XMA: a crossbar-aware multi-task adaption framework via shift-based mask learning method. 271-276
- Zheyu Yan, Xiaobo Sharon Hu, Yiyu Shi:
SWIM: selective write-verify for computing-in-memory neural accelerators. 277-282
- Xiaoming Zeng, Zhendong Wang, Yang Hu:
Enabling efficient deep convolutional neural network-based sensor fusion for autonomous driving. 283-288
- Yu-Shun Hsiao, Siva Kumar Sastry Hari, Michal Filipiuk, Timothy Tsai, Michael B. Sullivan, Vijay Janapa Reddi, Vasu Singh, Stephen W. Keckler:
Zhuyi: perception processing rate estimation for safety in autonomous vehicles. 289-294
- Yuquan He, Songyun Qu, Gangliang Lin, Cheng Liu, Lei Zhang, Ying Wang:
Processing-in-SRAM acceleration for ultra-low power visual 3D perception. 295-300
- Abdullah Al Arafat, Sudharsan Vaidhun, Kurt M. Wilson, Jinghao Sun, Zhishan Guo:
Response time analysis for dynamic priority scheduling in ROS2. 301-306
- Jiwon Kim, Seunghyeok Jeon, Jaehyun Kim, Hojung Cha:
Voltage prediction of drone battery reflecting internal temperature. 307-312
- Weihong Xu, Jaeyoung Kang, Tajana Rosing:
A near-storage framework for boosted data preprocessing of mass spectrum clustering. 313-318
- Ruihao Gao, Xueqi Li, Yewen Li, Xun Wang, Guangming Tan:
MetaZip: a high-throughput and efficient accelerator for DEFLATE. 319-324
- Hongxiang Fan, Martin Ferianc, Wayne Luk:
Enabling fast uncertainty estimation: accelerating bayesian transformers via algorithmic and hardware optimizations. 325-330
- Mingjun Li, Jianlei Yang, Yingjie Qi, Meng Dong, Yuhao Yang, Runze Liu, Weitao Pan, Bei Yu, Weisheng Zhao:
Eventor: an efficient event-based monocular multi-view stereo accelerator on FPGA platform. 331-336
- Mingyang Kou, Jun Zeng, Boxiao Han, Fei Xu, Jiangyuan Gu, Hailong Yao:
GEML: GNN-based efficient mapping method for large loop applications on CGRA. 337-342
- Jinyi Deng, Linyun Zhang, Lei Wang, Jiawei Liu, Kexiang Deng, Shibin Tang, Jiangyuan Gu, Boxiao Han, Fei Xu, Leibo Liu, Shaojun Wei, Shouyi Yin:
Mixed-granularity parallel coarse-grained reconfigurable architecture. 343-348
- Weizhe Hua, Muhammad Umar, Zhiru Zhang, G. Edward Suh:
GuardNN: secure accelerator architecture for privacy-preserving deep learning. 349-354
- Lei Zhao, Youtao Zhang, Jun Yang:
SRA: a secure ReRAM-based DNN accelerator. 355-360
- Liyan Shen, Ye Dong, Binxing Fang, Jinqiao Shi, Xuebin Wang, Shengli Pan, Ruisheng Shi:
ABNN2: secure two-party arbitrary-bitwidth quantized neural network predictions. 361-366
- Prathyush Poduval, Yang Ni, Yeseong Kim, Kai Ni, Raghavan Kumar, Rosario Cammarota, Mohsen Imani:
Adaptive neural recovery for highly robust brain-like representation. 367-372
- Sarada Krithivasan, Sanchari Sen, Nitin Rathi, Kaushik Roy, Anand Raghunathan:
Efficiency attacks on spiking neural networks. 373-378
- Ji Zhang, Xijun Li, Xiyao Zhou, Mingxuan Yuan, Zhuo Cheng, Keji Huang, Yifan Li:
L-QoCo: learning to optimize cache capacity overloading in storage systems. 379-384
- Shuhan Bai, Hu Wan, Yun Huang, Xuan Sun, Fei Wu, Changsheng Xie, Hung-Chih Hsieh, Tei-Wei Kuo, Chun Jason Xue:
Pipette: efficient fine-grained reads for SSDs. 385-390
- Longfei Luo, Dingcui Yu, Liang Shi, Chuanming Ding, Changlong Li, Edwin H.-M. Sha:
CDB: critical data backup design for consumer devices with high-density flash based hybrid storage. 391-396
- Chunhua Li, Man Wu, Yuhan Liu, Ke Zhou, Ji Zhang, Yunqing Sun:
SS-LRU: a smart segmented LRU caching. 397-402
- Haoran Dang, Chongnan Ye, Yanpeng Hu, Chundong Wang:
NobLSM: an LSM-tree with non-blocking writes for SSDs. 403-408
- Jaeyong Lee, Myungsuk Kim, Wonil Choi, Sanggu Lee, Jihong Kim:
TailCut: improving performance and lifetime of SSDs using pattern-aware state encoding. 409-414
- Xing Li, Lei Chen, Fan Yang, Mingxuan Yuan, Hongli Yan, Yupeng Wan:
HIMap: a heuristic and iterative logic synthesis approach. 415-420
- Walter Lau Neto, Luca G. Amarù, Vinicius Possani, Patrick Vuillod, Jiong Luo, Alan Mishchenko, Pierre-Emmanuel Gaillardon:
Improving LUT-based optimization for ASICs. 421-426
- Shiju Lin, Jinwei Liu, Tianji Liu, Martin D. F. Wong, Evangeline F. Y. Young:
NovelRewrite: node-level parallel AIG rewriting. 427-432
- Linus Witschen, Tobias Wiersema, Lucas Reuter, Marco Platzner:
Search space characterization for approximate logic synthesis. 433-438
- Chang Meng, Xuan Wang, Jiajun Sun, Sijun Tao, Wei Wu, Zhihang Wu, Leibin Ni, Xiaolong Shen, Junfeng Zhao, Weikang Qian:
SEALS: sensitivity-driven efficient approximate logic synthesis. 439-444
- Siang-Yun Lee, Heinz Riener, Giovanni De Micheli:
Beyond local optimality of buffer and splitter insertion for AQFP circuits. 445-450
- Shubham Negi, Indranil Chakraborty, Aayush Ankit, Kaushik Roy:
NAX: neural architecture and memristive xbar based accelerator co-design. 451-456
- Zhiheng Yue, Yabing Wang, Leibo Liu, Shaojun Wei, Shouyi Yin:
MC-CIM: a reconfigurable computation-in-memory for efficient stereo matching cost computation. 457-462
- Mengyuan Li, Ann Franchesca Laguna, Dayane Reis, Xunzhao Yin, Michael T. Niemier, X. Sharon Hu:
iMARS: an in-memory-computing architecture for recommendation systems. 463-468
- Cong Liu, Haikun Liu, Hai Jin, Xiaofei Liao, Yu Zhang, Zhuohui Duan, Jiahong Xu, Huize Li:
ReGNN: a ReRAM-based heterogeneous architecture for general graph neural networks. 469-474
- Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, Weichen Liu:
You only search once: on lightweight differentiable architecture search for resource-constrained embedded platforms. 475-480
- Arnav Vaibhav Malawade, Trier Mortlock, Mohammad Abdullah Al Faruque:
EcoFusion: energy-aware adaptive sensor fusion for efficient autonomous vehicle perception. 481-486
- Yijie Wei, Zhiwei Zhong, Jie Gu:
Human emotion based real-time memory and computation management on resource-limited edge devices. 487-492
- Zihan Wang, Chengcheng Wan, Yuting Chen, Ziyi Lin, He Jiang, Lei Qiao:
Hierarchical memory-constrained operator scheduling of neural architecture search networks. 493-498
- Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra, Priyadarshini Panda:
MIME: adapting a single neural network for multi-task inference with memory-efficient dynamic pruning. 499-504
- Weihong Liu, Jiawei Geng, Zongwei Zhu, Jing Cao, Zirui Lian:
Sniper: cloud-edge collaborative inference scheduling with neural network similarity modeling. 505-510
- Yibin Gu, Yifan Li, Hua Wang, Li Liu, Ke Zhou, Wei Fang, Gang Hu, Jinhu Liu, Zhuo Cheng:
LPCA: learned MRC profiling based cache allocation for file storage systems. 511-516
- Tom Peham, Lukas Burgholzer, Robert Wille:
Equivalence checking paradigms in quantum circuit design: a case study. 517-522
- Chun-Yu Wei, Yuan-Hung Tsai, Chiao-Shan Jhang, Jie-Hong R. Jiang:
Accurate BDD-based unitary operator manipulation for scalable and robust quantum circuit verification. 523-528
- Lukas Burgholzer, Robert Wille:
Handling non-unitaries in quantum circuit equivalence checking. 529-534
- Wei-Hsiang Tseng, Yao-Wen Chang:
A bridge-based algorithm for simultaneous primal and dual defects compression on topologically quantum-error-corrected circuits. 535-540
- Tuo Li, Sri Parameswaran:
FaSe: fast selective flushing to mitigate contention-based cache timing attacks. 541-546
- Peinan Li, Rui Hou, Lutan Zhao, Yifan Zhu, Dan Meng:
Conditional address propagation: an efficient defense mechanism against transient execution attacks. 547-552
- Anirban Chakraborty, Nikhilesh Singh, Sarani Bhattacharya, Chester Rebeiro, Debdeep Mukhopadhyay:
Timed speculative attacks exploiting store-to-load forwarding bypassing cache-based countermeasures. 553-558
- Fan Zhang, Zhiyong Wang, Haoting Shen, Bolin Yang, Qianmei Wu, Kui Ren:
DARPT: defense against remote physical attack based on TDC in multi-tenant scenario. 559-564
- Sudipta Mondal, Susmita Dey Manasi, Kishor Kunal, Ramprasath S, Sachin S. Sapatnekar:
GNNIE: GNN inference engine with load-balancing and graph-specific caching. 565-570
- Guan Shen, Jieru Zhao, Quan Chen, Jingwen Leng, Chao Li, Minyi Guo:
SALO: an efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences. 571-576
- Joonsang Yu, Junki Park, Seongmin Park, Minsoo Kim, Sihwa Lee, Dong Hyun Lee, Jungwook Choi:
NN-LUT: neural approximation of non-linear operations for efficient transformer inference. 577-582
- Ananda Samajdar, Eric Qin, Michael Pellauer, Tushar Krishna:
Self adaptive reconfigurable arrays (SARA): learning flexible GEMM accelerator configuration and mapping-space using ML. 583-588
- Deokki Hong, Kanghyun Choi, Hyeyoon Lee, Joonsang Yu, Noseong Park, Youngsok Kim, Jinho Lee:
Enabling hard constraints in differentiable neural network and accelerator co-exploration. 589-594
- Guohao Dai, Guyue Huang, Shang Yang, Zhongming Yu, Hengrui Zhang, Yufei Ding, Yuan Xie, Huazhong Yang, Yu Wang:
Heuristic adaptability to input dynamics for SpMM on CPUs. 595-600
- Xinyi Zhang, Cong Hao, Peipei Zhou, Alex K. Jones, Jingtong Hu:
H2H: heterogeneous model to heterogeneous system mapping with computation and communication awareness. 601-606
- Yunseong Kim, Yujeong Choi, Minsoo Rhu:
PARIS and ELSA: an elastic scheduling algorithm for reconfigurable multi-GPU inference servers. 607-612
- Zhiqiang Liu, Wenjian Yu:
Pursuing more effective graph spectral sparsifiers via approximate trace reduction. 613-618
- Zhou Jin, Haojie Pei, Yichao Dong, Xiang Jin, Xiao Wu, Wei W. Xing, Dan Niu:
Accelerating nonlinear DC circuit simulation with reinforcement learning. 619-624
- Xiaodong Wang, Changhao Yan, Fan Yang, Dian Zhou, Xuan Zeng:
An efficient yield optimization method for analog circuits via gaussian process classification and varying-sigma sampling. 625-630
- Jinwei Liu, Xiaopeng Zhang, Shiju Lin, Xinshi Zang, Jingsong Chen, Bentian Jiang, Martin D. F. Wong, Evangeline F. Y. Young:
Partition and place finite element model on wafer-scale engine. 631-636
- Huimin Wang, Xingyu Tong, Chenyue Ma, Runming Shi, Jianli Chen, Kun Wang, Jun Yu, Yao-Wen Chang:
CNN-inspired analytical global placement for large-scale heterogeneous FPGAs. 637-642
- Ziran Zhu, Yangjie Mei, Zijun Li, Jingwen Lin, Jianli Chen, Jun Yang, Yao-Wen Chang:
High-performance placement for large-scale heterogeneous FPGAs with clock constraints. 643-648
- Jing Mai, Yibai Meng, Zhixiong Di, Yibo Lin:
Multi-electrostatic FPGA placement considering SLICEL-SLICEM heterogeneity and clock feasibility. 649-654
- Hanrui Wang, Zirui Li, Jiaqi Gu, Yongshan Ding, David Z. Pan, Song Han:
QOC: quantum on-chip training with parameter shift and gradient pruning. 655-660
- Mikail Yayla, Jian-Jia Chen:
Memory-efficient training of binarized neural networks on the edge. 661-666