DOI: 10.1145/3526241.3530376

MRAM-based Analog Sigmoid Function for In-memory Computing

Published: 06 June 2022

Abstract

We propose an analog implementation of the transcendental activation function leveraging two spin-orbit torque magnetoresistive random-access memory (SOT-MRAM) devices and a CMOS inverter. The proposed analog neuron circuit consumes 1.8-27x less power and occupies 2.5-4931x smaller area than state-of-the-art analog and digital implementations. Moreover, the developed neuron can be readily integrated with memristive crossbars without requiring any intermediate signal conversion units. Architecture-level analyses show that a fully-analog in-memory computing (IMC) circuit that uses our SOT-MRAM neuron along with an SOT-MRAM-based crossbar can achieve more than 1.1x, 12x, and 13.3x reductions in power, latency, and energy, respectively, compared to a mixed-signal implementation with analog memristive crossbars and digital neurons. Finally, through cross-layer analyses, we provide a guide on how varying the device-level parameters of our neuron affects the accuracy of a multilayer perceptron (MLP) for MNIST classification.
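The circuit details are in the full paper; as a rough behavioral sketch of the idea (not the authors' circuit model), the Python snippet below illustrates how two complementary SOT-MRAM resistances forming a voltage divider into a CMOS inverter can produce a sigmoid-shaped current-to-voltage transfer. All values (VDD, R_P, R_AP, the inverter gain BETA, and the current scale I_C) and the smooth state-blending model are illustrative assumptions, not parameters taken from the paper.

```python
import math

# Behavioral sketch (assumed values, not from the paper): two SOT-MRAM
# devices form a voltage divider whose midpoint drives a CMOS inverter,
# yielding a sigmoid-shaped transfer from input current to output voltage.
VDD = 0.8      # supply voltage in volts (assumed)
R_P = 3.0e3    # parallel-state MTJ resistance in ohms (assumed)
R_AP = 6.0e3   # antiparallel-state MTJ resistance in ohms (assumed)
BETA = 25.0    # inverter gain around its switching threshold (assumed)
I_C = 50e-6    # current scale of the SOT-driven state change (assumed)

def neuron_vout(i_in: float) -> float:
    """Map a crossbar output current (A) to the neuron's output voltage (V)."""
    # The input current pushes the two devices toward opposite states;
    # model each resistance as a smooth blend between R_P and R_AP.
    p = 1.0 / (1.0 + math.exp(-i_in / I_C))
    r_top = R_P + (R_AP - R_P) * p          # device tied to VDD
    r_bot = R_P + (R_AP - R_P) * (1.0 - p)  # device tied to ground
    v_mid = VDD * r_bot / (r_top + r_bot)   # divider midpoint
    # The CMOS inverter acts as a steep inverting sigmoid around VDD/2,
    # restoring a full-swing, monotonically rising output.
    return VDD / (1.0 + math.exp(BETA * (v_mid - VDD / 2.0)))

# Sample the transfer curve: output rises smoothly from ~0 V to ~VDD.
for i_ua in (-200, -100, 0, 100, 200):
    print(f"{i_ua:+4d} uA -> {neuron_vout(i_ua * 1e-6):.3f} V")
```

Because the neuron's input is a current, it can consume a memristive crossbar's column current directly, which is the property behind the abstract's claim that no intermediate signal-conversion units are needed between the crossbar and the activation.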

Supplementary Material

MP4 File (GLSVLSI22-vlsi133s.mp4)
Presentation video for the paper "MRAM-based Analog Sigmoid Function for In-Memory Computing", covering the motivation, necessary background, the proposed hardware, simulation results, and conclusions of the work.




Published In

GLSVLSI '22: Proceedings of the Great Lakes Symposium on VLSI 2022
June 2022, 560 pages
ISBN: 9781450393225
DOI: 10.1145/3526241

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. SOT-MRAM neuron
  2. analog neuron
  3. in-memory computing
  4. multilayer perceptron (MLP)

Qualifiers

  • Short-paper


Acceptance Rates

Overall acceptance rate: 312 of 1,156 submissions (27%)


Article Metrics

  • Downloads (last 12 months): 80
  • Downloads (last 6 weeks): 18
Reflects downloads up to 13 January 2025.


Cited By

  • SAC: An Ultra-Efficient Spin-based Architecture for Compressed DNNs. ACM Transactions on Architecture and Code Optimization 21, 1 (Jan. 2024), 1-26. DOI: 10.1145/3632957
  • Full-Analog Implementation of Activation Function Based on Phase-Change Memory for Artificial Neural Networks. IEEE Transactions on Industrial Electronics 71, 8 (Aug. 2024), 9914-9922. DOI: 10.1109/TIE.2023.3319711
  • Energy Efficient Spin-Based Implementation of Neuromorphic Functions in CNNs. IEEE Open Journal of Nanotechnology 4 (2023), 102-108. DOI: 10.1109/OJNANO.2023.3261959
  • Xbar-Partitioning: A Practical Way for Parasitics and Noise Tolerance in Analog IMC Circuits. IEEE Journal on Emerging and Selected Topics in Circuits and Systems 12, 4 (Dec. 2022), 867-877. DOI: 10.1109/JETCAS.2022.3222966
