References: A crossbar array of magnetoresistive memory devices for in-memory computing (Nature)

1. Horowitz, M. Computing’s energy problem (and what we can do about it). In Proc. International Solid-State Circuits Conference (ISSCC) 10−14 (IEEE, 2014).

2. Keckler, S. W., Dally, W. J., Khailany, B., Garland, M. & Glasco, D. GPUs and the future of parallel computing. IEEE Micro 31, 7–17 (2011).

3. Song, J. et al. An 11.5TOPS/W 1024-MAC butterfly structure dual-core sparsity-aware neural processing unit in 8nm flagship mobile SoC. In 2019 IEEE Int. Solid-State Circuits Conference Digest of Technical Papers (ISSCC) 130−131 (IEEE, 2019).

4. Sebastian, A. et al. Memory devices and applications for in-memory computing. Nat. Nanotechnol. 15, 529–544 (2020).

5. Wang, Z. et al. Resistive switching materials for information processing. Nat. Rev. Mater. 5, 173–195 (2020).

6. Ielmini, D. & Wong, H.-S. P. In-memory computing with resistive switching devices. Nat. Electron. 1, 333–343 (2018).

7. Verma, N. et al. In-memory computing: advances and prospects. IEEE Solid-State Circuits Mag. 11, 43–55 (2019).

8. Woo, J. et al. Improved synaptic behavior under identical pulses using AlOx/HfO2 bilayer RRAM array for neuromorphic systems. IEEE Electron Device Lett. 37, 994–997 (2016).

9. Yao, P. et al. Face classification using electronic synapses. Nat. Commun. 8, 15199 (2017).

10. Wu, H. et al. Device and circuit optimization of RRAM for neuromorphic computing. In 2017 IEEE International Electron Devices Meeting 11.5.1−11.5.4 (IEEE, 2017).

11. Li, C. et al. Efficient and self-adaptive in-situ learning in multilayer memristor neural networks. Nat. Commun. 9, 2385 (2018).

12. Chen, W. et al. CMOS-integrated memristive non-volatile computing-in-memory for AI edge processors. Nat. Electron. 2, 420–428 (2019).

13. Yao, P. et al. Fully hardware-implemented memristor convolutional neural network. Nature 577, 641–646 (2020).

14. Le Gallo, M. et al. Mixed-precision in-memory computing. Nat. Electron. 1, 246–253 (2018).

15. Ambrogio, S. et al. Equivalent-accuracy accelerated neural-network training using analogue memory. Nature 558, 60–67 (2018).

16. Merrikh-Bayat, F. et al. High-performance mixed-signal neurocomputing with nanoscale floating-gate memory cell arrays. IEEE Trans. Neural Netw. Learn. Syst. 29, 4782–4790 (2018).

17. Wang, P. et al. Three-dimensional NAND flash for vector-matrix multiplication. IEEE Trans. VLSI Syst. 27, 988–991 (2019).

18. Xiang, Y. et al. Efficient and robust spike-driven deep convolutional neural networks based on NOR flash computing array. IEEE Trans. Electron Dev. 67, 2329–2335 (2020).

19. Lin, Y.-Y. et al. A novel voltage-accumulation vector-matrix multiplication architecture using resistor-shunted floating gate flash memory device for low-power and high-density neural network applications. In 2018 IEEE International Electron Devices Meeting 2.4.1−2.4.4 (IEEE, 2018).

20. Song, Y. J. et al. Demonstration of highly manufacturable STT-MRAM embedded in 28nm logic. In 2018 IEEE International Electron Devices Meeting 18.2.1−18.2.4 (IEEE, 2018).

21. Lee, Y. K. et al. Embedded STT-MRAM in 28-nm FDSOI logic process for industrial MCU/IoT application. In 2018 IEEE Symposium on VLSI Technology 181−182 (IEEE, 2018).

22. Wei, L. et al. A 7Mb STT-MRAM in 22FFL FinFET technology with 4ns read sensing time at 0.9V using write-verify-write scheme and offset-cancellation sensing technique. In 2019 IEEE Int. Solid-State Circuits Conference Digest of Technical Papers 214−216 (IEEE, 2019).

23. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

24. Yu, S. Neuro-inspired computing with emerging nonvolatile memory. Proc. IEEE 106, 260–285 (2018).

25. Patil, A. D. et al. An MRAM-based deep in-memory architecture for deep neural networks. In 2019 IEEE International Symposium on Circuits and Systems (IEEE, 2019).

26. Zabihi, M. et al. In-memory processing on the spintronic CRAM: from hardware design to application mapping. IEEE Trans. Comput. 68, 1159–1173 (2019).

27. Kang, S. H. Embedded STT-MRAM for energy-efficient and cost-effective mobile systems. In 2014 IEEE Symposium on VLSI Technology (IEEE, 2014).

28. Zeng, Z. M. et al. Effect of resistance-area product on spin-transfer switching in MgO-based magnetic tunnel junction memory cells. Appl. Phys. Lett. 98, 072512 (2011).

29. Kim, H. & Kwon, S.-W. Full-precision neural networks approximation based on temporal domain binary MAC operations. US patent 17/085,300.

30. Hung, J.-M. et al. Challenges and trends in developing nonvolatile memory-enabled computing chips for intelligent edge devices. IEEE Trans. Electron Dev. 67, 1444–1453 (2020).

31. Jiang, Z., Yin, S., Seo, J. & Seok, M. C3SRAM: an in-memory-computing SRAM macro based on robust capacitive coupling computing mechanism. IEEE J. Solid-State Circuits 55, 1888–1897 (2020).

32. Hubara, I. et al. Binarized neural networks. In Advances in Neural Information Processing Systems 4107−4115 (NeurIPS, 2016).

33. Rastegari, M., Ordonez, V., Redmon, J. & Farhadi, A. XNOR-Net: ImageNet classification using binary convolutional neural networks. In 2016 European Conference on Computer Vision 525−542 (Springer, 2016).

34. Lin, X., Zhao, C. & Pan, W. Towards accurate binary convolutional neural network. In Advances in Neural Information Processing Systems 345−353 (NeurIPS, 2017).

35. Zhuang, B. et al. Structured binary neural networks for accurate image classification and semantic segmentation. In 2019 IEEE Conference on Computer Vision and Pattern Recognition 413−422 (IEEE, 2019).

36. Shafiee, A. et al. ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture 14−26 (IEEE, 2016).

37. Liu, B. et al. Digital-assisted noise-eliminating training for memristor crossbar-based analog neuromorphic computing engine. In 2013 50th ACM/EDAC/IEEE Design Automation Conference 1−6 (IEEE, 2013).

38. Wu, B., Iandola, F., Jin, P. H. & Keutzer, K. SqueezeDet: unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In 2017 IEEE Conference on Computer Vision and Pattern Recognition 129−137 (IEEE, 2017).

39. Ham, D., Park, H., Hwang, S. & Kim, K. Neuromorphic electronics based on copying and pasting the brain. Nat. Electron. 4, 635–644 (2021).

40. Wang, P. et al. Two-step quantization for low-bit neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition 4376−4384 (IEEE, 2018).
