A review of in-memory computing for machine learning: architectures, options

Bibliographic Details
Published in: International Journal of Web Information Systems, Vol. 20, No. 1, pp. 24-47
Main Authors: Snasel, Vaclav (vaclav.snasel@vsb.cz); Dang, Tran Khanh (khanh@hufi.edu.vn); Kueng, Josef (josef.kueng@jku.at); Kong, Lingping (lingping.kong@vsb.cz)
Format: Journal Article
Language: English
Published: Bingley: Emerald Publishing Limited, 5 February 2024

Abstract
Purpose: This paper aims to review in-memory computing (IMC) for machine learning (ML) applications in terms of its history, architectures and optimization options. In this review, the authors investigate different architectural aspects and provide their comparative evaluations.
Design/methodology/approach: The authors collect over 40 recent IMC papers on hardware design and optimization techniques and classify them into three optimization categories: optimization through graphics processing units (GPUs), optimization through reduced precision and optimization through hardware accelerators. The authors then summarize each technique in terms of the data sets it was applied to, how the design works and what the design contributes.
Findings: ML algorithms are potent tools that can be accommodated on IMC architectures. Although general-purpose hardware (central processing units and GPUs) can readily supply solutions, its energy efficiency is limited by the overhead of supporting such broad flexibility. Hardware accelerators (field programmable gate arrays and application-specific integrated circuits), by contrast, excel in energy efficiency, but an individual accelerator is often tailored to a single ML approach (or family of approaches). From a long-term hardware-evolution perspective, heterogeneous hardware/software co-design on hybrid platforms is an option for researchers.
Originality/value: Optimizing IMC enables high-speed processing, increases performance and allows massive volumes of data to be analyzed in real time. This work reviews IMC and its evolution, and then categorizes three optimization paths for the IMC architecture to improve its performance metrics.
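To make the "optimization through reduced precision" option from the abstract concrete, the short sketch below quantizes a weight matrix to 8-bit integers and models an idealized in-memory crossbar performing the matrix-vector multiply-accumulate in place. This is a minimal illustration written for this record, not code from the reviewed paper; the layer sizes, the symmetric int8 scheme and the noise-free crossbar model are all assumptions.

```python
import numpy as np


def quantize_int8(w):
    """Symmetric int8 quantization of a weight matrix (the reduced-precision option)."""
    scale = np.max(np.abs(w)) / 127.0            # map the largest weight magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale


def crossbar_matvec(q_weights, scale, x):
    """Idealized in-memory matrix-vector product.

    In a real IMC crossbar the quantized weights would be stored as device
    conductances and the multiply-accumulate would happen in place via Ohm's
    and Kirchhoff's laws; here that analog summation is modelled with integer
    arithmetic followed by a digital rescale at the array periphery.
    """
    acc = q_weights.astype(np.int32) @ x.astype(np.int32)   # in-array accumulation
    return acc.astype(np.float32) * scale                    # peripheral rescaling


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)   # hypothetical layer weights
    x = rng.integers(-8, 8, size=8)                   # hypothetical integer activations
    q, s = quantize_int8(w)
    print("approx:", crossbar_matvec(q, s, x))
    print("exact: ", w @ x)
```

A real deployment would additionally quantize activations, calibrate scales on representative data and account for device non-idealities; the sketch ignores these and is intended only as orientation for the optimization paths the review surveys.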

Copyright: Emerald Publishing Limited
DOI: 10.1108/IJWIS-08-2023-0131
EISSN: 1744-0092
ISSN: 1744-0084
Peer Reviewed: Yes
Scholarly: Yes
Keywords: In-memory computing; In-memory accelerator; Deep neural network; Machine learning
License: Licensed re-use rights only
Subjects: Algorithms; Application specific integrated circuits; Artificial intelligence; Big Data; Central processing units; Computation; Computer architecture; Cost reduction; CPUs; Data analysis; Data processing; Design optimization; Energy consumption; Evolution; Field programmable gate arrays; Graphics processing units; Hardware; Heterogeneity; Machine learning; Neural networks; Performance enhancement; Performance measurement; Random access memory
URI: https://www.emerald.com/insight/content/doi/10.1108/IJWIS-08-2023-0131/full/html
https://www.proquest.com/docview/2920849380