Hybrid no-propagation learning for multilayer neural networks
Published in: Neurocomputing (Amsterdam), Vol. 321, pp. 28-35
Main Authors: Adhikari, Shyam Prasad; Yang, Changju; Slot, Krzysztof; Strzelecki, Michal; Kim, Hyongsuk
Format: Journal Article
Language: English
Published: Elsevier B.V., 10-12-2018
Subjects: Backpropagation; Delta rule; Multilayer neural network; No-propagation; On-chip learning; Random weight change
Online Access: https://dx.doi.org/10.1016/j.neucom.2018.08.034
Abstract | A hybrid learning algorithm suitable for hardware implementation of multilayer neural networks is proposed. Though backpropagation is a powerful learning method for multilayer neural networks, its hardware implementation is difficult due to the complexity of the neural synapses and of the operations involved in error backpropagation. We propose a learning algorithm whose performance is comparable to that of backpropagation but which is easier to implement in hardware for on-chip learning of multilayer neural networks. In the proposed algorithm, a multilayer neural network is trained with a hybrid of the gradient-based delta rule and a stochastic algorithm called Random Weight Change. The parameters of the output layer are learned using the delta rule, whereas the inner-layer parameters are learned using Random Weight Change, so the overall multilayer neural network is trained without the need for error backpropagation. Experimental results on the benchmark MNIST dataset show that the proposed hybrid learning rule performs better than either of its constituent learning algorithms and comparably to backpropagation. A hardware architecture illustrating the ease of implementing the proposed learning rule in analog hardware, compared with the backpropagation algorithm, is also presented.
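The following Python sketch illustrates the hybrid rule summarized in the abstract. It is an illustrative reconstruction, not the authors' implementation: the 784-100-10 network size, learning rate, and perturbation magnitude are assumed values. The output-layer weights are updated with the local delta rule, while the hidden-layer weights are updated by Random Weight Change, i.e. a random ±Δw perturbation is kept as long as the output error keeps decreasing and is redrawn otherwise.

```python
# Illustrative sketch (assumed details, not the authors' code) of the hybrid rule:
# delta rule on the output layer, Random Weight Change (RWC) on the hidden layer.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 784-100-10 network for MNIST-like inputs.
W1 = rng.normal(0.0, 0.1, (100, 784))   # hidden-layer weights, trained by RWC
W2 = rng.normal(0.0, 0.1, (10, 100))    # output-layer weights, trained by delta rule
eta, delta_w = 0.1, 0.005                # assumed learning rate and RWC step size
dW1 = delta_w * rng.choice([-1.0, 1.0], W1.shape)  # current random perturbation
prev_err = np.inf

def train_step(x, t):
    """One training sample: delta rule on W2, Random Weight Change on W1."""
    global W1, W2, dW1, prev_err
    h = sigmoid(W1 @ x)                  # forward pass, hidden layer
    y = sigmoid(W2 @ h)                  # forward pass, output layer
    e = t - y                            # output error (target minus output)

    # Delta rule on the output layer: uses only locally available signals.
    W2 += eta * np.outer(e * y * (1.0 - y), h)

    # Random Weight Change on the hidden layer: keep the current random
    # perturbation while the error decreases, otherwise redraw +/-delta_w signs.
    err = float(np.sum(e ** 2))
    if err >= prev_err:
        dW1 = delta_w * rng.choice([-1.0, 1.0], W1.shape)
    W1 += dW1
    prev_err = err

# Example call with a random input and a one-hot target (illustrative only):
# x = rng.random(784); t = np.eye(10)[3]; train_step(x, t)
```

Because the hidden-layer update needs only the scalar output error, no error signal has to be propagated backward through the synapse array, which is what makes the rule attractive for analog on-chip learning.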
Author | Adhikari, Shyam Prasad; Yang, Changju; Kim, Hyongsuk; Slot, Krzysztof; Strzelecki, Michal |
Author details |
– Shyam Prasad Adhikari (ORCID: 0000-0002-8531-4599), Division of Electronics Engineering, Chonbuk National University, Jeonju 561-756, Republic of Korea; all.shyam@gmail.com
– Changju Yang, Division of Electronics Engineering, Chonbuk National University, Jeonju 561-756, Republic of Korea; ychangju@jbnu.ac.kr
– Krzysztof Slot, Institute of Applied Computer Science, Lodz University of Technology, Stefanowskiego 18/22, 90-924 Lodz, Poland; krzysztof.slot@p.lodz.pl
– Michal Strzelecki, Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, 90-924 Lodz, Poland; michal.strzelecki@p.lodz.pl
– Hyongsuk Kim, Division of Electronics Engineering, Chonbuk National University, Jeonju 561-756, Republic of Korea; hskim@jbnu.ac.kr
Copyright | 2018 |
DOI | 10.1016/j.neucom.2018.08.034 |
Discipline | Computer Science |
EISSN | 1872-8286 |
EndPage | 35 |
ISSN | 0925-2312 |
IsPeerReviewed | true |
IsScholarly | true |
Keywords | Backpropagation; Multilayer neural network; Random weight change; On-chip learning; No-propagation; Delta rule |
Language | English |
ORCID | 0000-0002-8531-4599 |
PageCount | 8 |
PublicationDate | 2018-12-10 |
PublicationTitle | Neurocomputing (Amsterdam) |
PublicationYear | 2018 |
Publisher | Elsevier B.V |
StartPage | 28 |
SubjectTerms | Backpropagation; Delta rule; Multilayer neural network; No-propagation; On-chip learning; Random weight change |
Title | Hybrid no-propagation learning for multilayer neural networks |
URI | https://dx.doi.org/10.1016/j.neucom.2018.08.034 |
Volume | 321 |