Learning State Transition Rules from High-Dimensional Time Series Data with Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machines
Published in: | Human-Centric Intelligent Systems Vol. 3; no. 3; pp. 296 - 311 |
---|---|
Main Authors: | Watanabe, Koji; Inoue, Katsumi |
Format: | Journal Article |
Language: | English |
Published: | Dordrecht: Springer Netherlands, 20-06-2023 (Springer Nature) |
Subjects: | Restricted Boltzmann Machine; State Transition Rules; Hidden Variables |
Abstract | Understanding the dynamics of a system is crucial in various scientific and engineering domains. Machine learning techniques have been employed to learn state transition rules from observed time-series data. However, these data often contain sequences of noisy and ambiguous continuous variables, while we typically seek simplified dynamics rules that capture essential variables. In this work, we propose a method to extract a small number of essential hidden variables from high-dimensional time-series data and learn state transition rules between hidden variables. Our approach is based on the Restricted Boltzmann Machine (RBM), which models observable data in the visible layer and latent features in the hidden layer. However, real-world data, such as video and audio, consist of both discrete and continuous variables with temporal relationships. To address this, we introduce the Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machine (RTGB-RBM), which combines the Gaussian-Bernoulli Restricted Boltzmann Machine (GB-RBM) to handle continuous visible variables and the Recurrent Temporal Restricted Boltzmann Machine (RT-RBM) to capture time dependencies among discrete hidden variables. Additionally, we propose a rule-based method to extract essential information as hidden variables and represent state transition rules in an interpretable form. We evaluate our proposed method on the Bouncing Ball, Moving MNIST, and dSprite datasets. Experimental results demonstrate that our approach effectively learns the dynamics of these physical systems by extracting state transition rules between hidden variables. Moreover, our method can predict unobserved future states based on observed state transitions. |
---|---|
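The abstract describes a model whose visible layer holds continuous observations (Gaussian units) and whose hidden layer holds binary latent variables (Bernoulli units). The following is a minimal NumPy sketch of one Gibbs step in a plain Gaussian-Bernoulli RBM, which is the building block the abstract names; the dimensions, weight scale, and fixed unit variance are illustrative assumptions, not the paper's RTGB-RBM implementation (which additionally adds recurrent temporal connections among the hidden units).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes (hypothetical, not from the paper)
n_vis, n_hid = 6, 3
W = rng.normal(scale=0.1, size=(n_vis, n_hid))  # visible-hidden weights
b = np.zeros(n_vis)   # biases of Gaussian visible units
c = np.zeros(n_hid)   # biases of Bernoulli hidden units
sigma = 1.0           # fixed noise scale of the visible units

def sample_hidden(v):
    """p(h_j = 1 | v) = sigmoid(c_j + sum_i (v_i / sigma^2) W_ij)."""
    p = sigmoid(c + (v / sigma**2) @ W)
    return (rng.random(n_hid) < p).astype(float), p

def sample_visible(h):
    """p(v | h) = N(b + W h, sigma^2 I): continuous reconstruction."""
    return b + W @ h + sigma * rng.normal(size=n_vis)

# One Gibbs step: continuous observation -> binary latent -> reconstruction
v0 = rng.normal(size=n_vis)
h0, p_h = sample_hidden(v0)
v1 = sample_visible(h0)
```

Reading out `h0` across time steps is what makes the latent state discrete and small, which is the property the paper's rule-extraction step relies on.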
Author | Watanabe, Koji; Inoue, Katsumi |
Author_xml | – sequence: 1 givenname: Koji orcidid: 0000-0003-4603-5252 surname: Watanabe fullname: Watanabe, Koji email: kojiwatanabe@nii.ac.jp organization: The Graduate University for Advanced Studies, SOKENDAI, National Institute of Informatics – sequence: 2 givenname: Katsumi surname: Inoue fullname: Inoue, Katsumi organization: The Graduate University for Advanced Studies, SOKENDAI, National Institute of Informatics |
ContentType | Journal Article |
Copyright | The Author(s) 2023 |
DOI | 10.1007/s44230-023-00026-2 |
Discipline | Computer Science |
EISSN | 2667-1336 |
EndPage | 311 |
ISSN | 2667-1336 |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | true |
Issue | 3 |
Keywords | Restricted Boltzmann Machine State Transition Rules Hidden Variables |
Language | English |
ORCID | 0000-0003-4603-5252 |
OpenAccessLink | http://link.springer.com/10.1007/s44230-023-00026-2 |
PageCount | 16 |
PublicationCentury | 2000 |
PublicationDate | 2023-06-20 |
PublicationDecade | 2020 |
PublicationPlace | Dordrecht |
PublicationTitle | Human-Centric Intelligent Systems |
PublicationTitleAbbrev | Hum-Cent Intell Syst |
PublicationYear | 2023 |
Publisher | Springer Netherlands Springer Nature |
StartPage | 296 |
SubjectTerms | Computer Science Research Article Restricted Boltzmann Machine State Transition Rules Hidden Variables |
Title | Learning State Transition Rules from High-Dimensional Time Series Data with Recurrent Temporal Gaussian-Bernoulli Restricted Boltzmann Machines |
URI | https://link.springer.com/article/10.1007/s44230-023-00026-2 https://doaj.org/article/2858d73ce2d34dcfbef20de806d830d6 |
Volume | 3 |