Informed Pre-Training on Prior Knowledge
Main Authors: | von Rueden, Laura; Houben, Sebastian; Cvejoski, Kostadin; Bauckhage, Christian; Piatkowski, Nico |
---|---|
Format: | Journal Article (Preprint) |
Language: | English |
Published: | 23-05-2022 |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning |
Online Access: | https://arxiv.org/abs/2205.11433 |
Abstract | When training data is scarce, the incorporation of additional prior knowledge
can assist the learning process. While it is common to initialize neural
networks with weights that have been pre-trained on other large data sets,
pre-training on more concise forms of knowledge has rather been overlooked. In
this paper, we propose a novel informed machine learning approach and suggest
to pre-train on prior knowledge. Formal knowledge representations, e.g. graphs
or equations, are first transformed into a small and condensed data set of
knowledge prototypes. We show that informed pre-training on such knowledge
prototypes (i) speeds up the learning processes, (ii) improves generalization
capabilities in the regime where not enough training data is available, and
(iii) increases model robustness. Analyzing which parts of the model are
affected most by the prototypes reveals that improvements come from deeper
layers that typically represent high-level features. This confirms that
informed pre-training can indeed transfer semantic knowledge. This is a novel
effect, which shows that knowledge-based pre-training has additional and
complementary strengths to existing approaches. |
---|---|
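The abstract describes the approach only at a high level. The sketch below is a minimal, hypothetical illustration of the two-phase idea, not the paper's actual procedure: it assumes the prior knowledge is a known equation (y = sin(x), a stand-in chosen for this sketch), condenses it into a small set of knowledge prototypes, pre-trains a network on them, and then fine-tunes on the scarce real data. All names and sizes here are illustrative assumptions.

```python
# Illustrative sketch only: the record does not specify how the paper builds
# its knowledge prototypes. Here, the prior knowledge is assumed to be the
# equation y = sin(x) (hypothetical stand-in), condensed into a tiny data set.
import torch
import torch.nn as nn

def make_prototypes(n=32):
    """Condense the formal knowledge (an equation) into a small prototype set."""
    x = torch.linspace(-3.14, 3.14, n).unsqueeze(1)
    return x, torch.sin(x)  # targets generated from the prior-knowledge equation

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Phase 1: informed pre-training on the knowledge prototypes.
x_proto, y_proto = make_prototypes()
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(x_proto), y_proto).backward()
    opt.step()

# Phase 2: fine-tune on the scarce real training data
# (placeholder noisy samples stand in for the real data set).
x_real = torch.randn(10, 1)
y_real = torch.sin(x_real) + 0.1 * torch.randn(10, 1)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x_real), y_real).backward()
    opt.step()
```

In this reading, the pre-training phase initializes the weights from condensed knowledge rather than from a large auxiliary data set, which is the contrast with conventional pre-training that the abstract draws.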
Copyright | http://creativecommons.org/licenses/by/4.0 |
DOI | 10.48550/arxiv.2205.11433 |