Fine-Tuning Language Models for Scientific Writing Support
Format: Journal Article
Language: English
Published: 19-06-2023
Summary: We support scientific writers in determining whether a written sentence is scientific, identifying the section to which it belongs, and suggesting paraphrases to improve the sentence. First, we propose a regression model, trained on a corpus of scientific sentences extracted from peer-reviewed scientific papers and on non-scientific text, that assigns a score indicating the scientificness of a sentence. We investigate the effect of equations and citations on this score to test the model for potential biases. Second, we create a mapping of section titles to a standard paper layout in AI and machine learning to classify a sentence into its most likely section. We study the impact of context, i.e., surrounding sentences, on section classification performance. Finally, we propose a paraphraser that suggests an alternative for a given sentence, including word substitutions, additions to the sentence, and structural changes to improve the writing style. We train various large language models on sentences extracted from arXiv papers that were peer reviewed and published at A*-, A-, B-, and C-ranked conferences. On the scientificness task, all models achieve an MSE smaller than $2\%$. For section classification, BERT outperforms WideMLP and SciBERT in most cases. We demonstrate that using context enhances the classification of a sentence, achieving an F1-score of up to $90\%$. Although the paraphrasing models make comparatively few alterations, they produce output sentences close to the gold standard. Large fine-tuned models such as T5 Large perform best in experiments considering various measures of difference between the input sentence and the gold standard. Code is available at https://github.com/JustinMuecke/SciSen.
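The summary mentions mapping heterogeneous section titles onto a standard AI/ML paper layout before classifying sentences. As a minimal sketch of that idea (the canonical layout and the example titles below are illustrative assumptions, not the paper's actual mapping):

```python
# Illustrative sketch: normalize raw section titles from papers onto a
# standard AI/ML paper layout, as described in the summary. The canonical
# section list and the title-to-section entries here are hypothetical.
STANDARD_SECTIONS = [
    "Introduction", "Related Work", "Methods",
    "Experiments", "Results", "Discussion", "Conclusion",
]

# Example raw titles mapped to the canonical layout (assumed entries).
TITLE_MAP = {
    "background": "Related Work",
    "prior work": "Related Work",
    "approach": "Methods",
    "methodology": "Methods",
    "evaluation": "Experiments",
    "experimental setup": "Experiments",
    "findings": "Results",
    "concluding remarks": "Conclusion",
}

def normalize_section(title: str) -> str:
    """Map a raw section title to the standard layout. Titles that are
    already canonical pass through; unknown titles fall back to 'Other'."""
    key = title.strip().lower()
    if key in TITLE_MAP:
        return TITLE_MAP[key]
    for canonical in STANDARD_SECTIONS:
        if key == canonical.lower():
            return canonical
    return "Other"
```

Such a normalization step reduces the label space to a fixed set of classes, which is what makes training a per-sentence section classifier (e.g., with BERT, as in the summary) tractable.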
DOI: 10.48550/arxiv.2306.10974