Latent Space Interpretation for Stylistic Analysis and Explainable Authorship Attribution
Format: Journal Article
Language: English
Published: 11-09-2024
Summary: Recent state-of-the-art authorship attribution methods learn authorship representations of texts in a latent, non-interpretable space, hindering their usability in real-world applications. Our work proposes a novel approach to interpreting these learned embeddings by identifying representative points in the latent space and utilizing LLMs to generate informative natural language descriptions of the writing style of each point. We evaluate the alignment of our interpretable space with the latent one and find that it achieves the best prediction agreement compared to other baselines. Additionally, we conduct a human evaluation to assess the quality of these style descriptions, validating their utility as explanations for the latent space. Finally, we investigate whether human performance on the challenging authorship attribution (AA) task improves when aided by our system's explanations, finding an average improvement of around +20% in accuracy.
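The first step the summary describes, identifying representative points in the latent embedding space, can be sketched as follows. This is a minimal illustration only, not the paper's actual method: it assumes k-means clustering as the way representative points are chosen, and the embeddings, cluster count, and variable names are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for learned authorship embeddings (n_texts x dim);
# in practice these would come from a trained attribution model.
embeddings = rng.normal(size=(200, 16))

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns k centroids acting as representative points."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each embedding to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(embeddings, k=5)

# For each representative point, gather the indices of its nearest texts;
# in the described pipeline, such texts would be handed to an LLM to elicit
# a natural-language description of the writing style around that point.
nearest = {
    j: np.argsort(np.linalg.norm(embeddings - c, axis=1))[:3]
    for j, c in enumerate(centroids)
}
```

The LLM-description and human-evaluation stages are not reproducible from the abstract alone, so they are left as comments here.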
DOI: 10.48550/arxiv.2409.07072