StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples

Bibliographic Details
Main Authors: Patel, Ajay; Zhu, Jiacheng; Qiu, Justin; Horvitz, Zachary; Apidianaki, Marianna; McKeown, Kathleen; Callison-Burch, Chris
Format: Journal Article
Language: English
Published: 16-10-2024
Description
Summary: Style representations aim to embed texts with similar writing styles closely and texts with different styles far apart, regardless of content. However, the contrastive triplets often used for training these representations may vary in both style and content, leading to potential content leakage in the representations. We introduce StyleDistance, a novel approach to training stronger content-independent style embeddings. We use a large language model to create a synthetic dataset of near-exact paraphrases with controlled style variations, and produce positive and negative examples across 40 distinct style features for precise contrastive learning. We assess the quality of our synthetic data and embeddings through human and automatic evaluations. StyleDistance enhances the content-independence of style embeddings, which generalize to real-world benchmarks and outperform leading style representations in downstream applications. Our model can be found at https://huggingface.co/StyleDistance/styledistance.
DOI: 10.48550/arxiv.2410.12757
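
The contrastive objective the abstract describes can be illustrated with a short sketch. This is not the authors' training code; it shows the general triplet pattern under the stated setup, where a positive shares the anchor's style feature and a negative is a near-exact paraphrase without it. The function name, the cosine-distance choice, and the margin value are illustrative assumptions.

```python
# Hedged sketch of a triplet-style contrastive loss over synthetic
# paraphrase triplets; names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def triplet_style_loss(anchor, positive, negative, margin=0.5):
    """Pull same-style embeddings together, push different-style apart.

    anchor/positive/negative: (batch, dim) embeddings from a text encoder.
    Because the negative is a near-exact paraphrase, content is held
    constant and the gradient signal isolates style.
    """
    pos_dist = 1 - F.cosine_similarity(anchor, positive)
    neg_dist = 1 - F.cosine_similarity(anchor, negative)
    return torch.clamp(pos_dist - neg_dist + margin, min=0).mean()

# Toy check with random vectors standing in for encoder outputs.
a, p, n = (torch.randn(8, 768) for _ in range(3))
print(triplet_style_loss(a, p, n))
```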
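
To try the released embeddings, something like the following should work, assuming the checkpoint is compatible with the sentence-transformers loading interface; the model card at the URL above is the authoritative reference. The example texts are made up for illustration.

```python
# Minimal usage sketch: compare style similarity across texts,
# assuming the model loads via sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("StyleDistance/styledistance")

texts = [
    "We can't make it tonight, sorry!",                        # informal
    "Regrettably, we will be unable to attend this evening.",  # formal paraphrase
    "u free later? that new cafe looks cool",                  # informal, new content
]

embeddings = model.encode(texts, convert_to_tensor=True)

# A content-independent style embedding should rate the two informal
# texts (indices 0 and 2) as closer than the paraphrase pair (0 and 1),
# even though texts 0 and 1 share content.
print("same content, different style:", util.cos_sim(embeddings[0], embeddings[1]).item())
print("different content, same style:", util.cos_sim(embeddings[0], embeddings[2]).item())
```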