Rethinking Model Evaluation as Narrowing the Socio-Technical Gap
Main Authors: | |
---|---|
Format: | Journal Article |
Language: | English |
Published: | 31-05-2023 |
Summary: | The recent development of generative and large language models (LLMs) poses new challenges for model evaluation that the research community and industry are grappling with. While the versatile capabilities of these models ignite excitement, they also inevitably make a leap toward homogenization: powering a wide range of applications with a single model, often referred to as "general-purpose". In this position paper, we argue that model evaluation practices must take on a critical task to cope with the challenges and responsibilities brought by this homogenization: providing valid assessments of whether, and how well, human needs in downstream use cases can be satisfied by a given model (the socio-technical gap). Drawing on lessons from the social sciences, human-computer interaction (HCI), and the interdisciplinary field of explainable AI (XAI), we urge the community to develop evaluation methods based on real-world socio-requirements and to embrace diverse evaluation methods while acknowledging the trade-off between realism with respect to socio-requirements and the pragmatic cost of conducting the evaluation. By mapping HCI and current NLG evaluation methods, we identify opportunities for LLM evaluation methods to narrow the socio-technical gap and pose open questions. |
DOI: | 10.48550/arxiv.2306.03100 |