Bot Believability Assessment: A Novel Protocol & Analysis of Judge Expertise

Bibliographic Details
Published in: 2018 International Conference on Cyberworlds (CW), pp. 96-101
Main Authors: Even, Cindy; Bosser, Anne-Gwenn; Buche, Cédric
Format: Conference Proceeding
Language: English
Published: IEEE, 01-10-2018
Description
Summary: For video game designers, the ability to provide opponents that are both interesting and human-like is a clear benefit to a game's entertainment value. The development of such believable virtual players, also known as Non-Player Characters or bots, remains a challenge that has kept the research community busy for many years. However, evaluation methods vary widely, which can make systems difficult to compare. The BotPrize competition has provided some highly regarded assessment methods for comparing bots' believability in a first-person shooter game: human judges evaluate virtual agents competing for the title of most believable bot. In this paper, we describe a system allowing us to partly automate such a competition, a novel evaluation protocol based on an early version of the BotPrize, and an analysis of the data we collected on human judges during a national event. We observed that the best judges were those who play video games most often, especially games involving combat, and who are used to playing against virtual players, strangers, and physically present players. This result is a starting point for the design of a new generic and rigorous protocol for evaluating bots' believability in first-person shooter games.
DOI:10.1109/CW.2018.00027