
Evaluating Coherence in Dialogue Systems Using Entailment, Rasa Developer Summit 2019

Nouha Dziri of Google AI explains why evaluating open-domain dialogue systems is difficult: for any given prompt there are many possible correct answers. Automatic metrics such as BLEU correlate weakly with human annotations, introducing significant bias across different models and datasets. Some researchers instead rely on human judgment to assess response quality, which is expensive, time-consuming, and does not scale. Moreover, judges typically evaluate only a small number of dialogues, so minor differences in the evaluation setup can lead to dissimilar results.
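The mismatch is easy to reproduce. The sketch below (an assumed setup using NLTK, not code from the talk) scores a perfectly acceptable dialogue response against a single reference with sentence-level BLEU; because the two share almost no n-grams, the score is near zero even though the response is fine.

```python
# Minimal sketch (assumed setup, not from the talk): n-gram overlap metrics like
# BLEU penalize valid but lexically different dialogue responses.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["i", "am", "doing", "great", "thanks", "for", "asking"]
# A coherent response that shares almost no n-grams with the reference.
candidate = ["pretty", "good", "how", "about", "you"]

smooth = SmoothingFunction().method1
score = sentence_bleu([reference], candidate, smoothing_function=smooth)
print(f"BLEU: {score:.4f}")  # close to zero despite the response being acceptable
```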

This talk presents interpretable metrics for evaluating topic coherence that make use of distributed sentence representations. It then introduces computable approximations of human judgment of conversational coherence, built on state-of-the-art entailment techniques. Finally, it shows that the proposed metrics can serve as a surrogate for human judgment, making it practical to evaluate dialogue systems on large-scale datasets and yielding an unbiased estimate of response quality.
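To make the entailment idea concrete, the sketch below (a simplified illustration, not the speaker's exact model or training setup) treats the dialogue history as the premise and the generated response as the hypothesis, then uses an off-the-shelf NLI model (roberta-large-mnli, an assumed stand-in) to score how strongly the response is entailed by, or contradicts, the history.

```python
# Sketch: score dialogue coherence as natural language inference.
# Assumes the Hugging Face transformers library and a pretrained MNLI model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed stand-in for a dialogue-tuned NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def entailment_scores(history: str, response: str) -> dict:
    """Return contradiction/neutral/entailment probabilities for a response
    given the dialogue history (premise = history, hypothesis = response)."""
    inputs = tokenizer(history, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze().tolist()
    labels = [model.config.id2label[i] for i in range(len(probs))]
    return dict(zip(labels, probs))

history = "I adopted a puppy last week. He keeps me up all night."
print(entailment_scores(history, "Congrats on the new dog, hope you get some sleep soon."))
print(entailment_scores(history, "I'm sorry to hear that your cat passed away."))
```

A response that contradicts the history gets a high contradiction probability, so the entailment score can stand in for a human judgment of consistency and be computed over large datasets.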

WHAT YOU'LL LEARN
- The task of evaluating dialogue systems is far from solved; researchers are still searching for a strong, reliable metric that conforms closely with human judgment.

- Consistency is key when evaluating dialogue systems.

- Entailment techniques lay the foundation for future work on better evaluating consistency in dialogues.

- Deep learning and reinforcement learning enable new research in this area.

Tags: deep learning, conversational AI, NLU
