This talk presents interpretable metrics for evaluating topic coherence using distributed sentence representations. It then introduces computable approximations of human judgment of conversational coherence by adopting state-of-the-art entailment techniques. Finally, the talk shows that the introduced metrics can serve as a surrogate for human judgment, making it easy to evaluate dialogue systems on large-scale datasets and providing an unbiased estimate of response quality.
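As a rough illustration of the first idea, a coherence metric built on distributed sentence representations can score a response by comparing its vector to the vector of the dialogue context. The sketch below is a toy, self-contained version: the tiny hand-written word embeddings and the averaging scheme are illustrative assumptions, standing in for learned embeddings and a trained sentence encoder.

```python
import math

# Toy word embeddings (hypothetical stand-ins for learned vectors such as
# word2vec/GloVe or a trained sentence encoder).
EMB = {
    "the": [0.1, 0.0, 0.2], "cat": [0.9, 0.1, 0.0], "sat": [0.2, 0.8, 0.1],
    "dog": [0.8, 0.2, 0.1], "ran": [0.3, 0.7, 0.2], "sky": [0.0, 0.1, 0.9],
}

def sentence_vec(sentence):
    """Average word vectors to get a distributed sentence representation."""
    vecs = [EMB[w] for w in sentence.lower().split() if w in EMB]
    if not vecs:
        return [0.0, 0.0, 0.0]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def coherence(context, response):
    """Interpretable coherence score: how close the response vector
    is to the context vector in the shared embedding space."""
    return cosine(sentence_vec(context), sentence_vec(response))
```

A topically related response scores higher than an off-topic one, e.g. `coherence("the cat sat", "the dog ran")` exceeds `coherence("the cat sat", "the sky")`, which is what makes the metric interpretable: the score directly reflects semantic proximity in the representation space.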
WHAT YOU'LL LEARN
- The task of evaluating dialogue systems is far from solved: researchers are still searching for a strong, reliable metric that conforms closely with human judgment.
- Consistency is key in evaluating dialogue systems
- Entailment techniques lay the foundation for future work on better evaluating consistency in dialogues
- Deep learning and reinforcement learning enable new research
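The entailment-based point above can be sketched concretely: treat each earlier turn of the dialogue as a premise and the candidate response as a hypothesis, and count a response as consistent when no turn contradicts it. The `nli_label` function below is a hypothetical keyword-based placeholder for a real natural language inference model (e.g. one fine-tuned on an NLI corpus); only the scoring structure around it reflects the entailment-based approach.

```python
def nli_label(premise, hypothesis):
    """Placeholder for a trained NLI model: returns 'entailment',
    'neutral', or 'contradiction'. This toy version flags a
    contradiction when the hypothesis is a negated restatement
    of the premise (illustrative heuristic only)."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    negators = {"not", "no", "never", "do", "does"}
    if h & {"not", "no", "never"} and (h - negators) <= p:
        return "contradiction"
    if h <= p:
        return "entailment"
    return "neutral"

def consistency_score(history, response):
    """Fraction of history turns that the response does not contradict.
    An entailment-based consistency metric rewards responses that stay
    compatible with everything said earlier in the dialogue."""
    if not history:
        return 1.0
    labels = [nli_label(turn, response) for turn in history]
    return sum(lab != "contradiction" for lab in labels) / len(labels)
```

For example, given the history `["i like cats", "i live in paris"]`, the response `"i do not like cats"` contradicts the first turn and scores 0.5, whereas a neutral response scores 1.0. Swapping in a real NLI model changes only `nli_label`, not the surrounding metric.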