GEM is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics.

GEM aims to:

  • measure NLG progress across 13 datasets spanning many NLG tasks and languages.
  • provide an in-depth analysis of data and models presented via data statements and challenge sets.
  • develop standards for evaluation of generated text using both automated and human metrics.

It is our goal to regularly update GEM and to encourage more inclusive practices in dataset development by extending existing datasets or developing new ones for additional languages.