


Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005: Ann Arbor, Michigan, USA
- Jade Goldstein, Alon Lavie, Chin-Yew Lin, Clare R. Voss: Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005. Association for Computational Linguistics 2005
- Bonnie J. Dorr, Christof Monz, Stacy President, Richard M. Schwartz, David M. Zajic: A Methodology for Extrinsic Evaluation of Text Summarization: Does ROUGE Correlate? 1-8
- BalaKrishna Kolluru, Yoshihiko Gotoh: On the Subjectivity of Human Authored Summaries. 9-16
- Gregor Leusch, Nicola Ueffing, David Vilar, Hermann Ney: Preprocessing and Normalization for Automatic Evaluation of Machine Translation. 17-24
- Ding Liu, Daniel Gildea: Syntactic Features for Evaluation of Machine Translation. 25-32
- Gabriel Murray, Steve Renals, Jean Carletta, Johanna D. Moore: Evaluating Automatic Summaries of Meeting Recordings. 33-40
- Jimmy Lin, Dina Demner-Fushman: Evaluating Summaries and Answers: Two Sides of the Same Coin? 41-48
- Enrique Amigó, Julio Gonzalo, Anselmo Peñas, Felisa Verdejo: Evaluating DUC 2004 Tasks with the QARLA Framework. 49-56
- Stefan Riezler, John T. Maxwell III: On Some Pitfalls in Automatic Evaluation and Significance Testing for MT. 57-64
- Satanjeev Banerjee, Alon Lavie: METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. 65-72
