A main contribution of the paper is a new evaluation metric for quantitative argument summarization tasks, which the authors call the "Automatic Summarization Evaluation Metric" (ASEM). They demonstrate that this metric evaluates summary quality more reliably than existing metrics.
The authors also perform an in-depth analysis of the key points identified by their approach and show that these key points capture the most salient information in the input arguments (a sketch of the underlying matching-and-aggregation idea follows below). They further compare the approach against existing methods on summary accuracy and demonstrate that it outperforms them.
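To make the "quantitative" aspect concrete, the sketch below illustrates the general key point analysis pipeline: each argument is matched against a set of candidate key points, and a key point's prevalence is the share of arguments it covers. This is a minimal illustration, not the paper's implementation; the `match_score` function here is a hypothetical stand-in based on naive token overlap (a real system would use a trained matching model), and the threshold and example data are assumptions made purely for demonstration.

```python
from collections import Counter

def match_score(argument: str, key_point: str) -> float:
    """Hypothetical matcher: real key point analysis systems score
    argument/key-point pairs with a trained model. Here we use naive
    token overlap purely for illustration."""
    arg_tokens = set(argument.lower().split())
    kp_tokens = set(key_point.lower().split())
    return len(arg_tokens & kp_tokens) / max(len(kp_tokens), 1)

def key_point_prevalence(arguments, key_points, threshold=0.5):
    """Map each argument to its best-matching key point (if any match
    clears the threshold), then report the fraction of arguments each
    key point covers -- the quantitative part of the summary."""
    counts = Counter()
    for arg in arguments:
        best_kp, best_score = None, threshold
        for kp in key_points:
            score = match_score(arg, kp)
            if score >= best_score:
                best_kp, best_score = kp, score
        if best_kp is not None:
            counts[best_kp] += 1
    total = len(arguments)
    return {kp: counts[kp] / total for kp in key_points}

if __name__ == "__main__":
    # Illustrative arguments on a debate topic (invented for this sketch).
    arguments = [
        "school uniforms reduce bullying over clothing",
        "uniforms reduce bullying and peer pressure",
        "uniforms limit students freedom of expression",
    ]
    key_points = [
        "uniforms reduce bullying",
        "uniforms limit self expression",
    ]
    for kp, prevalence in key_point_prevalence(arguments, key_points).items():
        print(f"{prevalence:.0%} of arguments match: {kp}")
```

Running this toy example reports that two thirds of the arguments map to the "reduce bullying" key point and one third to the "limit expression" key point, which is the kind of key-point-plus-prevalence output a quantitative argument summary produces.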
Overall, the paper makes a significant contribution to quantitative argument summarization by introducing a novel approach that combines cross-domain key point analysis with statistical techniques to generate high-quality summaries. The authors provide detailed evaluation results that demonstrate the effectiveness of the approach and highlight its potential across a variety of domains.
Subjects: Computation and Language; Computer Science