Like keyphrase extraction, document summarization aims to identify the essence of a text. The only real difference is that now we are dealing with larger text units—whole sentences instead of words and phrases.
==== Evaluation ====
ROUGE-1 measures the overlap of unigrams (single words) between an automatically produced summary and a reference summary; if there are multiple references, the ROUGE-1 scores are averaged. Because ROUGE is based only on content overlap, it can determine whether the same general concepts are discussed in an automatic summary and a reference summary, but it cannot determine whether the result is coherent or whether the sentences flow together in a sensible manner. Higher-order n-gram ROUGE measures try to judge fluency to some degree. Note that ROUGE is similar to the BLEU measure for machine translation, but BLEU is precision-based, because translation systems favor accuracy.
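As a sketch of the idea, ROUGE-1 recall can be computed by counting how many unigrams of each reference summary also appear in the candidate summary and averaging over the references. This is a minimal illustration, not the full ROUGE toolkit (which also handles stemming, stopword removal, and precision/F-measure variants):

```python
from collections import Counter

def rouge1(candidate, references):
    """ROUGE-1 recall: unigram overlap between a candidate summary
    and each reference summary, averaged over the references."""
    cand_counts = Counter(candidate.lower().split())
    scores = []
    for ref in references:
        ref_counts = Counter(ref.lower().split())
        # Multiset intersection: each word counts at most as often
        # as it appears in both the candidate and the reference.
        overlap = sum((cand_counts & ref_counts).values())
        scores.append(overlap / max(sum(ref_counts.values()), 1))
    return sum(scores) / len(scores)
```

For example, `rouge1("the cat sat on the mat", ["the cat was on the mat"])` matches five of the six reference unigrams, giving a recall of about 0.83.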
====Supervised learning approaches====
It is worth noting that TextRank was applied to summarization exactly as described here, while LexRank was used as part of a larger summarization system ([[MEAD]]) that combines the LexRank score (stationary probability) with other features, such as sentence position and length, using a [[linear combination]] with either user-specified or automatically tuned weights. In this case, some training documents might be needed, although the TextRank results show that the additional features are not strictly necessary.
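A MEAD-style linear combination of the LexRank score with position and length features might be sketched as follows. The feature definitions and weights here are purely illustrative assumptions, not MEAD's actual formulas or tuned values:

```python
def combined_score(lexrank_score, position, length,
                   weights=(0.6, 0.3, 0.1)):
    """Illustrative linear combination of a sentence's LexRank score
    (stationary probability) with position and length features.
    The feature shapes and weights are hypothetical examples."""
    w_rank, w_pos, w_len = weights
    position_feature = 1.0 / (1 + position)   # earlier sentences score higher
    length_feature = min(length / 20.0, 1.0)  # mild preference for longer sentences
    return (w_rank * lexrank_score +
            w_pos * position_feature +
            w_len * length_feature)
```

The weights could be fixed by the user or fitted on training documents, which is why such a system may need supervision while plain TextRank does not.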
Unlike TextRank, LexRank has been applied to multi-document summarization.
These methods work based on the idea that sentences "recommend" other similar sentences to the reader. Thus, if one sentence is very similar to many others, it will likely be a sentence of great importance. The importance of this sentence also stems from the importance of the sentences "recommending" it. Thus, to get ranked highly and placed in a summary, a sentence must be similar to many sentences that are in turn also similar to many other sentences. This makes intuitive sense and allows the algorithms to be applied to any arbitrary new text. The methods are ___domain-independent and easily portable. One could imagine that the features indicating important sentences in the news ___domain might vary considerably from those in the biomedical ___domain. However, the unsupervised "recommendation"-based approach applies to any ___domain.
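The "recommendation" idea above amounts to running PageRank over a graph whose nodes are sentences and whose edge weights are sentence similarities. The following is a minimal TextRank-style sketch using word-overlap similarity and power iteration; real systems differ in their similarity measures and convergence criteria:

```python
import math

def similarity(s1, s2):
    """Word-overlap similarity normalized by sentence lengths,
    in the spirit of the TextRank formulation."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    denom = math.log(len(w1) + 1) + math.log(len(w2) + 1)
    return len(w1 & w2) / denom if denom > 0 else 0.0

def rank_sentences(sentences, damping=0.85, iters=50):
    """Power iteration of PageRank over the sentence-similarity graph."""
    n = len(sentences)
    sim = [[similarity(a, b) if i != j else 0.0
            for j, b in enumerate(sentences)]
           for i, a in enumerate(sentences)]
    # Row-normalize similarities into transition probabilities.
    for row in sim:
        total = sum(row)
        if total > 0:
            row[:] = [v / total for v in row]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [(1 - damping) / n +
                  damping * sum(sim[j][i] * scores[j] for j in range(n))
                  for i in range(n)]
    return scores
```

A sentence that shares words with many other sentences accumulates a higher score, so the top-ranked sentences can be extracted as the summary.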
====Multi-document summarization====
'''Multi-document summarization''' is an automatic procedure aimed at extracting information from multiple texts written about the same topic. The resulting summary report allows individual users, such as professional information consumers, to quickly familiarize themselves with the information contained in a large cluster of documents. In this way, multi-document summarization systems complement [[news aggregators]], performing the next step down the road of coping with [[information overload]]. Multi-document summarization may also be done in response to a question.<ref>"[https://www.academia.edu/2475776/Versatile_question_answering_systems_seeing_in_synthesis Versatile question answering systems: seeing in synthesis]", International Journal of Intelligent Information Database Systems, 5(2), 119-142, 2011.</ref><ref name="Afzal_et_al">Afzal M, Alam F, Malik KM, Malik GM, [https://www.jmir.org/2020/10/e19810/ Clinical Context-Aware Biomedical Text Summarization Using Deep Neural Network: Model Development and Validation], J Med Internet Res 2020;22(10):e19810, DOI: 10.2196/19810, PMID 33095174</ref>
Multi-document summarization creates information reports that are both concise and comprehensive. With different opinions put together and outlined, every topic is described from multiple perspectives within a single document. While the goal of a brief summary is to simplify the information search and cut reading time by pointing to the most relevant source documents, a comprehensive multi-document summary should itself contain the required information, limiting the need to access the original files to cases when refinement is required. Automatic summaries present information extracted from multiple sources algorithmically, without any editorial touch or subjective human intervention, thus making them completely unbiased. {{dubious|date=June 2018}}
=====Diversity=====
Multi-document extractive summarization faces the problem of redundancy. Ideally, we want to extract sentences that are both "central" (i.e., contain the main ideas) and "diverse" (i.e., differ from one another). For example, in a set of news articles about the same event, each article is likely to contain many similar sentences. To address this issue, LexRank applies a heuristic post-processing step that adds sentences in rank order but discards sentences that are too similar to ones already in the summary; this method is called Cross-Sentence Information Subsumption (CSIS). Other systems have used similar methods, such as Maximal Marginal Relevance (MMR),<ref>Carbonell, Jaime, and Jade Goldstein. "[https://www.cs.cmu.edu/afs/.cs.cmu.edu/Web/People/jgc/publication/MMR_DiversityBased_Reranking_SIGIR_1998.pdf The use of MMR, diversity-based reranking for reordering documents and producing summaries]." Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 1998.</ref> to eliminate redundancy in information retrieval results. There is also a general-purpose graph-based ranking algorithm, like Page/Lex/TextRank, that handles both "centrality" and "diversity" in a unified mathematical framework based on [[absorbing Markov chain]] random walks. (An absorbing random walk is like a standard random walk, except that some states are absorbing states that act as "black holes", causing the walk to end abruptly at that state.) This algorithm is called GRASSHOPPER.<ref>Zhu, Xiaojin, et al. "[http://www.aclweb.org/anthology/N07-1013 Improving Diversity in Ranking using Absorbing Random Walks]." HLT-NAACL. 2007.</ref> In addition to explicitly promoting diversity during the ranking process, GRASSHOPPER incorporates a prior ranking (based on sentence position, in the case of summarization).
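The greedy MMR re-ranking step can be sketched as follows: at each round, pick the candidate that best trades off relevance against similarity to sentences already selected. The scoring functions are supplied by the caller; the function names and the specific trade-off parameter here are illustrative:

```python
def mmr_select(candidates, relevance, similarity, k, lam=0.7):
    """Greedy Maximal Marginal Relevance: repeatedly pick the sentence
    that balances relevance to the topic (weight lam) against maximum
    similarity to already-selected sentences (weight 1 - lam).
    `relevance(s)` and `similarity(a, b)` are caller-supplied functions."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda s: lam * relevance(s)
            - (1 - lam) * max((similarity(s, t) for t in selected),
                              default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam` near 1 the selection is driven purely by relevance; lowering it penalizes picking a sentence that closely resembles one already in the summary, which is exactly the redundancy problem described above.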
===Submodular functions as generic tools for summarization===