Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches
Anita Kumari Singh, M Shashi
Section: Research Paper, Product Type: Journal Paper
Volume-7, Issue-3, Page no. 950-954, Mar-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7i3.950954
Online published on Mar 31, 2019
Copyright © Anita Kumari Singh, M Shashi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: Anita Kumari Singh, M Shashi, “Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches,” International Journal of Computer Sciences and Engineering, vol. 7, no. 3, pp. 950-954, 2019.
MLA Style Citation: Anita Kumari Singh, M Shashi. "Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches." International Journal of Computer Sciences and Engineering 7.3 (2019): 950-954.
APA Style Citation: Anita Kumari Singh, M Shashi (2019). Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches. International Journal of Computer Sciences and Engineering, 7(3), 950-954.
BibTex Style Citation:
@article{Singh_2019,
author = {Anita Kumari Singh and M Shashi},
title = {Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {March 2019},
volume = {7},
number = {3},
month = {March},
year = {2019},
issn = {2347-2693},
pages = {950-954},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=3945},
doi = {10.26438/ijcse/v7i3.950954},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
DO - 10.26438/ijcse/v7i3.950954
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=3945
TI - Deep Learning Architecture for Multi-Document Summarization as a cascade of Abstractive and Extractive Summarization approaches
T2 - International Journal of Computer Sciences and Engineering
AU - Anita Kumari Singh
AU - M Shashi
PY - 2019
DA - 2019/03/31
PB - IJCSE, Indore, INDIA
SP - 950
EP - 954
IS - 3
VL - 7
SN - 2347-2693
ER -
Abstract
Document summarizers automatically create a shorter, compressed version of a text document. Summaries convey the gist of the entire document, covering its key points with improved readability while avoiding redundancy. Abstractive summarization synthesizes new summary statements for a given document but is presently limited to single-document summarization. The proposed model extends the applicability of abstractive summarization to multi-document texts through a new Deep Learning architecture built as a cascade of abstractive and extractive summarization. This hybrid architecture is used to generate compact and comprehensive summaries from multiple news articles published on specific topics. Evaluated on DUC 2004 data, the architecture outperforms traditional multi-document extractive summarization methods in terms of ROUGE scores.
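To make the cascade concrete, here is a minimal sketch of the two-stage idea, with the stage order (abstractive compression per document, then extractive selection across documents) taken from the title. The `abstractive_stage` stub and the TF-IDF centrality ranking below are illustrative stand-ins, not the paper's trained models: in the paper the first stage would be a deep seq2seq abstractive summarizer (e.g., a pointer-generator network [5]) and the second a LexRank-style extractive ranker [6].

```python
# Minimal sketch of the abstractive-then-extractive cascade (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def abstractive_stage(document: str) -> str:
    """Stand-in for a per-document abstractive summarizer.

    In the paper this stage is a trained seq2seq model; here we simply keep
    the first three sentences so the sketch runs end to end.
    """
    return ". ".join(document.split(". ")[:3])


def extractive_stage(sentences, k=5):
    """Rank pooled sentences by TF-IDF cosine centrality and keep the top k."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    centrality = cosine_similarity(tfidf).mean(axis=1)
    top = sorted(range(len(sentences)), key=lambda i: centrality[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # restore original order


def cascade_summarize(documents, k=5):
    # Stage 1 (abstractive): compress each news article independently.
    per_doc = [abstractive_stage(d) for d in documents]
    # Stage 2 (extractive): pool the compressed texts and select the most
    # central sentences, which suppresses cross-document redundancy.
    pool = [s.strip() for summary in per_doc for s in summary.split(". ") if s.strip()]
    return ". ".join(extractive_stage(pool, k))
```

The point the cascade exploits is that the abstractive stage shortens and normalizes each article before the extractive stage compares them, so redundancy across articles is removed over already-condensed text.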
Key-Words / Index Term
Document Summarization, Abstractive, Extractive, ROUGE
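Since ROUGE is the metric the evaluation reports, here is a toy sketch of ROUGE-1 recall under the assumption of plain whitespace tokenization; the official package [10] applies stemming and other preprocessing that this version omits.

```python
# Toy ROUGE-1 recall: fraction of reference unigrams recovered by the candidate.
from collections import Counter


def rouge_1_recall(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)  # clipped unigram matches
    return overlap / max(sum(ref.values()), 1)


# 5 of the 6 reference unigrams are matched -> recall = 5/6 ~ 0.83
print(rouge_1_recall("the cat sat on a mat", "the cat sat on the mat"))
```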
References
[1]. Mani, Inderjeet. Advances in automatic text summarization. MIT Press, 1999.
[2]. Hahn, Udo, and Inderjeet Mani. "The challenges of automatic summarization." Computer 33.11 (2000): 29-36.
[3]. Lopyrev, Konstantin. "Generating news headlines with recurrent neural networks." arXiv preprint arXiv:1512.01712 (2015).
[4]. Nallapati, Ramesh, et al. "Abstractive text summarization using sequence-to-sequence RNNs and beyond." arXiv preprint arXiv:1602.06023 (2016).
[5]. See, Abigail, Peter J. Liu, and Christopher D. Manning. "Get to the point: Summarization with pointer-generator networks." arXiv preprint arXiv:1704.04368 (2017).
[6]. Erkan, Günes, and Dragomir R. Radev. "LexRank: Graph-based lexical centrality as salience in text summarization." Journal of Artificial Intelligence Research 22 (2004): 457-479.
[7]. Erkan, Günes, and Dragomir R. Radev. "LexPageRank: Prestige in multi-document text summarization." Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing. 2004.
[8]. Sullivan, Danny (2007-04-26). "What Is Google PageRank? A Guide for Searchers & Webmasters". Search Engine Land. Archived from the original on 2016-07-03.
[9]. Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.
[10]. Lin, Chin-Yew. "ROUGE: A package for automatic evaluation of summaries." Text Summarization Branches Out (2004).
[11]. Hermann, Karl Moritz, et al. "Teaching machines to read and comprehend." Advances in Neural Information Processing Systems. 2015.
[12]. Nenkova, Ani. "Automatic text summarization of newswire: Lessons learned from the document understanding conference." (2005).
[13]. Carbonell, Jaime G., and Jade Goldstein. "The Use of MMR and Diversity-Based Reranking for Reordering Documents and Producing Summaries." (1998).