A Survey of Essential Methods in Deep Learning for Big Data
S. Umamageswari, M. Kannan
Section: Survey Paper, Product Type: Journal Paper
Volume-7, Issue-4, Page no. 1169-1180, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7i4.11691180
Online published on Apr 30, 2019
Copyright © S. Umamageswari, M. Kannan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
How to Cite this Paper
IEEE Style Citation: S. Umamageswari, M. Kannan, “A Survey of Essential Methods in Deep Learning for Big Data,” International Journal of Computer Sciences and Engineering, Vol.7, Issue.4, pp.1169-1180, 2019.
MLA Style Citation: S. Umamageswari, M. Kannan. "A Survey of Essential Methods in Deep Learning for Big Data." International Journal of Computer Sciences and Engineering 7.4 (2019): 1169-1180.
APA Style Citation: S. Umamageswari, M. Kannan (2019). A Survey of Essential Methods in Deep Learning for Big Data. International Journal of Computer Sciences and Engineering, 7(4), 1169-1180.
BibTeX Style Citation:
@article{Kannan_2019,
author = {S. Umamageswari and M. Kannan},
title = {A Survey of Essential Methods in Deep Learning for Big Data},
journal = {International Journal of Computer Sciences and Engineering},
issue_date = {April 2019},
volume = {7},
issue = {4},
month = {April},
year = {2019},
issn = {2347-2693},
pages = {1169-1180},
url = {https://www.ijcseonline.org/full_paper_view.php?paper_id=4183},
doi = {https://doi.org/10.26438/ijcse/v7i4.11691180},
publisher = {IJCSE, Indore, INDIA},
}
RIS Style Citation:
TY - JOUR
DO - https://doi.org/10.26438/ijcse/v7i4.11691180
UR - https://www.ijcseonline.org/full_paper_view.php?paper_id=4183
TI - A Survey of Essential Methods in Deep Learning for Big Data
T2 - International Journal of Computer Sciences and Engineering
AU - Umamageswari, S.
AU - Kannan, M.
PY - 2019
DA - 2019/04/30
PB - IJCSE, Indore, INDIA
SP - 1169
EP - 1180
IS - 4
VL - 7
SN - 2347-2693
ER -
Abstract
Big data has become an essential technology as many public and private organizations continuously collect vast amounts of information in domains such as medical informatics, marketing, cyber security, fraud detection, and national intelligence. Deep learning is one of the most remarkable machine learning techniques for discovering abstract patterns in big data, and it has achieved great success in big data applications such as speech recognition, text understanding, and image analysis. In the field of data science, big data analytics and deep learning have become two highly focused research areas. Deep learning algorithms learn multi-level representations and features of data in hierarchical structures through supervised and unsupervised strategies for classification and pattern recognition tasks. In the last decade, deep learning has played a crucial role in providing solutions to big data analytics problems. This paper provides a comprehensive survey of deep learning in big data, comparing conventional deep learning methods and discussing research challenges and countermeasures. It also presents deep learning methods, a comparison of deep learning architectures, and deep learning approaches, and it reviews application-focused deep learning work in big data. Finally, it points out the challenges of deep learning on big data and provides several future directions.
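To make the "supervised and unsupervised strategies" mentioned above concrete, the sketch below illustrates a common two-stage pattern discussed in the deep learning literature: unsupervised pretraining of a feature hierarchy with an autoencoder, followed by supervised fine-tuning of the same encoder for classification. This is an illustrative example only, not code from the surveyed works; it assumes TensorFlow/Keras is installed, and the data, layer sizes, and hyperparameters are placeholders.
```python
import numpy as np
from tensorflow.keras import layers, models

# Placeholder data: 10,000 samples with 784 features and 10 classes (illustrative only).
x = np.random.rand(10000, 784).astype("float32")
y = np.random.randint(0, 10, size=(10000,))

# Stage 1: unsupervised feature learning with an autoencoder.
encoder = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(64, activation="relu"),   # compact hierarchical representation
])
decoder = models.Sequential([
    layers.Dense(256, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=5, batch_size=128, verbose=0)  # reconstruct inputs; no labels used

# Stage 2: supervised fine-tuning, reusing the pretrained encoder as the feature hierarchy.
classifier = models.Sequential([encoder, layers.Dense(10, activation="softmax")])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(x, y, epochs=5, batch_size=128, verbose=0)
```
In practice, the unlabeled set used in stage 1 is typically much larger than the labeled set used in stage 2, which is why this pretrain-then-fine-tune strategy is attractive for big data settings where labels are scarce.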
Key-Words / Index Term
Big Data, Deep Learning, Big Data Analytics, Machine Learning, Deep Learning Architectures, Challenges