A Survey on Underwater Fish Species Detection and Classification
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.95-98, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.9598
Abstract
Fish species recognition is a challenging research task. The special properties of underwater videos and images pose great challenges for recognition. Due to the great demand for underwater object recognition, many machine learning and image processing algorithms have been proposed. Deep learning has achieved significant results and a huge improvement in visual detection and recognition. This paper reviews techniques proposed in recent years for automatic fish species detection and classification.
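The classical (pre-deep-learning) pipeline covered by several of the surveyed works extracts hand-crafted shape and texture features from each fish image and assigns the class whose feature statistics are closest. A minimal nearest-centroid sketch in Python; the feature choices, feature values and species labels are invented purely for illustration:

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(sample, centroids):
    """Return the label whose class centroid is closest (Euclidean)."""
    return min(centroids, key=lambda lbl: math.dist(sample, centroids[lbl]))

# Hypothetical (aspect_ratio, mean_intensity) features per species.
training = {
    "species_A": [(2.1, 0.40), (2.3, 0.38), (2.0, 0.42)],
    "species_B": [(1.2, 0.70), (1.1, 0.68), (1.3, 0.72)],
}
centroids = {lbl: centroid(vs) for lbl, vs in training.items()}

print(nearest_centroid((2.2, 0.41), centroids))  # closest to species_A
```

Real systems replace the toy features with contour, colour and texture descriptors, and the centroid rule with a trained classifier such as an SVM or a back-propagation network.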
Key-Words / Index Term
Fish Recognition; Fish Classification; Feature Extraction; Image Processing; Neural Network; Deep Learning
References
[1] W. Rouse, "Population dynamics of barnacles in the intertidal zone", Marine Biology Research Experiment, 2007.
[2] P. Brehmer, T. Do Chi, and D. Mouillot, "Amphidromous fish school migration revealed by combining fixed sonar monitoring (horizontal beaming) with fishing data", Journal of Experimental Marine Biology and Ecology, 334:139-150, 2006.
[3] D.-J. Lee, R.B. Schoenberger, D. Shiozawa, X. Xu, P. Zhan, "Contour matching for a fish recognition and migration-monitoring system", in: Optics East, International Society for Optics and Photonics, 2004, pp. 37-48.
[4] C. Pornpanomchai, B. Lurstwut, P. Leerasakultham, W. Kitiyanan, "Shape and texture based fish image recognition system", Kasetsart J. (Nat. Sci.) 47:624-634, 2013.
[5] C. Spampinato, D. Giordano, R. Di Salvo, Y.-H.J. Chen-Burger, R.B. Fisher, G. Nadarajan, "Automatic fish classification for underwater species behavior understanding", in: Proceedings of the First ACM International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams, ACM, Firenze, Italy, 2010, pp. 45-50.
[6] K.A. Mutasem, B.O. Khairuddin, N. Shahrulazman and A. Ibrahim, "Fish Recognition Based on Robust Features Extraction from Color Texture Measurements Using Back Propagation Classifier".
[7] N.J.C. Strachan, "Recognition of fish species by colour and shape", Image Vis. Comput. 11(1):2-10, 1993.
[8] P.X. Huang, B.J. Boom, R.B. Fisher, "Underwater live fish recognition using a balance-guaranteed optimized tree", in: Computer Vision - ACCV 2012, Springer, Daejeon, Korea, 2013, pp. 422-433.
[9] P.X. Huang, B.J. Boom, R.B. Fisher, "GMM improves the reject option in hierarchical classification for fish recognition", in: 2014 IEEE Winter Conference on Applications of Computer Vision (WACV), IEEE, Steamboat Springs, CO, USA, 2014, pp. 371-376.
[10] Phoenix X. Huang, Xuan Huang, Bastiaan J. Boom, "Hierarchical classification with reject option for live fish recognition".
[11] C. Rother, V. Kolmogorov, and A. Blake, "GrabCut: interactive foreground extraction using iterated graph cuts", ACM Transactions on Graphics (SIGGRAPH), August 2004.
[12] P.M. Atkinson and A.R.L. Tatnall, "Introduction: neural networks in remote sensing", International Journal of Remote Sensing, vol. 18, no. 4, pp. 699-709, 1997.
[13] Sébastien Villon, Marc Chaumont, Gérard Subsol, Sébastien Villéger, Thomas Claverie and David Mouillot, "Coral reef fish detection and recognition in underwater videos by supervised machine learning: comparison between Deep Learning and HOG+SVM methods".
[14] Nour Eldeen M. Khalifa, Mohamed Hamed N. Taha and Aboul Ella Hassanien, "Aquarium Family Fish Species Identification System Using Deep Neural Networks".
[15] Jonas Jager, Erik Rodner, Joachim Denzler, Viviane Wolff and Klaus Fricke-Neuderth, "SeaCLEF 2016: Object proposal classification for fish detection in underwater videos".
[16] Chris Stauffer and W. Eric L. Grimson, "Adaptive background mixture models for real-time tracking", in 1999 Conference on Computer Vision and Pattern Recognition (CVPR '99), 23-25 June 1999, Ft. Collins, CO, USA, pp. 2246-2252, 1999.
[17] Satoshi Suzuki and Keiichi Abe, "Topological structural analysis of digitized binary images by border following", Computer Vision, Graphics, and Image Processing, 30(1):32-46, 1985.
[18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet classification with deep convolutional neural networks", in F. Pereira, C.J.C. Burges, L. Bottou and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pp. 1097-1105, Curran Associates, Inc., 2012.
[19] Dhruv Rathi, Sushant Jain and Dr. S. Indu, "Underwater Fish Species Classification using Convolutional Neural Network and Deep Learning".
Citation
R. Fathima Syreen, K. Merriliance, "A Survey on Underwater Fish Species Detection and Classification", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.95-98, 2019.
A Detailed Survey of Text Line Segmentation Methods in Handwritten Historical Documents and Palm Leaf Manuscripts
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.99-103, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.99103
Abstract
The first revolution in document analysis converted handwritten palm leaf manuscripts, historical documents and epigraphs into digital form. Automatic recognition of those digitized historical documents is the second revolution in document research. Many algorithms have been proposed by researchers across the world. To achieve automatic recognition, text line segmentation is a foremost process. Although automatic segmentation of text lines is an active research topic, many technical issues remain unsolved. The present survey covers newly proposed and modified methods of text line segmentation in palm leaf manuscripts and handwritten historical documents published during the period 2008-2018. It should benefit researchers working in handwriting recognition.
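Many of the surveyed segmentation methods build on the horizontal projection profile: rows containing ink give large foreground counts, while inter-line gaps give near-zero valleys. A minimal sketch on a toy binary image (1 = ink), assuming a clean, skew-free page; real manuscripts need the skew handling, touching-line splitting and adaptive thresholds that the surveyed papers address:

```python
def segment_lines(binary_image, threshold=0):
    """Return (start_row, end_row) spans where the horizontal
    projection profile exceeds `threshold` foreground pixels."""
    profile = [sum(row) for row in binary_image]  # ink count per row
    spans, start = [], None
    for i, count in enumerate(profile):
        if count > threshold and start is None:
            start = i                      # entering a text line
        elif count <= threshold and start is not None:
            spans.append((start, i - 1))   # leaving a text line
            start = None
    if start is not None:                  # line touching the bottom edge
        spans.append((start, len(profile) - 1))
    return spans

# Toy 8-row "page": two text lines separated by blank rows.
page = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(segment_lines(page))  # [(1, 2), (5, 6)]
```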
Key-Words / Index Term
pre-processing, path finding approach, performance measure
References
[1] Made Windu Antara Kesiman, Dona Valy, "Benchmarking of Document Image Analysis Tasks for Palm Leaf Manuscripts from Southeast Asia", J. Imaging, 4, 43; doi:10.3390/jimaging4020043, 2018.
[2] Himanshu Jain, Archana Praveen Kumar, "A Bottom Up Procedure for Text Line Segmentation of Latin Script", IEEE, pp. 1182-1187, 2017.
[3] Payal Jindal, Dr. Balkrishnan Jindal, "Line and Word Segmentation of Handwritten Text Documents written in Gurmukhi Script using Mid Point Detection Technique", IEEE, Proceedings of 2015 RACES, UIET, Panjab University, Chandigarh, 21-22 December 2015.
[4] Dona Valy, Michel Verleysen, Kimheng Sek, "Line Segmentation for Grayscale Text Images of Khmer Palm Leaf Manuscripts", IEEE, 2017.
[5] Rapeeporn Chamchong, Chun Che Fung, "Text Line Extraction Using Adaptive Partial Projection for Palm Leaf Manuscripts from Thailand", 2012 International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 586-591, 2012.
[6] Youbao Tang, Xiangqian Wu, Wei Bu, "Text Line Segmentation Based on Matched Filtering and Top-down Grouping for Handwritten Documents", 2014 11th IAPR International Workshop on Document Analysis Systems, IEEE, pp. 365-369, 2014.
[7] Mullick, Banerjee, Bhattacharya, "An Efficient Segmentation Approach for Handwritten Bangla Document Image", IEEE, 2015.
[8] Guillaume Renton, Clement Chatelain, "Handwritten text line segmentation using Fully Convolutional Network", 2017 14th International Conference on Document Analysis and Recognition, IEEE, pp. 5-9, 2017.
[9] David Aldavert, Marcal Rusinol, "Manuscript Text Line Detection and Segmentation using Second-Order Derivatives", 2018 13th IAPR International Workshop on Document Analysis Systems, IEEE, pp. 293-298, 2018.
[10] https://towardsdatascience.com/model-evaluation-i-precision-and-recall-166ddb257c7b
[11] Vishal Chavan, Kapil Mehrotra, "Text Line Segmentation of Multilingual Handwritten Documents Using Fourier Approximation", 2017 Fourth International Conference on Image Information Processing (ICIIP), IEEE, pp. 250-255, 2017.
[12] Ayush Padhan, Sidharth Behra, Paushpalata Pujari, "Comparative Study on Recent Text Line Segmentation Methods of Unconstrained Handwritten Scripts", International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS-2017), IEEE, pp. 3853-3858, 2017.
[13] Themos Stafylakis, Vasilis Papavassiliou, "Robust Text-Line and Word Segmentation for Handwritten Document Images", IEEE, pp. 3393-3396, 2008.
[14] Sanchez, Suarez, Mello, Oliveira, Alves, "Text Line Segmentation in Images of Handwritten Historical Documents", Image Processing Theory, Tools and Applications, IEEE, 2008.
[15] Rodolfo P. dos Santos, Gabriela S. Clemente, Tsang Ing Ren, "Text Line Segmentation Based on Morphology and Histogram Projection", 2009 10th International Conference on Document Analysis and Recognition, IEEE, pp. 651-655, 2009.
[16] Rajiv Kumar, Amardeep Singh, "Detection and Segmentation of Lines and Words in Gurmukhi Handwritten Text", 2010 IEEE 2nd International Advance Computing Conference, pp. 353-356, 2010.
[17] Naresh Kumar Garg, Lakhwinder Kaur, Jindal, "A new method for Line Segmentation of Handwritten Hindi Text", 2010 Seventh International Conference on Information Technology, IEEE, pp. 392-397, 2010.
[18] Jija Das Gupta, Bhabatosh Chanda, "A Model-based Text Line Segmentation Method for Off-line Handwritten Documents", 2010 12th International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 125-129, 2010.
[19] Vijaya Kumar Koppula, Atul Negi, "Fringe Map based Text Line Segmentation of Printed Telugu Document Images", 2011 International Conference on Document Analysis and Recognition, IEEE, pp. 1294-1298, 2011.
[20] Rajib Ghosh, Debnath Bhattacharyya, Tai-hoon Kim, Gang-soo Lee, "New Algorithm for Skewing Detection of Handwritten Bangla Words", Springer-Verlag Berlin Heidelberg, pp. 153-159, 2011.
[21] Ines Ben Messaoud, Hamid Amiri, "A Multilevel Text Line Segmentation Framework for Handwritten Historical Documents", 2012 International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 515-520, 2012.
[22] Hande Adiguzel, Emre Sahin, Pinar Duygulu, "A Hybrid Approach for Line Segmentation in Handwritten Documents", 2012 International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 503-508, 2012.
[23] Xi Zhang, Chew Lim Tan, "Text Line Segmentation for Handwritten Documents Using Constrained Seam Carving", 2014 14th International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 98-103, 2014.
[24] Rapeeporn Chamchong, Chun Che Fung, "A Combined Method of Segmentation for Connected Handwritten on Palm Leaf Manuscripts", 2014 IEEE International Conference on Systems, Man and Cybernetics, pp. 4158-4161, 2014.
[25] Dona Valy, Michel Verleysen, Kimheng Sek, "Line Segmentation Approach for Ancient Palm Leaf Manuscripts using Competitive Algorithm", 2016 15th International Conference on Frontiers in Handwriting Recognition, IEEE, pp. 108-113, 2016.
[26] Banumathi, Jagadeesh Chandra, "Line and Word Segmentation of Kannada Handwritten Text Documents using Projection Profile Technique", 2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT), IEEE, pp. 196-201, 2016.
[27] Quang Nhat Vo, GueeSang Lee, "Dense Prediction for Text Line Segmentation in Handwritten Document Images", ICIP 2016, IEEE, pp. 3264-3268, 2016.
[28] Kathirvalavakumar Thangairulappan, Karthigaiselvi Mohan, "Efficient Segmentation of Printed Tamil Script into Characters Using Projection and Structure", 2017 Fourth International Conference on Image Information Processing (ICIIP), IEEE, pp. 484-489, 2017.
Citation
R. Spurgen Ratheash, M. Mohamed Sathik, "A Detailed Survey of Text Line Segmentation Methods in Handwritten Historical Documents and Palm Leaf Manuscripts", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.99-103, 2019.
Big Data Computing and Analytics Business Value in - E-Commerce
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.104-107, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.104107
Abstract
Big data is continuously creating new challenges and opportunities, all of which have been forged by the information revolution. The e-commerce industry is already using large data sets to introduce a new level of strategic marketing and provide better customer service experiences. Data growth has undergone a renaissance, driven by ever cheaper computing power and the internet. This marks a paradigm shift in the e-commerce sector: data is no longer seen as a by-product of business activities, but as a company's biggest asset, revealing the needs of customers, predicting trends in customer behavior, democratizing advertisement to suit consumers' varied tastes, and providing a performance metric to assess effectiveness in meeting customers' needs. This paper describes the unique features that differentiate big data from traditional datasets, and discusses the application of big data analytics in e-commerce along with the technologies that make analytics of consumer data possible. As technology usage improves, the methods to measure and collect data also increase. One way to understand our world is to study trends in behavior.
Key-Words / Index Term
Big Data Analytics, Computing
References
[1] http://www.harapartners.com/blog/big-data-affect-our-daily-e-commerce-experience
[2] Yaser Ahangari Nanehkaran, "An Introduction to Electronic Commerce", International Journal of Scientific & Technology Research, Volume 2, Issue 4, April 2013, pp. 190-193.
[3] http://ecommerce.about.com/od/eCommerce-Basics/tp/Advantages-Of-Ecommerce.html
[4] http://www.tutorialspoint.com/e_commerce/e_commerce_advantages.htm
[5] Assuncao, M.D. et al., "Big Data Computing and Clouds: Challenges, Solutions, and Future Directions", arXiv, 1(1), pp. 1-39, 2013.
[6] Chalmers, S., Bothorel, C. & Clemente, R., "Big Data - State of the Art", Thesis, Brest: Telecom Bretagne, Institut Mines-Telecom, 2013.
Citation
M. Gandhimathi, "Big Data Computing and Analytics Business Value in - E-Commerce", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.104-107, 2019.
Future Generation Education on Augmented Reality
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.108-113, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.108113
Abstract
Educators say that the learning process should be all about creativity and interaction. While teachers do not necessarily need to recruit every student into a subject, their goal is to get them interested in it. Augmented Reality (AR) technology has the potential to blend real and virtual environments to create an enhanced learning experience. Moreover, compared to paper-based solutions and conventional mobile applications, AR book systems with scaffolding models have been shown to help students learn, particularly those with low reading ability. The objective of this paper is to highlight the applications of AR in education and the opportunities for research on them.
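Marker-based AR, one of the approaches this survey covers, encodes an identifier in a black-and-white grid surrounded by a solid border that the tracker can locate and rectify. A minimal decoding sketch on an already-rectified binary grid (1 = black); the 4x4 layout and row-major bit ordering are chosen purely for illustration and do not match any particular fiducial system:

```python
def decode_marker(grid):
    """Check that the outer border is all black, then read the
    interior cells row by row as big-endian bits of the marker ID.
    Returns the ID, or None if the border check fails."""
    n = len(grid)
    for i in range(n):
        if not (grid[0][i] and grid[n-1][i] and grid[i][0] and grid[i][n-1]):
            return None  # broken border: not a valid marker
    marker_id = 0
    for row in grid[1:n-1]:
        for cell in row[1:n-1]:
            marker_id = (marker_id << 1) | cell
    return marker_id

# 4x4 marker: solid border, interior bits 1,0,0,1 -> ID 0b1001 = 9.
marker = [
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
]
print(decode_marker(marker))  # 9
```

Production systems such as ARToolKit-style trackers add perspective rectification, error-correcting codes and rotation-invariant IDs on top of this basic read-out.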
Key-Words / Index Term
Augmented Reality, Education, Learning, Marker based
References
[1] A. Comport, E. Marchand and F. Chaumette, "A real-time tracker for markerless augmented reality", In ISMAR '03, pp. 36-45, 2003.
[2] Cheok et al., "Human Pacman: A mobile entertainment system with ubiquitous computing and tangible interaction over a wide outdoor area", in Proc. Mobile HCI, pp. 209-224, 2003.
[3] Alakärppä, I. et al., “Using nature elements in mobile AR for education with children”, Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI ’17, pp. 1–13, 2017.
[4] Babak A. Parviz, "Augmented reality in a contact lens" (http://spectrum.ieee.org/biomedical/bionics/augmented-reality-in-a-contact-lens/0), IEEE Spectrum, September 2009.
[5] Blum T, Kleeberger V, Bichlmeier C, Navab N. “Mirracle: An Augmented Reality Magic Mirror System for Anatomy Education”, In Virtual Reality (VR) IEEE, Costa Mesa, California: IEEE, pp. 115–116, 2012.
[6] Olson, “A Robust and Flexible Visual Fiducial System”, Proceedings of the 2011 International Conference on Robotics and Automation (ICRA2011), pp. 3400-3407, 2011.
[7] G. Reitmayr and D. Schmalstieg, "Data management strategies for mobile augmented reality", in Proc. Int. Workshop Softw. Technol. Augmented Reality Syst., pp. 47-52, 2003.
[8] G. Reitmayr and T. Drummond, "Going out: robust model-based tracking for outdoor augmented reality", In ISMAR '06, pp. 109-118, 2006.
[9] H. A. Karimi and A. Hammad, Eds, “Telegeoinformatics: Location-Based Computing and Services”, Boca Raton, FL, USA: CRC Press, 2004.
[10] H. Kato and M. Billinghurst, “Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System”, Proceedings of the 2nd International Workshop on Augmented Reality (IWAR 99), pp.85-94, 1999.
[11] J. Gutiérrez, “Proposal of Methodology for Learning of Standard Mechanical Elements Using Augmented Real-ity”, ASEE/IEEE Frontiers in Education Conference, Rapid City, pp. 1-6, 2011.
[12] J. Loomis, R. Golledge, and R. Klatzky, "Personal guidance system for the visually impaired using GPS, GIS, and VR technologies", In Proc. Conf. on Virtual Reality and Persons with Disabilities, Millbrae, CA, USA, 1993.
[13] J.P. Rolland, R.L. Holloway, H. Fuchs, "Comparison of optical and video see-through, head-mounted displays", Proc. SPIE, vol. 2351, pp. 293-307, 2004.
[14] J. Schöning, M. Rohs, and S. Kratz, "Map torchlight: A mobile augmented reality camera projector unit", in Proc. CHI, pp. 3841-3846, 2009.
[15] J. Zhou, I. Lee, B. H. Thomas, A. Sansome, and R. Menassa, “Facilitating collaboration with laser projector-based spatial augmented reality in industrial applications”, in Recent Trends of Mobile Collaborative Augmented Reality Systems, 1st ed. London, U.K.: Springer,pp. 161 173, 2011.
[16] Krajcik, J.S. & Mun, K., “Promises and challenges of using learning technology to promote student learning of science”. In N. Lederman and S. Abell (eds.). Handbook of Research in Science Education, Vol II, pp. 337-360, 2014.
[17] L. Naimark and E. Foxlin, “Circular data matrix fiducial system and robust image processing for a wearable vision-inertial self-tracker”, In ISMAR ‘02, pp. 27-36, 2002.
[18] L. Vacchetti, V. Lepetit and P. Fua, “Combining edge and texture information for real-time accurate 3d camera tracking”, In ISMAR ‘04, pp. 48-57, 2004 .
[19] Lu, S. J. and Liu, Y. C, “Integrating augmented reality technology to enhance children’s learning in marine education”, Environmental Education Research, 21(4), pp. 525–541, 2015.
[20] M. Mohring, C. Lessig, and O. Bimber, "Video see-through AR on consumer cell-phones", in Proc. ISMAR, pp. 252-253, 2004.
[21] P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays", IEICE Trans. on Information and Systems, E77-D(12), pp. 1321-1329, 1994.
[22] P. Mistry, P. Maes, and L. Chang, "WUW - Wear Ur World: A wearable gestural interface", in Proc. Human Factors Comput. Syst., pp. 4111-4116, 2009.
[23] P. Renevier and L. Nigay, "Mobile collaborative augmented reality: The augmented stroll", in Proc. 8th IFIP Int. Conf. Eng. Human-Comput. Interaction (EHCI), London, U.K., pp. 299-316, 2001.
[24] Păduraru BM, Iftene A, “Tower Defense with Augmented Reality”, In Proceedings of the 14th Conference on Human Computer Interaction - RoCHI 2017, Craiova, Romania; 2017. pp: 113-118, 2017.
[25] Project3D.. “How to Project on 3D Geometry”. [Online]. Available: http://vvvv.org/documentation/how-to-project-on-3d-geometry, 2011.
[26] Q. Bonnard, S. Lemaignan, G. Zufferey, A. Mazzei, S. Cuendet, N. Li, and P. Dillenbourg, “Chilitags2: Robust Fiducial Markers for Augmented Reality and Robotics”, CHILI, EPFL, Switzerland, http://chili.epfl.ch/software , 2013.
[27] R.T. Azuma, Y. Baillot, R. Behringer, S.K. Feiner, S. Julier, and B. MacIntyre, "Recent advances in augmented reality", IEEE Computer Graphics and Applications, Volume 21, Issue 6, pp. 34-47, 2001.
[28] R.T. Azuma, "A survey of augmented reality", Presence: Teleoperators and Virtual Environments, Volume 6, Issue 4, pp. 355-385, 1997.
[29] R.Wen,W.-L. Tay, B. P. Nguyen, C.-B. Chng, and C.-K. Chui, “Hand gesture guided robot-assisted surgery based on a direct augmented reality interface”, Computer Methods and Programs in Biomedicine, 2014.
[30] Radu, I, “Augmented reality in education: a meta-review and cross-media analysis”, Personal and Ubiquitous Computing, 18(6), 1533–1543, 2014.
[31] Restivo, T., Chouzal, F., Rodrigues, J., Menezes, P., & Lopes, J.B., "Augmented reality to improve STEM motivation", Proceedings of Global Engineering Education Conference (EDUCON), 2014 IEEE, pp. 803-806, 2014.
[32] S. Benford, C. Greenhalgh, G. Reynard, C. Brown, and B. Koleva, “Understanding and constructing shared spaces with mixed-reality boundaries”, ACM Trans. on Computer-Human Interaction, volume 5, issue 3, pp:185- 223, 1998.
[33] S. Vogt, A. Khamene, F. Sauer and H. Niemann, “Single camera tracking of marker clusters: Multiparameter cluster optimization and experimental verification”, In ISMAR ‘02, pp. 127-136, 2002.
[34] Siltanen, S, “Theory and applications of marker based augmented reality”, Espoo, VTT Science 3, ISBN 978-951-38-7449-0, 2012.
[35] Squire, K. D. and Jan, M., “Mad city mystery: Developing scientific argumentation skills with a place-based augmented reality game on handheld computers”, Journal of Science Education and Technology, 16(1), pp. 5–29, 2007.
[36] T. Blum, R. Stauder, E. Euler, and N. Navab, “Superman-like x-ray vision: Towards brain-computer interfaces for medical augmented reality”, in Mixed and Augmented Reality (ISMAR), IEEE International Symposium on. IEEE, pp. 271–272, 2012.
[37] T.P. Caudell and D.W. Mizell, "Augmented reality: An application of heads-up display technology to manual manufacturing processes", In Proc. Hawaii Int'l Conf. on Systems Sciences, Kauai, HI, USA, IEEE CS Press, pp. 659-669, 1992.
[38] U. Neumann and S. Y. You, “Natural Feature Tracking for Augmented Reality”, IEEE Transactions on Multimedia 1, 1 (1999), pp:53–64, 1999.
[39] Wu, H., Lee, S.W., Chang, H. & Liang, J, “Current status, opportunities and challenges of augmented reality in education”, Computers & Education, 62, pp:41-49, 2013.
[40] Y. Cho, J. Lee and U. Neumann, "A multi-ring fiducial system and an intensity-invariant detection method for scalable augmented reality", In IWAR '98, pp. 147-156, 1998.
[41] Z. Mohana, I. Musae, M. A. Ramachandran and A. Habibi, “Ubiquitous Medical Learning Using Augmented Reality Based on Cognitive Information Theory”, Advances in Computer Science, Engineering & Applications, Vol. 167, pp. 305- 312, 2012.
[42] Akçayır, M. and Akçayır, G, “Advantages and challenges associated with augmented reality for education: A systematic review of the literature”, Educational Research Review, 20, pp. 1–11, 2017.
Citation
K. Ganesh Kumar, "Future Generation Education on Augmented Reality", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.108-113, 2019.
Application of Nanotechnology in Civil Infrastructure
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.114-119, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.114119
Abstract
In this paper, the use of nanotechnology in building materials for a variety of applications is discussed. Strength, durability and other properties of materials are significantly affected at the nanometre scale (10^-9 m). The paper also describes how the use of nanotechnology makes concrete stronger, more durable and more easily placed. The different kinds of nanomaterials in use are discussed along with their wide applications. Properties such as self-sensing, self-healing and structural health monitoring are studied. The survey then covers research on flexible structural composites with improved properties, low-maintenance coatings, improved properties of construction materials, reduction of the thermal transfer rate for fire retardation and insulation, various nanosensors, smart materials, and intelligent construction technology.
Key-Words / Index Term
Civil Infrastructure Application, Nanotechnology properties
References
[1] Mann, S. (2006). “Nanotechnology and Construction,” Nanoforum Report. www.nanoforum.org, May 30, 2008.
[2] Balaguru, P. N., “Nanotechnology and Concrete: Background, Opportunities and Challenges.” Proceedings of the International Conference – Application of Technology in Concrete Design, Scotland, UK, p.113-122, 2005.
[3] ARI News (2005). “Nanotechnology in Construction – One of the Top Ten Answers to World’s Biggest Problems.” www.aggregateresearch.com/article.asp?id=6279, June 1, 2007.
[4] Goddard III, W.A., Brenner, D.W., Lyshevski, S.E. and Iafrate, G.J. “Properties of High-Volume Fly Ash Concrete Incorporating Nano-SiO2.” Cement and Concrete Research, vol.34, p.1043-1049, 2004.
[5] Bigley C. and Greenwood P., "Using Silica to Control Bleed and Segregation in Self-Compacting Concrete", Concrete, vol. 37, no. 2, pp. 43-45, 2003.
[6] Tong, Z., Bischoff, M. and Nies, L. “Impact of Fullerene (C60) on a soil microbial community”. B. Environ. Sci. Technol. 2007, 41, 2985-2991, 2007.
[7] MMFX Steel Corporation of America, http://www.mmfxsteel.com/.
[8] NanoPore Incorporated, http://www.nanopore.com.
[9] Pilkington, http://www.activglass.com/.
[10] St. Gobain, http://www.saint-gobain.com/.
[11]BASF, http://www.basf.de.
[12] Castano, V.M. and Rodriguez, R., "A nanotechnology approach to high performance anti-graffiti coatings", London, Oct. 2003.
[13] Baughman, R. H., Zakhidov, A. A., and de Heer W. (2002). “Carbon nanotubes— The route toward applications.” Science, 297(5582), 787–792.
[14] BCC Research. (2008). “Nanotechnology reports and reviews.” “http://www.bccresearch.com/nanotechnology/” (Mar. 5, 2008).
[15] Beatty, C. (2006). “Nanomodification of asphalt to lower construction temperatures.” NSF Workshop on Nanotechnology, Material Science and Engineering, National Science Foundation, Washington, DC.
[16] ASCE. (2005). “Report card for America’s infrastructure. American society of civil engineers” “http://www.asce.org”(Mar. 8, 2008).
[17] Baer, D. R., Burrows, P. E., and El-Azab, A. A. (2003). “Enhancing coating functionality using nanoscience and nanotechnology.” Prog. Org. Coat., 47(3–4), 342–356.
[18] Bartos, P. J. M. (2006). “NANOCONEX Roadmap-novel materials.” Centre for Nanomaterials Applications in Construction, Bilbao, Spain “http://www.mmsconferencing.com/nanoc/” (Jan. 13, 2008).
[19] Shah, S. P., and A. E. Naaman. “Mechanical Properties of Glass and Steel Fiber Reinforced Mortar.” ACI Journal 73, no. 1 (Jan 1976): 50-53.
[20] Saafi, M. and Romine, P. (2005).”Nano- and Microtechnology.” Concrete International, Vol. 27 No. 12, p 28-34.
[21] Sandvik Nanoflex Materials Technology. http://www.smt.sandvik.com/nanoflex, May 30, 2008.
[22] Sobolev, K. and Gutierrez, M. F. (2005). “How Nanotechnology can Change the Concrete World,” American Ceramic Society Bulletin, vol. 84, no. 10, p. 14-16.
[23] Song, G., Gu, H. and Mo, Y. (2008). "Smart Aggregates: Multi-Functional Sensors for Concrete Structures - a Tutorial and a Review", Smart Mater. Struct., vol. 17.
[24] Zhu, W., Bartos, P.J.M., Gibbs, J.: Application of Nanotechnology in Construction. State of the Art report,Technical Report, Project “NANOCONEX”, 49 p. (March 2004).
[25] Zhu, W., Bartos, P.J.M., Porro, A. (eds.): Application of Nanotechnology in Construction. Mater. Struct. 37, 649–659 (2004).
[26] Lau, Kin-Tak, and David Hui. “The revolutionary creation of new advanced materials—carbon nanotube composites.” Composites: Part B 33, no. 4 (2002): 263-277.
[27] Li, Hui, Hui-gang Xiao, Jie Yuan, and Jinping Ou. “Microstructure of cement mortar with nanoparticles.” Composites Part B: Engineering 35, no. 2 (March 2004): 185-189.
[28] Li, Hui, Mao-hua Zhang, and Jin-ping Ou. “Abrasion resistance on concrete containing nanoparticles for pavement.” Wear 260, no. 11-12 (2006): 1262-1266.
[29] Saafi and Romine, 2005; Song and Mo, 2008
[30] PCI, TR-6-03, "Interim Guidelines for the Use of Self-Consolidating Concrete in Precast/Prestressed Concrete Institute Member Plants", Chicago: Precast/Prestressed Concrete Institute, 2003.
Citation
P. Krishnaveni, "Application of Nanotechnology in Civil Infrastructure", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.114-119, 2019.
An In-Depth Study of Mobile Computing Devices Applications and Challenges
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.120-124, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.120124
Abstract
Mobile computing is a computing arrangement in which a computer and all necessary accessories such as files and software are taken out to the field. It is a system of computing in which a computing device can be used even while the user is mobile and therefore changing location. Portability is one of the important aspects of mobile computing. Mobile phones are being used to gather scientific data from remote and isolated places that could not be reached by other means. Experts are making use of mobile devices and web-based applications to scientifically explore interesting aspects of their surroundings, ranging from climate change and environmental pollution to earthquake monitoring. This mobile revolution empowers new ideas and innovations to spread out more speedily and efficiently. Here we discuss in brief the mobile computing technology, its evolution, challenges and applications.
Key-Words / Index Term
Mobile Computing, Mobile Sensing, Applications
References
[1] R. Les Cottrell, "The internet, mobile computing and mobile phones in developing countries", published in "m-Science: sensing, computing and dissemination", Editors: E. Canessa and M. Zennaro, ICTP Science Dissemination Unit, 2010.
[2] "3GPP Long Term Evolution": http://en.wikipedia.org/wiki/3GPP_Long_Term_Evolution
[3] "First cell phone a true brick", Associated Press, see: http://www.msnbc.msn.com/id/7432915
[4] http://en.wikipedia.org/wiki/MVEDR
[5] Dashboard Centerpiece aka Dashtop Mobile: http://driveonpay.com/DashCentepiece.html
Citation
A. Rajeswari, "An In-Depth Study of Mobile Computing Devices Applications and Challenges", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.120-124, 2019.
Automated Detection of High Exudates and Cotton Wool Spots in Diabetic Retinopathy
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.125-128, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.125128
Abstract
Diabetic retinopathy is damage caused to the retina of the eye due to diabetes. A large number of people suffer from diabetic retinopathy, which leads to blurring of vision or even blindness if it is not treated at an early stage. Hence it is important to detect diabetic retinopathy early and provide treatment; otherwise it may lead to vision damage. This paper presents the early detection of high exudates and cotton wool spots in diabetic retinopathy through colour transformation and image enhancement using the FFT.
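Frequency-domain enhancement of the kind mentioned above filters the image's spectrum and transforms back. A minimal 1-D sketch using a self-contained recursive FFT (a real system would apply a 2-D FFT to the fundus image); the high-pass cutoff and toy signal are illustrative assumptions only:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (len(x) must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(X):
    """Inverse FFT via the conjugate trick."""
    n = len(X)
    y = fft([v.conjugate() for v in X])
    return [v.conjugate() / n for v in y]

def highpass(signal, cutoff=1):
    """Zero out the `cutoff` lowest frequency bins (DC included),
    then transform back; what survives is edge/detail content."""
    n = len(signal)
    spectrum = fft([complex(v) for v in signal])
    filtered = [0j if min(k, n - k) < cutoff else v
                for k, v in enumerate(spectrum)]
    return [v.real for v in ifft(filtered)]

# A step edge: removing only the DC bin leaves the signal minus its mean.
signal = [0, 0, 0, 0, 1, 1, 1, 1]
print([round(v, 3) for v in highpass(signal, 1)])
# [-0.5, -0.5, -0.5, -0.5, 0.5, 0.5, 0.5, 0.5]
```

Lesion detectors typically combine such frequency-domain enhancement with colour-space transformation and thresholding to make bright lesions stand out from the background.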
Key-Words / Index Term
High Exudates, Cotton Wool Spots, FFT
References
[1]. http://www.isi.uu.nl/Research/Databases/DRIVE
[2] Zahira Asifa Tarannum, B. Srilatha, "Detection of Diabetic Retinopathy with Feature Extraction using Image Processing", IJEECS.
[3] R. Manjula Sri, M. Raghupathy Reddy and K.M.M. Rao, "Image Processing for Identifying Different Stages of Diabetic Retinopathy", ACEEE Int. J. of Recent Trends in Engineering & Technology, Vol. 11, June 2014.
[4] Dr. Prasannakumar S.C., Mrs. Deepashree Devaraj, "Automatic exudate detection for the diagnosis of diabetic retinopathy", ISSN 2319-9725, Vol. 2.
[5] Nikhil Amrutkar, Yogesh Bandgar, Sharad Chitalkar, S.L. Tade, "Retinal Blood Vessel Segmentation Algorithm for Diabetic Retinopathy and Abnormality Detection Using Image Subtraction", International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, Vol. 2, Issue 4, April 2013.
Citation
S. Ponnammal, "Automated Detection of High Exudates and Cotton Wool Spots in Diabetic Retinopathy", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.125-128, 2019.
Survey on Image Compression Techniques
Survey Paper | Journal Paper
Vol.07 , Issue.08 , pp.129-132, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.129132
Abstract
The goal of this paper is to survey various image compression techniques. Image compression is very important in digital image processing. It is the process of converting an image file in such a way that it consumes less space than the original file. A compression technique reduces the size of an image file without disturbing or degrading its quality to a great extent. The main intention of compression is to minimize the amount of redundant or unnecessary data. The goal is to save the memory required to store images and to use network bandwidth efficiently.
Key-Words / Index Term
Image Compression, Run length encoding, Entropy encoding, LZW
References
[1]. Yogesh Chandra, Shikha Mishra “Review Paper of Image Compression”, International Journal of Engineering and Applied Sciences (IJEAS) ISSN: 2394-3661, Volume-2, Issue-2, February 2015.
[2]. Dalvir Kaur, Kamaljit Kaur, “Huffman Based LZW Lossless Image Compression Using Retinex Algorithm”, International Journal of Advanced Research in Computer and Communication Engineering, Vol. 2, Issue 8, August 2013.
[3]. Mrs. Bhumika Gupta, "Study of Various Lossless Image Compression Techniques", IJETTCS, Volume 2, Issue 4, July-August 2013.
[4]. David Jeff Jackson & Sidney Joel Hannah, "Comparative Analysis of Image Compression Techniques", System Theory 1993, Proceedings SSST '93, 25th Southeastern Symposium, pp. 513-517, 7-9 March 1993.
[5]. Malwinder Kaur, Navdeep Kaur, "A Literature Survey on Lossless Image Compression", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 4, Issue 3, March 2015.
[6]. Mridul Kumar Mathur, Seema Loonker, Dr. Dheeraj Saxena, “Lossless Huffman Coding Technique For Image Compression And Reconstruction Using Binary Trees”, IJCTA, Vol 3 , Jan-Feb 2012.
[7]. Jagadeesh B, Ankitha Rao, “An approach for Image Compression Using Adaptive Huffman Coding”, International Journal of Engineering Research & Technology (IJERT), Vol. 2 Issue 12, December – 2013.
[8]. Xiaoqiang Ma ,Chengdong Xu ,Pengfei Zhang ,Chunsheng Hu, “The Application of the Improved LZW Algorithm in the Data Processing of GNSS Simulation”, Fourth International Conference on Computational and Information Sciences 2012.
[9]. Jayavrinda Vrindavanam, Saravanan Chandran, Gautam K. Mahanti, "A Survey of Image Compression Methods", International Journal of Computer Applications (IJCA), 2012.
Citation
C. Jenifer Kamalin, G. Muthu Lakshmi, "Survey on Image Compression Techniques", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.129-132, 2019.
Natural Language Understanding of Malayalam Language
Research Paper | Journal Paper
Vol.07 , Issue.08 , pp.133-138, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.133138
Abstract
Natural Language Understanding (NLU) is a really challenging subdomain of language processing as far as highly agglutinative and morphologically rich south Indian languages are concerned. It requires highly complex procedures and techniques to extract inflections and grammatical information, thereby making a computer understand the real sense of the language as human beings do. This paper aims not only to provide insights into natural language understanding but also to gather information about the various existing techniques for the Malayalam language. Natural language understanding refers to the understanding of any language by a machine with the help of an intermediate representation. Here we have briefed the various techniques and algorithms used for morphological analysis, POS tagging, chunking, parsing, named entity recognition, and word sense disambiguation, which are the inevitable components for understanding any natural language.
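The suffix-stripping approach named in the keywords (and in references [12]-[16]) can be sketched as a longest-match lookup; the transliterated suffix list below is purely illustrative (a real Malayalam analyzer uses a full suffix lexicon plus sandhi rules):

```python
# Hypothetical transliterated Malayalam case suffixes, for illustration only
SUFFIXES = ["ilekku", "ude", "kku", "il", "e"]

def strip_suffix(word):
    """Return (stem, suffix) by longest-match suffix stripping,
    refusing to strip if the remaining stem would be too short."""
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) > len(suf) + 1:
            return word[:-len(suf)], suf
    return word, ""          # no known suffix: word is its own stem

print(strip_suffix("pusthakangalil"))  # ('pusthakangal', 'il')
print(strip_suffix("veettilekku"))     # ('veett', 'ilekku')
```

Trying the longest suffix first prevents "ilekku" from being misread as a bare "il" plus leftover characters; this greedy ordering is the core design choice of suffix-stripping morphological analyzers.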
Key-Words / Index Term
Natural Language Understanding, Lemmatization, Suffix stripping, Sequence labelling, Stemming, POS tagger, Hidden Markov Model, Support Vector Machine, Morphology, Parsing, Chunking, Sandhi Splitter, Named Entity Recognition, Word Sense Disambiguation
References
[1] Kavallieratou, Ergina, et al. "Handwritten word recognition based on structural characteristics and lexical support." Seventh International Conference on Document Analysis and Recognition, 2003. Proceedings. IEEE, 2003.
[2] Hammerton, James, et al. "Introduction to special issue on machine learning approaches to shallow parsing." Journal of Machine Learning Research 2.Mar (2002): 551-558.
[3] Haroon, Rosna P. "Malayalam word sense disambiguation." 2010 IEEE International Conference on Computational Intelligence and Computing Research. IEEE, 2010.
[4] Aasha, V. C., and Amal Ganesh. "Rule Based Machine Translation: English to Malayalam: A Survey." Proceedings of 3rd International Conference on Advanced Computing, Networking and Informatics. Springer, New Delhi, 2016.
[5] Antony, J. Betina, and G. S. Mahalakshmi. "Named entity recognition for Tamil biomedical documents." 2014 International Conference on Circuits, Power and Computing Technologies [ICCPCT-2014]. IEEE, 2014.
[6] Nisha, M., and PC Reghu Raj. "Malayalam morphological analysis using MBLP approach." 2015 International Conference on Soft-Computing and Networks Security (ICSNS). IEEE, 2015.
[7] Dinh, Phu-Hung, Ngoc-Khuong Nguyen, and Anh-Cuong Le. "Combining statistical machine learning with transformation rule learning for Vietnamese word sense disambiguation." 2012 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future. IEEE, 2012.
[8] Khalil El Hindi, Muna Khayyat & Areej Abu Kar (2017) Comparing the Machine Ability to Recognize Hand-Written Hindu and Arabic Digits, Intelligent Automation & Soft Computing, 23:2, 295-301, DOI: 10.1080/10798587.2016.1210257.
[9] Sharma, Sanjeev Kumar, and G. S. Lehal. "Improving Existing Punjabi Grammar Checker." 2016 International Conference on Computational Techniques in Information and Communication Technologies (ICCTICT). IEEE, 2016.
[10] Manju, K., S. Soumya, and Sumam Mary Idicula. "Development of a POS tagger for Malayalam-an experience." 2009 International Conference on Advances in Recent Technologies in Communication and Computing. IEEE, 2009.
[11] Idicula, Sumam Mary, and Peter S. David. "A morphological processor for malayalam language." South Asia Research27.2 (2007): 173-186.
[12] Rajeev, R. R., and Elizabeth Sherly. "Morph analyser for malayalam language: A suffix stripping approach." Proceedings of 20th Kerala Science Congress, 2007.
[13] Jayan, Jisha P., R. R. Rajeev, and S. Rajendran. "Morphological analyser for malayalam-a comparison of different approaches." IJCSIT 2.2 (2009): 155-160.
[14] Vinod, P. M., V. Jayan, and V. K. Bhadran. "Implementation of Malayalam morphological analyzer based on hybrid approach." Proceedings of the 24th Conference on Computational Linguistics and Speech Processing (ROCLING 2012). 2012.
[15] Rinju O.R, Rajeev R R, Raghu Raj P C, Elizabeth Sherly,“Morphological Analyzer for Malayalam: Probabilistic Method vs Rule Based Method”, International Journal of Computational Linguistics and Natural Language Processing, Vol 2 Issue 10 October 2013.
[16] Jancy Joseph et al. "Rule based Morphological Analyser for Malayalam nouns", Computational Analysis of Malayalam Linguistics, IJIRCCE, Vol. 3, Special Issue 7, Oct 2015.
[17] Nair, Latha R. "Language Parsing and Syntax of Malayalam Language." 2nd International Symposium on Computer, Communication, Control and Automation. Atlantis Press, 2013.
[18] Jayan, Jisha P., and R. R. Rajeev. "Parts Of Speech Tagger and Chunker for Malayalam–Statistical Approach." Computer Engineering and Intelligent Systems 2.2 (2011): 68-78.
[19] Nair, Latha R., and S. David Peter. "Shallow Parser for Malayalam Language Using Finite State Cascades." 4th International Conference on Image and Signal Processing, 2011.
[20] Nair, Latha R., and S. David Peter. "Development of a rule based learning system for splitting compound words in malayalam language." 2011 IEEE Recent Advances in Intelligent Computational Systems. IEEE, 2011.
[21] Antony, P. J., Santhanu P. Mohan, and K. P. Soman. "SVM based part of speech tagger for Malayalam." 2010 International Conference on Recent Trends in Information, Telecommunication and Computing. IEEE, 2010.
[22] Pragisha, K., and Dr PC Reghuraj. "A Natural Language Question Answering System in Malayalam Using Domain Dependent Document Collection as Repository." International Journal of Computational Linguistics and Natural Language Processing 3.3 (2014): 2279-0756.
[23] Jayan, Jisha P., R. R. Rajeev, and Elizabeth Sherly. "A hybrid statistical approach for named entity recognition for malayalam language." Proceedings of the 11th Workshop on Asian Language Resources. 2013.
[24] Devadath, V. V., and Dipti Misra Sharma. "Significance of an accurate sandhi-splitter in shallow parsing of dravidian languages." Proceedings of the ACL 2016 Student Research Workshop. 2016.
Citation
Usha K, S Lakshmana Pandian, "Natural Language Understanding of Malayalam Language", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.133-138, 2019.
Suspicious Activity Detection in Surveillance Video Using Fully Convolutional Networks Segmentation
Research Paper | Journal Paper
Vol.07 , Issue.08 , pp.139-142, Apr-2019
CrossRef-DOI: https://doi.org/10.26438/ijcse/v7si8.139142
Abstract
In recent years, suspicious activity detection has been used to detect traffic in different surveillance videos with high accuracy and high speed during daytime. This surveillance video detection method includes adaptive background modeling, object modeling, object tracking, activity recognition, and segmentation. Semantic segmentation using suspicious activity detection techniques plays a major role in segmenting the surveillance video. U-Net is one of the popular Fully Convolutional Networks (FCN) applicable to image segmentation. This method can detect different anomalous activities in the videos.
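The U-Net idea the abstract relies on — an encoder that downsamples for context and a decoder that upsamples and fuses the encoder's features through a skip connection — can be sketched without any deep-learning framework; this NumPy toy (function names `down`, `up`, `unet_step` are ours) shows the data flow only, not a trainable network:

```python
import numpy as np

def down(x):
    """2x2 average pooling: the encoder's downsampling step."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling: the decoder's expanding step."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_step(x):
    """One encoder/decoder level. The skip connection stacks the
    full-resolution encoder map with the upsampled coarse map, so
    fine spatial detail survives for the segmentation mask."""
    skip = x                  # encoder features kept aside
    bottleneck = down(x)      # coarse context
    decoded = up(bottleneck)  # back to input resolution
    return np.stack([skip, decoded], axis=0)  # 2-channel fused map

frame = np.random.rand(8, 8)   # stand-in for a video-frame feature map
fused = unet_step(frame)
print(fused.shape)  # (2, 8, 8)
```

In a real U-Net, each level also applies learned convolutions before and after the resampling, but the skip-and-concatenate structure above is what lets the network localize anomalies at pixel level.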
Key-Words / Index Term
Segmentation, FCN, Object Modeling, Suspicious Activity Detection, Surveillance Video
References
[1] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[2] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[3] J. J. Koenderink and A. J. van Doorn. Representation of local geometry in the visual system. Biological Cybernetics, 55(6):367–375, 1987.
[4] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[5] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. In Neural Computation, 1989.
[6] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. Springer, 1998.
[7] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[8] J. Dai, K. He, Y. Li, S. Ren, and J. Sun. Instance-sensitive fully convolutional networks. In ECCV, 2016.
[9] J. Dai, Y. Li, K. He, and J. Sun. R-FCN: Object detection via region-based fully convolutional networks. In NIPS, 2016.
[10] A. Faktor and M. Irani. Video segmentation by non-local consensus voting. In BMVC, 2014.
[11] Q. Fan, F. Zhong, D. Lischinski, D. Cohen-Or, and B. Chen. JumpCut: Non-successive mask transfer and interpolation for video cutout. ACM Trans. Graph., 34(6), 2015.
[12] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. TPAMI, 35(8):1915–1929, 2013.
[13] K. Fragkiadaki, G. Zhang, and J. Shi. Video segmentation by tracing discontinuities in a trajectory embedding. In CVPR, 2012.
[14] R. Girshick. Fast R-CNN. In ICCV, 2015.
[15] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[16] M. Godec, P. M. Roth, and H. Bischof. Hough-based tracking of non-rigid objects. CVIU, 117(10):1245–1256, 2013.
[17] M. Grundmann, V. Kwatra, M. Han, and I. A. Essa. Efficient hierarchical graph-based video segmentation. In CVPR, 2010.
[18] B. Hariharan, P. Arbeláez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[19] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016.
[20] S. D. Jain and K. Grauman. Supervoxel-consistent foreground propagation in video. In ECCV, 2014.
[21] V. Jampani, R. Gadde, and P. V. Gehler. Video propagation networks. In CVPR, 2017.
[22] A. Khoreva, F. Perazzi, R. Benenson, B. Schiele, and A. Sorkine-Hornung. Learning video object segmentation from static images. In CVPR, 2017.
[23] I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. In ICLR, 2016.
[24] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[25] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In ICCV, 2011.
[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed. SSD: Single shot multibox detector. In ECCV, 2016.
[27] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[28] M. Kristan et al. The visual object tracking VOT2015 challenge results. In Visual Object Tracking Workshop at ICCV 2015, Dec 2015.
[29] K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Convolutional oriented boundaries. In ECCV, 2016.
[30] K. Maninis, J. Pont-Tuset, P. Arbeláez, and L. Van Gool. Deep retinal image understanding. In MICCAI, 2016.
[31] R. Mottaghi, X. Chen, X. Liu, N.-G. Cho, S.-W. Lee, S. Fidler, R. Urtasun, and A. Yuille. The role of context for object detection and semantic segmentation in the wild. In CVPR, 2014.
[32] H. Nam and B. Han. Learning multi-domain convolutional neural networks for visual tracking. In CVPR, 2016.
[33] N. Märki, F. Perazzi, O. Wang, and A. Sorkine-Hornung. Bilateral space video segmentation. In CVPR, 2016.
[34] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In ICCV, 2015.
[35] P. Ochs, J. Malik, and T. Brox. Segmentation of moving objects by long term video analysis. TPAMI, 36(6):1187–1200, 2014.
[36] A. Papazoglou and V. Ferrari. Fast object segmentation in unconstrained video. In ICCV, 2013.
[37] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In CVPR, 2016.
[38] F. Perazzi, O. Wang, M. Gross, and A. Sorkine-Hornung. Fully connected object proposals for video segmentation. In ICCV, 2015.
[39] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In ECCV, 2016.
[40] J. Pont-Tuset, P. Arbeláez, J. T. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping for image segmentation and object proposal generation. TPAMI, 2017.
Citation
S. Santhiya, T. Ratha Jeyalakshmi, "Suspicious Activity Detection in Surveillance Video Using Fully Convolutional Networks Segmentation", International Journal of Computer Sciences and Engineering, Vol.07, Issue.08, pp.139-142, 2019.