

Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/52547
Title: Shallow Convolutional Neural Networks for Pattern Recognition Problems
Authors: Gorokhovatskyi, Oleksii
Peredrii, Olena
Affiliation: Simon Kuznets Kharkiv National University of Economics
Bibliographic description (Ukraine): Gorokhovatskyi O. Shallow Convolutional Neural Networks for Pattern Recognition Problems / Oleksii Gorokhovatskyi, Olena Peredrii // Data stream mining and processing : proceedings of the IEEE second international conference, 21-25 August 2018, Lviv. — Lviv : Lviv Politechnic Publishing House, 2018. — P. 459–463. — (Machine Vision and Pattern Recognition).
Bibliographic description (International): Gorokhovatskyi O. Shallow Convolutional Neural Networks for Pattern Recognition Problems / Oleksii Gorokhovatskyi, Olena Peredrii // Data stream mining and processing : proceedings of the IEEE second international conference, 21-25 August 2018, Lviv. — Lviv Politechnic Publishing House, 2018. — P. 459–463. — (Machine Vision and Pattern Recognition).
Is part of: Data stream mining and processing : proceedings of the IEEE second international conference, 2018
Conference/Event: IEEE second international conference "Data stream mining and processing"
Issue Date: 28-Feb-2018
Publisher: Lviv Politechnic Publishing House
Place of the edition/event: Lviv
Temporal Coverage: 21-25 August 2018, Lviv
Keywords: image, recognition, classification, convolution, shallow neural network, layer, partial training, dataset
Number of pages: 5
Page range: 459-463
Start page: 459
End page: 463
Abstract: The paper describes an investigation of the possible usage of shallow (limited to only a few layers) convolutional neural networks to solve well-known pattern classification problems. The Brazilian Coffee Scenes, SAT-4/SAT-6, MNIST, UC Merced Land Use and CIFAR datasets were tested. It is shown that shallow convolutional neural networks with partial training may be effective enough to produce results close to those of state-of-the-art deep networks, but limitations are also found.
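The record itself contains no code, but the shallow convolutional stage the abstract refers to is easy to sketch. The following NumPy snippet is an illustrative sketch only, not the authors' implementation: it runs the forward pass of a single convolution + ReLU + max-pooling layer on an MNIST-sized input, with a random (untrained) filter standing in for a learned one.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols that don't fit are dropped."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# Forward pass of one "shallow" stage: convolution -> ReLU -> max pooling.
rng = np.random.default_rng(0)
image = rng.random((28, 28))            # MNIST-sized grayscale input
kernel = rng.standard_normal((3, 3))    # one 3x3 filter (random, not trained)
feature_map = np.maximum(conv2d(image, kernel), 0.0)  # ReLU activation
pooled = max_pool(feature_map)          # 2x2 max pooling

print(feature_map.shape)  # (26, 26) -- valid convolution: 28 - 3 + 1
print(pooled.shape)       # (13, 13) -- halved by 2x2 pooling
```

A shallow network in the paper's sense stacks only one or two such stages before a classifier, in contrast to the many stacked stages of the deep architectures cited in the references (VGG, GoogLeNet, ResNet).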
URI: https://ena.lpnu.ua/handle/ntb/52547
ISBN: © Lviv Polytechnic National University, 2018
Copyright owner: © Lviv Polytechnic National University, 2018
URL for reference material: https://arxiv.org/pdf/1602.01517.pdf
https://arxiv.org/pdf/1409.1556.pdf
https://arxiv.org/pdf/1409.4842.pdf
https://arxiv.org/pdf/1512.03385v1.pdf
https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html
http://setosa.io/ev/image-kernels/
https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html
https://arxiv.org/pdf/1412.6980v8.pdf
http://www.patreo.dcc.ufmg.br/downloads/brazilian-coffee-dataset/
http://dsp.etfbl.net/aerial/unsupervised_final.pdf
https://arxiv.org/pdf/1508.00092.pdf
https://arxiv.org/pdf/1612.08879.pdf
http://csc.lsu.edu/~saikat/deepsat/
http://bit.csc.lsu.edu/~saikat/publications/sigproc-sp.pdf
http://dx.doi.org/10.1080/2150704X.2016.1235299
http://proceedings.utwente.nl/403/1/Yang-DropBand-91.pdf
http://yann.lecun.com/exdb/mnist/
https://pdfs.semanticscholar.org/4191/fe93bfd883740a881e6a60e54b371c2f241d.pdf
https://www.cs.toronto.edu/~kriz/cifar.html
https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html
https://github.com/fchollet/keras
References (Ukraine): [1] J. Wang, C. Luo, H. Huang, H. Zhao and S. Wang, “Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network,” Remote Sens. 2017, 9(3), 225; doi:10.3390/rs9030225.
[2] K. Nogueira, W. O. Miranda and J. A. Dos Santos, “Improving spatial feature representation from aerial scenes by using convolutional networks,” in: 28th IEEE SIBGRAPI Conference on Graphics, Patterns and Images, pp. 289–296, 2015.
[3] A. Krizhevsky, I. Sutskever and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Neural Information Processing Systems, pp. 1106–1114, 2012.
[4] K. Nogueira, O. A. B. Penatti and J. A. Dos Santos, “Towards Better Exploiting Convolutional Neural Networks for Remote Sensing Scene Classification,” [Online]. Available: https://arxiv.org/pdf/1602.01517.pdf [June 02, 2018].
[5] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” [Online]. Available: https://arxiv.org/pdf/1409.1556.pdf. [March 15, 2017].
[6] C. Szegedy, W. Liu, Y. Jia et al., “Going deeper with convolutions,” [Online]. Available: https://arxiv.org/pdf/1409.4842.pdf. [March 10, 2017].
[7] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” [Online]. Available: https://arxiv.org/pdf/1512.03385v1.pdf. [May 12, 2017].
[8] A. Deshpande, “The 9 Deep Learning Papers You Need To Know About (Understanding CNNs Part 3),” [Online]. Available: https://adeshpande3.github.io/adeshpande3.github.io/The-9-Deep-Learning-Papers-You-Need-To-Know-About.html. [May 15, 2017].
[9] V. Powell, “Image Kernels,” [Online]. Available: http://setosa.io/ev/image-kernels/. [June 02, 2017].
[10] P. Veličković, “Deep learning for complete beginners: convolutional neural networks with keras,” [Online]. Available: https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html. [June 02, 2017].
[11] D. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” [Online]. Available: https://arxiv.org/pdf/1412.6980v8.pdf. [July 20, 2017].
[12] Brazilian Coffee Scenes Dataset [Online]. Available: http://www.patreo.dcc.ufmg.br/downloads/brazilian-coffee-dataset/. [May 17, 2017].
[13] O. A. B. Penatti, K. Nogueira and J. A. Dos Santos, “Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?,” in IEEE Computer Vision and Pattern Recognition Workshops, pp. 44–51, 2015.
[14] R. de O. Stehling, M. A. Nascimento and A. X. Falcao, “A compact and efficient image retrieval approach based on border/interior pixel classification,” In Eleventh International Conference on Information and Knowledge Management (CIKM'02), pp.102–109, 2002.
[15] V. Risojevic and Z. Babic, “Unsupervised Quaternion Feature Learning for Remote Sensing Image Classification,” [Online]. Available: http://dsp.etfbl.net/aerial/unsupervised_final.pdf. [June 15, 2017].
[16] M. Castelluccio, G. Poggi, C. Sansone and L. Verdoliva, “Land Use Classification in Remote Sensing Images by Convolutional Neural Networks,” [Online]. Available: https://arxiv.org/pdf/1508.00092.pdf. [June 17, 2017].
[17] DaoYu Lin, “Deep Unsupervised Representation Learning for Remote Sensing Images,” [Online]. Available: https://arxiv.org/pdf/1612.08879.pdf. [June 20, 2017].
[18] SAT-4 and SAT-6 airborne datasets [Online]. Available: http://csc.lsu.edu/~saikat/deepsat/. [June 20, 2017].
[19] S. Basu, S. Ganguly, S. Mukhopadhyay, R. DiBiano, M. Karki and R. Nemani, “DeepSat – A Learning framework for Satellite Imagery,” [Online]. Available: http://bit.csc.lsu.edu/~saikat/publications/sigproc-sp.pdf. [June 20, 2017].
[20] Y. Zhong, F. Fei, Y. Liu, B. Zhao, H. Jiao and L. Zhang, “SatCNN: satellite image dataset classification using agile convolutional neural networks,” Remote Sensing Letters, 8:2, 136-145, DOI: 10.1080/2150704X.2016.1235299. [Online]. Available: http://dx.doi.org/10.1080/2150704X.2016.1235299. [June 23, 2017].
[21] N. Yang, H. Tang, H. Sun and X. Yang, “DropBand: a convolutional neural network with data augmentation for scene classification of VHR satellite images,” [Online]. Available: http://proceedings.utwente.nl/403/1/Yang-DropBand-91.pdf. [June 25, 2017].
[22] M. Papadomanolaki, M. Vakalopoulou, S. Zagoruyko and K. Karantzalos, “Benchmarking deep learning frameworks for the classification of very high resolution satellite multispectral data,” in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume III-7, XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016.
[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[24] Y. LeCun, C. Cortes and C.J.C. Burges, “THE MNIST DATABASE of handwritten digits,” [Online]. Available: http://yann.lecun.com/exdb/mnist/. [July 10, 2017].
[25] Y. Yang and S. Newsam, “Bag-of-visual-words and spatial extensions for land-use classification,” in: 18th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, pp. 270– 279, November 02 – 05, 2010.
[26] M. Castelluccio, G. Poggi, C. Sansone and L. Verdoliva, “Land Use Classification in Remote Sensing Images by Convolutional Neural Networks,” [Online]. Available: https://pdfs.semanticscholar.org/4191/fe93bfd883740a881e6a60e54b371c2f241d.pdf. [July 26, 2017].
[27] F. P. S. Luus, B. P. Salmon, F. van den Bergh and B. T. J. Maharaj, “Multiview Deep Learning for Land-Use Classification,” in IEEE Geoscience and Remote Sensing Letters, Vol. 12, pp. 2448 – 2452, 2015.
[28] Y. Zhong, F. Fei and L. Zhang, “Large patch convolutional neural networks for the scene classification of high spatial resolution imagery,” J. Appl. Remote Sens. 10(2), 025006 (2016), doi: 10.1117/1.JRS.10.025006.
[29] The CIFAR-10 dataset [Online]. Available: https://www.cs.toronto.edu/~kriz/cifar.html. [July 27, 2017].
[30] A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” [Online]. Available: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf. [July 27, 2017].
[31] Classification datasets results, [Online]. Available: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html. [July 27, 2017].
[32] F. Chollet, “Keras,” [Online]. Available: https://github.com/fchollet/keras. [July 30, 2017].
References (International): identical to the list above.
Content type: Conference Abstract
Appears in Collections:Data stream mining and processing : proceedings of the IEEE second international conference

Files in This Item:
2018_Gorokhovatskyi_O-Shallow_Convolutional_459-463.pdf (332.46 kB, Adobe PDF)
2018_Gorokhovatskyi_O-Shallow_Convolutional_459-463__COVER.png (530.67 kB, image/png)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.