
Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/52469
Title: Deep Neural Network for Image Recognition Based on the Caffe Framework
Authors: Komar, Myroslav
Golovko, Vladimir
Sachenko, Anatoliy
Dorosh, Vitaliy
Yakobchuk, Pavlo
Affiliation: Ternopil National Economic University
Brest State Technical University
Kazimierz Pulaski University of Technology and Humanities in Radom
Bibliographic description (Ukraine): Deep Neural Network for Image Recognition Based on the Caffe Framework / Myroslav Komar, Vladimir Golovko, Anatoliy Sachenko, Vitaliy Dorosh, Pavlo Yakobchuk // Data stream mining and processing : proceedings of the IEEE second international conference, 21-25 August 2018, Lviv. — Lviv : Lviv Politechnic Publishing House, 2018. — P. 102–106. — (Big Data & Data Science Using Intelligent Approaches).
Bibliographic description (International): Deep Neural Network for Image Recognition Based on the Caffe Framework / Myroslav Komar, Vladimir Golovko, Anatoliy Sachenko, Vitaliy Dorosh, Pavlo Yakobchuk // Data stream mining and processing : proceedings of the IEEE second international conference, 21-25 August 2018, Lviv. — Lviv Politechnic Publishing House, 2018. — P. 102–106. — (Big Data & Data Science Using Intelligent Approaches).
Is part of: Data stream mining and processing : proceedings of the IEEE second international conference, 2018
Conference/Event: IEEE second international conference "Data stream mining and processing"
Issue Date: 28-Feb-2018
Publisher: Lviv Politechnic Publishing House
Place of the edition/event: Lviv
Temporal Coverage: 21-25 August 2018, Lviv
Keywords: Deep Neural Network
Information Technology
Image Recognition
Artificial Intelligence
Caffe Framework
Number of pages: 5
Page range: 102-106
Start page: 102
End page: 106
Abstract: Deep Learning of Neural Networks has become one of the most in-demand areas of Information Technology and has been successfully applied to many problems of Artificial Intelligence, such as speech recognition, computer vision, natural language processing, and data visualization. This paper describes the development of a deep neural network model for image recognition and a corresponding experimental study using the MNIST data set as an example. Practical details of creating the Deep Neural Network and performing image recognition in the Caffe Framework are given as well.
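The abstract mentions building the network and running recognition in the Caffe Framework. In Caffe, a network is declared as a protobuf text file ("prototxt") rather than written in code. The following is a minimal illustrative sketch in the style of the LeNet example that ships with Caffe; the layer names, data source path, and hyperparameters are assumptions for illustration, not values taken from the paper.

```protobuf
# LeNet-style network for MNIST digit recognition (sketch, not the authors' exact model)
name: "LeNet-MNIST"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  data_param { source: "mnist_train_lmdb" backend: LMDB batch_size: 64 }
  transform_param { scale: 0.00390625 }   # rescale pixel values from 0..255 to 0..1
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 20 kernel_size: 5 stride: 1 }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 2 stride: 2 }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool1"
  top: "ip1"
  inner_product_param { num_output: 500 }
}
layer { name: "relu1" type: "ReLU" bottom: "ip1" top: "ip1" }
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  inner_product_param { num_output: 10 }  # one output per MNIST digit class
}
layer { name: "loss" type: "SoftmaxWithLoss" bottom: "ip2" bottom: "label" top: "loss" }
```

Training is then driven by a separate solver prototxt (learning rate, iterations, snapshots) and launched from the command line, e.g. `caffe train --solver=lenet_solver.prototxt`.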
URI: https://ena.lpnu.ua/handle/ntb/52469
ISBN: © Lviv Polytechnic National University, 2018
Copyright owner: © Lviv Polytechnic National University, 2018
URL for reference material: http://caffe.berkeleyvision.org
http://yann.lecun.com/exdb/mnist
https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
References: [1] G. E. Hinton, S. Osindero, and Y. Teh, “A fast learning algorithm for deep belief nets,” Neural Computation, vol. 18, pp. 1527–1554, 2006.
[2] G. E. Hinton, A practical guide to training restricted Boltzmann machines, Department of Computer Science, University of Toronto, 2010.
[3] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521 (7553), pp. 436–444, 2015.
[4] Y. Bengio, “Learning deep architectures for AI,” Foundations and Trends in Machine Learning, vol. 2(1), pp. 1–127, 2009.
[5] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier networks,” In Proc. of the 14th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), vol. 15, pp. 315–323, 2011.
[6] V. Golovko, A. Kroshchanka, U. Rubanau, and S. Jankowski, “A Learning Technique for Deep Belief Neural Networks,” Communication in Computer and Information Science, vol. 440, pp. 136–146, 2014.
[7] V. Golovko, A. Kroshchanka, V. Turchenko, S. Jankowski, and D. Treadwell, “A New Technique for Restricted Boltzmann Machine Learning,” 8th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2015), Warsaw, Poland, pp. 182–186, 24–26 September, 2015.
[8] V. Golovko, A. Kroshchanka, and D. Treadwell, “The Nature of Unsupervised Learning in Deep Neural Networks: A New Understanding and Novel Approach,” Optical Memory and Neural Networks, vol. 25(3), pp. 127–141, 2016.
[9] S. Jankowski, Z. Szymański, U. Dziomin, V. Golovko, and A. Barcz, “Deep learning classifier based on NPCA and orthogonal feature selection,” International Conference on Photonics Applications in Astronomy, Communications, Industry, and High–Energy Physics Experiments, Wilga, Poland, pp. 5–9, May 29, 2016.
[10] G. Hinton, et al., “Deep neural network for acoustic modeling in speech recognition,” IEEE Signal Processing Magazine, vol. 29, pp. 82–97, 2012.
[11] T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocký, “Strategies for training large scale neural network language models,” in Automatic Speech Recognition and Understanding, pp. 195–201, 2011.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, vol. 25, pp. 1090–1098, 2012.
[13] V. Golovko, S. Bezobrazov, A. Kroshchanka, A. Sachenko, M. Komar, and A. Karachka, “Convolutional Neural Network Based Solar Photovoltaic Panel Detection in Satellite Photos,” 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS’2017), Bucharest, Romania, pp. 14-19, September 21-23, 2017.
[14] G. Hinton, and R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313 (5786), pp. 504–507, 2006.
[15] Jia-Ren Chang, and Yong-Sheng Chen, “Batch-normalized Maxout Network in Network,” arXiv:1511.02583, 2015.
[16] D. Ciresan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), New York, pp. 3642–3649, 2012. DOI: 10.1109/CVPR.2012.6248110.
[17] I. Sato, H. Nishimura, and K. Yokoi, “APAC: Augmented PAttern Classification with Neural Networks,” arXiv:1505.03229v1, 2015.
[18] L. Wan, M. Zeiler, S. Zhang, Y. Le Cun, and R. Fergus, “Regularization of Neural Networks using DropConnect,” Proceedings of the 30th International Conference on Machine Learning, PMLR, vol. 28(3), pp. 1058-1066, 2013.
[19] M. D. Zeiler and R. Fergus. “Stochastic pooling for regularization of deep convolutional neural networks,” ArXiv:1301.3557, 2013.
[20] Caffe Deep Learning Framework, http://caffe.berkeleyvision.org, last accessed 15.03.2018.
[21] The MNIST database, http://yann.lecun.com/exdb/mnist, last accessed 15.03.2018.
[22] Y. Le Cun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86(11), pp. 2278–2324, 1998.
[23] A. Krizhevsky, and G. Hinton, Learning multiple layers of features from tiny images. Technical report, University of Toronto, 1 (4), 7, 2009. https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf, last accessed 15.03.2018.
[24] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” In NIPS workshop on deep learning and unsupervised feature learning, Granada, Spain, vol. 2011, pp. 5. 2011.
[25] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” In CVPR09, pp. 248–255, 2009.
[26] D. T. V. Dharmajee Rao, and K. V. Ramana, “Winograd’s Inequality: Effectiveness for Efficient Training of Deep Neural Networks,” Intelligent Systems and Applications, vol. 6, pp. 49-58, 2018.
Content type: Conference Abstract
Appears in Collections:Data stream mining and processing : proceedings of the IEEE second international conference

Files in This Item:
2018_Komar_M-Deep_Neural_Network_for_Image_102-106.pdf (278.96 kB, Adobe PDF)
2018_Komar_M-Deep_Neural_Network_for_Image_102-106__COVER.png (591.49 kB, image/png)


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.