DC Field | Value | Language |
dc.contributor.author | Zerbino, D. | |
dc.date.accessioned | 2020-02-28T09:27:46Z | - |
dc.date.available | 2020-02-28T09:27:46Z | - |
dc.date.created | 2019-06-26 | |
dc.date.issued | 2019-06-26 | |
dc.identifier.citation | Zerbino D. Improving image sharpness by surface recognition / D. Zerbino // Econtechmod : scientific journal. — Lublin, 2019. — Vol. 8. — No. 4. — P. 39–44. | |
dc.identifier.uri | https://ena.lpnu.ua/handle/ntb/46304 | - |
dc.description.abstract | The article proposes a rule for improving image sharpness and analyzes its implementation by means of the cellular-automata formalism and neural networks. It has been proved that the previously known contrasting algorithm, which uses a 3x3-pixel template, can be improved considerably by repeatedly applying an iterative process over 2x2 templates with the "anti-blur" rule C11 = C11*F - (C12 + C21 + C22)*S, followed by gradient color correction at each step. Colors of the image within the template are represented as real numbers (R, G, B). To correct the gradient (C11 < C12, C11 < C21, C11 < C22, C12 < C22, C21 < C22), the value Cij that requires the least tightening toward its neighbor's color is chosen. The number of iterations of the rule required depends on the image. | |
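As a rough illustration of the rule described in the abstract, the Python sketch below applies the 2x2 "anti-blur" step iteratively. The coefficients F and S, the 0–255 value range, and the simple blend standing in for the paper's gradient correction are illustrative assumptions, not values or details taken from the article.

```python
import numpy as np

def anti_blur_iteration(img, F=1.6, S=0.2):
    """One pass of the 2x2 'anti-blur' rule from the abstract:
    C11 = C11*F - (C12 + C21 + C22)*S, applied per color channel.
    F=1.6 and S=0.2 are illustrative choices (F - 3*S = 1 roughly preserves
    brightness); the record does not specify their values."""
    img = img.astype(np.float64)
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(h - 1):
        for x in range(w - 1):
            c11 = img[y, x]
            c12 = img[y, x + 1]
            c21 = img[y + 1, x]
            c22 = img[y + 1, x + 1]
            # Sharpen the top-left pixel of the current 2x2 template.
            out[y, x] = c11 * F - (c12 + c21 + c22) * S
    return np.clip(out, 0.0, 255.0)

def sharpen(img, iterations=3):
    """Repeat the anti-blur step; the number of iterations depends on the image.
    Blending with the previous frame is a crude stand-in for the paper's
    per-template gradient correction (the monotonicity test C11 < C12, ...,
    C21 < C22 and the 'minimal tightening' choice are not reproduced here)."""
    img = img.astype(np.float64)
    for _ in range(iterations):
        prev = img
        img = 0.5 * (anti_blur_iteration(prev) + prev)
    return np.clip(img, 0, 255).astype(np.uint8)
```

For a uint8 RGB array, a call such as sharpen(image, iterations=2) gives a tentative result; tuning F, S, and the iteration count per image is left open, as the abstract itself notes.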
dc.format.extent | 39-44 | |
dc.language.iso | en | |
dc.relation.ispartof | Econtechmod : scientific journal, 4 (8), 2019 | |
dc.relation.uri | http://www.ee.iitm.ac.in/ncc2017/tutorials/NCC_DL_Tutorial.pdf | |
dc.relation.uri | https://en.wikipedia.org/wiki/Data | |
dc.relation.uri | https://web.itu.edu.tr/hulyayalcin/Signal_Processing_Books/2010_Szeliski_ComputerVision.pdf | |
dc.subject | cellular automata | |
dc.subject | image contrasting | |
dc.subject | sharpness | |
dc.subject | correct gradient | |
dc.subject | logical correction of colors | |
dc.subject | neocognitron | |
dc.title | Improving image sharpness by surface recognition | |
dc.type | Article | |
dc.rights.holder | © Copyright by Lviv Polytechnic National University 2019 | |
dc.rights.holder | © Copyright by University of Engineering and Economics in Rzeszów 2019 | |
dc.contributor.affiliation | Lviv Polytechnic National University | |
dc.format.pages | 6 | |
dc.identifier.citationen | Zerbino D. Improving image sharpness by surface recognition / D. Zerbino // Econtechmod : scientific journal. — Lublin, 2019. — Vol. 8. — No. 4. — P. 39–44. | |
dc.relation.references | 1. Deep Learning Tutorial. The National Conference on Communications, 2017, http://www.ee.iitm.ac.in/ncc2017/tutorials/NCC_DL_Tutorial.pdf | |
dc.relation.references | 2. Kunihiko Fukushima. 1980. Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position. Biol. Cybernetics 36: 193–202. | |
dc.relation.references | 3. https://en.wikipedia.org/wiki/Data. | |
dc.relation.references | 4. https://web.itu.edu.tr/hulyayalcin/Signal_Processing_Books/2010_Szeliski_ComputerVision.pdf | |
dc.relation.references | 5. Zerbino D., Farid T. 1999. Realization of Complex Arithmetic on Cellular Automata. In: Parallel Computing Technologies, Lecture Notes in Computer Science, Springer, Vol. 1662: 479–480. | |
dc.relation.references | 6. Zerbino D.D., Tereshkiv S.M. 2017. The Connection Between Prime Numbers. In: Computer Science and Information Technologies CSIT 2017, 05–08 September 2017, Lviv, Ukraine: 438–441. | |
dc.relation.references | 7. Brytik V., Grebinnik O., Kobziev V. 2016. Research the possibilities of different filters and their application to image recognition problems. Econtechmod. Vol. 5, No 4: 21–27. | |
dc.relation.references | 8. Veres O., Kis Ya., Kugivchak V., Rishniak I. 2018. Development of a Reverse-search System of Similar or Identical Images. Econtechmod. Vol. 7, No 2: 23–30. | |
dc.relation.references | 9. Shumeiko A., Smorodskyi V.. 2017. Discrete trigonometric transform and its usage in digital image processing. Econtechmod. Vol. 6, No 4: 21–26. | |
dc.relation.references | 10. Rybchak Z., Basystiuk O. 2017. Analysis of computer vision and image analysis technics. Econtechmod. Vol. 6, no 2: 79–84. | |
dc.relation.references | 11. Sajad Sabzi, Yousef Abbaspour-Gilandeh, Ginés García-Mateos, 2018. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms. In: Information Processing in Agriculture, Vol. 5, Issue 1: 162–172. | |
dc.relation.references | 12. Lessmann, M. 2015. Learning of invariant object recognition in hierarchical neural networks using temporal continuity. In: ELCVIA Electronic Letters On Computer Vision And Image Analysis, 14(3), 16–18. | |
dc.relation.references | 13. Romanuke, V. V. 2016. Optimal Pixel-to-Shift Standard Deviation Ratio for Training 2-Layer Perceptron on Shifted 60 × 80 Images with Pixel Distortion in Classifying Shifting-Distorted Objects. Applied Computer Systems, 19(1), 61–70. | |
dc.relation.references | 14. Ryan N. Rakvic, Hau Ngo, Randy P. Broussard, Robert W. Ives. 2010. Comparing an FPGA to a Cell for an Image Processing Application. EURASIP Journal on Advances in Signal Processing | |
dc.relation.referencesen | 1. Deep Learning Tutorial. The National Conference on Communications, 2017, http://www.ee.iitm.ac.in/ncc2017/tutorials/NCC_DL_Tutorial.pdf | |
dc.relation.referencesen | 2. Kunihiko Fukushima. 1980. Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position. Biol. Cybernetics 36: 193–202. | |
dc.relation.referencesen | 3. https://en.wikipedia.org/wiki/Data. | |
dc.relation.referencesen | 4. https://web.itu.edu.tr/hulyayalcin/Signal_Processing_Books/2010_Szeliski_ComputerVision.pdf | |
dc.relation.referencesen | 5. Zerbino D., Farid T. 1999. Realization of Complex Arithmetic on Cellular Automata. In: Parallel Computing Technologies, Lecture Notes in Computer Science, Springer, Vol. 1662: 479–480. | |
dc.relation.referencesen | 6. Zerbino D.D., Tereshkiv S.M. 2017. The Connection Between Prime Numbers. In: Computer Science and Information Technologies CSIT 2017, 05–08 September 2017, Lviv, Ukraine: 438–441. | |
dc.relation.referencesen | 7. Brytik V., Grebinnik O., Kobziev V. 2016. Research the possibilities of different filters and their application to image recognition problems. Econtechmod. Vol. 5, No 4: 21–27. | |
dc.relation.referencesen | 8. Veres O., Kis Ya., Kugivchak V., Rishniak I. 2018. Development of a Reverse-search System of Similar or Identical Images. Econtechmod. Vol. 7, No 2: 23–30. | |
dc.relation.referencesen | 9. Shumeiko A., Smorodskyi V.. 2017. Discrete trigonometric transform and its usage in digital image processing. Econtechmod. Vol. 6, No 4: 21–26. | |
dc.relation.referencesen | 10. Rybchak Z., Basystiuk O. 2017. Analysis of computer vision and image analysis technics. Econtechmod. Vol. 6, no 2: 79–84. | |
dc.relation.referencesen | 11. Sajad Sabzi, Yousef Abbaspour-Gilandeh, Ginés García-Mateos, 2018. A new approach for visual identification of orange varieties using neural networks and metaheuristic algorithms. In: Information Processing in Agriculture, Vol. 5, Issue 1: 162–172. | |
dc.relation.referencesen | 12. Lessmann, M. 2015. Learning of invariant object recognition in hierarchical neural networks using temporal continuity. In: ELCVIA Electronic Letters On Computer Vision And Image Analysis, 14(3), 16–18. | |
dc.relation.referencesen | 13. Romanuke, V. V. 2016. Optimal Pixel-to-Shift Standard Deviation Ratio for Training 2-Layer Perceptron on Shifted 60 × 80 Images with Pixel Distortion in Classifying Shifting-Distorted Objects. Applied Computer Systems, 19(1), 61–70. | |
dc.relation.referencesen | 14. Ryan N. Rakvic, Hau Ngo, Randy P. Broussard, Robert W. Ives. 2010. Comparing an FPGA to a Cell for an Image Processing Application. EURASIP Journal on Advances in Signal Processing | |
dc.citation.volume | 8 | |
dc.citation.issue | 4 | |
dc.citation.spage | 39 | |
dc.citation.epage | 44 | |
dc.coverage.placename | Lublin | |
Appears in Collections: | Econtechmod. – 2019. – Vol. 8, No. 4 |