Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/56896
Title: Performance evaluation of Self-Quotient image methods
Other Titles: Оцінка ефективності методів самооцінювання зображення
Authors: Парубочий, В. О.
Шувар, Роман Ярославович
Parubochyi, V. O.
Shuvar, R. Ya.
Affiliation: Львівський національний університет ім. Івана Франка
Ivan Franko National University of Lviv
Bibliographic description (Ukraine): Parubochyi V. O. Performance evaluation of Self-Quotient image methods / V. O. Parubochyi, R. Ya. Shuvar // Український журнал інформаційних технологій. — Львів : Видавництво Львівської політехніки, 2020. — Том 2. — № 1. — С. 8–14.
Bibliographic description (International): Parubochyi V. O. Performance evaluation of Self-Quotient image methods / V. O. Parubochyi, R. Ya. Shuvar // Ukrainian Journal of Information Technology. — Lviv : Vydavnytstvo Lvivskoi politekhniky, 2020. — Vol 2. — No 1. — P. 8–14.
Is part of: Український журнал інформаційних технологій, 1 (2), 2020
Ukrainian Journal of Information Technology, 1 (2), 2020
Journal/Collection: Український журнал інформаційних технологій
Issue: 1
Volume: 2
Issue Date: 23-Sep-2020
Publisher: Видавництво Львівської політехніки
Place of the edition/event: Львів
Lviv
Keywords: нормалізація освітлення
метод самооцінювання зображень
SQI
фільтр Гауса
фільтр Габора
метод Габора для самооцінювання зображень
GQI
метод швидкого самооцінювання зображень
FSQI
lighting normalization
illumination normalization
self-quotient image
SQI
Gaussian filter
Gabor filter
Gabor quotient image
GQI
fast self-quotient image
FSQI
illumination invariant face recognition
Number of pages: 7
Page range: 8-14
Start page: 8
End page: 14
Abstract: Нормалізація освітлення є дуже важливою проблемою в системах розпізнавання зображень, оскільки різні умови освітлення можуть істотно змінити результати розпізнавання, а нормалізація освітлення дає змогу мінімізувати негативні наслідки різних умов освітлення. У цій роботі ми оцінюємо ефективність розпізнавання декількох методів нормалізації освітлення, заснованих на методі самооцінювання зображення SQI (англ. Self-Quotient Image method), запровадженому Haitao Wang, Stan Z. Li, Yangsheng Wang, та Jianjun Zhang. Для оцінки ми вибрали оригінальну реалізацію та найперспективніші модифікації оригінального методу SQI, в т.ч. й метод Gabor Quotient Image (GQI), запропонований Sanun Srisuk та Amnart Petpon у 2008 році, а також метод Fast Self-Quotient Image (FSQI) та його модифікації, запропоновані авторами статті в попередніх роботах. У цій роботі ми запропонували модель оцінки, яка використовує Cropped Extended Yale Face Database B, що дає змогу показати відмінність результатів розпізнавання для різних умов освітлення. Також ми перевіряємо всі результати за допомогою двох класифікаторів: класифікатора найближчих сусідів (англ. Nearest Neighbor Classifier) та лінійного класифікатора опорних векторів (англ. Linear Support Vector Classifier). Такий підхід дає змогу не тільки обчислити точність розпізнавання для кожного методу та вибрати найкращий метод, але й показати важливість правильного вибору методу класифікації, який може мати значний вплив на результати розпізнавання. Нам вдалося показати значне зменшення точності розпізнавання для необроблених (RAW) зображень із збільшенням кута між джерелом освітлення та нормаллю до об'єкта. З іншого боку, наші експерименти показали майже рівномірний розподіл точності розпізнавання для зображень, оброблених методами нормалізації освітлення на підставі методу SQI. Ще одним отриманим, проте очікуваним результатом, представленим у цій роботі, є підвищення точності розпізнавання із збільшенням розміру ядра фільтра. Однак великі розміри ядра фільтра є більш обчислювально-затратні і можуть спричинити негативні ефекти на вихідних зображеннях. Окрім цього, в наших експериментах було показано, що друга модифікація методу FSQI, яку ми скорочено позначаємо як FSQI3, краща майже в усіх випадках для всіх розмірів ядра фільтра, особливо якщо ми використовуємо лінійний класифікатор опорних векторів для класифікації.
Lighting normalization is an especially important issue in image recognition systems, since different illumination conditions can significantly change recognition results, and lighting normalization minimizes the negative effects of varying illumination. In this paper, we evaluate the recognition performance of several lighting normalization methods based on the Self-Quotient Image (SQI) method introduced by Haitao Wang, Stan Z. Li, Yangsheng Wang, and Jianjun Zhang. For the evaluation, we chose the original implementation and the most promising recent modifications of the original SQI method, including the Gabor Quotient Image (GQI) method introduced by Sanun Srisuk and Amnart Petpon in 2008, and the Fast Self-Quotient Image (FSQI) method and its modifications proposed by the authors in previous works. We propose an evaluation framework that uses the Cropped Extended Yale Face Database B, which makes it possible to show how recognition results differ across illumination conditions. We also test all results with two classifiers: a Nearest Neighbor Classifier and a Linear Support Vector Classifier. This approach allows us not only to compute the recognition accuracy for each method and select the best one, but also to show the importance of properly choosing the classification method, which can have a significant influence on recognition results. We show a significant decrease in recognition accuracy for unprocessed (RAW) images as the angle between the lighting source and the normal to the object increases. On the other hand, our experiments show an almost uniform distribution of recognition accuracy for images processed by lighting normalization methods based on the SQI method. Another demonstrated, though expected, result presented in this paper is that recognition accuracy increases with the filter kernel size. However, large filter kernels are much more computationally expensive and can produce negative effects in the output images. Our experiments also show that the second modification of the FSQI method, denoted FSQI3, is better in almost all cases for all filter kernel sizes, especially when the Linear Support Vector Classifier is used for classification.
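To make the evaluated pipeline concrete, the sketch below illustrates the general self-quotient idea (an image divided by a smoothed copy of itself) together with the two classifiers named in the abstract. This is a minimal illustration in Python using NumPy, SciPy, and scikit-learn; the tooling, the function names `self_quotient` and `evaluate`, and the Gaussian `sigma` parameter are assumptions for illustration, not the authors' implementation, and the plain Gaussian filter stands in for the weighted or Gabor filters used by the SQI, GQI, and FSQI variants.

```python
# Minimal sketch (not the authors' implementation): a Gaussian-filter-based
# self-quotient normalization followed by the two classifiers named in the abstract.
# Assumes images are grayscale NumPy arrays of identical shape.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def self_quotient(image, sigma=3.0, eps=1e-6):
    """Divide an image by its smoothed version to suppress low-frequency lighting.

    The smoothing filter (a plain Gaussian here; the original SQI uses a
    weighted, anisotropic filter) estimates the illumination component, and
    the quotient keeps mostly the illumination-invariant detail.
    """
    img = image.astype(np.float64)
    smoothed = gaussian_filter(img, sigma=sigma)
    return img / (smoothed + eps)

def evaluate(train_imgs, train_labels, test_imgs, test_labels, sigma=3.0):
    """Normalize every image, flatten it into a feature vector, and report
    accuracy for a 1-nearest-neighbor and a linear SVM classifier."""
    X_train = np.stack([self_quotient(im, sigma).ravel() for im in train_imgs])
    X_test = np.stack([self_quotient(im, sigma).ravel() for im in test_imgs])

    results = {}
    for name, clf in [("1-NN", KNeighborsClassifier(n_neighbors=1)),
                      ("LinearSVC", LinearSVC(max_iter=10000))]:
        clf.fit(X_train, train_labels)
        results[name] = accuracy_score(test_labels, clf.predict(X_test))
    return results
```

In the Cropped Extended Yale Face Database B setting described above, the test images would additionally be grouped by illumination angle before calling a routine like `evaluate`, so that accuracy can be reported per lighting subset as well as overall.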
URI: https://ena.lpnu.ua/handle/ntb/56896
Copyright owner: © Національний університет “Львівська політехніка”, 2020
URL for reference material: https://doi.org/10.1109/34.598229
https://doi.org/10.1109/34.598228
https://doi.org/10.1145/130385.130401
https://doi.org/10.1109/CVPR.2005.181
https://doi.org/10.1007/BF00994018
https://doi.org/10.1109/34.927464
https://doi.org/10.1109/CVPR.1998.698587
https://doi.org/10.15421/40250745
https://doi.org/10.1109/CVPR.1994.323941
https://doi.org/10.1109/stc-csit.2019.8929778
https://doi.org/10.1109/83.597272
https://doi.org/10.1364/josa.61.000001
https://doi.org/10.1109/TPAMI.2005.92
https://doi.org/10.1016/j.procs.2010.11.013
https://doi.org/10.5772/6396
https://doi.org/10.1109/ELIT.2019.8892347
https://doi.org/10.1080/13682199.2018.1517857
https://doi.org/10.1016/S0734-189X(87)80186-X
https://doi.org/10.1023/B:VLSI.0000028532.53893.82
https://doi.org/10.1109/CVPR.1999.784968
https://doi.org/10.1109/34.908964
https://doi.org/10.1007/978-3-540-89646-3_50
https://doi.org/10.1162/jocn.1991.3.1.71
https://doi.org/10.1109/AFGR.2004.1301635
https://doi.org/10.1109/CVPR.2004.1315205
https://doi.org/10.1109/ICIP.2004.1419763
https://doi.org/10.1360/jos182318
https://doi.org/10.1109/BTAS.2007.4401921
References: [1] Adini, Y., Moses, Y., & Ullman, S. (1997). Face recognition: the problem of compensating for changes in illumination direction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 721–732. https://doi.org/10.1109/34.598229
[2] Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711–720. https://doi.org/10.1109/34.598228
[3] Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A Training Algorithm for Optimal Margin Classifiers. Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory (COLT 92), Association for Computing Machinery, New York, NY, USA, 144–152. https://doi.org/10.1145/130385.130401
[4] Chen, T., Yin, W., Zhou, X. S., Comaniciu, D., & Huang, T. S. (2005). Illumination normalization for face recognition and uneven background correction using total variation based image models. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR05), 2, 532–539, San Diego, CA, USA. https://doi.org/10.1109/CVPR.2005.181
[5] Cortes, C., & Vapnik, V. (1995). Support-Vector Networks. Machine Learning, 20, 273–297. https://doi.org/10.1007/BF00994018
[6] Fan, R.-E., Chang, K.-W., Hsieh, C.-J., Wang, X.-R., & Lin, C.-J. (2008). LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9, 1871–1874.
[7] Georghiades, A. S., Belhumeur, P. N., & Kriegman, D. J. (2001). From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6), 643–660. https://doi.org/10.1109/34.927464
[8] Georghiades, A. S., Kriegman, D. J., & Belhumeur, P. N. (1998). Illumination Cones for Recognition under Variable Lighting: Faces. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 98), 52–59. https://doi.org/10.1109/CVPR.1998.698587
[9] Gonzalez, R. C., & Woods, R. E. (2001). Digital Image Processing (2nd. ed.). Addison-Wesley Longman Publishing Co. Inc., USA., 793.
[10] Gross, R., & Brajovic, V. (2003). An Image Preprocessing Algorithm for Illumination Invariant Face Recognition. 4th International Conference on Audio and Video Based Biometric Person Authentication (AVBPA), 10–18.
[11] Gryciuk, Yu. I., & Grytsyuk, P. Yu. (2015). Contemporary problems of scientific evaluation of the application software quality. Scientific Bulletin of UNFU, 25(7), 284–294. https://doi.org/10.15421/40250745
[12] Hallinan, P. W. (1994). A low-dimensional representation of human faces for arbitrary lighting conditions. 1994 Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 995–999. https://doi.org/10.1109/CVPR.1994.323941
[13] Heusch, G., Cardinaux, F., & Marcel, S. (2005). Lighting Normalization Algorithms for Face Verification. IDIAP.
[14] Hrytsiuk, Yuriy, & Bilas, Orest. (2019). Visualization of Software Quality Expert Assessment. IEEE 2019 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT 2019), (Vol. 2, pp. 156–160), 17–20 September, 2019. https://doi.org/10.1109/stc-csit.2019.8929778
[15] Jobson, D. J., Rahman, Z., & Woodell, G. A. (1997). A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society, 6(7), 965–976. https://doi.org/10.1109/83.597272
[16] Land, E. H., & McCann, J. J. (1971). Lightness and Retinex Theory. Journal of the Optical Society of America, 61, 1–11. https://doi.org/10.1364/josa.61.000001
[17] Lee, K. C., Ho, J., & Kriegman, D. J. (2005). Acquiring linear subspaces for face recognition under variable lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), 684–698. https://doi.org/10.1109/TPAMI.2005.92
[18] Makwana, R. M. (2010). Illumination invariant face recognition: A survey of passive methods. Procedia Computer Science, 2, 101–110. https://doi.org/10.1016/j.procs.2010.11.013
[19] Muruganantham, S., & Jebarajan, T. (2011). Exaggerate Self Quotient Image Model for Face Recognition Enlist Subspace Method. International Journal of Computer Science and Information Security (IJCSIS), 9(6), 264–269.
[20] Nimeroff, J. S., Simoncelli, E., & Dorsey, J. (1994). Efficient rerendering of naturally illuminated environments. Proceedings of the Fifth Annual Eurographics Symposium on Rendering.
[21] Nishiyama, M., Kozakaya, T., & Yamaguchi, O. (2008). Illumination Normalization using Quotient Image-based Techniques. In Recent Advances in Face Recognition (K. Delac, Ed.), IntechOpen, 97–108. https://doi.org/10.5772/6396
[22] Parubochyi, V., & Shuvar, R. (2019). Normalization Modifications for Fast Self-Quotient Image Method. 2019 XIth International Scientific and Practical Conference on Electronics and Information Technologies (ELIT), Lviv, Ukraine, 179–182. https://doi.org/10.1109/ELIT.2019.8892347
[23] Parubochyi, V., & Shuwar, R. (2018). Fast self-quotient image method for lighting normalization based on modified Gaussian filter kernel. The Imaging Science Journal, 66(8), 471–478. https://doi.org/10.1080/13682199.2018.1517857
[24] Pizer, M. S., Amburn, E. P., Austin, J. D., Cromartie, R., Geselowitz, A., Greer, T., Romeny, B. ter H., Zimmerman, J. B., & Zuiderveld, K. (1987). Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing, 39(3), 355–368. https://doi.org/10.1016/S0734-189X(87)80186-X
[25] Reza, A. M. (2004). Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement. The Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, 38, 35–44. https://doi.org/10.1023/B:VLSI.0000028532.53893.82
[26] Riklin-Raviv, T., & Shashua, A. (1999). The quotient image: Class based recognition and synthesis under varying illumination conditions. Proceedings of 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, 566–571. https://doi.org/10.1109/CVPR.1999.784968
[27] Shashua, A., & Riklin-Raviv, T. (2001). The Quotient Image: Class-Based Re-Rendering and Recognition with Varying Illuminations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2), 129–139. https://doi.org/10.1109/34.908964
[28] Srisuk, S., & Petpon, A. (2008). A Gabor Quotient Image for Face Recognition under Varying Illumination. Proceedings of the 4th International Symposium on Advances in Visual Computing, Part II (ISVC 08), Springer-Verlag, Berlin, Heidelberg, pp. 511–520. https://doi.org/10.1007/978-3-540-89646-3_50
[29] Turk, M., & Pentland, A. (1991). Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1), 71–86. https://doi.org/10.1162/jocn.1991.3.1.71
[30] Wang, H., Li, S. Z., & Wang, Y. (2004). Face recognition under varying lighting conditions using self quotient image. Proceedings of Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, South Korea, 819–824. https://doi.org/10.1109/AFGR.2004.1301635
[31] Wang, H., Li, S. Z., & Wang, Y. (2004). Generalized quotient image. Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, 498–505. https://doi.org/10.1109/CVPR.2004.1315205
[32] Wang, H., Li, S. Z., Wang, Y., & Zhang, J. (2004). Self quotient image for face recognition. 2004 International Conference on Image Processing (ICIP 04), Singapore, 2, 1397–1400. https://doi.org/10.1109/ICIP.2004.1419763
[33] Xiao-guang, H., Jie, T., Li-fang, W., Yao-yao, Z., & Xin, Y. (2007). Illumination Normalization with Morphological Quotient Image. Journal of Software, 18(9), 2318–2325. https://doi.org/10.1360/jos182318
[34] Zou, X., Kittler, J., & Messer, K. (2007). Illumination Invariant Face Recognition: A Survey. First IEEE International Conference on Biometrics: Theory, Applications, and Systems, 1–8. https://doi.org/10.1109/BTAS.2007.4401921
Content type: Article
Appears in Collections: Ukrainian Journal of Information Technology. – 2020. – Vol. 2, No. 1

Files in This Item:
2020v2n1_Parubochyi_V_O-Performance_evaluation_8-14.pdf (1.54 MB, Adobe PDF)
2020v2n1_Parubochyi_V_O-Performance_evaluation_8-14__COVER.png (1.88 MB, PNG image)