Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/42836
Title: Паралельне фільтрування рангу на основі імпульсної нейронної мережі типу “K-WINNERS-TAKE-ALL”
Other Titles: Parallel rank-order filtering based on impulse K-WINNERS-TAKE-ALL neural network
Authors: Тимощук, П. В.
Tymoshchuk, P. V.
Affiliation: Національний університет “Львівська політехніка”
Lviv Polytechnic National University
Bibliographic description (Ukraine): Тимощук П. В. Паралельне фільтрування рангу на основі імпульсної нейронної мережі типу “K-WINNERS-TAKE-ALL” / П. В. Тимощук // Вісник Національного університету «Львівська політехніка». Серія: Комп’ютерні системи та мережі. — Львів : Видавництво Львівської політехніки, 2017. — № 881. — С. 160–165.
Bibliographic description (International): Tymoshchuk P. V. Parallel rank-order filtering based on impulse K-WINNERS-TAKE-ALL neural network / P. V. Tymoshchuk // Visnyk Natsionalnoho universytetu "Lvivska politekhnika". Serie: Kompiuterni systemy ta merezhi. — Lviv : Vydavnytstvo Lvivskoi politekhniky, 2017. — No 881. — P. 160–165.
Is part of: Вісник Національного університету «Львівська політехніка». Серія: Комп’ютерні системи та мережі, 881, 2017
Journal/Collection: Вісник Національного університету «Львівська політехніка». Серія: Комп’ютерні системи та мережі
Issue: 881
Issue Date: 28-Mar-2017
Publisher: Видавництво Львівської політехніки
Place of the edition/event: Львів
UDC: 004.032.026
Keywords: мережа неперервного часу
нейронна мережа (НМ) типу “K-winners-take-all” (KWTA)
рівняння стану з розривною правою частиною
шлейф імпульсів
дельта-функція Дірака
паралельне фільтрування рангу
continuous-time network
K-winners-take-all (KWTA) neural network (NN)
state equation with a discontinuous right-hand side
impulse train
Dirac delta function
parallel rank-order filtering
Number of pages: 6
Page range: 160-165
Start page: 160
End page: 165
Abstract: Представлено нейронну мережу (НМ) неперервного часу типу “K-winners-take-all” (KWTA), яка ідентифікує К найбільші з-поміж N входів, де керуючий сигнал 1 ≤ K < N. Мережа описується рівнянням стану з розривною правою частиною і вихідним рівнянням. Рівняння стану містить шлейф імпульсів, які описуються сумою дельта-функцій Дірака. Головною перевагою мережі порівняно з іншими близькими аналогами є відсутність обмежень на швидкість збіжності. Описано застосування мережі для паралельного фільтрування рангу. Отримані теоретичні результати проілюстровано прикладом комп’ютерного моделювання, який демонструє ефективність мережі.
A continuous-time K-winners-take-all (KWTA) neural network (NN) capable of identifying the K largest of N inputs, where the command signal satisfies 1 ≤ K < N, is presented. The network is described by a state equation with a discontinuous right-hand side together with an output equation. The state equation contains an impulse train defined by a sum of Dirac delta functions. The main advantage of the network over comparable designs is that it is not subject to intrinsic convergence-speed limitations. An application of the network to parallel rank-order filtering is described. The theoretical results are illustrated by a computer-simulation example that demonstrates the network's performance.
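The selection behavior the abstract describes can be illustrated with a minimal sketch. This is a purely behavioral model, not the paper's impulse neural network: `kwta` and `rank_order_filter` are hypothetical helper names, and distinct inputs are assumed (with ties, a thresholding KWTA may mark more than K winners).

```python
# Behavioral sketch of KWTA selection and its use for rank-order
# filtering (hypothetical names; not the paper's continuous-time model).

def kwta(inputs, k):
    """Return a 0/1 mask marking the k largest of the N inputs.

    Assumes distinct input values and a command signal 1 <= k < N.
    """
    assert 1 <= k < len(inputs)
    # The k-th largest value acts as the selection threshold.
    threshold = sorted(inputs, reverse=True)[k - 1]
    return [1 if x >= threshold else 0 for x in inputs]

def rank_order_filter(window, r):
    """r-th largest value in the window, read off the KWTA winners.

    With k = r, the smallest winner is exactly the rank-r element.
    """
    winners = [x for x, w in zip(window, kwta(window, r)) if w == 1]
    return min(winners)

window = [3.0, 9.0, 1.0, 7.0, 5.0]
print(kwta(window, 2))               # marks 9.0 and 7.0: [0, 1, 0, 1, 0]
print(rank_order_filter(window, 3))  # 3rd largest value: 5.0
```

A median filter is the special case r = (N + 1) / 2, which is why a KWTA core with an adjustable command signal K can implement a whole family of rank-order filters in parallel over the same window.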
URI: https://ena.lpnu.ua/handle/ntb/42836
Copyright owner: © Національний університет „Львівська політехніка“, 2017
© Тимощук П. В., 2017
References (Ukraine): 1. Majani E., Erlanson R., and Abu-Mostafa Y. On the k-winners-take-all network // in Advances in Neural Information Processing Systems 1, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Eds. San Mateo, CA: Morgan Kaufmann, 1989, pp. 634–642.
2. Tymoshchuk P. A dynamic K-winners take all analog neural circuit // in Proc. IV Int. Conf. “Perspective technologies and methods in MEMS design”, Lviv-Polyana, Ukraine, 2008, pp. 13–18.
3. Wang J. Analysis and design of a k-winners-take-all network with a single state variable and the Heaviside step activation function // IEEE Trans. Neural Netw., vol. 21, no. 9, pp. 1496–1506, Sept. 2010.
4. Lippmann R. P. An introduction to computing with neural nets // IEEE Acoustics, Speech and Signal Processing Magazine, vol. 3, no. 4, pp. 4–22, Apr. 1987.
5. Tymoshchuk P. and Kaszkurewicz E. A winner-take-all circuit using neural networks as building blocks // Neurocomputing, vol. 64, pp. 375–396, Mar. 2005.
6. Wunsch D. C. The cellular simultaneous recurrent network adaptive critic design for the generalized maze problem has a simple closed-form solution // in Proc. Int. Joint Conf. Neural Netw., Jul. 2000, pp. 79–82.
7. Atkins M. Sorting by Hopfield nets // in Proc. Int. Joint Conf. Neural Netw., Jun. 1989, pp. 65–68.
8. Binh L. N. and Chong H. C. A neural-network contention controller for packet switching networks // IEEE Trans. Neural Netw., vol. 6, no. 6, pp. 1402–1410, Nov. 1995.
9. Itti L., Koch C., and Niebur E. A model of saliency-based visual attention for rapid scene analysis // IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, Nov. 1998.
10. Cilingiroglu U. and Dake T. L. E. Rank-order filter design with a sampled-analog multiple-winners-take-all core // IEEE J. Solid-State Circuits, vol. 37, no. 2, pp. 978–984, Aug. 2002.
11. Erlanson R. and Abu-Mostafa Y. Analog neural networks as decoders // in Advances in Neural Information Processing Systems, vol. 1, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Eds. San Mateo, CA: Morgan Kaufmann, 1991.
12. Fish A., Akselrod D., and Yadid-Pecht O. High precision image centroid computation via an adaptive k-winner-take-all circuit in conjunction with a dynamic element matching algorithm for star tracking applications // Analog Integrated Circuits and Signal Processing, vol. 39, no. 3, pp. 251–266, Jun. 2004.
13. Jain B. J. and Wysotzki F. Central clustering of attributed graphs // Machine Learning, vol. 56, no. 1, pp. 169–207, Jul. 2004.
14. Chartier S., Giguere G., Langlois D. and Sioufi R. Bidirectional associative memories, self-organizing maps and k-winners-take-all: uniting feature extraction and topological principles // in Proc. Int. Joint Conf. Neural Netw., Jun. 2009, pp. 503–510.
15. DeSouza G. N. and Zak A. C. Vision for mobile robot navigation: a survey // IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 2, pp. 237–267, Feb. 2002.
16. O’Reilly R. C. and Munakata Y. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press, 2000.
17. Lazzaro J., Ryckebusch S., Mahowald M. A., and Mead C. A. Winner-take-all networks of O(N) complexity // in Advances in Neural Information Processing Systems 1, R. P. Lippmann, J. E. Moody, and D. S. Touretzky, Eds. San Mateo, CA: Morgan Kaufmann, 1989, pp. 703–711.
18. Sekerkiran B. and Cilingiroglu U. A CMOS K-winners-take-all circuit with O(N) complexity // IEEE Trans. Circuits Syst. II, vol. 46, no. 1, pp. 1–5, Jan. 1999.
19. Maass W. Neural computation with winner-take-all as the only nonlinear operation // in Advances in Neural Information Processing Systems, vol. 12, S. A. Solla, T. K. Leen, and K.-R. Mueller, Eds. Cambridge, MA: MIT Press, 2000, pp. 293–299.
20. Calvert B. D. and Marinov C. A. Another K-winners-take-all analog neural network // IEEE Trans. Neural Netw., vol. 4, no. 1, pp. 829–838, Jul. 2000.
21. Wang J. Analogue winner-take-all neural networks for determining maximum and minimum signals // Int. J. Electron., vol. 77, no. 3, pp. 355–367, Mar. 1994.
22. Cichocki A. and Unbehauen R. Neural Networks for Optimization and Signal Processing. New York, NY, USA: Wiley, 1993.
Content type: Article
Appears in Collections:Комп'ютерні системи та мережі. – 2017. – №881

Files in This Item:
2017n881_Tymoshchuk_P_V-Parallel_rank_order_160-165.pdf (564.47 kB, Adobe PDF)
2017n881_Tymoshchuk_P_V-Parallel_rank_order_160-165__COVER.png (364.41 kB, image/png)

