Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/52232
Title: Data Correction Using Hamming Coding and Hash Function and its CUDA Implementation
Authors: Melnyk, Anatoliy
Kozak, Nazar
Affiliation: Lviv Polytechnic National University
Bibliographic description (Ukraine): Melnyk A. Data Correction Using Hamming Coding and Hash Function and its CUDA Implementation / Anatoliy Melnyk, Nazar Kozak // Advances in Cyber-Physical Systems. — Lviv : Lviv Politechnic Publishing House, 2019. — Vol 4. — No 2. — P. 100–104.
Bibliographic description (International): Melnyk A. Data Correction Using Hamming Coding and Hash Function and its CUDA Implementation / Anatoliy Melnyk, Nazar Kozak // Advances in Cyber-Physical Systems. — Lviv : Lviv Politechnic Publishing House, 2019. — Vol 4. — No 2. — P. 100–104.
Is part of: Advances in Cyber-Physical Systems, Vol. 4, No. 2, 2019
Journal/Collection: Advances in Cyber-Physical Systems
Issue: 2
Issue Date: 26-Feb-2019
Publisher: Lviv Politechnic Publishing House
Place of the edition/event: Lviv
Keywords: GPGPU
Hamming code
hash function
automatic parallelization
Number of pages: 5
Page range: 100-104
Start page: 100
End page: 104
Abstract: This article deals with applying a block code to an entire data set. A hash function is used to increase the number of errors that can be detected beyond what the block code alone catches. Automatic parallelization of this code using dedicated tools is also considered.
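The record does not reproduce the paper's exact scheme, but the abstract's idea (a block code corrects single-bit errors, while a hash over the whole payload detects residual errors the code misses or miscorrects) can be sketched. The following is a minimal illustration, assuming Hamming(7,4) as the block code and SHA-256 as the hash; both concrete choices are assumptions for the example, not taken from the paper.

```python
import hashlib

# Hamming(7,4): encode a 4-bit nibble into a 7-bit codeword that can
# correct any single-bit error. Codeword layout (1-indexed positions):
# p1 p2 d1 p4 d2 d3 d4, with parity bits at positions 1, 2, 4.

def hamming74_encode(nibble):
    d = [(nibble >> i) & 1 for i in range(4)]   # data bits d1..d4, LSB first
    p1 = d[0] ^ d[1] ^ d[3]                     # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                     # covers positions 2,3,6,7
    p4 = d[1] ^ d[2] ^ d[3]                     # covers positions 4,5,6,7
    return [p1, p2, d[0], p4, d[1], d[2], d[3]]

def hamming74_decode(bits):
    b = list(bits)
    # Recompute the parity checks; the syndrome is the 1-indexed
    # position of a single-bit error (0 means no error detected).
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s4 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + (s2 << 1) + (s4 << 2)
    if syndrome:
        b[syndrome - 1] ^= 1                    # flip the erroneous bit
    d = [b[2], b[4], b[5], b[6]]
    return sum(bit << i for i, bit in enumerate(d))

def encode_message(data: bytes):
    """Hamming-encode each nibble of the data and attach a hash of the
    original bytes, so multi-bit errors that defeat the code are still
    detected after decoding."""
    codewords = []
    for byte in data:
        codewords.append(hamming74_encode(byte & 0x0F))   # low nibble
        codewords.append(hamming74_encode(byte >> 4))     # high nibble
    return codewords, hashlib.sha256(data).digest()

def decode_message(codewords, digest):
    data = bytearray()
    for lo, hi in zip(codewords[0::2], codewords[1::2]):
        data.append(hamming74_decode(lo) | (hamming74_decode(hi) << 4))
    return bytes(data), hashlib.sha256(data).digest() == digest
```

A single flipped bit in any codeword is corrected transparently, while a double-bit error inside one codeword (which Hamming(7,4) miscorrects) is caught by the hash comparison. Since each codeword is decoded independently, the per-codeword work maps naturally onto one GPU thread each, which is the kind of parallelization the paper targets with CUDA.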
URI: https://ena.lpnu.ua/handle/ntb/52232
Copyright owner: © Lviv Polytechnic National University, 2019
© Melnyk A., Kozak N., 2019
References: [1] History of Hamming Codes. Archived from the original on 2007-10-25; retrieved 2008-04-03.
[2] NVIDIA CUDA Programming Guide, Version 2.1, 2008.
[3] M. M. Baskaran, U. Bondhugula, S. Krishnamoorthy, J. Ramanujam, A. Rountev, and P. Sadayappan. A Compiler Framework for Optimization of Affine Loop Nests for GPGPUs. In Proc. International Conference on Supercomputing, 2008.
[4] N. Fujimoto. Fast Matrix-Vector Multiplication on GeForce 8800 GTX. In Proc. IEEE International Parallel & Distributed Processing Symposium, 2008.
[5] N. Govindaraju, B. Lloyd, Y. Dotsenko, B. Smith, and J. Manferdelli. High performance discrete Fourier transforms on graphics processors. In Proc. Supercomputing, 2008.
[6] G. Ruetsch and P. Micikevicius. Optimize matrix transpose in CUDA. NVIDIA, 2009.
[7] S. Ueng, M. Lathara, S. S. Baghsorkhi, and W. W. Hwu. CUDA-lite: Reducing GPU Programming Complexity. In Proc. Workshop on Languages and Compilers for Parallel Computing, 2008.
[8] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. Supercomputing, 2008.
[9] J. A. Stratton, S. S. Stone, and W. W. Hwu. MCUDA: An efficient implementation of CUDA kernels on multicores. IMPACT Technical Report IMPACT-08-01, UIUC, Feb. 2008.
[10] S. Ryoo, C. I. Rodrigues, S. S. Stone, S. S. Baghsorkhi, S. Ueng, J. A. Stratton, and W. W. Hwu. Optimization space pruning for a multithreaded GPU. In Proc. International Symposium on Code Generation and Optimization, 2008.
[11] S. Ryoo, C. I. Rodrigues, S. S. Baghsorkhi, S. S. Stone, D. B. Kirk, and W. W. Hwu. Optimization principles and application performance evaluation of a multithreaded GPU using CUDA. In Proc. ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2008.
[12] S. Hong and H. Kim. An analytical model for GPU architecture with memory-level and thread-level parallelism awareness. In Proc. International Symposium on Computer Architecture, 2009.
Content type: Article
Appears in Collections:Advances In Cyber-Physical Systems. – 2019. – Vol. 4, No. 2

Files in This Item:
19v4n2_Melnyk_A-Data_Correction_Using_Hamming_100-104.pdf (491.98 kB, Adobe PDF)
19v4n2_Melnyk_A-Data_Correction_Using_Hamming_100-104__COVER.png (1.16 MB, image/png)