
Please use this identifier to cite or link to this item: https://oldena.lpnu.ua/handle/ntb/52232
Full metadata record
dc.contributor.author: Melnyk, Anatoliy
dc.contributor.author: Kozak, Nazar
dc.date.accessioned: 2020-06-16T08:12:17Z
dc.date.available: 2020-06-16T08:12:17Z
dc.date.created: 2019-02-26
dc.date.issued: 2019-02-26
dc.identifier.citation: Melnyk A. Data Correction Using Hamming Coding and Hash Function and its CUDA Implementation / Anatoliy Melnyk, Nazar Kozak // Advances in Cyber-Physical Systems. — Lviv : Lviv Politechnic Publishing House, 2019. — Vol. 4. — No. 2. — P. 100–104.
dc.identifier.uri: https://ena.lpnu.ua/handle/ntb/52232
dc.description.abstract: This article deals with the use of a block code over the entire volume of data. A hash function is used to increase the number of errors that can be detected. The automatic parallelization of this code using special tools is also considered. (A minimal illustrative sketch of this general idea follows the metadata record below.)
dc.format.extent: 100-104
dc.language.iso: en
dc.publisher: Видавництво Львівської політехніки
dc.publisher: Lviv Politechnic Publishing House
dc.relation.ispartof: Advances in Cyber-Physical Systems, 2 (4), 2019
dc.subject: GPGPU
dc.subject: Hamming code
dc.subject: hash function
dc.subject: automatic parallelization
dc.title: Data Correction Using Hamming Coding and Hash Function and its CUDA Implementation
dc.type: Article
dc.rights.holder: © Lviv Polytechnic National University, 2019
dc.rights.holder: © Melnyk A., Kozak N., 2019
dc.contributor.affiliation: Lviv Polytechnic National University
dc.format.pages: 5
dc.relation.references: [1] History of Hamming Codes. Archived from the original on 2007-10-25. Retrieved 2008-04-03.
dc.relation.references: [2] NVIDIA CUDA Programming Guide, Version 2.1, 2008.
dc.relation.references: [3] M. M. Baskaran, U. Bondhugula, S. Krishnamoorthy, J. Ramanujam, A. Rountev, and P. Sadayappan. A Compiler Framework for Optimization of Affine Loop Nests for GPGPUs. In Proc. International Conference on Supercomputing, 2008.
dc.relation.references: [4] N. Fujimoto. Fast Matrix-Vector Multiplication on GeForce 8800 GTX. In Proc. IEEE International Parallel & Distributed Processing Symposium, 2008.
dc.relation.references: [5] N. Govindaraju, B. Lloyd, Y. Dotsenko, B. Smith, and J. Manferdelli. High performance discrete Fourier transforms on graphics processors. In Proc. Supercomputing, 2008.
dc.relation.references: [6] G. Ruetsch and P. Micikevicius. Optimizing Matrix Transpose in CUDA. NVIDIA, 2009.
dc.relation.references: [7] S. Ueng, M. Lathara, S. S. Baghsorkhi, and W. W. Hwu. CUDAlite: Reducing GPU Programming Complexity. In Proc. Workshops on Languages and Compilers for Parallel Computing, 2008.
dc.relation.references: [8] V. Volkov and J. W. Demmel. Benchmarking GPUs to tune dense linear algebra. In Proc. Supercomputing, 2008.
dc.relation.references: [9] J. A. Stratton, S. S. Stone, and W. W. Hwu. MCUDA: An efficient implementation of CUDA kernels on multicores. IMPACT Technical Report IMPACT-08-01, UIUC, Feb. 2008.
dc.relation.references: [10] S. Ryoo, C. I. Rodrigues, S. S. Stone, S. S. Baghsorkhi, S. Ueng, J. A. Stratton, and W. W. Hwu. Optimization space pruning for a multithreaded GPU. In Proc. International Symposium on Code Generation and Optimization, 2008.
dc.relation.references: [11] S. Ryoo, C. I. Rodrigues, S. S. Baghsorkhi, S. S. Stone, D. B. Kirk, and W. W. Hwu. Optimization principles and application performance evaluation of a multithreaded GPU using CUDA. In Proc. ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2008.
dc.relation.references: [12] S. Hong and H. Kim. An analytical model for GPU architecture with memory-level and thread-level parallelism awareness. In Proc. International Symposium on Computer Architecture, 2009.
dc.citation.journalTitle: Advances in Cyber-Physical Systems
dc.citation.issue: 2
dc.citation.spage: 100
dc.citation.epage: 104
dc.coverage.placename: Львів
dc.coverage.placename: Lviv
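
The abstract above describes the method only at a high level: encode the data with a Hamming block code, add a hash of the original data so that corruption beyond the code's correction capability is still detected, and parallelize the encoding with CUDA. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the Hamming(7,4) bit layout, the one-thread-per-nibble kernel structure, the FNV-1a hash, and all names (encode_kernel, hamming74_encode, fnv1a) are assumptions made here for the example.

```cuda
// hamming74_sketch.cu -- illustrative sketch only; compile with: nvcc hamming74_sketch.cu
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// Encode data bits d1..d4 (LSB-first in `nibble`) into a Hamming(7,4) codeword
// laid out as [p1 p2 d1 p3 d2 d3 d4] in bits 0..6.
__host__ __device__ unsigned char hamming74_encode(unsigned char nibble)
{
    unsigned char d1 = (nibble >> 0) & 1;
    unsigned char d2 = (nibble >> 1) & 1;
    unsigned char d3 = (nibble >> 2) & 1;
    unsigned char d4 = (nibble >> 3) & 1;
    unsigned char p1 = d1 ^ d2 ^ d4;   // parity over positions 1,3,5,7
    unsigned char p2 = d1 ^ d3 ^ d4;   // parity over positions 2,3,6,7
    unsigned char p3 = d2 ^ d3 ^ d4;   // parity over positions 4,5,6,7
    return (unsigned char)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                           (d2 << 4) | (d3 << 5) | (d4 << 6));
}

// One thread per nibble: a simple data-parallel layout assumed for this sketch.
__global__ void encode_kernel(const unsigned char *nibbles,
                              unsigned char *codewords, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        codewords[i] = hamming74_encode(nibbles[i]);
}

// FNV-1a hash over the raw data: an extra check that can flag multi-bit
// corruption a single Hamming block would miss or miscorrect.
uint32_t fnv1a(const unsigned char *data, int n)
{
    uint32_t h = 2166136261u;
    for (int i = 0; i < n; ++i) { h ^= data[i]; h *= 16777619u; }
    return h;
}

int main()
{
    const int n = 8;
    unsigned char host_nibbles[n] = {0x0, 0x1, 0x5, 0x9, 0xA, 0xC, 0xE, 0xF};
    unsigned char host_codes[n];

    unsigned char *dev_nibbles, *dev_codes;
    cudaMalloc(&dev_nibbles, n);
    cudaMalloc(&dev_codes, n);
    cudaMemcpy(dev_nibbles, host_nibbles, n, cudaMemcpyHostToDevice);

    encode_kernel<<<1, 128>>>(dev_nibbles, dev_codes, n);
    cudaMemcpy(host_codes, dev_codes, n, cudaMemcpyDeviceToHost);

    printf("hash of source data: 0x%08x\n", fnv1a(host_nibbles, n));
    for (int i = 0; i < n; ++i)
        printf("nibble 0x%X -> codeword 0x%02X\n", host_nibbles[i], host_codes[i]);

    cudaFree(dev_nibbles);
    cudaFree(dev_codes);
    return 0;
}
```

A real implementation would process far larger buffers and include the corresponding syndrome-based decode and correction step; this sketch shows only the encoding side and the hash used as a whole-data integrity check.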
Appears in Collections: Advances In Cyber-Physical Systems. – 2019. – Vol. 4, No. 2

Files in This Item:
19v4n2_Melnyk_A-Data_Correction_Using_Hamming_100-104.pdf (491.98 kB, Adobe PDF)
19v4n2_Melnyk_A-Data_Correction_Using_Hamming_100-104__COVER.png (1.16 MB, image/png)

