Understanding the Science Behind GPGPU Computational Models: Overcoming Challenges and Debunking Myths
The short-term future of hardware technology no longer lies in major technological breakthroughs like those of the '80s and '90s, which often outpaced Moore's law, but rather in the ability to build more power-efficient, parallel architectures that adapt readily to today's computational needs. Algorithms, too, require changes for increased parallelism in order to achieve higher throughput and lower storage requirements, especially when dealing with GPGPU computational models. Most often, these changes combine several factors: efficient data structures, improved software parallelism through better mathematical models, and more efficient hardware utilization. Yet researchers frequently take into account only two of these three criteria. Unfortunately, this leads to overstated research results, which end up being published while often lacking proper peer review. This presentation discusses the dangers of such approaches, along with tips and advice on how to avoid them in the long run, with the final purpose of producing high-quality, tangible research results.