Search results for "Parallel computing"

Showing 10 of 189 documents

Mapping of BLASTP Algorithm onto GPU Clusters

2011

Searching a protein sequence database is a fundamental and frequently repeated task in computational biology and bioinformatics. However, the high computational cost and long runtime of many database-scanning algorithms on sequential architectures severely restrict their application to large-scale protein databases such as GenBank. The continuing exponential growth of sequence databases and the high rate of newly generated queries further aggravate the situation and create a strong demand for time-efficient, scalable database-searching algorithms. In this paper, we demonstrate how GPU clusters, powered by the Compute Unified Device Architecture (CUDA), OpenMP, and MPI parallel programmi…

Keywords: Source code; Sequence database; Computer science; Message passing; Parallel computing; GPU cluster; Computational science; CUDA; Task (computing); Search algorithm; GenBank; Scalability; Algorithm
Published in: 2011 IEEE 17th International Conference on Parallel and Distributed Systems

The Dynamical Kernel Scheduler - Part 1

2015

Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high-performance computing. However, developing software for these hardware accelerators introduces additional challenges for the developer, such as exposing additional parallelism, dealing with different hardware designs, and using multiple development frameworks to support devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between host and device, schedules task execution, and provides a library of built-in algorithms. …

Keywords: Speedup; Nuclear & particles physics; Computer science; Fast Fourier transform; General Physics and Astronomy; Physical sciences; Parallel computing; Computational Physics (physics.comp-ph); Supercomputer; CUDA; Software; Kernel (image processing); Hardware and Architecture; Hardware acceleration; Xeon Phi

Optimization of Reactive Force Field Simulation: Refactor, Parallelization, and Vectorization for Interactions

2022

Molecular dynamics (MD) simulations are playing an increasingly important role in many areas ranging from chemical materials to biological molecules. With the continuing development of MD models, the potentials are getting larger and more complex. In this article, we focus on the reactive force field (ReaxFF) potential from LAMMPS to optimize the computation of interactions. We present our efforts on refactoring for neighbor list building, bond order computation, as well as valence angles and torsion angles computation. After redesigning these kernels, we develop a vectorized implementation for non-bonded interactions, which is nearly 100× faster than the management processing…
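The payoff of vectorizing a pairwise interaction kernel can be illustrated generically. The Lennard-Jones form below is only a stand-in (ReaxFF's non-bonded terms are more involved), and both function names are hypothetical; the point is replacing a scalar double loop with one whole-array computation:

```python
import numpy as np

def lj_energy_loop(pos, eps=1.0, sigma=1.0):
    """Scalar reference: pairwise Lennard-Jones energy via an O(N^2) loop."""
    n = len(pos)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = np.sum((pos[i] - pos[j]) ** 2)
            sr6 = (sigma * sigma / r2) ** 3
            e += 4.0 * eps * (sr6 * sr6 - sr6)
    return e

def lj_energy_vec(pos, eps=1.0, sigma=1.0):
    """Vectorized version: all pair distances at once via broadcasting."""
    diff = pos[:, None, :] - pos[None, :, :]       # (N, N, 3) displacement table
    r2 = np.einsum('ijk,ijk->ij', diff, diff)      # squared pair distances
    iu = np.triu_indices(len(pos), k=1)            # each unique pair once
    sr6 = (sigma * sigma / r2[iu]) ** 3
    return np.sum(4.0 * eps * (sr6 * sr6 - sr6))
```

Both functions compute the same total energy; the vectorized form trades O(N²) memory for letting the hardware process all pairs in wide SIMD lanes, which is the same trade the article's refactored kernels exploit.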

Keywords: Speedup; Computational Theory and Mathematics; Xeon; Hardware and Architecture; Computer science; Computation; Signal Processing; Vectorization (mathematics); Node (circuits); Parallel computing; Supercomputer; Force field (chemistry); Sunway TaihuLight
Published in: IEEE Transactions on Parallel and Distributed Systems

Reducing complexity in H.264/AVC motion estimation by using a GPU

2011

H.264/AVC applies a complex mode decision technique that has high computational complexity in order to reduce the temporal redundancies of video sequences. Several algorithms have been proposed in the literature in recent years with the aim of accelerating this part of the encoding process. Recently, with the emergence of many-core processors or accelerators, a new approach can be adopted for reducing the complexity of the H.264/AVC encoding algorithm. This paper focuses on reducing the inter prediction complexity adopted in H.264/AVC and proposes a GPU-based implementation using CUDA. Experimental results show that the proposed approach reduces the complexity by as much as 99% (100x of spe…

Keywords: Speedup; Computational complexity theory; Computer science; Networking & telecommunications; Coding and information theory; Parallel computing; CUDA; Algorithmic efficiency; Worst-case complexity; Artificial intelligence & image processing; Context-adaptive binary arithmetic coding; Data compression; Context-adaptive variable-length coding

CUDA-enabled Sparse Matrix–Vector Multiplication on GPUs using atomic operations

2013

We propose the Sliced Coordinate Format (SCOO) for Sparse Matrix-Vector Multiplication on GPUs. An associated CUDA implementation that takes advantage of atomic operations is presented. We propose partitioning methods to transform a given sparse matrix into SCOO format. An efficient dual-GPU implementation that overlaps computation and communication is described. Extensive performance comparisons of SCOO against other formats on GPUs and CPUs are provided. Existing formats for Sparse Matrix-Vector Multiplication (SpMV) on the GPU outperform their corresponding implementations on multi-core CPUs. In this paper, we present a new format called Sliced COO (SCOO) and an efficient CUDA i…
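The scatter-add pattern the abstract describes — one GPU thread per nonzero, with atomic additions resolving collisions in the output vector — can be sketched on the CPU. `spmv_coo` is an illustrative name, and `np.add.at` stands in for the `atomicAdd` a CUDA kernel would use; this shows plain COO, not the sliced partitioning SCOO adds on top:

```python
import numpy as np

def spmv_coo(rows, cols, vals, x, n_rows):
    """COO sparse matrix-vector product y = A @ x via scatter-add.

    On a GPU, each nonzero (rows[t], cols[t], vals[t]) is handled by one
    thread, and the accumulation into y[rows[t]] is an atomicAdd;
    np.add.at is the unbuffered serial analogue of that scatter-add.
    """
    y = np.zeros(n_rows)
    np.add.at(y, rows, vals * x[cols])
    return y
```

Note that `np.add.at` (unlike `y[rows] += ...`) accumulates correctly when the same row index appears multiple times, which is exactly the collision case atomics solve on the GPU.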

Keywords: Speedup; Computer Networks and Communications; Computer science; Sparse matrix-vector multiplication; Parallel computing; Computer Graphics and Computer-Aided Design; Theoretical Computer Science; Matrix (mathematics); CUDA; Artificial Intelligence; Hardware and Architecture; Benchmark (computing); Multiplication; General-purpose computing on graphics processing units; Software; Sparse matrix
Published in: Parallel Computing

Finding near-perfect parameters for hardware and code optimizations with automatic multi-objective design space explorations

2012

Summary: In the design process of computer systems or processor architectures, many different parameters are typically exposed to configure, tune, and optimize each component of a system. For evaluations, and before production, it is desirable to know the best setting for all parameters. Processing speed is no longer the only objective that needs to be optimized; power consumption, area, and so on have become very important. Thus, the best configurations have to be found with respect to multiple objectives. In this article, we use a multi-objective design space exploration tool called the Framework for Automatic Design Space Exploration (FADSE) to automatically find near-optimal configurations in …
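The dominance test at the heart of any multi-objective search can be sketched as follows; `pareto_front` is a hypothetical helper, and FADSE itself layers heuristic search over large design spaces on top of a test like this rather than enumerating points:

```python
def pareto_front(points):
    """Return the non-dominated points when every objective is minimized.

    A configuration p is dominated if some other configuration q is at
    least as good in every objective (e.g. runtime, power, area) and
    strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(oq <= op for oq, op in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

With objectives (runtime, power), `pareto_front([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])` keeps only the three trade-off points on the front; (3, 3) and (4, 4) are dominated by (2, 2).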

Keywords: Speedup; Computer Networks and Communications; Design space exploration; Computer science; Parallel computing; Program optimization; Multi-objective optimization; Computer Science Applications; Theoretical Computer Science; Microarchitecture; Computational Theory and Mathematics; Scalability; Code (cryptography); Engineering design process; Software; Computer hardware
Published in: Concurrency and Computation: Practice and Experience

CliffoSor: A Parallel Embedded Architecture for Geometric Algebra and Computer Graphics

2006

Geometric object representations and their transformations are two key aspects of computer graphics applications. Traditionally, compute-intensive matrix calculations are involved in modeling and rendering 3D scenery. Geometric algebra (a.k.a. Clifford algebra) is gaining growing attention for its natural way of modeling geometric facts, coupled with its power as an analytical tool for symbolic calculations. In this paper, the architecture of CliffoSor (Clifford Processor) is introduced. CliffoSor is an embedded parallel coprocessing core that offers direct hardware support for Clifford algebra operators. A prototype implementation on an FPGA board is detailed. Initial test results show more …
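As a rough illustration of the operators such a coprocessor supports, here is a software sketch of the geometric product in Cl(2,0), using the common bitmask encoding of basis blades (bit 0 for e1, bit 1 for e2). The function names are illustrative; this is not CliffoSor's hardware algorithm:

```python
from collections import defaultdict

def reorder_sign(a, b):
    """Sign from reordering the generators of blades a, b (bitmasks)
    into canonical order; each transposition contributes a factor -1."""
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s & 1 else 1

def gp(u, v):
    """Geometric product of multivectors given as {blade_mask: coeff}.

    In Cl(2,0) every generator squares to +1, so the result blade is
    the XOR of the operand blades, with the reordering sign applied.
    """
    out = defaultdict(float)
    for ba, ca in u.items():
        for bb, cb in v.items():
            out[ba ^ bb] += reorder_sign(ba, bb) * ca * cb
    return dict(out)
```

For example, with e1 = `{0b01: 1.0}` and e2 = `{0b10: 1.0}`, `gp(e1, e2)` yields the bivector e12 while `gp(e2, e1)` yields -e12, reproducing the anticommutation that matrix representations obtain only indirectly.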

Keywords: Speedup; Computer science; Clifford algebra; Solid modeling; Parallel computing; Computational geometry; Application software; Computational science; Computer graphics; Geometric algebra; Symbolic and algebraic manipulation; Representation (mathematics)

cuBool: Bit-Parallel Boolean Matrix Factorization on CUDA-Enabled Accelerators

2018

Boolean Matrix Factorization (BMF) is a commonly used technique in the field of unsupervised data analytics. The goal is to decompose a ground-truth matrix C into a product of two matrices A and B that forms either an exact or an approximate rank-k factorization of C. Both exact and approximate factorization are time-consuming tasks due to their combinatorial complexity. In this paper, we introduce a massively parallel implementation of BMF, namely cuBool, to significantly speed up the factorization of huge Boolean matrices. Our approach is based on alternately adjusting rows and columns of A and B using thousands of lightweight CUDA threads. The massively parallel manipulation of entries …
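The alternating adjustment of A and B can be illustrated with a small serial sketch. `bmf` below greedily flips one bit at a time instead of running thousands of CUDA threads, so it is a toy analogue of the approach, not cuBool's actual kernel:

```python
import numpy as np

def bool_prod(A, B):
    """Boolean matrix product: OR over AND of boolean factors."""
    return (A.astype(int) @ B.astype(int)) > 0

def bmf(C, k, sweeps=5, seed=0):
    """Greedy alternating BMF sketch.

    Flip a single bit of A (then of B) and keep the flip unless it
    increases the Hamming error |C xor (A o B)|; repeat for a fixed
    number of sweeps. Error can never increase, so this converges to
    a local optimum of the rank-k Boolean factorization.
    """
    rng = np.random.default_rng(seed)
    m, n = C.shape
    A = rng.random((m, k)) < 0.5
    B = rng.random((k, n)) < 0.5

    def err():
        return int(np.sum(bool_prod(A, B) != C))

    for _ in range(sweeps):
        for M in (A, B):              # alternate between the two factors
            rows, cols = M.shape
            for i in range(rows):
                for j in range(cols):
                    before = err()
                    M[i, j] = not M[i, j]
                    if err() > before:
                        M[i, j] = not M[i, j]   # worse: revert the flip
    return A, B, err()
```

The parallel version evaluates many candidate flips concurrently per row or column; the serial sketch only preserves the alternating, error-monotone structure.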

Keywords: Speedup; Rank (linear algebra); Computer science; Parallel computing; Matrix decomposition; CUDA; Matrix (mathematics); Factorization; Information systems; Singular value decomposition; Artificial intelligence & image processing; Massively parallel; Integer (computer science)
Published in: 2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS)

Reconfigurable Accelerator for the Word-Matching Stage of BLASTN

2013

BLAST is one of the most popular sequence analysis tools used by molecular biologists. It is designed to efficiently find similar regions between two sequences that have biological significance. However, because the size of genomic databases is growing rapidly, the computation time of BLAST, when performing a complete genomic database search, is continuously increasing. Thus, there is a clear need to accelerate this process. In this paper, we present a new approach for genomic sequence database scanning utilizing reconfigurable field programmable gate array (FPGA)-based hardware. In order to derive an efficient structure for BLASTN, we propose a reconfigurable architecture to accelerate the…
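The word-matching stage itself is easy to sketch in software: index every w-length word of the subject sequence, then stream the query against that index. The function below is an illustrative serial analogue of the stage the FPGA pipeline accelerates (BLASTN's default word size is 11; a small w is used here only for readability), with hypothetical names:

```python
from collections import defaultdict

def word_matches(query, subject, w=11):
    """Seed stage of BLASTN: exact w-mer matches between query and subject.

    Returns (query_pos, subject_pos) pairs; later stages of BLAST would
    extend these seeds into full gapped alignments.
    """
    index = defaultdict(list)                      # w-mer -> subject positions
    for i in range(len(subject) - w + 1):
        index[subject[i:i + w]].append(i)
    hits = []
    for j in range(len(query) - w + 1):
        for i in index.get(query[j:j + w], ()):
            hits.append((j, i))
    return hits
```

An FPGA design replaces the hash lookup with parallel comparators or on-chip lookup tables so that one subject character can be consumed per clock cycle, which is where the speedup over a database-sized software scan comes from.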

Keywords: Speedup; Sequence database; Hardware and Architecture; Computer science; Sequence analysis; Genomics; Parallel computing; Electrical and Electronic Engineering; Data structure; Genomic databases; Software; Reconfigurable computing; Word (computer architecture)
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Simulating spin models on GPU

2010

Over the last couple of years it has been realized that the vast computational power of graphics processing units (GPUs) could be harvested for purposes other than the video game industry. This power, which at least nominally exceeds that of current CPUs by large factors, results from the relative simplicity of the GPU architectures as compared to CPUs, combined with a large number of parallel processing units on a single chip. To benefit from this setup for general computing purposes, the problems at hand need to be prepared in a way to profit from the inherent parallelism and hierarchical structure of memory accesses. In this contribution I discuss the performance potential for simulating…
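The data-parallel structure such spin-model simulations exploit can be sketched with a checkerboard Metropolis sweep of the 2D Ising model: sites of one sublattice share no bonds, so a GPU may update them all simultaneously without race conditions. This numpy version is an illustrative serial analogue, not the paper's CUDA code:

```python
import numpy as np

def sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model in checkerboard order.

    Updating all sites of one color at once is safe because no two
    same-color sites are neighbors; on a GPU each site of the active
    color gets its own thread, and numpy's whole-array update plays
    that role here.
    """
    ii, jj = np.indices(spins.shape)
    for color in (0, 1):
        mask = (ii + jj) % 2 == color
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
               + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nbr                     # cost of flipping each site
        accept = rng.random(spins.shape) < np.exp(-beta * dE)
        spins[mask & accept] *= -1                 # flip accepted active sites
    return spins
```

Negative `dE` gives an acceptance probability above 1, so energetically favorable flips are always taken, as the Metropolis rule requires; `np.roll` implements periodic boundary conditions.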

Keywords: Statistical Mechanics (cond-mat.stat-mech); Computer science; High Energy Physics - Lattice (hep-lat); Monte Carlo method; Physical sciences; General Physics and Astronomy; Parallel computing; Computational Physics (physics.comp-ph); Power (physics); CUDA; Parallel processing (DSP implementation); Hardware and Architecture; Parallelism; Ising model; Graphics; Video game
Published in: Computer Physics Communications