Search results for "multiprocessor"

Showing 8 of 18 documents

The Acts project: track reconstruction software for HL-LHC and beyond

2019

The reconstruction of the trajectories of charged particles in the tracking detectors of high-energy physics experiments is one of the most difficult and complex tasks of event reconstruction at particle colliders. Because pattern-recognition algorithms scale combinatorially with track multiplicity, they become the largest contributor to CPU consumption within event reconstruction, particularly at current and future hadron colliders such as the LHC, HL-LHC and FCC-hh. Current algorithms provide an extremely high standard of physics and computing performance and have been tested on billions of simulated and recorded data events. However, most algorithms were first written 20 year…
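The combinatorial scaling mentioned above can be illustrated with a back-of-the-envelope sketch (our own toy numbers, not from the paper): naive triplet seeding considers every 3-hit combination, so candidate counts grow roughly with the cube of the hit multiplicity.

```python
from math import comb

# Number of 3-hit seed candidates for a few hit multiplicities.
# Doubling the hit count multiplies the triplet count by roughly 8.
for n_hits in (100, 200, 400):
    print(n_hits, comb(n_hits, 3))
```

This cubic growth is why pattern recognition dominates reconstruction CPU time at high pile-up.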

Keywords: Multi-core processor; nuclear & particle physics; Event (computing); track data analysis; Physics; Complex event processing; programming; Computing and Computers; Computer engineering; Multithreading; multiprocessor; CERN LHC Coll: upgrade; Programming paradigm; Thread safety; Computer Science [cs]; data management; Reference implementation; numerical calculations; performance; activity report; Event reconstruction
researchProduct

Wireless versus Wired Network-on-Chip to Enable the Multi-Tenant Multi-FPGAs in Cloud

2021

The new era of computing is not CPU-centric but is enriched with heterogeneous computing resources, including reconfigurable fabric. In a multi-FPGA architecture, whether deployed within a data center or as a standalone system, inter-FPGA communication is crucial. A network-on-chip exhibits promising performance for integration within a single FPGA. A sustainable communication architecture requires stable performance as the number of applications or users grows. A wireless network-on-chip has the potential to be that communication architecture, as it matches the performance of wired solutions while adding multicast capabilities. We conducted an exploratory study to investiga…

Keywords: Network on a chip; Multicast; Computer architecture; Computer science; Wireless; Symmetric multiprocessor system; Data center; Cloud computing; Architecture; Field-programmable gate array
2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS)
researchProduct

The Mu3e Data Acquisition

2020

The Mu3e experiment aims to find or exclude the lepton flavour violating decay $\mu^+\to e^+e^-e^+$ with a sensitivity of one in $10^{16}$ muon decays. The first phase of the experiment is currently under construction at the Paul Scherrer Institute (PSI, Switzerland), where beams with up to $10^8$ muons per second are available. The detector will consist of an ultra-thin pixel tracker made from High-Voltage Monolithic Active Pixel Sensors (HV-MAPS), complemented by scintillating tiles and fibres for precise timing measurements. The experiment produces about 100 Gbit/s of zero-suppressed data which are transported to a filter farm using a network of FPGAs and fast optical links. On the filte…

Keywords: Nuclear and High Energy Physics; Particle physics; Instrumentation and Detectors (physics.ins-det); Meson; data acquisition; optical fibre; high energy physics instrumentation; printed circuits; computer network; Optical fiber communication; semiconductor pixel detector; Optical switches; multiprocessor: graphics; hardware; Sensitivity (control systems); muon decay; Electrical and Electronic Engineering; scintillation counter; FPGA; Clocks; Physics; Data acquisition (DAQ); Muon; Pixel; Detector; lepton flavor violation; field programmable gate arrays (FPGAs); Nuclear Energy and Engineering; Filter (video); readout electronics; Lepton; electronics design
researchProduct

Two Job Cyclic Scheduling with Incompatibility Constraints

2001

The present paper deals with the problem of scheduling several repeated occurrences of two jobs over a finite or infinite time horizon so as to maximize the yielded profit. The constraints of the problem are incompatibilities between pairs of tasks that require the same resource.
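The incompatibility constraint above can be made concrete with a toy checker (our own formulation, not the paper's exact model): two task occurrences that share a resource must not overlap in time.

```python
# Toy model: a schedule is a list of (task, start, end) occurrences;
# `incompatible` holds unordered pairs of task names that need the same
# resource. All names and intervals here are invented for illustration.
def conflicts(schedule, incompatible):
    """Return the incompatible pairs whose time intervals overlap."""
    bad = []
    for i in range(len(schedule)):
        for j in range(i + 1, len(schedule)):
            (a, s1, e1), (b, s2, e2) = schedule[i], schedule[j]
            # two half-open intervals overlap iff each starts before
            # the other ends
            if frozenset((a, b)) in incompatible and s1 < e2 and s2 < e1:
                bad.append((a, b))
    return bad

incompatible = {frozenset(("A", "B"))}
assert conflicts([("A", 0, 3), ("B", 2, 5)], incompatible) == [("A", "B")]
assert conflicts([("A", 0, 3), ("B", 3, 6)], incompatible) == []
```

A cyclic schedule that maximizes profit would then be searched over start offsets that keep `conflicts` empty.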

Keywords: Rate-monotonic scheduling; Mathematical optimization; Job shop scheduling; Computer science; Strategy and Management; Distributed computing; Flow shop scheduling; Dynamic priority scheduling; Management Science and Operations Research; Fair-share scheduling; Multiprocessor scheduling; Computer Science Applications; Nurse scheduling problem; Management of Technology and Innovation; Two-level scheduling; Business and International Management
researchProduct

Optimizing H.264/AVC interprediction on a GPU-based framework

2011

H.264/MPEG-4 Part 10 is the latest standard for video compression and promises a significant advance in quality and distortion over the commercial standards currently in widest use, such as MPEG-2 or MPEG-4. To achieve this better performance, H.264 adopts a large number of new or improved compression techniques compared with previous standards, albeit at the expense of higher computational complexity. In addition, in recent years new hardware accelerators have emerged, such as graphics processing units (GPUs), which provide a new opportunity to reduce complexity for a large variety of algorithms. However, current GPUs suffer from higher power consumption requirements because of…
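The dominant cost in H.264 interprediction is motion estimation, and its inner loop is what GPU frameworks like the one above offload. A minimal full-search block-matching sketch with SAD (sum of absolute differences); block size, frame contents and search range here are made up:

```python
def sad(block, ref, bx, by, n):
    """Sum of absolute differences between an n x n block and the
    reference frame region whose top-left corner is (bx, by)."""
    return sum(abs(block[y][x] - ref[by + y][bx + x])
               for y in range(n) for x in range(n))

def best_motion_vector(block, ref, n, search):
    """Exhaustively test every displacement in [0, search) x [0, search)
    and keep the one minimizing SAD; on a GPU each candidate
    displacement would typically be evaluated by its own thread."""
    best = None
    for dy in range(search):
        for dx in range(search):
            cost = sad(block, ref, dx, dy, n)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best

# 2x2 block copied at offset (1, 1) inside a 4x4 reference frame
ref = [[0, 0, 0, 0],
       [0, 7, 3, 0],
       [0, 2, 9, 0],
       [0, 0, 0, 0]]
block = [[7, 3],
         [2, 9]]
cost, dx, dy = best_motion_vector(block, ref, 2, 3)  # -> (0, 1, 1)
```

The independence of the candidate displacements is what makes this loop a natural fit for data-parallel hardware.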

Keywords: Reduction (complexity); Computational Theory and Mathematics; Computer Networks and Communications; Computer science; Distortion; Motion estimation; Symmetric multiprocessor system; Energy consumption; Parallel computing; Software; Computer Science Applications; Theoretical Computer Science; Data compression
Concurrency and Computation: Practice and Experience
researchProduct

Scalable Dense Factorizations for Heterogeneous Computational Clusters

2008

This paper discusses the design and the implementation of the LU factorization routines included in the Heterogeneous ScaLAPACK library, which is built on top of ScaLAPACK. These routines are used in the factorization and solution of a dense system of linear equations. They are implemented using optimized PBLAS, BLACS and BLAS libraries for heterogeneous computational clusters. We present the details of the implementation as well as performance results on a heterogeneous computing cluster.
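The numerical kernel these routines distribute is LU factorization with partial pivoting followed by triangular solves. A minimal single-node sketch in pure Python (our own illustration; Heterogeneous ScaLAPACK itself works on block-distributed matrices via PBLAS/BLACS):

```python
def lu_solve(A, b):
    """Solve A x = b via in-place LU with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # pivot: bring the row with the largest |A[i][k]| to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]       # elimination multiplier
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]            # forward substitution folded in
    x = [0.0] * n
    for i in range(n - 1, -1, -1):      # back substitution on U
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0],
     [1.0, 3.0]]
x = lu_solve(A, [3.0, 5.0])   # exact solution: [0.8, 1.4]
```

On a heterogeneous cluster the interesting part is not this arithmetic but how the matrix blocks are sized per node to balance the elimination work.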

Keywords: ScaLAPACK; Computer science; Numerical analysis; Symmetric multiprocessor system; Parallel computing; LU decomposition; Computational science; Matrix decomposition; Factorization; Scalability; Linear algebra; Concurrent computing
2008 International Symposium on Parallel and Distributed Computing
researchProduct

Compiler Driven Automatic Kernel Context Migration for Heterogeneous Computing

2014

Computer systems provide heterogeneous resources (e.g., GPUs, DSPs and FPGAs) that accelerate applications and can reduce energy consumption. Usually, these resources have isolated memory and require target-specific code to be written. Tools exist that can automatically generate target-specific code for program parts, so-called kernels. The data objects required for a target kernel execution need to be moved to the target resource's memory. It is the programmer's responsibility to serialize the data objects used in the kernel and to copy them to or from the resource's memory. Typically, the programmer writes his own serializing function or uses e…
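The hand-written marshalling that such a compiler pass would generate automatically looks roughly like this sketch; the field names and layout are invented for illustration:

```python
import struct

# Fixed binary layout for a hypothetical kernel context: little-endian
# int32 length, float32 scale, and an 8-byte null-padded kernel name.
FMT = "<if8s"

def serialize(ctx):
    """Pack the context dict into the flat byte layout a device
    resource with isolated memory could receive."""
    return struct.pack(FMT, ctx["length"], ctx["scale"],
                       ctx["name"].encode())   # struct null-pads to 8 bytes

def deserialize(buf):
    """Rebuild the context dict from the flat byte buffer."""
    length, scale, name = struct.unpack(FMT, buf)
    return {"length": length, "scale": scale,
            "name": name.rstrip(b"\0").decode()}

ctx = {"length": 4, "scale": 0.5, "name": "saxpy"}
assert deserialize(serialize(ctx)) == ctx
assert len(serialize(ctx)) == 16   # 4 + 4 + 8 bytes
```

Automating exactly this pack/unpack boilerplate, per kernel and per data structure, is what the paper's compiler-driven migration targets.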

Keywords: Source code; Programming language; Computer science; Serialization; Symmetric multiprocessor system; Data structure; Kernel preemption; Kernel (image processing); Operating system; Compiler; Programmer
2014 IEEE 34th International Conference on Distributed Computing Systems
researchProduct

Real-time data processing in the ALICE High Level Trigger at the LHC

2019

At the Large Hadron Collider at CERN in Geneva, Switzerland, atomic nuclei are collided at ultra-relativistic energies. Many final-state particles are produced in each collision and their properties are measured by the ALICE detector. The detector signals induced by the produced particles are digitized, leading to data rates in excess of 48 GB/s. The ALICE High Level Trigger (HLT) system pioneered the use of FPGA- and GPU-based algorithms to reconstruct charged-particle trajectories and reduce the data size in real time. The results of the reconstruction of the collision events, available online, are used for high level data quality and detector-performance monitoring and real-tim…

Keywords: calibration; ALICE; trigger; monitoring; quality; data management; programming; FPGA; GPU; multiprocessor: graphics; performance; Instrumentation and Detectors (physics.ins-det); High level trigger; Detector calibration; General Physics and Astronomy; particle physics; Combinatorics; Detectors and Experimental Techniques; Nuclear Experiment; Physics; Large Hadron Collider; TRACK; signal processing; research instruments; detectors; data; Hardware and Architecture; Alice (programming language)
researchProduct