Search results for "memory"

Showing 10 of 1,351 documents

Parallelizing Epistasis Detection in GWAS on FPGA and GPU-Accelerated Computing Systems

2015

This is a post-peer-review, pre-copyedit version of an article published in IEEE/ACM Transactions on Computational Biology and Bioinformatics. The final authenticated version is available online at: http://dx.doi.org/10.1109/TCBB.2015.2389958 [Abstract] High-throughput genotyping technologies (such as SNP arrays) allow the rapid collection of up to a few million genetic markers of an individual. Detecting epistasis (based on 2-SNP interactions) in Genome-Wide Association Studies is an important but time-consuming operation, since statistical computations have to be performed for each pair of measured markers. Computational methods to detect epistasis therefore suffer from prohibitively lon…
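The pairwise scan the abstract describes can be sketched in a few lines: every pair of SNPs yields a 9×2 contingency table (three genotypes per SNP against case/control status) that is scored with an association statistic. The sketch below is illustrative Python using a plain Pearson chi-square, not the paper's FPGA/GPU implementation or its exact statistic; the function names are invented.

```python
from itertools import combinations

def chi2_stat(table):
    """Pearson chi-square statistic for a contingency table (rows of counts)."""
    row_sums = [sum(r) for r in table]
    col_sums = [sum(c) for c in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            if expected > 0:
                stat += (obs - expected) ** 2 / expected
    return stat

def epistasis_scan(genotypes, phenotype):
    """Exhaustive 2-SNP scan: score one 9x2 table per marker pair.

    genotypes: dict snp_id -> list of genotype calls (0/1/2 minor-allele counts)
    phenotype: list of 0 (control) / 1 (case), same sample order
    """
    scores = {}
    for a, b in combinations(genotypes, 2):
        table = [[0, 0] for _ in range(9)]  # 9 joint genotypes x 2 classes
        for ga, gb, p in zip(genotypes[a], genotypes[b], phenotype):
            table[3 * ga + gb][p] += 1
        scores[(a, b)] = chi2_stat(table)
    return scores
```

The quadratic number of pairs is exactly what makes the accelerators in the paper worthwhile: for a million markers this loop runs on the order of 5×10¹¹ tables.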

Computer science; Bioinformatics; DNA Mutational Analysis; Genome-wide association study; Parallel computing; Polymorphism, Single Nucleotide; Sensitivity and Specificity; Computational biology; Computer Graphics; Genetics; Computer architecture; Field-programmable gate array; Random access memory; Applied Mathematics; Chromosome Mapping; High-Throughput Nucleotide Sequencing; Reproducibility of Results; Epistasis, Genetic; Signal Processing, Computer-Assisted; Equipment Design; Computing systems; Reconfigurable computing; Equipment Failure Analysis; Task (computing); Epistasis; Host (network); Graphics processing units; Biotechnology
Research product

2020

Abstract Efficient neuronal communication between brain regions through oscillatory synchronization at certain frequencies is necessary for cognition. Such synchronized networks are transient and dynamic, established on the timescale of milliseconds in order to support ongoing cognitive operations. However, few studies characterizing dynamic electrophysiological brain networks have simultaneously accounted for temporal non-stationarity, spectral structure, and spatial properties. Here, we propose an analysis framework for characterizing the large-scale phase-coupling network dynamics during task performance using magnetoencephalography (MEG). We exploit the high spatiotemporal resolution of…
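Phase-coupling between regions in MEG work of this kind is commonly quantified with a phase-locking value (PLV): the magnitude of the time-averaged phase-difference vector. Below is a minimal sketch of that standard measure, not the paper's full pipeline, which additionally handles temporal non-stationarity, spectral structure, and spatial properties.

```python
import cmath
import math

def plv(phase_a, phase_b):
    """Phase-locking value of two phase time series (radians): the
    magnitude of the mean phase-difference vector, ranging from 0
    (no consistent phase relation) to 1 (perfect locking)."""
    mean_vec = sum(cmath.exp(1j * (a - b))
                   for a, b in zip(phase_a, phase_b)) / len(phase_a)
    return abs(mean_vec)
```

Tracking this quantity over sliding windows, pairs of regions, and frequency bands gives the kind of dynamic phase-coupling network the abstract refers to.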

Computer science; Cognitive Neuroscience; Pipeline (computing); Facial recognition system; Experimental psychology; Task (project management); Medical and health sciences; Clinical medicine; Medicine; Psychology and cognitive sciences; Effects of sleep deprivation on cognitive performance; Neurons and Cognition; Working memory; Functional connectivity; Social sciences; Cognition; Pattern recognition; Magnetoencephalography; Human brain; Electrophysiology; Neurology; Artificial intelligence; Neurology & neurosurgery; NeuroImage
Research product

Concurrent Computing with Shared Replicated Memory

2019

Any concurrent system can be captured by a concurrent Abstract State Machine (cASM). This remains valid if different agents can only interact via messages. It even permits a strict separation between memory-managing agents and other agents that can only access the shared memory by sending query and update requests. This paper is dedicated to an investigation of replicated data that is maintained by a memory management subsystem, where the replication appears neither in the requests nor in the corresponding answers. We specify the behaviour of a concurrent system with such memory management using concurrent communicating ASMs (ccASMs), provide several refinements addressing different replic…
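A toy sketch of the separation the abstract describes: clients issue only query and update requests, and the memory-managing subsystem hides the fact that data is replicated. The replica count and the write-all/read-one policy below are arbitrary illustrative choices, not the paper's refinements, and the class name is invented.

```python
class ReplicatedMemory:
    """Memory-managing subsystem: clients issue plain query/update
    requests; neither request nor answer mentions the replicas."""

    def __init__(self, n_replicas=3):
        self.replicas = [{} for _ in range(n_replicas)]

    def update(self, key, value):
        # illustrative write-all policy: every replica applies the update
        for replica in self.replicas:
            replica[key] = value

    def query(self, key):
        # illustrative read-one policy: any single replica may answer,
        # and the answer never reveals which one served it
        return self.replicas[0].get(key)
```

The interesting part, which the paper's refinements address, is keeping this interface correct under weaker policies where not every replica is written synchronously.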

Computer science; Distributed computing; Software engineering; Computer and information sciences; Engineering and technology; Natural sciences; Replication (computing); Consistency (database systems); Memory management; Shared memory; Computation theory & mathematics; Electrical engineering, electronic engineering, information engineering; Abstract state machines; Concurrent computing; Information and communication technology
Research product

Persistent software transactional memory in Haskell

2021

Emerging persistent memory in commodity hardware allows byte-granular accesses to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory due to unexpected system failures, different write-semantics are required compared to volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities like automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. It is shown in this paper how persiste…
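The write-semantics issue can be illustrated with a shadow-copy transaction: all writes go to a private copy of the state, which is installed in a single step, so a crash mid-transaction never leaves partially written state visible. This is a Python sketch of that general idea only; the paper's actual contribution is a native STM integration in Haskell, and the class name here is invented.

```python
import copy

class TransactionalStore:
    """Transactions mutate a private shadow copy of the state; commit is
    a single reference swap, so a failure inside a transaction never
    exposes partially written state (crude stand-in for persistence)."""

    def __init__(self):
        self.state = {}

    def atomically(self, txn):
        shadow = copy.deepcopy(self.state)  # private working copy
        txn(shadow)                         # all writes hit the shadow
        self.state = shadow                 # atomic install on success

store = TransactionalStore()
store.atomically(lambda s: s.update(balance=100))
```

If `txn` raises, the install never happens and `store.state` keeps its pre-transaction value, which mirrors the all-or-nothing write semantics persistent memory requires.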

Computer science; Programming language; Runtime system; Software portability; Memory management; Software transactional memory; Haskell; Persistent data structure; Safety, Risk, Reliability and Quality; Software; Garbage collection; Volatile memory; Proceedings of the ACM on Programming Languages
Research product

Distributed Computing on Distributed Memory

2018

Distributed computation is formalized in several description languages for computation, e.g. the Unified Modeling Language (UML), the Specification and Description Language (SDL), and Concurrent Abstract State Machines (CASM). All these languages focus on the distribution of computation, which largely coincides with concurrent computation. In addition, there is also the aspect of distribution of state, which is often neglected. Distribution of state is most commonly represented by communication between active agents. This paper argues that it is desirable to abstract from the communication and to consider abstract distributed state. This includes semantic handling of conflict resolution, e.g. i…
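Conflict resolution over abstract distributed state can be illustrated with ASM-style update sets: a step collects all (location, value) updates and applies them atomically, and two different values for the same location make the set inconsistent. A minimal sketch of that standard semantics follows; the helper name is hypothetical.

```python
def apply_update_set(state, updates):
    """Apply one ASM-style step: gather all (location, value) updates
    and install them atomically; two different values for the same
    location make the update set inconsistent and abort the step."""
    pending = {}
    for loc, val in updates:
        if loc in pending and pending[loc] != val:
            raise ValueError(f"conflicting updates to {loc}")
        pending[loc] = val
    state.update(pending)
    return state
```

Abstracting distributed state this way lets conflict handling live in the semantics of the step, rather than in explicit message exchanges between agents.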

Computer science; Semantics (computer science); Concurrency; Distributed computing; Software engineering; Computer and information sciences; Engineering and technology; Natural sciences; Specification and Description Language; Unified Modeling Language; Computation theory & mathematics; Electrical engineering, electronic engineering, information engineering; Abstract state machines; Distributed memory; Memory model; State (computer science)
Research product

A Methodology for the Analysis of Memory Response to Radiation through Bitmap Superposition and Slicing

2015

A methodology is proposed for the statistical analysis of memory radiation test data, with the aim of identifying trends in the single-event upset (SEU) distribution. The case study treated is a 65 nm SRAM irradiated with neutrons, protons and heavy ions.
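The superposition-and-slicing idea can be sketched directly: per-run upset bitmaps are summed cell-wise, and the accumulated map is then collapsed along an axis to expose spatial trends. Illustrative Python only; the paper's statistical treatment is more involved, and the function names are invented.

```python
def superpose(bitmaps):
    """Cell-wise sum of per-run upset bitmaps (lists of 0/1 rows), so
    cells upset in many runs stand out in the accumulated map."""
    rows, cols = len(bitmaps[0]), len(bitmaps[0][0])
    acc = [[0] * cols for _ in range(rows)]
    for bm in bitmaps:
        for i in range(rows):
            for j in range(cols):
                acc[i][j] += bm[i][j]
    return acc

def slice_rows(acc):
    """Collapse the superposed map along the column axis to expose
    row-wise (e.g. word-line) trends in the upset distribution."""
    return [sum(row) for row in acc]
```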

Computer science; bitmap slicing; Parallel computing; Hardware performance and reliability; Radiation; Slicing; Upset; Electronic mail; Superposition principle; Static random-access memory; Memories; static test; Nuclear Experiment; dynamic test; SRAM; Bitmap; Engineering Sciences [physics]/Electronics; Multiple Cell Upset (MCU); SER; radiation test; event accumulation; Single Event Upset (SEU); Algorithm; Test data
Research product

An efficient swap algorithm for the lattice Boltzmann method

2007

During the last decade, the lattice-Boltzmann method (LBM) has become increasingly acknowledged as a valuable tool in computational fluid dynamics. The widespread application of LBM is partly due to the simplicity of its coding. The best-known algorithms for the implementation of the standard lattice-Boltzmann equation (LBE) are the two-lattice and two-step algorithms. However, implementations of the two-lattice or the two-step algorithm suffer from high memory consumption or poor computational performance, respectively. Ultimately, the computing resources available decide which of the two disadvantages is more critical. Here we introduce a new algorithm, called the swap algorithm, for t…
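The swap idea can be shown on a minimal D1Q2 lattice (one right-moving and one left-moving population per cell): collision writes each post-collision value into the opposite slot of its own cell, and streaming then reduces to swapping opposite slots across each cell boundary, in place, with no second lattice. A hedged sketch (identity "collision", open boundaries ignored), not the paper's code:

```python
def collide_store_swapped(f_plus, f_minus):
    """Collision step of the swap scheme: write each post-collision
    population into the OPPOSITE slot of its own cell (the 'collision'
    here is the identity, to keep the sketch short)."""
    for i in range(len(f_plus)):
        f_plus[i], f_minus[i] = f_minus[i], f_plus[i]

def swap_stream(f_plus, f_minus):
    """Streaming as neighbour swaps: exchanging the opposite-direction
    slots across each cell boundary propagates both populations in
    place, with no second lattice and no extra copy."""
    for i in range(len(f_plus) - 1):
        f_minus[i], f_plus[i + 1] = f_plus[i + 1], f_minus[i]
```

After the two steps, the right-mover of each cell has moved one cell right and the left-mover one cell left, which is exactly the streaming that the two-lattice algorithm would achieve with a full extra copy of the lattice.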

Computer simulation; Computer science; Lattice Boltzmann methods; General Physics and Astronomy; Computational fluid dynamics; Program optimization; Cellular Automata and Lattice Gases; High memory; Hardware and Architecture; Algorithm; Implementation; Swap (computer programming); Coding (social sciences); Computer Physics Communications
Research product

Too many passwords? How understanding our memory can increase password memorability

2018

Passwords are the most common authentication mechanism, and the number of passwords users must manage only increases with time. Previous research suggests that users cannot remember multiple passwords. Therefore, users adopt insecure password practices, such as password reuse, in response to their perceived memory limitations. The critical question not currently examined is whether users’ memory capabilities for password recall are actually related to having a poor memory. This issue is imperative: if insecure password practices result from having a poor memory, then future password research and practice should focus on increasing the memorability of passwords. If, on the other hand, the problem is not solely related to memory…

Management of computing and information systems; Operating systems; password security; memorability; authentication; information security; metamemory; passwords; human memory; memory (cognition)
Research product

The advantage of errorless learning for the acquisition of new concepts' labels in alcoholics

2009

Background: Previous findings revealed that the acquisition of new semantic concepts' labels was impaired in uncomplicated alcoholic patients. The use of errorless learning may therefore allow them to improve learning performance. However, the flexibility of the new knowledge and the memory processes involved in errorless learning remain unclear. Method: New concepts' labels acquisition was examined in 15 alcoholic patients and 15 control participants in an errorless learning condition, compared with 19 alcoholic patients and 19 control subjects in a trial-and-error learning condition. The flexibility of the new information was evaluated using different photographs from those used in the learning…

Concept Formation; Semantics; Severity of Illness Index; Article; Experimental psychology; Task (project management); Developmental psychology; Medical and health sciences; Clinical medicine; Memory; Task Performance and Analysis; Reaction Time; Explicit memory; Humans; Learning; Psychology and cognitive sciences; Applied Psychology; Analysis of Variance; Social sciences; Flexibility (personality); Cognition; Middle Aged; Test (assessment); Alcoholism; Psychiatry and Mental health; Errorless learning; Implicit memory; Cues; Psychology; Neurology & neurosurgery; Cognitive psychology; Psychological Medicine
Research product

Parallelization strategies for density matrix renormalization group algorithms on shared-memory systems

2003

Shared-memory parallelization (SMP) strategies for density matrix renormalization group (DMRG) algorithms enable the treatment of complex systems in solid state physics. We present two different approaches by which parallelization of the standard DMRG algorithm can be accomplished in an efficient way. The methods are illustrated with DMRG calculations of the two-dimensional Hubbard model and the one-dimensional Holstein-Hubbard model on contemporary SMP architectures. The parallelized code shows good scalability up to at least eight processors and allows us to solve problems which exceed the capability of sequential DMRG calculations.
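Coarse-grained shared-memory parallelism of the kind described, where independent blocks of a large matrix-vector product are farmed out to worker threads, can be sketched with a thread pool. Purely illustrative (pure-Python dense blocks, invented function names), not the paper's DMRG code:

```python
from concurrent.futures import ThreadPoolExecutor

def matvec_block(block, vec):
    """One independent dense block of a large matrix-vector product
    (pure-Python stand-in for a superblock Hamiltonian block)."""
    return [sum(a * x for a, x in zip(row, vec)) for row in block]

def parallel_matvec(blocks, vecs, workers=4):
    """Coarse-grained shared-memory parallelism: each worker thread
    handles one independent (block, vector) pair, so the blocks are
    processed concurrently with no inter-block communication."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda bv: matvec_block(*bv),
                             zip(blocks, vecs)))
```

The good scalability to around eight processors reported in the abstract is typical for this block-level decomposition, where load balance across blocks is the main limiting factor.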

Quantum Gases; Density matrix; Numerical Analysis; Strongly Correlated Electrons (cond-mat.str-el); Physics and Astronomy (miscellaneous); Hubbard model; Applied Mathematics; Density matrix renormalization group; Complex system; Physical sciences; Parallel computing; Renormalization group; Computer Science Applications; Computational Mathematics; Shared memory; Modeling and Simulation; Scalability; Code (cryptography); Algorithm; Mathematics; Journal of Computational Physics
Research product