Search results for "memory"
Showing 10 of 1351 documents
Parallelizing Epistasis Detection in GWAS on FPGA and GPU-Accelerated Computing Systems
2015
This is a post-peer-review, pre-copyedit version of an article published in IEEE/ACM Transactions on Computational Biology and Bioinformatics. The final authenticated version is available online at: http://dx.doi.org/10.1109/TCBB.2015.2389958 [Abstract] High-throughput genotyping technologies (such as SNP arrays) allow the rapid collection of up to a few million genetic markers of an individual. Detecting epistasis (based on 2-SNP interactions) in Genome-Wide Association Studies is an important but time-consuming operation, since statistical computations have to be performed for each pair of measured markers. Computational methods to detect epistasis therefore suffer from prohibitively lon…
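The pairwise structure that makes this problem expensive can be illustrated with a minimal sketch. The statistic below is a simple chi-square-style count over the 3x3 genotype contingency table and is only illustrative; the paper's actual test statistics and FPGA/GPU kernels differ, and all function names and data here are invented for the example:

```python
from itertools import combinations

def pair_statistic(snp_a, snp_b, phenotype):
    """Chi-square-style score over the 3x3x2 table of genotype
    combinations (0/1/2 per SNP) versus case/control status."""
    counts = [[[0, 0] for _ in range(3)] for _ in range(3)]
    for ga, gb, p in zip(snp_a, snp_b, phenotype):
        counts[ga][gb][p] += 1
    n = len(phenotype)
    cases = sum(phenotype)
    stat = 0.0
    for ga in range(3):
        for gb in range(3):
            cell = counts[ga][gb]
            total = cell[0] + cell[1]
            if total == 0:
                continue
            expected = total * cases / n  # expected cases in this cell
            stat += (cell[1] - expected) ** 2 / max(expected, 1e-12)
    return stat

def detect_epistasis(genotypes, phenotype, top_k=3):
    """Exhaustively score all SNP pairs -- the O(m^2) loop over
    marker pairs that accelerators are used to speed up."""
    scores = [(pair_statistic(genotypes[i], genotypes[j], phenotype), i, j)
              for i, j in combinations(range(len(genotypes)), 2)]
    return sorted(scores, reverse=True)[:top_k]

# Toy data: 4 SNPs, 8 individuals, phenotype 1 = case
genotypes = [
    [0, 1, 2, 0, 1, 2, 0, 1],
    [2, 2, 1, 0, 0, 1, 2, 0],
    [0, 0, 1, 1, 2, 2, 0, 1],
    [1, 2, 0, 2, 1, 0, 1, 2],
]
phenotype = [0, 1, 1, 0, 1, 0, 0, 1]
for stat, i, j in detect_epistasis(genotypes, phenotype):
    print(f"SNP pair ({i},{j}): {stat:.3f}")
```

With a few million markers the pair loop explodes to trillions of tests, which is why the independence of the per-pair computations makes the problem attractive for FPGAs and GPUs.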
2020
Abstract Efficient neuronal communication between brain regions through oscillatory synchronization at certain frequencies is necessary for cognition. Such synchronized networks are transient and dynamic, established on the timescale of milliseconds in order to support ongoing cognitive operations. However, few studies characterizing dynamic electrophysiological brain networks have simultaneously accounted for temporal non-stationarity, spectral structure, and spatial properties. Here, we propose an analysis framework for characterizing the large-scale phase-coupling network dynamics during task performance using magnetoencephalography (MEG). We exploit the high spatiotemporal resolution of…
Concurrent Computing with Shared Replicated Memory
2019
Any concurrent system can be captured by a concurrent Abstract State Machine (cASM). This remains valid if different agents can only interact via messages. It even permits a strict separation between memory managing agents and other agents that can only access the shared memory by sending query and update requests. This paper is dedicated to an investigation of replicated data that is maintained by a memory management subsystem, where the replication neither appears in the requests nor in the corresponding answers. We specify the behaviour of a concurrent system with such memory management using concurrent communicating ASMs (ccASMs), provide several refinements addressing different replic…
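The separation the abstract describes, where ordinary agents see only query/update requests and replication stays hidden inside the memory subsystem, can be sketched as follows. The class names and the write-all/read-one policy are illustrative assumptions, not taken from the paper:

```python
class ReplicatedMemory:
    """Memory-managing agent: serves query/update requests while
    maintaining several replicas internally. Callers never see the
    replication -- only plain answers to their requests."""
    def __init__(self, n_replicas=3):
        self.replicas = [{} for _ in range(n_replicas)]

    def update(self, key, value):
        # Write-all policy: propagate the update to every replica.
        for replica in self.replicas:
            replica[key] = value

    def query(self, key):
        # Read-one policy: any replica can answer, since write-all
        # keeps them consistent in the absence of failures.
        return self.replicas[0].get(key)

class Agent:
    """An ordinary agent: interacts with shared memory only by
    sending query and update requests."""
    def __init__(self, memory):
        self.memory = memory
    def write(self, key, value):
        self.memory.update(key, value)
    def read(self, key):
        return self.memory.query(key)

mem = ReplicatedMemory()
a, b = Agent(mem), Agent(mem)
a.write("x", 42)
print(b.read("x"))  # replication is invisible: b simply sees 42
```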
Persistent software transactional memory in Haskell
2021
Emerging persistent memory in commodity hardware allows byte-granular accesses to persistent state at memory speeds. However, to prevent inconsistent state in persistent memory due to unexpected system failures, different write semantics are required compared to volatile memory. Transaction-based library solutions for persistent memory facilitate the atomic modification of persistent data in languages where memory is explicitly managed by the programmer, such as C/C++. For languages that provide extended capabilities like automatic memory management, a more native integration into the language is needed to maintain the high level of memory abstraction. It is shown in this paper how persiste…
Distributed Computing on Distributed Memory
2018
Distributed computation is formalized in several description languages for computation, such as the Unified Modeling Language (UML), the Specification and Description Language (SDL), and Concurrent Abstract State Machines (CASM). All these languages focus on the distribution of computation, which largely coincides with concurrent computation. In addition, there is also the aspect of distribution of state, which is often neglected. Distribution of state is most commonly represented by communication between active agents. This paper argues that it is desirable to abstract from the communication and to consider abstract distributed state. This includes semantic handling of conflict resolution, e.g. i…
A Methodology for the Analysis of Memory Response to Radiation through Bitmap Superposition and Slicing
2015
A methodology is proposed for the statistical analysis of memory radiation test data, with the aim of identifying trends in the single-event upset (SEU) distribution. The treated case study is a 65 nm SRAM irradiated with neutrons, protons and heavy ions.
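The bitmap-superposition idea, overlaying per-run upset maps so that systematically sensitive cells stand out against randomly scattered single upsets, can be sketched as below. The memory geometry and upset data are toy values, not the 65 nm SRAM of the study, and the function names are invented for the example:

```python
def superpose(bitmaps):
    """Element-wise sum of per-run SEU bitmaps: cells upset in many
    runs accumulate high counts, exposing non-random patterns."""
    rows, cols = len(bitmaps[0]), len(bitmaps[0][0])
    acc = [[0] * cols for _ in range(rows)]
    for bm in bitmaps:
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += bm[r][c]
    return acc

def column_slice(superposed):
    """Slicing: collapse the superposed map onto one axis to reveal
    position-dependent sensitivity (e.g. layout or well structure)."""
    return [sum(col) for col in zip(*superposed)]

# Three toy 4x4 irradiation runs; column 2 is unusually sensitive.
runs = [
    [[0, 0, 1, 0], [0, 0, 1, 0], [1, 0, 0, 0], [0, 0, 1, 0]],
    [[0, 0, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 1, 0]],
    [[0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]],
]
total = superpose(runs)
print(column_slice(total))  # column totals; index 2 dominates
```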
An efficient swap algorithm for the lattice Boltzmann method
2007
During the last decade, the lattice Boltzmann method (LBM) has become increasingly acknowledged as a valuable tool in computational fluid dynamics. The widespread application of LBM is partly due to the simplicity of its coding. The most well-known algorithms for the implementation of the standard lattice Boltzmann equation (LBE) are the two-lattice and two-step algorithms. However, implementations of the two-lattice or the two-step algorithm suffer from high memory consumption or poor computational performance, respectively. Ultimately, the computing resources available decide which of the two disadvantages is more critical. Here we introduce a new algorithm, called the swap algorithm, for t…
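The memory-saving idea behind a swap-based scheme can be illustrated on a minimal D1Q2 lattice: post-collision values are written into the opposite-direction slot of the same cell, and streaming then swaps each slot with the matching slot of the upstream neighbour, so both populations move without a second lattice. This is a hedged sketch of the general swap trick under toy assumptions (trivial equilibrium, periodic boundaries), not a reproduction of the paper's algorithm:

```python
def collide_and_swap(f, omega=1.0):
    """BGK-style collision; post-collision values are written into
    the OPPOSITE direction slot of the same cell (the swap trick).
    Each cell is [f_right, f_left] on a 1D periodic lattice."""
    for cell in f:
        rho = cell[0] + cell[1]
        feq = rho / 2.0                  # trivial D1Q2 equilibrium
        post0 = cell[0] + omega * (feq - cell[0])
        post1 = cell[1] + omega * (feq - cell[1])
        cell[0], cell[1] = post1, post0  # store swapped

def stream_by_swap(f):
    """In-place streaming: swapping slot 0 of cell x with slot 1 of
    the upstream neighbour x-1 moves both populations at once, so
    no second lattice is needed."""
    n = len(f)
    for x in range(n):
        nb = (x - 1) % n
        f[x][0], f[nb][1] = f[nb][1], f[x][0]

# One time step on a 3-cell periodic lattice:
f = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
collide_and_swap(f, omega=0.0)  # omega=0: collision is the identity
stream_by_swap(f)
print(f)  # each population has moved one cell in its direction
```

With omega=0 the two calls reproduce plain streaming exactly, which is a convenient correctness check for the in-place scheme.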
Too many passwords? : How understanding our memory can increase password memorability
2018
Passwords are the most common authentication mechanism, and their number is only increasing with time. Previous research suggests that users cannot remember multiple passwords. Therefore, users adopt insecure password practices, such as password reuse, in response to their perceived memory limitations. The critical question not currently examined is whether users’ memory capabilities for password recall are actually related to having a poor memory. This issue is imperative: if insecure password practices result from having a poor memory, then future password research and practice should focus on increasing the memorability of passwords. If, on the other hand, the problem is not solely related to memory…
The advantage of errorless learning for the acquisition of new concepts' labels in alcoholics
2009
Background: Previous findings revealed that the acquisition of new semantic concepts' labels was impaired in uncomplicated alcoholic patients. The use of errorless learning may therefore allow them to improve learning performance. However, the flexibility of the new knowledge and the memory processes involved in errorless learning remain unclear. Method: New concepts' labels acquisition was examined in 15 alcoholic patients and 15 control participants in an errorless learning condition, compared with 19 alcoholic patients and 19 control subjects in a trial-and-error learning condition. The flexibility of the new information was evaluated using different photographs from those used in the learning…
Parallelization strategies for density matrix renormalization group algorithms on shared-memory systems
2003
Shared-memory parallelization (SMP) strategies for density matrix renormalization group (DMRG) algorithms enable the treatment of complex systems in solid state physics. We present two different approaches by which parallelization of the standard DMRG algorithm can be accomplished in an efficient way. The methods are illustrated with DMRG calculations of the two-dimensional Hubbard model and the one-dimensional Holstein-Hubbard model on contemporary SMP architectures. The parallelized code shows good scalability up to at least eight processors and allows us to solve problems which exceed the capability of sequential DMRG calculations.
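The parallelization pattern behind such shared-memory DMRG codes, distributing the independent terms of the superblock Hamiltonian-wavefunction multiplication across workers and summing the partial results, can be sketched as below. The term decomposition, helper names, and tiny matrices are illustrative assumptions; real codes dispatch large BLAS calls (which release Python's GIL, or run under OpenMP in C/Fortran) rather than pure-Python loops:

```python
from concurrent.futures import ThreadPoolExecutor

def matmul(A, B):
    """Plain-Python matrix product (stand-in for a BLAS call)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def apply_term(term, psi):
    """One superblock term: (H_L (x) H_R) psi = H_L . psi . H_R^T,
    with psi stored as a matrix over (left, right) block indices."""
    HL, HR = term
    HRT = [list(col) for col in zip(*HR)]
    return matmul(matmul(HL, psi), HRT)

def apply_hamiltonian(terms, psi, workers=4):
    """The DMRG hot spot: the terms are independent, so their matrix
    products can run on separate threads/cores and be summed after."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda t: apply_term(t, psi), terms))
    result = partials[0]
    for p in partials[1:]:
        result = mat_add(result, p)
    return result

psi = [[1.0, 2.0], [3.0, 4.0]]
terms = [([[1, 0], [0, 1]], [[2, 0], [0, 2]]),   # I (x) 2I
         ([[0, 1], [1, 0]], [[1, 0], [0, 1]])]   # sigma_x (x) I
print(apply_hamiltonian(terms, psi))             # [[5.0, 8.0], [7.0, 10.0]]
```

Because the partial results only have to be summed at the end, this decomposition scales with the number of independent terms, consistent with the modest processor counts reported in the abstract.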