
AUTHOR

Tim Süß

Showing 7 works by this author

Improving Collective I/O Performance Using Non-volatile Memory Devices

2016

Collective I/O is a parallel I/O technique designed to deliver high-performance data access to scientific applications running on high-end computing clusters. In collective I/O, write performance is highly dependent upon the storage system response time and limited by the slowest writer. The storage system response time, in conjunction with the need for global synchronisation required during every round of data exchange and write, severely impacts collective I/O performance. Future Exascale systems will have an increasing number of processor cores, while the number of storage servers will remain relatively small. Therefore, the storage system concurrency level will further increase, worseni…
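The synchronised exchange-then-write rounds described above can be sketched as follows. This is a minimal, illustrative simulation of two-phase collective I/O, not the paper's implementation: processes shuffle their pieces to aggregators, and each round is bounded by the slowest aggregator's write.

```python
# Hypothetical sketch of two-phase collective I/O: ranks exchange data so
# that each aggregator owns a contiguous region, then all aggregators
# write; the round finishes only when the slowest writer does.

def two_phase_write(chunks_per_rank, num_aggregators):
    """chunks_per_rank: list (one entry per rank) of (offset, data) tuples."""
    # Phase 1: exchange -- each piece is routed to the aggregator
    # responsible for its offset (simple modulo placement here).
    buffers = {a: [] for a in range(num_aggregators)}
    for rank_chunks in chunks_per_rank:
        for offset, data in rank_chunks:
            buffers[offset % num_aggregators].append((offset, data))
    # Phase 2: each aggregator writes its pieces in offset order; in a real
    # system the round's duration is bounded by the largest/slowest write.
    file_image = {}
    for agg in range(num_aggregators):
        for offset, data in sorted(buffers[agg]):
            file_image[offset] = data
    return file_image

image = two_phase_write([[(0, "a"), (2, "c")], [(1, "b"), (3, "d")]], 2)
print("".join(image[o] for o in sorted(image)))  # -> abcd
```

The global synchronisation point sits between the two phases: no aggregator may write until the exchange completes, which is exactly where a slow storage server stalls every participant.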

Keywords: input/output; file system; distributed computing; multi-core processor; concurrency; networking and telecommunications; supercomputer; non-volatile memory; memory management; data access; server; computer data storage; computer network
Published in: 2016 IEEE International Conference on Cluster Computing (CLUSTER)

GekkoFS - A Temporary Distributed File System for HPC Applications

2018

We present GekkoFS, a temporary, highly scalable burst buffer file system which has been specifically optimized for the new access patterns of data-intensive High-Performance Computing (HPC) applications. The file system provides relaxed POSIX semantics, only offering features which are actually required by most (not all) applications. It is able to provide scalable I/O performance and reaches millions of metadata operations even for a small number of nodes, significantly outperforming the capabilities of general-purpose parallel file systems. The work has been funded by the German Research Foundation (DFG) through the ADA-FS project as part of the Priority Programme 1648. It is also support…
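One general technique behind scalable metadata performance in a distributed file system of this kind is to hash each path to the node responsible for its metadata, so every client computes the same placement without coordination. The sketch below is illustrative only; it is not GekkoFS's actual protocol, and the server count and paths are invented.

```python
# Illustrative hash-based metadata distribution (not GekkoFS's real
# placement scheme): a path deterministically maps to one metadata node.
import hashlib

def metadata_server(path: str, num_servers: int) -> int:
    """Map a file path to the node responsible for its metadata."""
    digest = hashlib.sha256(path.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_servers

# Clients agree on placement with no central lookup, so metadata
# operations can scale out with the number of nodes.
paths = [f"/job/rank{i}/ckpt.dat" for i in range(1000)]
load = [0] * 4
for p in paths:
    load[metadata_server(p, 4)] += 1
print(load)  # roughly balanced across the 4 servers
```

Because the hash spreads paths uniformly, no single node becomes a metadata hotspot even when many ranks create files at once.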

Keywords: file system; distributed computing; burst buffers; parallel processing; buffer storage; data structure; distributed file systems; metadata; POSIX; server; scalability; HPC; operating system; high-performance computing
Published in: 2018 IEEE International Conference on Cluster Computing (CLUSTER)

And Now for Something Completely Different: Running Lisp on GPUs

2018

The internal parallelism of compute resources continually increases, and graphics processing units (GPUs) and other accelerators have been gaining importance in many domains. Researchers from life science, bioinformatics or artificial intelligence, for example, use GPUs to accelerate their computations. However, languages typically used in some of these disciplines often do not benefit from the technical developments because they cannot be executed natively on GPUs. Instead, existing programs must be rewritten in other, less dynamic programming languages. On the other hand, the gap in programming features between accelerators and common CPUs is continually shrinking. Since accelerators are becomi…
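To make concrete what "executing Lisp" involves, here is a toy s-expression evaluator, written in Python and CPU-only. It is purely illustrative of the kind of dynamic-language evaluation the paper moves onto GPUs and has no relation to the authors' actual compiler or interpreter.

```python
# A toy Lisp-style s-expression evaluator (illustrative only).
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    # Numbers become floats; everything else stays a symbol string.
    return float(tok) if tok.lstrip("+-").replace(".", "", 1).isdigit() else tok

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr):
    if isinstance(expr, list):
        op, args = expr[0], [evaluate(e) for e in expr[1:]]
        return OPS[op](*args)
    return expr

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # -> 7.0
```

Even this tiny evaluator shows why such languages resist native GPU execution: evaluation is recursive, branch-heavy and operates on dynamically typed values rather than on uniform arrays.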

Keywords: programming language; parallel processing; parallelism; compiler; Lisp; graphics; host (network); interpreter
Published in: 2018 IEEE International Conference on Cluster Computing (CLUSTER)

Deduplication Potential of HPC Applications’ Checkpoints

2016

HPC systems contain an increasing number of components, decreasing the mean time between failures. Checkpoint mechanisms help to overcome such failures for long-running applications. A viable solution to remove the resulting pressure from the I/O backends is to deduplicate the checkpoints. However, there is little knowledge about the potential to save I/Os for HPC applications by using deduplication within the checkpointing process. In this paper, we perform a broad study about the deduplication behavior of HPC application checkpointing and its impact on system design.
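The core mechanism studied above can be sketched in a few lines: split a checkpoint into chunks, fingerprint each chunk, and store only unique fingerprints. The fixed 4 KiB chunk size and the sample data below are illustrative choices, not parameters from the paper.

```python
# Minimal fixed-size chunk deduplication sketch (chunk size is an
# illustrative assumption, not the paper's configuration).
import hashlib

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Fraction of chunks that must still be stored after deduplication."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(unique) / len(chunks)

# A checkpoint with many identical (e.g. zero-filled) regions
# deduplicates well: the ratio drops far below 1.0.
ckpt = b"\x00" * 4096 * 7 + b"payload" * 600
print(dedup_ratio(ckpt))
```

The ratio is exactly the I/O-saving potential the study measures: a value of 0.3 means only 30% of the checkpoint's chunks need to reach the I/O backend.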

Keywords: distributed computing; scalability; redundancy (engineering); data deduplication; application checkpointing
Published in: 2016 IEEE International Conference on Cluster Computing (CLUSTER)

MERCURY: A Transparent Guided I/O Framework for High Performance I/O Stacks

2017

The performance gap between processors and I/O represents a serious scalability limitation for applications running on computing clusters. Parallel file systems often provide mechanisms that allow programmers to disclose their I/O pattern knowledge to the lower layers of the I/O stack through a hints API. This information can be used by the file system to boost the application performance. Unfortunately, programmers rarely make use of these features, missing the opportunity to exploit the full potential of the storage system. In this paper we propose MERCURY, a transparent guided I/O framework able to optimize file I/O patterns in scientific applications, allowing users to control the I/O b…
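The guided-I/O idea above, keeping hint knowledge outside the application and applying it transparently when a matching file is touched, can be sketched as a pattern-to-hints lookup. The table entries, hint names and values below are hypothetical; this is not MERCURY's API.

```python
# Sketch of transparent guided I/O: hints live in a table maintained by
# the user or administrator, and are looked up at open time with no
# change to application code. All names here are hypothetical.
import fnmatch

HINT_TABLE = {"*.h5": {"access": "sequential", "prefetch_kib": 1024}}

def hints_for(path: str) -> dict:
    """Return the I/O hints that apply to a path, or {} if none match."""
    for pattern, hints in HINT_TABLE.items():
        if fnmatch.fnmatch(path, pattern):
            return hints
    return {}

print(hints_for("/scratch/run/output.h5"))  # matches the *.h5 rule
print(hints_for("/scratch/run/log.txt"))    # no hints -> {}
```

An interposition layer (e.g. wrapping `open`) would consult such a table and forward the hints to the file system, which is precisely what lets unmodified applications benefit from pattern knowledge they never disclose themselves.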

Keywords: file system; POSIX; scalability; non-blocking I/O; Network File System; asynchronous I/O; Linux kernel; Lustre (file system)
Published in: 2017 25th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP)

VarySched: A Framework for Variable Scheduling in Heterogeneous Environments

2016

Despite many efforts to better utilize the potential of GPUs and CPUs, this potential is far from being fully exploited. Although many tasks can be easily sped up by using accelerators, most existing schedulers are not flexible enough to really optimize the resource usage of the complete system. The main reasons are (i) that each processing unit requires specific program code and that this code is often not provided for every task, and (ii) that schedulers may follow the run-until-completion model and, hence, disallow resource changes during runtime. In this paper, we present VarySched, a configurable task scheduler framework tailored to efficiently utilize all available computing resources in…
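The scheduling problem described above, tasks with per-device implementations (and some devices missing an implementation entirely), can be illustrated with a greedy toy scheduler. The cost numbers and the greedy policy are invented for illustration; VarySched's actual strategies are configurable and not shown here.

```python
# Toy version of variable scheduling across heterogeneous units: a task
# may ship implementations for several processing units, and the
# scheduler picks, per task, the unit with the lowest projected finish
# time. Costs are illustrative.

def schedule(tasks, units):
    """tasks: {name: {unit: cost}}; returns a {name: unit} placement."""
    load = {u: 0.0 for u in units}
    placement = {}
    for name, impls in tasks.items():
        # Only units for which an implementation exists are candidates --
        # this is reason (i) in the abstract: code is not always provided.
        candidates = [u for u in units if u in impls]
        best = min(candidates, key=lambda u: load[u] + impls[u])
        placement[name] = best
        load[best] += impls[best]
    return placement

tasks = {
    "fft":    {"cpu": 8.0, "gpu": 2.0},
    "filter": {"cpu": 3.0, "gpu": 1.0},
    "parse":  {"cpu": 1.0},  # no GPU implementation provided
}
print(schedule(tasks, ["cpu", "gpu"]))
```

Note how "parse" is forced onto the CPU regardless of load: a scheduler that assumes every task runs everywhere would simply fail here, which is why per-task implementation availability must be part of the model.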

Keywords: scheduling; distributed computing; processor scheduling; energy efficiency
Published in: 2016 IEEE International Conference on Cluster Computing (CLUSTER)

Extending PluTo for Multiple Devices by Integrating OpenACC

2018

For many years now, processor vendors have increased the performance of their devices by adding more cores and wider vectorization units to their CPUs instead of scaling up the processors' clock frequency. Moreover, GPUs have become popular for solving problems with even more parallel compute power. To exploit the full potential of modern compute devices, specific code is necessary, which is often written in a hardware-specific manner. Usually, code written for CPUs is not usable on GPUs and vice versa. The programming API OpenACC tries to close this gap by enabling one code base to be suitable and optimized for many devices. Nevertheless, OpenACC is rarely used by 'standard programmers' and while dif…
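Loop tiling is one of the transformations a polyhedral tool like PluTo applies when restructuring loop nests for a target device. The hand-tiled reduction below (tile size 4 is an arbitrary choice) illustrates the idea in Python; it is a sketch of the transformation, not PluTo's generated code.

```python
# Hand-tiled loop nest: same result as the plain double loop, but the
# iteration order is restructured into tiles, the shape a polyhedral
# compiler produces for locality or for mapping onto a device.

def tiled_sum(matrix, tile=4):
    n = len(matrix)
    total = 0
    for ii in range(0, n, tile):          # iterate over tile rows
        for jj in range(0, n, tile):      # iterate over tile columns
            for i in range(ii, min(ii + tile, n)):   # inside one tile
                for j in range(jj, min(jj + tile, n)):
                    total += matrix[i][j]
    return total

m = [[i * 10 + j for j in range(6)] for i in range(6)]
assert tiled_sum(m) == sum(sum(row) for row in m)  # same result, tiled order
print(tiled_sum(m))  # -> 990
```

Because the tiled and untiled versions are provably equivalent, a compiler is free to emit whichever form suits the device, which is exactly the one-code-base-many-devices property OpenACC integration aims at.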

Keywords: multi-core processor; clock rate; parallel computing; PluTo; compiler
Published in: 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP)