Search results for "Data_FILES"
Showing 10 of 197 documents
File system scalability with highly decentralized metadata on independent storage devices
2016
This paper discusses using hard drives that integrate a key-value interface and network access directly in the drive hardware (the Kinetic storage platform) to provide file system functionality in a large-scale environment. By exploiting this higher-level functionality to manage metadata on the drives themselves, a serverless system architecture is proposed. The key technique discussed in this paper is skipping path-component traversal during the lookup operation, which avoids performance degradation when metadata is highly decentralized. Scalability implications are reviewed based on a FUSE file system implementation. Peer Reviewed
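The core idea of skipping path-component traversal can be sketched with a toy key-value store: metadata records are keyed by a hash of the full path, so a lookup costs one key-value access instead of one per directory component. This is an illustrative sketch only; the store, function names, and record format here are assumptions, not the Kinetic API or the paper's implementation.

```python
import hashlib

# Toy in-memory key-value store standing in for a Kinetic-style drive.
# (Illustrative only; the real Kinetic platform exposes a network KV protocol.)
store = {}

def kv_key(path: str) -> str:
    # Key the metadata record by a hash of the FULL path, so a lookup
    # needs a single key-value get rather than walking /a, /a/b, /a/b/c ...
    return hashlib.sha256(path.encode()).hexdigest()

def create(path: str, inode: dict) -> None:
    store[kv_key(path)] = inode

def lookup(path: str):
    # One KV access regardless of path depth -- the traversal is skipped.
    return store.get(kv_key(path))

create("/a/b/c/file.txt", {"size": 42, "mode": 0o644})
print(lookup("/a/b/c/file.txt"))
```

The trade-off, as the abstract hints, is that operations which are cheap under component traversal (e.g., permission checks along the path, or renaming a directory) become harder when lookups bypass intermediate directories.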
MOESM1 of NF1 microdeletion syndrome: case report of two new patients
2019
Additional file 1. Timelines of the clinical cases.
Additional file 1 of Genome-wide association meta-analysis for early age-related macular degeneration highlights novel loci and insights for advanced…
2020
Additional file 1: Supplementary Tables.
Additional file 2 of PVAmpliconFinder: a workflow for the identification of human papillomaviruses from high-throughput amplicon sequencing
2020
Additional file 2: Supplementary Data 1. Info file description. Supplementary Data 2. Details of the workflow steps. Supplementary Data 3. Description of output files format. Supplementary Data 4. Sample collection, preparation, and sequencing
MOESM5 of What are the effects of even-aged and uneven-aged forest management on boreal forest biodiversity in Fennoscandia and European Russia? A sy…
2019
Additional file 5. Data extraction spreadsheet.
Can the Retailer’s ICT Enhance the Impact of Service Recovery Efforts on Customer Satisfaction?
2019
Service recovery remains a topic of considerable interest for both academics and practitioners. This paper aims to explore the relations between recovery efforts and causal attributions, satisfacti...
ADA-FS—Advanced Data Placement via Ad hoc File Systems at Extreme Scales
2020
Today’s High-Performance Computing (HPC) environments increasingly have to manage relatively new access patterns (e.g., large numbers of metadata operations) for which general-purpose parallel file systems (PFS) were not optimized. Burst-buffer file systems aim to solve that challenge by spanning an ad hoc file system across node-local flash storage at the compute nodes to relieve the PFS of such access patterns. However, existing burst-buffer file systems still support many traditional file system features, which are often not required by HPC applications, at the cost of file system performance.
One Phase Commit: A Low Overhead Atomic Commitment Protocol for Scalable Metadata Services
2012
As the number of client machines in high-end computing clusters increases, a file system that relies on a centralized metadata server cannot keep up with the resulting volume of requests. This problem will become even more prominent with the advent of the exascale computing age. In this context, the centralized metadata server represents both a bottleneck for scaling file system performance and a single point of failure. To overcome this problem, file systems are evolving from centralized to distributed metadata services. Metadata distribution raises a number of additional problems that must be taken into account. In this paper we focus on the problem of managi…
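The abstract is truncated, but the titular idea can be illustrated generically: classic two-phase commit (2PC) needs a prepare/vote round before the commit round, while a one-phase protocol issues the commit directly and relies on participants being able to recover locally. The sketch below counts message rounds in a toy model; it illustrates atomic commitment in general and is not the paper's exact protocol, whose class names and recovery logic are assumptions.

```python
# Toy model contrasting 2PC with a one-phase commit for metadata updates.
# (Generic illustration; not the protocol from the paper.)

class Participant:
    def __init__(self):
        self.log = []

    def prepare(self, txn) -> bool:      # 2PC phase 1: vote to commit
        self.log.append(("prepared", txn))
        return True

    def commit(self, txn) -> None:       # 2PC phase 2, or the single 1PC phase
        self.log.append(("committed", txn))

def two_phase_commit(txn, participants) -> int:
    # Two message rounds: collect votes, then broadcast the decision.
    rounds = 0
    votes = [p.prepare(txn) for p in participants]
    rounds += 1
    if all(votes):
        for p in participants:
            p.commit(txn)
        rounds += 1
    return rounds

def one_phase_commit(txn, participants) -> int:
    # One round: commit directly. The hard part (handled by such protocols,
    # not shown here) is undoing the update locally if a participant fails.
    for p in participants:
        p.commit(txn)
    return 1

ps = [Participant() for _ in range(3)]
print(two_phase_commit("mkdir /a", ps))
print(one_phase_commit("mkdir /b", ps))
```

Halving the coordination rounds matters precisely in the setting the abstract describes: many small metadata operations, where per-operation round trips dominate cost.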
Simurgh
2021
The availability of non-volatile main memory (NVMM) has started a new era for storage systems, and NVMM-specific file systems can support the extremely high data and metadata rates required by many HPC and data-intensive applications. Scaling metadata performance within NVMM file systems is nevertheless often restricted by the Linux kernel storage stack, while simply moving metadata management to user space can compromise security or flexibility. This paper introduces Simurgh, a hardware-assisted user space file system with decentralized metadata management that allows secure metadata updates from within user space. Simurgh guarantees consistency, durability, and ordering of updat…
Data Backup Dilemma
2016
When the Great East Japan Earthquake struck in 2011, several municipalities lost their residential data, including backups. Since none of them had ever considered the total loss of data, little attention had been paid to data backup policy. In many cases, the backup tapes were simply stored inside the server room, right beside the server rack. Following the calamity, the Japanese national government tried to introduce a cloud-based data backup system for municipalities, with the purpose of securing the safekeeping of backup data. However, municipalities were reluctant to go along with this, since the problem of losing network connectivity during an earthquake remained foremost in their minds. The…