Search results for "computer.software_genre"
Showing 10 of 3858 documents
Towards Dynamic Scripted pNFS Layouts
2012
Today's network file systems consist of a variety of complex subprotocols and backend storage classes. The data is typically spread over multiple data servers to achieve higher levels of performance and reliability. A metadata server is responsible for creating the mapping of a file to these data servers. It is hard to map application-specific access patterns to storage-system-specific features, which can result in degraded I/O performance. We present an NFSv4.1/pNFS protocol extension that gives the client the ability to provide hints and I/O advice to metadata servers. We define multiple storage classes and allow the client to choose which type of storage fits best for its desired ac…
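The abstract does not show the proposed protocol messages, but the client-side hinting idea has a well-known single-host analogue in POSIX: `posix_fadvise`, with which a process declares its expected access pattern to the kernel. A minimal sketch (the file path is a throwaway temp file; `posix_fadvise` is POSIX-only):

```python
import os
import tempfile

# Create a small throwaway file to advise on.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 4096)
    path = f.name

fd = os.open(path, os.O_RDONLY)
try:
    # Declare sequential access over the whole file (length 0 = to EOF),
    # analogous to a pNFS client hinting its access pattern to the
    # metadata server before I/O begins.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096)
finally:
    os.close(fd)
    os.unlink(path)
```

The advice is purely a hint: the kernel may use it to tune readahead, just as the proposed extension lets the metadata server pick a better-fitting storage class, but correctness does not depend on it.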
ADA-FS—Advanced Data Placement via Ad hoc File Systems at Extreme Scales
2020
Today’s High-Performance Computing (HPC) environments increasingly have to manage relatively new access patterns (e.g., large numbers of metadata operations) which general-purpose parallel file systems (PFS) were not optimized for. Burst-buffer file systems aim to solve that challenge by spanning an ad hoc file system across node-local flash storage at compute nodes to relieve the PFS from such access patterns. However, existing burst-buffer file systems still support many of the traditional file system features, which are often not required in HPC applications, at the cost of file system performance.
Using On-Demand File Systems in HPC Environments
2019
In modern HPC systems, parallel (distributed) file systems are used to allow fast access from and to the storage infrastructure. However, I/O performance in large-scale HPC systems has failed to keep up with the increase in computational power. As a result, the I/O subsystem, which also has to cope with a large number of demanding metadata operations, is often the bottleneck of the entire HPC system. In some cases, even a single badly behaving application can be held responsible for slowing down the entire HPC system, disrupting other applications that use the same I/O subsystem. These kinds of situations are likely to become more frequent in the future with larger and more powerful HPC systems…
One Phase Commit: A Low Overhead Atomic Commitment Protocol for Scalable Metadata Services
2012
As the number of client machines in high-end computing clusters increases, a file system that uses a centralized metadata server cannot keep up with the resulting volume of requests. This problem will be even more prominent with the advent of the exascale computing age. In this context, the centralized metadata server represents a bottleneck for scaling file system performance as well as a single point of failure. To overcome this problem, file systems are evolving from centralized metadata services to distributed metadata services. The metadata distribution raises a number of additional problems that must be taken into account. In this paper we will focus on the problem of managi…
Simurgh
2021
The availability of non-volatile main memory (NVMM) has started a new era for storage systems, and NVMM-specific file systems can support extremely high data and metadata rates, which are required by many HPC and data-intensive applications. Scaling metadata performance within NVMM file systems is nevertheless often restricted by the Linux kernel storage stack, while simply moving metadata management to user space can compromise security or flexibility. This paper introduces Simurgh, a hardware-assisted user space file system with decentralized metadata management that allows secure metadata updates from within user space. Simurgh guarantees consistency, durability, and ordering of updat…
Context metadata to adapt Ambient Learning Environments
2008
Ambient learning and knowledge environments (ALKE) are a promising concept for new methods of learning and in particular adapted, personalized learning environments. However, currently very few approaches specify concepts for adaptation. We present a metadata approach to identify and (automatically) derive the context of learning environments as a basis for adaptation. The concept has been partially validated in a scenario of "Spontaneous Group Learning" in Higher Education.
An empirical study of recommendations in OLAP reporting tool
2015
This paper presents the results of the experimental study that was performed in laboratory settings in the context of the OLAP reporting tool developed and put into operation at the University. The study was targeted at exploring which of the modes for generating recommendations in the OLAP reporting tool has a deeper impact on users (i.e., produces more accurate recommendations). Each of the modes of the recommendation component (report structure, user activity, and semantic) employs a separate content-based method that takes advantage of OLAP schema metadata and aggregate functions. The collected data are assessed (i) quantitatively by means of precision/recall and other metrics from the lo…
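The precision/recall metrics used for the quantitative assessment follow the standard information-retrieval definitions; a minimal sketch over item IDs (the IDs and function name are illustrative, not from the paper):

```python
def precision_recall(recommended, relevant):
    """Precision = |rec ∩ rel| / |rec|; recall = |rec ∩ rel| / |rel|."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)  # recommendations the user actually found relevant
    return hits / len(rec), hits / len(rel)

# 4 recommendations, 3 relevant items, 2 of them recommended:
p, r = precision_recall(["r1", "r2", "r3", "r4"], ["r2", "r4", "r5"])
# p = 2/4, r = 2/3
```

A mode with higher precision makes fewer useless recommendations; higher recall means fewer relevant reports are missed, which is the trade-off such a study compares across modes.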
Content Management in Organizations
2005
Content management may be characterized as “a variety of tools and methods that are used together to collect, process, and deliver content of diverse types” (McIntosh, 2000, p. 1). At least three differing approaches to content management may be identified: 1) Web content management, 2) document management, and 3) utilization of structured documents.
Analysing Requirements for Content Management
2006
The content to be managed in organisations is in textual or multimedia formats. A major part of the content is, however, stored in documents. In order to find out the needs of the people and organisations producing and using the content, a profound requirements analysis is needed. In the paper, a novel method for requirements analysis for content management purposes is introduced. The method combines different techniques from two existing methods, which were used in various content management development projects. The paper also describes a case study where the new method is exploited.
Neural Network Techniques for Metal Forming Design
1993
Neural networks are computing structures able to predict the behaviour of a system on the basis of knowledge of facts; the main characteristic of a network is its capability to find a rule in a very complex environment. In the paper a neural network, based on the results of FEM simulations, is utilized to predict the occurrence of defects in a forward extrusion metal forming process. In particular, a three-layer neural network, relating the operative parameters to the failure or success of the working process, has been used, and the back-propagation algorithm has been employed to train the network. A few experimental data points were enough to train the neural network, allowing it to achieve better…
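The paper's FEM-derived training data are not shown in the abstract, but the described architecture, a three-layer (input/hidden/output) network trained by back-propagation on a pass/fail outcome, can be sketched generically. The sketch below is an assumption-laden stand-in: it uses XOR as a toy success/failure labelling and made-up hyperparameters, not the paper's process parameters.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ThreeLayerNet:
    """Minimal 2-4-1 feed-forward network trained with back-propagation."""
    def __init__(self, n_in=2, n_hid=4, seed=0):
        rnd = random.Random(seed)
        # w1[i][j]: weight from input i to hidden j; b1/b2 are biases.
        self.w1 = [[rnd.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in)]
        self.b1 = [rnd.uniform(-1, 1) for _ in range(n_hid)]
        self.w2 = [rnd.uniform(-1, 1) for _ in range(n_hid)]
        self.b2 = rnd.uniform(-1, 1)

    def forward(self, x):
        self.h = [sigmoid(sum(x[i] * self.w1[i][j] for i in range(len(x))) + self.b1[j])
                  for j in range(len(self.b1))]
        self.o = sigmoid(sum(h * w for h, w in zip(self.h, self.w2)) + self.b2)
        return self.o

    def train_step(self, x, target, lr=0.5):
        o = self.forward(x)
        # Output delta: dE/dnet for squared error through the sigmoid.
        d_o = (o - target) * o * (1 - o)
        # Hidden deltas, back-propagated through the output weights.
        d_h = [d_o * self.w2[j] * self.h[j] * (1 - self.h[j]) for j in range(len(self.h))]
        # Gradient-descent weight updates.
        for j in range(len(self.w2)):
            self.w2[j] -= lr * d_o * self.h[j]
            self.b1[j] -= lr * d_h[j]
            for i in range(len(x)):
                self.w1[i][j] -= lr * d_h[j] * x[i]
        self.b2 -= lr * d_o

# Toy stand-in for process-parameter -> success/failure pairs (XOR labels).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
net = ThreeLayerNet()
loss_before = sum(0.5 * (net.forward(x) - t) ** 2 for x, t in data)
for _ in range(5000):
    for x, t in data:
        net.train_step(x, t)
loss_after = sum(0.5 * (net.forward(x) - t) ** 2 for x, t in data)
```

The training loop drives the squared-error loss down over the epochs; in the paper's setting, the inputs would be forming-process parameters from FEM simulations and the target the predicted occurrence of a defect.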