Search results for "Programming language"
Showing 10 of 624 documents
Panel Discussion: Systems for Data Analysis: What They Are, What They Could Be?
1985
CRANE: I’d like to pose a couple of questions: (1) Command Languages — A tool for the astronomer or for the programmer? (2) Portability — Holy Cow or Red Herring? I propose that we start with the first one and see how far we get. If we don’t get past that, fine. If we get on to the question of portability, this is also fine. Let me just open up the discussion by asking Rudi Albrecht to make a comment.
Graph grammar engineering: A software specification method
1987
Graphs as conceptual data models are accepted and used in a wide range of different problem areas. Giving some examples, we outline common aspects of modeling complex structures with graphs. We present a formal framework based on graph grammars to specify graph classes and the corresponding graph manipulations. We show that such a specification can be written in a systematic, engineering-like manner. This is achieved by an extension of the known programmed, attributed graph grammars. Node-set operators are introduced to facilitate graph queries. Concepts like abstraction, decomposition, refinement, parameterization, and integration have been adopted from software engineering to yield a compr…
Fire risk sub-module assessment under Solvency II. Calculating the highest risk exposure
2021
The European Directive 2009/138 of Solvency II requires adopting a new approach based on risk, applying a standard formula as a market proxy in which the risk profile of insurers is fundamental. This study focuses on the fire risk sub-module, framed within the man-made catastrophe risk module, for which the regulations require the calculation of the highest concentration of risks that make up the portfolio of an insurance company within a radius of 200 m. However, the regulations do not indicate a specific methodology. This study proposes a procedure consisting of calculating the cluster with the highest risk and identifying this on a map. The results can be applied immediately by any insur…
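The 200 m concentration requirement described above can be illustrated with a brute-force sketch: for each insured building, sum the insured values of all buildings within 200 m and keep the maximum. This is only a minimal illustration with hypothetical data, not the paper's own clustering procedure; the function and variable names are assumptions.

```python
import math

RADIUS_M = 200.0  # radius prescribed by the Solvency II fire risk sub-module

def highest_concentration(buildings):
    """buildings: list of (x, y, insured_value) tuples, x and y in metres.
    Returns (centre_index, total_insured_value) for the cluster with the
    highest insured value among clusters centred on an insured building."""
    best_idx, best_sum = -1, 0.0
    for i, (xi, yi, _) in enumerate(buildings):
        # Sum the exposure of every building within 200 m of building i.
        total = sum(v for (xj, yj, v) in buildings
                    if math.hypot(xi - xj, yi - yj) <= RADIUS_M)
        if total > best_sum:
            best_idx, best_sum = i, total
    return best_idx, best_sum
```

The quadratic scan is adequate for small portfolios; for realistic portfolios a spatial index would be used, which is presumably where the paper's proposed procedure comes in.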
Suffix array and Lyndon factorization of a text
2014
The main goal of this paper is to highlight the relationship between the suffix array of a text and its Lyndon factorization. It is proved in [15] that one can obtain the Lyndon factorization of a text from its suffix array. Conversely, here we show a new method for constructing the suffix array of a text that takes advantage of its Lyndon factorization. The surprising consequence of our results is that, in order to construct the suffix array, the local suffixes inside each Lyndon factor can be processed separately, allowing different implementation scenarios, such as online, external and internal memory, or parallel implementations. Based on our results, the algorithm that we prop…
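The Lyndon factorization on which this abstract relies can be computed in linear time with the standard Duval's algorithm. A minimal Python sketch of that factorization step (not the paper's suffix-array construction itself):

```python
def lyndon_factorization(s: str) -> list[str]:
    """Duval's algorithm: factor s into a non-increasing sequence of
    Lyndon words in O(n) time."""
    factors = []
    i, n = 0, len(s)
    while i < n:
        j, k = i + 1, i
        # Extend the current candidate factor while it stays (pre-)Lyndon.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit the completed Lyndon factors of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

For example, "banana" factors as b ≥ an ≥ an ≥ a, and concatenating the factors always recovers the original text.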
Sound and reusable components for abstract interpretation
2019
Abstract interpretation is a methodology for defining sound static analyses. Yet, building sound static analyses for modern programming languages is difficult, because these static analyses need to combine sophisticated abstractions for values, environments, stores, etc. However, static analyses often tightly couple these abstractions in the implementation, which not only complicates the implementation, but also makes it hard to decide which parts of the analyses can be proven sound independently of each other. Furthermore, this coupling makes it hard to combine soundness lemmas for parts of the analysis into a soundness proof of the complete analysis. To solve this problem, we propose to c…
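One of the value abstractions the abstract alludes to can be illustrated with a toy sign domain for integers; a full analysis would compose such a domain with abstractions for environments and stores. This is a generic textbook sketch under assumed names, not the paper's component framework:

```python
# Toy sign domain: abstract integers to one of {+, -, 0, ⊤}.
TOP, POS, NEG, ZERO = "⊤", "+", "-", "0"

def alpha(n: int) -> str:
    """Abstraction function mapping a concrete integer to its sign."""
    return POS if n > 0 else NEG if n < 0 else ZERO

def abs_add(a: str, b: str) -> str:
    """Sound abstract addition: over-approximates the sign of x + y."""
    if ZERO in (a, b):
        return b if a == ZERO else a  # adding zero preserves the sign
    return a if a == b else TOP       # mixed signs: result unknown
```

Soundness here means abs_add(alpha(x), alpha(y)) always covers alpha(x + y); e.g. a positive plus a negative yields ⊤, since the concrete result could have any sign.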
Compiler Driven Automatic Kernel Context Migration for Heterogeneous Computing
2014
Computer systems provide various heterogeneous resources (e.g., GPUs, DSPs, and FPGAs) that accelerate applications and can reduce energy consumption. Usually, these resources have isolated memory and require target-specific code to be written. Tools exist that can automatically generate target-specific code for program parts, so-called kernels. The data objects required for a kernel's execution on a target need to be moved to the target resource's memory. It is the programmer's responsibility to serialize the data objects used in the kernel and to copy them to or from the resource's memory. Typically, the programmer writes their own serialization function or uses e…
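The manual serialization burden described above can be sketched as follows: a linked structure is flattened into one contiguous buffer, with host pointers replaced by offsets, before being copied to the accelerator's isolated memory. This is a hypothetical illustration of the problem the compiler is meant to automate, not the paper's generated code:

```python
import struct

def serialize_list(values):
    """Flatten a non-empty linked list of int32 values into one buffer of
    (value, next_offset) records; next_offset = -1 marks the tail.
    Offsets stand in for the host pointers, so the buffer can be copied
    verbatim to a device's isolated memory."""
    buf = bytearray()
    for i, v in enumerate(values):
        nxt = (i + 1) * 8 if i + 1 < len(values) else -1
        buf += struct.pack("<ii", v, nxt)
    return bytes(buf)

def deserialize_list(buf):
    """Walk the offset chain and recover the original values."""
    values, off = [], 0
    while off != -1:
        v, off = struct.unpack_from("<ii", buf, off)
        values.append(v)
    return values
```

Writing such pairs of functions by hand for every kernel data structure is exactly the repetitive, error-prone work that motivates automatic context migration.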
FINDUS: An Open-Source 3D Printable Liquid-Handling Workstation for Laboratory Automation in Life Sciences
2020
3D-printed laboratory devices can enable ambitious research purposes even at a low-budget level. To follow this trend, here we describe the construction, calibration, and usage of the FINDUS (Fully Integrable Noncommercial Dispensing Utility System). We report the successful 3D printing and assembly of a liquid-handling workstation for less than $400. Using this setup, we achieve reliable and flexible liquid-dispensing automation with relative pipetting errors of less than 0.3%. We show our system is well suited for several showcase applications from both the biology and chemistry fields. In support of the open-source spirit, we make all 3D models, assembly instructions, and source code ava…
TIME: A Translator Compiler for CIS
2002
To build a Cooperative Information System, the first step is to collect the schemas of each local database. All the schemas exported from the databases are translated and integrated into a cooperative schema, which the final user queries “transparently”. In this article, we focus on the definition of tools used to build and manage cooperative information systems. These tools enable the automatic or semi-automatic generation of specific translators. The first step of our methodology is a knowledge acquisition step that allows for the description of each local database's data model. The second step is to compare all these descriptions in order to organize the correspo…
The Shuffle Product: New Research Directions
2015
In this paper we survey recent research concerning the shuffle operation, which arises both in Formal Languages and in Combinatorics on Words.
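The shuffle operation surveyed here is, in its basic form, the set of all interleavings of two words that preserve the letter order within each word. A small recursive Python sketch of that standard definition (not any of the paper's new results):

```python
def shuffle(u: str, v: str) -> set[str]:
    """All interleavings of u and v that keep each word's letter order."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the first letter of u or the first letter of v comes first.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})
```

For instance, the shuffle of "ab" and "c" is {abc, acb, cab}; when the two words share no letters, the result has binomial(|u| + |v|, |u|) elements.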
A Comparison of Formulae for Calculating Cost-Efficient Sample Sizes of Case-Control Studies with an Internal Validation Scheme
2000
When a case-control study is planned to include an internal validation study, the sample size of the study and the proportion of validated observations have to be calculated. There is a variety of alternative methods to accomplish this. In this article, some possible procedures are compared in order to clarify whether considerable differences in the suggested optimal designs occur, depending on the method used.