Search results for "Knowledge base"
Showing 10 of 142 documents
A Conceptual Architecture of Ontology Based KM System for Failure Mode and Effects Analysis
2014
Failure Mode and Effects Analysis (FMEA) is a systematic method for procedure analysis and risk assessment. It is a structured way to identify the potential failure modes of a product or process, the probability of their occurrence, and their overall effects. The basic purpose of this analysis is to mitigate the risk and impact associated with a failure by planning and prioritizing actions that make a product or process robust to failure. Effective manufacturing and improved product quality are the fruits of a successful FMEA implementation. During this activity, valuable knowledge is generated that translates into product or process quality and efficiency. If this knowledge can be shared and reused…
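The abstract does not give formulas, but a common FMEA convention (not stated in this excerpt) ranks failure modes by the risk priority number RPN = severity × occurrence × detection and addresses the highest-RPN items first. A minimal sketch with hypothetical failure-mode data:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (certain to detect) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk priority number: higher means more urgent to mitigate.
        return self.severity * self.occurrence * self.detection

# Hypothetical failure modes for a manufactured part.
modes = [
    FailureMode("seal leakage", severity=8, occurrence=3, detection=4),
    FailureMode("surface crack", severity=9, occurrence=2, detection=7),
    FailureMode("mislabeling", severity=3, occurrence=5, detection=2),
]

# Plan and prioritize mitigation actions by descending RPN.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.name}: RPN = {m.rpn}")
```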
A Knowledge Based Decision Support System for Bioinformatics and System Biology
2011
In this paper, we present a new Decision Support System for Bioinformatics and System Biology issues. Our system is based on a Knowledge base, representing the expertise about the application domain, and a Reasoner. The Reasoner, consulting the Knowledge base and guided by the user's request, is able to suggest one or more strategies to solve the selected problem. Moreover, the system can build, at different abstraction layers, a workflow for the current problem on the basis of the user's choices, freeing the user from implementation details and assisting them in the correct configuration of the algorithms. Two possible application scenarios will be introduced: the analysis of …
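The excerpt describes the Knowledge base and Reasoner only at a high level. As a purely illustrative sketch (the problem classes, strategy names, and lookup rule below are all hypothetical, not the paper's ontology), the suggestion step could be modeled as:

```python
# Hypothetical, simplified stand-in for a Knowledge base consulted by a
# Reasoner: the KB maps problem classes to applicable strategies, and the
# reasoner returns the candidates matching the user's request.

KNOWLEDGE_BASE: dict[str, list[str]] = {
    "sequence-alignment": ["pairwise-alignment", "multiple-alignment"],
    "pathway-analysis": ["enrichment-analysis", "network-simulation"],
    "expression-clustering": ["hierarchical-clustering", "graph-clustering"],
}

def suggest_strategies(problem: str) -> list[str]:
    """Consult the KB and suggest one or more strategies for the problem."""
    return KNOWLEDGE_BASE.get(problem, [])

print(suggest_strategies("pathway-analysis"))
# ['enrichment-analysis', 'network-simulation']
```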
A knowledge-based decision support system in bioinformatics: An application to protein complex extraction
2013
Background: We introduce a Knowledge-based Decision Support System (KDSS) to address the problem of protein complex extraction. Using a Knowledge Base (KB) encoding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools according to the features of the input dataset. Our system provides a navigable workflow for the current experiment, and it furthermore offers support in configuring and running every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results: We briefly present the KDSS' architecture and basic concepts used in the design of the knowl…
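Again as a hypothetical sketch (the feature names, tools, and selection rules below are invented for illustration and are not the paper's), suggesting tools from dataset features and chaining them into a workflow might look like:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    tool: str
    config: dict

def build_workflow(features: dict) -> list[Step]:
    """Pick a tool per processing stage from dataset features (invented rules)."""
    steps = []
    # Stage 1: choose a graph-construction tool by dataset size.
    builder = "fast-graph-builder" if features["n_proteins"] > 10_000 else "exact-graph-builder"
    steps.append(Step("build interaction graph", builder, {"weighted": features["weighted"]}))
    # Stage 2: choose a complex-extraction algorithm by edge weighting.
    extractor = "weighted-clustering" if features["weighted"] else "clique-percolation"
    steps.append(Step("extract complexes", extractor, {"min_size": 3}))
    return steps

for step in build_workflow({"n_proteins": 25_000, "weighted": True}):
    print(f"{step.name} -> {step.tool} {step.config}")
```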
Probabilistic Logic under Coherence, Model-Theoretic Probabilistic Logic, and Default Reasoning in System P
2016
We study probabilistic logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore how probabilistic reasoning under coherence is related to model-theoretic probabilistic reasoning and to default reasoning in System P. In particular, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Moreover, we show that probabilistic reasoning under coherence is a generalization of default reasoning in System P. That is, we provide a new probabilistic semantics for System P, which uses neither infinitesimal probabilities nor atomic bound (or bi…
Probabilistic Logic under Coherence, Model-Theoretic Probabilistic Logic, and Default Reasoning
2001
We study probabilistic logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore the relationship between coherence-based and model-theoretic probabilistic logic. Interestingly, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Crucially, we even show that probabilistic reasoning under coherence is a probabilistic generalization of default reasoning in system P. That is, we provide a new probabilistic semantics for system P, which is neither based on infinitesimal probabilities nor on atomic-bound (or also big-stepped) probabil…
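For readers new to the coherence setting, the two abstracts above rely on the same probabilistic reading of defaults, which can be summarized as follows (our paraphrase of the standard coherence-based definitions, not the papers' exact formulation):

```latex
% A default "if A, then normally B" is read as the assessment P(B|A) = 1.
% A family of defaults p-entails a conditional event B|A exactly when
% assigning 1 to every premise leaves 1 as the only coherent value for
% the conclusion:
\[
  \{B_1|A_1,\dots,B_n|A_n\} \ \text{p-entails}\ B|A
  \quad\Longleftrightarrow\quad
  \text{every coherent extension of } P(B_i|A_i)=1,\ i=1,\dots,n,
  \ \text{satisfies } P(B|A)=1 .
\]
```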
Conceptual and Paradigmatic Foundations of ISD
1995
Quasi Conjunction and Inclusion Relation in Probabilistic Default Reasoning
2011
We study the quasi conjunction and the Goodman & Nguyen inclusion relation for conditional events, in the setting of probabilistic default reasoning under coherence. We deepen two recent results given in (Gilio and Sanfilippo, 2010): the first result concerns p-entailment from a family F of conditional events to the quasi conjunction C(S) associated with each nonempty subset S of F; the second result, among other aspects, analyzes the equivalence between p-entailment from F and p-entailment from C(S), where S is some nonempty subset of F. We also characterize p-entailment by some alternative theorems. Finally, we deepen the connections between p-entailment and the Goodman & Nguyen inclusion…
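The two central notions of this abstract can be stated compactly. The definitions below are the standard ones from the literature, in our rendering rather than quoted from the paper:

```latex
% Quasi conjunction (Adams) of a family F = {B_1|A_1, ..., B_n|A_n}:
% the conjunction of material conditionals, conditioned on at least one
% antecedent being true (here A \supset B denotes \neg A \vee B).
\[
  C(F) \;=\; \Bigl(\bigwedge_{i=1}^{n} (A_i \supset B_i)\Bigr)
  \,\Big|\, \Bigl(\bigvee_{i=1}^{n} A_i\Bigr).
\]
% Goodman & Nguyen inclusion relation between conditional events:
\[
  B|A \subseteq D|C
  \quad\Longleftrightarrow\quad
  A \wedge B \models C \wedge D
  \ \ \text{and}\ \
  C \wedge \neg D \models A \wedge \neg B .
\]
```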
Heyting-valued interpretations for Constructive Set Theory
2006
We define and investigate Heyting-valued interpretations for Constructive Zermelo–Fraenkel set theory (CZF). These interpretations provide models for CZF that are analogous to Boolean-valued models for ZF and to Heyting-valued models for IZF. Heyting-valued interpretations are defined here using set-generated frames and formal topologies. As applications of Heyting-valued interpretations, we present a relative consistency result and an independence proof.
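The analogy with Boolean-valued models can be made concrete: formulas take truth values in a complete Heyting algebra H instead of a Boolean algebra. The clauses below are the generic propositional ones (a standard sketch; the paper's construction via set-generated frames and formal topologies refines this):

```latex
% Standard Heyting-valued truth clauses over a complete Heyting algebra H
% (generic sketch, not the paper's specific formal-topology construction).
% \llbracket / \rrbracket require the stmaryrd package.
\begin{align*}
  \llbracket \varphi \wedge \psi \rrbracket &= \llbracket \varphi \rrbracket \wedge \llbracket \psi \rrbracket, &
  \llbracket \varphi \vee \psi \rrbracket &= \llbracket \varphi \rrbracket \vee \llbracket \psi \rrbracket, \\
  \llbracket \varphi \rightarrow \psi \rrbracket &= \llbracket \varphi \rrbracket \Rightarrow \llbracket \psi \rrbracket, &
  \llbracket \bot \rrbracket &= 0_H,
\end{align*}
% where \Rightarrow is Heyting implication (the relative pseudo-complement).
```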
Transitive Reasoning with Imprecise Probabilities
2015
We study probabilistically informative (weak) versions of transitivity by using suitable definitions of defaults and negated defaults in the setting of coherence and imprecise probabilities. We represent p-consistent sequences of defaults and/or negated defaults by g-coherent imprecise probability assessments on the respective sequences of conditional events. Finally, we present the coherent probability propagation rules for Weak Transitivity and the validity of selected inference patterns by proving p-entailment of the associated knowledge bases.
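The reason weakened versions are studied at all is a standard observation in this literature: plain transitivity is probabilistically non-informative under coherence. In our rendering (a well-known fact, not quoted from the paper):

```latex
% Plain transitivity carries no information: for any x, y in [0, 1], the
% premises leave the conclusion completely unconstrained, i.e. the tightest
% coherent interval for P(C|A) is the whole unit interval.
\[
  P(B|A) = x, \quad P(C|B) = y
  \quad\Longrightarrow\quad
  P(C|A) \in [0,\,1].
\]
% Informative conclusions therefore require strengthened premise sets, whose
% coherent propagation rules (Weak Transitivity) the paper derives.
```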
Building Semantic Trees from XML Documents
2016
The distributed nature of the Web, as a decentralized system exchanging information between heterogeneous sources, has underlined the need to manage interoperability, i.e., the ability to automatically interpret information in Web documents exchanged between different sources, which is necessary for efficient information management and search applications. In this context, XML was introduced as a data representation standard that simplifies the tasks of interoperation and integration among heterogeneous data sources, allowing data to be represented in (semi-)structured documents consisting of hierarchically nested elements and atomic attributes. However, while XML was shown most …
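The hierarchical structure the abstract refers to is easy to see in code. As an illustrative sketch (the sample document and tag names are invented, and this is not the paper's semantic-tree construction), Python's standard xml.etree.ElementTree parses nested elements and atomic attributes into a tree that can then be walked:

```python
import xml.etree.ElementTree as ET

# Invented sample: hierarchically nested elements with atomic attributes.
doc = """
<library>
  <book id="b1" lang="en">
    <title>Knowledge Bases</title>
    <author>Smith</author>
  </book>
  <book id="b2" lang="fr">
    <title>Arbres semantiques</title>
  </book>
</library>
"""

def print_tree(elem: ET.Element, depth: int = 0) -> None:
    """Walk the element hierarchy, printing tags, attributes, and text."""
    text = (elem.text or "").strip()
    print("  " * depth + elem.tag, elem.attrib, text)
    for child in elem:
        print_tree(child, depth + 1)

print_tree(ET.fromstring(doc))
```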