AUTHOR

Steve Legrand

Defining classifier regions for WSD ensembles using word space features

Based on a recent evaluation of word sense disambiguation (WSD) systems [10], disambiguation methods have reached a standstill. In [10] we showed that it is possible to predict the best system for a target word using word features, and that with this 'optimal ensembling method' more accurate WSD ensembles can be built (a 3-5% gain over Senseval state-of-the-art systems, with a comparable amount of potential still remaining). In the interest of developing more accurate ensembles, we here define the strong regions of three popular and effective classifiers used for the WSD task (Naive Bayes – NB, Support Vector Machine – SVM, Decision Rules – D) using word features (word grain, amount of positive and neg…
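The idea of "strong regions" can be illustrated with a minimal sketch: a rule that maps a word's features to the classifier expected to handle it best. The feature names mirror those in the abstracts, but the thresholds and region boundaries below are purely illustrative assumptions, not the values derived in the paper.

```python
# Hypothetical sketch of classifier strong regions for WSD.
# Thresholds are illustrative assumptions, not the paper's actual boundaries.

def strong_region(num_senses, train_per_sense, dominant_sense_ratio):
    """Return the base classifier whose 'strong region' the word's features fall into."""
    if dominant_sense_ratio > 0.8:
        # Highly skewed sense distributions: a simple model tends to suffice.
        return "NB"
    if train_per_sense >= 20 and num_senses > 5:
        # Fine-grained words with ample training data per sense.
        return "SVM"
    # Decision rules as the fallback region.
    return "D"
```

A word whose dominant sense covers 90% of instances would be routed to NB, while a polysemous, well-trained word would be routed to SVM.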

research product

Ontology languages for the semantic web: A never completely updated review

This paper gives a never completely updated account of approaches that have been used by the research community for representing knowledge. After underlining the importance of a layered approach and the use of standards, it starts with early efforts by artificial intelligence researchers. Then recent approaches, aimed mainly at the semantic web, are described. Coding examples from the literature are presented in both sections. Finally, the semantic web ontology creation process, as we envision it, is introduced.

research product

Building an Optimal WSD Ensemble Using Per-Word Selection of Best System

In the Senseval workshops for evaluating WSD systems [1,4,9], no single system or system type (classifier algorithm, type of system ensemble, extracted feature set, lexical knowledge source, etc.) has been found to resolve all ambiguous words into their senses in a superior way. This paper presents a novel method for selecting the best system for a target word based on readily available word features (number of senses, average amount of training per sense, dominant sense ratio). Applied to the Senseval-3 and Senseval-2 English lexical sample state-of-the-art systems, a net gain of approximately 2.5–5.0% (respectively) in average precision per word over the best base system is achieved. The method c…
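The gain reported above follows from a per-word accounting: the ensemble's precision is the average, over target words, of the precision of whichever system was selected for that word. A minimal sketch, assuming a hypothetical table of per-word precisions (the data structure and system names below are illustrative, not from the evaluation itself):

```python
# Sketch of per-word best-system selection. `per_word_precision` maps each
# system name to a dict of {target_word: precision}; `select` picks a system
# for each word. All names and numbers here are hypothetical.

def optimal_ensemble_precision(per_word_precision, select):
    """Average precision when each word is handled by the system `select` chooses."""
    words = list(next(iter(per_word_precision.values())))
    total = sum(per_word_precision[select(w)][w] for w in words)
    return total / len(words)

scores = {
    "A": {"bank": 0.9, "bar": 0.5},
    "B": {"bank": 0.6, "bar": 0.8},
}
# An oracle selector picks the truly best system per word; a learned selector
# would predict it from word features instead.
oracle = lambda w: max(scores, key=lambda s: scores[s][w])
```

Here the best single system ("A") averages 0.70, while per-word selection reaches 0.85, illustrating where the net gain comes from.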

research product

Semi-automatic Derivation of Specific-Domain Ontologies for the Semantic Web

This paper describes an approach for assisting the semi-automatic construction of specific-domain ontology components contained in a digital archive. This proposal for extracting knowledge from digital sources allows users to view this knowledge and to visualize specific-domain ontology components that, with further processing, can be shared with software agents by embedding them into the digital archives themselves in the context of the Semantic Web. In particular, we deal with the issue of not constructing the ontology from scratch: our approach helps to speed up the ontology creation process.

research product

Artificial learning approaches for the next-generation Web: Part I

In this paper we present an ontology learning tool for assembling and visualizing ontology components from a specific domain for the semantic web. The fo...

research product

Symbolic Reductionist Model for Program Comprehension

This article presents the main features of a novel technique, symbolic analysis, for automatic source code processing. The method improves on known methods because it uses a semiotic, interpretative approach. Its most important processes and characteristics are considered here. We describe symbolic information retrieval and the process of analysis in which it can be used to obtain pragmatic information. This, in turn, is useful in understanding the current version of a Java program when developing a new one.

research product

Case-Sensitivity of Classifiers for WSD: Complex Systems Disambiguate Tough Words Better

We present a novel method for improving disambiguation accuracy by building an optimal ensemble (OE) of systems, where we predict the best available system for a target word using a priori case factors (e.g. the amount of training per sense). We report promising results from a series of best-system prediction tests (best prediction accuracy is 0.92) and show that complex/simple systems disambiguate tough/easy words better. The method provides the following benefits: (1) higher disambiguation accuracy for virtually any base systems (the current best OE yields close to a 2% accuracy gain over the Senseval-3 state of the art) and (2) an economical way of building more effective ensembles of all types (e.g. optimal,…
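The finding that complex systems do better on tough words suggests a simple routing rule: score a word's difficulty from its case factors and send difficult words to the complex system. The difficulty score and threshold below are illustrative assumptions, not the paper's actual predictor.

```python
# Hypothetical routing sketch: tougher words (more senses, less training,
# flatter sense distribution) go to the complex system. The formula and
# threshold are illustrative, not the paper's.

def word_difficulty(num_senses, train_per_sense, dominant_sense_ratio):
    """Higher when a word has many senses, little training, and no dominant sense."""
    return num_senses / (train_per_sense * dominant_sense_ratio + 1e-9)

def route(num_senses, train_per_sense, dominant_sense_ratio, threshold=1.0):
    d = word_difficulty(num_senses, train_per_sense, dominant_sense_ratio)
    return "complex" if d > threshold else "simple"
```

A 10-sense word with only two training examples per sense routes to the complex system; a 2-sense word with fifty examples per sense routes to the simple one.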

research product

Natural Language Processing Agents and Document Clustering in Knowledge Management

While HTML provides the Web with a standard format for information presentation, XML has been made a standard for information structuring on the Web. The mission of the Semantic Web now is to provide meaning to the Web. Apart from building on existing Web technologies, we need tools from other areas of science to do that. This chapter shows how natural language processing methods and technologies, together with ontologies and a neural algorithm, can be used to help in the task of adding meaning to the Web, thus making the Web a better platform for knowledge management in general.
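As a toy illustration of the document-clustering step in such a pipeline, documents can be grouped by lexical overlap. This greedy single-pass scheme is a simplified stand-in for the neural clustering algorithm the chapter describes; the threshold and tokenization are illustrative assumptions.

```python
# Minimal greedy clustering of documents by bag-of-words cosine similarity.
# A simplified stand-in for the NLP + neural clustering pipeline; the
# threshold and whitespace tokenization are illustrative assumptions.
from collections import Counter
import math

def cosine(a, b):
    num = sum(a[t] * b.get(t, 0) for t in a)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cluster(docs, threshold=0.3):
    """Attach each document to the first cluster whose seed is similar enough,
    otherwise start a new cluster; return lists of document indices."""
    vecs = [Counter(d.lower().split()) for d in docs]
    clusters = []  # list of (seed_vector, member_indices)
    for i, v in enumerate(vecs):
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

docs = ["semantic web ontology",
        "ontology semantic web tools",
        "football match score"]
```

On these three toy documents, the two Semantic Web texts group together and the unrelated one forms its own cluster.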

research product