Search results for "Natural language processing"
Showing 10 of 413 documents
Improving Classification of Tweets Using Linguistic Information from a Large External Corpus
2016
The bag-of-words representation of documents is often unsatisfactory, as it ignores relationships between important terms that do not co-occur literally. Improvements might be achieved by expanding the vocabulary with other relevant words, such as synonyms.
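The vocabulary-expansion idea lends itself to a short sketch. The synonym table, tokenisation, and similarity measure below are illustrative assumptions, not the paper's actual method:

```python
# Toy sketch of vocabulary expansion for a bag-of-words model: terms that
# never co-occur literally (e.g. "car" vs "automobile") are folded onto a
# canonical term via a synonym table. A real system would derive the table
# from a large external corpus; the entries here are invented.
from collections import Counter

SYNONYMS = {
    "automobile": "car",
    "film": "movie",
}

def bag_of_words(text, synonyms=SYNONYMS):
    """Tokenise and fold synonyms onto a canonical term."""
    tokens = text.lower().split()
    return Counter(synonyms.get(tok, tok) for tok in tokens)

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0
```

With folding, two tweets mentioning "car" and "automobile" respectively get a nonzero similarity; with the plain bag of words they would share no terms at all.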
Conceptual graph operations for formal visual reasoning in the medical domain
2014
Objective - Conceptual graphs (CGs) are used to represent clinical guidelines because they support visual reasoning with a logical background, making them a potentially valuable representation for guidelines. Materials and methods - The conceptual graph formalism has an essential, basic component: a formal vocabulary that drives all of the other mechanisms, notably specialization and projection. Graph-theoretical operations, such as projection, rules, derivation, constraints, probabilities and uncertainty, support diagrammatic reasoning. Results - A conceptual graph graphical user interface includes multilingual vocabulary management, some query and decision-m…
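Projection, the central operation named in the abstract, can be sketched as a type-respecting graph morphism: each query concept must map to a target concept of the same or a more specialised type, preserving every relation edge. The toy medical hierarchy, relation names, and brute-force search below are illustrative assumptions, not the paper's implementation:

```python
# Minimal projection check between two conceptual graphs. A graph is
# (concepts, edges): concepts maps node -> type, edges is a set of
# (node, relation, node) triples. Hierarchy and graphs are toy examples.
from itertools import product

HIERARCHY = {"Aspirin": "Drug", "Drug": "Substance", "Headache": "Symptom"}

def is_a(child, ancestor):
    """True if `child` equals or specialises `ancestor` in the hierarchy."""
    while child is not None:
        if child == ancestor:
            return True
        child = HIERARCHY.get(child)
    return False

def projects(query, target):
    """True if some mapping of query nodes onto target nodes preserves
    every edge, with each target type specialising the query type."""
    q_concepts, q_edges = query
    t_concepts, t_edges = target
    q_nodes = list(q_concepts)
    # Candidate target nodes per query node, filtered by the type hierarchy.
    candidates = [
        [t for t, t_type in t_concepts.items() if is_a(t_type, q_concepts[q])]
        for q in q_nodes
    ]
    for combo in product(*candidates):
        mapping = dict(zip(q_nodes, combo))
        if all((mapping[a], r, mapping[b]) in t_edges for a, r, b in q_edges):
            return True
    return False
```

For example, a query asking for some Drug that treats some Symptom projects into a target graph stating that Aspirin treats Headache.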
Impact of textual data augmentation on linguistic pattern extraction to improve the idiomaticity of extractive summaries
2021
The present work aims to develop a text summarisation system for financial texts with a focus on the fluidity of the target language. Linguistic analysis shows that the process of writing summaries should take into account not only terminological and collocational extraction, but also a range of linguistic material, referred to here as the "support lexicon", that plays an important role in the cognitive organisation of the field. On this basis, this paper highlights the relevance of pre-training the CamemBERT model on a French financial dataset to extend its domain-specific vocabulary, and of fine-tuning it on extractive summarisation. We then evaluate the impact of textua…
Adaptive Vocabulary Learning Environment for Late Talkers
2016
The main aim of this research is to provide children who have an early language delay with an adaptive way to train their vocabulary, taking into account the individuality of the learner. The suggested system is a mobile game-based learning environment that provides simple tasks in which the learner chooses, from multiple pictures presented on the screen, the one that corresponds to a played-back sound. Our basic assumption is that the more similar the concepts (in our case, words) are, the harder the recognition task is. The system chooses the pictures to be presented on the screen by calculating the distances between the concepts along different dimensions. The distances are considered to consist o…
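The distance-based picture selection can be sketched in a few lines. The concept dimensions and feature values below are invented purely for illustration:

```python
# Each concept is a vector over a few hand-chosen dimensions (here:
# animacy, size, "is a pet" -- hypothetical features). Pictures close to
# the target word make the recognition task harder; distant ones, easier.
import math

CONCEPTS = {
    "dog":   (1.0, 0.4, 1.0),
    "cat":   (1.0, 0.3, 1.0),
    "horse": (1.0, 0.9, 0.2),
    "house": (0.0, 1.0, 0.0),
    "cup":   (0.0, 0.1, 0.0),
}

def distance(a, b):
    """Euclidean distance between two concept vectors."""
    return math.dist(CONCEPTS[a], CONCEPTS[b])

def pick_distractors(target, k, hard=True):
    """Choose k pictures: nearby concepts for a hard task, distant for easy."""
    others = [c for c in CONCEPTS if c != target]
    others.sort(key=lambda c: distance(target, c), reverse=not hard)
    return others[:k]
```

Asking for hard distractors for "dog" yields the semantically close "cat" and "horse"; easy distractors are the unrelated "house" and "cup".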
Numerical Analysis of Word Frequencies in Artificial and Natural Language Texts
1997
We perform a numerical study of the statistical properties of natural texts written in English and of two types of artificial texts. As statistical tools we use the conventional Zipf analysis of the distribution of words and the inverse Zipf analysis of the distribution of frequencies of words, the analysis of vocabulary growth, the Shannon entropy and a quantity which is a nonlinear function of frequencies of words, the frequency "entropy". Our numerical results, obtained by investigation of eight complete books and sixteen related artificial texts, suggest that, among these analyses, the analysis of vocabulary growth shows the most striking difference between natural and artificial texts…
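Two of the analyses named in the abstract, the Zipf rank-frequency distribution and the vocabulary growth curve, are easy to sketch on a plain token list (the token list below is a toy stand-in for a book's text):

```python
# Zipf analysis: word frequencies in descending rank order.
# Vocabulary growth: distinct words seen as a function of text length.
from collections import Counter

def zipf_ranks(tokens):
    """Word frequencies sorted into descending rank order."""
    return [n for _, n in Counter(tokens).most_common()]

def vocabulary_growth(tokens):
    """Number of distinct words after reading the first i tokens."""
    seen, growth = set(), []
    for tok in tokens:
        seen.add(tok)
        growth.append(len(seen))
    return growth
```

For a natural text the growth curve keeps rising slowly as rare words keep appearing, which is the property the abstract reports as most discriminative.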
A practical solution to the problem of automatic part-of-speech induction from text
2005
The problem of part-of-speech induction from text involves two aspects: Firstly, a set of word classes is to be derived automatically. Secondly, each word of a vocabulary is to be assigned to one or several of these word classes. In this paper we present a method that solves both problems with good accuracy. Our approach adopts a mixture of statistical methods that have been successfully applied in word sense induction. Its main advantage over previous attempts is that it reduces the syntactic space to only the most important dimensions, thereby almost eliminating the otherwise omnipresent problem of data sparseness.
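The two steps described, deriving classes and assigning words to them, can be illustrated with neighbour-context vectors. The toy corpus, the frequency-based dimension cut (standing in for "only the most important dimensions"), and the greedy clustering below are assumptions for illustration, not the paper's actual method:

```python
# Build left/right-neighbour context vectors per word, keep only contexts
# over the most frequent words (dimension reduction against sparseness),
# then greedily group words with similar vectors.
from collections import Counter

def context_vectors(tokens, dims=10):
    """Map word -> Counter over its most-frequent-word contexts."""
    top = {w for w, _ in Counter(tokens).most_common(dims)}
    vecs = {}
    for i, w in enumerate(tokens):
        vec = vecs.setdefault(w, Counter())
        if i > 0 and tokens[i - 1] in top:
            vec["L:" + tokens[i - 1]] += 1
        if i + 1 < len(tokens) and tokens[i + 1] in top:
            vec["R:" + tokens[i + 1]] += 1
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def induce_classes(tokens, threshold=0.8):
    """Greedily group words whose context vectors are similar."""
    vecs = context_vectors(tokens)
    classes = []  # list of (member set, summed centroid vector)
    for w, v in vecs.items():
        for members, centroid in classes:
            if cosine(v, centroid) > threshold:
                members.add(w)
                centroid.update(v)
                break
        else:
            classes.append(({w}, Counter(v)))
    return [members for members, _ in classes]
```

On a tiny corpus the nouns and the verbs fall into separate classes because they occur in the same neighbour contexts.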
Ontology languages for the semantic web: A never completely updated review
2006
This paper gives a (never completely updated) account of the approaches that have been used by the research community for representing knowledge. After underlining the importance of a layered approach and the use of standards, it starts with early efforts by artificial intelligence researchers. Then recent approaches, aimed mainly at the semantic web, are described. Coding examples from the literature are presented in both sections. Finally, the semantic web ontology creation process, as we envision it, is introduced.
Natural Language Processing Agents and Document Clustering in Knowledge Management
2008
While HTML provides the Web with a standard format for information presentation, XML has been made a standard for information structuring on the Web. The mission of the Semantic Web now is to provide meaning to the Web. Apart from building on the existing Web technologies, we need other tools from other areas of science to do that. This chapter shows how natural language processing methods and technologies, together with ontologies and a neural algorithm, can be used to help in the task of adding meaning to the Web, thus making the Web a better platform for knowledge management in general.
Within and between variations of texts elicited from nine wine experts
2006
Nine wine experts tasted in replicate six Chardonnay wines that had been aged in oak barrels from different forests and/or species. They freely gave their descriptions in writing; the only instruction was to underline the three words or expressions that best characterized each tasted wine. The texts were submitted to an objective lexical analysis that quantified the substantial variation among the experts. In addition, a matching task was performed by 117 assessors, in which each assessor received from each expert six white cards and six yellow cards representing the descriptions of the six white wines and six red wines. The assessors were incapable of matching the descriptions for the same e…
An Extension of the VSM Documents Representation using Word Embedding
2017
Abstract In this paper, we present experiments that integrate the power of the word-embedding representation into real document-classification problems. Word embedding is a recent trend in the natural language processing domain that represents each word of a document in vector format. This representation embeds the semantic context in which the word most frequently occurs. We include this new representation in a classical VSM document representation and evaluate it using a learning algorithm based on the Support Vector Machine. This added information makes classification more difficult, since it increases the learning time and the memory neede…
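The combined representation described can be sketched as a term-frequency VSM vector with the mean word embedding appended. The tiny hand-made embedding table below is an illustrative assumption; a real system would use vectors learned from a large corpus:

```python
# Classical VSM part: term frequencies over a fixed vocabulary.
# Extension: append the average embedding of the document's known words.
from collections import Counter

EMBEDDINGS = {  # toy 2-dimensional embeddings, invented for illustration
    "good":  [0.9, 0.1],
    "great": [0.8, 0.2],
    "bad":   [0.1, 0.9],
}

def vsm_vector(tokens, vocabulary):
    """Term-frequency vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in vocabulary]

def extended_vector(tokens, vocabulary, embeddings=EMBEDDINGS):
    """VSM vector with the mean word embedding appended."""
    known = [embeddings[t] for t in tokens if t in embeddings]
    dim = len(next(iter(embeddings.values())))
    mean = [sum(v[i] for v in known) / len(known) if known else 0.0
            for i in range(dim)]
    return vsm_vector(tokens, vocabulary) + mean
```

The appended dimensions let "great", absent from the vocabulary, still contribute semantic signal, at the cost of a longer vector, mirroring the time/memory trade-off the abstract reports.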