Search results for "computer.software_genre"
Showing 10 of 3858 documents
Reflection Assignment as a Tool to Support Students’ Metacognitive Awareness in the Context of Computer-Supported Collaborative Learning
2021
The present study explores the potential of a reflection assignment as a tool for supporting master’s degree students’ metacognitive skills in the context of computer-supported collaborative learning (CSCL). The research question (RQ) is formulated as follows: How does a regularly submitted reflection assignment support the development of students’ individual metacognitive awareness in the context of CSCL? The empirical data is a text corpus (7878 words) extracted from individual students’ (N = 13) reflection assignments (N = 65) submitted during one semester. Qualitative content analysis was employed to analyze the data. The results demonstrate that by the end of the course, the students s…
Supporting Emotion Automatic Detection and Analysis over Real-Life Text Corpora via Deep Learning: Model, Methodology, and Framework
2021
This paper describes an approach for supporting automatic satire detection through an effective deep learning (DL) architecture that has been shown to be useful for sarcasm/irony detection problems. We trained and tested the system on articles drawn from two important satirical blogs, Lercio and IlFattoQuotidiano, and from major Italian newspapers.
The computation of word associations
2002
It is shown that basic language processes such as the production of free word associations and the generation of synonyms can be simulated using statistical models that analyze the distribution of words in large text corpora. According to the law of association by contiguity, the acquisition of word associations can be explained by Hebbian learning. The free word associations as produced by subjects on presentation of single stimulus words can thus be predicted by applying first-order statistics to the frequencies of word co-occurrences as observed in texts. The generation of synonyms can also be conducted on co-occurrence data but requires second-order statistics. The reason is that synony…
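The first-order/second-order distinction in the abstract above can be made concrete with a small sketch. The toy corpus, function names, and similarity measure below are illustrative assumptions, not taken from the paper: first-order statistics rank a word's most frequent co-occurrents as predicted free associations, while second-order statistics compare whole co-occurrence vectors, so that words like "doctor" and "physician" score as similar even when they never co-occur.

```python
from collections import Counter, defaultdict
from itertools import combinations
from math import sqrt

# Toy corpus standing in for a large text collection.
corpus = [
    "the doctor visited the hospital".split(),
    "the nurse worked at the hospital".split(),
    "the doctor and the nurse met a patient".split(),
    "the physician examined the patient".split(),
]

# First-order statistics: count co-occurrences within a sentence.
cooc = defaultdict(Counter)
for sent in corpus:
    for w1, w2 in combinations(set(sent), 2):
        cooc[w1][w2] += 1
        cooc[w2][w1] += 1

def associate(word, n=3):
    """Predict free associations: the word's most frequent co-occurrents."""
    return [w for w, _ in cooc[word].most_common(n)]

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def synonyms(word):
    """Second-order statistics: rank other words by the similarity of their
    co-occurrence vectors (shared contexts, not direct co-occurrence)."""
    return sorted((w for w in cooc if w != word),
                  key=lambda w: cosine(cooc[word], cooc[w]), reverse=True)
```

On this toy data, `synonyms("physician")` ranks "doctor" high because both share contexts like "the" and "patient", even though the two words never appear in the same sentence.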
Revisiting corpus creation and analysis tools for translation tasks
2016
Many translation scholars have proposed the use of corpora to allow professional translators to produce high-quality texts that read like originals. Yet the diffusion of this methodology has been modest, one reason being that software for corpus analysis has been developed with the linguist in mind: it is generally complex and cumbersome, offering many advanced features but lacking the usability and the specific features that meet translators’ needs. To overcome this shortcoming, we have developed TranslatorBank, a free corpus creation and analysis tool designed for translation tasks. TranslatorBank supports the creation of specialized monolingual …
Discovering the Senses of an Ambiguous Word by Clustering its Local Contexts
2005
As has been shown recently, it is possible to automatically discover the senses of an ambiguous word by statistically analyzing its contextual behavior in a large text corpus. However, this kind of research is still at an early stage. The results need to be improved and there is considerable disagreement on methodological issues. For example, although most researchers use clustering approaches for word sense induction, it is not clear what statistical features the clustering should be based on. Whereas so far most researchers cluster global co-occurrence vectors that reflect the overall behavior of a word in a corpus, in this paper we argue that it is more appropriate to use local context v…
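The local-context-clustering idea in the abstract above can be sketched in a few lines. The toy occurrences, similarity threshold, and greedy clustering scheme are illustrative assumptions (the paper's actual features and clustering method may differ): each occurrence of the ambiguous word gets a vector of its surrounding words, and occurrences whose vectors are similar enough are grouped into one induced sense.

```python
from collections import Counter
from math import sqrt

# Toy occurrences of the ambiguous word "bank" with their local contexts
# (the words surrounding each occurrence), standing in for a large corpus.
occurrences = [
    "river water bank shore fish".split(),
    "money bank loan account interest".split(),
    "bank account deposit money cash".split(),
    "fish swam near the river bank shore".split(),
]

def vector(context):
    return Counter(w for w in context if w != "bank")

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def induce_senses(contexts, threshold=0.2):
    """Greedy clustering of local context vectors: each cluster is one
    induced sense; an occurrence joins the most similar cluster centroid,
    or starts a new sense when nothing is similar enough."""
    clusters = []  # list of [centroid Counter, member indices]
    for i, ctx in enumerate(contexts):
        v = vector(ctx)
        best, best_sim = None, threshold
        for c in clusters:
            sim = cosine(v, c[0])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([Counter(v), [i]])
        else:
            best[0].update(v)
            best[1].append(i)
    return [members for _, members in clusters]

senses = induce_senses(occurrences)
```

On the toy data this separates the river-related occurrences from the finance-related ones into two induced senses.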
Weights Space Exploration Using Genetic Algorithms for Meta-classifier in Text Document Classification
2012
Aspects Concerning SVM Method’s Scalability
2008
In recent years the quantity of text documents has been increasing continually, and automatic document classification is an important challenge. In text document classification, the training step is essential for obtaining a good classifier. The quality of learning depends on the size of the training data. When working with huge training sets, the training time increases exponentially. In this paper we present a method that allows working with huge data sets in the training step without an exponential increase in training time and without a significant decrease in classification accuracy.
A Controllable Text Simplification System for the Italian Language
2021
Text simplification is a non-trivial task that aims at reducing the linguistic complexity of written texts. Researchers have studied the problem by proposing new methodologies for English, but other languages, such as Italian, remain almost unexplored. In this paper, we contribute to automated text simplification research by presenting a deep-learning-based system, inspired by a state-of-the-art system for English, capable of simplifying Italian texts. The system has been trained and tested on the Italian version of Newsela; it shows promising results, achieving a SARI score of 30.17.
Movie Script Similarity Using Multilayer Network Portrait Divergence
2020
This paper addresses the question of movie similarity through multilayer graph similarity measures. Recent work has shown how to construct multilayer networks from movie scripts and how these networks capture different aspects of the stories. Building on this modeling, we rely on the multilayer structure to compute different similarities, so that movies can be compared not by their visual content, summary, or actors, but by their own storyboards. We do so using “portrait divergence”, recently introduced to compute graph distances from summarizing graph characteristics. We illustrate our approach on the series of six Star Wars movies.
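As a rough illustration of comparing graphs via summarizing characteristics: the stdlib-only sketch below is a heavy simplification, not the paper's method; the actual portrait divergence compares full B-matrix portraits, whereas this toy only computes each graph's aggregate distribution of shortest-path distances (via BFS) and measures the Jensen-Shannon divergence between the two distributions.

```python
from collections import Counter, deque
from math import log2

def distance_distribution(adj):
    """Fraction of connected ordered node pairs at each shortest-path
    distance, computed by running BFS from every node."""
    counts = Counter()
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if d > 0:
                counts[d] += 1
    total = sum(counts.values())
    return {l: c / total for l, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    keys = set(p) | set(q)
    m = {k: 0.5 * (p.get(k, 0) + q.get(k, 0)) for k in keys}
    def kl(a, b):
        return sum(a.get(k, 0) * log2(a.get(k, 0) / b[k])
                   for k in keys if a.get(k, 0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two tiny example graphs as adjacency lists.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
```

A divergence of 0 means identical distance distributions; larger values mean structurally more different graphs under this (coarse) summary.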
On parsing optimality for dictionary-based text compression—the Zip case
2013
Dictionary-based compression schemes have been the most commonly used data compression schemes since they appeared in the foundational 1977 paper of Ziv and Lempel, and they are generally referred to as LZ77. Their work is the basis of Zip, gZip, 7-Zip, and many other compression utilities. Some of these schemes use variants of the greedy approach to parse the text into dictionary phrases; others have abandoned the greedy approach to improve the compression ratio. Recently, two bit-optimal parsing algorithms have been presented, filling the gap between theory and best practice. We present a survey on the parsing problem for dictionary-based text compression, identifying noticeable results …
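The greedy parsing that the abstract above contrasts with bit-optimal parsing can be sketched in a few lines. The following toy (function names and window size are illustrative, not taken from any Zip-family implementation) emits, at each position, the longest match found in the preceding window or a literal character, plus a decoder to verify the round trip; real tools add entropy coding and much faster match search on top of this parsing step.

```python
def greedy_lz77_parse(text, window=4096):
    """Greedy LZ77-style dictionary parsing: at each position, emit the
    longest match into the sliding window as an (offset, length) phrase,
    or a literal character when no match of length >= 2 exists."""
    i, phrases = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):  # naive quadratic match search
            k = 0
            while i + k < len(text) and text[j + k] == text[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 2:
            phrases.append(("match", best_off, best_len))
            i += best_len
        else:
            phrases.append(("literal", text[i]))
            i += 1
    return phrases

def decode(phrases):
    """Reconstruct the text; copying one character at a time makes
    overlapping matches (offset < length) work correctly."""
    out = []
    for p in phrases:
        if p[0] == "literal":
            out.append(p[1])
        else:
            _, off, length = p
            for _ in range(length):
                out.append(out[-off])
    return "".join(out)
```

Greedy parsing is locally optimal at each position but can miss shorter overall encodings, which is exactly the gap the bit-optimal parsing algorithms mentioned in the abstract address.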