Search results for "Informatica"

Showing 10 of 978 documents

Analysis and Comparison of Deep Learning Networks for Supporting Sentiment Mining in Text Corpora

2020

In this paper, we tackle the problem of irony and sarcasm detection for the Italian language, contributing to the enrichment of the sentiment analysis field. We analyze and compare five deep-learning systems. The results show that such systems are highly suitable for the task, achieving an F1-score of 93% in the best case. Furthermore, we briefly analyze the model architectures in order to choose the best compromise between performance and complexity.
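The 93% figure above is an F1-score; as a reminder of what that metric measures, here is a minimal, self-contained sketch (the toy labels are hypothetical, not data from the paper):

```python
def f1_score(y_true, y_pred, positive=1):
    """Compute the F1-score (harmonic mean of precision and recall)
    for a binary classification given true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example with hypothetical ironic (1) / non-ironic (0) labels:
print(f1_score([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))
```

F1 is preferred over plain accuracy here because irony is typically the minority class, and accuracy would reward a classifier that never predicts it.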

Keywords: deep learning; sentiment analysis; irony detection; sarcasm detection; natural language processing; text corpus; Settore INF/01 - Informatica; Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni
Published in: Proceedings of the 22nd International Conference on Information Integration and Web-based Applications & Services

Supporting Emotion Automatic Detection and Analysis over Real-Life Text Corpora via Deep Learning: Model, Methodology, and Framework

2021

This paper describes an approach for supporting automatic satire detection through an effective deep learning (DL) architecture that has been shown to be useful for addressing sarcasm/irony detection problems. We trained and tested the system on articles drawn from two important satirical blogs, Lercio and IlFattoQuotidiano, and from significant Italian newspapers.

Keywords: deep learning; satire detection; NLP; natural language processing; text corpus; Settore INF/01 - Informatica; Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni

A Controllable Text Simplification System for the Italian Language

2021

Text simplification is a non-trivial task that aims at reducing the linguistic complexity of written texts. Researchers have studied the problem by proposing new methodologies for the English language, but other languages, such as Italian, remain almost unexplored. In this paper, we contribute to the enhancement of Automated Text Simplification research by presenting a deep learning-based system, inspired by a state-of-the-art system for English, capable of simplifying Italian texts. The system has been trained and tested on the Italian version of Newsela; it has shown promising results, achieving a SARI value of 30.17.

Keywords: text simplification; deep learning; deep neural networks; Italian language; English language; natural language processing; Settore INF/01 - Informatica; Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni

Network Centralities and Node Ranking

2017

An important problem in network analysis is understanding how important nodes are for "propagating" information across the input network. To this end, many centrality measures have been proposed in the literature, and our main goal here is to provide an overview of the most important ones. In particular, we distinguish centrality measures based on walk computation from those based on shortest-path computation. We also provide some examples to clarify how these measures can be calculated, with special attention to Degree Centrality, Closeness Centrality, and Betweenness Centrality.
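Two of the measures named above can be illustrated concretely. Here is a minimal pure-Python sketch of degree and closeness centrality on an unweighted, connected graph (the adjacency list is a toy example; betweenness, which requires counting shortest paths through each node, is omitted for brevity):

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def degree_centrality(adj):
    """Fraction of the other n-1 nodes each node is adjacent to."""
    n = len(adj)
    return {u: len(nbrs) / (n - 1) for u, nbrs in adj.items()}

def closeness_centrality(adj):
    """(n-1) divided by the sum of distances to all other nodes."""
    n = len(adj)
    out = {}
    for u in adj:
        total = sum(bfs_distances(adj, u).values())
        out[u] = (n - 1) / total if total else 0.0
    return out

# Toy path graph a - b - c: b is the most central node by both measures.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(degree_centrality(adj))
print(closeness_centrality(adj))
```

Degree only inspects a node's immediate neighborhood, while closeness depends on a full shortest-path computation, which is why the two can disagree on larger graphs.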

Keywords: centrality measures; node centrality; node ranking; shortest paths; network topology; subgraph extraction; node neighborhood; biological networks; network analysis; theoretical computer science; Settore INF/01 - Informatica

On parsing optimality for dictionary-based text compression—the Zip case

2013

Dictionary-based compression schemes have been the most commonly used data compression schemes since they appeared in the foundational 1977 paper of Ziv and Lempel, and they are generally referred to as LZ77. Their work is the basis of Zip, gZip, 7-Zip, and many other compression software utilities. Some of these compression schemes use variants of the greedy approach to parse the text into dictionary phrases; others have abandoned the greedy approach to improve the compression ratio. Recently, two bit-optimal parsing algorithms have been presented, filling the gap between theory and best practice. We present a survey of the parsing problem for dictionary-based text compression, identifying noticeable results …
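To make the "greedy approach" concrete, here is a rough sketch of a greedy LZ77-style parser: at each position it emits the longest match found in a sliding window, which is simple but, as the survey discusses, not necessarily bit-optimal. This is a generic illustration, not code from the paper:

```python
def greedy_lz77_parse(text, window=1024):
    """Greedy LZ77-style parsing: at each position, emit the longest
    match found in the preceding window as an (offset, length) pair,
    or a literal character when no match of length >= 2 exists."""
    i, phrases = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            # Matches may overlap the current position (self-reference),
            # as allowed by LZ77.
            while (i + length < len(text)
                   and text[j + length] == text[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= 2:
            phrases.append((best_off, best_len))
            i += best_len
        else:
            phrases.append(text[i])
            i += 1
    return phrases

print(greedy_lz77_parse("abababab"))  # -> ['a', 'b', (2, 6)]
```

The self-referential `(2, 6)` phrase, longer than the distance it points back, is the classic LZ77 trick for compressing periodic runs.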

Keywords: dictionary-based text compression; parsing optimality; optimal parsing; lossless compression; data compression; LZ77 algorithm; Deflate algorithm; compression ratio; top-down parsing; bottom-up parsing; theoretical computer science
Published in: Journal of Discrete Algorithms

Dictionary-symbolwise flexible parsing

2012

Linear-time optimal parsing algorithms are rare in the dictionary-based branch of data compression theory. A recent result is the Flexible Parsing algorithm of Matias and Sahinalp (1999), which works when the dictionary is prefix-closed and the encoding of dictionary pointers has a constant cost. We present the Dictionary-Symbolwise Flexible Parsing algorithm, which is optimal for prefix-closed dictionaries and any symbolwise compressor under some natural hypotheses. In the case of LZ78-like algorithms with variable costs and any, as usual linear, symbolwise compressor, we show how to implement our parsing algorithm in linear time. In the case of LZ77-like dictionaries and any symbol…
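Optimal parsing problems of this kind are commonly modeled as a shortest path on a DAG whose nodes are text positions and whose edges are phrase occurrences weighted by their encoding cost. The following sketch illustrates that modeling under a simplifying assumption (a small static dictionary with fixed per-phrase bit costs); it is not the paper's algorithm:

```python
def optimal_parse(text, phrase_cost):
    """Cost-optimal parsing as a shortest path on the parsing DAG:
    node i is position i in the text; an edge i -> i + len(w) with
    weight phrase_cost[w] exists for every dictionary phrase w that
    occurs at position i. Scanning positions left to right visits
    the nodes in a topological order of this DAG."""
    n = len(text)
    INF = float("inf")
    cost = [0] + [INF] * n
    choice = [None] * (n + 1)
    for i in range(n):
        if cost[i] == INF:
            continue  # position i is unreachable with this dictionary
        for w, c in phrase_cost.items():
            if text.startswith(w, i) and cost[i] + c < cost[i + len(w)]:
                cost[i + len(w)] = cost[i] + c
                choice[i + len(w)] = w
    # Reconstruct the optimal phrase sequence by walking back from n.
    phrases, i = [], n
    while i > 0:
        w = choice[i]
        phrases.append(w)
        i -= len(w)
    return list(reversed(phrases)), cost[n]

# Hypothetical dictionary with variable bit costs per phrase:
costs = {"a": 3, "b": 3, "ab": 4, "aba": 6, "ba": 4}
print(optimal_parse("ababa", costs))  # -> (['ab', 'aba'], 10)
```

Note that the greedy choice at position 0 ("aba", cost 6) would lead to a total of 10 only by luck here; in general the DAG shortest path, not greediness, is what guarantees optimality under variable costs.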

Keywords: optimal parsing; dictionary-based compression; symbolwise text compression; text compression; lossless data compression; directed acyclic graph; stringology; time complexity; theoretical computer science; Settore INF/01 - Informatica
Published in: Journal of Discrete Algorithms

A Logical Key Hierarchy Based approach to preserve content privacy in Decentralized Online Social Networks

2020

Distributed Online Social Networks (DOSNs) have been proposed to shift control over user data from a single entity, the online social network provider, to the users of the DOSN themselves. In this paper, we focus on the problem of preserving the privacy of contents shared with large groups of users. In general, content privacy is enforced by encrypting the content so that only authorized parties are able to decrypt it. When efficiency has to be taken into account, new solutions have to be devised that: i) minimize the re-encryption of the contents published in a group when the composition of the group changes; and ii) enable a fast distribution of the cryptographic keys to all the m…
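Requirement ii) is what a Logical Key Hierarchy addresses: group keys are arranged in a balanced tree, so a membership change refreshes only the keys on one leaf-to-root path. The following back-of-the-envelope sketch (the function and the 2·depth bound are illustrative assumptions, not the paper's protocol) shows how the rekeying cost grows logarithmically rather than linearly in the group size:

```python
import math

def lkh_rekey_messages(n_members):
    """Rough upper bound on rekey messages when one member leaves a
    group managed with a balanced binary Logical Key Hierarchy:
    every key on the leaving member's leaf-to-root path is replaced,
    and each replacement is encrypted under at most two child keys
    that remain valid, so the cost is O(log n). With a single shared
    group key, the same event would cost n_members - 1 messages."""
    depth = math.ceil(math.log2(n_members))
    return 2 * depth

for n in (8, 1024, 1_000_000):
    print(n, lkh_rekey_messages(n))
```

For a million-member group this bound is 40 messages versus 999,999 with a flat group key, which is the efficiency argument behind tree-based key management.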

Keywords: decentralized online social networks; information privacy; data privacy; encryption; key management; group communication; peer-to-peer computing; Facebook; cyber security; Settore INF/01 - Informatica; Electrical and Electronic Engineering

The Burrows-Wheeler Transform between Data Compression and Combinatorics on Words

2013

The Burrows-Wheeler Transform (BWT) is a tool of fundamental importance in data compression and, recently, has found many applications well beyond its original purpose. The main goal of this paper is to highlight the mathematical and combinatorial properties on which the outstanding versatility of the BWT is based, i.e., its reversibility and the clustering effect on the output. Such properties have aroused curiosity and fervent interest in the scientific world, both for their theoretical aspects and for their practical effects. In particular, in this paper we are interested in surveying the theoretical research issues which, taking their cue from data compression, have been developed in the conte…
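The two properties highlighted above, reversibility and the clustering effect, can be seen directly in a textbook implementation of the transform. This is the standard sentinel-based construction, shown only as an illustration (real implementations avoid materializing the rotation table):

```python
def bwt(text, sentinel="$"):
    """Burrows-Wheeler Transform: last column of the lexicographically
    sorted rotations of text + sentinel."""
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last, sentinel="$"):
    """Invert the BWT by repeatedly prepending the last column and
    sorting, rebuilding the rotation table one column at a time."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(c + row for c, row in zip(last, table))
    return next(row for row in table if row.endswith(sentinel))[:-1]

transformed = bwt("banana")
print(transformed)  # -> "annb$aa": equal characters cluster together
assert inverse_bwt(transformed) == "banana"
```

The clustered runs in the output ("nn", "aa") are exactly what makes the BWT a good preprocessing step for run-length and move-to-front coding.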

Keywords: Burrows-Wheeler transform; clustering effect; combinatorial properties; combinatorics on words; BWT balancing; optimal partitioning; text compression; data compression; Settore INF/01 - Informatica

Correlation Analysis of Node and Edge Centrality Measures in Artificial Complex Networks

2021

The role of an actor in a social network is identified through a set of measures called centralities. Degree centrality, betweenness centrality, closeness centrality, and the clustering coefficient are the metrics most frequently used to compute node centrality. In some cases, their computational complexity makes computing them unfeasible, when not practically impossible. For this reason, we focused on two alternative measures, WERW-Kpath and Game of Thieves, which are at the same time highly descriptive and computationally affordable. Our experiments show that a strong correlation exists between WERW-Kpath and Game of Thieves and the classical centrality measures. This may suggest the po…

Keywords: centrality measures; complex networks; social network analysis; betweenness centrality; clustering coefficient; K-path; correlation coefficients; computational complexity; Settore INF/01 - Informatica

Game of Thieves and WERW-Kpath: Two Novel Measures of Node and Edge Centrality for Mafia Networks

2021

Real-world complex systems can be modeled as homogeneous or heterogeneous graphs composed of nodes connected by edges. The importance of nodes and edges is formally described by a set of measures called centralities, which are typically studied for graphs of small size. The proliferation of digital data collection has led to huge graphs with billions of nodes and edges. For this reason, we focus on two new algorithms, Game of Thieves and WERW-Kpath, which are computationally light alternatives to the canonical centrality measures such as degree, node and edge betweenness, closeness, and clustering. We explore the correlation among these measures using Spearman's correlation coefficient …
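Spearman's correlation coefficient, used above to compare centrality rankings, is simply the Pearson correlation of the two rank vectors. A minimal self-contained sketch (the centrality scores in the example are hypothetical):

```python
def rank(values):
    """Ranks starting at 1; tied values get the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Two hypothetical centrality scores over the same five nodes:
print(spearman_rho([0.9, 0.5, 0.3, 0.2, 0.1],
                   [0.8, 0.6, 0.4, 0.3, 0.2]))  # identical rankings -> 1.0
```

Because it depends only on ranks, Spearman's rho is the natural choice when two centrality measures live on different scales, as Game of Thieves and the classical measures do.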

Keywords: centrality; node and edge centrality; complex networks; Mafia networks; degree; betweenness; closeness; clustering; correlation; computational complexity; Settore INF/01 - Informatica