Search results for "computer.internet_protocol"
Showing 10 of 168 documents
X!TandemPipeline: a tool to manage sequence redundancy for protein inference and phosphosite identification
2017
X!TandemPipeline is a software tool designed to perform protein inference and to manage redundancy in the results of phosphosite identification by database search. It provides the minimal list of proteins or phosphosites present in a set of samples, using grouping algorithms based on the principle of parsimony. For proteins, a two-level classification is performed, where groups gather proteins sharing at least one peptide and subgroups gather proteins that are indistinguishable according to the identified peptides. For phosphosites, an innovative approach based on the concept of a phosphoisland is used to gather overlapping phosphopeptides. The graphical interface of X!Tandem…
Predictive Model Markup Language (PMML) Representation of Bayesian Networks: An Application in Manufacturing
2018
Bayesian networks (BNs) represent a promising approach for aggregating multiple uncertainty sources in manufacturing networks and other engineering systems for the purposes of uncertainty quantification, risk analysis, and quality control. A standardized representation for BN models will aid their communication and exchange across the web. This article presents an extension to the Predictive Model Markup Language (PMML) standard for the representation of a BN, which may consist of discrete variables, continuous variables, or a combination of both. The PMML standard is based on Extensible Markup Language (XML) and is used for the representation of analytical models…
Efficient Transport Protocol for Networked Haptics Applications
2008
The performance of haptic applications is highly sensitive to communication delays and data losses, which imposes several constraints on developing networked haptic applications. This paper describes a new Internet protocol called Efficient Transport Protocol (ETP), which aims at supporting distributed interactive applications. TCP and UDP are the transport protocols commonly used in networked communication, but they are not designed for real-time applications. The new protocol focuses on reducing round-trip time (RTT) and inter-packet gap (IPG). ETP is therefore optimized for interactive applications based on processes that continuously exchange data. ETP protocol i…
Data Sources Handling for Emergency Management: Supporting Information Availability and Accessibility for Emergency Responders
2017
Information is an essential component of effective emergency response. Although a great deal of information is available in various places during any kind of emergency, many emergency responders (ERs) use only a limited amount of it. The reason is that the available information is heterogeneously distributed and held in different formats, so ERs are unable to access the relevant information. Moreover, without access to the needed information, many emergency responders cannot obtain a sufficient understanding of the emergency situation. Consequently, a lot of time is spent searching for the needed information, and poor decisions may be made. Therefo…
Proteomics Standards Initiative: Fifteen Years of Progress and Future Work.
2017
Abstract: The Proteomics Standards Initiative (PSI) of the Human Proteome Organization (HUPO) has now been developing and promoting open community standards and software tools in the field of proteomics for 15 years. Under the guidance of the chair, co-chairs, and other leadership positions, the PSI working groups are tasked with the development and maintenance of community standards via special workshops and ongoing work. Among the existing, ratified standards, the PSI working groups continue to update PSI-MI XML, MITAB, mzML, mzIdentML, mzQuantML, mzTab, and the MIAPE (Minimum Information About a Proteomics Experiment) guidelines with the advance of new technologies and techniques. Furthe…
Semi-automated annotation of page-based documents within the Genre and Multimodality framework
2016
This paper describes ongoing work on a tool developed for annotating document images for their multimodal features and compiling this information into a corpus. The tool leverages open source computer vision and natural language processing libraries to describe the content and structure of multimodal documents and to generate multiple layers of XML annotation. The paper introduces the annotation schema, describes the document processing pipeline and concludes with a brief description of future work.
2021
Functional proprioceptive information is required to allow an individual to interact with the environment effectively in everyday activities such as locomotion and object manipulation. Specifically, research suggests that the application of compression garments could improve proprioceptive regulation of action by enhancing sensorimotor system noise in individuals of different ages and capacities. However, limited research has been conducted with samples of elderly people thus far. This study aimed to examine the acute effects of wearing knee-length socks (KLS) of various compression levels on ankle joint position sense in community-dwelling older adults. A total of 26 participants (12 male and 14…
Novel Version of PageRank, CheiRank and 2DRank for Wikipedia in Multilingual Network Using Social Impact
2020
Nowadays, information describing the navigation behaviour of Internet users is used in several fields: e-commerce, economics, sociology, and data science. Such information can be extracted from different knowledge bases, including business-oriented ones. In this paper, we propose a new model for the PageRank, CheiRank, and 2DRank algorithms based on the use of clickstream and pageview data in the Google matrix construction. We used data from Wikipedia and analysed links between over 20 million articles from 11 language editions. We extracted over 1.4 billion source-destination pairs of articles from SQL dumps and more than 700 million pairs from XML dumps. Additionally, we …
Data-Centric and Multimedia Components
2011
The content of XML documents is often primarily plain text, interspersed with various headers and perhaps some lists and tables. However, there are many applications for which the content of documents is not primarily narrative in nature, but instead includes (portions of) data records that are subject to storage and computational manipulation. The latter documents are sometimes referred to as data-centric or record-like, and they rely extensively on precise descriptions of the forms of data that can appear. In this chapter we first introduce the data type definition capabilities in XML Schema. We then consider the types of data very common in traditional databases: numeric data, dates, and…
Creating a semantically-enhanced cloud services environment through ontology evolution
2014
The availability of Web resources has grown enormously, to the point that whatever a user needs at a given moment can potentially be found on the Internet. These resources are no longer limited to data items; functionality delivered through some sort of service architectural model is also offered on the Internet. In the last few years, cloud computing has emerged as one of the most popular computing models for providing services over the Internet. However, as the number of available cloud services increases, the problem of service discovery and selection arises. Experience indicates that semantic technologies can provide the basis for enhanced and more precise search processes. In …