Search results for "sort"
Showing 10 of 487 documents
Sorting suffixes of a text via its Lyndon Factorization
2013
The process of sorting the suffixes of a text plays a fundamental role in Text Algorithms. It is used, for instance, in the construction of the Burrows-Wheeler transform and the suffix array, both widely used in several fields of Computer Science. For this reason, several recent studies have been devoted to finding new strategies to obtain effective methods for such sorting. In this paper we introduce a new methodology in which an important role is played by the Lyndon factorization, so that the local suffixes inside the factors detected by this factorization keep their mutual order when extended to the suffixes of the whole word. This property suggests a versatile technique that easily can b…
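The abstract above is truncated, but as generic background (a textbook construction, not this paper's method): the Lyndon factorization it builds on can be computed in linear time with Duval's algorithm, a minimal sketch of which is:

```python
def lyndon_factorization(s):
    """Duval's algorithm: split s into nonincreasing Lyndon words in O(n)."""
    n, i = len(s), 0
    factors = []
    while i < n:
        j, k = i + 1, i
        # Extend the current run of (possibly repeated) Lyndon words.
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # Emit each completed Lyndon word of length j - k.
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

For example, `lyndon_factorization("banana")` yields the factors `b`, `an`, `an`, `a`, which are lexicographically nonincreasing and concatenate back to the input.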
Right-jumps and pattern avoiding permutations
2015
We study the iteration of the process "a particle jumps to the right" in permutations. We prove that the set of permutations obtained in this model after a given number of iterations from the identity is a class of pattern avoiding permutations. We characterize the elements of the basis of this class and we enumerate these "forbidden minimal patterns" by giving their bivariate exponential generating function: we achieve this via a catalytic variable, the number of left-to-right maxima. We show that this generating function is a D-finite function satisfying a nice differential equation of order 2. We give some congruence properties for the coefficients of this generating function, and we sho…
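The catalytic statistic named in the abstract, the number of left-to-right maxima, is a standard permutation statistic (this sketch is generic, not taken from the paper):

```python
def left_to_right_maxima(perm):
    """Count positions whose entry exceeds every earlier entry of the permutation."""
    count = 0
    best = float("-inf")
    for v in perm:
        if v > best:       # a new running maximum is a left-to-right maximum
            count += 1
            best = v
    return count
```

For instance, in the permutation 3 1 2 5 4 the left-to-right maxima are 3 and 5, so the statistic equals 2; the identity permutation maximizes it at n.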
Why Should the Q-method be Integrated Into the Design Science Research? A Systematic Mapping Study
2019
The Q-method has been utilized over time in various areas, including information systems. In this study, we used a systematic mapping to illustrate how the Q-method has been applied within the Information Systems (IS) community and to propose the integration of the Q-method into the Design Science Research (DSR) process as a tool for future DSR-based IS studies. In this mapping study, we collected peer-reviewed journal articles from the Basket-of-Eight journals and the digital library of the Association for Information Systems (AIS). Then we grouped the publications according to the process of DSR and the different variables for preparing the Q-method from IS publications. We found that the potential of t…
Superlinear advantage for exact quantum algorithms
2012
A quantum algorithm is exact if, on any input, it outputs the correct answer with certainty (probability 1). A key question is: how big is the advantage of exact quantum algorithms over their classical counterparts, deterministic algorithms? For total Boolean functions in the query model, the biggest known gap was just a factor of 2: PARITY of N input bits requires N queries classically but can be computed with N/2 queries by an exact quantum algorithm. We present the first example of a Boolean function f(x_1, ..., x_N) for which exact quantum algorithms have a superlinear advantage over deterministic algorithms. Any deterministic algorithm that computes our function must use N qu…
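The factor-of-2 PARITY gap mentioned above is already visible at N = 2: Deutsch's algorithm (standard textbook material, not the construction of this paper) computes x1 XOR x2 exactly with a single oracle query, whereas any deterministic algorithm needs two. A small state-vector simulation:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def deutsch_parity(x):
    """Compute x[0] XOR x[1] exactly with one oracle query (Deutsch's algorithm)."""
    # Oracle U_f |i>|b> = |i>|b XOR f(i)> with f(i) = x[i], as a permutation matrix.
    U = np.zeros((4, 4))
    for i in range(2):
        for b in range(2):
            U[2 * i + (b ^ x[i]), 2 * i + b] = 1.0
    state = np.kron([1.0, 0.0], [0.0, 1.0])   # start in |0>|1>
    state = np.kron(H, H) @ state             # superpose; target set up for phase kickback
    state = U @ state                         # the single query
    state = np.kron(H, I2) @ state            # interfere on the query qubit
    p1 = state[2] ** 2 + state[3] ** 2        # probability the query qubit reads 1
    return int(round(p1))                     # 1 iff f is balanced, i.e. x[0] XOR x[1]
```

The measurement outcome is deterministic (probability 0 or 1), which is exactly the "exact quantum algorithm" notion the abstract uses.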
Quantum Computation With Devices Whose Contents Are Never Read
2010
In classical computation, a "write-only memory" (WOM) is little more than an oxymoron, and the addition of WOM to a (deterministic or probabilistic) classical computer brings no advantage. We prove that quantum computers that are augmented with WOM can solve problems that neither a classical computer with WOM nor a quantum computer without WOM can solve, when all other resource bounds are equal. We focus on realtime quantum finite automata, and examine the increase in their power effected by the addition of WOMs with different access modes and capacities. Some problems that are unsolvable by two-way probabilistic Turing machines using sublogarithmic amounts of read/write memory are shown to…
Large-scale compression of genomic sequence databases with the Burrows-Wheeler transform
2012
Motivation: The Burrows-Wheeler transform (BWT) is the foundation of many algorithms for compression and indexing of text data, but the cost of computing the BWT of very large string collections has prevented these techniques from being widely applied to the large sets of sequences often encountered as the outcome of DNA sequencing experiments. In previous work, we presented a novel algorithm that allows the BWT of human-genome-scale data to be computed on very moderate hardware, thus enabling us to investigate the BWT as a tool for the compression of such datasets. Results: We first used simulated reads to explore the relationship between the level of compression and the error rate, the leng…
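For readers unfamiliar with the transform itself (this is the textbook definition, not the paper's large-scale construction): the BWT is the last column of the sorted rotations of the input, and it is invertible. A naive quadratic sketch:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations (naive, O(n^2 log n))."""
    s += "$"  # sentinel, assumed lexicographically smallest and absent from s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def ibwt(last):
    """Invert the BWT by repeatedly prepending the last column and re-sorting (naive)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith("$"))
    return row[:-1]
```

For example, `bwt("banana")` gives `"annb$aa"`: the transform groups equal characters into runs, which is what makes it useful as a front end for compression. Production tools replace the sorted-rotations step with suffix-array construction to reach linear time.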
Comparison of spike parameters from optically identified GABAergic and glutamatergic neurons in sparse cortical cultures
2015
We are pleased to note that our publication “Comparison of spike parameters from optically identified GABAergic and glutamatergic neurons in sparse cortical cultures” by Weir et al. (2015) raised some discussion on the feasibility of solely electrophysiological discrimination of distinct neuronal subpopulations in vitro. We agree with Becchetti and Wanke (2015) that their report and our study on the same question were conducted with different technical approaches and that this may explain the observed differences between both studies. Although we obviously recorded a reduced spontaneous neuronal activity under our sparse culture conditions, these conditions were necessary to enable the uneq…
Canción que en celebridad del feliz cumpleaños de la Reyna Nuestra Señora
Sorted deduplication: How to process thousands of backup streams
2016
The requirements of deduplication systems have changed in the last years. Early deduplication systems had to process dozens to hundreds of backup streams at the same time while today they are able to process hundreds to thousands of them. Traditional approaches rely on stream-locality, which supports parallelism, but which easily leads to many non-contiguous disk accesses, as each stream competes with all other streams for the available resources. This paper presents a new exact deduplication approach designed for processing thousands of backup streams at the same time on the same fingerprint index. The underlying approach destroys the traditionally exploited temporal chunk locality and cre…
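The fingerprint-index idea the abstract refers to can be sketched generically; fixed-size chunking and SHA-1 fingerprints here are illustrative choices, not the paper's design, which uses a sorted index shared across thousands of streams:

```python
import hashlib

def deduplicate(streams, chunk_size=4):
    """Store each unique chunk once; per-stream recipes reference the shared store."""
    index = {}    # fingerprint -> position of the unique chunk in the store
    store = []    # unique chunks, in first-seen order
    recipes = []  # per-stream lists of store positions (enough to rebuild the stream)
    for data in streams:
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = hashlib.sha1(chunk).hexdigest()
            if fp not in index:          # chunk never seen in any stream: store it
                index[fp] = len(store)
                store.append(chunk)
            recipe.append(index[fp])     # duplicate chunks cost only a reference
        recipes.append(recipe)
    return store, recipes
```

With two streams `b"aaaabbbb"` and `b"bbbbcccc"`, the chunk `b"bbbb"` is stored once and referenced by both recipes, so only three unique chunks are kept. The contention problem the paper addresses arises when thousands of streams hit such an index concurrently.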
Study of aerobic granular sludge stability in a continuous-flow membrane bioreactor.
2015
A granular continuous-flow membrane bioreactor with a novel hydrodynamic configuration was developed to evaluate the stability of aerobic granular sludge (AGS). Under continuous-flow operation (Period I), AGS rapidly lost its structural integrity, resulting in loose and fluffy microbial aggregates in which filamentous bacteria were dominant. Intermittent feeding (Period II) made it possible to obtain the succession of feast and famine conditions that favored the increase in AGS stability. Although no further breakage occurred, the formation of new granules was very limited, owing to the absence of hydraulic selection pressure. These results highlighted the necessity to ensure, on the one hand the…