Search results for "Theoretical Computer Science"
Showing 10 of 1151 documents
Scalable Hierarchical Clustering: Twister Tries with a Posteriori Trie Elimination
2015
Exact methods for Agglomerative Hierarchical Clustering (AHC) with average linkage do not scale well when the number of items to be clustered is large. The best known algorithms have quadratic complexity; this is generally accepted and cannot be improved without exploiting the specifics of particular metric spaces. Twister tries is an algorithm that produces a dendrogram (i.e., the outcome of a hierarchical clustering) resembling the one produced by AHC, while needing only linear space and time. However, twister tries are sensitive to rare, but still possible, hash evaluations, which can have a disastrous effect on the final outcome. We propose the use of a metaheuristic algor…
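For contrast, a minimal Python sketch of the exact quadratic baseline the abstract refers to: average-linkage AHC via SciPy (the twister-tries algorithm itself is not reproduced; data and sizes are illustrative):

    # Exact average-linkage AHC: the quadratic baseline that twister
    # tries approximate in linear time and space.
    import numpy as np
    from scipy.cluster.hierarchy import linkage

    rng = np.random.default_rng(0)
    items = rng.normal(size=(100, 8))   # 100 items, 8 features (toy data)

    # 'average' selects UPGMA linkage; the cost grows quadratically in
    # the number of items, which is what limits scalability.
    Z = linkage(items, method="average", metric="euclidean")

    # Z encodes the dendrogram: each row merges two clusters and records
    # the average-linkage distance at which the merge occurs.
    print(Z[:5])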
Context–content systems of random variables : The Contextuality-by-Default theory
2016
This paper provides a systematic yet accessible presentation of the Contextuality-by-Default theory. The consideration is confined to finite systems of categorical random variables, which allows us to focus on the basics of the theory without using full-scale measure-theoretic language. Contextuality-by-Default is a theory of random variables identified by their contents and their contexts, so that two variables have a joint distribution if and only if they share a context. Intuitively, the content of a random variable is the entity the random variable measures or responds to, while the context is formed by the conditions under which these measurements or responses are obtained. A …
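As a small illustration of the bookkeeping the theory formalizes, the Python sketch below (a hypothetical 2x2 system, not taken from the paper) keys each random variable by its (content, context) pair; a joint distribution exists only within a context, and same-content marginals can be compared across contexts:

    # A finite context-content system: each context carries a joint pmf
    # over the contents measured in it; variables in different contexts
    # have no joint distribution. (Hypothetical 2x2 binary system.)

    # contexts -> (tuple of contents, joint pmf over +/-1 outcomes)
    system = {
        "c1": (("q1", "q2"), {(+1, +1): 0.4, (+1, -1): 0.1,
                              (-1, +1): 0.1, (-1, -1): 0.4}),
        "c2": (("q1", "q2"), {(+1, +1): 0.1, (+1, -1): 0.4,
                              (-1, +1): 0.4, (-1, -1): 0.1}),
    }

    def marginal(context, content, value):
        """P(R_content^context = value), defined within one context only."""
        contents, pmf = system[context]
        i = contents.index(content)
        return sum(p for outcome, p in pmf.items() if outcome[i] == value)

    # Consistent connectedness: same-content marginals agree across contexts.
    print(marginal("c1", "q1", +1), marginal("c2", "q1", +1))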
Can back-projection fully resolve polarity indeterminacy of independent component analysis in study of event-related potential?
2011
In the study of event-related potentials (ERPs) using independent component analysis (ICA), it is traditional to project the extracted ERP component back to the electrodes to correct its scaling (magnitude and polarity) indeterminacy. However, ICA tends to be only locally optimized in practice, and the back-projection of a component estimated by ICA may then fail to fully correct its polarity at every electrode. We demonstrate this phenomenon through theoretical analysis and numerical simulations, and suggest checking and correcting the abnormal polarity of the projected component in the electrode field before further analysis. Moreover, when several co…
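The back-projection step at issue can be sketched in a few lines of Python (toy mixing matrix and idealized unmixing; the locally-optimized failure mode the paper analyzes is not reproduced). With a globally consistent unmixing matrix, a polarity flip of the extracted component cancels exactly in the projection:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(32, 4))       # 32 electrodes, 4 sources (toy)
    S = rng.laplace(size=(4, 1000))    # source time courses
    X = A @ S                          # observed multichannel data

    W = np.linalg.pinv(A)              # an unmixing matrix (idealized)
    k = 2
    s_hat = W[k] @ X                   # component k, arbitrary sign/scale
    X_k = np.outer(np.linalg.pinv(W)[:, k], s_hat)   # back-projection

    Wf = W.copy(); Wf[k] *= -1         # polarity-flipped estimate of row k
    X_kf = np.outer(np.linalg.pinv(Wf)[:, k], Wf[k] @ X)
    print(np.allclose(X_k, X_kf))      # True: the flip cancels exactly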
Type-2 intuitionistic fuzzy TODIM for intelligent decision-making under uncertainty and hesitancy
2022
The classical TODIM uses crisp numbers to handle information. In real-world applications, however, this information is subject to noise and vagueness and is hence uncertain. There is a wide range of work in the literature that uses fuzzy sets to handle uncertainty along its various dimensions. However, hesitancy is also a constraint in such decision-making problems due to the involvement of multiple decision-makers. Moreover, the TODIM method takes the decision-maker's bounded rationality and psychological behavior into consideration, which adds to the hesitation and gives the problem a higher dimension of uncertainty. There are various applica…
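For reference, a compact Python sketch of the classical (crisp) TODIM that the paper extends: dominance degrees with a loss-attenuation factor theta, then normalized global values (decision matrix and weights are illustrative; the type-2 intuitionistic fuzzy extension is not reproduced):

    # Classical TODIM: prospect-theoretic dominance of alternative i over j.
    import numpy as np

    X = np.array([[0.7, 0.4, 0.9],     # decision matrix: 3 alternatives
                  [0.5, 0.8, 0.6],     # x 3 benefit criteria (normalized,
                  [0.9, 0.3, 0.5]])    # illustrative values)
    w = np.array([0.5, 0.3, 0.2])
    theta = 1.0                        # attenuation factor for losses

    wr = w / w.max()                   # relative weights
    W = wr.sum()

    def phi(i, j, c):
        d = X[i, c] - X[j, c]
        if d > 0:                      # gain: concave value function
            return np.sqrt(wr[c] * d / W)
        if d < 0:                      # loss: steeper, attenuated by theta
            return -np.sqrt(W * (-d) / wr[c]) / theta
        return 0.0

    m, n = X.shape
    delta = np.array([[sum(phi(i, j, c) for c in range(n))
                       for j in range(m)] for i in range(m)])
    score = delta.sum(axis=1)
    xi = (score - score.min()) / (score.max() - score.min())  # global values
    print(np.argsort(-xi))             # ranking, best first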
Designing a graphics processing unit accelerated petaflop capable lattice Boltzmann solver: Read aligned data layouts and asynchronous communication
2016
The lattice Boltzmann method is a well-established numerical approach for complex fluid flow simulations. Recently, general-purpose graphics processing units (GPUs) have become available as high-performance computing resources at large scale. We report on designing and implementing a lattice Boltzmann solver for multi-GPU systems that achieves 1.79 PFLOPS performance on 16,384 GPUs. To achieve this performance, we introduce a GPU-compatible version of the so-called bundle data layout and eliminate the halo sites in order to improve data access alignment. Furthermore, we overlap data transfer between the host central processing unit and the device GPU with com…
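The alignment concern generalizes beyond GPUs: storing each distribution function contiguously (structure-of-arrays) lets consecutive threads touch consecutive addresses. A small Python illustration of the generic SoA-vs-AoS contrast on a CPU (the paper's bundle layout and GPU coalescing are not reproduced):

    # Array-of-structures vs structure-of-arrays for a D2Q9 lattice:
    # SoA keeps each of the 9 populations contiguous, so a sweep over
    # lattice sites performs unit-stride (aligned) reads and writes.
    import numpy as np, time

    nx, ny, q = 1024, 1024, 9
    aos = np.zeros((nx * ny, q))   # site-major: one site's populations together
    soa = np.zeros((q, nx * ny))   # direction-major: one population contiguous

    t0 = time.perf_counter()
    for d in range(q):
        soa[d] += 1.0              # contiguous sweep per direction
    t1 = time.perf_counter()
    for d in range(q):
        aos[:, d] += 1.0           # strided sweep: stride q between elements
    t2 = time.perf_counter()
    print(f"SoA {t1 - t0:.4f}s  AoS {t2 - t1:.4f}s")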
Sparsity-aware multiple relay selection in large multi-hop decode-and-forward relay networks
2016
In this paper, we propose and investigate two novel techniques for multiple relay selection in large multi-hop decode-and-forward relay networks. The two proposed techniques exploit sparse signal recovery theory to select multiple relays using the orthogonal matching pursuit algorithm, and they outperform state-of-the-art techniques in terms of outage probability and computational complexity. To reduce the amount of collected channel state information (CSI), we propose a limited-feedback scheme where only a limited number of relays feed back their CSI. Furthermore, a detailed performance-complexity tradeoff investigation is conducted for the different studied techniques and verified by Monte …
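A minimal Python sketch of orthogonal matching pursuit, the greedy sparse-recovery routine the selection schemes build on (generic dictionary and measurement model, not the paper's relay-network formulation; sizes are illustrative):

    # OMP: greedily pick the k dictionary columns (here read as candidate
    # relays) that best explain the measurement vector y.
    import numpy as np

    rng = np.random.default_rng(2)
    n_meas, n_cand, k = 30, 100, 4
    Phi = rng.normal(size=(n_meas, n_cand))
    Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm columns

    x_true = np.zeros(n_cand)
    x_true[rng.choice(n_cand, k, replace=False)] = rng.normal(size=k)
    y = Phi @ x_true

    support, residual = [], y.copy()
    for _ in range(k):
        # pick the column most correlated with the current residual,
        # then re-fit on the whole support (the "orthogonal" step)
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef

    print(sorted(support), np.flatnonzero(x_true).tolist())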
Compression and load balancing for efficient sparse matrix-vector product on multicore processors and graphics processing units
2021
We contribute to the optimization of the sparse matrix-vector product by introducing a variant of the coordinate sparse matrix format that balances the workload distribution and compresses both the indexing arrays and the numerical information. Our approach is multi-platform, in the sense that the realizations for (general-purpose) multicore processors and for graphics accelerators (GPUs) are built upon common principles but differ in implementation details, which are adapted to avoid thread divergence in the GPU case or to maximize compression element-wise (i.e., for each matrix entry) on multicore architectures. Our evaluation on the last two generations of NVIDIA GPUs as well as In…
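The coordinate format and the nonzero-balanced partition idea in miniature, as a Python sketch (plain COO on a toy matrix; the paper's index and value compression is not reproduced):

    # SpMV in coordinate (COO) format with a workload-balanced partition:
    # nonzeros, not rows, are split evenly across workers, so irregular
    # row lengths cannot starve or overload any one thread.
    import numpy as np

    rows = np.array([0, 0, 1, 3, 3, 3, 3, 5])
    cols = np.array([1, 4, 2, 0, 1, 3, 5, 5])
    vals = np.array([2., 1., 3., 4., 1., 5., 2., 6.])
    x = np.arange(6, dtype=float)
    y = np.zeros(6)

    n_workers = 4
    chunks = np.array_split(np.arange(len(vals)), n_workers)  # equal nnz shares
    for chunk in chunks:               # each worker gets ~nnz/n_workers entries
        np.add.at(y, rows[chunk], vals[chunk] * x[cols[chunk]])

    print(y)                           # [ 6.  6.  0. 26.  0. 30.]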
On the Amount of Nonconstructivity in Learning Recursive Functions
2011
Nonconstructive proofs are a powerful mechanism in mathematics. Furthermore, nonconstructive computations by various types of machines and automata have been considered by, e.g., Karp and Lipton [17] and Freivalds [11]. They make it possible to regard more complicated algorithms from the viewpoint of much more primitive computational devices. The amount of nonconstructivity is a quantitative characterization of the distance between types of computational devices with respect to solving a specific problem. In the present paper, the amount of nonconstructivity in the learning of recursive functions is studied. Different learning types are compared with respect to the amount of nonconstructivity needed to lea…
Theory of Algorithms and Programs
1986
The collection is devoted to the study of various types of computational devices, such as alternating probabilistic machines, as well as to problems of inductive program synthesis.
Theory of Algorithms and Programs. Issue 3
1977
The articles in this collection are devoted mainly to the theory of inductive inference. Also considered are questions of program semantics (a formal apparatus for proving properties of programs) and of reducibility theory.