Search results for "data set"
Showing 10 of 154 documents
Advances in automated diffraction tomography
2009
Crystal structure solution by means of electron diffraction, or the investigation of special structural features, requires high-quality data acquisition followed by data processing that delivers cell parameters, the space group and, finally, a 3D data set. The final step is the structure analysis itself, including structure solution and subsequent refinement.
Towards automated diffraction tomography. Part II--Cell parameter determination.
2008
Automated diffraction tomography (ADT) allows the collection of three-dimensional (3d) diffraction data sets from crystals down to a size of only a few nanometres. Imaging is done in STEM mode, and diffraction data are collected with quasi-parallel beam nanoelectron diffraction (NED). Here, we present the processing steps developed for automatic unit-cell parameter determination from the collected 3d diffraction data. Cell parameter determination is done via extraction of peak positions from a recorded data set (the data reduction path), followed by cluster analysis of difference vectors. The procedure of lattice parameter determination is presented in detail f…
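The difference-vector idea described above can be illustrated with a toy sketch on entirely synthetic data (this is not the ADT pipeline, just the core geometric trick): peak positions on a known lattice are pairwise differenced, and the tightest cluster of short difference vectors recovers a cell vector.

```python
import numpy as np

# Toy sketch (synthetic peaks, not the ADT pipeline): recover the shortest
# cell vector from noisy 3D peak positions via difference vectors.
basis = np.array([[1.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [0.0, 0.0, 3.0]])          # assumed cell vectors

grid = np.arange(-2, 3)
hkl = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
rng = np.random.default_rng(0)
peaks = hkl @ basis + rng.normal(0, 0.01, (len(hkl), 3))  # noisy peaks

# Data reduction: all pairwise difference vectors
diffs = (peaks[:, None, :] - peaks[None, :, :]).reshape(-1, 3)
norms = np.linalg.norm(diffs, axis=1)

# Keep candidates near the shortest lattice spacing and fold +v/-v together
short = diffs[(norms > 0.5) & (norms < 1.5)]
short = short * np.sign(short[:, [0]])
a_est = short.mean(axis=0)                   # averaged cluster -> cell vector
print(np.round(np.linalg.norm(a_est), 2))    # close to |a| = 1.0
```

The real procedure clusters all difference vectors in 3D rather than filtering by a known spacing, but the averaging step is the same idea.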
Towards automated diffraction tomography: Part I—Data acquisition
2007
Abstract The ultimate aim of electron diffraction data collection for structure analysis is to sample the reciprocal space as accurately as possible to obtain a high-quality data set for crystal structure determination. Besides a more precise lattice parameter determination, fine sampling is expected to deliver superior data on reflection intensities, which is crucial for subsequent structure analysis. Traditionally, three-dimensional (3D) diffraction data are collected by manually tilting a crystal around a selected crystallographic axis and recording a set of diffraction patterns (a tilt series) at various crystallographic zones. In a second step, diffraction data from these zones are com…
Nonlinear PCA for Spatio-Temporal Analysis of Earth Observation Data
2020
Remote sensing observations, products, and simulations are fundamental sources of information to monitor our planet and its climate variability. Uncovering the main modes of spatial and temporal variability in Earth data is essential to analyze and understand the underlying physical dynamics and processes driving the Earth System. Dimensionality reduction methods can work with spatio-temporal data sets and decompose the information efficiently. Principal component analysis (PCA), also known as empirical orthogonal functions (EOFs) in geophysics, has been traditionally used to analyze climatic data. However, when nonlinear feature relations are present, PCA/EOF fails. In this article, we pro…
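For context, the linear PCA/EOF baseline that this article builds on can be sketched in a few lines via the SVD (illustrative synthetic data, not the article's nonlinear method):

```python
import numpy as np

# Linear PCA/EOF sketch: rows = time steps, columns = spatial grid cells.
rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
pattern = rng.normal(size=50)                 # one fixed spatial mode
X = np.outer(np.sin(t), pattern)              # its amplitude oscillates in time
X += rng.normal(0, 0.05, X.shape)             # observational noise

Xc = X - X.mean(axis=0)                       # remove the temporal mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()

eofs = Vt                                     # spatial modes (EOFs)
pcs = U * s                                   # principal-component time series
print(round(explained[0], 2))                 # leading mode dominates
```

When the temporal dynamics depend nonlinearly on the spatial pattern, no single linear mode captures them well, which is the failure mode the article's nonlinear PCA addresses.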
SELECTING HERB-RICH FOREST NETWORKS TO PROTECT DIFFERENT MEASURES OF BIODIVERSITY
2001
Data on vascular plants of herb-rich forests in Finland were used to compare the efficiency of reserve selection methods in representing three measures of biodiversity: species richness, phylogenetic diversity, and restricted-range diversity. Comparisons of reserve selection methods were carried out both with and without consideration of the existing reserve system. Our results showed that the success of a reserve network of forests in representing different measures of biodiversity depends on the selection procedure, selection criteria, and data set used. Ad hoc selection was the worst option. A scoring procedure was generally more efficient than maximum random selection. Heuristic methods…
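A typical complementarity-based heuristic of the kind compared in such studies can be sketched as follows (hypothetical sites and species, not the Finnish data): repeatedly pick the site that adds the most species not yet represented.

```python
# Greedy complementarity heuristic for reserve selection (generic sketch,
# not the paper's exact procedure). Hypothetical site -> species occurrences.
sites = {
    "A": {"s1", "s2", "s3"},
    "B": {"s3", "s4"},
    "C": {"s4", "s5", "s6"},
    "D": {"s1", "s6"},
}
target = set().union(*sites.values())

chosen, covered = [], set()
while covered != target:
    # Site adding the most unrepresented species (ties -> first in order)
    best = max(sites, key=lambda s: len(sites[s] - covered))
    chosen.append(best)
    covered |= sites[best]

print(chosen)  # → ['A', 'C']
```

Scoring procedures rank sites once by a fixed criterion (e.g. richness); the greedy heuristic above re-scores after every selection, which is why it tends to represent more biodiversity per site selected.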
The identifiability analysis for setting up measuring campaigns in integrated water quality modelling.
2012
Abstract Identifiability analysis enables the quantification of the number of model parameters that can be assessed by calibration against a given data set. The methodology is based on the appraisal of sensitivity coefficients of the model parameters by means of Monte Carlo runs. By employing the Fisher Information Matrix, it provides insight into the number of model parameters that can be reliably estimated. The paper presents a study where identifiability analysis is used as a tool for setting up measuring campaigns for integrated water quality modelling. In particular, by means of the identifiability analysis, the information about the location and …
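The Fisher-Information-Matrix step can be sketched on a deliberately over-parameterised toy model (our hypothetical example, not the paper's water-quality model): two parameters that enter only as a product are not separately identifiable, and the rank of the FIM reveals this.

```python
import numpy as np

# FIM-based identifiability sketch: a and b appear only as the product a*b,
# so only two of the three parameters are identifiable from this output.
def model(theta, t):
    a, b, c = theta
    return a * b * np.exp(-c * t)

t = np.linspace(0, 5, 50)
theta0 = np.array([1.0, 2.0, 0.5])
eps = 1e-6

# Local sensitivity matrix S[i, j] = d y_i / d theta_j (central differences)
S = np.empty((t.size, theta0.size))
for j in range(theta0.size):
    d = np.zeros_like(theta0)
    d[j] = eps
    S[:, j] = (model(theta0 + d, t) - model(theta0 - d, t)) / (2 * eps)

FIM = S.T @ S
eigvals = np.linalg.eigvalsh(FIM)
# Eigenvalues far below the largest flag unidentifiable parameter directions
n_identifiable = int((eigvals / eigvals.max() > 1e-8).sum())
print(n_identifiable)  # → 2
```

In the paper's setting the sensitivities come from Monte Carlo runs of the full model rather than analytic finite differences, but the rank argument on the FIM is the same.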
Search for relativistic magnetic monopoles with the ANTARES neutrino telescope
2012
Magnetic monopoles are predicted in various unified gauge models and could be produced at intermediate mass scales. Their detection in a neutrino telescope is facilitated by the large amount of light emitted compared to that from muons. This paper reports on a search for upgoing relativistic magnetic monopoles with the ANTARES neutrino telescope using a data set of 116 days of live time taken from December 2007 to December 2008. The one observed event is consistent with the expected atmospheric neutrino and muon background, leading to a 90% C.L. upper limit on the monopole flux between 1.3 × 10⁻¹⁷ and 8.9 × 10⁻¹⁷ cm⁻² s⁻¹ sr⁻¹ for monopoles with velocity β ≥ 0.625.
Deep Learning Based Cardiac MRI Segmentation: Do We Need Experts?
2021
Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, the process of having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with dif…
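Segmentation quality in studies like this is typically gauged with overlap metrics; a minimal sketch of the Dice coefficient between a prediction and an annotation (generic illustration, not the paper's exact evaluation protocol):

```python
import numpy as np

# Dice coefficient: 2|A ∩ B| / (|A| + |B|), the standard overlap score for
# comparing a predicted mask against a reference annotation.
def dice(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

expert = np.zeros((8, 8), dtype=int)          # hypothetical reference mask
expert[2:6, 2:6] = 1
model_out = np.zeros((8, 8), dtype=int)       # prediction missing one row
model_out[3:6, 2:6] = 1

print(round(dice(model_out, expert), 3))      # → 0.857
```

Comparing Dice scores of networks trained on expert versus non-expert annotations is one direct way to answer the question posed in the title.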
Mislabel Detection of Finnish Publication Ranks
2019
This paper analyzes a data set of Finnish ranks of academic publication channels with the Extreme Learning Machine (ELM). The purpose is to introduce and test a recently proposed ELM-based mislabel detection approach with a rich set of features characterizing a publication channel. We compare the architecture, accuracy, and, especially, the set of detected mislabels of the ELM-based approach to the corresponding results in the reference paper.
Sparsity-Driven Digital Terrain Model Extraction
2020
We introduce an automatic Digital Terrain Model (DTM) extraction method. The proposed sparsity-driven DTM extractor (SD-DTM) takes a high-resolution Digital Surface Model (DSM) as input and constructs a high-resolution DTM within a variational framework. To obtain an accurate DTM, an iterative approach is proposed for the minimization of the target variational cost function. The accuracy of the SD-DTM is demonstrated on a real-world DSM data set. We show the efficiency and effectiveness of the approach both visually and quantitatively via residual plots for illustrative terrain types.
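The flavour of such a variational minimization can be shown with a 1-D toy (our generic sketch, not the SD-DTM algorithm): a data-fidelity term keeps the estimate near the surface model while a sparsity-promoting gradient penalty attenuates sharp off-terrain structures, leaving smooth slopes intact.

```python
import numpy as np

# Generic 1-D variational sketch: minimize 0.5*||z - dsm||^2 + lam*||diff(z)||_1
# by subgradient descent. Sharp bumps (buildings) are attenuated; smooth
# terrain is preserved. Not the SD-DTM cost or solver.
dsm = np.zeros(100)
dsm[:50] = np.linspace(0.0, 1.0, 50)       # sloping ground
dsm[50:] = 1.0
dsm[40:45] += 5.0                          # a building on top of the terrain

z = dsm.copy()
lam, step = 2.0, 0.05
for _ in range(1000):
    grad_data = z - dsm                    # data-fidelity gradient
    dz = np.diff(z)
    grad_smooth = np.zeros_like(z)         # subgradient of the L1 penalty
    grad_smooth[:-1] -= np.sign(dz)
    grad_smooth[1:] += np.sign(dz)
    z -= step * (grad_data + lam * grad_smooth)

print(round(dsm[42] - z[42], 2))           # building attenuated
print(round(abs(z[10] - dsm[10]), 2))      # bare slope preserved
```

Real DTM extraction works in 2-D with more careful regularization and minimization, but the trade-off between fidelity and gradient sparsity is the same.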