RESEARCH PRODUCT

Improvement of multimodal images classification based on DSMT using visual saliency model fusion with SVM

Aissam Bekkari, Alamin Mansouri, Gaëtan Le Goïc, Driss Mammass, Hanan Anzid

subject

Support vector machine, SVM classifier, Fusion, Computer science, Pattern recognition, Artificial intelligence, Visual saliency model, Sensor fusion, Visual saliency

description

Multimodal images carry information that can be complementary or redundant; modeling and combining this information overcomes many of the problems attached to unimodal classification. Although such classification yields acceptable results, it still does not reach the level of the human visual perception model, which classifies an observed scene with ease thanks to the powerful mechanisms of the human brain.
To improve the classification task in the multimodal image domain, we propose a methodology based on the Dezert-Smarandache theory (DSmT) that fuses the combined spectral and dense SURF features extracted from each modality and pre-classified by an SVM classifier. We then integrate the visual perception model into the fusion process.
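A minimal sketch of the fusion step, assuming the per-modality SVM posterior probabilities are treated as Bayesian belief masses over the classes and combined with the PCR5 proportional conflict redistribution rule commonly used with DSmT. The function name and the two-source setup are illustrative, not the authors' implementation:

```python
import numpy as np

def pcr5_combine(m1, m2):
    """Combine two Bayesian basic belief assignments (masses on
    singleton classes) with the PCR5 rule: take the conjunctive
    consensus, then redistribute each pairwise conflict back to the
    two classes involved, proportionally to their masses."""
    m1 = np.asarray(m1, dtype=float)
    m2 = np.asarray(m2, dtype=float)
    n = len(m1)
    fused = m1 * m2  # conjunctive consensus on each singleton class
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # partial conflict between class a (source 1) and class b (source 2)
            conflict = m1[a] * m2[b]
            if conflict > 0:
                denom = m1[a] + m2[b]
                fused[a] += m1[a] ** 2 * m2[b] / denom
                fused[b] += m2[b] ** 2 * m1[a] / denom
    return fused

# e.g. SVM posteriors from two imaging modalities for a two-class problem
fused = pcr5_combine([0.8, 0.2], [0.6, 0.4])
```

Because every partial conflict is fully redistributed, the fused masses still sum to one, so the result can be thresholded or arg-maxed directly for the final class decision.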
To demonstrate the benefit of using salient features in a DSmT fusion process, the proposed methodology is tested and validated on large datasets extracted from acquisitions of cultural heritage wall paintings. Each set comprises four imaging modalities covering UV, IR, visible, and fluorescence imaging, and the results are promising.

https://dx.doi.org/10.5281/zenodo.2989224