Improvement of Multimodal Images Classification Based on DSMT Using Visual Saliency Model Fusion With SVM
Multimodal images carry complementary and redundant information; by modeling and combining this information, multimodal classification overcomes many of the problems that affect the unimodal classification task. Although this classification gives acceptable results, it still does not reach the level of the human visual perception model, which classifies observed scenes with ease thanks to the powerful mechanisms of the human brain.
To improve the classification task in the multimodal image domain, we propose a methodology based on the Dezert-Smarandache theory (DSmT) that fuses the combined spectral and dense SURF features extracted from each modality and pre-classified by an SVM classifier. We then integrate the visual perception model into the fusion process.
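To illustrate the fusion step described above, the following minimal Python sketch (not the authors' implementation; the class labels and mass values are hypothetical) combines per-modality belief masses, such as calibrated SVM class probabilities, over two exclusive classes using the PCR5 combination rule used in DSmT, which redistributes each partial conflict back to the classes that produced it, in proportion to their masses:

```python
def pcr5_combine(m1, m2):
    """Combine two basic belief assignments over exclusive singleton
    classes with the PCR5 rule (Proportional Conflict Redistribution).

    m1, m2: dicts mapping class label -> mass, each summing to 1.
    Returns the fused mass assignment (also summing to 1).
    """
    # Conjunctive (agreement) part: product of masses on the same class.
    fused = {c: m1[c] * m2[c] for c in m1}
    # Redistribute each partial conflict m1[i]*m2[j] (i != j) back to
    # classes i and j, proportionally to m1[i] and m2[j].
    for i in m1:
        for j in m2:
            if i == j:
                continue
            conflict = m1[i] * m2[j]
            if conflict == 0.0:
                continue
            denom = m1[i] + m2[j]
            fused[i] += m1[i] ** 2 * m2[j] / denom
            fused[j] += m1[i] * m2[j] ** 2 / denom
    return fused


# Hypothetical per-modality SVM outputs for one pixel/patch:
m_uv = {"pigment": 0.7, "background": 0.3}   # e.g. UV modality
m_ir = {"pigment": 0.6, "background": 0.4}   # e.g. IR modality
fused = pcr5_combine(m_uv, m_ir)
# fused["pigment"] ~ 0.718, fused["background"] ~ 0.282; masses sum to 1.
```

Unlike Dempster's rule, PCR5 avoids normalizing away the conflict between modalities, which is why DSmT-style rules behave better when the sources (here, the per-modality classifiers) strongly disagree.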
To demonstrate the efficiency of using salient features in a fusion process with DSmT, the proposed methodology is tested and validated on large datasets acquired from cultural heritage wall paintings. Each set comprises four imaging modalities, covering UV, IR, visible, and fluorescence, and the results are promising.
Copyright (c) 2019 INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY
This work is licensed under a Creative Commons Attribution 4.0 International License.
The author warrants that the article is original, written by stated author(s), has not been published before, contains no unlawful statements, does not infringe the rights of others, is subject to copyright that is vested exclusively in the author and free of any third party rights, and that any necessary written permissions to quote from other sources have been obtained by the author(s).