Evidence-based Knowledge Synthesis and Hypothesis Validation: Navigating Biomedical Knowledge Bases via Explainable AI and Agentic Systems
Published on: June 13, 2025
Ilija Šimić1,2, Eduardo Veas3,4, Vedran Sabol4
1Graz University of Technology, Graz, Austria. isimic@know-center.at.
This study introduces a new metric, the Consistency-Magnitude-Index, for validating feature attribution methods in explainable AI (XAI). The metric enables a more reliable assessment of how faithfully an explanation reflects the model's behavior, particularly for time series data.
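The paper itself defines the Consistency-Magnitude-Index, which is not reproduced here. As general background, faithfulness of a feature attribution is commonly probed with a perturbation (deletion) test: occlude the most highly attributed inputs and measure how much the model's output changes. The sketch below illustrates that standard idea only; the `faithfulness_deletion` function, its `baseline` parameter, and the toy model are illustrative assumptions, not the authors' method.

```python
import numpy as np

def faithfulness_deletion(model, x, attributions, k=5, baseline=0.0):
    """Generic perturbation-based faithfulness check (NOT the paper's
    Consistency-Magnitude-Index): replace the k most highly attributed
    time steps with a baseline value and return the output drop."""
    original = model(x)
    top_k = np.argsort(np.abs(attributions))[::-1][:k]  # indices of largest attributions
    perturbed = x.copy()
    perturbed[top_k] = baseline
    return original - model(perturbed)

# Toy model: the output is the sum of the series, so the series values
# themselves are a perfectly faithful attribution.
model = lambda x: float(np.sum(x))
x = np.array([0.1, 3.0, 0.2, 2.5, 0.05])
drop = faithfulness_deletion(model, x, attributions=x.copy(), k=2)
# Occluding the two largest contributors (3.0 and 2.5) drops the output by 5.5.
```

A faithful attribution method should produce a larger output drop than a random ranking of the same inputs; comparing against that random baseline is what turns the raw drop into an assessment.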
Area of Science:
Background:
Purpose of the Study:
Main Methods:
Main Results:
Conclusions: