Interpretability Using Reconstruction of Capsule Networks

Dominik Vranay, Mykhailo Ruzmetov, Peter Sinčák


Issue: 3/2024
Journal: Acta Electrotechnica et Informatica
DOI: 10.2478/aei-2024-0010

Keywords: Capsule Neural Networks, Model Explainability, Reconstruction Mechanism, Decoder Architectures, Explainable Artificial Intelligence


Abstract: This paper evaluates the effectiveness of different decoder architectures in enhancing the reconstruction quality of Capsule Neural Networks (CapsNets), which impacts model interpretability. We compared linear, convolutional, and residual decoders to assess their performance in improving CapsNet reconstructions. Our experiments revealed that the Conditional Variational Autoencoder Capsule Network (CVAECapOSR) achieved the best reconstruction quality on the CIFAR-10 dataset, while the residual decoder outperformed the others on the Brain Tumor MRI dataset. These findings show how improved decoder architectures yield higher-quality reconstructions, which in turn make the changes induced by perturbing output capsule dimensions easier to observe, rendering the feature extraction and classification processes within CapsNets more transparent and interpretable. Additionally, we evaluated the computational efficiency and scalability of each decoder, providing insights into their practical deployment in real-world applications such as medical diagnostics and autonomous driving.
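The capsule-perturbation probe the abstract alludes to can be sketched as follows. This is an illustrative toy example, not the paper's implementation: the decoder weights, capsule vector, and dimensionalities below are all hypothetical stand-ins for a trained CapsNet and its decoder.

```python
# Illustrative sketch of the standard CapsNet interpretability probe:
# shift one dimension of an output (class) capsule vector and decode
# each variant; differences between the reconstructions reveal what
# that dimension encodes. All weights/values here are random dummies.
import random

CAPS_DIM = 16        # dimensions per output capsule (as in the original CapsNet)
IMG_PIXELS = 28 * 28 # flat reconstruction size (hypothetical)

random.seed(0)
# Hypothetical trained linear-decoder weights: capsule vector -> image.
W = [[random.gauss(0, 0.01) for _ in range(CAPS_DIM)] for _ in range(IMG_PIXELS)]

def decode(capsule):
    """Linear decoder: reconstruct a flat image from one capsule vector."""
    return [sum(w * c for w, c in zip(row, capsule)) for row in W]

def perturb(capsule, dim, delta):
    """Shift a single capsule dimension by delta, leaving the rest fixed."""
    out = list(capsule)
    out[dim] += delta
    return out

# A dummy "winning" class capsule; in practice this comes from the CapsNet.
capsule = [random.gauss(0, 0.1) for _ in range(CAPS_DIM)]

# Sweep one dimension and reconstruct each variant.
sweeps = [decode(perturb(capsule, dim=3, delta=d)) for d in (-0.25, 0.0, 0.25)]
print(len(sweeps), len(sweeps[0]))
```

A higher-fidelity decoder (convolutional, residual, or the CVAE-based variant the paper compares) slots in wherever `decode` is called; the sharper its reconstructions, the more clearly each swept dimension's effect can be read off.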