Abstract:
Person identification from chest X-ray radiographs is an emerging approach at the intersection of healthcare and biometric security. In contrast to traditional biometric modalities such as facial recognition, fingerprints, and iris scans, interest in chest X-ray recognition has been driven by its strong recognition rates. Because chest X-ray images capture the intricate anatomical characteristics of an individual's rib cage, lungs, and heart, they remain usable for identification even when the body is severely damaged. Deep learning has become the dominant paradigm in this area, with promising results on classification and image-similarity tasks; however, training convolutional neural networks (CNNs) requires copious labelled data and is time-consuming. In this study, we draw on the NIH ChestX-ray14 dataset, comprising 112,120 frontal-view chest radiographs from 30,805 unique patients. Our methodology combines Siamese neural networks and the triplet loss with fine-tuned CNN models for feature extraction: the Siamese networks enable robust image-similarity comparison, while the triplet loss shapes the embedding space, reducing intra-class variation and increasing inter-class distances. Among the CNN backbones evaluated, the VGG-19 model performs best, achieving 97% accuracy with a precision of 95.3% and a recall of 98.4%. These results surpass the other CNN models used in our research as well as existing state-of-the-art models, establishing our approach as a strong baseline for person identification from chest X-ray images.
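For reference, the triplet loss mentioned above is conventionally defined over an anchor image, a positive sample (here, a radiograph of the same patient), and a negative sample (a radiograph of a different patient). The following is the standard formulation, not notation taken from this abstract; the embedding function $f$ and margin $\alpha$ are the usual conventions:

\[
\mathcal{L}(a, p, n) = \max\Bigl( \bigl\lVert f(a) - f(p) \bigr\rVert_2^2 - \bigl\lVert f(a) - f(n) \bigr\rVert_2^2 + \alpha,\; 0 \Bigr)
\]

Minimizing this loss pulls embeddings of radiographs from the same patient together while pushing embeddings of different patients at least a margin $\alpha$ apart, which corresponds to the intra-class and inter-class behaviour described above.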