Abstract:
The pursuit of fairness in machine learning models has become increasingly important across applications ranging from bank loan approval to face detection. Despite the widespread deployment of artificial intelligence algorithms, concerns persist about bias and discrimination in these models. This study introduces a novel approach, termed “The Fairness Stitch” (TFS), which enhances fairness in deep learning models by combining model stitching with joint training under fairness constraints. We evaluate the effectiveness of TFS through a comprehensive assessment on two established datasets, CelebA and UTKFace, systematically comparing it against the existing baseline method, fair deep feature reweighting (FDR). Our analysis demonstrates that TFS achieves a better trade-off between fairness and performance than FDR, significantly mitigating bias while maintaining predictive performance. These results underscore the potential of TFS to address bias-related challenges and promote equitable outcomes in machine learning models. This research challenges the conventional wisdom about the efficacy of fine-tuning only the last layer of a deep learning model for debiasing. The findings suggest that integrating fairness constraints into our proposed framework (TFS) leads to more effective bias mitigation and contributes to fairer AI systems.