Adversarial attacks and defences in Federated Learning

  1. Rodríguez Barroso, Nuria
Supervised by:
  1. Francisco Herrera Triguero, Director

University of defence: Universidad de Granada

Date of defence: 1 December 2023

Committee:
  1. Óscar Cordón García, Chair
  2. Rocío C. Romero Zaliz, Secretary
  3. María José del Jesús Díaz, Member
  4. Pietro Ducange, Member
  5. Senén Barro, Member

Type: Thesis

Abstract

Artificial Intelligence (AI) is currently revolutionising numerous facets of everyday life. Nevertheless, as its development progresses, the associated risks also grow. Although its full potential remains uncertain, there is growing apprehension about its deployment in sensitive domains such as education, culture, and medicine. One of the foremost challenges we now face is striking a balance between the prospective benefits and the attendant risks, so that precaution does not impede innovation. This calls for AI systems that are robust, secure, transparent, fair, respectful of privacy and autonomy, clearly traceable, and subject to fair accountability through auditing. In essence, it entails ensuring their ethical and responsible application, giving rise to the concept of trustworthy AI. In this context, Federated Learning (FL) emerges as a distributed learning paradigm that preserves the privacy of training data while still harnessing global knowledge. Although its primary objective is data privacy, it also brings cross-cutting benefits such as robustness and reduced communication cost. However, like any learning paradigm, FL is susceptible to adversarial attacks aimed at altering the model’s behaviour or inferring private information. The central focus of this thesis is the development of defence mechanisms against adversarial attacks that compromise the model’s behaviour, while concurrently promoting the other requirements of trustworthy AI.
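The threat the abstract describes, a malicious client altering the global model through poisoned updates, and one common class of defences, robust aggregation, can be illustrated with a minimal sketch. This is illustrative only: the function names and the coordinate-wise median rule are assumptions for the example, not the specific defences developed in the thesis.

```python
import statistics

def fedavg(client_updates):
    """Plain federated averaging: coordinate-wise mean of client updates."""
    n = len(client_updates)
    return [sum(u[i] for u in client_updates) / n
            for i in range(len(client_updates[0]))]

def median_aggregate(client_updates):
    """Coordinate-wise median: a simple robust aggregation that bounds
    the influence any single (possibly poisoned) update can exert."""
    return [statistics.median(u[i] for u in client_updates)
            for i in range(len(client_updates[0]))]

# Four honest clients agree on the update [1.0, 1.0];
# one malicious client submits an inflated update to poison the model.
honest = [[1.0, 1.0]] * 4
malicious = [[100.0, 100.0]]
updates = honest + malicious

print(fedavg(updates))            # mean is dragged towards the attacker: [20.8, 20.8]
print(median_aggregate(updates))  # median stays at the honest value: [1.0, 1.0]
```

The example shows why the server's aggregation rule matters: a mean-based aggregator is arbitrarily sensitive to a single outlier, while a median tolerates a minority of malicious clients at some cost in statistical efficiency.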