Explainable Artificial Intelligence (XAI)

Deep learning models have been the key ingredient behind the increased capabilities of current AI-based systems. A relevant limitation, however, is that these models operate mostly as black boxes, making it difficult to understand the reasons behind their predictions or decisions. Consequently, as AI-based systems start to operate in critical applications, there is an increasing need to provide them with the ability to explain their decisions, an emerging area known as explainable AI (XAI). In our group, we share an interest in developing XAI technologies. Furthermore, we foresee that the ability “to explain” requires the ability “to understand”, which in turn leads to model representations with enhanced generalization capabilities: models able to transfer knowledge across domains and to learn from few examples, two highly desirable abilities that remain limited in current AI technologies. Our group approaches XAI using natural language explanations as the main communication channel.
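
To give a rough, illustrative sense of what an explanation delivered as natural language might look like, the sketch below pairs a toy, fully transparent linear classifier with a routine that renders its per-feature contributions as a short sentence. All names here (ToyLinearClassifier, explain, the feature names) are hypothetical and for illustration only; they do not describe the group's actual methods, which target deep models rather than linear ones.

```python
# Illustrative sketch only: a toy interpretable classifier whose
# per-feature contributions are rendered as a natural language explanation.
# Class, method, and feature names are hypothetical.

from dataclasses import dataclass
from typing import Dict


@dataclass
class ToyLinearClassifier:
    """Binary classifier: score = sum(weights[f] * x[f]) + bias."""
    weights: Dict[str, float]
    bias: float = 0.0

    def predict(self, x: Dict[str, float]) -> int:
        score = self.bias + sum(self.weights[f] * x[f] for f in self.weights)
        return int(score > 0)

    def explain(self, x: Dict[str, float], top_k: int = 2) -> str:
        """Render the largest per-feature contributions as a sentence."""
        contributions = {f: self.weights[f] * x[f] for f in self.weights}
        label = "positive" if self.predict(x) == 1 else "negative"
        top = sorted(contributions.items(),
                     key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
        reasons = ", ".join(
            f"'{f}' pushed the score {'up' if c > 0 else 'down'} by {abs(c):.2f}"
            for f, c in top
        )
        return f"The prediction is {label} mainly because {reasons}."


if __name__ == "__main__":
    model = ToyLinearClassifier(
        weights={"fever": 1.5, "cough": 0.8, "age": -0.3}, bias=-1.0)
    patient = {"fever": 1.0, "cough": 1.0, "age": 2.0}
    print(model.predict(patient))   # prints 1 (positive)
    print(model.explain(patient))   # prints a one-sentence explanation
```

An XAI system for deep networks would replace the transparent linear scoring with post-hoc attribution methods or jointly trained explanation generators, but the interface, a prediction accompanied by a natural language justification, is the same idea this example is meant to convey.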