Local organizing committee
• Chloé Le Bail (LISN, Université Paris-Saclay)
• Vincent Mousseau (MICS, CentraleSupélec, Université Paris-Saclay)
• Wassila Ouerdane (MICS, CentraleSupélec, Université Paris-Saclay)
• Fabien Tarissan (CNRS, ENS Paris-Saclay)
• Anaëlle Wilczynski (MICS, CentraleSupélec, Université Paris-Saclay)
Fair and Explainable Collective Decision
Many real-life problems in our societies involve making a collective decision, where a group of individuals needs to aggregate their individual preferences over different alternatives in order to make a decision for the group. As concrete real-world examples, one can cite the choice of projects to fund in cities, known as participatory budgeting, a participatory democratic approach increasingly adopted in many countries around the world [Cabannes, 2004, Aziz and Shah, 2020], or the assignment of students to courses or universities (see, e.g., the Parcoursup platform for admission to higher education in France), which falls into the class of matching problems under preferences [Klaus et al., 2016], where agents need to be matched to elements (tasks, roommates, jobs, etc.) based on their preferences.
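To make the matching setting concrete, the following is a minimal sketch of the classical deferred-acceptance (Gale-Shapley) algorithm for one-to-one two-sided matching, a standard starting point for matching under preferences; the students, schools, and preference lists are purely illustrative.

```python
# Minimal sketch of deferred acceptance (Gale-Shapley) for one-to-one
# two-sided matching; all names and preferences are illustrative.

def deferred_acceptance(student_prefs, school_prefs):
    """Students propose, schools tentatively accept; the result is stable:
    no student and school prefer each other to their assigned partners."""
    # rank[s][x]: position of student x in school s's preference list
    rank = {s: {x: i for i, x in enumerate(p)} for s, p in school_prefs.items()}
    free = list(student_prefs)                   # students still unmatched
    next_choice = {x: 0 for x in student_prefs}  # next school to propose to
    match = {}                                   # school -> tentative student

    while free:
        student = free.pop()
        school = student_prefs[student][next_choice[student]]
        next_choice[student] += 1
        current = match.get(school)
        if current is None:
            match[school] = student              # school was free: accept
        elif rank[school][student] < rank[school][current]:
            match[school] = student              # school upgrades its match
            free.append(current)                 # displaced student is free again
        else:
            free.append(student)                 # proposal rejected

    return {student: school for school, student in match.items()}

students = {"ana": ["s1", "s2"], "bob": ["s1", "s2"]}
schools = {"s1": ["bob", "ana"], "s2": ["ana", "bob"]}
print(deferred_acceptance(students, schools))    # {'bob': 's1', 'ana': 's2'}
```

Stability of the computed matching is one classical fairness-related guarantee; which further guarantees are adequate, and how to explain them, is precisely the kind of question raised below.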
With the increasing use of algorithms and AI tools in systems governing our life choices (job recruitment, insurance coverage, university assignment), important decisions for the agents can be made by systems assisting collective decision making. To ensure confidence and participation in such systems, it is therefore crucial to guarantee that the algorithms used for computing collective decisions are fair to the agents. A fair decision should respect the expressed preferences of the agents and should not discriminate against any part of the population. In participatory budgeting, the decision has a collective implementation, in the sense that all agents are impacted by the decision in the same way: all citizens benefit from the same selected projects. Under this perspective, fairness requires that all agents’ preferences be adequately represented in the outcome, e.g., in a proportional way [Aziz et al., 2018, Peters et al., 2021, Los et al., 2022]. In contrast, in matching problems, the decision implies different treatments for different agents, since they may not be assigned to the same elements. In this setting, the question of fairness is consequently more oriented towards the agents’ individual satisfaction and their feeling of equal treatment.
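To illustrate what a proportionality-oriented rule can look like, here is a minimal sketch inspired by the Method of Equal Shares of Peters et al. [2021] for approval ballots; the selection order and the payment scheme are simplified with respect to the actual rule, and the voters, projects, and amounts are purely illustrative.

```python
# Simplified "equal shares"-style rule for participatory budgeting with
# approval ballots, inspired by Peters et al. [2021]; the real rule uses a
# more careful selection order and payment scheme. All data is illustrative.

def equal_shares_sketch(approvals, costs, total_budget):
    """approvals: voter -> set of approved projects; costs: project -> cost."""
    share = {v: total_budget / len(approvals) for v in approvals}  # equal endowments

    def price_per_supporter(p):
        supporters = [v for v in approvals if p in approvals[v]]
        return costs[p] / len(supporters) if supporters else float("inf")

    selected = []
    # Greedily consider projects that are cheap per supporter first.
    for p in sorted(costs, key=price_per_supporter):
        supporters = [v for v in approvals if p in approvals[v]]
        if not supporters or sum(share[v] for v in supporters) < costs[p]:
            continue                      # supporters cannot afford the project
        # Charge supporters as equally as possible: those with little money
        # left pay their whole share, the others split the rest equally.
        remaining = costs[p]
        supporters.sort(key=lambda v: share[v])
        for i, v in enumerate(supporters):
            pay = min(share[v], remaining / (len(supporters) - i))
            share[v] -= pay
            remaining -= pay
        selected.append(p)
    return selected

approvals = {"v1": {"park"}, "v2": {"park"}, "v3": {"library"}, "v4": {"park", "library"}}
costs = {"park": 300.0, "library": 200.0, "bridge": 900.0}
print(equal_shares_sketch(approvals, costs, total_budget=400.0))  # ['park']
```

The key design idea is that a project is funded only if its own supporters can pay for it from their equal endowments, which is what ties the selected bundle to a proportional representation of the expressed preferences.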
Fairness in collective decision making can be guaranteed by justifying that the final decision satisfies an appropriate fairness concept. This involves designing realistic and achievable fairness concepts, but also explaining that the final outcome is actually fair. Justifying that a given solution is fair is thus closely related to the ability to explain decisions. Explainable Artificial Intelligence (XAI) is a hot topic in AI [Barredo Arrieta et al., 2020], which has even become a political and legal concern (see, e.g., the 2018 Villani report on AI [Villani et al., 2018] and the European General Data Protection Regulation (GDPR) [Goodman and Flaxman, 2017]). The goal is to explain decisions made by algorithm-based systems, which may appear as black boxes, and to make them more transparent. Concerning collective decision making, researchers agree that the transparency, explainability, and justifiability of collective decision processes are the new challenges in social choice [Nunez, 2019, Procaccia, 2019].
In computational social choice [Brandt et al., 2016], there is a long tradition of axiomatic characterizations of preference aggregation methods [Arrow et al., 2002, 2011, Procaccia, 2019], which can be seen as rule-based explanations. One can indeed argue that a given solution is a good choice if it is the only one satisfying a given combination of desirable properties. To this end, we need to define axioms which capture the merits of a solution. It then remains to check whether the chosen combination of axioms is satisfiable at all; this can notably be done with computer-aided techniques for deriving impossibility theorems [Geist and Peters, 2017]. Such computer-aided methods can also be used to derive justifications of the outcomes of voting systems. As defined by Boixel and Endriss [2020], a valid justification for a given collective decision is the combination of a set of normative principles (axioms) and the explanation that the outcome actually follows from them. The initial studies in this line of research focused on the generation of justification algorithms for very specific voting scenarios [Cailloux and Endriss, 2016]. Greater generality was later achieved by generating justifications with more optimized AI tools such as constraint programming [Boixel and Endriss, 2020] or SAT solving [Boixel et al., 2022]. Important research questions thus remain open on the design of adequate notions of fairness and on their explanation. Moreover, this raises key legal issues that need to be addressed and have so far been largely under-investigated.
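As a toy instance of such a rule-based explanation, the sketch below justifies an election outcome by a single axiom, the Condorcet principle: the winner is explained by the fact that it beats every other alternative in pairwise majority contests. Actual justification engines [Boixel and Endriss, 2020, Boixel et al., 2022] handle far richer combinations of axioms; the profile here is purely illustrative.

```python
# Toy rule-based justification: explain a winner as the unique alternative
# satisfying the Condorcet principle on the given profile. Real justification
# engines handle much richer axiom sets; this profile is illustrative.

from itertools import combinations

def condorcet_justification(profile, alternatives):
    """profile: list of rankings (best first). Returns (winner, explanation)."""
    def prefers(ranking, a, b):
        return ranking.index(a) < ranking.index(b)

    beats = {a: set() for a in alternatives}
    for a, b in combinations(alternatives, 2):
        votes_a = sum(prefers(r, a, b) for r in profile)
        if votes_a > len(profile) - votes_a:
            beats[a].add(b)                      # strict majority prefers a to b
        elif votes_a < len(profile) - votes_a:
            beats[b].add(a)

    for a in alternatives:
        if beats[a] == set(alternatives) - {a}:  # a beats everyone else
            facts = [f"{a} beats {b} by pairwise majority" for b in sorted(beats[a])]
            return a, "; ".join(facts) + f"; hence {a} is the unique Condorcet winner."
    return None, "No Condorcet winner exists: this axiom alone justifies nothing here."

profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
winner, why = condorcet_justification(profile, ["a", "b", "c"])
print(winner, "-", why)
```

When no Condorcet winner exists, the function reports that the axiom is silent, which mirrors the general situation where a justification must combine several normative principles to single out an outcome.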
The goal of this workshop is to bring together researchers from different communities in order to share insights on these topics and to progress towards a better understanding of the requirement of fairness and of its explanation in collective decision making, especially on the two concrete topics of participatory budgeting and matching under preferences. The main research questions we can investigate in these two settings are the following:
- What are adequate definitions of fairness, and how can fair solutions be efficiently computed?
- What are the characteristics of a relevant explanation of fairness?
- How to design algorithmically efficient methods/approaches to compute explanations for fair decisions?
- How to evaluate the relevance of explanations, i.e., what would make these explanations acceptable/understandable/relevant for end-users (metrics, methodologies, etc.)?
- How to allow end-users to challenge/question the result of a fair collective decision and to obtain a line of reasoning that justifies/supports such a decision?
References
Haris Aziz and Nisarg Shah. Participatory budgeting: Models and approaches. In Pathways Between Social Science and Computational Social Science: Theories, Methods, and Interpretations, pages 215–236. Springer, 2020.
Haris Aziz, Barton E. Lee, and Nimrod Talmon. Proportionally representative participatory budgeting: Axioms and algorithms. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS-18), pages 23–31, 2018.
Alejandro Barredo Arrieta, Natalia Diaz-Rodriguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115, 2020.
Arthur Boixel and Ulle Endriss. Automated justification of collective decisions via constraint solving. In Proceedings of the 19th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-20), pages 168–176, 2020.
Arthur Boixel, Ulle Endriss, and Ronald de Haan. A calculus for computing structured justifications for election outcomes. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI-22), 2022.
Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016.
Yves Cabannes. Participatory budgeting: a significant contribution to participatory democracy. Environment and Urbanization, 16(1):27–46, 2004.
Olivier Cailloux and Ulle Endriss. Arguing about voting rules. In Proceedings of the 15th International Conference on Autonomous Agents and Multiagent Systems (AAMAS-16), pages 287–295, 2016.
Christian Geist and Dominik Peters. Computer-aided methods for social choice theory. In Trends in Computational Social Choice, pages 249–267. AI Access, 2017.
Bryce Goodman and Seth Flaxman. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3):50–57, 2017.
Maaike Los, Zoé Christoff, and Davide Grossi. Proportional budget allocations: Towards a systematization. In Proceedings of the 31st International Joint Conference on Artificial Intelligence (IJCAI-22), pages 398–404, 2022.
Matias Nunez. Towards transparent mechanisms. In Jean-François Laslier, Hervé Moulin, M. Remzi Sanver, and William S. Zwicker, editors, The Future of Economic Design, pages 341–346. Springer, 2019.
Dominik Peters, Grzegorz Pierczynski, and Piotr Skowron. Proportional participatory budgeting with additive utilities. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS-21), pages 12726–12737, 2021.
Ariel D. Procaccia. Axioms should explain solutions. In Jean-François Laslier, Hervé Moulin, M. Remzi Sanver, and William S. Zwicker, editors, The Future of Economic Design, pages 195–199. Springer, 2019.
Cédric Villani, Yann Bonnet, Charly Berthet, François Levin, Marc Schoenauer, Anne Charlotte Cornut, and Bertrand Rondepierre. Donner un sens à l’intelligence artificielle : pour une stratégie nationale et européenne. Conseil national du numérique, 2018.
Schedule
The workshop will take place over two weeks, from March 30th to April 10th, at Institut Pascal (Orsay, France).
The global schedule of the two weeks is composed of:
- a global introduction and conclusion for the workshop,
- a description of GDR RADIA and a GDR invited talk,
- two transverse invited talks,
- four slots for participants’ short presentations,
- a doctoral consortium,
- a social event,
- two social dinners,
- two working blocks dedicated to specific topics:
  - Week A: participatory budgeting,
  - Week B: matching under preferences.
Each working block is composed of 11 sessions of 1h30, structured as follows:
- one tutorial, given by an expert on the topic,
- a global discussion in order to identify interesting research questions and to form subgroups,
- two invited talks,
- five working sessions in subgroups, plus a last one dedicated to the writing of the report and the preparation of the subgroup’s presentation,
- one final session where each subgroup presents its work to the whole audience.
The global preliminary organization of the two weeks is presented below.
Global schedule of the two weeks
WEEK A      | Monday 30th March                                     | Tuesday 31st March                    | Wednesday 1st April                   | Thursday 2nd April      | Friday 3rd April
9.00-10.30  | Global Introduction                                   | Invited talk A.1                      | Invited talk A.2                      | Transverse invited talk | Reports per subgroup A
BREAK
11.00-12.30 | GDR RADIA presentation + Invited talk                 | Work in subgroups A.1                 | Work in subgroups A.3                 | Work in subgroups A.5   | Subgroup final presentations A
LUNCH
14.00-15.30 | Tutorial A                                            | Participants’ short presentations A.1 | Participants’ short presentations A.2 | Social event            | Doctoral consortium
BREAK
16.00-17.30 | Identification of research questions for subgroups A | Work in subgroups A.2                 | Work in subgroups A.4                 | Social event            | Doctoral consortium
Evening     | Social dinner
WEEK B      | Monday 6th April                                      | Tuesday 7th April                     | Wednesday 8th April                   | Thursday 9th April             | Friday 10th April
9.00-10.30  | Tutorial B                                            | Invited talk B.2                      | Transverse invited talk               | Reports per subgroup B         |
BREAK
11.00-12.30 | Identification of research questions for subgroups B | Work in subgroups B.2                 | Work in subgroups B.4                 | Subgroup final presentations B |
LUNCH
14.00-15.30 | Invited talk B.1                                      | Participants’ short presentations B.1 | Participants’ short presentations B.2 | Global conclusion              |
BREAK
16.00-17.30 | Work in subgroups B.1                                 | Work in subgroups B.3                 | Work in subgroups B.5                 |                                |
Evening     | Social dinner