This project aimed to tackle social injustice in future algorithmic decision-making applications by devising strategies to expose, counterbalance, and remedy the bias and exclusion built into algorithms, with attention to fairness, transparency, and accountability. It employed a design fiction approach: a toolkit, used in a collaborative workshop session with supporting materials, that lets stakeholders experiment with scenarios to expose potential bias and reflect on mitigation strategies early in the design process.
The Design Fiction Toolkit helps developers apply social justice principles throughout the machine learning development pipeline and signals to researchers where further work is needed. It responds to the needs of product managers, developers, and data scientists building ML applications who want to mitigate bias. The intention is to develop the Toolkit further and adapt it to two use cases that emerged during the research: in educational settings, as part of an ethics awareness activity, and in small digital teams within innovative start-ups interested in ethical design features.
MiniCoDe Workshops. Minimise Algorithmic Bias in Collaborative Decision Making with Design Fiction:
Co-Creation and Co-Design Methodologies to Address Social Justice and Ethics in Machine Learning
© Not-Equal.tech 2025