Empathy-driven User Awareness and Bias Mitigation Next-Generation AI
Rooted in Game Theory and Cultural Traits
(an excerpt from the Mozilla Fellowship Application)
True ethical AI requires humans to develop empathy, not just for others, but also for the potential consequences of their interactions with AI systems. This project explores how real-time feedback mechanisms can cultivate a sense of responsibility within human-AI interaction, ultimately leading to more ethical AI development.
Systems and players focused solely on achieving individually optimal outcomes can produce situations resembling a Nash equilibrium in game theory: each actor makes the best choice available given their understanding of the system, yet the combined effect is suboptimal for everyone involved. Drawing on game theory, the project will design real-time feedback loops that nudge users to critically examine their own biases and thought processes during interaction with AI (reflecting on which values and interests they are driving towards and which they may be compromising). This self-reflection, triggered by the feedback, encourages users to take greater responsibility for the data they contribute and the biases they may unknowingly introduce during AI development and use.
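The suboptimal-equilibrium dynamic invoked above can be illustrated with the classic Prisoner's Dilemma (the payoff values below are standard textbook numbers for illustration, not project data). A short sketch finds the game's Nash equilibrium by checking that neither player can gain by unilaterally deviating, and shows that this equilibrium is worse for both players than mutual cooperation:

```python
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
# Illustrative Prisoner's Dilemma values: mutual cooperation beats mutual defection,
# but defecting is each player's best response to any opponent move.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """True if neither player can improve by unilaterally changing their action."""
    r_pay, c_pay = payoffs[(row, col)]
    row_is_best = all(payoffs[(a, col)][0] <= r_pay for a in actions)
    col_is_best = all(payoffs[(row, a)][1] <= c_pay for a in actions)
    return row_is_best and col_is_best

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)  # the only equilibrium is mutual defection, (1, 1),
                   # even though both players would prefer (3, 3)
```

The point mirrors the project's framing: locally "optimal" choices by each actor (here, defection) lock the group into an outcome everyone would reject on reflection, which is exactly the gap the real-time feedback loops aim to surface.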
Using a user-centered design approach, the project will develop an explainable AI system that gives users clear insight into the AI's decision-making processes, taking conversational AI and recommendation systems as its test beds. The project will also be conducted among users in both the USA and India, enabling a comparative study of how culturally inherent traits such as agreeability could affect the system over the long term, so we can approach bias training with fresh eyes and bridge gaps between motivation and desired outcomes in East and West.
The Mozilla Fellowship network provides a unique opportunity to connect and collaborate with a diverse range of researchers, technologists, and activists working on trustworthy AI, bringing a richness of perspectives to this project. All code, data, design documents, and findings will be made available through open-source repositories and publications accessible to both the scientific community and the public.
By equipping users with the tools for self-reflection and highlighting potential biases, we can build a future where humans develop and deploy AI systems with a deeper sense of responsibility and a commitment to fostering ethical AI that benefits all.