
RACIAL JUSTICE FELLOWSHIP

OUR PROCESS

The Racial Justice Fellowship was established in collaboration with the Ada Lovelace Institute, with support from the AHRC, in response to the peripheral position of racial justice in data and AI ethics research. The Fellowship was designed to connect and fund individuals working to identify and fill this gap in ethical thinking.

An open invitation was made to anyone with relevant research or policy experience, or applicable creative skills, who wanted to explore the interlocking and contested issues of data, AI, Blackness, and racial justice through a research or creative project.

Applicants were asked to propose work that would develop anti-racist and decolonial perspectives on data and AI, address structural and systemic inequalities, and shape the field by investigating ethics in new ways while foregrounding overlooked scholars and perspectives. Applicants also submitted a CV evidencing their relevant experience and expertise; there were no minimum academic requirements.

The call for submissions attracted 98 project proposals from academic and policy researchers and creative practitioners. The Commissioning Board, a diverse group including leading BIPOC scholars and practitioners from academia, industry, and the arts, met to assess the quality of the applications and the potential impact of applicants' work over the course of a six-month fellowship. The selection process sought to balance an assessment of the proposed project against an assessment of the person applying, to reward potential and emerging talent (rather than previous recipients of funding or recognition), and to help redress the historical imbalances that have left people from racialised communities and backgrounds underrepresented in public-facing research, creative, and policy roles.

Four fellowships, each worth up to £10,000, were awarded to the programme's first cohort of five fellows (one fellowship was held jointly by the two members of Squirrel Nation). Fellows were selected against the commissioning criteria, which also considered how synergies between their projects would build capacity for interdisciplinary work addressing racial justice in the field of data and AI ethics.

The JUST AI team supported the fellows in designing and delivering their projects by structuring regular conversations with peers and advisors. The fellows also gained affiliation with the Ada Lovelace Institute for the duration of the programme, which provided additional opportunities for interdisciplinary research, policy, and practice.

Fellows showcased their projects and contributed to ongoing conversations around data and AI ethics with reflective essays and presentations during the JUST AI series. 

Lessons from this pilot project will contribute to a greater understanding of the challenges and approaches involved in funding and hosting fellowship programmes, both within the Ada Lovelace Institute and across the broader data and AI research and funding community as it continues to develop in the UK and globally.

OUR FELLOWS

Yasmine Boudiaf

creative technologist and researcher

Yasmine's project proposed a radical ethical data practice framework. The output was a continually evolving collaborative online space, a 'Living Document', created in collaboration with groups of people who have resisted traditionally extractive practices and to which they can continue to contribute.


Dr Irene Fubara-Manuel

researcher and creative practitioner

Irene's project proposed ethnographic and participatory action research to explore possibilities for decolonial and anti-racist alternatives to migration algorithms. The project aimed to shift the focus of the streaming tools used in UK visa applications away from automating a 'hostile environment' and towards reimagining a fair and welcoming UK for all migrants, irrespective of background.


Dr Erinma Ochu, neuroscientist, filmmaker, and curator

Caroline Ward, designer and researcher

collectively Squirrel Nation


Squirrel Nation proposed a creative project re-enacting the 1956 Dartmouth AI workshop. The project comprised a decolonising AI reading group and gatherings in which key texts and concepts were read aloud collectively, discussed, recorded, and creatively edited into a ceremonial multimedia piece.


Sarah Devi Chander

racial justice and digital rights policy


Sarah’s project synthesised the policy development strand of the Fellowship and delineated the most pervasive issues in UK data and AI policy in relation to racial justice.

RESOURCE LIST

VIDEOS

ESSAYS
