Responsible AI Practices
Considerations for developing a framework and toolkits to support the operationalisation of Responsible AI principles for Health
Reach uses simple technologies like WhatsApp and SMS to empower citizens on their health journey, support health workers in building their resilience and provide critical data insights to health administrators to drive health system change.
We see AI as a catalyst for transforming how we work and deliver services at scale while maximising impact and efficiency. Reach applies a systems-level theory of change that recognises that sustaining behaviour change requires ongoing measurement, personalisation and adjustment. Leveraging data insights, AI and machine learning, we optimise individual health journeys by delivering the right information and interventions at the right time, on the right topic, and by better understanding, predicting and addressing citizens’ ever-changing individual needs.
Our approach seeks to ensure a high degree of safety, effectiveness and appropriateness in any AI solution we develop or deploy. We aim to build trust through transparent, explainable and responsible use of AI. In recent months, our team has presented on how we’re embedding responsible AI practices in our maternal, sexual and reproductive health services at the Civic Tech Innovation Forum’s #AfricaFlows Conference and on the AI for Non-Profits panel at the Skoll World Forum.
Our AI governance model is built on ethical principles that focus on citizens’ rights, freedoms and well-being. We have incorporated recommendations from timely publications like the WHO’s Technical Brief on the role of artificial intelligence in sexual and reproductive health and rights, which outlines opportunities for applying AI in digital health while raising concerns about how to protect individuals’ privacy, prevent data breaches and mitigate risks that may endanger people’s rights and safety.
Responsible AI requires a mix of governance, best practices and technology to be implemented effectively. Our approach prioritises practical ways to operationalise these best practices, including trusting our team to “think and act ethically, operationalising values and principles on AI ethics and aiming for broad impact on AI ethics” (World Economic Forum) in our standard ways of working. There is no one-size-fits-all approach to Responsible AI, but some of the practical steps we’re taking to institute this in our organisation include:
Alignment of AI use cases with our Monitoring, Evaluation, Research and Learning (MERL) agenda
Our AI strategy aligns with our MERL Framework to ensure that AI use cases support the indicators and impact measures defined in our research agenda. This keeps the evaluation of AI use cases and investments focused on the distinct needs of low- and middle-income countries (LMICs) and on the areas where they can have the greatest impact on health (e.g. improving quality and access to services, reducing cost).
Establishing policies that address accountability, uncertainty and potential flaws
There is no perfect solution to AI accountability, but there should be mechanisms to ensure the responsible development and deployment of AI technologies. It is not enough to rely on terms of service or disclaimers about the limited accuracy or correctness of AI models without first being transparent about what has been done to ensure the accuracy, safety, privacy and fairness of AI.
Committing to representative and secure data use that addresses unfairness, bias and discrimination
This includes updated policies and controls for data provenance and informed consent on any intended data use, with provisions for local deployments of AI models and solutions. We have updated our data protection and recovery policies and mechanisms to cover AI workflows and data sources, with steps to address security vulnerabilities in the AI design process. We consider how well training data sets represent LMIC populations, how they may be affected by biases, and the adjustments needed to mitigate this. This has included adopting supporting technologies that allow us to demonstrate compliant design and use of algorithms, with transparency about how they have been evaluated and applied to address bias.
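As a simplified illustration only, and not a description of our production tooling, the sketch below shows the kind of subgroup check this implies: comparing a model’s accuracy across groups (here a hypothetical "region" field and an assumed 5% gap threshold) to surface representation and bias issues for review.

# Minimal sketch (illustrative): compare a model's accuracy per subgroup
# to flag possible representation or bias issues. Field names and the
# gap threshold are assumptions, not our production configuration.
from collections import defaultdict

def subgroup_report(records, group_field="region", max_gap=0.05):
    """records: iterable of dicts with 'label', 'prediction' and a group field."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        g = r[group_field]
        total[g] += 1
        correct[g] += int(r["label"] == r["prediction"])

    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return {
        "per_group_accuracy": accuracy,
        "accuracy_gap": gap,
        "needs_review": gap > max_gap,  # flag for human follow-up
    }

# Example with a small, fabricated evaluation set.
sample = [
    {"region": "urban", "label": 1, "prediction": 1},
    {"region": "urban", "label": 0, "prediction": 0},
    {"region": "rural", "label": 1, "prediction": 0},
    {"region": "rural", "label": 0, "prediction": 0},
]
print(subgroup_report(sample))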
Documentation that explains how AI models and algorithms work
To support model explainability and transparency, we aim to document the models we develop and deploy, ensuring we can explain how and why an AI model produces the results it does, and quantify uncertainty by expressing when a model is unsure. We also want to ensure transparency about data quality, training data provenance and fitness for intended use.
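To make this concrete, here is a minimal sketch, with assumed field names and a fictional model, of two of the practices described above: a lightweight “model card” record that documents intended use and training data, and an abstention rule that expresses uncertainty rather than masking it. It is illustrative only, not our actual documentation format.

# Minimal sketch (illustrative): a lightweight model card plus an
# abstain-when-unsure rule. Names, fields and the threshold are assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str          # description and provenance of training data
    known_limitations: list = field(default_factory=list)
    evaluation_summary: dict = field(default_factory=dict)

def answer_or_defer(label, confidence, threshold=0.7):
    """Return the model's answer only when it is sufficiently confident;
    otherwise defer, so uncertainty is expressed rather than hidden."""
    if confidence >= threshold:
        return {"answer": label, "confidence": confidence}
    return {"answer": None, "confidence": confidence, "deferred": True}

card = ModelCard(
    name="fictional-triage-classifier",
    version="0.1",
    intended_use="Routing maternal health questions to topic-specific content",
    training_data="De-identified, consented message logs (illustrative description)",
    known_limitations=["Limited coverage of low-resource languages"],
    evaluation_summary={"accuracy": 0.88, "accuracy_gap_across_regions": 0.04},
)
print(card)
print(answer_or_defer("antenatal_care", confidence=0.55))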
An AI Return on Investment (ROI) Framework
This covers the cost of access to and deployment of solutions at scale, to ensure the benefits of AI can be made affordable and accessible to citizens and to encourage local innovation. The cost of AI is about more than financial operating costs; it should also account for factors such as the balance between the benefits and risks of using predictive algorithms; operational costs versus model performance (does an increase in performance warrant an increase in operational costs?); the level of potential harm or interference with people’s rights when using AI; and a set of metrics for how we measure, evaluate and analyse the introduction of AI in our services.
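One of the trade-offs named above, operational cost versus model performance, can be made explicit with a simple calculation. The sketch below uses entirely assumed numbers and a hypothetical pair of model options; it is an illustration of the reasoning, not our actual ROI metrics.

# Minimal sketch (assumed numbers, illustrative only): does a performance
# gain justify a higher operational cost?
def incremental_cost_per_point(option_a, option_b):
    """option_*: dicts with 'cost_per_1k_requests' (USD) and 'accuracy' (0-1).
    Returns the extra cost per additional percentage point of accuracy."""
    extra_cost = option_b["cost_per_1k_requests"] - option_a["cost_per_1k_requests"]
    extra_points = (option_b["accuracy"] - option_a["accuracy"]) * 100
    if extra_points <= 0:
        return float("inf")  # no performance gain to pay for
    return extra_cost / extra_points

baseline = {"name": "smaller model", "cost_per_1k_requests": 0.40, "accuracy": 0.85}
candidate = {"name": "larger model", "cost_per_1k_requests": 2.00, "accuracy": 0.90}

# Extra USD per 1,000 requests for each additional accuracy point.
print(incremental_cost_per_point(baseline, candidate))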
Human in the Loop (HITL) model development and auditing
We aim to assess the robustness of AI models during development, to verify, validate and test their decision-making processes and results, and to evaluate their reliability and predictability. This requires models and algorithms to be explainable and transparent, and human intervention remains a key requirement for delivering safe and effective AI solutions.
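As a final illustration, the sketch below shows one common way a human-in-the-loop rule can be expressed: predictions that are low-confidence, or that touch sensitive topics, are escalated to a human reviewer rather than acted on automatically. The topic names and threshold are assumptions for the example, not our production configuration.

# Minimal sketch (illustrative only) of a human-in-the-loop routing rule:
# low-confidence or sensitive predictions go to a human reviewer.
SENSITIVE_TOPICS = {"medication_dosage", "emergency_symptoms"}

def route_prediction(prediction, confidence, topic, threshold=0.8):
    """Decide whether a model output can be acted on automatically
    or must be reviewed by a person first."""
    if topic in SENSITIVE_TOPICS or confidence < threshold:
        return {"action": "human_review", "prediction": prediction,
                "confidence": confidence, "topic": topic}
    return {"action": "auto_respond", "prediction": prediction,
            "confidence": confidence, "topic": topic}

# Example: an uncertain answer on a sensitive topic is escalated.
print(route_prediction("seek_clinic_visit", confidence=0.62, topic="emergency_symptoms"))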