The use of Artificial Intelligence (AI) in society and its role in shaping government policy is no longer a theoretical promise. On the contrary, almost every government now devotes a significant share of public spending to promoting and developing AI systems for deployment in society. Accordingly, the need for trustworthy AI has never been more important as it permeates the economic, social, technological, business, academic and political spheres. Governments are likely aware of the challenge of deploying and using AI that is human-centred and trustworthy in order to maximise its benefits whilst minimising risks. To this end, in 2019 the Organisation for Economic Co-operation and Development (OECD) identified five complementary values-based principles for the responsible stewardship of trustworthy AI. They are as follows: 

1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being;

2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards - for example, enabling human intervention where necessary - to ensure a fair and just society;

3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them; 

4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed; 

5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles. 

The hope is that, if all governments and companies are able to uphold these principles, AI systems can promote shared wellbeing and prosperity whilst protecting individual rights and democratic values. These principles are evaluated in the OECD's most recent report. 

The report also identifies what can be done going forward, and concludes that, whilst many AI systems and policies currently exist, there is no database which governments or other actors can access in order to assess their methods of manufacturing and implementation on a relative scale or against the OECD principles of trustworthiness. As a result, in June 2020, the OECD set up a working group to survey a wide range of stakeholders in order to determine a framework which would become the model for a database that parties could consult when assessing their AI strategy. The working group agreed to focus its strategy on ‘tools’, which are understood to be instruments and structured methods which can be leveraged by others to facilitate the implementation of AI principles. In particular, the focus was on three types of tools: technical, procedural and educational.  

Technical tools seek to address AI-related issues from a technical angle such as those arising from ‘bias detection, transparency and the explainability of AI systems, performance, robustness, safety and security against adversarial attacks’. Large technology companies such as IBM and Microsoft were the key contributors to this portion of the survey. Procedural tools relate to operational and process-driven issues. They broadly look at policy, product development, lifecycle issues, risk management tools, sector-specific codes of conduct and collective agreements. By their nature, therefore, they relate to a wider range of parties than technical tools, and the working group noted that respondents to this part of the survey came from a broad range of stakeholders, including governments. Finally, educational tools encompass mechanisms to build awareness, inform, educate and prepare stakeholders who may be involved in or affected by the implementation of an AI system. Such tools include ‘change management processes, capacity and awareness building tools, guidance for inclusive AI system design, training programmes and educational materials’.  

In asking stakeholders about their technical, procedural and/or educational tools and developing a database created from their answers, the OECD will be able to ensure others have a reference point for their own policies and strategies. The hope is that, by increasing transparency, parties will feel more inclined to engage in building and implementing trustworthy AI. Alongside a trend towards greater global cooperation on AI and wider information sharing, the database will be a welcome addition to an ever-growing dialogue, and it is all the more necessary as the range of stakeholders involved in these issues continues to expand.