In the rapidly evolving world of Artificial Intelligence (AI), society must be able to reap the technology's benefits without being exposed to its harms. As AI is integrated into sectors such as healthcare, finance, and transportation, ensuring these technologies are developed and used responsibly becomes increasingly important. While this need is generally recognised, there is currently a lack of globally representative data on how countries are addressing AI’s challenges and opportunities, especially in relation to the protection and promotion of human rights.
Recognising this need, a global effort called the Global Index on Responsible AI (GIRAI) was initiated in 2023 as a flagship project of the Global Center on AI Governance. GIRAI is the first tool to set globally relevant benchmarks for responsible AI and assess them in countries around the world, and this study constitutes the largest global data collection on responsible AI to date. In its first edition, the Global Index on Responsible AI presents primary data collected by researchers in 138 countries between November 1, 2021, and November 1, 2023.
By setting benchmarks and standards for responsible AI practices, this long-term initiative aims to drive progress towards a more inclusive, accountable, and sustainable future for AI. The comprehensive index, covering 19 thematic areas, will work as a multidimensional tool for policymakers, researchers, journalists, and stakeholders to understand and improve AI governance. Furthermore, the methodology developed for the Global Index is meant to reflect the realities of countries globally, especially regarding socio-economic rights, resources, and diverse circumstances that exist in each country, while examining the progress toward benchmarks that are universally applicable.
What is Responsible AI?
The concept of “responsible AI,” grounded in AI ethics and focused on the governance of AI systems, is crucial for achieving equitable and peaceful human futures. While the importance of responsible AI is recognised, global guidelines are inconsistent and mainly influenced by players in Europe and North America. A notable exception is the UNESCO Recommendation on the Ethics of AI, adopted by 193 member states in 2021, establishing global AI ethics principles. To move from principles to practice in responsible AI, we need to know what efforts countries are making and to track and measure progress.
However, there is a lack of comprehensive data on countries’ efforts to address AI challenges and human rights impacts. The Global Index on Responsible AI addresses this gap by providing measurable, human rights-based benchmarks and assessing the performance of 138 countries against them. Six further annual editions are planned, tracking the relationship between responsible AI and the Sustainable Development Goals (SDG) targets through 2030.
How is it measured?
Responsible AI is a systemic challenge that requires an ecosystem of active engagement and continuous dialogue among diverse stakeholders, including government officials, the private sector, and non-state actors such as academics and students. It cannot be achieved merely by publishing frameworks or building well-designed AI products. The GIRAI therefore uses a multifaceted approach to measure the performance of the responsible AI ecosystem in each country across the 19 thematic areas and three dimensions displayed below.
It evaluates government leadership in establishing frameworks and protecting human rights, as well as the role of non-state actors. GIRAI collects primary data across three dimensions: Government Frameworks, Government Actions, and Non-State Actors. To contextualise these measurements, specific coefficients, derived from World Bank and Freedom House data on rule of law, regulatory quality, government effectiveness, control of corruption, and freedoms of expression and association, are applied to adjust the primary data. This grounds the findings in each country's governance context and provides a more accurate reflection of the effectiveness of each dimension at the national level.
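To make the adjustment step concrete, the sketch below shows how contextual coefficients could scale raw dimension scores. The function name, the coefficient values, and the simple multiplicative scaling are illustrative assumptions for this post; they are not GIRAI's published formula.

```python
# Hypothetical illustration of applying contextual coefficients to raw
# dimension scores. Values and the multiplicative rule are assumptions,
# not the actual GIRAI methodology.

def adjust_scores(raw_scores, coefficients):
    """Scale each raw score (0-100) by a contextual coefficient (0-1)."""
    return {dim: raw_scores[dim] * coefficients.get(dim, 1.0)
            for dim in raw_scores}

# Illustrative raw primary-data scores for one country.
raw = {
    "Government Frameworks": 80.0,
    "Government Actions": 60.0,
    "Non-State Actors": 70.0,
}

# Illustrative coefficients, e.g. derived from governance indicators
# such as rule of law or regulatory quality.
coef = {
    "Government Frameworks": 0.90,
    "Government Actions": 0.85,
    "Non-State Actors": 1.00,
}

adjusted = adjust_scores(raw, coef)
```

Under these made-up numbers, a strong framework score is discounted where governance indicators suggest weaker implementation capacity, which is the intuition behind contextualising the primary data.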
Any index is limited by data availability, accessibility, and cultural and political biases. This study required balancing inclusivity and specificity to accommodate diverse country realities and enable fair comparisons, ensuring the measurability of a new framework for responsible AI. However, as the data in the index covers responsible AI activities from November 1, 2021, to November 1, 2023, it does not reflect developments after this period, which will be included in the next edition.
As one of the two Asian hubs, LIRNEasia supported this effort by supervising country researchers throughout the data collection process and overseeing their contributions through a specialised supervisor module. In addition to providing guidance and support to researchers in 15 countries, we were responsible for ensuring the integrity and quality of the data collected. This was achieved through thorough evaluation of the questionnaires completed by country researchers, which were approved or rejected against a comprehensive set of guidelines for regional team leaders.
Once the initial approval was granted, the questionnaires were made available to reviewers, and LIRNEasia worked with country researchers to address any feedback or comments, thereby ensuring that the final submissions met the highest standards required for this global index.
To gain further insights and explore country-specific data, check out the Global Index on Responsible AI here: https://global-index.ai. The report will include rankings showcasing how well the 138 assessed countries have performed in developing, implementing, and maintaining national responsible AI frameworks.
The interactive clickable map below showcases detailed information about the researchers from the 15 countries overseen by LIRNEasia as the regional team leader in Asia.