The Dialogue’s Research Paves the Way for Trustworthy AI Adoption

New Delhi – 08 Feb 2024 – Mr Abhishek Singh, President & CEO, National e-Governance Division (NeGD), Ministry of Electronics and Information Technology (MeitY), called for public-private collaboration to mitigate user harm and develop implementable AI frameworks at the launch of The Dialogue’s research report on Trustworthy AI. The launch event, held on February 08, 2024, in New Delhi, saw participation from a diverse group including government officials, industry experts, civil society, law firms, and academicians.

The research report, ‘Towards Trustworthy AI: Sectoral Guidelines for Responsible Adoption’, explores the imperative need for a paradigm shift towards comprehensive and trustworthy AI systems, with a focus on the financial and healthcare sectors. It is authored by Rama Vedashree, former CEO of DSCI; Jameela Sahiba, Senior Programme Manager; Kamesh Shekar, Senior Programme Manager; and Bhoomika Agarwal, Senior Research Associate, The Dialogue.

During his keynote address, Mr Abhishek Singh explained, “There are numerous applications of AI, but there is also a growing concern about potential misuse of technology. Globally, countries and individuals are apprehensive about the risks associated with AI. The focus on promoting safe, secure, and trustworthy AI is prevalent worldwide. When we talk about trustworthy and responsible AI, we are primarily addressing the practical implications and what it means for those utilising AI technologies. The lens through which India views this issue is primarily centred on mitigating user harm. This involves developing AI-based solutions that serve the broader social good and benefit various segments of society without causing harm to users… We need to consider how the government can collaborate with the private sector and the non-profit sector in India to develop implementable frameworks. Laws and regulations should not be enacted merely for the sake of it; instead, we must establish adequate regulatory capacity within the government to ensure effective regulation. Our objective should not be to stifle innovation but rather to facilitate innovation in sectors where such solutions can be beneficial. It’s crucial to ensure that users are informed about what they are seeing and using. This approach can form the foundational framework for building safe and trustworthy AI.”

The research paper unveils a roadmap for instilling trustworthiness in AI systems through strategies that can be operationalised. It aims to fulfil two objectives:

  • Identifying universal principles for trustworthy AI: Through a comprehensive review of global AI policy frameworks, the paper identifies nine fundamental principles essential for building trustworthy AI systems. These principles include transparency, accountability, fairness, robustness, human autonomy, privacy, sustainability, governance, and contestability.
  • Tailoring ethics for high-impact sectors: Acknowledging the unique challenges of specific domains, the paper develops actionable strategies for responsible AI adoption in healthcare and finance. These strategies include explainable AI techniques, bias detection algorithms, user education, and ethical design principles.

Ms Rama Vedashree, co-author of the report, said, “Recently, the Prime Minister’s economic advisory council introduced the CAS framework for AI. This indicates a growing awareness among Indian regulators regarding the principles of AI. It emphasises the significance of providing guidance and toolkits not only for developers crafting these solutions but also for the deployers – the large user industry that aims to harness cutting-edge AI for business productivity, delivering hyper-personalized services to their consumers.”

Building trustworthy AI across diverse sectors requires a multi-pronged approach. This research establishes nine core principles, such as transparency and fairness, as the bedrock of ethical AI development, applicable throughout the entire design and implementation process. Additionally, it provides sector-specific frameworks for healthcare and finance, addressing their unique ethical and regulatory landscapes with practical guidance.

Kazim Rizvi, Founding Director of The Dialogue, said, “In the era of foundation models becoming integral to our lives, permeating various aspects of our daily routines, it’s crucial to recognise the need to mitigate potential harm. Similar to any dual-purpose technology, AI presents its own set of risks. However, the key lies in optimising AI’s potential, leveraging its benefits for economic growth while concurrently minimising harm and preventing abuse. This approach aims to maximise the positive impact of AI while mitigating its negative consequences.”

Ultimately, responsible AI integration hinges on collaboration between researchers, policymakers, industry leaders, and the public, ensuring trustworthy technology benefits everyone.

This research has the potential to significantly shape how AI is developed and implemented. By offering operationalisation strategies and practical recommendations, it empowers stakeholders across various sectors to build and deploy AI that is both beneficial and trustworthy.

Access the research report here.
