A software toolkit to help financial institutions evaluate the “responsible” use of artificial intelligence (AI) has been updated to cover more areas.
First launched in February last year, the Evaluation Toolkit focuses on four key principles of fairness, ethics, accountability and transparency – collectively referred to as FEAT. It provides a checklist and methodology for businesses in the financial sector to define the objectives of their AI and data analytics use and to identify potential bias.
The toolkit was developed by a consortium led by the Monetary Authority of Singapore (MAS), comprising 31 industry players including Bank of China, BNY Mellon, Google Cloud, Microsoft, Goldman Sachs, Visa, OCBC Bank, Amazon Web Services, IBM, and Citibank.
The first release of the toolkit focused on the assessment methodology for the “fairness” component of the FEAT principles, including the automated evaluation and visualization of fairness metrics.
The MAS said the second iteration has been updated to include review methods for the other three principles, as well as an improved “fairness” evaluation method. The toolkit was tested by several banks in the consortium.
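To make the idea of automated fairness metrics concrete, below is a minimal sketch in plain Python of one widely used measure, the demographic parity difference, which compares positive-outcome rates between two groups of applicants. This is an illustration only; the function name and logic are assumptions made for this article and do not reflect the Veritas Toolkit's actual API.

```python
# Illustrative sketch of a standard fairness metric (demographic parity
# difference). NOT the Veritas Toolkit's API -- names and logic here are
# assumptions for illustration only.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-outcome rates between two groups.

    predictions: list of 0/1 model decisions (e.g., loan approvals)
    groups: list of group labels ("A" or "B"), one per decision

    A value near 0 suggests both groups receive positive outcomes at
    similar rates; larger values flag potential bias for human review.
    """
    rate = {}
    for g in ("A", "B"):
        decisions = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(decisions) / len(decisions)
    return abs(rate["A"] - rate["B"])

# Example: loan approvals for two applicant groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # |0.75 - 0.25| = 0.5
```

In practice, a toolkit of this kind would compute many such metrics across protected attributes and visualize the results, so that gaps can be flagged for review under an institution's governance process.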
Available on GitHub, the open-source toolkit allows plugins to enable integration with a financial institution’s IT systems.
The consortium, called Veritas, also developed new use cases to demonstrate how the methodology could be applied and shared key implementation lessons. These included a case study in which Swiss Reinsurance ran a transparency assessment of its predictive AI-based underwriting function. Google also shared its experience implementing the FEAT methodologies in its payment fraud detection systems in India and mapping out its AI principles and processes.
Veritas also released a whitepaper outlining lessons shared by seven financial institutions, including Standard Chartered Bank and HSBC, on the integration of AI assessment methodology with their internal governance frameworks. These include the need for a “responsible AI framework” that spans geographies and a risk-based model for determining the governance required for AI use cases. The document also details responsible AI practices and training for the new generation of AI professionals in the financial sector.
Sopnendu Mohanty, Chief Fintech Officer at MAS, said: “Given the rapid pace of development in AI, it is critical that financial institutions have robust frameworks in place for the responsible use of AI. The updated toolkit will enable financial institutions to assess their AI use cases for fairness, ethics, accountability and transparency. This will help foster a responsible AI ecosystem.”
The Singapore government has identified six top risks associated with generative AI and proposed a framework on how these issues can be addressed. It has also set up a foundation that seeks to harness the open-source community to develop testing toolkits that reduce the risks of AI adoption.
During his visit to Singapore earlier this month, OpenAI CEO Sam Altman urged public consultation in the development of generative AI, with humans remaining in control. He said this was necessary to mitigate the potential risks and pitfalls associated with AI adoption.
Addressing challenges related to bias and data localisation, Altman said, is also important as AI gains traction and draws the interest of nations. For OpenAI, the company behind ChatGPT, this has meant figuring out how to train its generative AI platform on datasets that are “as diverse as possible”, spanning multiple cultures, languages, and values.