Singapore has released draft guidelines on how personal data should be handled when it is used to train artificial intelligence (AI) models and systems.
The document details how the country’s Personal Data Protection Act (PDPA) will apply when businesses use personal information to develop and train their AI systems, according to the Personal Data Protection Commission (PDPC), which administers the act. The guidelines also include best practices for establishing transparency in how AI systems use personal data to make decisions, predictions and recommendations.
However, the guidelines are not legally binding and do not supplement or replace any existing law. Instead, they address practical issues and situations, such as how companies can benefit from existing exceptions within the PDPA when developing machine learning models or systems.
The guidelines also outline how organizations can meet requirements related to consent, accountability and notification when collecting personal data for machine learning AI systems that facilitate predictions, decisions and recommendations.
The document also outlines when it is appropriate for companies to rely on the two exceptions for research and business improvement, which permit the use of personal data to train AI models without consent.
Business improvement exceptions may apply when companies develop a product, or have an existing product, that they want to improve. This exception may also be relevant when AI systems are used to empower decision-making processes that improve operational efficiency or that offer personalized products and services.
For example, the business improvement exception may be applied to internal HR recommendation systems that are used to provide a first list of potential candidates for a role. It can also be applied in the use of AI or machine learning models and systems to provide new features that improve the competitiveness of products and services.
However, organizations must ensure that the business improvement purpose “cannot reasonably be achieved” without personal data being used in a personally identifiable way.
Under the research exception, organizations are permitted to use personal data to conduct research and development that may not have immediate application in existing products and services or business operations. This may include joint commercial research work with other companies to develop new AI systems.
Organizations must ensure that research cannot be appropriately completed without the use of personal data in an identifiable form. There must also be clear public benefits in using personal data for research, and research results cannot be used to make decisions that affect the individual. Furthermore, the published results of the research should not identify the individual.
The guidelines also recommend that organizations using personal data for AI systems conduct a data protection impact assessment, which looks at the effectiveness of risk mitigation and remedial measures applied to the data.
With respect to data protection, organizations should incorporate appropriate technical procedures and legal controls when developing, training and monitoring AI systems that use personal data.
“In the context of developing AI systems, organizations should adopt data minimization as a good practice,” the guidelines state.
“Using only personal data with the characteristics necessary to train and improve AI systems or machine learning models will also reduce unnecessary data security and cyber risks for AI systems.”
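In practice, data minimization can be as simple as stripping records down to the attributes a model actually needs before training. The sketch below is illustrative only; the field names, the allow-list of training features, and the sample record are all hypothetical, not drawn from the guidelines.

```python
# Hypothetical allow-list of the only attributes the model needs.
TRAINING_FEATURES = {"age_band", "transaction_count", "account_tenure_years"}

def minimize(record: dict) -> dict:
    """Keep only allow-listed training features; drop direct identifiers."""
    return {k: v for k, v in record.items() if k in TRAINING_FEATURES}

# Illustrative record with placeholder values.
raw_record = {
    "name": "Jane Tan",        # direct identifier -- excluded from training
    "nric": "S1234567A",       # national ID -- excluded from training
    "age_band": "30-39",
    "transaction_count": 42,
    "account_tenure_years": 5,
}

print(minimize(raw_record))
# {'age_band': '30-39', 'transaction_count': 42, 'account_tenure_years': 5}
```

An allow-list (rather than a deny-list of known identifiers) is the safer default here: any new field added to the source data is excluded from training unless someone deliberately opts it in.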
The PDPC is seeking public feedback on the draft guidelines, which must be submitted by 31 August.
Partnership to test privacy protection tools
Singapore has also announced a partnership with Google that will enable local businesses to test the use of "privacy-enhancing technologies," or PETs, within a sandbox created by the government.
Describing these as additional tools to help organizations build their datasets, Communications and Information Minister Josephine Teo said: “PETs allow businesses to extract value from consumer datasets, while ensuring that personal data is secure. By facilitating data sharing, they can also help businesses develop useful data insights and AI systems.”
Using PET, for example, allows banks to aggregate data and build AI models for more effective fraud detection, while protecting their customers’ identities and financial data, Teo said.
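One common PET building block for this kind of cross-institution aggregation is keyed pseudonymization: each bank replaces a customer identifier with a stable token so records can be joined for fraud analysis without exchanging the raw identifier. The following is a minimal sketch of that idea; the shared key, account numbers, and setup (a key held by a trusted intermediary) are assumptions for illustration, not details from the announcement.

```python
import hashlib
import hmac

# Hypothetical secret held by a trusted intermediary, not by either bank alone.
SECRET_KEY = b"shared-secret-held-by-trusted-party"

def pseudonymize(account_id: str) -> str:
    """Derive a stable pseudonym via HMAC-SHA256.

    Without the key, the token cannot feasibly be reversed to the account ID,
    and an outsider cannot recompute tokens to link records.
    """
    return hmac.new(SECRET_KEY, account_id.encode(), hashlib.sha256).hexdigest()

# Both banks derive the same token for the same customer, so fraud signals
# can be aggregated on the token while the raw account number stays in-house.
assert pseudonymize("ACCT-001") == pseudonymize("ACCT-001")
assert pseudonymize("ACCT-001") != pseudonymize("ACCT-002")
```

Keyed hashing is only one of several PETs in this space; approaches such as secure multi-party computation or differential privacy offer stronger guarantees at higher complexity.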
To drive PET adoption, the Infocomm Media Development Authority (IMDA) last year introduced a PET sandbox to provide businesses access to grants and resources to develop such solutions.
The collaboration with Google will allow organizations in Singapore to test their Google Privacy Sandbox applications within the IMDA sandbox. The system provides a secure environment in which companies can access or share data without revealing sensitive information, PDPC said.
It added that the joint IMDA-Google sandbox is available to Singapore-based businesses and is designed for edtech companies, publishers and developers, among others.
According to Teo, this partnership marks Google’s first such collaboration with a regulator in Asia-Pacific to facilitate the testing and adoption of PET.
Through this initiative, she said, organizations can access a "safe space" to pilot projects using PETs on the platform they are already working on.
"With third-party cookies being phased out, businesses can no longer rely on these to track consumers' behavior through browsers, and PETs will be needed as an alternative," she added. "Consumers will experience being served more relevant content without the fear of their personal data being compromised."