Singapore has identified six top risks associated with generative artificial intelligence (AI) and proposed a framework on how these issues can be addressed. It has also set up a foundation that seeks to tap the open-source community to develop testing toolkits that reduce the risks of AI adoption.
Hallucinations, accelerated misinformation, copyright challenges, and embedded bias are among the major risks of generative AI outlined in a report released by Singapore’s Infocomm Media Development Authority (IMDA). The discussion paper details a framework for the country’s “trusted and responsible” adoption of the emerging technology, including disclosure standards and global interoperability. The report was jointly developed with Aicadium, an AI tech company founded by state-owned investment firm Temasek Holdings.
Also: AI Ethicist Says Today’s AI Boom Will Exacerbate Social Problems If We Don’t Act Now
The framework looks at how policymakers can bolster existing AI governance to address the “unique features” and immediate concerns of generative AI. IMDA said it also discusses the investments needed to ensure sustainable governance outcomes over the long term.
In identifying hallucinations as a major risk, the report noted that, like all AI models, generative AI models make mistakes, and these errors are often vivid and convincing, taking on an anthropomorphized quality.
The discussion paper noted, “Current and previous versions of ChatGPT have been known to make factual errors. Such models have a more challenging time performing tasks such as reasoning, math, and trivia.”
“This is because ChatGPT is a model of how people use language. While language often mirrors the world, these systems do not yet have a deep understanding of how the world works.”
The report stated that these false responses can be deceptively convincing or authentic, pointing to instances where language models have generated seemingly valid but incorrect responses to medical questions, as well as software code that is susceptible to vulnerabilities.
Furthermore, false content is increasingly difficult to identify because credible but misleading text, images, and videos can potentially be generated at scale using generative AI.
Also: How to use ChatGPT to write code
Impersonation and reputation attacks have become easier, including social-engineering attacks that use deepfakes to gain access to privileged individuals.
Generative AI also lowers the barrier to other types of harm, as threat actors with little or no technical skill can potentially generate malicious code.
Also: Don’t fall for fake ChatGPT apps: Here’s what to look out for
According to the discussion paper, these emerging risks may require new approaches to the governance of generative AI.
Singapore’s Communications and Information Minister Josephine Teo said that while global leaders are still exploring alternative AI architectures and approaches, many are warning about the dangers of AI.
AI offers “human-like intelligence” at potentially high levels and at a significantly lower cost, which is especially valuable for countries such as Singapore, where human capital is a key differentiator, said Teo, who was speaking at this week’s Asia Tech x Singapore summit.
Improper use of AI, however, can do a lot of damage, she noted. “Guardrails are therefore needed to guide people to use it responsibly and to design AI products that are ‘safer for all of us’,” she said.
“We hope (the discussion paper) will spur many conversations and create awareness on the safety measures needed,” she added.
Also: ChatGPT can be used in 6 harmful ways
During a closed-door dialogue at the summit, she revealed that senior government officials also discussed recent advances in AI, including generative AI models, and how these could affect economic growth and society.
Teo, who provided a summary as chair of the discussion, said officials reached a consensus that AI should be governed “properly” and used for the good of humanity. Participants in the meeting, which included ministers, represented nations including Germany, Japan, Thailand and the Netherlands.
The delegates also agreed that increased collaboration and exchange of information on AI governance policies would help identify common ground and better align approaches. Teo said this alignment would lead to sustainable and fit-for-purpose AI governance frameworks and technical standards.
Officials urged greater interoperability between governance frameworks, which they believe is needed to facilitate responsible development and adoption of AI technologies globally.
It was also recognized that AI ethics should be communicated at the earliest stages of education, while investment in reskilling should be prioritised.
Harnessing the community
Singapore has launched a non-profit foundation to “harness the collective power” and contribution of the global open-source community to develop AI-testing tools. The goal here is to facilitate the adoption of responsible AI and promote best practices and standards for AI.
Called the AI Verify Foundation, it will set the strategic direction and development roadmap for AI Verify, a governance-testing framework and toolkit introduced last year. The test toolkit has been made open source.
Also: This New AI System Can Read Minds As Accurately As Almost Half The Time
AI Verify Foundation’s current crop of 60 members includes IBM, Salesforce, DBS, Singapore Airlines, Zoom, Hitachi, and Standard Chartered.
The Foundation operates as a wholly owned subsidiary under IMDA. With AI-testing technologies still nascent, the Singapore government agency said tapping the open-source and research communities would help further develop the market segment.
Teo said: “We believe AI is the next big shift after the internet and mobile. Amid very real fears and concerns about its development, we will need to actively steer AI toward beneficial uses and away from bad ones. This is core to how Singapore thinks about AI.”
In his speech at the summit, Singapore’s Deputy Prime Minister and Finance Minister Lawrence Wong reiterated the importance of establishing trust in AI, so that the technology can be widely accepted.
“We are already using machine learning to optimize decision-making, and generative AI will potentially go beyond this to create new content and generate new ideas,” Wong said. “Still, serious concerns remain. Used improperly, [AI] can perpetuate dangerous biases in decision-making. And with the latest wave of AI, the risks are even higher as AI becomes more intimate and human-like.”
Also: AI is more likely to cause world destruction than climate change, according to an AI expert
These challenges have raised difficult questions for regulators, businesses, and society, he added. “What kinds of work should AI be allowed to assist with? How much control should AI have over decision-making, and what ethical safeguards should we put in place to guide its development?”
Wong said: “No one person, organization, or even country will have all the answers. We all have to come together in important discussions to determine the proper guardrails needed to build more trustworthy AI systems.”
In a panel discussion, Alexandra van Huffelen of the Netherlands’ Ministry of the Interior and Kingdom Relations acknowledged the potential benefits of AI, but expressed concern about its potential impact, especially amid mixed signals from industry.
The Minister for Digitalisation said market players, such as OpenAI, tout the benefits of their products while at the same time warning that AI has the potential to destroy humanity.
“It’s a crazy story to tell,” quipped van Huffelen, before asking a fellow panelist from Microsoft how he felt about the mixed messaging, given his company’s investment in OpenAI, the developer behind ChatGPT.
Also: I used ChatGPT to write similar routines in these 10 languages
OpenAI’s co-founders and CEO last month jointly published a note on the company’s website, urging the regulation of “superintelligent” AI systems. They called for a global authority, akin to the International Atomic Energy Agency, to oversee the development of AI. This agency should “inspect systems, require audits, test for compliance with security standards, restrict the degree of deployment and level of security,” among other responsibilities, they proposed.
“It would be important that such an agency focus on reducing existential risk and not on issues that should be left to individual countries, such as defining what an AI should be allowed to say,” they added. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past… Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property.”
In response, Microsoft’s Asia president Ahmed Mazhari acknowledged that van Huffelen’s pushback was not unwarranted, noting that some proponents who signed the petition in March calling for a halt to AI development had gone on to invest in their own AI chatbots the following month.
Also: Best AI Chatbots: ChatGPT and Alternatives to Try
Pointing to the societal harm that resulted from the lack of oversight of social media platforms, Mazhari said the tech industry has a responsibility to prevent a repeat of that failure with AI.
He added that the ongoing discussion and increased awareness of the need for AI regulations, particularly in the areas of generative AI, was a positive sign for a technology that came to market just six months ago.
In addition, van Huffelen stressed the need for tech companies to act responsibly, alongside the need to establish rules and to ensure organizations abide by them. Whether this dual approach can work in tandem remains “untested,” she said.
She also stressed the importance of establishing trust, so people will want to use the technology, and of ensuring users have as much control online as they do in the physical world.
Also: How does ChatGPT work?
Fellow panelist Keith Strier, vice president of Nvidia’s worldwide AI initiative, noted the complexity of governance due to the widespread reach of AI tools. This general availability means that there are more opportunities to create unsafe products.
Strier suggested that regulations should be part of the solution, but not the only answer, as industry standards, social norms, and education are equally important in ensuring the safe adoption of AI.