AI developers must move quickly to develop and deploy systems that address algorithmic bias, said Cathy Baxter, principal architect of ethical AI practice at Salesforce. In an interview with ZDNET, Baxter stressed the need for diverse representation in data sets and user research to ensure fair and impartial AI systems. She also highlighted the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy. And she stressed the need for cross-sector collaboration, such as the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and secure AI systems that benefit everyone.
One of the fundamental questions in AI ethics is how to ensure that AI systems are developed and deployed without reinforcing existing social prejudices or creating new ones. To achieve this, Baxter stressed the importance of asking who benefits from, and who pays for, AI technology. It is important to consider the data sets being used and to ensure that they represent everyone's voice. It is also essential to identify potential pitfalls through inclusive design and user research during the development process.
Also: ChatGPT's IQ Is Zero, But It's A Revolution In Usability, Says AI Expert
"It's one of the fundamental questions we have to discuss," Baxter said. "Women of color, in particular, have been asking this question and doing research in this field for years. I'm thrilled to see so many people talking about this, especially with the use of generative AI. But one of the things we need to do, fundamentally, is ask who benefits and who pays for this technology. Whose voices are involved?"
Social bias can be communicated to AI systems through the data sets used to train them. Unrepresentative data sets containing bias, such as image data sets with predominantly one race or lacking cultural differentiation, can result in biased AI systems. Furthermore, applying AI systems unevenly to society may perpetuate existing stereotypes.
To make AI systems transparent and understandable to the average person, it is important to prioritize interpretability during the development process. Techniques such as “chain of thought prompts” can help AI systems to show their work and make their decision-making process more understandable. User research is also important to ensure that explanations are clear and that users can identify uncertainties in AI-generated content.
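The chain-of-thought technique mentioned above can be sketched as a simple prompt wrapper. This is a minimal, generic illustration, not a Salesforce implementation: the `with_chain_of_thought` helper is hypothetical, and whatever text-generation API receives the prompt is left out.

```python
def with_chain_of_thought(question: str) -> str:
    """Wrap a question so a model is asked to show its reasoning steps.

    The wrapper asks the model to lay out intermediate reasoning before
    the final answer, which makes the decision process easier for a
    human reviewer to inspect and challenge.
    """
    return (
        f"Question: {question}\n"
        "Please reason step by step, then state your final answer "
        "on a line beginning with 'Answer:'."
    )

# Example: ask a model to justify, not just assert, a fairness judgment.
prompt = with_chain_of_thought(
    "Does this loan-approval rule treat applicants of all groups equally?"
)
print(prompt)
```

The point of the wrapper is interpretability: the model's answer arrives with visible intermediate steps that a user researcher can check for gaps or unstated assumptions.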
Also: AI can automate 25% of all jobs. Here are the ones that are most (and least) at risk
Transparency and consent are needed to protect the privacy of individuals and ensure the responsible use of AI. Salesforce has guidelines for responsible generative AI, including respecting the source of data and using customer data only with consent. Allowing users to opt in, opt out, or otherwise control how their data is used is central to privacy.
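The opt-in/opt-out model described above amounts to a consent gate in front of any data use. Below is a minimal in-memory sketch of that idea; the class and method names are illustrative inventions, not part of any Salesforce API.

```python
class ConsentRegistry:
    """Tracks explicit user consent; data use defaults to 'no'."""

    def __init__(self) -> None:
        self._opted_in: set[str] = set()

    def opt_in(self, user_id: str) -> None:
        """Record that the user has explicitly consented."""
        self._opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:
        """Withdraw consent; the user can 'go back' at any time."""
        self._opted_in.discard(user_id)

    def may_use_data(self, user_id: str) -> bool:
        # Absent an explicit opt-in, the answer is always False.
        return user_id in self._opted_in


registry = ConsentRegistry()
print(registry.may_use_data("alice"))  # no consent recorded yet
registry.opt_in("alice")
print(registry.may_use_data("alice"))  # consent granted
registry.opt_out("alice")
print(registry.may_use_data("alice"))  # consent withdrawn
```

The design choice worth noting is the default: a user who has never been asked is treated the same as one who opted out, which matches the "only with consent" stance in the quote that follows.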
"We only use customer data when we have their consent," Baxter said. "It's really important to be transparent when you're using someone's data, to allow them to opt in, and to allow them to go back and say they no longer want their data involved."
As competition for innovation in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Empowering users to make informed decisions about the use of AI-generated content and keeping humans in the loop can help maintain control.
Ensuring that AI systems are secure, reliable, and usable is critical, and industry-wide collaboration is essential to achieving it. Baxter praised the AI Risk Management Framework created by NIST, which involved more than 240 experts from various fields. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.
Failing to address these ethical AI issues can have serious consequences, as seen in cases of wrongful arrests caused by facial recognition errors or the creation of harmful images. Investing in safeguards and focusing on harms in the here and now, rather than only on potential future harm, can help mitigate these issues and ensure the responsible development and use of AI systems.
Also: How does ChatGPT work?
While the future of AI and the potential for artificial general intelligence are intriguing topics, Baxter stresses the importance of focusing on the present. Ensuring responsible use of AI and addressing societal biases today will better prepare society for AI advancements in the future. By investing in ethical AI practices and collaborating across industries, we can help build a safer, more inclusive future for AI technology.
"I think the timeline matters a lot," Baxter said. "We really have to invest in the here and now and build this muscle memory, build these resources, build rules that allow us to move forward but to do it safely."