AI apps to feature ‘safety labels’ highlighting risks and testing
In the near future, users of generative AI apps might find labels that clearly outline how the AI should be utilized, its associated risks, and its testing process. Following new guidelines, labels will be made mandatory to make the technology easier for the masses to understand.
Singapore’s Minister of Communications and Information, Mrs Josephine Teo, said the new measure is an effort to define a standard of transparency and testing for tech firms. She was speaking at the Personal Data Protection tech conference on July 15, according to a report by The Straits Times.
AI app developers should be transparent about their innovations
Just as medicines and home appliances carry safety labels, developers of generative AI apps must clearly inform users about how their AI models are used and developed. Mrs Teo, who is also Minister-in-charge of Smart Nation and Cybersecurity, said,
“We will recommend that developers and deployers be transparent with users by providing information on how the generative AI models and apps work.”
Explaining the upcoming rules, the minister likened them to opening a “box of over-the-counter medication”: inside, users find a sheet of paper that clearly states how the medicine should be used and the possible “side effects you may face.”
Teo asserted that “this level of transparency” is needed for systems built on generative artificial intelligence. The new set of rules will define safety benchmarks that should be met before an AI model is made available to the public.
Guide on data anonymization set to be released in 2025
Generative artificial intelligence is a class of AI capable of creating new text and images, and it is not as predictable as traditional AI. Under the new guidelines, creators will be required to disclose the risks of their content spreading falsehoods, hostile comments, and biased narratives.
Teo said that home appliances come with labels that clearly state the item has been tested for safe use; without them, a customer cannot know whether the appliance is safe. The same will apply to AI applications. Singapore’s Infocomm Media Development Authority (IMDA) will start a consultation with the industry on the new rules, though Teo did not give a date for the guidelines.
IMDA has also published a guide on privacy-related issues in technology, which will cater to the increasing demand for data to train AI models while protecting users’ privacy, said Teo.
IMDA assistant chief executive Denise Wong said keeping data secure in generative AI is more challenging for the industry. She was giving her views in a separate panel discussion on AI and data privacy during the event. The panel included representatives from tech firms such as ChatGPT maker OpenAI and consulting firm Accenture.
Data protection guardrails should be in place at all stages of AI development and deployment, said OpenAI’s privacy legal head, Jessica Gan Lee. She said AI models should be trained on diverse data sets from “all corners of the world.” Lee emphasized that multiple cultures, languages, and sources should be included in AI training, along with finding ways to limit personal data processing.
Teo said a guide on data anonymization will be introduced for companies operating in ASEAN in early 2025. The guide will result from a February meeting among regional officials who explored ways to create a secure global digital ecosystem.