At the AI Action Summit in France, Singapore made two significant announcements through Minister Josephine Teo. The first is a set of new initiatives to enhance AI safety and trust, including the Global AI Assurance Pilot, which aims to develop a reliable system for testing AI applications. Ensuring that AI systems are fair and safe is crucial because trust is essential for AI to be widely adopted in business and everyday life.
Many people are concerned about AI because of issues such as bias in decision-making and the loss of human control over AI systems. To address this, Singapore is working to understand why people distrust AI and how AI systems can be tested fairly and rigorously to make them more reliable. The Global AI Assurance Pilot will connect AI testing providers with companies that need AI systems to be tested, such as banks using AI for credit approvals. This ensures that customers are treated fairly and are not unfairly denied loans due to AI biases.
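One concrete check an assurance tester might run on a credit-approval model is comparing approval rates across demographic groups (often called demographic parity). The sketch below is illustrative only: the group names and outcomes are hypothetical, and real assurance testing would use several metrics, not just this one.

```python
# Minimal sketch of a demographic-parity check for a credit-approval model.
# All group names and decisions below are hypothetical illustrations.

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical outcomes: (applicant group, approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)
print(rates)              # per-group approval rates
print(parity_gap(rates))  # a large gap flags the model for closer review
```

In this toy example the gap is 0.5 (75% vs. 25% approval), which a tester would treat as a signal to investigate the model's inputs and training data, not as proof of bias on its own.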
Another key effort is reducing cultural bias in large language models (LLMs) and improving their accuracy in non-English languages. Singapore and Japan have launched the Joint AI Testing Report, which examines AI performance in 10 languages, including Malay, Chinese, Japanese, and Korean. This is important because most AI models are developed for English speakers but are used worldwide in many different languages and cultures.
Additionally, Singapore worked with 350 experts from nine Asia-Pacific countries to produce the AI Safety Red Teaming Challenge Report 2025. This report focuses on improving AI safety in diverse cultural and linguistic environments. Since AI is becoming a global technology, it is critical to ensure it works effectively across different regions. (Note: geographical bias in AI may also become an issue, as countries may grow suspicious that data flowing to foreign governmental organisations could be used to eavesdrop on their citizens and users.)
Singapore is committed to AI safety and governance by building strong partnerships with industries, universities, and communities worldwide. Through these efforts, Singapore hopes to ensure AI benefits society and is used responsibly across different sectors, such as healthcare and finance.
These initiatives represent a key step toward standardized methods for evaluating AI systems' safety and effectiveness, promoting trust and accountability in AI deployments. They underscore a global commitment to collaborative approaches in AI governance, aiming to balance innovation with safety and ethical considerations.
Secondly, Mark Chen and his team from Singapore’s Government Technology Agency (GovTech) showcased an innovative solution to combat online scams.
The team developed an AI-driven system that detects and preemptively blocks scam websites. This solution enables enforcement and cybersecurity agencies to process over 10 suspicious sites per second, significantly enhancing scam detection and prevention capabilities.
Online scams have been a growing concern, with scammers rapidly creating fraudulent websites to deceive individuals. Traditional methods of identifying and blocking these sites often lag behind the scammers’ tactics, leaving users vulnerable to financial losses and personal data breaches.
The system employs artificial intelligence and machine learning algorithms to analyze and identify characteristics of scam websites. Once a suspicious site is detected, the system can automatically block it through platforms like Google Safe Browsing. This proactive approach allows real-time intervention, preventing users from accessing malicious sites.
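To make the idea concrete, the sketch below scores a URL on a few features commonly associated with scam sites. This is an illustrative heuristic, not GovTech's actual model; the keyword list, features, and threshold are all assumptions, and a production system would use trained machine-learning classifiers over far richer signals (page content, certificates, hosting data).

```python
# Illustrative feature-based URL scoring, in the spirit of automated
# scam-site detection. Features and thresholds here are assumptions,
# not the real system's logic.
from urllib.parse import urlparse

SUSPICIOUS_KEYWORDS = {"login", "verify", "secure", "gift", "prize"}

def suspicion_score(url: str) -> int:
    """Score a URL: higher means more scam-like features."""
    host = urlparse(url).hostname or ""
    score = 0
    if any(k in url.lower() for k in SUSPICIOUS_KEYWORDS):
        score += 1  # bait words in the URL
    if host.count(".") >= 3:
        score += 1  # deeply nested subdomains
    if any(c.isdigit() for c in host):
        score += 1  # digits in the hostname
    if len(url) > 75:
        score += 1  # unusually long URL
    return score

def is_suspicious(url: str, threshold: int = 2) -> bool:
    """Flag a URL when enough scam-like features co-occur."""
    return suspicion_score(url) >= threshold

print(is_suspicious("https://secure-login.bank123.example.verify.top/claim-prize"))  # True
print(is_suspicious("https://www.example.com/about"))                                # False
```

A flagged URL would then be submitted to a blocklisting platform such as Google Safe Browsing, so browsers warn users before the page loads.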
Between January and November 2024, the AI-driven system blocked over 5,000 scam websites with a precision rate above 90%, meaning the vast majority of blocked sites were genuine scams rather than false alarms. This accuracy has been instrumental in preventing potential victims from falling prey to online scams.
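Precision here measures the fraction of blocked sites that were truly scams. A small worked example with hypothetical counts (the 4,600/400 split below is an assumption chosen only to illustrate the arithmetic, not a reported figure):

```python
# Precision = true positives / (true positives + false positives).
# The counts below are hypothetical, chosen only to illustrate the formula.

def precision(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)

# e.g. if 5,000 blocks comprised 4,600 confirmed scams and 400 false alarms:
print(precision(4600, 400))  # 0.92, i.e. 92% precision
```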