With increasingly widespread adoption, it is no wonder that concerns have also sharpened. Concerned citizens want more protections against AI risks. Concerned businesses worry that more protection equals less innovation. Here in Singapore, we hope to avoid such zero-sum thinking in fulfilling our vision of AI for the public good. We’ve always believed that AI governance is as important as AI innovation. As in so many areas, good governance is not the enemy of innovation. On the contrary, governance enables sustained innovation.
One important set of tools for governance is laws and regulations that serve the public interest. To help society meet its governance objectives in the digital domain, we have introduced new laws for personal data protection, against misinformation and disinformation spread online, to better manage cyber risks and egregious content, and to curb online criminal activities. We have indicated our intention to introduce new legislation to better safeguard the security and resilience of our digital infrastructure, help victims of online harms seek redress from their perpetrators, and address the problem of deepfakes. We have not introduced an overarching AI law and have no immediate plans to do so.
Why? One reason is that some of the harms associated with AI can already be addressed by existing laws and regulations. Take, for example, AI-generated fake news that is spread online. Regardless of how the fake news is produced, as long as there is a public interest in debunking it, our laws already allow us to issue correction notices to alert people. What about AI models used to support hiring? For starters, many employers here do not yet intend to use AI for recruitment, mostly because they worry about biased outcomes. But regardless of how bias comes about, with or without AI, existing guidelines on fair employment practices, and upcoming workplace fairness legislation, will hold employers accountable.
Another reason for not yet introducing an AI law is that in some instances, an update of existing laws is the most efficient response. Take, for example, sextortion, where someone threatens to distribute intimate images of a victim. We can all agree that even if an image was not real but rather a deepfake, the distress caused is enough for it to be outlawed. That was precisely what we did: when we updated the Penal Code to introduce a specific offense of sextortion, we ensured that sextortion would be illegal with or without AI.
The examples I shared suggest that we are not defenseless against AI-enabled harms. In AI governance, we are not starting at ground zero. However, we must also have an attitude of humility, recognizing that it is one thing to deal with the harmful effects of AI and quite another to prevent them from happening in the first place through proper design and upstream measures. To borrow from road safety, it is in our interest to implement the equivalent of traffic rules, lights and signs, speed limits, seatbelts and airbags, all of which work together to protect road users. But when cars were first sold to the masses, we didn’t understand all the risks, nor did we know all the measures that could minimize them. And so the successful identification, development and validation of risk-mitigating measures is essential, and there is no shortcut. However, we believe that if we persist, we will have a much stronger basis for new laws and regulations, one that is grounded in evidence and results in more meaningful and impactful AI governance. This conviction underpins our efforts to develop a second set of tools for AI governance: the proverbial figuring out of the nature of the beast, through high-quality research into what can tame the beast and bring out its goodness.