Thursday, July 23, 2020 - 13:31



Navigating these issues can be tricky given their intersectional nature.


If the first half of the year is anything to go by, 2020 is going to be one full of conversations about everything from the Black Lives Matter protests to the role of the physical office in a post-COVID world. Now, thanks to two reports from the Singapore Academy of Law’s Law Reform Committee, we can add to that list the issues of ethics in artificial intelligence (AI) and of database rights. We sat down with two subcommittee members responsible for the reports to find out more.

Mr Ronald Wong, Director of Covenant Chambers LLC, explains the importance of these conversations today, saying, “It’s widely accepted that, for all its potential benefits, AI also brings with it risks of harm and unfair outcomes, as well as, say, security and privacy issues. So as stakeholders discuss these issues, it’s important that they also consider things from a human-centred, ethical perspective – that is, how can we ensure that AI promotes human wellbeing and safety across the various scenarios in which it might be deployed.”

These scenarios aren’t always far removed from some of the wider conversations currently taking centre stage. “AI systems can actually either worsen or diminish fault lines over race, class and inequality,” says Mr Wong, pointing to the example of AI systems used in criminal justice exhibiting apparent biases against certain racial groups as a result of the data they are fed. He continues, “So a key theme around ethics and AI is the issue of fairness, and fairness specifically with regard to social justice. It’s a reminder that, when AI systems are being designed or regulated, we need to be mindful of such biases or errors and ensure that the systems don’t inadvertently perpetuate or exacerbate existing social injustices.”

The COVID-19 pandemic has also raised the possibility of injustice in other areas. For instance, AI can support predictive modelling, flagging potential high-risk patients based on their medical history. But is there a risk that this could drive inequalities in access to healthcare? “In a dystopic world,” Wong notes, “such predictive applications could, in principle, be used to directly determine the extent or cost of, for example, someone’s medical insurance or treatment. But what if there is an error in such an AI-driven decision? How would these be reviewed?”

These are issues with which lawmakers and policymakers will need to grapple when reforming laws and regulations to adapt to AI. To that end, the subcommittee’s report sets out certain core ethical principles that the human-centred deployment of AI should pursue, and the issues and questions that policymakers may face in designing laws that protect them. Ranging from accountability and transparency to respect for values and culture, these principles are accompanied by examples of different policy approaches that could be taken to that end.


Importantly, Wong adds, the report is intended to be technology neutral. We’re not yet in a world of fully sentient robots, and systems like Iron Man’s J.A.R.V.I.S. are likely to remain in the realm of science fiction for the foreseeable future; but even as we move ever closer to that future, the core ethical principles aren’t going to change.


Regardless of how advanced AI systems are, or the situations in which they are used, one thing that typically unites them is their reliance on data – whether you’re talking about ride sharing, image recognition or even COVID-19 contact tracing, data is those systems’ feedstock. Again, there’s a tension here. As fellow subcommittee member Desmond Chew remarks: “Many of the apps and other services we use today rely on data to provide better, more tailored services. So for better services, individuals may need to provide more data. The question then is how prepared they are to provide such access to the government, to businesses.”


That’s true enough: if you want a health application to better advise you on a meal plan, you should be prepared to share with it all sorts of data, including non-personal data – that is, anonymised data that can’t be traced back to you directly. Even that data has potential value to you, and to the company you’re giving it to. So the question is how to strike a balance between ensuring companies have the incentives to build the huge databases on which the online economy relies, and ensuring that individuals are protected and other companies remain able to compete.


The subcommittee’s report attempts to do just that, examining whether key data-related laws in Singapore currently operate effectively to promote the beneficial production of, and access to, databases, while also protecting individual rights. It also examines the oft-overlooked area of database rights. “We talk so much about being a smart nation and yet we do not pay enough attention to areas such as database protections,” says Mr Chew. “Even in an offline context, cases like Global Yellow Pages Ltd v Promedia Directories Pte Ltd – which emphasised the need for intellectual effort, judgment or creativity for copyright to subsist – have shown the limits of existing compilation rights under copyright law. When you extrapolate that to electronic databases, the gaps under existing laws are even more pronounced. So, we need to consider whether there are ways in which we can ensure that databases are adequately protected, and those who put in the ‘grunt work’ of compiling them are rewarded and incentivised to continue developing them. In the European Union, for example, databases are protected by a standalone intellectual property right.”

But is there evidence that such rights make a positive difference? “It’s open to question,” says Mr Chew, pointing to the lack of hard evidence that the EU database right, for example, has led to significant economic benefits or increased database production in the EU. “In fact, the assumption that more and more layers of IP protection means greater innovation and growth appears not to have held up. And I think this is a very important learning point for Singapore: Just because we grant more rights and layers of protection doesn't necessarily really mean that we actually are creating or incentivising innovation and growth.”

“The report’s recommendations reflect this,” he concludes. “We think that ultimately, targeted clarifications through both hard and soft law of how existing copyright laws can protect databases will, when combined with ongoing developments to data protection laws, help get that difficult balance right.”

The reports are the first in a series of reports by the Law Reform Committee, focused on the impact of Robotics and AI on the Law. You can read the subcommittee’s reports here.

The series is also coming to #TechLawFest 2020! To hear more about all the subcommittee’s AI-related reports, register now to attend one of the dedicated panel discussions.