On national AI strategies


Recently, I have become quite interested in how countries have been shaping their national AI strategies and frameworks. Since the launch of ChatGPT, concerns have been raised about AI safety and about how such groundbreaking AI technologies could augment or adversely affect our daily lives. To address the public’s concerns and to set standards and practices for AI development, some countries have recently released national AI frameworks. As a budding academic researcher in this space who is keen to make AI more useful for medicine and healthcare, I am particularly drawn to two aspects of the few frameworks I have looked at (specifically those of the US, the UK and Singapore): the multi-stakeholder approach and the focus on AI education. I delve into both in this post.

Multi-stakeholder approach

The multi-stakeholder approach encompasses contributions from, and the inclusion of, multiple stakeholders. Singapore’s national AI strategy calls on industry, government and public research institutions to contribute to meaningful efforts that benefit the economy and society. In addition, the workforce will be trained in AI so that workers can take AI-centric approaches in their work where necessary. There is also a public call for AI project proposals, open to anyone with ideas for national-level AI programs. Similarly, the US Blueprint for an AI Bill of Rights states that AI systems ‘should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system’. The emphasis, therefore, is on ensuring that AI systems do not leave out or overlook minority sectors of the population, while also ensuring that all stakeholders have a part to play in national AI projects.

AI education

To overcome the fear that AI will replace humans, educating the public on the technical workings of AI is necessary. This education could vary in rigour and content depending on how an individual uses AI, but understanding the technology at some depth can help ease the fear of AI while also supporting people to integrate AI into their work and better prepare for future job requirements. Towards that, the Office for Artificial Intelligence and the Office for Students in the UK have worked together to provide scholarships for minority students to pursue postgraduate courses in AI and Data Science. Singapore, on the other hand, has launched the AI Apprenticeship Programme (AIAP), a nine-month programme that trains Singaporeans in AI and has them work on a real-world AI project before they seek employment in the tech sector.

I believe that both these elements are foundational to developing safe AI that is beneficial for, and trusted by, all sectors of the population. Given how fast-paced and far-reaching AI research and deployment are, governments should regularly iterate on their frameworks and regulations to keep up.