We’ve all been hearing how almost any organization, of any size, anywhere can leverage artificial intelligence (AI) to boost productivity and revenue. Teachers can tailor course material to the learning needs of individual students, insurers can grow capacity and reduce fraudulent claims, utilities can predict equipment failures and avoid outages, and consumer packaged goods (CPG) companies can forecast what their customers will want to buy next.
The business applications of AI are endless, limited only by one caveat: poor data used to train the AI can lead to algorithmic bias and erode trust. We have previously discussed the dangers of data bias in AI and the path to building responsible, trustworthy AI. Poor data and clumsy deep learning processes are two causes for worry: they can make AI, in the words of Elon Musk, “far more dangerous than nukes.” The observation matters because data is now widely recognized as a strategic asset, and governments are among the largest owners of data. The relationship between data, AI and governments is tricky, with the potential to deliver revolutionary change or lead to undesirable outcomes.
Governments have rich and diverse data repositories. They have data on industry production, natural resources, biodiversity, GHG emissions, space exploration, citizen health, language usage and ethnicity, people movement, employment, education, housing, trade, investments, markets, patents, transport networks, law and order, poverty levels … the list goes on. Governments are committing billions of dollars to the creation of smart cities. They are putting their signatures on measures to mitigate global warming. They are responsible for combating pandemics and for the well-being of their citizens. Governments are mandated with solving the most complex problems faced by humanity.
If taxpayer money is to be spent wisely, much depends on the available data, its quality and how machine learning, deep learning systems and neural networks will put it to use. These form the bedrock of AI.
The Bright Future of AI in Government
The future of AI within governments is bright. The technology allows the public sector to improve the efficiency with which governments deliver projects. Private technology companies are making tremendous advances in AI and in how it is applied to industry and the good of society. Funding for AI reached record levels in the second quarter of 2021, with more than 550 AI startups globally raising over $20 billion in investments. Collaboration between the private sector and governments will be critical in making intelligent and accurate connections between needs, resources and events on a national scale. Governments can use the frameworks, software libraries, tools, models, hardware, test beds and skills that technology companies possess to process their data and transform public administration.
This area of collaboration between the public and private sector is rich in opportunities. According to one 2020 survey commissioned by Microsoft, only 4% of European public sector organizations have scaled AI to transform their operations. That figure, give or take a few percentage points, likely holds true for the entire planet. But as more senior leaders in the public sector sponsor AI programs and commit budgets to them, that figure will improve considerably. Expect these same leaders to seek professional assistance in prioritizing where to apply AI, and to incentivize collaboration with the private sector.
There are several examples of governments using AI successfully to impact society. The Las Vegas health department used AI to mine information from millions of tweets to identify which restaurants to inspect, replacing a previous system that operated on a rotating basis. By narrowing down the restaurants to inspect, the health department reduced food poisoning incidents by 9,000 and saw 500 fewer food poisoning-related hospital admissions as a result. In San Diego, a chatbot called Coptivity helps law enforcement officers access criminal information in seconds, a task, such as running a license plate number, that would otherwise take dispatchers up to 30 minutes. At Singapore’s National Cancer Center, AI is helping improve health services by accurately pinpointing gastric cancer.
Without Trust, AI’s Benefits Won’t Matter
Despite the benefits, all is not smooth sailing for AI and public-private partnerships. Gaining societal trust is the single biggest hurdle before we can see the rise of AI-augmented governments. No society will trust systems that fail to enshrine its ethical and moral values, or that violate the fair and transparent use of data.
Therefore, to ensure that AI can increase efficiency, reduce risk, improve citizen experience, scale and transform services, promote equality, and leverage unbiased data-based decisions, governments must focus on defining the ethical boundaries within which the AI systems of their private sector partners operate. This means identifying the right data, improving data quality, eliminating data bias and engaging leaders from civil society to define and monitor ethical practices and guarantee transparent solutions.
By implication, the foundation for AI must ensure it is:
- Trustworthy: AI systems and their decisions must be explainable, designed so they can stop when their probabilistic outcomes do not comply with deterministic ethical tenets.
- Collaborative: When faced with uncertainty or decisions that conflict with ethical practices, AI should hand decision-making over to humans. In addition, AI systems must be designed to give users the option to determine how much of their data can be used, and when.
- Sustainable: AI systems must be energy efficient and cannot be run at the cost of environmental degradation.
- Computationally scalable: AI systems must be able to deliver real-time decisions, using all data points found suitable for decision-making.
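The first two principles above, stopping when outcomes fall outside acceptable bounds and handing uncertain cases to humans, are often implemented as confidence-threshold deferral. The sketch below is a minimal, hypothetical illustration of that pattern; the names (`classify`, `decide`, `THRESHOLD`) and the stand-in scoring logic are assumptions for this example, not part of any real government system.

```python
# Minimal sketch of human-in-the-loop deferral: automate only when the
# model is confident, otherwise escalate the case to a human reviewer.

THRESHOLD = 0.90  # minimum confidence required for an automated decision


def classify(case: dict) -> tuple[str, float]:
    """Stand-in for a trained model: returns (label, confidence).

    A real system would run an actual model here; this toy version
    derives both from a single 'score' field for demonstration.
    """
    score = case.get("score", 0.0)
    label = "approve" if score >= 0.5 else "deny"
    confidence = abs(score - 0.5) * 2  # 0.0 at the boundary, 1.0 at the extremes
    return label, confidence


def decide(case: dict) -> str:
    label, confidence = classify(case)
    if confidence < THRESHOLD:
        # Below threshold: defer to a human, attaching the model's
        # suggestion for transparency rather than deciding silently.
        return f"escalated-to-human (model suggested {label})"
    return label


print(decide({"score": 0.98}))  # high confidence: automated "approve"
print(decide({"score": 0.55}))  # low confidence: escalated to a human
```

The key design choice is that the system never decides silently in the gray zone: low-confidence cases carry the model’s suggestion to the human reviewer, which keeps the handoff transparent and auditable.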
The world of AI and its nexus with governments is moving at a tremendous pace. After years of zero regulation, governments are now moving rapidly to define, regulate and leverage AI.
Emerging Partnerships, Principles
In early August, the Chinese government published a document called The Implementation Outline for Building a Government Under the Rule of Law (2021-2025) (English translation here). It lays the ground rules for integrating the Chinese state with digital technologies to deliver public services. The regulations will require big tech companies to share their data with the government, which will then use AI to inform decisions related to public life (legislation, law enforcement, etc.). The world will watch these developments closely. The lines around how governments partner with private organizations for data and AI technologies will be drawn quickly.
The methods and principles of public-private engagement will depend largely on the socio-political environment of nations. But one thing is certain: The path to public-private partnership for AI may vary, but every government will begin the journey sooner rather than later.
Kalyan Kumar (KK) is Global CTO & Head – Ecosystems at HCL Technologies. He is actively involved in product and technology strategy, the strategic partner ecosystem, startup incubation, open innovation/open source and the Enterprise Technology Office, and supports the company’s organic and inorganic initiatives.