Gartner: Harnessing the Power of Large Language Model Applications for Government CIOs

By Dean Lacheca, VP Analyst at Gartner

Large language models (LLMs) such as ChatGPT, GPT-4 and Bard have entered the mainstream conversation among government executives who want to understand the technology, its opportunities and its risks.

Public sector organizations can use LLMs to generate value across a wide range of government use cases, both internal and citizen-facing. To achieve this, however, government CIOs will need to give these platforms access to a wide range of government data, much of which may be sensitive. For governments, controlling sensitive information and maintaining privacy are critical. Balancing the opportunity against the risks will therefore require government executives to formulate appropriate policies.

Potential Value of LLMs for Government Service Delivery and Operations

Appropriately trained LLMs, deployed alongside other automation tooling, represent a significant potential source of value for government service delivery and operations. Use cases include:

  • Text generation. New levels of personalization in government services can be achieved by drafting communications in multiple formats for multiple target audiences, such as young people, marginalized communities or speakers of other languages.
  • Text summarization. Summarizing long or complex case files for case managers supports improved decision making. Similarly, summarizing complex documents for laypeople or policymakers could improve productivity.
  • Text classification. LLMs enable the classification and collation of large volumes of unstructured text, improving the quality of the data used to support decision intelligence and policy development. Text classification can also be used for sentiment analysis of citizen engagement and communications, as shown in the sketch after this list.
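
To make the classification use case concrete, the sketch below shows one way a citizen-feedback comment might be labeled by sentiment. It is a minimal illustration, assuming the OpenAI Python SDK (v1 or later) and an API key in the environment; the model name, prompt wording and label set are placeholders, not recommendations.

```python
# Illustrative sketch only: classifying citizen feedback sentiment with an LLM.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment;
# the model name and prompt wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(feedback: str) -> str:
    """Label a piece of citizen feedback as positive, negative or neutral."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; substitute an approved model
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the citizen feedback as exactly "
                    "one word: positive, negative or neutral."
                ),
            },
            {"role": "user", "content": feedback},
        ],
        temperature=0,  # favor consistent labels over creative output
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The new permit portal saved me hours. Great work!"))
```

Setting the temperature to zero biases the model toward consistent labels, which matters when classifications feed downstream decision intelligence.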

Current Limitations and Risks of LLMs

To deliver value through LLMs, it is crucial to gain and maintain the trust of the community and the government workforce. This requires strong governance and risk management, as well as a comprehensive understanding and management of the technology's inherent limitations, including the differences between consumer-facing products and those available for integration into products and services. There are four major sources of risk:

  1. Accuracy. LLMs are statistical models, not cognitive ones. As a result, it can be difficult to determine what is fact, which gives rise to the concept of a "hallucination": a convincing but misleading response. LLMs also cannot continuously update their models with newly published information, so the data used to train a model may not reflect the latest position and facts, reducing the accuracy of results.
  2. Bias. The data used to train an LLM may be incomplete, of poor quality or biased. Any such biases will undermine the quality of the model's responses.
  3. Copyright. There is potential for copyright violations, since the copyright status of the data used to train GPT-3 and GPT-4 models, and of their outputs, has not yet been settled in any major jurisdiction.
  4. Privacy. Using consumer-facing LLMs, such as ChatGPT, risks the accidental release of confidential or private information, since these services may use inputs for further training. They are therefore unlikely to comply with privacy legislation or meet community expectations. One mitigation is to redact sensitive identifiers before prompts leave the organization, as shown in the sketch after this list.
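
One practical mitigation for the privacy risk above is to redact obvious identifiers before any prompt is sent to an externally hosted model. The sketch below is a deliberately simple illustration using regular expressions; the patterns and placeholder labels are assumptions for demonstration, and a production system would need purpose-built PII detection (names, addresses and case numbers, for example, are not caught here).

```python
# Illustrative sketch only: redacting obvious identifiers before a prompt is
# sent to an external LLM. The regex patterns below are deliberately simple
# placeholders; production systems need purpose-built PII detection.
import re

# SSN is checked before PHONE so the broader phone pattern does not
# consume SSN-shaped strings first.
REDACTION_PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with its placeholder label."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(label, text)
    return text

prompt = "Citizen Jane Doe (jane.doe@example.gov, 555-123-4567) asked about SSN 123-45-6789."
print(redact(prompt))
# -> "Citizen Jane Doe ([EMAIL], [PHONE]) asked about SSN [SSN]."
```

Redaction of this kind complements, rather than replaces, policy controls on which data classifications may be sent to externally hosted models.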

Despite these risks, LLMs and other generative AI applications open the door to innovation across government, such as the U-Ask platform, an AI-powered chatbot connecting citizens to government services in the UAE. Since LLMs are not engineered to give a specific result, traditional governance and assurance practices cannot be applied in the same way. A structured approach to understanding, using and implementing responsible AI practices is essential for government organizations leveraging LLMs.

In setting a roadmap for LLMs within a government department, CIOs should, at a minimum, advocate for policies that reduce the risk of sensitive information being exposed.

