Government agencies are awash in unstructured, difficult-to-interpret data. To gain meaningful insights from this data for policy analysis and decision-making, they can use natural language processing, a form of artificial intelligence.
Tom is an analyst at the US Department of Defense (DoD).1 All day long, he and his team collect and process massive amounts of data from a variety of sources—weather data from the National Weather Service, traffic information from the US Department of Transportation, military troop movements, public website comments, and social media posts—to assess potential threats and inform mission planning.
While some of the information Tom’s group collects is structured and can be categorized easily (such as tropical storms in progress or active military engagements), the vast majority is simply unstructured text, including social media conversations, comments on public websites, and narrative reports filed by field agents. Because the data is unstructured, it’s difficult to find patterns and draw meaningful conclusions. Tom and his team spend much of their day poring over paper and digital documents to detect trends, patterns, and activity that could raise red flags.
In response to these kinds of challenges, DoD’s Defense Advanced Research Projects Agency (DARPA) recently created the Deep Exploration and Filtering of Text (DEFT) program, which uses natural language processing (NLP), a form of artificial intelligence, to automatically extract relevant information and help analysts derive actionable insights from it.2
Across government, whether in defense, transportation, human services, public safety, or health care, agencies struggle with the same problem: making sense of huge volumes of unstructured text to inform decisions, improve services, and save lives. Text analytics, and specifically NLP, can aid processes ranging from investigating crime to providing intelligence for policy analysis.
Think of unstructured text as being “trapped” in physical and virtual file cabinets. The promise is clear: By improving their ability to “connect the dots” and identify patterns in available data, governments could boost effectiveness and prevent many catastrophes.
This article explores NLP, its capabilities, and critical government issues it can address.
Natural language processing dates back to the 1940s, when Roberto Busa, an Italian Jesuit priest and a pioneer of computational linguistics, analyzed the complete works of St. Thomas Aquinas, the 13th-century Catholic priest and philosopher. In 1949, Busa met with IBM founder Thomas J. Watson and persuaded him to sponsor the Index Thomisticus, a computer-readable compilation of Aquinas’ works. The project took more than 30 years and eventually was published in 56 volumes based on more than 11 million computer punch cards, one for every word analyzed.3
NLP first received widespread recognition in the 1950s, when researchers and linguistics experts began developing machines to automate language translation.4 In the mid-2000s, historian Sharon Block used topic modeling, one facet of NLP, to conduct a quantitative analysis of the Pennsylvania Gazette, one of the most prominent American newspapers of the 18th century.5
Today’s biggest advances in NLP tend to be driven by deep-learning methods based on neural networks (figure 1), which are designed to loosely mimic the function of neurons in the human brain.6 Unlike earlier rule-based and purely statistical approaches, deep-learning NLP systems can automatically learn from examples.7 According to the International Data Corporation, organizations deriving insights from structured and unstructured data could achieve an additional US$430 billion in productivity gains by 2020.8
The US Department of Defense has long been a pioneer in applying NLP and natural language generation (NLG) to government. NLP tools encompass the entire cycle of recognizing human speech, understanding and processing natural language, and generating text that can be read and interpreted by humans; NLG is the subset of NLP tools that translates data into interpretable natural language narratives.9 Back in the 1990s, the second of DARPA’s three waves of AI focused largely on developments in natural language technologies to accelerate advances in pattern recognition. Today, natural language understanding (NLU), a crucial component of NLP that helps machines comprehend unstructured text, and NLG form a core part of DARPA’s latest AI campaign to promote the development of machines that can mimic human reasoning and communication.10 NLP has emerged as one of the larger investments in DoD’s total AI spend, with a budget of close to US$83 million in 2017, a jump of nearly 17 percent from 2012 spending.11
Government agencies around the world are accelerating efforts to abandon paper and modernize how they handle data. According to the National Archives and Records Administration, the US federal government has already digitized more than 235 million pages of government records and plans to reach 500 million pages by fiscal 2024.12
While digitizing paper documents can help government agencies increase efficiency, improve communications, and enhance public services, most of the digitized data will still be unstructured. That’s where NLP comes in.
With recent technological advances, computers now can read, understand, and use human language. They can even measure the sentiment behind certain text and speech (see sidebar, “Applications of natural language technologies”).13 These capabilities (figure 2) allow government agencies to recognize patterns, categorize topics, and analyze public opinion.
NLP has seven key capabilities:
Topic modeling is a method based on statistical algorithms to help uncover hidden topics from large collections of documents. Topic models are unsupervised methods of NLP; they do not depend on predefined labels or ontologies. A popular method within topic modeling is Latent Dirichlet Allocation (LDA), which is used to discover latent patterns in a sea of unstructured data. The US Securities and Exchange Commission (SEC), for example, made its initial foray into natural language processing in the aftermath of the 2008 financial crisis. The SEC used LDA to identify potential problems in the disclosure reports of companies charged with financial misconduct.14
The UK government uses the same technique to better understand public comments on GOV.UK. With LDA, the government can see how customer complaints and comments relate to one another; for example, that mortgage complaints often contain allegations of racial discrimination. Uncovering such topics allows the government to address them.15
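To make this concrete, below is a minimal sketch of LDA topic modeling in Python using scikit-learn. The sample comments, the number of topics, and the top-word printout are illustrative assumptions only, not the SEC’s or the UK government’s actual pipeline.

```python
# A minimal LDA sketch with scikit-learn; the comments below are invented
# examples, not real GOV.UK or SEC data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "My mortgage application was denied and I suspect discrimination.",
    "The mortgage interest rate changed without any notice.",
    "Passport renewal took three months and the fee was unclear.",
    "I could not book a passport appointment on the website.",
]

# Convert raw text to a bag-of-words matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)

# Fit an LDA model with two latent topics; a real corpus would need far
# more documents and a tuned topic count.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Print the words that most strongly characterize each discovered topic.
terms = vectorizer.get_feature_names_out()
for idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {idx}: {', '.join(top)}")
```

On a realistic corpus, analysts would inspect the top words per topic and track how topic proportions shift across documents or over time.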
Text categorization sorts text into specific taxonomies, typically after training by humans. For instance, complaints can be automatically filed into specific categories, and tweets can be categorized as pro candidate A or against candidate B.16
One use concerns the classification of sensitive information. A research study used NLP and machine learning on nearly one million declassified diplomatic cables from the 1970s to identify the features of records that had a higher chance of being classified, such as those concerning international negotiations and military matters.17
In another study, researchers underscore the benefits of automating security classification by using NLP to classify US security documents. The researchers also propose using text categorization to classify more than 100,000 declassified government records available in the Digital National Security Archive.18
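As a rough illustration of supervised text categorization, the sketch below trains a simple scikit-learn pipeline on a handful of invented, labeled examples; the categories, sentences, and model choice are assumptions for demonstration, not the approach used in the cited studies.

```python
# A hedged text-categorization sketch: TF-IDF features feeding a linear
# classifier. All labels and sentences are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Troop deployment schedules for the joint exercise.",
    "Agenda for the trade negotiation with partner states.",
    "Cafeteria menu for the staff holiday party.",
    "Shuttle bus timetable for the main campus.",
]
train_labels = ["sensitive", "sensitive", "routine", "routine"]

# A real system would train on thousands of labeled records and evaluate
# precision and recall on held-out data before deployment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Minutes of the arms control negotiation"]))
```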
Text clustering is a technique used to group text or documents based on similarities in content. It can be used to group similar documents (such as news articles, tweets, and social media posts), analyze them, and discover important but hidden subjects.19
The Center for Tobacco Products (CTP), part of the US Food and Drug Administration, uses text clustering and topic modeling to group documents based on specific key terms. For instance, documents related to the topic “menthol” could form one cluster, with those concerning menthol usage among youths representing a subset.20 Text clustering helps the CTP organize and glean insights from documents—from FDA submissions for new tobacco products to advertising claims—to better understand the impact of the manufacture, marketing, and distribution of tobacco products on public health and help inform policy-making, particularly concerning the implicit marketing of tobacco products to youths.21
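The mechanics of text clustering can be sketched in a few lines, assuming TF-IDF features grouped by k-means; the documents and cluster count below are hypothetical, not the CTP’s actual configuration.

```python
# A minimal text-clustering sketch: TF-IDF vectors grouped with k-means.
# Documents are invented; a real run would use full FDA submissions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "Menthol flavoring in cigarettes and smoking initiation.",
    "Menthol use among youth smokers is rising.",
    "Advertising claims for a new smokeless tobacco product.",
    "Marketing submission for a heated tobacco device.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Print each document with its assigned cluster label.
for label, doc in zip(km.labels_, docs):
    print(label, doc)
```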
Some of the most common applications (figure 3) of natural language processing are:
Information extraction is used to automatically find meaningful information in unstructured text. One potential application can improve transparency and accuracy in crime reporting. Often, police reports are written in haste, while crime witnesses’ and victims’ accounts can be incomplete due to embarrassment or fear of repercussions. Researchers at Claremont Graduate University found that NLP technology could comb through witness and police reports and related news articles to identify crucial elements such as weapons, vehicles, time, people, clothes, and locations with high precision.28
Named entity resolution can extract the names of persons, places, companies, and more; classify them into predefined labels; and link the named entities to a specific ontology.29 For instance, a text may contain references to the entity “Georgia,” which is both a country and a US state. With the help of entity resolution, “Georgia” can be resolved to the correct category: the country or the state.
Government agencies can extract named entities from social media to identify perpetrators of cybercrime, for instance, as well as potential future threats.30 The more ontologies defined in the NLP tool, the more effective the outcome.
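As a simple illustration, the sketch below uses spaCy’s off-the-shelf English model to recognize named entities in a short, invented report; resolving an ambiguous mention such as “Georgia” to the right ontology entry would require an additional entity-linking step not shown here.

```python
# A hedged named-entity-recognition sketch with spaCy's small English
# model (install with: python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
report = ("On Tuesday, two suspects fled Atlanta, Georgia, "
          "in a white van registered to Acme Logistics.")

doc = nlp(report)
for ent in doc.ents:
    # ent.label_ is the predicted category, e.g., GPE for a geopolitical
    # entity or ORG for an organization. Deciding whether "Georgia" is
    # the state or the country is a separate disambiguation step.
    print(ent.text, ent.label_)
```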
Relationship extraction helps establish semantic relations between entities. For instance, if a document mentions the Office of Management and Budget and the US federal government, relationship extraction can identify and create a parent–agency relationship between them.31
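One simplistic way to approximate relationship extraction is to mine subject–verb–object triples from a dependency parse, as in the hypothetical spaCy sketch below; production systems generally rely on trained relation classifiers rather than rules this crude.

```python
# An illustrative rule-based relation extractor: pull (subject, verb,
# object) triples from spaCy's dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The Office of Management and Budget oversees federal agencies.")

for token in doc:
    if token.pos_ == "VERB":
        subjects = [w for w in token.lefts if w.dep_ in ("nsubj", "nsubjpass")]
        objects = [w for w in token.rights if w.dep_ == "dobj"]
        for s in subjects:
            for o in objects:
                # Prints something like: Office oversee agencies
                print(s.text, token.lemma_, o.text)
```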
Sentiment analysis gauges the opinions and emotions behind human language, allowing agencies to analyze and interpret citizen and business comments on social media platforms, websites, and other venues for public comment. Washington, DC’s sentiment analysis program (GradeDC.gov), for example, examines citizens’ feedback by analyzing their comments on social media platforms. The district was the first municipal government in the United States to adopt such an initiative.32
Another study used sentiment analysis to examine the experiences of patients with various health care providers throughout the United States. The authors used an exhaustive dataset of more than 27 million tweets related to patient experience over a period of four years. A principal objective of the study was to examine how such experiences vary across the country; the findings suggested a higher proportion of positive experiences in metropolitan areas than in nonmetropolitan areas.33
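For illustration, here is a minimal sentiment-analysis sketch using NLTK’s VADER lexicon; the feedback strings are invented, and the approach is far simpler than the models behind the programs and studies cited above.

```python
# A minimal sentiment-analysis sketch using NLTK's VADER lexicon.
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
feedback = [
    "The nurses were attentive and the discharge process was smooth.",
    "Waited four hours in the ER and nobody gave us any updates.",
]

for text in feedback:
    # polarity_scores returns neg/neu/pos plus a compound score in [-1, 1].
    scores = sia.polarity_scores(text)
    print(f"{scores['compound']:+.2f}  {text}")
```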
The deluge of unstructured data pouring into government agencies in both analog and digital form presents significant challenges for agency operations, rulemaking, policy analysis, and customer service. NLP can provide the tools needed to identify patterns and glean insights from all of this data, allowing government agencies to improve operations, identify potential risks, solve crimes, and improve public services. Ways in which NLP can help address important government issues are summarized in figure 4.
Whether it’s a comment dropped into a suggestion box at the Department of Motor Vehicles, an online survey on the Internal Revenue Service website, or various grievances posted on social media, public comments help government agencies to understand citizen and business concerns and better serve the public. NLP can analyze feedback, particularly in unstructured content, far more efficiently than humans can. Many organizations today are monitoring and analyzing consumer responses on social media with the help of sentiment analysis.
FiscalNote, a data and media company, used NLP to analyze more than 22 million comments on the FCC’s proposal to repeal net neutrality. A major challenge was sifting through millions of responses to separate genuine comments from fake ones generated by bots. To identify fakes, the company used NLP techniques to cluster the comments and identify similarities in sentences and paragraph structures.34
NLP also can help governments engage with citizens and provide answers to their questions. Singapore’s government used NLP to create “Ask Jamie,” a virtual assistant that can be placed on agency websites and trained to respond to questions posed by citizens. For questions with multiple answer options and permutations, Ask Jamie can ask questions to narrow down to an answer relevant to the query posed.35
A growing number of government agencies are using NLP-based solutions to improve investigations in critical areas such as law enforcement, defense, and intelligence. The DoD’s DEFT program referenced above uses NLP to uncover connections implicit in large text documents. Its objective is to improve the efficiency of defense analysts who investigate multiple documents to detect anomalies and causal relationships.36
The European Union’s Horizon 2020 program launched an initiative called RED (Real-time Early Detection) Alert, aimed at countering terrorism by using NLP to monitor and analyze social media conversations. RED Alert is designed to provide early alerts of potential propaganda and signs of radicalization by identifying online content posted by extremists. To comply with the General Data Protection Regulation (GDPR), this analysis uses homomorphic encryption, a method that allows mathematical operations to be performed on encrypted text without first decrypting it.37
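The general idea behind homomorphic encryption can be demonstrated with the open-source python-paillier library, which supports addition on encrypted numbers; this toy example is purely illustrative, and the source does not detail which scheme RED Alert actually employs.

```python
# A toy demonstration of additively homomorphic encryption using the
# python-paillier library (pip install phe). The "flagged-term counts"
# are invented placeholders.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Encrypt two counts, e.g., flagged-term frequencies from two sources.
a = public_key.encrypt(17)
b = public_key.encrypt(25)

# Add the values while they remain encrypted; no party performing this
# step ever sees the plaintexts.
total = a + b

print(private_key.decrypt(total))  # prints 42
```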
One of the most striking characteristics of NLP is its ability to facilitate better predictions, which can help agencies design preemptive measures. The police department of Durham, North Carolina, uses NLP in crimefighting by enabling the police to observe patterns and interrelations in criminal activities and identify pockets with a high incidence of crime, thus allowing for quicker interventions. This contributed to a 39 percent drop in violent crime in Durham from 2007 to 2014.38
NLP also is being used to combat child trafficking, about 75 percent of which involves online advertisements. DARPA, in collaboration with commercial technology experts, has developed a platform that monitors and draws connections among the dubious content of online advertisements. Virginia’s Fairfax County Police Department and the Homeland Security Investigations office in New Orleans both use this software to identify high-risk web advertisements and detect code words used by traffickers.39
Similarly, the Australian Securities and Investments Commission (ASIC) is piloting the use of NLP applications to identify dubious product promotions and sales malpractice.40
The World Bank’s Poverty and Equity Global Practice Group used LDA topic modeling to measure changes in policy priorities by examining presidential speeches in 10 Latin American countries and Spain from 1819 to 2016. Using LDA, the authors could identify the main topics for each document and indicate the variation in their significance across countries and over time. In Peru, for instance, topics on infrastructure and public services diminished in importance over time. With the help of topic modeling, the authors were able to establish, for each nation, a negative correlation between policy volatility and long-term growth.41
NLP can engender stricter adherence to regulations. One case in point is a pilot launched by the General Services Administration’s (GSA’s) Office of Government-wide Policy (OGP). Solicitations posted on the Federal Business Opportunities website (fbo.gov) must comply with Section 508 of the federal Rehabilitation Act, which requires federal agencies “to make their electronic and information technology accessible to people with disabilities.”42 A Solicitation Review Tool (SRT) piloted by OGP uses NLP to automatically check for compliance with Section 508 with 95 percent accuracy, which allows the GSA to redeploy some of its human resources to other important tasks.43
Another pertinent example is the World Anti-Doping Agency’s (WADA’s) use of AI-based pattern-recognition technology. WADA plans to deploy AI algorithms that can rapidly search through data collected by global anti-doping agencies to identify any breach of conduct by athletes, allowing it to monitor athletes effectively while maximizing its limited resources.44
NLP capabilities have the potential to be used across a wide spectrum of government domains. Below, we explore several examples that illustrate the possibilities.
The US National Library of Medicine’s Lister Hill National Center for Biomedical Communications uses NLP to “de-identify” clinical information in narrative medical reports, protecting patient privacy while preserving clinical knowledge.45
Topic modeling has been used to deconstruct large biomedical datasets for medical surveillance. The National Center for Toxicological Research, for instance, applied topic modeling to 10 years of reports extracted from the FDA’s Adverse Event Reporting System (FAERS) to identify relevant drug groups from more than 60,000 drug–adverse event pairs, i.e., pairs of a drug and an adverse event in which the reaction is attributed to the drug. The objective: to better predict potential adverse drug reactions.46
DARPA’s DEFT program uses NLP to automatically extract operationally relevant information from unstructured text to help defense analysts derive actionable insights from data.47
The Institute for Strategic Dialogue in the United Kingdom developed NLP-based solutions to monitor signs of extremism and radicalization. Analysts used NLP capabilities to examine comments on select public pages and flag instances of violent or aggressive language. Of the total sample of 42,000 individuals identified online, nearly 800 were found to indicate signs of extremism.48
The US Department of Energy’s Oak Ridge National Laboratory is leveraging NLP capabilities to extract data on energy ecosystem components and rank the top clean energy innovation ecosystems in the United States.49 The laboratory used NLP to transform text and numerical data into metrics on clean energy innovation activity and geography, helping investors, researchers, and corporations rapidly identify, quantify, and characterize clean energy innovation.50
Researchers at the Environmental Defense Fund are working to develop a system backed by NLP that can analyze applications for oil and gas permits submitted under the National Environmental Policy Act. The system would provide a deeper analysis of filed applications, thereby helping local regulators and other stakeholders determine whether a project may pose a threat to wildlife, water, or cultural heritage sites.51
Government agencies can build NLP capabilities by following the steps elaborated below.
The first step is to define the problems the agency faces and which technologies, including NLP, might best address them. For example, a police department might want to improve its ability to make predictions about crimes in specific neighborhoods. After mapping the problem to a specific NLP capability, the department would work with a technical team to identify the infrastructure and tools needed, such as a front-end system for visualizing and interpreting data.
It’s important for agencies to create a team at the beginning of the project and define specific responsibilities. For example, agency directors could define specific job roles and titles for software linguists, language engineers, data scientists, engineers, and UI designers. Agencies can also recruit or contract outside data science expertise to build a more robust capability. Analysts and programmers then could build the appropriate algorithms, applications, and computer programs, while technology executives provide a plan for using the system’s outputs. Building a team in the early stages helps facilitate the development and adoption of NLP tools and helps agencies determine whether they need additional infrastructure, such as data warehouses and data pipelines.
Next, organizations should identify the relevant data and determine its accessibility. Some data may be easy to acquire; other data may not be in a machine-readable format, or may be unlabeled or of poor quality. If necessary, agencies can use optical character recognition (OCR) to convert documents into a machine-readable format, then clean the data, create a labeled data set, and perform exploratory analysis.52
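As a simple illustration of the OCR step, the sketch below uses the pytesseract wrapper around the Tesseract engine; the file name is a placeholder, and a real pipeline would add language configuration, error handling, and quality checks.

```python
# A hedged OCR sketch using pytesseract and Pillow
# (pip install pytesseract pillow; Tesseract itself must be installed).
from PIL import Image
import pytesseract

# "scanned_form.png" is a placeholder path, not a real dataset.
image = Image.open("scanned_form.png")
text = pytesseract.image_to_string(image)

# Light cleaning before labeling and exploratory analysis.
lines = [line.strip() for line in text.splitlines() if line.strip()]
print("\n".join(lines))
```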
Initiative leaders should select and develop the NLP models that best suit their needs. The final selection should be based on performance measures such as the model’s precision and its ability to integrate into the broader technology infrastructure. The data science team also can start developing ways to reuse the data and code in the future.
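Evaluation of candidate models usually rests on held-out data; the short sketch below, with invented labels, shows how scikit-learn’s classification_report surfaces per-class precision and recall for that comparison.

```python
# Comparing a model's predictions against held-out ground truth.
# The labels here are invented for illustration.
from sklearn.metrics import classification_report

y_true = ["compliant", "compliant", "noncompliant", "noncompliant"]
y_pred = ["compliant", "noncompliant", "noncompliant", "noncompliant"]

# Reports per-class precision, recall, and F1 to guide model selection.
print(classification_report(y_true, y_pred))
```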
The next step is to amend the NLP model based on user feedback and deploy it after thorough testing. It is important to test the model to see how it integrates with other platforms and applications that could be affected. Additional testing criteria could include creating reports, configuring pipelines, monitoring indices, and creating audit access. Another key element is training end users.
The postdeployment stage typically calls for a robust operations and maintenance process. Data scientists should monitor the performance of NLP models continuously to assess whether their implementation has resulted in significant improvements. The models may have to be improved further based on new data sets and use cases. Government agencies can work with other departments or agencies to identify additional opportunities to build NLP capabilities.
As the digitization of information accelerates, government agencies around the world will increasingly face an onslaught of unstructured text—social media posts, user comments on public websites, emails, narrative reports from government employees, and applications for permits or new products—that new technologies can analyze in ways never before possible. By taking steps now to harness the power of NLP and other machine-learning capabilities, agencies can stay ahead of the curve and derive meaningful insights from their data before being overwhelmed by it.