Artificial intelligence (AI) models can analyze and interpret large health data sets at scale, making them potentially transformative for public health assessment. However, enthusiasm for the potential of AI technology is accompanied by several concerns:
- Data quality, quantity and transparency of AI models, evidence of clinical utility, and regulatory challenges.
- Ethical data use and the impact of equity and bias on outputs from AI models.
- Worsening of health inequities (especially in rural, underprivileged communities and in the developing world), poor model interpretability, structural challenges including data sharing, insufficient infrastructure for integration, lack of public health workforce training in AI, and other ethical and privacy concerns.
Our Advice
Critical Insight
AI and machine learning algorithms can be trained to reduce or eliminate bias by promoting data diversity and transparency, helping to address health inequities. Personalized data science training can also ensure cross-pollination of traditional public health practice with cutting-edge AI technologies.
Impact and Result
Improvements across the public health and healthcare ecosystem will be realized by:
- Investing in data management technologies and resources.
- Using preventive safeguards and notifications in all data systems.
- Offering data and compliance training to all providers.
- Using tools to quantify and assess data quality.
- Ensuring that AI tools are developed using high-quality data that is representative of the population (a minimal representativeness check is sketched after this list).
- Ensuring that AI tools are developed with a focus on reducing bias.
- Increasing transparency, improving data sharing, and ensuring that AI tools are developed with privacy and ethical considerations in mind.
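As a concrete illustration of the data quality and representativeness points above, here is a minimal sketch (in Python) of a representativeness check that compares the demographic mix of a training data set against a population benchmark. The group names, shares, and the 50% threshold are illustrative assumptions, not figures from this research.

```python
# Minimal sketch: flag demographic groups that are under-represented in the
# training data relative to a population benchmark. All figures are placeholders.

population_share = {"urban": 0.55, "rural": 0.30, "remote": 0.15}  # assumed census benchmark
training_share = {"urban": 0.78, "rural": 0.18, "remote": 0.04}    # assumed training-data mix

THRESHOLD = 0.5  # flag groups captured at less than half their population share

for group, expected in population_share.items():
    observed = training_share.get(group, 0.0)
    ratio = observed / expected if expected else 0.0
    status = "UNDER-REPRESENTED" if ratio < THRESHOLD else "ok"
    print(f"{group:<8} population={expected:.0%} training={observed:.0%} ratio={ratio:.2f} {status}")
```

A check like this, run before model training, gives a simple quantitative trigger for collecting more data or reweighting before an AI tool is built.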
Getting to Automated Surveillance and Location Intelligence in Public Health Practice
An AI readiness assessment framework and seven-step guide to getting started with AI.
Analyst Perspective
Digital technologies and artificial intelligence (AI), particularly machine learning (ML), are transforming medicine, medical research, and public health.
The use of AI technologies for health has already contributed to advances in fields like drug discovery, genomics, radiology, pathology, and prevention. However, challenges and obstacles to the use of AI for health raise ethical, legal, commercial, and social concerns. Although many of these concerns are not unique to AI, AI poses additional, novel ethical challenges (e.g., trustworthiness and bias) that extend beyond the purview of traditional regulators and participants in healthcare systems.
Organizationally, strategic planning built on specific, measurable, achievable, relevant, and time-bound (SMART) goals, together with clearly articulated governance, can produce ethically sound tools and applications that sustain the widespread use of AI to improve human health and quality of life while ensuring equitable access to such technologies and care. This approach would also mitigate or eliminate many risks and poor practices.
IT leaders, in alignment with public health decision makers, program managers, and policymakers, need to act fast when an outbreak occurs. Automated alerts and reporting tools can keep officials informed as emergent situations evolve. This research provides the requisite background, addresses challenges and obstacles, and offers an AI readiness assessment framework and seven-step approach to getting started with AI.
Neal Rosenblatt
Principal Research Director
Info-Tech Research Group
“Our future is a race between the growing power of technology and the wisdom with which we use it.”
– Stephen Hawking
Executive Summary
Your Challenge

Artificial intelligence (AI) models can analyze and interpret large health data sets at scale, making them transformative for public health and epidemiologic surveillance. While AI technology has exciting potential, there are relevant concerns about data quality, quantity, explainability and transparency of AI models, evidence of clinical utility, regulatory challenges, ethical data use, and the impact of equity and bias on AI model outputs.

Common Obstacles

There are many risks and limitations to using AI for public health. These include worsening health inequities (especially in rural, underprivileged communities and in the developing world), poor model interpretability, structural challenges including data sharing, insufficient infrastructure for integration, lack of public health workforce training in AI, and other ethical and privacy concerns.

Info-Tech’s Approach

Several ways to overcome AI challenges are outlined in this blueprint, including an AI readiness assessment framework and a seven-step approach to getting started with AI.
Info-Tech Insight
AI doesn’t bring us full intelligence but a critical component of it: prediction, which generates the information upon which decisions can be made. AI therefore augments and elevates the decision-making process.
Getting to automated surveillance and location intelligence
Use AI/ML tools and technologies to improve the core functions of health status assessment and preparedness, and to enhance capacity and capability through automated surveillance and location intelligence.
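To make the location intelligence idea concrete, the following is a minimal sketch that computes per-100,000 incidence rates by region and flags regions above a threshold as potential hotspots. The region names, counts, populations, and threshold are hypothetical placeholders, not data from this research.

```python
# Minimal sketch of location intelligence for surveillance: compute per-region
# incidence rates from case counts and population, then flag potential hotspots.

regions = {
    # region: (weekly_cases, population) - synthetic example values
    "North District":   (84,  52_000),
    "Central District": (310, 210_000),
    "East District":    (19,  8_500),
}

RATE_THRESHOLD = 150  # cases per 100,000 per week (assumed alerting threshold)

for name, (cases, population) in regions.items():
    rate = cases / population * 100_000
    flag = "HOTSPOT" if rate > RATE_THRESHOLD else ""
    print(f"{name:<17} {rate:6.1f} per 100k {flag}")
```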
Public health surveillance
Definition
Public health surveillance is the continuous, systematic collection, analysis, and interpretation of health-related data. It is essential for planning, implementing, and evaluating public health practices. Public health surveillance helps track and respond to emerging health issues, identify changes in disease patterns, and target resources and interventions. It can be conducted at different levels (local, state, provincial, national, or international) and use different methods (passive, active, or sentinel). It serves as the cornerstone of public health practice and a key element of public health core functions and essential services.
Source: “Introduction to Public Health Surveillance,” CDC
“You can’t manage what you can’t measure.”
– Peter Drucker in Management Challenges for the 21st Century
“In public health, we can’t do anything without surveillance. That’s where public health begins.”
– David Satcher, MD, PhD, quoted in “Surveillance Strategy Report – Moving Public Health Surveillance Ahead,” CDC, 2023
Automated surveillance in public health practice
Automated surveillance involves the systematic collection, analysis, and interpretation of streaming health-related data using automated tools and technologies for nowcasting, forecasting (prediction), and scenario modeling of upstream social conditions and downstream health outcomes. This helps assess overall population health status at varying geographic and temporal levels.
Over the past decade, public health agencies and other groups have invested considerable resources in automated surveillance systems. These systems generally follow syndromes in prediagnostic data drawn from sources such as emergency department visits.
IT leaders, in alignment with public health decision makers, program managers, and policymakers, need to act fast when an outbreak occurs. Automated alerts and reporting tools can keep officials informed as emergent situations evolve in real time.
Sources: “Artificial Intelligence,” CDC, 2022; “Surveillance Strategy Report,” CDC, 2023
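As a simplified illustration of how an automated alert might work, the sketch below assumes a daily stream of syndromic counts (e.g., emergency department visits for one syndrome) and flags days that exceed an exponentially weighted moving average (EWMA) baseline. The counts, smoothing factor, and threshold are illustrative assumptions, not values from the sources cited above.

```python
# Minimal sketch of an automated syndromic-surveillance alert: flag days whose
# visit count exceeds an EWMA baseline by a fixed margin. All values are synthetic.

daily_counts = [42, 38, 45, 40, 44, 41, 39, 43, 58, 71, 90]  # example daily ED visits

ALPHA = 0.3        # EWMA smoothing factor (assumption)
THRESHOLD = 1.25   # alert if today's count exceeds 125% of the baseline (assumption)

baseline = daily_counts[0]
for day, count in enumerate(daily_counts[1:], start=1):
    if count > THRESHOLD * baseline:
        print(f"Day {day}: ALERT - {count} visits vs. expected ~{baseline:.0f}")
    baseline = ALPHA * count + (1 - ALPHA) * baseline  # update the smoothed baseline
```

In practice, a system like this would feed automated notifications and dashboards so officials stay informed as an emergent situation evolves.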
Using artificial intelligence in public health surveillance
Artificial intelligence (AI) has been used in public health surveillance to improve the efficiency and effectiveness of processes across an expanded public health landscape.
AI techniques are commonly used to track, forecast trends, provide early warnings, and model and measure public health responses. AI predictive applications can support public health surveillance, research, and, ultimately, decision making.
AI solutions are built upon one or more AI methods. Recognizing the AI methods that a solution uses is important, as each has different implications.
By selecting the appropriate method(s), AI can improve the speed and accuracy of diagnosis and screening for diseases, assist with clinical care, strengthen health research and drug development, and support diverse public health interventions such as disease surveillance, outbreak response, and health systems management.
Sources: “AI Strategy,” U.S. HHS, 2021; WHO, 2021; “Artificial Intelligence and Machine Learning,” CDC, 2022
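As one hedged example of the forecasting role described above, the sketch below fits a simple Poisson regression (via scikit-learn) to recent weekly case counts and projects the next three weeks. The data and model choice are illustrative assumptions; real surveillance forecasting would account for seasonality, reporting delays, and uncertainty intervals.

```python
# Minimal sketch of short-range case-count forecasting with a Poisson regression
# on a weekly time index. Counts are synthetic example data.

import numpy as np
from sklearn.linear_model import PoissonRegressor

weekly_cases = np.array([120, 135, 150, 170, 198, 230, 260, 305])  # synthetic counts
weeks = np.arange(len(weekly_cases)).reshape(-1, 1)                # time index feature

model = PoissonRegressor(alpha=1e-3)   # lightly regularized Poisson GLM
model.fit(weeks, weekly_cases)

future_weeks = np.arange(len(weekly_cases), len(weekly_cases) + 3).reshape(-1, 1)
forecast = model.predict(future_weeks)  # expected counts for the next three weeks
for w, f in zip(future_weeks.ravel(), forecast):
    print(f"Week {w}: forecast ~ {f:.0f} cases")
```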
Ethical considerations when using AI in public health practice
There are several concerns regarding the use of AI:
Bias
One primary concern is that AI may be biased, which may result in the unfair and unequal treatment of different groups of people.
Download Info-Tech’s Mitigate Machine Bias blueprint
Privacy & Security
Other concerns include the potential breach of privacy and security, and AI being used in ways that could harm individuals or patients.
Tools Development
To address these issues, it is crucial to develop AI tools with a strong emphasis on reducing bias and improving transparency. It’s also important to ensure AI tools are developed with privacy and ethical considerations in mind.
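One way to make the bias and transparency goals measurable is a group fairness check on model outputs. The sketch below computes selection rates by group and reports the demographic-parity gap; the group labels and decisions are synthetic placeholders, not real model output.

```python
# Minimal sketch of a group fairness check: compare the rate of a favorable
# decision (e.g., prioritized for outreach) across groups. Data is synthetic.

decisions = [  # (group, model_decision) pairs; 1 = prioritized for intervention
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

rates = {}
for group, decision in decisions:
    totals = rates.setdefault(group, [0, 0])  # [favorable, total]
    totals[0] += decision
    totals[1] += 1

selection_rates = {g: fav / tot for g, (fav, tot) in rates.items()}
gap = max(selection_rates.values()) - min(selection_rates.values())

for g, r in selection_rates.items():
    print(f"{g}: selection rate {r:.0%}")
print(f"Demographic-parity gap: {gap:.0%}")  # large gaps warrant investigation
```

A recurring check like this, reported alongside model performance, supports the transparency that trustworthy AI requires.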
What is trustworthy AI?
“Trustworthy AI refers to the design, development, acquisition, and use of AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, liberties, and American values, consistent with applicable laws.”
– U.S. Department of Health and Human Services
Challenges and obstacles using AI in public health practice
Data access, quality, equity, bias, ethical use, workforce training, and regulatory challenges
AI models can analyze and interpret large health data sets at scale, making them transformative for public health and epidemiologic surveillance. However, while AI technology holds exciting potential, there are relevant concerns about data quality, quantity, explainability and transparency of AI models, evidence of clinical utility, regulatory challenges, ethical data use, and the impact of equity and bias on AI model outputs.
There are many further risks and limitations to using AI for public health. These include worsening health inequities (especially in rural, underprivileged communities and in the developing world), poor model interpretability, structural challenges including data sharing, insufficient infrastructure for integration, lack of public health workforce training in AI, and other ethical and privacy concerns.