ChatGPT: A double-edged sword in reducing healthcare disparity

ChatGPT, Bard, and other generative AI systems are rapidly evolving technologies with enormous potential across domains, from natural language processing to image generation. In the right hands, they can be powerful tools for reducing inequity and promoting fairness across sectors. However, they also raise concerns about risks and unintended consequences. Let's dive into the benefits and challenges of using generative AI to address healthcare disparity, along with an unexpected benefit of conventional automation.

Benefits of generative AI

Access to more effective education

Healthcare literacy is a social determinant of health, and it starts with education. Generative AI can create personalized learning experiences for children or adults, adapting to their individual age, background, needs, strengths, and weaknesses. This technology enables the development of more relevant, personalized educational content and resources and partly compensates for the shortage of qualified teachers, thereby increasing access to quality education for marginalized and underprivileged communities. Will it replace teachers? Not at this point.


Removing language barriers

Generative AI can be used to develop advanced translation systems, allowing individuals from different linguistic backgrounds to communicate effectively. This can foster global collaboration, empower marginalized communities, and facilitate access to valuable healthcare resources. Patient materials can easily be personalized for language, age, educational level, culture, and many other factors. In the future, these may even include medical history and insurance coverage.


Enhancing employment opportunities

Income is another social determinant of health. Generative AI can streamline recruitment by identifying and reducing bias and discrimination in hiring practices. It can also predict future workforce needs, enabling governments and organizations to develop targeted skill development programs that help disadvantaged communities access better employment opportunities.


Financial inclusion

Generative AI can analyze complex financial data, enabling the development of innovative financial products and services such as biometric Know Your Customer (KYC) methods tailored to the needs of low-income and unbanked individuals, perhaps in partnership with redevelopment agencies and corporate partners. This technology may help facilitate access to credit, savings, and insurance for marginalized communities, promoting financial inclusion and reducing economic inequality. It can also flag predatory lending and suggest alternatives for those with low financial literacy.


Healthcare improvement

Generative AI can help identify patterns in complex medical data, leading to faster diagnoses and more effective treatment plans for subpopulations and enhancing population health research. By analyzing historical medical records and predicting patient needs and barriers, this technology can surface unexpected insights, improve care outcomes, and possibly reduce disparities.


Distribution of resources

From a public policy viewpoint, AI algorithms can analyze complex historical data to determine the most effective ways to distribute resources equitably. For example, generative AI can help allocate government funding and aid to the communities that need it most, ensuring that resources are used efficiently and have a meaningful impact on reducing inequity.
 

Challenges of generative AI

Data bias that perpetuates inequality

Generative AI relies on data to make predictions and generate content, which becomes a significant liability when that data is biased, unrepresentative, or incomplete. Biased data or flawed assumptions produce biased AI models that mirror existing inequalities and unfairly impact marginalized communities. In 2019, a study examined a major algorithm used by providers and insurers on roughly 200 million people to predict healthcare needs from historical costs. The algorithm concluded that Black patients had lower medical risks because they incurred lower costs. In reality, it failed to account for the fact that lower-income groups receive less care, in part because of reduced access and lower trust in clinicians. Because they access less care, their cost of care is lower, despite the greater need.
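The cost-as-proxy failure described above is easy to reproduce. The sketch below uses entirely synthetic, hypothetical data: two groups with identical underlying medical need, where one group faces access barriers and therefore generates lower costs. A "risk model" that simply ranks patients by historical cost then under-flags the group with greater unmet need.

```python
import random

random.seed(0)

# Hypothetical data: groups A and B have the SAME distribution of true
# medical need, but group B faces access barriers and receives ~40% less
# care, so its realized healthcare costs are systematically lower.
def make_patient(group):
    need = random.gauss(50, 10)             # true medical need
    access = 1.0 if group == "A" else 0.6   # access barrier for group B
    cost = need * access                    # observed cost reflects access, not need
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)] + \
           [make_patient("B") for _ in range(500)]

# Proxy "risk model": flag the 100 costliest patients as high-need,
# mirroring the cost-based algorithm described above.
top_by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:100]
share_b = sum(p["group"] == "B" for p in top_by_cost) / 100

print(f"Group B share of 'high-risk' flags: {share_b:.0%}")
# Despite equal true need, group B is severely underrepresented.
```

The model never sees group membership; the bias enters entirely through the proxy label, which is why such problems survive naive "fairness through blindness" fixes.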

Ironically, this is where conventional rules-based automation like robotic process automation (RPA) offers a clear advantage. RPA is typically built on deterministic, data-driven decisions and actions whose rules are well-defined, stable, consensus-driven, and understood to reflect best practice or policy. The rationale for each decision is clear, and such rules-driven processes are not susceptible to hidden bias (although the rules themselves may encode bias). For example, rules-based triage could in principle remove bias from medical diagnosis and therapy, which can be influenced by clinician preconceptions and experience, and thereby improve access to care.
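The transparency property of rules-based automation can be sketched in a few lines. The thresholds and rule names below are hypothetical illustrations, not clinical guidance; the point is that every decision is deterministic and traces to a named, auditable rule.

```python
# Minimal sketch of deterministic rules-based triage (hypothetical rules).
# Rules are evaluated in a fixed order; the first match wins, and the
# fired rule is returned alongside the decision so the rationale is
# always inspectable.
def triage(vitals):
    """Return (priority, rule_fired) for a dict of vital signs."""
    rules = [
        ("R1: SpO2 below 90%",        lambda v: v["spo2"] < 90,        "emergent"),
        ("R2: systolic BP below 90",  lambda v: v["systolic_bp"] < 90, "emergent"),
        ("R3: temperature above 39C", lambda v: v["temp_c"] > 39.0,    "urgent"),
        ("R4: default",               lambda v: True,                  "routine"),
    ]
    for name, predicate, priority in rules:
        if predicate(vitals):
            return priority, name  # same inputs always yield the same decision

priority, rule = triage({"spo2": 96, "systolic_bp": 85, "temp_c": 37.2})
print(priority, "--", rule)  # emergent -- R2: systolic BP below 90
```

Unlike a learned model, this process can be audited line by line, and any bias must be visible in the rules themselves, which is exactly the trade-off the paragraph above describes.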


Job displacement

While generative AI can create new job opportunities, it can also displace workers. Automation may disproportionately affect low-skilled jobs, increasing income inequality and exacerbating the healthcare divide. If AI is adopted broadly, one estimate suggests that 15 to 30 percent of workers, or roughly 400 million people worldwide, could lose their jobs by 2030. Job reskilling or upskilling of affected populations should therefore be part of any AI implementation.

Digital divide

The benefits of generative AI may not be equitably distributed, as access to technology and digital literacy vary across communities. A widening digital divide could leave certain groups unable to benefit from AI-driven solutions, deepening existing disparities. Social impact efforts may need to design for the lowest common digital denominator. On the other hand, an estimated 91.4% of the world's population owns a smartphone or feature phone today.


Potential misuse

Bad actors could exploit generative AI for malicious purposes, such as creating deepfakes, generating fake news, or automating cyberattacks. The misuse of AI technology could undermine trust in digital systems, disproportionately affecting vulnerable populations. Conversely, even bona fide truths may be dismissed as fabricated. This creates an opportunity for startups to build countermeasures, such as AI-content detectors, to identify and prevent such misuse.

Harness AI for good

Generative AI holds tremendous promise for reducing inequity, but it also poses significant challenges that must be addressed. To harness its potential for good, it is crucial to invest in ethical AI development, unbiased data, digital infrastructure, and education. As generative AI matures from toddler to teenager, it falls to us to shape what it is exposed to, along with its purpose, morals, and duties. If we develop this technology responsibly, we can build a future where generative AI augments, rather than detracts from, human productivity and value.

About Yan Chow


Dr. Yan Chow is a global healthcare industry leader and strategist for Automation Anywhere.

