
Priority 3: Promote Responsible Development and Use of Technology to Support Navigation

Technology does not automatically lead to increased efficiency or improved outcomes. Poorly designed and implemented technological solutions can result in frustration, wasted resources, diminished trust, and even serious harm. This is particularly true in healthcare, where sensitive information is collected and patients and providers make decisions with profound implications. The unchecked use of technology in healthcare may have negative consequences for patients, including increased health disparities and medical errors.

Consensus-based frameworks are needed to ensure that the technologies used for cancer patient navigation are developed and used in ways that serve, protect, and build trust with patients. Thoughtful consideration is particularly important for new approaches, such as those that include artificial intelligence (AI) (see Artificial Intelligence to Support Patient Navigation: Opportunities and Concerns).

Furthermore, ensuring that technology mitigates rather than exacerbates health disparities is essential. Public and private stakeholders in the health technology development space are working to address the responsible development and use of technology, including artificial intelligence (see Activities to Address Responsible Use of Technology).

It would be impossible to develop a single framework able to address all types of technologies and applications. However, all frameworks and guidelines should adhere to a set of core principles (see Core Principles for Navigation Technology Development and Use). First and foremost, health technology—and technology that will be used for navigation in particular—must be developed using a people-first approach that aims to augment rather than replace interactions between patients and their care teams. Equity must be considered throughout development and implementation, as the inequitable distribution of technology in healthcare has in some cases widened, rather than narrowed, gaps in access and outcomes. For example, although the rapid pivot to telemedicine at the beginning of the COVID-19 pandemic was a testament to the power of technology, the benefits were not evenly distributed. Women, people without a high school diploma, people ages 65 and older, and people who identified as Latino, Asian, or Black were less likely to get the care they needed than when all visits were in person. (1,2) At a minimum, care must be taken to ensure that new technologies do not exacerbate disparities. Ideally, technology would be intentionally designed and implemented to close gaps in care and outcomes.

User-centered design is essential for the success of any technology. To be effective, technological solutions must be both useful to and usable by the intended users. (3) Poorly designed tools increase burden on providers, including navigators, contributing to burnout. Electronic health record (EHR)-related burnout has been a significant issue (4) and provides a cautionary tale for health technologies. Thus, it is crucial that technological tools for cancer care and navigation are developed with input from end users, including patients and caregivers, providers, and navigation professionals. These stakeholders should be included from the earliest stages of development through testing and implementation. Patients and caregivers involved in this process should be representative of the target populations with respect to culture, age, educational attainment, and digital literacy. Failure to include those groups may result in low uptake, limiting real-world impact. (5,6,7) Technology should be thoroughly tested in each new setting and population prior to implementation to optimize benefits and reduce the risk of negative unintended consequences.

Technology can only be as good as its source data. Technology developers should use evidence-based information whenever possible. For AI-based tools, use of large, representative datasets for training models is critical (see Artificial Intelligence to Support Patient Navigation: Opportunities and Concerns). Training data must also be evidence based and peer reviewed. Large language models (LLMs) can recapitulate problematic attitudes and beliefs embedded in the texts on which they are trained. One study of AI for healthcare found that four major LLMs promoted racist stereotypes and previously debunked race-related claims, providing inaccurate and dangerous medical advice. (8) Concerns have also been raised that algorithmic bias could negatively influence clinicians from the very beginning of their careers as medical schools begin to use LLMs in the development of clinical training vignettes and other educational resources. (9,10,11) New biases may also be inadvertently introduced over time as models incorporate additional data. (12) While elimination of algorithmic bias may be impossible, transparency about design and data sources can help developers and users troubleshoot and mitigate problems, including bias.

Health technology must protect the privacy of patient information through robust security practices. Health systems’ increasing reliance on technology has created significant vulnerability to cyberattacks. Over the last few years, cyber criminals have turned their focus to hospitals and payors, with 46 attacks on health systems in 2023 alone. (13,14) These attacks not only expose patients’ sensitive data but also bring hospital operations to a halt, causing dangerous disruptions in care. (15)

Interoperability is also essential to allow integration of data across healthcare organizations and platforms. Ongoing evaluation and continual improvement are important to ensure that technology is achieving its goals without creating undue burden. Developers and implementers should establish evaluation metrics and plans and ensure that updates and improvements can be implemented as needed. Use of any technology that is not achieving its goals or is causing harms that outweigh its benefits should be discontinued.
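As a hypothetical illustration of the interoperability principle described above, the sketch below validates a navigation record against a minimal set of common data elements before it is exchanged between systems. The element names are illustrative assumptions, not a published standard.

```python
# Hypothetical sketch: checking a navigation record against a minimal
# common-data-element set before exchange between healthcare systems.
# Field names are illustrative assumptions, not a published standard.

REQUIRED_ELEMENTS = {
    "patient_id": str,
    "navigator_id": str,
    "barrier_identified": str,
    "referral_status": str,
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_ELEMENTS.items():
        if field not in record:
            problems.append(f"missing element: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for element: {field}")
    return problems

record = {
    "patient_id": "P-001",
    "navigator_id": "N-42",
    "barrier_identified": "transportation",
    "referral_status": "open",
}
print(validate_record(record))  # → []
```

Agreeing on a shared minimum set of elements—and rejecting records that do not conform—is one simple way organizations can keep exchanged navigation data usable across platforms.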

Activities to Address Responsible Use of Technology

The Biden White House has initiated a number of actions, including releasing a Blueprint for an AI Bill of Rights in 2022 and issuing two Executive Orders on responsible AI in 2023. (16,17,18,19) Following the Executive Orders, agencies across the government have acted to address the promise and risks of AI. The Administration has also secured voluntary commitments from 16 major technology companies to move together toward the safe, secure, and transparent development of AI technology. (20,21) The U.S. Department of Health and Human Services (HHS) Health IT Alignment Policy was adopted in 2022 to align information technology activities across the Department. (22) An AI risk management framework developed by the National Institute of Standards and Technology will help guide technology development within the government and industry. (23) More broadly, the U.S. Food and Drug Administration’s Digital Health Center of Excellence provides guidance and support for the development of health-related technologies. (24) In the private sector, the Coalition for Health AI is currently developing a consensus-driven framework to improve the quality of healthcare by promoting the adoption of credible, fair, and transparent AI systems. (25)

Core Principles for Navigation Technology Development and Use

  • People-first approach: Augment, rather than replace or diminish, human interactions between patients and their care teams.
  • Equity: Take an equity-first approach that incorporates insights from marginalized communities into every stage of the process and takes care not to exacerbate existing disparities.
  • User-centered design: Address an area of need and reduce burden for patients, caregivers, navigators, and other members of the care team. Ensure that tools are easy to use.
  • Effectiveness and validity: Conduct testing to confirm validity and benefits for the intended settings and populations.
  • Use of high-quality source data: Rely on evidence-based information and trusted sources. Train LLMs and other algorithms using accurate and inclusive datasets.
  • Transparency: Navigation technologies and the process of their development should be transparent and explainable. Human users must be able to troubleshoot and understand the systems they are using.
  • Privacy: Health technologies, including third-party and direct-to-consumer products, must balance utility and the secure exchange of data with the protection of patients’ privacy.
  • Interoperability: Incorporate a minimum set of common data elements and facilitate the secure exchange of health information between appropriate parties like healthcare organizations, navigators, and cancer patients. (26)
  • Ongoing assessment and improvement: Develop a plan for evaluation and improvement that incorporates outcomes data. Enable continual updates and improvements, and discontinue use of ineffective tools.

Recommendation 3.1: Adhere to core principles for responsible development and use of technologies that support cancer patient navigation.

All organizations that are developing and using technology for cancer patient navigation—including healthcare organizations, EHR vendors, third-party developers, and others—should adhere to the core principles for navigation technology development and use to ensure optimal benefit and return on investment (see Core Principles for Navigation Technology Development and Use). For technologies and applications for which there are more detailed guiding frameworks, these should also be referenced and followed. The risks of implementing technology that does not meet these standards are dire. In addition to potentially worsening health outcomes and widening disparities, security breaches, perpetuation of bias, and creation of tools with limited value will diminish patient and provider trust in both new technology and the healthcare system. (27)

Developers have an ethical responsibility to create tools that are aligned with these core principles. Potential short-term profits must not outweigh long-term benefits, trust, and value. Organizations that are purchasing or funding development of technology for cancer patient navigation must take the lead in ensuring that technology is responsibly developed and implemented. Healthcare organizations should establish clear and binding expectations that all products purchased from or developed in partnership with third parties be responsibly developed, implemented, and assessed. Research funding organizations—including, but not limited to, the National Institutes of Health (NIH) and Agency for Healthcare Research and Quality (AHRQ)—should include core principle requirements in the terms of award for any grant that involves development of a technology tool for patient navigation.

Recommendation 3.2: Support research to ensure that technology to support navigation achieves its goals.

Research is needed to explore new types of technology and new applications of existing technology that could be used to support care teams, navigators, patients, and caregivers. Research funding organizations should provide funding for the development and testing of cancer patient navigation technologies with a focus on tools that will address health disparities.

Implementation research is also needed to determine the best ways to implement navigation-supporting technologies in real-world settings. The AHRQ Digital Healthcare Research Program—which aims to produce and disseminate evidence about how the evolving digital healthcare ecosystem can best advance the quality, safety, and effectiveness of healthcare for patients and their families—is well suited to conduct this research. (28) The Panel encourages AHRQ to assess technologies used by cancer patient navigators and care teams as well as those used by patients and caregivers. Research questions could include the impact of navigation technologies on patient outcomes (e.g., time to treatment initiation, successful resolution of social determinants of health–related challenges), disparities in healthcare access within an institution, or navigator capacity and effectiveness. AHRQ should develop and disseminate best practices and lessons learned to guide development, implementation, and evaluation of technology for navigation.
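One of the research questions above—the impact of navigation technologies on time to treatment initiation—can be made concrete with a small sketch. The data and field names below are hypothetical, intended only to show how such an outcome metric might be computed from navigation records.

```python
# Hypothetical sketch: computing median time to treatment initiation,
# one of the patient outcome metrics named in the text.
# Records and field names are invented for illustration.
from datetime import date
from statistics import median

records = [
    {"diagnosed": date(2024, 1, 3),  "treatment_started": date(2024, 2, 1)},
    {"diagnosed": date(2024, 1, 10), "treatment_started": date(2024, 1, 31)},
    {"diagnosed": date(2024, 2, 5),  "treatment_started": date(2024, 3, 20)},
]

def median_days_to_treatment(records: list[dict]) -> float:
    """Median number of days from diagnosis to treatment initiation."""
    waits = [(r["treatment_started"] - r["diagnosed"]).days for r in records]
    return median(waits)

print(median_days_to_treatment(records))  # → 29
```

Comparing such a metric before and after a navigation technology is deployed—or between navigated and non-navigated patients—is one way evaluators could assess whether a tool is achieving its goals.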

Recommendation 3.3: Incorporate technology knowledge and skills into patient navigator training and core competencies.

Technology-based tools have potential to increase the efficiency and effectiveness of patient navigators. However, realizing this potential depends on navigators understanding and feeling comfortable with these tools. As noted in the core principles, navigators should be included in the design of any technology intended for their use. In addition, navigators may need training to use these tools effectively once they are implemented, as navigators come from a range of backgrounds and have varying levels of experience with technology.

Healthcare organizations must provide training for their navigators on any technology tool that is implemented within their system. This could include up-front training as well as a mechanism for continued support as needed during rollout.

Several navigator training programs have been developed to provide navigators with the knowledge and skills they need to carry out their jobs. Navigator training programs should incorporate technology-related learning objectives so that navigators understand how to use technology effectively and responsibly. As navigation technologies become more commonplace, digital skills should be included among oncology navigator core competencies such as those developed by the National Navigation Roundtable and Professional Oncology Navigation Task Force. (29,30)

Selected References

  1. Haimi M. The tragic paradoxical effect of telemedicine on healthcare disparities—a time for redemption: a narrative review. BMC Med Inform Decis Mak. 2023;23(1):95. [Available Online]

  2. Yao R, Zhang W, Evans R, et al. Inequities in health care services caused by the adoption of digital health technologies: scoping review. J Med Internet Res. 2022;24(3). [Available Online]

  3. Karsh B-T. Beyond usability: designing effective technology implementation systems to promote patient safety. Qual Saf Health Care. 2004;13(5):388-94. [Available Online]

  4. Budd J. Burnout related to electronic health record use in primary care. J Prim Care Community Health. 2023;14:21501319231166921. [Available Online]

  5. Guckert M, Milanovic K, Hannig J, et al. The disruption of trust in the digital transformation leading to Health 4.0. Front Digit Health. 2022;4. [Available Online]

  6. Strawley C, Richwine C. Individuals’ access and use of patient portals and smartphone health apps, 2022. Washington (DC): Office of the National Coordinator for Health Information Technology; 2023. [Available Online]

  7. Dabbs ADV, Myers BA, McCurry KR, et al. User-centered design and interactive health technologies for patients. Comput Inform Nurs. 2009;27(3):175-83. [Available Online]

  8. Omiye J, Lester J, Spichak S, et al. Large language models propagate race-based medicine. NPJ Digit Med. 2023;6(196). [Available Online]

  9. Hastings J. Preventing harm from non-conscious bias in medical generative AI. Lancet Digit Health. 2024;6(1):e2-e3. [Available Online]

  10. Holmgren AJ, McBride S, Gale B, Mossburg S. Technology as a tool for improving patient safety [Internet]. Rockville (MD): Agency for Healthcare Research and Quality; 2023 [cited 2023 Mar 29]. [Available Online]

  11. Zack T, Lehman E, Suzgun M, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit Health. 2024;6(1):e12-e22. [Available Online]

  12. Clusmann J, Kolbinger FR, Muti HS, et al. The future landscape of large language models in medicine. Commun Med. 2023;3(1):141. [Available Online]

  13. Associated Press. Cyberattacks on hospitals are likely to increase, putting lives at risk, experts warn [Internet]. Washington (DC): U.S. News and World Report; 2024 Feb 14 [cited 2024 Sep 10]. [Available Online]

  14. Committee on Energy and Commerce. What we learned: Change Healthcare cyber attack [Internet]. Washington (DC): Committee on Energy and Commerce; 2024 [cited 2024 Jul 12]. [Available Online]

  15. U.S. Department of Health and Human Services. Healthcare sector cybersecurity: introduction to the strategy of the U.S. Department of Health and Human Services. Washington (DC): HHS; 2023. [Available Online]

  16. The White House. President Biden issues Executive Order on safe, secure, and trustworthy artificial intelligence [Fact Sheet]. Washington (DC): The White House; 2023 Oct 30. [Available Online]

  17. The White House. Biden-Harris Administration announces key AI actions following President Biden's landmark Executive Order [Fact Sheet]. Washington (DC): The White House; 2024 Jan 29. [Available Online]

  18. The White House. Executive Order on further advancing racial equity and support for underserved communities through the federal government [Internet]. Washington (DC): The White House; 2023 Feb 16 [cited 2024 Mar 29]. [Available Online]

  19. White House Office of Science and Technology Policy. Blueprint for an AI bill of rights: making automated systems work for the American people. Washington (DC): OSTP; 2022 Oct. [Available Online]

  20. The White House. Biden-Harris Administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI [Fact Sheet]. Washington (DC): The White House; 2023 Jul 21. [Available Online]

  21. The White House. Biden-Harris Administration announces new AI actions and receives additional major voluntary commitment on AI [Fact Sheet]. Washington (DC): The White House; 2024 Jul 26. [Available Online]

  22. Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology. HHS Health IT Alignment Policy [Internet]. Washington (DC): ASTP/ONC; n.d. [updated 2024 Jul 5; cited 2024 Sep 8]. [Available Online]

  23. National Institute of Standards and Technology. AI risk management framework [Internet]. Gaithersburg (MD): NIST; 2023 [cited 2024 Mar 29]. [Available Online]

  24. U.S. Food and Drug Administration. Digital Health Center of Excellence [Internet]. Silver Spring (MD): FDA; 2024 [cited 2024 Mar 8]. [Available Online]

  25. Coalition for Health AI. Blueprint for trustworthy AI implementation guidance and assurance for healthcare (version 1.0). McLean (VA): The MITRE Corporation; 2023 Apr 4. [Available Online]

  26. National Institutes of Health. Common Data Elements (CDEs) Program [Internet]. Bethesda (MD): NIH; 2024 [updated 2024 Mar 5; cited 2024 Mar 8]. [Available Online]

  27. Kanter GP, Packel EA. Health care privacy risks of AI chatbots. JAMA. 2023;330(4):311-2. [Available Online]

  28. Agency for Healthcare Research and Quality. AHRQ's Digital Healthcare Research Program [Internet]. Rockville (MD): AHRQ; 2021 [cited 2024 Aug 21]. [Available Online]

  29. The Professional Oncology Navigation Task Force. Oncology navigation standards of professional practice. Clin J Oncol Nurs. 2022;26(3):E14-E25. [Available Online]

  30. American Cancer Society. Patient navigator training competency domains [Internet]. Atlanta (GA): ACS; n.d. [cited 2024 Aug 26]. [Available Online]
