2024 Signature Series Public Congressional Briefing

October 2, 2024
12:00 pm - 1:30 pm

Hart Senate Office Building

Navigating AI in Health Care Policy: How Are Standards Evolving?

Event Description

This briefing provided a foundational conversation for congressional staff and federal policymakers on how leaders in health care AI policy are conceiving and creating standards for responsible artificial intelligence (AI) in health care. Speakers offered a range of perspectives on the frameworks under development to ensure patient safety, foster trust, and promote responsible AI. The briefing also addressed the challenges of establishing effective standards that foster trust and encourage innovation amid fast-paced technological change.

By the end of the event, attendees gained a deeper understanding of the strategic principles and tradeoffs involved in developing and implementing AI standards, frameworks, and guidelines in health care. They also learned about specific resources and models currently available, how those resources are alike and different, and the tradeoffs among different approaches to evaluating AI models so that deployment aligns with ethical standards and the public interest.

Learning Goals

  • Understand the role of standards, frameworks, and guidelines for AI health care applications, including how they help ensure patient safety, foster trust, and promote responsible innovation.
  • Understand the available resources and the groups working on standards, the different approaches that guide them, and how they balance the risks and opportunities of AI.
  • Identify the key challenges in establishing effective AI standards, the tradeoffs of different approaches, and the complexity of keeping pace with rapid technological advancements.
  • Explore opportunities to build on existing frameworks and improve current approaches to AI standards, with a focus on enhancing interoperability, addressing biases, and supporting privacy protections.
  • Gain insights into the strategic principles necessary for developing and implementing AI standards, including defining success, balancing economic incentives with public trust, and ensuring best practices for version control and traceability.

Speakers

René Quashie, J.D.

Vice President, Digital Health, Consumer Technology Association
René Quashie, Esq., is the Vice President, Digital Health, at the Consumer Technology Association (CTA). He is CTA’s first-ever Vice President of Policy & Regulatory Affairs of Digital Health and provides guidance on key technical, policy, and regulatory issues relating to digital health products, services, software, and apps. CTA supports the health technology industry through advocacy, education, research, standards work, policy initiatives, and more. Quashie works closely with the Food and Drug Administration, the Centers for Medicare and Medicaid Services, the Office of the National Coordinator, and other related government agencies. He previously was in private law practice at several national firms and earned his law degree from George Washington University.

Laura Adams, M.S.

Senior Advisor, National Academy of Medicine (NAM)
Laura Adams, Senior Advisor at the National Academy of Medicine (NAM), provides strategic leadership for the Science and Technology portfolio of the Leadership Consortium and leads the NAM’s Artificial Intelligence Code of Conduct (AICC) national initiative. Her expertise is in AI, digital health, and human-centered care. She is a member of the international AI Expert Panel for the Organisation for Economic Co-operation and Development (OECD) and chairs the Global Opportunities Group for the AI Regulatory Science and Innovation Network in the UK. Laura serves on the boards of Boston-based T2 Biosystems and TMA Precision Health, and is a strategic advisor for Inflammatix, a Burlingame, CA-based biotech company specializing in transcriptomics/host immune response diagnostics. She chaired the Institute of Medicine’s (IOM) Planning Committee for the “Digital Infrastructure for the Learning Healthcare System” initiative. She is recognized as a strategic leader of large-scale, multi-sectoral initiatives with deep experience in and understanding of the complex U.S. health care industry, including how its components function and interact. Laura was among the first to bring the science of healthcare quality improvement to the Middle East, in conjunction with Donald Berwick, MD, and the Harvard Institute for Social and Economic Policy in the Middle East. She served as Institute for Healthcare Improvement (IHI) faculty at the inaugural IHI Middle East Forum on Quality Improvement in Healthcare in Doha, Qatar. Prior to her work at the NAM, Laura was founding President and CEO of the Rhode Island Quality Institute (RIQI), Rhode Island’s statewide health information exchange. Under her leadership, RIQI won the National Council for Community Behavioral Health Excellence Award for Impact for serving those with behavioral health and substance abuse challenges. RIQI was also the recipient of the national Healthcare Informatics Innovation Award and was a top finalist for the New England Business Innovation Award for impact on the opioid crisis. Laura has delivered keynotes in nearly every state in the union and in 13 foreign countries.

Mark Sendak, M.D., MPP

Population Health and Data Science Lead, Duke Institute for Health Innovation (DIHI), Co-Lead, Health AI Partnership
Mark Sendak, MD, MPP, is the Population Health & Data Science Lead at the Duke Institute for Health Innovation (DIHI), where he leads interdisciplinary teams of data scientists, clinicians, and machine learning experts to build technologies that solve real clinical problems. Together with his team, he has built tools to transform chronic disease management within an Accountable Care Organization and the detection and management of inpatient deterioration within hospitals. He has integrated dozens of data-driven technologies into clinical operations and is a co-inventor of software to scale machine learning applications and real-world evidence generation across health systems. He leads the DIHI Clinical Research & Innovation scholarship, which equips medical students with the business and data science skills required to lead health care innovations. He co-leads the Health AI Partnership, a learning collaborative to advance the safe, effective, and equitable use of AI software within healthcare delivery organizations. He and his team have published in top technical, clinical, and management venues. Their work has been featured in The Wall Street Journal, MIT Technology Review, Wired, and STAT News. He has served as an expert advisor to national organizations, including the American Medical Association, AARP, the American Board of Family Medicine, the White House Office of Science and Technology Policy, and the National Academy of Medicine. In 2024, he was nominated by the Government Accountability Office to serve as a member of the Health Information Technology Advisory Committee (HITAC). Also in 2024, he testified before the US Senate Finance Committee during a hearing titled “Artificial Intelligence in Health Care: Promise and Pitfalls.” He serves on the board of Machine Learning for Healthcare (MLHC), the premier computer science conference exclusively dedicated to healthcare. He was named a STAT Wunderkind in 2020 for his efforts to responsibly build and integrate AI into clinical practice. He obtained his MD and Master of Public Policy at Duke University as a Dean’s Tuition Scholar and his Bachelor of Science in Mathematics from UCLA, where he was awarded the Charles E. Young Humanitarian Award, the top honor for community service.

Brian Anderson, M.D.

CEO and Founder, Coalition for Health AI (CHAI)
Dr. Brian Anderson is the Chief Executive Officer of the Coalition for Health AI (CHAI), a non-profit coalition he co-founded in 2021. CHAI is focused on developing a set of consensus-driven guidelines and best practices for Responsible AI in Health, as well as supporting the ability to independently test and validate AI for safety and effectiveness. Prior to leading CHAI, Dr. Anderson was the Chief Digital Health Physician at MITRE, where he led research and development efforts across major strategic initiatives in digital health alongside industry partners and the U.S. government. He was responsible for leading much of MITRE’s work during the COVID-19 pandemic, working closely with the White House COVID Task Force as well as Operation Warp Speed. He also led MITRE’s largest R&D effort in oncology, focusing on the initial development of mCODE and the use of AI in more efficient and inclusive clinical trial design. Dr. Anderson is an internationally recognized author and expert in digital health, and is regularly engaged as a speaker on digital health innovation, health standards development, clinical decision support systems, and interoperability. Prior to MITRE, Anderson led the Informatics and Network Medicine Division at athenahealth. He has also served on several national and international health information technology committees in partnership with the Office of the National Coordinator (ONC), the National Institutes of Health (NIH), and the Organisation for Economic Co-operation and Development (OECD).

Emma Beavins

Staff Writer, Fierce Healthcare
Emma Beavins is a staff writer at Fierce Healthcare. She has covered healthcare technology and policy for two years, previously at Inside Health Policy. She covers regulation and legislation related to healthcare technology, including telehealth, remote monitoring, artificial intelligence, health data privacy, and at-home health care enabled by health technology. She aims to keep readers informed on how Congress and the administration are thinking about health technology and virtual care in the wake of the COVID-19 pandemic.


Thank you to our 2024 Signature Series Sponsors!


Transcript

Claire Sheahan:

Well, welcome, everybody. So great to see everyone here. My name’s Claire Sheahan. I’m the president and CEO of the Alliance for Health Policy. We are so grateful that you all took time out of your day to join us for lunch, and to learn a little bit about AI and how standards are evolving, how we’re thinking about frameworks, how we’re thinking about best practices in AI. A couple of different pieces of housekeeping before we get started. One is that we have done a lot of programming on AI this year, and there are a number of different resources available on the Alliance website. So, I would really encourage you to dig in there. We are, many of you may have seen, releasing a report today, so that’s really exciting.

Also, there was a webinar that we just did several weeks ago on the topic of AI that was really a 101, and I would really recommend, if you’re interested in this topic, that you go ahead and watch that. It was saved. We had over 200 people view that, but we would love to share it with many, many more. So just so you know, this is one of many resources we have available, and we encourage you to use them all. So, for those of you who are new to the Alliance, I mean, many of you may have heard of us, but for those of you who are new, we are a 30-plus-year-old organization. Our mission is pretty simple. It’s to educate policymakers on complex health policy topics.

Full stop. That’s where it ends. It ends with education. It’s not because we have an angle. It’s not because we have an ask. It’s not because we’re pushing a piece of legislation. Our mission is really educational, and that’s why we are a nonpartisan organization. We are stakeholder neutral, and we have been for many, many years. For those of you who don’t know about our history, in the 30 years we’ve been around, we were founded by Senators Rockefeller and Danforth. So, I always say one of the unusual things about the Alliance is that it was founded of, for, and by the Hill, and it has really been a resource to our policymaker community, but particularly our congressional staff, for years. We’re happy to continue doing that on emerging topics like AI.

There’s a couple of other things that I wanted to cover. I think we talked a little bit about some of the resources that are available. This event is actually the final event in our signature series this year on the topic of AI. Next year’s topic will be aging, so another big topic with a number of policy implications. So, we’re looking forward to doing that in 2025. But the outcome… Today’s congressional briefing is the outcome of a process that started in January, where we did insight development. Then we brought folks together for some strategy and workshopping. Out of that workshop came some recommendations for our summit, also recommendations for our educational content, which includes this briefing and the webinar that I talked about as well.

So, it really is all part of a year-long process, and we know that there are a lot of different convenings that you can attend. We’re really grateful that you’ve joined us today for this one. Our hope is that today’s event does a couple of different things. We want for you to walk away with the idea that we’ve highlighted a couple of key thinkers in the emerging area of health AI policy and health AI best practices. We want to encourage you to take several different takes on the challenges and opportunities of how we envision AI frameworks, and the points of view and evaluation opportunities. Finally, we want to provide you with resources to learn more.

So, I would say that these are our objectives today, and I hope that you walk out of the room with each of these fulfilled. One of the main things that keeps the Alliance going is our sponsors. We are of, for, and by the Hill, but it is our sponsors who keep our work going, and we are very grateful to have… In many cases, these folks have been with us for 10-plus years, or maybe they’re newer to the Alliance as well, but we are very grateful to have their support. You can see it’s various stakeholders who support this work, and all of them care deeply about policy that’s based in fact, policy that’s not partisan, and that really represents a number of different points of view.

So with that, I wanted to introduce our esteemed speakers. If folks give me a minute here, I am dealing with a little bit of tech challenges. So first and foremost, you can see Emma Beavins, this wonderful face. Beavins, right? Sorry, I have a silent H in my name, so feel free to pronounce it. So, we came across Emma’s work as a staff writer for Fierce Healthcare. She has been one of the leading voices in media really covering this emerging part of how healthcare AI is working. She covers regulation and legislation related to healthcare technology, including telehealth, remote monitoring, AI, health data privacy, and at-home healthcare enabled by technology, so lots of cool different iterations of that intersection of policy and technology.

We’re really grateful to have Dr. Brian Anderson with us today. Dr. Anderson is the CEO of the Coalition for Health AI, known as CHAI. It’s a non-profit coalition that he co-founded in 2021, focused on developing a set of consensus-driven guidelines and best practices for responsible AI in health, as well as supporting the ability to independently test and validate AI for safety and effectiveness. Prior to leading CHAI, Dr. Anderson was the chief digital health physician at MITRE, where he led research and development efforts across major strategic initiatives in digital health alongside industry partners and the U.S. government. He has also served on several national and international health information technology committees in partnership with the ONC, the National Institutes of Health, NIH, and the OECD. A little bit of alphabet soup there, but lots of range in terms of international, national organizations.

We’re really lucky to have René Quashie here. Rene is the vice president of Digital Health and the first-ever Vice President of Policy and Regulatory Affairs of Digital Health at the Consumer Technology Association, CTA. I’m from Chicago, so that means also other things to me, but Rene’s… Sorry, guys. I don’t have a podium, so I’m bending over. Rene provides guidance on key technical, policy, and regulatory issues relating to digital health products, services, software, and apps. CTA supports the health technology industry through advocacy, education, research, standards work, policy initiatives, and more.

He works closely with the FDA, with CMS, the ONC, and other related government agencies. Previously, he was in private law practice with several national firms, and earned his law degree from GW. We’d also like to thank Laura Adams, MS, who joins us from the National Academy of Medicine. She’s a senior advisor for the National Academy of Medicine, providing strategic leadership for the science and technology portfolio of the Leadership Consortium and leading the NAM’s AI Code of Conduct, the AICC national initiative. So, her expertise in AI, digital health, and human-centered care is something that we look forward to hearing more on. She’s a member of the International AI Expert Panel for the Organisation for Economic Co-operation and Development, OECD, and chairs the Global Opportunities Group for the AI Regulatory Science and Innovation Network in the UK.

Last but certainly not least, we have Mark Sendak, MD, MPP, joining us today. Mark is the population health and data science lead at the Duke Institute for Health Innovation, where he leads interdisciplinary teams of data scientists, clinicians, and machine learning experts to build technologies that solve real clinical problems. Together with his team, he has built tools to transform chronic disease management within an ACO, Accountable Care Organization, and the detection and management of inpatient deterioration within hospitals. He co-leads the Health AI Partnership, a learning collaborative to advance the safe, effective, and equitable use of AI software within healthcare delivery organizations.

He and his team have published in top technical, clinical, and management venues, and their work has been featured in the Wall Street Journal, MIT Technology Review, Wired, and STAT News. I just want to say I have spent the last eight months really trying to engage in the community of AI and health policy experts, and I am so excited about this panel today. I think it’s one of the first times these folks have come together to be able to share multiple different perspectives on how we are looking ahead to see what kinds of frameworks and best practices are most relevant within AI. I am very eager to hear this conversation, and very excited and honored that these folks joined us. So with that, I’m going to turn it over to Emma, and ask you to take it away.

Emma Beavins:

I am pushing. Oh yeah, okay. There’s the button. Hi. As Claire said, I’m Emma Beavins. I’m a staff writer at Fierce Healthcare, and I, as well, am super excited to be on a panel with all of these experts. This is truly a dream to have so much knowledge sitting right here at this table. So, I would love to hand it over to Laura to provide your remarks and what the National Academy of Medicine is doing on AI.

Laura Adams:

Can I press this, and have it hold, or is it on? Perfect. All right, well, if the clicker works, then we’re good, because it’s always at a technology conference or a panel where we’re talking that it’s very embarrassing when the tech doesn’t work. So, thank you so much. It’s such a pleasure to be here with you today. I couldn’t be more delighted that it looks like that outside, because all of your attention, of course, will be in here and not out on the blue sky. This is an incredibly important topic, and I definitely agree that this is an A-team panel in terms of the people that you’ll hear from today.

I think it’s an all teach, all learn moment, and I intend to learn right alongside the rest of you. So, I’d like to talk with you about the National Academy of Medicine’s AI code of conduct for health, healthcare, and biomedical science. We were approached by some of our members at the National Academy, and the academy’s mission is to improve health for all by advancing science, accelerating health equity, and providing independent, authoritative, and trusted advice nationally and globally. We were formed by the federal government to advise the nation and the government, but we’re not the federal government, so we’re a neutral, trusted third party, if you will, on all things science, engineering, and medicine.

So, we were approached by our academy members saying, “Listen, here comes the tsunami of AI, and it’s headed for us.” This was a couple of years ago. Now, in a big, big wave, we see it coming. This was even before ChatGPT was released and we started to understand generative AI, and talk about an impact, a tsunami. We realized that the good news was that everybody was developing frameworks, guidelines, and principles for AI and healthcare. The bad news was the same, that everybody was developing a set of principles, guidelines, and frameworks. What that means then is that we start to… We almost don’t have governance harmonization, a governance interoperability.

So, we took it upon ourselves, and began working with CHAI in the very beginning, because we were funded by the same… The Gordon and Betty Moore Foundation funded CHAI and us, and they said from the very beginning, “We want you to work hand in hand.” We did. Fortunately, I work hand in hand with everyone on this panel, because we started looking at this idea of what would be a code of conduct that we might be able to have for the nation, not one that said, one size fits all, now hear this, you’ll use our code of conduct, but one that would act as an alignment function. So, we built upon the work that we did. In 2019, we published Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril, and for a 2019 publication, well, AI ages in dog years.

So, that publication, don’t even bother. No, I’m not kidding, there’s some really great stuff in there. Mark Sendak’s featured prominently in that publication, good stuff in there, but the world has changed since 2019 in AI. A lot of foundational things are still there; there are definitely immutables. When I hear people say, “Oh, something that we’re doing in AI is going to be outdated in six months,” I think, we’ve got an eye on equity, and it’s really important. I can’t imagine that next year we’re going to say, “Yeah, we’re so over that. That’s passe now.” The immutables are the things that we focus on at the National Academy.

So, we wanted to be able to promote responsible use of AI by harmonizing. Just take a look at these. Let’s do a scholarly view of all of these guidelines. We looked at over 60 different guidelines, frameworks, and principles to find out where we are in violent agreement on things, where we see inconsistencies, and where the out-and-out gaps are. So, that was the work. We also wanted to take this down one level of granularity, and begin to translate it into the behaviors that we might see from every one of the critical stakeholders, and infuse that with a super strong patient orientation. We wanted this to be one of the strongest things that the National Academy of Medicine ever did in terms of bringing the patient voice and perspective in from the very beginning.

So, that work is ongoing, and we wanted to promote that national alignment, again, not wholesale adoption of the code. Every one of my esteemed colleagues here has, in some way, shape, or form, worked on the alignment piece so that they aren’t operating independently, and, as Brian will tell you about the Coalition for Health AI and their work, they will be assuring and certifying against these standards. So, we’re excited about the reach and the work of this. You might wonder who’s behind the code of conduct project at the academy. It’s chaired by Gianrico Farrugia, the CEO of Mayo; Bakul Patel, the global lead for digital health strategy for Google; and Roy Jakobs, the CEO of Philips, a small company that you might’ve heard of based in the Netherlands.

So, the simple rules that we… By the way, you may want to look on the website to see all of the people that were involved in the creation of the code of conduct. We had some fierce and clear patient voices like Grace Cordovano, someone who has not only got lived experience with this but advocates for people as her work and her life. We have people like Vardit Ravitsky, the CEO of the Hastings Center, a global ethics organization, deeply infusing the code of conduct with the kind of things that we think will make it bedrock, that will make it matter. So by the time we got done looking at all of these, we saw places where there were…

Transparency was one of the things that we had alignment on. We had some inconsistencies around the fact that it should be safe, imagine. I can imagine, because sometimes we don’t think about those things when we’re developing this, but we should have. So, that was one of the inconsistencies. Then, the out-and-out gaps. The one that caught our breath actually was that almost none of these, in fact, I think it was maybe one or two of the 60 if that, had human primacy, to advance human health. We thought, “Should that not be the number one?” I think there has been obviously some concern that we’ve already seen AI in healthcare used to do things like potentially deny care. So, this principle matters. It matters in a very big way.

We also saw, too, that there was at that time a lack of calling out the need for tracking these things after we unleash them. I think we’ve been used to the idea of drugs and devices, molecules and devices. Those things, once you put them out into the wild of healthcare, they don’t evolve. They don’t change. They’re not emergent. Algorithms are. AI is by nature. It changes. So, one of the other gaps that we saw was that there was very little attention to following up and making sure that we are tracking and monitoring these things after they’re in. For me, this is a Trojan horse for healthcare.

I’ve been working in quality and safety for a very long period of time, and I’m not pleased with our progress, thinking about how one of the things that we’ve never done well is understand whether what we do works for people, whether what we do really produces the outcomes that we’re going for. Some of it’s the payment model. By and large, our predominant payment model is still, “If you do the service, you get paid.” We’re the only industry in the world that can inflict defects on our customers, and bill them to fix it. We don’t do it intentionally at all, but there were times, before we got a handle on things like post-operative infections, when we could not wash our hands, and if you got an infection, we could actually bill you to clean up after that infection.

So, we have issues that create safety issues for us. So, these simple rules address those elements; environmental impact was also one of the others that we saw was not properly paid attention to. Here are the six simple rules that we distilled out of all of those principles and frameworks. We know that principle sets can sometimes get very long; in the heat of decision making, what can we recall right away? It’s complex adaptive systems, for those of you that have studied that science. Protect and advance human health and connection is the primary aim, without question. Ensure equitable distribution of the risks and the benefits. Engage people as partners with agency in every stage of the AI life cycle.

We define agency as you have the ability to impact decisions and outcomes. It doesn’t mean that you’ve been invited as a token to the table. It means that you have actual agency. Renew the moral well-being and the sense of shared purpose of the healthcare workforce. We see AI as a phenomenal opportunity to right some of the most vexing problems that we have in healthcare, but we’re not going to do it with a demoralized workforce. There just isn’t any way we’re going to do that. Monitor and openly share methods and evidence of AI’s performance and impact on health and safety. Let’s monitor. Let’s understand whether what we do works here. Believe me, at this stage of the game, we don’t know.

Number six is innovate, adopt, collaboratively learn, and continuously improve. We cannot go down in silos. We’ve got to come together. We’ve got to share learnings across this, because there are too many unknowns, and there’s too much at stake. So, all AI is not alike. We have been working in the world of predictive AI. When somebody says to me, “Oh, this AI is the same as what we’ve been doing for many years,” the predictive type of AI, which is more singularly focused and has been trained on certain data underneath, absolutely. But when we get to Generative AI, it’s different. That picture of Generative AI was generated by Generative AI; I asked it to draw a picture of itself, and that’s what it came up with. This is different, because it is moving at a scale and a speed that we have never seen before.

It can move so much faster than typical AI. It is ubiquitous. There is no aspect of healthcare, or actually of a lot of other industries in our country, that AI will not touch. The other thing about it is that it’s democratized. If you have a phone that’s connected to the internet, you have access to the world’s most powerful large language models globally. You have it at your fingertips. If you are a patient, you’re just discovering the power of that. That will be transformational. So, I think we’re going to be talking about both predictive and Generative AI, because you can’t govern them the same. That’s the point about that.

Healthcare is multifaceted. We’re going to have to do it at a lot of levels. Legislation can’t and won’t do it alone. I don’t want you to feel like there’s a burden that you have to be the be-all end-all for governing AI. So, there is a federal government component. There’s a state government component. Gavin Newsom signed 17 AI laws in the last 30 days. There were 38 sitting on his desk at the time that I last checked. I am somewhat concerned that Senate Bill 1047 was vetoed on Sunday. That was going to have enormous impact nationally. So, one of the things that we’ve got to be thinking about is where’s the alignment there? That will be something I want to talk more about in the panel.

National and international collaborations are key to governing AI: assurance labs, accreditation labs, industry collaboration, where we all agree on a set of standards for what good looks like with AI. Public-private partnerships need to flourish, because each one of us brings something to that party, to that table, that we cannot do alone. Local health system governance, which Mark’s going to talk about, is exceedingly important, because these things behave one way in the lab and another way in real life. So, that’s what’s new versus what we need to simply build on. These are key questions. I’m just going to…

I want to end with a perspective about how important this is. Congressman Don Beyer said that healthcare is one of the few arenas where the decisions and the actions that we take will affect our lives and the lives of our families. But I think that Don Berwick said it in a way that I wake up every day and think about. He said, “I’ve heard it said from cynics that the quality of medical care would be far better, and the hazards far fewer, if we, like pilots, were passengers in our own airplanes.” I think we have to know for sure that we’re passengers in this airplane. All of us, every one of us, will be a patient at some time. Thank you so much.

René Quashie:

You guys are all clapping, but I got to follow that, which is ridiculous. That’s okay. That’s okay. I’m up to the task, I think. So, I represent a bunch of technology-enabled companies. I think one of the advantages I have on this panel is I see how AI is being used in transportation and smart cities, in robotics, in smart homes. So, I get to see AI from a horizontal perspective as well as from my vertical perspective running the health division at CTA. So, one of the things I want to point out first of all is that CTA is an accredited standards development body. I’ll get into why that’s important a little bit later.

We’ve had a standards program for over 100 years. By the way, when I use the word standards, I’m really just talking about measurable and verifiable criteria based on a consensus process, which I’m going to talk about in a second. There’s probably almost no piece of consumer tech that has not been touched by a CTA standard. I’m going to give you some examples. AI on your phone, that’s a CTA standard. It used to be… I’m looking around the room, a lot of young people, so you don’t remember this, but it used to be that when you got on a plane and you had a phone, you had to power it down fully before you took off.

How many of you remember that? Okay, you’re showing your age, but that used to be a thing. One of the things that industry decided, CTA decided, is there’s a technological fix for this. So, we created a standard called airplane mode, which you all have, which you all use. So when you get on a plane, or in other places where you’re not allowed to use Wi-Fi, you can use airplane mode to power down the Wi-Fi, and still have use of your phone in case you’ve downloaded other content. So, that’s the value of standards in certain ways. If you are watching a TV show, and you see closed captioning, that’s also a CTA standard.

One of the other standards we developed was on step counting. It used to be… Again, this is going to show your age, but back in the day you wore different wearables, and they measured steps differently. So, you could take the same number of steps wearing two different wearables, and they would give you totally different step-counting information. CTA got together and developed a standard about what constitutes a step. So, every time you take a step, it means the same thing. So, those are just a few examples I have. We’ve got others here. The most recent example I can cite, too, is the OTC hearing aid. Hearing aids are now available over the counter. The FDA finalized its rule about 18 months ago.

The technical specifications for OTC hearing aids were also a CTA standard. So, we believe in standards, right? Again, we’ve developed over 130 of them across all industries. We’ve developed about 30 in the digital health space. We’ve developed four on AI in healthcare. The first one was on definitions and characteristics of AI. So when we use the words deep learning, what does that mean? What does machine learning mean, all those kinds of terms? So, we defined that. So, we had a standard on that, and then we had a couple of other standards. One of my favorites is on bias management in AI solutions in healthcare. How do you control for bias? How do you manage it? Then one on data governance as well. So, those are just examples of a lot of the standards work that we’ve done over the years.

Now, standards are one piece of ensuring trust in health AI. The gentlemen to my left are going to talk a little bit about this in a moment, but I think standards are the foundational piece, and then there’s a compliance piece, a validation piece, that’s going to come later, but I think they all work in conjunction to produce trusted products at the end of the cycle. Now, one of the things I want to point out: I said earlier that CTA is accredited as a standards development body, and we’re accredited by the American National Standards Institute. Why that’s important is because in order to be accredited as a standards development organization, you have to meet certain due process requirements. Those requirements are important because they ensure that at the end of the standards process, you have voluntary consensus standards that have been through an open process, a transparent process, and a comment process as well, which I’m going to go through here.

The other thing, too, is that designation of a standard by the American National Standards Institute also allows integration into international standards, which are incredibly important in the technological space. This chart goes through how a standard is actually developed. It starts off with an idea and a proposal. Then it’s sent to our consensus body, which looks at it and says, “Yes, there’s a gap in the market. We need to develop a standard.” I talked about AI and step counting. That’s exactly the process that happened. Somebody said, “Listen, there’s a gap here. It needs to be filled. Can this be filled by a standard?”

Then you have to approach ANSI, the American National Standards Institute, to ensure that this is something that they will bless. Then you go through a process, and we develop work groups, and they’re made up of a diverse set of stakeholders from all across the health ecosystem. If you have a material interest in the subject of the standard, we cannot say no to you. You have to be part of the standards working group, which is a great thing. There’s a comment process. There’s a ballot process as well. So as you can see, there are really very, very formal processes you have to go through from beginning to end in order to generate a standard. What that means is that by the time the standard is finalized and published, it’s been through a lot of rigor, and I think that’s the value of standards.
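To make the pipeline Quashie walks through concrete, here is a minimal Python sketch of the stages as described on the panel; the stage names are paraphrased from his remarks, not CTA’s or ANSI’s official terminology.

```python
# Stages of an ANSI-accredited consensus standards process, paraphrased
# from the panel remarks above; names are illustrative, not official terms.
PIPELINE = (
    "idea and proposal",
    "consensus body gap review",
    "ANSI approval to proceed",
    "working group drafting",   # open to any materially interested party
    "public comment",
    "ballot",
    "publication",
)

def next_stage(current: str) -> str | None:
    """Return the stage that follows `current`, or None once published."""
    i = PIPELINE.index(current)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None

print(next_stage("public comment"))  # ballot
```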

We are creating a new CTA health AI project. We’re calling it the CTA Health Planning Council, because we have looked at the market, and we think there are gaps that need to be addressed. These gentlemen to my left are going to talk a little bit about this, but what we are doing is we’ve created a planning council that is analyzing the gaps that may exist in the market in terms of standards. We’re looking only at predictive AI, not GenAI, to figure out where we may need to develop some standards in this space. I’ve talked to all my panelists here at one point or another about this project, and I think it’s important, because what we are in danger of facing in the future is a bunch of inconsistent practice standards across the ecosystem. That’s only going to lead to inconsistencies that create safety issues, which is something that we don’t want, as Laura talked about before.

The last thing I’m going to leave you on is, again, because CTA works across all industries, we developed what we called our framework. Our framework, really, I’m just going to boil it down to its essence, which is we believe in a risk-based approach to the regulation of AI. Not all AI is built the same. There’s certain AI models and systems that require much more scrutiny than others. Look, we all use AI, whether we know it or not. All of us are subscribers to video streaming services, music streaming services. Most of us have GPS apps on our phone. Most of us use frequent shopper or frequent user kinds of apps. They’re all in some way empowered by AI. So whether we know it or not, we live in the world of AI. Brian, what did you say? We live in a world of AI. What was your statement again?

Dr. Brian Anderson:

We’re just living in it.

René Quashie:

We’re just living in it. So, for health, though, I think there are some unique aspects of healthcare that are incredibly important. So, that to me has a higher risk. So, what we did is we developed this approach, and to us, the high-risk AI systems and models that need scrutiny are the ones that are solely based on automated decision-making, meaning there’s no human in the loop, and that affect health and safety, for the purposes of this audience. So, if there’s a human in the loop, it’s less risky. If it’s automated decisions, it’s high risk, and those require a great deal of scrutiny. Now, there’s a continuum, obviously, right? So, the lower the risk, the lower the scrutiny.
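As a minimal sketch of the risk-based logic Quashie describes, the tiering can be thought of as a function of two factors: whether decisions are fully automated and whether they affect health and safety. The tier names and the `risk_tier` helper below are illustrative assumptions, not part of CTA’s published framework.

```python
def risk_tier(automated_decision: bool, affects_health_safety: bool) -> str:
    """Map an AI system to a scrutiny tier under a risk-based approach."""
    if automated_decision and affects_health_safety:
        return "high"    # no human in the loop, health/safety impact
    if affects_health_safety:
        return "medium"  # human in the loop, but health/safety impact
    return "low"         # neither condition: lower scrutiny

# A fully automated decision affecting care gets the most scrutiny.
print(risk_tier(automated_decision=True, affects_health_safety=True))  # high
```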

We believe this makes sense and achieves the balance, on the one hand, with the innovation that we’re all incredibly interested in, because one of the things that we always lose sight of when we’re having these discussions is that there’s a broader geopolitical and global economic and competitiveness context at play here. I hope everybody understands that. This is not just about health AI. This is not just about AI. It’s about U.S. global competitiveness with other countries, China coming to mind. So, in all these discussions we’re having, we need to ensure that we’re approaching the discussions with the right context in mind, and we don’t want to stifle innovation. So, I’ll leave it at that. I’m looking forward to the panel discussion, because I have lots to say.

Mark Sendak:

Hello. Okay, perfect. So, thank you, everyone, for making the trip out here today. I’m also thrilled to be part of this panel. We’re going to have a lively discussion, I am sure in the invitation-

René Quashie:

I can’t wait.

Mark Sendak:

… to share perspectives. So, I’m going to start from the first AI project I led as a project manager and data scientist. So, this is two patients in my community, Durham, North Carolina. Don’t look at me. Look at that. So, what we have here is 10 to 15 years of longitudinal kidney function data. It’s hard to see from far away, but the person who is lower lived for a decade with end-stage renal disease on dialysis. We see over a three- to four-year period of time that this individual went to emergency departments at Duke dozens of times, was not connected with primary care, was not connected to a nephrologist, and was not given the treatments that they would’ve needed to actually prevent kidney disease progression.

They lived the last 10 years of their life with end-stage renal disease, and died in their 60s. That data was there. We ran those labs. We had that data, and we weren’t using it to the best of our ability as a health system. The other patient also had repeated emergency department visits and repeated hospital admissions. They were getting to the threshold where they needed to be referred to a specialist, and it hadn’t happened yet. We started this project in time to get that patient actually referred to a specialist. So, this is why we do this work, because we have the data to intervene and improve patient care.

So when we did this project, we actually changed the organization. This was part of an accountable care organization. We developed new workflows, new roles. AI can change the way healthcare is delivered. I was the data scientist on this project. Health data is a mess. I’m not going to get into it, but there are 20 different ways that the creatinine value was represented across many different iterations of our health record. Part of that messiness is what got us working for years to build technology infrastructure to maintain high-quality data to run algorithms, and we still use this infrastructure to this day; we’ve used it for all of our AI projects.
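The creatinine example points at a general harmonization task: the same lab arrives under many local codes and units, and it must be mapped to one canonical form before any model sees it. Below is a minimal sketch of that idea; the alias list and helper are hypothetical, not Duke’s actual pipeline (2160-0 is the LOINC code commonly used for serum creatinine).

```python
CANONICAL_UNIT = "mg/dL"

# Hypothetical local codes for the same lab; real systems often key on
# LOINC codes plus site-specific lab dictionaries.
CREATININE_ALIASES = {"CREAT", "CR_SERUM", "2160-0", "CREATININE, SERUM"}

def normalize_creatinine(code: str, value: float, unit: str) -> float | None:
    """Return creatinine in mg/dL, or None if the row is not creatinine."""
    if code.strip().upper() not in CREATININE_ALIASES:
        return None
    unit = unit.strip().lower()
    if unit == "mg/dl":
        return value
    if unit in ("umol/l", "µmol/l"):  # SI units: convert to mg/dL
        return round(value / 88.42, 2)
    raise ValueError(f"unrecognized creatinine unit: {unit}")

print(normalize_creatinine("2160-0", 97.3, "umol/L"))  # 1.1
```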

In preparation for today, I actually went through our whole portfolio. We’ve been an innovation team for over a decade. We’ve done 49 AI projects over the years. I don’t know how much we’re going to dig into it, but I would push back on the notion that this technology is so new that it requires a completely new approach. We’ve actually been building these tools for a decade. There are eight large language models as part of this, and it’s actually a lot of the same best practices. So, one of the things that we’ve learned over the years, having, I think, one of the largest, if not the largest, portfolios of AI projects that have been implemented, is best practices for introducing these technology tools into patient care.

So, one of the best practices, and I actually brought something that I’m going to hand out, is the model fact sheet. This is something that we started distributing to our frontline clinicians in 2019, five years ago. We published the first model fact sheet in 2020. We’re going to talk about transparency today. This is something that’s used by health systems across the country. We’ve worked with health systems in other countries as well, and it was cited 10 times in ONC’s HTI-1 final rule. So, this is actually federal law now. I would say from my perspective, it was one of the neatest professional examples of seeing something that was a problem on the front lines.

We were innovating, developing solutions on the front lines, and now, it’s federal law. So, I’m going to pass these around, and then if you can, actually… I don’t have enough for everybody, but try to get at least one or two to each table. So, this is where best practices come from. It’s doing the work, and that’s why we were funded by the Gordon and Betty Moore Foundation. So, we’ve built a national learning collaborative. We have 20 of the biggest health systems that have been leading the way in AI. Our vision is to be the trusted partner of up-to-date resources for frontline professionals trying to use AI in their daily work.

There’s a lot of organizations. You see representatives from four of the organizations here today. We’ve been tracking over 20. These groups all serve different stakeholder needs in the ecosystem. Healthcare is complex. There’s lots of stakeholder groups, and we actually work with a lot of these groups. They develop really valuable content, host valuable convenings, and we want them to be successful, and we want to be part of this ecosystem. There are things that make Health AI Partnership unique. First and foremost, we are so focused on last-mile implementation that we actually are providing technical assistance for last-mile implementation in community and rural settings.

We focus on the delivery organizations. We represent them. We interface with policymakers. That’s our goal. We have diverse collections of delivery organizations, high and low resource, and this also relates to the implementation assistance. Then the types of people that we bring together are practitioners and experts across domains. I can assure you that in every health system today, it’s interdisciplinary teams trying to work on these projects: clinicians, technical experts, operational experts, and regulatory experts. I forgot Rene was a JD, but we actually work a lot with lawyers right now.

So, we do think that there’s value in different approaches. So, here are some of the key takeaways. We think that we need to focus on post-market integration of AI. The market is flooded with products, and people need help putting these into use. Just looking at pre-market controls is not going to ensure safe, effective, and equitable use of AI in practice. We need to build and scale local capabilities. Obviously, I’m biased. I’ve been part of a health system for 15 years. We’re prioritizing the needs of patients and delivery organizations. Healthcare has lots of stakeholders and misaligned incentives, and so at least from our perspective, the way to improve things at the bedside is to focus at the bedside and on those sets of incentives and those stakeholder groups.

We need to celebrate variability. I can give you numerous examples where, even within our own system, we’ve adapted workflows and implementations of the same technology in different hospitals, because different hospitals had different staffing models. They had different infrastructure. We have to support variability. Then lastly, we have to be really thoughtful about resourcing and partnerships to make sure that we’re focusing on the interests of patients and improving patient care, versus further entrenching incumbent interests in the delivery organization or in the ecosystem. So, our core audience is the delivery organizations, those interdisciplinary teams.

We’ve surfaced best practices. We do interviews and regular convenings with all of the leading health systems. These are eight key decision points that all of our health systems follow in some shape or form as they navigate the AI product life cycle. We’re philanthropically funded. All of this material is publicly available and usable. These are our core sites. So on the left-hand side, you see the systems that have been leading the way in AI. We also have ecosystem partners, federal agencies that we meet with regularly. One of the things I’m most excited about is our technical assistance program.

So on the left-hand side, you see four federally-qualified health centers and one community hospital. So, we’ve got five use cases participating in the program. These organizations had to have alignment from senior leadership to move forward with an AI product, and they had to have an off-the-shelf solution that they were looking to implement. There are two large language model use cases for GenAI, two EHR vendor solutions, and one FDA-approved device. So, we have variability, because part of what we’re trying to pressure-test is how much these best practices can be implemented to support a variety of technologies versus how tailor-made they need to be.

Then we’re doing research with a health system partner and with a vendor partner to build and advance open source tools that allow all organizations to evaluate models locally. This is a story announcing the technical assistance program, and this is a picture of one of the sites that we’re working with in rural Arizona that treats a large Native American population in Arizona and New Mexico. I also just want to emphasize we’re in this for the long run. I’ve been part of DIHI for almost a decade. This isn’t just about policy decisions that we think are going to happen in the next six to 12 months.

We want to level the playing field for all healthcare delivery organizations to benefit from AI. We want to make sure that we’re nimble. AI is going to keep moving. It’s going to keep presenting new challenges. We want to be able to respond to those challenges with the other leaders in the field. We want an efficient market. I would say, and maybe we’ll talk about this in discussion, but we do not live in a world today where best-in-class solutions can emerge and diffuse across delivery organizations, and we need that.

We need transparency and shared accountability. Then the last bullet, that’s why we’re here. We’re trying to eliminate inequities, reduce costs, improve provider experience, and improve patient outcomes. So, thank you.

Dr. Brian Anderson:

All right, so I had the pleasure of going last before we get to our discussion. Laura, I think you’re right. I’ve never been on one panel with all of you guys. It’s so special to be here with you. Alliance, thank you so much for bringing us. Thank you for the chance to speak with everyone here. So, I’m Brian Anderson. I am a physician data scientist by training. I’m the CEO and president of the Coalition for Health AI. CHAI, as we like to say, like the spiced tea, was an effort that we started in the middle of the pandemic, in March of 2021. A number of organizations were working together to help solve the pandemic at that time.

We were working in what I would call a pre-competitive space. You had a Microsoft, a Google. You had a Pfizer and a Moderna, all looking to come together against a common goal of tackling COVID. One of the things that we asked in the pandemic was, “Is there something more that we collectively can do beyond the pandemic?” We looked at AI, and one of the initial questions that we asked ourselves, which I think you’re hearing echoed from all of my fellow co-panelists, is, do we have consensus? The question we asked was, “Do we have consensus at a technically specific level about what responsible AI in health looks like?”

I think we all agree, we have violent agreement, that we want fairness. We want transparency. We want reliability. But at the level of a software developer or an implementer at a health system, do we have agreement about what good responsible AI looks like that could inform that individual? The answer that we quickly came to, this small group of ours, was that we don’t. We have a lot of fantastic work happening at big tech and small tech companies, at big health systems and small health systems, but we don’t yet have a space where we can come together, share our best practices, and begin to build the technical framework that actually informs, when that rubber hits the road, what good responsible AI in health looks like.

So, we started with very humble beginnings: eight doctors, clinicians, and a number of technology companies coming together, looking to try to solve this problem. I would never have guessed how quickly CHAI would grow from those humble beginnings; I think now we have nearly 4,000 different member organizations across the U.S. and internationally. Importantly, as Rene and Laura were citing, we also have members from the public sector. So, a number of U.S. government officials have been participating, working alongside the private sector innovators, helping to come together to define what these guidelines and guardrails look like.

Additionally, we have brought in a number of patient community advocates, ensuring that at every level in our working groups the patient advocate’s voice is heard, and that how we think about developing these models, deploying these models, and monitoring these models always has the patient in mind. Now, one of the ways that we do this, or I should say the principal way that we build these technical guidelines, is through working groups. Our working groups are intentionally quite large. They’re between 20 and 60 different member organizations. We make them large because we want to have a diverse group of perspectives and a sharing of best practices.

We want a Duke and a Mayo sharing alongside any number of FQHCs, alongside a big tech company and a startup, so that we are ensuring that we’re getting that diverse set of technical perspectives to build these consensus guidelines. Now, what are they doing with these guidelines? The hope is that the members of CHAI, those 3,800 members, are going to take these guidelines and then go back internally, whether they’re a Microsoft or a startup or a big health system or a small health system, and actually begin implementing them and using them with that level of technical detail that actually informs how things are done.

So, it’s great. Okay, so you have a definition of what good responsible AI looks like. So what? One of the things that is missing in health AI, or I would argue in AI in general, is this concept of quality assurance labs. When you look at any other sector of consequence in the U.S. economy, you will find independent entities that evaluate, that ensure through rigorous testing, that cars, airplanes, consumer electrical devices, the lamp that I put in my child’s bedroom are safe and effective. There are things that we take for granted. I got on the airplane this morning, and I didn’t have second thoughts about, “Is the airframe safe and effective?”

The importance of having quality assurance labs that test and validate the tools that we use as clinicians is critical, and we do not have that in AI and health yet. I would argue that that is a huge blind spot. So, one of the things that we are focusing on in the Coalition for Health AI is developing a nationwide network of these quality assurance labs. We intend in CHAI to certify these labs: that they are trustworthy, that they are independent, that they don’t have commercial entanglements with vendors, and that they have the capabilities and the kind of testing data to rigorously validate that these models are safe and effective.

If I’m an FQHC in Appalachia, I have a particular kind of patient population that I see. That patient population might be different than the tribal nation that you see up here, or inner-city Chicago, or a rural community in Kansas. One of the challenges in assuring that a model is actually safe and effective is the diversity of kinds of patients that every health system has, that that FQHC has in Appalachia. So, one of the things that we’re focusing on in building this assurance lab network is that it draws essentially from health systems that are participating in this network.

It draws from health systems across the nation, so that if I wanted to have a model tested and validated on how it performs on patients in rural Appalachia, I could actually go to one of the two health systems in North Carolina or Virginia that is participating, which could draw and create a testing data sample from those patients. Whereas if I were deploying a model in the Rio Grande Valley of Texas or the Pacific Northwest, I could work with a different set of labs that could actually create the kind of testing data set and the rigor around model evaluation that gives a level of insight into how these models are performing.
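A minimal sketch of the matching idea Anderson describes, assuming a hypothetical lab registry keyed by the patient populations each lab’s testing data can represent; the lab names and fields are illustrative, not CHAI’s actual network.

```python
# Hypothetical registry: which populations each assurance lab's testing
# data can represent. Real coverage would be far richer than a string tag.
LABS = {
    "lab_east": {"rural Appalachia", "urban Northeast"},
    "lab_west": {"Rio Grande Valley", "Pacific Northwest"},
}

def labs_covering(population: str) -> list[str]:
    """Return the labs whose testing data include the target population."""
    return [lab for lab, pops in LABS.items() if population in pops]

print(labs_covering("rural Appalachia"))  # ['lab_east']
```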

One of the biggest challenges that health systems face today, just as Mark was saying, is that they’re being inundated with vendors promising, “These AI tools are going to solve all your problems,” but they don’t have the ability to turn to an independent evaluator that can say, “Well, here’s a report card on how that model actually does on patients like yours.” So, they’re making decisions without data in these procurement processes, which, by a recent survey, over 70% of health systems are actively engaged in, trying to make informed decisions about which AI tools are truly safe and effective.

Because, just like Mark says, how that model works in my local environment is going to be very different from how it works in Boston, where I’m from. So, we need to create a robust capability set that allows these models to be monitored and tested locally, but also, in those important instances where I’m that FQHC in Appalachia and I don’t have an army of IT people like I might at a big academic center, lets me turn to one of these trustworthy labs and get these model report cards. So, one of the things that we’re committed to in CHAI is standing up this network of labs. We’re going to announce all of their names later this month at a big conference called HLTH. So more to come on that, but we’re committed to sharing these evaluation report cards publicly.

So, everyone in society, the model customers, the health systems, the vendors, but also you and I as patients, can see how these models are performing, and we think it’s really important to have that level of transparency. Now, the other big thing: I think Mark shared with you a little bit of an insight into this concept of a model card. All of us go to grocery stores. Some of us, though I probably don’t do it as often as I should, turn over the label of something that we’re purchasing and look at how much fat is in there, how much of it’s saturated or unsaturated, how much sugar, how much protein. These are the nutrition labels that we look at.

We don’t have those for AI models. We desperately need them. As probably many of you are already aware, a model performs well on the kinds of data it was trained on and with the methodology it was trained on. So, if you don’t know what went into that model, what its training data sets were, the indications for the model, the limitations of the model, there is real risk that the model will perform quite poorly on a particular patient population. The vast majority of models are trained on highly educated, urban, and suburban white individuals. They are not trained on rural people from Appalachia or tribal nations, which is where I’m from.

If we want to build AI that’s going to serve all of us, we need to have transparent ways of informing the public and informing model customers about how these models are created. So, CHAI will be launching a model card that the customers, the over 600 health systems that are part of CHAI, will begin using and requiring model vendors to share as part of the procurement process. That’s something we’re really excited about. All right, with that, I think I’ll turn it back to Emma for next steps.
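
As a rough illustration of what such a nutrition label might contain, here is a minimal sketch of a model card as a plain data structure. The model name, fields, and numbers are invented for illustration; CHAI’s actual model card schema may differ.

```python
# Illustrative only: a minimal "nutrition label" for a hypothetical AI model.
# Field names and values are invented; this is not CHAI's model card schema.
MODEL_CARD = {
    "name": "ExampleSepsisRisk",  # hypothetical model
    "intended_use": "Early warning for sepsis in adult inpatients",
    "out_of_scope": ["pediatrics", "outpatient settings"],
    "training_data": {
        "source": "Three academic medical centers, 2018-2022",
        "population_mix": {"rural": 0.04, "urban_suburban": 0.96},
    },
    "known_limitations": ["Sparse representation of rural and tribal patients"],
}

# With the label in hand, a procurement check can be mechanical, not guesswork:
if MODEL_CARD["training_data"]["population_mix"].get("rural", 0.0) < 0.10:
    print("Flag: training data may not represent a rural patient population.")
```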

Emma Beavins:

Great. Thank you guys so much for your words, for your perspectives. That was a lot of information, which, even for me, someone who does this nearly every day, is a lot to absorb. I want to start our discussion with two framing questions just to set the stage; that’s how I like to approach things. Also, as a note, we are collecting question cards, so if you have questions for the panelists, you can hand them to Sarah, who’s walking around, and she’ll bring them up to me. But to set the stage a little more, I want to recap a tiny bit of what was just said.

So, I want to have you compare and contrast what each organization is doing in the health AI standards space. I want to ask this of Laura and Brian. So Laura, the National Academy of Medicine has been engaged in this work of assessing the landscape. What’s your personal assessment today of the perspectives on this panel and what’s been represented?

Laura Adams:

I’d have to say that my sense, because we’ve worked so closely together, with many other organizations as well as my esteemed colleagues here, is that there’s an enormous amount of alignment at the level of the commitments that we talked about, at the level of what we’re really looking for: What’s our aspiration? What’s the vision that we have for what good AI looks like? It now needs to be taken down to the level of standards, of the certification process, of application at the local level, but I feel that there’s a great sense of harmonization. I don’t look across our panel here, nor across a lot of other organizations, and say there’s a lot of misalignment, because the very riskiness of AI has been almost a Trojan horse: it’s made us talk to each other.

It’s made us come together and say, “All of us could get in a lot of trouble with this.” We want to do this right. This has so much potential that if we screw it up, and there’s a disaster or a catastrophe, something that happens or gets out of hand, we’ve hurt all of us in every single way, because preventing this potential from coming into being would be a catastrophe in and of itself. So, I just feel that the notion of high risk has been a blessing in disguise that’s caused us to come together and work. So at that level, in fact at a multitude of levels, at least with my colleagues on the panel here, there’s a lot of harmonization going on. You can feel good about that.

Emma Beavins:

Great. Brian?

Dr. Brian Anderson:

Yeah, just to echo a few things Laura said. So the first thing: when CHAI first launched, we were incredibly excited to be partnering with the National Academy of Medicine. The way that we’ve framed the work we’re doing is that CHAI gets down into the technical weeds and the details of a lot of these best practice frameworks, whereas, importantly, NAM, as a visionary and as the place that convenes the luminaries in our field, is helping to establish what that North Star or set of North Stars is. So, we’re very much aligned within CHAI about taking the AI Code of Conduct and building out those six things that Laura referred to in greater technical detail.

Now, CTA: we love the idea of working with standards developing organizations. CHAI does not want to be a standards development organization. What we want to do is bring together industry and the private sector, academic institutions, health systems, patient community advocates, startups, and big tech companies to develop, in standards lingo, what’s called a schema, or technical schema, that can inform standards. There are a number of standards development organizations out there, and CTA is a great example of one. What we want to be able to do is partner with organizations like René’s that do the really important task of developing the standards that companies can then begin using.

Duke and DIHI are another wonderful example of some of that really important, critical work. That last mile is so critical. One of the things I admire about the work Mark is doing, Emma, is creating that community of lower-resourced health systems and partnering with them in that technical assistance program, building out the kinds of open-source tooling based on some of these best practices. I’ll also say that a lot of the work DIHI has done on best practices forms some of the initial substrate for these working group conversations, where folks from Duke come alongside Stanford and others, sharing their best practices and building that consensus.

So, I think just like Laura said, it takes all of us to do this, and we need to come together. Those are just some examples of how we’re partnering.

Emma Beavins:

Yeah. Great. Thank you. I’m going to take an audience question for the next one. So, for Mark and Brian, why do stakeholders join your respective organizations, CHAI and the Health AI Partnership, and what value do they derive from joining? Mark, I’ll start with you.

Mark Sendak:

Can you hear me?

Emma Beavins:

No.

Mark Sendak:

Hello. Okay, cool. So, I would say we are cultivating two different communities, and I had these on different slides. The core sites are the sources of the best practices. Some of those are folks that we had known for many years, and I mean five, 10 years: the folks who were early movers in using digital technologies and machine learning in healthcare. Some of those we actually sought out, because we really did want a diverse set of hospitals. So, we brought in public safety-net hospitals like Parkland Hospital in Dallas, Texas, and MetroHealth in Cleveland, Ohio. We brought in Kaiser. We brought in Mayo Clinic.

So, those are folks whose work we knew, and we had relationships with them. That’s the source of the best practices. The reason people participate is that they get to contribute to the best practice development. They learn from each other. It’s a platform for them to share their learnings, and part of our goal is to raise the visibility of what they’re doing on the front lines. The core sites, sorry, the practice network sites, that’s actually a solicitation. So, we ran our first cohort. We announced it maybe in March or April, and it was a competitive program where we said we’re going to launch with five sites, and these are the first five sites that are going to be part of this demonstration project of how we take the best practices from the most advanced organizations and diffuse them into community and rural settings.

I’ll say, as someone who’s studied health policy and worked in population health for a long time: if anyone here knows of the ECHO model, Extension for Community Healthcare Outcomes, there’s a really long history in medicine of extending expertise outside of academic medical centers. So, I draw a lot of inspiration from programs that have nothing to do with technology, asking, “How do we build this multi-sided network of identifying where there is expertise, surfacing that expertise, and diffusing it?” It’s actually been really exciting to see that on both sides of those networks, there are network effects where people want to join.

I will say we’re not trying to recruit every organization in the country, so it’s pretty selective on both ends. That’s why it’s a small community, but we have 100% engagement in all of our events. I mean, people get along. There’s a lot of mutual respect.

Dr. Brian Anderson:

Mark, I thought you were going to say they joined our organizations because of the cool names, CHAI, DIHI. Yeah, so organizations join CHAI. It’s an open group; anybody can join, and there are over 3,800 members, as I shared. I can’t possibly describe all of the reasons they join, but at a very high level, I would say it’s very similar to Mark’s answer. The groups that join want to participate in and contribute to the development of these technically specific best practice frameworks. I would say that a lot of the patient community advocates that join are excited about the transparency and the rigor that comes with the assurance labs, and about being able to be more informed about how models are performing or working on patients like them.

I think that’s a very important concept. The technology companies that have joined CHAI, I think, are really interested in understanding, “What are the industry standards in this space?” If I were to ask whether you think AI engineers at startups and big tech companies agree on how to measure bias, you might think, “Of course they do.” No, they don’t. There are wildly different definitions of basic concepts like fairness and how you measure bias. I think everyone, broadly in society, agrees that’s not a good thing. So, industry, specifically the technology companies, is coming together because we want common agreement about how to measure good, and how to define it.
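
To see why that disagreement matters, consider a toy example (all numbers fabricated) in which the same predictions look fair under one common definition, demographic parity, and biased under another, equal opportunity:

```python
# Toy illustration: the same predictions can pass one fairness definition
# (demographic parity) and fail another (equal opportunity). Numbers are made up.

def positive_rate(y_pred):
    return sum(y_pred) / len(y_pred)

def true_positive_rate(y_true, y_pred):
    hits = sum(p for t, p in zip(y_true, y_pred) if t == 1)
    return hits / sum(y_true)

# Group A: 5 of 10 truly positive; model catches 4 and raises 1 false alarm.
a_true, a_pred = [1]*5 + [0]*5, [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
# Group B: 2 of 10 truly positive; model catches both and raises 3 false alarms.
b_true, b_pred = [1]*2 + [0]*8, [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

print(positive_rate(a_pred), positive_rate(b_pred))  # 0.5 vs 0.5: "fair"
print(true_positive_rate(a_true, a_pred),
      true_positive_rate(b_true, b_pred))            # 0.8 vs 1.0: "biased"
```

Depending on which definition an engineer picks, this model is either equitable or in need of remediation, which is exactly the kind of ambiguity a common standard would resolve.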

Emma Beavins:

Great. Thank you. I’ve counted that we have seven questions and 12 minutes, so we’re going to burn… Oh, and there’s more questions, so 30 seconds, guys. Before I get to this pile of questions, I want to ask one of my own. So, we’ve talked a lot, and I think people generally agree that there’s some role for national governance of healthcare AI, and there’s also a need for local governance at the level of the individual health system, clinic, you name it. So, I want to ask Mark and Brian, what do you think the right ratio is between national governance and local governance for health AI?

René Quashie:

You’re good.

Mark Sendak:

Is this on?

René Quashie:

No.

Mark Sendak:

Okay, sorry.

René Quashie:

Don’t touch it anymore. Don’t touch it anymore.

Mark Sendak:

Thank you. I feel like René and Claire have saved me from myself today. But, so a few years ago, we wrote a piece called “Collaborative Governance,” presenting a model where you need shared accountability. It’s not just local and federal. It’s also the vendor. It’s the professional clinicians whose medical licenses are on the line. It’s the organizations that pay malpractice insurance and hire professional clinicians to provide care. I mean, there are so many accountabilities in healthcare, and AI is going to be deeply embedded in every aspect of it, but I do want to take a moment to talk from the perspective of an innovator.

I know that René mentioned that innovation right now is so important to be able to make the most of what AI can bring to healthcare. Part of why I brought these model facts labels is to show you what this looks like on the front lines. It’s two-sided; you can see other examples on the back. These are public labels for algorithms that we’ve been building and implementing, and there are examples from other organizations. This came from the front lines, and it’s written into law right now. I would recommend everyone here go read the HTI-1 rule and see the 31 source attributes that all vendors of AI products distributed through EHRs are going to be required to disclose.

So I think for us, that bottom-up approach of creating open resources that are broadly available, that anyone can implement, is extremely valuable. I’m going to use a very crude analogy, but I’m concerned about efforts to take tap water that is broadly available and bottle it and sell it, versus the things that government and the innovators in this space have been leading the way on and have already implemented and adopted into their organizational routines.

Dr. Brian Anderson:

So, we need… Now it’s green. We need both, Emma. We need both local recurrent validation and AI governance; that is critical to how these models function. As Laura shared at the beginning, these models are dynamic. They change. You need high-resource health systems and low-resource health systems alike to be able to locally monitor and validate their models. At the same time, you need a common, objective way of defining what safe and effective means, right? The FDA does that right now for drugs and therapeutics, and in the AI space, I would say somewhat.

What we don’t have is an objective set of evaluation metrics and common definitions of, for example, what bias is, to which you can then apply rigor in an evaluation framework at a more national level. We need that as well. You need both. You can’t have one without the other.
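
One way to picture the local half of that equation is a recurring check of a deployed model against each month’s local data. The sketch below is minimal and rests on invented assumptions: the AUROC metric, monthly cadence, and 0.05 drift tolerance are illustrative, not a CHAI or regulatory specification.

```python
# Minimal sketch of local recurrent validation: compare each month's locally
# measured AUROC against the validation baseline and flag meaningful drift.
# Metric, cadence, and tolerance are illustrative assumptions.

def recurrent_validation(monthly_auroc, baseline, tolerance=0.05):
    for month, auroc in enumerate(monthly_auroc, start=1):
        if baseline - auroc > tolerance:
            print(f"Month {month}: AUROC {auroc:.2f} is more than {tolerance} "
                  f"below baseline {baseline:.2f}; investigate before relying "
                  f"on this model.")

# Baseline from the original validation study; monthly values from local data.
recurrent_validation([0.82, 0.81, 0.79, 0.74], baseline=0.83)
```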

Emma Beavins:

Thank you. René, for you: you mentioned that when considering regulation for technology, in this case AI, national competitiveness is at stake. The U.S. has largely not regulated social media, while, the person who wrote this card says, “China regulates it heavily within their country.” Do you think there are any lessons to learn from social media regulation for AI regulation? 30 seconds, please.

René Quashie:

Yes and no, but I won’t get into a lot of detail on that. What I will say is I wasn’t trying to say we shouldn’t regulate in this space because of national competitiveness issues. What I was trying to say is we need to make sure that that’s part of the consideration going forward. Part of the considerations going forward is also what Mark talked about in terms of the balance between innovation and regulation. Look, Colorado just passed a comprehensive AI bill. The governor was hesitant to sign it. He signed it, and then a few weeks later, he came out publicly and said, “Eh, maybe I shouldn’t have done that,” because he’s worried about Colorado’s economic competitiveness, right?

If you have organizations and businesses that want to do business in Colorado, and they see all these requirements they have to comply with, they may not want to do business in Colorado, and they’ll go next door to Utah or Arizona. So, I think there’s a balance we need to strike. I’m just saying all these issues need to be part of the consideration.

Laura Adams:

I want to double down on that, because of one of the things that happened to us. We built a health information exchange using Rhode Island, which was a wonderful living laboratory, gathering up all the patients’ data in a central repository with patient consent. When we got ready to turn around and share that data with the rest of the nation, every door was slammed in our face, because every state had a different privacy law. We talk about the need to ensure competitiveness and the ability to bring this everywhere, ubiquitously, all across our nation. We’re not going to be able to do that with a patchwork of different regulations in every state.

So, one of the things that I would want to advocate for very strongly is conversations between what’s going on in the state legislatures, what’s going on in Congress, and what’s going on with our federal agencies. We have people coming to the National Academy saying, “For your next big project, or at least the next project in the next year, get these groups together. Let’s talk about this.” Because while we do understand the push for states to act, especially when there’s a sense that nothing is happening in Congress, I think there’s actually quite a bit going on in Congress, even if the sense is that there isn’t. I’m really concerned that we’re going to end up with a patchwork for AI that’s going to kneecap us as a nation.

Dr. Brian Anderson:

Emma, can I just pile on? So, Laura and I were talking about this earlier. In CHAI, we have a dozen states coming to us and, just as Laura is describing, saying, “Help us create AI regulation in health.” I have this deep existential concern, just as Laura is outlining, that there is going to be a patchwork of state regulations, each with different metrics, different ways of evaluating, different thresholds. There is an urgency to act at the federal level in this space. Now, the balance of that-

René Quashie:

Be careful what you wish for.

Dr. Brian Anderson:

Yeah, yeah. Well, just let me… The balance of that is we also need to have a pretty significant degree of humility, because we don’t have common agreement about a lot of these things. I keep coming back to how we think about measuring bias and fairness. What are the thresholds for performance? These are very basic questions, and the level of humility that we need to have in this space is pretty profound. So, when it comes to legislating, Laura and I, I think, agree that proceeding in a stepwise fashion, starting where we have agreement and thinking about appropriate laws that don’t get us into the situation, René, that the Colorado governor may have found himself in, might be a good step.

René Quashie:

But two quick points. First, I think the horse is out of the barn. There was a bill in Connecticut, and a bill in Colorado that was passed. California’s was vetoed. State legislative sessions start in January, and you are going to see thousands of bills come on board. Texas, by the way, comes on board in January. So, be careful about that. I also want to be wary of urging the Feds to act just because the states are. One of the things that irks me a lot about all the conferences we go to is that there’s a real lack of fundamental understanding of a lot of the nuances of AI. We toggle between extremes, from the Panglossian view that AI is going to cure everything to the dystopian view that the robots are going to take over and we humans are going to be serving at their whims.

We don’t have nuanced conversations, particularly about health AI, which is incredibly important. The other thing we don’t do is tie AI solutions to the problems we’re trying to solve. Healthcare is heading headlong into a crisis. We’ve got workforce issues; we don’t even need to get into burnout and early retirements. We don’t have enough young people who want to go to medical school, for all kinds of reasons. We have an aging population. By the mid-2030s, there are going to be more people in this country 65 and over than 18 and under. We’re old. That means more healthcare services with fewer clinicians. How are you going to solve that problem? So, I think all of these considerations need to be thought about very carefully.

Mark Sendak:

So, a quick point, because there’s been so much discussion here about policy. First off, René, key decision point one in the Health AI Partnership is to surface and prioritize problems. Always start with a problem. So, we’ve got you there. I want to make sure I included this in my slides: we talk way too much about pre-market controls and not enough about post-market supports. So, okay, I want to give you two analogies. Over the last 15 years in healthcare, what have been some of the biggest technology shifts? EHRs and telemedicine. For EHRs, we had regional extension centers across the country. For telemedicine, we have HRSA-funded telehealth centers of excellence and telehealth resource centers across the country.

I’ve asked in every forum I’ve been in, “Who’s doing technical assistance and capacity building for AI?” I have never gotten a clear answer. So, my big ask is this: this technology is more important than EHRs and telemedicine, and that’s where we have to be focusing.

Emma Beavins:

Great. Thanks. Another question from the audience. So, if a health system purchases a product to help assess outcomes, they will also want this purchase to result in cost savings to justify the price of the purchase. How do we incentivize quality outcomes over cost savings?

Mark Sendak:

Value-based care.

René Quashie:

Which we have or don’t have, Mark.

Laura Adams:

We do not have.

Mark Sendak:

Mostly don’t have.

René Quashie:

Okay.

Mark Sendak:

That’s why accountable care organizations are phenomenal settings for AI implementation.

René Quashie:

See, this is another one of the issues we never talk about. I never hear it talked about when I go to these conferences: coverage and reimbursement from payers, Medicare, Medicaid, commercial payers. Who is paying for this stuff, right? Particularly under the model we have now, mostly fee-for-service with some value-based care integrated in, it’s going to be hard to really adopt emerging health technology. This is not just true for AI; it’s true for all emerging health technology. Given the problems I talked about with the workforce and an aging population, I think we need to look seriously at the way we pay for healthcare services in this country. Without that discussion, a lot of what we’re talking about is only going to exist at the margins.

Laura Adams:

Absolutely. My first-hand experience of that comes from developing the system in Rhode Island, in that living laboratory, where we were able to notify any doctor in a nanosecond anytime one of their patients went in or out of any ED or any hospital in the state, regardless of whether they were a patient of the behemoth or of a small practice. Their data followed them. I had one of the health systems switch off this system, even though it was stellar in terms of bringing down preventable admissions from their outpatient setting. So, I sat down with the CEO across the table and said, “Why is the system being shut off?”

He said, “I have to shut it off because it works.” I said, “What?” He said, “Yeah, because it works. The problem is that with all those preventable admissions, I actually need those people to be sick and come into my hospital, so I can have the revenue in my hospital, or I can’t keep the hospital running for other patients.” He wasn’t saying that flippantly. He said it with anguish on his face, because the payment model that we have right now does not pay for keeping people healthy, and it does not reward you for doing so.

So, I worry about a system like mine that absolutely did the right thing for patients, but when you look at the cost effect on the health system, it was negative. We were delivering a financial hit to them for something that did good in the world. We have a toxic payment model.

René Quashie:

Just one more factor to throw in. If you look at the largest employers in most states, they’re health systems. So, that’s another complexity that we have to consider.

Emma Beavins:

Okay. We are out of time, but I’m going to ask one more question. Just a quick hit down the row for everyone: especially for the staffers who are here today, what’s one thing you want them to take back to their member? Laura, I’ll start with you.

Laura Adams:

I think the one thing I’d like them to take back is: connect with others on this. Let’s get together. These conversations are critically important. They’re the beginning of the consensus. They’re the antidote to the patchwork that we have. So, raise up, look around you, join with others. We all work together with each other in many different forums, and there’s a reason for that. That would be my number one thing. Especially, we’ve got to do something about state-federal collaboration.

René Quashie:

Learn. Learn. Learn.

Mark Sendak:

Engage people doing the work, frontline in the trenches doing the work of implementation.

Dr. Brian Anderson:

Everything they just said; I’ll summarize the same way. “Partner with us” is my big ask.

Emma Beavins:

Great. Thank you.

Claire Sheahan:

Yes. How about a big round of applause for our awesome panelists and our great moderator? I know I learned so much today, and it’s my job to thank you all for coming. We are so grateful that you spent your precious time with us today. I hope that we delivered on the three objectives we set out today: to introduce you to some of the leading lights in this field, to provide you with new resources, and to encourage you to go deeper into this topic. I will say that at the Alliance for Health Policy, we work on educating staff and the policy community on a number of topics. One of the things we heard about AI was that it’s a little bit intimidating.

So, congratulations to all of you for jumping in the pool and learning something that might be a little out of your comfort zone. We look forward to hearing from all of you. We have a ton of programming and resources on our website that we encourage you to use. We have QR codes on your table, and the report, which includes findings from the workshop we held with many different stakeholders in April along with some of the learnings from the summit, is available there in case you missed those. But I want to really thank our panelists one more time. Thanks so much, everybody. If we can give them a round of applause, and take a cookie on the way out.