Lessons Learned From Crisis: How COVID-19 Yielded Lessons for AI and Health Policy
10:55 am
The COVID-19 pandemic proved to be a transformative period for health systems, offering critical insights into managing rapid and complex health care challenges. Panelists discussed lessons learned from the pandemic and their application to the ongoing development of artificial intelligence (AI) in health care and health policy.
Speakers
- Moderator
  - Jennifer Alton, MPP, President, Pathway Policy Group LLC
- Panelists
  - Lee Fleisher, M.D., M.L., Former CMO & Director of CCSQ at CMS; Founding Principal, Rubrum Advising
  - Laura Holliday, M.S., Assistant Director, Government Accountability Office
  - Hilary Marston, M.D., M.P.H., Chief Medical Officer, Food & Drug Administration
- Due to his specific role in government, one of our panelists is not listed in promotional materials.
Summit Details
This panel is part of a larger summit event.
July 25, 2024
Transcript
Speaker 1
With that, I'd love to welcome our next panel, "Lessons Learned From Crisis: How COVID-19 Yielded Lessons for AI and Health Policy," to the stage, and I'm so pleased to introduce Jennifer Alton, President of Pathway Policy Group LLC, who will be moderating today's discussion. As a reminder, you can find Jen's bio, along with the bios of our other panelists, on the Alliance website. Thank you so much.
Speaker 2
Good morning. Very excited to be here with you all. Thank you for joining us. This is going to be a fascinating panel. We're going to be focusing on lessons learned from the COVID-19 pandemic and how they can apply to the challenges and opportunities in AI and health policy. The COVID crisis was transformative for health systems, public health, and government at all levels, and it can offer insights into managing rapid and complex health challenges. Our panelists are well positioned to share these lessons and how they can apply to AI in healthcare and health policy. To my right is Dr. Matt Hepburn from the White House Office of Pandemic Preparedness and Response Policy. Next to him, Dr. Hilary Marston, FDA Chief Medical Officer. Next, Ms. Laura Holliday, Assistant Director at the Government Accountability Office. And last but not least, Dr. Lee Fleisher, former CMO and Director of the Center for Clinical Standards and Quality at CMS. Our first question, which I'm going to start with Matt and Hilary, is a basic one: what lessons can we draw from how the federal government navigated challenges during the COVID pandemic, and how can those lessons be applied to AI in healthcare and health policy? Matt, I'll start with you.
Speaker 3
All right, I get to start. So good morning, everyone. Thrilled to be here. Thank you for having us.
What a great group. It's a perfect size, and hopefully, as the first morning panel, we can shake things up a little bit and at least keep you awake. We have a clock up here, we've got about 38 minutes left, and we're going to get to maybe 1% of how deep we could go on this issue. But to start us off, I'd like to do a little framing based on what Jen talked about and on what we all know. What's amazingly awful but unique about the COVID pandemic experience is that it was universal. It affected everyone. I've been working on pandemic preparedness for 25 years, and I used to go to parties and no one would care: "Oh, that's interesting," and we'd move on. Now when I go to parties, everybody has to tell me their pandemic story, sometimes for an hour or so, and then I say, well, I worked on vaccines, and then we go there. I bring up that point because it affected everyone. Everybody has their story, and the universal experience was disruption. Things changed: some things temporarily, other things permanently. Specific to health systems and to AI and the things we're talking about today, I think it's truly fascinating that we had this massive disruption and are now facing a fundamental choice. Do we go back to the pre-pandemic way of doing things, or do we take advantage of the best of that disruption? It came through a horrible, tragic experience, but can we take the best of it and run with the ball? There are a lot of really good examples we can get into. Vaccines are a good one: things were accomplished that we never would have accomplished at steady state, and in that accomplishment we made so much progress in the field of vaccinology. Now the question is how we sustain our ability to make safe and highly effective vaccines at speed, for all types of medical illness, beyond infectious diseases into oncology and all the other applications where we can use the immune system for health benefit. But I think the fun part for this conversation, as we get into AI and health policy, is that the disruption is continuing and accelerating, and we have a choice. I'm very interested in where this group is going to fall. Do we embrace it? Do we ride the wave? Do we say, this is fantastic, we're going to push forward, be forward-leaning, make a lot of changes? Or not? There's an afternoon panel on the Goldilocks effect, and I loved that phrase; I say Goldilocks all the time when we talk about AI, and I think it's the perfect framing. The choice for our group, based on COVID and its massive disruptions, the telehealth and the Zoom calls and all these things we do now, is whether we can take the best of that disruption and transform human health, and also bring back the pre-pandemic practices that are still good.
Speaker 2
Hilary, what would you add?
Speaker 4
First off, I agree with just about everything that Matt just said. For context, I'm now Chief Medical Officer at the FDA, but I spent much of the pandemic as a pandemic preparedness advisor at NIAID. I had worked with Matt on Zika, Ebola, and many outbreaks over the years, and I certainly never wanted this expertise to be as relevant as it is today. I think Matt and I had our first phone call about COVID in January, and we worked extensively on this together, and in many ways we share some of the same PTSD from those days, unfortunately. So I love the framing of this panel, because there are so many commonalities between pandemic preparedness and response and the way that we're looking at AI. One very important one to remember here, particularly in the context of COVID, is that this change, this disruption, is a marathon, not a sprint. You need to be constantly ready for change. That means getting yourself set for it and getting used to the flexibility, but also being sure that we communicate to interested parties that they should expect change. This is something I think we learned over the course of the pandemic, and reflecting on it, I honestly think it was something we could have done better in the early days: communicating that things will change, that our knowledge will evolve. Another lesson, an area where I think we did better, was understanding that the expertise is not just in government. Early on we recognized the need to reach out to academics, to reach out to the private sector, and to build the sorts of public-private partnerships that are common in some spaces but don't always move that quickly. The key learning was both the ability to reach out and understanding how to do a public-private partnership quickly. Which is a dance, no question. But I would love to see us draw some of that forward into the AI space. Another element, a small thing but directly relevant for AI, is clinical trials. We both worked on Operation Warp Speed together, and in the clinical trials, particularly the vaccine trials, we found ourselves playing catch-up on diversity. It was something we prioritized early on, but nobody has really cracked the nut on how to do this well: how to reach out to communities and engage them early. In the setting of a pandemic, that was even harder, but I think it deserves all of the attention early on. FDA has recently put out its diversity action plan guidance, and we certainly take this very seriously. In the field of AI, in order to ensure that we have truly representative tools, we're going to have to pay attention to that quite early.
Speaker 2
Thanks, Hilary. Lee or Laura, would you like to add anything?
Speaker 5
Yeah, I still remember how I started the pandemic. I was having dinner with Steve Hahn, who was the FDA Commissioner, and he was constantly being called by the Secretary. I realized that when I went home, as chair of Anesthesiology and Critical Care at Penn, I had to stand up our task force to address and be ready for massive critical care needs. So I was really learning as a provider, and I actually continued to be a provider once I joined the administration in July, actually in a career role. What became interesting once the vaccine was developed, as a practicing physician and as someone who led a medical board, is that patient safety includes making sure providers don't transmit something to a patient. I always had to get my flu shot. And the decision to mandate the vaccine in healthcare is one I still remember. The White House said, let's do it in nursing homes. After listening to the ecosystem, really that public-private partnership, and talking to all the health systems, they said to me: please, if you're going to do it, either don't do it or do it for everyone. And we did. We mandated it for all of healthcare and could use the conditions of participation. What became really interesting is that the Supreme Court, this current Supreme Court, said five to four, yes, the Secretary has the authority to protect those individuals inside the facility. We really needed the CDC's data to add, and if you look at the ruling, that's what they anchored on: the fact that CMS had the authority and used the evidence, and therefore Loper Bright would not actually apply here, and the mandate did stand. So as I started to think about AI before I left, and in the last year, it's a safety issue. It's a safety issue of how it's applied. And one of the key issues for many facilities is governance: how we actually use new technology. I almost think of it like getting a new bed or a new anesthesia machine. Do you properly train individuals? And if not, is that a safety issue? I believe it's in your readings: there's a JAMA Health Forum article we put out about three weeks ago that said CMS already has the authority under its safety regs to ask, if AI was part of the solution, was there proper governance? Was there proper education? If not, then CMS or the accrediting organizations could actually cite the facilities. I almost think there are three issues. One, how does the practitioner apply the AI? Two, how does the system itself educate and provide the rules around its use? And three, how does the manufacturer think about product liability? Do they do some of the things that we'll be discussing shortly? I think that middle issue, the governance, the structure, is something that the government will be able to regulate through its current authorities.
Speaker 2
Thank you. Laura, would you like to add anything?
Speaker 6
Sure, I'll jump in. I wanted to build on some of what Matt and Hilary were talking about with respect to disruption. One of the things we learned during the pandemic is that it's absolutely critical to prepare for disruption, and this applies very well to AI: it has the potential to be hugely disruptive, and I think we're starting to get a sense of how to prepare for it. In the case of the pandemic, there were problems we were aware of ahead of time, such as knowing that our health IT has a lot of interoperability issues, issues where formatting varies a lot, and even access issues, where some hospitals, for example, don't have electronic health record systems. So we knew there were a lot of problems and that they're hard to address, but at the same time, if we had made more progress on them, the IT systems could have been a lot more useful during the pandemic. Taking that over to AI, we have those exact same problems, and they're really important for AI: data is the foundation, and getting our data house in order is critical. In addition, there's a lot we've learned about AI in healthcare specifically over the past few years, and understanding those lessons and acting on them is very important. One is scaling. We know that scaling is very difficult. Scaling is essentially when you build a tool for one environment, say a major research hospital, and then want to apply that tool to another environment; it can be really problematic. That's partly because the tool is trained on that one environment, so certain assumptions end up getting baked into the model: assumptions about the tests that are available, assumptions that specialists might be available, and even assumptions about the population served. If you take that model from a higher-resource setting and apply it to, say, a rural or low-resource setting, it doesn't work. So really understanding that, and figuring out how to deal with it and how to work on scalability, is critical. A couple of other areas I'll just mention: we know the tools can be biased, and it's essential to evaluate them to understand whether they're introducing bias. And then there's understanding the functionality of the tools. We've seen cases where they're not learning the right thing, learning, for example, that smokers are less likely to develop severe COVID, or that asthmatics are less likely to develop severe COVID. Those are things you only catch by evaluating the tools, and we need to carry that forward as we roll out more tools and really improve evaluation efforts.
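To make the scaling failure described above concrete, here is a minimal sketch on synthetic data; the features, the numbers, and the "shortcut" signal are all hypothetical. A model trained at one site learns to lean on a site-specific shortcut and loses accuracy at a second site where that shortcut is absent.

```python
# Minimal sketch (synthetic data): a model trained at one site exploits a
# site-specific "shortcut" feature and degrades when deployed at a second
# site where that shortcut no longer correlates with the outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, shortcut_strength):
    """Simulate one hospital: one genuine clinical signal, one site-dependent shortcut."""
    y = rng.integers(0, 2, n)                                 # outcome (e.g., severe disease)
    real = y + rng.normal(0, 1.0, n)                          # genuine signal, same everywhere
    shortcut = shortcut_strength * y + rng.normal(0, 1.0, n)  # e.g., which tests get ordered
    return np.column_stack([real, shortcut]), y

X_train, y_train = make_site(5000, shortcut_strength=2.0)     # training site: strong shortcut
X_deploy, y_deploy = make_site(5000, shortcut_strength=0.0)   # new site: shortcut absent

model = LogisticRegression().fit(X_train, y_train)
print("AUC at training site:  ", round(roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]), 2))
print("AUC at deployment site:", round(roc_auc_score(y_deploy, model.predict_proba(X_deploy)[:, 1]), 2))
```

The particular numbers do not matter; the habit does: re-evaluate the tool at every new site, against that site's own population and resources, before trusting it there.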
Speaker 2
Great, thank you. So I'm going to turn to Lee, maybe to build a little bit on your previous comments, really looking at how health systems managed the rapid deployment of new technologies during the pandemic. What best practices can be adopted for the development and implementation of AI, and what role does government play in that process?
Speaker 5
Well, you actually landed exactly on the area you left off on, which is interoperability. I remember having multiple conversations with Hilary during the pandemic about the fact that hospitals had to report out what was happening locally so the CDC could make decisions, but they were doing it on spreadsheets, they were doing it on phone calls, and, as you mentioned the rule, in the nursing homes it became nearly impossible. And yet under the conditions of participation, if they didn't report, they could be terminated from Medicare. That's a huge hammer for hospitals, since there are no civil monetary penalties in the hospital setting; there are in nursing homes. So, one hammer. How do you actually deploy that? That's one of the things government has to think a lot about. But I think learning the frailties of our interoperability was one of the key lessons that really made a difference. The other was when mpox came around. I don't know if Hilary remembers these, but we had weekly, almost daily, calls in which the academic medical centers would say, we can stand up a test at our site for our own individuals, as we did during COVID. And how do you approve that? We know what happened with the CDC test at the beginning of COVID: it didn't perform well at all. There's the role of the FDA in saying this has our stamp of approval, but one of the things we learned is that we didn't necessarily know who got the test; we couldn't get the data back. In fact, we never learned who got the vaccine during that first tranche. So when we tried to go out and deliver vaccines to those who hadn't gotten them in the advance vaccination (thank you, Operation Warp Speed, it was brilliant), we didn't have the data. When CMS said we wanted to take care of the most vulnerable, we had no clue who they were, because nobody submitted bills. So this idea of data flow with new technology, particularly in lab-developed tests and in testing: how do we stand up novel technologies and still have that safe and effective infrastructure approved by the FDA?
Speaker 4
And it's not just novel technologies, right? Tests that are used every day, like hemoglobin A1c for diabetes monitoring, are formatted differently depending on which electronic health record you're using. So if you're trying to aggregate real-world data for real-world evidence, you get a mishmash of things. That should be easy, right? That should be the easiest thing we can do, and yet it is not something we've managed to conquer. So we have some work to do there.
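As an illustration of the harmonization work being described, here is a hedged sketch that reconciles hemoglobin A1c values exported in different formats into one canonical unit. The record formats are hypothetical stand-ins for what different EHRs emit; the IFCC-to-NGSP conversion is the published master equation (NGSP % = 0.09148 × IFCC mmol/mol + 2.152).

```python
# Sketch: normalize A1c readings from differently formatted EHR exports to NGSP percent.
from dataclasses import dataclass

@dataclass
class A1cResult:
    value: float
    unit: str   # "%", "fraction", or "mmol/mol", as seen in different hypothetical exports

def to_ngsp_percent(result: A1cResult) -> float:
    """Return the reading as NGSP percent, whatever the source format."""
    if result.unit == "%":
        return result.value
    if result.unit == "fraction":        # some systems store 0.065 rather than 6.5
        return result.value * 100.0
    if result.unit == "mmol/mol":        # IFCC units, common outside the U.S.
        return 0.09148 * result.value + 2.152
    raise ValueError(f"unrecognized A1c unit: {result.unit!r}")

# Three exports of the same clinical value reconcile to roughly 6.5%:
for rec in (A1cResult(6.5, "%"), A1cResult(0.065, "fraction"), A1cResult(48, "mmol/mol")):
    print(rec, "->", round(to_ngsp_percent(rec), 2))
```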
Speaker 2
Either of you want to add anything? If not, we can move on.
Speaker 6
Sure, I can jump in for a minute, going essentially back to disruption and preparing for disruption. I'd like to talk a little bit about the example of contact tracing apps; I think there are lessons we can learn from their rollout. As a country, we didn't have a nationwide system prior to COVID, and we still don't. What happened was that partway into the pandemic, Google and Apple offered to develop a system. They got it in place, and then states piecemeal started adopting it and tailoring it to their needs. But this was a system developed after the fact, so there wasn't a lot of time to do the testing and the maturation that one might normally do, and issues came up; a lot of issues, really. Some were accuracy; others were about understanding whether it even worked, whether it affected behavior; and adoption was quite low. About a year and a half into the pandemic, only about half of U.S. states had adopted contact tracing apps. So something that could have been very helpful had it been rolled out at the very beginning was not very useful. I raise that example partly in the context of trying to get everything in order for AI. And to go to best practices for a minute: we talk about best practices quite a bit in our reports, but one aspect that could be really helpful is helping hospitals and providers understand the tools better. What are they? How do they work? What are their limitations? How do you test them in your setting, to be sure a tool works for your patient population and the resources at your hospital, and to understand whether it's working as intended? And lastly, understanding how to integrate these tools into the workflow, because that can be very complex as well. There are actually so many ways in which best practices can be helpful; that's just one example, but further work in that area could be useful.
Speaker 4
And just to add one small thing there: bringing end users along, which I think is something the AI community can get ahead of. Folks are going to need to understand what's coming with all these different tools in order to ensure that they're willing to adopt them, so that you actually get the promise realized. It takes work. It's not something that every institution is necessarily set up to do, but it's an important thing to pay attention to. The other way that helps is with pre-bunking against misinformation. Obviously there are plenty of lessons on that from COVID-19, but I can see the same issue coming up with AI.
Speaker 5
And I know we may want to go to misinformation, but on the thing you talked about with respect to the states and Apple and Google: one of the questions I was asked frequently in my previous role is how we ensure, from a cybersecurity standpoint, that all of our devices actually have all the patches and are up to date, and how we know where everything is deployed, à la the fact that we can't always get information from the states to the CDC about where there are events. So how are we going to know if there's been an update to an AI algorithm? Or, even more importantly, if an AI algorithm has been sunsetted because it actually has some significant problems, how are we going to find who's using it to say, please stop using it? Do we have that same system of recalls? When I get the letter from my car dealer saying there's a recall, I have about six weeks to take the car in, or maybe faster. Will we have that in healthcare, given the way healthcare is regulated and dispersed, and information is dispersed?
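No such tracking system exists today; as a purely hypothetical sketch of what is being asked for, a deployment registry would record which site runs which algorithm version, so that an update or a recall notice could actually reach every affected user.

```python
# Hypothetical sketch of an AI deployment registry that supports recalls.
from dataclasses import dataclass, field

@dataclass
class Deployment:
    site: str
    model_id: str
    version: str
    recalled: bool = False

@dataclass
class ModelRegistry:
    deployments: list = field(default_factory=list)

    def register(self, site: str, model_id: str, version: str) -> None:
        """Record that a site has put a specific model version into use."""
        self.deployments.append(Deployment(site, model_id, version))

    def recall(self, model_id: str, version: str) -> list:
        """Flag every deployment of a recalled model version; return the sites to notify."""
        affected = [d for d in self.deployments
                    if d.model_id == model_id and d.version == version]
        for d in affected:
            d.recalled = True
        return [d.site for d in affected]

registry = ModelRegistry()
registry.register("General Hospital", "sepsis-risk", "1.2")
registry.register("Rural Clinic", "sepsis-risk", "1.2")
registry.register("Rural Clinic", "sepsis-risk", "2.0")
print("Notify:", registry.recall("sepsis-risk", "1.2"))   # only the v1.2 sites
```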
Speaker 2
Great. Well, I want to keep us moving along and make sure we have a little bit of time for questions from the audience, so please be thinking about what you would like to ask. You started talking about misinformation, which leads to my next question: how can we balance the need for thoughtful policies with emerging technologies in a quickly evolving landscape, and how do these decisions impact public trust in science and government? Laura, I'll start with you and then open it to the rest.
Speaker 6
Sure, and I'll keep it brief given the time constraints. One thing I'd like to emphasize, which I think a lot of us know, is that healthcare is not really the setting to move fast and break things. In the IT sector, that's kind of a mantra: you move fast and then you iterate. That iteration is still essential with AI in the healthcare setting, but if we want to maintain trust, or build trust is perhaps a better way to say it, being very thoughtful and careful about rolling out the technology is essential. That means first rolling things out on a small scale and exploring how they might work elsewhere before rolling them out widely, and being very deliberate, making sure you understand how a tool is going to impact different communities. Just take it slowly and safely.
Speaker 5
On trying to convince the nursing home community to take the vaccine: I'm the wrong person to convince them, so we had a behavioral science group help us. They were really good at saying, you need to move the movable middle and understand what they're worried about. One worry was the fact that it was Operation Warp Speed; it would be interesting to hear whether you think that's still the right name, given how many individuals thought it meant the vaccines moved faster than they should have. The VAERS system at the CDC was another problem, and Hilary may or may not want to comment on this. The issue was that some of the leaders who talked about it were so anchored. In our heads, as scientists, as clinicians, as policy experts, we knew all the caveats, but we didn't say them. We actually said: you shouldn't wear masks; you should wear masks; it's six feet; it's three feet. And a lot of the communication experts said it takes just one counterfactual for somebody to anchor on. So we have to be more careful about stating all those caveats and being less definitive. Now, my question to the people on the front lines is: that's a really hard thing to do.
Speaker 3
Oh yeah, it's really hard to do. But to broaden the issue: it's going to get way worse, and it's ultimately going to be way better for our society. If we broaden the issue to what AI can do for healthcare, or for health, it can do extraordinary things. For people who say, well, I don't really want AI, let's keep things the way they were, I ask: is the healthcare system awesome now? Is it perfect? Are we doing great? Is health in the United States great, such that we can keep the status quo and make incremental change? I don't feel that way. It's hard to get healthcare. We have a lot to work on, in my humble opinion. But here's what's going to happen: if we do AI really well, we're going to truly understand the complexity of health and disease and be able to take it into account, which we have never been able to do before. When I was in med school, everybody over a certain age got an aspirin a day. Everybody take an aspirin a day. Why? Because it helps, you won't get heart attacks, just take an aspirin a day. Think about how foolish that sounds now, but that's what we did. There is this need in our healthcare system to make things easy, because then we can develop a drug, everybody can take it, we come up with one solution, and everybody's healthy again. Health doesn't work that way; it's just the opposite. Given the massive complexity of all the different factors that make us up, individualizing the most healthy thing for each of us to do today, or this week, or next month, is going to be super complex. But I'll tell you what: if we do it really well, we're probably going to find that things we do right now, which are very dogmatic in healthcare, are actually not good, not helpful, for some people at least, maybe even for all people. So how do you message that? You're never going to be able to message it simply, because people aren't going to get it. If we find that half our population should take an aspirin a day and the other half shouldn't, how do we message that? People are going to say, I'm so confused, this is so hard, this is so complicated. And that's our reality. So if we're going to transform health and disease, we're going to have to face the idea that it's going to be super confusing. We're going to have lots of contradictory messages. We're going to have all the science in the world saying we should do one thing, and then two years later it's going to say the opposite, and we're going to have to oscillate back and forth in that ambiguity and uncertainty. I think that's the future of health. The awesome part is that it creates a system where people are going to have to be individually responsible. The solution is impossible in some ways, but we have to improve our individual ability to make good health decisions, to understand and process health information, to make each one of us more healthy. As for that individual empowerment, people say, well, people don't listen to the government, things like that. OK, but if the government is telling you something really stupid, it's good that we don't listen, right? There's some beauty in not listening to all the messages you get. But I think that type of transformation of health is going to happen regardless, and what we have to do is figure out how people take the best of that information so they make good health choices.
Speaker 2
Thanks, Matt. Hilary, would you like to add anything on public trust?
Speaker 4
You know, I think I already commented on it: basically, trying to figure out how to set expectations that there will be changes in our understanding. That's the key, and it's so hard to do.
Speaker 2
OK, great. Well, we have 10 more minutes, so I'm going to ask one more question; please get ready with a question if you have one. Matt, I think, already got us down this path a little bit, so I might skip you on this final one. What's the biggest opportunity you see for AI in healthcare moving forward, and how can government policy support it?
Speaker 4
OK, so I don't know if this is the biggest. We put out two discussion papers, one for devices and one for drugs, about all of the potential AI has for medical product development, which is just one portion of what we're talking about here. But one of the areas I'm most excited about is the implications for the rare disease community. This is not a blinding insight, but these folks spend so much time on the diagnostic odyssey: not knowing what's wrong with them, and having so many frontline clinicians not know what it is because they haven't seen it before. AI has the potential to collapse that, but it also has the potential to provide tools to connect that community better. There's a loneliness in having a rare disease, and social media can help with that to an extent, but I'm excited to see a more dynamic network evolve from these tools.
Speaker 2
Laura.
Speaker 6
Similarly, I don't know if I can pick one favorite, but I think AI in healthcare can be especially useful in a strained environment: when you're strapped for providers, when you're strapped for expertise such as specialists, when you're strapped for time. That's actually part of the reason I think this panel was such a good idea: it's important to think about how AI could be beneficial in a pandemic environment, and also how it could be useful in other strained environments, say a low-resource setting or a rural setting, and to look at that from the ground up. There are a lot of tools that could be really helpful. We all know about the administrative tools that can potentially cut the burden of note-taking; they may also be able to help with diagnostics, which is what we're seeing, and with triaging patients, so many different things. But what I think we aren't seeing as much of is a bottom-up analysis of the needs. In the defense world, they call it a capability gap analysis, where you take the time and look: if we had a pandemic in the future, what would be really useful in terms of AI, and what can we do about it? And the same for some of those other strained settings.
Speaker 5
So I'll lean into what you just said, and I think it's a bigger issue. Being at a medical school, I always ask the students: what's the difference between those who graduate today and go out into private practice and those who stay in academia? Are they smarter? The only difference is that you keep having students who, you're scared, are going to ask you a question about something new that has come up, and you have to stay up to date. And it doesn't have to be a rural setting; it can just be that medicine advances. So the fact that AI will be able to give you a broader differential diagnosis and more options for treatment is tremendous. The fear is that clinicians, and I mean clinicians writ large, doctors, nurses, all parts of the healthcare system, are not trained to think about how to use what I would call Bayesian logic: this is what I think the patient has; this is what the algorithm tells me; now what do I think the patient has? There have been a lot of studies showing that if the algorithm gets it wrong, you're going to dismiss the algorithm out of hand entirely. How do we rethink the human-algorithm interface to train people better? I think that's going to be the promise. It's huge; I agree entirely with Matt. But we need to retrain people in how to use it correctly.
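As a worked illustration of the "Bayesian logic" described here, with all numbers hypothetical: the same positive flag from the same algorithm should move a clinician's judgment very differently depending on the prior probability of the diagnosis.

```python
# Bayes' rule: combine the clinician's prior with the algorithm's flag.
def posterior(prior: float, sensitivity: float, specificity: float) -> float:
    """P(disease | positive flag), given the algorithm's sensitivity and specificity."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Same hypothetical algorithm (90% sensitive, 90% specific), two clinical priors:
for prior in (0.01, 0.30):    # incidental screen vs. strong clinical suspicion
    print(f"prior {prior:.0%} -> posterior {posterior(prior, 0.9, 0.9):.0%}")
# prior 1% -> posterior ~8%; prior 30% -> posterior ~79%.
# Neither dismissing the flag nor accepting it at face value is the right move.
```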
Speaker 3
And just one comment to add about the policy; this is going to be too generic for this audience, but I think it's an invitation for you to think about it. We can't wait to read what we hear from you today and on an ongoing basis; the Alliance for Health Policy really does inform us. I always say policy and strategy is about priorities. So, of the different things we're talking about, what is most important? I think you're going to get to the Goldilocks question and making sure AI doesn't do those things. But I think the really fun part is to say: we don't have to geek out on the large language models. What we can say is, what is the health system that we want, and what are the most important things we want that health system to do? Then we can write policy to make sure those are the top priorities for the AI investments and everything else, and we can shape the system. So, do we want our healthcare system to make as much money as possible from patients? I would say no, I don't think that's a priority; I think we should do the opposite. I think we should make healthcare freely accessible for everyone, and we should make sure the healthcare system delivers massive health benefit, both at steady state and when someone is really sick. So: articulate the most important things you want our health system to do, and then we can write the policy. We'll write the policy to make sure we have AI programs that do those things and don't do all the bad things our healthcare system is doing now.