What is Responsible AI?: Mitigating Bias, Driving Equity, and Maximizing Benefits

July 25, 2024
11:05 am – 11:55 am

Understanding and implementing responsible practices is paramount for using AI as a tool to advance health equity. Panelists discussed strategies to address known shortcomings, including bias in health care AI, as well as considerations for mitigating community harm and maximizing benefits. Experts also shared insights on creating AI products and policies to improve the health care system for all populations.

Speakers

  • Moderator
    • Anna McCollister, Consultant, Four Lights Consulting / Sequoia Project and Health Information Technology Advisory Committee
  • Panelists
    • Arlene Bierman, M.D., M.S., Chief Strategy Officer, Agency for Healthcare Research and Quality
    • Stephanie Enyart, J.D.,  Chief Public Policy and Research Officer, American Foundation for the Blind 
    • Elliott Green, Co-Founder and CEO,  Dandelion Health 
    • Jenny Ma, M.A., J.D., Principal Deputy Director, U.S. Department of Health and Human Services

Summit Details

This panel is part of a larger summit event.

July 25, 2024

AI in Health: Navigating New Frontiers Summit | Thursday, July 25, 2024 | Barbara Jordan Conference Center – 1330 G St NW, Washington, DC 20005 | 9:00 AM Registration | 9:30 AM – 3:30 PM Panel Presentations. This year, the Signature Series delves into the transformative power of Artificial Intelligence (AI) in both health...

Speakers

Anna McCollister

Consultant, Four Lights Consulting / Sequoia Project and Health Information Technology Advisory Committee
Anna McCollister is a health technology entrepreneur, strategic consultant and reform advocate. Her work focuses on creating new ways to involve health care constituents in critical aspects of health research, care and treatment, as well as data governance, evidence development and policy reform. In addition to being a member of the board of directors for the Sequoia Project, Anna is leading the Consumer Engagement Strategy Workgroup, a new Sequoia Project workgroup aimed at engaging with consumers to support health data interoperability and enable greater patient access to their personal health data. Anna has founded two health technology startups: VitalCrowd, a Web-based collaboration platform for crowdsourcing the design of health research, and Galileo Analytics, a visual data exploration and analytics company aimed at democratizing access to and understanding of complex health data. Previously, she served as Chief Advocate for Participatory Research at the Scripps Research Translational Institute (SRTI). Through that work, Anna was a Co-Primary Investigator for the “All of Us” Research Program, a centerpiece of the National Institutes of Health’s Precision Medicine Initiative. Since 2019, Anna has worked as an independent consultant, advising C-suite leaders and developing strategic approaches for engaging stakeholders, patients and advocacy groups in critical aspects of corporate and public policy. Anna’s work covers an array of issues and topics, but focuses heavily on data access, use, ethics and governance, with a goal of building corporate programs that enable companies and organizations to earn the trust of patients through action. Anna’s passion for innovation in healthcare is rooted in her personal experiences living with type 1 diabetes. As an entrepreneur and advocate, she was among the founders of the #WeAreNotWaiting movement, a global patient-led hacker movement that helped accelerate the pace of diabetes device data access, connectivity and interoperability. She speaks frequently about the promise of digital health, the critical need for patient data access and the imperative and promise of using “real world” data to gain better insight into treatments for complex illness. In 2022, Anna was appointed by the GAO to serve on the federal Health Information Technology Advisory Committee (HITAC), which advises the Office of the National Coordinator for Health IT on federal health IT policy. Previously, Anna was a member of two FDA advisory committees. She currently serves as an advisor on a range of non-profit health projects and groups, helping to facilitate patient-centered design of products, policy and research.

Arlene Bierman, M.D., M.S.

Chief Strategy Officer, Agency for Healthcare Research and Quality (AHRQ)
Arlene S. Bierman, M.D., M.S., is Chief Strategy Officer of the Agency for Healthcare Research and Quality (AHRQ) in the U.S. Department of Health and Human Services. Previously she was director of AHRQ’s Center for Evidence and Practice Improvement (CEPI), consisting of five divisions (the Evidence-Based Practice Center Program; the U.S. Preventive Services Task Force Program; Digital Healthcare Research; Practice Improvement; and Healthcare Delivery and Systems Research) and the National Center for Excellence in Primary Care Research. Dr. Bierman is a general internist, geriatrician, and health services researcher whose work has focused on improving access, quality, and outcomes of health care for older adults with chronic illness in disadvantaged populations, and she has published widely in these areas. Dr. Bierman has also developed strategies for using performance measurement as a tool for knowledge translation, as well as conducted research to increase policymakers’ uptake of evidence. She is developing an interoperable shared electronic care plan to improve care delivery for people living with or at risk for multiple chronic conditions. As a tenured professor, she held appointments in Health Policy, Evaluation, and Management; Public Health; Medicine; and Nursing at the University of Toronto, where she was the inaugural holder of the Ontario Women's Health Council Chair in Women's Health and a senior scientist in the Li Ka Shing Knowledge Institute at St. Michael's Hospital. She was principal investigator for the Project for an Ontario Women’s Health Evidence-Based Report Card (POWER) study, which provided actionable data to help policymakers and health care providers improve health and reduce health inequities in Ontario. Dr. Bierman has served on many advisory committees, including the Geriatric Measurement Advisory Panel of the National Committee for Quality Assurance, the board of Health Quality Ontario, and the Institutional Advisory Board of the CIHR Institute for Health Services and Policy Research. She received her M.D. degree from the University of North Carolina School of Medicine in Chapel Hill, where she was a Morehead Fellow. She completed fellowships in Outcomes Research at Dartmouth Medical School and in Community and Preventive Medicine at the Mount Sinai School of Medicine, and served as an Atlantic Philanthropies Health and Aging Policy Fellow/American Political Science Association Congressional Fellow.

Stephanie Enyart, J.D.

Chief Public Policy and Research Officer, American Foundation for the Blind
Stephanie Enyart is a disability rights leader with 20 years of experience advocating for people with disabilities. She currently serves as the American Foundation for the Blind’s Chief Public Policy & Research Officer. She launched the AFB Public Policy and Research Institute in 2019, which conducts mixed-methods research that informs AFB’s advocacy. She provides strategic leadership for the policy and research functions in the key focus areas of technology and transportation across the lifespan. In April 2023, President Biden appointed her to serve on the U.S. Access Board. Ms. Enyart spent over a decade working in state and federal government and has extensive expertise in nonprofit and community organizing. Ms. Enyart holds a B.A. from Stanford University and a J.D. from the UCLA School of Law, where she served as an Editor-in-Chief of Recent Developments for the UCLA Women’s Law Journal.

Elliott Green

Co-Founder and CEO, Dandelion Health
Elliott Green is the co-founder and CEO of Dandelion Health — a real-world data and clinical AI platform powering next-generation precision medicine and personalized care. His career has spanned finance and healthcare technology, culminating in executive roles within healthtech startups that are focused on payers, providers, life sciences, and healthcare data. Elliott brings an in-depth understanding of the operational components of the U.S. payer, provider, and life sciences ecosystems, as well as an ability to establish and manage complex institutional partnerships. He has taken all of this experience into his latest AI-focused venture, Dandelion, where he is driving impact in the transition to precision medicine and personalized care.

Jenny Ma, M.A., J.D.

Principal Deputy Director, U.S. Department of Health and Human Services (HHS)
Jenny Ma serves as the Principal Deputy Director at the Office for Civil Rights at the U.S. Department of Health & Human Services. In her role, she oversees the implementation and enforcement of civil rights and privacy laws and related policy and strategic initiatives. Jenny also coordinates work with OCR’s 11 national offices and supports the Director in the management of day-to-day operations of the office and staff. Before joining the Biden-Harris Administration, Jenny was a Senior Counsel at the Center for Reproductive Rights, where she led complex, high-stakes reproductive rights cases in state and federal courts, including as counsel in Dobbs v. Jackson Women’s Health Organization before the U.S. Supreme Court. Prior to these roles, Jenny served as a law clerk to the Honorable John T. Noonan, Jr. of the U.S. Court of Appeals for the Ninth Circuit and the Honorable Robert W. Sweet of the U.S. District Court for the Southern District of New York. She also worked at the law firms Jones Day and Patterson Belknap Webb & Tyler LLP, where she co-authored an amicus brief in Obergefell v. Hodges. Jenny is a graduate of Columbia Law School, where she was a Harlan Fiske Stone Scholar, a Ms. JD Fellow, and received the Louis Henkin Outstanding Note Award for her scholarship. She also holds an M.A. from the Columbia Graduate School of Arts and Sciences and a B.A. with honors from Wesleyan University. Jenny recently received the National Asian Pacific American Bar Association’s Women’s Leadership Award, which recognized professional excellence in the legal field and demonstrated leadership in the advancement of women and women’s issues.

Transcript

Speaker 1

It is now my pleasure to welcome our next panel to the stage: What is responsible AI? Mitigating bias, driving equity and maximizing benefits. I’m excited to welcome Anna McCollister, who will moderate this discussion. Anna is an independent consultant focused on technology, health data use, access, governance and policy. She’s also one of the early founders of the patient-driven white hat hacker movement, #WeAreNotWaiting, and was appointed to and currently serves on the Health IT Advisory Committee. Welcome to Anna and our panelists.

Anna McCollister

Hello everybody. Can you guys hear me OK? It’s truly a pleasure to be here. I’ve been looking forward to this meeting. It’s such an important topic and one that is near and dear to my heart in many respects. Before we get started, I wanted to give you a little bit of context for why I think this is important and what this means for me. I have a somewhat eclectic background. I started in journalism, then got into economic policy and foreign policy, and for personal reasons I decided to switch gears and get into healthcare. I did healthcare public affairs for a while and over time became more and more frustrated with over-reliance on what I think are pretty clumsy blood-based biomarkers, and on randomized controlled trials that measure those things, which results in the identification of outcomes measures that frankly are irrelevant to people like me who live with type 1 diabetes. The big endpoint in type 1 diabetes is hemoglobin A1C, which is a little bit like using the Farmer’s Almanac to plan your afternoon: it’s moderately interesting, not helpful, whereas what we have now are continuous glucose monitors that generate data every 5 minutes that are far more accurate and far more meaningful. And as we began as a patient community to see this data come online, we began to realize that there’s a lot of hope and potential for using new and novel data sources and very fine-tuned algorithms, not just for clinical care, but for research. So about 13 years ago, with my background in public affairs, I got into co-founding a company that does big data analytics, with the goal of helping researchers and perhaps stimulating the development of what we now call digital biomarkers. And what I’ve seen over the years has been frustrating, because I don’t feel like we’ve progressed sufficiently in the creation of digital biomarkers. And one of the hopes that I have for AI is that we will be able to use new and novel data sources to come up with better ways of measuring what it means to be healthy versus sick, and I think as we get into generative AI, to some extent the large language models, but really generative AI, that is a significant hope. And I think that if we can do that, it will help mitigate some of the potential for bias that exists in some of these analog algorithms that we’ve seen over many years, that have been used by, you know, medical societies to create clinical practice guidelines. On the other hand, as a patient who has type 1 diabetes and all the complications, I take 20 different meds, use eight different medical devices, and have 16 different physicians. It’s a lot to manage. I have seen, and there are, you know, some case studies that have been covered in the media, the way that algorithms or AI can be used to create barriers and burdens on patients that are, you know, from my perspective, somewhat diabolical, and that need to be considered when we think about how we actually apply algorithms and AI, particularly as they become more and more sophisticated and nuanced and get put behind black boxes. How do we actually apply that, and for what reasons? What are we maximizing for? And what are the ways, from a regulatory perspective, whether you know that’s the federal government, or from a private policy perspective in terms of corporate policy and governance, what are the systems that we can set in place to stop those sorts of uses of algorithms that can create bias and generate further divisions and further inequities in care?
How do we keep that from happening, and how do we keep these algorithms from becoming tools of discrimination, whether that’s based on race or sex or physical disability or other types of attributes that may be less obvious as it relates to, like, complexity of disease, etcetera? So these are some of my concerns as a patient and as an innovator, and I’m super excited to dig into this in a little bit more detail with some of our panelists. I’m going to let each of them introduce themselves to you. It’s an esteemed panel with a lot of great experience. So first I’ll hand it to Jenny.

Jenny Ma

Hi everyone, great to be here today, and thank you, and the panelists, for sharing space. I’m really excited to talk about this topic. My name is Jenny Ma. I am the Principal Deputy Director at the Office for Civil Rights at U.S. Health and Human Services. I am an appointee of the Biden-Harris administration, and prior to that, my kind of wheelhouse was as a litigator working for healthcare facilities all across the country. I worked in reproductive rights and health, so I have worked with the smallest of providers, and I have now worked in the policy space in my current role. OCR is an enforcement agency; we’re small and mighty. One of the key highlights, and hopefully you know this, is that we’re now done with the drafting of Section 1557 of the Affordable Care Act, which is the non-discrimination provision, which we’ll talk about a little bit more today. I’m here because, for the first time in that regulation, we specifically outline responsibilities of covered entities as it relates to AI, and obviously, as a non-discrimination reg, we very much care about the topic here of mitigating bias and driving equity and maximizing benefits, so that, you know, regulators can be in partnership with covered entities and with developers to make sure that AI fulfills the kind of promise that we’ve all been talking about, that we all hope for, without the possibility of kind of veering off into some of the potholes that we all know can be there, and which we’ll discuss today. So thanks so much.

Arlene Bierman

Hi, I’m Arlene Bierman. I’m also very happy to be here today. I actually am now the Chief Strategy Officer for the Agency for Healthcare Research and Quality. Some of you may not be familiar with AHRQ, but our mission is to really improve the quality and outcomes of care for everybody who uses the health system. And we’re a research agency, so our mission is really to provide the evidence to inform decision making by others. I’m here today because we did a series of consultations and work to develop guiding principles to use algorithms and AI to mitigate, and not exacerbate, bias. I’m also a general internist and geriatrician researcher, and my focus is really on improving care for older adults who have multiple conditions and doing coordinated, integrated care. I see the promise of AI in sort of doing better, more precise, person-centered care, but also the risks of creating burden and bias. So we need to get it right.

Elliott Green

Good morning and thank you very much for the invitation. My name is Elliott Green. I’m the co-founder and CEO of Dandelion Health. I suppose on this panel I somewhat represent industry. Dandelion Health is a real-world data platform, or, as Anna mentioned, kind of one of the novel big data sources that we’re hoping to use to help mitigate this bias and improve the quality of AI, and ensure that what is developed is developed for the right patients at the right time, and that kind of no stone is left unturned.

Stephanie Enyart

Hi, I’m Stephanie Enyart. I’m the Chief Public Policy and Research Officer at the American Foundation for the Blind. I run a policy and research institute, so I lead a team of PhD-level researchers, all of whom have the lived experience of blindness, to look at a variety of issues related to how different aspects of our disability experience, you know, affect our lives, and then we leverage that with our policy advocacy. Outside of that job, I also have a presidentially appointed role as a public member of the United States Access Board, and we’re looking at a lot of the regulatory and technical assistance aspects of various disability laws.

Anna McCollister

So one of the things I’d like to start with, and Arlene, I’d like to kind of begin with you, in part because you’ve written several papers, and some of the work that you’ve done looking at AI governance models and issues that might be created was, you know, in response to congressional requests, actually. But one of the things that I thought was really important, and that frightens me as a patient, frankly, is that you reference the use of algorithms that are already in place that have been demonstrated definitively to contain bias, and there have been real-world, real-life clinical implications, things like estimated glomerular filtration rate (eGFR), where traditionally there’s been a different one for people of African American descent versus non-African Americans, as a great example. You cite several others in your papers, and I’d like to talk to you a little bit about those, or at least mention them, and talk about how the governance models that you helped create might help. These analog models have sort of cemented and been very stubborn to change when it comes to clinical guidelines and biomarkers, and that’s in a pretty visible world. What happens when that gets behind the black box, and how can the governance model that you helped create help mitigate against those kinds of concerns?

Arlene Bierman

Yeah. So bias in clinical decision making and algorithms is not new. And actually the work that AHRQ embarked on was at a request by Congress, but it was also sparked by a paper that was in the New England Journal of Medicine, by Vyas and colleagues, which really catalogued all of the existing algorithms, and this is not AI, these are standard, you know, linear models, and how they had bias. And so we were very systematic in how we approached it. The first thing we did was we put out a request for information in the Federal Register about what people knew about bias, what people knew about specific algorithms. We got so many responses. We got 500 pages with references. It was really rich. So we started with a qualitative analysis of those responses, which was published in JAMA Health Forum, so that’s available there if people are interested, and there’s so much work going on in mitigating bias in traditional algorithms. And, you know, I’ll give a good example just to make it concrete. There was an algorithm, there is an algorithm, for once a woman has had a C-section, can she have a vaginal birth after the C-section? And there was actually a variable for race in there. So African American women were less likely to be offered the ability to, you know, try having a vaginal birth after they’d had one C-section. And so what happened was, when they started looking at the algorithm and finding out why this was, it was really hypertension driving it, and they were able to replace race with hypertension in the algorithm, and it actually benefited white women too, because it’s a more accurate algorithm. So, just to note, there are dozens of examples. So we learned from our, you know, request for information in the Federal Register that we had a big problem, but we also learned that people were sharing solutions, like with kidney transplant, where lots of people have been working on fixing this. And so then our next step was, well, we just didn’t want to document the problem, we wanted solutions, and so we decided we wanted to develop guiding principles for how to develop and use and monitor algorithms to mitigate bias. So what we did was we convened an expert panel and did extensive stakeholder consultation, and the result was guiding principles for the use of algorithms, which was published in JAMA Network Open. And we would hope, you know, and basically the bottom line from that paper is that, you know, we have guiding principles to develop and use algorithms to address bias, but it has to be baked in. It has to be explicit across the whole life cycle of algorithm development. So we would hope that, you know, people here will start using those principles and think about, you know, the potential for bias, because I think you’re right: when you know what’s in the algorithm, that’s easier. But with black box algorithms and a lack of transparency, how important it is to really have algorithms that are validated in the populations they’re used for.
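To make the variable substitution Dr. Bierman describes concrete, here is a minimal sketch in Python, using synthetic data and hypothetical variable names (not AHRQ's work or the actual VBAC calculator), of how one might check whether a clinical driver such as hypertension can replace a race variable in a risk model without losing accuracy:

```python
# Sketch only: synthetic cohort where hypertension is the true driver of the
# outcome and race is merely correlated with it (assumption for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

hypertension = rng.binomial(1, 0.3, n)                       # true clinical driver
race = rng.binomial(1, np.where(hypertension == 1, 0.6, 0.3))  # correlated, not causal
age = rng.normal(30, 5, n)
p_outcome = 1 / (1 + np.exp(-(-1.0 + 1.2 * hypertension + 0.02 * age)))
y = rng.binomial(1, p_outcome)

X_race = np.column_stack([race, age])          # legacy-style model: race + age
X_clin = np.column_stack([hypertension, age])  # revised model: hypertension + age

for name, X in [("race-based", X_race), ("hypertension-based", X_clin)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} model AUC: {auc:.3f}")
```

On data like this, where the clinical variable is the real driver, the hypertension-based model matches or beats the race-based one, which is the pattern described above for the revised calculator.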

Anna McCollister

You know, as somebody who’s spent a lot of time kind of, like, you know, fighting against these analog algorithms that are very much in the open, that get very calcified and difficult to change, it really frightens me that we’re going to take that kind of bias and put it behind a black box, and not have any transparency about what is the thing that we’re maximizing for, what is the training data that’s supporting the use of this algorithm, and what the bias is that went into the selection of that training data and algorithm. So it’s super important, as the governance model you put forward suggests, to think about bias in the beginning, to realize from the beginning that this bias does exist and that we need to build in ways to check for it. Kind of building on that, Stephanie, one of the discussions we had prior to this panel, you know, was about the concerns that you have as a patient. I’d like for you to talk a little bit about the concerns that you have as a patient, and as somebody who works for a patient organization, about how that kind of bias might exist, particularly when it comes to people with disabilities, and the fact that a lot of training data will not necessarily reflect the needs and the specifics of individuals who aren’t part of the mean or the norm.

Stephanie Enyart

Right. I mean, so, disability as a category, or having, let’s say, a chronic illness or something that from a diagnosis perspective puts your situation at the edges of what may be considered the average or the norm, means that our unique set of, you know, data is something that’s really not in the mainstream or the center of whatever we’re experiencing. And so when we’re training AI on things, we are feeding it data sets. It’s learning based on the data that we put in, just as, you know, for us as humans, what we eat affects how our body works. And so when we’re feeding AI training models data sets that may not include a lot of disability experience, or data about people with disabilities, or people who have the abnormal, atypical aspect of something, then it’s going to be very under-inclusive. It’s not going to be able to spot and map and look at the full range of experiences of someone like myself who has a very rare condition. So we have to think about the fact that a lot of the training models are being looked at carefully now, hopefully all the time, for race and gender. But when we’re looking at it in the healthcare context, we also have to think about it in terms of disability. Because when my experience is at the edge, or maybe not even in the data set, and there are now devices that are operating without data to even encompass my experience, then it’s not going to be accurate, and there are going to be all kinds of implications that will affect the massive population of people who have atypical health characteristics. So, like, my retina, for example, is going to look very different tomorrow, maybe, or next year. But if that’s the mooring point for something, then it’s really going to be very inaccurate in treating all kinds of aspects of my health situation, or in spotting things. Similarly, if someone who’s blind, for example, has very different bone structure around the eyes, and from an imaging perspective we’re using the bone structure shape to spot or diagnose or, you know, analyze something, anyone with that abnormality is not going to be, you know, kind of treated in the right way or spotted in the right way. So we’re going to have to augment with humans. We’re going to have to augment in the sense of making sure that the data sets are actually very, very diverse, and it’s very hard to do. I do work in a research space, and we’re constantly having to make sure that the data sets that we’re using have some representative connection to the population. So it’s something that’s a constant challenge, and it will require constant engagement, you know, and this is where we are with AI. AI is going to continue to grow and change, so this is an engagement model that we need to adopt.
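One practical way to act on this point, sketched below in Python with hypothetical column names and made-up benchmark shares (nothing here comes from AFB or any real data set), is to audit a training cohort for group representation before a model is trained, flagging groups that are missing or badly under-represented relative to a population benchmark:

```python
# Sketch only: compare observed group shares in a training table against
# assumed population benchmarks and flag shortfalls.
import pandas as pd

# Toy training table; in practice this would be the model's full training cohort.
train = pd.DataFrame({
    "patient_id": range(8),
    "disability_status": ["none", "none", "blind_low_vision", "none",
                          "none", "mobility", "none", "none"],
})

# Illustrative benchmark shares (assumptions, not real prevalence estimates).
benchmark = {
    "none": 0.73,
    "blind_low_vision": 0.07,
    "mobility": 0.14,
    "deaf_hard_of_hearing": 0.06,
}

observed = train["disability_status"].value_counts(normalize=True)

for group, expected in benchmark.items():
    share = observed.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if share < 0.5 * expected else "ok"
    print(f"{group:22s} expected {expected:.2f}  observed {share:.2f}  {flag}")
```

A real audit would use validated population estimates and clinically meaningful subgroups, but even a simple check like this surfaces the "not in the data set at all" problem Ms. Enyart raises.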

Anna McCollister

Yeah, I think it’s super critical for us to recognize, and like last year when we were talking about AI within the context of HTI-1, as a member of the Health IT Advisory Committee, this was a significant concern: patients with rare diseases are not represented in the vast majority of data sets, and if they are, there are maybe one or two, and no one person represents their entire disease state. So we have to make sure that as we think through governance principles, as we think about bias, we’re not just focused on ethnic differences or gender differences; we’re thinking about what are the differences that exist within the context of health and disease, and how can we ensure that these algorithms that have the potential to do so much good actually aren’t creating harm and aren’t creating a new form of bias and restrictions to care. So, speaking of training data sets and algorithm validation, Elliott, I know your company Dandelion is attempting to create a mechanism or a tool for validating the legitimacy of specific algorithms. So tell us a little bit about the approach you’re taking.

Elliott Green

Sure. Thank you. Yes, so, to give it a little more context, Dandelion ultimately is a kind of consolidated view of health systems from around the country. And when we built it, one of the realizations, to Stephanie’s point, was that we really wanted to try and find as efficient a way as possible to get the most representative data set. And as we looked at it, we thought, OK, well, as principles, we should really find non-academic medical centers, more representative of the care that’s really being given. We should make sure we have different care practice patterns; race, ethnicity, gender obviously would be normal; even machinery, you know, for example, eye exams, completely different machines, and so you need to capture that. And the only place this realistically lives is in integrated delivery networks. There aren’t an incredible number of them, but so what we decided to do was convince some very forward-thinking people in health systems that it would be a great idea if we extracted literally everything they have from a clinical perspective, ethically de-identified it, tokenized it, pulled it into an environment, and made it available for people to build representative algorithms. And, you know, I think everyone’s hearing on this stage that data is the key, and so that’s what we embarked on. As we went through this, what we also realized as we evolved was, wait a second, these things that are being built, to the point of this panel, how are we policing, how are we safeguarding what these products are? How do we ensure that the data, to your point, that they’re using is accurate? And so we built, and have at least finished the beginnings of, a validation tool with the help of the Gordon and Betty Moore Foundation and The SCAN Foundation, that is really taking these 10, 15, soon to be 20 million patients and using that data to assess algorithms. So what people do is they will give us their algorithm, we will put it into a container, so they still have their IP, and we will run it through our data set and give a 20-page report, as things stand, on how did you do on race, age, ethnicity, how did you do on inpatient versus outpatient. You know, that report will get ever more complex. And that is the first step of being able to say, did the algorithm perform the way you thought? And so that’s what we’ve really been focused on. We’re participating in CHAI, so, you know, Lee mentioned in the previous panel the assurance labs and that idea of starting to get these things off the ground, and so we’ve already validated. I think there is one final point that’s important. There’s a real expanse beyond just, you know, to Anna’s point, legacy algorithms; that’s really been EMR-based. But there’s an incredible amount of information that lives outside of that: images, waveforms, pathology slides. And that’s why we have created that multimodal data set, because that’s where the patient benefit, we think, is really going to lie. You know, what don’t you see in the scan, where the radiologist is normally only looking for one thing? How many incidental findings could you have? You know, an ECG is full of information, yet it’s often just used to look for an arrhythmia. And so it’s, how do we responsibly build these incredible tools? But I think we have to keep an eye on the fact that they really could have an incredibly profound, you know, impact on patient care, as long as we do it right. You know, hopefully there’s a new dawn coming.
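As a rough illustration of the kind of subgroup report Mr. Green describes (a sketch with synthetic data and hypothetical field names, not Dandelion's actual pipeline), an evaluator holding both the algorithm's scores and a representative, labeled data set could break performance out by demographic and care-setting slices instead of reporting one overall number:

```python
# Sketch only: per-subgroup discrimination (AUC) for a candidate algorithm's
# scores against a held-out, labeled validation set.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 6_000
df = pd.DataFrame({
    "race": rng.choice(["white", "black", "asian", "hispanic"], n),
    "age_band": rng.choice(["18-44", "45-64", "65+"], n),
    "setting": rng.choice(["inpatient", "outpatient"], n),
    "label": rng.binomial(1, 0.2, n),   # ground-truth outcome in the validation set
})
# Stand-in for the vendor algorithm's output: noisy scores correlated with the label.
df["score"] = 0.6 * df["label"] + rng.normal(0, 0.5, n)

def subgroup_auc(frame: pd.DataFrame, column: str) -> pd.Series:
    """AUC of the vendor score within each level of one slicing column."""
    return frame.groupby(column).apply(
        lambda g: roc_auc_score(g["label"], g["score"])
    )

for col in ["race", "age_band", "setting"]:
    print(f"\nAUC by {col}:")
    print(subgroup_auc(df, col).round(3))
```

The actual report described above would cover far more slices and metrics (calibration, sensitivity at operating thresholds, missingness), but the basic mechanic is the same: the validator, not the vendor, supplies the representative data.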

Anna McCollister

I think that’s a really important point. I mean, one of the things that I have concerns about, again, as somebody who is a multi-layered nerd who, like, looks at things like data quality in your EHR, is that when I download all of my data from my EHR, it’s a mess, frankly. I mean, first of all, at this point in my health career, it’s about 200 pages, all sorts of diagnoses, like, so many different ICD-10 codes. It’s kind of funny; there’s no way a doctor’s going to be able to dig through that when he gets it, you know, through an exchange. So, I mean, you know, part of me thinks maybe AI can help, like, turn that into something that’s useful, and I think that is something that different people are looking at. But also, frankly, this is not that great of data to start with. It really isn’t all that reflective of my health. So what happens if we take that, let’s say we clean it up and make it a little bit better, use that as the training data, put it behind a black box: what’s it going to spit out? Well, if you don’t have really nuanced data going in, you’re not going to get great stuff coming out. Which is not to say that there won’t be really good uses for EHR data with AI; I have perfect confidence that there are a lot of brilliant people thinking this through, but we need to be aware of the fact that the data that is being used to train these models, and that is being used to run them, has some limitations, and that’s OK if we keep remembering that those limitations exist and set things in place to help us check that. Speaking of checking limitations and thinking about ways to really oversee the potential discriminatory impact of AI: Jenny, I know you worked on a rule, and your department has come out with what I find to be a somewhat comforting federal rule around discrimination, specifically as it relates to AI, or, I can’t remember the specific euphemism for AI that was used in the rule, forgive me, care decision support something? I feel like it’s a different one for each federal agency. So I’d love for you to talk a little bit about the rule and what it means, and how it will give you the ability, or articulate the ability, of OCR to be able to help police, or prevent, or work with industry to prevent the use of harmful algorithms.

Jenny Ma

Can I just ask the audience, how many of you have heard of Section 1557? OK, so, like, half and half. So I will kind of backtrack and let you know. So OCR has had a banner year within policy. We’ve had six rules come out; at one point we had four rules roll out within a week and a half, and that’s a lot for folks who aren’t dabbling in policymaking. One of those rules is Section 1557 of the Affordable Care Act. I think 100% of you probably have heard of the Affordable Care Act. The Section 1557 regulation was first drafted in 2016 during the Obama administration, and it put non-discrimination equity principles in place as it relates to race, national origin, color, age, sex and disability. So those are the covered protected categories. Obviously things looked very different in 2016, and then again in 2020, when the Trump administration rewrote those regs, and we recently published the third iteration of the rule, and it’s the first time where AI is mentioned within the rule itself, and that’s not to say that this is anything new. So I want to just orient you: OCR covers covered entities. So that’s your providers big and small: it’s your dentists, it’s your single physicians, it’s your giant hospital networks, and it’s your insurance companies, as of this rule. So when you think about who in the federal government is kind of overseeing whom, you’ve heard a lot about developers; that’s kind of ONC and FDA’s lane, and that’s really important, right, because a lot of things are happening on the front end that we’ve already discussed. And then what happens when it gets to the patient, to folks all here, all of you, experiencing the care that you receive? Who’s kind of looking over the provider? That’s OCR’s job. So I hope I’ve kind of laid it out for you. Those are the three pillars; I think of it as kind of a triangle, if it helps. I like the analogy of when you go to the grocery store: ONC is kind of drafting what’s on the nutrition label, FDA kind of allows the consumer to look at what’s on there, and then we say, OK, well, if the consumer just willy-nilly picks at things and puts on, like, fake badges, like “fat free” or what have you, there needs to be some regulation on that. So we’re an enforcement agency and we’re a policy shop. This rule specifically, I urge you all to read it. I’m glad I got kudos for it, that’s great. Yes, really good footnotes, very interesting. It’s a kitchen-sink rule. I urge you to read Section 92.210, or skim it, or read a white paper on it if that’s easier. But in short, section (a) of the provision just reiterates the non-discrimination protected categories: like, hey, this applies to AI here, as it always has. This is nothing necessarily new, but we want to make sure that we reflect the realities of the kind of care that’s being given. Section (b) says you need to make reasonable efforts to identify when there is discrimination occurring with the use of AI. Reasonable, right? Again, that’s the legal term. But we really want to work with providers to make sure that, like, folks are not just turning a blind eye. And then the third provision, in section (c), is to make reasonable efforts to mitigate the harms. Now, what that necessarily means, we give a couple of examples, but we’re going to learn through our enforcement powers. I don’t like to use the word policing, because I want to really urge that, while OCR is an enforcement agency, this is not meant to be punitive. We are all trying to get it right.
That word has been used a lot here, but it’s not going to look linear, right? It’s going to have its bumps. I think there’s some combination of providers, covered entities, industry, and regulators kind of needing to communicate with each other to make sure that we edge towards getting it right. And so this rule, really, and what our goal is, is to make sure that we reflect current realities, understand that bias is occurring in healthcare, of course, that it always has been, but also put some onus on the covered entity to not just allow, you know, AI to take over what they know. And I’ll just end with quickly saying, right, you don’t want your doctor to have had, you know, 40 years of experience and then just kind of turn a blind eye, like, not think about that 40 years of experience and give you care because the machinery or the algorithm told you to do that, right? So that’s where we’re really trying to come in, and we’ll do a lot of learning through the coming years as well. But we hope that this rule is enduring, reasonable, and hopefully applicable to covered entities.

Anna McCollister

So it sounds like you’re going to be pretty busy, and I will say this because she cannot. One of the things that I, as a patient who’s very involved in this space, am concerned about, and we’ve seen this with ONC’s budget as well, is that Congress says go do this stuff, you develop regulations, really complex things, in industries that are booming, such as AI, but, you know, is there budget to actually fund the people to be able to do any kind of enforcement, to be able to do oversight? And I think that’s a really important and often overlooked thing. As a patient, it’s like, it’s great that you have this regulation, but, like, who’s going to, like, follow all of this stuff and make sure that these things are not discriminatory? And, you know, we can’t just have federal agencies tasked with doing things without funding them. So that’s my personal plug as we think about this; this is not her, this is me. As we think about, you know, what are our priorities, it’s one thing to talk about it and say this is a really big thing and we’re going to have hearings or whatever, but, like, we have to be able to fund the enforcement if we have any expectation or hope whatsoever of having reasonable policy that works. Stephanie, one of the things that we talked about that I thought was really interesting, and that I’d love for you to talk about from the perspective of a patient and somebody who works at a patient organization, is the way that, you know, we’ve talked a lot about the concerns and fears, and I think that’s very real; we need to voice those and sort of bake that concern into our approach. But I also see AI as being very hopeful, and you mentioned some of the uses that I find fascinating, not just because I have diabetic eye disease and desperately hope to never be one of the members of your organization, but, like, some of the uses of AI that have been incredibly helpful in terms of supporting people with disability, you know, and giving you additional, you know, tools to be able to navigate in such a, you know, very visual world. So I’d love for you to talk a little bit about that, but also some of the concerns around privacy that come along with that.

Stephanie Enyart

Sure, sure. So, I mean, I could take this in so many different directions. AI is going to touch, in a profound way, every aspect of our lives. So whether we’re talking about the way that I can grocery shop quickly and independently, or the way that I can navigate through a crowded airport, or read legal documents, all of these things are going to be affected by AI and its advances through all kinds of assistive technologies. So, I mean, I could go in 10 directions, or even how I parent, you know; I have four kids in my house. So from the standpoint of, you know, life and where AI fits in, I mean, the way that I read my health records will be assisted by AI in a growing way and is deeply ingrained with assistive technology usage. So the kinds of advances that are available for people with all kinds of disabilities are really significant. But in many cases, things like assistive AI, where I’m having an AI describe something to me in a live experience, some of the services that individuals with disabilities are relying on actually, really, I would say personally, overstep the bounds of a privacy concern, because you’re offering a certain service, but in order to get that service, you have to give up a certain amount of privacy. Some services even ask for a legal right to be able to access that information going forward and into the future. So I may rely on a service to be able to get access in live time, but I have to think carefully about whether or not I want them, you know, them being the company, exposed to whatever data they may find, because that’s something that they may have a license to going forward. So these concerns are difficult, because many of us want to embrace technology in the moment that we’re in, to live radically independent lives, but at the same time, you know, privacy, autonomy, all of these things. There are the drawbacks, but I also, as I was listening to Elliott, I think things like Dandelion, if you don’t mind, I think that this is a wonderful space of great opportunity, but I personally want to know, like, how are you harnessing disability data in your data sets? Because my first and big main point is, you know, where we’re not represented, where our experiences are outside of the data set that’s being fed to AI, it’s not going to be able to engage with us in the same way. And, you know, you’re doing great work; you’re one of the good ones. And so, since you’re one of the good ones, you know, how are you going after that black diamond goal, you know? Like, how are you tackling the really tough part of the disability experience, or the abnormalities, in the data work? Sorry, I just had to ask.

Anna McCollister

Frankly, I was going to ask the same thing, because, like, it’s very, very important, whether it’s disabilities, whether it’s rare disease. And again, you’re doing great stuff, Elliott, I don’t mean it like that, but one of my concerns is that, like, you guys have an algorithm that, like, proves to be legitimate in certain senses, and then you become, like, this measure, this hallmark, you know, the Good Housekeeping seal of approval. So then it becomes harder to, like, challenge something because you’ve given it this seal of approval. So how are you thinking about those kinds of concerns? And again, I’ve done two startups; I know, like, it’s a lot to think through all of this stuff, especially something this complex. So how are you beginning to think through some of these concerns as, you know, you create this stamp, this seal of approval? Like, what happens if it’s not all that great?

Elliott Green

A lot, a lot there. Thank you. Very nice to hear “good work.” I think it’s just, this is a long journey, and no one has got to the end of this yet, and so we’re all kind of working it out. Now, one of the really excellent things about AI is that it’s quantifiable, depending on what you’re assessing. But the key bit is actually the data set that you’re using to do the validation. The validation algorithms themselves are not that complex; the key is, have I incorporated, you know, people with disabilities, people with certain conditions? When we built Dandelion, the best way we thought to mitigate this was to deal with it outright in the data, so hence we have Sharp in San Diego, Sanford in the Dakotas, Texas Health, you know, another one coming. How do we find representative care? And I said to Stephanie earlier, you know, I think we’re doing our best to do that. In an ideal world, the U.S. healthcare system would allow data to flow very easily and we could all do this, and we could use 350 million people, and Dandelion wouldn’t exist, and I’d be OK with that. But that’s not the world we live in. The world we live in is a very siloed data world, and so what we’re trying to do is our best at extracting as much of that as we can, responsibly. And then I think, Anna, to your point, no, we don’t want to be, you know, judge, jury and executioner, as it were. Like, it’s important to have checks and balances in the system, and that’s why this panel is great, for a number of reasons. One, are you incorporating the correct dynamics and solutions, disabilities for example? It’s a longer answer with HIPAA, but I’ll come back to that another time, maybe. And then also, you know, are you appropriately being validated by others? You know, in an ideal world, we’re not the only ones, but there are other companies that have access, or agencies that have access, to the data, and we can all validate each other to get to the best result. There will be an ecosystem. There can’t be a single company, and we just hope that we can lead that charge and encourage others to try and do the same, because ultimately that’s going to be the best for society.

Anna McCollister

I think that’s important, and I mean, what I’m hoping is that, first of all, we can’t let the perfect be the enemy of the good. We need to keep that in mind, and we need to be able to create room and make room for innovation, and for innovation that isn’t perfect and may have mistakes in it, you know, as we continue along the journey. But we also have to be able to admit that there are limitations, and, whether it’s, like, this nutrition label approach, like, you know, we talked about in the context of HTI-1, and I know different groups are looking at that, be able to say, these are the limitations of this data. And this is what, frankly, I think we do a really bad job of with RCTs: we acknowledge the limitations for about five minutes and then we forget about those limitations when we’re making access decisions and other types of regulatory approval decisions. Like, there are limitations, and that’s fine, but we have to be able to find a way to declare those limitations and to make sure that anybody who uses these tools actually is aware of those biases. So, I could keep asking questions; as you may have gathered, I could keep asking questions. This is a fascinating topic, and everybody up here has so much great experience. But I want to open it up to the audience.