Dr. Peter Laussen - It's about time: Meaningful Use of Physiologic Data
Timestamps
0:00
Presentation Overview
Introduction to the presentation by Peter
15:58
Next Steps at Institution Level
Jeff Burns discussing immediate next steps for institution-level engagement with AI systems
31:57
Engaging Clinicians in Toronto
Peter discussing strategies for engaging clinicians in Toronto, including demystifying AI and providing resources
47:55
Data Quality Indices
Laura commenting on developing quality indices for real-time signal evaluation
1:03:54
Welcome Back to Boston
Steve welcoming back Peter and expressing appreciation for the presentation
Topic overview
Peter Laussen, MB, BS - It's about time: Meaningful Use of Physiologic Data
Surgery and Anesthesia Grand Rounds (September 16, 2020)
Intended audience: Healthcare professionals and clinicians.
Transcript
Speaker: Peter Laussen
Introducing our speaker this morning, Dr. Peter Laussen. Dr. Laussen is the executive vice president for health affairs at Boston Children's Hospital. He is originally from Australia and completed his undergraduate and medical degrees there, as well as fellowships in anesthesia and pediatric critical care medicine. Shortly after his training in Australia, Dr. Laussen came to Boston Children's Hospital, where he stayed for 20 years as a cardiac anesthesiologist and intensivist in the cardiac ICU. He became professor of anesthesia at HMS and held the Dolly D. Hansen endowed chair in pediatric anesthesia for 10 years. In 2012, he left Boston to become the chief of the Department of Critical Care Medicine at the Hospital for Sick Children in Toronto. This year, he returned to Boston Children's Hospital to assume his current role, and he continues to practice as an intensivist in our cardiac ICU. Dr. Laussen's professional interests focus on the use of continuous physiologic data for predictive modeling. During his time here at Boston Children's Hospital, he developed the T3 platform, which stands for Tracking, Trajectory, and Trigger, a tool for the collection and interactive display of physiologic signals in critical care; it is now in use in over 25 pediatric ICUs across the world. During his time in Toronto, he built a lab with expertise across systems architecture, machine learning and statistics, signal processing, and computer programming, and he continues to lead research efforts in this area with his team in Toronto. Dr. Laussen, thank you so much for speaking to us today, and I will turn things over to you. Wonderful. Thank you very much for the introduction. It is a pleasure to be speaking with you all virtually. As was said, my 20 years in Boston were a joy. They were formative in so many ways and led to what I'm going to be talking about this morning.
It's really about how we can use the data that is generated by our patients in the operating room and the ICU in a way that augments the decisions we make, and can be used in a meaningful way. That's not straightforward. There are a number of barriers, and there are no off-the-shelf solutions. What I'm going to describe is the work we've been doing to try and get this onto an online platform. There are a number of challenges, but I think they're becoming less of a barrier as time has gone on. My disclosures: a lot of this work is funded through the CIHR in Canada. I am the principal developer of the T3 tool, which is really a data visualization platform and a hosting platform for algorithms, and I work as an advisor with Etiometry, the company that has licensed it. And at Sick Kids, we developed a data management architecture that can really help the collection, access, and utilization of high-frequency data, particularly waveforms. What spurred this was really just looking at the way in which we practice: we all want to deliver safe and efficient patient care. I think this is a fairly appropriate overview of the way in which we practice, at least in the ICU environment, and I'm pretty sure it translates to the operating room as well. We have criteria on admission to our areas around the disease. We risk-adjust accordingly. We know the procedure or procedures; we sometimes apply an acuity score to the case. We use that to plan the care we're going to deliver and, importantly, to effect the discharge that we want, which is the appropriate outcome. And we benchmark across a whole lot of areas to determine that we're actually following a pathway that is ideal, safe, and efficient for the patient. We overlay that with our teams, our workflow, and the environment in which we work. We create guidelines and protocols. We have early warning systems and track quality metrics.
So we wrap the patient in all of this and then rely on data from the electronic health record and monitoring devices to give us that continuous view of the patient. We see these as the sources of truth. That's, I think, a fairly generic way to look at the care we provide. What tends to happen during the course of a case is that our decision nodes change. We bounce around, as it has often been put to me, particularly in the ICU environment, but I think it happens in the operating room as well: there is variability in the decision making. Critics of this type of variability have told me it's all about practice variability, that it's about humans working in the system, that we're just doing our own thing and not paying attention to the information coming towards us, and that we should all be able to practice in a standardized way, with no variation at these decision nodes. But I don't agree with that. I think that we manage biological variability. The very impact of anesthesia is to alter physiology; we do it every day. And of course we overlay that with the impacts of pharmacology, the procedures the patient is being exposed to, et cetera. What we're really dealing with is biological variability. I think it's important to understand that, because we need to be able to accept this biological variability and use it towards precision critical care, or precision care in the operating room: really individualizing the care to a particular patient. So I'm very wary of edicts that say you just have to practice the standard way, follow your checklist, and you'll be fine. In many circumstances that may be the case, but we have to appreciate that we're dealing with inherent biological variability. That's the first point. The second point is that we're trying to get to some sort of precision care, where our decisions, treatments, and practices are tailored to the individual patient.
And we need to get, I think, to this type of schema, where the data flows through some sort of connected, customized network into a data management platform that allows for online analytics. What currently happens is that we have those first three boxes through to a data management platform, and then the data is taken offline. It ends up in a registry, a historical database, somebody's desktop, and the data is analyzed in various ways. But does that offline analytic process really make it back to the users and into clinical protocols? Our goal is to make sure that we can have real-time online analytics of this data as it's streaming, so that it really impacts the decisions we make and the treatment we provide. That's precision care, and in the ICU, precision critical care. I think we're getting closer to being able to achieve that now. It's really about end-to-end lifecycle management of the entire pipeline of data from a patient. It starts with live signals at the bedside and historic data that's available in the EMR. It means preprocessing that data, undertaking offline or experimental models with that data to understand the problems we're trying to address, and doing offline and online training. This online training of the model is really important, because as new data comes in, the model has to be updated. Then you eventually get to online inference, where it really helps us decide the care that we need to provide. Why is this hard? Well, these are just two editorials or viewpoints in JAMA that are now many months old, but we still see the same sort of editorials written: the value of the data, of deep learning, and the ability to transform healthcare. There's constant pressure; there are very few major journals that don't have a focus on this now. I went back and looked, up to the end of 2017, at the medical manuscripts that just reference AI.
And since 1987 there has been, as you might expect, a dramatic increase: around 100,000 in this particular subset that I looked at. Machine learning in the last 10 years particularly has really become the flavor, where large data sets can be evaluated within machine learning models. So there's a huge amount of interest and a huge amount published about this. But the problem is that an estimated less than 1% of these make it into routine clinical use. So I understand the skepticism. We have to be able to shift from research to adoption, and that's not a straightforward process, because we have problems getting access to data both internally and externally. We have to make sure that the models that are developed are relevant and can scale across environments. We have to appreciate that we need to interact with the data; in critical care there is this interface between humans, technology, and data, and how we relate to that data is very important. And of course there are multiple problems to solve. There's no one size fits all, and this relates to the adaptability that models need to have if they're going to be deployed on a routine basis. So the first problem is really understanding what we are trying to solve. Often a lot of the AI that we're looking at now focuses on operational aspects: efficiencies of care, resource utilization, how do we save dollars. I think it's important to understand what our contribution is to the risk for a patient, because there is a problem of recurring harm. We need to understand our contributions to the quality of care through the decisions we make, and these are modifiable. Getting to modifiable risk is very important and is where AI will help us. And finally, it needs to enhance our roles. We do manage risk, and we are forecasters. It gets to our performance, which is our ability to rescue the patient, and then our judgment, which is our ability to predict what's coming down the track.
I think over time a lot of the artificial intelligence, new intelligence perhaps, that is being generated will help us address these issues. We're still some distance from it, but I think it's important to understand at the outset what we are trying to achieve, because AI can be applied in medicine in many areas, clearly where machine intelligence is superior to our own capabilities. That's in areas like image recognition and the ability to assemble huge amounts of data. Often that data is from disparate sources; it may be unstructured, it may be unlabeled. How do we use that data and transform it into actionable information? That's very hard for us to do continuously, and we all vary in our abilities to do it in the same way. So AI, I think, will help us in terms of evaluating risk, triaging, looking at outcomes of interest, and for comparative benchmarks, if you like. But AI will also help us with the generation of new information around biological variability, as I mentioned up front. That's really looking at new associations and relationships: what is the physiologic state of the patient, and how do we process these signals? There will be new signals and new learning that will come, and I think this is the really exciting part, because it then becomes an active learning environment for all of us. It's not just an environment that says, OK, this is what's happening, the model says you have to do this. On the contrary, I see it as putting a whole lot of information together that augments and assists our decision making, and through it we will learn to see different interactions between signals and different ways to think. I think that's really exciting. At Sick Kids, I started in 2012. This is the atrium building, and this is the inside of the atrium building here. The ICUs are on the second floor here; here's the pediatric cardiac ICU. The NICU is the floor above that, and over to the right here is where the IR, the interventional radiology, is.
And then further around, on that same level, are the 20 operating rooms. I was reminded when I started at Sick Kids that in 1993 they were noted to be the hospital of the future, when this atrium building opened. It was named that by an Apple magazine. This was before the internet, before networks. It was named by Apple because some physicians in the ICU had networked local Macintosh computers, and they developed what I think is the first EMR that was used in critical care. They could see the patient's data in real time by aggregating information streaming from the bedside monitors. They had order entry. They could see ventilator data. There was a lot of activity they could see on those small Macintosh boxes at the bedside. The problem was that the computing power was really poor at that time: storage was limited and the processors were slow. So while this was created, it couldn't scale, because of the limitations in computing power. Obviously, over the last 20 years that has changed dramatically, and it has become much less of a barrier to what we're trying to do. And if you take away the problems around computing power, then you're faced with the signals that are coming through and how to manage those signals, because they're not straightforward. So I took the approach that to really start to understand, we need to capture everything and store it in a structured way. All the data that's streaming in: capture it, store it. The second part of that, and this was really very important in terms of the hospital's use of this data, was that we do it on all patients and that the patients own the data. It's not owned by the monitoring company, the EMR company, or the hospital; it is owned by the patient. Therefore you need to store it permanently, and you don't purge any data. Purging data, to me, is anathema. And it means it's discoverable as well.
If you take that approach, you don't need to get consent for collecting this data; it becomes part of the patient's overall record. Then you have to develop a platform that enables you to use it. We've built a platform for both clinical use and research, for developing models in an iterative way, to really try and help inform trajectory and give us new understanding. One point about the data for modeling: I'm not going to spend time talking about the intricacies of machine learning, because I would only confuse myself and everybody else, but I think it's important to understand what's feeding into these models. The data for modeling can be looked at in two ways. One is the prognostic enrichment of the data: you take patient population data from a registry, the EMR, other categorical data points, and it helps you look at patient populations and outcomes in particular. A huge amount of work has been done in that area, particularly around operational intelligence and outcomes, usually, as I said, population based. So the prognostic enrichment that comes from data is very important in terms of the outcome. But I think the data also has to have a predictive enrichment component, and that's really based on the biological responses to illness and the care we provide. That means incorporating all the physiologic data with other biomarkers and omics, and it gets to individualized care. So when you're looking at data entering a model, you need to understand whether that data is going to enrich prognosis or prediction. With that framework, we set up the following at Sick Kids. We first of all had to capture all the data flowing from the bedside, across what are now 42 beds in the pediatric cardiac ICU. We collect all the data from all the monitors at the bedside, through the Philips Gateway and through a direct feed into a middleware as well, and we incorporated laboratory system data and some categorical data out of the EMR. So we collected all of that data.
And this was low-frequency data, with data points every five seconds, fed into T3 servers that would then do some modeling of that data and broadcast it to a web-based platform at the bedside with a permanent display. In my time at Sick Kids, we had over 16,000 patients and 2.0 million patient hours of data accumulated at this lower-frequency, five-second resolution. But you've had T3 at Boston Children's since the end of 2010. So when I started here again, I asked to see how much data we have accumulated at Boston Children's. From November 2010 to August 2020, this is a summary of all the data collected across the institution on the T3 platform, at the lower-frequency, five-second resolution: 371 beds in total. I know now that we have just ticked over 10 million patient hours of data, and we're accumulating about 1.7 million hours of data a year. It includes ventilation data, laboratory data, and data coming from various devices and lines on the T3 platform. We can break it down by the types of patients in terms of their age, and we can also break it down by diagnosis and some other categorical aspects. The reason for showing you this is that it's a huge data set that's available to you, and there are certainly groups that have taken this data and applied it to their various research areas. The challenge now is to get it from that format into actionable bedside use; there is a large amount of data sitting here to be used. If we break this down for the operating rooms, where we've been collecting since 2015, there are coming up now to 400,000 hours of data in 50,000 patients. So once again, there is a rich source of data that is available for use and to help you solve the problems that you are facing. How have we used that data? Well, there are different ways. Certainly we've used it to understand the phenotype.
And this is just one example: taking patients admitted to the intensive care unit and trying to understand their ranges of normal, and I use that word "normal" loosely, of course. Our current ranges for physiologic signals are based on normative data, and obviously patients in the ICU, and patients under anesthesia, are not normal. So we went back and looked continually at the patient populations that come through and at their ranges, by population, based on diagnosis or procedure, shown here in these box plots. The graph on the right-hand side shows a heat map with the range for populations, in this case patients following repair of transposition of the great arteries. What I want to highlight is in the right-hand panels, B and D: within that population heat map I've shown two individual patients as red lines, with their mean and their 25th to 75th percentiles for this particular signal. This is heart rate at the top and blood pressure at the bottom. The point I want to make is that populations have wide ranges, boundaries if you like, whereas individual patients keep their signals at a much closer level; here we're varying by only 10 to 20 percent. Appreciating that a patient may be within a population range but may still not be right, and may still require a much tighter range for their own individual management, I think is very important. There's a lot more work going on around this, because we should be able to look at the boundaries for a physiologic signal, and the target ranges, on an individual basis, not just on a population basis, and we should be able to look at it in real time and see how it's changing according to the different conditions going on. Another area that I think is really exciting is to stretch things a little and be more precise about the data that's being presented at the bedside. This was just looking at oxygen dissociation curves.
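To make the population-versus-individual point concrete, here is a minimal sketch on synthetic data (the numbers are illustrative only, not drawn from the T3 dataset): a population's 25th to 75th percentile band for a vital sign can be several times wider than one patient's own band, even when that patient sits inside the population range.

```python
import numpy as np

def percentile_bands(samples, lo=25, hi=75):
    """Return (median, lo-th percentile, hi-th percentile) of a 1-D signal."""
    return (np.median(samples),
            np.percentile(samples, lo),
            np.percentile(samples, hi))

# Synthetic illustration: a population with a wide spread of heart rates,
# and one patient whose own signal stays within a much narrower band.
rng = np.random.default_rng(0)
population = rng.normal(130, 25, size=10_000)   # population-level spread
patient = rng.normal(142, 6, size=2_000)        # one patient's own signal

pop_med, pop_lo, pop_hi = percentile_bands(population)
pat_med, pat_lo, pat_hi = percentile_bands(patient)

# The patient is inside the population's 25th-75th band, yet their own
# band is several times tighter -- the point made in the talk.
print(f"population IQR width: {pop_hi - pop_lo:.1f}")
print(f"patient    IQR width: {pat_hi - pat_lo:.1f}")
```

An individualized alarm band built from the patient's own percentiles would trigger on excursions that a population-based band would never notice.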
And we looked at that in 3,500 patients, taking the blood gas PO2 and SpO2 values that were available, and developed, once again, a heat map of what was measured at the bedside against what the blood gas was actually measuring. We could then apply that to the Severinghaus oxygen dissociation curve. This black line here is what is actually calculated by the Severinghaus equation. It's not quite the sigmoid curve shown in the classic papers, but that's the calculation, using data from Severinghaus that is now over 40 years old. That's the mathematical model, and that's what we tend to accept. When we looked at our patient population, it was much different in many respects. First, our curve is shifted to the right. Second, there is quite a range: it's not this single line that we track on, and the range can vary between patients. So for a particular SpO2, you can have quite a range in PO2. Understanding that reacting just to a number is potentially a problem, and understanding the variability that occurs within it, is important. We've developed a number of oxygen dissociation distributions, conditional probabilities, which enable us to drill down on this curve: move the cursor on this left-hand screen and look at various conditions, in this case the age of the patient, and see exactly where the patient should ideally fit within that curve. We're applying the same logic to the way in which we manage blood pressure. What is the source of truth: the invasive or the noninvasive measurement? We all struggle with that at times, when there are disparities between the measurements. This is ongoing work as well; you can go to the link below, which will take you to an interactive site that allows you to look at these various cumulative probabilities. But it's possible to take an invasive blood pressure, or a noninvasive blood pressure.
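The Severinghaus calculation mentioned above is a published empirical formula (Severinghaus, 1979), so the reference curve itself is easy to compute; a small sketch, with PO2 in mmHg:

```python
import numpy as np

def severinghaus_so2(po2_mmhg):
    """Severinghaus (1979) empirical O2 dissociation curve:
    SO2 = 1 / (23400 / (PO2^3 + 150*PO2) + 1), with PO2 in mmHg."""
    p = np.asarray(po2_mmhg, dtype=float)
    return 1.0 / (23400.0 / (p**3 + 150.0 * p) + 1.0)

# P50 (50% saturation) falls near 27 mmHg, as expected for normal blood
print(f"SO2 at 26.9 mmHg: {severinghaus_so2(26.9):.3f}")   # ~0.50
print(f"SO2 at 40 mmHg:   {severinghaus_so2(40):.3f}")     # ~0.75
print(f"SO2 at 100 mmHg:  {severinghaus_so2(100):.3f}")    # ~0.98
```

The heat map the talk describes is, in effect, the empirical scatter of measured (PO2, SpO2) pairs around this single idealized line, which is exactly why a right-shifted and widened distribution becomes visible.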
And look at the probability of that measurement actually reflecting what the invasive or noninvasive measurement might be. So it's fun, it's interesting, playing around with this type of information. I think the value is that it pushes us to challenge a little the way in which we work and the way we think about things, and it shows us that there are broader contexts here. Eventually this will make it to the bedside. I think this is an example of really trying to understand the signals that are generated and their relationships. Being predictive, as I noted, is a very important goal for all of us, and the way we've approached this is using control theory. This is Guyton on circulation and regulation. You can take that sort of map and, using Bayesian methods, develop other indices. This is what's called the inadequate oxygen delivery index, IDO2, which is now validated and FDA approved up to 12 years of age. It is the probability of an evolving physiologic state, in this case inadequate oxygen delivery. To me, this is one of the most important aspects of the care we provide: rather than predicting a discrete event, it looks at the physiologic state of the patient, to understand where the patient is within that state. This type of tool is available on the T3 platform, and it's an example of why you need a hosting platform to take these algorithms. On the left-hand side here are the various physiologic variables that feed into the model, which at five-second increments gives you an estimation of oxygen delivery. We then took this a step further to look at a different concept: how can we look at the dose, or time of exposure, to a physiologic state and the risk for an event, in this case cardiac arrest in the cardiac ICU?
And without going into details about the dosing windows and the time prior to the event, the overall concept, that we can now take this data and look at it not as a point in time but as a dose of exposure, I think is very important. I know we think like that, but we have to actually start to measure it and incorporate it into our decision models. Another component around these composite signals is that we can look at different relationships. At Sick Kids we routinely get brain MRIs in newborns undergoing bypass, so we have pre-bypass MRIs and then discharge MRIs. That means we have a whole population of patients in whom we see lesions, or new lesions, developing as a result of the care provided in the OR and the ICU. We wanted to look at whether some parameters within the ICU could have contributed to new findings on MRI. We looked at all of these individual signals and couldn't find one; there was no single variable. It wasn't until we looked at a composite signal like the IDO2 that we started to see it in the patients with lesions. On the top right of this graph, you can start to see more of a relationship between the risk for new lesions developing and inadequate oxygen delivery. Well, that makes sense. The problem is: how do you know, and how can you appreciate and address that in real time at the bedside? I think this is important because we have our routinely measured vital signs, but developing new derived continuous measures will allow us to take a whole subset of individual signals into a composite signal that will give us perhaps a richer understanding of what the risk to the patient is. I think this is actually really quite exciting, and there will be more work on this in the future.
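The dose-of-exposure idea can be sketched very simply: integrate a risk index's excursion above a threshold over time, rather than alarming on a single threshold crossing. This is a toy illustration of the concept only, not the published dosing-window analysis, and the threshold and values are invented for the example.

```python
import numpy as np

def exposure_dose(signal, t_step_s, threshold):
    """Cumulative 'dose' of a physiologic state: the integral of the
    signal's excursion above a risk threshold over time
    (units: signal-units * seconds)."""
    excess = np.clip(np.asarray(signal, dtype=float) - threshold, 0.0, None)
    return float(np.sum(excess) * t_step_s)

# Example: an IDO2-like risk index sampled every 5 seconds.
# Patient A spikes briefly; patient B sits moderately elevated for longer.
risk_a = [0.1] * 100 + [0.9] * 12 + [0.1] * 100   # one-minute spike
risk_b = [0.5] * 212                              # sustained elevation

dose_a = exposure_dose(risk_a, t_step_s=5, threshold=0.25)
dose_b = exposure_dose(risk_b, t_step_s=5, threshold=0.25)

# A point-in-time alarm treats both patients the same once the threshold
# is crossed; the dose view shows B's cumulative exposure is far larger.
print(dose_a, dose_b)
```

The same accumulation could be windowed (for example, dose over the trailing four hours) to approximate the dosing-window framing described in the talk.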
The other thing, of course, is that I haven't talked much about machine learning; a lot of what I've shown you is really based on control theory. But predicting cardiac arrest in the ICU is another area that we have started and received funding for. This is really taking all the signals and using an ensemble technique on the various signals feeding into the model. We could estimate, within five to ten minutes of a cardiac arrest, that about 70 to 75% of patients were going to have a cardiac arrest. Well, you know, that's not that useful offline, and the question is, how does that really help you in terms of improving the care that you provide? I think it helps us in terms of providing the resources for that at-risk patient. The argument against that is, well, we know the patients who are sick. It's not always the case. But this type of model, I think, with more data and more training, will get more precise, and we'll be able to use it as a true early warning signal: not five or ten minutes before, but 30 minutes, two hours, four hours before, as to what that at-risk state will be and the risk for an event. The funding that we have right now in Canada is to actually translate this to the bedside, which is a challenge in and of itself. One of the challenges is the volume of signals that are generated and how we manage them. This is Niagara Falls: 200,000 cubic feet per second of water going over the falls. The data that we were collecting in Boston, sorry, at Sick Kids, is 200,000 bytes per second, so I would say we had a Niagara Falls of data. You probably have three Niagara Falls of data here at Boston Children's, across the ICUs, the operating rooms, and the other areas where data is being collected. That's a massive amount of data coming across every second, and to manage this high-frequency data there is no off-the-shelf product.
We've got so many different signals of different variability, frequency, and veracity streaming through that you can't expect one vendor system to faithfully collect and aggregate all of that information. So we built our own system, called HMDB. Within that, we now have 6,000 patients and just over a million hours of data; this is since April 2016. We're collecting about 100 million data points a day, and right now the database holds 3.2 trillion data points. The advantage is that it allows you to take that data and apply it to high-performance computing. There's really no point leaving it stuck in our databases for people to come and take certain subsets of; you really want it to be able to feed into some sort of high-performance computing domain for real-time analysis. So the first thing is building this platform to actually capture all of that data. The second is to appreciate that this is data in motion. It's time-series data, so there are important input and output components to it; the data can be referred to as being I/O bound, input/output bound. You need to solve the problems of input into the system and then how to get the data out of the system, because right now our problem is in this area. Our problem is not related to computing power. As I mentioned, back in the early 90s computing power was the main limitation; that's not the limitation now. Right now it's our ability to manage the input and output components of the data as it's streaming. Our goal is to minimize this so that we get back to the problem of computing power; that's where I'd like our main problem to be, because there we will be able to accelerate quite rapidly, and we see that the power of computing has continued to increase really dramatically. One of the most important aspects of the input of physiologic signals, whether in the OR or the ICU, is their quality and their continuity.
So we all know that there are artifacts, and we all know that there's variability over time. But it is possible to generate signal quality indices, and to generate heat maps, so that you know, for each bed space, how well the data is formed. That's important because these are not set-and-forget; there needs to be continual evaluation of the continuity and the quality of the data that's streaming through. That gets to one of the fundamentals, which is understanding how signals are generated, their relevance, and the clinical context in which they're generated. I can't emphasize this enough, particularly in the ICU, where the bedside nurses are the guardians of the data. Where they put the sensors, how they zero their lines, how they appreciate the quality of the signals that are being generated is absolutely vital. Then the other aspect is: how do we manage artifacts? Let's talk about that briefly. This is systolic and diastolic blood pressure continuously collected in a patient in the ICU at Sick Kids. We've identified over 30 discrete artifacts that can be assigned to those signals, to the arterial line signal. The important point is that artifacts should be captured; there's important data within them. The frequency of artifacts is a really good indication of the evolving clinical state of the patient, for example. And by collecting them, you can actually account for them within your models. This is one approach that we're taking to look at arterial line accesses, at scale, over a prolonged period of time. Within your models you can filter out these types of artifacts very quickly, but first you have to be able to categorize them and label them. Obviously, doing that manually is laborious and inaccurate, but it is possible to label them automatically by the characteristics of the artifact and then account for them in the model. The other aspect of these continuous signals is time management.
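Automatic artifact labeling by signal characteristics can be sketched with a few simple rules. The thresholds and the three rules below (out-of-range values, step jumps, flat-line runs such as a line flush or zeroing) are illustrative assumptions, not the 30-plus artifact classes described in the talk.

```python
import numpy as np

def flag_artifacts(art_mmhg, flat_tol=0.5, flat_len=10,
                   lo=10.0, hi=250.0, jump=60.0):
    """Rule-based artifact flags for a beat-to-beat arterial pressure
    series. Returns a boolean mask (True = suspect sample)."""
    x = np.asarray(art_mmhg, dtype=float)
    mask = (x < lo) | (x > hi)                    # out of physiologic range
    # sudden step jumps between consecutive samples
    mask |= np.concatenate([[False], np.abs(np.diff(x)) > jump])
    # flat-line runs (e.g. line flush / zeroing): near-zero local variation
    for i in range(len(x) - flat_len + 1):
        w = x[i:i + flat_len]
        if w.max() - w.min() < flat_tol:
            mask[i:i + flat_len] = True
    return mask

# A spike to 300 mmHg and a 12-sample flat run embedded in plausible data
sig = [78, 80, 79, 81, 300, 80, 79] + [120] * 12 + [78, 80]
flags = flag_artifacts(sig)
print(int(flags.sum()), "suspect samples of", len(sig))
```

In a real pipeline the labeled segments would be kept, not discarded, since the talk stresses that artifact frequency itself carries clinical information.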
And we have different devices, all synchronized in different ways. Even when everything is streaming onto the same platform, you can't be guaranteed of synchronization. The time clocks vary between devices; they actually tick at different rates in some circumstances. And that creates real delays and lags in the transformation of waveform to digital data, maybe up to seconds in various waveforms like the arterial waveform and the ECG. These signals have phase shifts. They have gaps within them, and we need to be able to set a tolerance level for those. And the question then is, what is the window that we should look at? Because time now becomes an artifact, and being able to synchronize it is critical. There is no doubt, though, that the frequency with which data is recorded is a limitation. So often in the EMR we will record data at set intervals: 30 minutes, one hour, four hours. The longer you push out the time interval for collecting a data point, the less variability you see in that signal when you look at the data over a longer period of time. The point is that we have biological variability; that is our normal state. In fact, loss of variability is an indication of illness in many respects. We expect to have biological variability, and if we are collecting data points with long gaps between them, we miss out on that variability. That variability is really important for us to be able to capture. The other problem with data signals is that we collect them on different devices, and then how do we merge that information? If time is an artifact, because the time stamping and synchronization between devices is inaccurate, is there another way to do it? So we have been looking at using the physiologic signals themselves as fingerprints. This is just one example where we have two ECG signals, one generated by the bedside monitor, the other by the EEG device.
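The effect of charting interval on observed variability can be demonstrated with a toy simulation: a continuously sampled heart rate with a slow physiologic oscillation loses almost all of its apparent variability when only hourly values are kept. The baseline, oscillation period, and noise level are arbitrary assumptions for the sketch, not patient data.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 12
t = np.arange(hours * 3600)  # one heart-rate sample per second, in seconds

# Synthetic heart rate: baseline 120 bpm, a slow 30-minute oscillation,
# and beat-to-beat noise (all values illustrative).
hr = 120 + 10 * np.sin(2 * np.pi * t / 1800) + rng.normal(0, 2, t.size)

continuous_sd = hr.std()
charted = hr[::3600]         # what an hourly flowsheet entry would keep
charted_sd = charted.std()

print(f"SD of continuous signal:    {continuous_sd:.1f} bpm")
print(f"SD of hourly charted values: {charted_sd:.1f} bpm")
```

With these numbers the hourly samples happen to land at the same phase of the oscillation every time, so the charted values look almost flat while the continuous signal varies by tens of beats per minute: exactly the missed variability the talk describes.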
We were trying to synchronize EEG and ECG with hemodynamic data to look at a number of different aspects of traumatic brain injury, and the problem was that we could not synchronize this information based on time. But we could do it by aligning the signals themselves, because physiologic signals have discrete fingerprints. So it is a new way in which to align signals and account for differences between devices, as well as how they are capturing the data. The whole idea was to develop a data management system that enabled us to account for the continuity and the quality of the data, to develop indices that tell us in real time what the quality of the data is, and then to have an architecture that enabled us to actually store it and use it. Right now, as I said, we are collecting around 200,000 samples per second at SickKids. The data storage would be around 100 terabytes for the 2.3 trillion data points if we didn't compress it; we are storing it now in just around a terabyte, so we are effectively compressing and decompressing the data. And importantly, we can retrieve it at around 500 million samples a second. That is still not fast enough, but it is getting faster. This is really not just about the storage and the compression, but also the decompression and the indexing of the data, which enables us to retrieve it and feed it into real-time models at the bedside. Signal processing is another area where, I think, by collecting waveforms, we have to really start to analyze these signals and the richness within them. This is just one example a fellow and I have looked at with atrial fibrillation, where we were able to apply a machine learning model to distinguish atrial fibrillation from sinus rhythm. It is also possible to estimate blood pressure from non-traditional measurements such as the ECG and photoplethysmography.
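The fingerprint idea can be sketched with cross-correlation: the irregular beat-to-beat spacing of an ECG makes the alignment between two devices unambiguous even when their clocks disagree. This is a minimal illustration on a synthetic spike train, not the lab's actual method; `estimate_lag` and all the parameters are assumptions for the example.

```python
import numpy as np

def estimate_lag(sig_a, sig_b, fs):
    """Estimate how far sig_b lags sig_a, in seconds, from the peak of the
    full cross-correlation of the mean-removed signals. A positive result
    means sig_b's samples are delayed relative to sig_a's."""
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    xcorr = np.correlate(a, b, mode='full')
    return ((len(b) - 1) - np.argmax(xcorr)) / fs

# Synthetic ECG 'fingerprint': an irregular train of R waves. The beat-to-
# beat variability is exactly what makes the alignment unambiguous.
rng = np.random.default_rng(1)
fs = 250
beat_times = np.cumsum(rng.uniform(0.6, 1.0, size=60))  # irregular R-R gaps
ecg = np.zeros(int(beat_times[-1] * fs) + 1)
ecg[(beat_times * fs).astype(int)] = 1.0                # unit 'R waves'

# The same physiology seen by two devices whose clocks disagree by 0.8 s:
offset = int(0.8 * fs)
from_monitor = ecg[offset:]
from_eeg_box = ecg[:-offset]
print(estimate_lag(from_monitor, from_eeg_box, fs))     # → 0.8
```

A perfectly regular signal would make this ambiguous (every beat period is a candidate alignment); the physiologic irregularity is what turns the waveform into a usable fingerprint.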
This is a machine learning model with a convolutional network that really demonstrated we can quite accurately predict blood pressure based on these other signals. It is also possible to look at new information. These are Mayer waves: rhythmic, cyclic waves with around a 10-second period, a 0.1 Hz frequency, that are a reflection of sympathetic nervous activity. It is possible to take the signals, the ECG and the arterial blood pressure, look at those oscillations at that lower frequency, and display them. Right now at the bedside we can't do that. It is an advantage, I think, of collecting all these physiological signals. So I wanted to just finish by saying there are huge possibilities here. The problems of the architecture to store and use the data are being solved. The question then is how we should analyze it. I mentioned earlier that we are using a control-theory-based approach in some models. Machine learning is clearly what is talked about as the black box. It is really not based on any pre-existing knowledge: we take a lot of data from many sources, and then, through various networks, the machine comes up with an output. And that output is hard to prove. We don't know the mathematics behind it. We don't understand how the machine learning model was created, what was weighted, what was discarded from the model, how the machine did that. It has created this idea of the black box: we don't understand how the model is generating its output. Do you know what is coming into it? Does the output really tell us what is going on, and is it reliable and relevant? So that black box is a problem. But I think equally a problem is our own black box, in that we like to be binary. We like to be very clear, yes or no, about what is going on. I understand that; I have practiced that way. But I think we also need to appreciate that these models give you an estimation, a probability that something is occurring.
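Exposing Mayer waves from an arterial pressure signal can be sketched as a band-pass filter around 0.1 Hz. This is an illustrative example on synthetic data: the 0.05–0.15 Hz band edges and the filter order are common conventions rather than a clinical standard, and `mayer_wave_component` is a hypothetical helper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def mayer_wave_component(abp, fs, band=(0.05, 0.15)):
    """Band-pass an arterial pressure signal around ~0.1 Hz to expose
    Mayer waves, the slow oscillations linked to sympathetic activity."""
    sos = butter(2, band, btype='band', fs=fs, output='sos')
    return sosfiltfilt(sos, abp)

# Synthetic arterial pressure: cardiac pulsation at 1.5 Hz, a 0.1 Hz
# 'Mayer' oscillation of 3 mmHg amplitude, and noise (all illustrative).
fs = 25
t = np.arange(0, 300, 1 / fs)                       # five minutes
rng = np.random.default_rng(0)
abp = (80
       + 15 * np.sin(2 * np.pi * 1.5 * t)           # heartbeat
       + 3 * np.sin(2 * np.pi * 0.1 * t)            # Mayer wave
       + rng.normal(0, 1, t.size))

mayer = mayer_wave_component(abp, fs)
# Away from the filter's edge transients, the recovered component's peak
# should sit close to the 3 mmHg amplitude that was injected.
print(np.max(mayer[fs * 60: -fs * 60]))
```

The heartbeat and the baseline are rejected by the filter, leaving only the slow oscillation, which is the display the talk says bedside monitors cannot currently offer.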
It is not telling you what to do; rather, it is giving you additional aggregated information that informs the decisions you are about to make. We want our models to be explainable and believable, and they have to be actionable. If that is what we want them to be, to be of value, then the data feeding into them has to be robust; it has to be reliable and relevant. That means the types of data being fed in have to be reliably generated, and that feeds right back to us in the operating room and in the ICU, in terms of the connections, the artifacts, the time management. So it is not so much taking any one piece of data that has been generated; you have to understand the journey of that data, and how it will then feed into a model that may give you an output to augment the decisions you are trying to make. This is our current prototype, or current strategy, for developing models at SickKids, because we realized that offline analytics is good for research, but it doesn't help us at the bedside. To get to the bedside, we didn't want to wait until we had a finished, polished product; getting there incrementally was more important. So we developed these scrums, which are based on software development; it is nothing new. First of all, the team decides: this is the problem that we want to solve, and we work to understand that problem. We implement human factors into that decision making early on. Once we have determined what the problem is, we start to work out what data we need and how we get that data. And we start these very rapid sprints, usually two-weekly cycles of data generation, evaluation, and analysis, which we continually review in an incremental way. And we do silent studies, where we apply the model but don't use it for clinical care; we are training it in the background to eventually get to a finished model. This incremental process is really important, because we have to, as I said, get to these fundamental aspects.
Is it believable? Can we explain it? Does it allow us to act on the information that is coming towards us? And that really then starts to get to our own interaction with the data and our decision making. There is a lot of macrocognitive research actually going on now as to how we make decisions and how the data that feeds through influences them. I think it is really important to have this sort of framework. We focus a lot on research, taking the data and coming up with some new finding, and that is fine. But it has to be able to translate across to the bedside, and you need a framework for doing that. And you need people. These are the people in the lab, and they are just remarkable. John Mosley leads the lab; he is a critical care cardiologist at SickKids, trained here in Boston. We have engineers, physicists, neuroscientists, PhDs, a cardiac surgeon, and nursing staff, across the board, involved in these projects. Having that team together has really given us new ideas and new insights to think about the problems we are trying to solve. So thank you. I want to, as I said, recognize this great team. We have great teams here as well, and being able to bring those teams together to develop an infrastructure for collecting all the data that we have here across the intensive care units is our goal. Bringing people together across the ICUs and the operating rooms, with different skill sets, who can manage data and use that data across the whole spectrum of its journey, I think is not only important, but will really accelerate the work that we are doing. So I want to thank you all, and I am happy to take any questions in the couple of minutes that are remaining.

Thank you, that was a really great presentation. Does anyone have any questions? Feel free to just chime in or put them in the chat.

Peter, it's Jeff Burns. Thank you for that presentation. You have done a lot of work since you left us, and now you're back.
So could I ask you the hardest question? You're back, and you are seeing the strengths and limitations of our current systems. The AIMS system in the operating room is arguably, I think, the most functional. We are getting new monitoring, and we are hopeful that that will also be an advance. The problems of the EMR are universal. What, as you see it, are the immediate next steps that we need to take as an institution?

I don't think we need to start by worrying about these systems as being problems; they are solvable, and it will require an investment in people. The data you have in the AIMS system is tremendous, and you have to have a team that is able to access that and then link it with the 360 data, which is also very critical, and with the T3 databases that you have. So I think the first step is really developing a team that will look at the sources of data, the structure of the data within those sources, and how they can be used. It doesn't mean that you have to have a merged, centralized system; a federated system of data works. You have these data areas; it is a matter of bringing them together, or connecting them, and that requires heavy-lifting computer science. I think the second thing is to have people who are able to navigate through that data, who know where the data is and the quality of the data within those various sources, sort of like data officers, data navigators. And the third, and this is all around the technical and structural side of things, is to have an approach to the analysis of the data. Overlaying all of that has to be the clinical context: what is the problem you are trying to solve? There are many problems, and it is not going to be sequential. But have rapid teams working in rapid cycles that are able to not only understand the problem, but very quickly put into place the steps necessary to get the data and start the process of analysis.
And it is really making sure that you can federate and aggregate the data sources, and that is actually easier to do than might be imagined. It requires us not seeing ourselves in terms of ownership, but really, in a sharing way, trying to use that data. So I guess it gets down to data governance. Infrastructure and data governance are probably the two next steps, Jeff.

Peter, I want to thank you so much for this presentation; we have had some discussions about this. I guess, to me, for some clinicians looking at this, it could be a little bit overwhelming, and I'm wondering how you engaged clinicians in Toronto to participate. Because the AI itself seems almost analogous to basic science, and without clinicians identifying the issues, helping to decide what data is actually critical to solve problems, and participating in the validation of some of these models and their usefulness, et cetera, this doesn't go very far. So did you have strategies for how you actually got people to engage and work with the information technologists and data scientists in Toronto?

Yes. The first was to demystify it: to say, look, the data is there, it can be used. You have a problem, you have an idea; let's see how we can apply the data, and we'll give you the resources and help you do that. It didn't matter that you didn't know how to program, or that you didn't know how to do a particular statistical technique; we could support you with that. So there is that sense of engagement: people have a problem, and we have a potential way in which to solve it. It doesn't sound like very much, but it is actually quite a barrier. So you make it accessible, you make it welcoming and supportive. And the other side of that, then, is to understand what you are going to do with it. What is your ultimate aim for this?
Is it going to be for research, or are we going to take it down to the point of deployment? That requires a different way of thinking about it as well, in terms of the type of model that gets developed and how it is going to influence the decisions that you make. Deployment is something I didn't touch on here at all, but it is a really important component of all of this. Often we will start with the downstream area we want to get to, but I think starting upstream is very important as well. I actually have not found engagement to be a problem. We all have various insights, clinical contexts, and problems we would like to solve; it is a matter of making sure that we can provide people with the resources and the framework to address them.

It looks like there's a question in the chat box about ensuring the quality of the data, and what resources that takes.

So you can assess the quality of the data in a number of different ways. You can look at gaps, for instance; I talked about the gap tolerance. If there is a gap beyond, say, 10 seconds in the data, you don't enter that data into a model. You can look at the frequency, and the variability in that frequency, and also filter based on those sorts of criteria on the waveform itself. So you can preset what you want. What is important is not to make those decisions a priori: you collect everything, and you build into the structure the means by which you can filter out the data that you don't wish to include in the model. You don't want a data dump; you want to be able to say, give me a set of data about this patient. As an example, we just looked at 130 liver transplants at SickKids, with data acquired in the ICU and the operating room, and identified the trajectories of those patients that lead to good recovery and those that lead to complicated recovery.
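The gap-tolerance rule described above (don't feed data around gaps beyond a preset threshold, such as 10 seconds, into a model) can be sketched as a segmentation step applied before modeling. `split_on_gaps` is a hypothetical helper for illustration.

```python
import numpy as np

def split_on_gaps(timestamps, values, gap_tolerance_s=10.0):
    """Split an irregularly sampled series into contiguous segments,
    breaking wherever consecutive samples are more than gap_tolerance_s
    apart. Each segment can then be fed to a model; the gaps are excluded."""
    timestamps = np.asarray(timestamps, dtype=float)
    values = np.asarray(values, dtype=float)
    breaks = np.where(np.diff(timestamps) > gap_tolerance_s)[0] + 1
    return list(zip(np.split(timestamps, breaks), np.split(values, breaks)))

# Heart-rate samples with a 45-second dropout in the middle:
ts = [0, 1, 2, 3, 48, 49, 50]
hr = [118, 119, 118, 120, 131, 130, 129]
segments = split_on_gaps(ts, hr)
print(len(segments))       # → 2
print(segments[0][1])      # → [118. 119. 118. 120.]
```

Note that the raw data is never discarded: everything is collected, and the tolerance is applied downstream as a filter, which matches the "collect everything, filter later" principle in the answer above.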
And they were able to filter out data where there were gaps, where there were artifacts, various things like that. You do that before you actually start to model the data. So we gave them a data set curated to what they wished to see. That's one way. The other, in real time, is to develop quality indices. This is really at an early phase, but just as we have reference ranges and boundaries for various laboratory tests, we should have quality indices for signals, such as the coefficient of variation. These are being developed right now, and I think they will eventually be incorporated into all of these signals being collected at the bedside, so that we can understand the quality of the signal in real time, and then adjust in real time to make sure that the signal can be enhanced.

Laura, I wonder if I could just make a quick comment; the time is up, so it's not a question, and I'm not smart enough to ask a question about this. I just want to make a comment. Peter, I think you can probably see by the attendance this morning how excited people are about this information, and just as much, how happy we are to have you back with us in Boston. So many people were thrilled when we were talking with you about taking this job, and that you were willing to move back here with your family. I have had the great pleasure, in just the short while you've been back, to interact with you more than maybe most of the people on the screen. This is a very bright time for Boston Children's, in large part due to your leadership. So thank you so much for everything.

Steve, I really appreciate it. Thank you for the warmth and generosity that has been directed towards me; I really do appreciate it. As I said, it has been coming home, and it has been a joy. A lot of work to do, of course, and this is just one area that I'd really like to see accelerate. There is incredible skill here.
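A rolling coefficient of variation, mentioned above as one candidate quality index, can be sketched as follows. The window length and the interpretation thresholds are illustrative assumptions, not validated bounds, and `rolling_cv` is a hypothetical helper.

```python
import numpy as np

def rolling_cv(signal, window):
    """Rolling coefficient of variation (SD / mean) as a crude real-time
    signal-quality index: a near-zero CV can flag a flatlined sensor, and
    a sudden spike can flag noise or disconnection. Bounds would be tuned
    per signal, much like reference ranges for a lab test."""
    signal = np.asarray(signal, dtype=float)
    out = np.full(signal.size, np.nan)
    for i in range(window - 1, signal.size):
        w = signal[i - window + 1: i + 1]
        m = w.mean()
        out[i] = w.std() / m if m != 0 else np.nan
    return out

# A pulsatile stretch followed by a suspicious flat stretch:
sig = [78, 92, 80, 95, 79, 93, 85, 85, 85, 85, 85, 85]
cv = rolling_cv(sig, window=6)
print(cv[5] > 0.05)    # pulsatile window → appreciable CV → True
print(cv[-1] < 1e-9)   # flat window → CV near 0 → True
```

Streaming this index alongside the signal is one way to make the "heat map" view of data quality described earlier concrete: each bedside, each signal, each moment gets a number that can be thresholded and displayed.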
I just look at the Hollywood squares here, at the faces and names. There's no barrier here; it's not as if you need to be searching for people to come and do this. They're here, and the ideas are here. And the ability to take this forward and accelerate it rapidly, with the right infrastructure, is exciting. There really is so much that can be done. Thank you, Steve; I'm really pleased to be back here. Laura, thank you very much; I appreciate it too.

Thank you, Dr. Laussen. And we are out of time, so we'll close things out now. Everyone have a great day. Thank you.