Patricia Trbovich, PhD - Human Factors and the Role of Standardization and Adaptation in Patient Safety
Timestamps
14:35
Introduction to the problem
Patricia introduces the problem of clinicians feeling overwhelmed by standardization and bureaucracy
27:45
The importance of understanding human factors in healthcare
Patricia emphasizes the need for understanding human factors in healthcare to improve patient safety and clinician well-being
40:55
Simulation as a tool for improvement
Patricia discusses how simulation can be used to test new procedures and policies before implementation
54:05
The need for collaboration between clinicians, bureaucrats, and technology developers
Patricia highlights the importance of collaboration to ensure that solutions are user-centered and effective
1:07:15
The impact of standardization on clinician resilience
Patricia discusses how standardization can erode clinician resilience and lead to burnout
Topic overview
Patricia Trbovich, PhD - Human Factors and the Role of Standardization and Adaptation in Patient Safety
Surgery and Anesthesia Grand Rounds (December 18, 2019)
Intended audience: Healthcare professionals and clinicians.
Transcript
Speaker: Patricia Trbovich
We will get started. Thanks everyone. Good morning. Welcome to Grand Rounds. We are delighted to welcome Patricia on behalf of the simulator programs, as well as the Department of Anesthesia, and it's wonderful to have shared rounds this morning; thanks to the Department of Surgery as well for offering to do this. I think, as you'll see, it's just such a perfect fit given Patricia's work to be speaking here at this forum, particularly at this point in time, given all that's going on in patient safety, work that's going on in simulation, and work that's ongoing now in live video, and you'll hear about work that Patricia's doing that's really pushing the field. Just briefly, some background. She's incredibly well-trained, I can tell you that, and I'm doing a summary of all of what Patricia has done. She got her PhD in human factors from Carleton University in Ottawa. She's a native of Ottawa and has done most of her education in that area; she did her master's in cognition and psychology and her original BA in psychology at the University of Ottawa. She's won multiple awards. She's published over 90 papers. She's raised somewhere on the order of eight or nine million dollars in funding for the kind of work that she does. She started actually in the automotive industry, interestingly. So she really was all about understanding form and function, understanding devices, device testing, and then rapidly started to make her way into healthcare. She's currently associate professor of quality improvement and patient safety at the Institute of Health Policy, Management and Evaluation at the University of Toronto. She holds the Badeau Family Research Chair in Patient Safety and Quality at the nearby North York General Hospital in Toronto, and she holds a cross appointment at the Institute of Biomaterials and Biomedical Engineering at the University as well. Perhaps most interesting has been her recent effort leading what she has developed as the HumanEra team — human ERA, a new era of understanding human behavior, is how she's coined the name of her group. It's becoming incredibly active both in Toronto and throughout the region, and it specializes in the evaluation and improvement of safety in healthcare from a human factors standpoint. So she really is about anticipating uncertainty: how do you pick up on uncertainty early and start to make changes, or create fail-safe systems? Her current research within that realm is quite diverse. She's studying opioids and opioid administration and handling through a funded grant. She has a very large set of projects related to surgical safety, hence her being here, and a lot of that work is being done using what many of you may have heard of, the Black Box, which is adapting black box technologies from the airline industry into the operating room. She's going to tell you about some very exciting work around that. And finally, she's been working in pediatric critical care with her dear friend Peter Laussen out at SickKids, looking at multidisciplinary rounds in the ICU and understanding the ergonomics of that: how do we optimize rounds, how do we reinvent rounds — some really interesting conversations we've been having over the last couple of days. She also serves as associate editor for the journal BMJ Quality & Safety. So I am thrilled to welcome my friend and colleague, Patricia Trbovich. Welcome, Patricia.
So hello everyone, and thank you so much for being here, and thank you for the opportunity. It's my great pleasure to be spending some time with you during grand rounds, and a special thank you to Dr. Winstock for making this visit possible. So what I'd like to do today is really focus on why we need a human factors lens on surgical safety. More specifically, what I'd like to hone in on is this idea that in healthcare, we tend to overvalue standardization and undervalue adaptation. It's not so much a question of one versus the other; rather, both are required, and I want to try to unpack when each is needed and how we help support clinicians in achieving both of these goals. So why is this important? Well, because despite over 20 years of focused attention on trying to improve patient safety, we really haven't moved the needle that much. We still have a persistently high rate of preventable adverse events. And when we look at the types of solutions that we've been implementing over the past 20 years — and we have many of them, for example double checks, automation, simplification, protocols, training — one of the things that all of these interventions have in common is that we're trying to achieve standardization. So essentially we're saying: we know what the right thing to do is, and we want to ensure that everyone is following that standard of practice. This works really well when you're on the low-uncertainty end of the spectrum; that's where standardization really is key. But how often are we in healthcare on the lower end of the uncertainty spectrum compared to the higher end? To reflect on this, I wanted to start off with a quote from William Osler. He's a famous Canadian physician, and he was one of the four founders of Johns Hopkins Hospital as well. We actually have a hospital named after him in Toronto, so we don't only have Wayne Gretzky to be proud of, we also have William Osler. And he has this quote: medicine is the science of uncertainty and the art of probability. I think that over the course of several decades, in our drive to standardize, we really have started to lose respect for just how much uncertainty we have in healthcare, and we try to see healthcare as having much more certainty than it actually does. This has tremendous impacts, not the least of which is on our healthcare workers, who are constantly faced with chronic uncertainty, having to make decisions under ambiguous conditions. And if we don't recognize and support them through that, then we can't really be surprised that we have the high rate of burnout that we do. I would actually argue that load management is not just something that we should reserve for Kawhi Leonard; it might be something that we want to think about for our healthcare workers. When they've gone through a very stressful time, through chronic uncertainty, it might be important to provide some support there as well. So it's important to really try to identify when we are slowly drifting into higher levels of uncertainty, because this is precisely where we're going to have to shift from working in a very standardized way to working in a more adaptive way. And just to really emphasize the fact that both are important, I want to spend some time talking about the importance of standardization. There are definitely many tasks that lend themselves to standardization.
These are typically the tasks where we have a really good understanding of the different system factors at play. Not only do we understand the different system factors, but we also understand how they're interconnected and how one relates to the other. We have many examples of these; a lot of improvements in anesthesia gas machines, for example, have been made over the years. I think we'd all agree that you don't want to leave it up to each clinician's discretion how they're going to configure an ECMO or a hemodialysis machine. So we have many tasks in healthcare that require standardization, and there's still a huge amount of effort that needs to be put in in that area. But then we also have many tasks that require adaptation. These are the ones that I like to refer to as our big hairy problems. They're similar to our structured problems in that we often know what the different system factors are, but unlike the structured problems, we don't necessarily understand how they're all interconnected and how one might be impacting the other — or the interaction between them might even be dynamically shifting as we're working. Here are some examples: coping with intraoperative complications, any time there's a change in a surgical procedure, or whenever you have to deal with a mass casualty incident. There are many examples of these more hairy problems. The problem, however, is that we often take solutions that work really well for structured problems and try to extrapolate those solutions to the big hairy problems. And that we shouldn't do, because it leads to a lack of alignment — a lack of alignment between the type of problem that we're dealing with and the type of solution that we come up with. An example of this would be: you have a big hairy problem and you're trying to assign a more structured solution to it. For example, how do you reduce mortality and morbidity in surgery? An example of a structured solution is a surgical checklist. Now, this is not to say, and I certainly would not suggest here in Boston, that the surgical checklist is not good. It is. But the importance is understanding that while it might help untangle some knots in the more complex problem, it's not necessarily getting at all of them. It's important to have a good understanding and recognition of which aspects of the complex problem it is addressing, and where we still have areas to work on. The other thing I want to highlight is that when you're dealing with these more complex problems, it's important not even to be trying to achieve a solution per se, but rather to be trying to think of what strategy is required — the difference between telling people what to think versus telling people how to think. Because when we don't necessarily understand, when we haven't predicted this particular permutation of events, we can't necessarily assign a solution to it. So rather than focusing on trying to arrive at a solution, we should be trying to think about how to strategize through that problem. And this is where human factors comes in. It's a science that really helps us understand when we should standardize and when we should be more adaptive. It's been around in other high-risk industries for a long time — for example, the military, nuclear power, and aviation — but it's relatively new to healthcare; it's probably only in the last 10 years that we're starting to see more human factors being applied.
And the definition is the study of how people interact, both physically and psychologically, with products, tools, and processes. What this definition is trying to get at is the fact that we consider the person who's at the center of the system — their abilities, their ability to work in teams and communicate — but there are many other system factors that we have to take into account: how our technologies and our tools are designed, organizational factors such as staffing levels, how our workflow and our tasks are designed, or even how our environments are designed. So human factors is really about looking holistically at how all of these different factors interact — and not only the ones that exist within the four walls of our hospital, but also the external environmental factors that have an impact on how we behave in our hospitals. For example, the manufacturers who are creating devices that come into our hospitals, or the organizational bodies and governments who create guidelines and standards that we expect people to abide by. All of these different system factors interact to have an impact on how we behave, and it's important to have an appreciation for them. So that's really what human factors is about. Now, on this topic, I also wanted to highlight that it's important for us to think about the different pressures that we, as healthcare organizations, put on our healthcare workers — pressures that may be nudging people's behaviors, influencing their behaviors in ways that we might not necessarily want. A good framework for thinking about this is the one by Rasmussen and Amalberti, and I'll just take a few minutes to walk you through it, because it really has helped shape our thinking around when you need standardization versus when you need adaptation. Essentially, in this framework, what they say is that in healthcare, as in any industry, we have workers, and we put various pressures on those workers — or, another way to think of it, we establish different boundaries. An example of a boundary would be an economic boundary: we don't want people spending too much money, so we pressure them to spend less. Another example would be a workload boundary: we don't want people slacking off too much. So it's the classic trying to get our staff to do more with less, and our staff respond to these pressures by trying to do more with less. But of course, we don't want them going too far, because that could lead to an adverse event. So we establish yet another boundary, which is our official work practices boundary. Now, if we're within these boundaries, it's referred to as the legal zone. We know to stay within these boundaries because we have rules, policies, and standards that have been put in place to ensure that we're in that zone. It's analogous to driving on the highway: I know the speed limit is 100 kilometers an hour — or I guess in the US it'd be 55 miles an hour, I'm not sure what it is — and I know that I'm going to stay within those limits. Now, given certain pressures that I'm under, though, I may go beyond that limit. So now I may be driving at 75 miles an hour, despite the fact that I know full well the speed limit is 55. And I might start to do this on a regular basis. Why? Well, because I haven't been stopped by the police, and it hasn't led to an accident. So it starts to feel very normal.
And this is why it's referred to as the illegal normal — illegal in the sense that you're not following the standards, the rules, policies, and procedures, but normal because it's become your new normal. We live in this illegal normal space a lot in healthcare. And the thing I really want to highlight is that there are various reasons that we live in that space. It might be that that line, that official work practices boundary, is not really in the right place. When we actually look at the protocols that have been put in place and ask whether somebody who followed that protocol would even be able to do their tasks, we find that sometimes our protocols are unrealistic, or they might be too vague — I might interpret them one way, somebody else interprets them another way, and we're not even sure when we're crossing that line. Or I may be expected to work with a system that's been designed for certain constraints, but now I have other contextual factors in which I'm trying to work, and therefore I have to do a workaround. So the point here is that there are very valid reasons that we live in this illegal normal space, and it's not simply because people are trying to break rules and protocols. We have to have an appreciation for the fact that we often live in that space. Now, the risk or the danger of being in this illegal normal zone is that once you've crossed that first official work practices boundary, the real safety boundary — the one that, when you cross it, will lead to a preventable adverse event — is a boundary we don't know the location of. Here I have it in red, but in reality, it's invisible. And this is the one that's referred to as the illegal illegal, because if you cross it, it actually leads to a preventable adverse event. Of course, you might be thinking, well, adverse events can happen anywhere; it's not because I'm in the legal zone and following standards that an error can't happen. And that's correct. But the idea behind this model is that the closer you are to this outer boundary, the higher the probability that the classic system factors align in a way that will lead to an adverse event. So when we reflect on what we have been trying to do over the last 20-plus years to reduce preventable adverse events, essentially we've been trying to figure out how to push our staff back into that legal zone. We do it through remedial training, reinforcing a policy or procedure — there are many different ways that we try to get them back into that zone. From a human factors perspective, what we've been trying to do is ask: how can we turn that dashed line into a brick wall? How can we make it impossible for people to do the wrong thing, by design? So instead of relying on rules, policies, and standards, we try to implement things like simplifying a task, automating a task, or creating a forcing function to keep people within those boundaries. Regardless of which solution you use, one of the things they have in common is that you're trying to achieve standardization when you're in that zone. And on this topic of standardization, we do know that we have a hierarchy of effectiveness in terms of interventions that are effective at reaching standardization. The ones at the bottom are considered less effective because they rely on people — people who are already overburdened, and yet we're asking them to do one more thing.
So we're asking them to remember the education, remember a protocol, remember to pick up a checklist. Whereas the ones at the top are considered more effective because they're more system-based. Rather than trying to change people, you're changing the system: making the task simpler, automating it, creating a forcing function, so people don't have to put much cognitive load into thinking about what to do. It's just simpler by design. So again, a lot of our current focus in healthcare has been on this standardization. And this works well when we're under lower levels of uncertainty, but we also have to appreciate and recognize that we often live at a higher level of uncertainty, in this illegal normal space. And this is where we're going to require completely different interventions than the ones we require in the legal zone. We don't want to be standardizing here, because this is where you're dealing with unpredictability. In order to be in that legal zone, you needed to have predicted those events — in order to put something on a checklist, or to automate it, or to build a forcing function around it, you need to have predicted that those events were going to happen. We don't have that luxury in healthcare of being able to predict all of the permutations of events that will happen. So we're going to be in this zone, and we require different types of interventions — ones that really help our clinicians be more resilient. We need to think about how we can help them improvise, adapt, and overcome the higher level of uncertainty that they inevitably will face. So again, we have all of these solutions when we're trying to achieve standardization, but what do we have in healthcare when it comes to resilience? This is where we haven't put as much focus, compared to other high-risk industries, on how we help build resilience into our systems and our people. So I just want to take a few minutes to talk about some of the strategies that we can use to build resilience. One of them is definitely technology. We know that we can provide more meaningful information to our clinicians so that they can make better decisions. The problem in healthcare, however, is that we often tend to throw more data at people, and throwing more data at people doesn't necessarily help them make better decisions. What we really have to do is ensure that we're transforming that data into meaningful information and presenting it in such a way that we truly are helping their cognitive decision making. So this is definitely part of it; technology will definitely help with this. However, when you're dealing with high levels of uncertainty, this is where people far surpass technology. We currently are, and will for a long time be, better than automation technology at dealing with uncertain events — those times when you have to rely on your hunch or your best guess that something just isn't right, and you have to navigate your way through it. On this topic of gut feelings and hunches, a good example is the Challenger space shuttle disaster that happened over 30 years ago. As they started to look into the incident, one of the things that came out was the fact that there were several engineers, one in particular, who kept saying that he had a hunch, he had a feeling, that something just wasn't right. And the engineer kept being asked to quantify his argument.
What he was worried about was that this was going to be the coldest day on which they had ever launched, and he was worried that the rubber ring around the booster rocket would be too loose. And they kept saying: quantify it — you're an engineer, quantify it, quantify it. But at the time he couldn't, and the best that he could muster, given the pressures he was under, was that it just doesn't feel right. And then NASA came back and said, well, it's not a quantifiable argument. And of course, we know what happened. Now, what's interesting with this example is that the culture at NASA prior to this incident had originally been very much open: they had a structured approach of listening to people's hunches and their best guesses, just trying to get some insight from people as to where they could improve. And then they changed CEOs, and that CEO was much more about quantification. In fact, they even had a sign: "In God we trust; all others bring data" — very much sending out the message that we're not interested in your opinions unless they're quantifiable. And after this disaster happened, they went back to the old culture of having a more structured way of listening to where people thought there might be cracks in their system. So one of the things they fell into was this fallacy referred to as the McNamara, or quantitative, fallacy: when decisions are based solely on quantitative observations and all qualitative factors are ignored. It's named after McNamara because he was the Secretary of Defense during the Vietnam War, and he was another one who said, I'm only interested in the data: as long as there are more dead bodies in their camp than in our camp, we're doing fine. And he neglected to listen to other people and other measures that were showing that their army's morale — the soldiers' morale — was decreasing, and the other camp was slowly encroaching on their territory. And of course, again, we know what happened. So listening to these hunches and gut feelings is really important. And although we refer to them as gut feelings and hunches, what they really are are insights that experts have built over the years. It's important to sometimes think of them in that way, because we might listen to them more. Another way to enhance resilience is to really think about how we can work better in teams and communicate better. And of course, this is where psychological safety is really key. You're probably aware of the study that was done by Google a few years ago, where they were on a quest to find what makes a perfect team. And they found that the number one factor that predicted whether a team was going to be successful was whether or not they had psychological safety — that ability to say, hey, I don't think this is right, or, have you considered this, without fear of retaliation or being judged. So what does this mean for healthcare? Well, it may mean that vulnerability leads to safety. At first this seems paradoxical, but it actually makes a lot of sense. Amy Edmondson, from here, has done a lot of research showing that when you're dealing with uncertainty, and when the task at hand requires the interdependence of many people on the team, that is where psychological safety is key. And so we really need to think about how we operationalize psychological safety. It's kind of one of those terms, like culture, where you know it's good, but how do you achieve it?
So here are just some examples that we can borrow from other industries. One that I like to look to is the Navy SEALs. They talk about the importance of creating space for connection and for reflection, and they conduct what they refer to as AARs, after action reviews. They do this either after an actual incident or sometimes after a simulated incident, where they spend the time to reflect and say: what did we do right? What did we do wrong? What could we have done better? And they say that the most important thing the leader can say is "I screwed that up." Why? Because if the leader can be vulnerable, it opens the floodgates for other people to also be vulnerable. We're starting to do a much better job of this in healthcare, where we have scheduled huddles that we conduct. And it's true that sharing our weaknesses is what's going to make us stronger as a team. But where I think we can also improve is not only having these scheduled huddles, but also, after you've gone through a stressful event where there was a lot of uncertainty, taking the time right then — instead of everyone just scattering back to their jobs — to really reflect on what went well, what went wrong, and how we can do better. And this leads to the next topic, which is that we have to practice this. It's not enough to tell people didactically that they need to speak up. This is where simulation can be hugely helpful: you have people come in, you stress the environment in predictable ways, and then you see how people are reacting. You get people used to speaking up, getting that snarky remark or that judgmental look, and forcing themselves through it to still speak up, even though they're feeling that. We can't expect that this is just going to emerge during a crisis, when it actually needs to happen. We need to practice this so that it becomes a muscle that we build, because it's very true that any team can work well under calm conditions, but what really defines a team is how well they come together in a time of crisis. The other thing I wanted to highlight is the importance of breadth of training. This is an expression that really resonated with me: breadth of training predicts breadth of transfer. This is again where we can learn from other industries, even from sports — to become a really good athlete, they often tell you that you should cross-train — and this is something that we should reflect on, how we could do a better job of it in healthcare. We tend to do cross-training in residency and in fellowships, where you tend to rotate, but once you're in the job, it tends to be more specialized, specialized, specialized. And so we might lose the ability to appreciate what it's like to be in the other person's shoes. So this is another place where we may want to consider using simulation. Sometimes what we'll do is have the anesthetist play the role of the surgeon, the surgeon play the role of the nurse — whichever combination you want. And it's not so much to get people to become specialized in that role, but rather for them to have an appreciation for what it feels like from that person's vantage point. What is the type of uncertainty that they deal with?
And then having discussions around how that might help when they are dealing with uncertainty — or maybe reducing interruptions, once you have a better appreciation for when that person is in a cognitively intense task. And they find that when you do this sampling, this breadth of training, it really helps improve people's willingness to be flexible. So that's another nice side effect. Another thing that I want to touch on is the importance of also training for why you're doing the task, not only focusing on what you're doing. We've done a lot of work with manufacturers on trying to understand how we can better design their devices, so that, again, you're making the right thing to do the easy thing to do for people. And when we're working with these manufacturers, we also tend to look at the types of training that clinicians are getting from the manufacturers and from the hospitals in terms of how to use the devices. One thing that we notice is that we often focus on the skills — or another word for this is the knobology: what knobs are you going to turn, what buttons are you going to push on this system? But we sometimes neglect to look at the knowledge required — the fundamental principles at play behind what you're doing. An example of where we saw this is a very common task in healthcare, where nurses, predominantly, will set up a secondary infusion. They know to hang the mini bag — the drug is often in a mini bag — and they learn the rule: use a hook and hang your primary solution, D5W or normal saline, lower. And they know that if they set it up this way, it's going to run such that the small mini bag of drug runs first, and then the back check valve on the bigger bag opens, and that one runs next. Of course, this is just what they learn, and they understand that they have to set it up that way. What we don't necessarily teach them is that the reason you're doing this is to establish a difference in hydrostatic pressure between the two bags. You can see here that because of the difference in the bag heights, there's a difference in hydrostatic pressure. If I were to remove that hook and hang these both at the same height, we'd have a negligible difference in height, they'd be at the same hydrostatic pressure, and the pump would pull from both of those bags at the same time. And of course, then you don't have the right rate of infusion. This is typically not a problem, because, again, nurses have learned the rule: use the hook. But what happens when they're presented with something non-routine, like a bigger bag of drug? When they're presented with this and they still apply the same rule of using the hook, then again, you don't have that difference in hydrostatic pressure, and again, the pump will pull from both bags at the same time. This is when they'll send the pump down to medical engineering and say it's faulty. Medical engineering will run their tests and say: no fault found. That's actually a term we use, because this is often what happens — no fault found. This is where we'll often get a call saying, can you come and help us investigate this? We don't understand; they're saying the pump is faulty, and we've run all the tests. So we went and spent some time observing — this was in the chemo daycare environment.
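As a rough, illustrative check of the physics the speaker describes here (the roughly 20 cm hook offset below is an assumed, typical value, not a figure from the talk), the head height difference between the bags translates into a driving pressure via the hydrostatic relation:

\[
\Delta P = \rho\, g\, \Delta h \approx 1000\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2} \times 0.20\ \mathrm{m} \approx 2\ \mathrm{kPa} \approx 15\ \mathrm{mmHg}.
\]

With both bags at the same height, \( \Delta h \approx 0 \), so \( \Delta P \approx 0 \) and nothing holds the back check valve closed — consistent with the pump drawing from both bags at once, as described above.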
And we then realized: ah, they're applying the same rule, and yet they have a bigger bag. In this case, if you wanted to establish the difference in hydrostatic pressure, you would have had to use two hooks, and then the pump would work the way it should. Well, the pump is always working the way it should; it's that you really need to have an understanding of the hydrostatic pressure. And this is where you wouldn't think of using two hooks unless you had been trained with that knowledge. So one of the things that we did here was create some e-learning modules for the nurses to review, so that we could explain these basic principles to them. And then we conducted usability tests to see how well the e-learning modules were working for them. It was amazing: when we watched them going through the e-learning modules, you could see the light bulbs going off, and people would say, oh my, I knew that I had to use the hook, but I never really knew why. So again, when you're trying to help build resiliency into your system, it can be really important to explain the basic principles — in this case, the fluid dynamics at play. Explaining the why behind what people are doing will really help them become more resilient in the face of unexpected events. So the next thing I wanted to focus on is the different methods that we use for studying safety. A lot of our traditional methods — chart reviews, incident reporting, mortality and morbidity rounds — are good, but they're often limited, because we're relying on people's best memory, their recollection of what led to the events. And we know that people's memories are sometimes limited or faulty, so it can be difficult to really get at the true root causes of a problem, and therefore we're often not aligning the right solutions to those true root causes. So we need to think about different methods, like lab simulations, in situ simulations, or, now, the OR black box, where it's not simulation — you're actually filming live. One of the things I really want to stress here is that no matter which of these you're using, in the case where you're videoing you can always rewind it and play it again and really try to understand the details. As the expression says, the devil is really in the details, and it's important that we understand these details when we're trying to come up with new ways of working. In fact, one of the ways that we have been using lab simulations — you don't always have to be testing in live environments — is for procurement purposes. If our hospital is about to procure a technology, we now use simulation, doing usability testing and having whichever clinicians are going to be expected to interact with these devices actually interact with them. So we're not just basing procurement on price and the different functions of the devices; it's now been ingrained that our hospital will also go through simulation in order to decide which products to buy. Then we'll use in situ simulation where we really do need to understand more of the details, and sometimes when we're looking for latent safety threats — I'll show an example. And then the OR black box, again, is even more realistic, because it is the live environment.
So the main point I wanted to drive home here is that because you have access to all of this extra detail, it really does allow you to identify the true root causes of the problem and then align the solution accordingly. So here's an example of in situ simulation; this is one that we did in a trauma bay. I always say that for a human factors specialist, walking into a trauma bay is like being a kid on Christmas morning. You know right away that you have a lot of material that you're going to be able to work with. You might notice that there are three clocks showing different times. And just seeing how the equipment is positioned and where things are, you just know that you're going to have some good stuff. So in this project, what we did was observe live, and then we also reviewed and coded the videos. Traumas were called as normal: people were paged, and then they would know to come into the trauma bay. We ran these once a month over the course of a year, and sometimes we would repeat the same scenarios to see whether there were differences across the different teams. When we coded the videos, we came up with over 300 latent safety threats. Well, nobody wants to hear about your 300 latent safety threats; it's overwhelming — what are we going to do about it? So we knew that we had to come up with an easy way to convince people of the main pain points that we needed to work on. In one of our scenarios, the team had to recognize that they had to perform a cricothyrotomy — a rare but safety-critical procedure. And what we found was that there was a lot of variability between the teams in terms of how they went about doing this task. So one way that we thought of measuring performance was in terms of their motion patterns. We decided to create a tracing tool, similar to when you're watching a hockey game and you can trace where the puck is going. That's what we were trying to do, but this time tracing the motion patterns of the person. So what I'm going to show you now is an example: at the point in time where the team has identified that they have to perform this cricothyrotomy, these are the motion patterns of the person who's about to perform that task. What you're seeing here is that this person is going to look for the equipment. He doesn't know where it is, so he spends some time looking for it. He finds the cric kit and opens it up, but it doesn't contain everything that he needs. So he spends more time going to look for equipment before he actually starts the procedure. OK, so that's one example. Now, in the interest of time, I won't go dynamically through all of them, but the top one is the one I just showed you — you can see there's a lot of motion. In the one on the bottom left, there's a little less motion; in this case, the person knew where the cric kit was and opened it, but we hadn't made any changes, so he still had to go look for the extra equipment, and that's what caused the extra motion there. And in the third one, you'll notice there's far less motion, and the reason for that is that this person delegated the task of going to look for equipment to other people in the room. And that's precisely what you would want.
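For readers curious about the mechanics, here is a minimal sketch of how motion traces like these — and the occupancy heat maps described next — might be produced from positions coded off video. The room dimensions, sampling rate, and random-walk positions are placeholders, not the study's actual data or tooling:

```python
# Minimal sketch (not the study's actual tool): given (x, y) positions of the
# clinician coded from video at regular intervals, draw the motion trace and an
# occupancy heat map of the trauma bay. All numbers below are made up.
import numpy as np
import matplotlib.pyplot as plt

ROOM_W, ROOM_H = 6.0, 5.0  # assumed trauma-bay dimensions in metres

# Hypothetical positions, one sample per second, generated as a random walk
rng = np.random.default_rng(0)
steps = rng.normal(0, 0.3, size=(180, 2))
xy = np.clip(np.cumsum(steps, axis=0) + [3.0, 2.5], 0, [ROOM_W, ROOM_H])

# Total distance walked during the task segment
path_length = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
print(f"Total distance walked: {path_length:.1f} m")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Motion trace: the "hockey puck" style path of the proceduralist
ax1.plot(xy[:, 0], xy[:, 1], lw=1)
ax1.set(title="Motion trace", xlim=(0, ROOM_W), ylim=(0, ROOM_H))

# Occupancy heat map: where the clinician spent the most time
heat, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=30,
                            range=[[0, ROOM_W], [0, ROOM_H]])
ax2.imshow(heat.T, origin="lower", extent=[0, ROOM_W, 0, ROOM_H], aspect="auto")
ax2.set(title="Occupancy heat map")

plt.tight_layout()
plt.show()
```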
When you're about to perform a rare but safety-critical procedure, you want the person who's about to perform the task to be focused on the task at hand, not going around looking for equipment — especially if they don't know where that equipment is and other people in the room do. So, really quickly, the other thing that we did was create heat maps, where the white spots show where people concentrated their time. On the top one, there are three main areas of concentration: at the patient, where they open the cric kit, and looking for equipment. On the second one, it's really at the patient and looking for equipment. And on the third one, again, it's very concentrated at the patient. So, really quickly, we were able to identify three main things that they needed to work on. One, we need to standardize where the equipment is and how it's labeled. Two, when we create a bundle or a kit, we have to ensure that it actually contains the equipment that you need. And three, we can do a better job of task allocation, especially when you're working in a team. And then, of course, what we're doing now is asking what interventions we want to put in place to reduce some of these risks, testing them in situ again, and going live when we feel we have something that is ready. I just want to quickly touch on the OR black box. Again, this is something where it's not simulation; it's live, and we're recording. It's based on aviation, where you have access to the black box to better understand what led to an incident. Here, we have access to all the video and the audio. We often use it in laparoscopic surgery, so you can see inside the body. We can also see the physiological parameters from the monitor. And we're starting to add more analytics to it. For example, we can see how many times the door has opened and closed — which is quite often. There's a camera to be able to tell the temperature in the room. We're starting to add wearables so that we can tell when people are stressed and how that interacts. We can also synchronize eye tracking. There are many different things we can do to really understand how people are behaving and the impact that this has on safety — but also the impact on where things go well. So, just as an example, this is one of our master's students who was coding the videos, coding them for latent safety threats as well as for resilient supports. Here is just an example of some of the latent safety threats. On the x-axis is the number of observations, and on the y-axis are the different types of events that were found. The biggest safety threat found was tool malfunction — for example, a stapler misfiring — or distracting workflow sounds. The thing I really want to draw your attention to is that all the ones highlighted in green are associated with work system elements beyond the individual. So if we're trying to improve safety and we only focus on reinforcing training or reinforcing a protocol, we're not going to have much success, because with a lot of these problems, the problem is not with the people. We'd be better off changing how we've designed our technology, our workflows, and our environments, and we'll be further ahead.
Conversely, when she coded for resilient supports — here, we're looking at all the times that we coded for resilience, where something unexpected happened but people were still able to cope through it — a lot of it was through skill coaching and also proactive task completion. So when we'd hear "this might happen, and therefore we should plan A, B, C," that, again, is thinking proactively about where things may go wrong. Here, the point is that all of the ones highlighted in blue are associated with the person at the center of the system. And so if we try to constrain people's behaviors too much and don't allow them to be adaptive, we're going to be taking away all of this resilience. And it's important, because safety is a non-event: it's one of those things you notice when it's bad, but don't notice when it's good. So it's important to really look at where things are going well despite the fact that we're dealing with uncertainty, and a lot of the time, again, that's thanks to the people you have. So, quickly, I just wanted to touch on the research framework that we use for deciding when we should be using more realistic environments versus the lab. This is referred to as a Jeffersonian research framework. It's named after Thomas Jefferson, who said that if we're going to give money to people to do research, we really want to ensure that they're not only contributing to academic science but also having practical impact. So this is a nice framework for thinking about how you can navigate in order to achieve both of those. On the x-axis, we ask ourselves what methodology is required for this part of our program: do we need something very representative, or do we need something more controlled? And on the y-axis, what is the intention of the research: are we trying to add to academic knowledge, or are we trying to get a product out there and implemented? Both are good, and how you achieve them will depend on how you navigate. Just as an example: when we're talking about academic research, we often fall in the bottom left-hand quadrant, trying to be very controlled, to be able to establish cause-effect relationships, with the goal of adding to knowledge. Conversely, when we're trying to get practical impact, we're often in the upper right-hand quadrant, trying to be in a very representative environment and to get penetration into the market. So here's just a quick example of how we sometimes navigate through these quadrants. We'll start off with field observations, or the OR black box now, to be able to really observe what is happening in the current state. It gives us an appreciation for the different risks and latent safety threats that are there. But one thing that's difficult is that it's often hard to get cause-effect relationships. You might be able to get correlations, but cause-effect is harder, because there are so many other extraneous factors that may be having an impact. So once we've understood the risks that we're interested in studying, we'll move into a more controlled environment, like a lab or an in situ simulation, where we can recreate what we've seen in the real world. We can plant errors. We can then look at errors of omission — did they detect the error we planted — or errors of commission, errors that we haven't necessarily planted but that they're making anyway.
And then we can make those cause-effect relationships and understand the safety impact of what we observed in the real world. Then we move toward designing interventions. Now, here I have design of interventions to the left, because as human factors specialists we will often come up with these interventions, but I could have also put it more toward the middle, because we often engage frontline staff in helping us come up with them. The goal here is really to think about what interventions they might want to implement in their practice. And we resist the urge to jump right into the upper right quadrant and implement them right away in the real world, as we often do in healthcare. Instead, we go back into either in situ or lab simulation and test the effectiveness of these interventions before going live. The ones that are found to be good, we know are ready to be implemented. The ones that have been a dismal failure, we get rid of in situ instead of putting them in the live environment. And the ones that need tweaking go through another round of simulation testing. Then, eventually, we're ready to go forward. So now we do another set of field observations once we've launched these products. And you'll notice that originally I had the OR black box and field observations at the bottom; here I have them at the top. They're always to the right, because they're very representative, but the goal of what we were measuring at first was really to understand the risks and the system factors at play. Our goals are very different, and our metrics are different, when we're launching. There, we're really trying to see whether there is uptake. So we'll look at uptake at the time that we implemented, three months later, six months later — however long we want, to check for sustainability. And then we also wonder: can we scale this? It might work with the unit that we've been working with, because they've been part of this process, but can we scale it to another unit or to another hospital? So then we'll implement it in a different environment and see again whether there's uptake or not. And we purposely engage these different units proactively — you know, when we're planning a grant or whatever — so that we're not just begging people to let us in at the end. So it's just an interesting framework. There are many other methods that you could put on here — focus groups, randomized controlled trials, interviews, whatever you want — but it really helps you navigate how you're going to achieve both academic impact through publications and getting products out there and implemented in the real world. So I just want to end with the importance of also thinking about how we operationalize patient safety. This is a definition from the WHO: patient safety is the absence of preventable harm during the process of healthcare. I think we'd all agree that this is a good definition. However, in terms of how we operationalize this definition, currently in healthcare we tend to say that we want to arrive at zero preventable adverse events. We have a lot of campaigns — at least in the greater Toronto area, there is a big obsession with chasing zero. And although this is a good and aspirational goal, the problem is that we often borrow from other industries, like manufacturing or construction, where they post the number of days since the last lost-time injury.
We're starting to post the number of days since the last preventable adverse event. The problem with this is that nobody wants to be the person who mucks up their CEO's numbers and brings this back to zero. So we may inadvertently be fostering a culture of hiding our errors and not coming forward with them, which of course goes against our objective of increasing patient safety. So we really have to be careful that this LTI, the lost-time injury index, doesn't become the LGI — what Sidney Dekker refers to as the looking good index. Or, as Mary Dixon-Woods, another famous patient safety advocate, says: we have to be careful that we don't become an industry that is comfort seeking. We have to be problem seeking. It's not because your data tells you that you're safe that you are in fact safe. So we always have to be looking for those problems, listening to people's hunches, and so forth. Resilience engineering provides a different definition of patient safety. It doesn't define it by the absence of something — it's not only about arriving at zero preventable adverse events — but also by the presence of something, and that something is people's adaptive capacity. Another word for that is resilience: your ability to bounce back, to absorb disturbance without breaking down, without catastrophic failure. So it's really important: with standardization, yes, you can try to achieve zero, but when you're dealing with uncertainty, you really need to operationalize safety in terms of how much adaptive capacity you have in your system. So, in summary, standardization and adaptation are dynamic behaviors that should coexist within our healthcare safety framework. Going back to William Osler, I think we need to start to have a healthier appreciation for just how much uncertainty we do have in our systems. And I think one of the reasons that we often don't like it is that it feels uncomfortable; it feels like we're leaving a bit of a crack in the system if we haven't dealt with that uncertainty. So I thought I would end with another Canadian, a singer that I really enjoy, Leonard Cohen. In one of his lyrics, he says: there's a crack in everything; that's how the light gets in. So hopefully, if we appreciate that, we will be able to reduce the rate of preventable adverse events. Thank you very much, and I'm happy to open it up for discussion and questions.

Patricia, thank you. Fantastic. We do have a little bit of time for questions and comments. Yeah, Jeff.

Patricia, thank you. It was a really interesting talk, and you gave voice and concept to the things that we all worry about. There's a trade-off, as you know, and I don't think this is an easy answer for anybody. But what is the trade-off between putting in interventions — the checklist manifesto — that do make us better, as you noted, since there are certain places to introduce standardized processes? On the other hand, I think everyone in this audience would say that if you add up all the things that we're being asked to do, combined with new devices that are placed in our hands, it sure feels like every step now has safety steps that aren't making us safer. You're an expert in this area. How do you start to push back against that and say the cognitive overload from all of these steps is not making us safer?
So that's an excellent question. And actually, I was just working with some of your fellows yesterday during the journal club, and that was one of the things that they brought up: we keep adding an extra time-out for different things, and eventually we're spending a lot of time doing time-outs. And obviously, it does make a lot of sense to implement standardization, as you're saying. One of the approaches that we try to take is, again, really understanding what the root cause of the problem is, and asking when we can try to put, as we refer to it, the knowledge in the world. So instead of having people have to remember — instead of relying on people to think about it — how can we make it easier for them to do the right thing through how we've designed it? Oftentimes, when we've established what the right thing to do is, we can try to nudge people by putting those affordances in the world. A good example I often use is the door handle. If it's a door handle where you're not sure if you have to pull or push, then you have to think about it. Whereas if it's just a flat metal panel, the only thing you can do is push it; it's not like you're going to try to get your nails in and pull it. So we try to look for ways to put those physical affordances in place, so that you're not relying on the person having to remember to stop and think about it — you're making it easier. Conversely, when you're dealing with more uncertainty and you haven't necessarily predicted it, you can't put the knowledge in the world; you have to try to put the knowledge in the head. That's usually where the experts are the ones who really have that knowledge, and even watching them on an OR black box, we don't know what's going on in their heads. So we try to use other methods, like cognitive task analysis, where we really try to have the experts reflect on what they're thinking and how they're going from one decision to the next. And then we try to turn those insights into cognitive affordances — giving people, novices for example, some cues about things they might want to think about as they're working through a problem. So I think that's one of the ways that we really try to go about it: when does it make sense to put the knowledge in the world versus when do we really have to try to put the knowledge in people's heads? Other questions? Steve?

I would second that, and just comment that you really hit a warm spot in our hearts, because I think we're frustrated. I think we take people who are resilient, who desire to be healers, and who are driven to get through all the prerequisites to get there, and then we drive the resilience out of them by making them check boxes and fit into constructs. And it probably wasn't Einstein who said it, but he's often quoted as saying: not everything that matters can be counted, and not everything that can be counted matters. We spend so much of our time counting things, ignoring what we used to think was obvious. And some of the way we train — the things you just described in terms of learning from the master, the experience that comes — how do you translate that? We've now taken our training programs and put them into forced counting. How do we — and we can talk about work hours and mastery and all those things that are out there — how do we reverse that trend in this environment of forced legal constructs?
You will do this, you will check the box. There's a knee-jerk reaction to everything, making another rule for a rare event that may never happen again, and the other 65 rare events that you haven't thought about are going to happen more often now, because you spent so much time on the knee-jerk reaction to the one that happened yesterday.

Yeah, I think it's a really important question, and that's what I'm passionate about: first of all, recognizing that we're taking these really smart people and forcing them to follow rules and procedures rather than do what we've actually hired them to do, which is think. So that's part of it. The other thing is, I think it's really important, again, to identify the times when we truly know what the best practice is, so that we can standardize it, and then to ask how we can help people dynamically realize when they're drifting more and more into uncertainty. That's where, through more data and analytics, we may eventually get to the point of having some kind of technological intervention that won't necessarily tell you what to do, but will help you recognize that the last time this permutation of events happened, it led to some uncertainty. But I do think it's really about the people as well, and about going back to how we continue to do mentorship. It's exactly as you're saying: how do you convey the knowledge and wisdom you've built over the years, and how do we extract that in a way we can codify, at best, as Gary Klein talks about, putting cues in people's heads as opposed to only putting them in the outside world? It's a topic I'm really passionate about, and that's exactly it: I don't know the answer, how we necessarily get there. I just know that if we continue to push people into standardization, we're not going to make the progress I think we need to, and I think that is part of why we haven't been able to reduce the rate of adverse events. When you think back 20 years, people thought it was going to be CPOE or barcode scanning or all of these other things. They have had tremendous success and have helped in certain respects, but they're not the end of the story.

Yes, thank you again for a terrific talk. You focused on the clinicians. I'd just like to ask whether you do any of this work with the non-clinicians and the bureaucrats in the hospital, who seem to be increasing the stress and demands on clinicians. I think the two previous questions were related to that. We get emails all the time from the computer people: do this, your computer system is changing, you have to do this continuing education thing, there's a new monitor that's come out, do a net learning module and learn it. Part of the problem, I think, is that clinicians are feeling an increasing burden from all of this. Do you do any simulation work with the hospital bureaucracy, simulating what happens when they put out an announcement to the whole hospital to do something? What are the side effects, and how do you prevent them from making the problems worse?

Yes, that's an excellent question. We do. The way I think we do that is by proactively trying not to wait until something has been a dismal failure and everyone is refusing to use the product.
But that's where we've really said: we need you to come to us early, before you actually procure the product. This was really hard to get through in our hospital. In fact, there was one time it was for an automated external defibrillator, and they thought, this is a simple product, we don't need your help. And we said, please, let us; this was when our team was starting out, years ago, and we offered to do it pro bono. Just let us do it. So we did. We started off with what we call a heuristic evaluation, where we do our own analysis, and we thought, OK, there are probably going to be problems with where to put the pads, how to perform the compressions, and how to respond to the auditory signals. But even we hadn't anticipated what turned out to be the major risk once we actually ran it in simulation. In simulation, we started from the moment the device is on the wall. They grabbed it, and the biggest problem they had was opening the case. One had multiple zippers, one had these different snaps; it was painful to watch. And of course, every second matters when somebody is in cardiac arrest. So that simple evaluation completely changed their purchase decision. They decided not to go with any of the three devices that were shortlisted and to wait until they had a better product. It also changed how we do procurement, because now, especially for expensive devices, we go through usability testing in a lab environment prior to making the purchasing decision. So that's one of the ways.

The other thing it made me think of is that sometimes it's not even within the hospital; we get these policies from outside. Somebody will come in, the government will do an audit and say we're not compliant. A recent example is with our personal protective equipment: staff are always supposed to wear goggles and a mask, so we tried putting out goggles with the mask attached. You don't have a choice; you just grab it and it's all one piece. But then they refuse to use it, or they rip it apart. Clinicians are really resilient and find workarounds. So we tried to work with them and said, OK, fine, we'll provide the goggles and the mask separately. But they don't like that either, and at the end of the day, it's because the goggles just don't work; they're not designed for the job. I'd rather take the risk of harming myself from some kind of splash than not be able to do my job with the patient. So there, what I really think needs to happen is to go back to that external environment and say, this is not right. You cannot make these rules and make us change our practices just to fit your standard when you don't truly understand how we're working. And that's another way we use simulation, or even live observations: really honing in on, and having an appreciation for, work as done, not work as imagined.

I had the opportunity to show Trisha yesterday a wonderful example of this. The Spectralink phones have been issued to everyone. It turns out that when you push the power-on button, you also instinctively squeeze the other side of the case, and on the other side of the case is the ringer-off button. So for a long period of time, as these were introduced, people were trying to turn on their phones and instead were turning off the ringer, and they couldn't get in touch with one another.
We finally pointed this out to ISD, and now I'm brokering a marriage between the simulation program and ISD, so that perhaps ISD will turn to simulation before they introduce new devices, procedures, protocols, et cetera. Peter is going to reach out to the ISD people, the information technology people, and see if we can bring simulation to the world of information technology.

Yeah, I love that example. I think it's such a classic human factors example, and you're right: just doing a couple of rounds of simulation could turn a product that people are resisting into one they really embrace, if you do it the right way. So I think that's a really good point.

Well, I want to thank Patricia again for coming out to Boston and spending some time with us, offering a real fresh perspective on such an important topic. So thank you, Patricia. Thanks, everyone.
Click "Show Transcript" to view the full transcription (63314 characters)
Comments