Let’s suppose you go to work every day at a high risk job – one where preventable deaths happen daily – and you start to notice several obvious problems with its systems of safety. The processes for training and monitoring the safety attitudes and behaviors of those on the front lines are woefully inadequate. There is no training on how to respond to catastrophic events. Alarms go off frequently and almost never represent actual emergencies, leading to widespread alarm fatigue. There is no formal or informal emphasis on teamwork. Instead, the business is just a collection of individuals who rarely speak up about their concerns and take no ownership of safety. Most departments (particularly upper management) are oblivious to the actual problems. Departments that have experienced safety problems view these events as failures to be hidden rather than learning opportunities to be shared with other individuals and groups within the organization (and certainly with no one on the outside, no matter how much society might benefit).
This business is like any other and is often tempted to cut corners on safety. From time to time, it becomes complacent about safety and instead pushes its staff for higher productivity. In light of this, laws were created to protect the public interest by making the facility’s license contingent on frequent inspections by a regulatory agency. The agency’s purpose is to serve as a check against the inevitable temptations to compromise safety. Unfortunately, they do not accomplish this goal. Their review process causes a variety of unintended consequences that aggravate the situation far more than the reviews help. Both the specific metrics used in their evaluations and their generally permissive attitude toward the industry’s ability to self-regulate are grossly misguided. Licenses are routinely granted or renewed despite unresolved safety issues, and fines and suspensions are rarely used for enforcement. In the end, those on the front line of this business, its management team, and its regulatory agency all share the same mindset: actual safety is not the priority.
Most physicians or nurses reading this description would feel it fits virtually every hospital in the US and their regulatory agency, the Joint Commission. For reasons this blog post will address, they are well justified in feeling this way. As it turns out, the above paragraph was not referring to a hospital. It was an almost verbatim description from a Presidential Commission report on the state of the Three Mile Island nuclear power plant just before its meltdown in 1979. Two years prior, equally stark conclusions were reached about the airline industry after the investigation of the jumbo jet collision at Tenerife. These painful catastrophes changed the whole safety paradigm in the 1970s. Before these two accidents, regulators focused on the technology with the belief that it could be made sufficiently safe to become “people-proof”. After intense soul searching, it became clear that it wasn’t technology that failed at Three Mile Island and Tenerife, but teamwork and communication. What made these accidents even more tragic was that the ultimate solution ended up being so simple. No new technology was required. Both the airline and nuclear power industries initiated intensive teamwork training called crew resource management (CRM) and have noted dramatic reductions in accidents over the past three decades.
Three Mile Island is an invaluable case study for hospitals. The event has the advantage of an intense and public investigation into all of its root causes. Both the Presidential Commission and a consulting group hired by the Nuclear Regulatory Commission concluded that the event was preventable but that – given all the systemic deficiencies – an accident like it seemed inevitable. While a lawyer would say that human error was the “proximate cause” of the accident, lax oversight by the Nuclear Regulatory Commission (NRC) was given the lion’s share of the actual blame. The Commission essentially described the event as a regulatory disaster – the direct result of the poor design of the regulatory system that had been put in place specifically to prevent it. More than any other, this is the lesson that hospitals should take home from studying Three Mile Island.
The main error of the NRC at that time was measuring things that had little to do with actual safety. For example, it evaluated staff training, but that training focused on operating a plant under normal circumstances rather than during emergencies, and it was not required to incorporate lessons from prior accidents at that plant or any other. The NRC’s supervision of training and licensing processes virtually ignored how to mitigate the impact of human error. Paradoxically, the list of requirements imposed on plants at the time of the accident was actually very complex. At the end of all the immense effort spent on compliance, merely satisfying the regulatory requirements was inappropriately equated with achieving safety. This was the lethal flaw, because it weakened the very real and very necessary feeling of risk and naked exposure that catalyzes a team to seek ways to improve its systems of safety. Every time the plant passed an NRC inspection that had little to do with advancing safety, mediocrity at the plant was further entrenched. Performance at Three Mile Island was framed not in terms of safety but of compliance. In the end, it was government-induced complacency that made the meltdown of its reactor inevitable.
Hospitals have a moral obligation to understand the lessons of Three Mile Island. They share similar deficiencies, starting with the same flawed oversight by their own regulatory agency, the Joint Commission (TJC). Accreditation by TJC is a prerequisite for funding by federal programs and many private insurers, and it is necessary for hospitals affiliated with a medical school to receive residency accreditation. TJC focuses their inspections on highly prescriptive rules and explicitly stated metrics. As with the NRC prior to Three Mile Island, what they choose to investigate and enforce are not the issues most important to safety but those easiest to measure and interpret. For example, their most commonly cited violation around the country is a problem with the “safe environment of care”. A violation of this metric could be due to any one of a hundred different things, including a rip in the carpet that staff could trip over, unattended clutter in the hall, unclean equipment or clinical areas, a wet floor, or an unsecured O2 tank that could fall over and cause harm. However, preventable patient deaths in hospitals are caused by the same fundamental issues that led to the accident at Three Mile Island – problems with training, communication and teamwork. I challenge those at TJC to cite a single patient death where the root cause was hallway clutter, torn carpet or a rogue O2 tank.
TJC investigates whether staff receive training, but not training in courses like CRM that have been credited with creating highly reliable organizations in other high risk industries. Instead, they focus on trivial issues such as whether staff are trained to use fire extinguishers or operate sterilization equipment in the OR. They purport to investigate staff communication – not by sitting in the OR or ICU and looking for the telltale signs of effective teamwork – but by using superficial, meaningless metrics like the “read back” of lab results or the use of only approved abbreviations. At the top of their National Patient Safety Goals is reducing the risk of hospital acquired infections, but again their metric is not based on hard evidence like compliance rates with the central line bundle (i.e. maximal barrier precautions). Instead, their metric is catching staff storing their food and drinks near patients. They don’t assess hand hygiene by directly monitoring whether staff wash their hands before entering a patient’s room; TJC instead determines whether the hospital has a “tool” in place to monitor compliance. In lieu of direct observation of staff performance, they prefer to check for documents: a list of signed-off competencies, written protocols for performance, and data from quality control measures.
The second most commonly cited violation is whether “the hospital maintains the integrity of the means of egress”. This awkward rule means that hallways must remain clear except for clinically important items that need to be near patients. It is not easy to debate what is “clinically important” with inspectors who lack a clinical background but come armed with a clipboard and an itchy pen. Instead, most hospitals avoid the debate and shift critically important equipment to new locations just prior to a site visit, only to move it back immediately afterwards. Worrying about being cited on this metric can make even a seasoned nurse focus mindlessly on passing the inspection and discount the clinical impact of shuffling around equipment that might actually be needed for patient care. More importantly, doctors and nurses both know this is a game that does nothing to promote patient safety or even the integrity of “egress”. When staff and supervisors collude in bending rules, it creates a company culture of de facto noncompliance. Rules like this one are viewed as “not invented here”. Once this lack of ownership takes hold, trying to excite the staff about other safety issues feels like getting them to wash a rented car.
The rules used by TJC suffer from something known as the lamppost problem, after the story of the man searching for his lost keys at night under a lamppost. When asked what he was doing, he admitted that he had lost his keys on the other side of the street but was looking under the lamppost because the light was better there, which made it far easier to see. There is no evidence that the current rules chosen by TJC, at least the ones they seem to investigate and cite the most, make patients any safer. They are just easier to measure. Yet hospital administrators conflate passing a Joint Commission review with running a hospital that is safe for patients.
Even worse than having no impact on patients at all is the potential for a host of unintended consequences that make things less safe. The first is TJC’s preference for rules that are rigid and hard to misinterpret. The downside of such rules is that they are more difficult to apply to clinical practice. Their rigidity increases the number of situations that count as exceptions, which spawns additional prescriptive rules and even more exceptions. Moreover, noncompliance has legal implications. As hospitals focus on strict compliance with voluminous and complex rules, administrators and lawyers with no clinical knowledge increasingly dominate the discussion. As you might imagine, ideas on how to translate regulations into clinical practice tend to be suboptimal when they come from those with no clinical background. This common practice also contradicts one of the defining characteristics of a high reliability organization: transferring authority for patient safety to clinicians and other technically savvy individuals who are likely to generate the best ideas.
Unintended consequences are a well recognized result of short-sighted regulations. For example, public concern about car safety in the 1970s led to new federal standards and the development of heavier cars. This reduced passenger injury during crashes but increased air pollution. It has been demonstrated that the ill effects of the pollution from this shift to larger cars outweighed the benefits of reduced injuries from accidents, reducing the overall societal benefit. At our hospital, nurses improved the “means of egress” outside the cardiac surgical OR that I use every day just prior to our last TJC inspection. This entailed discarding important emergency equipment that we had kept there due to the lack of alternate storage space. As luck would have it, we unexpectedly needed that equipment for a cardiac case on the very day that TJC came to Mt View for their triennial inspection (this event, by the way, is what inspired this blog post). Like the auto industry, we demonstrated that the small theoretical safety advantage of improved “egress” was outweighed by the huge practical disadvantage of not having equipment readily available when it was desperately needed to save a patient’s life.
Preventable harm is so common in US hospitals that it kills the equivalent of a jumbo jet full of patients every day (a jumbo jet carries several hundred passengers, so the analogy implies on the order of 100,000 or more preventable deaths per year, consistent with published estimates). The analogy of patients to jet passengers is eye opening but imperfect, because these patients typically don’t die right away like passengers in a crash. They often suffer over many days and weeks in the ICU, which means they are treated by staff who realize the patient’s care was flawed. Some of the staff in the OR and ICU exposed to these patients eventually become the second victims of this harm and quit their jobs or become jaded about patient safety. The slow deaths of victims of preventable harm also occur outside of the public eye. Together, these facts mean that such deaths make a more compelling actual case, but a less compelling political case, for changing the status quo. There are two key differences between patient deaths at US hospitals and the events at Three Mile Island: 1) the reactor meltdown and its subsequent investigation unfolded under intense public scrutiny and 2) no one ever died at Three Mile Island. This illustrates that public outcry is a more potent trigger than human death for inducing the paradigm shift needed to create the “Tale of Two Safety Cultures” below:
- NUCLEAR POWER: The NRC developed a “resident inspector program” to increase its knowledge of activities and potential hazards at each nuclear power plant. Resident inspectors directly watch workers in action and follow up on any safety concerns those workers raise. The program is credited with increasing the plants’ accountability for safety.
- HOSPITAL: A hospital does not see Joint Commission inspectors on site more than once every three years, and their inspections miss many of the violations that are later detected by state inspectors. The closest analogy to a resident inspector is the hospital’s director of quality, who monitors employee practices that influence patient safety. However, this position reports to hospital administration and is held responsible for passing the inspection, which heavily compromises its objectivity. For instance, the quality director at our hospital joined other staff in shuffling equipment in the halls before and after the Joint Commission site visit, which undercuts the safety value of inspections.
- NUCLEAR POWER: The NRC enforced the redesign of plants and control panels so that they operate in a more foolproof manner (e.g. automatic shutdowns).
- HOSPITAL: There is no similar effort to influence the design of hospital systems. For instance, it is generally accepted that electronic medical record systems were designed to maximize billing efficiency, not to influence patient safety in any meaningful way. TJC does not evaluate the effectiveness of EMR systems.
- NUCLEAR POWER: The underlying philosophy the NRC uses to evaluate training is whether it influences human performance in plant safety. The NRC mandates that drills of emergency scenarios be performed routinely and that operators be tested on a high fidelity simulator. Drills performed during NRC inspections are graded as part of the licensing requirement; if the facility does not pass the training exam, its license is revoked.
- HOSPITAL: There is no systematic use of simulation for rehearsing emergency drills in hospitals, nor is there any incentive to train teams in crew resource management in the OR and ICU. The presence or absence of any training in patient safety is not part of the accreditation or licensing requirements for hospitals or physicians.
- NUCLEAR POWER: NRC inspections prioritize the issues where the risk of a major catastrophic error at the power plant is greatest.
- HOSPITAL: In contrast, TJC has a very narrow definition of the sentinel events they investigate. These include high profile mishaps like wrong site surgery, retained foreign bodies, and deaths due to medication errors. However, there is no oversight of the far more common medical errors caused by problems with teamwork and/or communication. Reliable sources of this information exist (peer review evaluations, the minutes of M&M conferences, and depositions from lawsuits), but TJC chooses not to analyze these data.
- NUCLEAR POWER: The NRC collects and publicly reports all the data from its inspections as a means of improving transparency and public trust in the safety of nuclear power. Its website details hundreds of cases in which violations by the plants have resulted in license suspensions. In addition, there is a systematic effort to use problems at one plant to instruct other nuclear facilities.
- HOSPITAL: In contrast, the hospital errors and corrective action plans that result from inspections are kept secret. Medicare asked TJC to make the results of its inspections public, but TJC has resisted all efforts at transparency. Lobbyists from the American Hospital Association say the information would not be “helpful” to the public.
A fundamental rule of modern business management is that you get what you measure; what you don’t measure won’t improve. Outside of healthcare, organizations in high risk fields such as nuclear power, the military and commercial aviation have accomplished great things by choosing the right measurements and holding themselves accountable to them. They have overcome the inherent dangers and the inevitability of human error using a set of principles that have come to define the “high reliability organization” (HRO). They now perform the many complex tasks their fields require with a near zero rate of preventable error. Their long journey to becoming HROs may have been catalyzed by the public’s response to tragedy, but it was accountability to the correct measurements that made it a reality.
Effective regulations would push hospitals toward state-of-the-art methods of training, teamwork and communication that would eventually create healthcare’s first actual HRO. Ineffective regulations create yet another hospital cursed by an over-reactive compliance culture, where the minimum requirements for passing an inspection have become the standard for safety. Those at TJC have published extensively about what is required for hospitals to become HROs. Possessing this insight without acting on it makes it seem as though their leadership consciously decided that patient safety is just too difficult and expensive to achieve. They seem content to ask hospitals for mere compliance with ineffective rules. By abandoning the more fruitful albeit challenging path, TJC has thrown out more than just the baby with the bathwater. They have also thrown out the bathtub, towel, soap, and the bathroom altogether.
In defense of TJC, being the sole arbiter of patient safety in hospitals is beyond the scope of any single organization, and it is certainly outside TJC’s current core competencies. Other, more specialized regulatory bodies are better positioned to use their oversight to improve particular aspects of patient safety. For example, there is no question that FDA and Medicare regulations improved patient outcomes for both carotid stenting and transcatheter aortic valve implantation (TAVI). Almost every hospital in the country wanted access to these promising, high profile technologies. However, CMS significantly restricted their widespread use by limiting access to programs that met the strict conditions of a National Coverage Determination. These conditions included prerequisite experience and skills, completion of a thorough course of training, and a requirement to report outcomes to a registry, which in turn produced objective evidence of the invaluable impact of this oversight.
But the system doesn’t always work this way, particularly when a new surgical procedure does not involve a regulated device. For instance, left ventriculectomy, known as the Batista procedure, was introduced in the 1990s as a surgical treatment for end-stage dilated cardiomyopathy. Dr. Batista performed an initial, promising surgical series in a rural environment in South America, without long term follow-up data. Despite the lack of evidence, its early adoption in the US created considerable excitement about a potential breakthrough therapy. A prominent surgeon, Dr. Tomas Salerno, was quoted in the NY Times saying he would “stake his reputation in predicting that the Batista procedure would change the course of cardiac surgery”. Ultimately, the evidence showed that the clinical improvement it provided was not sustainable, and surgical death rates approached 80%. A decade later, guidelines from the relevant professional societies (AHA, ACC, STS) listed the ventriculectomy as a Class III procedure, indicating it is “potentially harmful”. Even then, Batista procedures continued until insurance companies designated them “not medically necessary”. After 10 years of enthusiastic support by big name cardiac surgeons, the procedure is no longer performed anywhere in the US.
In case you were worried, Dr. Salerno’s reputation remains intact despite the demise of the Batista procedure. He remains the emeritus chief of cardiac surgery at the University of Miami and was recently inducted into the Brazilian National Academy of Medicine. Similarly, neither “Girls” creator Lena Dunham nor Alec Baldwin moved to Canada after Trump won the 2016 Presidential election.
Yet at other times – as in the case of robotic CABG – a new procedure introduces a regulated device into the OR, but the FDA and CMS remain silent about the necessary prerequisites, training or registry. Like the NRC prior to Three Mile Island, they probably over-relied on self-regulation by surgeons. Without CMS mandating a structure for adopting robotics, urologists, gynecologists and general surgeons struggled early on but eventually succeeded through trial and error. Adult learning theory shows that trial and error is an excellent method for learning, but it comes at an underappreciated cost: error. Errors are needed in order to learn. That cost is acceptable for some jobs (e.g. basket weaving), but it is unacceptable for two: skydiving and cardiac surgery. Cardiac surgeons have two main ways to minimize that cost: 1) develop a strong culture of safety, which promotes rapid team learning and minimizes error, or 2) avoid new procedures altogether in order to minimize the risks of learning something new.
CABG has a mortality risk of around 2% and a risk of preventable harm that is 10-fold higher than that of prostatectomy, hysterectomy or cholecystectomy. Yet the cardiac OR has not increased its emphasis on the culture of safety relative to other fields. Instead, teams have performed the same one or two procedures every day for the past several decades using highly scripted techniques. Trial and error was not required. Adverse events still happened, and half of all operative mortality in cardiac surgery has been attributed to preventable issues (e.g. poor communication, insufficient teamwork). But these events seldom raise any red flags, because when standard methods are used it is assumed that occasional bad outcomes are just the price of doing business.
However, the story changes when attempting a complex new procedure like robotic CABG. The success of robotics in cardiac surgery over the past 10 years has been limited by the inability to mitigate the risk it adds on top of conventional cardiac surgery. The skills needed to accomplish this were lacking, and they lie mainly outside the OR. The broad category of managerial skills known as “change management” is critical for the brainstorming used to get teams to adapt to a new way of working and to prevent team members from becoming disengaged. Cardiac surgery teams have struggled for decades to avoid preventable harm even during conventional open surgery. Well before the robot was ever FDA approved, operating rooms across the country showed the hallmarks of a poor culture of safety: a vicious cycle of mistakes and unsafe events that go undisclosed and uncorrected and prompt more mistakes. Hospitals remain largely noncompliant with even the most basic safety tools like Gawande’s checklist or Pronovost’s central line bundle. In retrospect, implementing a complex innovation like robotic CABG was doomed from the start.
Robotic CABG created the perfect storm in which this poorly designed system of safety came “home to roost”. The latent risks of decades of inadequate regulation and an incomplete understanding of teamwork and communication among cardiac surgeons were realized in tragic fashion. It took 30 years for the washing machine to reach a majority of U.S. households, because homes and apartments did not initially have the pipes and drain lines required for installation. Few people bought these products until they first bought new homes equipped with this costly infrastructure. Cardiac surgeons interested in robotics did not appreciate that an adequate safety culture was their own set of proverbial pipes. Without these “pipes”, it was impossible to implement robotics safely.
Legal remedies are purportedly another way to protect patients from unsafe systems of care, but the evidence shows that this route also fails. Malpractice law was formed around the traditional view of medical errors as the product of a healthcare provider’s inadequate knowledge or skill. This conflicts with the modern systems approach, which views most errors as predictable human failings in the context of poorly designed systems. Hospital administrators, not physicians and other front line staff, design these systems: they make decisions about staffing, enforce protocols, provide training, foster (or fail to foster) a culture of safety where everyone can learn from mistakes, and so on. When a patient is harmed by unsafe care, the responsible physician, nurse or anyone else at the “sharp end” of medicine is always exposed to legal liability. Yet administrators and others at the “blunt end” who created the unsafe system of care are rarely held accountable for the same adverse event. It is a highly effective legal defense for hospitals to point out that healthcare professionals are licensed to make independent decisions, thereby distancing the institution from any influence over their negligent acts. It is actually in hospitals’ legal interest to deny that systems of care have any important impact on patient outcomes. Even in cases where a patient harmed by a system issue wins their case, it is the physician who is held accountable and reported to the National Practitioner Data Bank.
Two other issues make hospitals less legally accountable for safety than physicians. First, most patients harmed by negligence find it easier to change their physician than their hospital. This dependence on the local hospital makes patients worry that a lawsuit might alienate the staff and lead to refusals of care in the future. As a result, few patients sue even the worst hospitals. Second, it is hard to prove that a poor system of care is the “proximate cause” of an adverse event. Showing that a poor system contributed to a bad patient outcome requires satisfying two legal tests for causality – foreseeability and reasonableness – in order to prevail in court. It is difficult to prove that harm to a specific patient was a foreseeable consequence of a defect in a system, and it is equally hard to define what an unreasonable hospital looks like with respect to patient safety. It is much easier (and more psychologically satisfying for a grieving family) to blame a negligent physician.
In contrast, it is a well established modern responsibility of any employer, including hospitals, to provide, maintain and enforce a safe system of work for their employees. Extensive case law provides unambiguous guidelines for what hospitals must do to keep their employees safe. TJC focuses their inspections on issues like the environment of care and maintaining egress mainly out of concern for staff. In large part, these things are measured by TJC as a favor to hospitals, so they can correct deficits before those deficits lead to legal liability. Paradoxically, this clear accountability for employee safety makes it far safer to be an employee than a patient at a hospital.
Why hasn’t TJC lived up to its true role? When an organization fails to perform as expected, it helps to examine the motives of its leaders. TJC is led by a board of directors, the majority of whom are healthcare executives or lobbyists. In their day jobs, these people are held accountable for hospital finances and legal risks far more than for patient safety. That perspective naturally biases them against TJC using disciplinary actions or anything else that would affect CMS funding at hospitals, because that impacts their bottom line. They also understand that standards set by TJC are often used by plaintiffs’ attorneys as minimum standards of care in malpractice suits. Every time TJC raises the bar for safety, it hands liability attorneys a veritable treasure trove for negligence litigation.
A better analogy for TJC is not the NRC but Arthur Andersen, the accounting firm responsible for auditing Enron just prior to its collapse. Like Andersen, TJC’s revenue comes from fees paid by the companies they oversee ($18K per inspection plus an annual fee of up to $30K). Like Andersen, TJC must respond to growing competition in the accreditation market: the Accreditation Commission for Health Care, the Accreditation Association for Ambulatory Health Care, and URAC are newer accreditation bodies that have gained ground in healthcare. It is not good business to threaten the accreditation of a hospital. At best, it would promote the business of more lenient competitors; at worst, it eliminates your customers’ ability to pay your fees. The Wall Street Journal recently reported that TJC virtually never revokes the accreditation of poorly performing hospitals. If even a jumbo jet full of patients dying in hospitals every day from preventable harm doesn’t spur TJC into taking effective action, it is logical to believe that nothing ever will as long as they maintain their current form. Similarly, it took overt evidence of Andersen signing off on Enron’s financial statements, which showed strong profits in the quarter just prior to the company’s total collapse, to force disciplinary boards to revoke the firm’s CPA licenses. This led to Andersen taking a different form: bankruptcy.
If TJC actually wanted to prioritize patient safety, they too would change their current form. This starts with loading their board of directors with outspoken physicians, a few independent minded nurses, and others from the front lines who understand what makes patients safe. From that perspective, it would be a good thing for an underperforming hospital to be shut down every once in a while. Only a few stories of such events would be required to scare administrators into paying very close attention to an issue neglected for too long.
It is human nature to point the finger of blame elsewhere, and TJC is made up of humans. But they of all people know that vulnerable patients have been exposed to more risk than they should be because of a flawed system, and that TJC is those patients’ last line of defense (not insurance companies, the FDA, reliance on self-regulation, etc.). Every day, hospitals must make difficult choices between financial and clinical priorities. Only TJC is in a position to incentivize hospitals to make the choices that improve patient safety, even when doing so may have an adverse impact on the (short term) financial bottom line.
What would happen if TJC supervised nuclear power plants? The answer involves the well-worn story of how to boil a frog. The correct way is to place the frog in warm water and turn up the heat gradually; the frog will not realize the dire situation it is in until it is too late. In contrast, a frog dropped into boiling water immediately jumps out. We have failed to see problems with patient safety because the water has been heating gradually. If TJC were also in charge of nuclear power plants, we would eventually see another Three Mile Island and react immediately, because a meltdown is boiling water we recognize. That reaction would almost certainly spur the major changes that are needed.

First, better measurements are needed that drive at the heart of patient safety. For example, TJC should assess whether electronic health record systems were designed primarily with the goal of patient safety and demand changes from the vendors where this is not the case. They should inspect high risk areas of the hospital (OR, ICU) for evidence of effective teamwork and communication.

Second, they should improve the way measurements are obtained. Resident inspectors should remain at the hospital year round and report directly to TJC, not to hospital administration. This would stop the game of shuffling equipment around the hallways before and after a rare site inspection.

Finally, TJC should maximally leverage their findings to drive change at the hospital. At the very least, safety violations should be publicly reported so that administrators have an incentive to enact change. If no changes are made, disciplinary actions must follow, up to and including suspension of the hospital’s accreditation.

Such aggressive advocacy for patients is not likely to happen with the current makeup of TJC’s board. That is a shame, because in the long run keeping patients safe would improve their bottom line as well; the impact of safety on financial success is well documented. As lawyers are fond of saying, the fact remains true whether or not executives at hospitals or TJC believe it.