27 Feb


PeriNews, PeriGen's take on what matters in obstetrics
The “cat’s whiskers” behind a key concept of obstetrical safety

by Emily Hamilton, MD CM

No, we have not taken to caring for cats or declining Latin nouns. Yes, we are “over the top” on the concepts behind these acronyms and their contributions to obstetrical safety. MEWS,1 MEWT2 and MEOWS3 refer to examples of Maternal Early Warning Systems that operate in addition to usual clinical care. They have been proposed to facilitate timely recognition, diagnosis and treatment of patients developing critical illness. This assistance is relevant today because delayed recognition and intervention remain contributory factors in about half of births involving maternal death or neonatal encephalopathy.4,5

MEWS, MEWT and MEOWS are scoring systems based on a short checklist of key criteria, primarily vital signs. Some variants add factors such as persistence, level of consciousness, urine output, lab tests or fetal condition. Some use color coding or weighting for different levels of abnormality. Important characteristics of successful warning systems are:

  1. Assessments are routine and repeated periodically, e.g., on admission and at frequent intervals thereafter.
  2. Specific scores are tied to specific actions, e.g., physician assessment, calling a rapid response team, or initiating a specific investigation or therapy.
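Purely as an illustration of these two characteristics, the sketch below encodes a generic tiered warning score. The vital-sign bands, weights, thresholds and actions are all hypothetical examples invented for this sketch; they are not taken from MEWS, MEWT, MEOWS or any published system.

```python
# Illustrative sketch of a tiered early-warning scoring loop.
# All bands, weights, thresholds and actions below are HYPOTHETICAL,
# not values from MEWS, MEWT, MEOWS or any validated tool.

def score_vital(value, bands):
    """Return the weight of the first band whose range contains value."""
    for low, high, weight in bands:
        if low <= value < high:
            return weight
    return 0

# Each vital sign maps to (low, high, weight) bands: higher weight = more abnormal.
HYPOTHETICAL_BANDS = {
    "heart_rate":  [(0, 40, 3), (40, 50, 1), (50, 100, 0), (100, 120, 1), (120, 999, 3)],
    "systolic_bp": [(0, 80, 3), (80, 90, 1), (90, 140, 0), (140, 160, 1), (160, 999, 3)],
    "resp_rate":   [(0, 8, 3), (8, 12, 1), (12, 20, 0), (20, 25, 1), (25, 99, 3)],
}

def assess(vitals):
    """Characteristic 1: repeat this on every periodic assessment.
    Characteristic 2: the total score maps to a specific action."""
    total = sum(score_vital(vitals[name], bands)
                for name, bands in HYPOTHETICAL_BANDS.items()
                if name in vitals)
    if total >= 6:
        action = "call rapid response team"
    elif total >= 3:
        action = "request physician assessment"
    else:
        action = "continue routine periodic reassessment"
    return total, action

total, action = assess({"heart_rate": 125, "systolic_bp": 85, "resp_rate": 22})
print(total, action)  # 3 + 1 + 1 = 5 -> "request physician assessment"
```

The key design point, mirrored from the list above, is that the score is never an end in itself: each tier is bound to a concrete response.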

In obstetrics, early warning scores at ICU admission were strongly related to maternal mortality.6 A large before-and-after study in the US found significant reductions in severe maternal morbidity at hospitals where an early warning system was implemented, and no change in these rates at control hospitals without one.7 A survey of 130 maternity units in the UK in 2013 indicated that 100% had implemented a formal early warning system, up from 19% in 2007.8

We are still learning the impact of Early Warning Systems in obstetrics and how to optimize them to improve efficiency. Nursing effort and alarm fatigue are important considerations.

But wait … software could reduce that nursing effort by using information already recorded in the electronic medical record. In addition, it could track how often and how early a particular trigger appears. We could use statistical techniques to determine exactly which clusters of parameters are most useful.

Stay tuned … the best is yet to come. Look for more information on this topic later this year.

1 Mhyre JM, DʼOria R, Hameed AB, Lappen JR, Holley SL, Hunter SK, Jones RL, King JC, DʼAlton ME. The maternal early warning criteria: a proposal from the national partnership for maternal safety. Obstet Gynecol. 2014 Oct;124(4):782-6.

2 Hedriana HL, Wiesner S, Downs BG, Pelletreau B, Shields LE.
Baseline assessment of a hospital-specific early warning trigger system for reducing maternal morbidity. Int J Gynaecol Obstet. 2016 Mar;132(3):337-41.

3 Singh S, McGlennan A, England A, Simons R. A validation study of the CEMACH recommended modified early obstetric warning system (MEOWS). Anaesthesia. 2012 Jan;67(1):12-8.

4 Main EK, McCain CL, Morton CH, Holtby S, Lawton ES. Pregnancy-related mortality in California: causes, characteristics, and improvement opportunities. Obstet Gynecol. 2015 Apr;125(4):938-47.

5 Sadler LC, Farquhar CM, Masson VL, Battin MR. Contributory factors and potentially avoidable neonatal encephalopathy associated with perinatal asphyxia. Am J Obstet Gynecol. 2016 Jun;214(6):747.e1-8.

6 Carle C, Alexander P, Columb M, Johal J. Design and internal validation of an obstetric early warning score: secondary analysis of the Intensive Care National Audit and Research Centre Case Mix Programme database. Anaesthesia. 2013 Apr;68(4):354-67.

7 Shields LE, Wiesner S, Klein C, Pelletreau B, Hedriana HL. Use of Maternal Early Warning Trigger tool reduces maternal morbidity. Am J Obstet Gynecol. 2016 Apr;214(4):527.e1-6.

8 Isaacs RA, Wee MYK, Bick DE, et al. A national survey of obstetric early warning systems in the United Kingdom: five years on. Anaesthesia. 2014;69:687-9.

27 Sep

My Post-Summer Research Reading List

In The myths and physiology surrounding intrapartum decelerations: the critical role of the peripheral chemoreflex, published in the Journal of Physiology, Lear et al have written a highly readable and methodical analysis of current evidence about the mechanisms of fetal heart rate decelerations. It is a must-read for anyone seriously using fetal monitoring. The authors challenge long-held tenets and present a simplified, coherent approach to the interpretation of heart rate monitoring. Here are two excerpts that may compel you to read further:

“… Despite multiple detailed analyses, there is no consistent FHR marker of fetal compromise …”

“… We believe that it is better to focus on the frequency, depth and total duration of decelerations during labor rather than on timing, shape or supposed aetiology of the specific deceleration.”

In an article published recently in the American Journal of Obstetrics & Gynecology titled Triggers, bundles, protocols, and checklists — what every maternal care provider needs to know, Arora et al define and provide examples of various methods to standardize and streamline clinical care, and they summarize the evidence supporting their association with improved outcomes. Many examples are provided for obstetrical issues such as hemorrhage, hypertension, oxytocin usage and preoperative preparation. With burgeoning evidence from diverse medical and non-medical domains, the question is no longer “Do these methods work?” but rather “How can we get wider adoption and sustain compliance?” In short, how do we actually change established clinical beliefs and behaviors? This article is less informative on these practical issues.

There is abundant data about effective ways to change behavior. In short, behaviors will not change without aligning a critical mass of influential factors. Determination and good intentions alone are insufficient, and depending upon them alone is destined to fail. To supplement this review of available obstetrical safety packages, we strongly recommend the book Influencer: The New Science of Leading Change, Second Edition by David Maxfield, Ron McMillan and Al Switzler. (Click here to see a summary of this excellent work)

03 Jun

Watson and His AI Cousins Will Always Need Humans: Is the Converse True?

by Emily Hamilton
Senior Vice President, PeriGen

Most of us will easily concede that computers are better at number crunching than humans. How many of us, even in our prime, can quickly complete the dreaded serial sevens test (counting down from one hundred by sevens, a clinical test of mental status)?

As for higher-level functions like reasoning, clinical judgment, strategic planning, creativity and empathy, surely these are better achieved by humans. Well, yes, but maybe not always.

This year Google’s AlphaGo defeated a human champion at the ancient game of Go, not by brute force (calculating the best of every possible move at each turn) but by using deep neural networks to learn successful and efficient strategies. AlphaGo learned its strategies by playing the game. With modern computational capacity, AlphaGo was able to play more games in a day than a human could play in a decade. Furthermore, it could remember that experience!

Go is not medicine. What does the evidence show in medicine?

In 1954 the acclaimed psychologist Paul E. Meehl began a debate that would last more than half a century when he compared the accuracy of clinical versus statistical methods for predicting patient condition.(1) His analysis, described in the book Clinical versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence, concluded that statistical methods (e.g., explicit equations, actuarial tables, defined algorithmic prediction) outperformed clinical methods (e.g., subjective, informal reasoning and clinical intuition).

Later, in 2000, Grove et al published a comprehensive analysis of relevant publications on man-versus-machine methods.(2) Their meta-analysis included 136 published reports and compared the performance of clinical and statistical methods in a wide variety of domains. Their results confirmed the findings of Meehl: statistical methods outperformed clinical methods again.

They reported that the better performance of statistical methods held across subject matter (medical, mental health, forensic, academic performance), although the advantage was greatest in the forensic domains. The level of clinician experience did not make a difference, even when the statistical methods were compared to the best performing clinician(s). Superior results were not entirely uniform. In about half of the studies the difference was small and the clinical methods performed approximately the same as the statistical methods. In about one third, the statistical methods substantially outperformed clinicians, especially when clinical interviews were involved; that is, detection rates were higher by about 10% or more for predictions of intermediate accuracy. In a small minority, 6% of the studies, the clinical methods were better.

In 2006, Hilton et al reported similar findings and noted a widening gap between statistical and clinical methods when reviewing 66 years of research on the prediction of violence.(3) Reports in the current medical literature differ somewhat. A recent review by Sanders et al showed more equivalence between clinical methods and statistical prediction using a wider variety of assessment measures. Only 31 studies met their inclusion criteria, highlighting both the relative scarcity of complex statistical techniques in clinical use and the scientific inadequacy of the comparison methods.(4)

There are many reasons to believe that clinical judgment is better today than in previous eras

Our basic understanding of disease has improved. We have better laboratory tests, higher standards for medical evidence and easier access to information. Indeed, one could argue that the clinician today has better access and better information than many years ago, when there were few genetic markers, biomarkers and environmental conditions to consider. In fact, we may have too much information. The very same mental processes that are essential to “size up” a situation efficiently in the face of so much information can also, on occasion, result in erroneous decisions.

Two well-established psychological phenomena bear special mention in any discussion of medical error. Availability bias refers to the way recent events or vivid anecdotes form strong and highly influential memories that can distort our perception of the real incidence or usual consequences of specific scenarios. Tunnel vision refers to the tendency to perceive and confirm information that aligns with a particular viewpoint. It includes framing bias, the tendency to create a coherent interpretation without examining all available information, and confirmation bias, which refers to seeking only the information that supports a particular opinion. Finally, too much information can actually obscure critical information. These biases and the burden of too much information are not so problematic for statistical methods.

Pitting clinical methods against computer-based methods is unrealistic. “Medical reasoning” and “statistical algorithms” are both derived from real clinical data. Moreover, clinicians incorporate statistical methods unconsciously when reasoning. They consider the background incidence of the condition and typical constellations of signs and symptoms, and they weigh the pros and cons of potential diagnoses and treatments. Many clinicians know and use scoring systems, which are essentially simplified statistical weighting methods. Statistics is but a formalized mathematical way to analyze real data and then summarize it succinctly to help us make inferences. Thus one would expect the performance of clinical and statistical methods to converge.
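To make that point concrete, the sketch below shows how the same set of weights can act both as a formal statistical model (a logistic equation) and, rounded to integer points, as a familiar bedside scoring system. Every weight, finding name and number here is invented for illustration; none comes from a validated clinical prediction rule.

```python
import math

# Hypothetical illustration: a bedside score is a simplified statistical model.
# The findings, weights and intercept below are INVENTED, not from any
# validated clinical prediction rule.

WEIGHTS = {"abnormal_vitals": 1.2, "prior_history": 0.8, "lab_flag": 1.5}
INTERCEPT = -3.0

def risk_probability(findings):
    """Logistic model: weighted sum of binary findings -> probability."""
    z = INTERCEPT + sum(WEIGHTS[f] for f in findings if f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def bedside_score(findings):
    """The same weights rounded to integer points -- a 'scoring system'."""
    return sum(round(WEIGHTS[f]) for f in findings if f in WEIGHTS)

findings = {"abnormal_vitals", "lab_flag"}
print(bedside_score(findings))           # integer points from rounded weights
print(round(risk_probability(findings), 3))  # the underlying probability
```

The bedside score is just the logistic model with its coefficients rounded for mental arithmetic, which is why a clinician tallying points and a computer evaluating the equation tend toward the same ranking of patients.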

Mark Twain is often credited with writing, “Facts are stubborn things, statistics are more pliable.” But in this context, clinicians are more pliable. Clinicians can obtain and integrate information from additional sources, see exceptions to the rules, factor in patient fears and desires, and even make do with missing data. Clinicians communicate with patients, reason and have empathy. However, occasionally they get tired, take risky shortcuts and must deal with competing interests. In contrast, statistical facts are stubborn things, not subject to the effects of fatigue or recent experience. At present they are neither very communicative nor empathetic. Robotic companions for seniors may change our opinion.

The strengths of human and statistical methods are complementary

Objective, unbiased statistical methods help to counter the potential for human bias, reduce information overload and help the seasoned clinician make more confident decisions. The idea of a clear division between clinical reasoning and statistical methods is becoming increasingly blurred. The good news is that the best is yet to come, and it will probably arrive on your phone.

  1. Meehl PE. Clinical versus statistical prediction: a theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press; 1954.
  2. Grove WM, Zald DH, Lebow BS, Snitz BE, Nelson C. Clinical versus mechanical prediction: a meta-analysis. Psychol Assess. 2000;12(1):19-30.
  3. Hilton NZ, Harris GT, Rice ME. Sixty-six years of research on the clinical versus actuarial prediction of violence. The Counseling Psychologist. 2006;34(3):400-409.
  4. Sanders S, Doust J, Glasziou P. A systematic review of studies comparing diagnostic clinical prediction rules with clinical judgment. PLoS One. 2015 Jun 3;10(6):e0128233.
  5. Lee YH, Bang H, Kim DJ. How to establish clinical prediction models. Endocrinol Metab (Seoul). 2016 Mar;31(1):38-44.