
Friday, February 07, 2020

Experimenting to Improve Sleep Quality

comments on: Can a Humidifier Help You Sleep Better and Snore Less?

After doing some research I learned that humidifiers have helped folks snore less. So, after some more research, I picked up a slick little ultrasonic humidifier and gave it a try. Now, it’s been less than a week which I know isn’t enough to get too excited about statistically speaking. But one thing is becoming crystal clear…it’s most definitely helping me sleep better.

Interesting post, which includes control charts showing the impressive progress.

"I’m also still trying to figure out what caused the three special cause signals in January." One nice aspect of improvement is sometimes you can make a system improvement that even without knowing the causes of previous problems, the new improvement stops those from happening again. Maybe that won't be the case this time but maybe it will. Health related issues are so touchy that I could imagine it is something like a couple bad factors stacked on top just push things over the limit. So being a bit tired and say too low humidity and you didn't drink quite enough liquid and sleep quality is bad but just 1 or 2 of those and it might be a bit worse but not horrible.

Special cause signals will be more frequent if several factors amplify each other (and the factors rarely happen together, so those amplified results are rare). Each factor acting alone produces results inside the regular variation of the system. But when all that variation lines up just right, the factors acting together create a very large change in the result, and that result falls outside what is normal: a special cause signal.
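For anyone who wants to chart their own sleep data the same way, here is a minimal sketch (in Python) of the individuals (XmR) control chart calculation behind this kind of analysis. The sleep scores are invented for illustration, not taken from the post.

# Minimal XmR (individuals) control chart sketch.
# The sleep_scores values are invented illustrative data.
sleep_scores = [70, 71, 70, 72, 71, 70, 71, 72, 70, 80, 71, 70]

# Moving ranges: absolute difference between consecutive points.
moving_ranges = [abs(b - a) for a, b in zip(sleep_scores, sleep_scores[1:])]

mean = sum(sleep_scores) / len(sleep_scores)
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: mean +/- 2.66 times the average moving range.
ucl = mean + 2.66 * mr_bar
lcl = mean - 2.66 * mr_bar
print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")

# A point outside the limits is a special cause signal worth investigating
# right away; points inside are routine variation from the system itself.
for day, score in enumerate(sleep_scores, start=1):
    if score > ucl or score < lcl:
        print(f"day {day}: {score} -> special cause signal")

With this made-up data the chart flags day 10 (score 80) as a special cause signal; everything else is treated as routine variation of the system.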

Related: Gadgets to Mask Noise and Help You Sleep or Concentrate - Apply Management Improvement Principles to Your Situation - Zeo Personal Sleep Manager - Using Control Chart to Understand Free Throw Shooting Results

Saturday, September 03, 2016

How to Improve at Understanding Variation and Using Data to Improve

My comments based on a question on, How to Use Data and Avoid Being Mislead by Data:

Thanks for this post John. This is the part of Deming’s teaching that I often struggle with (understanding variation). I read Wheeler’s book Understanding Variation and it helped me with the concept, but I am challenged trying to apply it where I work. I often am not sure what to measure and if I do, I’m not sure how to measure it. Folks appreciate my burn down charts showing trends, but this is about the best I’ve been able to do. Do you have any recommendations on where I can look to help me get better at this?

Getting better at using data is a bit tricky, so struggling is fairly common.
Probably the easiest thing to do is to stop reacting to normal variation (caused by the system) as if it were special. This isn’t super easy but it is the easiest step. And it does make a big difference even if it doesn’t seem very exciting.

Actually using data properly provides big benefits but is much trickier. Don Wheeler’s book is a great start. Making predictions and evaluating how those predictions turn out is also valuable, and doing so will often (though not always) spur you to collect data. This process of predicting, figuring out what data will help you make and evaluate the prediction, and then considering how the predictions turn out overall can help.
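To make that concrete, here is a minimal sketch of a prediction log in Python; the entries and fields are invented for illustration.

# Minimal prediction log sketch: record each prediction along with the
# actual result once it is known, then review how far off you were.
# The example entries are invented for illustration.
predictions = [
    # (what was predicted, predicted value, actual value)
    ("cycle time after new checklist (days)", 4.0, 4.3),
    ("support tickets next month", 120.0, 141.0),
    ("defect rate after fixture change (%)", 1.5, 1.4),
]

for claim, predicted, actual in predictions:
    print(f"{claim}: predicted {predicted}, got {actual} "
          f"(off by {actual - predicted:+.2f})")

Even something this simple forces the useful questions: what data would tell me if I was right, and how far off am I, overall, across predictions?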

You learn what data is often useful, you experiment with real data and real processes and you learn what needs to improve. If you are at least somewhat close to using data well then just doing it and learning from your experience is very useful. If you are really far off the experience might not help any 🙁
I think the links in the post above provide some useful tips (as do the links within the posts they link to…).

More: Measurement and Data Collection


If you don’t have an answer for how you will use the data, once you get it, then you probably shouldn’t waste resources collecting it (and I find there is frequently no plan for using the results).

It isn’t uncommon that the measures you would like to have are just not realistically available or are hard to determine. How to get started is one of the trickier pieces, in my experience. It is a place where consultants may be very helpful. If that isn’t an option, another possibility is to ask others at your workplace for ideas for metrics (there are issues with this; a big one is that many metrics are more likely to lead you astray than to actually help).

This can also be an area where seeing what others are using is helpful. Because it is hard to think up great metrics, seeing what others are doing may provide insight. Of course, the ideas must be evaluated for whether they would work for you. Even if they are right for others they may not be right for you, and many are not really right for others either: it is just a thing they measure, and while they have associated it with good things, maybe they are wrong (correlation but not causation).

Monday, March 14, 2016

William G. Hunter Award (nomination deadline June 30th)

William G. Hunter Award
Nomination Deadline: June 30

Criteria for Selection - The William G. Hunter Award is presented annually in order to encourage the creative development and application of statistical techniques to problem-solving in the quality field. Named in honor of the Statistics Division’s founding chairman, the award recognizes that person (or persons) whose actions most closely mirror Bill Hunter’s strengths, which were as:

  • A Communicator
  • A Consultant
  • An Educator (especially for practitioners)
  • An Innovator
  • An Integrator (of statistics with other disciplines) and
  • An Implementor (who obtained results)

Download Award Criteria and Nomination Form (DOC)

Past awardees include: Gerald Hahn, Brian Joiner, Soren Bisgaard, Christine Anderson-Cook and Bill Hill.

Monday, December 01, 2014

Data Must be Understood to Intelligently Use Evidence Based Thinking

All metrics are wrong, but some are useful
Metrics might tell you something about the world in a quantified way, but for the how and why we need models and theories … [how] metrics are generated must be open and transparent to make gaming of the system more difficult, and to expose the biases that are inherent in humanly created data
True, understanding the proxy nature of data (and how well or questionably the proxy fits) is important.

Data can't lie but we often make it easy for others to mislead us when we don't understand (or question) what the data really means (what operational definitions were used in the collection, etc.).

Related: Operational Definitions and Data Collection - Actionable Metrics

Monday, November 10, 2014

Data on Medical Errors

How Many Die From Medical Mistakes in U.S. Hospitals?
In 1999, the Institute of Medicine published the famous “To Err Is Human” report, which dropped a bombshell on the medical community by reporting that up to 98,000 people a year die because of mistakes in hospitals. The number was initially disputed, but is now widely accepted by doctors and hospital officials — and quoted ubiquitously in the media.

In 2010, the Office of Inspector General for Health and Human Services said that bad hospital care contributed to the deaths of 180,000 patients in Medicare alone in a given year.

Now comes a study in the current issue of the Journal of Patient Safety that says the numbers may be much higher — between 210,000 and 440,000 patients each year who go to the hospital for care suffer some type of preventable harm that contributes to their death, the study says.

That would make medical errors the third-leading cause of death in America, behind heart disease, which is the first, and cancer, which is second.
I wish these reports would provide some detail on what the numbers really mean. Is it 250,000 people that were completely healthy, came in for a physical, and died, when they would have stayed healthy if the medical system didn't exist? I doubt it. Is it 2,000 completely healthy people plus 248,000 people that were under intensive medical care for years keeping them alive until we slipped up and they died? Probably not either, but my guess is it is closer to the second.

Preventing errors is obviously important. And in health care it is very important, of course. But just because you use data doesn't mean it isn't misleading. Medical errors leading to death is just too big an operational definition to be very meaningful in my opinion. For these numbers to provide much insight I really think they need to be segmented more:

  • perfectly healthy person that was going to be perfectly healthy for decades but was killed by medical error.
  • person that needed life saving care or they were going to die that month, where we routinely should be able to provide the very easy care to make them perfectly healthy again, but they were killed.
  • etc…
  • person that was extremely sick with many problems for years and was saved with medical care over and over again. Complex care was needed and much of it was done well, but in the very challenging situation there was a mistake and that mistake was the proximate cause of death.
We should be working on making everything better and eliminating medical errors that cause damage and death. But there are huge differences between the conditions under which a medical error caused death, and to me those differences are so huge that lumping them together is hardly useful.

My understanding is we do use risk based assessment to compare things like survival rates for operations at different hospitals. A figure comparing survival rates at two hospitals, when one was the hospital where all the most difficult cases in the western USA were sent and the other was a local hospital that dealt with the easy operations for that health issue, would not be very useful. So they try to adjust for the severity of the problem (as I understand it). It seems to me a similar adjustment would make medical error death rates much more useful. Was the person in such a risky state that the tiniest misstep (error or whatever else) would kill them, or was it someone who was perfectly healthy and died immediately due to an error?

Related: Errors in Thinking - Epidemic of Diagnoses - Health Care Crisis - Great Visual Instruction Example (taking pills)

Response to comment on my comment

Right, I think the thing I am getting at is there is a big difference between making a mistake in a complex situation, where any of 30 bad decisions under pressure could result in death (and where doing nothing results in death), versus a situation where there is no risk of death until someone does an absolutely idiotic thing that turns my visit for a physical into a death.

Everything should be constantly improved and made safer with mistake-proofing thinking… And healthcare needs this more than almost anything, due to the dangers and consequences involved.

When there are headlines like 100,000 deaths due to medical error every year, that reads to me like John was walking along the street and boom, a medical-error piano dropped on his head and killed him. But I don't believe that is true. I do bet there are lots of deaths due to just unforgivable errors: someone is given a drug that numerous sensible indications showed would kill them due to an allergy, but they were given it anyway and died.

But I don't trust how much of the deaths attributed to error are really 20% error, 19% cancer, 18% diabetes, 17% cardiovascular disease, 16% long term high level use of powerful drugs ravaging the body, 10% car accident (where someone else might say the same death is 5% error, 30% cancer, 25%…). The error is still bad, and the system needs to be improved to reduce the frequency and consequences of errors. But I just don't know how to take the error-to-death data without much more explanation.

In reading more details on the studies, the authors acknowledge this issue with the data, but I haven't found where they provide more meaningful data. What I read just talks about the contributing nature of "blame" on medical error, etc.

Tuesday, September 03, 2013

Early "Lean" Thinking

"There are some who criticize the 'early days' of the Lean movement as being too focused on tools. But, I’ve re-read a lot of the early material and this is not the case." - Mark Graban

Exactly right. It seems to me the tool focus arrived when the first "lean manufacturing" fad wave hit and lots of people (that didn't study and learn what it was really about) quickly churned out their oversimplified "lean manufacturing" cookbook tool approach. That is when the tool approach took off: it is easy to train people on tools, so that has always been a popular way to sell services to companies. It is really just putting new tools into the existing management system instead of adopting new management thinking, which is what the people that actually studied "lean" were doing and talking about. The tools can be helpful, but it is a very limited approach to "lean" (if you can even call it that; really it should be called using a couple of lean manufacturing management tools). The initial people who studied Toyota, and other companies mainly in Japan, understood it was a different way to manage, not just a couple of tools.

But it was hard to figure out how to actually do it (getting management to improve is hard; it is easy to sell management some training that will "make workers better"). It was easy to offer training in setting up QC circles and how to use various tools, so much of that happened. The biggest change in selling lean training is that you no longer see people selling QC circle training; they now sell other tools.

Here are some early reports (so early they preceded the widespread use of the lean term). It also means the focus hadn't already been set by The Machine that Changed the World, but it is the same stuff that those who studied in 1980, 1990, 2000 or 2013 saw: it is more about respect for people and using everyone's brain than any specific tool. These articles also have a bit more focus on using statistics and data than much of the lean literature today (partially because George Box and Dad were statisticians and partially, in my opinion, because current lean literature is light on using data).

Peter Scholtes report on first trip to Japan, 1986

Managing Our Way to Economic Success: Two Untapped Resources - potential information and employee creativity by William G. Hunter, 1986

How to Apply Japanese Company-Wide Quality Control in Other Countries by Kaoru Ishikawa. (November 1986).

Eliminating Complexity from Work: Improving Productivity by Enhancing Quality by F. Timothy Fuller, 1986

On Quality Practice in Japan by George Box, Raghu Kackar, Vijay Nair, Madhav Phadke, Anne Shoemaker, and C.F. Jeff Wu. (December 1987).

The early lean stuff was much like what is discussed there (though these were before the "lean" term had taken hold). These were all first published as reports at the University of Wisconsin - Madison Center for Quality and Productivity Improvement founded by my father and George Box.

While the format of the documents may be a bit annoying, thankfully they are actually available, unlike so many articles supposedly meant to stimulate better management practices (look at major "associations" that hide articles behind paywalls, preventing the articles from doing much good).

Related: Management Improvement History (2004 post) - Early History Of Management Improvement Online (2007) - Transforming With Lean (2007) "Successful management improvement is not about mindlessly applying quality/lean tools." - "The tools are very helpful but the change in mindset is critical. Without the change in the way business is viewed the tools may be able to help but often can prove of limited value." (2006) - Lean Thinking and Management (2006) - From lean tools to lean management by Jim Womack, 2006 - I would link to the original article but it is gone :-(

Monday, April 08, 2013

Remembering George E.P. Box

George Box passed away last week after a long (1919 - 2013), rewarding and productive life. His obituary ends with "a last message from George" (quoting Cole Porter's song, Experiment).
“Experiment! Make it your motto day and night. Experiment, And it will lead you to the light …Be Curious, …Get Furious… Experiment, And you’ll see!”
The full text of the song is quoted in Statistics for Experimenters (a book by George, my father and Stu Hunter on using design of experiments to improve). The song is included in the De-Lovely soundtrack.

If you want to honor the memory of George, contributions could be made to

  UW Foundation - George Box Endowment Fund (link to donate - include George Box Endowment Fund in the box for instructions) US Bank Lock Box 78807, Milwaukee, WI 53278. This fund was started some years ago with the intention of assisting graduate students. It is a permanent endowment fund, so contributions to the fund are added to the principal and the annual earnings of the fund are used to support the fund purpose. The purpose of the fund is to support activities of the Statistics Department with a primary (but not exclusive) focus on activities of direct benefit to graduate students. Recipients will be selected by the Department faculty (or their designates) with input from Departmental graduate students.

  Agrace HospiceCare (link for donating online), 5395 E. Cheryl Parkway Madison, WI 53711.

Sunday, March 11, 2012

The Potential Benefits, Risks and Folly of Stretch Goals

Some excerpts from, The Folly of "Stretch Goals", visit the link to see the full discussion:
Jon Miller: Stretch goals are fine, but gaming the system, sandbagging, achieving the stretch goals through heroic effort, etc. are bad because this is not sustainable. In terms of excessive risk taking, this is a question of the risk-reward calculus and the person’s degree of risk aversion. It doesn’t take a stretch goal to make Enron leaders cheat when their auditors are turning a blind eye. They stole because they could, not because a leader set stretch goals for them. If the governance around the goals are solid and the downside of risk are significant, people will pursue stretch goals in a way that is not destructive. ... 
Dan Markovitz: However, if you had the opportunity to make a HUGE bonus — millions or tens of millions of dollars — for achieving certain stretch sales targets in China, for example, you might be sorely tempted to act differently... 
Jon Miller: But my point was that cheating is not caused by stretch goals, it is caused by poor governance around the performance and rewards process... The more interesting question is why leaders continue to set up such systems. Are they stupid? Evil? Or do such systems produce results?
I think stretch goals are fine when people understand what they are: they give scope to the effort. If I want breakthrough improvement quickly, it may mean considering radical solutions. That can be helpful to shape people's vision. But there are risks. As Brian Joiner said, there are 3 ways to improve the figures ("results"): distort the data, distort the system, or improve the system.
Improving the system is far more difficult than the first 2. Cheating can be encouraged by managers, and stretch goals can increase this encouragement. A culture that pushes the right values and discourages the wrong ones can discourage cheating. Understanding variation is very helpful: it dramatically reduces silly reactions to variation, and the fear of those silly reactions often causes people to cheat (distort the figures or the system).

An understanding that data is only a proxy for the real situation (the number is not the real situation) is helpful, as is understanding that the arbitrary goal is essentially meaningless: it exists to give scope to efforts, not to be met. A 67.3% improvement when 75% improvement was the "goal" is not failure; an understanding of variation would assure this mistake was not made.

The problem is many organizations are ruled by spreadsheet managers that don't understand variation and are ruled by the tyranny of arbitrary targets (bonuses and promotions)… In these situations goals do often become a big part of the reason for cheating. Stretch goals can help shape the effort. The risk (and much more common result, I think) is that they result in distortions of the system and the data to achieve those results.

To answer Jon's question, I think you can use goals and incentives to reach numerical targets. The risk, as Gipsie Ranney says, is that the organization may be ruined in the long term. But if the executives are fearful and have large enough incentives to achieve numerical targets, the goals and targets can reach the numbers, though at a great cost, I believe. I believe such leaders are more ignorant than evil (though some know the damage they are risking or causing).

A strong management system reduces much of the potential negative consequences of targets. A big problem is those organizations that most rely on targets are those that are least protected from the risks of using them.

Related: The Defect Black Market - Targets Distorting the System - The Problem with Targets

Thursday, October 06, 2011

Lying with Statistics


Response to: Great example of "Lying with Statistics"

My view is closer to Rip's. Deceiving people is not alleviated by being "truthful" but misleading. As with many things, where you draw the border is often challenging. I do like putting the claim of lying on a person, not on data. Data can be wrong. It can't lie. People can lie. People can also mislead. And very often people can be misled (by those intending to mislead them, and by those that failed to understand the data in the first place and then used the data in a faulty way to support their mistaken notion).

Those of us reading the messages in this group (a statistics group on LinkedIn) are not likely to fall into the being-misled camp often. But my experience is that is by far the biggest problem: people lacking numeracy and being misled all the time due to their lack of understanding (whether the misleading is intentional or due to ignorance, theirs or that of the person presenting the information).

Related: Bigger Impact: 15 to 18 mpg or 50 to 100 mpg? - Understanding Data - Preaching False Ideas to Men Known to be Idiots

Friday, August 05, 2011

Experimenting to Discover

Causal Reasoning in Science: Don’t Dismiss Correlations (the broken link was removed)
Box, Hunter, and Hunter were/are theorists, in the sense that they don’t do experiments (or even collect data) themselves.
...
Science is about increasing certainty — about learning. You can learn from any observation, as distasteful as that may be to evidence snobs. By saying that experiments are “necessary” to find out something, Box et al. said the opposite of you can learn from any observation.
William Hunter was my father. He did many experiments. George Box did many experiments. You are entitled to your opinions, obviously, but the claim that they only dealt with other people's data is not accurate. It is true they were world renowned experts on experimenting and had many people consult them about their experiments: for help designing them, analyzing them, deciding what to do next, improving the process of experimentation in their organization, etc. While the post seems to imply that such consultation is a reason to distrust their thoughts on experimentation, I hardly think that is a sensible conclusion to draw. Most of those they helped were running experiments in industry, to improve results (not to publish papers).

They were, and are, applied statisticians (and though I am obviously biased, I think many would agree, 2 of the most accomplished in that field in the 20th century). What experiments need to be done is critical for an applied statistician. What matters is making improvement in real world processes. If you don't run the right experiments, you won't learn things to help you improve.

They worked quite a bit on the problem of where to focus in order to learn. One significant part of their belief was to have those involved in the work do the thinking about what needed to be improved. This isn't tremendously radical today, but in the past many people thought "workers" should do what the college graduates in their office at headquarters told them to do. Here is one of many such examples, from Managing Our Way to Economic Success by William Hunter:

The key is that employees at all levels must have appropriate technical tools so that they can do the following things:

- recognize when a problem has arisen or an opportunity for improvement exists,
- collect relevant data,
- analyze the situation,
- determine whose responsibility it is to take further action,
- solve the problem or refer it to someone more appropriate...


I don't have the book in front of me, but doesn't it start with an example of learning where you use inductive reasoning: from the facts that you see you can draw conclusions and construct a theory that fits the facts? If so, that calls into question the claim that they said the "opposite of you can learn from any observation." They understood you can use inductive reasoning to create theories. You then use experiments to test theories.

The book is called Statistics for Experimenters, right? Not statistics for drawing conclusions when not doing experiments. When you are experimenting you can test whether beliefs you have are accurate, and you can learn about things you try. Smart people can make guesses about what will happen and be right. I know the authors believed those knowledgeable about the system in question are well suited to determine what variables to test. It is that knowledge that leads to experiments that are likely to be effective.

The authors of the book were trying to help those that often failed to learn as much from experiments as they could. Far too many people still don't use the most effective statistical tools when experimenting.

They emphasized, consistently, the need for those doing the work to be involved in the experiments. The job of statisticians was to help in the cases where advanced statistical tools and knowledge would be useful. The reason for involving those who do the work (who are familiar with the process) is that they have knowledge to bring to deciding what should be tried in experiments.

When I read through The Scientific Context of Quality Improvement, 1987 by George Box and Soren Bisgaard it seems to me it discusses the types of issues you raise: how do we learn without experimenting? I am not sure if it is just me, or if it clearly addresses that issue. Here is another, Statistics as a Catalyst to Learning by Scientific Method by George E. P. Box. And another, Statistics for Discovery.

There are many other sources, I am sure. They understood the importance of learning as much as you could from available sources. They just also understood the importance of experiments and of learning the most you could from experiments. And the book, Statistics for Experimenters, focused on the most effective ways to improve, using statistics to learn from experiments.

Here is what Box said, in his own words, about the objective (and it isn't proving the hypothesis):

[Too many people] "can’t really get the fact that it’s not about proving a theorem, it’s about being curious about things. There aren’t enough people who will apply [DOE] as a way of finding things out"


Statistics for Experimenters: Design, Innovation, and Discovery shows that the goal of design of experiments is to learn and refine your experiment based on the knowledge you gain and experiment again. It is a process of discovery. That discovery is useful when it allows you to make improvement in real world outcomes. That is the objective.
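To make the design of experiments idea concrete, here is a minimal sketch in Python of a two-level full factorial design and its main effects. The factor names and response values are invented for illustration, not taken from the book.

# Minimal sketch of a 2^3 full factorial experiment and its main effects.
# Factors and responses are invented for illustration only.
from itertools import product

factors = ["temperature", "concentration", "catalyst"]

# All 8 runs of a 2^3 design, coded -1 (low) / +1 (high).
design = list(product([-1, 1], repeat=3))

# Hypothetical measured responses, one per run (same order as design).
responses = [60, 72, 54, 68, 52, 83, 45, 80]

# Main effect of a factor: average response at its high level
# minus average response at its low level.
for i, name in enumerate(factors):
    high = [y for run, y in zip(design, responses) if run[i] == 1]
    low = [y for run, y in zip(design, responses) if run[i] == -1]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"{name}: main effect {effect:+.2f}")

Running all 8 combinations (instead of varying one factor at a time) is what lets you estimate every factor's effect from the same 8 runs, and then refine the design and experiment again based on what you learn.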

Thursday, August 19, 2010

SPC - Charting and Improving Results

Everett Clinic Video, Redux – The Need for SPC Thinking

Looking at 5.x% and comparing it against an arbitrary goal does little to tell us about the health of the work system. Is 5.x% the typical average performance? Is that much higher than usual?

This is a great opportunity to use the methods of Statistical Process Control. The main management decision is to decide "react" or "not react" to that daily data point. SPC helps us with this (again, Wheeler’s brilliant little book explains this far better than I can in a blog post).

If we choose “not react” because 5.x% is lower than the goal, we might be missing an opportunity for process improvement. Generally, it’s better to present more than one data point – even if you don’t do full-blown SPC, you should present a run chart.
Well put. A simple run chart can be very helpful. One of the uses is to identify special causes. And then to use special cause thinking in those cases. What is important about special cause thinking? That you want to identify what is special about the data point (instead of focusing on all the results as you normally would). What is important about doing that? You want to do it right away (not a week or a month later). Keeping the chart lets you identify when to use special cause thinking and react quickly (to fix problems or capture good special causes to try and replicate them).

You have to be careful as we tend to examine most everything as a special cause, when most likely it is just the expected result of the system (with normal variation in the data). Special cause thinking is not an effective strategy for common cause results.
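As a sketch of that react-or-not decision in code, here is a small Python function that applies XmR-style limits computed from recent history; the daily percentages are made up for illustration.

# Sketch of the daily "react or not react" decision using XmR-style
# limits computed from history. The percentages are made up.
def should_react(history, new_point):
    """Return True if new_point falls outside limits derived from history."""
    moving_ranges = [abs(b - a) for a, b in zip(history, history[1:])]
    mean = sum(history) / len(history)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl, lcl = mean + 2.66 * mr_bar, mean - 2.66 * mr_bar
    return new_point > ucl or new_point < lcl

daily_pct = [5.2, 5.6, 5.1, 5.8, 5.4, 5.3, 5.7, 5.5]  # past daily results (%)
today = 7.9

if should_react(daily_pct, today):
    print("special cause signal: investigate today's result now")
else:
    print("routine variation: work on the system, not this point")

This keeps the decision rule explicit: points inside the limits get common cause (system) thinking, points outside get special cause thinking, right away.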

Related: Quality, SPC and Your Career - Statistical Engineering Links Statistical Thinking, Methods and Tools

Wednesday, February 22, 2006

Global Manufacturing Data by Country

Update 2013: see chart of manufacturing output by leading countries from 1999-2011.  The current ranking is China, USA, Japan, Germany.

Topic: Economics, Manufacturing

I am still looking for a good source for manufacturing data by country and year. Today I found some data from the United Nations Statistics Division. The data for the top five manufacturing economies: China, Germany, Japan, United Kingdom and United States. Figures are in current $US billion. The data used is for Mining, Manufacturing and Utilities (because China and Germany do not have manufacturing data separated out).


Country        | 2001  | 2002  | 2003  | 2004
United States  | 1,781 | 1,779 | 1,876 | 2,012
Japan          |   991 |   929 | 1,017 |  n/a
China          |   507 |   551 |   638 |   754
Germany        |   421 |   449 |   545 |   613
United Kingdom |   280 |   283 |   322 |   378


For manufacturing output only:


Country        | 2001  | 2002  | 2003  | 2004
United States  | 1,460 | 1,463 | 1,523 | 1,623
Japan          |   866 |   812 |   894 |  n/a
United Kingdom |   220 |   223 |   254 |   298

This data shows the United States manufacturing economy is continuing to grow and is solidly the largest manufacturing economy, which contradicts what many believe. It is true manufacturing jobs are decreasing in the United States and worldwide; in fact China is losing far more manufacturing jobs than the USA.

I included some information on the manufacturing economy in my post to the Curious Cat Science and Engineering blog, Phony Science Gap?, and referenced my previous post here, Manufacturing and the Economy, which reminded me that I wanted some updated data.


Saturday, September 10, 2005

Measurement and Data Collection

Topic: Management improvement

This is my response to the Deming Electronic Network message (removed broken link) on measurement.

I find it useful, to assure that data collection is a wise use of resources, to ask what will be done with the results. If you don't have an answer for how you will use the data, once you get it, then you probably shouldn't waste resources collecting it (and I find there is frequently no plan for using the results).

I have found it helpful to ask: what will you do if the data we collect is 30? What will you do if it is 3? The answer does not need to be some formula (if 30, then x), but rather that the results would be used to help inform a decision process to make improvements (possibly the decision to focus resources in that area). I find that asking that question often helps reach a better understanding of what data is actually needed, so you then collect better data.

I believe it is better to focus on less data, and really focus on it. My father, Bill Hunter, and Brian Joiner believed in the value of actually plotting the data yourself, by hand. In this day and age that is almost never done (especially in an office environment). I think doing so does add value. For one thing, it makes you select the vital few measures important to your job.

But it is very difficult for anyone to actually suggest plotting data by hand: they must be very secure in their reputation (or maybe a bit crazy), because it seems a hopelessly outdated idea that paints you as the same. My appeal, within the Deming context, is that the psychology of plotting the points yourself is qualitatively different from letting the computer do it. Plotting the data yourself lifts that data out of the sea of data we find ourselves inundated with and gives you a deeper connection to it. You would not plot all the data that you use by hand; just the most important items.


John Hunter
Curious Cat Management Improvement Connections