Showing posts with label experiment. Show all posts

Friday, February 07, 2020

Experimenting to Improve Sleep Quality

comments on: Can a Humidifier Help You Sleep Better and Snore Less?

After doing some research I learned that humidifiers have helped folks snore less. So, after some more research, I picked up a slick little ultrasonic humidifier and gave it a try. Now, it’s been less than a week which I know isn’t enough to get too excited about statistically speaking. But one thing is becoming crystal clear…it’s most definitely helping me sleep better.

Interesting post, which includes control charts showing the impressive progress.

"I’m also still trying to figure out what caused the three special cause signals in January." One nice aspect of improvement is that sometimes a system improvement stops previous problems from recurring even without your knowing what caused them. Maybe that won't be the case this time, but maybe it will. Health related issues are so touchy that I could imagine it is something like a couple of bad factors stacking up and pushing things over the limit. So being a bit tired, plus humidity that is too low, plus not drinking quite enough liquid, and sleep quality is bad; with just 1 or 2 of those it might be a bit worse than usual but not horrible.

Special cause signals will be more frequent if several factors amplify each other when they occur together (and since they rarely occur together, those amplified results are rare). Acting alone, each factor produces variation that stays inside the system generating the regular results. But when all that variation lines up just right, the factors acting together (amplifying each other) create a very large change, and the result falls outside what is normal - a special cause.
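The special cause detection discussed above can be sketched with a minimal XmR (individuals) control chart: compute the mean and average moving range, derive the control limits, and flag points outside them. The sketch below uses made-up sleep-score numbers purely for illustration, not the data from the post.

```python
# Minimal XmR (individuals) control chart sketch.
# The sleep_scores values are invented for illustration only.

def xmr_limits(values):
    """Return (mean, lower control limit, upper control limit)."""
    mean = sum(values) / len(values)
    # Average moving range between consecutive points.
    ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(ranges) / len(ranges)
    # 2.66 is the standard XmR constant (3 / d2, with d2 = 1.128 for n = 2).
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

def special_causes(values):
    """Indices of points beyond the control limits (the basic out-of-limits rule)."""
    _, lcl, ucl = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

sleep_scores = [72, 75, 71, 74, 73, 70, 76, 74, 55, 73, 72, 75]
print(special_causes(sleep_scores))  # the unusually low point at index 8 is flagged
```

Note this only implements the simplest detection rule (a point outside the limits); real control chart software also checks run rules such as eight points in a row on one side of the mean.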

Related: Gadgets to Mask Noise and Help You Sleep or Concentrate - Apply Management Improvement Principles to Your Situation - Zeo Personal Sleep Manager - Using Control Chart to Understand Free Throw Shooting Results

Thursday, March 07, 2019

Take Risks to Learn and Improve, But Do So Wisely

My edited comments on: The Limits of Learning From Failure

Failure can be a great learning tool, especially if it is planned. Create an environment that supports and learns from failure, but also use the scientific method, coupled with experience, to understand and mitigate the risks.

I agree, I wrote about this on my blog: Accept Taking Risks, Don’t Blithely Accept Failure Though

The goal is to maximize innovation and improvement. To the extent we need to take risks and accept some failures to achieve this we should accept failure. But that doesn’t mean we don’t continually try to improve our management systems to reduce the costs of failure. Even while we take risks we want to do so intelligently.

It is true many organizations are so fearful of being blamed for failure that sensible risks are avoided. We do need to create management systems that allow taking sensible risks, but we also need to learn while limiting the damage from failures. Do experiments on a small scale, iterate quickly and expand the scope as you learn.

Related posts: Learn by Seeking Knowledge, Not Just from Mistakes - Risks Should be Taken Wisely - What is the Explanation Going to be if This Attempt Fails?

Thursday, February 09, 2017

Should I be in the Check Phase of PDCA Daily?

Below is my response on a closed forum to the question of whether doing the "check" phase of PDCA daily was too often. I expanded a bit on my comments there in this post.

The check/study phase should review the results of the experiment conducted in the Do phase. "Checking" how things are going during the experiment makes sense, but that isn't the check/study phase of PDSA.

For example, you don't want to ignore the experiment while it is running and then look at the data only to discover obvious signs that the operational definitions were not clear, or that the process is producing very bad results. So you need those doing the experiment paying attention daily.

Remember, one key to using the PDSA cycle is to turn through the whole cycle quickly. Daily would be exceptionally quick; moving through the whole cycle in 2-6 weeks is more normal. Organizations that use PDSA successfully will quickly turn the cycle 4+ times for a specific effort (often the 2nd, 3rd... times through are much faster than the first).

More on how to use the PDSA well:

Tuesday, June 02, 2015

Learning From Process Improvement Efforts

Comment on: How to Improve (at just about anything)

1. The classic way:

Do – make an improvement
Do – change a process
Do – implement some training
Do – install a system

When you have been through the 4 do’s keep right on doing.
2. The recommended way:

Plan – develop an idea or innovation, work out how you will implement it.
Do – carry out the plan on a small-scale, test it to see if it works.
Check – study what happened, did the plan work? If not why not? What can you change?
Act – adopt the change and roll it out, abandon it or learn from it and adapt it.

Another huge benefit of the PDSA cycle, in my experience, is learning. I can't remember how many times I would see, in the do-do-do-do organization, that

do#1 was x
do#2 was y
do#3 was x again
do#4 was z
do#5 was y again

Um, ok, so why are we trying things we already know don't work? (They are presented as fixes, not as "well, this old way wasn't great, but it was much less bad than the mess we have now, so let's go back.") Why are we thinking x is going to work when we just dumped x because it wasn't working? PDSA makes you think about the process, study the historical data and document your predictions. The learning will be far greater as a result.

Related: How to Improve - Document Your Decisions to Learn More Effectively - Learn by Seeking Knowledge, Not Just from Mistakes - Write it Down

Tuesday, March 24, 2015

Learning Can't Take Place Without Theory

Response to: The Secrets of Lean

I think you make good points, but I think you make a mistake stating:

"This system of learning has come from experience, not theory."

For some reason, culturally, we created this idea that theory was about disengaged people (away from the gemba) thinking in a way disconnected from practice. But this is not what theory is.

Learning can't take place without theory. Experience alone doesn't lead to learning. Experience with a theory, and an informed observer who questions what they see, can lead to learning.

It seems to me people accept theory that is separated from the gemba as what theory is, and then say theory is not useful while experience is. But we are making a mistake when we think this way. The problem we see with theory used disconnected from feedback from the gemba is a bad use of theory. The solution is not to eliminate theory, but to recognize that theory disconnected from the gemba is not useful for learning about systems and improving our organizations.

Related: Experimentation is an iterative process - Effort Without the Right Knowledge and Strategy is Often Wasted

Wednesday, June 25, 2014

Create a Continually Improving Management System - not the Perfect Management Solution

My response to a LinkedIn question*
> Hi everyone, i have a question in relation to Lean, what steps will i
> take if i wanted to apply Lean to an engineering firm?

In most instances I think a PDSA approach is the best approach to use. Test out various options in parts of the company. See what works. Build and improve the process and spread it more widely.

There are some advantages to a wholesale, uniform, CEO-led effort. But a centrally driven transformation without a powerful CEO (or someone close, such as the COO) directly involved is likely to have problems.

Instead, try approaches on a smaller scale, build on what works, adjust based on experience... Depending on how big you are, different areas will often need a different focus. What the call center uses and what the research department uses may be fairly different. There should be unified principles that hold true everywhere, but honestly those are almost always just useless words at first (in the cases where they actually start as real guiding principles, that is great - it just seems rare in my experience).

A decade later maybe a company will really be guided by respect for people, data based decision making, going to the gemba, customer focus, continual improvement... And those really will be the core behind some fairly different processes in divergent parts of the company. But at first it is usually just words that don't connect to actions.

Some of the most important things about the initial plans (off the top of my head - I may be forgetting some things...) in my opinion are:

  • continual improvement - a rigid approach is likely to fail (unless you get really lucky). Build the plan with the idea that we are putting forth our first approximation and will be continually evolving the approach. The plan most importantly needs to be adaptable based on what we learn (more important than being "right" at the start).
  • a focus on experimentation and all that means (providing people training if needed, providing expertise if needed, understanding variation, using data properly...)
  • go to the organization gemba and user gemba
  • focus on assessing what is working and what isn't, and adapt
  • respect for people
My suggestions in a post for the Deming Institute blog:
My suggestion in such a case is to start slowly, learn as you go and build on successes. Learn directly from Deming (the books and videos) and from other great books by those that worked with him. My favorites include: The Leader’s Handbook by Peter Scholtes, Fourth Generation Management by Brian Joiner, The Improvement Guide by Gerald Langley, Kevin Nolan, Clifford Norman, Lloyd Provost and Thomas Nolan.

Start using the tools (PDSA, control charts, flowcharts, cause and effect diagrams, visual job instructions, …), focus on respect for people and move toward evidence based decision making. Focus on doing a few things well. Don’t try to do everything at first. Concentrate on getting a few tools and new concepts well understood and effectively used in the organization. Then build from there. As part of this build an appreciation for systems thinking (seeing how interconnected things are is important to moving forward).
* I would link to the LinkedIn discussion but they chose not to provide sensible options (it is a closed group anyway, so you couldn't see the conversation). Even so, a web site designed with usability in mind could make this work usefully (and use links that would still work if the group discussion became public later). It is pitiful how poor huge internet companies are at programming usable websites.

Wednesday, November 06, 2013

Risks Should be Taken Wisely

I agree. I think it is wise to understand you are willing to take certain risks in order to improve and innovate. Sometimes things might not work out. That doesn't mean you don't do what you can to mitigate the impact of things that don't work out.

It does seem to me the "accept risk" (fail fast, accept failure...) folks would be better served to focus a bit more on mitigating the results of failure. Sure accept risks when you determine it is worth taking the risk due to the benefits.

I wrote about this earlier this year: Taking risk, but do so wisely.

Accepting risk doesn't mean failure is good. And it doesn't mean the results of experiments are all blameless. You can do a poor job of taking risks. If that happens, we should learn from it and improve how we take risks going forward. I would also put my focus on process over people (what, good and bad, can we learn about how we did this experiment or took this risk, to do better experiments and risk taking going forward).

In response to: To Blame or Not to Blame

Related: Find the Root Cause Instead of the Person to Blame - Blame the Road, Not the Person - Respect for People Doesn’t Mean Avoiding Any Hint of Criticism

Tuesday, October 08, 2013

Pilot on a Small Scale First - Good Advice We Often Ignore

Response to: Pink NFL Penalty Flags Surprisingly Cause Confusion

This is an example of why piloting new ideas is wise. The truth is we often don't pilot stuff. Many times it works out fine (and no-one mentions we didn't pilot it on a small scale). When you don't pilot and it then fails on a big scale, the question, I think, is: were we bozos for not seeing the risk? Looking back, is there a pretty strong case that we should have piloted?

If we often don't pilot and it works 99 times out of 100, it may be that we are pretty good at knowing what needs to be piloted, and accepting some failures is ok in order to get things done. A critical part of the decision is making sure you don't fail to pilot when it is really costly to be wrong (which is part of the decision on whether to pilot).

We can just always point to the failure to pilot as the dumb thing to do when something fails. But I see that as a bit overly simplistic. Many organizations don't pilot well. Getting them to pilot all the time would likely stop you from doing better stuff. Getting them to pilot when:

  1. there are likely to be things we should learn
  2. there are significant questions about how it would work
  3. the costs of widespread failure are large
  4. we can't consider the potential risks and make a judgement that there is likely not to be a problem
Piloting on a small scale is best. It is what I recommend and encourage. I just think seeing the failure to pilot as a cause of the widespread problem is too simplistic. Why did we fail to pilot needs to be the next question - don't stop at the failure to pilot as the root cause. From there you will nearly always discover (unless maybe you are Toyota or the Kaizen Institute or something :-)) that your organization consistently fails to pilot before adopting on a wide scale. Then you need to dive into that issue...

With this particular example, it seems to me a decent case could have been made, thinking it through rationally, that a pilot wasn't needed. And that illustrates that there is always a risk to implementing without piloting. There is also a risk in piloting anyway: a very big one is failing to catch the problems because your pilot failed to capture some important features (for example, you didn't think of the need to pilot with pink towels in play - an easy mistake to make).

And it shows why thinking about pilots is important - which is another thing we often fail to do: considering how to make the pilot cover the risky scenarios that may take place. Sometimes organizations will use certain locations to pilot stuff, which can be useful - you can train these locations to provide good feedback, etc. But as soon as you make the pilot locations different from where it will actually be done, there are risks of not catching things.

It is the interaction of variables that often creates problems, as it did this time. Pink flags met the initial criterion of being noticeable. The interaction with putting many other pink items into play (certain jerseys, towels, etc.) is what seems to be the issue.

Related: What is the Explanation Going to be if This Attempt Fails? - Accept Taking Risks, Don’t Blithely Accept Failure Though - Management is Prediction - Combinatorial Testing for Software - European Blackout: Human Error-Not

Tuesday, September 03, 2013

Early "Lean" Thinking

"There are some who criticize the 'early days' of the Lean movement as being too focused on tools. But, I’ve re-read a lot of the early material and this is not the case." - Mark Graban

Exactly right. It seems to me it was when the first "lean manufacturing" fad wave hit, and lots of people (that didn't study and learn what it was really about) quickly churned out their oversimplified "lean manufacturing" cookbook tool approach. That is when the tool approach took off, and because it is easy to train people on tools, that has always been a popular way to sell services to companies. It is really just putting new tools into the existing management system instead of adopting new management thinking, which is what the people that actually studied "lean" were doing and talking about. The tools can be helpful, but it is a very limited approach to "lean" (if you can even call it that - really it should be called using a couple of lean manufacturing management tools). The initial people who studied Toyota, and other companies mainly in Japan, understood it was a different way to manage - not just using a couple of tools.

But it was hard to figure out how to actually do it (getting management to improve is hard - it is easy to sell management some training that will "make workers better"). It was easy to offer training in setting up QC circles and how to use various tools, so much of that happened. The biggest change in selling lean training is that you no longer see people selling QC circle training; they now sell other tools.

Here are some early reports (so early they preceded the widespread use of the lean term). It also means the focus hadn't already been set by The Machine That Changed the World, but it is the same stuff that those who studied in 1980, 1990, 2000 or 2013 saw - it is more about respect for people and using everyone's brain than any specific tool. And these articles have a bit more focus on using statistics and data than much of the lean literature today (partially because George Box and Dad were statisticians and partially, in my opinion, because current lean literature is light on using data).

Peter Scholtes report on first trip to Japan, 1986

Managing Our Way to Economic Success: Two Untapped Resources - potential information and employee creativity by William G. Hunter, 1986

How to Apply Japanese Company-Wide Quality Control in Other Countries by Kaoru Ishikawa. (November 1986).

Eliminating Complexity from Work: Improving Productivity by Enhancing Quality by F. Timothy Fuller, 1986

On Quality Practice in Japan by George Box, Raghu Kackar, Vijay Nair, Madhav Phadke, Anne Shoemaker, and C.F. Jeff Wu. (December 1987).

The early lean stuff was much like what is discussed there (though these were written before the "lean" term had taken hold). These were all first published as reports at the University of Wisconsin - Madison Center for Quality and Productivity Improvement, founded by my father and George Box.

While the format of the documents may be a bit annoying, thankfully they are actually available - unlike so many articles supposedly meant to stimulate better management practices (look at major "associations" that don't even make articles available online without a blocking paywall preventing the articles from doing much good).

Related: Management Improvement History (2004 post) - Early History Of Management Improvement Online (2007) - Transforming With Lean (2007) "Successful management improvement is not about mindlessly applying quality/lean tools." - "The tools are very helpful but the change in mindset is critical. Without the change in the way business is viewed the tools may be able to help but often can prove of limited value." (2006) - Lean Thinking and Management (2006) - From lean tools to lean management by Jim Womack, 2006 - I would link to the original article but it is gone :-(

Tuesday, July 09, 2013

Grades, Test Scores and Complex Brain Teasers Are Not Good Ways to Pick Employees

Google HR Boss Explains Why GPA And Most Interviews Are Useless
Google doesn't even ask for GPA or test scores from candidates anymore, unless someone's a year or two out of school, because they don't correlate at all with success at the company. Even for new grads, the correlation is slight, the company has found. Bock has an excellent explanation of why those metrics don't mean much: "Academic environments are artificial environments. People who succeed there are sort of finely trained, they're conditioned to succeed in that environment."
Exactly right. Graduating (and the difficulty of the courses - lots of math or science courses, for example, tell you something about the student's capability) tells you something about a person's ability to put up with a constraining system (which many jobs also have), but grades are not very valuable. And graduating just gives you a bit of data; people can have that same capability without graduating.
Google also used to be famous for posing impossibly difficult and punishing brain teasers during interviews. Things like "If the probability of observing a car in 30 minutes on a highway is 0.95, what is the probability of observing a car in 10 minutes (assuming constant default probability)?" Turns out those questions are "a complete waste of time," according to Bock. "They don’t predict anything. They serve primarily to make the interviewer feel smart."
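For what it's worth, the teaser itself reduces to a few lines of arithmetic once you assume the three 10-minute intervals are independent with equal probability (which is how the puzzle is usually intended): the chance of seeing no car in 30 minutes must equal the chance of seeing no car in each 10-minute interval, cubed.

```python
# The car brain teaser, worked out under the usual assumption that
# the three 10-minute intervals are independent and identical.
p_30 = 0.95                 # P(at least one car in 30 minutes)
p_none_30 = 1 - p_30        # P(no car in 30 minutes) = 0.05
# Independence: p_none_30 = p_none_10 ** 3, so take the cube root.
p_none_10 = p_none_30 ** (1 / 3)
p_10 = 1 - p_none_10        # P(at least one car in 10 minutes)
print(round(p_10, 4))       # prints 0.6316
```

Which rather supports Bock's point: the question tests whether you recall a probability identity, not whether you can do the job.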
Dee Hock has some very good ideas on hiring: "Hire and promote first on the basis of integrity; second, motivation; third, capacity; fourth, understanding; fifth, knowledge; and last and least, experience."

Related: Hiring the Right People for Your Company - Signs You Have a Great Job, or Not - Google’s Answer to Filling Jobs Is an Algorithm - Hiring: Silicon Valley Style - Interviewing and Hiring Programmers

Monday, April 08, 2013

Remembering George E.P. Box

George Box passed away last week after a long (1919 - 2013), rewarding and productive life. His obituary ends with "a last message from George" (quoting Cole Porter's song, Experiment).
“Experiment! Make it your motto day and night. Experiment, And it will lead you to the light …Be Curious, …Get Furious… Experiment, And you’ll see!”
The full text of the song is quoted in Statistics for Experimenters (a book by George, my father and Stu Hunter on using design of experiments to improve). The song is included in the De-Lovely soundtrack.

If you want to honor the memory of George, contributions could be made to

  UW Foundation - George Box Endowment Fund (link to donate - include George Box Endowment Fund in the box for instructions), US Bank Lock Box 78807, Milwaukee, WI 53278. This fund was started some years ago with the intention of assisting graduate students. It is a permanent endowment fund, so contributions are added to the principal and the annual earnings are used to support the fund purpose. The purpose of the fund is to support activities of the Statistics Department with a primary (but not exclusive) focus on activities of direct benefit to graduate students. Recipients will be selected by the Department faculty (or their designates) with input from Departmental graduate students.

  Agrace HospiceCare (link for donating online), 5395 E. Cheryl Parkway Madison, WI 53711.

Wednesday, January 23, 2013

Informal, Subconscious PDSA Experimentation

"Informal" PDSA is basically how babies and kids learn. They don't formalize the theory they are testing, but their brain is doing it for them. If they touch the hot stove they learn: hey, touching that hurts. Most brains figure out the hot feeling gets super hot if you touch what appears to be the source (even without a helicopter parent telling them). I don't want to be hurt again. Don't touch that hot thing. A bit older, they connect the stove as a place likely to be hot... Sometimes they are a bit lame and fail to make the connections until they get burned a couple of times.

Same thing with, say, putting food into their mouth. They try various ways of doing so with various levels of success. Eventually they find good ones. As kids get a bit older they have to modify the most effective ways of getting food to their mouth to make sure the big huge person sitting next to them doesn't stop them (factoring in "manners" as part of what is needed, not just efficiency).

Kids really are amazing at doing "informal PDSA." But there are ways to get even better, by being conscious of what you are testing. Especially as systems get complex, relying on your brain to decode everything behind the scenes (doing its own subconscious PDSA) gets less reliable.

Response to: PDCA – So Simple, It’s Child’s Play

Related: Encouraging Curiosity in Kids - Experience Teaches Nothing Without Theory - Keys to the Effective Use of the PDSA Improvement Cycle