Thursday, July 29, 2010

Work In Progress (WIP) and Little's Law

I recently wrote a small writeup concerning Work In Progress (WIP). In it I described throughput as the amount of work to be done divided by the time it takes to do the work. I threw the writeup out yesterday when I was pointed to Little's Law, which is exactly what I was defining.

www.factoryphysics.com/Principle/LittlesLaw.htm

(Note that I have had experience with line-based work since I was a child. I grew up on a dairy farm, and we went from a simple stanchion barn to what is called a double herringbone with 4 stalls on each side. That layout worked well for two workers, but it was too much for one, so we removed 1 stall from each side to make it a double 3.)

I am still reading about Little's law, but this much I have noticed:

The key to Inventory = Throughput × Flow Time is consistency and a lack of variability in the units of measure.
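For reference, the law itself is just a steady-state accounting identity among three averages. Here is a minimal sketch in Python, with numbers that are entirely made up for illustration:

# Little's Law as a steady-state identity: WIP = Throughput x Flow Time.
# The numbers below are assumed, purely for illustration.

wip = 12.0        # items currently in the line (say, features in progress)
flow_time = 3.0   # average weeks an item spends in the line

throughput = wip / flow_time          # items finished per week
print(f"Throughput: {throughput:.1f} items/week")   # -> 4.0

# Cross-check the identity from the other direction.
assert abs(wip - throughput * flow_time) < 1e-9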

Note that even with "cost" as the common unit, the law is still difficult to apply in software development:

Cost is unknown or varies.
Time is unknown or varies.
Complexity is unknown or varies.
Congestion is unknown or varies.
Bottlenecks are unknown or vary.

All of the above issues have been concerns of every software process and software estimation technique. But note that even the best techniques still give only estimates.

Predictability is becoming (or maybe already is) the main selling point for software process. If I were going to attempt to use Little's Law for software development, I would have to bring software development into the realm of manufacturing, and to do that I need predictability. In other words, I can do things to the software process to make it seem to fit better, and then jump to the conclusion that if it fits well enough, Little's Law must still hold.

If I could get every developer on my team to divide any task into equal chunks of work, then I could apply Little's Law. If any task, no matter its size or difficulty, can be divided, redivided, and re-redivided until it is a small and consistent chunk, then I can apply Little's Law. If the cost of subdividing large features is too high, then I can argue that there should be no large features. I could then argue that "no large features" really means that all features can be delivered incrementally, and therefore there is no need for any large features.
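To make the arithmetic of that argument concrete, here is a minimal sketch in Python with assumed numbers: if chunks really are uniform and finished at a steady rate, Little's Law predicts the average flow time per chunk.

# Hypothetical numbers: uniform chunks and a steady completion rate are
# assumed, which is exactly what the argument above requires.

chunks_in_progress = 20        # uniform chunks currently in flight
chunks_done_per_week = 5.0     # steady completion rate

avg_flow_time_weeks = chunks_in_progress / chunks_done_per_week
print(f"Average flow time per chunk: {avg_flow_time_weeks:.1f} weeks")  # -> 4.0

# The prediction is only as good as the assumption that chunk size and
# completion rate really stay consistent.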

All of that arguing takes you down a very narrow path, and one built on assumptions that are not necessarily true.

For instance, releasing a product into a well-established market requires large feature development. If I were going to break into the word processing market, my first release could only handle plain text with no wrapping, and then I would incrementally release wrapping, fonts, and so on, until finally the product had a feature set comparable to the competition.

The "total incremental" delivery approach implies that all features can be evolved. (I would like to see that proof.)

Now, can there be things learned from Little's Law? Certainly. Maybe we could have a discussion about that and find ways to use it to improve software development.

For instance, if you find you are enhancing existing software and roll-outs happen regularly and consistently, then you may be in a situation where you should subdivide large features and get your development down to the smallest reasonable "chunk size" possible.

I am doing further investigation based on this:
"Reducing WIP in a line without making any other changes will also reduce throughput."

Geoff

p.s.
(Additional Research)
www.shmula.com/2407/lean-and-theory-of-constraints-either-or
