Monday, September 15, 2008

Timetables versus Milestones

It's Saturday morning and you are going fishing. You need to catch twenty bluegill because you have invited friends over for a dinner of fresh fish, home fries, coleslaw, and hush puppies.

You need to have the fish caught by 4:30 p.m. and cleaned by 5:00 p.m. so they are hot out of the skillet when the guests arrive.

What time should you go to the lake and start fishing?
What time is the latest that you could possibly go and start fishing?

Those questions are just to get you thinking.

I live in the United States of America. Currently my country is involved in what is commonly referred to as a war on terror on two fronts: Iraq and Afghanistan. I am not giving my opinion on any aspect of these military actions other than the issue of timetables.

Currently there are those who desire a timetable for when military action will be completed in Iraq.

Too often timetables are the result of "pure management". What I mean is that those who are not actually doing the work, but have some relationship with the outcome, try to manage by arbitrary dictate of schedule. Managers may try to time-box everything into a neat package. Managers like neat.

(This will be tied to Software Development. I hope you can wait.)

Let's go back to the fishing analogy.

Let's suppose you allow thirty minutes to catch the twenty blue gill. I personally have caught that many fish in that time, for real. And there have been days when I have caught one or two in thirty minutes.

Let's suppose you have been fishing for fifteen minutes and you only have four fish. You may inform the fish that you are on a timetable if you like, but I think it will not entice them to bite.

Obviously you can't make fish bite and you can't predict how long it will take to catch twenty fish. Fish have a mind of their own and are motivated to take bait by many factors that you cannot control.

My advice to you is to go fishing as early as you possibly can and have a couple of ponds or lakes you can visit. Also, invite your friends over for some fresh fish or for brats, depending on the outcome of the fishing trip.

Now I will draw this closer to programming by returning to the Iraq timetable. It is closer to programming because people are involved, not fish.

When dealing with people, timetables are used to focus them on the tasks at hand. It is supposed that people may not work as hard as possible unless there is a tight schedule and management pressure. Often rewards for early delivery or punishments for late delivery are added as motivators.

It could be argued that in Iraq there are those who would prefer a U.S. military presence for a very long time. Maybe the Iraqi government prefers having the U.S. risk military personnel instead of risking Iraqi citizens. Maybe the Iraqi government prefers having U.S. military infrastructure at its disposal because it is cheaper than developing its own. Maybe there are military contractors that want to continue their involvement because it is lucrative for them. Whatever the reasons, it seems there are conflicting interests and no common goal.

Just as you cannot clean a fish until it is caught, and you cannot fry the fish until it is cleaned, you cannot expect Iraqi security forces to take control until there are sufficient members of the Iraqi security force. At least with the fish story, if you don't catch the fish in the time allotted, you have told your guests they may have to eat brats instead. I do not know of any alternatives in the Iraq situation. Milestones have to be reached before the next step can be taken.

Another aspect of management setting aggressive timetables is the inference that management considers the work force lazy and not always working as hard as it can. In the situation with Iraq it may be inferred that the military personnel are not doing their best job. This inference can be very upsetting: military personnel are putting their lives on the line, and it is insulting for them to think their managers feel they are not doing their best.

For some it is considered failure to move on to the next time-boxed item before the previous one is finished. Continuing to move forward without finishing items is building failure upon failure. That is why you will hear it said that "failure is not an option". A series of failures or incomplete objectives creates instability and invites opportunities for disaster. For military actions such behavior would seem total foolishness and would risk further loss of life and possible defeat. That may be why timetables are associated with defeat.

If you are building a house and you move on to installing the walls without completing the foundation, just because the time for completing the foundation has passed, it would seem foolish to most everyone. If you are fishing, you do not go to the cutting board and get out your filleting knife just because it is time to start cleaning fish if you don't have any fish to clean.

Software development, fortunately, is not as rigid as building a house or managing a war. A software product may have many features. However, not all of those features are needed for a first release of the product. In software it is actually preferred to release enough features to place the product into the market so that the product can gain traction, build a user base, and start a revenue stream. After the first release, feedback is used to select a few more features and a new version of the product is released. This continues over and over again for as long as the product is marketable and viable.

Software can combine timetables with milestones.

Suppose we have a product with many features defined. The market seems to indicate that a release in twelve months would be optimal. The features are prioritized and an estimate of the time to deliver each feature is made. Features are organized by dependency so that if feature D is wanted in the product release, it is understood that features A, B, and C must be developed first. (Notice that to get to D, you must pass milestones A, B, and C.)

After the dependencies and estimates are in place, a subset of the total feature set is selected, and this becomes the goal for the twelve-month timetable. (I just realized that I am imagining a desktop application, so delivery of the product is more complex than a Web app that could be rolled out incrementally.)

The product now has a feature set definition. At this point each feature, in order of dependency, can be time-boxed (a time box is a start date and an end date) and the features can be organized and assigned. This effectively serializes the feature set.
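Serializing a feature set by dependency is exactly a topological sort. Here is a minimal sketch in Python (the feature names A through D come from the example above; everything else is invented for illustration):

```python
# Sketch: serializing a feature set by dependency.
# Feature D depends on C, C on B, B on A, as in the example above.
from graphlib import TopologicalSorter

# Each feature maps to the set of features it depends on.
deps = {
    "A": set(),
    "B": {"A"},
    "C": {"B"},
    "D": {"C"},
}

# static_order() yields the features in an order that respects
# every dependency -- the serialized feature set.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['A', 'B', 'C', 'D']
```

With a richer dependency graph there may be several valid orders, which is where estimates and priorities come in to pick among them.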

As each feature is delivered (this implies that it really does work), the product's release schedule can be monitored to give an idea of whether the product will ship on the desired date.

If the feature set of the product is a bare minimum and cannot be reduced further, then the product cannot ship until every feature is completed. If features take longer than expected, then the ship date will have to slip. There is no alternative. You cannot alter time. Therefore milestones actually trump timetables. Delivery is an end; timetables are a means of estimation. Do not confuse the means with the ends.

Now, let's imagine managers, pure managers, trying to shorten the development time for the product OR trying to include more features in the twelve-month period. When I say "pure manager" I mean they cannot write code, so they cannot do anything to help get the code finished any sooner. They can't lend a hand.

If the managers will not increase the number of developers and reassign existing tasks amongst the work force, then it seems impossible to get more done in less time.

For management to change the dates because they feel the work force is not working at full capacity implies that management feels the workers are lazy. That is insulting to the work force. Maybe management feels the workers have padded the time estimates. The workers will recognize this and infer that management thinks they are dishonest and lazy. The workers' morale may drop, and if that happens there will be side effects.

Here is something I want you to consider.

If you have hired the brightest and best developers, and if they are completely honest and ethical, and if they never get sick or have any personal issues or crises, and if they are the very best at software architecture, then however long it takes them to develop a software product is the shortest amount of time in which it could possibly be done, and no manner of process management would have decreased the time to delivery.

Now, given those brilliant developers described above, if there are other tasks besides the development tasks that need to be organized, then it is prudent to add another task for the developers (which will take time and thus push the finish date out further): estimating feature release dates so that marketing can queue its tasks, documentation can queue its tasks, and so on. By organizing other tasks to be performed in parallel you can shorten the overall time for the complete release and delivery of the product.

As each milestone is reached (each feature is finished), the timetables can be readjusted and parallel tasks can be rescheduled.

The software will not be done until the last feature is delivered, and it will take however long it takes. Milestones have power over timetables. They always will, if you expect to finish something.

I hope you understand my analogy with fishing, with the very difficult situation of the Iraq war, and with house building.

Your comments are welcome.

Sunday, July 27, 2008

The Allure of Code Reuse

The desire to reuse is well ingrained into the software development process.

The idea of code reuse suffers from a poor choice in wording. I have not seen great success with code reuse. I have seen significant productivity gains from using object files, libraries, services, and frameworks.

The bad kind of code reuse often involves what is known to programmers as ifdefs, custom environment variables, shims, wrappers, and most commonly the copying and pasting of code. Code reuse is inside the box, or white box; libraries are black boxes.

The "ifdef" type changes have an insidious nature. Firstly, those involved work under the assumption that just a few little changes will ultimately be harmless to the code base. Secondly, reusing a large amount of code seems enticing, but the very idea beguiles the user into believing that reuse is always cheaper than rewriting software.

The trickery is the idea that your only two choices are to reuse or to rewrite the code.

"We can take Kim's code and just tweak it a bit to handle our needs!"

This thought is as if it came from inside a vacuum, in absolute defiance of how code takes the shape that it takes.

I have designed many clean domain models, object models, and system architectures in the pure and clean world of theoretical ideas. Sometimes these designs have been based on well understood programming paradigms and the resulting code was very much the embodiment of the design. Sometimes. More often the development runs into issues, and these issues cause the design to be changed.

I recently developed a 2D graphing/charting package. This is nothing new to me because I have done a few. What was new to me was the system that it had to be built upon. Fortunately, developing 2D graphics in a GUI-based OS has not changed much since the early days of Macintosh, Amiga, OS/2, and Windows development. Fundamental rules still apply. For instance, if you want to cause something to be drawn or refreshed you call Invalidate on the window or control.

Because the fundamental rules apply, my design for this system was "mostly" correct. As I coded I soon discovered that the OS had limitations that I did not expect. These limitations caused me to make "in place" design changes to get around the weaknesses of the OS. These in-place changes mutate the design of the overall system, making it difficult to remember or explain what the code does and why it does it that way. Obviously one uses all the tricks of the trade to capture the intent of the code, but I often hear people reading such code and saying, "That looks weird. Why did they do it that way when all you have to do is blah blah blah?"

I myself have even forgotten why I did something one way and I put in the more obvious solution only to remember, "Oh yeah, that doesn't work. That is why I had to do it that way."

One of the most common shortcomings is performance. There may be some call you can make in a provided library that does what you need, but does it too slowly. Performance is a requirement and it must be met. Another issue may be memory usage. Any of these issues causes the code to deviate from the theoretical design in order to meet the demands of reality.

Forgetting or ignoring that code is filled with such special case code is one of the traps of code reuse.

Code often takes the path of least resistance. I have seen developers change code at the point where a problem surfaces because of a deficiency elsewhere in their own code. Sometimes it is expedient to just fix a problem where it is encountered instead of drilling in and finding the real problem in their own code. Some developers do this because they do not consider that their code is flawed; others do it knowing the flaws of their code but justify it as the most expedient solution to the problem.

Regardless of the reasons, code is filled with little "bypasses" around bad or clogged veins of code.

Now, back to the topic: code reuse.

As an argument in favor of code reuse you will hear it said, "This code does almost all we need already."

If I may, I would like to say that 80% is "almost". I pick that value not as an absolute but as a common figure used by programmers when describing code. It does not really mean exactly 80%; it means "mostly".

I have heard, and witnessed, that 80% of the code can be developed in about 20% of the time it takes to complete a software feature. It is the last 20% that is difficult. Again, 20% doesn't mean exactly 20%; it means "the devil is in the details".

Please remember that code has bypasses all through it to avoid deficiencies. With that in mind, consider that the last 20% of the code takes 80% of the time to develop. That 20% is the very code that will make the whole difficult to reuse.

So, if you can write the theoretical ideal of the code (the first 80%) easily, then why exclude that option when considering code reuse? Remember that the two choices offered for code reuse are usually reuse or rewrite.

I have ported code from Macintosh OS 7, 8, and 9 to Unix and Windows. I know from years of experience on large systems that it is better to first port the design, the theoretical model, the ideal, than to reuse code filled with bypasses. ("Well, our code is not filled with bypasses." Yeah, right.)

Instead of taking code that is almost correct and filling it with ifdefs, conditions, and bypasses around the new problem, I suggest developing the model from the experience gained from the previous solution.

With this design reuse in place, let the code take its natural path of bypasses and conditions to handle the deficiencies of the new problem space.

Even if the old 80/20 axiom is pure myth, I still recommend rewriting based on a clean design.

Now many will hear "rewrite" and equate it with expense. From my experience the reuse situation causes the code to reach a state of confusion such that it is not maintainable or extensible, and thus it goes from satisfying one task sufficiently to failing to satisfy two tasks sufficiently, which forces a rewrite anyway. A forced rewrite due to code collapse is considered expensive because the collapse usually happens at an inopportune moment, when the system is under new loads and stresses.

I once worked on some code that was reused by another team. I encountered a performance flaw that needed to be addressed. To do so meant changing the parameters to several methods. Making these changes would break the other team. The other team did not have time to change their code to provide these new parameters. I was stuck. The code began to do two tasks poorly. It only compounds from that day forward.

Now there is another topic that is not considered here. That is the development of frameworks and code units that are meant to be used by many teams. I have developed such systems before and their reuse has been beneficial. Such reuse is really at a higher level than code reuse. These are libraries and services that are reused and the internal code is a black box to the users.

In summary, don't limit yourself to the two choices of reuse or rewrite. There is more: there are designs to be reused and there are experiences to build upon. Code is filled with bypasses around environmental deficiencies, and that makes reusing code difficult and filled with pitfalls.

Friday, May 23, 2008

Approximating a Semicircle with a Cubic Nonrational Bezier Curve

At times I have had the need to approximate a semicircle with two quadratic Bezier curves.

Recently I wanted to approximate a semicircle with one Bezier curve. I decided to do this with a non-rational cubic Bezier curve.

First I made a cubic Bezier curve with a control polygon whose points correspond to a unit square.

Then I plotted a circle against the Bezier curve to see how close the Bezier curve was to a semicircle. It wasn't very close. So I knew I needed to adjust the Y values of P1 and P2 to bring down the curve. But how much? I evaluated the Bezier curve parametrically at t = 0.5 and determined that the y value at t = 0.5 was 0.75 or 3/4.

The radius of the semicircle is 0.5. I needed to move the Bezier curve's midpoint down from 0.75 to 0.5. To do this I set the Y value of points P1 and P2 to:

yValueOffset = radius * 4.0 / 3.0
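These numbers are easy to verify. Here is a small Python sketch I am adding for illustration (the function name is my own) that evaluates the cubic Bezier's y component at t = 0.5:

```python
# y component of a cubic Bezier at parameter t, given the four
# control points' y values (Bernstein basis weights).
def bezier_y(t, y0, y1, y2, y3):
    u = 1.0 - t
    return u**3 * y0 + 3 * u**2 * t * y1 + 3 * u * t**2 * y2 + t**3 * y3

# Unit-square control polygon: P1 and P2 have y = 1.
print(bezier_y(0.5, 0.0, 1.0, 1.0, 0.0))  # 0.75, as stated above

# Setting the y of P1 and P2 to radius * 4/3 pulls the curve's
# midpoint down to the radius.
radius = 0.5
y_offset = radius * 4.0 / 3.0
print(bezier_y(0.5, 0.0, y_offset, y_offset, 0.0))  # 0.5, up to floating point
```

At t = 0.5 the middle two Bernstein weights are each 3/8, so the midpoint y is 3/4 of the control points' y value, which is where the 4/3 factor comes from.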

The resulting Bezier control polygon and curve is shown in the following image.
That looks more like a semicircle. Following is the same image with the semicircle plotted against it for reference.

For most situations this approximation will suffice. However, I decided to try to get it a bit closer! The next change was based on some calculations which yielded a value that was very close, but to tell the truth my math and the complexity of the blending functions were such that I am not sure at this time if my conclusions are correct. (Note: Don't let your math skills get too rusty!)

While I endeavor to get the correct solution to the problem I will at this time share with you an easy value to remember that tightens up the Bezier curve close to the circle. The magic number is 0.05.

By insetting points P1 and P2 in the X value only, the resulting Bezier control polygon and curve are as shown in the following image.

Notice how nicely this cubic non-rational Bezier curve approximates a semicircle.

Here is an enlarged image so that you can see how nicely the curve fits the semicircle.

So, in summary:

xValueInset = Diameter * 0.05
yValueOffset = radius * 4.0 / 3.0

P0 = (0,0)
P1 = (xValueInset, yValueOffset)
P2 = (Diameter - xValueInset, yValueOffset)
P3 = (Diameter, 0)

This gives a pretty good approximation to a semicircle using only one Bezier curve.
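To see how good "pretty good" is, here is a Python sketch of my own that builds the control polygon from the summary above, samples the curve, and measures how far it strays from a true semicircle:

```python
import math

# The recipe above, for a unit-diameter semicircle centered at (0.5, 0).
diameter = 1.0
radius = diameter / 2.0
x_inset = diameter * 0.05
y_offset = radius * 4.0 / 3.0

pts = [(0.0, 0.0),
       (x_inset, y_offset),
       (diameter - x_inset, y_offset),
       (diameter, 0.0)]

def bezier(t):
    """Evaluate the cubic Bezier at parameter t."""
    u = 1.0 - t
    w = (u**3, 3 * u**2 * t, 3 * u * t**2, t**3)
    return (sum(wi * p[0] for wi, p in zip(w, pts)),
            sum(wi * p[1] for wi, p in zip(w, pts)))

# Worst deviation of the sampled curve from the true radius.
worst = max(abs(math.hypot(x - radius, y) - radius)
            for x, y in (bezier(i / 200.0) for i in range(201)))
print(f"max radial error: {worst:.4f}")
```

On my reading, the worst radial error comes out well under 0.01 for a unit diameter, which matches how tightly the curve hugs the circle in the images.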

Wednesday, May 07, 2008

Dilbert Mashups are fun!

Many of us enjoy the Dilbert comic strip. The new site allows for mashups where you can change the dialogue in the last panel of the strip.

Please check out mine! I enjoy trying to think of a new punch line.

You may have to create an account.

Please make comments or share links to your own mashups.

Tuesday, May 06, 2008

Measure or Listen?

Following are thoughts directed at Managers. This could be a Team Lead, a Technical Manager, a Product Manager, and especially a Process Manager.

Why "especially a process manager"? Because in my experience they often want to measure stuff!

Measuring isn't good or bad. Measuring isn't the only thing to do either and it may not be the first thing to do. In my opinion it is never the first thing to do.

If you are working on a software development team and it is experiencing some difficulty what do you do to get "control" of the situation?

Suppose the code is in a state of thrashing in that one bug fix seems to create new bugs.

What would you do?

Maybe you would do this:
1) Are they doing code reviews? How many of you are doing code reviews before check-in?
2) Are there regression tests? How much code coverage do we have with our tests?
(I will stop here for brevity)

For a process manager, does everything have a process solution? If the perfect process is in place and there are still failures, does it mean that the people are wrong? Just a couple of thought questions.

Now to the point.

If there are problems then ask those that are experiencing the problems what they think the problems are and how they think the problems can be addressed. Listen carefully to their responses.

I have worked on many products where the software's design had reached its limit of usability. This includes software that I myself designed. There comes a time where weaknesses and inefficiencies become grossly apparent and it is time to address the core issues. While working on these products (with very large code bases) I have seen the fixes create new bugs and the code stability thrashes about. In each of these instances I have been asked what is wrong and I have said, "We need time to build a new foundation, the code has become a mess of add-ons and kludges and it is only going to get worse." Most every time the response is, "We do not have time for that. We will do code reviews and have someone start on writing tests to get better coverage." And most every time the thrashing problem does not go away. The thrashing seems to lessen but I propose that it lessens only because less code is being written due to the fact that more time is spent in code reviews and writing regression tests.

Do you have an example where you felt you knew how to address the "real" problem but were never asked? Is it always a matter of improving the process that will make the bugs go away?

Well, some of you are probably saying, "He wants cowboy programming. He never has liked Process Managers and gives them grief whenever he can. He never has worked on a large project with many developers or he would know that process is what holds it all together." Well, say what you may, that is your prerogative and that is what I am doing here!

I still say this: "Listen first." What does listen mean? It means hearing and understanding. Understanding is the key. This is difficult for managers who do not have technical backgrounds, for if they do not understand the problem or the proposed solution, how can they know whether a suggestion is good or bad? If the manager doesn't understand the technical issues and only understands process, then how can the manager do anything but suggest process changes to address every problem?

Listen first. Measure later.


p.s. Maybe you have heard, "Measure twice, cut once." It is a saying used by carpenters. It doesn't apply to software. Sorry.

Sunday, May 04, 2008

Delivering Software Faster...

Delivering software faster than what or who?

I know the following statement is not generally accepted, but for me it has been a constant my entire career.

"Software is done when it is done. It takes as long as it is going to take."

I have noticed in current discussion that the idea of incremental delivery is somehow being changed exclusively into the idea of faster delivery. This incremental delivery is delivery to the customer, not solely to a QA team.

I prefer incremental development of software. I believe in enough planning to divide the software into some type of conceptual model with vocabulary and metaphors to describe the abstract notions. Then I believe in enough planning to organize these abstractions so that the pieces can be developed with as many people working in parallel as possible. Finally, I believe each piece should be developed completely, following the adage of finishing what you start before you start something else.

For me the above paragraph is enough to be the basis for a good software development process. With all of the books, articles, and methods available I feel that the above statement in contrast is concise and sufficient.

As for "delivering faster", well faster than what?

What can delay software delivery?
Poor programming skills.
Lazy people.
Burdensome I.T. policies.
Buggy hardware.
Low morale.

Add to the list anything you have experienced that slows down development and thus delays software delivery. When you encounter one of these things, address it in context and in a timely manner. That's the best anyone can do. After you have addressed it, you can develop policies and practices to avoid it in the future.

For instance, coupling slows down software development. One way it does so is that the human developer cannot recall all of the places a piece of code is used and thus may not understand all of the side effects of a change. Problem identified. One solution is to develop the software with regression tests so that when a change is made, any issues from coupling may surface. The policy and procedure for avoiding unidentified coupling issues is to use a development practice like Design by Use, Test First Programming, Test Driven Development, or some other approach that facilitates the creation of a code base with "built-in" regression tests.
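As a minimal illustration of a "built-in" regression test (a Python sketch; all names here are invented), consider a shared helper that two call sites depend on. If someone changes it to suit one caller, the test exposes the side effect for the other caller right away:

```python
# Shared helper that two call sites depend on. The regression test
# below pins down the contract both callers rely on, so a change made
# for one caller surfaces breakage for the other immediately.

def normalize_id(raw: str) -> str:
    """Contract shared by both callers: trimmed, lower-cased IDs."""
    return raw.strip().lower()

def test_normalize_id_contract():
    assert normalize_id("  ABC-1 ") == "abc-1"  # caller 1: padded input
    assert normalize_id("abc-1") == "abc-1"     # caller 2: already normal

test_normalize_id_contract()
print("regression tests passed")
```

The test is trivial on purpose: the point is that the coupling is written down where a change will trip over it, instead of living only in a developer's memory.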

The code will be done when it is done and not a day before. It may ship before it is done, that happens all the time!


Tuesday, April 22, 2008


You ain't going to need it = YAGNI

When I first learned about YAGNI, I was in the context of doing code level design.

I learned object oriented programming at the same time as structured programming. I didn't realize that I had entered the field during a time of transition. Object oriented programming has what is known as polymorphism. Polymorphism allows a procedure or function to be declared with one name and have many versions taking different types and numbers of parameters (this particular form is often called method overloading). It is in this context of designing object oriented code, and specifically polymorphic methods, that I learned and applied the "mantra" of YAGNI.

Often I would feel "pressure" to write a method with many different sets of parameters. Here is a contrived example (without much thought put into it):

public string GetName(PersonID pid){...}
public string GetName(Person person){...}
public string GetName(string personXML){...}

The reason these three methods were coded is that the Person class has these methods:

public class Person
{
    public PersonID GetPersonID() {...}
    public string SerializeToXML() {...}
}

So the rationale was that you didn't know how someone might want to call your methods, and since callers may have a Person, a PersonID, or some XML representing a person, robust code should handle all three types.

I was young and didn't know better and hadn't really thought it out.

I would write all of these polymorphic methods and then guess what, I would have to maintain all of them as well.

As I learned not to develop polymorphic methods just for the sake of someone who MIGHT want them, I learned of YAGNI and said, "That's it. YAGNI describes a way to avoid a maintenance problem I am experiencing, in a way that I can now describe to those who are still arguing for many polymorphic methods using the term ROBUSTNESS in their defense."

So, for me ROBUSTNESS and YAGNI collided! I liked it.

Over the years I have seen YAGNI applied to things other than code level design. Now maybe it was originally envisioned for something other than code level design and I had applied it to the wrong problem. If I have done this, then I say it worked well for me and has been a sound approach for avoiding duplicate code!

I have seen YAGNI applied to extra features that developers want to add to a product. I have heard this called "gold plating" as well. Most developers get ideas as they work on a product. In this case the idea is that the PRODUCT ain't going to need it. (Should we call this PAGNI? And the customer ain't going to need it, CAGNI? James CAGNI?)

The application of YAGNI to gold plating seems to hold well enough with me. But it is fairly distant from YAGNI in code level design.

More recently I have heard of YAGNI being used during release planning of a product. This is where the application of YAGNI gets fuzzy for me. If there is a list of product requirements and these requirements are being sliced up into releases, then all of the requirements are needed, by definition of being requirements. (Do not jump on the BIG UPFRONT WHATEVER BANDWAGON. I am not on that wagon and it is not my point.) So using YAGNI-based arguments at release planning seems really far from code level design and is very, very fuzzy to me.

Release planning may have statements like "we can't do B before A" or "we can't do C, D, and E with the resources we have" or "H, I, and K are so far out we are not sure the technology won't change before we get there", stuff like that.

Here is an interesting correlation. Methods are interfaces into the code. Polymorphism may be thought of as multiple ways of doing the same thing. Correlate this to User Interface design. There may be more than one UI control that could be used for some particular interaction with the user. Is it necessary to provide the user with three different ways to enter their user name? It is natural for me to apply YAGNI at this level.

So, I use YAGNI during arguments of code level design and user interface design. In other phases of development I may be arguing what some would call YAGNI but to me it is different. I like to keep things simple. YAGNI for code level design and YAGNI for UI design.


Saturday, April 12, 2008

Frequent Delivery of Developed Features in Software

In the "Agile" community you will hear the cry for frequent delivery of features to the customer so that the customer can know that the product is progressing.

In the old days these were called "demos" or demonstrations of the product.

Why are the developers trying to "sell" frequent delivery of developed features? Isn't this backwards? Shouldn't the Product Managers or the customers be the ones saying, "Hey, we really want to see what you have developed thus far. Can you show it to us?" How did this get twisted around so that it is the Developers trying to sell the idea of frequent demonstrations of ever increasing functionality to the Product managers, the Process managers, and the customer?

Dr. Peter Venkman: This city is headed for a disaster of biblical proportions.
Mayor: What do you mean, "biblical"?
Dr. Ray Stantz: What he means is Old Testament, Mr. Mayor, real wrath of God type stuff.
Dr. Peter Venkman: Exactly.
Dr. Ray Stantz: Fire and brimstone coming down from the skies! Rivers and seas boiling!
Dr. Egon Spengler: Forty years of darkness! Earthquakes, volcanoes...
Winston Zeddemore: The dead rising from the grave!
Dr. Peter Venkman: Human sacrifice, dogs and cats living together... mass hysteria!

Well here we are in mass hysteria. The Developers are the ones crying out to have frequent demonstrations of the current state of development.

In my opinion the main point of frequent releases or demonstrations (depending on your deployment model) is primarily to show where we are in the development of the product. Once we know where we are, we can say things like, "Six months ago we had those features and now we have those features plus these features. We are making progress." Knowing where we are, where we have been, and how long it took us to get here are facts used to progressively refine estimates of how much longer it will take and at what cost. This is a part of honesty in reporting.

On a side note, somewhere along the way various Anti-Agilists have conflated progress reporting with getting feedback from the customer. These Antis have created a straw man to tear apart. The claim is that incremental development (yes, that is what every Agile process I understand uses) is some lazy, shady, and shoddy approach to software requirements. The straw man goes like this: "Agile developers do not try to understand the task at hand and do their due diligence in understanding what the customer really needs. Instead they hack out some code, show it to the customer, and say, 'Is this what you wanted?'" Maybe some idiotic process out there does that, and if so I will stand by my description of it being idiotic.

The key to frequent delivery is to show where we are in the development of the product. Once you have shown this to someone, it is natural for them to want to give feedback. But the intent was to deliver exactly what the customer wanted the first time. It was NOT the intent to deliver some overly simplified hack of some half-known requirements and then refine the requirements because you knew you had developed a piece of junk.

I have delivered many features in an incremental fashion that were 100% correct when the user saw them. There was no need to make any changes whatsoever. That is the goal. Showing the progress as the product comes together allows everyone to know the REAL status of development and to fine-tune other events and tasks that will need to be performed in order to have a successful and thoughtful product launch.

Isn't this stuff just common sense? Maybe cats and dogs are living together and I missed the memo.

Responsibility in Software Estimation

This blog post will be short for me. I will be blunt and to the point with my following opinions.

Estimating the cost and time to develop software is just that, an ESTIMATION.

1) It is foolish for those hearing an estimate to later represent it as a promise.

2) It is unethical and dishonest for the development team to hide the discovery that an estimated delivery date and cost is now known to be in error.

Why do clients and product management still to this day take software estimates and turn them into hardened dates and commitments? I understand that estimated dates are used to coordinate many parallel events and to queue up related tasks so that everyone can rendezvous at a product delivery date. But why does product management still try to work in a fantasy software development world that simply does not exist? I cannot speak for them because I am not one of them.

I am a developer and I will speak from my experience on why development teams hide the knowledge that dates are slipping. They do it because it is in their best interest. They sell themselves a lie that they can catch up, do more, bend time, or some other activity which history has shown does not happen.

Why is it in their best interest to hide the truth? You can answer this yourself. Just examine your company or clients and reflect on how they responded to the truth when it was given them.

For you developers out there, it is my opinion that you should tell the truth about the current state of the product and the outstanding features, and give corrected and updated estimates as soon as they are known. If the client or customer doesn't respond favorably to this behavior, you should decide whether this is a job you want to keep instead of waiting to see if they are going to fire you for being late. Just my opinion.

Thursday, March 20, 2008

Windows Vista SP1 and "Unable to load DLL 'VistaDb20.dll': Invalid access to memory location."

I recently updated my "Vista box" to SP1. I ran my current project without recompiling and everything was fine. Then I checked out a file, made some changes, and recompiled my project and found that my application would no longer run.

The error message I got was this:
Unable to load DLL 'VistaDb20.dll': Invalid access to memory location.

What an untimely error. We are trying to release our latest patch of our product and this takes the wind right out of our sails!

After some vigorous exploration we figured out that this had something to do with DEP, which is Data Execution Prevention.

We figured out how to completely disable DEP and sure enough the product could now run.

So now we had to figure out how to disable it programmatically (or at least we thought we did).

We are doing C# development, so I wrapped up a call into Kernel32.dll to invoke SetProcessDEPPolicy. This had no effect. While I was doing this, Jerry (one of the members of the team) was looking into why the recompile caused things to break.

I did a build on my "XP box" and copied it to my "Vista box" and sure enough it would run just fine. So we knew it had to be something with the compile.

Some Googling reveals some interesting information:
I'm Just Saying

Ed Maurer nails it right down. Thanks Ed.

Jerry added the following to our Post-build event command line:

call "$(DevEnvDir)..\tools\vsvars32.bat"
editbin.exe /NXCOMPAT:NO "$(TargetPath)"
mt.exe -manifest "$(ProjectDir)$(TargetName).exe.manifest" -outputresource:"$(TargetPath);#1"

I just wanted to share this with those that may be having the same problems.

Sunday, March 09, 2008

More on Code Debt

I have blogged before on code debt.

I would like to say a bit more about it.

Code is like an onion. Onions have layers.

The outermost layer of the code onion is the public interfaces, or the exported functions. This is the layer where external code may hook up to the code.

The inner layers are often made up of the classes and structures imagined and created by the developers to organize their abstraction of the problem. In these inner layers a class will have methods visible to all of the other classes in the same layer, may have methods that only subclasses can see, and finally may have private methods that only it can see.

Each layer may have its own level of code debt. If the outer layer is well defined and no one ever has to peel into the onion the code debt will never be recognized.

Code debt is not recognized until some activity causes its recognition.

If the code is never modified or extended then no one will ever know that the code was poorly written or poorly designed, and no one will ever have to pay the costs for the poor code. I have developed code that has been running for years and never been revisited. I do not accept the myth that all code is actively changing. I do feel that all code is actively becoming obsolete or decaying, but the rate of decay varies and is tied to Product Debt and Customer Debt.

Another example that code debt does not exist until someone tries to modify the code is this: code may have a very accurate and understandable model of the domain, with classes and methods that are intuitive and make sense. If the activity is to add new methods and functionality to such a code base, it doesn't matter if the code internal to each method is poorly written. If you don't enter that layer of the code you will never know it is poorly written. Each of the existing methods may be filled with duplicate code, multiple returns and goto statements, use of global variables, poorly named local variables, and a myriad of other problems, but the external view of the class may be very accurate and correct. If the class is added upon and the existing methods are not modified, then no one will know of the code debt that lives inside the method layer. This is an example of "inner code debt" or "deep layer code debt".
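A minimal sketch of this "inner code debt" idea (the class name and numbers are invented for illustration): a C++ class whose public interface is clean while a private method body is a mess. A caller who only works at the outer layer never sees the debt.

```cpp
#include <cassert>

// Hypothetical sketch: the outer layer (the public interface) is clean,
// while the method body behind it is poorly written. A caller who never
// opens this inner layer never pays for the debt.
class Invoice {
public:
    double TotalWithTax(double rate) const { return total_with_tax(rate); }
private:
    double subtotal_ = 100.0;
    // Inner layer: magic numbers, duplicated expressions, multiple
    // returns -- debt that stays invisible until someone edits this body.
    double total_with_tax(double r) const {
        if (r <= 0) return subtotal_;
        if (r > 1) return subtotal_ + subtotal_ * (r / 100.0);
        return subtotal_ + subtotal_ * r;
    }
};
```

Callers of `TotalWithTax` get correct answers either way; the debt only becomes real when someone has to modify `total_with_tax`.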

One of the most expensive types of code debt I have seen is where none of the code is extendable, modifiable, or maintainable no matter its quality: when the code has to be ported to a new language. I have seen this often. The existing system may be the best code ever developed, with regression tests galore, but it doesn't matter. The choice to develop the code in a language that did not meet the future needs of the product is costly.

Code debt is subjective. Often I have seen a developer take ownership of existing code and, upon examination, find it unsavory: "This should be an interface instead of an abstract base class." The new owner of the code starts to rewrite the code to suit their idea of clean code.

Code debt is relative. Often I have seen a developer take ownership of existing code and, upon examination, find it too complex for their skill level. An easy example of this is C++ code. I have seen programmers that couldn't read parameterized types (templates). The syntax was so foreign to them they just couldn't read the code.

At the innermost layers of the code onion the code may be written very well, but the users of the objects have used them poorly and now you have a coupling mess. Tightly coupled code is a form of code debt. Often no one recognizes how tightly code is coupled until they try to remove a class from the code and replace it with a new one.

Is there a relationship to source lines of code (SLOC) and code debt? If you have zero lines of code one might argue that you have no code debt. I will argue that zero lines of code is adding to the Product Debt!

Code debt is not recognized until some activity exposes it by entering into its layer of existence. Any layer may be rotten but if that layer doesn't need change it will not matter. Poorly designed and architected code does not mean it has to be buggy code.

Suppose there is a function of a C++ class that has 200 lines of code in it. Suppose it has to be fixed because somewhere in it there is a bug. How much code debt is there? Can you give me a cost to pay this debt?

Let's take two specific scenarios.

First, suppose the 200 lines of code were written by a novice programmer. The programmer assigned to fix the bug is an expert in C++ and all of the C++ libraries. The programmer recognizes that 80% of the functionality of this buggy routine is string manipulation and replaces that code with three calls into the C++ string routines. The programmer runs a test and the bug is fixed and everything is done. Time to fix: let's say it took him two hours. Not very expensive at all.
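A hedged sketch of that first scenario (the function names are invented, and the real routine was of course much longer): hand-rolled buffer walking replaced by a couple of calls into the standard string routines, with identical behavior.

```cpp
#include <cassert>
#include <string>

// What the novice might have written: walking the buffer by hand.
std::string trim_novice(const std::string& s) {
    std::string out;
    size_t start = 0;
    while (start < s.size() && s[start] == ' ') start++;
    size_t end = s.size();
    while (end > start && s[end - 1] == ' ') end--;
    for (size_t i = start; i < end; i++) out += s[i];
    return out;
}

// The expert's replacement: calls into the standard string routines.
std::string trim_expert(const std::string& s) {
    const size_t start = s.find_first_not_of(' ');
    if (start == std::string::npos) return "";
    const size_t end = s.find_last_not_of(' ');
    return s.substr(start, end - start + 1);
}
```

The two functions return the same results, but the second leaves far less inner-layer debt for the next maintainer.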

Second, suppose the 200 lines of code were written by an expert programmer. The programmer assigned to fix the bug is a junior programmer relegated to maintenance because it is felt this is the best way for him to get to know the system. (Yes, I know about pair programming, but I am talking about code debt and how it is relative and subjective.) The junior programmer doesn't understand that operators can be overloaded in C++, and in this particular code the indirection operator has been overloaded. The junior programmer makes changes to the code hoping to fix the bug, but the changes don't seem to make any difference. (Why? Because the bug is somewhere else, in the overloaded operator's code.) The junior developer spends days working on this. At first he thinks he has found a compiler bug! The junior programmer adds a variable to the class for tracking some state he hopes is relevant, inserts the saving of this state into the code, and does some conditional logic with this new state variable. The bug is fixed! He checks it in. Two weeks of work. What was the cost of paying this code debt? The sad thing is that he did not fix the bug. By adding the variable to the class he changed its size and thus hid the real bug, where part of the memory of the class was being corrupted in the code for the overloaded operator. So, in reality, nothing was fixed and everything was a waste of time and money.
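To make the trap concrete, here is a minimal hedged sketch (all names invented) of how an overloaded indirection operator hides where work actually happens. It does not reproduce the memory corruption from the story, only the part that fooled the junior programmer: code that reads like a plain pointer dereference but silently runs the operator's body.

```cpp
#include <cassert>

struct Tracked {
    int value = 0;
};

// Hypothetical wrapper whose overloaded indirection operator does real
// work. A reader unaware of operator overloading will look for bugs at
// the call site and never think to look here.
struct Handle {
    Tracked t;
    int deref_count = 0;        // state updated behind the operator
    Tracked* operator->() {
        ++deref_count;          // work happens here, not at the call site
        return &t;
    }
};

int deref_count_after_one_use() {
    Handle h;
    h->value = 5;   // reads like a raw pointer access, but calls operator->
    return h.deref_count;
}
```

If a bug lived inside `operator->`, editing the code around `h->value` would, as in the story, change nothing.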

Because of these two examples and my previous statements I do not believe that a large number of SLOC means there is significant code debt.

Some may argue that the number of features has to do with code debt. I ask, "Features at what level?" The external layer of a system may be viewed as its feature set. Thus I refer you back to my statements above about layers. Also, I remind the reader that features that have to be ported to a new programming language have a high code debt regardless of the quality of the existing code.

A large system with millions of lines of code may be maintained inexpensively. One factor that keeps the expense down is that the original developers stay on with the system. They know why and how things were done. Thus code debt is affected by the members of the team. Suppose the team becomes insulted in some manner and all quit. Suddenly the code debt changes from low to extremely high!

Just some of my thoughts on code debt. I hope they cause you to think about code debt in new ways as well. As a final thought, I think the best way to address code debt is with the right people. Programmers (who usually are people) are the ones to address the issues with the code, and their skill can make a job quick and simple.

Drop me a line. I have no idea if anyone ever reads my blog posts!

Friday, March 07, 2008

Design by Use, Object Oriented Design, Design by Contract, and Test Driven Development

Design by Use (DBU)

DBU is a set of software design and development techniques which I have found very useful during my career. I recognize that the parts that make up DBU are not new to everyone.

Before I go into a general description of DBU and compare it to OOD, DbC, and TDD I want to point out some unique aspects of DBU.

Unique Aspects of DBU

DBU considers large software development issues and specifically multiple teams working simultaneously to build components and subcomponents which ultimately will work together as a software system.

DBU describes what is termed "immediate integration". For me this was a new concept. For you it may not be, or maybe I have not communicated clearly what I mean.

Suppose there are two teams, Team A and Team B.

Suppose that Team A is writing Component A which depends on Component B which will be developed by Team B.

Team A writes inside of Component A the call to Component B before Component B is developed. Team B takes the code from Component A and uses that to define the method signature or interface into Component B. Team A decides how they want to use Component B. Team A codes the preferred usage and gives that to Team B.

Team A writes this "preferred usage" code very early in the development of Component A. This is done early so that Team B can start as soon as possible so that all teams are working on their components in parallel as much as possible. When I say "very early" I mean at first for most situations.

Notice that Team A specifies the first version of the interfaces for Component B which are of interest to Team A. As with most software, changes to the interfaces will occur before the product is finished. I shouldn't even have to say that, but so many people read a description and then say, "You don't allow for future changes." All I can say is that people who think like that need to take the blinders off. If I don't describe some particular issue that you think is important, ask yourself whether you can imagine a way to address it; if so, then everything is still good.

So, Team A writes inside of Component A the "preferred usage" code for the call to Component B, and then creates a stub for Component B. Team B takes ownership of this stub, brings it into Component B, and Component A no longer calls the stub but calls Component B. Thus we have immediate integration between Component A and Component B. This new call into Component B has the preconditions, postconditions, and invariants that concern Team A, specified by Team A. These concerns can be used in the definition of automated tests.

Team B does not have to wait and wait for Team A to finally decide to call their system. Team A does not have to worry about Component B's interface and how to match up the classes, structures, parameters, exceptions, or return values. There will be no useless code and design based on the common tactic of "We will go ahead and design and implement Component B and when you finally figure out how you want to call us we can implement a mapping layer between the systems." What a poor way to do parallel component development.

Notice that Team B did not declare to Team A that Component B will have these interfaces and Team A will have to figure out how to create the data necessary to make the call. In the development of "NEW" software the "user" has priority over the "used". Some may say, "This doesn't work for integrating software to existing systems." That's right, it doesn't have anything to do with integrating to existing systems such as third party libraries, unless you are designing a transformation layer between your system and the third party system. If you are designing a transformation layer then I would do it in the DBU fashion.

Component B should only do what its users need it to do and nothing more, and obviously nothing less. Any extra code is just a waste. Mapping layers sometimes are the sign of poor design or poor utilization of teams and are just unnecessary and extra code.

Team A knows critical constraints that Team B will not know. For instance, there may be a performance constraint. Suppose Component A must return results in 1 second. That means Component B must return its results in less than 1 second. Team A knows this requirement and passes it down to Team B by means of the "preferred usage" which is stubbed out and called by Team A with the appropriate error code if the call into Component B takes too long. When Team B takes ownership the stubbed code and moves it into Component B then Team B will have reference to the timing constraints and proceed accordingly.
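A minimal sketch of this immediate-integration flow, with invented names (`Quote`, `FetchQuote`, `PriceFor`) standing in for Components A and B: Team A writes its preferred usage and a stub of Component B; Team B later replaces the stub body while keeping the signature and the stated constraints.

```cpp
#include <cassert>
#include <string>

// Hypothetical shared type, defined by Team A's preferred usage.
struct Quote {
    std::string symbol;
    double price;
};

// Stub of Component B, written by Team A and handed to Team B.
// Team B replaces the body; the signature and constraints stay.
// Constraint from Team A: must return in well under 1 second, since
// Component A itself must answer in 1 second.
Quote FetchQuote(const std::string& symbol) {
    return Quote{symbol, 0.0};  // canned value until Team B implements it
}

// Component A's preferred usage -- written before Component B exists,
// and already compiling and calling across the component boundary.
double PriceFor(const std::string& symbol) {
    Quote q = FetchQuote(symbol);
    return q.price;
}
```

Because Component A calls the stub from day one, the interface, the data types, and the timing constraint are integrated immediately rather than reconciled through a mapping layer months later.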

DBU in its Simplest Form

In its simplest form DBU is similar to Test First Programming. The developer, on an individual basis, must start writing code somewhere and in some direction. After appropriate domain consideration the developer will start building classes, structures, or even data flows. It doesn't matter if you are Object Oriented, Structured, or Procedural, there is an architecture that corresponds to your development method.

The direction choice is made by writing calls as if they already exist. Thus you are designing the method based on how you are going to use it. The parameters to the call will be of the types that you have available. The results of the call will be of a type that you want to handle. This is by its very nature low level code design. DBU does not require you to have a high level design, nor does it exclude the use of a high level design. DBU does not need a detailed low level design before coding because DBU creates the low level design as needed, in context, on time, in place, and correct for the situation.

That is how you get started designing and writing new code. It is a very powerful way to do so.

DBU is applicable to modifying existing code. Often I find myself adding to existing code. I struggle to organize new code with existing code. I find myself trying to use what already exists instead of trying to use the code the way I would prefer. As I group calls to existing code I often feel that I am ruining the architecture or that this really doesn't fit. I often get stuck and can't figure out how I am going to get the data from all of the places I need and transform it to how it is needed. Then I remember, "Hey dummy, write the new code how you would prefer it to be, even if it doesn't exist." When I do this the code flows, the architecture is maintained or extended but it is not violated or hacked. Every time I have done this I have been pleased with the results. Yes, every time.

I have previously blogged concerning DBU and database design and how it has helped me with SQL queries and such.

DBU and Object Oriented Design

DBU is applied at low level / code level design. Therefore DBU works well with Object Oriented Design (OOD). Sometimes I design my domain objects using UML. I feel it is very important to gain as much understanding of the domain as possible before the low level code design begins. I define the objects and then I usually go right to sequence diagramming in order to imagine or simulate interactions. I do not "flesh out" the method calls to any great extent in UML. But that is me. You do what works for you. I do not use UML to generate my code. I use it to define meta data, organize thoughts on the domain, and get me pointed in the right direction. On small tasks where the domain is simple or in areas where I have lots of domain knowledge I do not even do UML.

DBU and Design By Contract

DBU uses aspects of Design By Contract (DbC). There are three questions associated with DbC.
1) What does it expect?
2) What does it guarantee?
3) What does it maintain?

DbC is based on the metaphor of a "client" and a "supplier". In DBU the user is the "client". In DBU the user of the "to be developed" method defines the preconditions, postconditions, and invariants on externally visible state. DBU follows the same rules as DbC for extending the contract down into lower level methods and procedures: a subclass may weaken a precondition, a subclass may strengthen a postcondition, and a subclass may strengthen invariants.

Design by Use and Test Driven Development

DBU and Test Driven Development (TDD) have similarities but are different. Both are design activities. In my opinion both are low level code design activities.

Some definitions of TDD require you to write a failing test (which is similar to a usage example of DBU) and then run your testing framework and see the indication that the test fails. You may do that in DBU but it is not a requirement of DBU. I want to point out that many will say you are not doing TDD unless you write a failing test and then watch it fail. DBU is not thusly constrained.

DBU is defined for new development and for modifying existing code. In DBU, if you are modifying existing code and developing new functionality, you do it in place, in context, in state, where it is needed. You write the new code as if it already exists. Of course the new code isn't going to compile, and you don't have to compile it to see it fail. Now if you want, and this is something I personally do, you take this new code and put it into a "programmer's test" so that it will benefit from automated regression testing. You can put the new code into the tests before you actually develop the underlying functionality, and drive the development of the underlying functionality from the test; in other words, at this point you can use TDD. Or you can continue in the existing code, use your IDE to generate the method, fill in the functionality while considering DbC, and then place calls to the new code in the "programmer's tests".

DBU is concerned with designing the call to the new code in context with the data that is on hand or accessible. DBU does not get stuck on such things as what needs to be public or private, or whether everything has to be public so that the code can be fully unit tested. DBU designs code as needed, and needed code is code that is called, and code that is called is exercised, and code that is exercised is tested.

Am I saying that all possible states are exercised? No. I don't think that TDD promises that either. Why? Because in TDD the unit tests are still written by humans, who have a finite amount of knowledge, time, and attention.

If the method you have just defined is visible to other classes or callers then I refer you back to DbC to state what is expected.


Design By Use defines "Immediate Integration" where the user specifies the inputs, outputs, and method name (or in other words the method signature). Once the user of the new method has defined the preferred method signature and constraints the team that will develop the new method works from the users definition to build the actual functionality. These component boundaries or interfaces are defined early so that all teams may work in parallel and so that the components are linked together immediately at definition time and not at some far off date.

DBU avoids the unnecessary code of mapping layers that result from poor communication, downstream waiting, or teams going off in their own direction.

DBU is a low level code design activity. It works well with OOD, DbC, and TDD.

DBU applies the user's preference on how things should be called to existing code as well as new code. When modifying existing code DBU says to write the modifications in the way that seems best even if the code doesn't exist. By doing this the overall structure and architecture of the system is extended and not just hacked and coupled. I do not know of any other low level code design methodology that follows that principle. There could be many. I just don't know them or maybe they don't have a cool name like Design by Use!

Friday, February 29, 2008

What is Necessary to Develop Software?

I will work from these definitions.

Software is a general term for the various kinds of programs used to operate computers and related devices.

A Program is a specific set of ordered operations for a computer to perform.

A Computer is a device that accepts information and operation instructions.

So, to rephrase the initial question, What is necessary to develop specific sets of ordered instructions for a computer to perform?

You will need:

- A computer
- A specification of the instructions the computer accepts
- Something to generate sequences of instructions

Something to generate sequences of instructions! Another computer could do this. A person could do this if the person has knowledge of the computer's instruction set.

A Programmer is a person that specifies a sequence of instructions for a computer.

I will not go into the roles of low level languages, high level languages, compilers, interpreters, and other methods of specifying computer instructions.

So, that is all that is necessary to develop computer programs.

What? You feel that there is more necessary? I don't think so.

Oh, you want to develop a specific program to solve a specific problem. Well then, it is necessary for you to describe the problem in terms such that the programmer can order the correct sequence of instructions for the computer to solve the problem.

Does the specification of the problem have to be done in a certain format? No. The only thing necessary is that you communicate to the programmer accurately the problem to be solved.

So, if you want a specific problem solved then you must define the requirements.

Now we are up to:
- A computer
- A specification of the instructions the computer accepts
- Something to generate sequences of instructions
- Specification of the problem requirements

Wait, does the problem specification include a description or specification of the solution? No, it does not. The expected results of a software program may be termed the solution set. For instance, suppose the specification for the program says, "Enter two numbers, compute the multiplication of those two numbers, and display the result." If you enter the numbers 6 and 2 and the result were "To be or not to be? That is the question.", clearly that result is not in the expected set of solutions for multiplication.

The specification of the solution set, or correct set of results for a program, is necessary if you only accept specific results.

Now we are up to:
- A computer
- A specification of the instructions the computer accepts
- Something to generate sequences of instructions
- Specification of the problem requirements
- Specification of acceptable results

Some might say the last two specifications go together into one specification. It doesn't matter to me. The point is, if you want specific problems to be solved and to receive results in a specific set of possible results, then it is necessary for those to be specified and communicated to the programmer.

So, that's all you need.

But you are thinking there is so much more to software development. There are books and books on how to write good solid code. There are books and books on how to organize teams. There are books and books on how to layout the User Interface.

Ultimately it all depends on what you want and what you value and the things you are willing to do to get the things you want.

Here are some related questions to the topic:
- What is necessary to develop software quickly?
- What is necessary to develop software with few defects?
- What is necessary to develop software that runs on many different computers?
- What is necessary to develop software inexpensively?
- What is necessary to stop developers from quitting?
- What is necessary to protect intellectual property?
- What is necessary to attract talented programmers?
- What is necessary to get incremental results from the development effort?
- What is necessary to build software for life critical systems?
- What is necessary to make code that is reusable?
- What is necessary to make code backwards compatible?
- What is necessary to develop software for moving/shifting requirements?

The list goes on and on and on.


Thursday, February 28, 2008

Lessons learned from the farm.

My Dad has often recounted to me the story of a neighboring farmer. The neighbor ran a dairy where they milked Jersey cows. It was a very profitable operation. Not only was it run well, it was clean and in good order, meaning that the barns were in good repair and the fence rows were not grown up with briers, weeds, or trees.

The neighbor sent his son to a University where the son majored in agriculture. Upon his graduation and return to the farm, the son started applying the lessons he learned at the University. The son took the savings from the farm's years of profit and increased the size of the operation. One of the changes was the addition of a Harvestore "big blue" silo. Trust me, these silos are expensive. The son ran the operation and soon was broke and had to sell out.

Now the son is working at the state government level in the Department of Agriculture.

My Dad is amazed that someone who took control of a "gem" of a dairy operation and ran it into the ground is qualified to make agriculture decisions at the state level.

In my opinion the man is not qualified.

When I remember this story it causes me to think about software development and a couple of similarities I have seen.

The first is that I have seen directors and executives run companies into the ground and then watched those "people" go en masse to another company and do it all over again. This scenario is not exclusive to dairy farms and software development. I know that using such derisive terms as "ran into the ground" poisons the well, and since I do not know all of the reasons for the company failures this is an invalid argument based on an appeal to consequences of belief. Nevertheless, I have seen people advance in the same business sector even though they were previously involved in significant failures in that same sector.

The second thing the dairy farm story reminds me of is that just because it was taught at the University doesn't mean it will work! I loved my education at Brigham Young University. The professors there taught me well. However, I knew the dairy farm story before I went to the University, and I knew that I would have to carefully apply the lessons from school to my career.

So, some advice for my friends. When someone else has an idea on how you should run your business, figure out whether that someone makes a living doing what you do. The University Professor teaching agriculture makes his money through the University and not through farming. The Professor's teachings may be right and applicable, or they may not be. The onus, or burden of proof, falls upon you.

One might conclude that my Father's farm is not progressive because he doesn't follow the latest suggestions from the Universities, feed salesmen, or machinery salesmen. This is not so. My Dad has often told me of how his father had them using a mule to plow the fields. My Dad and his younger brother decided that if they were going to run the farm and make any money doing it, they would need a tractor. So the two brothers bought a tractor and some implements to use with it.

My Father would go to different parts of the U.S.A. and see what farmers were doing there. I remember when we had traveled through the Mid-West and we had seen the hay balers that make a large round bale. My Dad ordered one, and our farm was the first in the area to use the new technology, by several years.

My Father also rejected many ideas that he saw adopted. One of which was to place all of the cattle into a confined area to limit the cattle's movement and to put in large silos and conveyor systems. Our neighbors did so. To pay for the silos and such they added more cows to the herd. Even though our neighbors had an operation valued in the millions of dollars, my Dad's farm made more profit, was less stressful on the cattle because the cows still roamed in their pasture, cost less to operate because the cattle were healthier, etc.

My Father rejected the recommendation of the Agriculture representatives to give the cows a shot of hormones, known as BST. The cost of the hormones was not the reason he rejected this idea. The idea of giving the cows a shot (and trust me, the needle used to give a cow a shot is large and it hurts the cow), making the cows nervous and even mean, was not worth it. Also, it was not something he wanted in his milk, and so he figured you wouldn't want it in yours (even though they claimed that the hormone cannot end up in the milk). The shots, the confinement, and other issues of a large "modern" dairy reduce the life of a cow from over 10 years to less than 7 years. The cost of raising or buying replacement cattle is tremendous.

My Father values the living conditions of his cattle over making a profit. My Father has learned that having a bigger business does not mean you have a more profitable business. My Father has learned that salesmen are motivated to sell their product and can spin quite the story of reasons why you should use their product. My Father does not define success by short term profits alone.

For those that have stuck with me on this post you may be asking what does this have to do with software development. To me it has plenty and if it hasn't been clear then please post some comments and I will try to help you understand.

Remember, you are probably smarter than you think you are and those you think are really smart may not be as smart as you think! Think for yourself. Know what is important to you and if you don't know what is important you will eventually learn what is important.

When someone is trying to sell you something that will make you money here's a little test for them:

If someone says that by adopting the new Lissom Software Development Methodology you will increase your productivity by 20%, then ask them to put it into writing and if it doesn't deliver they will give you your money back!

When they refuse to offer you a money back guarantee ask them to stop with the hype and spin and ask them to work with you so that you can see if their methodology addresses real problems that you are experiencing.

You know the problems you face even though you might not recognize the root cause of the problems and you might not know all of the possible ways to address the problem. That is where getting help and ideas from others internally or externally can be useful.

Just some thoughts.

Sunday, February 17, 2008

Poor Requirements, Poor Coding, Poor Design, Poor Policies, Poor Management, Poor Leadership

The software development industry seems to be completely in love with excuses! (Sadly enough, it seems that many industries are infected with this poor behavior. Pun intended.)

In software development, when a problem is identified it doesn't take but an instant before someone has an explanation. Maybe drawing rapid conclusions is part of the software development ecosystem. Developing software requires quick decision making. Maybe "quick thinking" has become our primary method of problem solving.

I have seen many of the different problems encountered in software development. Buggy software or quality problems, performance issues, missing features, extra features, months of overtime, changing requirements and moving targets, threats used to squeeze out a little more work, heroics, power plays, re-organizations, firings, and more.

If you work for a company whose products are web based, have you ever experienced something that brought down "the site"? At least that is what it is called in the Utah area, that is, "bringing down the site". Has this happened where you work?

I have seen it happen a few times. The most drastic reactions from bringing down the site have resulted in the CEO calling the developer for an explanation AND the CTO coming down to "see" what the problem really is. It was termed the "blame game". Someone has to be blamed. Maybe the Executives were not looking for someone to blame but instead were offering suggestions from a higher view, hoping their perspective could help. Regardless of their unknown intentions, the interpretation was that someone had to be blamed.

The blaming resulted in excuses. Excuses such as, "My change brought the site down because of the ridiculous policy that I can't have access to the live databases. I did not know that the tables on live did not have the exact same schema as those on our staging machines", or, "My change brought down live because the design of the system is so complex that there is no way I can manually test all of the possible states the system may enter", or, "The system that you forced us to build upon is very complex and the company has never trained us on its proper use."

So, back to the title of this post.
"Poor Requirements, Poor Coding, Poor Design, Poor Policies, Poor Management, Poor Leadership"

If, upon release, a product doesn't do what is wanted, then the excuse is "poor requirements". It may be true that the requirements were poorly defined. But waiting until the release of the product to discover this is ridiculous. It is ridiculous on the part of management, on the part of the customer, and on the part of the developer. Historically the suggested solution to this problem has been creating a more formal and controlled requirements process. A more formal process is something management can dictate, but apart from adding large amounts of ceremony, such processes have failed to deliver.

I really don't care how long it takes a company to define the requirements for a software product. There are many that love to play in the undefined space of "Big Upfront Requirements". Well, they can play there all they want. It is a situational trap.

I do care that the company was foolish enough to wait until the product was finished to start finding fault with it and to start looking for those to blame. Those responsible for the delivery of a software product should be using that product continuously during its development. There is no excuse for not doing so. If those concerned say, "The software is too new, too buggy, and won't run long enough to really evaluate it. So we will wait until they make their alpha candidate," then hogwash. First, there are formal management techniques of functional decomposition which allow for meaningful incremental development and release of working features. If management wants to put something formal in place, then put in incremental development.

If you think that incremental development is ad hoc, or some type of cowboy approach, then you do not understand it. Incremental development imposes a level of rigor that waterfall development does not. It requires features to be organized, grouped by dependency and priority, estimated, and then scheduled by placing them into release queues. What I just described is a lot of work. A lot of work before the first line of code is developed. Notice I did not say the feature was designed at the code level. That is something different that I will talk about at a future date.
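The bookkeeping described above can be made concrete. Here is a minimal sketch, with entirely hypothetical feature names and estimates: a feature becomes eligible for scheduling only once its dependencies are scheduled, eligible features are taken by priority, and features are packed into fixed-capacity release queues.

```python
import heapq

# Hypothetical feature list: name -> (priority, estimate_days, dependencies);
# a lower priority number means more important.
features = {
    "login":    (1, 3, []),
    "search":   (1, 5, []),
    "profile":  (2, 2, ["login"]),
    "checkout": (1, 8, ["login", "search"]),
}

def plan_releases(features, days_per_release=10):
    """Schedule features into fixed-capacity release queues.

    A feature is eligible only when all of its dependencies are
    scheduled; among eligible features, highest priority goes first.
    """
    remaining = {n: set(f[2]) for n, f in features.items()}
    # Seed the ready heap with features that have no dependencies.
    ready = [(f[0], n) for n, f in features.items() if not f[2]]
    heapq.heapify(ready)
    releases, current, used = [], [], 0
    while ready:
        _prio, name = heapq.heappop(ready)
        est = features[name][1]
        # Start a new release when this feature would overflow the budget.
        if used + est > days_per_release and current:
            releases.append(current)
            current, used = [], 0
        current.append(name)
        used += est
        del remaining[name]
        # Unblock features whose last dependency was just scheduled.
        for other, deps in remaining.items():
            deps.discard(name)
            if not deps and all(other != n for _, n in ready):
                heapq.heappush(ready, (features[other][0], other))
    if current:
        releases.append(current)
    return releases
```

With the hypothetical inputs above, `plan_releases(features)` packs "login" and "search" into the first ten-day release and "checkout" and "profile" into the second. The point is not this particular algorithm; it is that ordering, estimating, and queueing features is mechanical, well-defined work, not cowboy coding.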

Do you see how these complaints of poor this and poor that are just excuses?

I have lightly covered poor requirements. I will touch on poor management. I know that many believe that a manager of developers doesn't have to be a developer. I will agree with this if the manager has the ability to know where the development line is drawn.

I had a manager that had done some programming. Delphi type stuff, some HTML, a few databases designed with a nice GUI DB designer. Those kinds of things. As sophisticated as that is compared to the masses, it is lightweight stuff compared to systems level development using multiple processes communicating through shared memory, or wiring up a model view controller in such a way as to avoid race conditions. However, this manager (director level) assumed that he really understood programming and he would come down and stick his big honking management nose in the developers' code. He required developers of 20+ years experience to explain all of their code changes and designs to him for approval. When we faced technical issues he would make design level decisions that were so idiotic that we couldn't believe it. There was a problem with some web servers that were not responding to sent messages. He said to the team, "If the server fails to respond, send the message again. Double pump the message. Triple pump the message. We must be sure the message gets to the server." How utterly ignorant of server side development! Sending more and more messages to a server that is not responding is not the first solution to the problem. Before I go completely insane from remembering these experiences I had better get back on topic. We had poor management.
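To see why "double pumping" is backwards: resending immediately adds load to a server that is already struggling. The standard alternative is to retry with a growing delay and a hard cap on attempts. A minimal sketch, with hypothetical names (the real fix we used is not recorded here):

```python
import time

def send_with_backoff(send, payload, max_attempts=4, base_delay=0.01):
    """Retry a failed send with exponential backoff.

    Instead of hammering an unresponsive server with duplicate
    messages, wait longer after each failure and eventually give
    up so the failure surfaces to the caller.
    """
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up; don't flood a server that can't respond
            time.sleep(delay)  # pause before the next attempt
            delay *= 2         # back off: 0.01s, 0.02s, 0.04s, ...
```

The design point is that the retry policy is bounded and gets gentler under failure, which is roughly the opposite of "triple pump the message."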

Poor management was our excuse for our problems. Because we had poor management we felt that we couldn't fix the real problems. Those were excuses and we did not allow excuses to stop us from working to the best of our ability. We did not follow his advice of double pumping the messages and we did not tell him how we fixed the problems. We chose to ignore him and do our job. It is hard to ignore a director when your pay raise goes through his office. But we did anyway. I left the team as did others. Soon there was a new director.

Ultimately I realize that I will be plagued with excuses and will be guilty of giving excuses. If you do not have the ability to make a change then you will probably make an excuse.

Surely there are many poor aspects of any software development effort. They will stay poor until someone realizes how to stop complaining and to start thinking and then doing something about it. Just like the example I gave of missed requirements: with the knowledge of incremental feature development, it is not reasonable to have such problems.

As for the problem with the poor manager/director, the solution is one of communication and trust. That is a large topic. But to make it short, either the manager could have learned to communicate his concern for quality while showing that he trusted the developers to deliver, OR the VP could have communicated with the developers and realized that the manager should be replaced, OR some other solution.

If your development process is suffering from something that is poor, then drop a line to us on this blog or on the users group, "" and we will be glad to offer suggestions. For FREE! But be prepared to give open and honest details about the entire problem and the organization in which the problem lives.

Tuesday, February 05, 2008

Agile Fiction and Myth

"Perhaps the sentiments contained in the following pages, are not YET sufficiently fashionable to procure them general favour; a long habit of not thinking a thing WRONG, gives it a superficial appearance of being RIGHT, and raises at first a formidable outcry in defense of custom. But the tumult soon subsides. Time makes more converts than reason." Common Sense, by Thomas Paine - 1776

The topic of this posting will be "The Agile Method and Other Fairy Tales", by David Longstreet.

Mr. Longstreet gives an impressive background of travel and study of software organizations in many varied fields and marketplaces. He declares that he has "been dedicated to the idea of improving software productivity and studying software organizations".

I have twenty-three years experience in software development. I have always been involved with the improvement of software quality and productivity. My level of involvement is "applied", that is, I am a Computer Scientist and Software Engineer. I apply ideas for improvement and I stay around to live with the results. I have a vested interest in improving software development because those improvements directly affect me. I have developed software on the family dairy farm to manage accounts payable and production. I have delivered software for the Department of Energy to visualize magnetic fields, and to render 3D data visualizations from scanning/tunneling microscopes. I have written coding standards and guides which were adopted by the entire Computer Science department at the National Laboratory where I worked. I have delivered software for EMail applications, real-time stock market analysis, record managers for high performance indexing engines, eCommerce systems, new specialized GUI controls, charting packages, and many many other high quality and working systems. I can produce references if someone wants to question the veracity of my statements.

I have not limited myself to just studying the software industry. I am an award winning Dairy farmer and have been recognized for my techniques in raising dairy calves. I have traveled and performed voluntary service. I have studied Gothic Architecture and Renaissance Art, and I have traveled in Western Europe to see the actual works. I study languages and culture. I have done in-depth research on the Biblical Prophet Abraham and I currently study the Qur'an. I collect HO scale electric trains and I change the oil in my automobiles. I played the Cornet and Saxophone and I have proven myself to be a fairly good artist. Like everyone I have met, I am not a one-dimensional person.

I have developed software in 68000 and VAX assembly, Pascal, FORTRAN 77, Ada, APL, Objective-C, Object Pascal, C, C++, Java, HyperTalk, C#, various scripting languages, and SQL stored procedures. I have managed teams of developers. I do not limit myself to my department either. Everyone knows that I will speak up in other departments and I am not afraid to go to the CEO or the Board. As a matter of fact, I have been demanding on many organizations and I have had no fear of any position.

I state all of this because Mr. Longstreet does so at the beginning of his paper and then later in the paper describes Agile users as one dimensional. Even with all of my experience there are many things to learn.

Like Mr. Longstreet, I was very skeptical of eXtreme Programming (XP) when I first heard of it. I started writing a paper, XP eXposed. As I studied XP and then applied parts of it I started to see what Kent Beck was describing. Soon I abandoned the paper and started applying parts of XP and I found value in XP.

I will now take some quotes from Longstreet's paper and make some comments:

"I have come to the conclusion that software developers cause most of their own problems. The root cause of most of the problems facing software development is actually caused by software developers themselves. They are creating their own complexity and chaos."

This argument is flawed and is known as an appeal to consequences of a belief.

"Agile methods want continuation and formal acceptance of the status quo... Up to this point in time software development has been a Wild West endeavor... IT has been sloppy. There is nothing new with Agile, because it only tries to formalize sloppiness."

This argument is flawed and is known as an appeal to ridicule, spite, and ultimately the argument turns to an appeal of tradition.

"I am bringing a level of professionalism and rigor to the software industry, and I hope you join me."

This is a setup for the well known fallacy known as an "appeal to authority".

"An Agile proponent will argue there is limited value in requirements specifications because the requirements are ever changing."

There may be someone that Mr. Longstreet views as an Agile proponent that has argued this point. That does not make it so. Use Cases and User Stories are the two approaches I am most familiar with and both are taught and used by Agile proponents. I personally believe that some of the confusion is with the roles of XP and the idea of Agile.

"'I think it's fair to say that customer practices are not addressed in Agile methods.' It is clear that understanding what the customer wants or helping the customer figure out what they want is not really part of Agile, and in turn not part of software development."

Mr. Longstreet is quoting someone from an online users group. Such methods of fact finding are laughable. This statement is a hasty generalization, and hearsay.

"It is clear that understanding what the customer wants or helping the customer figure out what they want is not really part of Agile, and in turn not part of software development."

I believe this is a fallacy of composition.

"Perhaps it is the statistician in me, but I do not believe anything is random. Nothing occurs by random and nothing occurs by chance."

This is an appeal to authority.

"The Agile argument is based upon the idea that systematic study does not work for
software development. They believe 'most software is not predictable.'"

This is on the borderline of the fallacy of "confusing cause and effect". Also, this is a false dilemma.

"Every single time a development project is done, it is done differently. Documents are not cataloged and organized. There is no consistent usage of terms and symbols between projects, within projects, and even within single requirements

This is a distortion and thus a Straw Man argument.

Discussing pair programming Mr. Longstreet states, "The idea is that one programmer writes code and the other programmer stands over his shoulder and watches for mistakes."

This is a complete falsehood. He goes on to say, "I am not sure what problem pair programming is trying to solve. Most of the issues with software development are related to incomplete requirements, not coding."

The first part of his statements on pair programming is a Straw Man. Also his statement that most issues are related to incomplete requirements is confusing cause and effect, and is an appeal to consequences of a belief.

"Incomplete requirements are the biggest issue facing software development. I guess it is clear to the Agile folks that it is only logical to spend more time coding instead of cleaning up requirements or writing concise requirements in the first place."

As with many of the statements this one is of questionable cause and confusing cause and effect.

"They believe trial and error is the best method to gather and communicate requirements."

This is an appeal to ridicule, as is this statement:
"Agile proponents believe discipline is not necessary and inhibits productivity."

"Again the basic premise of Agile Methods is there is nothing I can do about my environment. I am a victim of my environment. I am a victim of circumstances. I can’t plan I can only react."

This is simply an example of poisoning the well. The fallacy goes like this:
- Unfavorable information (be it true or false) about person A is presented.
- Therefore any claims person A makes will be false.

I am only halfway through this paper. I find it filled with falsehoods based upon poor research. Mr. Longstreet assumes an authoritative position at the beginning of his paper by stating his experience and declares that he specializes in non-biased scrutiny of people, processes, and businesses. He claims that multi-disciplinary study is a key component to his authority. Yet with these statements there is an obvious lack of research done concerning the current body of knowledge on the subjects of Agile and XP.

For those that are the authors of Agile methodologies I invite you to make concise statements defining your Agile Method. For Mr. Longstreet, I challenge him to clean up his paper and to cite sources and remove the unnecessary fallacies.

Saturday, February 02, 2008

It's okay to THINK for Yourself

I started studying the Software Development process in 1985. In about four years I had learned techniques in estimation and learned the name of Halstead. I had learned of something called "COCOMO" and learned the name of Boehm. I learned of Jackson System Development and found out that Michael wasn't just the king of Motown. I learned of the SW-CMM and learned the name of Humphrey. I learned of Brooks and Parnas, of Booch and many others.

The biggest thing I did not know at the time was that all of these topics were new. I came from a Dairy farm and I had never used a computer until college. I had seen a TRaSh 80 once before but that was it. As I took my math classes, physics classes, and computer science classes I just assumed that the C.S. stuff was "old" stuff and that everyone in industry used and believed in what I was learning in school. I had no idea that the topics were leading and even bleeding edge. My first programming language was Pascal on a VAX 11-750. I did not know that in industry at that same time many projects were done completely in assembly language, and that in industry it is costly and takes time to switch from assembly to Pascal, Ada, or C.

My ignorance was a blessing in disguise. Since I assumed that everything I was taught was in common use in industry I had no problem analyzing the concepts as if they were "old" and in need of replacement. I was not mesmerized by these new concepts because I didn't know they were new.

I remember in my CS 327 class I was taught about the Waterfall model. I just went downstairs and got my old text book out: "Software Engineering: A Practitioner's Approach, Second Edition". Chapter one of the text taught me the "classic life cycle" of software development. The text clearly states that the waterfall model was under criticism even at that time. It says that projects rarely follow the sequential flow, that customers had difficulty stating all requirements explicitly, and that the customer had to be very patient because a working version of the software is not available until late in the project time span. The next section introduced Prototyping.

Well, I better fast forward to the now.

Now I have years of context. I know when XP and Agile came on the scene and I viewed them in their true position for the time as being new approaches. I have watched as the current set of software development practitioners beg to be told how to develop software. I mean just what I said. BEG TO BE TOLD. I have participated heavily in many online users groups, more lightly with local users groups, and heavily within the companies where I have worked. I have watched my peers look for someone to whom they can abdicate the decision of how to make good software.

In the late 80's and early 90's the companies I worked for brought in consultants which taught Waterfall, change control boards, and formal inspections. My peers accepted those methods as the way to develop software and seem to have never revisited that problem again. Well, we have moved on in our careers and now many are Managers, Directors, VP's, and even CXO's. Most have not revisited the problem of "how to develop software". I watch the next generation of software developers rage against the old machine. They want to develop software in better ways that work with the advancements in hardware and development tools. But they do what my generation did and abdicate the decision of how to make good software to the current set of consultants. This isn't totally bad unless they make the second mistake that many of my peers made and never revisit the question again.

I watch as people look for Agile Methods. Agile recipes is what they want. Secretly I think they are looking for guarantees. They want to say, "When I find myself in situation X I can apply practice Y and that is the best that can be done." Not only that, but they want to recite the authority of the practice as well, "Beck and Jeffries have both said this is how to do it." Well there you go, we could never question Beck or Jeffries now could we! :-)

It's great to read the latest books and articles, to talk with the authors, and even hire them to teach, clarify, and expound. Ultimately you will find yourself metaphorically alone and having to make decisions. At this point you can either think for yourself or you can blindly follow your method.

By current definitions of Agile Software development I probably practiced it for about a week. I immediately made changes to suit the needs I encountered. I moved on and I am still moving on. I never have cared if I was Agile or not. That was not the problem I was faced with. The problem I am faced with everyday is how to develop software the best I can in the current situation I find myself.

Is your problem whether or not you are Agile? Or is your problem developing software? If you recognize your problem to be developing software, do you recognize what is causing the difficulty? If so, then you can say, "Will Agile practices address the issues I am faced with?" Think, doggone it. Think for yourself. Think and then apply yourself to the problems you face. Agile surely has many answers for many problems in software development. Agile surely is not the answer to all of the problems. And neither is CMMI, Lean, BDD, FDD, Spiral, Prototyping, or even DBU.

I have read countless threads on Agile users groups on how to sell your group or company on Agile. I see this and I sigh. I think, here we go again. Think for a moment. Sell your group or company on solutions to real problems that are being faced. I advise looking for root problems! For instance, if one of the problems is that the software doesn't meet the requirements when it is shipped, then maybe you should institute rigorous reviews of check-ins and have a requirements sign-off checklist. OR, maybe the problem would be better addressed if there were incremental releases of completed features which are accepted by those that defined the requirement. Dig for the deeper problems and solve them first! Think! Think, doggone it. THINK! Think, because either answer may be right.

I have read countless threads on software development groups on the topic of "when should you optimize software". Often the posts are just a trap and have not disclosed the entire situation accurately or completely. That is, often the poster of the question has already identified that the software doesn't perform sufficiently and they are just waiting to have an argument. But if you are serious about the question then I say again, "Think!". If you are worried about scalability and user load, then do you know which technologies are scalable? Do you know about load balancing? Do you know about distributed programming, or message queues? Do you know about clusters? Virtual servers? Grid computing? If you are familiar with these subjects then you can at least say that we should develop our system to work behind an IP load balancer. These decisions must be made as soon as possible because they will affect how you develop the software. But if you understand such things as the Facade pattern then you can put off the decision until a bit later. But you have to know about alternatives and you have to think and apply.
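How does the Facade pattern buy you time on a scaling decision? Callers program against one narrow interface, so what sits behind it can change later without rewriting them. A minimal sketch with hypothetical names (an in-process dict standing in for whatever distributed store you might adopt later):

```python
class StorageFacade:
    """Facade over a storage backend.

    Callers only see put/get. Today the backend is a plain dict;
    tomorrow it could be a cache cluster behind a load balancer,
    swapped in via the constructor without touching any caller.
    """

    def __init__(self, backend=None):
        # Default to a local in-memory store; any object with
        # __setitem__ and .get() can be substituted later.
        self._backend = backend if backend is not None else {}

    def put(self, key, value):
        self._backend[key] = value

    def get(self, key, default=None):
        return self._backend.get(key, default)

# All application code depends on the facade, not the backend.
store = StorageFacade()
store.put("session:42", {"user": "alice"})
```

The decision you defer is the backend; the decision you must still make early is the shape of the interface, which is why you have to know the alternatives before you can design the facade.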

If you are concerned with the runtime performance of a particular algorithm or function, then you have to know the answer to the question of "what is fast enough?" Once you know that, then you can exercise and profile the code to see if it is fast enough. Think about this! If you have the answer for how fast it has to be before you develop the algorithm, it might guide you to the correct solution the first time instead of having to write it and then hope that it meets some unknown performance requirement. Think! That's what it's about, thinking. Maybe this suggestion is right for your situation and maybe it is not. Don't accept it blindly.
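The "what is fast enough?" question can be asked directly in code. A small sketch (the budget numbers are illustrative, not from the original post): run the function a few times, keep the best wall-clock time, and compare it against an explicit budget.

```python
import time

def fast_enough(fn, args, budget_seconds, repeats=5):
    """Measure fn against an explicit performance budget.

    Runs fn several times and keeps the best observed wall-clock
    time (the best run is the least noisy estimate of the cost).
    Returns (met_budget, best_seconds).
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best <= budget_seconds, best

# Example: does summing a million ints fit a 1-second budget?
ok, seconds = fast_enough(sum, (range(1_000_000),), budget_seconds=1.0)
```

With a stated budget, "optimize or not?" becomes a measurement, not an argument: if the check passes, stop; if it fails, you know by how much.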

I have been told that I could not do a certain thing because the "agreed" upon software methodology doesn't allow one to do that. Bull pucky. People don't allow people to do things. Methodologies are neither alive nor can they hurt or heal. If that something needs to be done, and it is the right thing to do, then the problem must be real and identifiable. If it is all of these things then surely the problem and corresponding solution can be described. If it can be described then surely your peers can recognize the need or offer alternatives. It is not sufficient to dismiss the action on the basis of "the process doesn't allow for it." Think. THINK.

Do your best, and view your peers with an eye that they are smart and thinking people and that the burden lies upon you to communicate with them. If both sides of the conversation hold these values then both will try their best to hear what the other is saying. If you feel that you are the only one with those values then try and prove yourself wrong.

Think for yourself. It doesn't matter if the author of the latest and greatest software development methodology says you should do A and not do B. If you don't need to do A then don't do it. But know you don't need A for all of the real reasons. Know why some people say you should do A and show how that doesn't apply. Don't follow for following's sake and don't be a contrarian for contrary's sake. Think. Learn. Share.

I try to do those things. I fail at it often, but I try.