I recently updated my "Vista box" to SP1. I ran my current project without recompiling and everything was fine. Then I checked out a file, made some changes, and recompiled my project and found that my application would no longer run.
The error message I got was this:
Unable to load DLL 'VistaDb20.dll': Invalid access to memory location.
What an untimely error. We are trying to release our latest patch of our product and this takes the wind right out of our sails!
After some vigorous exploration we figured out that this had something to do with DEP, which is Data Execution Prevention.
We figured out how to completely disable the DEP and sure enough the product could now run.
So now we had to figure out how to disable it programmatically (or at least we thought we did).
We are doing C# development. So I wrapped up a call into the Kernel32.dll to call SetProcessDEPPolicy. This had no effect. While I was doing this Jerry (one of the members of the team) was looking into why the recompile caused things to break.
I did a build on my "XP box" and copied it to my "Vista box" and sure enough it would run just fine. So we knew it had to be something with the compile.
Some Googling reveals some interesting information:
Ed Maurer, on his blog I'm Just Saying, nails it right down. Thanks Ed.
Jerry added the following to our Post-build event command line:
call "$(DevEnvDir)..\tools\vsvars32.bat"
editbin.exe /NXCOMPAT:NO "$(TargetPath)"
mt.exe -manifest "$(ProjectDir)$(TargetName).exe.manifest" -outputresource:"$(TargetPath);#1"
I just wanted to share this with those that may be having the same problems.
Thursday, March 20, 2008
Sunday, March 09, 2008
More on Code Debt
I have blogged before on code debt.
I would like to say a bit more about it.
Code is like an onion. Onions have layers.
The outermost layer of the code onion is made up of the public interfaces or the exported functions. This is the layer that external code may hook up to.
The inner layers are often made up of the classes and structures imagined and created by the developers to organize their abstraction of the problem. In this inner layer a class will have methods visible to all of the other classes in the same layer, may have methods that only subclasses can see, and finally may have methods that are private and that only the class itself can see.
Each layer may have its own level of code debt. If the outer layer is well defined and no one ever has to peel into the onion the code debt will never be recognized.
Code debt is not recognized until some activity causes its recognition.
If the code is never modified or extended then no one will ever know that the code was poorly written or poorly designed, and no one will ever pay the costs of the poor code. I have developed code that has been running for years and never revisited. I do not accept the myth that all code is actively changing. I do feel that all code is actively becoming obsolete or decaying, but the rate of decay varies and is tied to Product Debt and Customer Debt.
Another example that code debt does not exist until someone tries to modify the code is this: code may have a very accurate and understandable model of the domain, with classes and methods that are intuitive and make sense. If the activity is to add new methods and functionality to such a code base, it doesn't matter if the code internal to each method is poorly written. If you don't enter that layer of the code you will never know it is poorly written. Each of the existing methods may be filled with duplicate code, multiple returns and goto statements, use of global variables, poorly named local variables, and a myriad of other things, but the external view of the class may be very accurate and correct. If the class is added upon and the existing methods are not modified, then no one will know of the code debt that lives inside the method layer. This is an example of "inner code debt" or "deep layer code debt".
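A small invented sketch makes the point (the class and its names are mine, not from any system discussed here): the outer layer of this class is accurate and intuitive, while the method layer underneath is full of debt that no caller ever sees.

```cpp
#include <cassert>
#include <vector>

// Hypothetical class: the external view is clean and intuitive, so
// callers never peel into this layer of the onion.
class Invoice {
public:
    Invoice(std::vector<double> lines, double taxRate)
        : lines_(lines), tax_(taxRate) {}
    double totalDue() const;   // sensible signature, accurate model
private:
    std::vector<double> lines_;
    double tax_;
};

// The method layer underneath carries "inner code debt": an unnamed
// tax step, a dead leftover hack, a C-style cast. It still computes
// the right answer, so until someone must modify this body, no one
// pays for the debt.
double Invoice::totalDue() const {
    double t = 0;
    for (int i = 0; i < (int)lines_.size(); i++) t = t + lines_[i];
    t = t + t * tax_;   // tax applied inline, no named step
    t = t * 1.0;        // dead leftover from an old discount hack
    return t;
}
```

A caller using only `totalDue()` gets correct results and never recognizes the debt inside the method body.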
One of the most expensive types of code debt I have seen is where none of the code is extendable, modifiable, or maintainable. I have seen this often. It happens when the code has to be ported to a new language. The existing system may be the best code ever developed, with regression tests galore, but it doesn't matter. The choice to develop the code in a language that did not meet the future needs of the product is costly.
Code debt is subjective. Often I have seen a developer take ownership of existing code and, upon examination, find it unsavory: "This should be an interface instead of an abstract base class." The new owner of the code starts to re-write it to suit their own idea of clean code.
Code debt is relative. Often I have seen a developer take ownership of existing code and upon examination find it too complex for their skill level. An easy example of this is C++ code. I have seen programmers who couldn't read parameterized types (templates); the syntax was so foreign to them that they just couldn't read the code.
At the innermost layers of the code onion the code may be written very well, but the users of the objects have used them poorly and now you have a coupling mess. Tightly coupled code is a form of code debt. Often no one recognizes how tightly code is coupled until they try to remove a class from the code and replace it with a new one.
Is there a relationship between source lines of code (SLOC) and code debt? If you have zero lines of code one might argue that you have no code debt. I will argue that zero lines of code is adding to the Product Debt!
Code debt is not recognized until some activity exposes it by entering into its layer of existence. Any layer may be rotten but if that layer doesn't need change it will not matter. Poorly designed and architected code does not mean it has to be buggy code.
Suppose there is a function of a C++ class that has 200 lines of code in it. Suppose it has to be fixed because somewhere in it there is a bug. How much code debt is there? Can you give me a cost to pay this debt?
Let's take two specific scenarios.
First, the 200 lines of code was written by a novice programmer. The programmer assigned to fix the bug is an expert in C++ and all of the C++ libraries. The programmer recognizes that 80% of the functionality of this buggy routine is string manipulation and replaces that code with three calls into the C++ string routines. The programmer runs a test and the bug is fixed and everything is done. Time to fix, let's say it took him two hours. Not very expensive at all.
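The expert's rewrite might look something like this sketch (the 200-line original is imagined, and `normalizeField` and its behavior are my invention for illustration): hand-rolled character walking collapses into three calls to the standard string routines.

```cpp
#include <cassert>
#include <string>

// Replaces pages of manual character walking with three calls into
// the C++ string routines: find the first non-blank, find the last
// non-blank, slice out the middle.
std::string normalizeField(const std::string& raw) {
    const std::string ws = " \t";
    std::size_t b = raw.find_first_not_of(ws);   // call 1: first non-blank
    if (b == std::string::npos) return "";       // input was all whitespace
    std::size_t e = raw.find_last_not_of(ws);    // call 2: last non-blank
    return raw.substr(b, e - b + 1);             // call 3: slice it out
}
```

Two hours of work, and the replacement is shorter, clearer, and already tested by the standard library.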
Second, the 200 lines of code was written by an expert programmer. The programmer assigned to fix the bug is a junior programmer relegated to maintenance because it is felt this is the best way for him to get to know the system. (Yes, I know about pair programming, but I am talking about code debt and how it is relative and subjective.) The junior programmer doesn't understand that any operator can be overloaded in C++, and in this particular code the indirection operator has been overloaded. The junior programmer makes changes to the code hoping to fix the bug, but the changes don't seem to make any difference. (Why? Because the bug is somewhere else, in the overloaded operator's code.) The junior developer spends days working on this. At first he thinks he has found a compiler bug! The junior programmer adds a variable to the class for tracking some state he hopes is relevant, inserts the saving of this state into the code, and does some conditional logic with this new state variable. The bug is fixed! He checks it in. Two weeks of work. What was the cost of paying this code debt? The sad thing is that he did not fix the bug. By adding the variable to the class he changed its size and thus hid the real bug, where part of the memory of the class was being corrupted in the code for the overloaded operator. So, in reality, nothing was fixed and everything was a waste of time and money.
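To make the trap concrete, here is a minimal invented sketch of an overloaded indirection operator hiding a defect. Nothing here comes from the actual system in the story; it only shows why `*h` is not the plain dereference a junior reader assumes it is.

```cpp
#include <cassert>

// A handle class that overloads the indirection operator. "*h" runs
// this user code, not a built-in pointer dereference, so a reader who
// doesn't know C++ operators can be overloaded will hunt for the bug
// in the calling code and never look here.
class Handle {
public:
    explicit Handle(int v) : value_(v) {}
    int operator*() const { return cached_; }  // BUG: should return value_
private:
    int value_;
    int cached_ = 0;   // stale cache, never updated -- the real defect
};

// The caller's code: it looks like an ordinary dereference, so the
// caller is the natural (and wrong) place to search for the bug.
int readThroughHandle(const Handle& h) {
    return *h;
}
```

Changing `readThroughHandle` can never fix this; the defect lives inside `operator*`, exactly the layer the junior programmer cannot read.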
Because of these two examples and my previous statements I do not believe that a large number of SLOC means there is significant code debt.
Some may argue that the number of features has to do with code debt. I ask, "Features at what level?" The external layer of a system may be viewed as its feature set. Thus I refer you back to my statements above about layers. Also, I remind the reader that features that have to be ported to a new programming language have a high code debt regardless of the quality of the existing code.
A large system with millions of lines of code may be maintained inexpensively. One factor that keeps the expense down is that the original developers stay on with the system. They know why and how things were done. Thus code debt is affected by the members of the team. Suppose the team becomes insulted in some manner and all quit. Suddenly the code debt changes from low to extremely high!
Just some of my thoughts on code debt. I hope it causes you to think about code debt in new ways as well. As a final thought I think the best way to address code debt is with the right people. Programmers (which usually are people) are the ones to address the issues with the code and their skill can make a job quick and simple.
Drop me a line. I have no idea if anyone ever reads my blog posts!
Friday, March 07, 2008
Design by Use, Object Oriented Design, Design by Contract, and Test Driven Development
Design by Use (DBU)
DBU is a set of software design and development techniques which I have found very useful during my career. I recognize that the parts that make up DBU are not new to everyone.
Before I go into a general description of DBU and compare it to OOD, DbC, and TDD I want to point out some unique aspects of DBU.
Unique Aspects of DBU
DBU considers large software development issues and specifically multiple teams working simultaneously to build components and subcomponents which ultimately will work together as a software system.
DBU describes what is termed "immediate integration". For me this was a new concept. For you it may not be, or maybe I have not communicated clearly what I mean.
Suppose there are two teams, Team A and Team B.
Suppose that Team A is writing Component A which depends on Component B which will be developed by Team B.
Team A writes inside of Component A the call to Component B before Component B is developed. Team B takes the code from Component A and uses that to define the method signature or interface into Component B. Team A decides how they want to use Component B. Team A codes the preferred usage and gives that to Team B.
Team A writes this "preferred usage" code very early in the development of Component A. This is done early so that Team B can start as soon as possible and so that all teams are working on their components in parallel as much as possible. When I say "very early" I mean, in most situations, first.
Notice that Team A specifies the first version of the interfaces for Component B which are of interest to Team A. As with most software, changes to the interfaces occur before finishing the product. I shouldn't even have to say that, but so many read a description and then say, "You don't allow for future changes." All I can say is that people who think like that need to take the blinders off. If I don't describe some particular issue that you think is important, I ask you: can you imagine a way to address your issue? If so, then everything is still good.
So, Team A writes inside of Component A the "preferred usage" code for the call to Component B, and then creates a stub for Component B. Team B takes ownership of this stub and brings it into Component B, and Component A no longer calls the stub but calls Component B. Thus we have immediate integration between Component A and Component B. This new call into Component B has the pre-conditions, post-conditions, and invariants that concern Team A, specified by Team A. These concerns can be used in the definition of automated tests.
Team B does not have to wait and wait for Team A to finally decide to call their system. Team A does not have to worry about Component B's interface and how to match up the classes, structures, parameters, exceptions, or return values. There will be no useless code and design based on the common tactic of "We will go ahead and design and implement Component B and when you finally figure out how you want to call us we can implement a mapping layer between the systems." What a poor way to do parallel component development.
Notice that Team B did not declare to Team A that Component B will have these interfaces and that Team A will have to figure out how to create the data necessary to make the call. In the development of "NEW" software the "user" has priority over the "used". Some may say, "This doesn't work for integrating software with existing systems." That's right, it doesn't have anything to do with integrating with existing systems such as third party libraries, unless you are designing a transformation layer between your system and the third party system. If you are designing a transformation layer then I would do it in the DBU fashion.
Component B should only do what its users need it to do and nothing more, and obviously nothing less. Any extra code is just a waste. Mapping layers sometimes are the sign of poor design or poor utilization of teams and are just unnecessary and extra code.
Team A knows critical constraints that Team B will not know. For instance, there may be a performance constraint. Suppose Component A must return results in 1 second. That means Component B must return its results in less than 1 second. Team A knows this requirement and passes it down to Team B by means of the "preferred usage", which is stubbed out and called by Team A with the appropriate error code if the call into Component B takes too long. When Team B takes ownership of the stubbed code and moves it into Component B, Team B will have a reference to the timing constraints and can proceed accordingly.
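Here is a minimal sketch of what Team A's stub might look like (the `Quote` type, the function names, and the 900 ms share of the 1-second budget are all invented for illustration):

```cpp
#include <chrono>
#include <stdexcept>
#include <string>

// Team A decides the signature, the types, and the timing budget.
struct Quote { std::string symbol; double price; };

// Stub for Component B, written by Team A. Team B takes ownership of
// this code and fills in real functionality; the timing check travels
// with it, so the constraint is never lost in a handoff document.
Quote fetchQuote(const std::string& symbol) {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    Quote q{symbol, 0.0};   // placeholder result until Team B delivers
    if (clock::now() - start > std::chrono::milliseconds(900))
        throw std::runtime_error("fetchQuote: timing constraint violated");
    return q;
}

// Component A calls the stub exactly as it will call the real thing:
// this is the "preferred usage", written first.
double displayPrice() {
    return fetchQuote("ACME").price;
}
```

Because Component A compiles and runs against the stub from day one, the integration point exists immediately, and Team B inherits both the signature and the performance concern in code rather than in a document.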
DBU in its Simplest Form
In its simplest form DBU is similar to Test First Programming. The developer, on an individual basis, must start writing code somewhere and in some direction. After appropriate domain consideration the developer will start building classes, structures, or even data flows. It doesn't matter if you are Object Oriented, Structured, or Procedural, there is an architecture that corresponds to your development method.
The direction choice is made by writing calls as if they already exist. Thus you are designing the method on how you are going to use it. The parameters to the call will be of the type that you have available. The results of the call will be of a type that you want to handle. This is by its very nature low level code design. DBU does not require you to have a high level design nor does it exclude the use of a high level design. DBU does not need a detailed low level design before coding because DBU creates the low level design as needed, in context, on time, in place, and correct for the situation.
That is how you get started designing and writing new code. It is a very powerful way to do so.
DBU is applicable to modifying existing code. Often I find myself adding to existing code. I struggle to organize new code with existing code. I find myself trying to use what already exists instead of trying to use the code the way I would prefer. As I group calls to existing code I often feel that I am ruining the architecture or that this really doesn't fit. I often get stuck and can't figure out how I am going to get the data from all of the places I need and transform it to how it is needed. Then I remember, "Hey dummy, write the new code how you would prefer it to be, even if it doesn't exist." When I do this the code flows, the architecture is maintained or extended but it is not violated or hacked. Every time I have done this I have been pleased with the results. Yes, every time.
I have previously blogged concerning DBU and database design and how it has helped me with SQL queries and such.
DBU and Object Oriented Design
DBU is applied at low level / code level design. Therefore DBU works well with Object Oriented Design (OOD). Sometimes I design my domain objects using UML. I feel it is very important to gain as much understanding of the domain as possible before the low level code design begins. I define the objects and then I usually go right to sequence diagramming in order to imagine or simulate interactions. I do not "flesh out" the method calls to any great extent in UML. But that is me. You do what works for you. I do not use UML to generate my code. I use it to define meta data, organize thoughts on the domain, and get me pointed in the right direction. On small tasks where the domain is simple or in areas where I have lots of domain knowledge I do not even do UML.
DBU and Design By Contract
DBU uses aspects of Design By Contract (DbC). There are three questions associated with DbC.
1) What does it expect?
2) What does it guarantee?
3) What does it maintain?
DbC is based on the metaphor of a "client" and a "supplier". In DBU the user is the "client". In DBU the user of the "to be developed" method defines the preconditions, postconditions, and invariants on externally visible state. DBU follows the same rules as DbC for extending the contract down into lower level methods and procedures: a subclass may weaken a precondition, a subclass may strengthen a postcondition, and a subclass may strengthen invariants.
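As a sketch, the three questions can be written straight into a method as assertions (the `Account` class is invented here; in DBU, the caller would have specified these conditions first):

```cpp
#include <cassert>

// Each DbC question becomes a checkable statement on the method.
class Account {
public:
    explicit Account(int balance) : balance_(balance) {
        assert(balance_ >= 0);                    // establish the invariant
    }
    void withdraw(int amount) {
        assert(amount > 0 && amount <= balance_); // 1) what it expects
        int before = balance_;
        balance_ -= amount;
        assert(balance_ == before - amount);      // 2) what it guarantees
        assert(balance_ >= 0);                    // 3) what it maintains
    }
    int balance() const { return balance_; }
private:
    int balance_;
};
```

Languages with first-class contract support (Eiffel being the classic example) express these directly; in C++ assertions are a common stand-in.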
Design by Use and Test Driven Development
DBU and Test Driven Development (TDD) have similarities but are different. Both are design activities. In my opinion both are low level code design activities.
Some definitions of TDD require you to write a failing test (which is similar to a usage example of DBU) and then run your testing framework and see the indication that the test fails. You may do that in DBU but it is not a requirement of DBU. I want to point out that many will say you are not doing TDD unless you write a failing test and then watch it fail. DBU is not thusly constrained.
DBU is defined for new development and for modifying existing code. In DBU, if you are modifying existing code and you are developing new functionality, you do it in place, in context, in state, where it is needed. You write the new code as if it already exists. Of course the new code isn't going to compile, and you don't have to compile it to see it fail. Now if you want, and this is something I personally do, you take this new code and you put it into a "programmer's test" so that it will benefit from automated regression testing. You can put the new code into the tests before you actually develop the underlying functionality if you want, and drive the development of the underlying functionality from the test; in other words, at this point you can use TDD. Or you can continue in the existing code, use your IDE to generate the method, fill in the functionality whilst considering DbC, and then place calls to the new code in the "programmer's tests".
DBU is concerned with designing the call to the new code in context with the data that is on hand or accessible. DBU does not get stuck on such things as what needs to be public or private, or whether everything has to be public so that the code can be fully unit tested. DBU designs code as needed, and needed code is code that is called, and code that is called is exercised, and code that is exercised is tested.
Am I saying that all possible states are exercised? No. I don't think that TDD promises that either. Why? Because in TDD the unit tests are still written by humans, who have a finite amount of knowledge, time, and attention.
If the method you have just defined is visible to other classes or callers then I refer you back to DbC to state what is expected.
Summary
Design By Use defines "Immediate Integration", where the user specifies the inputs, outputs, and method name (or in other words the method signature). Once the user of the new method has defined the preferred method signature and constraints, the team that will develop the new method works from the user's definition to build the actual functionality. These component boundaries or interfaces are defined early so that all teams may work in parallel and so that the components are linked together immediately at definition time and not at some far off date.
DBU avoids the unnecessary code of mapping layers that result from poor communication, downstream waiting, or teams going off in their own direction.
DBU is a low level code design activity. It works well with OOD, DbC, and TDD.
DBU applies the user's preference on how things should be called to existing code as well as new code. When modifying existing code DBU says to write the modifications in the way that seems best even if the code doesn't exist. By doing this the overall structure and architecture of the system is extended and not just hacked and coupled. I do not know of any other low level code design methodology that follows that principle. There could be many. I just don't know them or maybe they don't have a cool name like Design by Use!
DBU is a set of software design and development techniques which I have found very useful during my career. I recognize that the parts that make up DBU are not new to everyone.
Before I go into a general description of DBU and compare it to OOD, DbC, and TDD I want to point out some unique aspects of DBU.
Unique Aspects of DBU
DBU considers large software development issues and specifically multiple teams working simultaneously to build components and subcomponents which ultimately will work together as a software system.
DBU describes what is termed "immediate integration". For me this was a new concept. For you it may not be, or maybe I have not communicated clearly what I mean.
Suppose there are two teams, Team A and Team B.
Suppose that Team A is writing Component A which depends on Component B which will be developed by Team B.
Team A writes inside of Component A the call to Component B before Component B is developed. Team B takes the code from Component A and uses that to define the method signature or interface into Component B. Team A decides how they want to use Component B. Team A codes the preferred usage and gives that to Team B.
Team A writes this "preferred usage" code very early in the development of Component A. This is done early so that Team B can start as soon as possible so that all teams are working on their components in parallel as much as possible. When I say "very early" I mean at first for most situations.
Notice that Team A specifies the first version of the interfaces for Component B which are of interest to Team A. As with most software, changes to the interfaces occur before finishing the product. I shouldn't even have to say that, but so many read a description and then say, "You don't allow for future changes." All I can say is that people who think like that need to take the blinders off. If I don't describe some particular issue that you think is important I say to you can you imagine a way to address your issue and if so then everything is still good.
So, Team A writes inside of Component A the "preferred usage" code for the call to Component B, and then creates a stub for Component B. Team B takes ownership of this stub and brings it into Component B and Component A no longer calls the stub but calls Component B. Thus we have immediate integration between Component A and Component B. This new call into Component B had the pre-conditions, post-conditions, and invariants that are concerns for Team A specified by Team A. These concerns can be used in the definition of automated tests.
Team B does not have to wait and wait for Team A to finally decide to call their system. Team A does not have to worry about Component B's interface and how to match up the classes, structures, parameters, exceptions, or return values. There will be no useless code and design based on the common tactic of "We will go ahead and design and implement Component B and when you finally figure out how you want to call us we can implement a mapping layer between the systems." What a poor way to do parallel component development.
Notice that Team B did not declare to Team A that Component B will have these interfaces and Team A will have to figure out how to create the data necessary to make the call. In the development of "NEW" software the "user" has priority over the "used". Some may say, "This doesn't work for integrating software to existing systems." That's right, it doesn't have anything to do with integrating to existing systems such as third party libraries, unless you are designing a transformation layer between your system and the third party system. If you are designing a transformation layer then I would do it in the DBU fashion.
Component B should only do what its users need it to do and nothing more, and obviously nothing less. Any extra code is just waste. Mapping layers are sometimes a sign of poor design or poor utilization of teams, and are just unnecessary extra code.
Team A knows critical constraints that Team B will not know. For instance, there may be a performance constraint. Suppose Component A must return results in 1 second; that means Component B must return its results in less than 1 second. Team A knows this requirement and passes it down to Team B by means of the "preferred usage", which is stubbed out and called by Team A with the appropriate error code if the call into Component B takes too long. When Team B takes ownership of the stubbed code and moves it into Component B, Team B will have a record of the timing constraints and can proceed accordingly.
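One way the timing constraint could be encoded in the stubbed call is sketched below, again in Python with hypothetical names; the point is only that the 1-second budget lives in code Team B inherits, not in a document Team B might never read:

```python
import time

BUDGET_SECONDS = 1.0  # Team A's constraint, passed down with the stub


class ComponentBTooSlow(Exception):
    """Raised when Component B exceeds the budget Team A handed down."""


def call_component_b(compute):
    # Wrapper written by Team A around the stubbed call: it measures the
    # call and raises if Component B blows the budget, so the timing
    # requirement travels with the code when Team B takes ownership.
    start = time.monotonic()
    result = compute()
    elapsed = time.monotonic() - start
    if elapsed >= BUDGET_SECONDS:
        raise ComponentBTooSlow("Component B took %.3fs" % elapsed)
    return result
```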
DBU in its Simplest Form
In its simplest form DBU is similar to Test First Programming. The developer, on an individual basis, must start writing code somewhere and in some direction. After appropriate domain consideration the developer will start building classes, structures, or even data flows. It doesn't matter if you are Object Oriented, Structured, or Procedural, there is an architecture that corresponds to your development method.
The direction choice is made by writing calls as if they already exist. Thus you are designing the method based on how you are going to use it. The parameters to the call will be of the types that you have available. The result of the call will be of a type that you want to handle. This is, by its very nature, low level code design. DBU does not require you to have a high level design, nor does it exclude the use of one. DBU does not need a detailed low level design before coding because DBU creates the low level design as needed: in context, on time, in place, and correct for the situation.
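As a concrete illustration of writing the call first, here is a small Python sketch; summarize_orders and its parameters are hypothetical names chosen for the example, not anything from the original post:

```python
# Step 1: at the call site, write the call you wish existed, using the
# data you already have (orders) and the type you want back (a string):
#
#     summary = summarize_orders(orders, since="2008-01-01")
#
# Step 2: that call has already designed the signature; now fill it in.
def summarize_orders(orders, since):
    # orders is a list of dicts because that is what the caller has.
    recent = [o for o in orders if o["date"] >= since]
    total = sum(o["amount"] for o in recent)
    # A display string is returned because that is what the caller wants.
    return "%d orders totalling %.2f" % (len(recent), total)
```

The signature was dictated by the call site, not the other way around, which is the direction-setting step DBU describes.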
That is how you get started designing and writing new code. It is a very powerful way to do so.
DBU is applicable to modifying existing code. Often I find myself adding to existing code. I struggle to organize new code with existing code. I find myself trying to use what already exists instead of trying to use the code the way I would prefer. As I group calls to existing code I often feel that I am ruining the architecture or that this really doesn't fit. I often get stuck and can't figure out how I am going to get the data from all of the places I need and transform it to how it is needed. Then I remember, "Hey dummy, write the new code how you would prefer it to be, even if it doesn't exist." When I do this the code flows, the architecture is maintained or extended but it is not violated or hacked. Every time I have done this I have been pleased with the results. Yes, every time.
I have previously blogged concerning DBU and database design and how it has helped me with SQL queries and such.
DBU and Object Oriented Design
DBU is applied at low level / code level design. Therefore DBU works well with Object Oriented Design (OOD). Sometimes I design my domain objects using UML. I feel it is very important to gain as much understanding of the domain as possible before the low level code design begins. I define the objects and then usually go right to sequence diagramming in order to imagine or simulate interactions. I do not "flesh out" the method calls to any great extent in UML. But that is me; you do what works for you. I do not use UML to generate my code. I use it to define metadata, organize thoughts on the domain, and get pointed in the right direction. On small tasks where the domain is simple, or in areas where I have lots of domain knowledge, I do not even use UML.
DBU and Design By Contract
DBU uses aspects of Design By Contract (DbC). There are three questions associated with DbC.
1) What does it expect?
2) What does it guarantee?
3) What does it maintain?
DbC is based on the metaphor of a "client" and a "supplier". In DBU the user is the "client". In DBU the user of the "to be developed" method defines the preconditions, postconditions, and invariants on externally visible state. DBU follows the same rules as DbC for extending the contract down into lower level methods and procedures: a subclass may weaken a precondition, a subclass may strengthen a postcondition, and a subclass may strengthen invariants.
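The three DbC questions can be made concrete with asserts. This is a toy Python sketch of the idea (the Account class and its numbers are hypothetical, and production DbC tooling would be richer than bare asserts):

```python
class Account:
    """Toy example: the user of the not-yet-written withdraw() method
    states the contract up front, DbC style."""

    def __init__(self, balance):
        assert balance >= 0          # invariant: balance is never negative
        self.balance = balance

    def withdraw(self, amount):
        # What does it expect? A positive amount no larger than the balance.
        assert 0 < amount <= self.balance
        old = self.balance
        self.balance -= amount
        # What does it guarantee? The balance drops by exactly `amount`.
        assert self.balance == old - amount
        # What does it maintain? The invariant still holds.
        assert self.balance >= 0
        return self.balance
```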
Design by Use and Test Driven Development
DBU and Test Driven Development (TDD) have similarities but are different. Both are design activities. In my opinion both are low level code design activities.
Some definitions of TDD require you to write a failing test (which is similar to a usage example of DBU) and then run your testing framework and see the indication that the test fails. You may do that in DBU but it is not a requirement of DBU. I want to point out that many will say you are not doing TDD unless you write a failing test and then watch it fail. DBU is not thusly constrained.
DBU is defined for new development and for modifying existing code. In DBU, if you are modifying existing code and developing new functionality, you do it in place, in context, in state, where it is needed. You write the new code as if it already exists. Of course the new code isn't going to compile, and you don't have to compile it to see it fail. Now, if you want (and this is something I personally do), you take this new code and put it into a "programmer's test" so that it benefits from automated regression testing. You can put the new code into the tests before you actually develop the underlying functionality and drive the development of that functionality from the test; in other words, at this point you can use TDD. Or you can continue in the existing code, use your IDE to generate the method, fill in the functionality while considering DbC, and then place calls to the new code in the "programmer's tests".
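A sketch of that last step, dropping a DBU-designed call into a programmer's test, might look like this in Python with unittest standing in for whatever test framework is at hand; normalize_name is a hypothetical example function:

```python
import unittest


def normalize_name(raw):
    # New functionality, written first the way the call site preferred it:
    # collapse runs of whitespace, then title-case the result.
    return " ".join(raw.split()).title()


class NormalizeNameTest(unittest.TestCase):
    # The DBU-designed call, placed into a "programmer's test" so it
    # gains automated regression coverage from then on.
    def test_collapses_whitespace_and_titlecases(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")
```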
DBU is concerned with designing the call to the new code in context with the data that is on hand or accessible. DBU does not get stuck on such things as what needs to be public or private, or whether everything has to be public so the code can be fully unit tested. DBU designs code as needed, and needed code is code that is called; code that is called is exercised, and code that is exercised is tested.
Am I saying that all possible states are exercised? No. I don't think TDD promises that either. Why? Because in TDD the unit tests are still written by humans, who have a finite amount of knowledge, time, and attention.
If the method you have just defined is visible to other classes or callers then I refer you back to DbC to state what is expected.
Summary
Design By Use defines "Immediate Integration", where the user specifies the inputs, outputs, and method name (in other words, the method signature). Once the user of the new method has defined the preferred method signature and constraints, the team that will develop the new method works from the user's definition to build the actual functionality. These component boundaries or interfaces are defined early so that all teams may work in parallel, and so that the components are linked together immediately at definition time and not at some far off date.
DBU avoids the unnecessary code of mapping layers that result from poor communication, downstream waiting, or teams going off in their own direction.
DBU is a low level code design activity. It works well with OOD, DbC, and TDD.
DBU applies the user's preference on how things should be called to existing code as well as new code. When modifying existing code DBU says to write the modifications in the way that seems best even if the code doesn't exist. By doing this the overall structure and architecture of the system is extended and not just hacked and coupled. I do not know of any other low level code design methodology that follows that principle. There could be many. I just don't know them or maybe they don't have a cool name like Design by Use!