Friday, July 15, 2016

Scouting and Reconnaissance in Software Development


by Geoffrey Slinker
v1.0 October 2004
v1.1 January 2005
v1.2, v1.3, v1.4 July 2005
v1.5 March 24, 2006

Maverick Development

Abstract

Scouting and reconnaissance are two well-known methods of discovery. Through them, information and experience are gained when facing the unknown. Experience is critical to writing good software: it allows you to correctly identify problems and address them. Scouting and recon in software development are a great way to gain experience and avoid the pitfalls of the unknown.

Introduction

In the well-known book 'The Mythical Man-Month', Frederick P. Brooks states:
Where a new system concept or new technology is used, one has to build a system to throw away, for even the best planning is not so omniscient as to get it right the first time. Hence plan to throw one away; you will, anyhow.
As the years passed and systems grew in size and complexity, it became apparent that building a "throw away" was not the most efficient approach. In the 20th anniversary edition of the same book, Brooks states that developing a throwaway version is not as efficient as iterative approaches to software development.
In Extreme Programming Explained Second Edition, Kent Beck states:
"Defect Cost Increase is the second principle applied to XP to increase the cost-effectiveness of testing. DCI is one of the few empirically verified truths about software development: the sooner you find a defect, the cheaper it is to fix it."
Scouting and recon techniques are used to discover defects through experiments so that those defects never appear in the "real" software. These techniques work within phased (phasic) development methodologies as well as within iterative methodologies, and they yield knowledge and experience through their use.

Gaining Experience

There are many software development activities concerned with gaining experience. Some of these activities include creating proofs of concept, prototyping, and experimenting. I will refer to all of these activities as experiments.
How much effort should be placed in an experiment? Enough to gain the experience needed to get you to the next step.

Software Scouting

“Scouting” will be the metaphor. During the exploration of the American frontier, scouts were sent out ahead of the company to determine the safest path through unknown and hostile territory. Through software “scouting missions” one can save time and money, and reduce the risks to the company.

Brooks’ first statement concerning building a "throw away" is akin to exploring the entire route first and then moving the company. His revised statement concerning iterative development is akin to scouting out a few hours (or days) ahead and returning to guide the company. This pattern of short scouting trips would continually repeat, making the technique both iterative and incremental. Through the scouting metaphor you can gain a certain feel for why building a "throw away" version is more costly than iterative development.

Scouting Tools

There are many ways to explore the unknown, and these activities have much in common. One key differentiator is the stage of software development in which an activity occurs. In the following, various "tools" for scouting are defined, along with the stage in which each is typically used.
"Proof of Concept" occurs after a solution has been conceptualized. Investigation is needed to gain confidence and verify the viability of the solution.

A "Prototype" is made after a design has been made. Investigation is needed to validate that the result of the design solves the problem. In software prototyping development activities are scaled back. In engineering prototypes may be scaled functioning models. In software there is no physical dimension so development activities are scaled back which include minimal effort for robustness and usually only implementing the “happy path” of the functionality. Also techniques to reduce coupling are skipped and cohesion is ignored as much as possible (Even though these activities are skipped the experience of prototyping bring to light how the software components should be coupled and an overall domain definition emerges that allows for better cohesion).
Ed Mauldin explains prototyping thus:
“Prototyping is probably the oldest method of design. It is typically defined as the use of a physical model of a design, as differentiated from an analytical or graphic model. It is used to test physically the essential aspects of a design before closing the design process (e.g., completion and release of drawings, beginning reliability testing, etc.). Prototypes may vary from static "mockups" of tape, cardboard, and styrofoam, which optimize physical interfaces with operators or other systems, to actual functioning machines or electronic devices. They may be full or sub-scale, depending on the particular element being evaluated. In all cases, prototypes are characterized by low investment in tooling and ease of change.”

An "Experiment" occurs after software modules have been developed. Investigation into their behavior under varied conditions is needed. An experiment is conducted to observe the behavior.
A "Mock Object" is created during software implementation. Components have been developed and investigation into their behavior needs to be done. To isolate these components from the effects of other components the other components are replaced with "mocks" that have simple and specific behavior.
A "Driver" is created during software implementation. Components have been developed and investigation into their interfaces and usability need to occur. A driver is developed to interface with and drive the component. The interfaces or entry points of the components are confirmed correct and the pre-conditions of the components are exercised. The driver can validate the post-conditions of the component as well.
"Stub" is created during software implementation. Functionality has been developed and investigation of the code paths needs to occur. Called interfaces are developed with the simplest means in order to return specific results and exercise the code paths of the caller. These simple interface implementations are stubs.
"Simulation" is typically created after the system is implemented. A deliverable needs to be tested in various environments and conditions. A simulation of an environment is developed and it is used for testing. Common examples are simulated users, simulated load, simulated outages, and such.

When to Scout

Remember, scouting activities address the issue of gaining experience in unknown territory. These activities are not necessary when experience is present. Simply said, “If you know how to do the job, then do it!”

When one is in unknown territory scout ahead for information, then come back and apply the knowledge gained. Have enough discipline not to get distracted by the sights along the way. Stay focused, travel light, and get back to camp as quickly as possible.

Can you afford not to scout ahead? The answer to this question only comes at the end of the journey. Did you make it to your destination or not?

Scouting for Phasic Methodologies

One reason that experiments work is that they address issues and concerns in context and as they occur. It is a "learn as you go" approach. Below are some scenarios in which scouting can be used in a traditional phased (phasic) methodology.

Phase 1: Analysis and Requirements.

•    Paper prototypes of the user interface.
•    Proof-of-concept of a requirement (e.g. the database must support 500 simultaneous connections; sketched below).
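A proof of concept for the connection requirement above might look like the sketch below. The database API here is a hypothetical stand-in; a real proof of concept would link against the actual database client:

#include <cstdio>
#include <vector>

// Hypothetical stand-in for a real database client connection.
struct DbConnection {
    bool open() { return true; }  // replace with a real connect call
};

int main() {
    const int required = 500;
    std::vector<DbConnection> connections(required);  // all held open together
    int opened = 0;
    for (DbConnection& c : connections)
        if (c.open()) ++opened;
    std::printf("opened %d of %d simultaneous connections\n", opened, required);
    return opened == required ? 0 : 1;  // nonzero exit means the requirement failed
}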

Phase 2: Design.

•    Refined paper prototypes of the user interface.
•    Paper models of the architecture and the model (e.g. UML).

Phase 3: Implementation.

•    Develop an experiment for the “happy path” to discover boundaries and interfaces.
•    Create prototypes ahead of implementing frameworks so that the framework's approach can be reviewed.

Phase 4: Testing.

•    Create experiments to test scenarios.
•    Create testing harnesses that allow for proxy users (a proxy user can be a user simulated by a computer program).
•    Simulate extreme conditions such as system load (sketched below).
(Testing is scouting ahead of the user to make sure the user’s experience will be a good one.)
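A sketch of the simulated-load idea (handleRequest is a hypothetical stand-in for the system under test): proxy users are plain threads that hammer the entry point while elapsed time is measured:

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-in for the deliverable's entry point.
void handleRequest() {
    std::this_thread::sleep_for(std::chrono::milliseconds(5));  // simulated work
}

int main() {
    const int users = 50, requestsPerUser = 20;
    const auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> proxies;  // each thread is a proxy user
    for (int u = 0; u < users; ++u)
        proxies.emplace_back([requestsPerUser] {
            for (int r = 0; r < requestsPerUser; ++r)
                handleRequest();
        });
    for (std::thread& t : proxies)
        t.join();
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();
    std::printf("%d requests in %lld ms\n", users * requestsPerUser, (long long)ms);
}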

Scouting for Iterative Methodologies

User Stories

  • Create a proof of concept to verify the User has conveyed their desires.

Project Planning

  • If the user story involves a User Interface, create paper prototypes of the interface to stimulate user input and direction.

Release Planning

  • Create a prototype to identify dependencies to facilitate iteration planning.

Iteration Planning

  • Create design prototypes using a modeling language such as UML.

Iteration

  • Create stubs, drivers, and mock objects to increase confidence in the behavior of isolated units.
  • Create an experiment to observe object behavior.
  • Create a simulation to test things like performance under a heavy load.
This list is meant to be thought-provoking, not complete. The idea behind scouting is to perform some scouting activity when faced with the unknown. When doing experiments in conjunction with an iterative development methodology, the experiments are "lighter" than they would be in a phasic development methodology if the customer/user is taking an active role. With the customer present, one can prototype a user interface with a white board and some drawings. If the customer is not present, then a prototype for a user interface is usually mocked up with some kind of computer-aided drawing package, or even a "quick and dirty" user interface is developed with a GUI-building tool or scripting language.

Benefits of Scouting

  1. Scouting brings light to a situation.  Through scouting activities estimations become more accurate. The accuracy comes from the application of the experience, not from an improved ability to predict the future.
  2. Scouting reduces coupling and improves cohesion.  When writing software in the light of experience, the coupling between objects is reduced, and the experience unifies the system's terms and metaphors, which increases cohesion.
  3. Scouting builds trust and confidence by eliminating incorrect notions and avoiding drastic changes in design and implementation.

Risks of Software Scouting

  1. Is management mature enough to allow the proper use of an experiment and not try to “ship” the prototype and undermine the effort?
  2. Is development mature enough to keep features from creeping into the product just because an experiment revealed something interesting?

Project Management Ensures Adequate Software Recon

Project Management should scout and see if their development environment can support activities that rapidly gain experience. Probing questions include:
  • Are the software developers aware of all of the activities that can lead to experience?
  • Are the stakeholders aware of the benefits of prototypes and experiments?
  • Is everyone aware of the risks of not doing recon and the risks of doing recon? Remember, one of the risks of a prototype is that sometimes people try to ship it!
An interesting exercise would be to listen for concerns expressed by developers and ask them what activity would address their concern. Some concerns expressed by developers that can be addressed through recon are:
  • “If I just had time to write this right”
  • “I don’t think we know how difficult this is going to be”
  • “I really don’t have any idea how long this is going to take”
When a concern is expressed ask the developer what they would do to address it. Listen for solutions that bring experience and shed light.

Conclusion

Experience is key to writing good software. The sooner you discover a problem and correctly fix it, the cheaper it is. Scouting ahead in software by using prototypes and experiments is a great way to discover the right path without risking the entire company to the unknown.

Design By Use



"Design By Use" Development

by Geoffrey Slinker
version 1.6
March 25, 2006
April 22, 2005
July 1, 2005
July 25, 2005
August 23, 2005

Maverick Development

Abstract

"Design by Use" development (DBU) improves team resource utilization, software design, software quality, and software maintenance through a set of proven industry methods that have been shown to work together synergistically.

Introduction

Are you concerned with keeping your development staff adequately tasked? Would you like to improve design quality by reducing coupling, improving cohesion, and communicating the domain model? Is the quality of your software important? Do you maximize the R.O.I. of your software by using the software for as many years as possible? If you answered no to any of these questions, are you from another planet?

As part of my career I have specialized in rendering concise solutions to problems. Whether the problem was to be solved with code or with a methodology, I have always strived to take the problem presented, boil it down to its essence, and provide a solution. I have studied software engineering processes for over 20 years, and I have distilled what I feel are the most useful methods into a foundation for building a process that is efficient and continuously improving.

I have recently thrown all of the traditional and agile methodologies that I know into my soup pan and turned up the heat! Then I took the results and have been experimenting with them. It is like a soup that has been cooked in a big pot: you can taste all of the different ingredients if you try, or you can ignore the ingredients and just enjoy the combined flavor.

This paper presents a methodology for development that can work as a subcomponent of any encompassing methodology and deliver results in the areas mentioned in the abstract.

Executive Summary

Design by Use (DBU) follows the basic steps:
1) Create High Level Design
2) Identify systems and subsystems
3) Identify messages or calls between systems and subsystems
4) Use these identified messages or calls to specify to each team what they should code and how the message will be made (the message/method signature).

For example: there are two subsystems identified, S1 and S2, and two teams, T1 and T2. T1 is to write S1 and T2 is to write S2.
S1 calls into S2; let's suppose the message is GetStuffFromS2.
Team 1 writes a Usage Example:
#include <cassert>

struct MyData { int value; };        // illustrative data type
MyData GetStuffFromS2(int id);       // the message S1 needs; Team 2 will implement it

int main()
{
    MyData data = GetStuffFromS2(1); // sample input
    assert(data.value == 3);         // expected result for that input
}

Team 1 gives this Usage Example to Team 2. T2 uses it to direct what they will develop, and the order of development will naturally flow from this point. So T2 implements GetStuffFromS2 in their subsystem S2 and notifies T1 when it is available (or, if they are using unit tests, T1 will know GetStuffFromS2 is available when the build light goes green for that test).
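Team 2's side might then look like this sketch (the body is illustrative; the sample value comes straight from Team 1's usage example):

#include <cassert>

struct MyData { int value; };

// S2's implementation of the message specified by Team 1's usage example.
MyData GetStuffFromS2(int id) {
    assert(id > 0);                   // pre-condition checked inside S2
    MyData data;
    data.value = (id == 1) ? 3 : 0;   // sample data from the usage example
    return data;
}

int main() {
    // Team 1's usage example now doubles as the integration test.
    MyData data = GetStuffFromS2(1);
    assert(data.value == 3);
}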
S1 is immediately integrated with S2 and, even better, it is integrated in a great way: the way the user wants to use the system.
DBU goes beyond Test-Driven Development (TDD) and Design by Contract (DbC). DBU is concerned with large software systems, multiple teams, coordination, and integration. TDD is a code design activity; DbC is a contract-driven process based around invariants, pre-conditions, and post-conditions.

The Approach

The "Why":
The problem is keeping all software development teams working and not waiting.
The "When":
When a large software system is being developed with many systems and subsystems and each of these is developed by different teams.
The "How":
The high level design of the system is done with any method that the company agrees upon; a custom diagramming language such as a simplified UML works fine. Subsystems are identified and teams are assigned to each subsystem. The data flows, invocations, calls, dependencies, or whatever you want to call them are identified at the subsystem boundaries. For example, "My subscription subsystem will need to ask the pricing subsystem for a price given a product Id."

At this point the development pump must be primed. All of the teams have their requirements; it doesn't matter whether you use Use Cases, User Stories, or another way to specify them. In an agile methodology this would be one of the last activities of Release Planning. The teams meet together as one, and the functionality that will be delivered during this release is decided upon. Each system and subsystem participating in this release is identified. If there are systems that are not part of this release, the teams responsible for them will not be needed and can work on other systems.

Each call into an external system or subsystem that has been identified is listed. The "caller" starts out by writing a usage example. The usage examples are created for the calls identified in the high level design (calls that cross system boundaries). The usage examples that call into subsystems other than your own are delivered to the proper team. In all software development there are upstream/downstream situations. (I do not go into the perils of being downstream in this paper.) All of the usage examples will be used to drive the design and development of what's inside a subsystem. This is the low-level design (code), and it includes the details not covered in the high-level design (possibly UML).

Once all of the usage examples at the subsystem boundaries have been identified, the teams can coordinate and prioritize the remaining development tasks. This gives a clear picture of who is doing what and how they should do it. There is no waiting, because each usage example carries with it sample data to drive the call; no one is waiting for someone upstream to finally call their code.

The usage examples test post-conditions after the call into the subsystem. The implementation in the subsystem checks pre-conditions, invariants, and post-conditions. If you taste the flavor of Design by Contract in this soup, you are correct.
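A minimal sketch of that division of labor (the price-list class is hypothetical): the subsystem implementation checks pre-conditions, invariants, and post-conditions, while the usage example checks only post-conditions from the caller's side:

#include <cassert>

// Hypothetical subsystem class.
class PriceList {
    int priceCents;  // invariant: never negative
    void checkInvariant() const { assert(priceCents >= 0); }
public:
    PriceList() : priceCents(0) {}
    void setPriceCents(int cents) {
        assert(cents >= 0);           // pre-condition, inside the subsystem
        priceCents = cents;
        assert(priceCents == cents);  // post-condition, inside the subsystem
        checkInvariant();             // invariant, inside the subsystem
    }
    int getPriceCents() const { return priceCents; }
};

int main() {
    // The usage example: the caller's view, testing post-conditions only.
    PriceList prices;
    prices.setPriceCents(999);
    assert(prices.getPriceCents() == 999);
}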

From what has been stated so far, the DBU approach has presented the overall domain, identified sub-domains, exposed the boundaries and entry points, and allowed for efficient use of resources and scheduling.

Quality is improved through the approach as thus far stated. By having usage examples drive development, integration has already been addressed: instead of "integrate often", this approach is "integrate immediately". As soon as a component is finished and satisfies the usage example, it can be used by consumers. Through this approach the design is very cohesive, because sufficient consideration was given to the domain model and the boundary points. The idea that cohesive designs and correct models just emerge from some primordial ooze is a misunderstanding; instead they come from the application of knowledge, consideration, experimentation, and application. This approach uses these four factors continuously.

With the usage examples defined and expectations set, there is no need for teams to reinvent the wheel. Too often teams will not use others' code because the quality is suspect, the delivery date is unknown, or the solution is a near fit but not a good fit. Eliminating these concerns is as much a social problem as a procedural one; the approach specified here addresses the procedure.

So far, improvements in resource utilization, design, and quality have been described. Finally, this approach improves the R.O.I. by facilitating software maintenance. By running the usage examples, a developer can isolate a piece of code and step through it to understand a legacy system. Often documentation is lost or out of synchronization with the software, and a developer just wants to know what the system currently does. When modifying an existing system, it is essential to know that changes have not affected the system in undesired ways. By running the usage examples in the role of a regression test, one can verify that the effects of a change are isolated to the desired areas. Since each call from one system into another is specified, the designer's and programmer's intent is specified. This specification can be used to replace entire systems and subsystems: suppose we want to replace the pricing subsystem our subscription subsystem uses; the usage examples show exactly where to make the incision.
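As a sketch of that maintenance role (the example names are hypothetical placeholders), the usage examples can be gathered into a single runner that is executed after every change:

#include <cstdio>

// Each usage example is a small function returning pass/fail; the bodies
// here are placeholders for real calls into the subsystems.
bool usageExample_getPrice()  { /* call pricing subsystem, check results */ return true; }
bool usageExample_subscribe() { /* call subscription subsystem, check results */ return true; }

int main() {
    struct Example { const char* name; bool (*run)(); };
    const Example examples[] = {
        { "getPrice",  usageExample_getPrice },
        { "subscribe", usageExample_subscribe },
    };
    int failures = 0;
    for (const Example& e : examples) {
        const bool ok = e.run();
        std::printf("%-10s %s\n", e.name, ok ? "PASS" : "FAIL");
        if (!ok) ++failures;
    }
    return failures;  // nonzero exit flags a regression
}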

Summary


1)    Improves team resource utilization
a.    By specifying interfaces through usage examples, one team can clearly specify to another team the functionality that is desired. This is immediate integration, and through it there is less of the rework that traditionally comes during integration at the end of the development phase.
2)    Improves quality
a.    Eliminates issues with late integration
b.    Builds confidence in subsystems and reduces "silo-ing" and duplicated code.
3)    Improves design
a.    Rapidly defines interfaces and exposed entry points.
b.    Reduces coupling.
c.    Increases cohesion.
i.    through communication
ii.    through the dissemination of domain concepts
iii.    through the unification of domain models
4)    Improves software maintenance
a.    By running the usage examples as a regression test one can step through code that is not documented or that is not behaving according to documentation.
b.    Usage examples are run after every modification (small sets of changes) to verify that the changes have not caused problems through unknown side effects and couplings.

Conclusion

"Design by Use" development (DBU) improves team resource utilization, software design, software quality, and software maintenance through a set of proven industry methods that have been shown to work together synergistically.

When complex software with many systems and subsystems is developed by several teams of developers, it is difficult to schedule the order in which each part will be developed, and integration is often done late. Immediate integration is the key activity.
To flesh out more of the entire process, please read "Reporting for Accountability".