Sunday, 07 August 2005

Testing Levels

In previous versions of Visual Studio, a tester had to turn to many different tool vendors for testing tools. The release of Microsoft Visual Studio 2005 Team System will mark an important milestone in the testing world, because it recognizes the tester as a first-class citizen in Visual Studio. It will provide testers with tools that support testing throughout the entire software development and maintenance lifecycle. Does this mean that every tool a tester ever dreamt of, or even really needs, is in Team System? No, but it’s a great start. In this post I’ll focus on the test levels that are defined in the “V” model and the Visual Studio 2005 and Team System features that support them. The “V” model has become an industry-wide standard for visualizing the levels of tests. Figure 1 is an illustration of the “V” model.



I regularly notice that there is a lot of confusion on the what, how and when while discussing these levels. Before we can start to define the different testing levels, it’s probably wise to define what a unit and a component are:

Unit:
A unit is the smallest compilable component. It does not include any communicating components and it’s generally created by one programmer.
Component:
A unit is a component. The integration of one or more components is also a component. The reason for “one or more” rather than “two or more” is to allow for components that call themselves recursively.

On the right-leg of the “V” model you’ll find these levels:
Unit Testing:
During unit-testing the developer should always make sure the unit is tested in isolation, and that it is the only possible point of failure. In unit testing communicating components and called components should be replaced with stubs, simulators, or trusted components. Calling components should be replaced with drivers or trusted super-components.
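A minimal sketch of such a test, assuming the unit testing framework that ships with Visual Studio 2005 Team System; the InvoiceCalculator, ITaxRateProvider and StubTaxRateProvider types are hypothetical and only serve to show a communicating component being replaced by a stub:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical unit under test: it communicates with a tax-rate provider.
public interface ITaxRateProvider
{
      decimal GetRate(string region);
}

public class InvoiceCalculator
{
      private ITaxRateProvider _rates;
      public InvoiceCalculator(ITaxRateProvider rates) { _rates = rates; }
      public decimal Total(decimal net, string region)
      {
            return net * (1 + _rates.GetRate(region));
      }
}

// The communicating component is replaced with a stub, so the calculator
// is the only possible point of failure in this test.
public class StubTaxRateProvider : ITaxRateProvider
{
      public decimal GetRate(string region) { return 0.21m; }
}

[TestClass]
public class InvoiceCalculatorTests
{
      [TestMethod]
      public void Total_AppliesStubbedTaxRate()
      {
            InvoiceCalculator calculator = new InvoiceCalculator(new StubTaxRateProvider());
            Assert.AreEqual(121m, calculator.Total(100m, "BE"));
      }
}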
Component Testing:
During component testing the same scenarios are tested as during unit testing but all stubs and simulators are replaced with the real thing.
Integration Testing:
Integration testing identifies problems that occur when components are combined. Component A and B are two components for which A calls B. Figure 2 illustrates integration testing for Components A and B:
Test Suite A contains the component-level tests of component A.
Test Suite B contains the component-level tests of component B.
Tab are the tests in A’s suite that cause A to call B.
Tsab are the tests in B’s suite for which the test code that drives component B could be replaced by a call from component A.
When you combine the test suites Tsab and Tab you have a set of component tests that you can use after you modify component A or B, and that lets you verify that the two components still function correctly together, as sketched in the example below.
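A hedged sketch of the difference, using hypothetical ComponentA and ComponentB classes where A calls B: the first test belongs to B’s component-level suite, while the second drives the same behavior through A and therefore belongs to the integration suite.

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical components: A calls B to format the invoice total.
public class ComponentB
{
      public string Format(decimal amount)
      {
            return "EUR " + amount.ToString("F2", System.Globalization.CultureInfo.InvariantCulture);
      }
}

public class ComponentA
{
      private ComponentB _b;
      public ComponentA(ComponentB b) { _b = b; }
      public string PrintInvoiceTotal(decimal total) { return _b.Format(total); }
}

[TestClass]
public class IntegrationTests
{
      // Component-level test from suite B: the test code drives ComponentB directly.
      [TestMethod]
      public void ComponentB_FormatsAmount()
      {
            ComponentB b = new ComponentB();
            Assert.AreEqual("EUR 100.00", b.Format(100m));
      }

      // Integration test (Tab/Tsab): the call from ComponentA now replaces the
      // test code as the input for ComponentB.
      [TestMethod]
      public void ComponentA_And_B_WorkTogether()
      {
            ComponentA a = new ComponentA(new ComponentB());
            Assert.AreEqual("EUR 100.00", a.PrintInvoiceTotal(100m));
      }
}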


System Testing:
In system testing the tester verifies whether the developed system or subsystems meet the requirements that were set in the functional and quality specifications.
Acceptance Testing:
In acceptance testing the user and system manager verify whether the developed system meets the requirements that were set in the functional and quality specifications. This level of testing is done in an environment that simulates the operational environment to the greatest possible extent.
Release Testing:
Prior to a public release of a program you must ensure that all bugs that were intended to be fixed were actually fixed. In release testing the following aspects are verified:

A mixture of previously failed-and-fixed tests and tests that have always passed;

Virus checking of the final installation package. Too many cases of distribution of viruses have been reported to not take this additional precaution.
A comparison of the features that actually work reliably against the prepared documentation. It is crucial that the documentation reflects all design decisions made during development and testing.

There are many variations on these definitions and the “V” model, but the key point is that the abovementioned testing levels are formally defined. A wise man once said: “Even when laws have been written down, they ought not always to remain unaltered.” Despite the fact that we are not talking about laws here, feel free to leave a comment if you have a different understanding of these definitions.

Team System | Testing
08/07/2005 15:43:44 UTC  #  Comments [2] 
 Monday, 18 July 2005

Courses versus Conferences
About two weeks ago, I had the opportunity to attend TechEd in Amsterdam and this year’s edition was even better than the previous one. In my opinion there are two major differences between a conference like TechEd and a regular course at a training center:

A course will provide you with a predefined set of aspects of a product; these are normally chosen by a qualified smart guy. During a conference you have the freedom to constantly choose what you want to learn. But as always, freedom should be accompanied by a certain amount of responsibility. I gained a huge amount of weight in the first months after I moved out of my parents’ house, when I could have been eating those healthy vegetables (it’s a freedom<>responsibility joke). You can really see a course as your parents’ house, whereas a conference is more like living on your own.

Advice when attending a conference:
Choose your sessions wisely, think long term;
Go to as many sessions as you can;
A soccer player is only as good as his last season; you’re only as good as the relevant knowledge you have … you see where I’m heading.

As a former teacher, I always found that my primary objective was to provide my students with a lot of practical information and guidance that they could directly apply in the field. Planting seeds of knowledge in a student’s mind was my secondary objective. As you know, a seed needs a lot of water and nurturing (studying and hard work) in order to become a tall oak (smart guy). When you attend a conference I’m convinced that your priorities should be the other way around. You really should be looking for the seeds, and that’s where responsibility kicks in (again).

Advice when attending a conference:
Don’t wait too long to go over the sessions again; you’ll be surprised how much you have forgotten in a week;
Create a list of interesting subjects that you want to learn before a conference. Try to gather additional information on these subjects afterwards and try to master them in the following weeks. If you succeed at mastering these subjects, you’ve had a good conference;
If you’ve ever been to a conference, you’ll know that the real work only just begins when the conference is finished. Remember that hard work spotlights the character of people: some turn up their sleeves, some turn up their noses, and some don't turn up at all.

 

The best advice I can give you when choosing between a course and a conference is the following:

To be conscious that you are ignorant is a great step to knowledge.
Benjamin Disraeli (1804 - 1881), Sybil, 1845


07/18/2005 19:27:12 UTC  #  Comments [1] 
 Monday, 11 July 2005

SCM and Team System, a marriage made in heaven?

In a previous post I stated the goals of successful configuration management and, as you undoubtedly realized, they are not easily accomplished in the field.  Now I'll try to give you an insight into how Team System helps you to tame this untamable beast.

A good SCM process makes it possible for developers to work together on a project in an efficient manner, both as individuals and as members of a team.  A development team must constantly manage requirements, tasks, source code, bugs and reports.  Gathering each of these item types in the same tool strengthens the communication pathways of teams and software.

Based on the goals mentioned in the previous post on SCM I'll try to indicate how Team System helps you to accomplish them:

·   Configuration identification:  This is often referred to as the process of identifying the configuration items that make up a baseline. It covers not only source code, but all documents that contribute to the baseline.  Examples are:
·   All code files
·   Compilers
·   Compile / build scripts
·   Installation and configuration files
·   Inspection lists
·   Design documents
·   Test reports
·   Manuals
·   System configurations (e.g. version of compiler used)
Team System provides this capability through the concept of Work Item Tracking. A work item can define any unit of information that is part of the software development lifecycle; it can be any of the abovementioned configuration items. A powerful feature of Team System is that you can link work items to other artifacts, which allows your developers and managers to track which changes are related to which requirements and bugs.

·   Configuration Control:  Refers to the policy, rules, procedures, information, activities, roles, authorization levels, and states relating to the creation, updates, approvals, tracking and archiving of items involved with the implementation of a change request.
With Team System, policies can be created and enabled inside Visual Studio to enforce the following standard check-in conditions, among others:
·   Clean Build: The project must compile without errors before check-in.
·   Static Analysis:  Static analysis must be run before check-in.
·   Testing Policy: Smoke tests and unit tests must be run before check-in.
·   Work Items: One or more work items must be associated with the check-in.
You can also configure Team System to track additional check-in notes.  The standard notes in MSF Agile are: Security Reviewer, Code Reviewer and Performance Reviewer.  As with most of Team System, this is again fully customizable.
Roles and authorization levels are covered by Team System Security.  By locking down privileged operations to only a few members, you can ensure that the roles within your team are always enforced.  You can for example specify which team members can administer, start or resume a build and so much more.

·   Status accounting: Recording and reporting the status of components and change requests and gathering vital statistics about components in the product.
Team System is hosted on SQL Server 2005 and leverages its built-in reporting capabilities. As many as 50 pre-built reports are expected to ship with the release of Team System. These will include reports on project health, code churn, test passes, test coverage, active bugs and more. These reports are directly available from the Reporting Services report manager portal or can be viewed on the project portal.

·   Configuration verification and audit: Verify that a product’s requirements have been met and that the product design that meets those requirements has been accurately documented before a product configuration is released. Before acceptance into the live environment, new releases, builds, equipment and standards should be verified against the contracted or specified requirements.
This is where the Dynamic Systems Initiative (DSI) comes into play.  DSI is a way to design for deployment or, to put it another way, to design for operations. Key features of DSI are:
·   The visualization of systems and services
·   The tracking of each system or service to properly describe it to another system or service
It will in other words allow solution architects to validate their design against an infrastructure architect's datacenter design and vice versa.  The first Microsoft implementation of DSI will be called the System Definition Model (SDM).  SDM describes your application and its deployment environment in layers.  The following layers are defined:
·   Application
·   Application Hosting
·   Logical Machines and Network Topology
·   Hardware
Microsoft will further expand on their Dynamic Systems Initiative and will utilize the SDM model in Systems Management Server (SMS) and Microsoft Operations Manager (MOM).

·   Build management:  Manage the processes and tools that are used to create a repeatable and automatic build.
Team System's Team Build provides an out-of-the-box solution to meet the following requirements:
·   Get source code files for the build from the source code repository
·   Run static code analysis
·   Compile sources
·   Run unit tests
·   Save code churn, code coverage and other build information
·   Copy the binaries to a predefined location
·   Generate reports
The Team Build wizard helps you create the automated build script, and since the execution engine of Team Build is MSBuild, you can customize the process and accomplish any number of custom tasks.

·   Process management:  Enforces consistent processes and promotes user accountability across the application life cycle, resulting in communication and productivity improvements enterprise-wide.
Team System will include two Microsoft Solutions Framework (MSF) methodologies:

·   MSF for Agile Software Development
·   MSF for CMMI Process Improvement
While in MSF Agile it is more important to respond to change than to follow a plan, it is my understanding that MSF for CMMI process improvement is the only MSF methodology that fully provides process management support.  It is an excellent process to use on your project if your company is looking to achieve a measured, baseline competency in software development.  In short it will bring the process management side of the application lifecycle to your company and project.

·   Teamwork:  Controlling the work and interactions between multiple developers on a product.
One of the great advantages of Team System being such a highly integrated environment is that it can instantly improve the communication on your team. All members of a team need to be in sync with their managers and need to work together to get their assignments done in time. Managers can always see what the state of the project is, how much code churn is in the nightly builds, when the project has reached zero bugs, and so on. Your team must constantly manage the same requirements, tasks, source code, bugs and reports. Because of the way these are integrated in Team System, it automatically strengthens the communication pathways of your team and software.

I hope that by now you will agree that Team System is the new do-it-all tool in the SCM's toolbox.  Team System is not a methodology or process, but it integrates very well with the MSF methodologies. Team System integrates most of the tools that a software configuration manager has dreamt about.  Microsoft will provide third-party tool vendors and you with an SDK that allows you to take advantage of the common functionality that Team System provides.  Well, I cannot imagine an SCM who is not eagerly anticipating the release of Visual Studio 2005 Team System, but only time will tell.


SCM | Team System
07/11/2005 20:47:44 UTC  #  Comments [2] 
 Wednesday, 18 May 2005

Generics Part I - Introduction

Generics, or parametric polymorphism, allow classes, structs, interfaces, delegates and methods to be parameterized by the type of data they utilize.  They have the following advantages over dynamic approaches:

  • stability: stronger compile-time type checking
  • expressivity: invariants expressed in type signatures
  • clarity: fewer explicit conversions between data types
  • efficiency: a reduced need for run-time type checks and boxing operations

Object-based generic design pattern

Without generics, programmers often use the Object-based generic design pattern.  This is a complicated term for something as simple as storing data of any type as an instance of the type Object.  The following List class stores its data in an Object array, and the Add method and the indexer use the Object type to accept and return data:

public class List
{
      private object[] _items;

      public object this [int index] {...}
      public void Add(object value) {...}
}

The Object-based generic design applied in the above sample provides the List class with parameter type flexibility.  It is possible to add a value of any type to the List, but this solution still has the following drawbacks:

  • When the value passed to the Add method is a value type, it is automatically boxed. 
  • When the value returned by the indexer is a value type it must be unboxed with an explicit type cast.  Boxing and unboxing operations add a performance overhead because they involve memory allocations and runtime type checks.
  • When the value returned by the indexer is a reference type, an explicit cast to the appropriate type has to be performed.  This has a performance penalty for the required runtime checking and is quite tedious to write.
  • There is no compile-time type checking.  As a result, problems may not become apparent until the code is executed and an InvalidCastException is thrown, as the sketch below illustrates.
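A short sketch of these drawbacks in action, assuming the Add method and indexer of the List class above are implemented:

List list = new List();
list.Add(42);                 // the int is boxed into an object
int first = (int)list[0];     // an explicit cast and an unbox operation are required
list.Add("forty-two");        // compiles without complaint...
int second = (int)list[1];    // ...but throws an InvalidCastException at runtime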

As you may or may not suspect by now, generics allow us to overcome the abovementioned drawbacks.

 

What are Generics?

Generics provide class creators with the tools to create types that have type parameters.  Rather than forcing conversions to and from Object, instances of generic types accept the types for which they were created and allow us to store the data without any conversions.  The type parameter is a placeholder until an actual type is specified when the type is used.  The following example uses the parameter TypeOfList as the type for:

  • the internal _items array
  • the parameter type for the Add method
  • the return type for the indexer.

public class List<TypeOfList>
{
      private TypeOfList[] _items;

      public TypeOfList this [int index] {...}
      public void Add(TypeOfList value) {...}
}

 

When you want to use the generic class List, you must specify the actual type for the type parameter TypeOfList:

List<int> list = new List<int>();

 

In the constructed type List<int>, every occurrence of the type parameter TypeOfList is replaced with the type argument int.  A constructed type is a generic type with its type arguments specified.  When an instance of the type List<int> is used in code, the following applies:

  • The native storage for the _items array is int[], which provides improved storage efficiency compared to the non-generic List. 
  • Generics provide strong typing. This means that at compile time and at runtime it is verified that only int values, or values that can be implicitly converted to an int, are used as arguments. 
  • The indexer returns an int value, which eliminates the explicit cast to an int when a value is retrieved and thus eliminates the unbox operation.
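Contrast this with the Object-based sample earlier: the same mistake that previously surfaced as a runtime InvalidCastException is now caught by the compiler.

List<int> list = new List<int>();
list.Add(42);              // no boxing: 42 is stored directly as an int
int first = list[0];       // no cast and no unbox operation
list.Add("forty-two");     // compile-time error: cannot convert from 'string' to 'int'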

Generic type declarations may have any number of type parameters.  The following example illustrates this:

public class Dictionary<TypeOfKey, TypeOfValue>
{
      public void Add(TypeOfKey key, TypeOfValue value) {...}
      public TypeOfValue this [TypeOfKey key] {...}
}
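Using the class is then a matter of supplying a type argument for each type parameter; the snippet below assumes the Add method and indexer above are implemented:

Dictionary<string, int> ages = new Dictionary<string, int>();
ages.Add("Alice", 30);     // TypeOfKey = string, TypeOfValue = int
int age = ages["Alice"];   // the indexer returns an int, no cast needed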

 

Constraints

So far the benefits of generics only apply to constructed types.  But when you are coding a generic class, you will find that, inside the class, the type parameters are no more specific than the Object type.  Without additional information it is impossible to call any type-specific method on values of the type parameters.  To provide this information, C# permits an optional list of constraints to be supplied for each type parameter.  A type parameter constraint allows you to specify a requirement that a type argument must fulfill.  Constraints are declared using the keyword where, followed by:

  • the name of the type parameter and a colon;
  • a class type (optional);
  • one or more interface types (optional);
  • the new() constraint, which allows you to specify the requirement for a public parameterless constructor (optional).

public class Dictionary<TypeOfKey, TypeOfValue>
      where TypeOfKey : IComparable<TypeOfKey>
      where TypeOfValue : IPersistable, new()
{
      public void Add(TypeOfKey key, TypeOfValue value) {...}
}

 

Given the abovementioned declaration, where the type argument for TypeOfKey is constrained to implement IComparable<TypeOfKey>, the following applies:

  • the compiler guarantees that any type argument supplied for TypeOfKey implements IComparable<TypeOfKey>;
  • all members of IComparable<TypeOfKey> are directly available on values of the type parameter TypeOfKey.

public void Add(TypeOfKey key, TypeOfValue value)
{
      // CompareTo is available on key because of the IComparable<TypeOfKey> constraint.
      // existingKey is a hypothetical key that was stored by an earlier call to Add.
      if (key.CompareTo(existingKey) < 0) {...}
}
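To satisfy these constraints, the type argument for TypeOfKey must implement IComparable<TypeOfKey> and the type argument for TypeOfValue must implement IPersistable and have a public parameterless constructor. A hypothetical Customer class could be used like this:

// Customer is an illustrative type: it implements IPersistable and has a
// public parameterless constructor, so it satisfies both constraints on TypeOfValue.
public class Customer : IPersistable
{
      public Customer() {...}
}

// int implements IComparable<int>, so it is a valid argument for TypeOfKey.
Dictionary<int, Customer> customers = new Dictionary<int, Customer>();

// object does not implement IComparable<object>, so the compiler rejects this line:
// Dictionary<object, Customer> invalid = new Dictionary<object, Customer>();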

 

Generic Methods

When you only need a type parameter in a particular method, you will probably want to use a generic method.  A generic method has one or more type parameters specified between < and > delimiters after the method name.  The type parameters can be used within the:

  • parameter list
  • return type
  • body of the method. 

A generic AddDictionary method will probably look like this:

public void AddDictionary<TypeOfKey, TypeOfValue>(Dictionary<TypeOfKey, TypeOfValue> dictionary) {…}
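Calling a generic method is straightforward; in most cases the C# compiler can even infer the type arguments from the method arguments, so they do not have to be written out:

Dictionary<string, int> ages = new Dictionary<string, int>();

// Explicit type arguments...
AddDictionary<string, int>(ages);

// ...or let the compiler infer them from the parameter type.
AddDictionary(ages);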

 

Good news?

Yes, there are at least 4 more posts to come on generics: 

  • Generics Part II – Generic Declarations
  • Generics Part III - Advanced Generics;
  • Generics Part IV - Generic Performance and Guidelines;
  • Generics Part V - Generic Implementation or what I make of it.

.NET 2.0
05/18/2005 21:37:04 UTC  #  Comments [2] 

 Monday, 25 April 2005

What is Software Configuration Management?

Software Configuration Management (SCM) means many things to many people.  An excellent place to start is to define the goals of SCM.  A good SCM process makes it possible for developers to work together on a project in an efficient manner, both as individuals and as members of a team.  Based on different publications we can state that successful configuration management should enable the following:

·         Configuration identification:  Developers should be able to work together on a project, sharing common code.  This allows a developer to fix a bug in the source code for release A while another developer is developing a new feature that is scheduled for release B.

·         Configuration control:  Ensures that proposed changes to configuration items are fully coordinated and documented.  This can for example include the switch from .NET 1.1 to .NET 2.0.

·         Status accounting:  Recording and reporting the status of components and change requests and gathering vital statistics about components in the product.  For example: how many files were affected by fixing a bug?

·         Configuration Verification and Audit: Verify that a product’s requirements have been met and the product design that meets those requirements has been accurately documented before a product configuration is released.  It’s important to remember that this state needs to be maintained through the entire project lifecycle.

·         Build management: Manage the processes and tools that are used to create a repeatable and automatic build.

·         Process management: Enforces consistent processes and promotes user accountability across the application life cycle, resulting in communication and productivity improvements enterprise-wide.  You can really see this as getting the heads pointing in the same direction.

·         Teamwork: Controlling the work and interactions between multiple developers on a product. For example, this addresses the question, "Were all the locally made changes of the programmers merged into the latest release of the product?"

 

SCM did not grow out of a manager's wish to limit and control developers in their creativity.  It is there to protect you from rogue behavior.  The "active rogue" is easier to identify and control because it's out in the open and often verbal.  The "passive rogue" is pretty much anyone on the team who will sacrifice quality when the heat starts to rise.  

When crunch time comes, and it will come, believe me, you need a process that keeps people from being tempted to put in quick fixes that ultimately degrade the quality of your application. 

 

I hope that this post has given you an understanding of what SCM has to offer to yourself and your organization.  In a following post I will elaborate on the theoretical and practical sides of the wonderful world of the software configuration manager.

 


SCM
04/25/2005 20:29:32 UTC  #  Comments [3] 
 Monday, 18 April 2005

Welcome!

As you know there are a lot of .NET blogs out there and I was always surprised when people kept encouraging me to share my thoughts on .NET development through a blog.

Well it seems they have won…

About me… I started my development career in 2001 and began playing with .NET from the first days on the job.  Since then I’ve had the opportunity to be in a lot of different roles: developer, team leader, teacher, software configuration manager and technical architect.  You can say that .NET and my career grew up together and boy did it go fast!

I will endeavor to provide you with cool stuff concerning many of the current and future parts of the .NET platform, agile development and the ins and outs of software configuration management, and so much more.

I hope you will enjoy my upcoming posts,

Steven

 

04/18/2005 21:10:35 UTC  #  Comments [1]