Tuesday, 20 September 2005

PDC05 - Software Development with Visual Studio Team System

During the Professional Developers Conference, I attended the excellent pre-conference session on Software Development with Visual Studio Team System. The session was delivered by Richard Hundhausen and Steven Borg. Their presentation was divided into two main parts:

A two hour theoretical introduction to Visual Studio 2005 Team System;
A four hour end-to-end demo on how to build a distributed application using Team System.

In the first part they started by summarizing the different challenges that companies encounter while building distributed solutions: lack of communication, lack of tool integration, lack of (good) process guidance, and the need to increase the predictability of success.
Steven continued by stating that about 70% of private and about 90% of all government software development projects still fail.
After these shocking figures we composed a definition of Team System as an integrated suite of tools to support the entire software development lifecycle. Although I think we will only fully accomplish this in future versions of Team System, the current version is an incredible starting point.
As you undoubtedly know, Microsoft will provide the Visual Studio Team System front-ends in three different editions:

Team Edition for Software Architects
Team Edition for Software Developers
Team Edition for Software Testers
The Team Suite Edition combines all of the functionality of the three editions above.

Our speakers covered the different features for each edition and continued by covering the different areas where Team Foundation Server will provide you with huge productivity improvements:

Work Item Tracking: scenarios, quality of service requirements, risks, tasks, bugs, and custom work items
Reports
Software Configuration Management: merging, branching, shelving, etc.
Build Management
With a room full of developers, the session could not continue without explaining how Team Foundation Server provides all of this functionality. They did so by walking through the Team Foundation Services architecture, and they made a very important remark for anyone who wants to talk to these services directly: although they are exposed as web services, developers are strongly encouraged to use the provided Team Foundation object model instead, because it provides the process orchestration you would miss if you talked to the web services through your own proxies.
In the following minutes we stopped in the land of the Software Configuration Manager, where they explained Team System's version control system and compared it to Visual SourceSafe 2005. My conclusion: Visual SourceSafe 2005 is definitely a huge improvement on its predecessors, but it still relies on the file system for its storage, whereas Team Foundation Version Control is in a league of its own and uses SQL Server 2005 as its storage provider.
One of the great things about Team System is that it is not a locked-down platform: it is extensible in almost any way you can imagine. There is an exhaustive eventing model and, as I mentioned before, you can use the many APIs that are exposed. You can also extend Team System by providing your own methodology templates, or use the different extensibility toolkits that are available today and will become part of the actual Team System SDK. It is very reassuring to hear that a lot of VSIP partners are planning on extending Team System, which definitely proves that they believe in its future.
During the remainder of the introduction Steven and Richard covered the different editions of the Team System front-ends:

The first edition they tackled was Team System for the project manager, which might seem a bit strange since there is no direct mapping to a Team System edition. A combination of the Team System client and the Team System functionality provided in Excel and MS Project will make the project manager a respected member of the Team System family. The following activities available to a project manager were discussed and illustrated:

The creation and configuration of team projects;
Creation and Assignment of work items;
Project status monitoring by querying work items or viewing reports on the project portal. There are many different reports, but if I had to pick a favorite it would be the code churn report, which Microsoft believes to be an excellent predictor of the possible failure of a project;
They did make an important remark with regard to the MS Project integration: currently there is no integration with MS Project Server, but you can bridge the gap yourself through the .mpp files.

The second edition was the architect edition, and we defined the architect's problem space. Today's connected systems are becoming more and more complex, and an architect is often confronted with communication problems between architects and developers, and between development and IT operations teams. Team System distinguishes two different types of architects:

Software/Application Architects
Network/Infrastructure Architects.
The following activities available to an architect were discussed and illustrated:
Create Logical Datacenter Diagrams (LDD)
Create Application Diagrams (AD)
Compose application components into “systems”
Create trial deployment diagrams and validate the AD against the LDD
Generate deployment reports
Generate and implement application components (web services)
The long term view of these diagrams is that you should be able to auto-deploy your applications or make recommendations concerning your deployment, before you actually begin the installation of the application.
Team System will allow you to fail early and often, and it will help a team avoid last-minute disputes with IT operations when it comes to deploying your applications to their servers. The System Definition Model (SDM) provides a common language for describing all aspects of an IT system, both the constraints and the settings.
In the following minutes the speakers explained the designers that are included in the Distributed System Designers in Team System:
Logical Datacenter Designer (LDD)
Application Connection Designer (ACD)
System Designer
Deployment Designer
You can find the definitions of these designers in the Visual Studio 2005 Team System: Designing Distributed Systems for Deployment article on the MSDN website.
The next question on the agenda was: “has UML died with the arrival of Domain Specific Languages?” The answer is obviously no: while UML helps you describe how to build the code, a DSL helps you describe the capabilities of the code.

The third edition was the edition for developers and, as it was becoming a habit by then, we defined the developer's problem space. Developers face many problems, but we focused on the following:

Developers are not writing quality code;
Inadequate source control system and practices;
No way to relate code changes to justification.
I had no problem agreeing with the statements above and was happy to know that many developers will be helped by the upcoming release of Team System. The speakers built on that by providing a list of activities (besides writing code) that a developer probably performs, and then explained which Team System features help him do a better job:
Unit Testing: The unit testing facilities in Visual Studio 2005 are much more powerful and easier to use than NUnit, and the integration with the code coverage tool is much better than that between NUnit and NCover.
Static Analysis: This will test your code for common problems, best practices, naming guidelines, etc. The tools that are incorporated are PreFast for C and C++, and FxCop for .NET.
Source Control: The speakers focused on the integrated check-in capabilities and check-in policies.
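To give an idea of what the integrated unit testing looks like, here is a minimal test written with the Visual Studio 2005 unit testing attributes from the Microsoft.VisualStudio.TestTools.UnitTesting namespace; the Calculator class is a hypothetical class under test, invented for this sketch:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test.
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSumOfOperands()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}
```

Once the attributes are in place, Visual Studio discovers and runs the test for you and can report code coverage on the same run.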

The fourth edition was the edition for testers, and again we defined the problem space in which these Team System citizens live:

Testing controls are not integrated;
There is no version control for tests;
There are no integrated communication mechanisms.
There are different testing types and Team System has an out of the box set of tools that help the tester perform following activities:
Unit testing and component testing, and code coverage: It is important to notice that the unit testing and component testing activities have a significant overlap with development activities and as such both developers and testers can take advantage of these tools.
Web testing: Tools that support functional web testing. These tests are created in the following steps:
Create a recorded test, which simply records the user's keystrokes and the URLs of the pages visited
Browse your website until you are done
Convert the recorded test into a coded web test and customize it further
Load testing: These tools allow you to test the behaviour of a web site under load. You can use your web tests as the basis for load tests.
Test management: This can be done by means of work items; these are units of work assigned to members of your product team.
As the speakers indicated, Microsoft is not providing all the tools required by the tester role but has been actively encouraging third-party vendors to ensure their tools can integrate closely with the Testers edition, and as mentioned Beta 2 comes with an extensive Application Programming Interface (API).

As you will definitely realize, there are more than four roles involved in the software development process; examples are the business analyst, the GUI designer, etc. In the first version of Team System they will still be able to participate by:

Accessing the real-time reports on the portal;
Using Excel or Project to maintain work items;
Using Team Explorer or command-line utilities to view/edit project artifacts.
Team System is for the entire team, but not all members are equally supported. Although this may look like a serious shortcoming at first, please realize that this is a version 1 product and the product supports most of the members in ways you could only dream of a year ago.

This concludes my summary of the theoretical part of the presentation given by Richard and Steven; in the following hours they went through an end-to-end demonstration of the features that were discussed in the first part. As the title said, it was an introduction to Team System, and so far it was the best I have seen.

Team System
09/20/2005 19:41:52 UTC  #  Comments [2] 
 Tuesday, 13 September 2005

Problems Installing Windows Vista PDC-Build (5219)

During the keynote we were informed that a PDC build of Windows Vista is available on one of the DVDs that compose the goods package. After the PDC05 keynote session, I collected the package. Despite all the promises, I'm a bit wary about the performance of running Windows Vista in a Virtual PC, so I decided to install it as a dual boot. The first problem I encountered was that the Windows Vista 5219 build is only available as a DVD ISO file, and since I do not have a DVD writer attached to my laptop, I decided to mount the ISO file, copy the files to a dedicated hard disk and start the installation from there. You can understand that I was pretty happy to notice that the installation launched. The details booklet, which accompanied the goods DVDs, provided me with the key necessary to complete the installation, so I was well on my way. You can imagine my disappointment when I encountered the following problem:
“Setup cannot validate your product key. Please review your product key and ensure that it has been entered correctly.”
This is illustrated by the screenshot below:

My question to you: “Has anybody succeeded in launching the Windows Vista build 5219 installation, and if not is this problem possibly caused because I launch the installation from my hard disk?”


Today, all PDC05 attendees received an email that includes a remedy for the Windows Vista PDC-Build (5219) installation and activation problems. The remedy is also posted
I really hope this helps to solve all your activation problems.

09/13/2005 22:46:53 UTC  #  Comments [10] 
 Tuesday, 16 August 2005

Professional Developers Conference 2005
I'm very happy to announce that I will be attending this year's Professional Developers Conference. I will be staying at the fabulous Renaissance Hollywood Hotel.
Since the PDC05 is about exploring the leading edge with other developers, I thought it was a good idea to share the sessions and tracks that I hope to attend. I registered for the following pre-conference sessions:
Software Development w/Visual Studio Team System by Richard Hundhausen;
Patterns & Practices for Designing Service Oriented Applications - An Illustrated Example by Ron Jacobs, Eugenio Pace, Peter Provost, Beat Schwegler, Arvindra Sehmi, and Don Smith.
During the actual conference days, I will be focusing on the Presentation and the Communications tracks, but I have not really decided on which specific sessions to follow. What I have already decided to do, is frequently blog about my conference experiences, and I hope that this somewhat eases the pain of those who cannot go.
On my return flight to Brussels, I can already imagine myself thinking about how to earn a ticket for the next PDC.

So far the good news... The bad news is that as of 16/08/2005 the registration for the PDC05 is closed.

08/16/2005 23:44:10 UTC  #  Comments [0] 
 Sunday, 07 August 2005

Testing Levels

In previous versions of Visual Studio, a tester had to resort to many different tool vendors for his testing equipment. The release of Microsoft Visual Studio 2005 Team System will be an important milestone in testing land, since it marks the recognition of the tester as a first-class citizen in Visual Studio. It will provide testers with tools that support testing throughout the entire software development and maintenance lifecycle. Does this mean that every tool a tester ever dreamt of, or even really needs, is in Team System? No, but it's a great start. In this post I'll focus on the test levels that are defined in the “V” model and the Visual Studio 2005 and Team System features that support them. The “V” model has become an industry-wide standard for visualizing the levels of tests. Figure 1 is an illustration of the “V” model.

I regularly notice that there is a lot of confusion about the what, how and when while discussing these levels. Before we can start to define the different testing levels, it is probably wise to define what a unit and a component are:

A unit is the smallest compilable component. It does not include any communicating components and it’s generally created by one programmer.
A unit is a component. The integration of one or more components is also a component. The reason for saying "one or more" as contrasted to "two or more" is to allow for components that call themselves recursively.

On the right-leg of the “V” model you’ll find these levels:
Unit Testing:
During unit-testing the developer should always make sure the unit is tested in isolation, and that it is the only possible point of failure. In unit testing communicating components and called components should be replaced with stubs, simulators, or trusted components. Calling components should be replaced with drivers or trusted super-components.
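As an illustration of the stub technique described above, here is a minimal sketch in C#; the interface and class names are hypothetical, invented for this example:

```csharp
using System;

// The called component is hidden behind an interface so it can be stubbed.
public interface ITaxService
{
    decimal GetTaxRate(string country);
}

// Stub: a trivial stand-in that removes the real service as a point of failure.
public class TaxServiceStub : ITaxService
{
    public decimal GetTaxRate(string country) { return 0.21m; }
}

// The unit under test depends only on the interface, never on the real service.
public class InvoiceCalculator
{
    private ITaxService _taxService;

    public InvoiceCalculator(ITaxService taxService)
    {
        _taxService = taxService;
    }

    public decimal TotalWithTax(decimal amount, string country)
    {
        return amount * (1 + _taxService.GetTaxRate(country));
    }
}
```

A test can now exercise InvoiceCalculator with the stub, so a failure points at the calculator itself and not at the tax service it calls.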
Component Testing:
During component testing the same scenarios are tested as during unit testing but all stubs and simulators are replaced with the real thing.
Integration Testing:
Integration testing identifies problems that occur when components are combined. Let A and B be two components where A calls B. Figure 2 illustrates integration testing for components A and B:
Test Suite A contains the component level tests of component A.
Test Suite B contains the component level tests of component B.
Tab are the tests in A’s suite that cause A to call B.
Tsab are the tests in B's suite whose inputs could equally be produced by a call from component A.
When you combine the test suites Tsab and Tab, you have a set of component tests that you can use after you modify component A or B to verify that the two still function correctly together.

System Testing:
In system testing the tester will verify if the developed system or subsystems still meet the requirements that were set in the functional and quality specifications.
Acceptance Testing:
In acceptance testing the user and system manager will verify if the developed system still meets the requirements that were set in the functional and quality specifications. This level of testing is done in an environment that simulates the operational environment in the greatest possible extent.
Release Testing:
Prior to a public release of a program you must ensure that all bugs that were intended to be fixed were actually fixed. In release testing the following aspects are verified:

A mixture of previously failed-and-fixed tests and tests that have always passed;
Virus checking of the final installation package. Too many cases of virus distribution have been reported not to take this additional precaution;
A comparison of all features actually working reliably with the prepared documentation. It is crucial that the documentation reflects all design decisions made during development and testing.

There are many variations on these definitions and the “V” model, but the key point is that the abovementioned testing levels are formally defined. A wise man once said: “Even when laws have been written down, they ought not always to remain unaltered.” Despite the fact that we are not talking about laws here, you can always leave a comment when you have another understanding of these definitions.

Team System | Testing
08/07/2005 15:43:44 UTC  #  Comments [2] 
 Monday, 18 July 2005

Courses versus Conferences
About two weeks ago I had the opportunity to attend TechEd in Amsterdam, and this year's edition was even better than the previous one. In my opinion there are two major differences between a conference like TechEd and a regular course at a training center:

A course will provide you with a predefined set of aspects of a product, normally chosen by a qualified smart guy. During a conference you have the freedom to constantly choose what you want to learn. But as always, freedom should be accompanied by a certain amount of responsibility. I gained a huge amount of weight in the first months after I moved out of my parents' house, whereas I could have eaten those healthy vegetables (it's a freedom<>responsibility joke). You can really see a course as your parents' house, whereas a conference is more like living on your own.

Advice when attending a conference:
Choose your sessions wisely, think long term;
Go to as many sessions as you can;
A soccer player is only as good as his last season; you're only as good as the relevant knowledge you have... you see where I'm heading.

As a former teacher, I always found that my primary objective was to provide my students with a lot of practical information and guidance that they could directly apply in the field. Planting seeds of knowledge in a student's mind was my secondary objective. As you know, a seed needs a lot of water and nurturing (studying and hard work) in order to become a tall oak (smart guy). When you attend a conference, I am convinced that your priorities should be the other way around. You really should be looking for the seeds, and that is where responsibility kicks in (again).

Advice for after the conference:
Don't wait too long to go over the sessions again; you'll be surprised how much you have forgotten in a week;
Create a list of interesting subjects that you want to learn before a conference. Try to gather additional information on these subjects afterwards and try to master them in the following weeks. If you succeed at mastering these subjects, you’ve had a good conference;
If you have ever been to a conference, you'll know that the real work only begins when the conference is finished. Remember that hard work spotlights the character of people: some turn up their sleeves, some turn up their noses, and some don't turn up at all.


The best advice I can give you when choosing between a course and a conference is the following:

To be conscious that you are ignorant is a great step to knowledge.
Benjamin Disraeli (1804 - 1881), Sybil, 1845

07/18/2005 19:27:12 UTC  #  Comments [1] 
 Monday, 11 July 2005

SCM and Team System, a marriage made in heaven?

In a previous post I stated the goals of successful configuration management and as you undoubtedly realized they are not easily accomplished in the field.  Now I'll try to give you an insight on how Team System helps you to tame this untamable beast.

A good SCM process makes it possible for developers to work together on a project in an efficient manner, both as individuals and as members of a team.  A development team must constantly manage requirements, tasks, source code, bugs and reports.  Gathering each of these item types in the same tool strengthens the communication pathways of teams and software.

Based on the goals mentioned in the previous post on SCM I'll try to indicate how Team System helps you to accomplish them:

·   Configuration identification:  This is often referred to as the process of recognizing the applicability of the baseline to a set of configuration items. It applies not only to source code, but to all documents that contribute to the baseline.  Examples are:
·   All code files
·   Compilers
·   Compile / build scripts
·   Installation and configuration files
·   Inspection lists
·   Design documents
·   Test reports
·   Manuals
·   System configurations (e.g. version of compiler used)
Team System provides this capability through the concept of work item tracking. A work item can define any unit of information that is part of the software development lifecycle; it can be any of the configuration items mentioned above. A powerful feature of Team System is that you can link work items to other artifacts, which allows your developers and managers to track which changes are related to which requirements or bugs.

·   Configuration Control:  Refers to the policy, rules, procedures, information, activities, roles, authorization levels, and states relating to the creation, updates, approvals, tracking and archiving of items involved with the implementation of a change request.
With Team System policies can be created and enabled inside Visual Studio that will enforce following standard check-in conditions, as well as others:
·   Clean Build: The project must compile without errors before check-in.
·   Static Analysis: Static analysis must be run before check-in.
·   Testing Policy: Smoke tests and unit tests must be run before check-in.
·   Work Items: One or more work items must be associated with the check-in.
You can also configure Team System to track additional check-in notes.  The standard notes in MSF Agile are: Security Reviewer, Code Reviewer and Performance Reviewer.  As with most of Team System, this is again fully customizable.
Roles and authorization levels are covered by Team System Security.  By locking down privileged operations to only a few members, you can ensure that the roles within your team are always enforced.  You can for example specify which team members can administer, start or resume a build and so much more.

·   Status accounting: Recording and reporting the status of components and change requests and gathering vital statistics about components in the product.
Team System is hosted on SQL Server 2005 and takes advantage of its built-in reporting capabilities. As many as 50 pre-built reports are expected to ship with the release of Team System. These will include reports on project health, code churn, test passes, test coverage, active bugs, and more. These reports are directly available from the Reporting Services report manager portal or can be viewed on the project portal.

·   Configuration verification and audit: Verify that a product's requirements have been met and that the product design that meets those requirements has been accurately documented before a product configuration is released. Before acceptance into the live environment, new releases, builds, equipment and standards should be verified against the contracted or specified requirements.
This is where the Dynamic Systems Initiative (DSI) comes into play.  DSI is a way to design for deployment or to put it in another way to design for operations. Key features of DSI are:
·   The visualization of systems and services
·   The tracking of each system or service to properly describe it to another system or service.
It will, in other words, allow solution architects to validate their design against an infrastructure architect's datacenter design and vice versa.  The first Microsoft implementation of DSI will be called the System Definition Model (SDM).  SDM describes your application and its deployment environment in layers.  The following layers are defined:
·   Application
·   Application Hosting
·   Logical Machines and Network Topology
·   Hardware
Microsoft will further expand on the Dynamic Systems Initiative and will utilize the SDM model in Systems Management Server (SMS) and Microsoft Operations Manager (MOM).

·   Build management:  Manage the processes and tools that are used to create a repeatable and automatic build.
Team System's Team Build provides an out-of-the-box solution that meets the following requirements:
·   Get source code files for the build from the source code repository
·   Run static code analysis
·   Compile sources
·   Run unit tests
·   Save code churn, code coverage and other build information
·   Copy the binaries to a predefined location
·   Generate reports
The Team Build wizard helps you create an automated build script, and since the execution engine of Team Build is MSBuild, you can customize the process and accomplish any number of custom tasks.


·   Process management:  Enforces consistent processes and promotes user accountability across the application lifecycle, resulting in communication and productivity improvements enterprise-wide.
Team System will include two Microsoft Solutions Framework (MSF) methodologies:

·   MSF for Agile Software Development
·   MSF for CMMI Process Improvement
While MSF Agile values responding to change over following a plan, it is my understanding that MSF for CMMI Process Improvement is the only MSF methodology that fully provides process management support.  It is an excellent process to use if your company is looking to achieve a measured, baseline competency in software development.  In short, it brings the process management side of the application lifecycle to your company and project.

·   Teamwork:  Controlling the work and interactions between multiple developers on a product.
One of the great advantages of Team System being such a highly integrated environment is that it can instantly improve the communication on your team. All members of a team need to stay in sync and work together to get their assignments done in time. Managers can always consult the state of the project: how much code churn there is in the nightly builds, when the project has reached zero bugs, and so on. Your team must constantly manage the same requirements, tasks, source code, bugs and reports; because of the way these are integrated in Team System, it automatically strengthens the communication pathways of your team and software.

I hope that by now you will agree that Team System is the new do-it-all tool in the SCM's toolbox.  Team System is not a methodology or process, but it integrates very well with the MSF methodologies.  Team System integrates most of the tools that a software configuration manager has dreamt about.  Microsoft will provide third-party tool vendors and you with an SDK that allows you to take advantage of the common functionality that Team System provides.  Well, I cannot imagine an SCM who is not eagerly anticipating the release of Visual Studio 2005 Team System, but only time will tell.

SCM | Team System
07/11/2005 20:47:44 UTC  #  Comments [2] 
 Wednesday, 18 May 2005

Generics Part I - Introduction

Generics, or parametric polymorphism, allow classes, structs, interfaces, delegates and methods to be parameterized by the type of data they utilize.  This has the following advantages over dynamic approaches:

  • stability: stronger compile-time type checking
  • expressivity: invariants expressed in type signatures
  • clarity: fewer explicit conversions between data types
  • efficiency: a reduced need for run-time type checks and boxing operations

Object-based generic design pattern

Without generics, programmers often use the object-based generic design pattern.  This is a complicated term for something as simple as storing data of any type as an instance of the type Object.  The following List class stores its data in an Object array, and the Add method and the indexer use the Object type to accept and return data:

public class List
{
      private object[] _items;

      public object this[int index] {...}

      public void Add(object value) {...}
}


The object-based generic design applied in the sample above provides the List class with parameter type flexibility.  It is possible to add a value of any type to the List, but this solution still has the following drawbacks:

  • When the value passed to the Add method is a value type, it is automatically boxed.
  • When the value returned by the indexer is a value type, it must be unboxed with an explicit type cast.  Boxing and unboxing operations add a performance overhead because they involve memory allocations and runtime type checks.
  • When the value returned by the indexer is a reference type, an explicit cast to the appropriate type has to be performed.  This carries a performance penalty for the required runtime checking and is quite tedious to write.
  • There is no compile-time type checking, which means that problems may not become apparent until the code is executed and an InvalidCastException is thrown.
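These drawbacks are easy to demonstrate with the framework's own object-based ArrayList, which follows the same pattern as the List sample above:

```csharp
using System;
using System.Collections;

class Program
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add(42);                  // the int is boxed into an object

        int value = (int)list[0];      // an explicit cast (unbox) is required
        Console.WriteLine(value);      // 42

        list.Add("not a number");      // the compiler happily accepts this...
        // ...but the cast below would only fail at run time:
        // int bad = (int)list[1];     // throws InvalidCastException
    }
}
```

Nothing stops a caller from mixing types in the same list, and the mistake only surfaces when the offending element is cast back out.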

As you may or may not suspect by now, generics allow us to overcome the abovementioned drawbacks.


What are Generics?

Generics give class creators the tools to create types that have type parameters.  Rather than forcing conversions to and from Object, instances of generic types accept the types for which they were created, allowing us to store the data without any conversions.  The type parameter is a placeholder until an actual type is specified when the type is used.  The following example uses the parameter TypeOfList as the type for:

  • the internal _items array
  • the parameter type for the Add method
  • the return type for the indexer.

public class List<TypeOfList>
{
      private TypeOfList[] _items;

      public TypeOfList this[int index] {...}

      public void Add(TypeOfList value) {...}
}



When you want to use the generic class List, you must specify the actual type for the type substitute TypeOfList:

List<int> list = new List<int>();


In the constructed type List<int>, every occurrence of the type substitute TypeOfList is replaced with the type argument int.  A constructed type is a generic type that is named with its type arguments.  When an instance of the type List<int> is used in code, the following applies:

  • The native storage for the _items array is int[], which is more storage-efficient than the object[] of the non-generic List. 
  • Generics provide strong typing: at compile time and at runtime it is verified that only int values, or values implicitly convertible to int, are used as arguments. 
  • The indexer returns an int, which eliminates the explicit cast on retrieval and with it the unbox operation.
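A small sketch of what this buys us, using the BCL's List&lt;T&gt; (the framework names its type parameter T rather than TypeOfList):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        List<int> list = new List<int>();

        list.Add(42);             // stored directly as an int, no boxing
        int value = list[0];      // no cast, no unbox

        // list.Add("oops");      // does not compile: the argument must be an int

        Console.WriteLine(value); // prints 42
    }
}
```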

Generic type declarations may have any number of type parameters.  The following example illustrates this:

public class Dictionary<TypeOfKey, TypeOfValue>
{
      public void Add(TypeOfKey key, TypeOfValue value) {...}

      public TypeOfValue this[TypeOfKey key] {...}
}


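Using the BCL's generic Dictionary&lt;TKey, TValue&gt; (the same idea as the two-parameter declaration above, with the framework's shorter parameter names) looks like this:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        Dictionary<string, int> ages = new Dictionary<string, int>();

        ages.Add("Alice", 30);    // key is a string, value is an int
        int age = ages["Alice"];  // the indexer returns an int directly, no cast

        Console.WriteLine(age);   // prints 30
    }
}
```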


So far the benefits of generics only apply to constructed types.  When you are writing the generic class itself, however, the type parameters are still no more specific than Object: you cannot call any type-specific method on values of a type parameter.  To provide this information, C# permits an optional list of constraints for each type parameter.  A type parameter constraint specifies a requirement that a type argument must fulfill.  Constraints are declared using the keyword where, followed by:

  • the name of the type parameter;
  • a class type (optional);
  • one or more interface types (optional);
  • the new() constraint, which requires a public parameterless constructor (optional).

public class Dictionary<TypeOfKey, TypeOfValue>
      where TypeOfKey : IComparable<TypeOfKey>
      where TypeOfValue : IPersistable, new()
{
      public void Add(TypeOfKey key, TypeOfValue value) {…}
}



Given the declaration above, where the type argument for TypeOfKey is constrained to implement IComparable&lt;TypeOfKey&gt;, the following applies:

  • the compiler guarantees that any type argument supplied for TypeOfKey implements IComparable&lt;TypeOfKey&gt;;
  • all members of IComparable&lt;TypeOfKey&gt; are directly available on values of the type parameter TypeOfKey.

public void Add(TypeOfKey key, TypeOfValue value)
{
      // CompareTo is available on key thanks to the IComparable<TypeOfKey> constraint;
      // existingKey stands for a key already stored in the dictionary
      if (key.CompareTo(existingKey) < 0) {…}
}



Generic Methods

When you only need a type parameter in a particular method, you will probably want a generic method instead of a generic type.  A generic method declares one or more type parameters between < and > delimiters after the method name.  The type parameters can be used within the:

  • parameter list
  • return type
  • body of the method. 

A generic AddDictionary method might look like this (note the type parameter list after the method name, which is what makes it a generic method):

public void AddDictionary<TypeOfKey, TypeOfValue>(Dictionary<TypeOfKey, TypeOfValue> dictionary) {…}
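When calling a generic method you can specify the type arguments explicitly, or in many cases let the compiler infer them from the arguments.  A minimal sketch with a hypothetical Swap method:

```csharp
using System;

class Program
{
    // A generic method: the type parameter T is declared after the method name
    static void Swap<T>(ref T a, ref T b)
    {
        T temp = a;
        a = b;
        b = temp;
    }

    static void Main()
    {
        int x = 1, y = 2;

        Swap<int>(ref x, ref y);  // explicit type argument
        Swap(ref x, ref y);       // type argument inferred from x and y

        Console.WriteLine(x);     // prints 1 (swapped twice, so back where we started)
    }
}
```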


Good news?

Yes, there are at least 4 more posts to come on generics: 

  • Generics Part II - Generic Declarations;
  • Generics Part III - Advanced Generics;
  • Generics Part IV - Generic Performance and Guidelines;
  • Generics Part V - Generic Implementation or what I make of it.

.NET 2.0
05/18/2005 21:37:04 UTC  #  Comments [2] 

 Monday, 25 April 2005

What is Software Configuration Management?

Software Configuration Management (SCM) means many things to many people.  An excellent place to start is to define the goals of SCM.  A good SCM process makes it possible for developers to work together on a project in an efficient manner, both as individuals and as members of a team.  Based on different publications we can state that successful configuration management should enable the following:

  • Configuration identification:  Developers should be able to work together on a project, sharing common code.  This allows one developer to fix a bug in the source code for release A while another developer works on a new feature scheduled for release B.

  • Configuration control:  Ensures that proposed changes to configuration items are fully coordinated and documented.  This can, for example, include the switch from .NET 1.1 to .NET 2.0.

  • Status accounting:  Recording and reporting the status of components and change requests, and gathering vital statistics about components in the product.  For example: how many files were affected by fixing a bug?

  • Configuration verification and audit:  Verify that a product's requirements have been met, and that the product design meeting those requirements has been accurately documented, before a product configuration is released.  It's important to remember that this state needs to be maintained throughout the entire project lifecycle.

  • Build management:  Manage the processes and tools used to create a repeatable and automated build.

  • Process management:  Enforces consistent processes and promotes user accountability across the application lifecycle, resulting in communication and productivity improvements enterprise-wide.  You can really see this as getting all heads pointing in the same direction.

  • Teamwork:  Controlling the work and interactions between multiple developers on a product.  For example, this addresses the question: "Were all the locally made changes of the programmers merged into the latest release of the product?"


SCM did not grow out of a manager's wish to limit and control developers' creativity.  It is there to protect you from rogue behavior.  The "active rogue" is easier to identify and control because it is out in the open and often vocal.  The passive rogue is pretty much anyone on the team who will sacrifice quality when the heat starts to rise.  

When crunch time comes, and believe me it will come, you need a process that keeps people from being tempted to put in quick fixes that ultimately degrade the quality of your application. 


I hope that this post has given you an understanding of what SCM has to offer to yourself and your organization.  In a following post I will elaborate on the theoretical and practical sides of the wonderful world of the software configuration manager.


04/25/2005 20:29:32 UTC  #  Comments [3] 
 Monday, 18 April 2005


As you know, there are a lot of .NET blogs out there, and I was always surprised when people kept encouraging me to share my thoughts on .NET development through a blog of my own.

Well it seems they have won…

About me… I started my development career in 2001 and began playing with .NET from my first days on the job.  Since then I’ve had the opportunity to fill a lot of different roles: developer, team leader, teacher, software configuration manager and technical architect.  You could say that .NET and my career grew up together, and boy did it go fast!

I will endeavor to provide you with cool stuff concerning many of the current and future parts of the .NET platform, agile development, the ins and outs of software configuration management, and so much more.

I hope you will enjoy my upcoming posts,



04/18/2005 21:10:35 UTC  #  Comments [1]