This chapter covers
- Continuous integration theory
- A Hello World CI example
- A preliminary list of CI tools
As developers, we’re interested in creating the best possible applications for our customers with the least amount of work. But with applications becoming more complex and having more moving parts, creating great applications is getting harder, even with advances in tools such as Visual Studio and the .NET Framework.
One of the keys to improving applications and productivity is to automate some of the work. Continuous integration (CI) is one of the best ways to do this.
Have you ever written code that did its small task perfectly, but then discovered unexpected side effects when you integrated that piece of code with the rest of the application? Do you always have success integrating your code with code from other developers? Have you ever shipped an application, only to find that it didn’t work for the customer but you couldn’t duplicate the error? Can you always predictably measure the state of the code for your current project? CI helps alleviate these problems and more.
In this chapter, you’ll learn what CI is all about, why you should use it, and how to overcome objections to its adoption from your team. We’ll briefly introduce you to several free or low-cost tools such as CruiseControl.NET, Subversion, MSBuild, Team Foundation Server, and TeamCity that are part of a complete CI process. Throughout the rest of the book, we’ll explain in detail how to use these tools.
This chapter also demonstrates a simple CI process through an example using batch files. We’ll also get started on a more complex Visual Studio Solution that we’ll use to demonstrate various CI tools and techniques. But before we do any of that, you need to understand exactly what CI is.
When you adopt CI, it’s likely that you’ll make major changes in your development processes because you’ll move from a manual system to an almost completely automated system. Along the way, you may meet resistance from your team members. This section provides you with reasons to use CI and how to overcome objections. But before we take you there, we need to define CI.
One of the best definitions of continuous integration comes from Martin Fowler (www.martinfowler.com/articles/continuousIntegration.html):
Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily—leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.
This definition contains an important phrase: “multiple integrations per day.” This means that several times each day, the CI system should build and test the application. But multiple integrations per day isn’t where you begin your journey into CI; we recommend against starting there, because many shops not yet using CI will meet enough resistance just automating the build, let alone running multiple builds per day. (We’ll talk more about overcoming team resistance later in this chapter.) Ideally, you should set up your CI process just as you create software: by taking small steps, one at a time.
Here is another definition:
CI is the embodiment of tactics that gives us, as software developers, the ability to make changes in our code, knowing that if we break software, we’ll receive immediate feedback ... [It is] the centerpiece of software development, as it ensures the health of software through running a build with every change.
Paul Duvall, Continuous Integration
The key phrase here is “the centerpiece of software development.” This means whatever development process and methodology you use, CI is a key part of it.
Here’s our working definition: CI is an automated process that builds, tests, analyzes, and deploys an application to help ensure that it functions correctly, follows best practices, and is deployable. This process runs with each source-code change and provides immediate feedback to the development team.
As we were discussing this definition, we wondered what a build is. Is it the same as clicking Build on the Visual Studio menu, or something more? We finally decided that the definition varies depending on what you’re doing. Early in the development process, a build can be as simple as compiling and unit testing the code. As you get closer to release, a build includes additional and more complete testing and running code metrics and analysis. You can also go as far as combining all the different files into an install set and making sure it works correctly.
Finally, don’t get caught up with the meaning of continuous. CI isn’t truly continuous, because integration occurs only at specific intervals or when triggered by a specific event. Integration is continuous in that it happens regularly and automatically.
Now that you know what CI is, let’s see how it changes your development process.
Is your development process agile? Do you use extreme programming (XP), scrum, or something else? Is your company deeply rooted in waterfall methodologies? Does your process fall somewhere between agile and waterfall?
It really doesn’t matter which methodology you use, because you probably follow pretty much the same process when it comes to writing code:
1. Check out the needed source files from your source code repository.
2. Make changes to the code.
3. Click Build on the Visual Studio menu, and hope everything compiles.
4. Go back to step 2. You did get compile errors, didn’t you?
5. Run unit tests, and hope everything is green. We hope you’re running unit tests.
6. Go back to step 2. Unit tests do fail. In this case, you’ll see red. Perhaps in more ways than one.
7. Refactor the code to make it more understandable, and then go back to step 5.
8. Check the updated code into the source code repository.
When you start using CI, you’ll follow the same process. But after you check in the source code, you’ll take additional steps (see figure 1.1).
Figure 1.1. In the CI process, developers check code into the version control repository. The automated CI system polls the repository for changes and then builds and tests the code. Results are posted to a feedback system where team members can see the results.
9. An automated system watches the source control system. When it finds changes, it gets the latest version of the code.
10. The automated system builds the code.
11. The automated system runs unit tests.
12. The automated system sends build and test results to a feedback system so that team members can know the current status of the build.
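The automated loop in steps 9–12 is easy to picture as a small script. Everything below is a stand-in, shown as a portable shell sketch for brevity: a real CI server would call your version control client, MSBuild, and a unit test runner, and post results to a feedback site. Only the control flow is the point.

```shell
#!/bin/sh
# Sketch of steps 9-12: get latest, build, test, report.
# Each function is a placeholder for a real tool invocation.

get_latest() { echo "fetched revision $1"; }             # stand-in for the source control client
build_code() { echo "build succeeded"; }                 # stand-in for MSBuild
run_tests()  { echo "tests passed"; }                    # stand-in for a unit test runner
report()     { echo "STATUS: $*" >> ci-feedback.log; }   # stand-in for the feedback system

integrate() {
    rev=$1
    get_latest "$rev" || { report "revision $rev: update failed"; return 1; }
    build_code        || { report "revision $rev: build failed";  return 1; }
    run_tests         || { report "revision $rev: tests failed";  return 1; }
    report "revision $rev built and tested OK"
}

integrate 42
tail -1 ci-feedback.log    # prints: STATUS: revision 42 built and tested OK
```

A real server loops (or is triggered by a check-in) instead of integrating a single revision, but the shape, get, build, test, report, is the same.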
At this point, you may be asking yourself several questions, such as, “Why do tests need to be run multiple times?” or “Why can’t I just click Build in Visual Studio?” The answer to these questions is the same: automating the building, testing, and running of other processes through CI ensures that the code from multiple people integrates, compiles, and functions correctly, and that it can be reproduced the same way every time on a different machine than your workstation. Also, consider that you may have an application with many assemblies. When you click Build, you may only build the assemblies you’re responsible for. Even if you’re a one-person shop, adopting CI will improve the quality of your software.
Automating the build and the unit tests are steps in the right direction, but a good CI process can do more—and eventually you’ll want it to, so you can maximize its usefulness. Things like running code-analysis tools, running tests in addition to unit testing, building an install package, and simulating an install on the customer’s PC are all possible through a CI process. But you won’t do all these things with every change.
The CI steps we’ve outlined make it sound like every time a developer checks in code, a build is triggered. This is the ultimate goal and the reason it’s called continuous integration. Reread the quote from Paul Duvall: he says you should build “with every change.” Martin Fowler says, “multiple integrations per day.” That’s pretty close to continuous. But remember, continuous is the eventual goal. You don’t want to start there.
One way to begin setting up your CI system is to start by getting the latest changes from source control and building the application. Then add unit tests. And only do this daily at first. You can call this a daily build; but as you’ll see in a moment, a daily build includes other things that don’t run when you do the incremental build.
When you have this build running every day, add two or three builds per day that only build and test. It won’t take long, and you’ll be building continuously and adding different builds to do different things. The exact types of builds you need depend on your environment and applications. Some of the more common builds are listed in table 1.1.
Table 1.1 Common build types

| Build type | How it’s used |
| --- | --- |
| Continuous/Incremental | Runs when code is checked in. Does a quick compile and unit test. |
| Daily/Nightly | Does a compile and a full suite of unit tests, and possibly additional testing such as FitNesse. |
| Weekly | Does a compile, full unit testing, and additional testing such as FitNesse. |
| Release | Creates an install set and then runs and tests the install process. |
| QA | Creates a build just for the QA team. |
| Staging | Builds and copies assemblies to a staging server. |
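One common way to realize table 1.1 is a single build script that takes the build type as an argument, so every build runs through the same code path. This is a hypothetical sketch: the echoed step names are placeholders for real tool invocations (MSBuild, NUnit, FitNesse, an installer).

```shell
#!/bin/sh
# One script, several build types. Each echoed step stands in for a
# real tool call in an actual CI setup.

run_steps() {
    for step in "$@"; do
        echo "running: $step"
    done
}

build() {
    case "$1" in
        incremental) run_steps "quick compile" "unit tests" ;;
        nightly)     run_steps "compile" "full unit tests" "FitNesse tests" ;;
        release)     run_steps "compile" "full unit tests" "create install set" "test install" ;;
        *)           echo "unknown build type: $1" >&2; return 1 ;;
    esac
}

build incremental
```

The CI server then simply schedules `build incremental` on every check-in, `build nightly` once a day, and so on.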
The most important build, and the one you want to get to, is the continuous or incremental build. This build is automatically triggered whenever source code is checked in to the repository. Because this build can potentially run several times per day, and one build may run immediately upon completion of another, you want the continuous build to run quickly—preferably in under 5 minutes. This build should get the updated code, rebuild the assembly it’s in, and then run some preliminary unit tests. Reports are sent to the feedback mechanism.
Next is the daily build, often called the nightly build. Rather than running whenever the code changes, the daily build is scheduled to run once per day, usually in the middle of the night. Because you don’t need to worry about the next build starting immediately, the daily build typically runs a complete suite of unit tests against all the code. Depending on your environment, you may want to add additional automated tests or code analysis.
Another build type is the weekly build, which runs automatically and usually on the weekend. Once a week, you should run a code analysis and additional tests with tools like Selenium, FitNesse, and NUnitForms. You may also want to create documentation with Sandcastle or do continuous database integration. As you get closer to your release date, you may want to run the weekly test build more often. You’ll also want to run a release build.
The purpose of the release build is to create and test an install set. The release build is typically manually triggered. But after the build is started, all the other steps are handled automatically. In a release build, you’ll build all the source code, increment the version number, and run a full suite of tests. You’ll then create the install set and simulate the install. Good CI server software will have a way to check if the install was successful and then roll back the changes, so that the test system is ready for the next round of install testing.
Your environment may require other types of builds. For example, you may have a build that copies assemblies to a QA environment after the build. Or you can copy files to a staging or production server. The bottom line is that many different types of builds are needed for different purposes. But because steps are automated, you can be sure that things are done the same way every time.
As you introduce CI and different types of builds, some team members may resist the changes. It’s important to overcome these objections so your CI process is successful.
With all these builds going on and developers having to change their routine and check in code more often, you may get objections from team members. Some common objections are as follows:
- CI means increased maintenance. Someone will have to maintain the CI system. This will take them away from programming duties. At first, there will be extra overhead to set up the system; but when a project is fully integrated, your team will save time because it will be faster and easier to test the application and detect and fix bugs. Many teams report that after the CI process is running, maintenance takes less than an hour per week.
- This is too much change, too fast. It’s difficult to adapt to the new way of doing things. Don’t implement everything at once. Start out with a simple build once per day, and then add unit testing. After the team is comfortable with this, you can add one or two additional builds per day or start doing code analysis. By taking the process in baby steps, you’ll get more buy-in into the process.
- CI means additional hardware and software costs. Start out small with an old PC as your CI server if you need to. Eventually, you’ll want better hardware so that you can run builds quickly (remember, the integration build should run in under 5 minutes); but for a build two or three times a day, older hardware will work. If you use the tools we discuss here, your software costs will be minimal.
- Developers should be compiling and testing. We’re not taking those responsibilities away from developers. We’re moving much of the grunt work to an automated system. This allows programmers to use their brains to solve the business problems of the application. This makes the developers more productive where it counts: writing and debugging code.
- The project is too far along to add CI. Although it’s better and easier to place a new project under a CI process, the truth is, most of the work we do is maintenance on existing projects. An existing project may not have unit tests, but you’ll still use source control and need to do builds. You can benefit from CI no matter how far along your project is.
One of the authors once worked in an environment where each developer was responsible for a different executable in a 15-year-old C++ application. Each executable was built locally and then copied to a shared folder on the network where QA picked it up and tested it. Problems arose because each developer used a different version of third-party components, and each developer used different compiler switches. This meant that if one developer was on vacation, and a bug in their code needed to be fixed, it was difficult to reproduce their development environment on another developer’s workstation. It was so troublesome that management finally decided that unless the customer was down due to the bug, the fix would wait for the responsible programmer to get back to the office. If CI had been in place, many of the issues with the software wouldn’t have happened.
Here are several reasons to use CI in your development process:
- Reduced risks—By implementing good CI processes, you’ll create better software, because you’ll have done testing and integration earlier in the process, thus increasing the chances of catching bugs earlier. We’ll talk more about reducing risks in the next section.
- Deployable software—If you automate the installation process, you’ll know that the software installs as it should.
- Increased project visibility—The feedback mechanism allows project members to know the results of the build and where the problems are. Bugs can be fixed sooner rather than later, reducing costs and the time spent fixing bugs.
- Fast incremental builds—In October 2009, ZeroTurnaround released the results of a survey of more than 500 Java developers. In the survey, 44% said their incremental builds took less than 30 seconds, and another 40% said build times were between 1 and 3 minutes. The overall average build time was 1.9 minutes. Although the survey was for Java apps, there’s no reason not to believe your .NET projects will have fast incremental build times. Fast incremental build times mean you get build and test results sooner, helping you fix bugs earlier in the development process.
Don’t let team objections get you down. The initial resistance will eventually give way to acceptance as the team works with the CI system. Virginia Satir, a family therapist, developed the Satir Change Model, which shows how families deal with change. Steven Smith wrote that the same model can be used to show how new technology is adopted (http://stevenmsmith.com/ar-satir-change-model/). The change process involves five steps:
1. Late status quo—Everyone is working in the current process and knows how it works.
2. Resistance—A new element is introduced. People are hesitant to change how they’re working. The late status quo works fine. Why change it?
3. Chaos—The new element is adopted. There is no longer a normal way of doing things. Daily routines are disrupted.
4. Integration—People slowly become adjusted to the new way of doing things. It gets easier to do their jobs with the new methodology.
5. New status quo—The new element becomes fully integrated into the system. People now look at it as normal.
Almost every team has adopted new methodologies at one time or another. This process should sound familiar to you.
As you meet resistance from the team, be persistent in implementing the changes. Team members will eventually accept them. Some team members will adopt CI more quickly than others, who may need more convincing. Perhaps you should show them how CI reduces risk in the development process.
Your customer doesn’t like risk. Your manager doesn’t like risk. Your project manager should have plans in place to mitigate risk. In the end, you shouldn’t like risk either. CI is all about reducing risk.
Perhaps the biggest risk in software development is schedule slippage—in other words, the project being delivered late. Because of the feedback mechanism in the CI process, team members always know the status of the current build, which helps you know whether the project is getting behind schedule. Feedback mechanisms will be presented in chapter 5.
The next biggest risk is bugs. It’s been shown that the later in the process you find a bug, the more costly it is to fix. Some estimates suggest that it costs as much as $4,000 to fix a single bug in internal, home-grown corporate web applications. In 2005, a well-known antivirus company had a bug in an update. That single bug caused customers to lose confidence in the antivirus software and forced the company to lower its quarterly income and revenue forecasts by $8 million. Do you want your company to experience similar costs? One of the tenets of CI is that bugs are fixed as soon as they’re found. By integrating and testing the software with each build, you can identify and fix bugs earlier in the process. We’ll discuss unit testing in chapter 6 and application testing in chapter 7.
Have you considered how many different code paths exist in your application? Have you tested each if/else combination? How about every case of a switch statement? In his book Testing Computer Software (John Wiley & Sons, 1999), Cem Kaner mentions a 20-line program written by G. J. Meyers that has 100 trillion paths. Code coverage is a methodology that checks which paths are tested and which aren’t. A great thing about code coverage is that you can automate it in your CI process. It’s impossible to test every combination; but the more you test, the fewer issues will be uncovered by your customers. Code coverage will also be presented in chapter 6.
Another risk is database updates. It’s never easy to add columns to a table or new tables to a database. With continuous database integration, you’ll know that database changes work properly and without data loss. We’ll discuss continuous database integration in more detail in chapter 11.
Developers often hate coding and architectural standards, but they have a useful purpose: they ensure that the application follows best practices, which in turn makes the application perform better and makes it easier to maintain. Code reviews catch some of these issues; but because code reviews are a manual process, things are missed. Why not automate standards compliance as part of your CI process? We’ll cover code analysis in chapter 8.
Comments are rarely put in code, and documentation is generated even less often. Many people say that if you’re agile, you don’t have documentation, but this isn’t true. Agile says that you value working software over documentation. But some documentation is still needed, especially if you’re creating assemblies for use by other developers. Here’s another opportunity for automation in your CI process, and one that’ll be covered in chapter 9.
How do you know that your installation process works correctly? There are few things that frustrate users more than when they can’t install an application. Create and test the entire installation process in your CI system. We’ll cover deployment and delivery in chapter 10.
Finally, CI also increases visibility. It’s easier to see problems hiding in the project that without CI wouldn’t be found until much later in the development process, when they would be harder and much more costly to fix.
Now that you know what continuous integration is and how it can improve your development process, let’s see CI in action.
It seems that just about every computer book starts with a Hello World application. To help you understand the CI process, we’ve developed a simple C# application and simulated a CI server using a Windows script. Make sure you have .NET Framework 4.0 Extended installed. Throughout the book, we’ll use Visual Studio 2010. If you have it installed, you’re good to go.
To install the demo, create a miniCI folder, and then copy the demo files into it. To run the demo, open a command window, change the directory to the miniCI folder, and type Build. The results are shown in figure 1.2.
Figure 1.2. The miniCI application builds updated files, tests and deploys them, and then keeps checking for changes in the source code files.
The build script is an old command-line batch file. We used this tool to show you how easy it is to create something that resembles the CI process. We aren’t the only ones to try something like this: there are PowerShell scripts made to do the CI server’s job (see http://ayende.com/Blog/archive/2009/10/06/texo-ndash-my-power-shell-continuous-integration-server.aspx). The CI script, shown next, verifies that the input and output folders exist, compiles the Equals.cs file into an .exe, and then runs it to verify that it works. The application takes two parameters and returns true if they’re equal or false if they aren’t.
In the CI script, you verify that the work area on the build server is set up correctly. The original source file is compared to the file in the work area; if it’s different, it’s copied to the work area. To detect the differences, you can use the fc.exe tool that comes with Windows, which compares two text files and prints the differences on screen; the script redirects the command’s output to the null device to hide it from the user. The new work-area source file is then compiled into an .exe and tested. To test the application, the script uses a small trick: the program outputs 0 if the strings are identical. This is because you have to check the error level in the batch file; if the program returns something bigger than 0, the script assumes it’s an error. If the test is successful, the .exe is copied to the deploy folder, and the feedback mechanism is updated with the result.
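The logic just described can be sketched in portable shell form. Everything below is a stand-in: the original is a Windows batch file, cmp plays the role of fc.exe, and because csc.exe isn’t assumed to be available here, the “compile” step just copies a tiny script whose exit code mimics the Equals program (0 when the two arguments match, nonzero otherwise).

```shell
#!/bin/sh
# Portable-shell sketch of the batch file's check/compile/test/deploy logic.

mkdir -p work deploy                          # make sure the work area exists

# a toy stand-in for Equals.cs
printf '%s\n' '[ "$1" = "$2" ]' > Equals.src

# copy the source into the work area only if it differs (fc.exe's job)
cmp -s Equals.src work/Equals.src 2>/dev/null || cp Equals.src work/Equals.src

# "compile": in the real script this is a csc.exe call producing an .exe
cp work/Equals.src work/equals.run

# test: like the batch file, any nonzero exit code counts as a failure
if sh work/equals.run abc abc; then
    cp work/equals.run deploy/                # deploy on success
    echo "build OK" > feedback.txt            # update the feedback mechanism
else
    echo "build FAILED" > feedback.txt
fi
cat feedback.txt                              # prints: build OK
```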
Now that you’ve seen a simple example of how CI works, it’s time for us to introduce you to the tools that do the real work in continuous integration.
A complete CI process consists of several tools. You can buy expensive CI systems that are feature rich and often easy to set up and maintain; or you can use tools that aren’t as feature rich and often require some work to set up but are either free or low cost. Either way, no one tool does everything you need in a complete CI system. In this book, we’ll work with free or low-cost tools and show you how they work and how to integrate them into a fully functional CI process. In this section, we’ll give a brief introduction to several tools, starting with those that you must have.
Five tools are required to get started with CI. At a minimum, you should have these tools as part of your initial CI setup.
The first essential tool is source control. Source control systems are most often used to store each revision of the source code so that you can go back to any previous version at any time. But you should also use the source control system to store customer notes, development documentation, developer and customer help files, test scripts, unit tests, install scripts, build scripts, and so on. In fact, every file used to develop, test, and deploy an application should be saved into source control. There’s a debate in the developer community about whether this should include binaries that you can build; that decision is up to you and should be based on the needs of your team.
You have many source control options, ranging from high-end enterprise tools from IBM Telelogic that integrate with requirements and bug-reporting systems, to Visual SourceSafe (VSS) from Microsoft, which has been around for years. You can spend thousands of dollars on these tools or find ones like Subversion and Git that are open source and free. Even if you don’t use CI, you should have source control, no matter the size of your team.
Microsoft discontinued the aging and not well-respected VSS in early 2010 and replaced it with Team Foundation Server Basic. But many teams continue to use VSS and have no plans to change in the near future.
This book looks at mostly free tools from the Subversion family and mostly paid tools related to Microsoft Team Foundation Server (TFS). If you choose Subversion, make sure you also install another tool such as AnkhSVN (http://ankhsvn.open.collab.net/), VisualSVN (www.visualsvn.com/visualsvn/), or TortoiseSVN (http://tortoisesvn.tigris.org/) that integrates into Windows Explorer or Visual Studio and makes it easy to work with Subversion. TortoiseSVN (see figure 1.3) seems to be the most popular (according to StackOverflow and SuperUser), so that’s what we’ll use for our examples. If you’re using TFS and have Visual Studio 2010 installed, you’re ready to go.
Figure 1.3. TortoiseSVN integrates into Windows Explorer to make it easy to manage your Subversion repository.
The second and most important tool you need is one to drive your CI process. This sits on the CI server, watches for changes in the source code repository, and coordinates the other steps in the CI process. It also allows on-demand builds to be made. Essentially, this application is the traffic cop of any CI system. The CI server software typically has its own reporting component that aggregates results from the other tools and passes them to the feedback mechanism.
The most common CI tools for .NET development are Team Foundation Server from Microsoft and open source tools such as CruiseControl.NET and Hudson. TeamCity is another application that sits between these two options, because it’s free for small teams but requires licensing fees as the size of the team or number of projects increase. We’ll discuss CI servers in more detail in chapter 4. Most CI tools are driven by a configuration file (see figure 1.4) that specifies when a build should take place and what specific steps are taken during the build or integration process.
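As a rough illustration of such a configuration file, here’s what a CruiseControl.NET project definition typically looks like. The project name, URL, and paths are invented for this sketch:

```xml
<cruisecontrol>
  <project name="MyApp">
    <triggers>
      <!-- poll the repository for changes every 60 seconds -->
      <intervalTrigger seconds="60" />
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\Builds\MyApp</workingDirectory>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
        <projectFile>MyApp.sln</projectFile>
        <targets>Build</targets>
      </msbuild>
    </tasks>
    <publishers>
      <!-- write build results where the web dashboard (the feedback mechanism) can read them -->
      <xmllogger />
    </publishers>
  </project>
</cruisecontrol>
```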
The feedback mechanism is another essential part of the CI process. Your team needs to know the status of any build at any time, especially when the build fails. There are many ways to provide feedback to the team, and we’ll discuss them in chapter 5. But the most common method is through a website.
Next, you need something to do the actual build. The two most common options are MSBuild and NAnt. MSBuild is part of the .NET Framework, so it’s free, and it most closely matches what happens when you click Build on the Visual Studio menu. NAnt is modeled after the Java tool Ant. It’s an open source solution, but it has received few updates in the past couple of years. Both applications are controlled by XML configuration files, but you can find GUI tools such as MSBuild Sidekick (see figure 1.5) to make the configuration files easier to maintain.
The build-manager application takes a Visual Studio solution or individual project files and calls the correct compiler, generally C# or VB.NET. The compilers come free as part of the .NET Framework. Some shops use MSBuild for the actual compilation of the source and then use NAnt for the remaining steps, such as running unit tests.
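As a sketch, a minimal MSBuild script that compiles a solution and then runs unit tests might look like the following. The file names are placeholders, and the NUnit step assumes the console runner is on the path:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         DefaultTargets="Test">
  <Target Name="Build">
    <!-- the MSBuild task invokes the right compiler for each project in the solution -->
    <MSBuild Projects="MyApp.sln"
             Targets="Rebuild"
             Properties="Configuration=Release" />
  </Target>
  <Target Name="Test" DependsOnTargets="Build">
    <!-- hand the compiled test assembly to the unit test runner -->
    <Exec Command="nunit-console.exe MyApp.Tests.dll" />
  </Target>
</Project>
```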
The last essential tool you need is a unit testing tool. The two most common options are MSTest and NUnit (see figure 1.6), but there are others such as MbUnit and xUnit.net. These tools run the unit tests that you write for your application and then generate the results into a text file. The text file can be picked up by your CI server software; a red/green condition for fail/succeed is reported to the team through the feedback mechanism.
Figure 1.6. NUnit runs unit tests on your code and reports the results as red/green for failure or success.
Although NUnit has a GUI tool, it can also be run as a console application as part of your CI process. Many of the tools we’ll discuss in this book have both a GUI and a command-line version. The command-line tools provide results as text or XML files that can be processed by your CI server software; the results are displayed using the feedback mechanism.
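The exit-code convention is what makes this composition work: the CI script only has to inspect the runner’s return status. A sketch of the idea, with `true` and `false` standing in for passing and failing runs of a tool like the NUnit console runner:

```shell
#!/bin/sh
# Turn a console test runner's exit code into the red/green status
# shown by the feedback mechanism. `true`/`false` are stand-ins for
# real test runner invocations.

record_result() {
    if "$@"; then
        echo "green"    # every test passed
    else
        echo "red"      # at least one failure
    fi
}

record_result true     # prints: green
record_result false    # prints: red
```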
Now that you know the required tools, let’s turn our attention to other tools that will help you write better code: code-analysis tools.
Code analysis plays an important part in the development process. Code from multiple team members should use the same naming conventions. And the code should follow best practices so that it’s robust, performant, extensible, and maintainable. Several code-analysis tools can assist in the process.
The first, FxCop (see figure 1.7), a free tool from Microsoft, analyzes code and reports possible issues with localization, security, design, and performance. The tool is targeted at developers creating components for other developers to use, but application teams are finding FxCop a useful part of their CI process.
Figure 1.7. FxCop reports problems in code that can be issues with design, performance, security, or localization.
Another free Microsoft tool is StyleCop (see figure 1.8). It comes with Visual Studio and is delivered with a set of MSBuild plug-ins for standalone usage. This tool checks your code against best-practice coding standards. It compares your code to recommended coding styles in several areas including maintainability, readability, spacing, naming, layout, documentation, and ordering.
Both of these tools generate analysis reports that can be used by your CI server software and integrated into the build report available via the feedback mechanism.
NCover (see figure 1.9) is a coverage tool that checks that all code paths are being tested. So is NCover an analysis tool or a testing tool? The truth is, it’s a little of both.
NCover uses either MSTest or NUnit to run the tests and can integrate with several CI server applications. But there are additional test tools, and they’re the subject of the next section.
Earlier in the chapter, we talked about unit testing tools as an essential part of the CI process. But other test tools can be run against your code to help ensure that the application functions correctly.
One such tool is Selenium, an open source tool developed by ThoughtWorks. Selenium has record and playback capabilities for authoring tests that check web applications. If you’re creating WinForms, Windows Presentation Foundation (WPF), or Silverlight applications, you may be interested in White, which allows testing of your UI classes. Finally, there’s FitNesse. This testing tool allows you to specify the functionality of the application; then tests are run against the code to ensure that it works as specified. Chapter 6 is devoted to showing how to integrate these tools with your CI process.
There are also several other tools you can add to your CI system.
Have you ever put XML comments into your code? You can extract those comments and compile them into useful documentation; that's the purpose of Sandcastle. These comments are most often added by component and framework vendors building help files, but they can also be useful for other team members, or even for yourself when you have to change a module a month from now.
You can also automate the building of your deployment. It doesn't matter if you use ClickOnce, Visual Studio Installer, WiX, Inno Setup, or something else. Having your CI process automatically build the application, create the install set, and then test the installation is a big step toward ensuring a good, solid application.
The tools presented here are by no means an exhaustive list. You can find many tools for things like code visualization, performance testing, static analysis, and more through a web search. Some of the tools cost several thousand dollars, and others are free. In this book, we take the low-cost approach and discuss tools that are free or available at a minimal cost. Tools like this emerge continuously in the community. To keep track of what’s new and hot, you can check community sites like StackOverflow and ALT.NET (http://altdotnet.org/).
Now that you’ve been introduced to many of the tools you’ll be using in your CI process, it’s time to introduce you to the project we’ll use throughout the book.
To better understand the CI process, you should have a simple but real-world example of software that you can put under source control in the next chapter and eventually integrate continuously. At this early point, you’ll only create a Visual Studio solution and the project files. You’ll add the code in later chapters.
You want a sample application that isn’t trivial but is as easy as possible for demonstration purposes. How about a leasing/credit calculator? It may not be a tool that’ll prevent the next worldwide financial crisis, but it’s a piece of software that’ll provide a straightforward answer to a simple question: how much will you pay monthly for your dream car or house?
The architecture isn't sophisticated, as you can see in figure 1.10.
Figure 1.10. You’ll create a CI process for an application consisting of one shared library with two UIs: Windows and web.
The application will consist of one shared library with two clients. One client will be made using WPF and the other with Web Forms. You’ll create a full CI process for this tool throughout this book. But remember, the project is only a pretext to show you how to set up a CI process and not a goal in itself.
You'll use Visual Studio 2010 to develop the application. In this chapter, we present two examples using ASP.NET and WPF, but the techniques described are suitable for other kinds of .NET applications, such as Windows Forms, ASP.NET MVC, Silverlight, and mobile apps. The details may differ, but the easiest way to set up the CI process correctly is to think about it from the beginning. It never hurts to first think about what you want to accomplish and then do it, rather than the other way around. The first part of the example is the core of the application: a shared library containing the financial math portion of the software.
The financial library project will contain all the necessary code to perform the financial calculation. You’ll start small with one class to perform a leasing/credit installment calculation.
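The core of that calculation is the standard annuity formula. The book's code is C#, but the math itself can be sketched in a few lines of Python (the function name and signature here are illustrative, not the actual CalcCore API):

```python
def installment(principal, annual_rate, months):
    """Monthly payment for an amortizing loan (standard annuity formula)."""
    if months <= 0:
        raise ValueError("months must be positive")
    monthly_rate = annual_rate / 12
    if monthly_rate == 0:
        # Interest-free loan: the principal is simply split evenly.
        return principal / months
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -months)
```

For example, a 20,000 loan at 6% annual interest over 48 months works out to roughly 470 per month.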
Pick a directory to work with. By default, Visual Studio stores projects somewhere deep in the user folder structure, which makes all the paths long and error prone. It’s better to make the path shorter—anything on the first level in the directory structure will do, such as C:\Project, C:\Development, or C:\Work. For this example, let’s use C:\Dev.
Consider organizing your project directory a little better than the Visual Studio defaults. If you create a project, you get a solution file named after the project. Visual Studio also creates a directory under the solution file, again named after the project, and places the project file inside. That may be fine for a quick experiment, but you should consider taking control of this structure.
First, if you plan to have more than one project in a solution, consider naming the solution differently than any of the projects. Second, remember the golden rule of CI: keep everything you need to integrate inside the repository. To meet this rule, you’ll need a directory to keep all your stuff in. You can name it lib or tools. And you can go even further, as you can see in figure 1.11.
Figure 1.11. Different project directory organization structures. Files can be grouped in logical collections. Pick a pattern that suits you.
Organizing your files in logical groups makes the solution directory tidy. For example, source files go in a directory called src, and documentation-related stuff goes in the doc directory. Of course, this isn’t divine knowledge, and you may have a good reason to do it differently. You may want to put the documentation in another repository or to not have a separate source directory. It’s up to you, but you should at least think about it.
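If you prefer to script such a layout rather than create it by hand, a few lines will do. This is only a sketch of the src/doc/lib/tools convention discussed above; adjust the directory names to taste:

```python
from pathlib import Path

def scaffold(root):
    """Create a tidy, CI-friendly solution layout under the given root."""
    root = Path(root)
    for sub in ("src", "doc", "lib", "tools"):
        # parents=True creates the root itself if needed; exist_ok makes
        # the script safe to re-run.
        (root / sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir())
```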
Here are the steps to organize the project:
1. Launch Visual Studio, and create a new solution. Select File > New > Project from the Visual Studio menu. The New Project dialog box opens (see figure 1.12).
Figure 1.12. You should start with a blank solution. Doing so will give you the ability to name it differently than the project inside and to place the projects where you want them. Don’t take a shortcut by creating a project first and letting Visual Studio create the solution file for you. You’ll end up with the Solution Explorer window shown.
2. In the Installed Templates list, select Other Project Types > Visual Studio Solutions, and then choose Blank Solution.
3. Enter Framework for the solution name and C:\Dev\ for the location, and click OK.
4. To add the financial-calculation library to the newly created solution, first choose File > Add > New Project. Then choose Visual C# > Windows, select Class Library, and name the library CalcCore. (In a real solution, you may have other libraries parallel to the core—for example, a project containing database-access classes or controls ready for reuse in various projects.) Your Solution Explorer should look similar to figure 1.13.
Figure 1.13. The initial project structure in the Visual Studio Solution Explorer. Remember, it doesn't have to correspond to the folder structure on the hard drive.
5. You need to change some Visual Studio defaults to give better results when you build the project and put it under the CI process. From the Solution Explorer, right-click the CalcCore project, and select Properties.
6. Switch to the Build tab. Under Errors and Warnings, check if the warning level is set to 4 (see figure 1.14).
Figure 1.14. Build properties set the right way. All warnings are treated as errors and given the maximum warning level. This setup is used for every configuration defined.
7. Under Treat Warnings as Errors, select All.
These settings cause the compiler to raise an error for every warning your code produces, including the least severe and even informational ones. The reason is that warnings tend to accumulate and, in the end, to be ignored completely. If you're certain there's no way around a particular warning, you can always suppress it by typing its number into the Suppress Warnings text box. You do this to eliminate unnecessary noise: like many developers, you probably tend to stop reacting to stimulation that occurs too often. Soon enough, you'd have dozens of warnings and miss the one important warning that leads to a real problem. And you don't need any problems, do you?
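Behind the dialog, these choices are stored as plain MSBuild properties in the .csproj file, which is worth knowing because your CI server builds from the project file, not from the IDE. The fragment below is roughly what Visual Studio writes (the suppressed warning number is just an example):

```xml
<PropertyGroup>
  <WarningLevel>4</WarningLevel>
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  <!-- Example only: a warning you've consciously decided to suppress -->
  <NoWarn>1591</NoWarn>
</PropertyGroup>
```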
Pay close attention to the platform setting shown in figure 1.14. In Visual Studio 2010, it defaults to x86 for executable .NET applications and to Any CPU for class libraries. This is different from Visual Studio 2008, which defaults to Any CPU in both cases. If your assembly doesn't use native Windows functionality, it's safer to set it to Any CPU; it'll then run properly on either a 32- or 64-bit processor.
Figure 1.15. It’s worth signing your reusable assemblies. Doing so makes it possible to reference them using strong assembly names.
8. Signing the assembly with a strong key gives your assembly a globally unique name. It makes identification and administration easier. It makes it possible to create a reference to an explicit version of software, and it prevents others from spoofing your libraries. Without a strong name, you won’t be able to add a library to the Global Assembly Cache (GAC), which may be a good idea for the financial-calculation library. But keep in mind that signing the library will make the versioning more complex. You can’t call a nonsigned library from a signed one. The general rule of thumb is to sign the libraries and leave the executables alone; you’ll have to decide for yourself what to do.
To sign the assembly, switch to the Signing tab (see figure 1.15), and select the Sign the Assembly check box.
9. From the Choose a Strong Name Key File drop-down list, select New. The Create Strong Name Key dialog box opens.
10. Enter CalcCore for the Key File Name, and deselect Protect My Key File with a Password. Press Enter in the resulting dialog box (shown in figure 1.15).
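In the project file, the signing settings from steps 8 through 10 end up as two MSBuild properties, so a build server can reproduce the signed build without any IDE involvement:

```xml
<PropertyGroup>
  <SignAssembly>true</SignAssembly>
  <AssemblyOriginatorKeyFile>CalcCore.snk</AssemblyOriginatorKeyFile>
</PropertyGroup>
```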
11. Delete the default Class1.cs file. You need to add a new program file that’ll eventually contain the code.
12. Delete all unused references. Unused references obscure the project's real dependencies, making it harder to see what the code actually needs, so it pays to keep the reference list minimal.
13. Right-click the CalcCore project, and select Add > New Folder from the context menu. Name the folder Math.
14. Create a class called Finance inside the folder. Did you notice that the namespace in the new program file contains the path to the file? It’s generally a good idea to have the path and namespace match. Using folders, you won’t clutter your solution with files, and it’ll be easier to manage a lot of files in the Solution Explorer window. Your Solution Explorer should look like figure 1.16.
Figure 1.16. Model solution with no unnecessary references. The project is signed and uses folders (that match the namespaces).
The Finance class will contain simple financial mathematical operations. The implementation details are irrelevant now; we’ll pay closer attention to the financial library in chapter 6, where you’ll write unit tests for the library.
You’ll need a user interface for the library. Using the same technique as for the framework solution and core project, you’ll create two user interfaces: one for Windows and one for the web.
Follow these steps:
1. Create a new blank solution and name it WindowsCalculator. From the Visual Studio menu, select File > Add > Project...
2. Select Visual C# > Windows > WPF Application from the list of project templates (see figure 1.17).
3. Name the project WinCalc.
4. Set the location to C:\Dev\.
5. Set the warning level as you did for the CalcCore project in the previous section. Don’t sign the executable project.
You now need to create a web application for the web-based UI of the calculator project:
1. Create a new blank solution and name it WebCalculator. From the Visual Studio menu, select File > Add > Project.
2. Select Visual C# > Web > ASP.NET Web Application from the list of project templates (see figure 1.18).
Figure 1.18. Creating the web calculator solution in Visual Studio 2010 is straightforward and should be familiar to users of earlier versions.
3. Set the location to C:\Dev\.
4. Set the warning level as you did for the CalcCore project in the previous section. Don't sign the executable project.
The solutions and projects for the loan calculator are now finished. Your folder structure should look like figure 1.19.
Figure 1.19. If you’ve followed the steps in this chapter, you should end up with a directory structure similar to this.
You’ve built the initial construction site. It contains three solutions, each with one project. In the next chapter, you’ll place the application under source control.
You should now understand what continuous integration can do for you and your team. It may seem confusing to set up and maintain CI with all the essential tools; but as you’ll learn throughout this book, it’s simple if you take things a step at a time.
In this chapter, we presented information to help you overcome objections from team members and reduce project risk. We also gave you a simple example that shows how CI looks for file changes and then builds and tests the code; and we introduced a more complex sample application that you’ll use throughout the book.
In addition, you were introduced to some of the tools you’ll see in depth later in the book. Specifically, we’ll focus on CruiseControl.NET, TeamCity, and TFS as CI servers, and show you how to integrate other tools with them. One of those, source control, is the first tool you need to set up and is the topic of the next chapter.