5 advantages of powerdelivery over vanilla Team Foundation Server

The open source powerdelivery project I’ve blogged about does a lot in a small package, so it’s natural to wonder why you would bother with it if you like the concepts of continuous delivery but have been using off-the-shelf Microsoft Team Foundation Server (TFS) builds and figure they are good enough. Here are a few reasons to consider it for streamlining your releases.

You can build locally

TFS lets you create custom builds, but you can’t run them locally. Since powerdelivery is a PowerShell script, as long as you fill out the values in your “Local” environment configuration file you can build on your own computer. This is great for single-computer deployments, demos, or working offline.

You can prevent untested changes from being deployed

You have to use the build number of a successful commit build as input to a test build, and the same goes for promoting builds from test to production. This prevents deploying to test or production with an unintended change made since the build you last verified.

You can add custom build tasks without compiling

To customize vanilla TFS builds, you need to use Windows Workflow Foundation (WWF *snicker*) and MSBuild. If you want to do anything in these builds that isn’t part of one of the built-in WWF activities or MSBuild tasks, you’re stuck writing custom .dlls that must be compiled, checked back into TFS, and referenced by your build each time they change. Since powerdelivery uses a script, you just edit the script and check it in, and your Commit build starts automatically.

You only need one build script for all your environments

You can use the vanilla technologies mentioned above to create builds targeting multiple environments, but you will have to create a custom solution. Powerdelivery does this for you out of the box.

You can reduce your deployment time by compiling only in commit (development)

The Commit function is only called in Local and Commit builds, so you can perform long-running compilation activities once and benefit from speedy deployments to your test and production environments.

Put all your environment-specific configuration in one place for a pleasant troubleshooting experience

Most software applications leverage a variety of third-party libraries, middleware products, and frameworks. Each of these tools typically comes with its own method of configuration. How you manage this configuration affects your ability to reduce time wasted tracking down problems caused by differences in the environment in which your application runs.

Configuration as it relates to delivering your software comes in two forms. The first is environment-neutral configuration: settings necessary for the tool or software to work that don’t change between your development, testing, production, or any other environment. The second is environment-specific configuration, which varies with each environment the software runs in.

When your application is delivered into an environment, whether on a server somewhere or your users’ devices, troubleshooting configuration problems is much easier if all environment-specific configuration is in one place. The best way to do this is to create a table in a database, or a single file, that stores name/value pairs. For example, the “ServerUrl” configuration setting might be set to “localhost” in the development environment, whereas in production it’s some domain name you probably purchased.

The problem with adopting this at first glance is that most tools have their own method of configuration, so to make this work you need to find a way to populate their configuration from this database or file. Do this with the following process:

  1. Create a table or file named “ConfigurationSettings” or “ApplicationSettings” for example, that holds the name/value pairs for environment-specific configuration. You can use nested pairs, or related tables if you need more complicated configuration.
  2. Create a build script for each environment (T-SQL, PSake, MSBuild, rake etc.) that populates the table or file with the values appropriate for it. If you have 4 environments, you will have 4 of these files or scripts.
  3. When you target a build at an environment, run the appropriate build script to overwrite the configuration values in that environment with the ones from the script. Note that I said overwrite, as you want to prevent people in the field from changing the configuration of your environment without doing a build. This is because configuration changes should be tested just like code.
  4. For each tool or asset that you want to configure, create a build script (PSake, MSBuild, rake etc.) that reads the values it needs by name from the table or file populated in step 3, and updates the configuration in the format needed. An example would be updating a web.config file’s XML data from the data in the table or file, or applying Active Directory permissions from the data in the table or file.
  5. Create a page, dialog, or view in your application that lists all of the data in the configuration table or file. This can be used by your personnel to easily see all the environment-specific configuration settings in one place.
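To make steps 1 through 4 concrete, here is a minimal sketch in Python, using SQLite for the “ConfigurationSettings” table and a web.config-style XML file as the asset being configured. The table name, setting names, and file layout are all illustrative assumptions, not part of any particular tool.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical per-environment settings; in practice these would live in
# one script or file per environment (step 2).
TEST_SETTINGS = {"ServerUrl": "test.internal", "DbConnection": "Server=testdb;"}

def apply_environment(conn, settings):
    """Overwrite (not merge) the ConfigurationSettings table (step 3)."""
    conn.execute("CREATE TABLE IF NOT EXISTS ConfigurationSettings "
                 "(Name TEXT PRIMARY KEY, Value TEXT)")
    conn.execute("DELETE FROM ConfigurationSettings")
    conn.executemany("INSERT INTO ConfigurationSettings VALUES (?, ?)",
                     settings.items())
    conn.commit()

def update_web_config(conn, path):
    """Read values by name and rewrite a web.config-style file (step 4)."""
    tree = ET.parse(path)
    for add in tree.getroot().iter("add"):
        row = conn.execute(
            "SELECT Value FROM ConfigurationSettings WHERE Name = ?",
            (add.get("key"),)).fetchone()
        if row:
            add.set("value", row[0])
    tree.write(path)
```

The same pattern extends to any other asset: each configuration script reads by name from the one table and writes in whatever format that tool expects.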

This may seem like a hoop to jump through considering Microsoft and other vendors already provide environment-specific configuration files for some of their technologies, but I still encourage you to do this for the following reasons:

  1. When something goes wrong in one environment that works in another, it is much faster to look at a page with a flat list of configuration settings than to look in source control at a bunch of files or scripts that can be anywhere in your source tree.
  2. When environment-specific configuration is stored in source control as scripts, you have an audit trail of how those changes have occurred over time in the history of each file.
  3. Whenever you need a new environment, you can simply create a new script with data for that environment and you already have an automated means of populating the configuration mechanisms used by all of the tools and libraries you leverage.
  4. When you need to provide environment-specific configuration for a new technology, you can script setting it up and not worry about whether it supports environment specific methods out of the box.

Pay off your technical debt by preferring API clarity to generation efficiency

I’ve built the technical side of my career on combining technologies from Microsoft – easy to sell into enterprises that require the confidence that comes from extensive support contracts and a huge market footprint – with open source technologies that steer the direction of the industry ahead of the enterprise curve, eventually to be embraced by it.

Microsoft has always provided powerful developer tools in its Visual Studio product line. They focus on providing more features than any other vendor, and on the flexibility to allow developers to design their software with the patterns that make the most sense to them. Because of this, the community is full of discussion, and there are always new ways to combine their technologies to do similar things – but with quite a bit of variance in the architecture or patterns used to get them done. It can be daunting as a new developer, or a new member of a team, to comprehend some of the architectural works of art created by well-intentioned astronauts.

After I learned my first handful of programming languages, I began to notice the differences between them. These differences were not logic constructs, but rather how easy or difficult it could be to express the business problem at hand. Few will argue that a well-designed domain model is easier to code against from a higher-level layer of your application architecture than a direct API on top of the database – where persistence bleeds into the programming interface and durability concerns color the intent of the business logic.

In recent years domain specific languages have risen in popularity, are employed to great effect in open source projects, and are just starting to be embraced in Microsoft’s technology stack. A domain specific language (DSL) is simply a programming interface (or API) whose syntax is optimized for expressing the problem it’s meant to solve. The result is not always pretty – sometimes the problem you’re trying to solve shouldn’t be a problem at all due to bad design. That aside, here are a few examples:

  • CSS – the syntax of CSS is optimized to express the assignment of styling to markup languages.
  • Rake/PSake – the syntax of these two DSLs is optimized for expressing dependencies between buildable items and for creating deployment scripts that invoke operating system processes – typically command-line applications.
  • LINQ – The syntax of Language Integrated Query from Microsoft makes it easier to express relationship traversal and filtering operations from a .NET language such as C# or VB. Ironically, I’m of the opinion that LINQ syntax is a syntactically cumbersome way to express joining relationships and filtering appropriate for returning optimized sets of persisted data (where T-SQL shines). That’s not to say T-SQL is the best syntax – but that using an OO programming language to do so feels worse to me. However, I’d still consider its design intent that of a DSL.
  • Ruby – the Ruby language itself has constructs that make it dead simple to build DSLs on top of it, leading to its popularity and success in building niche APIs.
  • YAML – short for “YAML Ain’t Markup Language” (originally “Yet Another Markup Language”) – is optimized for expressing nested sets of data, their attributes, and values. It doesn’t look much different from JSON at first glance, but you’ll notice the efficiency once you use it on a real project.

Using a DSL leads to higher cognitive retention of the syntax, which tends to increase productivity and reduce the need for tools. IntelliSense, code generation, and wizards can all take orders of magnitude longer to use than simply expressing the intended action in a DSL’s syntax, once you have the most commonly expressed statements memorized – because the keyword and operator set is small and optimized within the context of one problem. This is especially apparent when you have to choose a code generator or wizard from a list of many others unrelated to the problem you’re trying to solve.

Because of this, you will reduce your cycle time by evaluating tools, APIs, and source code creation technologies based not on how much code your chosen IDE or command-line generator spits out, but on the clarity, comprehensibility, and flexibility of that code once written. I am all for code generation (“rails g” is still the biggest productivity game changer for architectural consistency in any software tool I’ve used), but there is still the cost of maintaining that code once generated.

Here are a few things to keep in mind when considering the technical cost and efficiency of an API in helping you deliver value to customers:

  • Is the number of keywords, operators, and constructs optimized for expressing the problem at hand?
  • Are the words used, the way they relate to each other when typed, and even the way they sound when read aloud easy to comprehend by someone trying to solve the problem the API is focused on? Related to this is to consider how easy it will be for someone else to comprehend code they didn’t write or generate.
  • Is there minimal bleed-over between the API and others that are focused on solving different problems? Is the syntax really the best way to express the problem, or just an attempt at doing so with an existing language? You can usually tell the latter when you find yourself using language constructs meant for a different problem just to make the code easier to read. A good example is “fluent” APIs in C# or VB.NET that use lambda expressions for property assignment, when the intent of a lambda is to enable a pipeline of code to modify a variable via separate functions. You can see the mismatch in the funky syntax, and in the low comprehension of someone new to the concept without an explanation.
  • Are there technologies available that make the API easy to test, but have a small or (highly preferred) nonexistent impact on the syntax itself? This is a big one for me; I hate using interfaces just to allow testability when dependency injection or convention-based mocking can do much better.
  • If generation is used to create the code, is it easy to reuse the generated code once it has been modified?

You’ll notice one consideration I didn’t include – how well it integrates with existing libraries. This is because a DSL shouldn’t need to – it should be designed from the ground up to either leverage that integration underneath the covers, or leave that concern to another DSL.

When you begin to include these considerations in evaluating a particular coding technology, it becomes obvious that the clarity and focus of an API is many times more important than the number of lines of code a wizard or generator can create to help you use it.

For a powerful example of this, create an ADO.NET DataSet and look at the code it generates. I’ve seen teams spend hours trying to backdoor the generated code, or figure out why it behaves strangely, until they discover someone created a partial class to do so and placed it somewhere non-intuitive in the project. The availability of Entity Framework Code First is also a nod to the importance of comprehension and a focused syntax over generation.

Why continuously deliver software?

Since I adjusted the focus of my subject matter on this blog over the past couple of weeks, one of the main subjects I’ve been writing about is continuous delivery, a term coined in the book of the same name. I’m attempting to summarize some of the concepts in the book, with an emphasis on how the practices described in it can be applied to development processes that are in trouble. I’ll also discuss specific technologies in the Microsoft and Ruby communities that can be used to implement them.

If you really want to understand this concept, I can’t overemphasize the importance of reading the book. While I love blogs for finding a specific answer to a problem or getting a high-level overview of a topic, if you are in a position to enact change in your project or organization it really pays to read the entire thing. It took me a week of odd hours to read, and I purchased the Kindle version so I can highlight the important points and have it available on my mobile phone and in the browser.

That being said, I want to use this post to clarify what continuous delivery is not, and why you would use it in the first place.

Continuous delivery is not

  • Using a continuous integration server (Team Foundation Server, CruiseControl.NET, etc.)
  • Using a deployment script
  • Using tools from Microsoft or others to deploy your app into an environment

Rather, the simplest description I can think of for this concept is this:

“Continuous delivery is a set of guidelines and technologies that, when employed fully, enable a project or organization to deliver quality software with new features in as short a time as possible.”

Continuous delivery is

  • Requiring tasks to have a business case before they are acted upon
  • Unifying all personnel related to software development (including operations) and making them all responsible for delivery
  • Making it harder for personnel to cut corners on quality
  • Using a software pattern known as a “delivery pipeline” to deliver software into production
  • Deliberate improvements to the processes used for testing, configuration, and dependency management, to eliminate releasing low-quality software and make it easy to troubleshoot problems

I’ll continue to blog about this and I still encourage you to read the book, but one thing that really needs to be spelled out is why you would want to do this in the first place. There are several reasons I can think of that might not be immediately apparent unless you extract them out of the bounty of knowledge in the text.

Why continuously deliver software?

When personnel consider their work done but it is not available to users:

  • That work costs money and effort to store and maintain, without providing any value.
  • You are taking a risk that the market or technologies may change between when the work was originally desired and when it is actually available.
  • Non-technical stakeholders on the project cannot verify that “completed” features actually work.

When you can reduce the time it takes to go from an idea to delivering it to your users:

  • You get opportunities for feedback more often, and your organization appears more responsive to its customers.
  • It increases confidence in delivering on innovation.
  • It eliminates the need to maintain hotfix and minor revision branches since you can deliver fixes just as easily as part of your next release.
  • It forces personnel to focus on quality and estimating effort that can be delivered, instead of maximum work units that look good on a schedule.

And lastly: when personnel must deliver their work to users before it can be considered done, it forces the organization to reduce the amount of new functionality they expect in each release; and to instead trade volume for quality and availability.

When you make production an island, it takes a long time to get there

My post yesterday touched on a subject that has really crystallized some of the process breakdowns I see in too many organizations: much time is spent measuring developer output, while the overall cycle from idea to users goes unmeasured. When organizations begin to measure this cycle, the next step is to measure the activities within it.

Of all the phases in a typical software delivery cycle, the most costly in improperly automated environments is deploying to production. We spend hours writing unit tests, maybe some integration tests, perhaps even a full automated acceptance suite – yet significant time is still spent getting that code to work right in its eventual “production” environment.

Some signs that this might be happening to you:

  • Deploying to production keeps folks working long past the planned duration, involves numerous personnel and is a high stress event.
  • Code that was accepted in test doesn’t work in staging or production.
  • Things that work in production after the latest deployment don’t work in the other environments, and an operations person has to be contacted to find out what they changed recently.

Before I go much further, let’s define what I mean by production. In an IT department with internal applications, production may be a farm of web servers and a database cluster servicing one instance of several applications used by the organization. For a shrink-wrapped product, production will be your users’ computers. The cost to cycle time of not properly testing your application in its environment before delivering it can be significant.

Since production environments are a company’s bread and butter, operations personnel (or those of your customers) have a motivation to keep things as stable as possible. Developers, however, are motivated by their ability to enact change in the form of new features. This tends to create a conflict of interest, and most organizations’ answer is to lock down production environments so they can only be accessed by operations personnel. An alternative strategy, the one outlined in continuous delivery, is to start treating the work operations does to set up and maintain their environment with the same rigor and process as the software being deployed to it.

Life before source control – are we still there?

Consider an example. An organization has 4 environments – development, test, staging, and production. Development is meant to be an environment in which programmers can make changes needed to support ongoing development. Test should be the same environment, but with the purpose of running tests and manually checking out the application like a user would. Staging should be the final place your code goes to verify a successful deployment, and production simply a copy of staging. You may already be thinking, “I can’t afford a staging environment that has the same hardware as production!”

It’s acceptable for staging not to have the exact specifications of production, but you should minimally try to have two nodes for every scalable point in the topology. If production has a cluster of 4 databases, staging needs 2. If production has a farm of 10 web servers, staging needs 2. With this environment in place, you are still testing the scaled points in your architecture, but without the cost of maintaining an entire cluster. This is obviously easier to do with virtualization, but take care not to use a staging environment that is significantly more or less powerful than production if you use it for capacity and performance testing. You cannot take a staging environment with half the servers of production and simply double the performance you measure to predict production capacity – computing resources do not scale linearly, as one might assume.

Continuing with the example, consider what work would be like without source control. When you made a change to your code, you would have to manually apply that change on each developer’s machine. Maybe you could make things a bit easier by creating a document that tells developers how to make the changes themselves. This is ridiculous, right? Sadly, this is exactly how many organizations treat the environment. A change made in one environment is manually repeated in all the others, and the opportunity for lag between those changes – and for human error – is large.

Making the environment a controlled asset

The way out of this mess is to start thinking about the environment as a product that deserves the same process oversight as the software being deployed to it. We spend so much time making sure code developers write is tested, but it’s just as easy to break production by making one bad configuration change. To get around this, we need to change the way the environment is managed and leverage automation.

  1. Create baselines of environment operating system images for each node required by your application (database server, web server, etc.). These images should have the operating system, and any other software that takes a long time to install, already set up. Don’t pre-configure anything in these images that can change from one environment (dev/test/prod etc.) to the next.
  2. Create deployment scripts that you can point at a networked computer or VM using datacenter management software (Puppet, System Center etc.). These scripts should install the baseline image on the target computer. Work with operations to determine the best scripting technology for them. Operations personnel typically hate XML, but PSake (a PowerShell-based build and deployment tool) or rake is usually acceptable.
  3. Create deployment scripts that run after the datacenter management step and configure the environment suitable for your software. This includes setting up permissions, adding users to groups, making configuration changes to your frameworks (.NET machine config, Java classpath, Ruby system gems etc.).
  4. Create configuration settings that are specific to each of your environments. This would optimally be one database table, XML, or properties file with the settings that change from one environment to the next. Put your database connection strings, load balancer addresses, web service URLs etc. in one place. I’ll do a future post on this point alone.
  5. Create deployment scripts that apply the configuration settings to the target environment.
  6. Store all of these assets in source control (other than maybe the OS images, which should be on a locked down asset repository or filesystem share).
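A driver for the scripted steps above can be sketched in a few lines. This is only an illustration – the script names, parameters, and PowerShell invocation are all hypothetical, and a real build would hand off to your datacenter management tooling rather than call scripts directly.

```python
import subprocess

# Hypothetical, source-controlled deployment scripts, run in order.
PIPELINE = [
    "apply_baseline_image.ps1",   # step 2: lay down the OS baseline
    "configure_environment.ps1",  # step 3: permissions, groups, frameworks
    "apply_config_settings.ps1",  # step 5: environment-specific name/value pairs
]

def deploy(target_host, environment, runner=subprocess.run):
    """Run each versioned deployment script against the target node, in order.
    Failing fast keeps a half-configured environment from going unnoticed."""
    for script in PIPELINE:
        result = runner(["powershell", "-File", script,
                         "-TargetHost", target_host,
                         "-Environment", environment])
        if result.returncode != 0:
            raise RuntimeError(f"{script} failed on {target_host}")
```

Because the pipeline and scripts live in source control, pointing the same `deploy` call at a dev, staging, or production node is the only supported way to change any of them.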

Once this is in place, you should be able to point at any computer or VM on your network that has been set up by IT for remote management and target a build at it. The build should install the OS image and run all your deployment scripts. From this point forward, the only way any change should be made to the environment is through source control.

This change provides us with a number of benefits:

  • Operations personnel improve their career skills by learning to write scripts that automate changing the environment, and these can be reused in all of the other environments. If you want to change the configuration of the database, for example, this change, once made in source, will propagate to ALL environments that are deployed to from the same build.
  • Developers can look in source control to see the configuration of the environment. No more sending an email to operations to find out what varies in production from the other environments.
  • Deploying new builds will test the latest code, with the latest database changes, along with any environment changes. This is the only way to really test how your application will run in production. Any problem that would occur in production will first be found in staging, so you get a chance to fix it without the added stress of production.

There are a couple more things to mention here. First, if you are deploying shrink-wrapped software, you probably have many target environments. To really deliver quality with as few surprises to your customers as possible, you should set up automated builds like this for each variation you might deploy to. Determine the minimum hardware requirements for your customers, test at that minimum configuration, and also test any variances in environment. If you support two versions of SQL Server, for example, you really should be testing deployment on an environment with each of those versions.

One more thing – for organizations in which production settings are not to be visible to everyone, simply keep a separate source control repository or folder with the configuration settings for production, and give your build permission to pull from that repository (just the configuration) when setting up a production node. Developers will still need elevated permissions, or to coordinate with more-privileged operations personnel, to answer their questions about how production is set up, but the code for applying environment configuration settings to the other environments will be accessible via source control, simply with different values than production.

Once you have an automated mechanism for setting up and configuring your environment from a build, you need a way to piggyback that process on top of your continuous integration server. I’ll leave that for my next post.

Cycle time – the important statistic you probably aren’t measuring

When teams develop software, they use products from other vendors to help them follow their chosen process. These products usually capture data during development that can be used to create reports or do analysis, resulting in some insight into capability. We can answer questions like “how long did this bug take to close?” or “how long after this work item was created was it marked as completed?”

The most common statistic analyzed in agile teams is “team velocity”, a measurement of how much your team can get done in one iteration (sprint). Managers love this statistic because it helps them figure out how efficient a team is, and it can be used to calculate rough estimates for the future availability of some feature.

However, there is a much more important metric for your business related to software development, and to measure it correctly we need to redefine, or at least clarify, a regularly misunderstood word in development processes: “done”. Too many teams I encounter work like this:

  1. Business stakeholder has an idea
  2. Idea is placed in product backlog
  3. Idea is pulled off backlog (at some future iteration/sprint) and scheduled for completion
  4. Developer considers the task “done” and reports this in a standup meeting
  5. Developer starts work on the next task
  6. Tester finds bugs 2 weeks later
  7. Developer stops his current task, switches to the old one, and fixes bugs
  8. Months from now, someone does a production deployment that includes the feature, and users (as well as business stakeholders, unfortunately) see it for the first time

The duration that elapses between the first and last steps above is known as cycle time. This is an important statistic because it measures how long it takes to go from an idea until that idea is available to users. Only when the last step is completed is a feature truly “done”, yet due to a lack of embedded quality and deployment verification in most processes, a team or individual’s efficiency is often measured by omitting everything after step 4 above.
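Measuring cycle time is straightforward once you record the right two timestamps. A sketch, with hypothetical dates standing in for whatever your work item tracking system records:

```python
from datetime import datetime

def cycle_time_days(idea_created, released_to_users):
    """Cycle time spans from the idea (step 1) to availability to users
    (step 8) - not to when a developer reports the task 'done' (step 4)."""
    return (released_to_users - idea_created).days

# Illustrative timestamps for one feature.
created = datetime(2012, 1, 3)   # step 1: stakeholder has the idea
dev_done = datetime(2012, 1, 17) # step 4: what teams often measure to
released = datetime(2012, 3, 9)  # step 8: what actually matters
```

With these dates the “developer done” measurement shows 14 days, while the true cycle time is 66 – the gap between the two numbers is exactly the part of the process most teams never look at.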

It doesn’t matter if your team has developed 20 new features if they aren’t available to users, and they can’t be made available without significant disruption to ongoing work until they have sufficient acceptance tests. This is similar to lean manufacturing: inventory sitting on a shelf provides no value, yet costs something to create and store. We can optimize our cycle time by measuring and improving every activity between the start and end of a cycle.

Reducing cycle time is a key tenet of continuous delivery, which seeks to automate and gate all the phases in your development process with the goal of improving an organization’s efficiency at delivering quality features to its customers. There are many things you can do to improve cycle time, but I’ll start by talking about analysis and acceptance.

Analyze and accept during the sprint

Many development teams attempt to do requirements analysis on features before or while they are on the backlog, but before they have been added to a sprint. This is a mistake for a couple of reasons:

  • It spends effort on a feature that has not been scheduled for implementation. The backlog is about waiting to act on work until the last possible moment, to reduce waste and embrace the reality that up-front design (waterfall) doesn’t work.
  • It encourages managers to cram as much into a sprint as possible, assuming all developers need to do is “write the code” and misses the cost of doing analysis in measuring overall efficiency.

In reality, a feature should be added to the backlog and prioritized there without effort being attached to it. When that item rises high enough on the list to be scheduled for the sprint, it is assigned to a developer, who works with a business analyst or tester during the sprint to write acceptance tests for the feature. These acceptance tests should be automated when implemented, but a tester should be able to write, in plain English, a description of what constitutes sufficient acceptance. Developers write the tests first and then write code to pass them, using test-driven development approaches.
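As a sketch of what an automated acceptance test might look like, here is a hypothetical feature with its plain-English acceptance criteria captured in the test itself. The feature, names, and API are all invented; the point is only that the agreed criteria become executable and repeatable. (The example is in Python for brevity – on a Microsoft stack this might be SpecFlow or MSTest, or Cucumber in Ruby.)

```python
def register_user(users, email):
    """Hypothetical feature under test: registering a user by email."""
    if "@" not in email:
        raise ValueError("invalid email")
    users.append(email)
    return email

def test_acceptance_registration():
    """Acceptance (agreed with the tester during the sprint):
    'A visitor who submits a valid email address appears in the user list;
    an invalid address is rejected.'"""
    users = []
    register_user(users, "ada@example.com")
    assert "ada@example.com" in users
    try:
        register_user(users, "not-an-email")
        assert False, "invalid email should have been rejected"
    except ValueError:
        pass
```

Because the criteria live in an automated test, “done” for this feature has one unambiguous, repeatable meaning.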

Often teams new to this approach will schedule too much for one sprint. This is a learning experience; over time you will get better at scheduling smaller units of work into sprints, and at describing features at the level of granularity necessary for completion by a single developer. During this adjustment period, be prepared for features added to a sprint, once analysis and acceptance are done, to be identified as too large to complete in the sprint and to need splitting into smaller tasks on the backlog – scheduling only the ones that can be developed AND acceptance tested before the end of the current sprint.

This may seem like a trivial process nuance, but the goal is to pursue continually delivering new features to your users as quickly and with as few defects as possible. This can only be done if the acceptance criteria for the feature are clear and there is a repeatable means of verifying them. Automated acceptance is a must here, as manual testing means a longer cycle time.

Once you start accepting this definition of done, you can start to look at all the pieces of your process that make up cycle time and optimize them. Managers and development leads love to suggest ways that developers can be more efficient, but they rarely look at opportunities for process improvement in business analysis, testing, and deployment. Often these are more costly to cycle time than development itself, where opportunities for optimization tend to be limited by the skill of your people.

I’ll go into more detail about individual practices within your software delivery process that can reduce cycle time in future posts.

Foregoing assumed value in favor of rapid feedback

The goal of developing any software should be to provide functionality useful to the majority of its users.

While doing business analysis or writing user stories for a feature (especially one that attempts to re-design an existing product), it is important (and exciting) to brainstorm, be visionary, and think up great ideas for how you can please your customer base. When planning those features for release, however, it is tempting to attempt to complete all of those stories before making the feature available to users.

The reasoning behind this argument usually sounds something like “our customers have used the product for years with these features, and they will not use it if they are not all present”. Another spin on this is “our competitor has these features and we will not be competitive without them”. There are several flaws in this argument.

  1. The argument assumes that users are currently using all the features. Unless you are measuring use of the feature in the field (Google Analytics, etc.) and have data to back up this claim, it is highly likely that a compelling offering could be made available to users with a smaller subset of features.

    This applies to competitive analysis as well. Comparing your planned features to an existing product sheet will simply align you with your competitor, which can be a disaster if many of their features go unused by their customers, since you will now be spending money building them too. It also reduces your ability to differentiate yourself from them.

  2. The argument assumes that users will not provide accurate feedback on their needs of the software. When you choose to implement the kitchen sink around a feature, what you’re really saying is, “I know more about the user’s needs than they do, so I will decide everything to offer them”.

    When you go this route you spend excessive time getting to market, excessive capital implementing features that may not even be used, and place release cycle pressure on yourself by having a larger workload – making it less likely that you will be in the relaxed mindset necessary to listen to your customers and be able to respond to requests for changes.

    It’s more efficient and realistic to simply release the smallest subset of those features necessary to make initial use of them available, measure usage and gather feedback, and give users exactly what they want once they’ve used the feature. While it’s true that this approach can result in designs that are different from what you originally envisioned, your vision is not as important as the successful adoption of a feature by its users.

  3. The argument weighs delivering assumed value over used value. What this means is that by focusing development on robust implementation of features that have not even been initially deployed to users, the backlog and priorities are being driven by assumed need. Even if your customers tell you they need a feature, unless you are measuring that they are using it in the field, and they are providing you with feedback that they like it, you are taking a risk with the effort needed to implement it. It makes sense to reduce that risk so that if you deploy a feature that turns out to not be useful, the lost capital is minimal.
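
Measuring feature use in the field doesn’t have to start with a full analytics suite. Here’s a minimal sketch, assuming nothing beyond in-process counting; the UsageTracker class and the feature names are hypothetical:

```ruby
# A tiny usage measurement sketch: count feature invocations and report
# which ones fall below a usage threshold, making them candidates for
# the backlog review described above.
class UsageTracker
  def initialize
    @counts = Hash.new(0)
  end

  # call this wherever the feature's entry point is exercised
  def record(feature)
    @counts[feature] += 1
  end

  # features invoked fewer than `threshold` times
  def rarely_used(threshold)
    @counts.select { |_, count| count < threshold }.keys
  end
end

tracker = UsageTracker.new
3.times { tracker.record(:export_pdf) }
tracker.record(:bulk_edit)

puts tracker.rarely_used(2).inspect  # only :bulk_edit falls below the threshold
```

In a real product you would persist these counts or ship them to an analytics service, but even a crude counter like this turns “we assume customers use it” into data you can prioritize against.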

Where I’m going with this is that organizations should spend serious time reviewing their backlogs of features, working with user experience experts to come up with designs that deliver the smallest, simplest design that accomplishes what you think the user needs and then get it out there. It is always more viable to bolt on a feature that you verify is needed after an initial offering than to spend money on assumptions only to find that it was a waste.

Why feature branching is a bad idea

I saw Martin Fowler speak in Austin a couple years ago and one part of his talk was on continuous integration. He touched on feature branching, which is essentially where a main “trunk” of source for a project is branched several times, once per feature under active development. The management purpose behind this is typically to try and allow for release into production should one or more features not get done on time.

I’ve been on a project where this was done to a silly degree, such that several developers spent weeks just doing merges and re-testing the merged changes. Martin posted a tweet this morning with a link to a video where the issue is described in detail. I won’t repeat everything said there, but I encourage you to watch it; they do a fantastic job.


The Minimalist Development Movement

Over the past 11 years, .NET technology, fueled by Microsoft’s ability to deliver sophisticated development tools, has arguably ruled the enterprise business software landscape. Intellisense, drag-and-drop UI design, XML configuration, dependency injection, unit testing, IDE extensibility APIs for third-party controls, continuous integration, and more attempt to ease the use of Agile and SCRUM processes, as the Visual Studio IDE supports more and more of these features through wizards and templates. .NET started as a better version of Java, and as such inherited many of Java’s powerful capabilities, but also the limitations of that development and deployment approach.

However, in the past several years a move towards a minimal development tool mindset has started to occur. This is made possible by the creation of more sophisticated frameworks that establish conventions and set constraints on how to go about implementing things. These constraints reduce the number of API calls to remember and the number of front-end technologies a typical developer needs to be fluent in. As a byproduct, the required capabilities of development tools are also reduced. Rails, which inspired ASP.NET MVC, JavaScript technologies like MVC frameworks built on CoffeeScript, and mobile frameworks like Appcelerator Titanium all take this approach. They provide the framework and API, and you use the development tool of your choice. Because the framework limits what you can do, but elegantly provides 80% of what you need in most applications, you don’t need an IDE that does so much for you. Extreme minimalists use VIM, Emacs, or TextMate; Aptana is a popular editor with first-class support for CSS, HTML, HAML, Rails, JavaScript, and many other minimalist technologies that might be a little more approachable to a seasoned .NET developer.

Visual Designers for 20%

However, taking part in this new shift requires a different mindset. What if you had to do all of your user interface development without graphically previewing it first? A dirty little secret in many Microsoft shops is that we rarely use the UI designers anyway. Clients and customers are always asking for features that negate the productivity enhancements touted by RAD design and force us into the code to do more sophisticated things. I’ll argue that this is due to an inferior separation of concerns in ASP.NET, and not simply because you’re doing something more complicated. If your framework requires you to break patterns to do something complex, how good a framework is it? When a development tool only really shines for the minority of projects, you’re on the losing end of the 80/20 rule. When you design tools to focus on letting developers visually design things, you are continuing to treat UI assets as single units that encapsulate their behavior, presentation, and data. Modern frameworks that separate these concerns make it difficult (if not impossible) to visually represent things as they appear at runtime, but the tradeoff is an increase in productivity due to patterns that decouple responsibilities.
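
Here’s a small, framework-free Ruby sketch of that separation: data and behavior live on the model, presentation lives in a template function, and the controller only coordinates. All of the names are hypothetical:

```ruby
# Model: owns the data and the behavior that belongs with it.
Product = Struct.new(:name, :price_cents) do
  def display_price
    format("$%.2f", price_cents / 100.0)
  end
end

# View: a dumb template; it knows nothing about where products come from.
def render_product(product)
  "<li>#{product.name}: #{product.display_price}</li>"
end

# Controller: wires model to view; holds no markup and no business rules.
def product_list_page(products)
  "<ul>#{products.map { |p| render_product(p) }.join}</ul>"
end

page = product_list_page([Product.new("Widget", 1999)])
puts page
```

None of these three pieces can be dragged onto a design surface as one unit, and that’s the point: each can be changed or tested without touching the other two.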

Interpretation vs. Compilation

What if you don’t compile your project to test it out? There are legitimate applications that require compilation due to performance constraints. But if quality is the worry, the efficiency gains these frameworks afford, coupled with a disciplined automated testing approach, address it.

Document what you’ve built instead of what you want

What if you don’t create requirements documents, but rather rapidly implement and write tests that serve as the documentation for what the system currently does? We already know from years of SCRUM and Agile debates that documenting a system up front more often than not results in bad designs, slipped deadlines, and stale documentation. Most customers and clients are not systems analysts and as such can’t be expected to communicate all of their needs on the first try. A picture is worth 1000 words, and we’ve all been in the meeting where the customer advocate is shown what was built based on their design and realizes things are missing that they never communicated. Doesn’t it make sense to use a development process that encourages and adapts to this situation instead of fighting it?
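
As a sketch of tests-as-documentation, here’s a hypothetical URL slug generator whose test names read as sentences describing what the system currently does (plain Ruby and Minitest; the function and its rules are invented for illustration):

```ruby
require "minitest/autorun"

# Hypothetical system under documentation: generating URL slugs.
def slugify(title)
  title.downcase.strip
       .gsub(/[^a-z0-9]+/, "-")  # runs of non-alphanumerics collapse to one hyphen
       .gsub(/\A-|-\z/, "")      # trim stray hyphens at either end
end

# Reading this test list answers "what does the system do today?"
# without consulting a requirements document that may be stale.
class SlugDocumentation < Minitest::Test
  def test_spaces_become_single_hyphens
    assert_equal "hello-world", slugify("Hello   World")
  end

  def test_leading_and_trailing_punctuation_is_dropped
    assert_equal "new-in-v2", slugify("(New in v2!)")
  end
end
```

When behavior changes, the “documentation” fails loudly and must be updated before the build goes green, which is exactly the property a requirements document lacks.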

Contrary to popular belief, to pull this off one needs to be a better developer, who follows patterns even more than before. Developers also need to communicate with stakeholders more, and incrementally deliver *tested* features. Increasing the ability for a developer to communicate has all kinds of other benefits as well, such as the ability to clarify requests, think outside of the box, and generally be more pleasant to work with.

If the thought of letting your tests provide your documentation sounds crazy, tell that to Sara Ford.

Get better at learning your framework instead of fighting it

We’ve all been in the code review where someone implemented an API call that already exists in the .NET framework. If we’re honest with ourselves as developers, we really don’t keep much of the .NET technology stack in our heads; we just know how to use Google well. If we reduced the number of patterns and APIs used in our solutions, we could retain that knowledge and know the best way to leverage the framework to do what we need instead of fighting it. ASP.NET MVC and Rails both exhibit this, and I’ll argue Rails does a better job. ASP.NET MVC won’t complain if you make the mistake of throwing a bunch of logic into your view, whereas in Rails you really have to fight the framework to instantiate classes there. As DHH says, “constraints are liberating” (start at 3:50 in).
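
To illustrate the constraint, here’s a framework-free Ruby sketch of the same mistake and its fix; the names are hypothetical. The first version buries a business rule in the presentation string, the second moves it into a helper where it can be tested and reused:

```ruby
# Bad: a business rule embedded in the presentation layer, the kind of
# thing a permissive view engine will happily let you write.
def view_with_logic(user)
  "#{user[:name]}: #{user[:orders] > 10 ? 'VIP' : 'Standard'}"
end

# Better: the rule lives in a helper, which is where a constrained
# framework nudges you. Now it has a name, and tests can reach it.
def tier(user)
  user[:orders] > 10 ? "VIP" : "Standard"
end

def view(user)
  "#{user[:name]}: #{tier(user)}"
end

puts view({ name: "Ada", orders: 12 })  # Ada: VIP
puts view({ name: "Bob", orders: 2 })   # Bob: Standard
```

Both versions render the same string; the difference is that only the second lets you change or verify the VIP rule without touching markup.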

The challenge

If you could challenge yourself with one technical hurdle this year, would you rather learn another API? Perhaps a way to interface with a new business system? Or would you rather experience a shift in your approach to development that has long lasting, measurable effects on your ability to deliver rapid value and makes you more attractive to established companies and startups? To do so requires several things.

  1. Approach these new patterns and capabilities without attempting to compare them to existing methods. As humans we love to do this, but often we get caught up in analysis paralysis or think we’ve “got it” when we grasp just one of the many innovations in these newer frameworks.
  2. Do not declare competence with these frameworks until we’ve actually grasped them in their entirety. Learning Rails without understanding caching, convention based mocking of tests, or the plugin architecture is like learning C# but ignoring generics and lambda expressions.
  3. Don’t try to figure out how to shim legacy .NET patterns into these frameworks. You wouldn’t expose a low-level communication protocol through a web service or REST API where clients are expected to allocate byte arrays, so why would you try to host a third-party ASP.NET control in MVC, or access a database using T-SQL from an MVC view? Sure, you can do it, but you’re missing the point, which is to embrace new patterns and learn to abstract the old way of doing things. We’ve been doing it with .NET for years; now let’s see if we can do it when legacy .NET patterns are what we’re abstracting.

When WYSIWYG isn’t an option

I’ve been writing code for about 12 years. That’s just barely too young to have used pre-Windows tools when they came out and a bit too old to rely heavily on the drag ‘n drop features of Visual Studio. Now don’t get me wrong, I drop database tables onto datasets and use the toolbox when building WPF apps, but for the most part I’m using the text editor.

Visual Studio’s Intellisense really does give Microsoft a one-up in the market. There are other IDEs out there that offer C# completion popups, but not with LINQ support, and since Microsoft owns the .NET platform, their tools are probably always going to be the most productive.

I have however used Emacs and VI a bit when I am on Ubuntu Linux. I never considered myself proficient by any means, but I could get in there and do some basic editing. I had always heard from older co-workers earlier in my career how great VI was but I never had the patience or desire to really learn.

Well, I found this VI emulator add-in (ViEmu) for Visual Studio by watching a how-to video on MSDN (I don’t remember the topic), and I’m loving it so far. The guy who sells it (it’s 79 bucks for the Visual Studio add-in; he has add-ins for Outlook and SQL Server Management Studio as well, with optional bundle pricing) has a great article that makes the case for why you should bother learning this 30-year-old editor. It’s a long read, but it convinced me enough to try it out for a weekend, first using MacVim on a Rails project I’ve been doing on the side under OS X. The fact that the keys are laid out so you never have to take your fingers off the home row (no need for arrow keys or the mouse) is a huge productivity boost.

I liked it enough to get the plugin for Visual Studio (I haven’t bought it yet, but am using the trial to evaluate it). About the only thing it doesn’t do that the “real” VIM does is let you open new files and have them split horizontally or vertically alongside ones you are already editing, which I find really, really nice in the Mac version. Anyway, check it out if you get a chance. The learning curve is high, but considering how many hours of my life I will spend writing code, I’m hoping this will be worth the effort when the IDE doesn’t help and I’m refactoring and cranking away. So far it quite obviously is. I really need to get better at regular expressions, however!

