LavaBlast Software Blog

Help your franchise business get to the next level.

Scripting an ASP.NET installation in Win2k3

clock March 27, 2008 08:26 by author JKealey

A month ago, I posted an article on a few console commands for managing ASP.NET applications and IIS. In the weeks that followed, I was contacted by my old friends at iWeb Technologies to help them automate their ASP.NET setup. I spent a couple hours creating the required scripts and I thought I'd post my real world example here!

By the way, iWeb is an exceptional web host (1TB of space, unmetered traffic, unlimited domains, $3 a month, and excellent technical support; what more could a web developer want?). I've been with them for nine years already!

Scripting the creation of a new website and configuring it for ASP.NET

Task: Given a domain name and a network/local path (\\fileserver\hosting\<domain>\web\ or c:\inetpub\wwwroot\<domain>\web\), set up IIS so that ASP.NET works on the root of the domain.

  1. Create an application pool
  2. Create a web site and associate it to the application pool
  3. Ensure that the www subdomain works (both the bare domain and its www subdomain should load this website)
  4. Enable ASP.NET v2.0 on this website.
  5. Give ASP.NET read/write access to the folder.
  6. Add default.aspx to the default documents.


I ended up using the ADSUTIL.VBS script for most of these tasks. I used the iisweb command to create the web application, but it doesn't support network paths, so I create the site with a temporary local path and then use adsutil to change it to the network path.

My googling skills are what made this task so short. Of all the references I found, the IISBatchUtils collection of scripts provided the most help. Its author explains why:

I have created batch files to make adding new websites to the server very quick and easy. The only tricky part about setting up new websites from a batch file (or the command line) is that Microsoft's ADSUTIL utility does not correctly add host headers like it says it does: rather than appending the new headers, it blindly sticks them in and possibly overwrites existing host headers that might already be set up.

I used the original adsutil, Josh's modified adsutil, and some of his code to extract the site ID from the IIS metabase using the site name.

The script

See the actual script for the variable definitions.

REM Step 1 - Create Application Pool
CSCRIPT //nologo %IWEB_ADSUTIL% CREATE w3svc/AppPools/%IWEB_DOMAIN% IIsApplicationPool
REM Step 2 - Create WebServer
iisweb /create %TEMP% %IWEB_DOMAIN% /d %IWEB_DOMAIN% /ap %IWEB_DOMAIN%
REM Step 3- Find new SiteID for further scripting. 
cscript //nologo iisbatchutils/translate.js "%IWEB_DOMAIN%" > siteid.txt
for /f %%I in (siteid.txt) do SET IWEB_SITEID=%%I
REM Step 4 - Add www. to site URL - uses Josh's modified adsutil because of a bug in the normal one. 
cscript %IWEB_ADSUTIL2% append w3svc/%IWEB_SITEID%/serverbindings ":80:www.%IWEB_DOMAIN%"
REM Step 5 - set various website permissions. 
cscript //nologo %IWEB_ADSUTIL% set w3svc/%IWEB_SITEID%/accessread "true"
cscript //nologo %IWEB_ADSUTIL% set w3svc/%IWEB_SITEID%/accesswrite "true"
cscript //nologo %IWEB_ADSUTIL% set w3svc/%IWEB_SITEID%/root/AppFriendlyName %IWEB_DOMAIN%
cscript //nologo %IWEB_ADSUTIL% set w3svc/%IWEB_SITEID%/root/path %IWEB_Path%
cscript //nologo %IWEB_ADSUTIL% set w3svc/%IWEB_SITEID%/root/DefaultDoc Default.htm,Default.asp,index.htm,Default.aspx,index.asp,index.html
REM Step 6 - Cleanup
del siteid.txt


What about setting up IIS to use ASP.NET v2.0?

I did find a link showing how to change a site's ASP.NET version from v1.1 to v2.0 using aspnet_regiis, but later discovered that this stopped and restarted all websites... something catastrophic to do in a production environment: every hosted website (not only the new one) would lose its session state, for example. Fortunately, I found that you can change the root website instead, and any new websites created afterwards will inherit the default ASP.NET version. Run the following once:

@echo off
REM will propagate to new sites. 
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis -sn W3SVC/
REM does not propagate
REM cscript %IWEB_ADSUTIL% set w3svc/accessread "true"
REM cscript %IWEB_ADSUTIL% set w3svc/accesswrite "true"
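
For completeness: if an existing site (rather than the server-wide default) ever needs to be switched to v2.0, aspnet_regiis can also target a single metabase path via its -s option. This is just a sketch reusing the site ID captured by the creation script; I haven't needed it in production, so test it on a staging box first:

```bat
REM Applies the ASP.NET v2.0 scriptmaps to one site only (recursively),
REM without touching the other websites hosted on the server.
%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis -s W3SVC/%IWEB_SITEID%/root
```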


Scripting the deletion of a website

Deletion is much simpler, as you can see below.


iisweb /delete %IWEB_DOMAIN%


I had a fun time perfecting my scripting skills while creating a concrete example that solves someone's problem. I discovered that the devil is in the details, but also that numerous people have worked on similar problems in the past. I ended up creating another script which runs this one numerous times, according to actions found in a local text file. Simply put, iWeb's PHP system appends operations to a file (create this site, delete that one, create this other site) and my script performs each operation and then empties the task list. That way, the script can be run periodically and all the PHP coders need to do is append to a particular file.
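
Here is roughly what that driver looks like (the file name, line format, and helper script names below are placeholders, not iWeb's actual ones):

```bat
@echo off
REM Placeholder task file; the PHP system appends one operation per line:
REM   create somedomain.com \\fileserver\hosting\somedomain.com\web\
REM   delete otherdomain.com
SET IWEB_TASKFILE=c:\hosting\tasks.txt
IF NOT EXIST %IWEB_TASKFILE% GOTO :EOF

REM tokens=1-3 splits each line into the action, the domain, and the path.
FOR /F "tokens=1-3" %%A IN (%IWEB_TASKFILE%) DO (
    IF /I "%%A"=="create" CALL createsite.bat %%B %%C
    IF /I "%%A"=="delete" CALL deletesite.bat %%B
)

REM Empty the task list now that every operation has been performed.
COPY /Y NUL %IWEB_TASKFILE% > NUL
```

In practice you would want to rename the file before processing it, so operations appended while the script is running aren't lost when the list is emptied; the sketch above keeps things simple.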

Download the code.


Software Startup Lessons (Part 2): Communication and Collaboration

clock March 17, 2008 09:39 by author EtienneT

Welcome to part two of our three-part series describing the various lessons we learned during our first year as a software startup.  You are encouraged to read the first part if you haven't already!

Problem Statement

In Part 1, we described our business's context and how we eat our own dog food. Indeed, we use the software we produce on a daily basis, helping us create quality products. A parallel can be made with our distributed team's communication patterns. Simply put, we're not all at the same central office, talking to each other all day. Our structure resembles that of a franchisor with franchisees distributed across several physical locations. The main reason we are distributed is to keep costs down while we develop our core infrastructure, but it does have the interesting side effect of forcing us to use the same communication tools a franchisor would.

Not only do we not work in the same building, we're not even in the same city! Because we work in independent offices, we have greater control over our environment, which increases our productivity. Spolsky often talks about how developers should have private offices because outside disturbances kick us out of the zone. You can respond to your email every 15 minutes, but you can't make someone wait at your door for 15 minutes while you polish off a complicated algorithm.

In any case, given that most geeks are not social creatures, you probably think that doing the majority of your interactions over the Internet is great, right? In reality, it's not always convenient, but we've become really creative (and efficient) with the tools we found to maximize our interactions.

We spend most of our time coordinating over Google Talk and move to Skype when voice conversations are required. Over time, we discovered some pretty nice communication software and we want to share these tools with you in this article.


  • How do we talk to each other and with people outside our company?
  • How do we exchange files?
  • How do we manage our requirements?
  • How do we demo our software and how do we program in pairs?

Lesson 4) Talking to People

Carrier pigeons might have played a vital part in WWII communications, but we're using more modern techniques.


Skype is the obvious choice for voice communication over the Internet.  Don't tell us that you haven't heard about this tool before!  Because we are four partners in LavaBlast, we often need to have conference calls. All our meetings are done online via Skype conference calls, at no cost to us. Furthermore, we can call up a client on their regular phone using SkypeOut, which calls the client's real phone and connects them to our conversation at very affordable rates. Indeed, for $3 a month, we obtained unlimited outgoing calls to Canada and the US. Setting up a conference call is really easy and the quality is good (most of the time).  You simply need a good microphone and you're good to go.  In addition, if you have a webcam and a good Internet connection, the video quality of Skype is just amazing, especially compared to MSN.  Talking to someone who also has a good webcam and a good Internet connection is almost lag free, in my experience.

Having a central phone number for a distributed office

Since we have a distributed office, we need a central phone number.  The most affordable solution that we found was Skype for business.  Basically, you register a SkypeIn number anywhere in the US (not available in Canada yet) and when people call this number, it rings you on Skype.  The cheapest way to get your incoming phone number is to purchase Skype Pro (at $3/month) and purchase the number for $38 a year (it was operational within minutes of purchase!). You can forward calls to another user or a normal phone and even get voicemail with Skype Pro. Optionally, you may install the business version of the Skype program for additional functionality. Therefore, all incoming calls go to our main salesperson and, if more technical details are required, the call can be forwarded to us.  Skype for business has a nice control panel to manage Skype credit purchases, phone number assignments, and much more. Additionally, because the phone number is associated with the company, we can redirect it to a new person if an employee is replaced or if someone is on vacation. We could have a 1-800 number, but I don't think we have the kind of call volume that would justify one just yet.

We found the perfect phone number for LavaBlast: 1-201-467-LAVA (5282).  SkypeIn personal has a nice interface to check if a number is available, but the business panel interface doesn't.  If you want to purchase a specific number, I wish you good luck because you have to generate the numbers one by one!  We wanted a number that ended either with BLAST or LAVA;  we found the number with the SkypeIn personal interface and then tried to regenerate it, one number at a time, in the business panel.  We finally found one that ended with LAVA but couldn't find one that ended with BLAST.  The best way we found to generate multiple numbers was to press Ctrl and click like crazy on the generate button to open new pages in new tabs.  We eventually found the number we wanted!

Lesson 5) Exchanging Files

There are lots of cool startups focusing on better ways to exchange files and we're keeping an eye on them. For now, here's what we're using.


As software developers, we obviously use source control to manage our code. No surprise here. We even deploy our web applications using Subversion.


We've developed a document repository in our franchisee/franchisor collaboration solution. We post our user manuals and a few other documents there; it is our distribution center for completed documents.

Microsoft Groove

We use Microsoft Groove, which is part of Office 2007, as our main internal document management tool. Groove is much more than a document repository, but we only use this feature.  Groove is simple to use and install because it distributes the files to all the peers instead of requiring a complex server setup. Furthermore, it is better than a shared network drive because it has change notifications and can be accessed even when you are offline. There are a few drawbacks, but in general, it's a good, simple solution for the low volume of Office documents we work on.

Lesson 6) Requirements Engineering

The first post on the LavaBlast blog was related to requirements engineering and how we managed our pile of requirements written on the back of (sometimes used) paper napkins. More seriously, we've found it very beneficial to collaboratively define our software requirements with our clients. Constant feedback is the key to writing good requirements, and using collaboration tools is a must.


We've been playing with ScrewTurn Wiki as our main wiki over the last year, both for our internal use and to interact with our clients during the requirements engineering process. It is trivially easy to install and very fast: editing is pleasant because saving a page is nearly instantaneous... it just feels blazingly fast. Furthermore, because it is a free and open source project, you get to spend the money on Super Mario Galaxy instead.

During our software engineering capstone project, we used TWiki as our primary requirements engineering tool, a couple of years before the strategy was discussed in IEEE Software. Although we've enjoyed ScrewTurn's simplicity during the last year, I think we're ready to revert to the more feature-rich TWiki. Installation is a pain (less now than before), but the syntax is much more natural. Furthermore, we already have the infrastructure in place (using page meta-data) to let us manage our requirements.

Lesson 7) Desktop Sharing

Today, sharing your desktop is an essential operation for any software startup. You can easily demonstrate applications to potential clients or perform pair programming, without having to be at the same physical location.

Lotus Sametime Unyte

Trying to explain something to someone over Skype is not always fun, especially when the client is not a technical person. Because a picture is worth a thousand words, showing software is much quicker!  During most of our first year, we used Unyte to do just that.  With Unyte, you simply have to start the application, decide which window to share (or share the entire desktop) and then send a link to the person (or group of people) that need to see it.  Recipients click on a hyperlink and load up a browser-based Java applet to see the shared windows. It's as simple as that.  The client doesn't have to install anything and it's fast! The host will receive an alert when the viewers can see his screen.  Unyte, used in conjunction with Skype, is really a great mix for meetings and sales demonstrations. We used the version for five people and it worked well, until we found Microsoft SharedView.

Microsoft SharedView

A few weeks back, we discovered Microsoft SharedView while it was still in beta. SharedView is similar to Unyte, but up to 15 people can participate in a shared session.  The only big drawback is that everyone has to download the software, so if you want to quickly present something to a client, it's a little more complicated because they have to install it first.  However, SharedView does allow any participant to share their desktop with the others (only one person can share at any given time). During a software demo, we used this capability to have the client show us their current point of sale! Additionally, the software lets other people take control of the session and perform operations on the computer that is sharing an application. You can also share documents pretty easily with everyone and send chat messages, but we still use Skype for voice communication. We now prefer SharedView to Unyte whenever it is possible to install software on every participant's computer.

Lesson 8) Blogging!

We've been able to connect with tons of great people thanks to our blog, and have had lots of fun perfecting our writing skills. We find that figuring out what to say (and what your audience is interested in hearing) is the hardest task when you start a blog. We still don't know if we should focus on giving away code or on writing more experience pieces like this one. Let us know!


Our blogging software, BlogEngine.Net, is simply terrific.  It's free, fast, really easy to theme, has some good extensions, and integrates really well with Windows Live Writer.  One of the main advantages for us is that it is open source and written in .NET, allowing us to make minor behavior changes as needed.   It still has a few bugs that can be annoying, but from what we have seen so far, it is performing really well.  It supports multiple authors, an important feature for us because we have two people posting on the blog.

Lesson 9) When not to use communication tools

The most important lesson we learned during our first year of operations as a distributed development team wasn't a revelation but rather a confirmation of our expectations. Communication and collaboration tools are great, but technology doesn't solve conflicts. We're a closely knit team and have worked together for a number of years, which greatly reduces the number of conflicts we have. The conflicts we do have are generally superficial, but as an example, let's consider an architectural decision where the two decision makers have divergent opinions on the best solution. Although instant messaging is great for efficiency reasons, it is not the best medium to argue the merits of your solution. Indeed, we've all seen the politically incorrect "Arguing on the Internet is like running in the Special Olympics; even if you win, you're still retarded". The main problem is the lack of emotional expressiveness of concise instant messages or emails, and since it is discussed at great length by very good authors, we'll leave a detailed analysis of the situation to them! As a side note, firing people via email might have seemed like a good idea to RadioShack, but we at LavaBlast shall stay away from such brilliant strategies!


Communication is the most important part of starting a business. Even if you're the only person in your company, you still have to communicate with your customers! We hope that you'll discover at least one cool collaboration tool today! Remember to come back next week for Part 3 of our series.


How I lost my WinDbg virginity

clock March 13, 2008 10:32 by author JKealey

Jeff Moser's recent post on becoming a grandmaster developer was very inspiring. I'm now consciously forcing myself to work on different, harder problems on a daily basis. Today, I wanted to tackle a nasty memory leak on FranchiseBlast that had been bugging us for months. Actually, it only made the ASP.NET application restart twice (which has no impact because our session state is kept in the state service), but we'd been postponing this bug for months :)

The bug was simple to reproduce. Simply refresh a particular page a hundred or so times and observe the worker process' memory usage continuously increase. dotTrace was not very useful today, because most of the memory was consumed by Strings.

I landed on the If broken it is, fix it you should blog. Reading the blog post and noticing how Tess knows much more than I do about debugging .NET applications, I figured I'd improve my .NET debugging skills by playing with WinDbg. My guess is that many of you haven't played with WinDbg either and that's why I thought I'd write this post. With no previous experience with the tool, I managed to learn enough to be able to solve my bug within a couple hours (learning time included). However, this is a situation where I am grateful to have had the chance to play with the low-level details (assembly language / computer architecture) during my undergrad software engineering studies.

WinDbg Installation

The article says to simply run "!dumpheap -min 85000 -stat" to figure out which objects are using over 85k of memory. Vaguely remembering doing something similar in VS.NET, I tried the command window, to no avail. After some googling, I understood we wanted to launch WinDbg with the SOS extension (Son of Strike... not Summer of Sam :)). I managed to get everything running (installed WinDbg, attached to w3wp.exe, ran ".loadby sos mscorwks") and later found setup instructions on the same blog, which would have made my life easier. I still haven't remembered (nor looked up) how to load SOS inside VS.NET instead of the external tool, but I'm pretty sure it can be done.

Playing around

I played around with various strategies found on the same blog, such as figuring out how much I was caching.  I didn't think caching was my problem, but wanted to play around with the tool.

Attempting to run an interesting command found on the aforementioned blog post, I ran into a syntax error:

.foreach (runtime {!dumpheap -mt 0x01b29dfc -short}){.echo ******************;!dumpobj poi(${runtime}+0x3c);.echo cache size

Of course, this was a blog formatting issue, as the real command was:

.foreach (runtime {!dumpheap -mt 0x01b29dfc -short}){.echo ******************;!dumpobj poi(${runtime}+0x3c);.echo cache size:;!objsize poi(${runtime}+0xc) }

It didn't teach me much :) One command that appeared to be very interesting to me was !dumpaspnetcache. This appeared to be exactly what I was looking for. However, the command is only available in SOS for .NET 1.0/1.1 and I'm trying to debug a .NET 2.0 application. I did lose some time unloading the current SOS and loading the one in the clr10 subfolder (which offers !dumpaspnetcache), but it complained that the command didn't work for .NET 2.0.

The main lesson learned here is that !dumpaspnetcache does not work on .NET 2.0. There is a workaround listed in the comments of the blog post, but I didn't try it out.

Solving a problem

  1. I identified that most of my memory was consumed by strings using !dumpheap -stat.
  2. I ran !dumpheap -min 85000 -stat to view only the largest strings. I observed only three strings (out of approximately 400,000) and that, in total, they accounted for less than 10% of the memory used by my strings.
  3. I ran !dumpheap -mt 790fd8c4 -min 85000 to view the memory locations of these strings (where the MT 790fd8c4 was in the results of step 2).
  4. I ran !dumpobj 0c6800c0 to display the contents of one of the strings (where the address 0c6800c0 was in the result of step 3.)
  5. I saw the string was a simple ASP.NET AJAX script built by the framework. Repeating for the other objects, I didn't find anything interesting. In addition, these were only 3 of my 400,000 strings.
  6. I ran !dumpheap -min 10000 -max 85000 -stat to get a feeling of how many objects were of a particular size. I repeated the exercise for various ranges and ended up discovering that 90% of my strings were under 100 bytes (50% of the memory usage).
  7. I ran !dumpobj 17386ce8 on one of the random 100 byte strings and discovered a country name. Interesting. This means I have a country name in here that is not getting disposed. I only have country names in my address editor, which uses a SubSonic DropDownList.
  8. I ran !gcroot 17386ce8 to look at who was referencing this string and confirmed that it wasn't getting collected by the garbage collector (see below).
  9. I looked at the handles and noticed that I had a user control which was listed as an event listener. Therefore, I discovered that we were registering the UserControl to listen to an event but never broke the link when we were done with the UserControl.
  10. Looking at the code, I saw we were adding the event listener in Page_Load but never removing it. I now simply remove the event handler during the Page_PreRender event, when listening to the event is no longer necessary in my scenario.
  11. I repeatedly loaded the page and saw memory go up but come back down again once the garbage collector did its job. Problem solved! (knock on wood)
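
The fix from steps 9 and 10 boils down to a matching unsubscribe. The control and event names below are made up for illustration (the real ones are internal to FranchiseBlast), but the pattern is what matters: every += you do in Page_Load needs a corresponding -= once the notifications are no longer needed, otherwise the publisher's invocation list keeps the whole control graph alive.

```csharp
using System;

public partial class AddressEditor : System.Web.UI.UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Subscribing stores a reference to this control in the publisher's
        // invocation list; if the publisher outlives the request, the control
        // (and every string it references) can never be garbage collected.
        CountryListCache.CountriesChanged += OnCountriesChanged;
    }

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Break the link as soon as the event is no longer needed, so the
        // garbage collector is free to reclaim the control after the request.
        CountryListCache.CountriesChanged -= OnCountriesChanged;
    }

    private void OnCountriesChanged(object sender, EventArgs e)
    {
        // React to the event (e.g. rebind the country DropDownList) here.
    }
}
```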

Here's the result of my !gcroot command.

17cd3dc4(System.EventHandler`1[[LBBoLib.BO.Utility.GenericEventArgs`1[[System.Web.UI.Page, System.Web]], LBBoLib]])->
173c8e18(System.EventHandler`1[[LBBoLib.BO.Utility.GenericEventArgs`1[[System.Web.UI.Page, System.Web]], LBBoLib]])->


In total, I only spent a couple hours fiddling around with WinDbg but I am very happy to have a new tool in my tool belt (and to have solved my bug). I strongly recommend reading the various tutorials on Tess's blog. I also found a WinDbg / SOS cheat sheet, which you might find interesting.


Software Startup Lessons (Part 1): The Basics

clock March 11, 2008 08:49 by author JKealey

Jennifer, one of my high school friends living in Montreal, recently started her own hair salon. Her employees live in Boston and Florida whereas her biggest customer is located in New York. The previous statement makes no sense given her industry, but it does make sense in ours! Software can be produced anywhere and delivered to customers worldwide via the Internet. Furthermore, our increased connectivity helps businesses target very specialized niches which would not be viable on the local market. (For an overview of the subject, please refer to the excellent The Long Tail.) We're pushing it one step further with LavaBlast, which we started about a year ago, because our team is distributed. We can be categorized as a bootstrapped MicroISV (less than ten people producing software products with no outside funding), and we'd like to share some of our experiences.

We love to read about other startups and why they succeeded, such as in Jessica Livingston's Founders At Work, but we're even more interested in companies that failed. On one hand, it is very motivating to read books such as Founders At Work and learn how the guys behind Hot Or Not started it all and got lucky with a very simple idea. On the other hand, knowing more about failures helps you learn from other people's mistakes. Books like Facts and Fallacies of Software Engineering helped, but are not startup-focused. If you're considering starting your own software startup, do take the time to read up on why startups fail, even if it is not the subject of today's post.

I hope you'll enjoy reading this series of posts and will be unable to resist posting comments and starting interesting discussions. Part 1 focuses on the basics, Part 2 revolves around communication, and Part 3 talks about what we've learned concerning marketing, sales, and growth.

Our Context

LavaBlast Software was started by two recent graduates from the software engineering program at the University of Ottawa in Ontario, Canada. After the bachelor's degree, Etienne moved to Montreal because of an attractive employment opportunity while Jason completed his master's. Both have been programming since their early teens on a wide variety of software systems in their various positions at companies of all shapes and sizes. From tiny web design shops, to medium-sized software firms in the electric industry, to second language schools for civil servants, to large government departments, the founders had the chance to play with software from two perspectives. On one side we have small software companies who live and breathe software, and on the other we have large-scale businesses in which most people are not necessarily computer-savvy (except for the IT departments). Additionally, we sidelined as male strippers at bachelorette parties to pay for our university studies.

It did not take long to discover that we were so passionate about software that we wanted to start our own software company, not work in corporate Dilbert-reading sweatshops. Of course, this is simply a caricature, as we worked on very interesting and challenging systems while in larger companies. There's nothing wrong with working in IT departments, but it wasn't a fit for our entrepreneurial drive. On top of that, we love working on projects where we get direct feedback from our customers, instead of on internal company tools.

It was clear we were going to start a software company, but we did not know what nor when. The original plan had been to acquire experience and a monetary cushion that would allow us to start working on the next big thing. Before taking the plunge and leaving our day jobs, we wanted to have a good idea. We considered various ideas, but being recent graduates with student loans, going out on a limb and working with no revenue on a magical idea for an undetermined period of time was not a very attractive solution. However, we did find a very attractive business opportunity with people with whom Jason had been working for almost three years on various web development projects. These partners have franchised their business and have acquired precious experience over the years. They are very knowledgeable and have a clear vision of the various business processes from the franchisor's point of view. After 20-some years in the business, they know what software is required, what to focus on, and how to support it.

Discussions ensued, and LavaBlast was formed. We produce industry-specific software solutions for franchisors. All our products are integrated via FranchiseBlast, which cuts down franchisor/franchisee costs while favoring reuse from a software engineering standpoint. We have a base product offering which we tailor to our customers' specific business processes. In a sense, we sell software products with a side order of software services.

In summary:

  • No outside funding.
  • Small distributed team.
  • A mix of products and services.
  • We eat our own dog food.

Lesson 1) The Great Idea

Simply put, chances are you won't think of the next big thing. Even if you do, you probably won't have the resources, experience, or contacts, to turn your idea into billions. We were not expecting miracles but were on the lookout many months in advance. One thing we learned while reading Founders At Work is that many software startups are formed without the faintest clue of what the product/service will be. The most important part of the company is not the idea but the people. A small and closely knit team of people who've worked together in the past is a recipe for success, regardless of the idea. Indeed, execution is key! Partnering with people who have a track record of bringing ideas to reality is, in our opinion, the most important part of the business.

We are very happy that we did not wait for a brilliant breakthrough idea and started our company immediately. We evaluated opportunities and picked the one that made the most business sense, knowing we could adapt along the way. Having taken the plunge early in our careers, we felt we didn't have anything to lose because we didn't have many constraints, such as a high-paying job to leave, a mortgage and student loans to pay, or kids to feed. The more acquainted you get with stable employment, the harder it is to leave. Still, it is worth mentioning that while we were very comfortable with our technical skills, our network of contacts is not as large as it would have been had we spent 5-10 years under someone else's employment. On the bright side, we don't have 10 years of accumulated bitterness to carry around and we're in the best phase of our development careers (from a production standpoint).

Simply put, talk to people around you and reflect on what pieces of software would scratch your itch. There are business opportunities all around you... all you need to do is decide on what's the best fit for your current situation! Are you smart and can get things done? Are you able to adapt to your environment and pursue different opportunities when they present themselves? If so, don't wait for an idea that may never come and enjoy the ride!

Lesson 2) Eating your own dog food

It's been said time and time again: you need people using your software in order for its quality to reach high standards. Here at LavaBlast, we're fortunate to be able to develop UsWare because we're involved in our partners' day-to-day franchise operations. As developers, we proactively solve problems before anyone reports them because we feel like we are part of the franchise. Furthermore, since we built our infrastructure to be re-usable, we recently launched SupportBlast as a clone of our FranchiseBlast solution. SupportBlast is basically our mini-CRM and we've integrated our software registration and license key generator in there. Although the strategy does have its limitations, using our own products helps us boost our software quality. Furthermore, for our kind of software, it would have been impossible to develop anything coherent without actual customers.

It takes ten years to build a great software product, and it also takes ten years to become a software grandmaster. You come to understand that quality takes time and effort, and you'll only discover your software's weaknesses by listening to critique. It's better to be harsh on yourself than to listen to someone scream over the phone, don't you think?

Lesson 3) Mix of products and services

As stated previously, our company has currently received no outside funding (nor have we invested millions from our personal fortunes). We doubt angel investors or venture capitalists would be interested in our company because we're not aiming for a 100:1 return on investment. We could change our strategy and go for an all-or-nothing gamble, but we're more interested in growing our company organically. We know our company can be profitable using our current strategy, but we understand that we are not going to be the next FaceBook and my guess is that your company might fall into this same category. More importantly, we agree with Spolsky's model of company growth, and aim to be profitable from day one.

The main question for our type of company is: how can we pay the bills while developing our product? The holy grail for return on investment in our business implies no additional work on the developer's part to support a new customer. Microsoft doesn't need to make adjustments to Word to sell it to a new customer, and FaceBook allows its users to change the content of their personal pages themselves. On the opposite end of the spectrum, a good web design firm will create interfaces from scratch, adjusted to the needs (and tastes) of each of its clients. In the end, we want to be a product company, but we need to build good products (and accumulate a critical mass of customers) before it pays the bills. We wanted to start our business immediately, not work on it on weeknights and weekends for a couple of years. Because of this, we decided our main interest was our product line, but we'd perform services on the side as a short-term income generator.

We were open to external contracts, but did not end up doing any work outside of our product line. We're very happy about this, because the solution managed to sustain its own development without us having to dilute our efforts with side contracts (such as bachelorette parties).


Have fun but expect a bumpy ride. If you're not happy in your own company, doing what you want to do, you have a problem. Still, as with anything, you'll have some good days and some bad. In a startup, expect a roller coaster of emotions because you'll certainly get very excited by a certain situation but crash & burn when it doesn't materialize. In addition, not all work is exciting. You're starting a company and you have to do everything yourself. Doing accounting at 11PM on a Saturday night is not my idea of fun. Generally, however, it is very gratifying to build your own company from scratch and if you're fit for the challenge, I encourage you to try it out! The worst thing that can happen is that you will fail, lose some cash, gain valuable life lessons, and go back to working for someone else (or start over with a new idea :)).

Come back next week for part 2!


How Super Mario Bros Made Me a Better Software Engineer

clock February 28, 2008 00:07 by author JKealey

Over the past month, I've been working hard on our business plan's second iteration. We've accomplished a lot in our first year and things look very promising for the future. Writing a business plan helps an entrepreneur flesh out various concepts, including one's core competency and niche market. We illustrate in great detail what makes LavaBlast such a great software company for franchisors, and the process of writing it down made me wonder what improved my own software engineering talents. Luckily for you, this post is not about me personally, but rather about an element of my childhood that impacted my career.

I don't recall exactly how old I was when I received an 8-bit Nintendo Entertainment System (NES) for Christmas, but I do remember playing it compulsively (balanced with sports like baseball, soccer and hockey!). The first game I owned was Super Mario Bros, but I later obtained its successors to augment my (fairly small) cartridge collection. For the uninitiated, the NES did not incorporate any functionality to allow saving the player's progress. Countless hours were spent playing and replaying the same levels, allowing me to progress to the end of the game and defeat my arch-nemesis, Bowser.

I enjoyed the time I spent playing video games, and it irritates me to hear people complaining about how video games will convert their children into violent sore-loser bums. In any case, I'd rather focus on the positive aspects of playing Super Mario Bros and other video games during my childhood. Just like mathematics develops critical thinking and problem solving skills, I strongly believe these video games influenced my personality to the point where they probably defined my career. Disclaimer: I don't play video games that much anymore, but over the last year, I did purchase a Nintendo Wii and Nintendo DS Lite because I love the technology and the company's vision.

Quality #1: Persistence

Some people say I am a patient person, but I would beg to differ. I have trouble standing still intellectually, and although that is a strength in my industry, it isn't the best personality trait :) However, I am VERY persistent. I will attempt to solve a problem over and over until I find a solution. Although I don't experience many painful programming situations on a daily basis, I very rarely give up on a programming problem. If I can't solve the problem immediately, it will keep nagging at me until I find a solution. A direct parallel can be traced with playing the Super Mario Bros series, where the whole game had to be played over and over again to make any progress. (Anyone else remember trying to jump over a certain gap in the floor in the Teenage Mutant Ninja Turtles NES game, only to fall in the gap and have to climb back up again?) These games helped me train my persistence, a tool which any entrepreneur must use every day.

Quality #2: Pattern Recognition

Software engineering is all about pattern recognition: refactoring code to increase reuse, extracting behavioural patterns into the strategy design pattern, creating object inheritance trees, or writing efficient algorithms based on observed patterns. I feel pattern recognition is one of my strengths, since I can easily see commonalities between seemingly different software problems. I believe this skill was refined by playing various video games, because the player must observe the enemy's behaviour in order to succeed. In some games, agility doesn't really matter: it's all about knowing the pattern required to defeat the enemy (to the point where it sometimes becomes frustrating!). The most challenging parts of video games are when the game deliberately trains you to believe you'll be able to stomp an enemy by using a particular technique but, to your surprise, the technique fails miserably. You need to adapt to your environment and think outside the box.

Quality #3: Creativity

Mathematicians and software engineers are creative thinkers, more than the uninitiated might think. I see software as a form of art, because of its numerous qualities that are hard to quantify. Software creators are artists in the sense that regardless of their level of experience, some will manage to hit the high notes while others could try their whole lifetime without attaining the perfect balance of usability, functionality, performance, and maintainability. Playing a wide breadth of video game styles lets you attack different situations with a larger bag of tricks. I'm not totally sure how putting Sims in a pool and removing the ladder or shooting down hookers in Grand Theft Auto helped me in my day-to-day life, but it was still very entertaining :) The upcoming Spore game is very appealing to me because it combines creativity with pattern recognition, thanks to generative programming. If you haven't heard about this game, I recommend you check it out immediately!

Quality #4: Speedy reactions

At LavaBlast, as in many other software startups, it is critically important that all developers be fast thinkers. Indeed, when your core expertise is production, as opposed to research and development, you need to be able to make wise decisions in a short period of time. Personally, I can adapt to the setting (research environment versus startup environment), but my strength is speedy problem solving and I consider myself a software "cowboy". By combining my knowledge of how to write reusable and maintainable code with my good judgement of which battles are worth fighting, I can quickly come up with appropriate solutions, given the context. In video games, the player needs to react quickly to avoid incoming obstacles while staying on the offensive to win the game. Of course, the mental challenges we face in our day-to-day lives of developing software are much more complex than what we encounter playing video games (which train physical reaction time), but there is still a correlation between the two tasks.

Quality #5: Thoroughness

What differentiates a good software engineer from a plain vanilla software developer is their concern for quality software, across the board. Software quality is attained through the combined impact of numerous strategies, but testing software as you write it, or after you change it, is critical. For the uninitiated, a popular methodology is to write tests BEFORE you write code. In any case, this talent can also be developed by video games such as the classic Super Mario World (SNES), where the player tries to complete all 96 goals (72 levels) by finding secret exits. Such thoroughness requires the player to think outside the typical path (from left to right) and look around for secret locations (above the ceiling). Finding secret exits is akin to achieving better code coverage by trying atypical scenarios.

Quality #6: Balance

Playing Super Mario Bros as a child helped me develop a certain sense of balance between my various responsibilities (school) and entertainment activities (sports, games, social activities). If you're spending 16 hours a day playing World of Warcraft or performing sexual favors in exchange for WoW money, your mother is right to think that you have a problem. Launching a software startup is a stressful experience, and it helps to be able to wind down with a beer and a soothing video game. A quick 20min run on a simple game before bed can work wonders! Of course, it is no replacement for sports or social activities, but it sure beats dreaming about design patterns.

What's missing? 

In my opinion, there are two major qualities that video games don't impact, and having these two qualities is a requirement to becoming a good software engineer. First, video games do not help you interpret other people's needs. Second, video games do not help you communicate efficiently. What does? Experience, experience, experience. Being able to deal with people on a daily basis is mandatory, and the video games I played as a child did not help. However, this statement may no longer be true! Today, many massively multiplayer online games require good collaboration and organizational skills. Furthermore, the new generation of gaming consoles is using the Internet to allow people to play together.

Furthermore, I find games like the new Super Mario Galaxy (Wii) very interesting for future mechanical engineers. Indeed, the game presents a three-dimensional environment in a novel way, training the brain to think differently about three-dimensional space. You have to play the game to understand, but because the camera does not always show Mario from the same angle, you have to get a feeling for the environment even when you don't see it (you're on the opposite side of a planet) or are upside down on the southern hemisphere. I can imagine that children and teenagers playing the game today will find it easier to imagine an object from various perspectives when studying physics or mechanical engineering in university.

In conclusion, I admit my whole argument can be invalidated by saying that I played these types of games because I was inherently inclined towards the software engineering profession, but the commonalities are still interesting to review! What are your thoughts on the subject? What do you think drove you to this profession (or drove you away)?

Legal Disclaimer: Did you know that the usage of the title of "software engineer" is much more regulated in Canada than it is in the United States? Although I hold a bachelor's degree in software engineering, and a master's degree in computer science focused on requirements engineering, I can currently only claim the title of "junior engineer", as I recently joined the professional order.

Follow up: powerrush on DotNetKicks informed me that I'm not the only one who feels games influence software engineers


Common console commands for the typical ASP.NET developer

clock February 25, 2008 14:19 by author JKealey

As an ASP.NET web developer, there are a few tasks I must perform often and am glad to be able to do via the command line. GUIs are great, but some things are simply faster to do from a console. Although we do have Cygwin installed to enhance our tool belt with commands like grep, there are a few ASP.NET related commands that I wanted to share with you today. Some of these are more useful on Windows Server 2003 (because you can run multiple worker processes), but I hope you will find them useful.

1) Restarting IIS

The iisreset command can be used to restart IIS easily from the command line. Self-explanatory.

Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted

2) Listing all ASP.NET worker processes

You can use tasklist to get the running worker processes.

tasklist /FI "IMAGENAME eq w3wp.exe"

Image Name                     PID Session Name        Session#    Mem Usage
========================= ======== ================ =========== ============
w3wp.exe                    129504 Console                    0     40,728 K

You can also use the following command if you have Cygwin installed (easier to remember)


tasklist | grep w3wp.exe


w3wp.exe                      4456 Console                    0     54,004 K
w3wp.exe                      5144 Console                    0    101,736 K
w3wp.exe                      2912 Console                    0    108,684 K
w3wp.exe                      3212 Console                    0    136,060 K
w3wp.exe                       852 Console                    0    133,616 K
w3wp.exe                       352 Console                    0      6,228 K
w3wp.exe                      1556 Console                    0    155,264 K
w3wp.exe                      3480 Console                    0      6,272 K

3) Associating a process ID with a particular application pool

Should you want to monitor memory usage for a particular worker process, the results shown above are not very useful. Use the iisapp command.

W3WP.exe PID: 4456 AppPoolId: .NET 1.1
W3WP.exe PID: 5144 AppPoolId: CustomerA
W3WP.exe PID: 2912 AppPoolId: CustomerB
W3WP.exe PID: 3212 AppPoolId: Blog
W3WP.exe PID: 852 AppPoolId: LavaBlast
W3WP.exe PID: 352 AppPoolId: CustomerC
W3WP.exe PID: 1556 AppPoolId: CustomerD
W3WP.exe PID: 3480 AppPoolId: DefaultAppPool

By using iisapp in conjunction with tasklist, you can tell which process to target with taskkill.

4) Creating a virtual directory

When new developers check out your code for the first time (or when you upgrade your machine), you don’t want to spend hours configuring IIS. You could back up the metabase and restore it later on, but we simply use iisvdir. Assuming your root IIS has good default configuration settings for your project, you can create a virtual directory like so:

iisvdir /create "Default Web Site" franchiseblast c:\work\lavablast\franchiseblast\


5) Finding which folder contains the desired log files.

IIS saves its log files in %windir%\System32\LogFiles, but it creates a different subdirectory for each web application. Use iisweb /query to figure out which folder to check.

Connecting to server ...Done.
Site Name (Metabase Path)                   Status    IP     Port  Host
==============================================================================
Default Web Site (W3SVC/1)                  STARTED   ALL    80    N/A
port85 (W3SVC/858114812)                    STARTED   ALL    85    N/A

6) Many more commands…

Take a peek at the following articles for more command-line tools that might be useful in your context:


There are numerous command line tools distributed by Microsoft that help you manage your ASP.NET website. Obviously, the commands listed here are the tip of the iceberg! Although some developers know about these commands because they had to memorize them for a certification exam, many are not even aware of their existence. Personally, I feel that if you write a single script that sets up IIS as you need it for development, you’ll save time setting up new developers or re-installing your operating system. Script it once and reap the rewards down the road.


Manage your ASP.NET Web.config Files using NAnt

clock February 19, 2008 13:24 by author JKealey

Nothing is more important than the software engineers in a software company. I just finished re-reading Joel Spolsky’s Smart and Gets Things Done and it inspired this post. Not only do I admire his writing style, I share Joel’s vision of how a software company should be run. Pampering your programmers is the best decision a manager can make, especially when you’ve built a team that can hit the high notes.

One of Joel’s key elements to running a successful software company is to automate your build process to the point where it only takes one operation. This minimizes the chance of error while enabling your developers to devote their grey matter to something more complex than copying files. Although they might use the extra time to shop around for student loan consolidation plans (a practice called reverse telecommuting), in most cases they’ll return to writing fresh code or cleaning out existing bugs.

Today’s post is about one of the little things that made my life much easier as a developer: using NAnt to manage our software product line. I’ve come to realize that we encounter these little “sparks” every day, but we never talk about them. Sure, we’ve produced a number of complex software products and they are fun to describe, but I personally enjoy talking about the little things that save time, just like Henry Petroski verbosely describes common items in his books. Fortunately for you, I’ll keep the story short, unlike his description of the evolution of the paper clip in The Evolution of Useful Things (which is still an interesting read, by the way).


We develop lots of ASP.NET websites. Our architecture includes database schemas and business objects shared amongst multiple projects, plus some common utility libraries. Instead of always inheriting from System.Web.UI.Page and System.Web.UI.UserControl, we have an object-oriented inheritance tree, as is good software engineering practice. We even have a shared user control library that gets copied over after a successful build. We also use ASP.NET master pages and ASP.NET themes to structure our designs. As opposed to what you see in textbooks, where themes can be chosen by the user according to their preferences (oh yes, please show me the pink background with fluffy kittens), we use themes to represent different franchise brands.

My point here is that reusability is key to our solution. We build elements that we can use not only on the website but also in FranchiseBlast, the interactive kiosk, and the point of sale. However, the more you re-use, the more complicated things get. Indeed, the overhead caused by the added configurability we build into our reusable components is non-negligible. We're always on the lookout for new ways to keep things simple, while still reaping the benefits of reuse. We use the Strategy design pattern to encapsulate the behavioural changes in our systems and put our various configuration settings inside our Web.config file.

Hurdle #1: Different developers need different web.config files

Our configuration files have a few settings that we want to change on a per-user basis:

- Where should we email exception notifications?

- Database names & file paths

- Google API Keys

How do we manage this? If we put our Web.config file under source control, we'll end up with various conflicts when the developers change the configuration file to suit their tastes. I don't know about you, but I have better things to do than start memorizing API keys or digits of PI.

Solution #1

Our first solution wasn’t fantastic, but it was sufficient for a while. We simply removed the Web.config from source control and created new files, one for each developer (Web.config.jkealey, Web.config.etremblay, etc.) and one for the deployment server (Web.config.server1). When a change was to be made, we whipped out WinMerge and changed all the files. You can quickly understand that this process does not scale well, but it was sufficient for small projects with 2 to 3 developers.

Hurdle #2: Scaling to more than a couple machines

We deploy our point of sale software and kiosks via Subversion. It might be fun to use WinMerge to compare a couple of Web.config files, but when you’ve got a hundred web applications to update to a new version by comparing Web.config files, you’ve got a problem. Doing this by hand wasn’t very difficult, but it was error-prone and time consuming. I don’t know if you have seen the Web.config additions that ASP.NET AJAX brought to the table, but upgrading from a release candidate of Atlas to the full release of ASP.NET AJAX was painful (we’re not talking about half a dozen settings in the AppSettings section).

Solution #2

1) Create a template Web.format.config that contains the general Web.config format, with certain placeholders for variables that vary on a per-developer or per-machine basis.

2) Create a properties file that contains the default settings for the Web.config.

3) Create a properties file for each developer that simply overrides the default settings with other values when needed.

4) Write a script to replace the placeholders in the Web.format.config and generate your Web.config.developername files for you.
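The replacement step itself (step 4) can be sketched in a few lines of POSIX shell. This is an illustrative sketch only: it assumes a flat key=value properties file rather than the NAnt XML property files we actually use, and the file names are hypothetical.

```shell
# generate_config: minimal sketch of step 4 — expand ${key}-style
# placeholders in a template using a flat key=value properties file.
# (Illustrative only; our real setup drives this through NAnt.)
generate_config() {
  template=$1    # e.g. Web.format.config
  properties=$2  # e.g. web.jkealey.properties
  output=$3      # e.g. Web.config.jkealey
  cp "$template" "$output"
  while IFS='=' read -r key value; do
    # Skip blank lines; replace every occurrence of ${key} with its value.
    [ -n "$key" ] && sed -i "s|\${$key}|$value|g" "$output"
  done < "$properties"
}
```

Calling generate_config Web.format.config web.jkealey.properties Web.config.jkealey would then produce a developer-specific configuration without ever touching the template.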

We implemented this strategy using NAnt. Our script does a bit more work because we’ve got interrelated projects, but I will describe the base idea here.


Here is a portion of our web.format.config file:

    <add key="GoogleMapsAPIKey" value="${GoogleMapsAPIKey}"/>
    <healthMonitoring enabled="${healthMonitoring.enabled}">
        <add type="System.Web.Management.SimpleMailWebEventProvider" name="EmailWebEventProvider"
             subjectPrefix="${email_prefix}: Exception occurred"
             bodyHeader="!!! HEALTH MONITORING WARNING!!!"
             bodyFooter="Brought to you by LavaBlast Software Inc..."
             buffer="false" />

Property files

Our default settings look something like the following:

    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKdddd8YoBaAeaBR0a45XuIX8vaM2H2dddddQpMmazRQ30ddddPdcuXGuhMT2rGPlC0ddd" />
    <property name="healthMonitoring.enabled" value="true"/>
    <property name="email_prefix" value="LavaBlast"/>
    <property name="bugs_to_email" value="[email protected]" />
    <property name="bugs_from_email" value="[email protected]" />


Our per-developer files include the default settings, and override a few:

    <!-- load defaults -->
    <include buildfile=""   failonerror="true" />   
    <!-- override settings -->
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKeeee8YoBaAeaBR0a45XuIX8vaM2H2eeeeeQpMmazRQ30eeeePecuXGuhMT2rGPlC0eee"/>
    <property name="bugs_to_email" value="[email protected]" />

The NAnt script

We wrote a NAnt script that runs another NAnt instance to perform the property replacements; the core code comes from Captain Load Test. It is a bit slow because we have to re-invoke NAnt, but it doesn’t appear that you can dynamically include a properties file at runtime. Feel free to comment if you find a way to make it more efficient. We don’t keep the generated files under source control, as we only version the property files.

<project name="generate configs" default="generate">
    <property name="destinationfile" value="web.config" overwrite="false" />
    <property name="propertyfile" value="invalid.file" overwrite="false" />
    <property name="sourcefile" value="web.format.config" overwrite="false" />
    <include buildfile="${propertyfile}" failonerror="false" unless="${string::contains(propertyfile, 'invalid.file')}" />
    <target name="configMerge">
        <copy file="${sourcefile}" tofile="${destinationfile}" overwrite="true">
            <filterchain>
                <expandproperties />
            </filterchain>
        </copy>
    </target>
    <target name="generate">
        <property name="destinationfile" value="web.config.${machine}" overwrite="true"/>
        <property name="propertyfile" value="web.${machine}.properties" overwrite="true"/>
        <property name="sourcefile" value="web.format.config" overwrite="true"/>
        <echo message="Generating: ${destinationfile}"/>
        <!--<call target="configMerge"/>-->
        <exec program="nant">
            <arg value="configMerge"/>
            <arg value="-nologo+"/>
            <arg value="-q"/>
            <arg value="-D:sourcefile=${sourcefile}"/>
            <arg value="-D:propertyfile=${propertyfile}"/>
            <arg value="-D:destinationfile=${destinationfile}"/>
        </exec>
    </target>
</project>
Hurdle #3: Software Product Lines

Up to now, we’ve talked about taking one project and making it run on a number of machines, depending on a few preferences. However, we’ve taken it one step further because our web applications are part of a software product line. Indeed, we have different themes for different brands. Different companies have different configuration settings and site map files. Therefore, we needed to be able to generate configuration files for each brand AND for each machine. This also greatly increases the number of configuration files we need.

Solution #3

It wasn’t very difficult to expand to this new level of greatness thanks to the script presented in hurdle #2. We basically have default configuration files for each project (themes, sitemap, name, locale, etc) in addition to the files we’ve shown above. We simply have to load two configuration files instead of one.

We even wrote a batch file (SwitchToBrandA.bat) that generates the property file for the current machine only (via the machine name environment variable) and it replaces the current Web.config. By running one batch file, we switch to the appropriate product for our current machine.
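As a rough POSIX translation of that idea (the real SwitchToBrandA.bat is a Windows batch file driving NAnt; the file names, the hostname convention, and the flat key=value property format here are all illustrative assumptions):

```shell
# switch_brand: regenerate Web.config for the current machine and a given
# brand. Property files are merged highest-priority first, because once a
# placeholder has been replaced, lower-priority values can no longer match it.
switch_brand() {
  brand=$1
  machine=$(hostname)
  cp web.format.config Web.config
  for f in "web.$machine.properties" "web.$brand.properties" web.defaults.properties; do
    [ -f "$f" ] || continue
    while IFS='=' read -r key value; do
      # Replace ${key} placeholders still present in the generated file.
      [ -n "$key" ] && sed -i "s|\${$key}|$value|g" Web.config
    done < "$f"
  done
}
```

Running switch_brand BrandA on any machine would then regenerate Web.config from the brand's settings layered over the defaults, with machine-specific overrides applied first.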

Future work

Currently, it takes a couple of minutes to create a new brand or add a new developer. It doesn’t happen often enough to make it worthwhile for us to augment the infrastructure to make it easier on us, but it is a foreseeable enhancement for the future. I guess another future work item would be to hire someone who is an expert in build automation, test automation and automatic data processing! :) These are skills they don't teach in university, but should!


I, for one, welcome our new revision control overlords!

clock February 11, 2008 10:35 by author JKealey
Be a lazy developer!  You know you deserve it.

I’ve been developing websites professionally for almost nine years now and although I still party like it’s 1999, my productivity has increased greatly thanks to better tools and technologies. Recent dealings with the IT department of another firm reminded me that although we like to think all developers use source control (CVS, Subversion, etc.), this is not the case. There are lots of corporate developers out there who don’t follow the industry’s best practices. This post is not about using version control tools for source code… it’s about re-using the same tool for deploying websites, instead of FTP. We assume you know what source control is and are interested in using it in novel ways.

A few days ago we were visiting the facilities of a company which provides services of interest to one of our franchisor customers. As our specialty is the integration of external systems with FranchiseBlast, our franchise management tool, we wanted to know how the data would be able to move back and forth. One of the sophisticated options available to us was the use of FTP to transfer flat files in a specific file format; not that there’s anything wrong with that! Indeed, when your core business doesn’t require lots of integration with your customers, there is no need to re-invent your solution every three years with newer technologies. You can afford to stick with the same working solution for a long period of time! (We will obviously continue to push web services, as it is much easier to write code against a web service than to pick up flat files from an FTP server!)

Integration and automation reduce support costs for the franchisor

We’re always looking at pushing the envelope, and we know that software integration, and especially automation, is the key to cutting down labor costs. If your business processes include lots of manual labor, we feel it is worthwhile to take the time to investigate replacing a few steps with software-based solutions (focus on the 20% of steps that make you lose 80% of your time). Wouldn’t you rather play with your new puppy than copy-paste data from one place to another? A typical example of the integration we built into FranchiseBlast and the point of sale is the automatic creation of products in stores, once they are created on FranchiseBlast. Our franchisees save lots of time not having to create their own products, and we avoid the situation where an incorrect UPC is entered only to be discovered months later.

Furthermore, although your mental picture of a franchisor might boil down to someone lounging in their satin pyjamas in front of the fireplace, sipping some piping hot cocoa, while waiting for the royalty check to come in, this is very far from the truth. Supporting the franchise is a major time consumer but if you can manage to reduce and/or simplify all the work done inside the store, you can greatly reduce time spent supporting the stores.

Enough about the franchise tangent; let’s talk about web development!

Integration and automation do not only apply to your customers: any serious web development firm still using FTP to deploy websites should consider the following questions. By a serious firm, I mean you’ve got more than your mother’s recipe website to deploy, and you build dynamic web applications that change regularly, not a static business card for your trucker uncle.

  • Are you re-uploading the whole site every time you make an upgrade?
  • Are you selecting the changed files manually or telling your FTP client to only overwrite new files?
  • Is someone else also uploading files and you’re never sure what changed?
  • Do you care about being able to deploy frequently without wasting your time moving files around?
  • Do you have to re-upload large files (DLL, images) even if you know you only changed a few bytes?
  • Did you ever have to revert your website back to a previous version when a bug was discovered?
  • Do you upload to a staging area for client approval, then deploy to the production server?

If you answered yes to one of these questions, you’re probably wasting your precious time and (now cheap) bandwidth. Yes, I know it is fun to read comics while you’re uploading a site, but you’re not using technology to its full potential.

Source control technology has been around for decades and hopefully you’ve been using it to collaborate with other developers or designers when creating websites. Even if you work alone, there are several advantages to using CVS or Subversion, for example. You may be wondering why I am talking about source control in the context of web deployment but I hope the astute reader will know exactly where I’m headed.

Why not deploy your websites using source control tools such as Subversion?

There are probably lots of you out there who already do this, but some may never have thought outside the box. By sharing this with you today, I hope to help at least one person cut down on an hour per week spent on deployment. We've experienced the benefits and wanted to share them with you.

We prefer Subversion over CVS for multiple reasons, but one that is of particular interest here is the fact that it can do binary diffs. If you recompile your ASP.NET website, you’ll generate new DLL files that are very similar to the previous ones. The same thing happens when you’re changing images. Thanks to this Subversion feature, you only have to upload the bytes that have changed inside your files… as opposed to the whole files!  Furthermore, as with most source control management tools, you can copy a file from one project inside your repository to dozens of others, without taking up additional space on the disk.

You can create a separate repository for your deployment files (separating them from your source code) and check out the project on the production server. Later on, when you commit your changes, you can simply perform an update on the production server. You could even automate the update process using post-commit hooks.
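The staging half of that workflow can be sketched as a Subversion post-commit hook. This is a minimal sketch, not a production script: the staging checkout and log paths are hypothetical.

```shell
#!/bin/sh
# post-commit hook: Subversion invokes this with the repository path and
# the new revision number after every successful commit.
REPOS="$1"
REV="$2"

# Refresh the staging checkout so the client always sees the latest build.
svn update /var/www/staging --non-interactive --quiet

# Production deployment stays manual: just record what became available.
echo "r$REV committed to $REPOS" >> /var/log/svn-deploy.log
```

Dropped into the repository's hooks directory as post-commit (and marked executable), this gives you the automatic staging update without ever touching the production checkout.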

There are numerous advantages to deploying using a tool such as Subversion:

  • You only commit what has changed and the tool tracks these changes for you.
  • Only changes are transferred, not the whole site.
  • If someone other than you fools around with the deployed site, you can immediately see what was changed.
  • You can easily revert to an older version of the site.

You should automate the deployment to the staging area (using continuous integration or post-commit scripts) but keep deployment to the production server manual. Automatic deployment to your staging area means you can commit at 4:50PM on a Friday afternoon, wait for the successful build confirmation, and head off to play hacky sack with your high school buddies without having to manually redeploy to the development machine. At 5AM on Saturday morning, when your early-riser client gets up and has a few minutes to spare before reading the paper, he can load up the development server and play with the latest build.

What about my database?

The concept of database migrations is very useful in this context; if you use tools that have database migrations built in (such as Ruby on Rails), then you are in luck. Otherwise, it gets more complicated. We're waiting for database migrations to be supported by SubSonic before investing much effort in this area (although I don't recall ever having to revert a production server back to a previous version). For our business, this is a must-have feature because it allows us to revert a store's point of sale to a stable build should the software explode in the middle of a busy Saturday. Even better, should a fire destroy the store's computers, we can reload that store's customized version within minutes. (We also do nightly off-site backups of the sales data.)

In any case, we recommend you take a peek at the series of posts made by K. Scott Allen, referenced in this article by Jeff Atwood.

How can I save more time?

The answer is simple: by scripting the copying of files (locally) from your development copy to the checked-out folder of the production repository. This can be as simple as using the built-in deployment tools available in your IDE (such as VS.NET's deployment functionality) or writing a script that copies all files of a particular extension from one place to the other. Eventually, you'll need to adapt your script to your particular needs, if you're wasting too much time copying large files that never change, for example. This step depends on your project and its environment. I will describe in a future post how we use NAnt to manage our software product line. Kudos to Jean-Philippe Daigle for helping us out in the early days.
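As a toy illustration of such a copy script, here is a minimal Unix-shell sketch; the directory names and file extensions are made up for the example, and on Windows you would likely reach for robocopy or a VS.NET deployment project instead.

```shell
#!/bin/sh
# Demo layout standing in for a real project (illustrative paths):
mkdir -p website/bin deploy-checkout
printf 'IL bytes'    > website/bin/app.dll
printf '<%% Page %%>' > website/Default.aspx
printf 'scratch'     > website/notes.txt   # not a deployable file type

# Copy only deployable file types from the development copy into the
# checked-out deployment working copy, preserving directory structure.
# (The .svn metadata in deploy-checkout is untouched, so `svn commit` still works.)
(cd website && find . -type f \( -name '*.aspx' -o -name '*.dll' \) -print) |
while read -r f; do
    mkdir -p "deploy-checkout/$(dirname "$f")"
    cp "website/$f" "deploy-checkout/$f"
done
```

After running this, a plain `svn status` in deploy-checkout shows exactly which deployable files changed, ready to commit.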

Concrete examples

LavaBlast deploys its point of sale application via Subversion; every computer we manage has its own path in the deployment repository. This allows for per-store customizability and is not as redundant as it may sound, thanks to Subversion's cheap copies. Furthermore, when the POS communicates with FranchiseBlast, we track exactly which version of the software is running in that particular store (via the Subversion revision number). We also track a revision number for the database schema. Having this bird's-eye view of the software deployed in the franchise lets us easily schedule updates to subsets of the stores. At the point where we are now, we could easily write code that would signal to a store that it should upgrade itself to a certain revision at a certain time and date. Ultimately, instead of spending my time copying files, I am available to write blog posts like this one!


By moving away from FTP, we've considerably cut down the time it takes to deploy a website. We invested time in this infrastructure early on, allowing us to concentrate on design, coding, and testing as opposed to deployment. Of course, FTP still has many uses outside the context we describe here; it has been around for a long time and will not disappear any time soon!


RESX file Web Editor

clock February 7, 2008 08:35 by author EtienneT

Source Code | DEMO

If you have a multi-language site, you have probably already worked with .resx files.  Resx files are resource files that can contain strings, images, sounds... pretty much anything.  However, in ASP.NET (at least in our typical scenarios), resource files mainly contain strings to be translated in multiple languages.  Those needing a refresher course on ASP.NET localization should take a peek at this article: ASP.NET Localization. Let us take a typical ASP.NET application as an example.  When you generate a resource file for a file named Default.aspx, VS.NET generates a new folder named App_LocalResources (if it doesn't exist) and it creates a new file named Default.aspx.resx in this folder.


The Generate Local Resource option is only visible in the design view of an aspx or ascx file.

Default.aspx.resx will contain all strings that can be localized from Default.aspx.  Default.aspx.resx is the default resource file for Default.aspx.  It will contain the default language strings, in our case English.  Should you need to offer the same application in more than one language, you will need to create locale-specific resource files with a similar filename. For example, Default.aspx.fr-CA.resx would be a resource file for Canadian French. The logic to retrieve a string from the appropriate resource file is built into the .NET framework (it depends on the current thread's culture).
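To make the fallback concrete, here is roughly how you could force a specific culture on a page yourself; the page class name is illustrative, and real code would normally derive the culture from the browser's Accept-Language header or a user preference rather than hard-coding it.

```csharp
using System.Globalization;
using System.Threading;

public partial class _Default : System.Web.UI.Page
{
    protected override void InitializeCulture()
    {
        // Force Canadian French: resource lookups now hit
        // Default.aspx.fr-CA.resx, falling back to Default.aspx.resx
        // for any key missing from the fr-CA file.
        Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-CA");
        base.InitializeCulture();
    }
}
```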

Those who have built a dynamic web site supporting multiple languages know that managing resx files is a burden, especially when the application changes. A web site we recently created provides English, French, and Danish versions.  We use a CMS for most of the text, but some dynamic pages, like the store locator, contain text which needs to be in resource files.  We collaborated with a resident of Denmark to translate the website; the translator is not a programmer and could not be trusted with the XML-based resx files. Furthermore, as the site is in constant evolution, we wanted a dynamic solution to avoid losing time exchanging Excel files.

Although there are some commercial applications out there, we decided to make a simple tool to unify a set of resource files.  We wanted to take advantage of the fact that we were building a website: the translator could use a web-based tool to translate the site. This has the advantage of seeing the changes immediately in context, instead of simply translating text locally. We made a class that merges our Default.aspx.resx, Default.aspx.fr-CA.resx, and Default.aspx.da.resx files into a single C# object that is easy to use.  Once the data is in the C# object, it can be modified and saved back to disk later on.

We called this C# object ResXUnified.  The only constructor to ResXUnified needs a path to a resx file; it then finds all resx files related to it in the other languages.  Once you have constructed the object, you can access the data simply by using an indexer:

ResXUnified res = new ResXUnified("Default.aspx.resx");
string val = res["da"]["textbox1.Text"];

In the above code, we access the Danish language file and query the value of the "textbox1.Text" key.  This information can be changed and saved back to the disk:

res[""]["textbox1.Text"] = "Home"; // default language: we can pass an empty string
res["da"]["textbox1.Text"] = "Hjem"; // Danish
res["fr-CA"]["textbox1.Text"] = "Accueil"; // Canadian French

When we call Save(), only the files that were changed will be written to the disk.  ResXUnified simply uses a Dictionary and a List to manage the keys and the languages.  To save, it uses the ResXResourceWriter class provided by the framework, which makes it easy to manipulate resx files. Similarly, we read resx files using the ResXResourceReader class.  Without these two classes, manipulating resx files would be much more complicated.
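To give an idea of what Save() boils down to for a single language file, here is a rough sketch using those two framework classes; the file path and key are illustrative, and error handling is omitted.

```csharp
using System.Collections;
using System.Resources; // ResXResourceReader/Writer ship in System.Windows.Forms.dll

// Read every name/value pair out of the existing resx file.
Hashtable entries = new Hashtable();
using (ResXResourceReader reader = new ResXResourceReader("Default.aspx.da.resx"))
{
    foreach (DictionaryEntry entry in reader)
        entries[entry.Key] = entry.Value;
}

entries["textbox1.Text"] = "Hjem"; // apply the translator's change

// Write the whole set back; the writer emits well-formed resx XML for us.
using (ResXResourceWriter writer = new ResXResourceWriter("Default.aspx.da.resx"))
{
    foreach (DictionaryEntry entry in entries)
        writer.AddResource((string)entry.Key, entry.Value);
}
```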

I won't include more code here since this is a pretty straightforward collection class.

Later, for the website, we made a quick interface in ASP.NET (see the demo here) to display all the resx files in a project. We enable the user to add a language file and translate all the fields from the default language.  Here is an example:


When a string needs to be translated, the textbox background color is different, making it easy for the translator to see what still needs to be translated.

The Generate Local Resource button in Visual Studio generates a lot of "useless" strings in resx files; we don't necessarily want to enter tooltips on every Label in our application. To make the list easier to read, we added an option to hide or show these empty strings.

This tool is pretty basic right now and there are more options that could be easily added.  For example, we could add a list of files that remain to be translated or allow for multiline strings (notice we don't support line-breaks in our strings). We encourage you to modify the code and show off your enhancements!

Final notes: if you try the code out, please remember to give the ASP.NET worker process write access to all the App_LocalResources folders (and the App_GlobalResources folder if you use it). Also, since changing resx files for the default language restarts the web application, we recommend using the tool on a development copy of your website.

Source Code | DEMO


ViewState property code snippet

clock January 23, 2008 12:47 by author EtienneT

One of the things I do frequently is creating a new ViewState-backed property in a UserControl in ASP.NET.  (Yes, we know, ViewState is evil. We store it in the session and use it on low-volume sites.) Just download this code snippet, open Visual Studio, open Tools->Code Snippets Manager, and finally Import.

ViewState Property.snippet (1.28 kb)

You can then use it just by writing the shortcut in the code editor and pressing TAB.


Here is the result:


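The expanded snippet looks roughly like this; the property name and default value are illustrative:

```csharp
public string SelectedCategory
{
    // Return the stored value if present; otherwise fall back to a default
    // without ever writing the default into the ViewState.
    get { return (string)(ViewState["SelectedCategory"] ?? "All"); }
    set { ViewState["SelectedCategory"] = value; }
}
```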
Using the ?? operator is nice since we don't have to do ugly ifs to return a default value and we don't have to assign something in the ViewState either.  We just fill the ViewState when we need it. This avoids bloating up the ViewState with default values.

ViewState Property.snippet (1.28 kb)



The opinions expressed herein are my own personal opinions and do not represent my employer's view in anyway.

© Copyright 2017
