LavaBlast Software Blog

Help your franchise business get to the next level.

How Super Mario Bros Made Me a Better Software Engineer

clock February 28, 2008 00:07 by author JKealey

Over the past month, I've been working hard on our business plan's second iteration. We've accomplished a lot in our first year and things look very promising for the future. Writing a business plan helps an entrepreneur flesh out various concepts, including one's core competency and niche market. We illustrate in great detail what makes LavaBlast such a great software company for franchisors, and the process of writing it all down made me wonder what improved my software engineering skills, personally. Luckily for you, this post is not about me personally, but rather about an element of my childhood that impacted my career.

I don't recall exactly how old I was when I received an 8-bit Nintendo Entertainment System (NES) for Christmas, but I do remember playing it compulsively (balanced with sports like baseball, soccer and hockey!). The first game I owned was Super Mario Bros, but I later obtained its successors to augment my (fairly small) cartridge collection. For the uninitiated, the NES did not incorporate any functionality for saving the player's progress. Countless hours were spent playing and replaying the same levels, allowing me to progress to the end of the game and defeat my arch-nemesis, Bowser.

I enjoyed the time during which I played video games and it irritates me to hear people complaining about how video games will convert their children into violent sore-loser bums. In any case, I'd rather focus on the positive aspects of playing Super Mario Bros and other video games during my childhood. Just like mathematics develops critical thinking and problem solving skills, I strongly believe these video games influenced my personality to the point where they probably defined my career. Disclaimer: I don't play video games that much anymore, but over the last year, I did purchase a Nintendo Wii and Nintendo DS Lite because I love the technology and the company's vision.

Quality #1: Persistence

Some people say I am a patient person, but I would beg to differ. I have trouble standing still intellectually, and although that is a strength in my industry, it isn't the best personality trait :) However, I am VERY persistent. I will attempt to solve a problem over and over until I find a solution. Although I don't experience many painful programming situations on a daily basis, I very rarely give up on a programming problem. If I can't solve the problem immediately, it will keep nagging at me until I find a solution. A direct parallel can be traced to playing the Super Mario Bros series, where the whole game had to be played over and over again to make any progress. (Anyone else remember trying to jump over a certain gap in the floor in the Teenage Mutant Ninja Turtles NES game, only to fall in the gap and have to climb back up again?) The games helped me train my persistence, a tool which any entrepreneur must use every day.

Quality #2: Pattern Recognition

Software engineering is all about pattern recognition: refactoring code to increase reuse, extracting behavioural patterns into the strategy design pattern, creating object inheritance trees, or writing efficient algorithms based on observed patterns. I feel pattern recognition is one of my strengths, since I can easily see commonalities between seemingly different software problems. I believe this skill was refined by playing various video games, because the player must observe the enemy's behaviour in order to succeed. In some games, agility doesn't really matter: it's all about knowing the pattern required to defeat the enemy (to the point where it sometimes becomes frustrating!). The most challenging parts of video games are when the game deliberately trains you to believe you'll be able to stomp an enemy by using a particular technique but, to your surprise, the technique fails miserably. You need to adapt to your environment and think outside the box.

Quality #3: Creativity

Mathematicians and software engineers are creative thinkers, more than the uninitiated might think. I see software as a form of art, because of its numerous qualities that are hard to quantify. Software creators are artists in the sense that regardless of their level of experience, some will manage to hit the high notes while others could try their whole lifetime without attaining the perfect balance of usability, functionality, performance, and maintainability. Playing a wide breadth of video game styles lets you attack different situations with a richer toolkit. I'm not totally sure how putting Sims in a pool and removing the ladder or shooting down hookers in Grand Theft Auto helped me in my day-to-day life, but it was still very entertaining :) The upcoming Spore game is very appealing to me because it combines creativity with pattern recognition, thanks to generative programming. If you haven't heard about this game, I recommend you check it out immediately!

Quality #4: Speedy reactions

At LavaBlast, as in many other software startups, it is critically important that all developers be fast thinkers. Indeed, when your core expertise is production, as opposed to research and development, you need to be able to make wise decisions in a short period of time. Personally, I can adapt to the setting (research environment versus startup environment) but my strength is speedy problem solving, and I consider myself a software "cowboy". By combining my knowledge of how to write reusable and maintainable code with my good judgement of which battles are worth fighting, I can quickly come up with appropriate solutions, given the context. In video games, the player needs to react quickly to avoid incoming obstacles while staying on the offensive to win the game. Of course, the mental challenges we face in our day-to-day lives of developing software are much more complex than what we encounter playing video games (which mainly train physical reaction time), but there is still a correlation between the two tasks.

Quality #5: Thoroughness

What differentiates a good software engineer from a plain vanilla software developer is their concern for quality software, across the board. Software quality is attained through the combined impact of numerous strategies, but testing software as you write it, or after you change it, is critical. For the uninitiated, a popular methodology is to write tests BEFORE you write code. In any case, this talent can also be developed by video games such as the classic Super Mario World (SNES), where the player tries to complete all 96 goals (72 levels) by finding secret exits. Thoroughness requires the player to think outside the typical path (from left to right) and look around for any secret locations (above the ceiling). Finding secret exits is akin to achieving better code coverage by trying atypical scenarios.

Quality #6: Balance

Playing Super Mario Bros as a child helped me develop a certain sense of balance between my various responsibilities (school) and entertainment activities (sports, games, social activities). If you're spending 16 hours a day playing World of Warcraft or performing sexual favors in exchange for WoW money, your mother is right to think that you have a problem. Launching a software startup is a stressful experience, and it helps to be able to wind down with a beer and a soothing video game. A quick 20min run on a simple game before bed can work wonders! Of course, it is no replacement for sports or social activities, but it sure beats dreaming about design patterns.

What's missing? 

In my opinion, there are two major qualities that video games don't impact, and having both is a requirement for becoming a good software engineer. First, video games do not help you interpret other people's needs. Second, video games do not help you communicate efficiently. What does? Experience, experience, experience. Being able to deal with people on a daily basis is mandatory, and the video games I played as a child did not help. However, this statement may no longer be true! Today, many massively multiplayer online games require good collaboration and organizational skills. Furthermore, the new generation of gaming consoles is using the Internet to allow people to play together.

Furthermore, I find games like the new Super Mario Galaxy (Wii) very interesting for future mechanical engineers. Indeed, the game presents a three-dimensional environment in a novel way, training the brain to think differently about three-dimensional space. You have to play the game to understand, but because the camera does not always show Mario from the same angle, you have to get a feel for the environment even when you can't see it (you're on the opposite side of a planet) or are upside down on the southern hemisphere. I can imagine that children and teenagers playing the game today will find it easier to imagine an object from various perspectives when studying physics or mechanical engineering in university.

In conclusion, I admit my whole argument can be invalidated by saying that I played these types of games because I was inherently inclined towards the software engineering profession, but the commonalities are still interesting to review! What are your thoughts on the subject? What do you think drove you to this profession (or drove you away)?

Legal Disclaimer: Did you know that the usage of the title of "software engineer" is much more regulated in Canada than it is in the United States? Although I hold a bachelor's degree in software engineering, and a master's degree in computer science focused on requirements engineering, I can currently only claim the title of "junior engineer", as I recently joined the professional order.

Follow-up: powerrush on DotNetKicks informed me that I'm not the only one who feels games influence software engineers.




Common console commands for the typical ASP.NET developer

clock February 25, 2008 14:19 by author JKealey

As an ASP.NET web developer, there are a few tasks I must perform often that I'm glad to be able to do from the command line. GUIs are great, but some things are simply faster via the command line. Although we do have Cygwin installed to enhance our tool belt with commands like grep, there are a few ASP.NET-related commands that I wanted to share with you today. Some of these are more useful on Windows Server 2003 (because you can run multiple worker processes), but I hope you will find them useful.

1) Restarting IIS

The iisreset command can be used to restart IIS easily from the command line. Self-explanatory.

Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted
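From memory, iisreset also accepts a few useful switches: /status reports whether the services are running without touching them, and /noforce requests a graceful stop instead of forcefully terminating the services (run iisreset /? to confirm the exact options on your version).

iisreset /status
iisreset /noforce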

2) Listing all ASP.NET worker processes

You can use tasklist to get the running worker processes.

tasklist /FI "IMAGENAME eq w3wp.exe"

Image Name                     PID Session Name        Session#    Mem Usage
========================= ======== ================ =========== ============
w3wp.exe                    129504 Console                    0     40,728 K

You can also use the following command if you have Cygwin installed (it's easier to remember):

tasklist | grep w3wp.exe

w3wp.exe 4456 Console 0 54,004 K
w3wp.exe 5144 Console 0 101,736 K
w3wp.exe 2912 Console 0 108,684 K
w3wp.exe 3212 Console 0 136,060 K
w3wp.exe 852 Console 0 133,616 K
w3wp.exe 352 Console 0 6,228 K
w3wp.exe 1556 Console 0 155,264 K
w3wp.exe 3480 Console 0 6,272 K

3) Associating a process ID with a particular application pool

Should you want to monitor memory usage for a particular worker process, the results shown above are not very useful. Use the iisapp command to map each process ID to its application pool:

W3WP.exe PID: 4456 AppPoolId: .NET 1.1
W3WP.exe PID: 5144 AppPoolId: CustomerA
W3WP.exe PID: 2912 AppPoolId: CustomerB
W3WP.exe PID: 3212 AppPoolId: Blog
W3WP.exe PID: 852 AppPoolId: LavaBlast
W3WP.exe PID: 352 AppPoolId: CustomerC
W3WP.exe PID: 1556 AppPoolId: CustomerD
W3WP.exe PID: 3480 AppPoolId: DefaultAppPool

By using iisapp in conjunction with tasklist, you can figure out which process is your target for taskkill.
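For example, given the listing above, where the Blog application pool runs under PID 3212, you could kill just that pool's worker process (IIS will spawn a fresh one on the next request):

taskkill /PID 3212 /F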

4) Creating a virtual directory

When new developers check out your code for the first time (or when you upgrade your machine), you don't want to spend hours configuring IIS. You could back up the metabase and restore it later on, but we simply use iisvdir. Assuming your root IIS has good default configuration settings for your project, you can create a virtual directory like so:

iisvdir /create "Default Web Site" franchiseblast c:\work\lavablast\franchiseblast\
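If I recall correctly, the same tool can also list what is already configured, which is handy for double-checking your work:

iisvdir /query "Default Web Site"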


5) Finding which folder contains the desired log files

IIS saves its log files in %windir%\System32\LogFiles, but it creates a different subdirectory for each website. Use iisweb /query to figure out which folder to check:

Connecting to server ...Done.
Site Name (Metabase Path) Status IP Port Host
==============================================================================

Default Web Site (W3SVC/1) STARTED ALL 80 N/A
port85 (W3SVC/858114812) STARTED ALL 85 N/A
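The number in the metabase path is the site ID, and each site logs to a subdirectory named W3SVC followed by that ID. For the port85 site above, that means:

cd %windir%\System32\LogFiles\W3SVC858114812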

6) Many more commands…

Take a peek at the following articles for more command-line tools that might be useful in your context:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/b8721f32-696b-4439-9140-7061933afa4b.mspx?mfr=true

http://www.tech-faq.com/using-iis-command-line-utilities-to-manage-iis.shtml

Conclusion

There are numerous command line tools distributed by Microsoft that help you manage your ASP.NET website; the commands listed here are obviously just the tip of the iceberg! Although some developers know about these commands because they had to memorize them for a certification exam, many are not even aware of their existence. Personally, I feel that if you write a single script that sets up IIS as you need it for development, you'll save time whenever you bring on a new developer or re-install your operating system. Script it once and reap the rewards down the road.
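As a minimal illustration (the site names and paths are hypothetical; adapt them to your own project), such a setup script could be as simple as:

@echo off
rem One-time IIS setup for a new developer machine (hypothetical paths).
iisvdir /create "Default Web Site" franchiseblast c:\work\lavablast\franchiseblast\
iisvdir /create "Default Web Site" pos c:\work\lavablast\pos\
rem Restart IIS for good measure.
iisreset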




Manage your ASP.NET Web.config Files using NAnt

clock February 19, 2008 13:24 by author JKealey

Nothing is more important than the software engineers in a software company. I just finished re-reading Joel Spolsky's Smart and Gets Things Done and it inspired this post. Not only do I admire his writing style, I share Joel's vision of how a software company should be run. Pampering your programmers is the best decision a manager can make, especially when you've built a team that can hit the high notes.

One of Joel's key elements to running a successful software company is to automate your build process up to the point where it only takes one operation. This minimizes the chance of error while letting your developers devote their grey matter to something more complex than copying files. Although they might use the extra time to shop around for student loan consolidation plans (a practice called reverse telecommuting), in most cases they'll return to writing fresh code or cleaning out existing bugs.

Today's post is about one of the little things that made my life much easier as a developer: using NAnt to manage our software product line. I've come to realize that we encounter these little "sparks" every day, but we never talk about them. Sure, we've produced a number of complex software products and they are fun to describe, but I personally enjoy talking about the little things that save time, just like Henry Petroski verbosely describes common items in his books. Fortunately for you, I'll keep the story short, unlike his description of the evolution of the paper clip in The Evolution of Useful Things (which is still an interesting read, by the way).

Background

We develop lots of ASP.NET websites. Our architecture includes database schemas and business objects shared amongst multiple projects, and some common utility libraries. Instead of always inheriting from System.Web.UI.Page and System.Web.UI.UserControl, we have an object-oriented inheritance tree, as is good software engineering practice. We even have a shared user control library that gets copied over after a successful build. Furthermore, we use ASP.NET master pages and ASP.NET themes to structure our designs. As opposed to what you see in textbooks, where themes can be chosen by the user according to their preferences (oh yes, please show me the pink background with fluffy kittens), we use themes to represent different franchise brands.

My point here is that reusability is key to our solution. We build elements that we can use not only on the website but also in FranchiseBlast, the interactive kiosk, and the point of sale. However, the more you re-use, the more things get complicated. Indeed, the overhead caused by the added configurability we build into our reusable components is non-negligible. We're always on the lookout for new ways to keep things simple, while still reaping the benefits of reuse. We use the Strategy Design Pattern to encapsulate the behavioural changes in our systems and put our various configuration settings inside our Web.config file.
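As a rough illustration of the idea (the interface and type names here are hypothetical, not our actual code), an AppSettings entry in Web.config can name the concrete strategy to instantiate for a given brand:

// Hypothetical sketch: picking a strategy implementation named in Web.config.
// <add key="PricingStrategy" value="MyCompany.Pricing.FranchisePricing, MyCompany"/>
// Requires a reference to System.Configuration.dll.
using System;
using System.Configuration;

public interface IPricingStrategy
{
    decimal GetPrice(decimal basePrice);
}

public static class StrategyFactory
{
    public static IPricingStrategy Create()
    {
        // Read the assembly-qualified type name and instantiate it via reflection.
        string typeName = ConfigurationManager.AppSettings["PricingStrategy"];
        return (IPricingStrategy)Activator.CreateInstance(Type.GetType(typeName, true));
    }
}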

Hurdle #1: Different developers need different web.config files

Our configuration files have a few settings that we want to change on a per-user basis:

- Where should we email exception notifications?

- Database names & file paths

- Google API Keys

How do we manage this? If we put our Web.config file under source control, we'll end up with various conflicts when the developers change the configuration file to suit their tastes. I don't know about you, but I have better things to do than start memorizing API keys or digits of PI.

Solution #1

Our first solution wasn’t fantastic, but it was sufficient for a while. We simply removed the Web.config from source control and created new files, one for each developer (Web.config.jkealey, Web.config.etremblay, etc.) and one for the deployment server (Web.config.server1). When a change was to be made, we whipped out WinMerge and changed all the files. You can quickly understand that this process does not scale well, but it was sufficient for small projects with 2 to 3 developers.

Hurdle #2: Scaling to more than a couple machines

We deploy our point of sale software and kiosks via Subversion. It might be fun to use WinMerge to compare a couple of Web.config files, but when you've got a hundred web applications to update to a new version by comparing Web.config files, you've got a problem. Doing this by hand wasn't very difficult, but it was error-prone and time consuming. I don't know if you have seen the Web.config additions that ASP.NET AJAX brought to the table, but upgrading from a release candidate of Atlas to the full release of ASP.NET AJAX was painful (we're not talking about half a dozen settings in the AppSettings section).

Solution #2

1) Create a template Web.format.config that contains the general Web.config format, with certain placeholders for variables that vary on a per-developer or per-machine basis.

2) Create a web.default.properties that contains the default settings for the web.config

3) Create a web.developername.properties for each developer that simply overrides the default settings with other values when needed.

4) Write a script to replace the placeholders in the Web.format.config and generate your Web.config.developername files for you.

We implemented this strategy using NAnt. Our script does a bit more work because we’ve got interrelated projects, but I will describe the base idea here.

Examples:

Here is a portion of our web.format.config file:

[...]
<appSettings>
    <add key="GoogleMapsAPIKey" value="${GoogleMapsAPIKey}"/>
</appSettings>
<system.web>
   <healthMonitoring enabled="${healthMonitoring.enabled}">
       <providers>
           <clear/>
           <add type="System.Web.Management.SimpleMailWebEventProvider"  name="EmailWebEventProvider"
               from="${bugs_from_email}"
               to="${bugs_to_email}"
               subjectPrefix="${email_prefix}: Exception occurred"
               bodyHeader="!!! HEALTH MONITORING WARNING!!!"
               bodyFooter="Brought to you by LavaBlast Software Inc..."
               buffer="false" />
       </providers>
   </healthMonitoring>
</system.web>
[...]

Property files

Our default settings look something like the following:

<project>
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKdddd8YoBaAeaBR0a45XuIX8vaM2H2dddddQpMmazRQ30ddddPdcuXGuhMT2rGPlC0ddd" />
    <property name="healthMonitoring.enabled" value="true"/>
    <property name="email_prefix" value="LavaBlast"/>
    <property name="bugs_to _email" value="[email protected]" />
    <property name="bugs_from_email" value="[email protected]" />
</project>


Our per-developer files include the default settings, and override a few:

<project>
    <!-- load defaults -->
    <include buildfile="web.default.properties"   failonerror="true" />   
        
    <!-- override settings -->
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKeeee8YoBaAeaBR0a45XuIX8vaM2H2eeeeeQpMmazRQ30eeeePecuXGuhMT2rGPlC0eee"/>
    <property name="bugs_to_email" value="[email protected]" />
</project>

The NAnt script

We wrote a NAnt script that runs another NAnt instance to perform the property replacements; the core code comes from Captain Load Test. It is a bit slow because we have to re-invoke NAnt, but it doesn't appear you can dynamically include a properties file at runtime. Feel free to comment if you find a way to make it more efficient. We don't keep the generated files under source control, as we only version the property files.

<project name="generate configs" default="generate ">
    <property name="destinationfile"   value="web.config" overwrite="false" />  
    <property name="propertyfile"  value="invalid.file" overwrite="false" />  
    <property name="sourcefile"   value="web.format.config" overwrite="false" />
 
    <include buildfile="${propertyfile}"   failonerror="false"   unless="${string::contains(propertyfile, 'invalid.file')}" />   
    
    <target name="configMerge">    
        <copy file="${sourcefile}"  tofile="${destinationfile}" overwrite="true">
            <filterchain>
                <expandproperties />
            </filterchain>
        </copy>
    </target>
 
    <target name="generate ">
        <property name="destinationfile" value="web.config.${machine}" overwrite="true"/>
        <property name="propertyfile" value="web.${machine}.properties" overwrite="true"/> 
        <property name="sourcefile" value="web.format.config" overwrite="true"/>
        <echo message="Generating: ${destinationfile}"/>
        <!--<call target="configMerge"/>-->
        <exec program="nant">
            <arg value="configMerge"/>
            <arg value="-nologo+"/>
            <arg value="-q"/>
            <arg value="-D:sourcefile=${sourcefile}"/>
            <arg value="-D:propertyfile=${propertyfile}"/>
            <arg value="-D:destinationfile=${destinationfile}"/>
        </exec>
    </target>    
</project>
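With the script saved as the default build file, generating a configuration file for a given machine or developer boils down to one command (jkealey is just an example value):

nant generate -D:machine=jkealey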

Hurdle #3: Software Product Lines

Up to now, we've talked about taking one project and making it run on a number of machines, depending on a few preferences. However, we've taken it one step further because our web applications are part of a software product line. Indeed, we have different themes for different brands. Different companies have different configuration settings and site map files. Therefore, we needed to be able to generate configuration files for each brand AND for each machine. This also greatly increases the number of configuration files we need.

Solution #3

It wasn’t very difficult to expand to this new level of greatness thanks to the script presented in hurdle #2. We basically have default configuration files for each project (themes, sitemap, name, locale, etc) in addition to the files we’ve shown above. We simply have to load two configuration files instead of one.

We even wrote a batch file (SwitchToBrandA.bat) that generates the property file for the current machine only (via the machine name environment variable) and it replaces the current Web.config. By running one batch file, we switch to the appropriate product for our current machine.
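The batch file itself can stay tiny. A minimal sketch, assuming the NAnt script above and a hypothetical brand property file, might look like:

@echo off
rem SwitchToBrandA.bat (sketch): regenerate and swap in the Web.config for brand A.
rem %COMPUTERNAME% is the machine name environment variable mentioned above;
rem the brand property is hypothetical and depends on your property file layout.
nant generate -D:machine=%COMPUTERNAME% -D:brand=brandA
copy /Y web.config.%COMPUTERNAME% Web.config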

Future work

Currently, it takes a couple of minutes to create a new brand or add a new developer. It doesn't happen often enough to make it worthwhile for us to augment the infrastructure, but it is a foreseeable enhancement for the future. I guess another future work item would be to hire someone who is an expert in build automation, test automation, and automatic data processing! :) These are skills they don't teach in university, but should!




I, for one, welcome our new revision control overlords!

clock February 11, 2008 10:35 by author JKealey
Be a lazy developer!  You know you deserve it.

I've been developing websites professionally for almost nine years now and although I still party like it's 1999, my productivity has increased greatly thanks to better tools and technologies. Recent dealings with another firm's IT department reminded me that although we like to think all developers use source control (CVS, Subversion, etc.), this is not the case. There are lots of corporate developers out there who don't follow the industry's best practices. This post is not about using version control tools for source code... it's about re-using the same tools to deploy websites, instead of FTP. We assume you know what source control is and are interested in using it in novel ways.

A few days ago, we were visiting the facilities of a company which provides services of interest to one of our franchisor customers. As our specialty is the integration of external systems with FranchiseBlast, our franchise management tool, we wanted to know how the data would be able to move back and forth. One of the sophisticated options available to us was the use of FTP to transfer flat files in a specific file format, not that there's anything wrong with that! Indeed, when your core business doesn't require lots of integration with your customers, there is no need to re-invent your solution every three years with newer technologies. You can afford to stick with the same working solution for a long period of time! (We will obviously continue to push web services, as it is much easier to write code against a web service than to pick up flat files from an FTP server!)

Integration and automation reduce support costs for the franchisor

We're always looking at pushing the envelope and we know that software integration, and especially automation, is the key to cutting down labor costs. If your business processes include lots of manual labor, we feel it is worthwhile to take the time to investigate replacing a few steps with software-based solutions (focus on the 20% of steps that make you lose 80% of your time). Wouldn't you rather play with your new puppy than copy-paste data from one place to another? A typical example of the integration we built into FranchiseBlast and the point of sale is the automatic creation of products in stores, once they are created on FranchiseBlast. Our franchisees save lots of time not having to create their own products, and we avoid the situation where an incorrect UPC is entered only to be discovered months later.

Furthermore, although your mental picture of a franchisor might boil down to someone lounging in their satin pyjamas in front of the fireplace, sipping some piping hot cocoa, while waiting for the royalty check to come in, this is very far from the truth. Supporting the franchise is a major time consumer but if you can manage to reduce and/or simplify all the work done inside the store, you can greatly reduce time spent supporting the stores.

Enough about the franchise tangent; talk about web development!

Integration and automation do not only apply to your customers: any serious web development firm still using FTP to deploy websites should consider the following questions. By a serious firm, I mean you've got more than your mother's recipe website to deploy and you build dynamic web applications that change regularly, not a static business card for your trucker uncle.

  • Are you re-uploading the whole site every time you make an upgrade?
  • Are you selecting the changed files manually or telling your FTP client to only overwrite new files?
  • Is someone else also uploading files and you’re never sure what changed?
  • Do you care about being able to deploy frequently without wasting your time moving files around?
  • Do you have to re-upload large files (DLL, images) even if you know you only changed a few bytes?
  • Did you ever have to revert your website back to a previous version when a bug was discovered?
  • Do you upload to a staging area for client approval, then deploy to the production server?

If you answered yes to one of these questions, you're probably wasting your precious time and (now cheap) bandwidth. Yes, I know it is fun to read comics while you're uploading a site, but you're not using technology to its full potential.

Source control technology has been around for decades and hopefully you’ve been using it to collaborate with other developers or designers when creating websites. Even if you work alone, there are several advantages to using CVS or Subversion, for example. You may be wondering why I am talking about source control in the context of web deployment but I hope the astute reader will know exactly where I’m headed.

Why not deploy your websites using source control tools such as Subversion?

There are probably lots of you out there that already do this but there may be some people that never thought outside the box. By sharing this with you today, I hope to help at least one person cut down an hour per week spent in deployment. We’ve experienced the benefits and wanted to share them with you. 

We prefer Subversion over CVS for multiple reasons, but one that is of particular interest here is the fact that it can do binary diffs. If you recompile your ASP.NET website, you’ll generate new DLL files that are very similar to the previous ones. The same thing happens when you’re changing images. Thanks to this Subversion feature, you only have to upload the bytes that have changed inside your files… as opposed to the whole files!  Furthermore, as with most source control management tools, you can copy a file from one project inside your repository to dozens of others, without taking up additional space on the disk.

You can create a separate repository for your deployment files (separating them from your source code) and checkout the project on the production server. Later on, when you commit your changes, you can simply perform an update on the production server. You could even automate the update process using post-commit actions.
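On Windows, Subversion runs hooks\post-commit.bat from the repository folder after each commit; a minimal sketch (the staging path is hypothetical) could look like this:

rem post-commit.bat (sketch): refresh the staging checkout after every commit.
rem Subversion passes the repository path as %1 and the new revision as %2.
svn update C:\inetpub\wwwroot\staging --non-interactive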

There are numerous advantages to deploying using a tool such as Subversion:

  • You only commit what has changed and the tool tracks these changes for you.
  • Only changes are transferred, not the whole site.
  • If someone other than you fools around with the deployed site, you can immediately see what was changed.
  • You can easily revert back to an older version of the site.

You should automate the deployment to the staging area (using continuous integration or post-commit scripts) but keep deployment to the production server manual. Automatic deployment to your staging area means you can commit at 4:50PM on a Friday afternoon, wait for the successful build confirmation, and head off to play hacky sack with your high school buddies without having to manually redeploy the development machine. At 5AM on Saturday morning, when your early-riser client gets up and has a few minutes to spare before reading the paper, he can load up the development server and play with the latest build.

What about my database?

The concept of database migrations is very useful in this context; if you use tools that have database migrations built in (such as Ruby on Rails), then you are in luck. Otherwise, it gets more complicated. We're waiting for database migrations to be supported by SubSonic before investing much effort in this area (although I don't recall ever having to revert a production server back to a previous version). For our business, this is a must-have feature because it allows us to revert a store's point of sale to a stable build should the software explode in the middle of a busy Saturday. Even better, should a fire destroy the store's computers, we can reload that store's customized version within minutes. (We also do nightly off-site data backups of the sales information.)

In any case, we recommend you take a peek at the series of posts made by K. Scott Allen, referenced in this article by Jeff Atwood.

How can I save more time?

The answer is simple: by scripting the copying of files (locally) from your development copy to the checked-out folder of the production repository. This can be as simple as using the built-in deployment tools available in your IDE (such as VS.NET's deployment functionality) or writing a script that copies all files of a particular extension from one place to the other. Eventually, you'll need to adapt your script to your particular needs, if you're wasting too much time copying large files that never change, for example. This step depends on your project and its environment. I will describe in a future post how we use NAnt to manage our software product line. Kudos to Jean-Philippe Daigle for helping us out in the early days.

Concrete examples

LavaBlast deploys its point of sale application via Subversion; every computer we manage has its own path in the deployment repository. This allows for per-store customizability and is not as redundant as it may sound, because of Subversion copies. Furthermore, when the POS communicates with FranchiseBlast, we track exactly which version of the software is running in that particular store (via the Subversion revision number). We also track a revision number for the database schema. Having this bird's eye view of the software deployed in the franchise lets us easily schedule updates to subsets of the stores. At the point where we are now, we could easily write code that would signal to the store that it should upgrade itself to a certain revision at a certain time and date. Ultimately, instead of spending my time copying files, I am available to write blog posts like this one!

Conclusion

By moving away from FTP, we’ve considerably cut down the time it takes to deploy a website. We invested time in this infrastructure early on, allowing us to concentrate on design, coding, and testing as opposed to deployment. Of course, FTP still has many uses outside of the context we describe here! FTP has been around for a long time and will not disappear any time soon!



RESX file Web Editor

clock February 7, 2008 08:35 by author EtienneT

Source Code | DEMO

If you have a multi-language site, you have probably already worked with .resx files.  Resx files are resource files that can contain strings, images, sounds... pretty much anything.  However, in ASP.NET (at least in our typical scenarios), resource files mainly contain strings to be translated in multiple languages.  Those needing a refresher course on ASP.NET localization should take a peek at this article: ASP.NET Localization. Let us take a typical ASP.NET application as an example.  When you generate a resource file for a file named Default.aspx, VS.NET generates a new folder named App_LocalResources (if it doesn't exist) and it creates a new file named Default.aspx.resx in this folder.

[Screenshot: the Generate Local Resource option in Visual Studio's design view]

This option will only be visible in the design view of an aspx or ascx file.  

Default.aspx.resx will contain all strings that can be localized from Default.aspx.  Default.aspx.resx is the default resource file for Default.aspx.  It will contain the default language strings, in our case English.  Should you need to offer the same application in more than one language, you will need to create locale-specific resource files with a similar filename. For example, Default.aspx.fr-CA.resx would be a resource file for Canadian French. The logic to retrieve a string from the appropriate resource file is built into the .NET framework (it depends on the current thread's culture).
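For the curious, the culture can also be forced explicitly on a per-request basis; a minimal sketch of an override in a page's codebehind (ASP.NET 2.0 and later) might look like this:

protected override void InitializeCulture()
{
    // Force Canadian French, so Default.aspx.fr-CA.resx is used for this request.
    System.Threading.Thread.CurrentThread.CurrentUICulture =
        new System.Globalization.CultureInfo("fr-CA");
    base.InitializeCulture();
}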

Those who have built a dynamic web site supporting multiple languages know that managing resx files is a burden, especially when the application changes.  TeddyMountain.com is a web site we recently created which provides English, French, and Danish versions.  We use a CMS for most of the text, but some dynamic pages, like the store locator, contain text which needs to be in resource files.  We collaborated with a resident of Denmark to translate the website; the translator is not a programmer and could not be trusted with the XML-based resx files. Furthermore, as the site is in constant evolution, we wanted a dynamic solution to avoid losing time exchanging Excel files.

Although there are some commercial applications out there, we decided to make a simple tool to unify a set of resource files.  We wanted to take advantage of the fact that we were building a website: we could have the translator use a web-based tool to translate the website itself. This has the advantage of seeing the changes immediately in context, instead of simply translating text locally. We made a class that merges our Default.aspx.resx, Default.aspx.fr-CA.resx, and Default.aspx.da.resx files into a single C# object that is easy to use.  Once the data is in the C# object, it can be modified and saved back to disk later on.

We called this C# object ResXUnified.  The only constructor to ResXUnified needs a path to a resx file; it then finds all related resx files for the other languages.  Once you have constructed the object, you can access the data simply by using an indexer:

ResXUnified res = new ResXUnified("Default.aspx.resx");
string val = res["da"]["textbox1.Text"];

In the above code, we access the Danish language file and query the value of the "textbox1.Text" key.  This information can be changed and saved back to the disk:

res[""]["textbox1.Text"] = "Home"; // Default language we can pass an empty string
res["da"]["textbox1.Text"] = "Hjem"; // Danish
res["fr-CA"]["textbox1.Text"] = "Acceuil"; // French
res.Save();

When we call Save(), only the files that were changed will be written to the disk.  ResXUnified simply uses a Dictionary and a List to manage the keys and the languages.  To save, it uses the ResXResourceWriter class provided by the framework, which makes it easy to manipulate resx files. Similarly, we read resx files using the ResXResourceReader class.  Without these two classes, manipulating resx files would be much more complicated.

I won't include more code here since this is a pretty straightforward collection class.
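That said, the underlying plumbing is simple enough to sketch. The following (hypothetical file name, and assuming a reference to System.Windows.Forms.dll, where ResXResourceReader and ResXResourceWriter are defined) shows a full read-modify-write cycle:

using System.Collections;
using System.Collections.Generic;
using System.Resources; // ResXResourceReader/Writer live here

class ResXRoundTrip
{
    static void Main()
    {
        // Load every key/value pair from the Danish resource file.
        Dictionary<string, object> entries = new Dictionary<string, object>();
        using (ResXResourceReader reader = new ResXResourceReader("Default.aspx.da.resx"))
        {
            foreach (DictionaryEntry entry in reader)
                entries[(string)entry.Key] = entry.Value;
        }

        // Change a value, then write the whole set back to disk.
        entries["textbox1.Text"] = "Hjem";
        using (ResXResourceWriter writer = new ResXResourceWriter("Default.aspx.da.resx"))
        {
            foreach (KeyValuePair<string, object> pair in entries)
                writer.AddResource(pair.Key, pair.Value);
            writer.Generate();
        }
    }
}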

Later, for the TeddyMountain.com website, we made a quick interface in ASP.NET (see the demo here) to display all the resx files in a project. We enable the user to add a language file and translate all the fields from the default language.  Here is an example:

[Screenshot: the web-based translation interface]

When a string still needs to be translated, the textbox background color is different, making it easy for the translator to see what remains to be done.

The Generate Local Resource command in Visual Studio generates a lot of "useless" strings in resx files; we don't necessarily want to enter tooltips on every Label in our application. To make the files easier to read, we added an option to hide or show these empty strings.

This tool is pretty basic right now and there are more options that could be easily added.  For example, we could add a list of files that remain to be translated or allow for multiline strings (notice we don't support line-breaks in our strings). We encourage you to modify the code and show off your enhancements!

Final notes: If you try the code out, please remember to give write access to the ASP.NET worker process in all the App_LocalResources folders (and the App_GlobalResources folder if you use it). Also, since changing resx files for the default language restarts the web application, it is recommended you use the tool on a development copy of your website.  

Source Code | DEMO



