LavaBlast Software Blog

Help your franchise business get to the next level.

Manage your ASP.NET Web.config Files using NAnt

February 19, 2008 13:24 by author JKealey

Nothing is more important than the software engineers in a software company. I just finished re-reading Joel Spolsky’s Smart and Gets Things Done and it inspired this post. Not only do I admire his writing style, but I also share Joel’s vision of how a software company should be run. Pampering your programmers is the best decision a manager can make, especially when you’ve built a team that can hit the high notes.

One of Joel’s key elements to running a successful software company is to automate your build process to the point where it only takes one operation. This minimizes the chance of error while letting your developers apply their grey matter to something more complex than copying files. Although they might use the extra time to shop around for student loan consolidation plans (a practice called reverse telecommuting), in most cases they’ll return to writing fresh code or cleaning out existing bugs.

Today’s post is about one of the little things that made my life much easier as a developer: using NAnt to manage our software product line. I’ve come to realize that we encounter these little “sparks” every day, but we never talk about them. Sure, we’ve produced a number of complex software products and they are fun to describe, but I personally enjoy talking about the little things that save time, much like Henry Petroski verbosely describes common items in his books. Fortunately for you, I’ll keep the story short, unlike his description of the evolution of the paper clip in The Evolution of Useful Things (which is still an interesting read, by the way).

Background

We develop lots of ASP.NET websites. Our architecture includes database schemas and business objects shared amongst multiple projects, as well as some common utility libraries. Instead of always inheriting directly from System.Web.UI.Page and System.Web.UI.UserControl, we have an object-oriented inheritance tree, as is good software engineering practice. We even have a shared user control library that gets copied over after a successful build. Furthermore, we use ASP.NET master pages and ASP.NET themes to structure our designs. As opposed to what you see in textbooks, where themes can be chosen by the user according to their preferences (oh yes, please show me the pink background with fluffy kittens), we use themes to represent different franchise brands.

My point here is that reusability is key to our solution. We build elements that we can use not only on the website but also in FranchiseBlast, the interactive kiosk, and the point of sale. However, the more you re-use, the more things get complicated. Indeed, the overhead caused by the added configurability we build into our reusable components is non-negligible. We're always on the lookout for new ways to keep things simple, while still reaping the benefits of reuse. We use the Strategy Design Pattern to encapsulate the behavioural changes in our systems and put our various configuration settings inside our Web.config file.
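To illustrate the idea, here is a minimal, hypothetical sketch (the interface, class, and setting names are made up for this post and are not from our products) of selecting a strategy implementation from an appSetting, so each site can swap behaviour through its Web.config:

using System;
using System.Configuration;

// Hypothetical sketch: pick a strategy implementation from Web.config so each
// brand/site can swap behaviour without recompiling. Names are illustrative only.
public interface ITaxStrategy
{
    decimal ComputeTax(decimal subtotal);
}

public static class TaxStrategyFactory
{
    public static ITaxStrategy Create()
    {
        // e.g. <add key="TaxStrategyType" value="MyApp.CanadianTaxStrategy, MyApp" />
        string typeName = ConfigurationManager.AppSettings["TaxStrategyType"];
        return (ITaxStrategy)Activator.CreateInstance(Type.GetType(typeName));
    }
}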

Hurdle #1: Different developers need different web.config files

Our configuration files have a few settings that we want to change on a per-user basis:

- Where should we email exception notifications?

- Database names & file paths

- Google API Keys

How do we manage this? If we put our Web.config file under source control, we'll end up with various conflicts when the developers change the configuration file to suit their tastes. I don't know about you, but I have better things to do than start memorizing API keys or digits of PI.

Solution #1

Our first solution wasn’t fantastic, but it was sufficient for a while. We simply removed the Web.config from source control and created new files, one for each developer (Web.config.jkealey, Web.config.etremblay, etc.) and one for the deployment server (Web.config.server1). When a change was to be made, we whipped out WinMerge and changed all the files. You can quickly understand that this process does not scale well, but it was sufficient for small projects with 2 to 3 developers.

Hurdle #2: Scaling to more than a couple machines

We deploy our point of sale software and kiosks via Subversion. It might be fun to use WinMerge to compare a couple of Web.config files, but when you’ve got a hundred web applications to update to a new version by comparing Web.config files, you’ve got a problem. Doing this by hand wasn’t very difficult, but it was error-prone and time consuming. I don’t know if you have seen the Web.config additions that ASP.NET AJAX brought to the table, but upgrading from a release candidate of Atlas to the full release of ASP.NET AJAX was painful (we’re not talking about half a dozen settings in the appSettings section).

Solution #2

1) Create a template Web.format.config that contains the general Web.config format, with certain placeholders for variables that vary on a per-developer or per-machine basis.

2) Create a web.default.properties that contains the default settings for the web.config

3) Create a web.developername.properties for each developer that simply overrides the default settings with other values when needed.

4) Write a script to replace the placeholders in the Web.format.config and generate your Web.config.developername files for you.

We implemented this strategy using NAnt. Our script does a bit more work because we’ve got interrelated projects, but I will describe the base idea here.

Examples:

Here is a portion of our web.format.config file:

[...]
<appSettings>
    <add key="GoogleMapsAPIKey" value="${GoogleMapsAPIKey}"/>
</appSettings>
<system.web>
   <healthMonitoring enabled="${healthMonitoring.enabled}">
       <providers>
           <clear/>
           <add type="System.Web.Management.SimpleMailWebEventProvider"  name="EmailWebEventProvider"
               from="${bugs_from_email}"
               to="${bugs_to_email}"
               subjectPrefix="${email_prefix}: Exception occurred"
               bodyHeader="!!! HEALTH MONITORING WARNING!!!"
               bodyFooter="Brought to you by LavaBlast Software Inc..."
               buffer="false" />
       </providers>
   </healthMonitoring>
</system.web>
[...]

Property files

Our default settings look something like the following:

<project>
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKdddd8YoBaAeaBR0a45XuIX8vaM2H2dddddQpMmazRQ30ddddPdcuXGuhMT2rGPlC0ddd" />
    <property name="healthMonitoring.enabled" value="true"/>
    <property name="email_prefix" value="LavaBlast"/>
    <property name="bugs_to_email" value="[email protected]" />
    <property name="bugs_from_email" value="[email protected]" />
</project>

 

Our per-developer files include the default settings, and override a few:

<project>
    <!-- load defaults -->
    <include buildfile="web.default.properties"   failonerror="true" />   
        
    <!-- override settings -->
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKeeee8YoBaAeaBR0a45XuIX8vaM2H2eeeeeQpMmazRQ30eeeePecuXGuhMT2rGPlC0eee"/>
    <property name="bugs_to_email" value="[email protected]" />
</project>

The NAnt script

We wrote a NAnt script that runs another NAnt instance to perform the property replacements; the core code comes from Captain Load Test. It is a bit slow because we have to re-invoke NAnt, but it doesn’t appear that you can dynamically include a properties file at runtime. Feel free to comment if you find a way to make it more efficient. We don’t keep the generated files under source control; we only version the property files.

<project name="generate configs" default="generate">
    <property name="destinationfile"   value="web.config" overwrite="false" />  
    <property name="propertyfile"  value="invalid.file" overwrite="false" />  
    <property name="sourcefile"   value="web.format.config" overwrite="false" />
 
    <include buildfile="${propertyfile}"   failonerror="false"   unless="${string::contains(propertyfile, 'invalid.file')}" />   
    
    <target name="configMerge">    
        <copy file="${sourcefile}"  tofile="${destinationfile}" overwrite="true">
            <filterchain>
                <expandproperties />
            </filterchain>
        </copy>
    </target>
 
    <target name="generate">
        <property name="destinationfile" value="web.config.${machine}" overwrite="true"/>
        <property name="propertyfile" value="web.${machine}.properties" overwrite="true"/> 
        <property name="sourcefile" value="web.format.config" overwrite="true"/>
        <echo message="Generating: ${destinationfile}"/>
        <!--<call target="configMerge"/>-->
        <exec program="nant">
            <arg value="configMerge"/>
            <arg value="-nologo+"/>
            <arg value="-q"/>
            <arg value="-D:sourcefile=${sourcefile}"/>
            <arg value="-D:propertyfile=${propertyfile}"/>
            <arg value="-D:destinationfile=${destinationfile}"/>
        </exec>
    </target>    
</project>
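To generate the configuration for a given developer or machine, the script is invoked with the machine property set, for example:

nant -D:machine=jkealey

This runs the default generate target and produces web.config.jkealey from web.format.config and web.jkealey.properties.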

Hurdle #3: Software Product Lines

Up to now, we’ve talked about taking one project and making it run on a number of machines, depending on a few preferences. However, we’ve taken it one step further because our web applications are part of a software product line. Indeed, we have different themes for different brands. Different companies have different configuration settings and site map files. Therefore, we needed to be able to generate configuration files for each brand AND for each machine. This also greatly increases the number of configuration files we need.

Solution #3

It wasn’t very difficult to expand to this new level of greatness thanks to the script presented in hurdle #2. We basically have default configuration files for each project (themes, sitemap, name, locale, etc) in addition to the files we’ve shown above. We simply have to load two configuration files instead of one.

We even wrote a batch file (SwitchToBrandA.bat) that generates the property file for the current machine only (via the machine name environment variable) and replaces the current Web.config. By running one batch file, we switch to the appropriate product for our current machine.
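As a rough sketch, SwitchToBrandA.bat boils down to something like the following (the brand property and exact file names are assumptions; the simplified script shown above only uses the machine property):

@echo off
REM Regenerate the configuration for this machine (identified by its machine name)
REM using brand A's settings, then replace the active Web.config with the result.
REM The -D:brand property is an assumption; it is not part of the simplified script above.
nant -D:machine=%COMPUTERNAME% -D:brand=brandA
copy /Y web.config.%COMPUTERNAME% Web.config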

Future work

Currently, it takes a couple of minutes to create a new brand or add a new developer. It doesn’t happen often enough to make it worthwhile for us to augment the infrastructure, but it is a foreseeable enhancement for the future. I guess another future work item would actually be to hire someone who is an expert in build automation, test automation, and automatic data processing! :) These are skills they don't teach in university, but should!




RESX file Web Editor

February 7, 2008 08:35 by author EtienneT

Source Code | DEMO

If you have a multi-language site, you have probably already worked with .resx files.  Resx files are resource files that can contain strings, images, sounds... pretty much anything.  However, in ASP.NET (at least in our typical scenarios), resource files mainly contain strings to be translated in multiple languages.  Those needing a refresher course on ASP.NET localization should take a peek at this article: ASP.NET Localization. Let us take a typical ASP.NET application as an example.  When you generate a resource file for a file named Default.aspx, VS.NET generates a new folder named App_LocalResources (if it doesn't exist) and it creates a new file named Default.aspx.resx in this folder.

[Screenshot: the Generate Local Resource option in Visual Studio]

This option will only be visible in the design view of an aspx or ascx file.  

Default.aspx.resx will contain all strings that can be localized from Default.aspx.  Default.aspx.resx is the default resource file for Default.aspx.  It will contain the default language strings, in our case English.  Should you need to offer the same application in more than one language, you will need to create locale-specific resource files with a similar filename. For example, Default.aspx.fr-CA.resx would be a resource file for Canadian French. The logic to retrieve a string from the appropriate resource file is built into the .NET framework (it depends on the current thread's culture).
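As an aside, here is a minimal sketch (our illustration, not code from this project) of how a page can choose the culture that drives the resource lookup; the runtime checks Default.aspx.fr-CA.resx first and falls back to Default.aspx.resx for missing keys. The query-string switch is an assumption; use whatever mechanism fits your site:

// In the page's code-behind (e.g. Default.aspx.cs)
protected override void InitializeCulture()
{
    string lang = Request.QueryString["lang"] ?? "en";
    UICulture = lang;   // determines which .resx file resource lookups use
    Culture = lang;     // determines date/number formatting
    base.InitializeCulture();
}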

Those who have built a dynamic web site supporting multiple languages know that managing resx files is a burden, especially when the application changes.  TeddyMountain.com is a web site we recently created which provides English, French, and Danish versions.  We use a CMS for most of the text, but some dynamic pages like the store locator contain text which needs to be in resource files.  We collaborated with a resident of Denmark to translate the website; the translator is not a programmer and could not be trusted with the XML-based resx files. Furthermore, as the site is in constant evolution, we wanted a dynamic solution to avoid losing time exchanging Excel files.

Although there are some commercial applications out there, we decided to make a simple tool to unify a set of resource files.  We wanted to take advantage of the fact that we were building a website, so we could have the translator use a web-based tool to translate the site. This has the advantage of seeing the changes immediately in context, instead of simply translating text locally.  We made a class that merges our Default.aspx.resx, Default.aspx.fr-CA.resx, and Default.aspx.da.resx files into a single C# object that is easy to use.  Once the data is in the C# object, it can be modified and saved back to disk later on.

We called this C# object ResXUnified.  The only constructor of ResXUnified needs a path to a resx file.  It will then find all the resx files related to it in different languages.  Once you have constructed the object, you can access the data simply by using an indexer:

ResXUnified res = new ResXUnified("Default.aspx.resx");
string val = res["da"]["textbox1.Text"];

In the above code, we access the Danish language file and query the value of the "textbox1.Text" key.  This information can be changed and saved back to the disk:

res[""]["textbox1.Text"] = "Home"; // Default language we can pass an empty string
res["da"]["textbox1.Text"] = "Hjem"; // Danish
res["fr-CA"]["textbox1.Text"] = "Acceuil"; // French
res.Save();

When we call Save(), only the files that were changed will be written to the disk.  ResXUnified simply uses a Dictionary and a List to manage the keys and the languages.  To save, it uses the ResXResourceWriter class provided by the framework, which makes it easy to manipulate resx files. Similarly, we read resx files using the ResXResourceReader class.  Without these two classes, manipulating resx files would be much more complicated.
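For readers who have not used these two classes, their basic usage looks roughly like this (a simplified sketch, not the ResXUnified code itself; both classes live in System.Resources, in System.Windows.Forms.dll):

// Read every entry of a .resx file into memory, change one, and write it back.
Hashtable entries = new Hashtable();
using (ResXResourceReader reader = new ResXResourceReader("Default.aspx.da.resx"))
{
    foreach (DictionaryEntry entry in reader)
        entries[entry.Key] = entry.Value;           // e.g. "textbox1.Text" -> "Hjem"
}

entries["textbox1.Text"] = "Hjem";                  // change a value in memory

using (ResXResourceWriter writer = new ResXResourceWriter("Default.aspx.da.resx"))
{
    foreach (DictionaryEntry entry in entries)
        writer.AddResource((string)entry.Key, entry.Value);
    writer.Generate();                              // flush the XML back to disk
}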

I won't include more code here since this is a pretty straightforward collection class.

Later, for the TeddyMountain.com website, we made a quick interface in ASP.NET (see the demo here) to display all the resx files in a project. We enable the user to add a language file and translate all the fields from the default language.  Here is an example:

[Screenshot: the web-based resource editor]

When a string still needs to be translated, the textbox background color is different, making it easy for the translator to see what remains to be translated.

The generate local resource button in Visual Studio generates a lot of "useless" strings in resx files; we don't necessarily want to enter tooltips on every Label in our application. To make it easier to read, we designed an option to hide or show these empty strings.

This tool is pretty basic right now and there are more options that could be easily added.  For example, we could add a list of files that remain to be translated or allow for multiline strings (notice we don't support line-breaks in our strings). We encourage you to modify the code and show off your enhancements!

Final notes: If you try the code out, please remember to give write access to the ASP.NET worker process in all the App_LocalResources folders (and the App_GlobalResources folder if you use it). Also, since changing resx files for the default language restarts the web application, it is recommended you use the tool on a development copy of your website.  

Source Code | DEMO




ViewState property code snippet

January 23, 2008 12:47 by author EtienneT

One of the things I do frequently is create a new ViewState property to use in a UserControl in ASP.NET.  (Yes, we know, ViewState is evil. We store it in the session and use it on low-volume sites.) Just download this code snippet, open Visual Studio, open Tools->Code Snippets Manager, and finally Import.

ViewState Property.snippet (1.28 kb)

You can then use it just by writing the shortcut in the code editor and pressing TAB.

[Screenshot: typing the snippet shortcut in the code editor]

Here is the result:

[Screenshot: the generated ViewState property]

Using the ?? operator is nice since we don't have to do ugly ifs to return a default value and we don't have to assign something in the ViewState either.  We just fill the ViewState when we need it. This avoids bloating up the ViewState with default values.
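For reference, the property generated by the snippet presumably looks something like this (the property name and default value are placeholders):

public string Title
{
    // Return the default value when nothing has been stored in the ViewState yet,
    // without ever writing the default into the ViewState.
    get { return (string)(ViewState["Title"] ?? "Default title"); }
    set { ViewState["Title"] = value; }
}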

ViewState Property.snippet (1.28 kb)



CheckBoxList hover extender

January 22, 2008 09:16 by author JKealey

Summary

This article presents a CheckBoxList extender that enables the user to hover over individual checkboxes in the list and see a popup with additional information.  The information is populated dynamically (via a web service call) depending on the hovered checkbox's value.

[VIEW THE ONLINE DEMO]

[DOWNLOAD THE CODE] (469.00 kb)

Where could this be useful?

If you use the ManyToManyList in SubSonic, you know that it inherits from the built-in CheckBoxList control to display data from a many-to-many database relationship.  You probably also know that this can be really useful, but it has its limitations.  We are using the SubSonic many-to-many list to display a list of stores that a user can be associated with.  You can display only one field of information per item with the ManyToManyList control for the foreign table associated in the relationship (unless you use views!).  Here is an example:

[Screenshot: the ManyToManyList of stores, identified only by store ID]

For a particular user, we are listing all possible stores and showing the store ID as the description for each item.  This can have a meaning for some users, but when a new user comes in and doesn't know the store identifiers, you have a problem. All franchise stores share the same store name as they do business under the same name; they can be uniquely identified by their franchisor-assigned ID or by their address. However, we don't want to show too much information in this control, as it would clutter the interface. We need a way to display more information per item, without taking up too much screen real estate.

Our solution is very much inspired by AjaxControlToolkit controls such as the HoverMenu: we display information when the user hovers over an item in our CheckBoxList (or SubSonic ManyToManyList).  We created an AJAX Control Toolkit extender that asynchronously calls a web service method (or a page method) to obtain the information displayed in the popup control when the user hovers over an item.  Here is an example of the result:

[Screenshot: the popup shown when hovering over a checkbox in the list]

How to use it

You need a CheckBoxList, a panel to display the information, our extender, and a web service method to invoke.

<asp:CheckBoxList ID="CheckBoxList" runat="server">
    <asp:ListItem Text="Item #1" Value="1" />
    <asp:ListItem Text="Item #2" Value="2" />
    <asp:ListItem Text="Item #3" Value="3" />
    <asp:ListItem Text="Item #4" Value="4" />
    <asp:ListItem Text="Item #5" Value="5" />
</asp:CheckBoxList>

<asp:Panel ID="panelInfo" runat="server" CssClass="checkboxlisthover">
    <asp:Label ID="lblTest" runat="server" Text="Label"></asp:Label>
</asp:Panel>

<ajax:CheckboxListHoverExtender
id="checkboxlistext" runat="server"
TargetControlID="CheckBoxList"
PanelID="panelInfo"
DynamicServiceMethod="GetContent"
DynamicControlID="lblTest"
DynamicServicePath="~/CheckBoxList.asmx" />

The web service class should look something like this:

[ScriptService]
public class CheckboxList : WebService
{
    [WebMethod]
    [ScriptMethod]
    public string GetContent(string contextKey) { return ""; }
}

Implementation

A web service method is called with the value of the hovered checkbox.  When you DataBind the CheckBoxList, it is very important to assign a value to each of your ListItems.  In this example, each checkbox has a GUID value.  This GUID is passed as a parameter to the web service call automatically by the extender.  The popup panel is then filled with the information returned by the web service.
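A minimal implementation of the web method might therefore look like this (the store lookup is a placeholder; only the contextKey parameter comes from the extender):

[WebMethod]
[ScriptMethod]
public string GetContent(string contextKey)
{
    // contextKey is the Value of the hovered checkbox (a GUID in our case).
    // The lookup below is a placeholder; plug in your own data access code.
    Guid storeId = new Guid(contextKey);
    string name = "Example Store #42";      // would normally come from the database
    string address = "123 Main Street";
    return string.Format("<b>{0}</b><br />{1} (id: {2})", name, address, storeId);
}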

As stated previously, the CheckBoxListExtender control is very much inspired by the HoverMenu extender.  The two controls have similarities, but we can't use the HoverMenu directly in the CheckBoxList because we don't have access to the item template of a CheckBoxList.  This prevents us from using the built-in HoverMenu extender for each CheckBoxList item.

Coding a new extender

To code a new extender, you can use existing extenders to simplify your life: that's what we did for the CheckBoxListExtender.  It re-uses the HoverExtender and the PopupExtender.  Those two extenders are not in the sample page of the AjaxControlToolkit (we see the HoverMenuExtender and PopupControlExtender, but not the two we are using here), but they are in the source code if you want to look at them.  Basically, when we coded the CheckBoxListExtender, we had to declare the scripts we wanted to depend on:

[RequiredScript(typeof(CommonToolkitScripts))]
[RequiredScript(typeof(HoverExtender))]
[RequiredScript(typeof(PopupExtender))]
[RequiredScript(typeof(AnimationExtender))]

[Designer(typeof(CheckboxListHoverDesigner))]
[ClientScriptResource("LavaBlast.AJAX.CheckboxListExtender.CheckboxListHoverBehavior", "LavaBlast.AJAX.CheckboxListExtender.CheckboxListHoverBehavior.js")]
[TargetControlType(typeof(CheckBoxList))]
public class CheckboxListHoverExtender : DynamicPopulateExtenderControlBase

As you can see, the extender inherits from DynamicPopulateExtenderControlBase.  This means that the extender can dynamically populate the control via a web service call and all the necessary plumbing is already in place. Specifying the scripts you depend on is as easy as using the RequiredScript attribute on your extender class.

JavaScript behavior

As for the JavaScript, for each "TD" inside our CheckBoxList control, we created a HoverBehavior (this is from the HoverExtender).  Each time the HoverBehavior events are fired, we can do something about them.  In this case, we simply activated the PopupBehavior to show the popup panel and call the web service method to populate the content.  As the value of each checkbox of the list is not contained in the DOM of the page, most probably a security feature of ASP.NET, you have to somehow pass this information from the server to the extender behavior.  Since we couldn't find a way to pass a list of values from the server to the behaviour using a generic List variable, we simply used a string of comma separated values.  Right now we're using this:

[ExtenderControlProperty]
[DefaultValue("")]
[Browsable(false)]
public string Values
{
    get { return GetPropertyValue("Values", ""); }
    set { SetPropertyValue("Values", value); }
}

But we would much rather use the following:

[ExtenderControlProperty]
[DefaultValue("")]
[Browsable(false)]
public List<string> Values
{
    get { return GetPropertyValue("Values", ""); }
    set { SetPropertyValue("Values", value); }
}

It appears generic lists are not supported, unless we are mistaken. If someone knows if this is possible, please leave us a comment on this post.

Don't forget to look at the online demo!

CheckBoxListHoverExtenderDemo.zip (469.00 kb)



Dirt Simple ASP.NET CMS using the ScrewTurn Wiki

January 22, 2008 00:08 by author JKealey

A year ago we wanted to quickly integrate the capabilities of a content management system in a customer’s website. Budget was limited but so were the requirements.

  • The user SHALL be able to change a few (a dozen) paragraphs on the website. 
  • The user SHALL be able to use basic formatting (bulleted lists, headers, images) without knowing HTML.

The lengthy option was the integration of a powerful CMS and the shorter one was to create something quickly using one of the many open source rich text editors found on the Internet and a simple database table. We didn’t really feel like coding that infrastructure at that point for various reasons. 

At that point, we were already using a wiki for requirements management and task planning for this customer.  On very complex projects, we prefer TWiki because we had already used its metadata and form capabilities to make it easy to collaboratively work on software requirements back in 2005. However, we had installed the ScrewTurn wiki (an open source wiki in ASP.NET) for this customer, as its installation only takes a few seconds. We decided we would dynamically integrate content from our wiki into our website, which was sufficient for our customer for the time being.

We took a shorter lunch break that day and coded a dirt simple CMS application that queries the ScrewTurn wiki to obtain paragraphs of text. We simply made an HttpWebRequest to the printable version of the wiki page, cleaned out a bit of HTML markup that we did not need and cached the result. Using the control is then straightforward.

Register ScrewturnVisualizer in our Web.config (system.web, pages, controls):

<add tagPrefix="LB" assembly="LavaBlast" namespace="LavaBlast.CustomControls" />

Add the base information in our Skin to avoid repeating it everywhere:

<LB:ScrewturnVisualizer runat="server" BaseURL="http://ourclientwiki.lavablast.com" CssClass="ScrewTurn" />

Add the control on the appropriate pages:

<LB:ScrewturnVisualizer ID="stv1" runat="Server" PageName="CurrentSpecials" />
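Under the hood, the fetch-and-cache logic is roughly the following (a simplified sketch; the printable-page URL format is an assumption and the attached ScrewturnVisualizer does more markup cleanup than shown here):

// Namespaces used: System.IO, System.Net, System.Web, System.Web.Caching.
public static string GetWikiContent(string baseUrl, string pageName)
{
    string cacheKey = "wiki:" + pageName;
    string html = HttpRuntime.Cache[cacheKey] as string;
    if (html == null)
    {
        // Request the printable version of the wiki page.
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            baseUrl + "/Print.aspx?Page=" + HttpUtility.UrlEncode(pageName));
        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            html = reader.ReadToEnd();
        }
        // Strip whatever surrounding markup you don't need, then cache the result.
        HttpRuntime.Cache.Insert(cacheKey, html, null,
            DateTime.Now.AddMinutes(15), Cache.NoSlidingExpiration);
    }
    return html;
}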

Today, we’ve moved on to a full-fledged CMS and no longer use this code, but the attached code may still help someone out! We’re big fans of incremental engineering and this half hour of coding helped keep our clients happy while we moved to a better solution.

Side note: In terms of open source licences, I’ve always wondered what this would imply. ScrewTurn is GPL (as opposed to LGPL) and I’m curious to know if this would imply that websites using it as a simple CMS would have to be GPL as well. Because we’re making use of an online service (the code can quickly be adapted to work with any wiki or other website) and not extending the codebase, I think we’re not bound by the GPL. Any thoughts?

ScrewturnVisualizer.zip (1.41 kb)



Hybrid Accordion/TreeView Sitemap

December 20, 2007 09:14 by author EtienneT

Live Demo | Source Code (VS 2008) (856.65 kb)

The TreeView site map that we use in FranchiseBlast has become too long to fit reasonably in the left panel of our application. We wanted something more compact that would be as simple to use and maintain as our current solution.

Matt Berseth gave us the idea of using an AjaxControlToolkit Accordion to achieve a nice look and feel for our sidebar. Our site map is automatically generated from our Web.sitemap file. (We use different *.sitemap files for each client; we needed something dynamic to cut down on maintenance time. We simply change which *.sitemap file the Web.config points to in our configuration generation scripts.) Furthermore, we also trim what is available in the sidebar according to user roles. I've added a reference to the appropriate web.config settings to achieve this behaviour below.

<siteMap defaultProvider="secureProvider">
  <providers>
    <add name="secureProvider" type="System.Web.XmlSiteMapProvider" siteMapFile="Web.sitemap" securityTrimmingEnabled="true"/>
  </providers>
</siteMap>

Yesterday, I began coding a quick solution to our problem. I was inspired by the code in this post but ultimately I changed it a lot. We wanted all first level nodes in the Web.sitemap to be Accordion panes and all the other levels to be contained in TreeViews inside the parent pane.

We also highlight the current page in bold in the TreeView and display a different pane color to represent the current selection. Altogether, this is pretty simple stuff but hopefully it will help you avoid re-inventing the wheel.
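The gist of the code is along these lines (a simplified sketch of the idea, not the actual code from the download, which also handles the highlighting and styling):

// Namespaces used: System.Web, System.Web.UI.WebControls, AjaxControlToolkit.
// One AccordionPane per first-level sitemap node; a TreeView for its descendants.
private void BuildSidebar(Accordion accordion)
{
    foreach (SiteMapNode section in SiteMap.RootNode.ChildNodes)
    {
        AccordionPane pane = new AccordionPane();

        HyperLink header = new HyperLink();
        header.Text = section.Title;
        header.NavigateUrl = section.Url;
        pane.HeaderContainer.Controls.Add(header);

        TreeView tree = new TreeView();
        AddChildren(tree.Nodes, section);
        pane.ContentContainer.Controls.Add(tree);

        accordion.Panes.Add(pane);
    }
}

private void AddChildren(TreeNodeCollection nodes, SiteMapNode parent)
{
    foreach (SiteMapNode child in parent.ChildNodes)
    {
        TreeNode node = new TreeNode(child.Title, null, null, child.Url, null);
        nodes.Add(node);
        AddChildren(node.ChildNodes, child);   // recurse into deeper levels
    }
}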

Happy holidays to all!

 

Live Demo | Source Code (VS 2008) (856.65 kb)




Debug Visualizer for SubSonic Collections

December 7, 2007 20:00 by author EtienneT


Wouldn't it be nice to be able to see a SubSonic collection while you are debugging, just like with the DataSet debug visualizer? Because we often need such a tool to debug our SubSonic collections here at LavaBlast, I've created a small Visual Studio debug visualizer that you can use to see what your SubSonic collections contain.  How does it work?  While you are debugging, put a breakpoint somewhere and simply hover your mouse over a SubSonic collection variable.  You should see something similar to the screenshot shown above.

Once the tooltip has appeared, you can click on the small magnifier to open the debug visualizer.  You'll see a DataGridView similar to the picture on the right, showing all your SubSonic objects in an easy-to-navigate list.  Basically, my code takes the SubSonic list and transforms it into a DataTable.  I can't use the ToDataTable() method from the SubSonic AbstractList because it requires access to the provider's configuration, which isn't available to the code running inside the debug visualizer.
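The conversion boils down to something like the following (a simplified sketch of the general idea, not the exact code shipped in SubSonicVisualizer.dll):

// Namespaces used: System.Collections, System.Data, System.Reflection.
// Flatten any list of objects into a DataTable via reflection so it can be
// displayed in a DataGridView.
public static DataTable ToDataTable(IList items)
{
    DataTable table = new DataTable();
    if (items == null || items.Count == 0)
        return table;

    PropertyInfo[] properties = items[0].GetType().GetProperties();
    foreach (PropertyInfo property in properties)
        table.Columns.Add(property.Name, typeof(string));

    foreach (object item in items)
    {
        DataRow row = table.NewRow();
        foreach (PropertyInfo property in properties)
        {
            object value = property.GetValue(item, null);
            row[property.Name] = value == null ? string.Empty : value.ToString();
        }
        table.Rows.Add(row);
    }
    return table;
}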

To use this simple tool, download this file SubSonicVisualizer.dll (8.00 kb) and put it in your [My Documents]\Visual Studio 2008\Visualizers\ folder, creating the directory if it doesn't already exist. (You might want to double check your file system permissions on this folder, as well.) I compiled this for Visual Studio 2008.  I have also included the source code here (Visual Studio 2008) in case anyone wants to enhance it.

SubSonicVisualizer.zip (386.36 kb)

If you are using Visual Studio 2005, I think this file will work for you, but I have not tested it: SubSonicVisualizer2005.dll (8.00 kb). I simply changed the reference from Microsoft.VisualStudio.DebuggerVisualizers.dll version 9.0 to 8.0 and, from what I have read, it should work in VS.NET 2005.



SubSonic Limitations - Part 2 (aka: Knee deep in … SubSonic. )

December 4, 2007 22:02 by author JKealey

After my recent post asking for the most elegant way to support multiple databases with the same schema at runtime, I received some good pointers in the SubSonic forums from dbr, a forum user. In the end, I admit I should have done my homework before posting.

One elegant solution to change the SubSonic provider/connection string at runtime makes use of SharedDbConnectionScope. I personally do not like this solution, as I prefer my code to explicitly state what it’s doing via its properties or arguments instead of relying on contextual information.  I was also concerned about how it works with regards to concurrency, so I did a little digging. Looking at the code, I discovered it internally uses the ThreadStatic attribute, which seems like a godsend at first, but further investigation reveals the implementation may be flawed. I did see people complain that it didn’t work for them, but I don’t know if that is related to the ThreadStatic attribute. I do not fully trust this technique, but I may be wrong as I'm far from an expert in concurrency.

Returning to dbr's suggestion: he simply generates different providers at runtime, one for each connection string. This sounds simple, provided you can manage to modify the ProviderName property on the collection (ActiveList) or object (ActiveRecord) every time you load or save from the database. Without resorting to SharedDbConnectionScope, you can't use the auto-generated constructors because they fall back to the default provider, which is hardcoded in the generated object’s schema.

The elegant implementation to encapsulate loading/saving from the database is to use a controller, as would be suggested by the MVC design pattern. I have not yet played with the new MVC templates provided by SubSonic, but we already use a good generic SubSonicController here at LavaBlast.

I wanted to re-write my object loading/saving code using this new solution to get rid of my inelegant concurrency locks. Although the change looked obvious, I encountered a few little hiccups along the way and I thought I'd post my findings here.

Limitation 1: You can't create an object by specifying its ProviderName in an ActiveRecord constructor using the default generated code.

  • Workaround: You need to load it using a collection, which supports the ProviderName.
  • Workaround 2: Use SharedDbConnectionScope
  • Workaround 3: Change the code templates to add new constructors.

Limitation 2: You can't use a collection's Where parameter to load your data (via its primary key or other filter), because of incomplete framework code. Hopefully this will be resolved soon (see issue 13402).

  • Workaround: Copy-paste the code used internally by the Collection, but pass in the extra ProviderName parameter to the new Query.

Limitation 3: You can't specify the ProviderName property on an ActiveRecord because the setter is marked as protected.

  • Workaround: Change the code templates and add a public method/property that sets ProviderName from within the class.
  • Use SharedDbConnectionScope.

Limitation 4: When you load an ActiveRecord by running a query or by loading a Collection, the ActiveRecord does not inherit the ProviderName from the Collection/Query. This is probably due to Limitation 3.  

My current prototype no longer uses the C# lock keyword. I create instances of a controller, passing in which connection string name to use. All database loading/saving is done via this controller, for which I have attached sample code extracts. I managed to get the object loading code to my liking, but I had to resort to SharedDbConnectionScope for saving. Once the minor limitations in the framework are resolved, I will be more comfortable with the code.

In summary, I did manage to get a working prototype and I have attached the relevant portions of code that works with data from the appropriate database (chosen at runtime). Hope this helps!



SubSonic Limitations

November 30, 2007 10:23 by author JKealey

Question: What is the most elegant way to reuse code generated by SubSonic for tables that share the same schema in different databases?

  • Ideally, I would have a shared schema definition, generated once, and seamlessly integrated into the code generated for each separate provider.
  • Creating a separate DataProvider for a subset of tables reduces the amount of code that is generated, but is not very convenient to use if you do not use the same namespace for all your projects.
  • Creating a separate DataProvider does not solve the problem of database selection at runtime.

LavaBlast's integrated franchise management solution operates on a centralized database and a data warehouse which collects data from all our points of sale. Recently, we decided we wanted to create some management pages for our various e-commerce websites in our centralized portal. Because our recently developed e-commerce backend is the same as our point of sale (reuse++), we automatically obtained features like centralized product line and pricing management for our store fleet (featureSynergy++). However, we wanted to be able to process website users and orders from this same central portal, not on each individual site.

My first question was: how do we get the union of the data from the same table in multiple databases? One solution would be to merge these into the data warehouse, but we didn't want to build complex infrastructure to bring the data into the warehouse and push the changes back out when necessary. I suppose having everything in the same database in the first place would be a solution, but it is not how we architect our systems. SQL Server Replication might be useful, but it is not bidirectional with SQL Server Express. I could easily write a view containing a UNION query that merges the data from the set of databases, but that would be a maintenance problem: for each table, I would have to hardcode the list of databases.

I wrote a quick stored procedure that builds the UNION query from a table of website-to-database-name mappings, given a few parameters. It is inefficient and not strongly-typed (hence it feels dirty) but, given the volume of data on these sites, it is good enough for now without being a maintenance pain. By passing a few parameters to the stored procedure, we can filter the rows before the union and improve performance. I am curious to know if there are more elegant solutions to this problem.

Anyhow, with this first problem solved, we could bind our GridView to a DataTable produced by the execution of a stored procedure and see the merged results. However, because we have a standard infrastructure that makes good use of SubSonic magic for filtering, paging, and sorting, this was not enough. Our infrastructure only works on views or tables in our central database, not on arbitrary results returned by stored procedures, and SubSonic did not generate any code for the merged tables in the central database. Still, thanks to the SubSonic provider model, we managed to load a collection based on the type defined in one DataProvider (point of sale) using data provided by the stored procedure in another DataProvider (central server). Below is an example without any filtering, sorting, or paging.

SubSonic.StoredProcedure sp = SPs.WebsiteUnionOfTables(POSBOLib.Generated.ShoppingCart.ViewWebUser.Schema.TableName, "*", string.Empty, string.Empty);
POSBOLib.Generated.ShoppingCart.ViewWebUserCollection coll = new POSBOLib.Generated.ShoppingCart.ViewWebUserCollection();
coll.LoadAndCloseReader(sp.GetReader());

With a bit more work on the stored procedure, we can make it efficient, but we don't want to use T-SQL all that much, to make everything easier to maintain. (We could use CLR stored procedures, but that's another story).

My second question was: how am I going to update this data? When manipulating the data, I know which database it comes from thanks to an additional column appended by my stored procedure, but I cannot create an updatable SubSonic object with this, and I don't feel like writing SQL anymore now that we use SubSonic. However, the DataProvider name is a hardcoded string in the auto-generated code… and changing the templates to pass in extra parameters looks like too much work, in addition to breaking the simplicity of the framework.

Having played with the DataProvider model, one idea that came to me was to switch the provider context dynamically at runtime. The framework doesn't support this, so I had to hack it in and make sure all my data access was contained in critical sections (using the lock keyword), each beginning with an invocation of a small provider-switching method.

Another option, which just came to me now, would be to obtain the SQL generated by SubSonic during an operation and perform string replacements to forward the requests to the appropriate database. This too is too much of a hack, however, since it depends on the implementation details and the DBMS.

In conclusion, I did manage to build a working prototype using locks and the approach described above, but I feel the code is too dirty and I am open to suggestions from SubSonic experts (I'm looking at you, Rob Conery and Eric Kemp). If there is a clean way to do it, I would love to contribute it to the SubSonic project!

Read Part 2.



An Improved SubSonic ManyManyList

November 28, 2007 13:38 by author JKealey

Etienne is on fire with his recent blog posts about SubSonic, so I thought I would contribute one too.

Five months ago I submitted a patch to SubSonic concerning their ManyManyList control (SubSonic.Controls.ManyManyList). I love the control as it is a real time saver, but there are a few limitations.

1 - Cannot use a view as a primary table or foreign table.
In my context, I want to display strings and these strings are not directly in the ForeignTable. The control had assumptions on the presence of a primary key.

2 - Cannot sort the resulting elements
In my context, I want to sort the strings alphabetically.

3 - Cannot filter the foreign table
In my context, a particular item can be in multiple categories, but the set of categories it can be in is not the full foreign table.

4 - The save mechanism deletes all rows and recreates them. If you have other columns in your map table, you lose all that information. Furthermore, there are no checks to see if the delete/recreation is necessary. Even if there are no changes, it still deletes/recreates everything.

I've pretty much re-written everything to support the listed behaviour. The parameter names should be reviewed because they are not very user friendly, and I am not well versed in the SubSonic naming conventions. Since then, we've used this code in production and it appears to work perfectly for our purposes (and it should work exactly as the other one did out of the box if you don't specify any of the new properties).

Agrinei enhanced my code to make it even more customizable.

Download the patch directly on CodePlex and don't forget to vote for the issue!


