LavaBlast Software Blog

Help your franchise business get to the next level.

SQL Server - Restore a database backup via the command line

October 14, 2008 12:45 by author EtienneT

Anyone who's ever developed a web application in .NET has had to play with a database management system, most probably SQL Server or its free cousin, SQL Server Express.  One of the tasks I personally hate doing with our SQL Server Express 2005 databases is restoring them from a backup, using SQL Management Studio.  We sometimes restore the point of sale database used by our customers to track down various issues or to build reports using their data as our test set. The process is not that long when you restore a backup from your own machine (restoring the MDF and LDF files to their original directory). If you restore databases from foreign systems, the process is simple only if both systems stored their databases in the same directory, which is rarely the case.

For example, I use Windows Vista x64 and our dedicated server uses a 32-bit version of Windows 2003.  Our data is stored in the default SQL Server directory, which is in the Program Files folder.  However, when using a 64-bit operating system, the program files directory is different (C:\Program Files (x86)).  Since the locations of the MDF and LDF files are encoded directly in the .bak file generated by SQL Server, restoring them via the command line is especially challenging when you don't control the original locations of the MDF and LDF files, nor their Logical Names.

Our goal is to be able to restore a database by executing a simple command such as this:

restore.bat LavaBlast

This command would look for LavaBlast.bak in the current directory and would restore the LavaBlast database to a default location on your computer where you want to store your MDF and LDF files.

Here is the code for restore.bat:

sqlcmd -S .\SQLEXPRESS -i attachDB.sql -v database="%1" -v root="%CD%"

We are simply calling sqlcmd (added to our path) to connect to our local instance of SQL Server Express and we are executing an SQL file (attachDB.sql) which includes two variables: database and root (the current path).

Here is the code for attachDB.sql:

USE MASTER
GO
-- If the database already exists, kick out open connections so the restore is not blocked.
IF EXISTS (SELECT * FROM sys.databases WHERE name = N'$(database)')
  ALTER DATABASE $(database) SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
 
create table #backupInformation (LogicalName varchar(100),
PhysicalName varchar(100),
Type varchar(1),
FileGroupName varchar(50) ,
Size bigint ,
MaxSize bigint,
FileId int,
CreateLSN int,
DropLSN int,
UniqueId uniqueidentifier,
ReadOnlyLSN int,
ReadWriteLSN int,
BackupSizeInBytes int,
SourceBlockSize int,
FileGroupId int,
LogGroupGUID uniqueidentifier,
DifferentialBaseLSN bigint,
DifferentialBaseGUID uniqueidentifier,
IsReadOnly bit, IsPresent bit )
 
insert into #backupInformation exec('restore filelistonly from disk = ''$(root)\$(database).bak''')
 
DECLARE @logicalNameD varchar(255);
DECLARE @logicalNameL varchar(255);
 
select top 1 @logicalNameD = LogicalName from #backupInformation where Type = 'D';
select top 1 @logicalNameL = LogicalName from #backupInformation where Type = 'L';
 
DROP TABLE #backupInformation 
 
RESTORE DATABASE $(database)
FROM DISK = '$(root)\$(database).bak'
WITH REPLACE,
MOVE @logicalNameD TO 'C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data\$(database).mdf',
MOVE @logicalNameL TO 'C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data\$(database).ldf'
GO

Simply put, we are extracting the logical names (and other metadata) from the .bak file into a temporary table. We then use those values to restore the MDF and LDF to the correct location, instead of the ones specified in the .bak file.

If you want to use this script, simply ensure you change the location of your SQL Server data files (the last lines in the SQL file) and you should be good to go. Please note that in its current form, the script only supports files with one MDF and one LDF file in the database backup. Furthermore, it assumes your .bak file has the same name as the database you want to import. We could also enhance the script by automatically adding permissions to the ASP.NET user after restoring the database. Feel free to post any enhancements you make in this post's comments and I hope you'll find this script useful! Enjoy.
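
If you ever want to kick off the same restore from a small .NET utility or a test fixture instead of a console window, a thin wrapper around the batch file does the trick. This is just a sketch; it assumes restore.bat and the .bak file sit in the working directory.

using System;
using System.Diagnostics;

class RestoreDatabase
{
    static void Main(string[] args)
    {
        // Assumes restore.bat and <database>.bak are in the current working directory.
        string database = args.Length > 0 ? args[0] : "LavaBlast";

        // Run the batch file through cmd.exe and wait for it to finish.
        ProcessStartInfo info = new ProcessStartInfo("cmd.exe", "/c restore.bat " + database);
        info.UseShellExecute = false;

        using (Process process = Process.Start(info))
        {
            process.WaitForExit();
            Console.WriteLine("restore.bat exited with code {0}", process.ExitCode);
        }
    }
}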



BlogEngine.net Post Security

August 13, 2008 15:23 by author EtienneT

BlogEngine.net 1.3/1.4 supports user roles, but there doesn't seem to be a way to make it mandatory for users to sign in before they can see blog posts.  That's not something you usually want on a public blog, but on a corporate blog you may want to make sure your news only reaches the people you choose.  This seemed like a perfect candidate for a BlogEngine extension.

User Filtering

In our scenario, we don’t want any unregistered users to be able to see blog posts.  This can be easily checked by calling Membership.GetUser() and ensuring the returned value is not null.  We could filter out specific users as well, but we didn’t implement this feature in our extension.

Post Filtering

It could be interesting to restrict who can see the posts in a specific blog category.  For example, a blog category “Top Secret” which can only be read by your company's upper management…  Not very likely in a blog, but you get the point.  Our extension does this filtering by associating a blog Category with a membership Role in the extension’s settings.


By associating a membership role with a blog category name, the extension ensures the user has this role before displaying a post associated with this blog category name.  If you add two roles for the same category, posts with this category will only be served if the user has both roles.

Adding a setting with an empty category name will ensure that all posts require a particular role.

Code

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using BlogEngine.Core;
using BlogEngine.Core.Web.Controls;
using System.Collections.Generic;
 
/// <summary>
/// Summary description for PostSecurity
/// </summary>
[Extension("Checks to see if a user can see this blog post.",
            "1.0", "<a href=\"http://www.lavablast.com\">LavaBlast.com</a>")]
public class PostSecurity
{
    static protected ExtensionSettings settings = null;
 
    public PostSecurity()
    {
        Post.Serving += new EventHandler<ServingEventArgs>(Post_Serving);
 
        ExtensionSettings s = new ExtensionSettings("PostSecurity");
 
        s.AddParameter("Role", "Role", 50, true);
        s.AddParameter("Category", "Category", 50);
 
        // describe specific rules for entering parameters
        s.Help = "Checks to see if the user has any of those roles before displaying the post. ";
        s.Help += "You can associate a role with a specific category. ";
        s.Help += "All posts having this category will require that the user have the role. ";
        s.Help += "A parameter with only a role without a category will enable to filter all posts to this role. ";
 
        s.AddValues(new string[] { "Registered", "" });
 
        ExtensionManager.ImportSettings(s);
        settings = ExtensionManager.GetSettings("PostSecurity");
    }
 
    protected void Post_Serving(object sender, ServingEventArgs e)
    {
        Post post = (Post)sender;
        bool continu = false;
 
        MembershipUser user = Membership.GetUser();
 
        continu = user != null;
 
        if (user != null)
        {
            List<string> categories = new List<string>();
            foreach (Category cat in post.Categories)
                categories.Add(cat.Title);
 
            string[] r = Roles.GetRolesForUser();
 
            List<string> roles = new List<string>(r);
 
            DataTable table = settings.GetDataTable();
            foreach (DataRow row in table.Rows)
            {
                if (string.IsNullOrEmpty((string)row["Category"]))
                    continu &= roles.Contains((string)row["Role"]);
                else
                {
                    if (categories.Contains((string)row["Category"]))
                        continu &= roles.Contains((string)row["Role"]);
                }
            }
        }
 
        e.Cancel = !continu;
    }
}

 

Simply save this code in a .cs file and put it in your App_Code/Extensions folder to enable the extension in BlogEngine.net.



SubSonic v2.1 Controller and Utilities

clock August 4, 2008 11:45 by author JKealey

We've done a few posts about how we use SubSonic here at LavaBlast. Recently, SubSonic v2.1 was released and we upgraded the code we've previously published to support this new version. We've blogged about our changes in the past and not much has changed since, but we did get a request to post our source code, so here it is. I've actually included a bit more code in this release so that this blog post has a bit more substance!

Download the source code.

The file contains our SubSonicController, SubSonicHelper, and our associated code generation templates. Nothing new to see here, except that you get downloadable code. We unfortunately did not have time to play with the new Query engine all that much, so our controller still uses the old one (which is used throughout our codebase). If anyone would like to augment our code to support the new query engine and post it in the comments, that would be great! Moving to the new query engine would circumvent the OR query limitation related to the search fields we've mentioned in the past.

Auditing using SubSonic

We like to log certain things in our Electronic Journal as it gives us ways to debug more efficiently, and provides us with a way to keep track of who changed what in case something breaks. We've included an SQL script that generates our ElectronicJournal table, and code which allows us to save events in the table. We've wired it up to our SubSonicController so that we can log all object updates, for example. What you log is your own business and it depends on your needs and performance requirements.
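
What the logging call looks like depends entirely on your schema, but as a rough sketch, assuming the generated ElectronicJournal ActiveRecord class has EventType, Description, UserName and Created columns (the column names here are hypothetical), saving an event boils down to something like this:

// Rough sketch only; the ElectronicJournal column names are hypothetical.
protected virtual void LogEvent(string eventType, string description)
{
    ElectronicJournal entry = new ElectronicJournal();
    entry.EventType = eventType;
    entry.Description = description;
    entry.UserName = (HttpContext.Current != null && HttpContext.Current.User != null)
        ? HttpContext.Current.User.Identity.Name
        : "system";
    entry.Created = DateTime.Now;
    entry.Save(); // standard SubSonic ActiveRecord save
}

A method along these lines can then be called from the controller's insert/update/delete paths.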


We've built an administrative interface over this table allowing us to navigate efficiently through the events. (Each of our pages in FranchiseBlast extends from generic controls which list/filter/page rows using our ObjectDataSource, effectively re-using the code we're presenting here.)

Various notes

  • Remember not to mix AND and OR in the current version of this code, with the old query engine.
  • Don't log everything on high volume sites, for obvious reasons.
  • Issue 3 is still open and waiting to be committed. The other bugs I previously reported (and a new one) have been committed.
  • We removed the ToList() which we added last time, because GetList() is already present. (Thanks to our readers for noticing!)
  • We replaced all calls to IsLoaded() with !IsNew() in our codebase. Click here to learn why.


jQuery Content Slider Tutorial

July 22, 2008 15:22 by author EtienneT

Simple Demo (Firefox, IE7, IE6, Opera, Safari) | Demo with content | Source Code

LavaBlast recently launched a new front page that incorporates a new jQuery slider. You can see it in action in the demos linked above, or directly on our website's home page.

Scroll down to find out how this little puppy was implemented!

 

 

HTML & CSS

HTML

This is the basic ASP.NET in the ASPX page:

<div class="FrontMenu">
    <div class="Bar">
        <asp:Repeater ID="Repeater" runat="server" onitemdatabound="Repeater_ItemDataBound">
            <ItemTemplate>
                <span class="item">
                    <a href="#" class="jFlowControl">
                        <asp:Label ID="lbl" runat="server" />
                    </a>
                    <div class="spike" style="z-index:999999">
                    </div>
                    <div class="right">
                    </div>
                </span>
            </ItemTemplate>
        </asp:Repeater>
    </div>
    <div class="Panel">
        <div><cms:Paragraph ID="paragraph1" runat="server" ContentName="frontMenu1" /></div>
        <div><cms:Paragraph ID="paragraph2" runat="server" ContentName="frontMenu2" /></div>
        <div><cms:Paragraph ID="paragraph3" runat="server" ContentName="frontMenu3" /></div>
        <div><cms:Paragraph ID="paragraph4" runat="server" ContentName="frontMenu4" /></div>
        <div><cms:Paragraph ID="paragraph6" runat="server" ContentName="frontMenu5" /></div>
    </div>
</div>

This is pretty simple HTML.  Everything is enclosed in the main div with the css class FrontMenu.  We have the animated bar at the top and the content panel underneath it.  The menu bar is generated by a simple repeater control bound to data in our ASP.NET page. Each menu item is a span containing a link that we’ll use to change the selected menu item.  The Panel div contains multiple dynamic paragraphs from our content management system (SubSonic CMS).  You could easily change this code to bind to data from another source.

Here is the code behind for this control.  We kept it pretty simple:

public partial class FrontPageMenu : UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            List<string> list = new List<string>();
            list.Add("How can we help?");
            list.Add("Our Products");
            list.Add("Hot Features");
            list.Add("Testimonials");
            list.Add("Read");
 
            Repeater.DataSource = list;
            Repeater.DataBind();
        }
    }
 
    protected void Repeater_ItemDataBound(object sender, RepeaterItemEventArgs e)
    {
        if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
        {
            Label lbl = e.Item.FindControl("lbl") as Label;
            lbl.Text = (string)e.Item.DataItem;
        }
    }
}

 

The CSS section of this article dives deeper into how the HTML menu item elements are styled and rendered.

jQuery

jFlow

We used jFlow to scroll our panels when we click on a menu item.  The code was fairly straightforward to use.  We included this script in our user control.

$(".FrontMenu").jFlow({
    slides: ".FrontMenu .Panel",
    width: "974px",
    height: "344px",
    duration: 300
});

And then we included a simple reference to our script manager.

<asp:ScriptManagerProxy ID="proxy" runat="server">
    <Scripts>
        <asp:ScriptReference Path="~/js/jquery.flow.1.0.min.js" />
    </Scripts>
</asp:ScriptManagerProxy>

 

We could have used Next and Previous buttons but decided not to.

 

IE6 Problems

This menu did require some basic jQuery.  Except for the scrolling animation, we could have built almost all of this in pure CSS thanks to the hover pseudo-class. However, Internet Explorer 6 came back from the grave to haunt us…  As we still have some visitors who use IE6, we could not afford to let our home page break in IE6.  Microsoft gods: please, please, hear our prayers and find a way to erase IE6 from the face of the earth.  But hey, we've got work to do and can't wait five years for the browser to die like IE 5 did.

We discovered that using a:hover in IE6 to change our background image will make the browser crash.  We used IETester to test the menu in IE6: it made IETester crash.  We then tried Virtual PC running Win98 and IE6: Internet Explorer crashed again when we hovered over one of the links with a:hover CSS styles.

The solution to this problem was simply to apply a CSS class (.hover) to the hovered link using jQuery. We could then easily style the children of this element to our liking without breaking IE6.

$(".FrontMenu .Bar a").mouseover(function() { $(this).addClass("hover"); });
$(".FrontMenu .Bar a").mouseout(function() { $(this).removeClass("hover"); });

To change the style of the selected item we added the css class .sel to the span.item (the parent of the clicked link).  First of all, when a link is clicked, remove the currently selected item.  Second, set the parent of the link as the current selected item.  It’s important to return false as otherwise the browser will follow the link and scroll to the top of the page.

$("div.FrontMenu div.Bar a").click(function()
{
    $("div.FrontMenu div.Bar").children("span.sel").removeClass("sel");
    $(this).parent().addClass("sel");
    return false;
});

 

CSS

Let’s take a deeper look at the menu’s CSS.  Let’s start with the FrontMenu class, which is not too complicated.

.FrontMenu
{
    padding: 5px 5px 0px 5px;
    margin-left: 2px;
    margin-top: -4px;
}
 
.FrontMenu .Bar
{
    background: #F6ECA4 url(images/MainPage/bar.jpg) no-repeat top left;
    width: 980px;
    height: 48px;
    position: relative;
}
 
.FrontMenu .Bar a
{
    color: #FFFFFF;
    font-size: large;
    font-family: Tahoma;
    position: relative;
    top: 7px;
    display: block;
    text-decoration: none;
    padding-right: 6px;
    margin-right: -6px;
    cursor: pointer;
}

Now let’s take a closer look at the Bar menu.  Here are the images we used to style our menu items.

  • Selected item (.FrontMenu .Bar span.sel and its children): selLeft.gif, selRight.gif, spike.gif
  • Hovered item (.FrontMenu .Bar a.hover and its inner span): hoverLeft.gif, hoverRight.gif

 

As you saw in the jQuery part, we change the class in JavaScript to bypass some IE6 issues, so the class names in the CSS should not surprise you.

The code for the main span for each menu item:

.FrontMenu .Bar span.item
{
    line-height: 30px;
    margin: 0px 0px;
    float: left;
    position: relative;
    display: inline;
    cursor: pointer;
    width: 188px;
    text-align: center;
    margin-left: 6px;
}

Here is the code when you hover over the link inside the menu item:

/* We have to handle hover with jQuery because :hover makes IE6 crash when we change the background image. */
.FrontMenu .Bar a.hover
{
    background: transparent url(images/MainPage/hoverRight.gif) no-repeat top right;
    height: 30px;
}
 
/* We have to handle hover with jQuery because :hover makes IE6 crash when we change the background image. */
.FrontMenu .Bar a.hover span
{
    background: transparent url(images/MainPage/hoverLeft.gif) no-repeat top left;
    height: 30px;
    display: block;
}

 

As you can see, the link carries the hoverRight background image (anchored to the right edge) and the span inside the link carries hoverLeft (anchored to the left).  This lets the link be any length and the control resizes gracefully.  If you ever get a link that is wider than the left image, simply make the image wider...

Then we only needed the CSS to change the menu item to make it look selected.

 

 

 

 

.FrontMenu .Bar span.sel a:hover { background: none; padding-left: 0px; margin-left: 0px; }
.FrontMenu .Bar span.sel a:hover span { background: none; padding-left: 0px; margin-left: 0px; }
 
.FrontMenu .Bar span.sel
{
    background: transparent url(images/MainPage/selLeft.gif) no-repeat top left;
    height: 48px;
}
 
.FrontMenu .Bar span.item .spike, .FrontMenu .Bar span.sel .spike
{
    background: transparent url(images/MainPage/spike.gif) no-repeat top left;
    display:none;
    position: absolute;
    top: 44px;
    left: 50%;
    margin-left: -11px;
    width: 22px;
    height: 17px;
    z-index: 9999;
}
 
.FrontMenu .Bar span.sel .spike
{
    display: block;
}
 
.FrontMenu .Bar span.sel .right
{
    background: transparent url(images/MainPage/selRight.gif) no-repeat top right;
    position: absolute;
    height: 48px;
    width: 4px;
    right: 0px;
    top: 0px;
}
 
.FrontMenu .Bar span.sel a { color: #d43300; }

 

The parent span for the menu item with the .sel class contains the selLeft image and the div.right inside this span contains the selRight image.  We also have to make sure that the hover style does not get applied when the item is selected.

The spike is positioned absolutely, centered under the selected menu item.  To center an absolutely positioned element, you have to set the following:

left: 50%;
margin-left: -11px; /* negative half of the width */
width: 22px;

Even with a higher z-index, we were not able to make the spike appear on top of the content panel in IE6.  Therefore, we had to put a top margin on the content panel to make sure the spike did not overlap it:

.FrontMenu div.Panel
{
    height: 328px;
    width: 974px;
    margin-top: 15px;
}

 

 

Simple Demo (Firefox, IE7, IE6, Opera, Safari) | Demo with content | Source Code




Image Post Processing Caching

July 15, 2008 13:41 by author EtienneT

Complete Source Code | Small Demo

This article presents a small class library that abstracts opening, modifying (applying effects, resizing, etc.), and caching images in ASP.NET.  Everything is abstracted to ensure the code is easily testable (opening, modifying, and caching the images).  For example, you may want to resize your images or convert them to black and white, cache the result, and test these operations.

You want to be able to read image data from different sources:

  • An image on the local disk on the web server
  • A remote image on the Internet that you want to download and cache
  • An image in your database

You want to be able to apply any number of post processing algorithms to the resulting image:

  • Resize the image (generate thumbnails)
  • Apply an image filter such as convert to black and white
  • Do anything on the image that requires computing and where caching the result proves beneficial from a performance standpoint. 

CacheManager

First of all, let's look at a nice caching class I found on a DotNetKicks kicked story.  Zack Owens gave a nice piece of code in his blog to help you manage your ASP.NET cached objects.  The goal of this class is simply to let you have a strongly typed way to access your cached objects.  Here is the code for the class with some slight modifications:

public class CacheManager<T> where T : class
{
    private Cache cache;
    private static CacheItemRemovedCallback callback;
    private static object _lock = new object();
 
    private TimeSpan cacheTime = new TimeSpan(1, 0, 0, 0); // Default 1 day
 
    public CacheManager(Cache cache)
    {
        this.cache = cache;
        if(callback == null)
            callback = new CacheItemRemovedCallback(RemovedFromCache);
    }
 
    public T Get(string key)
    {
        try
        {
            lock (_lock)
            {
                if (cache[key] == null)
                    return default(T);
 
                T b = CastToT(cache[key]);
 
                return b;
            }
        }
        catch (ArgumentException) // The object was disposed by something else; treat it as a cache miss and return null.
        {
            return null;
        }
    }
 
    public void Add(string key, T obj, TimeSpan cacheTime)
    {
        lock (_lock)
        {
            if (obj != null)
                cache.Add(key, CastFromT(obj), null, DateTime.Now.Add(cacheTime), Cache.NoSlidingExpiration, CacheItemPriority.Default, callback);
        }
    }
 
    protected void RemovedFromCache(string key, object o, CacheItemRemovedReason reason)
    {
        T obj = o as T;
        if (obj != null)
        {
            lock (_lock)
            {
                DisposeObject(obj);
            }
        }
    }
 
    protected virtual void DisposeObject(T obj) { }
 
    protected virtual T CastToT(object obj) { return obj as T; }
 
    protected virtual object CastFromT(T obj) { return obj as T; }
 
    public TimeSpan CacheTime
    {
        get { return cacheTime; }
        set { cacheTime = value; }
    }
}

As you can see, this is a pretty simple class.  We defined some virtual methods to be overridden in a child class, for example DisposeObject if you want to cache disposable objects (keep reading to find out why that is a really bad idea).

The constructor requires a Cache object; we can simply pass along the Page's Cache (Page.Cache) to make it happy.  We now want to derive from CacheManager to help us in our main task which is to cache modified images.
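
Before we do, here is what using the base class directly looks like from a page's code behind; the key and the cached value below are arbitrary examples:

// Arbitrary example: caching the raw bytes of a logo for one hour.
CacheManager<byte[]> manager = new CacheManager<byte[]>(Page.Cache);

byte[] logo = manager.Get("logo");
if (logo == null)
{
    logo = System.IO.File.ReadAllBytes(Server.MapPath("~/images/logo.png"));
    manager.Add("logo", logo, TimeSpan.FromHours(1));
}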

ImageCacheManager

To create our image-specific cache manager, we defined a new class called ImageCacheManager, a subclass of CacheManager that caches byte arrays (our images).  We implemented this feature in the past, but made a big mistake that led to a mysterious bug: we were caching Bitmap objects in the ASP.NET cache.  Bitmap objects are GDI+ managed objects and they need to get disposed.  Even though we had methods to dispose the Bitmaps when they were removed from the cache, some Bitmap objects were disposed while still in the cache (because of a memory leak elsewhere in the application), which caused errors downstream when we tried to use those objects later.  The lesson: we now cache only the byte[] of the images.

The default image format is PNG in our case, but you can specify your own in the constructor.  In our case we are using PNG because we are in a controlled environment where we know everyone is using IE7, so we can use transparent PNG.  You probably want to use a different format for general public web sites since IE6 doesn't support transparent PNG.

This class also makes it possible to download remote images and cache them locally.  We needed this feature because we have many remote points of sale which synchronize their product list from a central database.  We didn't want to send product images during synchronization because it would have been too much data.  Instead, we store our images on a central server and, since our stores always have Internet access, they download and cache the images via this image cache manager.  In our product, when a franchisor changes a product image in the main database, the cached version of the picture in the point of sale expires within a day and the new picture is downloaded the next time it is used.

ImageCacheManager is an abstract class.  It implements the image caching, lets you apply post processing with options (the IImagePostProcess interface), and abstracts the way the image is loaded (the IImageReader interface).  Here is the code:

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.Caching;
using System.Drawing;
using System.IO;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.Web;
using System.Net;
using LavaBlast.ImageCaching.PostProcess;
using LavaBlast.ImageCaching.Reader;
 
namespace LavaBlast.ImageCaching
{
    /// <summary>
    /// This class specializes in caching modified images.  Images are cached as
    /// byte[].  You can apply different modifications in series to the image
    /// before caching it.
    /// 
    /// The cache key is constructed from the image path plus the post processing options,
    /// which makes it possible to cache several copies of an image with different post processing applied.
    /// 
    /// Subclasses control how the image is read: from the local disk, over the Internet, etc.
    /// </summary>
    public abstract class ImageCacheManager : CacheManager<byte[]>
    {
        public ImageCacheManager(Cache cache) : this(cache, ImageFormat.Png) { }
 
        protected Dictionary<string, IImagePostProcess> postProcess = new Dictionary<string, IImagePostProcess>();
 
        protected ImageFormat format = ImageFormat.Png; // Default image format PNG
 
        public ImageCacheManager(Cache cache, ImageFormat format) : base(cache)
        {
            this.format = format;
 
            InitImagePostProcess();
        }
 
        /// <summary>
        /// Determine which image reader will be used to read this image.
        /// </summary>
        /// <param name="uriPath"></param>
        /// <returns></returns>
        protected abstract IImageReader GetReader(Uri uriPath);
 
        /// <summary>
        /// Fill the variable postProcess with post processing to apply
        /// to an image each time.
        /// </summary>
        protected abstract void InitImagePostProcess();
 
        /// <summary>
        /// This method shall return a unique key depending on the path of the
        /// image plus the options of its post processing steps.
        /// </summary>
        /// <param name="path"></param>
        /// <param name="options"></param>
        /// <returns></returns>
        protected abstract string ConstructKey(Uri path, Dictionary<string, object> options);
 
        /// <summary>
        /// Get an image from the following path.  Use the provided options to use in post processing.
        /// If refresh is true, don't use the cached version.
        /// </summary>
        /// <param name="uriPath"></param>
        /// <param name="options"></param>
        /// <param name="refresh"></param>
        /// <returns></returns>
        protected byte[] GetImage(Uri uriPath, Dictionary<string, object> options, bool refresh)
        {
            string key = ConstructKey(uriPath, options);
            byte[] cached = Get(key);
 
            if (cached != null && !refresh)
                return cached;
            else
            {
                try
                {
                    byte[] image = ReadBitmap(uriPath); // Get the original data from the image
 
                    byte[] modified = PostProcess(image, options); // Do any post processing on the image (resize it or apply some effects)
 
                    Add(key, modified, CacheTime); // Add this modified version to the cache
 
                    return modified;
                }
                catch
                {
                    return null;
                }
            }
        }
 
        /// <summary>
        /// Run all post processing steps on the image and return the resulting image.
        /// </summary>
        /// <param name="input"></param>
        /// <param name="options"></param>
        /// <returns></returns>
        protected byte[] PostProcess(byte[] input, Dictionary<string, object> options)
        {
            byte[] result = input;
 
            foreach (string key in postProcess.Keys)
                result = postProcess[key].Process(result, options[key]);
 
            return result;
        }
 
        /// <summary>
        /// From a path, return a byte[] of the image.
        /// </summary>
        /// <param name="uriPath"></param>
        /// <returns></returns>
        protected byte[] ReadBitmap(Uri uriPath)
        {
            using (Stream stream = GetReader(uriPath).GetData(uriPath))
            {
 
                byte[] data = new byte[0];
 
                Bitmap pict = null;
 
                try
                {
                    pict = new Bitmap(stream);
                    data = ImageHelper.GetBytes(pict, format);
                }
                catch
                {
                    return null;
                }
                finally
                {
                    if (pict != null)
                        pict.Dispose();
                }
 
                stream.Close();
 
                return data;
            }
        }
    }
}
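
One thing to note: the ImageHelper.GetBytes helper used in ReadBitmap above is not listed in this post.  It essentially just serializes a Bitmap to a byte array in the requested format; a minimal version (the static class and the namespace are my guesses) would look like this:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
 
namespace LavaBlast.ImageCaching
{
    public static class ImageHelper
    {
        /// <summary>
        /// Serializes a Bitmap to a byte[] in the requested image format.
        /// </summary>
        public static byte[] GetBytes(Bitmap picture, ImageFormat format)
        {
            using (MemoryStream stream = new MemoryStream())
            {
                picture.Save(stream, format);
                return stream.ToArray();
            }
        }
    }
}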

Because this is an abstract base class, we need a concrete implementation of ImageCacheManager, so we created ThumbnailCacheManager.  ThumbnailCacheManager checks whether the URI points to a local or a remote file and uses the right image reader.  It has only one post processing task (resizing the image), but it could have more.  It constructs the unique cache key from the post processing task's options.
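
The post does not include ThumbnailCacheManager itself, but given the abstract members above and the GetThumbnail call in the HttpHandler further below, a concrete implementation might look roughly like this.  The cache key format and the handling of relative paths are my own guesses; ImageResizePostProcess and ImageResizeOptions are shown in the next section.

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Web;
using System.Web.Caching;
using LavaBlast.ImageCaching.PostProcess;
using LavaBlast.ImageCaching.Reader;
 
namespace LavaBlast.ImageCaching
{
    /// <summary>
    /// Sketch of a concrete cache manager: one resize post processing step,
    /// and a local or remote reader chosen from the URI.
    /// </summary>
    public class ThumbnailCacheManager : ImageCacheManager
    {
        private const string ResizeKey = "resize";
 
        public ThumbnailCacheManager(Cache cache) : base(cache) { }
 
        protected override void InitImagePostProcess()
        {
            // A single post processing step: resizing the image.
            postProcess.Add(ResizeKey, new ImageResizePostProcess());
        }
 
        protected override IImageReader GetReader(Uri uriPath)
        {
            // Local files are read from disk; everything else is downloaded.
            if (uriPath.IsFile)
                return new LocalImageReader();
            return new RemoteImageReader();
        }
 
        protected override string ConstructKey(Uri path, Dictionary<string, object> options)
        {
            // The key must be unique per image *and* per set of post processing options.
            ImageResizeOptions resize = (ImageResizeOptions)options[ResizeKey];
            return string.Format("thumb|{0}|{1}x{2}", path, resize.Size.Width, resize.Size.Height);
        }
 
        public byte[] GetThumbnail(string path, int width, int height)
        {
            return GetThumbnail(path, width, height, false);
        }
 
        public byte[] GetThumbnail(string path, int width, int height, bool refresh)
        {
            ImageResizeOptions resizeOptions = new ImageResizeOptions();
            resizeOptions.Size = new Size(width, height);
            resizeOptions.ImageFormat = format;
 
            Dictionary<string, object> options = new Dictionary<string, object>();
            options[ResizeKey] = resizeOptions;
 
            // Map "~/images/foo.png" to a file URI; absolute URLs go through as-is.
            Uri uri;
            if (path.StartsWith("~/"))
                uri = new Uri("file://" + HttpRuntime.AppDomainAppPath + path.Substring(2));
            else
                uri = new Uri(path);
 
            return GetImage(uri, options, refresh);
        }
    }
}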

Image Resizing

Here is a quick example of a typical image post processing task: resizing an image.  The class implements the simple method Process(byte[] input, object op), where op holds the options for the post processing step.  I could not use generics in my IImagePostProcess interface because of the way I store them later…  Here is a quick code example of how to resize an image.

namespace LavaBlast.ImageCaching.PostProcess
{
    /// <summary>
    /// Post processing step that takes an image and resizes it
    /// </summary>
    public class ImageResizePostProcess : IImagePostProcess
    {
        public byte[] Process(byte[] input, object op)
        {
            byte[] oThumbNail;
 
            ImageResizeOptions options = (ImageResizeOptions)op;
 
            Bitmap pict = null, thumb = null;
 
            try
            {
                using (MemoryStream s = new MemoryStream(input))
                {
                    // GDI+ requires the stream to remain open for the lifetime of the Bitmap,
                    // so we keep working inside this using block instead of closing the stream early.
                    pict = new Bitmap(s); // Initial picture
 
                    thumb = new Bitmap(options.Size.Width, options.Size.Height); // Future thumb picture
 
                    using (Graphics oGraphic = Graphics.FromImage(thumb))
                    {
                        oGraphic.CompositingQuality = CompositingQuality.HighQuality;
                        oGraphic.SmoothingMode = SmoothingMode.HighQuality;
                        oGraphic.InterpolationMode = InterpolationMode.HighQualityBicubic;
                        oGraphic.PixelOffsetMode = PixelOffsetMode.HighQuality;
                        Rectangle oRectangle = new Rectangle(0, 0, options.Size.Width, options.Size.Height);
 
                        oGraphic.DrawImage(pict, oRectangle);
                    }
 
                    oThumbNail = ImageHelper.GetBytes(thumb, options.ImageFormat);
 
                    return oThumbNail;
                }
            }
            catch
            {
                return null;
            }
            finally
            {
                if (thumb != null)
                    thumb.Dispose();
                if (pict != null)
                    pict.Dispose();
            }
        }
    }
 
    public class ImageResizeOptions
    {
        public Size Size = Size.Empty;
        public ImageFormat ImageFormat = ImageFormat.Png;
    }
}

 

Reading the picture

Reading the picture is the easy part; I have included two implementations of IImageReader: one for local images and one for remote images.  You could easily implement one which loads images from your database (a sketch follows the two readers below).
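
The two interfaces themselves are not listed in the post; judging from the implementations and from ImageResizePostProcess above, they each boil down to a single method.  This is an inferred sketch, not the original source:

using System;
using System.IO;
 
namespace LavaBlast.ImageCaching.Reader
{
    /// <summary>Abstracts where the original image bytes come from (disk, the web, a database, ...).</summary>
    public interface IImageReader
    {
        Stream GetData(Uri path);
    }
}
 
namespace LavaBlast.ImageCaching.PostProcess
{
    /// <summary>A single post processing step (resize, filter, ...) applied to the raw image bytes.</summary>
    public interface IImagePostProcess
    {
        byte[] Process(byte[] input, object options);
    }
}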

LocalImageReader

/// <summary>
    /// An image reader to read images on local disk.
    /// </summary>
    public class LocalImageReader : IImageReader
    {
        public Stream GetData(Uri path)
        {
            FileStream stream = new FileStream(path.LocalPath, FileMode.Open);
 
            return stream;
        }
    }

RemoteImageReader

/// <summary>
    /// Image reader to read remote images on the web.
    /// </summary>
    public class RemoteImageReader : IImageReader
    {
        public Stream GetData(Uri url)
        {
            string path = url.ToString();
            try
            {
                if (path.StartsWith("~/"))
                    path = "file://" + HttpRuntime.AppDomainAppPath + path.Substring(2, path.Length - 2);
 
                WebRequest request = (WebRequest)WebRequest.Create(new Uri(path));
 
                WebResponse response = request.GetResponse() as WebResponse;
 
                return response.GetResponseStream();
            }
            catch { return new MemoryStream(); } // Don't make the program crash just because we have a picture which failed downloading
        }
    }
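
As mentioned above, a reader that loads images from your database is easy to add.  Here is a rough sketch of what it might look like; the connection string name, table and column names are made up for the example.

using System;
using System.Configuration;
using System.Data.SqlClient;
using System.IO;
 
namespace LavaBlast.ImageCaching.Reader
{
    /// <summary>
    /// Example reader that loads an image stored in a database column.
    /// The connection string name, table and column names are made up.
    /// </summary>
    public class DatabaseImageReader : IImageReader
    {
        public Stream GetData(Uri path)
        {
            // Treat the last segment of the URI as the image identifier.
            string imageName = path.Segments[path.Segments.Length - 1];
 
            string connectionString = ConfigurationManager.ConnectionStrings["Main"].ConnectionString;
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("SELECT Data FROM Image WHERE Name = @Name", connection))
            {
                command.Parameters.AddWithValue("@Name", imageName);
                connection.Open();
                byte[] data = command.ExecuteScalar() as byte[];
 
                // Fall back to an empty stream when the image is missing, like RemoteImageReader does on failure.
                return new MemoryStream(data ?? new byte[0]);
            }
        }
    }
}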

HttpHandler

Finally, you want to serve those images with an HttpHandler. The code for the HttpHandler is pretty simple as you only need to parse the parameters from the QueryString and pass them to the ThumbnailCacheManager presented above.  The handler receives a parameter “p” for the path of the image (local or remote) and a parameter “refresh” which can be used to ignore the cached version of the image.  Additionally, we can pass parameters such as “width” and “height” for our image resizing.  Warning: you must adapt this code to your environment otherwise you are exposing a security hole because of the path parameter.

When debugging your image caching HttpHandler, don't forget to clear your temporary Internet files in IE or Firefox: your images will also be cached by the web browser, and your server-side code will not be executed otherwise!

using System;
using System.Collections;
using System.Data;
using System.Web;
using System.Web.Services;
using System.Web.Services.Protocols;
using LavaBlast.ImageCaching;
 
namespace WebApplication
{
    /// <summary>
    /// Really simple HttpHandler to output an image.
    /// </summary>
    public class ImageCaching : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string path = context.Request.Params["p"] ?? "";
            bool refresh = context.Request.Params["refresh"] == "true";
 
            int width, height;
            if (!int.TryParse(context.Request.QueryString["width"], out width))
                width = 300;
            if (!int.TryParse(context.Request.QueryString["height"], out height))
                height = 60;
 
            byte[] image = null;
            ThumbnailCacheManager manager = new ThumbnailCacheManager(context.Cache);
            image = manager.GetThumbnail(path, width, height);
 
            if (image != null)
            {
                context.Response.ContentType = "image/png";
                try
                {
                    context.Response.OutputStream.Write(image, 0, image.Length);
                }
                catch { }
 
                context.Response.End();
            }
        }
 
        public bool IsReusable
        {
            get
            {
                return true;
            }
        }
    }
}

 

 

Source Code | Small Demo




Upgrading to SubSonic v2.1

July 10, 2008 16:05 by author JKealey

The timing for the release of SubSonic v2.1 could not have been better as we're between time-critical projects at the moment. As our readers know, we've used SubSonic as our business object code generator since we first launched the company. I spent a few hours this morning doing the migration of our codebase and it seems to have gone smoothly. We've posted some cool improvements we've made to SubSonic in previous posts: Improved ManyManyList Control, Object Change Tracking, and an Improved ObjectDataSource Controller. Migrating to v2.1 involved a few changes and this post will describe them briefly. As this is currently a work in progress, we'll let the dust settle before writing a more formal post.

LavaBlastManyManyList :)

Rob integrated the LavaBlastManyManyList control into SubSonic. It does strike me as uncommon for an open source project to list the contributor in the class name, but who am I to complain? :)

Changes to our SubSonicHelper and SubSonicController.

SubSonic changed the base classes for their objects. Therefore, we have to change our own SubSonicController<T, C> to extend RecordBase<T> instead of AbstractRecord<T>. In our SubSonicHelper, we changed AbstractRecord<T> and ActiveRecord<T> to RecordBase<T> but, for some reason, we also had an ActiveList<T> which we changed to AbstractList<T> to match the rest of the application.

SubSonic Collections no longer extend List<T>

Collections are now extending BindingList<T>, apparently for improved DataBinding support. However, this breaks all the code you may have which uses the fact that Collections were generic lists: Sort, Find, FindAll, FindLast, AddRange, Exists, etc. Luckily for us, we have replacement methods for Sort/Find, which are easier to use but not as powerful as custom delegates/predicates. Rewriting the 70-odd locations in our code to avoid using methods from the List<T> interface isn't what I consider fun and you may feel the same way. The code we had to rewrite was non-trivial, and since we couldn't easily recompile and test every change (we don't have unit tests that specifically check that the items in a Collection are sorted the right way, for example), we decided to go with a low-impact change.

We edited CS_ClassTemplate.aspx and CS_ViewTemplate.aspx and added the following method to both collections:

public List<<%=className%>> ToList()  {
    return new List<<%=className%>>(Items); // shallow copy
}

BindingList<T> has a protected property named Items which is indeed a List<T>. We didn't check the implementation details, but since it doesn't make this property public, we can assume that playing with that list directly (removing items from the list for example) might screw up the original collection. Therefore, we're creating a shallow copy of the List and using that in our code when necessary. Now that everything compiles and works properly, we can rewrite code where performance is more important (and use the original SubSonic collection instead).
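
For example, code that previously relied on List<T> members keeps working against the copy. ProductCollection, Product and the member names below are hypothetical generated classes, not something from the download:

// ProductCollection/Product are hypothetical generated SubSonic classes.
ProductCollection products = new ProductCollection().Load();
 
// Work against a shallow copy when you need the full List<T> API.
List<Product> list = products.ToList();
list.Sort(delegate(Product a, Product b) { return a.ProductName.CompareTo(b.ProductName); });
Product match = list.Find(delegate(Product p) { return p.UnitPrice < 10; });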

Found two bugs, one old, one new.

We've reported two bugs in the SubSonic's brand new issue tracker on Google Code. (Issue 3 is a rare case relating to composite keys and paging, it probably won't affect you as it has been around forever. However, Issue 4 is a bit more worrisome as it implies that most of your code that uses StoredProcedures might not work anymore without a small workaround until they release SubSonic v2.1.1.)

Conclusion

I hope this helps all of you who were trying to get our SubSonic v2.0.3 code working on SubSonic v2.1! Once everything has been tested thoroughly, we'll post more source code.



Spolsky's Paradox

May 2, 2008 13:42 by author JKealey

Last week, I loaded up my blog aggregator and I was pleased to see Joel Spolsky had written a new article on architecture astronauts. He made a good point about how Microsoft is rewriting the same software over and over and no one seems to care. I totally agree with Joel's argument about architecture astronauts as we are wasting precious intellectual resources and solving the same issues over and over.  (Side note: an interesting read about how we're wasting massive amounts of brainpower.)

However, that's not what I'm writing about today. I found myself reading faster and faster as I progressed through the article, reading the last paragraph at a frenetic pace. You can definitely feel Joel's frustration - the big boys in the industry are "stealing" all the great programmers by offering starting salaries leagues above what smaller companies can offer. Why do I think Joel's frustration is paradoxical?

Joel's Premises

  • Hire only the top quality people
  • Treat your employees as if they were superstars in your beautiful New York offices - spare no expense.
  • Build a closely-knit team that works on challenging problems to retain your employees
  • Set an example as being the best damn place for a software engineer to work and inspire millions of developers to follow your example.

Joel's Aspirations

  • Recruitment problem: solved.
  • Develop and commercialize high quality software
  • Thanks to a well-defined (and very selective) hiring process, retire from software at age 45 to start your own avocado grove as a hobby.

The Contradiction

Okay... I'm generalizing just because I find it ironic to see Joel having hiring woes. Even if as a general rule things are going well, that doesn't mean you get anyone you want. Everyone has hiring frustrations, even those who set the example. However, I'm left to wonder... has anything changed in the context of hiring? Is there anything you need to do differently today to grab the best technical talent? I can't answer these questions myself, but I see lots of companies struggle with hiring.

I do agree that it is impossible for smaller companies to compete with some of these starting salaries (unless they are keen on burning VC money), but smaller firms do have (many) advantages. What are they?

1. Get back in the kitchen and make me some pie

What I like most working for a startup (and it would be the case even if it wasn't mine) is the opportunity to touch a bit of everything (engineering, marketing, sales, legal, etc.). Even if you go work for a 40-person startup, if you're interested in contributing to elements which aren't related to your primary function (software developer), you probably can help out. For example, if you think the company's website doesn't communicate what the company does, you can take a step back, think about it a bit, and propose enhancements. (Complaining doesn't bring you anywhere, but constructive criticism helps everyone out!).

If you're a hardcore coder, you can still benefit from working for a smaller company, because you'll have a greater impact on the final product.

However, this fact is not something that has changed in the hiring context... what has?

2. Not everyone wants to work in New York, Redmond or Mountain View.

This is one key differentiating factor for startups. Not all of the world's most talented individuals feel inclined to move to get a job, and I feel the number of people who will start their own software business in their home town will increase in the coming decade. In the past, we've seen a few companies such as Eric Sink's SourceGear in Illinois do well even though their offices are in the middle of nowhere, so to speak. This is partly due to increased high-speed Internet availability combined with the lower cost of starting your own software business. I think we'll definitely see more success stories from entrepreneurs living in non-metropolitan areas over the next decade because starting your own business (or working for a local one) is such an attractive alternative. It's funny how making it easy to go global causes the creation of many smaller local hubs.

On a related subject, I don't recall that many local startups trying to recruit us while we were software engineering students at the University of Ottawa... there were a few but we were mostly solicited by IBM and Research In Motion (leading to the infamous "hey! do you want a RIM job?" quote). If you're a competent student today, you should definitely look around at local startups that are working on interesting concepts.

3. You can read about it on the Internet

There are tons of people talking about their software startup experiences on the Internet and it's easier to actively participate in the community today than it was a decade or so ago. I can't really see myself connecting to a BBS with my 14.4kbps modem to learn about software startups. Today, you can find people with similar interests very easily but, best of all, you can learn from their experience.

Rather than enumerate a long list of advantages that you wouldn't bother to read, I'd like to ask you an open ended question.

What do you think will change in the way we hire software engineers in the next decade?

Please feel free to discuss in the comments. Ideas: Outsourcing? Co-working? Telecommuting? Nothing at all?



Co-working environments are good for software startups

April 26, 2008 23:53 by author JKealey

A month or so ago we mentioned co-working environments in one of our blog posts about startup lessons. It appears we're now the number one hit on Google for co-working software startups. When I first heard of co-working, I assumed they were mainstream because the added value these environments bring to software startups is so obvious. However, they are an emerging trend in the software world and you should expect to hear more about them in the future. 

The advantages of co-working environments:

  1. It provides a location where members of a small core team can meet, brainstorm, and work on their new idea.
  2. It is a low-cost alternative to renting/owning your own office. You can use the space as much or as little as you need it and don't need to buy chairs, desks, a photocopier, fax machine, espresso machine, routers, etc.
  3. It opens the door to meeting new people and networking with peers in the same industry.

In a sense, they improve on the familiar software engineering lab environment that is available to university students and we know universities help create startup hubs.

Given the fact that it has become so inexpensive to start your own software company, co-working environments are a perfect fit for the small software start-ups that want to strike it big but have limited resources. Furthermore, who better to help you with your software startup business plan than someone who's gone through the process in the past? Most government agencies that help you start your business don't fully grasp software companies, but the people in a software co-working environment do!

Rather than ramble on about why co-working environments are so great, I'd like to make an announcement: 

LavaBlast Software will develop an industry-specific POS and interactive kiosk for The Code Factory, a new franchisor in the co-working arena.

The Code Factory will open their first location within a couple of weeks. Ian Graham, the founder, is very much involved in the Ottawa startup community and this co-working space will definitely help budding software entrepreneurs in the Ottawa region. The first event to be held at the Ottawa location will be the Ottawa Web Weekend, which is currently looking for more programmers!  Those of you who are not familiar with the franchise industry (and thought it was limited to McDonalds and Subway) might be surprised to see a co-working environment using the franchise model, but a wide variety of businesses use it (software shops, web design shops, etc.)!

Not only are we very happy to have a new franchisor on board, we're especially excited that The Code Factory will be our first client outside the child-related retail industry to use our industry-specific interactive kiosk as a key differentiator.



Skinned Login Control

April 14, 2008 14:16 by author EtienneT

Here is our login form in FranchiseBlast.  We think it's a pretty cool login form and it was not that hard to do.  It only requires basic CSS and some jQuery.
 
 

How we did it

The only thing you need is an image like our inputlogin.png (referenced in the CSS below), which stacks the normal state on top of the focused state.

Then we used the following CSS to define our text box styles.  The "Login" CSS class is applied to the ASP.NET Login control and the "Textbox" class is applied to both text boxes in the login control.

.Login .Textbox, .Login .Hover
{
    width: 337px;
    height: 17px;
    background:transparent url(images/inputlogin.png) no-repeat top left;
    color: Black;
    border: none;
    padding: 5px;
    font-weight: bold;
}
  
.Login .Hover
{
    background:transparent url(images/inputlogin.png) no-repeat bottom left;
}

 

As you can see, the only difference for the .Hover class is that we tell the background to show the bottom of the image (the orange part) instead of the top.  If Internet Explorer supported the :focus CSS pseudo-class this would be much simpler, but IE doesn't support it, so we have to use jQuery to achieve the effect.

Don't forget to add jQuery.js somewhere in the page and then you can add the following script to your page:

$('.Login .Textbox').focus(function(){
  $(this).attr('class', 'Hover');
});
  
$('.Login .Textbox').blur(function(){
  $(this).attr('class', 'Textbox');
});

 

Basically, the code above registers an event on all DOM elements which have the "Textbox" CSS class and are children of a control with the "Login" CSS class. The first call registers a handler on the focus event of the text box which changes the class to Hover.  We do the exact opposite for the blur event, when the text box loses its focus.  There may be a better way to do this with jQuery; if you know how, let us know.

Finally, as a special added touch, we use an AnimationExtender after a successful login:

<ajax:AnimationExtender ID="animLogin" runat="server" TargetControlID="LoginButton">
<Animations>
    <OnClick>
        <Sequence>
            <FadeOut Duration=".5" Fps="20" AnimationTarget="pnlLogin" />
        </Sequence>
    </OnClick>
</Animations>
</ajax:AnimationExtender>

 

One last thing: if you use this AnimationExtender, you have to make sure your validators don't run on the client side. Validation must occur on the server; otherwise the fade out animation will still occur and the login control will disappear. For example, we used a RequiredFieldValidator for both the username and password text boxes and we had to set the EnableClientScript property to false on both of these validators.

This concludes how to do a skinned Login control à la LavaBlast.



Upcoming StatCVS/StatSVN Release

March 31, 2008 13:40 by author JKealey

As you know, LavaBlast develops most of its applications using Microsoft technologies but we like to dabble with other technologies. We're behind the open source movement and have worked on a few open source projects, mainly in Java. One of these is StatSVN, a tool that retrieves information from a Subversion repository and generates various tables and charts describing the project development. It's so great even Eric Kemp used it on SubSonic. The tool uses StatCVS internally and I've recently been promoted to the project admin status on SourceForge (along with Benoit Xhenseval of Appendium). StatSVN has not evolved much in the last year, since the release of v0.3.1, but we've recently done a few enhancements.

Current improvements to both StatSVN and StatCVS

  • StatSVN: Faster diffs.
    • StatSVN now takes advantage of a new Subversion 1.4 feature which allows us to perform one svn diff per revision, instead of one svn diff per revision per file. If you don't have 1.4, the old behavior will continue to work.
    • The Apache project blocked our demo because we were doing too many svn diffs on their servers. Hopefully, the new approach will resolve that situation in addition to making everything faster.
    • You can still use the old mechanism by using the -force-legacy-diff command line option, should you encounter any problems with the new feature.
  • Both: Export to XML format.
    • A new -xml option generates XML files instead of the typical HTML reports.
  • Both: Now showing affected file count in a commit.
  • StatSVN: The revision number shows up on the commit page.
  • StatSVN: Added support for -tags-dir as a way to specify 'top' directory where the tags are stored, defaulted to "/tags/".
  • StatSVN: Added support for a -anonymize command line option, to anonymize committer names
  • ... and a few minor things.

 

*** Download the alpha version ***

We're looking for some outside help

Instead of releasing v0.4.0 prematurely, we'd like to ask you to help us out!

1) Beta testers and benchmarkers

Before going with a full blown release, we want to ensure that the new StatSVN diff works as intended in various contexts. It works on our repositories, but we'd like you to run it on yours. We want to ensure it works well regardless of the operating system, your language, the type of files in your repository, the number of revisions, etc. Ideally, you'd time the whole thing and let us know how much faster it is. Furthermore, if you're a real zealot, you'd compare the line counts computed by both algorithms to ensure they match. (The counts are cached in a local XML file... very easy to compare the results).

2) Minor bug fixing & improvements

We've basically fixed the issues that annoyed us personally, but there are some that remain in both the StatCVS tracker and the StatSVN tracker. You may also be interested in creating new reports, which you think would interest the community.

3) Cool enhancements / external projects.

StatCVS/StatSVN can now export XML. This enables you to create new applications that use the computed data... why not whip up a cool dynamic web application? As a demo, I've imported the XML in ASP.NET and used the Open Flash Chart control we've blogged about in the past.
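
If you want to try something similar, loading the generated XML into a page is only a few lines. The element and attribute names below are invented for illustration; adapt them to whatever the -xml option actually produces for your repository.

// Invented element/attribute names; inspect the generated XML files for the real schema.
XmlDocument doc = new XmlDocument();
doc.Load(Server.MapPath("~/App_Data/authors.xml"));
 
foreach (XmlNode author in doc.SelectNodes("//author"))
{
    string name = author.Attributes["name"].Value;
    int linesOfCode = int.Parse(author.Attributes["loc"].Value);
    // ... feed the values to the chart control of your choice ...
}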


4) Up for a challenge?

Subversion supports move operations and CVS doesn't. While developing StatSVN, we decided we'd treat moves as a delete-add sequence, for simplicity's sake. However, this does generate inaccurate statistics. If you're in for a challenge, you could tackle this StatSVN enhancement. We prototyped something in the past, but it ended up being too slow for production use.

 

A few links

 

Conclusion

Even though StatSVN/StatCVS are Java-based and most of our readers operate in the .NET space, the application itself is platform independent. Personally, I enjoy loading up a recent version of Eclipse and working in Java once in a while because it helps me see the best of both worlds. I much prefer coding in C# because of the easier string manipulation and the fact that everything is an object, so you don't have to explicitly create an Integer object from your int to insert it into a collection. However, when working in VS.NET, I dearly miss the automatic incremental compilation that Eclipse runs when you save a file. 


