LavaBlast Software Blog

Help your franchise business get to the next level.

SubSonic Limitations

clock November 30, 2007 10:23 by author JKealey

Question: What is the most elegant way to reuse code generated by SubSonic for tables that share the same schema in different databases?

  • Ideally, I would have a shared schema definition, generated once, and seamlessly integrated into the code generated for each separate provider.
  • Creating a separate DataProvider for a subset of tables reduces the amount of code that is generated, but is not very convenient to use if you do not use the same namespace for all your projects.
  • Creating a separate DataProvider does not solve the problem of database selection at runtime.

Multiple Databases, Same Schema

LavaBlast's integrated franchise management solution operates on a centralized database and a data warehouse which collects data from all our points of sale. Recently, we decided to create some management pages for our various e-commerce websites in our centralized portal. Because our recently developed e-commerce backend is the same as our point of sale (reuse++), we automatically obtained features like centralized product line and pricing management for our store fleet (featureSynergy++). However, we wanted to be able to process website users and orders from this same central portal, not on each individual site.

My first question was: how do we get the union of the data from the same table in multiple databases? One solution would be to merge these into the data warehouse, but we didn't want to build complex infrastructure to bring the data into the warehouse and push the changes back out when necessary. I suppose having everything in the same database in the first place would be a solution, but it is not how we architect our systems. SQL Server Replication might be useful, but it is not bidirectional with SQL Server Express. I could easily write a view containing a UNION query that merges the data from the set of databases, but that would be a maintenance problem: for each table, I would have to hardcode the list of databases.

I wrote a quick stored procedure that builds the UNION query from a table of Website to DatabaseName mappings, given a few parameters. It is inefficient and is not strongly typed (hence it feels dirty) but given the volume of data on these sites, it is good enough for now without being a maintenance pain. By passing a few parameters to the stored procedure, we can filter the rows before the union and improve performance. I am curious to know if there are more elegant solutions to this problem.
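To illustrate the idea, here is a C# sketch of what the query-building boils down to; this is not our actual T-SQL stored procedure, and the class and column names (UnionQueryBuilder, SourceDatabase) are hypothetical:

```csharp
using System;
using System.Collections.Generic;

public static class UnionQueryBuilder
{
    // Builds one SELECT per mapped database and glues them together with
    // UNION ALL, adding a SourceDatabase column so each row remembers
    // which database it came from.
    public static string Build(string tableName, string columns,
                               IDictionary<string, string> websiteToDatabase)
    {
        List<string> selects = new List<string>();
        foreach (KeyValuePair<string, string> pair in websiteToDatabase)
        {
            selects.Add(string.Format(
                "SELECT {0}, '{1}' AS SourceDatabase FROM [{1}].dbo.[{2}]",
                columns, pair.Value, tableName));
        }
        return string.Join(" UNION ALL ", selects.ToArray());
    }
}
```

In the real stored procedure, the mapping comes from the Website-to-DatabaseName table, and a WHERE clause built from the parameters is appended to each SELECT before the union, which is how the filtering improves performance.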

Anyhow, with this first problem solved, we could bind our GridView to a DataTable produced by the execution of a stored procedure and see the merged results. However, because we have a standard infrastructure that makes good use of SubSonic magic for filtering, paging, and sorting, this was not enough. Our infrastructure only works on views or tables in our central database, not on arbitrary results returned by stored procedures, and SubSonic did not generate any code for the merged tables in the central database. Still, thanks to the SubSonic provider model, we managed to load a collection based on the type defined in one DataProvider (point of sale) using data provided by the stored procedure in another DataProvider (central server). Below is an example without any filtering, sorting, or paging.

SubSonic.StoredProcedure sp = SPs.WebsiteUnionOfTables(
    POSBOLib.Generated.ShoppingCart.ViewWebUser.Schema.TableName,
    "*", string.Empty, string.Empty);
POSBOLib.Generated.ShoppingCart.ViewWebUserCollection coll =
    new POSBOLib.Generated.ShoppingCart.ViewWebUserCollection();
coll.LoadAndCloseReader(sp.GetReader());

With a bit more work on the stored procedure, we could make it efficient, but we want to avoid T-SQL as much as possible to keep everything easy to maintain. (We could use CLR stored procedures, but that's another story.)

My second question was: how am I going to update this data? When manipulating the data, I know which database it comes from thanks to an additional column appended by my stored procedure, but I cannot create an updatable SubSonic object with this, and I don't feel like writing SQL anymore now that we use SubSonic. However, the DataProvider name is a hardcoded string in the auto-generated code… and changing the templates to pass in extra parameters looks like too much work, in addition to breaking the simplicity of the framework.

Having played with the DataProvider model, one idea that came to me was to switch the provider context dynamically at runtime. The framework doesn't support this, so I had to hack it in and make sure all my data access was contained in critical sections (using the lock keyword) which begin with an invocation of the following method.
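The original listing isn't reproduced here, so below is a sketch of the pattern only; the names are hypothetical and this is not SubSonic's real DataProvider API. The essence is a shared provider context guarded by a lock, switched at the start of each critical section:

```csharp
using System;

// Hypothetical sketch of the provider-switching hack; the real code
// re-points SubSonic's default DataProvider rather than a plain string.
public static class ProviderContext
{
    public static readonly object SyncRoot = new object();
    private static string current = "CentralServer";

    public static string Current
    {
        get { return current; }
    }

    // Must be called while holding SyncRoot, and the caller must keep
    // the lock for the duration of the data access that follows.
    public static void SwitchTo(string providerName)
    {
        current = providerName;
    }
}
```

Every data access is then wrapped in `lock (ProviderContext.SyncRoot) { ProviderContext.SwitchTo("Website3"); ... }`, which is exactly why the prototype feels dirty: it serializes all data access across the application.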

Another option, which just came to me now, would be to obtain the SQL generated by SubSonic during an operation and perform string replacements to forward the requests to the appropriate database. This too is too much of a hack, however, since it depends on the implementation details and the DBMS.

In conclusion, I did manage to build a working prototype using locks and the above code, but I feel the code is too dirty and I am open to suggestions from SubSonic experts (I'm looking at you Rob Conery and Eric Kemp). If there is a clean way to do it, I would love to contribute it to the SubSonic project!

Read Part 2.



The Mysterious Parameter Is Not Valid Exception

clock November 29, 2007 20:39 by author JKealey

For a number of weeks, we had been encountering an odd exception on rare occasions. Typically, our point of sale would run flawlessly up until a very busy day where it would not want to render some of our cached images.

System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Parameter is not valid.
at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
at System.Drawing.Image.Save(Stream stream, ImageFormat format)
[...]

We searched Google for the possible cause of this error and found a bunch of irrelevant posts from people who get this error message every time they execute their code. In addition, we discovered that it was a generic message that could mean a null image, an inappropriate image format, etc. We figured it must have something to do with memory usage, because of the time it took before it occurred in production. However, we knew that the ASP.NET worker process had not restarted because of excessive memory usage.

We ran stress tests on our machines and never managed to replicate the error. In one session, we loaded all the birth certificates that a store had ever created, hundreds of times more than what they would produce on their busiest day. (Here at LavaBlast, we mostly use NUnit and NUnitASP for our unit testing of ASP.NET applications.)

Then we found a post suggesting that you copy the image to a new MemoryStream instead of saving it directly to the Response.OutputStream of an ASP.NET application.

The relevant source code looks like this:

public static void CopyToStream(Bitmap image, Stream outputStream, ImageFormat format)
{
    using (MemoryStream stream = new MemoryStream())
    {
        image.Save(stream, format);
        stream.WriteTo(outputStream);
    }
}

The code is accessed this way in one of our ASP.NET handlers (ASHX file):

Bitmap image = null;
// ... load the image from cache ...
if (image != null)
{
    HttpResponse response = context.Response;
    response.ClearHeaders();
    response.ClearContent();
    response.Clear();
    response.ContentType = "image/jpeg";
    CachedImageGenerator.CopyToStream(image, response.OutputStream, ImageFormat.Jpeg);
}

The end result is that this makes no difference at all. Bummer! Because we had to wait a week to get the results in production, we needed to replicate this error on our development machines. We quickly realized that we had overlooked the most obvious of solutions in the first place. We knew the image was not null and we knew it was still in the cache, but we had never checked to see if it was disposed! Yes, somehow the images in our cache were being disposed by some external process, but not by the cache itself, which would have removed them from the cache beforehand. Once a System.Drawing.Image is disposed, all of its properties throw the Parameter is not valid error. In our image cache, we coded a quick hack that would test whether the image.Height property was throwing this error: if it was, we reloaded the image from the database. (Note: Images do not have an IsDisposed property.)
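The quick hack boils down to probing a property and catching the exception; a minimal sketch (the helper name is ours, not from the original code):

```csharp
using System;
using System.Drawing;

public static class ImageCacheGuard
{
    // A disposed System.Drawing.Image throws ArgumentException
    // ("Parameter is not valid") on any property access; since there is
    // no IsDisposed property, probing Height is the only test available.
    public static bool IsUsable(Image image)
    {
        if (image == null)
            return false;
        try
        {
            int ignored = image.Height;
            return true;
        }
        catch (ArgumentException)
        {
            // Disposed behind our back: caller should reload from the database.
            return false;
        }
    }
}
```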

Obviously, this hack was not very reassuring. While Etienne took on the task of refactoring the image cache to store a byte array instead of a System.Drawing.Image, I took out the heavy artillery to find the root cause of this exception. By using JetBrains dotTrace 3.0, a superb tool for profiling .NET applications (both WinForms and web applications), I discovered a memory leak in our application. I cannot overstress how glorious this tool is. It is simply excellent and it saved me tons of time.

In any case, before fixing the memory leak, I reduced the maximum memory IIS allows my worker process to 16 MB. (My machine has 4 GB of RAM; that's why we never discovered the flaw in the first place. We should have tested on our sample production hardware instead… but that's another story.) With such low memory, I was quickly able to cause the worker process to restart when trying to load too many images (all the images the store had ever produced, once again). Between worker process restarts, I managed to replicate the elusive Parameter is not valid exception. Debugging under this scenario with scarce resources, I discovered that the image was being disposed in the short lapse of time between its creation and its output, revealing that no amount of quick hacks would have solved this issue.

Returning to the memory leaks with JetBrains dotTrace, we found them quickly, and the application managed to run our nasty stress test with 32 MB assigned to the worker process.

In conclusion, there are no real miracle solutions for solving this problem except ensuring you don't use up too much memory! I just wanted to write this post to help people who are encountering intermittent "Parameter is not valid" exceptions figure out what is going on!

Shameless plug: LavaBlast creates industry-specific interactive kiosks integrated with tailor-made point of sale software, and a variety of other software solutions for franchisors.



An Improved SubSonic ManyManyList

clock November 28, 2007 13:38 by author JKealey

Etienne is on fire with his recent blog posts about SubSonic, so I thought I would contribute one too.

Five months ago I submitted a patch to SubSonic concerning their ManyManyList control (SubSonic.Controls.ManyManyList). I love the control as it is a real time saver, but there are a few limitations.

1 - Cannot use a view as a primary table or foreign table.
In my context, I want to display strings and these strings are not directly in the ForeignTable. The control had assumptions on the presence of a primary key.

2 - Cannot sort the resulting elements
In my context, I want to sort the strings alphabetically.

3 - Cannot filter the foreign table
In my context, a particular item can be in multiple categories, but the set of categories it can be in is not the full foreign table.

4 - The save mechanism deletes all rows and recreates them. If you have other columns in your map table, you lose all that information. Furthermore, there are no checks to see if the delete/recreation is necessary. Even if there are no changes, it still deletes/recreates everything.

I've pretty much rewritten everything to support the behaviour listed above. The parameter names should be reviewed because they are not very user-friendly, and I am not well versed in the SubSonic naming conventions. Since then, we've used this code in production and it appears to work perfectly for our purposes (and it should work exactly as the original did, out of the box, if you don't specify any of the new properties).

Agrinei enhanced my code to make it even more customizable.

Download the patch directly on CodePlex and don't forget to vote for the issue!



SubSonic object change tracking

clock November 28, 2007 08:44 by author EtienneT

Sometimes you want to know what your users are doing in your system. You probably don't want to spend too much time on this feature, but it is nice to be able to prove that a certain user introduced data inconsistencies and it isn't your fault. Obviously, the "client is always right" but tracking down the source of a problem (and pointing fingers) is something that is very useful to developers like us!

We decided to add events in our controllers to catch when an object is updated or inserted. When that occurs, we can produce simple HTML about the object through reflection. If the user changed an object, we can generate a nice view of what was changed, as well. Here is an example of a save to the database. We generate a string and insert it into our ElectronicJournal (audit log) to keep track of all the actions in the system:

Type: InventoryItem
Schema: SubSonic.TableSchema+Table
ItemGUID: d7fc5f85-129a-4909-b4ac-490fc26d9007
StoreID: TestStore1
StorePrice: [12.99] -> [7.99]
Tax1: True
Tax2: True
Tax3: True
Tax4: True
CreditPoints: [129] -> [79]
SalePoints: [1299] -> [799]
BonusPoints: 0
StockQuantity: 0
IsReportable: True
PromptForPrice: False
HideItemInStore: False
Item: 3.2
Store: Test Store 1
TableName: InventoryItems
ProviderName:
NullExceptionMessage: {0} requires a value
InvalidTypeExceptionMessage: {0} is not a valid {1}
LengthExceptionMessage: {0} exceeds the maximum length of {1}

By tracking various actions in your database, you can make a quick listing to track a user's actions. If you add a few more columns to your audit log table, you can track who made the change, when they made it, their IP, etc. We keep track of a set of action types, which allows us to filter by "Saved Objects" on a particular date, for a certain person (for example). This allows us to visualize what a user did in an efficient manner. Obviously, this is a pretty simple example that relies on the existence of SubSonic objects, and we don't handle complicated scenarios for the time being. If you run update queries on the database that change multiple objects in one save, we obviously never generated any SubSonic objects and cannot track those changes. However, most of the time, it is sufficient to have a good idea of what is going on. Again, we reap the benefits of our standard use of custom SubSonic controllers.

I have included a pretty simple utility class that you can use to do this. It's pretty generic, so you can easily use this with your own classes. The methods of interest in the class are:

public static string DumpObjectToHtml<T>(T before, T after)

and

public static string DumpObjectToHtml(object obj)
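The attached SubSonicHelper.cs is the real implementation; as a rough sketch of the idea only, a reflection-based diff over public properties might look like this (class name and formatting are ours, simplified):

```csharp
using System;
using System.Reflection;
using System.Text;

public static class ObjectDumper
{
    // Renders "Name: [before] -> [after]" for public properties whose
    // values differ, and "Name: value" for unchanged ones, mirroring
    // the ElectronicJournal output shown above.
    public static string DumpObjectToHtml<T>(T before, T after)
    {
        StringBuilder sb = new StringBuilder();
        sb.AppendFormat("Type: {0}<br/>", typeof(T).Name);
        foreach (PropertyInfo p in typeof(T).GetProperties())
        {
            if (!p.CanRead || p.GetIndexParameters().Length > 0)
                continue; // skip write-only and indexer properties
            object a = p.GetValue(before, null);
            object b = p.GetValue(after, null);
            if (!object.Equals(a, b))
                sb.AppendFormat("{0}: [{1}] -> [{2}]<br/>", p.Name, a, b);
            else
                sb.AppendFormat("{0}: {1}<br/>", p.Name, a);
        }
        return sb.ToString();
    }
}
```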


The class includes some other useful functions like:


public static C Paging<T, C>(C result, int startRowIndex, int maximumRows)
where T : ActiveRecord<T>, new()
where C : ActiveList<T, C>, new()

and

public static void Sort<T, C>(string sort, C result)
where T : ActiveRecord<T>, new()
where C : ActiveList<T, C>, new()


Those two methods enable you to page or sort a SubSonic collection if you can't use SQL Paging/Sorting. I had to use this to Page/Sort a SubSonic collection containing both database objects and my objects that I created after the database call. I hope this can be useful to someone.
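Conceptually, the in-memory paging that Paging&lt;T, C&gt; performs is just taking a window of rows out of the full collection. A simplified sketch over a plain List&lt;T&gt; (not the actual SubSonic collection types):

```csharp
using System;
using System.Collections.Generic;

public static class ListPager
{
    // Keeps only the window [startRowIndex, startRowIndex + maximumRows),
    // matching the ObjectDataSource paging parameters.
    public static List<T> Page<T>(List<T> rows, int startRowIndex, int maximumRows)
    {
        if (maximumRows <= 0)
            return new List<T>(rows);   // no paging requested
        if (startRowIndex >= rows.Count)
            return new List<T>();       // window starts past the end
        int count = Math.Min(maximumRows, rows.Count - startRowIndex);
        return rows.GetRange(startRowIndex, count);
    }
}
```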

SubSonicHelper.cs (3.57 kb)



SubSonic magic

clock November 27, 2007 16:13 by author EtienneT

ASP.NET developers who don't know what SubSonic is should definitely go read about it. We have been using SubSonic since the initial 2.0 release and have built most of our recent data-driven applications using this wonderful tool. We really like SubSonic and think it has helped us tremendously in our daily productivity. For all simple SQL operations, we try to never write custom SQL and use the SubSonic Query object instead. What I want to talk about today is how we use SubSonic in our main project FranchiseBlast and how we think this could be useful for other programmers like us.

A bit of introduction is required; FranchiseBlast has a set of different web pages to manage our data (products, sale reports, pricing, web orders, etc.). Most of these pages use the typical master list / detailed view usage scenario, with the master list being filtered according to search criteria. Additionally, because FranchiseBlast includes fine-grained access control, all lists need to be filtered according to these permissions, which vary per user. Although there are many built-in constructs in ASP.NET (GridView, DetailsView, FormView, etc.) that support this scenario via an SqlDataSource, let us explain why we prefer using SubSonic.

SubSonic Controllers

Most of the time, you don't want to rewrite the same code again and again. When implementing a classic scenario of displaying a GridView in ASP.NET, you are faced with multiple things you have to handle. First, how do you do sorting? Do you do it in memory with a DataView, for example, or do you let SQL Server do it? Writing the SQL to benefit from SQL Server sorting is more work, but if your table has a lot of data, then it becomes absolutely necessary.

What do you do for paging? Do you want to fetch ALL the rows from your database and then let the GridView handle the paging in memory? This works well until you have a table containing thousands of rows, because your GridView only displays the first 15 rows and you're transferring all that data from SQL Server to ASP.NET for nothing (that is, if you didn't use caching, but that's another subject).

Sorting and paging are only two problems. What about filtering with a search string? Filtering by user permissions? These are problems we wanted to solve once and for all at LavaBlast when creating our core data management engine. We did not want to copy-paste complex SQL code everywhere to manage filtering, sorting, and paging, as this would cause a maintainability nightmare. We don't have millions of rows in our tables, but we have enough to cause performance issues if we don't think about scalability.

So what is the miracle solution? Here at LavaBlast, we think the solution is custom SubSonic controllers used with ASP.NET ObjectDataSource. We assume our readers already know how to use an ObjectDataSource and have a reasonable knowledge of SubSonic. So if you are not really familiar with ObjectDataSource, you should probably read this tutorial first: Displaying Data With the ObjectDataSource. And if you are not familiar with SubSonic, go to the web site and watch some SonicCasts or go read Rob Conery's blog (very interesting blog). Rob is one of the authors of SubSonic who just got hired by Microsoft to continue to work on the goodness of SubSonic! Nice work Rob!

Some code

Here is some sample source code that makes use of our custom Controller class. This class was manually authored and it adds a few methods to the Controller SubSonic generated for us, using the partial class mechanism available in C#. It returns a sorted list of item groups (a set of items) filtered by the user's permissions. Furthermore, it returns only the item groups to be displayed in the UI. We've extracted out the generic concepts of filtering, sorting, and paging. In the end, SubSonic generates a single SQL query, using temporary tables, that returns only the rows displayed in the UI.

This controller is bound to an ObjectDataSource which feeds records into the GridView. FetchByProductAuthority has parameters to handle sorting, paging, and data filtering. "startRowIndex" and "maximumRows" are parameters used for paging. The "sort" parameter specifies the sort column name and the other three parameters are for filtering our data.

SubSonic connects to our SQL Server Database and uses code templates to generate ActiveRecord objects for all our tables, views, and stored procedures. Using strongly typed objects throughout our applications instead of ADO.NET DataRows greatly improves its maintainability for a negligible performance hit, since we only need a couple dozen objects to render one page of a GridView.

One of our ideas was to change the code generation templates to make all controllers derive from our base controller class. We made this class generic so that it could be used for all kinds of SubSonic objects and to make it easier to deal with object collections, also generated by SubSonic.

[DataObject]
public abstract class SubSonicController<T, C>
where T : AbstractRecord<T>, new()
where C : AbstractList<T, C>, new()

We modified the code generation templates so that SubSonic controllers would extend our base controller:

[System.ComponentModel.DataObject]
public partial class ItemGroupController : SubSonicController<ItemGroup, ItemGroupCollection>

Since the class is partial, you can make a new file and continue to write methods for this controller in a completely separate file which won't be overwritten by SubSonic when you regenerate objects from the relational database. We could also have inherited from the generated controller to perform our extensions.

In any case, all our controllers have the same base class. This enables us to add functions to all our controllers. The most basic tasks you want a controller to handle are SQL sorting and SQL paging of your data. SubSonic can easily do the sorting and paging for you with a SubSonic Query object, so we decided to write custom functions to adapt ObjectDataSource parameters to work with it. I include some code here to show how we do this:

protected static Query GetQueryByParams(string sort, int startRowIndex, int maximumRows)
{
    Query q = CreateQuery();
    q = AddPaging(q, startRowIndex, maximumRows);
    q = AddSort(q, sort);
    return q;
}

protected static Query AddSort(Query q, string sort)
{
    if (!string.IsNullOrEmpty(sort))
    {
        if (sort.Contains("DESC"))
            q.OrderBy = OrderBy.Desc(sort.Split(' ')[0]);
        else
            q.OrderBy = OrderBy.Asc(sort.Split(' ')[0]);
    }

    return q;
}

protected static Query AddPaging(Query q, int startRowIndex, int maximumRows)
{
    if (maximumRows > 0)
    {
        q.PageIndex = (startRowIndex / maximumRows) + 1;
        q.PageSize = maximumRows;
    }

    return q;
}

Therefore, when we want to construct a basic query which handles sorting and paging for us, all we have to do is write: Query q = GetQueryByParams(sort, startRowIndex, maximumRows);. In addition, for those who are not yet familiar with the ObjectDataSource, it automagically provides the sort, startRowIndex, and maximumRows values according to what is clicked in the GridView. In summary, we don't have much code to write to obtain data efficiently from our database while respecting our business processes (access control, auditing, etc.).

Furthermore, we added a custom method to handle searching through a list of columns: see the SearchFields property above. We simply define a List<string> of column names in which we wish to search and our base controller handles adding the required filters to the SubSonic Query object. Finally, we also added default sorting, which allows us to simply set the DefaultSort property in our controller to be used when no sort column is specified in the Fetch methods.

I'll stop here; I think this article is already WAY too long. I hope this can be useful to someone! We're currently considering upgrading the controller templates to automatically include methods like FetchByProductAuthority which are commonly used in FranchiseBlast, depending on the SQL Schema of the table. We could even generate the sort/search fields by using metadata stored in the database. Furthermore, we're very interested in generating some ASCX/ASPX files, following the architecture imposed by our solution, which would make use of our controllers. These generated files would be good starting points to cut down on development time for some of our pages.

I include the SubSonicController class if anyone would be interested.

SubSonicController.cs (4.22 kb)

As for the code template for SubSonic, just modify it to use this base class and pass the right types in the generic parameters.

[System.ComponentModel.DataObject]
public partial class <%=tbl.ClassName %>Controller : SubSonicController<<%=tbl.ClassName%>, <%=tbl.ClassName%>Collection>

If you have any questions or comments, don't hesitate to send them in.

Edit: It has come to my attention that I forgot to provide a disclaimer concerning the SearchFields in this post. SubSonic 2.0 does not fully support the OR query construct. Keep in mind that your query will not work properly if you specify more than one field name in the SearchFields list PLUS you also use the AND query construct. (SubSonic relies on boolean operator precedence, not parentheses.)




Free charting for ASP.NET

clock November 18, 2007 23:15 by author EtienneT


Last week I stumbled upon this article by accident: Charts And Graphs: Modern Solutions. This article offers a really good overview of the web-based charting solutions on the market. There are a lot more products available, but what interested me the most in this article were the FREE solutions (as in beer)! My cheap entrepreneur spirit compelled me to check out the free solutions, and I was really surprised to find that one of them had a free ASP.NET charting library.


Open Flash Chart

Open Flash Chart was the first one that got my attention because it had a Google Analytics look. Open Flash Chart uses a Flash-based engine and downloads a separate file to know what to display. This file contains both the data and the appearance of the graph.

On the main page of the project, there is no mention of the existence of a .NET library to use this project. However, in the latest release (1.9.5) two guys made a library that can be used in an ASP.NET project! I downloaded the latest version and began to play with the project. I must say that the library is promising, but there are lots of problems. I fixed some of them and I intend to submit my changes to the team.

A simple example

I decided to try the solution a little bit with some data from our data warehouse. I used SubSonic to query the data since I can't live without it! As there is no data binding in the library, you have to do this work yourself, but it should be pretty easy to add this functionality to the library.

The first step is to have an *.aspx page displaying a custom control:

<graph:Chart ID="chart" Width="100%" Height="400" runat="Server" Url="daysOfWeekData.ashx" />

This control basically inserts a Flash control in the page. The Url property specifies the URL of a file containing both the data and graph's format settings. In this example, I use daysOfWeekData.ashx which contains a bar chart of the average sales a franchise store made for each day of the week during some pre-defined time period.

How does this work?

This is a pretty simple chart. You can hover over each column and the value of the bar will show in a tooltip. So basically, daysOfWeekData.ashx uses the Open Flash Chart .NET library, allowing us to customize our graph and insert our data. After manipulating the data, we can render the objects to the Response stream in query string format:

&title=,& &x_label_style=12,0x000000,2,1,0x000000& &x_axis_steps=1& &y_ticks=5,10,5& &bar_glass=65,#871E00,#FF7B00,Sales,12& &values=10852.69,6702.74,5327.95,5167.67,4982.08,10143.17,19958.16& &y_min=0& &y_max=19968& &tool_tip=Total Sales for #x_label#: #val#& &x_labels=Sunday,Monday,Tuesday,Wednesday,Thursday,Friday,Saturday&

This data is then read and interpreted by the Flash control, which displays the chart.
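The attached daysOfWeekData.ashx.cs uses the Open Flash Chart .NET library's objects; as an illustration of just the data format (not the library's actual API, and with a class name of our own), the values and labels portion of that string can be produced like this:

```csharp
using System;
using System.Globalization;
using System.Text;

public static class ChartData
{
    // Sketch of the &key=value& format shown above: only the values and
    // x_labels keys are produced here; the real handler also emits the
    // styling keys (bar_glass, y_ticks, tool_tip, etc.).
    public static string BarChartData(double[] values, string[] labels)
    {
        string[] formatted = new string[values.Length];
        for (int i = 0; i < values.Length; i++)
            formatted[i] = values[i].ToString("0.##", CultureInfo.InvariantCulture);

        StringBuilder sb = new StringBuilder();
        sb.Append("&values=").Append(string.Join(",", formatted)).Append("&");
        sb.Append(" &x_labels=").Append(string.Join(",", labels)).Append("&");
        return sb.ToString();
    }
}
```

In an ASHX handler, the resulting string would simply be written out with context.Response.Write(...).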

I have included the code for daysOfWeekData.ashx.cs (1.96 kb), in case you are interested.



ASP.NET GridView column resizing

clock November 8, 2007 00:54 by author EtienneT

We use lots of ASP.NET GridViews in our main product, FranchiseBlast.  We think it's important to make our users' browsing experience as pleasant as possible.

Today, we added a new feature to enable our users to dynamically resize the columns of a GridView. All the credit for the original source code goes to Matt Berseth, who published his code on his blog: GridView column resizing. Unfortunately, the column resize behavior was not implemented as an AjaxControlToolkit control extender. Therefore, this morning I took his code and adapted it to be reusable as a control extender. I have included the code here so that others can benefit from it (see the bottom of this article). We also included a small feature to improve the sort order visualization in the GridView. Many thanks again to Matt Berseth.

 Here is a screenshot of one of the dynamic reports in FranchiseBlast.

FranchiseBlast is architected with re-usable base classes and we do lots of things in the codebehind instead of the typical ASPX/ASCX. Here is some sample code to programmatically add the extender to the page:
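The code screenshot from the original post isn't reproduced here. As a rough sketch only (GridViewResizeExtender comes from our adaptation of Matt Berseth's code in the attached zip; FindGridViews is a hypothetical helper that walks the control tree), the idea in a base page's OnPreRender is roughly:

```csharp
// Hypothetical sketch: attach a resize extender to every GridView found
// on the page, so no individual page needs to declare one.
foreach (GridView grid in FindGridViews(Page))
{
    GridViewResizeExtender extender = new GridViewResizeExtender();
    extender.TargetControlID = grid.ID;
    grid.Parent.Controls.Add(extender);
}
```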


Thanks to our architecture, these five lines enable column resizing on all of the GridViews in FranchiseBlast.

Here is the source code, should anyone be interested:

GridViewResizeExtender.zip (262.87 kb)



In 2007, can we afford to refuse potential customers who don’t have JavaScript enabled?

clock November 3, 2007 15:04 by author JKealey

Traditionally, I was very conservative when it came to making use of JavaScript (and even CSS) in my projects. Years ago, I was spending horrendous amounts of time double checking my sites on various browsers, particularly Netscape 4.7. As a developer, I found it was a necessary evil to get the site to work on all browsers, and became quite good at it. I now use Microsoft Virtual PC  to test my websites.

AJAX

A decade after Netscape 4 was launched, I now find myself in a similar position with JavaScript. We need to decide if we can require our users to have JavaScript enabled. We feel that when used properly, JavaScript can increase a site's usability. We know that approximately 94% of web users have JavaScript enabled. Looking at the trends, we can see that this number is rising. We also notice that an increasing number of websites are using AJAX. However, the big players typically build two versions of their sites, allowing visitors without JavaScript to use their services.

Maybe a better question would be: can we afford to refuse potential customers who don't have JavaScript enabled? The answer depends on who your customers are. In the franchisor/franchisee relationship, you can impose stricter constraints on the franchisees, as you don't have thousands of them. However, the situation is different with retail customers, who are not technically savvy. They probably have outdated software or half a dozen internet security/anti-virus/anti-spyware packages that compound the problem. (I'll keep the discussion on the Internet's culture of fear for another day.)

Therefore, by convention, we'd err on the side of caution. This position is further reinforced by the fact that AOL's browser has broken JavaScript support. However, taking a deeper look into the economics of the software, we have decided to require JavaScript for one of our soon-to-be-released e-commerce sites. I am purposefully omitting many arguments for the sake of conciseness, as everything is debatable.

-          Trends: We see an increasing number of sites requiring JavaScript. We’d rather design for the future than the past.  We feel JavaScript has reached its critical mass and can be used in production environments.

- Return on investment: Given our development style and legacy code, we estimate we need to invest an additional 30% effort to properly support users without JavaScript. This is prohibitively expensive for our clients.

  • New businesses need to start generating revenue immediately, and can hopefully afford to implement a script-less version when (and if) the site becomes popular.
  • Our POS (in operation in a controlled environment) uses AJAX profusely. Re-use is very tempting.
  • The ASP.NET framework includes exceptional server-side controls that make use of JavaScript for postbacks. We'd have to avoid a large number of useful components or re-implement them.
  • We could develop without script from the ground up (thus eliminating the problem completely), but we feel it would limit our potential and usability.

- We see search engine accessibility as an orthogonal issue. Even if we require JavaScript, we can easily create a few semi-static pages (at a negligible cost) that will be scanned by search engines.

Even after the previous justifications, we are still torn. Concerning accessibility, our position is not justifiable. What "feels right" for us developers (and our customers' pocketbooks) will have to be monitored in the coming months. Access logs will be reviewed and customer requests tracked. We'll revisit our decision in a few months and might end up reversing our position. Therefore, we shall remain conservative in our JavaScript usage, in case we need to rework our software at a later date.

I suppose the moral of this story is to be open to change and experimentation.



Sample C# code for BeanStream credit card processing

clock October 26, 2007 16:11 by author JKealey

The other day we started looking at various credit card payment gateways in order to process transactions on one of our clients' e-commerce sites. After reading up on a few alternatives, we hoped to implement an easy all-in-one solution such as PayPal's Website Payments Pro. Unfortunately, this program is not available in Canada. Apparently it will be sometime soon, but obviously we can't put e-commerce on hold waiting for it.

After looking around a bit more, we found a payment gateway popularity contest, and since we had seen a bunch of programming samples for Authorize.NET, it interested us. However, once again, Canadians cannot use this payment gateway. We looked at PSIGate, the most popular one in Canada, and were interested by their offering, but in the end our client decided to go with BeanStream, another Canadian firm. BeanStream offers Electronic Funds Transfer (EFT) programs, which are very useful for collecting royalties from franchisees. I may post something concerning EFT later in the year.

In any case, we were a bit disappointed that the site was not full of technical information, programming samples, SDKs, etc. We had to contact them to obtain a copy of the documentation, something we would not have expected from a technical company in the days of Web 2.0. Having to contact them grows their contact list but shows a certain lack of openness, at a time when openness is gaining steam. The integration process seemed straightforward, as expected: send out a request and get a response back. We were a bit surprised that requests were encoded as you would encode a query string, instead of XML with a freely available XSD/DTD. The sample code provided was dirt-simple VBScript (ASP) along with other technologies that we don't use.
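To make the request format concrete, here is a minimal sketch of building such a query-string-encoded request. The field names used in the example (merchant_id, trnAmount) are illustrative only, not necessarily the gateway's actual parameter names; consult the official documentation for those.

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Sketch of building a query-string-encoded payment request, in the
// style of the legacy (non-SOAP) API. Field names are supplied by the
// caller; the real parameter names come from the gateway documentation.
public static class QueryStringRequest
{
    public static string Build(IEnumerable<KeyValuePair<string, string>> fields)
    {
        var sb = new StringBuilder();
        foreach (var pair in fields)
        {
            if (sb.Length > 0)
                sb.Append('&');
            // Escape both key and value so amounts, names, etc. are safe.
            sb.Append(Uri.EscapeDataString(pair.Key));
            sb.Append('=');
            sb.Append(Uri.EscapeDataString(pair.Value));
        }
        return sb.ToString();
    }
}
```

The resulting string would then be POSTed to the gateway's endpoint over HTTPS.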

Some would call us lazy, but we feel that re-inventing the wheel is not a mission one should waste time on. Therefore, we started googling for freely available C# code for payment processing using BeanStream, figuring that if the company itself doesn't make this code available, someone must have posted an article on The Code Project, or at least that we could find some code on Google Code Search. We found some PHP and some Perl, but since we code in C#, that code was not useful to us. Therefore, we started our own implementation from scratch.

The code that follows is the current state of our implementation. It has not yet been tested in production, but our unit tests pass. We discovered a SOAP API after signing up and used that instead of the query-string format. We implemented a bit of parameter verification to make it easier to integrate with our higher-level structures, which don't have strict field lengths. Hopefully you'll find this code useful; let us know if you find any flaws. In our code, we've subclassed this base class to insert logging and conversion from our object-oriented data structures.
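The parameter verification mentioned above amounts to checking or trimming values before they reach the gateway, since our higher-level structures impose no field lengths of their own. A sketch of such helpers follows; the length limits a caller would pass in are examples only, not BeanStream's actual field constraints.

```csharp
using System;

// Example helpers for verifying parameters before sending them to a
// payment gateway. Any maximum lengths passed by callers are
// illustrative, not the gateway's actual constraints.
public static class FieldCheck
{
    // Throws if a required field is missing.
    public static string Require(string value, string fieldName)
    {
        if (string.IsNullOrEmpty(value))
            throw new ArgumentException("Missing required field: " + fieldName);
        return value;
    }

    // Silently truncates over-long values to the gateway's maximum.
    public static string Truncate(string value, int maxLength)
    {
        if (value == null)
            return string.Empty;
        return value.Length <= maxLength ? value : value.Substring(0, maxLength);
    }
}
```

In a real integration one would decide per field whether to truncate silently or reject the transaction outright; truncating a cardholder name is harmless, truncating an amount obviously is not.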

We found that the documentation was not very good, especially for the SOAP API. There were tons of mistakes and inconsistencies but, worst of all, the documentation was only available as a PDF from which we could not copy-paste. Therefore, the 500+ error messages and 100+ country codes cannot easily be exported to an Excel spreadsheet in order to create lookup tables in our database. We're building multi-lingual systems and don't have the time to translate their 500+ error messages, so we chose a simple solution, as seen in the code: all errors (and exceptions in our code) are mapped to large encompassing classes. Fortunately, we were put in contact with VERY helpful people who responded extremely rapidly to our technical questions.
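The idea of mapping hundreds of individual messages onto a handful of translatable categories can be sketched as follows. Note that the numeric ranges below are invented purely for illustration; the real code-to-category table has to be derived from the gateway's documentation.

```csharp
using System;

// Broad error categories that are practical to translate, instead of
// translating the gateway's 500+ individual messages. The numeric
// ranges below are invented for illustration; the real mapping must
// come from the gateway documentation.
public enum PaymentErrorCategory
{
    Approved,
    Declined,
    InvalidRequest,
    GatewayError
}

public static class ErrorMapper
{
    public static PaymentErrorCategory Categorize(int messageId)
    {
        if (messageId == 0)
            return PaymentErrorCategory.Approved;
        if (messageId < 100)
            return PaymentErrorCategory.Declined;       // e.g. card declined
        if (messageId < 400)
            return PaymentErrorCategory.InvalidRequest; // e.g. bad field value
        return PaymentErrorCategory.GatewayError;       // everything else
    }
}
```

Each category then gets one translated, user-friendly message per language, while the original message ID is logged for support.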

The source code follows. If you're interested, download the attached zip file containing the C# source code.

BeanStreamProcessing.zip (5.82 kb)


