LavaBlast Software Blog

Help your franchise business get to the next level.

Webcam surveillance in FranchiseBlast

December 6, 2007 23:11 by author JKealey

Point of sale surveillance

With the Teddy Mountain teddy bear stuffing franchise, we're fortunate enough to work with a technically savvy franchisor. Our website describes the various elements we've produced for their great franchise system. LavaBlast is proud to help centralize Teddy Mountain's franchise operations and bring their franchise offering to the next level. We've been working with them on various solutions since their early beginnings and have grown our tailor-made solution alongside their growing needs.

Today, we launched a simple feature that allows franchisees to remotely monitor their stores, using security webcams that were installed when the stores opened. This is something they could already do, but we’ve integrated it into our solution so that they only have one place to go on the web for their product pricing, reports, etc.

The integration into our solution was a quick job thanks to the infrastructure we already had in place. We do little side-projects like this just for fun, to clear our minds!

PS: The Teddy Mountain store decor is absolutely fabulous. The Imagination Retail Group has found a great balance between visual appeal and supporting infrastructure.

 



SubSonic Limitations - Part 2 (aka: Knee deep in… SubSonic.)

December 4, 2007 22:02 by author JKealey
Knee deep in snow.

After my recent post asking for the most elegant way to support multiple databases with the same schema at runtime, I received some good pointers in the SubSonic forums from dbr, a forum user. In the end, I admit I should have done my homework before posting.

One elegant solution to change the SubSonic Provider/ConnectionString at runtime makes use of SharedDbConnectionScope. I personally do not like this solution, as I prefer my code to explicitly state what it's doing via its properties or arguments instead of relying on contextual information. I was also concerned about how it behaves with regard to concurrency, so I did a little digging. Looking at the code, I discovered it internally uses the ThreadStatic attribute, which seems like a godsend at first, but further investigation reveals the implementation may be flawed. I did see people complain that it didn't work for them, but I don't know if that is related to the ThreadStatic attribute. I do not fully trust this technique, but I may be wrong, as I'm far from an expert in concurrency.

Returning to dbr's suggestion: he simply generates different providers at runtime, one for each connection string. This sounds simple, provided you can modify the ProviderName property on the collection (ActiveList) or object (ActiveRecord) every time you load from or save to the database. Without resorting to SharedDbConnectionScope, you can't use the auto-generated constructors because they fall back to the default provider, which is hardcoded in the generated object's schema.

An elegant way to encapsulate loading from and saving to the database is to use a controller, as the MVC design pattern suggests. I have not yet played with the new MVC templates provided by SubSonic, but we already use a good generic SubSonicController here at LavaBlast.

I wanted to rewrite my object loading/saving code using this new solution to get rid of my inelegant concurrency locks. Although it looked straightforward, I encountered a few little hiccups along the way and thought I'd post my findings here.

Limitation 1: You can't create an object by specifying its ProviderName in an ActiveRecord constructor using the default generated code.

  • Workaround: You need to load it using a collection, which supports the ProviderName (combined with the Limitation 2 workaround in the sketch below).
  • Workaround 2: Use SharedDbConnectionScope
  • Workaround 3: Change the code templates to add new constructors.

Limitation 2: You can't use a collection's Where parameter to load your data (via its primary key or other filter), because of incomplete framework code. Hopefully this will be resolved soon (see issue 13402).

  • Workaround: Copy-paste the code used internally by the Collection, but pass in the extra ProviderName parameter to the new Query (see the sketch below).
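
To make these first two workarounds concrete, here is a minimal sketch of loading a single record from a specific database. The Product type, the "StoreA" provider name, and the exact Query constructor overload are illustrative; adapt them to your generated code.

// Sketch only: load one record from the database behind the "StoreA" provider.
ProductCollection coll = new ProductCollection();
coll.ProviderName = "StoreA"; // collections expose a settable ProviderName (Limitation 1 workaround)

// Limitation 2: skip the collection's Where parameter and build the Query ourselves,
// passing in the extra ProviderName parameter.
Query qry = new Query(Product.Schema.TableName, "StoreA");
qry.AddWhere("ProductID", productId);
coll.LoadAndCloseReader(qry.ExecuteReader());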

Limitation 3: You can't specify the ProviderName property on an ActiveRecord because the setter is marked as protected.

  • Workaround: Change the code templates and add a public method/property that sets ProviderName from within the class.
  • Workaround 2: Use SharedDbConnectionScope.

Limitation 4: When you load an ActiveRecord by running a query or by loading a Collection, the ActiveRecord does not inherit the ProviderName from the Collection/Query. This is probably due to Limitation 3.  

My current prototype no longer uses the C# lock keyword. I create instances of a controller, passing in the name of the connection string to use. All database loading/saving is done through this controller, for which I have attached sample code extracts. I managed to get the object loading code to my liking, but I had to resort to SharedDbConnectionScope for saving. Once the minor limitations in the framework are resolved, I will be more comfortable with the code.
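
As a rough outline of what the attached extracts contain, the controller looks something like the sketch below. The class and type names are placeholders, and I am assuming SharedDbConnectionScope can be pointed at the target connection string, as discussed in the forum thread.

// Sketch only: one controller instance per target database.
public class StoreController
{
    private readonly string providerName;      // SubSonic provider generated for this store's database
    private readonly string connectionString;  // connection string for the same database

    public StoreController(string providerName, string connectionString)
    {
        this.providerName = providerName;
        this.connectionString = connectionString;
    }

    public ProductCollection LoadProducts()
    {
        // Loading works without SharedDbConnectionScope: build the Query with the extra provider name.
        ProductCollection coll = new ProductCollection();
        coll.ProviderName = providerName;
        Query qry = new Query(Product.Schema.TableName, providerName);
        coll.LoadAndCloseReader(qry.ExecuteReader());
        return coll;
    }

    public void Save(Product product)
    {
        // Saving still falls back on SharedDbConnectionScope until the limitations above are resolved.
        using (SharedDbConnectionScope scope = new SharedDbConnectionScope(connectionString))
        {
            product.Save();
        }
    }
}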

In summary, I did manage to get a working prototype and I have attached the relevant portions of code that works with data from the appropriate database (chosen at runtime). Hope this helps!



SubSonic Limitations

November 30, 2007 10:23 by author JKealey

Question: What is the most elegant way to reuse code generated by SubSonic for tables that share the same schema in different databases?

  • Ideally, I would have a shared schema definition, generated once, and seamlessly integrated into the code generated for each separate provider.
  • Creating a separate DataProvider for a subset of tables reduces the amount of code that is generated, but is not very convenient to use if you do not use the same namespace for all your projects.
  • Creating a separate DataProvider does not solve the problem of database selection at runtime.

Multiple Databases, Same Schema

LavaBlast's integrated solution for franchise management operates on a centralized database and a data warehouse which collects data from all our points of sale. Recently, we decided we wanted to create some management pages for our various e-commerce websites in our centralized portal. Because our recently developed e-commerce backend is the same as our point of sale (reuse++), we automatically obtained features like centralized product line and pricing management for our store fleet (featureSynergy++). However, we wanted to be able to process website users and orders from this same central portal, not on each individual site.

My first question was: how do we get the union of the data from the same table in multiple databases? One solution would be to merge everything into the data warehouse, but we didn't want to build complex infrastructure to bring the data into the warehouse and push changes back out when necessary. I suppose having everything in the same database in the first place would be a solution, but that is not how we architect our systems. SQL Server Replication might be useful, but it is not bidirectional with SQL Server Express. I could easily write a view containing a UNION query that merges the data from the set of databases, but that would be a maintenance problem: for each table, I would have to hardcode the list of databases.

I wrote a quick stored procedure that builds the UNION query from a table of Website-to-DatabaseName mappings, given a few parameters. It is inefficient and not strongly typed (hence it feels dirty), but given the volume of data on these sites, it is good enough for now without being a maintenance pain. By passing a few parameters to the stored procedure, we can filter the rows before the union and improve performance. I am curious to know if there are more elegant solutions to this problem.

Anyhow, with this first problem solved, we could bind our GridView to a DataTable produced by executing a stored procedure and see the merged results. However, because we have a standard infrastructure that makes good use of SubSonic magic for filtering, paging, and sorting, this was not enough. Our infrastructure only works on views or tables in our central database, not on arbitrary results returned by stored procedures, and SubSonic did not generate any code for the merged tables in the central database. Still, thanks to the SubSonic Provider model, we managed to load a collection based on the type defined in one DataProvider (point of sale) using data provided by the stored procedure in another DataProvider (central server). Below is an example without any filtering, sorting, or paging.

// Build the UNION query for the web user table across all websites (no filtering, sorting, or paging).
SubSonic.StoredProcedure sp = SPs.WebsiteUnionOfTables(POSBOLib.Generated.ShoppingCart.ViewWebUser.Schema.TableName, "*", string.Empty, string.Empty);
// Load the merged rows into the collection generated for the point of sale provider.
POSBOLib.Generated.ShoppingCart.ViewWebUserCollection coll = new POSBOLib.Generated.ShoppingCart.ViewWebUserCollection();
coll.LoadAndCloseReader(sp.GetReader());

With a bit more work on the stored procedure, we could make it efficient, but we don't want to rely on T-SQL too much, in order to keep everything easier to maintain. (We could use CLR stored procedures, but that's another story.)

My second question was: how am I going to update this data? When manipulating the data, I know which database it comes from thanks to an additional column appended by my stored procedure, but I cannot create an updatable SubSonic object from this, and I don't feel like writing SQL anymore now that we use SubSonic. However, the DataProvider name is a hardcoded string in the auto-generated code… and changing the templates to pass in extra parameters looks like too much work, in addition to breaking the simplicity of the framework.

Having played with the DataProvider model, one idea that came to me was to switch the provider context dynamically at runtime. The framework doesn't support this, so I had to hack it in and make sure all my data access was contained in critical sections (using the lock keyword) which begin with an invocation of a provider-switching method.
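
That method isn't reproduced here, but the critical sections look roughly like the sketch below. SwitchDefaultProvider and the Product type are placeholders for the hack of repointing SubSonic's default provider.

// Sketch only: serialize data access and temporarily repoint the default provider.
public class ProviderSwitchingSaver
{
    private static readonly object providerLock = new object();
    private readonly string defaultProviderName;

    public ProviderSwitchingSaver(string defaultProviderName)
    {
        this.defaultProviderName = defaultProviderName;
    }

    public void SaveInDatabase(Product product, string providerName)
    {
        lock (providerLock)
        {
            SwitchDefaultProvider(providerName);   // make the generated code use the target database
            try
            {
                product.Save();
            }
            finally
            {
                SwitchDefaultProvider(defaultProviderName); // restore so other requests are unaffected
            }
        }
    }

    private void SwitchDefaultProvider(string providerName)
    {
        // Placeholder: this is where the provider-swapping hack lives; the exact
        // mechanism depends on how your copy of the framework exposes its providers.
    }
}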

Another option, which just came to me now, would be to obtain the SQL generated by SubSonic during an operation and perform string replacements to forward the requests to the appropriate database. This too is too much of a hack, however, since it depends on the implementation details and the DBMS.

In conclusion, I did manage to build a working prototype using locks and the above code, but I feel the code is too dirty and I am open to suggestions from SubSonic experts (I'm looking at you Rob Conery and Eric Kemp). If there is a clean way to do it, I would love to contribute it to the SubSonic project!

Read Part 2.



The Mysterious Parameter Is Not Valid Exception

November 29, 2007 20:39 by author JKealey

For a number of weeks, we had been encountering an odd exception on rare occasions. Typically, our point of sale would run flawlessly up until a very busy day, when it would refuse to render some of our cached images.

System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Parameter is not valid.
at System.Drawing.Image.Save(Stream stream, ImageCodecInfo encoder, EncoderParameters encoderParams)
at System.Drawing.Image.Save(Stream stream, ImageFormat format)
[...]

We searched Google for the possible cause of this error and found a bunch of irrelevant posts from people who get this error message every time they execute their code. In addition, we discovered that it is a generic message that could mean a null image, an inappropriate image format, etc. We figured it must have something to do with memory usage because of the time it took before it occurred in production. However, we knew that the ASP.NET worker process had not restarted because of excessive memory usage.

We ran stress tests on our machines and never managed to replicate the error. In one session, we loaded all the birth certificates that a store had ever created, hundreds of times more than what they would produce on their busiest day. We were unfortunately unable to replicate the issue. (Here at LavaBlast, we mostly use NUnit and NUnitASP for our unit testing of ASP.NET applications.)

Then we found a post suggesting that you copy the image to a new MemoryStream instead of saving it directly to the Response.OutputStream of an ASP.NET application.

The relevant source code looks like this:

public static void CopyToStream(Bitmap image, Stream outputStream, ImageFormat format)
{
    // Save to an intermediate MemoryStream first, then copy the buffer to the output stream.
    using (MemoryStream stream = new MemoryStream())
    {
        image.Save(stream, format);
        stream.WriteTo(outputStream);
    }
}

The code is accessed this way in one of our ASP.NET handlers (ASHX file):

Bitmap image = null;
// load the image from cache
if (image != null)
{
    HttpResponse response = context.Response;
    response.ClearHeaders();
    response.ClearContent();
    response.Clear();
    response.ContentType = "image/jpeg";
    CachedImageGenerator.CopyToStream(image, response.OutputStream, ImageFormat.Jpeg);
}

The end result is that this makes no difference at all. Bummer! Because we had to wait a week to get the results in production, we needed to replicate this error on our development machines. We quickly realized that we had overlooked the most obvious of solutions in the first place. We knew the image was not null and we knew it was still in the cache, but we had never checked to see if it was disposed! Yes, somehow the images in our cache were disposed by some external process, but not by the cache itself, which would have removed them from the cache beforehand. Once a System.Drawing.Image is disposed, all of its properties throw the Parameter is not valid error. In our image cache, we coded a quick hack that would test whether the image.Height property was throwing this error: if it was, we reloaded the image from the database. (Note: Images do not have an IsDisposed property.)
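
The quick hack amounted to something like the following sketch (not our actual cache code):

// Returns false if the cached image is null or has been disposed behind our back.
public static bool IsImageUsable(System.Drawing.Image image)
{
    if (image == null)
    {
        return false;
    }
    try
    {
        int height = image.Height; // any property access on a disposed Image throws ArgumentException
        return height >= 0;
    }
    catch (ArgumentException)
    {
        return false; // disposed: the caller should reload the image from the database
    }
}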

Obviously, this hack was not very reassuring. While Etienne took on the task of refactoring the image cache to store a byte array instead of a System.Drawing.Image, I took out the heavy artillery to find the root cause of this exception. By using JetBrains dotTrace 3.0, a superb tool for profiling .NET applications (both WinForms and web applications), I discovered a memory leak in our application. I cannot overstress how glorious this tool is. It is simply excellent and it saved me tons of time.

In any case, before fixing the memory leak, I reduced the maximum memory IIS allows my worker process to 16 MB. (My machine has 4 GB of RAM; that's why we never discovered the flaw in the first place. We should have tested on our sample production hardware instead… but that's another story.) With such low memory, I was quickly able to cause the worker process to restart by trying to load too many images (all the images the store had ever produced, once again). Between worker process restarts, I managed to replicate the elusive Parameter is not valid exception. Debugging under this scenario with scarce resources, I discovered that the image was being disposed in the short lapse of time between its creation and its output, revealing that no amount of quick hacks would have solved this issue.

Returning to the memory leaks with JetBrains dotTrace, we found them quickly, and the application then managed to run our nasty stress test with 32 MB assigned to the worker process.

In conclusion, there are no real miracle solutions for solving this problem except ensuring you don't use up too much memory! I just wanted to write this post to help people who are encountering intermittent "Parameter is not valid" exceptions figure out what is going on!

Shameless plug: LavaBlast creates industry-specific interactive kiosks integrated with tailor-made point of sale software, and a variety of other software solutions for franchisors.



An Improved SubSonic ManyManyList

November 28, 2007 13:38 by author JKealey

Etienne is on fire with his recent blog posts about SubSonic, so I thought I would contribute one too.

Five months ago I submitted a patch to SubSonic concerning their ManyManyList control (SubSonic.Controls.ManyManyList). I love the control as it is a real time saver, but there are a few limitations.

1 - Cannot use a view as a primary table or foreign table.
In my context, I want to display strings and these strings are not directly in the ForeignTable. The control made assumptions about the presence of a primary key.

2 - Cannot sort the resulting elements
In my context, I want to sort the strings alphabetically.

3 - Cannot filter the foreign table
In my context, a particular item can be in multiple categories, but the set of categories it can be in is not the full foreign table.

4 - The save mechanism deletes all rows and recreates them. If you have other columns in your map table, you lose all that information. Furthermore, there are no checks to see if the delete/recreation is necessary. Even if there are no changes, it still deletes/recreates everything.

I've pretty much rewritten everything to support the behaviour listed above. The parameter names should be reviewed because they are not very user friendly, and I am not well versed in SubSonic naming conventions. Since then, we've used this code in production and it appears to work perfectly for our purposes (and it should behave exactly as the original did, out of the box, if you don't specify any of the new properties).

Agrinei enhanced my code to make it even more customizable.

Download the patch directly on CodePlex and don't forget to vote for the issue!



LavaBlast at IAAPA Orlando

November 7, 2007 00:01 by author JKealey

Next week, some of LavaBlast's software will be demoed at the International Association of Amusement Parks and Attractions (IAAPA) Orlando Expo. Because of the diverse crowd, we're focusing less on the franchise aspects of our FranchiseBlast solution and more on the individual products.

Being a one stop shop for franchisors implies being able to produce nice promotional material for tradeshows, conferences, and expositions. We designed one for our own company; let me know what you think!

 

See you there!



In 2007, can we afford to refuse potential customers who don’t have JavaScript enabled?

November 3, 2007 15:04 by author JKealey

Traditionally, I was very conservative when it came to making use of JavaScript (and even CSS) in my projects. Years ago, I spent horrendous amounts of time double-checking my sites on various browsers, particularly Netscape 4.7. As a developer, I found it a necessary evil to get the site to work on all browsers, and I became quite good at it. I now use Microsoft Virtual PC to test my websites.

AJAX

A decade after Netscape 4 was launched, I now find myself in a similar position with JavaScript. We need to decide if we can require our users to have JavaScript enabled. We feel that, when used properly, JavaScript can increase a site's usability. We know that approximately 94% of web users have JavaScript enabled. Looking at the trends, we can see that this number is rising. We also notice that an increasing number of websites are using AJAX. However, the big players typically build two versions of their sites, allowing visitors without JavaScript to use their services.

Maybe a better question would be: can we afford to refuse potential customers who don't have JavaScript enabled? The answer depends on who your customers are. In the franchisor/franchisee relationship, you can impose stricter constraints on the franchisees, as you don't have thousands of them. However, the situation is different with retail customers, who are not technically savvy. They probably have outdated software or half a dozen internet security/anti-virus/anti-spyware packages that compound the problem. (I'll keep the discussion on the Internet's culture of fear for another day.)

By default, then, we'd err on the side of caution. This position is further reinforced by the fact that AOL's browser has broken JavaScript support. However, taking a deeper look into the economics of the software, we have decided to require JavaScript for one of our soon-to-be-released e-commerce sites. I am purposefully omitting many arguments for the sake of conciseness, as everything is debatable.

  • Trends: We see an increasing number of sites requiring JavaScript. We'd rather design for the future than the past. We feel JavaScript has reached its critical mass and can be used in production environments.

  • Return on investment: Given our development style and legacy code, we estimate we need to invest an additional 30% effort to properly support users without JavaScript. This is prohibitively expensive for our clients.

  • New businesses need to start making revenue immediately and hopefully can afford to implement a script-less version when (and if) the site becomes popular.  
  • Our POS (in operation in a controlled environment) uses AJAX profusely.  Re-use is very tempting.
  • The ASP.NET framework includes exceptional server-side controls that make use of JavaScript for postbacks.  We’d have to avoid a large number of useful components or re-implement them.
  • We could develop without script from the ground up (thus eliminating the problem completely), but we feel it limits our potential and usability.

  • We see search engine accessibility as an orthogonal issue. Even if we require JavaScript, we can easily create a few semi-static pages (at a negligible cost) that will be scanned by search engines.

Even after the previous justifications, we are still torn. Concerning accessibility, our position is not justifiable. What "feels right" for us developers (and our customers' pocketbooks) will have to be monitored in the coming months. Access logs will be reviewed and customer requests tracked. We'll review our decision in a few months and might end up reversing our position. Therefore, we shall remain conservative in our JavaScript usage, in case we need to revisit our software at a later date.

I suppose the moral of this story is to be open to change and experimentation.



Sample C# code for BeanStream credit card processing

October 26, 2007 16:11 by author JKealey

The other day we started looking at various credit card payment gateways in order to be able to process transactions on one of our client's e-commerce sites. After reading up on a few alternatives, we hoped to be able to implement an easy all-in-one solution such as PayPal's Website Payments Pro. Unfortunately, this program is not available in Canada. Apparently it will be available sometime soon, but we obviously can't put the e-commerce site on hold waiting for them.

After looking around a bit more, we found a payment gateway popularity contest, and since we had seen a bunch of programming samples for Authorize.NET, it interested us. However, once again, Canadians cannot use this payment gateway. We looked at PSIGate, the most popular one in Canada, and were interested by their offering but, in the end, our client decided to go with BeanStream, another Canadian firm. BeanStream offers Electronic Funds Transfer (EFT) programs, which are very useful for collecting royalties from franchisees. I may post something about EFT later in the year.

In any case, we were a bit disappointed that the site was not full of technical information, programming samples, SDKs, etc. We had to contact them to obtain a copy of the documentation, something we would not have expected from a technical company in the days of Web 2.0. Having to contact them increases their contact base but shows a certain lack of openness, something which is gaining steam nowadays. The integration process seemed straightforward, as expected: send out a request and get a response back. We were a bit surprised that the requests were encoded like a query string instead of as XML with a freely available XSD/DTD. The sample code provided was dirt-simple VBScript (ASP) along with other technologies that we don't use.

Some would call us lazy, but we feel that reinventing the wheel is not a mission one should waste time on. Therefore, we started googling for freely available C# code for payment processing with BeanStream, figuring that if the company itself doesn't make such code available, someone must have posted an article on The Code Project, or at least that we could find some code on Google Code Search. We found some PHP and some Perl, but since we code in C#, that code was not useful to us. Therefore, we started our implementation from scratch for our own purposes.

The code that follows is the current state of our implementation. It has not yet been tested in production, but our unit tests pass. We discovered a SOAP API after signing up and used that instead of the query string format. We implemented a bit of parameter verification to make it easier to integrate with our higher-level structures, which don't have strict field lengths. Hopefully you'll find this code useful and will let us know if you find any flaws. In our code, we've subclassed this base class to insert logging and conversion from our object-oriented data structures.
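
For illustration, the parameter verification boils down to something like the snippet below. The helper, the order object, and the 64-character limit are hypothetical; the real limits come from the BeanStream documentation.

// Trim a value to the maximum length the gateway accepts for that field.
private static string Truncate(string value, int maxLength)
{
    if (string.IsNullOrEmpty(value))
    {
        return value ?? string.Empty;
    }
    return value.Length <= maxLength ? value : value.Substring(0, maxLength);
}

// Example: our order objects have no strict field lengths, so we trim before building the request.
string cardholderName = Truncate(order.CustomerName, 64); // 64 is an illustrative limit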

We found that the documentation was not very good, especially for the SOAP API. There were tons of mistakes and inconsistencies but, worst of all, the documentation was only available in a PDF format from which we cannot copy-paste. Therefore, the 500+ error messages and 100+ country codes cannot easily be exported to an Excel spreadsheet in order to create lookup tables in our database. We're building multi-lingual systems and don't have the time to translate their 500+ error messages, so we chose a simple solution, as seen in the code: all errors (and exceptions in our code) are mapped to large encompassing classes. Fortunately, we were put in contact with VERY helpful people who responded extremely rapidly to our technical questions.
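
As a rough idea of what those encompassing classes look like, the mapping is along the lines of the sketch below. The category names and the message ids are illustrative, not BeanStream's actual identifiers.

// Broad error categories we can localize, instead of translating 500+ gateway messages.
public enum PaymentErrorCategory
{
    Approved,
    DeclinedByBank,
    InvalidCardOrAddress,
    GatewayOrConfigurationError
}

// Illustrative grouping: the real table maps each documented message id to one category.
public static PaymentErrorCategory Categorize(int gatewayMessageId)
{
    switch (gatewayMessageId)
    {
        case 1:
            return PaymentErrorCategory.Approved;        // hypothetical "approved" id
        case 2:
            return PaymentErrorCategory.DeclinedByBank;  // hypothetical "declined" id
        default:
            return PaymentErrorCategory.GatewayOrConfigurationError;
    }
}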

The source code follows. If you're interested, download the attached zip file containing the C# source code.

BeanStreamProcessing.zip (5.82 kb)



Requirements Engineering at LavaBlast

September 11, 2007 00:32 by author JKealey

It is always interesting to try something new and I am excited to have the opportunity to write the first post on LavaBlast Software’s blog. My name is Jason Kealey and I’m a software engineering graduate of the University of Ottawa and the President of LavaBlast Software. This blog will feature various insights which we want to share with our clients and the world in general. Since our specialty is software, we thought it would be nice to create a blog focused on software for franchisors.

We’re opening the blog just a few days before I leave for Europe for a few weeks as I am presenting a paper at the 13th System Design Language Forum. Because will be presenting concepts related to my Master’s thesis, I thought the first post here could give an overview of one of my interests. Over the last two years, I have been the lead developer of an Eclipse plug-in to create, analyze, and transform models which help software engineers describe software requirements. Today, I wish to explain in layman’s terms how my expertise can be useful to franchisors or anyone who might require LavaBlast’s software. By giving insight on my expertise, I hope to communicate to my readers how we can collaborate to create software without requiring our customers to be experts in software engineering.

Requirements

Traditionally, people have written free-form textual discussions to explain the behaviour of a system. This is good for initial discussions but does not scale because of the nature of unstructured text: ideas are not located easily and details are often sparse and scattered. It is better practice to list different functionality in a clear, structured pattern such as the following example:

  • FranchiseBlast SHALL record changes made to a product into an electronic journal.
  • FranchiseBlast SHALL only allow administrators to view the electronic journal.

By writing requirements in such a way, progress can be tracked, and priorities are easy to visualize. At LavaBlast Software, we like to use a collaboration wiki to elaborate our requirements for complex projects. A public example of such a wiki adapted for requirements management can be found on the website of the aforementioned tool: jUCMNav. By using a wiki, we collaboratively create the requirements document. No more emailing large documents and ending up with out-of-sync versions: everyone works on the central, web-accessible location (password-protected if necessary) and the requirements evolve. Later on, the requirements are not shelved: the developers implementing the system can go back and ask questions directly in a particular requirement's discussion thread. Everyone works closely together to best define and scope the software system.

How to write good textual requirements is a topic that has been discussed by many authors. However, while the mileage a team can get out of a collection of textual requirements varies from project to project, more often than not they are insufficient. Structured textual requirements are good for describing features but not for describing scenarios or relationships between features. To better represent scenarios, a common tool in the requirements engineer's arsenal is the textual use case. Textual use cases describe scenarios in which actors and a system under description interact. Such sequences of events increase comprehension of the context in which the various features come into play. Use cases illustrate the system's functionality and do not usually include technical jargon. By utilizing language that the domain expert and end users understand, a broader group of stakeholders can discuss the system's behaviour (unlike more formal approaches). At LavaBlast, we often use textual use cases, but we find they lack the ease of use of a visual notation. They are great for explaining detailed scenarios, but non-technical people don't really want to read or discuss them. Here's where the work done in my thesis comes into play.

Over the past decade, researchers around the world developed a graphical notation to describe software requirements by assembling concepts from other notations into a single, easy-to-understand visual notation. This notation is called the User Requirements Notation (URN), which is composed of two sub-notations: the Use Case Map (UCM) notation and the Goal-Oriented Requirement Language (GRL). The Use Case Map (UCM) notation represents the functional and operational aspects of a software system. The abstraction level of UCMs makes them an ideal notation to express user requirements as high-level scenarios. As for the Goal-Oriented Requirement Language (GRL), it describes the non-functional aspects of a software system and is well adapted to modeling high-level business goals. Now that the boring technicalities are out of the way, what does this notation mean to you?

Goal-Oriented Requirement Language (GRL)

Above is a sample GRL diagram which represents various alternatives for creating a secure system. jUCMNav is a tool that makes it easy to create such diagrams and visualize the different alternatives using an automated evaluation propagation algorithm. Although one could draw such a diagram on paper or use the techniques commonly taught in management programs, the GRL notation combined with the powerful tool saves time. Simply put, to achieve the high-level goal of security, one needs to secure the terminal and the host. There are a number of ways one could achieve this; the sample GRL diagram shows a system where authentication is provided by swiping a cardkey and encryption is not used. In the end, security is achieved but is not very high. Although I do not wish to go into detailed explanations here, with a simple 10-minute tutorial, stakeholders can understand the notation and visualize the impact of certain solutions on their business goals.
Typically, at LavaBlast, we build the models ourselves to reveal the tradeoffs between various alternatives and present them in meetings. The goal here is to keep track of the motivations behind a certain decision. In a meeting, the stakeholders can discuss elements that may not be taken into consideration by the model, refine it iteratively, and observe the impact on the goals (automatically, thanks to the tool). In the end, a decision is made and the GRL model records the motivations.

Use Case Map (UCM) Notation

Above is a sample Use Case Map used in the context of a web store with a warehouse. Again, with a brief 10-minute tutorial about the notation, one can understand the modeled scenarios. In this case, we see a normal order where the user submits an order and waits for it to arrive. As an exercise, without any background in the UCM notation, you are invited to see if you can understand the above diagram. Complex details can be encapsulated inside the stubs (diamond figures), allowing for multi-layer models. After brief discussions with franchisors, the LavaBlast team can, thanks to the Use Case Map notation and jUCMNav, unify all the scenarios that were discussed and rapidly break them down into various interacting scenarios. Having a visual notation such as the one above helps clarify any vague issues and identify any problems very early in the requirements elicitation phase.

The model here identifies various issues that can arise when products are backordered; software engineers need to know about these fringe cases to build a robust system, and having such a tool helps minimize the costs incurred by discovering issues late in the game. Everyone can sit down around a table, discuss the scenarios which represent LavaBlast's understanding of a business process, and clarify any issues before any code has been written. Moreover, thanks to these scenarios, the software architects and the development team can review the modeled scenarios at a later date to ensure they are implementing the appropriate behaviour.

For decades, software engineers have understood the value of writing things down; that's why NASA has thousand-page documents for their software systems. However, we at LavaBlast take a much more lightweight approach to requirements engineering. Simple visual diagrams used in meetings, combined with online collaboration for requirements elaboration, are a perfect balance between traceability and pragmatism.

Conclusions

LavaBlast knows the best practices for writing good requirements and loves to have the customer participate in the process. We understand that requirements are only useful if they evolve with the system, and we use Web 2.0 collaboration tools to manage our requirements. Furthermore, we're dedicated to helping people understand the software system to be built. With our agile software development processes, we focus on clear understanding and incremental development. We want our customers to have as much fun developing a software system as we do, and that starts by minimizing useless repetitive paperwork and replacing hundred-page documents with clear and concise figures.

Side note

LavaBlast’s CTO was the lead software architect for jUCMNav and I was the lead developer. My main responsibilities were related to the manipulation, analysis, and transformation of Use Case Map models. This open-source tool has around 200 KLoc of Java, with about a quarter automatically generated from UML class diagrams.


