LavaBlast Software Blog

Help your franchise business get to the next level.

SubSonic Limitations - Part 2 (aka: Knee deep in … SubSonic.)

December 4, 2007 22:02 by JKealey
[Photo: Knee deep in snow.]

After my recent post asking for the most elegant way to support multiple databases with the same schema at runtime, I received some good pointers in the SubSonic forums from dbr, a forum user. In the end, I admit I should have done my homework before posting.

One elegant solution for changing the SubSonic Provider/ConnectionString at runtime makes use of SharedDbConnectionScope. I personally do not like this solution, as I prefer my code to state explicitly what it is doing via its properties or arguments instead of relying on contextual information. I was also concerned about how it behaves with regard to concurrency, so I did a little digging. Looking at the code, I discovered it internally uses the ThreadStatic attribute, which seems like a godsend at first, but further investigation suggests the implementation may be flawed. I did see people complain that it didn't work for them, but I don't know whether that is related to the ThreadStatic attribute. I do not fully trust this technique, but I may be wrong, as I am far from an expert in concurrency.
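For reference, using the scope looks roughly like this. This is a minimal sketch, not code from our project: the WebUser class, the connection string name, and the exact SharedDbConnectionScope constructor overload are all assumptions on my part.

string cs = System.Configuration.ConfigurationManager.ConnectionStrings["Website42"].ConnectionString;

using (SubSonic.SharedDbConnectionScope scope = new SubSonic.SharedDbConnectionScope(cs))
{
    // While the scope is alive, SubSonic is supposed to route data access through
    // this connection instead of the provider's default one. Internally, the scope
    // is stored in a [ThreadStatic] field, which is the part I am wary of under
    // ASP.NET's thread pool.
    WebUser user = new WebUser(42);       // hypothetical generated ActiveRecord, fetched by primary key
    user.Email = "someone@example.com";
    user.Save();
}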

Returning to dbr's suggestion: he simply generates a different provider at runtime for each connection string. This sounds simple, provided you can manage to modify the ProviderName property on the collection (ActiveList) or object (ActiveRecord) every time you load from or save to the database. Without resorting to SharedDbConnectionScope, you can't use the auto-generated constructors, because they fall back to the default provider, which is hardcoded in the generated object's schema.

The elegant way to encapsulate loading from and saving to the database is to use a controller, as suggested by the MVC design pattern. I have not yet played with the new MVC templates provided by SubSonic, but we already use a good generic SubSonicController here at LavaBlast.

I wanted to rewrite my object loading/saving code using this new solution to get rid of my inelegant concurrency locks. Although the change looked obvious, I encountered a few little hiccups along the way, and I thought I'd post my findings here.

Limitation 1: You can't create an object by specifying its ProviderName in an ActiveRecord constructor using the default generated code.

  • Workaround 1: Load it using a collection, which supports ProviderName.
  • Workaround 2: Use SharedDbConnectionScope.
  • Workaround 3: Change the code templates to add new constructors (see the sketch below).
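If you go the template route, the change is tiny. The following is only a sketch: WebUser is a made-up class name, and the new constructor simply delegates to whatever initialization your template already emits.

// Hypothetical constructor added to the ActiveRecord class template so that a
// record can be created against a specific provider from the start.
public WebUser(string providerName) : this()
{
    // The protected ProviderName setter (see Limitation 3) is reachable here
    // because we are inside the generated class itself.
    ProviderName = providerName;
}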

Limitation 2: You can't use a collection's Where parameter to load your data (via its primary key or another filter) because of incomplete framework code. Hopefully this will be resolved soon (see issue 13402).

  • Workaround: Copy-paste the code used internally by the Collection, but pass the extra ProviderName parameter to the new Query (see the sketch below).
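Concretely, that gives something like the following sketch. WebUser/WebUserCollection are placeholders, and the Query constructor overload taking a provider name is the detail I would double-check against your SubSonic version.

// Build the query ourselves, the same way the collection does internally,
// so that we can hand it an explicit provider name.
SubSonic.Query qry = new SubSonic.Query(WebUser.Schema.TableName, "Website42Provider");
qry.AddWhere(WebUser.Columns.WebUserID, 42);

WebUserCollection coll = new WebUserCollection();
coll.LoadAndCloseReader(qry.ExecuteReader());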

Limitation 3: You can't specify the ProviderName property on an ActiveRecord because the setter is marked as protected.

  • Workaround 1: Change the code templates and add a public method/property that sets ProviderName from within the class.
  • Workaround 2: Use SharedDbConnectionScope.

Limitation 4: When you load an ActiveRecord by running a query or by loading a Collection, the ActiveRecord does not inherit the ProviderName from the Collection/Query. This is probably due to Limitation 3.  
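One way around it is to push the provider name back onto each record right after the load. Continuing from the sketch under Limitation 2, and using the hypothetical SetProviderName method added through the templates in Limitation 3's workaround:

// The records come back pointing at the default provider, so fix them up
// before any of them gets saved.
foreach (WebUser user in coll)
{
    user.SetProviderName("Website42Provider");   // hypothetical template-added setter
}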

My current prototype no longer uses the C# lock keyword. I create instances of a controller, passing in the name of the connection string to use. All database loading/saving is done via this controller, for which I have attached sample code extracts. I managed to get the object loading code to my liking, but I had to resort to SharedDbConnectionScope for saving. Once the minor limitations in the framework are resolved, I will be more comfortable with the code.
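In rough strokes, such a controller has the following shape. This is a trimmed-down sketch under the same assumptions as above (WebUser, SetProviderName, and the provider naming convention are placeholders), not our production code.

// Sketch of a provider-aware controller: loads go through an explicit provider
// name, saves fall back to SharedDbConnectionScope until the framework catches up.
public class WebUserController
{
    private readonly string providerName;
    private readonly string connectionString;

    public WebUserController(string connectionStringName)
    {
        connectionString = System.Configuration.ConfigurationManager
            .ConnectionStrings[connectionStringName].ConnectionString;
        providerName = connectionStringName + "Provider";   // assumes one provider per connection string
    }

    public WebUserCollection FetchAll()
    {
        // Same pattern as the Limitation 2 workaround.
        SubSonic.Query qry = new SubSonic.Query(WebUser.Schema.TableName, providerName);
        WebUserCollection coll = new WebUserCollection();
        coll.LoadAndCloseReader(qry.ExecuteReader());

        foreach (WebUser user in coll)
            user.SetProviderName(providerName);   // hypothetical template-added setter

        return coll;
    }

    public void Save(WebUser user)
    {
        // Saving still goes through the scope, since the provider name alone was
        // not enough to get updates routed to the right database.
        using (SubSonic.SharedDbConnectionScope scope = new SubSonic.SharedDbConnectionScope(connectionString))
        {
            user.Save();
        }
    }
}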

In summary, I did manage to get a working prototype, and I have attached the relevant portions of code, which work with data from the appropriate database (chosen at runtime). Hope this helps!



SubSonic Limitations

November 30, 2007 10:23 by JKealey

Question: What is the most elegant way to reuse the code generated by SubSonic for tables that share the same schema across different databases?

  • Ideally, I would have a shared schema definition, generated once, and seamlessly integrated into the code generated for each separate provider.
  • Creating a separate DataProvider for a subset of tables reduces the amount of code that is generated, but is not very convenient to use if you do not use the same namespace for all your projects.
  • Creating a separate DataProvider does not solve the problem of database selection at runtime.

[Image: Multiple Databases, Same Schema]

LavaBlast's integrated franchise management solution operates on a centralized database and a data warehouse which collects data from all our points of sale. Recently, we decided we wanted to create management pages for our various e-commerce websites in our centralized portal. Because our recently developed e-commerce backend is the same as our point of sale (reuse++), we automatically obtained features like centralized product line and pricing management for our store fleet (featureSynergy++). However, we wanted to be able to process website users and orders from this same central portal, not on each individual site.

My first question was: how do we get the union of the data from the same table in multiple databases? One solution would be to merge these into the data warehouse, but we didn't want to build complex infrastructure to bring the data into the warehouse and push changes back out when necessary. I suppose having everything in the same database in the first place would be a solution, but that is not how we architect our systems. SQL Server replication might be useful, but it is not bidirectional with SQL Server Express. I could easily write a view with a UNION query that merges the data from the set of databases, but that would be a maintenance problem: for each table, I would have to hardcode the list of databases.

I wrote a quick stored procedure that builds the UNION query from a table of Website-to-DatabaseName mappings, given a few parameters. It is inefficient and not strongly typed (hence it feels dirty), but given the volume of data on these sites, it is good enough for now without being a maintenance pain. By passing a few parameters to the stored procedure, we can filter the rows before the union, which improves performance. I am curious to know if there are more elegant solutions to this problem.

Anyhow, with this first problem solved, we could bind our GridView to a DataTable produced by executing a stored procedure and see the merged results. However, because we have a standard infrastructure that makes good use of SubSonic magic for filtering, paging, and sorting, this was not enough. Our infrastructure only works on views or tables in our central database, not on arbitrary results returned by stored procedures. Therefore, SubSonic did not generate any code for the merged tables in the central database. Still, thanks to the SubSonic provider model, we managed to load a collection based on a type defined in one DataProvider (point of sale) using data provided by the stored procedure in another DataProvider (central server). Below is an example without any filtering, sorting, or paging.

// Ask the central database's stored procedure for the UNION of this table
// across all websites (no filtering, sorting or paging here).
SubSonic.StoredProcedure sp = SPs.WebsiteUnionOfTables(POSBOLib.Generated.ShoppingCart.ViewWebUser.Schema.TableName, "*", string.Empty, string.Empty);

// Hydrate a strongly-typed collection (generated for the point of sale provider)
// from the reader returned by the central server's stored procedure.
POSBOLib.Generated.ShoppingCart.ViewWebUserCollection coll = new POSBOLib.Generated.ShoppingCart.ViewWebUserCollection();
coll.LoadAndCloseReader(sp.GetReader());

With a bit more work on the stored procedure, we could make it more efficient, but we don't want to use T-SQL all that much, in order to keep everything easy to maintain. (We could use CLR stored procedures, but that's another story.)

My second question was: how am I going to update this data? When manipulating the data, I know which database it comes from thanks to an additional column appended by my stored procedure, but I cannot create an updatable SubSonic object with this, and I don't feel like writing SQL anymore now that we use SubSonic. However, the DataProvider name is a hardcoded string in the auto-generated code… and changing the templates to pass in extra parameters looks like too much work, in addition to breaking the simplicity of the framework.

Having played with the DataProvider model, one idea that came to me was to switch the provider context dynamically at runtime. The framework doesn't support this, so I had to hack it in and make sure all my data access was contained in critical sections (using the lock keyword), each of which begins with a call to a method that switches the default provider.
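Roughly, the switch and its call sites look like the sketch below. This is a simplified illustration rather than the exact code; DataService.Provider and DefaultConnectionString are my best guesses at the members involved, so double-check them against your SubSonic version.

private static readonly object databaseLock = new object();

// Points SubSonic's default provider at another database's connection string.
// This mutates shared state, hence the locking around every data access.
private static void SwitchDefaultProvider(string connectionStringName)
{
    SubSonic.DataService.Provider.DefaultConnectionString =
        System.Configuration.ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
}

// ... and, inside whatever method does the data access, every call ends up
// wrapped like this, which is what I dislike:
lock (databaseLock)
{
    SwitchDefaultProvider("Website42");
    // regular SubSonic loads/saves against the chosen database
}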

Another option, which just came to me now, would be to obtain the SQL generated by SubSonic during an operation and perform string replacements to forward the requests to the appropriate database. However, this too is too much of a hack, since it depends on implementation details and on the DBMS.

In conclusion, I did manage to build a working prototype using locks and the code above, but I feel the code is too dirty, and I am open to suggestions from SubSonic experts (I'm looking at you, Rob Conery and Eric Kemp). If there is a clean way to do this, I would love to contribute it to the SubSonic project!

Read Part 2.




Disclaimer

The opinions expressed herein are my own personal opinions and do not represent my employer's views in any way.

