LavaBlast Software Blog

Help your franchise business get to the next level.

How I lost my WinDbg virginity

March 13, 2008 10:32 by JKealey

Jeff Moser's recent post on becoming a grandmaster developer was very inspiring. I'm now consciously forcing myself to work on different, harder problems on a daily basis. Today, I wanted to tackle a nasty memory leak on FranchiseBlast that had been bugging us for months. Admittedly, it only ever made the ASP.NET application restart twice (which has no impact because our session state is kept in the state service), but we'd been postponing this bug for months :)

The bug was simple to reproduce: refresh a particular page a hundred or so times and observe the worker process's memory usage continuously increase. dotTrace was not very useful today, because most of the memory was consumed by Strings.

I landed on the If broken it is, fix it you should blog. Reading the post and seeing how much more Tess knows about debugging .NET applications than I do, I figured I'd improve my .NET debugging skills by playing with WinDbg. My guess is that many of you haven't played with WinDbg either, which is why I thought I'd write this post. With no previous experience with the tool, I managed to learn enough to solve my bug within a couple of hours (learning time included). However, this is a situation where I am grateful to have had the chance to play with the low-level details (assembly language / computer architecture) during my undergrad software engineering studies.

WinDbg Installation

The article says to simply run "!dumpheap -min 85000 -stat" to figure out which objects are using over 85k of memory. Vaguely remembering doing something similar in VS.NET, I tried the command window, to no avail. After some googling, I understood we wanted to launch WinDbg with the SOS extension (Son of Strike... not Summer of Sam :)). I managed to get everything running (installed WinDbg, attached to w3wp.exe, ran ".loadby sos mscorwks") and later found setup instructions on the same blog, which would have made my life easier. I still haven't remembered (nor have I looked up) how to load SOS inside VS.NET instead of the external tool, but I'm pretty sure it can be done.
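For the record, here is a sketch of the sequence that got me going (the PID is obviously machine-specific; grab yours from Task Manager or tasklist):

```
windbg -p 1234               (attach WinDbg to your w3wp.exe process)
.loadby sos mscorwks         (in the WinDbg command window: load SOS from wherever the CLR came from)
!dumpheap -min 85000 -stat   (sanity check: statistics for objects over 85000 bytes)
```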

Playing around

I played around with various strategies found on the same blog, such as figuring out how much I was caching. I didn't think caching was my problem, but I wanted to play around with the tool.

Attempting to run an interesting command found on the aforementioned blog post, I ran into a syntax error:

.foreach (runtime {!dumpheap -mt 0x01b29dfc -short}){.echo ******************;!dumpobj poi(${runtime}+0x3c);.echo cache size

Of course, this was a blog formatting issue, as the real command was:

.foreach (runtime {!dumpheap -mt 0x01b29dfc -short}){.echo ******************;!dumpobj poi(${runtime}+0x3c);.echo cache size:;!objsize poi(${runtime}+0xc) }

It didn't teach me much :) One command that appeared to be very interesting to me was !dumpaspnetcache. This appeared to be exactly what I was looking for. However, the command is only available in SOS for .NET 1.0/1.1 and I'm trying to debug a .NET 2.0 application. I did lose some time unloading the current SOS and loading the one in the clr10 subfolder (which offers !dumpaspnetcache), but it complained that the command didn't work for .NET 2.0.

The main lesson learned here is that !dumpaspnetcache does not work on .NET 2.0. There is a workaround listed in the comments of the blog post, but I didn't try it out.

Solving a problem

  1. I identified that most of my memory was consumed by strings using !dumpheap -stat.
  2. I ran !dumpheap -min 85000 -stat to view only the largest strings. I observed only three strings (out of approximately 400,000) and that, in total, they accounted for less than 10% of the memory used by my strings.
  3. I ran !dumpheap -mt 790fd8c4 -min 85000 to view the memory locations of these strings (where the MT 790fd8c4 was in the results of step 2).
  4. I ran !dumpobj 0c6800c0 to display the contents of one of the strings (where the address 0c6800c0 was in the result of step 3.)
  5. I saw the string was a simple ASP.NET AJAX script built by the framework. Repeating for the other objects, I didn't find anything interesting. Besides, these were only 3 of my 400,000 strings.
  6. I ran !dumpheap -min 10000 -max 85000 -stat to get a feeling of how many objects were of a particular size. I repeated the exercise for various ranges and ended up discovering that 90% of my strings were under 100 bytes (50% of the memory usage).
  7. I ran !dumpobj 17386ce8 on one of the random 100 byte strings and discovered a country name. Interesting. This means I have a country name in here that is not getting disposed. I only have country names in my address editor, which uses a SubSonic DropDownList.
  8. I ran !gcroot 17386ce8 to look at who was referencing this string and confirmed that it wasn't getting collected by the garbage collector (see below).
  9. I looked at the handles and noticed that I had a user control which was listed as an event listener. Therefore, I discovered that we were registering the UserControl to listen to an event but never broke the link when we were done with the UserControl.
  10. Looking at the code, I saw we were adding the event listener in Page_Load but never removing it. I now simply remove the event handler during the Page_PreRender event, since listening to events is no longer necessary at that point in my scenario.
  11. I repeatedly loaded the page and saw memory go up, but come back down again once the garbage collector did its job. Problem solved! (knock on wood)

Here's the result of my !gcroot command.

DOMAIN(0E68E998):HANDLE(Pinned):ed712f0:Root:0a646bd0(System.Object[])->
17cd3dc4(System.EventHandler`1[[LBBoLib.BO.Utility.GenericEventArgs`1[[System.Web.UI.Page, System.Web]], LBBoLib]])->
161a6630(System.Object[])->
173c8e18(System.EventHandler`1[[LBBoLib.BO.Utility.GenericEventArgs`1[[System.Web.UI.Page, System.Web]], LBBoLib]])->
1736abe4(ASP.usercontrols_content_searchpanel_ascx)->
1626d460(ASP.users_aspx)->
1736e234(ASP.usercontrols_content_users_usereditor_ascx)->
1736ff4c(ASP.usercontrols_shared_user_accountinformation_ascx)->
1737d558(ASP.usercontrols_shared_user_addressselectionpanel_ascx)->
1737d974(ASP.usercontrols_shared_user_addresseditor_ascx)->
1737e044(SubSonic.Controls.DropDown2)->
173866dc(System.Web.UI.WebControls.ListItemCollection)->
173866ec(System.Collections.ArrayList)->
173894b8(System.Object[])->
17386d0c(System.Web.UI.WebControls.ListItem)->
17386ce8(System.String)
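In code, the fix from steps 9 and 10 looked roughly like this (a simplified sketch; SiteEvents and PageRefreshed are hypothetical names for illustration, not our actual FranchiseBlast types):

```csharp
public partial class SearchPanel : System.Web.UI.UserControl
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Subscribing this UserControl to a publisher that outlives the request
        // roots the control (and the whole page tree it references) in the
        // publisher's invocation list, so the GC can never collect it.
        SiteEvents.PageRefreshed += OnPageRefreshed;
    }

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // The fix: break the link as soon as we no longer need the event.
        SiteEvents.PageRefreshed -= OnPageRefreshed;
    }

    private void OnPageRefreshed(object sender, EventArgs e)
    {
        // React to the event while the control is alive.
    }
}
```

Forgetting the -= is exactly the kind of leak !gcroot surfaces: the long-lived event handler chain kept the DropDownList's ListItems, and therefore the country-name strings, alive.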

Conclusion

In total, I only spent a couple hours fiddling around with WinDbg but I am very happy to have a new tool in my tool belt (and to have solved my bug). I strongly recommend reading the various tutorials on Tess's blog. I also found a WinDbg / SOS cheat sheet, which you might find interesting.




Software Startup Lessons (Part 1): The Basics

March 11, 2008 08:49 by JKealey

Jennifer, one of my high school friends living in Montreal, recently started her own hair salon. Her employees live in Boston and Florida whereas her biggest customer is located in New York. That statement makes no sense in her industry, but it does make sense in ours! Software can be produced anywhere and delivered to customers worldwide via the Internet. Furthermore, our increased connectivity helps businesses target very specialized niches which would not be viable on the local market. (For an overview of the subject, please refer to the excellent The Long Tail.) We're pushing it one step further with LavaBlast, which we started about a year ago, because our team is distributed. We can be categorized as a bootstrapped MicroISV (less than ten people producing software products with no outside funding), and we'd like to share some of our experiences.

We love to read about other startups and why they succeeded, such as in Jessica Livingston's Founders At Work, but we're even more interested in companies that failed. On one hand, it is very motivating to read books such as Founders At Work and learn how the guys behind Hot Or Not started it all and got lucky with a very simple idea. On the other hand, knowing more about failures helps you learn from other people's mistakes. Books like Facts and Fallacies of Software Engineering helped, but are not start-up focused. If you're considering starting your own software startup, do take the time to read up on why startups fail, even if it is not the subject of today's post.

I hope you'll enjoy reading this series of posts and will be unable to resist posting comments and starting interesting discussions. Part 1 focuses on the basics, Part 2 revolves around communication, and Part 3 talks about what we've learned concerning marketing, sales, and growth.

Our Context

LavaBlast Software was started by two recent graduates from the software engineering program at the University of Ottawa in Ontario, Canada. After the bachelor's degree, Etienne moved to Montreal because of an attractive employment opportunity while Jason completed his master's. Both have been programming since their early teens on a wide variety of software systems in their various positions at companies of all shapes & sizes. From tiny web design shops, to medium-sized software firms in the electric industry, to second language schools for civil servants, to large Government departments, the founders had the chance to play with software from two perspectives. On one side we have small software companies who live and breathe software, and on the other we have large-scale businesses in which most people are not necessarily computer-savvy (except for the IT departments). Additionally, we sidelined as male strippers at bachelorette parties to pay for our university studies.

It did not take long to discover that we were so passionate about software that we wanted to start our own software company, not work in corporate Dilbert-reading sweatshops. Of course, this is simply a caricature, as we worked on very interesting/challenging systems while at larger companies. There's nothing wrong with working in IT departments, but it wasn't a fit for our entrepreneurial drive. On top of that, we love working on projects where we get direct feedback from our customers, instead of internal company tools.

It was clear we were going to start a software company, but we did not know what nor when. The original plan had been to acquire experience and a monetary cushion to allow us to start working on the next big thing. Before taking the plunge and leaving our day jobs, we wanted to have a good idea. We considered various ideas, but being recent graduates with student loans, going out on a limb and working with no revenue on a magical idea for an undetermined period of time was not a very attractive solution. However, we did find a very attractive business opportunity with people with whom Jason had been working for almost three years, on various web development projects. These partners have franchised their business and have acquired precious experience over the years. They are very knowledgeable and have a clear vision of the various business processes from the point of view of franchisors. After 20-some years in the business, they know what software is required, what to focus on, and how to support it.

Discussions ensued, and LavaBlast was formed. We produce industry-specific software solutions for franchisors. All our products are integrated via FranchiseBlast, which cuts down franchisor/franchisee costs at the same time as favoring re-use from a software engineering aspect. We have a base product offering which we tailor to our customer's specific business processes. In a sense, we sell software products with a side order of software services.

In summary:

  • No outside funding.
  • Small distributed team.
  • A mix of products and services.
  • We eat our own dog food.

Lesson 1) The Great Idea

Simply put, chances are you won't think of the next big thing. Even if you do, you probably won't have the resources, experience, or contacts to turn your idea into billions. We were not expecting miracles but were on the lookout many months in advance. One thing we learned while reading Founders At Work is that many software startups are formed without the faintest clue of what the product/service will be. The most important part of the company is not the idea but the people. A small and closely knit team of people who've worked together in the past is a recipe for success, regardless of the idea. Indeed, execution is key! Partnering with people who have a track record of bringing ideas to reality is, in our opinion, the most important part of the business.

We are very happy that we did not wait for a brilliant breakthrough idea and started our company immediately. We evaluated opportunities and picked the one that made the most business sense, knowing we can adapt along the way. Having taken the plunge early in our careers, we felt we didn't have much to lose because we didn't have many constraints such as leaving a high-paying job, a mortgage to pay, kids to feed, etc. The more acquainted you get with stable employment, the harder it is to leave. Still, it is worth mentioning that while we were very comfortable with our technical skills, our network of contacts is not as large as it would have been had we worked under someone else's employment for 5-10 years. On the bright side, we don't have 10 years of accumulated bitterness to carry around and we're in the best phase of our development careers (from a production standpoint).

Simply put, talk to people around you and reflect on what pieces of software would scratch your itch. There are business opportunities all around you... all you need to do is decide on what's the best fit for your current situation! Are you smart and can get things done? Are you able to adapt to your environment and pursue different opportunities when they present themselves? If so, don't wait for an idea that may never come and enjoy the ride!

Lesson 2) Eating your own dog food

It's been said time and time again: you need people using your software in order for its quality to reach high standards. Here at LavaBlast, we're fortunate to be able to develop UsWare because we're involved in our partners' day-to-day franchise operations. As developers, we proactively solve problems before anyone reports them because we feel like we are part of the franchise. Furthermore, since we built our infrastructure to be re-usable, we recently launched SupportBlast as a clone of our FranchiseBlast solution. SupportBlast is basically our mini-CRM and we've integrated our software registration and license key generator in there. Although the strategy does have its limitations, using our own products helps us boost our software quality. Furthermore, for our kind of software, it would have been impossible to develop anything coherent without actual customers.

It takes ten years to build a great software product, and it also takes ten years to become a software grandmaster. You understand that quality takes time and effort, and you'll only discover your software's weaknesses by listening to critique. It's better to be harsh on yourself than to listen to someone scream over the phone, don't you think?

Lesson 3) Mix of products and services

As stated previously, our company has currently received no outside funding (nor have we invested millions from our personal fortunes). We doubt angel investors or venture capitalists would be interested in our company because we're not aiming for a 100:1 return on investment. We could change our strategy and go for an all-or-nothing gamble, but we're more interested in growing our company organically. We know our company can be profitable using our current strategy, but we understand that we are not going to be the next FaceBook and my guess is that your company might fall into this same category. More importantly, we agree with Spolsky's model of company growth, and aim to be profitable from day one.

The main question for our type of company is: how can we pay the bills while developing our product? The holy grail for return on investment in our business implies no additional work on the developer's part to support a new customer. Microsoft doesn't need to make adjustments to Word to sell it to a new customer, and FaceBook allows its users to change the content of their personal pages themselves. On the opposite end of the spectrum, a good web design firm will create interfaces from scratch, adjusted to the needs (and tastes) of each of its clients. In the end, we want to be a product company, but we need to build good products (and accumulate a critical mass of customers) before it pays the bills. We wanted to start our business immediately, not work on it on weeknights and during weekends for a couple of years. Because of this, we decided our main interest was our product line but we'd perform services on the side as a short-term income generator.

We were open to external contracts, but did not end up doing any work outside of our product line. We're very happy about this, because the solution managed to sustain its own development without us having to dilute our efforts with side contracts (such as bachelorette parties).

Conclusion

Have fun but expect a bumpy ride. If you're not happy in your own company, doing what you want to do, you have a problem. Still, as with anything, you'll have some good days and some bad. In a startup, expect a roller coaster of emotions because you'll certainly get very excited by a certain situation but crash & burn when it doesn't materialize. In addition, not all work is exciting. You're starting a company and you have to do everything yourself. Doing accounting at 11PM on a Saturday night is not my idea of fun. Generally, however, it is very gratifying to build your own company from scratch and if you're fit for the challenge, I encourage you to try it out! The worst thing that can happen is that you will fail, lose some cash, gain valuable life lessons, and go back to working for someone else (or start over with a new idea :)).

Come back next week for part 2!




How Super Mario Bros Made Me a Better Software Engineer

February 28, 2008 00:07 by JKealey

Over the past month, I've been working hard on our business plan's second iteration. We've accomplished a lot in our first year and things look very promising for the future. Writing a business plan helps an entrepreneur flesh out various concepts, including one's core competency and niche market. We illustrate in great detail what makes LavaBlast such a great software company for franchisors, and the process of writing it down made me wonder about what improved my software engineering talents, personally. Luckily for you, this post is not about me personally but rather about an element of my childhood that impacted my career.

I don't recall exactly how old I was when I received an 8-bit Nintendo Entertainment System (NES) for Christmas, but I do remember playing it compulsively (balanced with sports like baseball, soccer and hockey!). The first game I owned was Super Mario Bros, but I later obtained its successors to augment my (fairly small) cartridge collection. For the uninitiated, the NES does not incorporate any functionality to allow saving the player's progress. Countless hours were spent playing and replaying the same levels, allowing me to progress to the end of the game and defeat my arch-nemesis, Bowser.

I enjoyed the time during which I played video games, and it irritates me to hear people complaining about how video games will convert their children into violent sore-loser bums. In any case, I'd rather focus on the positive aspects of playing Super Mario Bros and other video games during my childhood. Just like mathematics develops critical thinking and problem solving skills, I strongly believe these video games influenced my personality to the point where they probably defined my career. Disclaimer: I don't play video games that much anymore, but over the last year, I did purchase a Nintendo Wii and a Nintendo DS Lite because I love the technology and the company's vision.

Quality #1: Persistence

Some people say I am a patient person, but I would beg to differ. I have trouble standing still intellectually, and although it is a strength in my industry, it isn't the best personality trait :) However, I am VERY persistent. I will attempt solving a problem over and over until I find a solution. Although I don't experience many painful programming situations on a daily basis, I very rarely give up on any programming problem. If I can't solve the problem immediately, it will keep nagging at me until I find a solution. A direct parallel can be traced with playing the Super Mario Bros series, where the whole game had to be played over and over again to make any progress. (Anyone else remember trying to jump over a certain gap in the floor in the Teenage Mutant Ninja Turtles NES game, only to fall in the gap and have to climb back up again?) The games helped me train my persistence, a tool which any entrepreneur must use every day.

Quality #2: Pattern Recognition

Software engineering is all about pattern recognition: refactoring code to increase reuse, extracting behavioural patterns into the strategy design pattern, creating object inheritance trees, or writing efficient algorithms based on observed patterns. I feel pattern recognition is one of my strengths, since I can easily see commonalities between seemingly different software problems. I believe this skill was refined by playing various video games, because the player must observe the enemy's behaviour in order to succeed. In some games, agility doesn't really matter: it's all about knowing the pattern required to defeat the enemy (to the point where it sometimes becomes frustrating!). The most challenging parts of video games are when the game deliberately trains you to believe you'll be able to stomp an enemy by using a particular technique but, to your surprise, the technique fails miserably. You need to adapt to your environment and think outside the box.

Quality #3: Creativity

Mathematicians and software engineers are creative thinkers, more than the uninitiated might think. I see software as a form of art, because of its numerous qualities that are hard to quantify. Software creators are artists in the sense that regardless of their level of experience, some will manage to hit the high notes while others could try their whole lifetime without attaining the perfect balance of usability, functionality, performance, and maintainability. Playing a wide breadth of video game styles lets you attack different situations with greater baggage. I'm not totally sure how putting Sims in a pool and removing the ladder or shooting down hookers in Grand Theft Auto helped me in my day-to-day life, but it was still very entertaining :) The upcoming Spore game is very appealing to me because it combines creativity with pattern recognition, thanks to generative programming. If you haven't heard about this game, I recommend you check it out immediately!

Quality #4: Speedy reactions

At LavaBlast, as in many other software startups, it is critically important that all developers be fast thinkers. Indeed, when your core expertise is production, as opposed to research and development, you need to be able to make wise decisions in a short period of time. Personally, I can adapt to the setting (research environment versus startup environment) but my strength is speedy problem solving and I consider myself a software "cowboy". By combining my knowledge of how to write reusable and maintainable code with my good judgement of which battles are worth fighting, I can quickly come up with appropriate solutions, given the context. In video games, the player needs to react quickly to avoid incoming obstacles while staying on the offensive to win the game. Of course, the mental challenges we face in our day-to-day lives of developing software are much more complex than what we encounter playing video games (which train physical reaction time), but there is still a correlation between the two tasks.

Quality #5: Thoroughness

What differentiates a good software engineer from a plain vanilla software developer is their concern for quality software, across the board. Software quality is attained through the combined impact of numerous strategies, but testing software as you write it, or after you change it, is critical. For the uninitiated, a popular methodology is to test BEFORE you write code. In any case, this talent can also be developed by video games such as the classic Super Mario World (SNES), where the player tries to complete all 96 goals (72 levels) by finding secret exits. Thoroughness requires the player to think outside the typical path (from left to right) and look around for any secret locations (above the ceiling). Finding secret exits is akin to achieving better code coverage by trying atypical scenarios.

Quality #6: Balance

Playing Super Mario Bros as a child helped me develop a certain sense of balance between my various responsibilities (school) and entertainment activities (sports, games, social activities). If you're spending 16 hours a day playing World of Warcraft or performing sexual favors in exchange for WoW money, your mother is right to think that you have a problem. Launching a software startup is a stressful experience, and it helps to be able to wind down with a beer and a soothing video game. A quick 20min run on a simple game before bed can work wonders! Of course, it is no replacement for sports or social activities, but it sure beats dreaming about design patterns.

What's missing? 

In my opinion, there are two major qualities that video games don't impact, and having these two qualities is a requirement to becoming a good software engineer. First, video games do not help you interpret other people's needs. Second, video games do not help you communicate efficiently. What does? Experience, experience, experience. Being able to deal with people on a daily basis is mandatory, and the video games I played as a child did not help. However, this statement may no longer be true! Today, many massively multiplayer online games require good collaboration and organizational skills. Furthermore, the new generation of gaming consoles is using the Internet to allow people to play together.

Furthermore, I find games like the new Super Mario Galaxy (Wii) very interesting for future mechanical engineers. Indeed, the game presents a three-dimensional environment in a novel way, training the brain to think differently about three-dimensional space. You have to play the game to understand but, because the camera does not always show Mario from the same angle, you have to get a feeling for the environment even when you don't see it (you're on the opposite side of a planet) or are upside down on the southern hemisphere. I can imagine that children and teenagers playing the game today will find it easier to imagine an object from various perspectives when studying physics or mechanical engineering in university.

In conclusion, I admit my whole argument can be invalidated by saying that I played these types of games because I was inherently inclined toward the software engineering profession, but the commonalities are still interesting to review! What are your thoughts on the subject? What do you think drove you to this profession (or drove you away)?

Legal Disclaimer: Did you know that the usage of the title of "software engineer" is much more regulated in Canada than it is in the United States? Although I hold a bachelor's degree in software engineering, and a master's degree in computer science focused on requirements engineering, I can currently only claim the title of "junior engineer", as I recently joined the professional order.

Follow up: powerrush on DotNetKicks informed me that I'm not the only one who feels games influence software engineers.




Common console commands for the typical ASP.NET developer

February 25, 2008 14:19 by JKealey

As an ASP.NET web developer, there are a few tasks I perform often and am glad to be able to do from the command line. GUIs are great, but some things are simply faster via the command line. Although we do have Cygwin installed to enhance our tool belt with commands like grep, there are a few ASP.NET-related commands that I wanted to share with you today. Some of these are more useful on Windows Server 2003 (because you can run multiple worker processes), but I hope you will find them useful.

1) Restarting IIS

The iisreset command can be used to restart IIS easily from the command line. Self-explanatory.

Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted

2) Listing all ASP.NET worker processes

You can use tasklist to get the running worker processes.

tasklist /FI "IMAGENAME eq w3wp.exe"

Image Name PID Session Name Session# Mem Usage
========================================================================
w3wp.exe 129504 Console 0 40,728 K

You can also use the following command if you have Cygwin installed (easier to remember):

tasklist | grep w3wp.exe

w3wp.exe 4456 Console 0 54,004 K
w3wp.exe 5144 Console 0 101,736 K
w3wp.exe 2912 Console 0 108,684 K
w3wp.exe 3212 Console 0 136,060 K
w3wp.exe 852 Console 0 133,616 K
w3wp.exe 352 Console 0 6,228 K
w3wp.exe 1556 Console 0 155,264 K
w3wp.exe 3480 Console 0 6,272 K

3) Associating a process ID with a particular application pool

Should you want to monitor memory usage for a particular worker process, the results shown above are not very useful. Use the iisapp command to map each process ID to its application pool:

W3WP.exe PID: 4456 AppPoolId: .NET 1.1
W3WP.exe PID: 5144 AppPoolId: CustomerA
W3WP.exe PID: 2912 AppPoolId: CustomerB
W3WP.exe PID: 3212 AppPoolId: Blog
W3WP.exe PID: 852 AppPoolId: LavaBlast
W3WP.exe PID: 352 AppPoolId: CustomerC
W3WP.exe PID: 1556 AppPoolId: CustomerD
W3WP.exe PID: 3480 AppPoolId: DefaultAppPool

By using iisapp in conjunction with tasklist, you can know which task is your target for taskkill.
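For example, to recycle the Blog application pool's worker from the listings above (the PID will differ on your server; /F forces the kill):

```
iisapp                  (map each w3wp.exe PID to its application pool)
taskkill /F /PID 3212   (kill the Blog pool's worker; IIS spawns a fresh one on the next request)
```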

4) Creating a virtual directory

When new developers check out your code for the first time (or when you upgrade your machine), you don't want to spend hours configuring IIS. You could back up the metabase and restore it later on, but we simply use iisvdir. Assuming your root IIS site has good default configuration settings for your project, you can create a virtual directory like so:

iisvdir /create "Default Web Site" franchiseblast c:\work\lavablast\franchiseblast\

5) Finding which folder contains the desired log files.

IIS saves its log files in %windir%\System32\LogFiles, but it creates a different subdirectory for each web site. Use iisweb /query to figure out which folder to go check out.

Connecting to server ...Done.
Site Name (Metabase Path) Status IP Port Host
==============================================================================

Default Web Site (W3SVC/1) STARTED ALL 80 N/A
port85 (W3SVC/858114812) STARTED ALL 85 N/A
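The number in the metabase path is the site ID, and with default logging settings the log folder name is simply W3SVC followed by that ID. For the port85 site above, that means (assuming the default W3C extended log format):

```
cd %windir%\System32\LogFiles\W3SVC858114812
dir     (the log files are named ex<yymmdd>.log by default)
```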

6) Many more commands…

Take a peek at the following articles for more command-line tools that might be useful in your context:

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/b8721f32-696b-4439-9140-7061933afa4b.mspx?mfr=true

http://www.tech-faq.com/using-iis-command-line-utilities-to-manage-iis.shtml

Conclusion

There are numerous command-line tools distributed by Microsoft that help you manage your ASP.NET website. Obviously, the commands listed here are the tip of the iceberg! Although many developers know about these commands because they had to memorize them for some test, many are not even aware of their existence. Personally, I feel that if you write a single script that sets up IIS the way you need it for development, you'll save time when setting up new developers or re-installing your operating system. Script it once and reap the rewards down the road.
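As a sketch of what such a one-time setup script might look like (the site names and paths here are examples, not our actual layout):

```
@echo off
rem One-shot IIS setup for a new developer machine.
rem Create the virtual directories the projects expect:
iisvdir /create "Default Web Site" franchiseblast c:\work\lavablast\franchiseblast\
iisvdir /create "Default Web Site" supportblast c:\work\lavablast\supportblast\
rem Restart IIS so everything starts clean:
iisreset
```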




Manage your ASP.NET Web.config Files using NAnt

clock February 19, 2008 13:24 by author JKealey

Nothing is more important than the software engineers in a software company. I just finished re-reading Joel Spolsky’s Smart and Gets Things Done and it inspired this post. Not only do I admire his writing style, I share Joel’s vision of how a software company should be run. Pampering your programmers is the best decision a manager can make, especially when you’ve built a team that can hit the high notes.

One of Joel’s key elements to running a successful software company is to automate your build process up to the point where it only takes one operation. This minimizes the chance of error while letting your developers devote their grey matter to something more complex than copying files. Although they might use the extra time to shop around for student loan consolidation plans (a practice called reverse telecommuting), in most cases they’ll return to writing fresh code or cleaning out existing bugs.

Today’s post is about one of the little things that made my life much easier as a developer: using NAnt to manage our software product line. I’ve come to realize that we encounter these little “sparks” every day, but we never talk about them. Sure, we’ve produced a number of complex software products and they are fun to describe, but I personally enjoy talking about the little things that save time, just like Henry Petroski verbosely describes common items in his books. Fortunately for you, I’ll keep the story short, unlike his description of the evolution of the paper clip in The Evolution of Useful Things (which is still an interesting read, by the way).

Background

We develop lots of ASP.NET websites. Our architecture includes database schemas and business objects shared amongst multiple projects, plus some common utility libraries. Furthermore, instead of always inheriting from System.Web.UI.Page and System.Web.UI.UserControl, we have an object-oriented inheritance tree, as is good software engineering practice. We even have a shared user control library that gets copied over after a successful build. Finally, we use ASP.NET master pages and ASP.NET themes to structure our designs. As opposed to what you see in textbooks, where themes can be chosen by the user according to their preferences (oh yes, please show me the pink background with fluffy kittens), we use themes to represent different franchise brands.

My point here is that reusability is key to our solution. We build elements that we can use not only on the website but also in FranchiseBlast, the interactive kiosk, and the point of sale. However, the more you re-use, the more things get complicated. Indeed, the overhead caused by the added configurability we build into our reusable components is non-negligible. We're always on the lookout for new ways to keep things simple, while still reaping the benefits of reuse. We use the Strategy Design Pattern to encapsulate the behavioural changes in our systems and put our various configuration settings inside our Web.config file.

Hurdle #1: Different developers need different web.config files

Our configuration files have a few settings that we want to change on a per-user basis:

- Where should we email exception notifications?

- Database names & file paths

- Google API Keys

How do we manage this? If we put our Web.config file under source control, we'll end up with various conflicts when the developers change the configuration file to suit their tastes. I don't know about you, but I have better things to do than start memorizing API keys or digits of PI.

Solution #1

Our first solution wasn’t fantastic, but it was sufficient for a while. We simply removed the Web.config from source control and created new files, one for each developer (Web.config.jkealey, Web.config.etremblay, etc.) and one for the deployment server (Web.config.server1). When a change was to be made, we whipped out WinMerge and changed all the files. You can quickly understand that this process does not scale well, but it was sufficient for small projects with 2 to 3 developers.

Hurdle #2: Scaling to more than a couple machines

We deploy our point of sale software and kiosks via Subversion. It might be fun to use WinMerge to compare a couple of Web.config files, but when you’ve got a hundred web applications to update to a new version by comparing Web.config files, you’ve got a problem. Doing this by hand wasn’t very difficult, but it was error-prone and time-consuming. I don’t know if you have seen the Web.config additions that ASP.NET AJAX brought to the table, but upgrading from a release candidate of Atlas to the full release of ASP.NET AJAX was painful (we’re not talking about half a dozen settings in the appSettings section).

Solution #2

1) Create a template Web.format.config that contains the general Web.config format, with certain placeholders for variables that vary on a per-developer or per-machine basis.

2) Create a web.default.properties that contains the default settings for the web.config

3) Create a web.developername.properties for each developer that simply overrides the default settings with other values when needed.

4) Write a script to replace the placeholders in the Web.format.config and generate your Web.config.developername files for you.

We implemented this strategy using NAnt. Our script does a bit more work because we’ve got interrelated projects, but I will describe the base idea here.

Examples:

Here is a portion of our web.format.config file:

[...]
<appSettings>
    <add key="GoogleMapsAPIKey" value="${GoogleMapsAPIKey}"/>
</appSettings>
<system.web>
   <healthMonitoring enabled="${healthMonitoring.enabled}">
       <providers>
           <clear/>
           <add type="System.Web.Management.SimpleMailWebEventProvider"  name="EmailWebEventProvider"
               from="${bugs_from_email}"
               to="${bugs_to_email}"
               subjectPrefix="${email_prefix}: Exception occurred"
               bodyHeader="!!! HEALTH MONITORING WARNING!!!"
               bodyFooter="Brought to you by LavaBlast Software Inc..."
               buffer="false" />
       </providers>
   </healthMonitoring>
</system.web>
[...]

Property files

Our default settings look something like the following:

<project>
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKdddd8YoBaAeaBR0a45XuIX8vaM2H2dddddQpMmazRQ30ddddPdcuXGuhMT2rGPlC0ddd" />
    <property name="healthMonitoring.enabled" value="true"/>
    <property name="email_prefix" value="LavaBlast"/>
    <property name="bugs_to_email" value="info@test.com" />
    <property name="bugs_from_email" value="exception@test.com" />
</project>

 

Our per-developer files include the default settings, and override a few:

<project>
    <!-- load defaults -->
    <include buildfile="web.default.properties"   failonerror="true" />   
        
    <!-- override settings -->
    <property name="GoogleMapsAPIKey" value="ABQIAAAAkzeKMhfEKeeee8YoBaAeaBR0a45XuIX8vaM2H2eeeeeQpMmazRQ30eeeePecuXGuhMT2rGPlC0eee"/>
    <property name="bugs_to_email" value="jkealey@test.com" />
</project>

The NAnt script

We wrote a NAnt script that runs another NAnt instance to perform the property replacements; the core code comes from Captain Load Test. It is a bit slow because we have to re-invoke NAnt, but it doesn’t appear that you can dynamically include a properties file at runtime. Feel free to comment if you find a way to make it more efficient. We don’t keep the generated files under source control, as we only version the property files.

<project name="generate configs" default="generate ">
    <property name="destinationfile"   value="web.config" overwrite="false" />  
    <property name="propertyfile"  value="invalid.file" overwrite="false" />  
    <property name="sourcefile"   value="web.format.config" overwrite="false" />
 
    <include buildfile="${propertyfile}"   failonerror="false"   unless="${string::contains(propertyfile, 'invalid.file')}" />   
    
    <target name="configMerge">    
        <copy file="${sourcefile}"  tofile="${destinationfile}" overwrite="true">
            <filterchain>
                <expandproperties />
            </filterchain>
        </copy>
    </target>
 
    <target name="generate ">
        <property name="destinationfile" value="web.config.${machine}" overwrite="true"/>
        <property name="propertyfile" value="web.${machine}.properties" overwrite="true"/> 
        <property name="sourcefile" value="web.format.config" overwrite="true"/>
        <echo message="Generating: ${destinationfile}"/>
        <!--<call target="configMerge"/>-->
        <exec program="nant">
            <arg value="configMerge"/>
            <arg value="-nologo+"/>
            <arg value="-q"/>
            <arg value="-D:sourcefile=${sourcefile}"/>
            <arg value="-D:propertyfile=${propertyfile}"/>
            <arg value="-D:destinationfile=${destinationfile}"/>
        </exec>
    </target>    
</project>

Hurdle #3: Software Product Lines

Up to now, we’ve talked about taking one project and making it run on a number of machines, depending on a few preferences. However, we’ve taken it one step further because our web applications are part of a software product line. Indeed, we have different themes for different brands. Different companies have different configuration settings and site map files. Therefore, we needed to be able to generate configuration files for each brand AND for each machine. This also greatly increases the number of configuration files we need.

Solution #3

It wasn’t very difficult to expand to this new level of greatness thanks to the script presented in hurdle #2. We basically have default configuration files for each project (themes, sitemap, name, locale, etc) in addition to the files we’ve shown above. We simply have to load two configuration files instead of one.

We even wrote a batch file (SwitchToBrandA.bat) that generates the property file for the current machine only (via the machine name environment variable) and it replaces the current Web.config. By running one batch file, we switch to the appropriate product for our current machine.
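We haven’t shown our batch file here, but a minimal sketch of what SwitchToBrandA.bat does, based on the NAnt script from hurdle #2 (the brand property file name and the -D:brandfile property are hypothetical), could look like:

```batch
@echo off
REM Regenerate the per-machine Web.config for brand A.
REM %COMPUTERNAME% picks up web.<machine>.properties, as in the script above;
REM the brand's own property file would be loaded the same way (assumed here).
nant -nologo+ -q -D:machine=%COMPUTERNAME% -D:brandfile=web.brandA.properties

REM Overwrite the active Web.config with the freshly generated file
copy /Y web.config.%COMPUTERNAME% Web.config
```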

Future work

Currently, it takes a couple of minutes to create a new brand or add a new developer. It doesn’t happen often enough to make it worthwhile for us to augment the infrastructure, but it is a foreseeable enhancement for the future. I guess another future work item would be to hire someone who is an expert in build automation, test automation, and automatic data processing! :) These are skills they don’t teach in university, but should!




I, for one, welcome our new revision control overlords!

clock February 11, 2008 10:35 by author JKealey
Be a lazy developer!  You know you deserve it.

I’ve been developing websites professionally for almost nine years now and although I still party like it’s 1999, my productivity has increased greatly thanks to better tools and technologies. Due to recent dealings with the IT department of another firm, I remembered the fact that although we think that all developers are using source control (CVS, Subversion, etc.), this is not the case. There are lots of corporate developers out there who don’t follow the industry’s best practices. This post is not about using version control tools for source code… it’s about re-using the same tool for deploying websites, instead of FTP. We assume you know what source control is and are interested in using it in novel ways. 

A few days ago we were visiting the facilities of a company which provides services of interest to one of our franchisor customers. As our specialty is the integration of external systems with FranchiseBlast, our franchise management tool, we wanted to know how the data would be able to move back and forth. One of the sophisticated options available to us was the use of FTP to transfer flat files in a specific file format, not that there’s anything wrong with that!  Indeed, when your core business doesn’t require lots of integration with your customers, there is no need to re-invent your solution every three years with newer technologies. You can afford to stick with the same working solution for a long period of time! (We will obviously continue to push web services, as it is much easier to write code against a web service than to pick up flat files from an FTP server!)

Integration and automation reduce support costs for the franchisor

We’re always looking at pushing the envelope and we know that software integration, and especially automation, is the key to cutting down labor costs. If your business processes include lots of manual labor, we feel it is worthwhile to take the time to investigate replacing a few of those steps with software-based solutions (focus on the 20% of steps that make you lose 80% of your time). Wouldn’t you rather play with your new puppy than copy-paste data from one place to another? A typical example of the integration we built into FranchiseBlast and the point of sale is the automatic creation of products in stores, once they are created on FranchiseBlast. Our franchisees save lots of time not having to create their own products and we avoid the situation where an incorrect UPC is entered only to be discovered months later.

Furthermore, although your mental picture of a franchisor might boil down to someone lounging in their satin pyjamas in front of the fireplace, sipping some piping hot cocoa, while waiting for the royalty check to come in, this is very far from the truth. Supporting the franchise is a major time consumer but if you can manage to reduce and/or simplify all the work done inside the store, you can greatly reduce time spent supporting the stores.

Enough about the franchise tangent; talk about web development!

Integration and automation do not apply only to your customers: any serious web development firm still using FTP to deploy websites should consider the following questions. By a serious firm, I mean you’ve got more than your mother’s recipe website to deploy and you build dynamic web applications that change regularly, not a static business card for your trucker uncle.

  • Are you re-uploading the whole site every time you make an upgrade?
  • Are you selecting the changed files manually or telling your FTP client to only overwrite new files?
  • Is someone else also uploading files and you’re never sure what changed?
  • Do you care about being able to deploy frequently without wasting your time moving files around?
  • Do you have to re-upload large files (DLL, images) even if you know you only changed a few bytes?
  • Did you ever have to revert your website back to a previous version when a bug was discovered?
  • Do you upload to a staging area for client approval, then deploy to the production server?

If you answered yes to any of these questions, you’re probably wasting your precious time and (now cheap) bandwidth. Yes, I know it is fun to read comics while you’re uploading a site, but you’re not using technology to its full potential.

Source control technology has been around for decades and hopefully you’ve been using it to collaborate with other developers or designers when creating websites. Even if you work alone, there are several advantages to using CVS or Subversion, for example. You may be wondering why I am talking about source control in the context of web deployment but I hope the astute reader will know exactly where I’m headed.

Why not deploy your websites using source control tools such as Subversion?

There are probably lots of you out there that already do this but there may be some people that never thought outside the box. By sharing this with you today, I hope to help at least one person cut down an hour per week spent in deployment. We’ve experienced the benefits and wanted to share them with you. 

We prefer Subversion over CVS for multiple reasons, but one that is of particular interest here is the fact that it can do binary diffs. If you recompile your ASP.NET website, you’ll generate new DLL files that are very similar to the previous ones. The same thing happens when you’re changing images. Thanks to this Subversion feature, you only have to upload the bytes that have changed inside your files… as opposed to the whole files!  Furthermore, as with most source control management tools, you can copy a file from one project inside your repository to dozens of others, without taking up additional space on the disk.

You can create a separate repository for your deployment files (separating them from your source code) and checkout the project on the production server. Later on, when you commit your changes, you can simply perform an update on the production server. You could even automate the update process using post-commit actions.
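As a sketch, on a Windows server Subversion runs hooks\post-commit.bat after every commit; a hypothetical hook (the path is made up) that refreshes the staging checkout could be as simple as:

```batch
REM hooks\post-commit.bat -- Subversion passes the repository path (%1)
REM and the new revision number (%2) as arguments.
REM Update the staging site's working copy to pick up the committed changes.
svn update C:\inetpub\staging\mysite --non-interactive
```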

There are numerous advantages to deploying using a tool such as Subversion:

  • You only commit what has changed and the tool tracks these changes for you.
  • Only changes are transferred, not the whole site.
  • If someone other than you fools around with the deployed site, you can immediately see what was changed.
  • You can easily revert back to an older version of the site

You should automate the deployment to the staging area (using continuous integration or post-commit scripts) but keep deployment to the production server manual. Automatic deployment to your staging area means you can commit at 4:50PM on a Friday afternoon, wait for the successful build confirmation, and head off to play hacky sack with your high school buddies without having to manually redeploy the development machine. At 5AM on Saturday morning, when your early-riser client gets up and has a few minutes to spare before reading the paper, he can load up the development server and play with the latest build.

What about my database?

The concept of database migrations is very useful in this context; if you use tools that have database migrations built in (such as Ruby on Rails), then you are in luck. Otherwise, it gets more complicated. We’re waiting for database migrations to be supported by SubSonic before investing much effort in this area (although I don’t recall ever having to revert a production server back to a previous version). For our business, this is a must-have feature because it allows us to revert a store’s point of sale to a stable build should the software explode in the middle of a busy Saturday. Even better, should a fire destroy the store’s computers, we can reload that store’s customized version within minutes. (We also do nightly off-site data backups of the sales information.)

In any case, we recommend you take a peek at the series of posts made by K. Scott Allen, referenced in this article by Jeff Atwood.

How can I save more time?

The answer is simple: by scripting the copying of files (locally) from your development copy to the checked-out folder of the production repository. This can be as simple as using the built-in deployment tools available in your IDE (such as VS.NET’s deployment functionality) or writing a script that copies all files of a particular extension from one place to the other. Eventually, you’ll need to adapt your script to your particular needs, if you’re wasting too much time copying large files that never change, for example. This step depends on your project and its environment. I will describe in a future post how we use NAnt to manage our software product line. Kudos to Jean-Philippe Daigle for helping us out in the early days.

Concrete examples

LavaBlast deploys its point of sale application via Subversion; every computer we manage has its own path in the deployment repository. This allows for per-store customizability and is not as redundant as it may sound, because of Subversion copies. Furthermore, when the POS communicates with FranchiseBlast, we track exactly which version of the software is running in that particular store (via the Subversion revision number). We also track a revision number for the database schema. Having this bird’s eye view of the software deployed in the franchise lets us easily schedule updates to subsets of the stores. At the point where we are now, we could easily write code that would signal to a store that it should upgrade itself to a certain revision at a certain time and date. Ultimately, instead of spending my time copying files, I am available to write blog posts like this one!

Conclusion

By moving away from FTP, we’ve considerably cut down the time it takes to deploy a website. We invested time in this infrastructure early on, allowing us to concentrate on design, coding, and testing as opposed to deployment. Of course, FTP still has many uses outside of the context we describe here! FTP has been around for a long time and will not disappear any time soon!



CheckBoxList hover extender

clock January 22, 2008 09:16 by author JKealey

Summary

This article presents a CheckBoxList extender that enables the user to hover over individual checkboxes in the list and see a popup with additional information.  The information is populated dynamically (via a web service call) depending on the hovered checkbox's value.

[VIEW THE ONLINE DEMO]

[DOWNLOAD THE CODE] (469.00 kb)

Where could this be useful?

If you use the ManyToManyList in SubSonic, you know that it inherits from the built-in CheckBoxList control to display data from a many-to-many database relationship.  You probably also know that this can be really useful, but it has its limitations.  We are using the SubSonic many-to-many list to display a list of stores that a user can be associated with.  With the ManyToManyList control, you can display only one field of information per item for the foreign table in the relationship (unless you use views!).  Here is an example:

[Screenshot: the list of stores]

For a particular user, we list all possible stores and show the store ID as the item description.  This is meaningful for some users, but when a new user comes in and doesn't know the store identifiers, you have a problem. All franchise stores share the same store name as they do business under the same name; they can only be uniquely identified by their franchisor-assigned ID or by their address. However, we don't want to show too much information in this control, as it would clutter the interface in our system. We need a way to display more information per item, without taking up too much screen real estate.

Our solution is very much inspired by the AjaxControlToolkit controls, such as the HoverMenu: we display extra information when the user hovers over an item in our CheckBoxList (or SubSonic ManyToManyList).  We created an AJAX Control Toolkit extender that asynchronously calls a web service method (or a page method) to obtain the information displayed in the popup control when the user hovers over an item.  Here is an example of the result:

[Screenshot: the hover popup over a checkbox item]

How to use it

You need a CheckBoxList, a panel to display the information, our extender, and a web service method to invoke.

<asp:CheckBoxList ID="CheckBoxList" runat="server">
    <asp:ListItem Text="Item #1" Value="1" />
    <asp:ListItem Text="Item #2" Value="2" />
    <asp:ListItem Text="Item #3" Value="3" />
    <asp:ListItem Text="Item #4" Value="4" />
    <asp:ListItem Text="Item #5" Value="5" />
</asp:CheckBoxList>

<asp:Panel ID="panelInfo" runat="server" CssClass="checkboxlisthover">
    <asp:Label ID="lblTest" runat="server" Text="Label"></asp:Label>
</asp:Panel>

<ajax:CheckboxListHoverExtender
id="checkboxlistext" runat="server"
TargetControlID="CheckBoxList"
PanelID="panelInfo"
DynamicServiceMethod="GetContent"
DynamicControlID="lblTest"
DynamicServicePath="~/CheckBoxList.asmx" />

The web service class should look something like this:

[ScriptService]
public class CheckboxList : WebService
{
    [WebMethod]
    [ScriptMethod]
    public string GetContent(string contextKey) { return ""; }
}

Implementation

A web service method is called with the value of the hovered checkbox.  When you DataBind the CheckBoxList, it is very important to assign a value to each of your ListItems.  In this example, each checkbox has a GUID value.  This GUID is passed as a parameter to the web service call automatically by the extender.  The popup panel is then filled with the information returned by the web service.

As stated previously, the CheckBoxListExtender control is very much inspired by the HoverMenu extender.  The two controls have similarities, but we can't use the HoverMenu directly in the CheckBoxList because we don't have access to the item template of a CheckBoxList.  This prevents us from using the built-in HoverMenu extender for each CheckBoxList item.

Coding a new extender

To code a new extender, you can use existing extenders to simplify your life: that's what we did for the CheckBoxListExtender.  It re-uses the HoverExtender and the PopupExtender.  Those two extenders are not on the sample pages of the AjaxControlToolkit (we see the HoverMenuExtender and PopupControlExtender, but not the two we are using here), but they are in the source code if you want to look at them.  Basically, when we coded the CheckBoxListExtender, we had to declare the scripts we depend on:

[RequiredScript(typeof(CommonToolkitScripts))]
[RequiredScript(typeof(HoverExtender))]
[RequiredScript(typeof(PopupExtender))]
[RequiredScript(typeof(AnimationExtender))]

[Designer(typeof(CheckboxListHoverDesigner))]
[ClientScriptResource("LavaBlast.AJAX.CheckboxListExtender.CheckboxListHoverBehavior", "LavaBlast.AJAX.CheckboxListExtender.CheckboxListHoverBehavior.js")]
[TargetControlType(typeof(CheckBoxList))]
public class CheckboxListHoverExtender : DynamicPopulateExtenderControlBase

As you can see, the extender inherits from DynamicPopulateExtenderControlBase.  This means that the extender can dynamically populate the control via a web service call and all the necessary plumbing is already in place. Specifying the scripts you depend on is as easy as using the RequiredScript attribute on your extender class.

JavaScript behavior

As for the JavaScript, for each "TD" inside our CheckBoxList control, we create a HoverBehavior (this comes from the HoverExtender).  Each time the HoverBehavior's events fire, we can react to them.  In this case, we simply activate the PopupBehavior to show the popup panel and call the web service method to populate its content.  As the value of each checkbox in the list is not contained in the DOM of the page (most probably a security feature of ASP.NET), you have to somehow pass this information from the server to the extender behavior.  Since we couldn't find a way to pass a list of values from the server to the behavior using a generic List variable, we simply used a string of comma-separated values.  Right now we're using this:

[ExtenderControlProperty]
[DefaultValue("")]
[Browsable(false)]
public string Values
{
    get { return GetPropertyValue("Values", ""); }
    set { SetPropertyValue("Values", value); }
}

But we would much rather use the following:

[ExtenderControlProperty]
[DefaultValue("")]
[Browsable(false)]
public List<string> Values
{
    get { return GetPropertyValue("Values", ""); }
    set { SetPropertyValue("Values", value); }
}

It appears generic lists are not supported, unless we are mistaken. If someone knows if this is possible, please leave us a comment on this post.

Don't forget to look at the online demo!

CheckBoxListHoverExtenderDemo.zip (469.00 kb)



Dirt Simple ASP.NET CMS using the ScrewTurn Wiki

clock January 22, 2008 00:08 by author JKealey

A year ago we wanted to quickly integrate the capabilities of a content management system in a customer’s website. Budget was limited but so were the requirements.

  • The user SHALL be able to change a few (a dozen) paragraphs on the website. 
  • The user SHALL be able to use basic formatting (bulleted lists, headers, images) without knowing HTML.

The lengthy option was the integration of a powerful CMS and the shorter one was to create something quickly using one of the many open source rich text editors found on the Internet and a simple database table. We didn’t really feel like coding that infrastructure at that point for various reasons. 

At this point, we were already using a wiki for requirements management and task planning for this customer.  On very complex projects, we prefer TWiki because we had already used its metadata and form capabilities to make it easy to collaboratively work on software requirements back in 2005. However, we had installed the ScrewTurn wiki (an open source wiki in ASP.NET) for this customer, as its installation only takes a few seconds. We decided we would dynamically integrate content from our wiki into our website, which was sufficient for our customer for the time being.

We took a shorter lunch break that day and coded a dirt simple CMS application that queries the ScrewTurn wiki to obtain paragraphs of text. We simply made an HttpWebRequest to the printable version of the wiki page, cleaned out a bit of HTML markup that we did not need and cached the result. Using the control is then straightforward.

Register ScrewturnVisualizer in our Web.config (system.web, pages, controls):

<add tagPrefix="LB" assembly="LavaBlast" namespace="LavaBlast.CustomControls" />

Add the base information in our Skin to avoid repeating it everywhere:

<LB:ScrewturnVisualizer runat="server" BaseURL="http://ourclientwiki.lavablast.com" CssClass="ScrewTurn" />

Add the control on the appropriate pages:

<LB:ScrewturnVisualizer ID="stv1" runat="Server" PageName="CurrentSpecials" />

Today, we’ve moved on to a full-fledged CMS and no longer use this code, but the attached code may still help someone out! We’re big fans of incremental engineering and this half hour of coding helped keep our clients happy while we moved to a better solution.

Side note: In terms of open source licences, I’ve always wondered what this would imply. ScrewTurn is GPL (as opposed to LGPL) and I’m curious to know if this would imply that websites using it as a simple CMS would have to be GPL as well. Because we’re making use of an online service (the code can quickly be adapted to work for any wiki or other website) and not extending the codebase, I think we’re not bound by the GPL. Any thoughts?

ScrewturnVisualizer.zip (1.41 kb)



VS.NET 2008 IDE Frozen After Compilation

clock January 16, 2008 21:08 by author JKealey

Each and every time I've compiled my solution over the last month, I've had to wait approximately 30 seconds for the IDE to resume after compiling. It simply said Build failed / Build succeeded and would not let me click anywhere in the UI for a painful half-minute (which I put to good use).

At first, I assumed this was another "feature" of VS.NET that was causing me pain and suffering, just like when VS.NET 2005 decided to make me pay for cheating on it with another IDE while I was working on a few open source Java projects in Eclipse (StatSVN and jUCMNav).

Evil VS.NET 2005

I decided to launch VS.NET 2008 in Safe Mode to verify if this would fix my problem and, to my surprise, it did. I probably have an evil add-in that was migrated from VS.NET 2005... to be investigated.

Update: Even in safe mode, it started doing it again after a couple of hours of work. I'll file this into the "unexplained" category, alongside my mysterious "clipboard doesn't work for large images when memory usage is too high". From what I can see, 32-bit Windows with the /PAE or /3GB options simply doesn't work. I need to upgrade to a 64-bit OS to use my 4 GB of RAM because the startup options aren't working well for me.



A first in the competitive stuff-your-own teddy bear industry

clock December 9, 2007 11:45 by author JKealey

We have just released a new feature on the Teddy Mountain Stuff-Your-Own Teddy Bear Website which we believe is an industry first. Teddy Mountain sees itself as the most innovative teddy bear franchise, and they use LavaBlast's technology for both their supporting infrastructure and their client-facing applications.
We've done a number of industry-specific solutions for this franchisor:

  • Interactive Kiosk with webcam and touch screen to create your own teddy bear adoption certificate.
  • Point of Sale: Integrated with the kiosk, our point of sale is easy to use and reduces training costs.
  • Website: Online sales via an e-commerce engine with a few teddy bear industry-specific features.
  • FranchiseBlast: Our centralized management and collaboration portal which ties everything together, providing a single environment to modify the product line, store pricing, collaborate with others, share documents, view reports, etc.



Today, we augmented the integration between the points of sale distributed across the world and the Teddy Mountain website. We are now allowing members of the frequent buyer program to view adoption certificates they have created in brick and mortar stores… online! Simply put, existing PaLS members from a select number of stores are able to see their frequent buyer card balance and birth certificate history on the Teddy Mountain website. We believe no other franchisor in the teddy bear industry has done anything similar, and we are proud to see Teddy Mountain lead the way. Of course, for privacy reasons, only those who sign up for the program will have their pictures made available online to share with their friends and family.

We feel that, in the long term, this feature will improve gift card sales from out-of-town family members as the donor can receive visual feedback from the recipient, via the Internet.

If the feature attracts some interest, we are open to implementing new features such as integration with Facebook or using Google's OpenSocial API. We shall also add features such as emailing certificates to friends and family with a greeting.




Disclaimer

The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2017
