July 2009 Blog Posts
Standards? We don’t need no stinkin’ standards!

Interesting article here about Google and how they do markup on their search page.

Apparently, they have determined that not closing the <body> and <html> tags on their pages somehow has a performance benefit, and so they ignore what most people would consider to be Markup 101 (not to mention the other ‘issues’ mentioned there).

I wonder how long it will be until someone brings this issue up to me as a reason why markup validation doesn’t matter.

While building an e-Com system, my business partner and I came to the conclusion that just because Amazon does something, it doesn’t necessarily mean it is a good practice to follow.  I think the same can be said about eBay, Google, <insert mammoth system here>, etc.

I’m still going to try to write valid markup when I can.

posted @ Friday, July 31, 2009 10:53 PM | Feedback (2)
Maybe we should talk about Sustainability instead

So, the topic of ‘Maintainability’ has come up again in various forums, sparked by Frans Bouma with his blog post and other comments.  Jeremy ‘counting till 10 till jdn calls me names on the net again’ Miller has this, Ayende has this, Patrick Smacchia has this, Jimmy Bogard has this (okay, maybe this one is slightly off-topic), and I’m sure there are a couple dozen/hundred more that could be listed.

I’ve said various things about the topic here, here, here, as well as (sort of) here and here.  Gosh.  That’s a lot of ‘here’ here.

A ton of what I’ve read over the past few years about maintainability has focused on developer activities.  And what I’ve argued (or at least stated) previously is that maintainability as a concept only really makes sense when tied to the context of those who are maintaining it.  For the sake of argument, let’s assume that it is a given that, in some abstract sense, writing a .NET web application using ASP.NET MVC is better than using ASP.NET web forms, if only because MVC better separates concerns by default.  It’s easy to caricature things, and since it’s my blog, let’s do that: cramming everything into the code behind of a web form, in some abstract sense, is clearly worse than separating out concerns into separate models, views and controllers.  If you have a direct data access call in every web form, then if you need to change the implementation of those calls, you have to (potentially) re-write those calls in multiple places, whereas you have to re-write them in fewer places otherwise.  This is a given.
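To make the caricature concrete, here is a minimal sketch (Python purely for illustration, since the post is about ASP.NET; all names are made up) of the difference between duplicating a data access concern in every page and separating it out:

```python
# Caricature 1: every "web form" does its own data access.
# Changing the connection details means editing every handler.
def product_page(product_id):
    conn = {"server": "db01", "db": "store"}   # duplicated here...
    return "SELECT * FROM Products WHERE Id = %d" % product_id, conn

def order_page(order_id):
    conn = {"server": "db01", "db": "store"}   # ...and duplicated again
    return "SELECT * FROM Orders WHERE Id = %d" % order_id, conn

# Caricature 2: the data access concern lives in exactly one place,
# so a change to the implementation is a change to one function.
def get_connection():
    return {"server": "db01", "db": "store"}

def product_page_v2(product_id):
    return "SELECT * FROM Products WHERE Id = %d" % product_id, get_connection()

def order_page_v2(order_id):
    return "SELECT * FROM Orders WHERE Id = %d" % order_id, get_connection()
```

Both versions behave identically today; the point is only how many places you touch when the data access implementation needs to change.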

However, if you have a web application using web forms, you have a much larger pool of candidate developers who are familiar with that way of doing things than with MVC.  I’m leaving aside the fact that you can use the MVP pattern with web forms, and also leaving aside the fact that you can teach people, and, with some exceptions, you can teach people to use MVC even if they are familiar with web forms.

The point is, when talking about the maintainability of an application, at a developer level, you should take into account who is writing the code now, and who will be maintaining the code in the future.

Many people will argue (or at least state) that any application that follows, e.g., SOLID principles is, in principle, more maintainable than one that doesn’t.  I agree with this, as long as I’m allowed to point out the ‘in principle’ part.  If you are in charge of maintaining an application of any significant size, you have to take into account who you think will be doing the maintaining.

Regardless of all that, what I think is really more important when it comes to deciding things (is that vague enough?  Yes, I think it is) is to talk about the sustainability of an application.

Sustainability involves the entire life-cycle and process chain, which includes how the organization that has/runs the application gathers requirements, validates functionality, develops code, deploys the application, and manages it in production.  This is *always* determined within a context, the context of the organization as a whole.

One of the fun things about being a developer who reads a lot of this sort of stuff is to read about all of the ‘newer’ (not ‘new’ in the sense of invented, usually, but ‘new’ in the sense of learned from other developer communities, for instance) ways of writing code, code that is easier to change and manipulate down the road.  But, in my experience, the ‘developer’ side of an application is often a much smaller piece of the puzzle than other factors.

Suppose for the sake of argument that you need to implement a new piece of functionality in an existing application that was built with only the slightest understanding of SOLID principles.  Suppose this new piece of functionality takes a month to implement in development.  For the sake of argument, suppose that you know that if the application had been built with even a slight understanding of SOLID principles, you could implement it in a week within development.  I think that some/many/most people would say that the latter case was more maintainable, in some abstract sense.

But suppose that, regardless of the amount of actual development time involved, it still takes a month to manage the requirements, and then a month to QA the functionality, and then a month to deploy the functionality.  The development part of the equation starts to lose importance.  If it also requires a significant retraining of production support staff to learn new ways of figuring out production problems, it really starts to lose importance.

Learning how to build a sustainable application involves a hell of a lot more than just having the development team employing the latest tricks and techniques that they read off the latest post from their favorite blog.  Though this might seem controversial, having an application that has less separation of concerns but is easier for the production support staff to understand, so that that staff can potentially fix production problems without having to call the developers who developed it, is in many instances more sustainable.  And, as someone who managed 2nd and 3rd shift employees, I want to make it clear that this isn’t a matter of insulting production support staff.  But, the fact of the matter is that, generically speaking, brilliant developers don’t want to work 2nd and 3rd shifts.

None of this should be taken as a suggestion that you should lobotomize your development team.  Far from it.  But, it should be taken as a suggestion that when it comes to developing software, what makes it more maintainable for the developers might not be the most important thing. 

If it takes 12-18 months for a business requirement to make it into the developer pipeline (which I’ve experienced, theoretically), then whether it takes a month versus a week for the developer to get their job done is rather irrelevant.  It would be much more important in that context to get the business requirement into the developer pipeline in even 2 months than to spend all that much time worrying about whether a developer has to cut and paste some code.

Sustainable software development requires focusing on the entire process, not just on what developers do.  Even though what developers do is usually the fun stuff.

posted @ Thursday, July 30, 2009 12:46 AM | Feedback (2)
LINQ to Events

I can’t say that I fully grasp all of this, but it appears that this is the way to do asynchronous programming going forward with .NET 4.0.

The IObservable/IObserver interfaces are in .NET framework 4.0.  I want to stress that IObservable is the new asynchronous programming pattern in .NET.  It supplants the Begin/EndInvoke pattern as well as the event-based asynchronous pattern.  Simple rule of thumb: if the method is asynchronous, return an IObservable.
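For those (like me) still wrapping their heads around it, here is a rough Python analogue of the IObservable/IObserver contract (an illustration of the shape of the pattern, not the actual framework types):

```python
class Observer:
    """Rough analogue of .NET's IObserver<T>: three notifications."""
    def on_next(self, value): ...
    def on_error(self, error): ...
    def on_completed(self): ...

class Observable:
    """Rough analogue of IObservable<T>: Subscribe returns a way to unsubscribe."""
    def __init__(self):
        self._observers = []

    def subscribe(self, observer):
        self._observers.append(observer)
        # The returned callable plays the role of IDisposable.Dispose().
        return lambda: self._observers.remove(observer)

    def publish(self, value):
        for obs in list(self._observers):
            obs.on_next(value)

    def complete(self):
        for obs in list(self._observers):
            obs.on_completed()

class Collector(Observer):
    def __init__(self):
        self.values, self.done = [], False
    def on_next(self, value): self.values.append(value)
    def on_completed(self): self.done = True

source = Observable()
collector = Collector()
unsubscribe = source.subscribe(collector)
source.publish(42)
source.complete()
```

The asynchronous part is that `publish` can be called from anywhere, at any time; the consumer just declares what to do when values arrive, when an error occurs, and when the stream ends.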

posted @ Thursday, July 23, 2009 3:07 PM | Feedback (0)
Strong Opinions, Weakly Held

A few years ago, Bob Sutton blogged about the idea that one should have strong opinions, weakly held.  From the post:

…weak opinions are problematic because people aren’t inspired to develop the best arguments possible for them, or to put forth the energy required to test them.

But at the same time:

…(it is) just as important, however, to not be too attached to what you believe because, otherwise, it undermines your ability to “see” and “hear” evidence that clashes with your opinions. This is what psychologists sometimes call the problem of “confirmation bias.”

Now, I wouldn’t go as far as Sutton does in tying this notion to the concept of wisdom, but you can see, in all walks of life and just about every field of endeavor, how not following this advice can lead to problems.  As I’ve blogged about previously and use as a common example, Agile fanatics seem to fall prey to confirmation bias all the time (“I tried Agile techniques on this project and it failed”…”Well, then you weren’t really doing Agile”), and it also seems pretty common when it comes to unit testing and TDD, where even the possibility that unit testing might not be useful in some contexts boggles some minds.  Which is silly, and really unnecessary.  If unit testing helps (making up a number) 87% of the time, then you are still better off doing it, most of the time.  Refusing to acknowledge counter-examples just makes one look foolish.

There are, of course, times when the proper course is to have strong opinions, strongly held, but it is generally good advice to consider following the path of strong opinions, weakly held.

posted @ Thursday, July 23, 2009 2:17 PM | Feedback (0)
Office 2010 Send A Smile


The taskbar suddenly shows these weird-looking icons, one a yellow smiley face, one a red frowny face.

This allows you to send messages to Microsoft about your experience with bugs.  Or I guess happy things with the smiley face.  What would you send then?  My document wasn’t corrupted?

Anyway, I discovered a problem with a plugin, the pdfmoutlook plugin, that prevented me from replying to messages in Outlook (something you typically want to do, reply to a message).  Sent a Frown for that.

posted @ Tuesday, July 14, 2009 9:44 PM | Feedback (0)
When TDD can't help you - A case study: Paypal Express Checkout

If you’ve ever been able to work for a period of time where you could use TDD, BDD, or just plain ol’ unit testing, then you’ve probably also experienced the ‘joy’ of having to go back to working the good old-fashioned way, what we might call “Brute Force Integration Testing,” otherwise known (when developing web applications) as “Click through the damn thing and see what happens.”

Recently, I’ve had such a joyful experience, and its name is Paypal Express Checkout (henceforth “EC”).

I’m assuming most everyone knows what Paypal is.  When designing a checkout process, you typically want to have standard credit card processing, and Paypal does offer that.  However, Paypal makes it a requirement that you also implement EC.  Since that is a standard addition to many a checkout process, this is okay.  Well, sort of.

The Express Checkout flow is pretty basic:

  1. From the shopping cart (as well as from the Payment Information page, more on that later), you click the Checkout with Paypal button….(digression: the button includes the tagline “the safer, easier way to pay.”  Client wants to know why the tagline is there.  I don’t know, it is part of the branding requirements, and I’m a stickler for following agreements.  Client points out that Nike’s store has a button without the tagline.  I point out that Nike probably is able to dictate more favorable terms.  But I digress).
  2. You are redirected to Paypal’s site, where the user logs in, reviews various information, and clicks the Continue button.
  3. You come back to your order review page, and place the order.

Seems simple enough, right?  Right.

The Sandbox

Obviously, you want to test out your implementation of EC before you go live.  Right?  Right.  So, Paypal offers their Sandbox environment.  This is an environment that allows you to make sure things work correctly in an environment that ‘mimics’ how the live environment works.  Sort of.

You set up a developer test account within the sandbox, and it requires you to take various steps that mimic what you would do in the live version.  Sort of.

You log in and then you set up some sandbox test accounts.  Specifically, you set up a ‘buyer’ account and a ‘seller’ account.  Because Paypal thinks it is good to mimic the live environment, you have to log in to the developer site, then launch the test site and log in as the seller, and accept the billing agreement.  This is okay, except if you decide to reset the seller account (more on this later).

To ‘aid’ in the process, Paypal provides a number of resources.  Specifically, there is an online wizard that generates code for you to create a local test site, as well as sample code to download that shows you how to implement their API.  There’s also a wealth of documentation to read.

In theory, all of this is grand.  You would follow something like the following flow:

  1. Create a local test site.
  2. Run the online wizard to generate the code you need to test out the various steps of the Paypal EC process.
  3. From that code, implement EC in your real site (though still using the sandbox).
  4. Promote to production.

Nothing can possibly go wrong.  Right?  Right.

The 3 Steps of EC

  1. SetExpressCheckout – send initial information to Paypal.  This will create for you a Token, which you will use in the other steps.  It also determines what the user sees when they are redirected to Paypal.
  2. GetExpressCheckout – after the user is redirected from Paypal when clicking the Continue button, this allows you to get any information that was produced when the user was at Paypal.  For instance, the user can set the shipping information, and you want to have this information for your own site.
  3. DoExpressCheckout – this ‘finalizes’ the order (whether it actually finalizes or just authorizes depends on what you tell it to do.  Typically, you don’t capture a payment until the order is shipped, so you typically just authorize the payment.  There are various rules about this, but that is beyond the scope of this post).
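For concreteness, here is a sketch of the three calls as the raw name-value pairs you POST to the API URL (Python purely for illustration; the credentials are placeholders, and note that the wire-level METHOD names for steps 2 and 3 are the slightly longer GetExpressCheckoutDetails and DoExpressCheckoutPayment):

```python
from urllib.parse import urlencode

# Sandbox NVP endpoint; the live endpoint differs only in hostname.
SANDBOX_NVP_URL = "https://api-3t.sandbox.paypal.com/nvp"

# Placeholder credentials; these come from your seller test account.
CREDS = {"USER": "seller_api_user", "PWD": "xxx", "SIGNATURE": "xxx",
         "VERSION": "53.0"}  # 53.0 or higher; see below

def set_express_checkout(amount, return_url, cancel_url):
    # Step 1: send initial order info; Paypal's response contains a TOKEN.
    return urlencode({**CREDS, "METHOD": "SetExpressCheckout",
                      "AMT": amount, "RETURNURL": return_url,
                      "CANCELURL": cancel_url})

def get_express_checkout_details(token):
    # Step 2: after the buyer returns, pull back what they entered at Paypal.
    return urlencode({**CREDS, "METHOD": "GetExpressCheckoutDetails",
                      "TOKEN": token})

def do_express_checkout_payment(token, payer_id, amount):
    # Step 3: authorize (not capture) the payment; capture happens at shipping.
    return urlencode({**CREDS, "METHOD": "DoExpressCheckoutPayment",
                      "TOKEN": token, "PAYERID": payer_id,
                      "PAYMENTACTION": "Authorization", "AMT": amount})
```

Each function just builds the request body you would POST to the NVP URL; the response comes back as another string of key-value pairs to parse.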

Once you begin to implement the three steps, you find a whole bunch of fun stuff.

Online Wizard Generated Code

The skillful, snide reader will come up with a Microsoft joke here, but the code generated by the online wizard, which is complete in the sense that it covers all three steps, has a slight flaw: it won’t compile, because of a number of syntax issues.  Not hundreds or anything, but enough to make you notice.  Okay, one syntax issue will make you notice.  This was my first indication that the QA department is not heavily involved with the sandbox environment.  But, that was easy enough to fix, to get it to compile.  And running through the local test site indicated success.  Which is always good.


All calls to Paypal are sent to a quasi-web service.  I say ‘quasi’ because you don’t send the typical web service requests (although there is a SOAP API that lets you go that route); instead, you send key-value pairs to a URL.  For instance, to set the amount of the order being processed, you would send something like:

AMT=25.00

Immediately, you will notice that you don’t send “AMOUNT” but “AMT.”  Why?  Hell, who knows.

The astute reader will note that, however you implement this within your own code, there is a bunch of magic string magic happening.  I created wrapper classes that allowed for strongly-typed coding within my code, but as you can imagine, I had a few issues where I fat-fingered strings.  This is okay, but one thing about the API is that if you pass in a key (say “AMOUNT” instead of “AMT”) that the Paypal server doesn’t recognize, it doesn’t throw an exception, it just ignores it.  I guess this is okay, but it can cause issues.
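My wrapper classes were .NET, but the idea translates; here is a sketch (all names invented) of centralizing the magic strings so that a fat-fingered key fails loudly on your side instead of being silently ignored by Paypal’s server:

```python
class NvpRequest:
    # The one place the magic strings live: a typo here is a single
    # point of failure, rather than a literal scattered across the code.
    _KEYS = {"amount": "AMT", "currency": "CURRENCYCODE", "token": "TOKEN"}

    def __init__(self):
        self._pairs = {}

    def set(self, name, value):
        try:
            key = self._KEYS[name]
        except KeyError:
            # Unlike the Paypal server, fail loudly on an unknown field
            # instead of silently ignoring it.
            raise ValueError("unknown field: %s" % name)
        self._pairs[key] = str(value)
        return self

    def to_pairs(self):
        return dict(self._pairs)

req = NvpRequest().set("amount", "25.00").set("currency", "USD")
```

With this shape, passing “amuont” blows up in your face immediately, which is exactly what you want given that the server’s policy is to ignore unrecognized keys.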

Moreover, the API handles versioning by requiring a version key-value pair with each call, along the lines of:

VERSION=53.0

This turns out to be very important, and very misleading (more in a minute), as certain functionality is only available with certain versions.  As far as things go, I guess this is okay.

Passing Order Details and Documentation Fun

One thing that you will naturally want to do is to pass to Paypal the details of the items that are in your shopping cart.  The documentation that is available tells you in great detail how to do this.  Except for a few problems.

For one thing, while there is a lot of documentation that is linked to directly from the Paypal developer site, a lot of important documentation is only findable if you use Google.

Another big thing is that no single piece of documentation clearly states what version is required for certain functionality to be, well, functioning.  If you want to pass Order Details to Paypal, you have to be using Version 53.0 or higher.  If you didn’t search Google and get to the forums, you wouldn’t know this.  Since the online wizard generates code that passes the Version as 51.0, you might theoretically spend a lot of time wondering why Order Details don’t show up in the sandbox.

What makes this even better is that the downloadable sample code, which includes a DLL for you to reference in your .NET solutions, apparently hard-codes the Version to 3.0, so that even if you pass in a different Version, it overrides it.  Which means that if you use the DLL, you simply cannot get Order Details to show up.  Theoretically, this can cause you to spend 10+ hours trying to get this functionality to work in a situation where it simply will never work, unless you dump the DLL and write some code in your actual site that mirrors what the local test site does.

Overriding Shipping Information

When you implement EC, you also have to allow the user to use Paypal from your normal checkout’s payment information page, which will typically come after the user has entered in shipping information.  Naturally, you will want to send this info to Paypal.  This is where the magic string fun comes in.

Paypal’s documentation clearly states how to send in the actual shipping info.  You pass in “SHIPTOSTREET”, “SHIPTOCITY”, etc. etc. etc.  All good.  Except you also have to set the variable “OVERRIDESHIPPING”, or something like that, for Paypal to recognize what you are trying to do.  This is in some of the documentation; I just didn’t see it at first.  Or second.  Or third.  The response that the API sends you won’t tell you if something is missing, it just won’t do what you want it to do.  This is a RTFM issue, but it would be helpful if there was ONE manual to RTF.
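For reference, the pairs in question look something like the following sketch (the override flag in the Classic NVP API is, if I finally have it right, ADDROVERRIDE; treat the exact names here as from-memory rather than authoritative):

```python
def shipping_override_pairs(street, city, state, zip_code, country="US"):
    # Without the override flag set to 1, Paypal ignores the SHIPTO*
    # values and shows the buyer's address on file instead -- with no
    # error in the response to tell you that it did so.
    return {
        "ADDROVERRIDE": "1",
        "SHIPTOSTREET": street,
        "SHIPTOCITY": city,
        "SHIPTOSTATE": state,
        "SHIPTOZIP": zip_code,
        "SHIPTOCOUNTRYCODE": country,
    }

pairs = shipping_override_pairs("1 Main St", "Chicago", "IL", "60601")
```

These pairs get merged into the SetExpressCheckout request alongside the amount and return URLs.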

The actual testing experience

This is where the disconnect from TDD or any sort of unit testing becomes obvious.

If you want to test how EC works, you have to launch your eCom site and work through the process.  But that’s not all.  In a separate instance or tab of the browser you are testing with, you have to log into the developer Paypal site first.  I’m not entirely sure why, but from my scanning of the forums, it has something to do with how cookies are used.

This is one of those things where I’m sure there is a solid technical/architectural reason for why it has to work this way, but it is highly annoying.  If your session at the developer Paypal site times out, your testing of your own site will fail.

Moreover, within the sandbox, you can only log in with the test accounts you set up.  You have to log in with the buyer test account; otherwise nothing works.

As you can imagine, unless you are a Selenium freak of the highest order, you can’t automate any of this.  So, not only do you have to “click through the application and see what happens,” you also have to login to a completely different site ahead of time to get it to work.

In the grand scheme of things, this isn’t necessarily the most horrible thing in the world.  I recently had a conversation with someone complaining that their automated test suite’s run time went from 2 minutes to 4.  Having been at a contract where it took 10 minutes to build and get to a logon screen, I found this funny.

But once you become addicted to running a single test that takes 10 seconds (at most) to run, having to move to a scenario where you have to click through an application, even if it actually only takes 2 minutes, seems to take an hour.

Magic strings

Another issue I theoretically ran into involved the DoExpressCheckout process, which is what occurs when you get back to your Order Review process and try to process the order.  I was able to get this process to work in the local test site, but I kept getting an error when trying to process it on the ‘real’ local site.

It turns out that I was passing in:


instead of


I’m pretty sure that I saw some documentation that said either would work, but only the latter did.

The ‘hilarious’ thing about it was that the error I was getting only occurred when I was trying to process orders that had multiple items in the cart.  If you only have one item, Paypal doesn’t try to add up the various amounts, so single item orders always worked, even though I was passing in bad info.  Why does it only try to add up amounts when you have multiple items in your cart?  Who the hell knows?
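As best I can reconstruct it, the arithmetic Paypal enforces on multi-item carts rolls up like this (field names per the NVP docs; the exact rule is my reconstruction, not gospel):

```python
from decimal import Decimal

def order_amounts(items, shipping="0.00", tax="0.00"):
    # items: list of (unit_amount, quantity) pairs.
    # Paypal appears to require that ITEMAMT == sum(L_AMTn * L_QTYn)
    # and AMT == ITEMAMT + SHIPPINGAMT + TAXAMT.
    itemamt = sum(Decimal(unit) * qty for unit, qty in items)
    amt = itemamt + Decimal(shipping) + Decimal(tax)
    pairs = {"ITEMAMT": "%.2f" % itemamt, "SHIPPINGAMT": shipping,
             "TAXAMT": tax, "AMT": "%.2f" % amt}
    for n, (unit, qty) in enumerate(items):
        pairs["L_AMT%d" % n] = unit
        pairs["L_QTY%d" % n] = str(qty)
    return pairs

# A single-item cart satisfies the roll-up almost by accident, which
# would explain why bad totals only blew up with multiple items.
multi = order_amounts([("10.00", 2), ("5.00", 1)], shipping="3.00")
```

If the totals you pass don’t satisfy that roll-up, DoExpressCheckout rejects the order, but only when there is more than one line item to add up.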

Resetting the Seller Account

When you run through many iterations of testing this lovely thing, you end up with a lot of data and info that you eventually want to get rid of.  So, since the sandbox allows it, you reset the account.  What isn’t documented is that this then breaks the normal Paypal credit card processing, since you have to log in as that user in the test Paypal site and re-accept the billing agreement.  And it doesn’t tell you that you need to do this, or how to do it.  You have to click through some non-intuitive links (which you can only find out about by visiting the forums) to get this to work.

Final comments

Now that I’ve actually gone through the process, I could implement EC in a day or so.  Of course, given how all of this has gone, I have no confidence that this will actually work in production (still to come) without changes.

When doing TDD or BDD, you are supposed to ask yourself how the API should look.  When dealing with a third-party API, you are stuck with dealing with what you are allowed to do.

It is VERY clear that Paypal doesn’t do much (if any) QA in their sandbox environment.  Nor do they keep their documentation updated with all the relevant details.  Even though I am far from a TDD fanatic, I’m going to go write a program that is TDD-friendly, just so that I can feel better.

posted @ Tuesday, July 14, 2009 9:36 PM | Feedback (3)
Office 2010 Technical Preview


Somehow, I was invited to participate in the Office 2010 Technical Preview. Running it on top of Windows 7 should be interesting.

So far, there have been two issues.  Well, minor issues.  When installing Sharepoint Workspace 2010 (which replaces Groove), the taskbar disappeared and didn’t come back.  Since a reboot was required, and the taskbar came back after reboot, no big deal.  Then, before launching any of the applications, I pinned Excel to the taskbar, which launched some install thing (I’m guessing it would have occurred if I launched an app first), which crashed Explorer.exe, which caused the taskbar to disappear and come back.  The icon was properly pinned when it came back, so again, no big deal.

I suppose I should have backed up my mail files before running Outlook 2010 for the first time.  I’m sure I have a backup somewhere.  Don’t do what I did.  Anyway, it launched fine.

First impression: Ribbon everywhere, including in Outlook now.  I’m sure this means lots of confusion to come.

posted @ Tuesday, July 14, 2009 7:22 PM | Feedback (0)
Blackfield - Once (Live)

One of SW's numerous bands, good stuff.

posted @ Sunday, July 12, 2009 11:27 PM | Feedback (0)
Marillion – Somewhere Else (live)

A great performance of the song.

Somewhere Else (Live)

posted @ Sunday, July 12, 2009 11:17 PM | Feedback (0)
LINQ to SQL : SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM

This annoying error when calling SubmitChanges() from a LINQ to SQL DataContext can occur if you are using a default value for a DateCreated DateTime column which is NOT NULL, and you add the table to the LINQ to SQL designer.  For whatever reason, the designer doesn’t pick up that the column has a default value, so you need to highlight the column, go to properties, and manually set the ‘Auto Generated Value’ property to ‘True.’
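The underlying mismatch is easy to demonstrate: an unset .NET DateTime defaults to 1/1/0001, which falls below SQL Server’s datetime floor of 1/1/1753, so when the designer doesn’t know the column is auto-generated it sends the CLR default and the overflow fires.  A quick sketch of the range check (Python purely for illustration):

```python
from datetime import datetime

# SQL Server's datetime type only covers this range, as the error says.
SQL_DATETIME_MIN = datetime(1753, 1, 1)
SQL_DATETIME_MAX = datetime(9999, 12, 31, 23, 59, 59)

def fits_sql_datetime(value):
    # .NET's default(DateTime) is 0001-01-01, which fails this check,
    # hence the SqlDateTime overflow when the ORM sends it anyway.
    return SQL_DATETIME_MIN <= value <= SQL_DATETIME_MAX
```

Any DateTime column the database is supposed to populate needs the ORM to stop sending a value for it, which is exactly what the ‘Auto Generated Value’ setting does.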

posted @ Friday, July 10, 2009 6:35 PM | Feedback (3)
The problem with BDUF is the ‘B’ not the ‘UF’

SB has a good post, short and to the point, where he points out that the problem with BDUF (“Big Design Up Front”) is with the ‘Big’ aspect, not the ‘Up Front’ aspect.

For instance:

I’m sure every developer with even the slightest amount of experience has run into a situation where a proposed design for a domain concept is initiated, and then someone (perhaps even the developer in question) has decided something along the lines of: “We need to generalize this so that it covers future requirements.”

If this doesn’t cause you some concern, it should.  The problem with trying to design against future requirements is that, except in rare instances, you don’t really know what those requirements are.  So, you sort of guess at what they might be, and come up with something that sort of, maybe, handles these vague future requirements.  And, except in rare instances, you will get this wrong.

In a recent example that I dealt with, an internal logging system was being designed to handle some automatically scheduled batch processes, when they ran, who triggered them, what the result was, what exceptions occurred, etc. etc. etc.  Now, given that there are many stock/standard logging systems (from Microsoft’s Enterprise Library to many open-source tools), this should already raise an alarm.  But, there are occasionally times when you do need to roll your own, so let’s leave that aside.

The batch processes in question had to do with file transfers, an area with a fairly fixed and standard set of requirements.  However, the designers decided that the log system should be designed to handle future batch processes, like SQL jobs.  I asked one of the designers, “Have you ever heard of YAGNI (You Aren’t Going to Need It)?”  He then asked, “What’s YAGNI?”  I replied, “You aren’t going to need it.”  He answered, “Oh, yes of course.”  And then continued to talk about the need to make the design handle future batch processes, like SQL jobs.  And so the design began to exhibit generic and nearly incomprehensible aspects.  I don’t remember all the specifics, but it wasn’t enough to have a concept like FileSource, no, you needed something like PrecursorCondition (I made that up, but it was something equally bad, IIRC).

All of which made it that much harder to understand and implement the current requirements that actually mattered.

Now, it is always fun to make fun of others, but, to be fair, another instance:

When designing an eCom Order Management system a few years back, we (myself and a business partner) knew from our previous dot com experience that there were many different order status types, often depending on many inventory status types, and that you needed to manage the flow pretty carefully.  We knew this.  We were domain experts.

Now, the actual system we were building didn’t need that sophistication at the time.  But, we knew we might need it later.  So, we designed our happy little designer selves away for quite a long period of time, building in a whole bunch of functionality into the design, functionality which that system never needed, and which prevented us from actually implementing the functionality we did need for quite a long period of time.

As Scott points out, the obvious ‘remedy’ of doing no design at all is just reactionary, and causes its own set of problems.

There is no magic formula to determine how much design is needed.  The correct answer is “Some, but not too much.”  Are you generalizing so much that it is hard to determine exactly what the current design is doing for the specific functionality being designed for?  Back it down.  Think of a potential next requirement, one that is actually reasonable.  Will you need to re-write every single class and module involved to handle that requirement?  Maybe you need to give the design a little bit more thought.

As always, use your discretion wisely.

posted @ Saturday, July 04, 2009 6:13 PM | Feedback (2)