March 2009 Blog Posts
SharePoint Designer 2007: A Web Part or Web Form Control on this Web Part Page cannot be displayed or imported because it is not registered as safe on this site

If you get the following verbose error when trying to open a site in SharePoint Designer 2007:

“soap:ServerServer was unable to process request. ---> A Web Part or Web Form Control on this Web Part Page cannot be displayed or imported because it is not registered as safe on this site. You may not be able to open this page in an HTML editor that is compatible with Microsoft Windows SharePoint Services, such as Microsoft Office SharePoint Designer. To fix this page, contact the site administrator to have the Web Part or Web Form Control configured as safe. You can also remove the Web Part or Web Form Control from the page by using the Web Parts Maintenance Page. If you have the necessary permissions, you can use this page to disable Web Parts temporarily or remove personal settings. For more information, contact your site administrator.”

Under the account of a user that has rights (usually the admin account), browse to the page’s URL and append “?contents=1” to it.  This opens the Web Part Maintenance Page, which lists every web part that has ever been added to the page.  A web part that you added at one point and later deleted still ‘exists’ on the page, and if it wasn’t a safe part, or is no longer listed in the web.config as a safe part, it will show up as an ‘Error’ web part.  Select it (or them) and delete it, and you should then be able to open the page in SharePoint Designer as expected.
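For completeness: the other fix the error message alludes to is having the administrator register the control as safe in the web application’s web.config, under the SafeControls section.  The entry looks roughly like this (the assembly name, public key token, and namespace below are placeholders, not real values):

```xml
<SafeControls>
  <!-- Placeholder names: substitute your web part's actual assembly and namespace -->
  <SafeControl Assembly="MyWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000"
               Namespace="MyWebParts"
               TypeName="*"
               Safe="True" />
</SafeControls>
```

Only do this for parts you actually trust; marking a part safe is exactly what the ‘not registered as safe’ check exists to guard.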

posted @ Sunday, March 29, 2009 9:30 PM | Feedback (3)
Using Google is not waste

Over at LosTechies, Derick Bailey posted about how answering no issues in a daily standup is not acceptable.

I’ll leave it for other discussions whether the overall theme he was commenting on is good, but he discussed something that is worth commenting on, and I’ll quote extensively:

“When was the last time you spent 10 minutes wrestling with a problem, spent 10 minutes in the debugger of your IDE or looking at log files generated by your code? When was the last time you hit up Google for an answer to a question, or struggled with a design before asking someone on the team for help?

Or worse: When was the last time an external factor had a negative impact on your ability to work? A boss asks you to join in a meeting. A coworker needs a ride to the mechanic to pick up a car. A department head decides that their support of your project is a lower priority, for whatever reason.

It does not matter how large, how catastrophic – or how insignificant and small the issue is. Any struggle, any minor distraction, any politics that need to be dealt with – anything that is a waste of time or other resources in the context of productivity… they are all issues that need to be reported during the stand ups. They all cause a drop in productivity. They all cause schedules to be adjusted (even if the project schedule accounted for them, from the start, the schedule was still adjusted for them). They all take your team’s efforts away from adding value to the project.”

Not to put too fine a point on it, but this is idiocy. 

At one of my previous clients, a certain team was required to log, in 15-minute increments, exactly what they did each day.  It was never clear to me whether they logged the time they spent logging each 15-minute increment.

It is just stupid to think that any member of any team that is doing any productive work has to be actively engaged in the work that they are doing every minute.  It is insanely stupid to think that any member of any team that gives a ride to a fellow team member needs to report this in a standup.

I suppose it depends on the situation, but in general, looking up something on Google when one is not sure of what to do is probably a significantly better use of time than trying to fuck around on one’s own. 

In general, with almost no exception, if the time used by the members of a team needs to be micro-managed to the extent that a team member has to report in his next standup that he spent an extra 20 minutes in the can because of a bad burrito from the night before, the team is highly dysfunctional on many levels.

A highly functional and productive team has a lot of attributes.  There is no way to specify in an “a=b” sort of way exactly what works in each team. 

Having said that, if your team can’t handle a group of your team members ‘wasting’ 20 minutes talking about the game last night, you shouldn’t be part of the team, or managing it.

People who talk about ‘waste’ seem to know nothing about what it actually is, or how to manage it.

posted @ Friday, March 27, 2009 12:53 AM | Feedback (6)
Battlestar Galactica, Daybreak, Part 2 – Spoiler (I guess)

I’m glad I didn’t spend four years watching the show (only the last year).  I think this has to go down as the worst finale of a series in the history of television.

I remember how angry people were at the end of Babylon 5, but this was so much dumber.  The only character I really cared about was Starbuck, and that’s what they did to her character?  Right.

Oh well.

Update: okay, maybe calling it the worst finale in history was a bit harsh.  Keeping it spoiler free, these are the things that really bug me:

  • The meaning of the Opera House vision: um, really? 
  • Baltar's big speech: wasn't remotely convincing (though it had a brilliant line "I may be crazy, that doesn't mean I'm wrong").  And had no real effect.  It reminded me of Sheridan's "Get the hell out of our galaxy" speech that ended the Shadow War in Babylon 5.  Thematically, I know how it fits in, but it just didn't work for me.
  • Cavil's action: was required to move the plot where it needed to go, but no.
  • Lee Adama's decision: also required to move the plot where it needed to go, but totally unbelievable.
  • Fate of Starbuck: Ron Moore admits this is the one he knows some people will hate.  Count me in on that.

So much of Season 4 seemed rushed (the mutiny and its aftermath should have been a 5-10 episode arc at least), and I think the finale suffered from it.  Maybe in time, I'll grow to like it.


Update again: though it probably isn’t necessary at this point, some spoiler space:














I should probably state up front something obvious.  When dealing with even really good science fiction/fantasy, you have to suspend just a little bit of disbelief.  I mean, a major plot point of Season 4 of BG is that the final five lived two thousand years ago on ‘bad Earth’ and travelled back to Caprica only to just miss the conflict (I’m reminded of the Monty Python ‘The Bishop’ sketch, where the Bishop always shows up too late to stop a disaster, but I digress), and so you already have to buy into some plot points.

Having said that, having watched this episode about 8 times now, though I don’t hate it as much as on the first viewing, some things still stand out, and not in a good way:

  • Sam can turn off the defenses of the Colony but can’t stop the Cylon fighters from launching.  Doesn’t anyone at the Colony understand ‘su root’ to turn the defenses back on?  Just wondering.
  • Galactica rams the Colony.  And doesn’t get damaged.  Right.
  • Cavil decides to launch an offensive with what should be a couple of million centurions, but none of them make it to the command center.
  • What’s her name is proud of Baltar because…he shoots a dead Cylon a couple hundred times?  Wish women that hot had standards that low.  Then again, I lived on South Beach for a few years, so maybe that tracks.
  • The Opera House vision: okay, so they played this out as being so frickin’ important, and it turns out that it involves a whole bunch of people trying to get a little girl into the command center.  That was the entire point of the attack on the Colony, yes?  So, where’s the conflict?  Athena and Roz and what’s her name and Baltar are all trying to do the same thing, get the little twerp to stop running away long enough to carry her to where she needed to be anyway.  As if Baltar was not going to take the little twerp into the command center if it didn’t match the vision.
  • And once they carry the little twerp into the command center, she is immediately taken by Cavil.  Who has no centurions with him.
  • And then Baltar gives his little speech, which accomplishes….nothing.  Nothing he says means anything, until resurrection is offered.  So, what’s the point?  Besides the brilliance of the “I may be crazy, but that doesn’t mean I’m not right” line, there is nothing about his speech that impacts anything. 
  • Since Cavil doesn’t have his thousands of centurions with him, he eats his gun.  Nice plot reset there.
  • Not to mention the whole ‘dead pilot hitting the nuke missile launch’ thing.  Nice.
  • But here we get to the biggest problem: the entire fleet agrees to let their ships slip into the sun so they can inhabit happy Earth.  No fucking way.  You couldn’t get a kid to drop their iPod, you think the entire fleet gives up all their technology (well, except the tech that lets Adama fly to wherever)?  Totally unreal.  First time a cougar attacks them, think they want that laser gunny thing?  Entirely unbelievable.
  • Not to mention, you don’t think people are worried about the ‘we let the Centurions take the BaseStar’ thing, knowing their own past, to give up every possible defense if they came back?
  • And they give all this up to hump people who have no culture?  Right.


I still like the show, but the ending is still pretty pathetic.

posted @ Saturday, March 21, 2009 12:32 PM | Feedback (10)
The Importance of Integration Tests

The ‘red-green-refactor’ crew often seems to denigrate the importance of so-called ‘integration’ tests.  This isn’t always on purpose, but the idea seems to be that integration tests are ‘slow.’  As a matter of fact, this is often true.  Integration tests do take longer, because you have to hit those databases and browsers and whatnot, and that takes time.

But this is the actual application.  The application is not the series of unit tests, and can never be fully covered by them.  Your end users care only about how your application behaves in the real world.  This is why you *have* to have tests that cover the actual user experience if you expect your tests to be relevant.

Here’s an actual example from Apollo 11:

The PGNC System malfunctioned during the first live lunar descent, with the AGC showing a 1201 alarm ("Executive overflow - no vacant areas") and a 1202 alarm ("Executive overflow - no core sets").[6] In both cases these errors were caused by spurious data from the rendezvous radar, which had been left on during the descent. When the separate landing radar acquired the lunar surface and the AGC began processing this data too, these overflow errors automatically aborted the computer's current task, but the frequency of radar data still meant the abort signals were being sent at too great a rate for the CPU to cope.[7]

Happily for Apollo 11, the AGC software executed a fail-safe routine and shed its low-priority tasks. The critical inertial guidance tasks continued to operate reliably. The degree of overload was minimal because the software had been limited so as to leave very nearly 15% available spare time which, wholly by luck, nearly matched the 6400 bit/s pulse trains from the needless, rendezvous-radar induced Pincs, wasting exactly 15% of the AGC's time. On the instructions of Steve Bales and Jack Garman these errors were ignored and the mission was a success.

The problem was caused by neither a programming error in the AGC nor by pilot error. It was a procedural (protocol) and simulation error. In the simulator, the astronauts had been trained to set the rendezvous radar switch to its auto position. However, there was no connection to a live radar in the simulator and the problem was never seen until the procedure was carried out on Apollo 11's lunar descent when the switch was connected to a real AGC, the landing radar began sending data and the onboard computer was suddenly and very unexpectedly tasked with processing data from two real radars.[8]

Thankfully, the real world experience turned out to be okay, but the important point from my perspective is that the real-world experience didn’t match the expected one.

Obviously, nothing I’ve ever worked on matched the importance of Apollo 11, but it is a common experience on projects I’ve worked on that production bugs occur precisely because there was no way to test what would happen in production, due to the lack of a testing environment that matches production.

I don’t want to suggest that it isn’t important to test developer code.  It is.  But, in the end, production code matters more, and that requires important and relevant integration tests.

posted @ Tuesday, March 17, 2009 10:45 PM | Feedback (4)
Promoting burnout is a bad thing

Ray Houston over at LosTechies has a post about not wasting time.  On the surface, what he says makes sense.  But consider this:

“After a good day of pairing, you feel exhausted because you put in a real day's work. You were engaged the entire time. Why don't we hold ourselves to the same standards when we are programming solo?”

It is bad to have your workers, in any field, feel exhausted at the end of a day’s work.  This pretty much guarantees burnout in your workforce.

Except for those periods where there is a really important business imperative for working extra, you should never expect or allow your workers to feel exhausted.  This is local optimization gone way wrong and bad advice all around.

posted @ Tuesday, March 17, 2009 10:14 PM | Feedback (5)
How not to interview for help

Question: “What is the difference between System.Array.CopyTo and System.Array.Clone?”

Correct answer: who the fuck cares?  If you find yourself working with arrays, know it, otherwise, Google is your friend.

Technically correct answer: Clone allocates and returns a new array, while CopyTo copies the elements into an existing array you supply.  (The oft-cited ‘shallow copy versus deep copy’ answer is wrong: both perform shallow copies, so for reference types, only the references are copied either way.)

posted @ Monday, March 16, 2009 10:09 PM | Feedback (2)
DDD-Lite: Auditing and Messages

Having finally gotten around to watching Greg Young’s E-Van presentation, I want to take a little interlude to talk about something that may or may not be interesting for a number of applications, but is typically a central requirement for many ‘enterprise’ applications (I have no idea how to define ‘enterprise’ here…I leave it as an exercise for the reader for now).  And that ‘something’ is auditing.

Though it should go without saying, nothing I’m going to describe here is new.  It could be viewed as a very unsophisticated description of the Event Sourcing pattern.  I should also point out that I’m not advocating what I’m going to be describing.  At least, not necessarily.  I don’t have enough experience with messaging at all to claim that I know the ins and outs of it.  I do have a lot of experience with auditing from a traditional SQL background, so that is what I am going to be contrasting it with.  Consider this an experiment in thinking out loud about the topic.

Auditing Basics

In almost any enterprise application, the auditing requirement is usually pretty central.  Who made what changes, when, and where (‘why’ is usually important but almost impossible to track, so I’ll leave that aside).  I’ll use Account as the basic example.

Suppose an Account is created.  People usually want to know when this was created, and from what process.  Sometimes it is done from the UI of an application, sometimes it is done from some background process, such as a loader.  After an Account is created, people usually want to know when and how it is modified and by whom.  Which properties were modified?  Who modified them?

A very common and traditional way to manage tracking this audit information is through the use of SQL triggers.

In my experience, this is typically done by creating shadow tables.  These are tables that have the same columns as the source table, with the addition of columns designed for tracking the source of those changes.  These shadow tables often reside in a separate database, but this isn’t a requirement.

Typically, separate triggers are created for inserts, updates and deletes.  When, for instance, an insert occurs, columns such as RecordCreatedBy, RecordCreatedDate, and the like are populated.  For this to happen smoothly, the application that is doing the insert usually needs to be modified to supply the identification of the user doing the insert.  The same can be said for updates, with columns like RecordLastModifiedBy, etc.
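As a sketch, the update trigger in such a system might look something like this (table and column names are invented for illustration, not taken from any real schema):

```sql
-- Hypothetical shadow-table trigger: copies every updated row of
-- dbo.Account into dbo.Account_Audit along with who/when columns.
-- The application is assumed to have set RecordLastModifiedBy before
-- issuing the UPDATE, as described above.
CREATE TRIGGER trg_Account_Update
ON dbo.Account
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.Account_Audit
        (AccountId, Name, Balance, AuditAction, AuditDate, AuditUser)
    SELECT i.AccountId,
           i.Name,
           i.Balance,
           'U',                      -- insert/update/delete marker
           GETDATE(),
           i.RecordLastModifiedBy    -- supplied by the application
    FROM inserted i;                 -- 'inserted' holds the post-update rows
END
```

The insert trigger is the same shape reading from `inserted`; the delete trigger reads from `deleted`, which is exactly where the ‘who deleted this?’ problem described below comes from.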

For inserts and updates, the trigger system is painless and transparent.  It just works.  Deletes are a little trickier: the trigger only has access to its ‘deleted’ pseudo-table, which holds the RecordLastModifiedBy value from the last insert or update, not the identity of the user performing the delete.  Unless the application knows how to update the shadow tables directly, additional work is required to supply this information.

Additionally, there is usually other processing that takes the audited data from the shadow tables and modifies it to make it more amenable for reporting purposes.  I leave that aside for now.

Regardless, this is a common system, and it works fairly well.  I’m not trying to convince anyone to use it or not use it, so I will not go into the more sophisticated descriptions of how it might work.  In general, though, I will say that it does work and is pretty scalable.

Auditing using Messages

If one accepts the entire architectural shift required to use messages, there is another way to do auditing.

Suppose one has a UI screen that allows you to create an Account, or update an already existing Account.  When the ‘Save’ or ‘Submit’ button in that UI screen is pressed, a message that contains the data for that insert or update (or delete) is created and sent to the messaging infrastructure.  The message will typically be in XML format, though that isn’t a requirement.  Regardless, the message is sent.

With an architecture like this, one can imagine an auditing component that exists and is registered to handle the existence of those messages, over and above the domain components that also exist and are registered to handle them.  You still might have shadow tables that exist to persist those messages.  The actual infrastructure might be very similar to the trigger-based one.

A central difference is that you get the entire ‘action’ within a message.  Triggers cannot store the action, only the results of a change: the value of Column A went from this to that, but the context of the change is lost.  An audit system that uses messages can store and persist the entire context of the change, because it is encapsulated in the message itself.

On the surface, this isn’t that big of a deal, until you consider the possibility of rolling back a change.  Although it is possible to do this with a trigger-based system, it is very hard to do.  If you have an audit repository that can store the entire history of an Account, from its creation to its changes to its potential deletion, you can reset an Account to any point in its history by replaying the history of the messages involved.  If I want to know the state of an Account at Time X, all I have to do is to run through the messages involved with that Account from creation to Time X.
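A sketch of what such a message store and a point-in-time replay query might look like (the schema is illustrative; a real system would also worry about ordering guarantees and message versioning):

```sql
-- Hypothetical message store: each row persists one complete message,
-- context and all, rather than a column-by-column diff.
CREATE TABLE dbo.AccountAuditMessages
(
    MessageId    UNIQUEIDENTIFIER NOT NULL PRIMARY KEY,
    AccountId    INT              NOT NULL,
    MessageType  VARCHAR(100)     NOT NULL,  -- e.g. 'CreateAccount', 'UpdateAccount'
    MessageBody  XML              NOT NULL,  -- the full message as it arrived
    ReceivedDate DATETIME         NOT NULL DEFAULT GETDATE(),
    ReceivedFrom VARCHAR(100)     NOT NULL   -- user or process that sent it
);

-- Resetting an Account to Time X means fetching, in order, every message
-- for that Account up to that point, and replaying them in code.
DECLARE @AccountId INT;
DECLARE @TimeX DATETIME;
SET @AccountId = 42;        -- placeholder values
SET @TimeX = GETDATE();

SELECT MessageType, MessageBody
FROM dbo.AccountAuditMessages
WHERE AccountId = @AccountId
  AND ReceivedDate <= @TimeX
ORDER BY ReceivedDate;
```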

Obviously, this doesn’t happen by magic, you have to write code to accomplish this, but at least the infrastructure is there to enable it.


SQL-based auditing is pretty common, and when done correctly, works, and works pretty well.  I don’t imagine that it will disappear anytime soon.  But if you design your architecture to use messages, you may be able to develop an even more robust system.

posted @ Sunday, March 15, 2009 10:31 PM | Feedback (0)
There is nothing wrong with writing a book

Chad posted something in reaction/response to some session at the Alt.NET Seattle 2009 conference about why Alt.NET is so mean or something (I have the various sessions on my list to watch, but given the size of my current backlog, I figure I’ll actually get to it in 2010.  Maybe.).  Since I haven’t watched the session in question I don’t have much to say about it.  I will say that I don’t think there is that much meanness going on (with perhaps a few notable exceptions).  Perhaps it is a form of weird reverse nostalgia, but I remember the conversations on the non-moderated Babylon 5 Usenet list as having a lot more vitriol.

However, Chad did bring up something within the context of a point about professionalism (mild digression: is there a contractual obligation that everyone at LosTechies has to talk about quality and/or professionalism every few months?  Not that there is anything wrong with that, just wondering) that requires a comment or two:

“Worse yet, what if there are others in the community who have some profit-interest (book deals, speaking engagements, lucrative contracts) in seeing the “wrong” technology being released so they can help customers who are unable to use the technology effectively (… because it’s “wrong”!)?   What if these people who have conflicts of interest malign us and call us “mean” and tell us to stop being “a**holes” and, basically, to shut up?”

This isn’t anything new, not from Chad or from others, of course.  But, I think it is ironic because I think it is, well, unprofessional.

As I’ve written about previously, seemingly the entire Agile/XP gang writes books, gets speaking engagements, etc.  But I don’t think Chad would hold that against them, because he is on their side.  Uncle Bob’s presentation last month was a good one, but he did mention within it that he sees a profit motive for ObjectMentor in what he does (he didn’t mention the phrase ‘profit motive’ of course, but the point he was making was clear).  Because of all the corporate initiatives that have spread Scrum without the XP, those like himself who want to re-introduce XP into Scrum can benefit financially.  Ron Jeffries’ comment about having a nice blue convertible is also telling.

And I think this is fine.  It’s legitimate.  If you think that there are certain things within software development that are important and legitimate, and can also benefit you financially, you’d have to be an idiot not to pursue this.

What Chad seems to think (though not exactly) is that people who promote ‘wrong’ technology are doing it *only* from a profit motive, and that they know they are promoting ‘wrong’ technology.  He doesn’t name the particular technologies that he has deemed to be unworthy.  My guess is that the Entity Framework and/or SharePoint might be on the list, but the guess is irrelevant.  He seems to be incapable or perhaps just unwilling to accept that there might be people who view these ‘wrong’ technologies as not wrong at all.

Of course, I’m making a mistake by focusing on Chad here, since the point is more general, but you have to work with what you have to work with.  The more general point is pretty simple:  if you are going to question the motives of those you disagree with, it really shouldn’t be surprising if people think you are being mean.  Whenever I have a disagreement with Chad, it is usually not that long before he states that I’m just a troll, as if that is the motivation for the disagreement.  I think that anyone who knows me or knows anything about me knows that my ego is healthy enough (one might say ‘over-healthy’) where I don’t really care that much if the best rebuttal to a point I have to make is that I’m a troll.  I mean, if I want to engage in troll-dom, I’ll go to a Linux forum and claim that Linux isn’t sophisticated enough to be the desktop for my Mom or something. 

But, the fact remains that there are people who really do believe in the ‘wrong’ technologies that they promote, and falling to the level of simply refusing to accept this is a sign of complete unprofessionalism.

Are there people who are dishonest because of profit motives?  Um, duh.  But this ‘duh’ is as true of the Agile crowd as of anybody who is promoting some product from Microsoft.

So, if you believe in a technology, it is okay to write a book about it.  Unprofessional people will question your motives, but their questioning is unimportant.

posted @ Monday, March 02, 2009 8:08 PM | Feedback (2)
Enough with the ‘Sprocs are evil’ BS m’key?

Derik has posted one of those ‘Sprocs are evil’ type rants (he did label it as a rant, so I have to give him props for that).  Probably unrelatedly, Jeremy Miller posted something that is sort of related (though his point was really a larger one).  Regardless, I have to respond. (Update: I mean responding in general, not line by line discussion of either Derik's or Jeremy's posts...more of a rant using them as a launching pad). 

(Update #2: Derik wants to make it clear that he doesn't think sprocs are evil.  I think it is a semantic point, but just to make it clear, he thinks business logic in sprocs is evil.  This doesn't change anything, but he seems to feel strongly about making the distinction, so consider it made.)

disclaimer #1: I have no doubt that Derik is a better developer than I am.  Same goes for Jeremy (doubly).

disclaimer #2: I’ve been a SQL developer (among other things) for about 10 years.

disclaimer #3: despite, or perhaps because of, #2, I caught the ORM bug about 5 years ago, with the Wilson ORMapper.  I’ve used NHibernate (but nowhere near as much as I’d like to), and more recently, on a number of projects, I am using Linq to SQL (yeah, I know, it’s dead, and yeah, it may or may not be a full blown ORM tool…whatever).  When writing my own code, I don’t want to handwrite SQL or use sprocs.  At all, if I can get away with it.

Having said all that, I have found any number of times where using T-SQL in general, or sprocs in particular, was the right tool for the job.  In no particular order, here are some of the reasons why:

ETL: if you’ve ever found yourself having to move large amounts of data (for the hell of it, let’s say 1 GB or more) from one source to another, especially where the target is a SQL database, T-SQL in my experience is almost always more efficient than managed code.  Even if you can get the data into memory in the first place to do any of the ‘T’ (Transform) of ETL that is required, you are better off letting the SQL engine do it, especially if it is set-based.
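To make the set-based point concrete, here is a sketch of a transform done in the engine rather than row-by-row in managed code (table and column names are invented):

```sql
-- Hypothetical staging-to-target load: one set-based statement instead
-- of looping over a million rows in managed code.
INSERT INTO dbo.Account (AccountNumber, Name, Balance)
SELECT s.AcctNum,
       UPPER(LTRIM(RTRIM(s.AcctName))),   -- the 'T' of ETL, done in the engine
       s.Balance
FROM staging.RawAccounts s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Account a
                  WHERE a.AccountNumber = s.AcctNum);  -- skip already-loaded rows
```

The engine gets to plan the whole operation at once, instead of paying a round trip and an object materialization per row.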

Performance: most of the ‘debate’ about sprocs vs. code when it comes to performance has centered on how SQL caches execution plans, or on network traffic.  As has been beaten to death already, this is irrelevant these days.  While a ‘raw’ query sent over the wire may be larger than a sproc call, this almost never has a noticeable impact.  We aren’t connecting to our SQL data-stores over a modem.  And when parameterized (as almost any decent ORM will do for a dynamic query), a ‘raw’ query will result in an execution plan that is cached just as efficiently as a sproc.  So, none of that matters these days.

What does matter are indexes (and also statistics, but I’ll ignore those for now).  Any developer worth anything working against a data store will know what a join is (if they don’t, get them far, far, away from your system).  But they often don’t know the importance of indexes, how they are tuned, and how to write code that properly uses them.  And when you are writing code that hits a table that has a couple hundred thousand (much less millions) of rows, knowing how to do this is critical to performance.

Unless otherwise restricted, an ORM will happily allow a developer to write code that produces where clauses that don’t properly use indexes.  Even against a small dataset, badly performing queries are easily produced, and on a much larger dataset, they can wreak havoc on a production system.  This is why so many SQL developers and DBA-types hate ORM technology.

Even a good SQL developer can get this wrong because, as I like to joke at times, SQL seems to exhibit quantum-like behavior (it doesn’t actually, of course).  Understanding execution plans takes some knowledge.  With SQL Server, for instance, a Merge Join can sometimes perform better than an Apply, and sometimes perform worse.  I bet that NHibernate can help ‘hint’ which to use (or at least, given my ignorance of how you would get it to do this, I wouldn’t bet against it), but almost no developer is going to know which to produce even if they knew how to get their ORM of choice to produce it.  Have you ever had to answer a test question about whether Exists or In is faster in a query?  The textbook answer is obvious, but any SQL developer knows that either one can be faster than the other, depending on the data.  Joining on a sub-select is generally the right way to go either way.  Does your ORM generate SQL code that knows to use Exists vs. In vs. a sub-select?
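The Exists/In point, in SQL terms: the three forms below are logically equivalent, but depending on indexes, statistics, and the shape of the data, the optimizer can do very different things with each (schema invented for illustration):

```sql
-- Three logically equivalent queries; which one is fastest depends on
-- the data and the indexes, not on a textbook rule.

-- EXISTS form
SELECT a.AccountId, a.Name
FROM dbo.Account a
WHERE EXISTS (SELECT 1
              FROM dbo.AccountOrder o
              WHERE o.AccountId = a.AccountId);

-- IN form
SELECT a.AccountId, a.Name
FROM dbo.Account a
WHERE a.AccountId IN (SELECT o.AccountId FROM dbo.AccountOrder o);

-- Join-on-a-sub-select form
SELECT a.AccountId, a.Name
FROM dbo.Account a
JOIN (SELECT DISTINCT AccountId FROM dbo.AccountOrder) o
  ON o.AccountId = a.AccountId;
```

An ORM will pick one of these shapes for you; a SQL developer looking at the actual execution plan can pick the one the data favors.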

Very recently, I had to write code that updated a couple of values in each row of a table that had a little less than one hundred thousand rows.  Thinking of it purely logically (in the sense of, given A, B, and C, I want D and E to happen), I wrote some T-SQL that did it, quite efficiently, thank you.  I’ve worked with brilliant SQL developers enough to know I’m not one of them, but I’m pretty good, thank you again.  But the update statement hung the server.  Looking at what the code did, there was no obvious reason why.  In order to get it to work properly, I had to change the T-SQL to perform the exact same update in batches of 100 (I think I increased the batch size to 1000 eventually) and it executed successfully in a couple of minutes.
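The batching trick in that case looked roughly like this (a sketch; the real table, columns, and predicate are stand-ins):

```sql
-- Hypothetical batched update: touch 1000 rows at a time so locks and
-- the transaction log stay manageable, instead of one giant statement.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) dbo.BigTable
    SET ValueA = 1,        -- stand-ins for the real new values
        ValueB = 2,
        Processed = 1      -- mark the batch done so the loop advances
    WHERE Processed = 0;   -- predicate selects still-unprocessed rows

    SET @rows = @@ROWCOUNT;  -- stop once nothing is left to update
END
```

Each iteration commits a small amount of work, which is why the batched version finished in minutes while the single statement hung the server.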

Forcing people to use sprocs can force them to use code that has been analyzed and tweaked to use all available indexes (or caused the DBA-types to create the relevant indexes to support the operations required).  Is it *possible* that this can always be done from managed code?  Since I used the word ‘always’, the answer is no.  Can it be *often* done from managed code?  I doubt it.

Do you have a large dataset you are going to be hitting in your production environment?  Take a code-base that allows one to use an IRepository that has a Find method or a Query method that can take in an ad-hoc query.  Then, go grab the pager and be willing to do L3 support when the tickets roll in about why the list page doesn’t return (yes, even a list page).

Testability: Derik mentioned that one of the problems with having business logic in sprocs is that it inhibits testability, especially when it comes to TDD.  If you have to call a sproc from managed code, then you have a problem in setting up the quickly repeatable unit tests that are the hallmark of writing code test first (stated this way so that BDD counts as much as TDD, etc.).  This is, as far as I can tell, entirely accurate.  Most ORMs that are worth anything make it easy enough to write managed code that calls a sproc (so we don’t have to go back to the ‘dark’ days of .NET data access and create a SqlCommand, set its CommandType to StoredProcedure, map each input parameter, and all that painful stuff), but when doing test first development, it is a lot harder to mock or stub this out.  I don’t think there is much to debate about this.

But, I’m sorry, if you are going to be hitting large datasets in your production environment, fully legitimate tests *have* to take performance into account.  This isn’t premature optimization, it is a known variable.  Surely (don’t call me ‘Shirley’), you want to test the functionality of one’s code outside of other concerns, and that is a legitimate desire to have.  But if it is *known* that you have a data store that is in the tens of TBs, much less tens of GBs, code that fails to perform adequately is failed code, regardless of whether or not it meets the design specifications.

Also very recently, I decided, in one of my own projects (so I wasn’t hampered by client requirements, since I had control over that) to write a stored procedure, the only stored procedure in the current code base.  Why?  What I needed to do was to create a temp table, and then, based on the input variable, query 4 separate tables that each updated the temp table, and then at the end, join that temp table back to a key table and return a resultset.  Could I have done that purely in managed code?  Probably a bad question.  Could Ayende have done it in 3 lines of Boo?  Probably.  But Ayende isn’t working on my project, and he isn’t working on your project either.  In this case, I needed a final resultset that was best produced by letting the database engine produce it.  The point remains that T-SQL is very good at certain things, and should be allowed to do them.
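The rough shape of that procedure, with every name invented (my actual tables and logic are beside the point):

```sql
-- Rough shape: several queries each fold their contribution into a temp
-- table, which is then joined back to a key table for the final resultset.
CREATE PROCEDURE dbo.GetAccountSummary
    @AccountId INT
AS
BEGIN
    SET NOCOUNT ON;

    CREATE TABLE #Working (ItemId INT PRIMARY KEY, Total MONEY NOT NULL);

    -- first source seeds the temp table
    INSERT INTO #Working (ItemId, Total)
    SELECT ItemId, SUM(Amount)
    FROM dbo.SourceOne
    WHERE AccountId = @AccountId
    GROUP BY ItemId;

    -- second source folds its amounts in
    UPDATE w
    SET w.Total = w.Total + s.Amount
    FROM #Working w
    JOIN dbo.SourceTwo s ON s.ItemId = w.ItemId
    WHERE s.AccountId = @AccountId;

    -- ...two more source tables updated in the same style...

    -- join back to the key table for the final resultset
    SELECT k.ItemName, w.Total
    FROM #Working w
    JOIN dbo.KeyTable k ON k.ItemId = w.ItemId;
END
```

Everything stays in the engine, next to the data, which is exactly what this kind of multi-step aggregation wants.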

If one is writing a ‘trivial’ (in the sense of technical requirements) application like Dimecasts or a basic eCommerce site (as one of my current projects entails), should you write a bunch of stored procedures to perform your business logic?  Hell no.  Using Linq to SQL code, from p in db.Products where p.ID == id select p, will get you what you need, 99% of the time.  Even with more complicated business logic, from a developer productivity perspective, and from a performance perspective, you aren’t going to need it, where ‘it’ references sprocs.  But don’t think for a minute that from these basic cases you can make the case that sprocs are evil. 

In a perfectly ideal world, all of your managed code developers will understand relational databases as well as your sql developers, and will know how to make the adjustments to their code accordingly.  In this same perfectly ideal world, the ORM of choice of the managed code developers will allow them to create managed code that utilizes the full power of the data store they are using.  If you are not lucky enough to inhabit this perfectly ideal world, don’t reject sprocs because of some generic arguments.

(Some smartass will recommend sharding the database when the datasets get to be too big, or to use an OODBMS instead of an RDBMS…if you are in a position to do either, god bless (and I think sharding will require some database layer anyway), and I’m available for hire).

If you still think sprocs are evil, go ahead and take ownership of that pager, and I’ll talk to you at 3AM.

posted @ Monday, March 02, 2009 1:14 AM | Feedback (7)