November 2010 Blog Posts
How Do I Reboot My Car?

Update: apparently, the phone issue is a known problem related to the Samsung Focus and microSD cards that aren’t “certified for Windows Phone 7.”  The problem is, no such certification exists right now.  Nice.  Apparently it wasn’t part of the official Windows Phone 7 profile or something.  But a fix is supposedly on the way, so in the meantime, AT&T says they will give me a new phone (since it’s apparently possible it could brick itself at any point).

Technology rocks.  If you are old enough to think about where things were 10 years ago, you know this.

Technology also sucks.  When things don’t work, especially with things you don’t have control over, you are stuck.


In a previous post, I talked about an issue with my new Windows Phone 7 handset, where it eliminated all of my settings.

The other day, it happened again.  I felt it vibrate because it decided to reboot itself for no reason.  Looking at it, I saw that all of the tiles were set back to the default colors, and it indicated I needed to set up new email settings.  I tried hooking it up to the computer I had Zune-synced it to, and it refused to recognize it.  Fantastic.

I did what most people would do.  I stared at it for a while.  I cursed quite a bit.  And then I power cycled it.

All the settings came back.  Fantastic.  Is this a reliable fix?  I have no control over it.  I didn’t break it, and I didn’t fix it.

Home Network

I work from home for some clients fairly often, because I work off-hours and weekends.  Because of a production migration, I went home early from the client site; migrations typically run off-hours and I didn’t feel like hanging out on site.

You can already guess.  I came home and I had no internet connectivity.  I had recently switched providers, so I called them and they told me their equipment was responding remotely.  At that point, I noticed that my network firewall, at least 6 years old, was dead.  Dead, dead, dead.  No lights, no nothing.  Except for changing some basic settings when I switched providers, I hadn’t looked at it in years.

Needless to say, but I’ll say it, I spent a long time on site after going back.


I recently bought one of those cars with a center console modeled after an iPad.  Touch, touch, touch.  Buggy sucker.  The manual says “Don’t touch the console too firmly”, but if you don’t, it sometimes ignores you.  Fine.  Otherwise, it’s a nice car.

The other day, while listening to a USB input for all my tunes, one of the tunes hesitated, then stopped.  What, it doesn’t like my tunes?  Moments later, the navigation system failed, giving me a message that the navigation SD card was corrupt.

Now, the whole point of having this sort of system is that it allows you to manage things without taking your eyes off the road.  Which didn’t really work as I stabbed vaguely at the console, trying to get the tunes and/or navigation to work.

I don’t know how to reboot my car.  I happened to need to fill up on gas (the car, not myself personally), so I pulled up to the station and shut everything down, then started things up.  No luck.  I then filled up on gas (again, the car, not myself personally), and got back in.  All fine, navigation and tunes came back.


There is no summary other than that I still don’t know how to reboot my car, but I hope it continues to figure it out on its own when needed.

Technology rocks.

posted @ Sunday, November 21, 2010 10:55 PM | Feedback (0)
On documenting the right things

Nick Malik has a post about documentation as it relates to software development, in which he quotes a post from James McGovern:

“I am of the belief that documentation should explain decisions taken and not taken, Why an approach, architecture or algorithm was selected and what else was considered and rejected. “

Frans Bouma has talked about this before, and I agree with it.

Malik and McGovern continue to talk about the role of the architect and the developer as it relates to this, which I generally disagree with, but that’s a topic for another post.

Various developer-centric blogs have talked about how code itself should be self-documenting.  To the extent that this means code should be well-written and generally understandable to a similarly situated developer (i.e., someone of the same skill level and business knowledge), it is hard to disagree with.  However, this misses the fact that code cannot self-document the code not written, nor why it wasn’t written that way.

I’ll give one example that sticks out from recent experience.  At one of those ‘too big to fail’ financial companies, I was working on a system and was assigned to investigate a potential defect.  At first glance, what I saw in the code was something that most decent developers (or in this case, myself) would consider appalling.  Above (or below, I forget exactly) the method that was actually executing was another version of the method that was entirely commented out.

Normally, this is just terrible.  This is what source control is for.  Leaving in commented out code is at best totally confusing.  When working on a module/subsystem/whatever, there are times when I will comment out a method that I am making a significant change to, simply because it is easier to reverse the change if it turns out it was a mistake to make the change in the first place, or if I want to reset to a known state efficiently.  Yes, you can do this with source control, but, let’s face it, sometimes, it’s just easier to do that.  But, once the change is made, the previous method should be eliminated, not just left there.

However, in this instance, the previous developer had a very good reason to leave it in.  Above the commented out method, she included a comment that said (paraphrasing):

Due to defect number blah, it was decided by Business Analyst blah that this method should change from Blah A to Blah B.  However, I am not convinced that this decision won’t be reversed due to potential future evidence.  If you are assigned a defect number related to this method, please refer to defect number blah and the previous BA decision for analysis.

Now, surely (“Don’t call me ‘Shirley’”), there are different ways in which the same information could have been conveyed to me, the new developer having been assigned the potential new defect.  The method itself could have been eliminated, but the same comment left intact.  However, assuming that it would have been easy enough to find the previous method in source control and run a compare (which IIRC was questionable, but leave that aside), given the context of what I was investigating, it was more efficient for me to have the previous code actually there.  When working in an environment with dozens of active developers working on parts of the same system, and casting this over years of development, this ‘hacky, ugly’ commented out code allowed me to address the details of the potential new defect very quickly.
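To sketch the pattern (the class name, fee rule, and defect number are all invented for illustration; this is a paraphrase of the idea, not the actual code from that system):

```java
// Invented example of a decision-documenting comment plus a
// deliberately retained, commented-out previous version.
public class TradeFeeCalculator {

    // Due to defect #1234, it was decided by BA J. Smith that this
    // method should round fees per-trade instead of per-batch.
    // I am not convinced that decision won't be reversed.  If you are
    // assigned a defect related to this method, please refer to
    // defect #1234 and the previous BA decision for analysis.
    public static double fee(double notional) {
        // Per-trade rounding to the nearest cent.
        return Math.round(notional * 0.001 * 100.0) / 100.0;
    }

    // Previous version, deliberately left in; see the comment above.
    // public static double fee(double notional) {
    //     return notional * 0.001; // unrounded; rounding happened per batch
    // }

    public static void main(String[] args) {
        System.out.println(fee(10000.0)); // prints "10.0"
    }
}
```

The point is not the fee logic; it is that the comment records the decision, who made it, and where to look before changing anything.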

Regardless of the details of the example, some key points stick out.  As an ideal, you should write self-documenting code (and to give a trivial example of something I’ve seen recently, if your variable is of the type “Account”, name it that.  Don’t name it “pAcnt” for instance.  Unless you are writing real-time trading code, name your variables so that it’s obvious what you are talking about.  Same with method names, and class names, etc.  In other words, basic stuff…but I digress).  But even the best self-documenting code can’t explain the decisions behind why the code was written the way that it was and not some other way.

I fully understand the evil that is the traditional design document.  Whenever tasked with creating one of those suckers, I cringe at the thought, especially if I am unfamiliar with the target audience of the document.  Probably every experienced developer has been faced with the ‘trauma’ of creating the design document that is supposed to contain the actual code that will be developed.  Probably every experienced developer has been faced with the even worse ‘trauma’ of being passed a design document that contains the actual code, which never ever ever is correct, and then is tasked with keeping the design document in sync with the actual code developed.  Personally, when faced with this situation, I either ‘forget’ to do that, lie about it, or update it later (I don’t recommend this unless you are confident you can get away with this).

But, to tie this back to the original point (finally), where documentation in software development can be of great use is in documenting the decisions why the design took the path that it did.  To give yet another recent example, I was asked why a vital production system prevented user action given situation X.  Well, the BRD, written and signed-off on (and let’s hear it for the great process of sign-off) 8 months previously said that’s what the system should do given situation X.  It didn’t say why it should do that, just that it did.  8 months later, users are complaining that the system is preventing them from taking action, and no one is totally clear why.  In environments where BRDs are required (and when/if/why this should or should not be is the topic of yet another post), they should give that explanation.  Because what often happens (as the “commented out” code example illustrates) is that a decision is made, then, because no one remembers the reasons behind the original decision, the decision is changed due to new circumstances, and then it is discovered that the new decision fails to take into account the context of the original decision.

If you can follow that.

So, summation:  valuable software development documentation documents the decisions behind design considerations (without including code).

Repeat that 5 times fast.

posted @ Saturday, November 20, 2010 11:29 PM | Feedback (0)
Outlook 2010–This operation has been cancelled due to restrictions in effect on this computer

A few weeks ago, I started getting this annoying error when trying to click on links in Outlook 2010 messages.  I have no idea how I caused it, but apparently it happens a lot.

There are a number of different things to check and try (like resetting your default browser, etc.).

What worked for me:

1. Click Start, click Run, type Regedit in the Open box, and then click OK.
2. Browse to HKEY_CURRENT_USER\Software\Classes\.html
3. Right-click the (Default) value for the .html key and select Modify…
4. Change the value from whatever it is to “htmlfile”

Do this for the htm, xhtml, and shtml keys if they exist.
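The same fix can be captured in a .reg file you can merge via Regedit (a sketch of the steps above; it assumes the htm, xhtml, and shtml keys exist on your machine, and you should back up your registry first):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Classes\.html]
@="htmlfile"

[HKEY_CURRENT_USER\Software\Classes\.htm]
@="htmlfile"

[HKEY_CURRENT_USER\Software\Classes\.shtml]
@="htmlfile"

[HKEY_CURRENT_USER\Software\Classes\.xhtml]
@="htmlfile"
```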

If the value is already “htmlfile”, google for the other possible fixes.

posted @ Saturday, November 20, 2010 1:49 PM | Feedback (22)
Troubleshooting a Windows 7 Phone Problem

Update: apparently, the phone issue is a known problem related to the Samsung Focus and microSD cards that aren’t “certified for Windows Phone 7.”  The problem is, no such certification exists right now.  Nice.  Apparently it wasn’t part of the official Windows Phone 7 profile or something.  But a fix is supposedly on the way, so in the meantime, AT&T says they will give me a new phone (since it’s apparently possible it could brick itself at any point).

Windows Phone 7 can use a microSD card to increase its capacity for storing your music, videos, photos, etc. (all those mp3s you illegally downloaded with Napster).

When I bought my Samsung Focus, it didn’t come with one, so I ordered a 32 GB card through Amazon and got it yesterday.  You basically have to reset the phone to factory defaults and format the card, which I did.  Syncing the phone through the Zune software took 4 or 5 hours; even though I never ever intend to listen to music, watch videos, etc. on it, I figured, what the hell, and downloaded all my content.

This morning, I noticed that the phone started to display system glitches I hadn’t seen before (I would go to check email and it would show my inbox but then jump back to the tiles, for instance).  Even worse, while it was lying on my desk at work, I noticed that it rebooted itself a couple of times, and after the 2nd or 3rd time, all of my settings were gone, as if I had reset it to factory settings.  Since there are various prompts it gives you when you do this, I’m pretty confident I didn’t accidentally do that.

So now, the question is, what’s the cause?  There are 3 possible causes that I can think of:

  1. There’s a fundamental problem with the Windows Phone 7 platform.  I’ve been really busy, but I think that if this were the case, it would have popped up online, and I don’t think I’ve seen anything along those lines.
  2. The MicroSD card I bought is faulty.
  3. My particular handset is faulty.

The interesting question, assuming 1) isn’t true, is how to figure out if 2) or 3) is true.  Technically, they could both be true.

The manual states that if I remove the MicroSD card (which is now ‘integrated’ into the handset, whatever that means), the phone will be inoperable.  What I need to check with AT&T is whether this is really the case.  Can’t I remove it but then immediately tell the phone to reset to factory defaults?  If I can’t, I have no idea how to proceed.

Otherwise, I guess what I can do is to remove the card and reset to factory defaults, then get another MicroSD card, and see if the glitches come back.

When I was younger, this is the sort of thing that I would find intellectually interesting.  Right now, it’s just annoying, especially since I have a high priority project going on that takes up so much of my time that I don’t have time to find it intellectually interesting.  If you get my drift.

posted @ Wednesday, November 17, 2010 7:05 PM | Feedback (2)
Devops–A good thing

The fact that it talks about a ‘movement’ makes me a bit wary (“manifesto” anyone?), but there’s this thing called Devops that falls into my “freaking obviously good” category enough that I’m willing to ignore that.

I’ve talked about this sort of stuff a lot (though am way too busy/lazy to link all the relevant posts), but let me talk about a typical non-Devops example.

Separation of Duties

Working the last few years for all sorts of those “too big to fail” type companies, one of the standard things that you run into is “Separation of Duties.”  Different groups exist that have different responsibilities, often because of some risk assessment made by some team that doesn’t really understand SOX.  But, I digress.

Typically, you have at least 3 teams:

- Dev: the schmucks like myself that write code.
- Infrastructure: the people who set up the central infrastructure that runs the code.
- Migration: the people who migrate the code the Devs write into the Infrastructure.

Although I’m going to explain why this all sucks, there is at least a reasonable explanation for why this separation exists.  In some typical environments, there are a lot of systems that interact with each other in often complicated ways.  A *lot* of systems.  Unix and Windows and mainframes.  Oracle and SQL Server and sometimes Sybase.  Java and .NET, and sometimes a wide range of batch processes, maybe a bit of Perl, and if you’re really unlucky, a bunch of C++.  Additionally, you often have 3rd party vended apps that you have a limited amount of control over (in terms of being able to change underlying source code, for instance).  On top of this, you often have multiple environments, usually separated into categories like “System”, “Integration”, “QA” and “Production”, each of which typically has significantly different configurations.

It is highly unrealistic (or at least, statistically speaking, highly unlikely) to think that you can have a team of experts that is fully conversant with all of the different technologies, and the different ways in which they are built at an infrastructure level, as well as the different ways in which the code that underlies it all is migrated.  It could happen, but it is probably far from the norm, and unlikely to change.  So, you have different teams that, ideally at least (though, statistically speaking, less likely than not), are experts in their areas, and have built out sets of ‘best practices’ in how to structure their areas.

Though it doesn’t seem to happen in practice as often as one might hope, the ideal situation should be fairly obvious:  the various developers write well written code (or well written “enough”) that is supported on top of an infrastructure that has been solidly designed and maintained, which is then migrated according to procedures that have been developed over time and proven to be stable and maintainable.

As one last thing here, you have to take into account that there are often legal compliance issues to take into account.  The average developer is often restricted from being able to see Production data.  Rightly or wrongly, there is a notion that you need to spread these duties around to different groups to prevent too much critical information from residing in any one group of people.  If you’ve ever worked in an environment where the “IT Team”, however defined, can be upwards of 100 people or more, it’s simply unmanageable from a resource perspective *not* to split up responsibilities.

That’s the idea anyway.  Let’s take a look at where it often breaks down.

It’s Not My Fault

An obvious problem can occur to the extent that the actual soldiers on the ground, so to speak, don’t match up to the ideal.  There are always people in the various groups who tend not to be experts in their areas or who aren’t quite as conscientious or who aren’t team players, or whatever.  I’m not going to really dwell on that here.

An area where the ideal quickly and easily breaks down is when a problem arises (and I’ll stick here to talking about Production migrations) and the cause of the problem is unclear.

Most good employees/contractors/whatever want to be able to fix production problems and do so quickly, not just because it is to their benefit, but because they want to use their problem-solving skills to identify what needs to be done, as production problems, especially in “too big to fail” scenarios, are usually highly visible.  If you’ve ever worked in a situation where traders cannot do their jobs, you know how that can be.

When a problem arises and the cause of the problem is unclear, the separation of duties often makes it totally unclear who is responsible for the problem and who should take the lead in driving it to a resolution.  Is the code bad?  It could be.  Is this a new problem that is surfacing a previously undetected flaw in the infrastructure?  It could be.  Is it something that the migration processes have never uncovered before?  It could be.

Since it is fresh in my memory, let me give you a specific example I dealt with recently.

I worked on a set of fixes to a production application that is used to support a trading team.  The fixes themselves were clearly identified and the code required to remedy the flaws was, relatively speaking, non-complicated.  The standard procedure of migrating the code “up the food chain” through the various environments before the actual production migration went smoothly, as the different teams responsible for their pieces did their jobs.  When it actually became time to promote the code to production, the migration failed.

Since the migrations up until the one that was to go into production went smoothly, the developer (in this case, myself) felt pretty strongly that there was nothing wrong with the code itself.  If the code was flawed, it should have shown up in previous non-production migrations, and besides, how would bad code (that wasn’t being executed as part of the migration) cause a migration to fail?  The migration team knew that the migration failed, but did not have full access to the infrastructure logs that might pinpoint the issue, so from their perspective, it didn’t appear to be a problem with their procedures.  The infrastructure team, having successfully supported the non-production migrations, couldn’t immediately pinpoint any reason why the production one failed.

Although there was the usual vague ‘finger-pointing’, the reality was that we had a failure, and no one could exactly explain why.  It was reasonable for each group to say after a cursory look at their area of expertise and the facts as they could see them “I don’t see anything wrong with what we are doing here.”

As the developer, I had no access to the production systems (well, next to no access), so I couldn’t see any relevant logs.  The infrastructure team and the migration team could see their own logs but not each other’s.  None of us had the blanket ability to log into any particular machine and see any and all relevant data; we could only see the data available to each separate team.

In the end, the cause of the problem, and its resolution, was one of those typically maddening and stupid things that, in retrospect, should have been easily identifiable at an earlier date.  But more on that later.

The dark side of siloization

From the devops post:

“On most projects I’ve worked on, the project team is split into developers, testers, release managers and sysadmins working in separate silos. From a process perspective this is dreadfully wasteful. It can also lead to a 'lob it over the wall' philosophy - problems are passed between business analysts, developers, QA specialists and sysadmins.“

The problem with separation of duties is that, when enforced strictly, you set up these inevitable impasses where no one team is responsible, and no one individual, who may have the ability to resolve any issue from a technical standpoint, has the ability from an access standpoint to make the fixes required.  Every problem that could be remedied before it becomes a critical issue can only be remedied after it becomes critical.  This seems to be an odd paradigm.  As a developer, once an issue becomes critical (and as such is raised “up the food chain”), I often then have the ability to do just about anything that I want to do (this is often called a “firecall” problem).  What would have helped is the ability to have this power before it became a firecall and senior management was involved.

How does Devops help?

To a certain extent, Devops can’t help.  The “separation of duties” mentality is so ingrained with so many organizations, that the obvious steps that one can take to improve things will meet with some resistance.  So, to a certain extent, what Devops can do is simply “raise the consciousness” of people involved.  Give the different teams the ability to “fix a firecall before it is a firecall” and work together in a more proactive manner.

From the post:

“So, the Devops movement is characterized by people with a multidisciplinary skill set - people who are comfortable with infrastructure and configuration, but also happy to roll up their sleeves, write tests, debug, and ship features”

It is understandable, and probably unavoidable, that separation of duties is something that won’t go away any time soon, but allow different members of the different groups to have greater input and greater access to areas that are currently blocked off.  If this is allowed:

“Suddenly the technical team starts trying to pull together as one. An 'all hands on deck' mentality emerges, with all technical people feeling empowered, and capable of helping in all areas. The traditionally problematic areas of deployment and maintenance once live become tractable - and the key battlegrounds of developers ('the sysadmin built an unreliable platform') versus sysadmins ('the developers wrote unreliable code') begins to transform into a cross-disciplinary approach to maximizing reliability in all areas.”

The fact of the matter is that if this isn’t allowed, “nature finds a way.”  On more than one occasion in my career, and I’m hardly unique in this, I’ve found a way to get around organizational blocks to solve a production issue.  Especially when it allows a trading group to begin trading that was previously blocked, I don’t have a problem with taking the “ask forgiveness later” route, but the central point is that it shouldn’t be something that requires later forgiveness.

Devops, to me, is as much a statement that “this should not stand” as much as anything else.  Organizations should strive to find a way to allow for a general separation of duties without making it so strict as to thwart successful and repeatable migration attempts.  How this can be done will vary from organization to organization, but since most organizations allow for the strictness to be relaxed to a certain extent in firecall situations, they should be able to find some similar relaxation during migrations before they become firecall issues.

Addendum: what was the issue?

For the particular real world scenario that I mentioned, what was the cause of the problem?

As it turned out, for months and months, the production migration had been failing every single time.  Because neither the migration nor the infrastructure team could determine the actual cause, they were ‘forcing’ the migration to succeed through whatever manual steps that were required to get code into production.  Since they couldn’t pinpoint the issue, they didn’t officially raise it to any external group.

Well, in between the last ‘forced’ migration that they silently fixed and the most recent one that failed, the infrastructure team upgraded one of their systems that gave them additional logging that identified the issue.

For reasons that have yet to be explained, the production migration first attempts to migrate code into a “Pre-Prod” environment.  The previous developer of the code had fat-fingered a “Pre-Prod” config file to have a duplicate entry that no one had noticed before.  So, technically speaking, it was a code error.  Making the problem exceptionally irritating is the fact that, technically speaking, there is no purely separate “Pre-Prod” environment, it’s a step carried over from other infrastructures that have separate hardware, etc.   Every migration, we are asked to verify a successful “Pre-Prod” migration, but since there is nothing to test, we always automatically verify it as successful.

The person responsible for the migration, having discovered this flaw that had existed for 6+ months, reported it in an online system, demanded a new code package without explicitly telling anyone, and went home.

Fantastic.  The dysfunctional corporate exercise that then ensued is a topic for another day.

But, if there had been a ‘devops’ style migration practice, the duplicate entry could have been removed without requiring a whole new build and a new migration, which required explicit management approval.
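To make that concrete: a pre-migration sanity check as trivial as the following sketch (the file name and entries are invented for illustration) would have flagged the duplicate config entry months earlier, with no access to production required:

```shell
# Create a stand-in for the fat-fingered "Pre-Prod" config file.
cat > preprod.config <<'EOF'
ServiceA=endpoint1
ServiceB=endpoint2
ServiceA=endpoint1
EOF

# Any output here means a duplicate entry slipped in.
sort preprod.config | uniq -d
```

Run as part of the migration checklist, any output fails the check before the package ever leaves the developer’s hands.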


posted @ Sunday, November 14, 2010 4:21 PM | Feedback (1)
Judging Cost vs. Risk in Testing

Derick Bailey has a good post about determining the cost vs. value of doing testing.  No one who’s read this blog will be surprised that I particularly like this:

“There is a cost associated with a test-first approach. If you put in 100% unit test coverage, and 100% integration test cover and 100% end-to-end functionality test coverage, then you end up with 300% coverage of your system. Is the cost of maintaining 300% coverage worth it in your system? “

It’s a good read, check it out.

posted @ Thursday, November 11, 2010 10:04 PM | Feedback (0)
Favoring Value over Quality

Davy Brion has a good post up about how you should favor adding value in software development over that ephemeral ‘Quality’ that can lead to a never-ending drive for the never actually achievable perfection:

“So focusing on the quality of your code and design is a good thing, right? Of course it is, as long as it doesn’t prevent you from actually delivering your software to the people who are supposed to use it in a timely fashion. At some point, you are going to have to accept that you can’t spend all that extra time and money to make it perfect.”

It’s a good read, check it out.

posted @ Thursday, November 11, 2010 9:24 PM | Feedback (0)
I hate virtual keyboards

I have a project in which I will need to develop stuff™ for Windows Phone 7, Droid, and iPhone.  Accordingly, I finally got off the crappy but familiar Windows Mobile 6.5 stuff and got a Windows Phone 7 Samsung Focus (or whatever you call it) as my ‘daily’ phone.

There are many reviews out there about it, mostly positive with reservations, so you can google all that.

The one thing I worried about though seems to have come true.  I hate virtual keyboards.

Consistently, when I try to type a message or email or whatever, the virtual keyboard just doesn’t like what I’m trying to type.  It was never great with the physical keyboard of the Samsung Epix that I used to have, but I just can’t get my poorly trained fingers to hit the right keys on the virtual keyboard.

My hope is that I can train myself to aim for what seems to be the wrong spot, since that seems to be the key.  “The key.”  Gosh, I’m funny.

Otherwise, I really like the thing, but you can find more comprehensive reviews out there on google.  Not having Outlook sync is a problem, but I’m working around it.

posted @ Thursday, November 11, 2010 9:17 PM | Feedback (0)
Software Developers aren’t Doctors, and they shouldn’t play one on TV

Continuing on the theme that David Harvey talked about, where one can and should be in favor of ‘craftsmanship’ while being skeptical of Software Craftsmanship (TM pending), I’ve created a category to that effect, and want to continue the discussion here.

Jan Van Ryswyck (hope I spelled that right, he doesn’t have an easy last name like “Nuechterlein”, further mentioned as “JVR”) has a post up on Elegant Code where, inspired by a tweet, he discusses the division of software developers between laborers and professionals.  I’m going to paraphrase the hell out of it, so you might want to read the original; paraphrasing necessarily tends to lose something in the translation.

JVR begins with the usual execrable dichotomy between the ‘laborers’ and ‘professionals’, where the former are unintelligent and mindless and don’t care about code standards, while the latter are “very passionate about their craft, that want drive innovation and also want to continuously learn and improve.  If you’re a developer and you’re reading this blog post, you probably fall into this category of developers.”  The former can be characterized as “Nine to five, no thinking, narrow focus, like soldiers in the military obeying orders to make a big mess.”

<digression><rant>It’s hard to even begin to describe how many ways this dichotomy sucks.  The insult to the military is the least of it, but is eye-opening.  The last thing the military needed when the beaches of Normandy were stormed were soldiers not obeying orders.  Ignoring that though, there seems to be this continual basic idiotic thought that software developers who don’t care to spend every waking hour outside of the normal working hours of a business reading blogs (for God’s sake, blogs?) are somehow mindless laborers.  In just about any industry, it is a fact that you have to spend hours outside of normal working hours to get ahead and improve yourself.  But this idea that you somehow rise to the level of being a “professional” because you read ElegantCode blogs is just self-serving garbage.</rant></digression>

The Easy Target: Management

Next, JVR moves onto a very easy target, one that almost anyone, at least without careful thought, is going to feel a visceral dislike of, and that’s managers.  Oh, the poor Software Craftsman, all they want to do is improve their craft, only to be stymied by those managers!:

“Although they don’t get their hands dirty with writing software, most managers do feel compelled to impose all kinds of political decisions regarding business requirements, software architecture/design, tools and technologies to the development teams they are ‘managing’. I for one want to make it clear that this has to stop. In order to lift this industry to the next level, we as software professionals need to free ourselves from the leash that is currently being held by management. “

Who can’t agree with that?  Um, yeah.

Without a doubt, there are places and situations where you find yourself dealing with incompetent managers.  Everyone has been there and done that.  But, anyone with any real experience in software development knows that good project managers play a vital role.  In one of the environments that I work in, we have C# code, Java code, SQL Server code, Oracle code, and other processes that are part of the software development process.  No Software Craftsman is going to be able to manage the interplay between the groups that manage these different processes, as the Software Craftsman is too busy feeling good about himself reading blog posts.

In fact, the JVR posited Software Craftsman disdains that sort of thing.  He’s too busy “improving his craft.”  Which is something the good project manager has to deal with:

“Any business that wants to survive in this hard world economy and even wants to get ahead of its competition has to free its professional developers from management. It’s that simple. “

Exactly wrong.  Management needs to protect the business, the business that wants the software they need to do their job, from the whims of the Software Craftsman who changes what he thinks is good software development practice based on the latest blog he read.

Software Craftsmen aren’t Doctors

JVR thinks this is a compelling picture:

“Can you picture lying in an operating room when a manager in a suit bursts through the door, yelling at the surgeon that he’s not allowed to use technique xyz to save your live?”

The hubris of this is amazing, and funny.  The Software Craftsman is really saving the life of someone?  Yeah, not really.  And as an analogy, it doesn’t work either.  The Software Craftsman cares more about his self-development than he does about helping the business.  He’s more about “screwing up the courage” to dump .NET development altogether and moving onto Ruby, or whatever the current fad is.

It is precisely because of this that the Software Craftsmanship movement is bad and should be rejected.  The individual Software Craftsman doesn’t care about the legal liability in a corporation for using unapproved Open-source software, he’s more interested in himself.  The individual Software Craftsman doesn’t care about building consistent software that can be used across a group, as he’s more interested in following whatever fad catches his fancy.  The individual Software Craftsman doesn’t care about any of that.

Summary: Embrace craftsmanship, reject the Manifesto

Building well-written and maintainable software is hard to do.  It requires a lot of effort across many individuals across many teams.  There are established practices of how this can be done, however difficult it may be.

Software Craftsmanship, at the manifesto level, is a bad idea, and should be fought against.

posted @ Thursday, November 04, 2010 9:28 PM | Feedback (2)
SQL Server: When were my indexes and statistics updated?

Use these queries to check when statistics for indexes were last updated.

SELECT AS Table_Name
, AS Index_Name
,i.type_desc AS Index_Type
,STATS_DATE(i.object_id, i.index_id) AS Date_Updated
FROM sys.indexes i
JOIN sys.tables t
ON t.object_id = i.object_id
WHERE i.type > 0
ORDER BY ASC, i.type_desc ASC, ASC

SELECT name AS stats_name
,STATS_DATE(object_id, stats_id) AS statistics_update_date
,*
FROM sys.stats
ORDER BY STATS_DATE(object_id, stats_id)
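If you only care about a single table, the second query can be narrowed down with OBJECT_NAME (the table name below is a placeholder):

```sql
SELECT name AS stats_name
,STATS_DATE(object_id, stats_id) AS statistics_update_date
FROM sys.stats
WHERE OBJECT_NAME(object_id) = 'YourTableName'
ORDER BY STATS_DATE(object_id, stats_id)
```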


posted @ Thursday, November 04, 2010 9:13 AM | Feedback (0)