November 2008 Blog Posts
DDD-Lite and Data Access: One way of doing it

I’m not sure who came up with the term ‘DDD-lite’, or if it even makes sense to ask that question, but I *think* I first saw it used by Colin Jack.  In any event, I’m going to describe what I mean by it, and how I’m using it in a way that I find comfortable, along with a certain way of doing data access.

As I’m using it in this post (I don’t think there is an official definition), ‘DDD-lite’ refers to the use of certain patterns within your code.  Full blown DDD requires an extensive collaboration between business users and developers and everyone in between (though there are many debates about this), and so DDD-lite is more of a programming technique (or set of them).  As I use it (and I’m talking strictly about web forms here), it involves:

  • MVP/C at the presentation layer (technically, only the ‘V’ is truly the presentation layer, but go with me here)
    • V – the view.  I always create views at the user control level, and the aspx page just combines them.
    • M – the model.  This gets the…stuff (I want to say ‘data’ but it doesn’t directly get data in a database sense) that the view needs to display, as well as act on the…stuff when it changes in the view.
    • P – the presenter.  This co-ordinates the interaction between the M and the V.  I like my views to be a bit more robust than others (so the presenter may pass some stuff to the view and the view itself will do some formatting, etc.).  I also tend to have the presenter constructed with the view passed in, and then wire up the presenter to the view’s events, then have the view fire off events, blah blah blah.
    • C – the controller.  In the ASP.NET world, this could mean MonoRail and other things, but I’m only considering ASP.NET MVC here.  Since routing scares me (the combination of strings and order of precedence is something I will hork up massively the first twenty times I do it) and it isn’t officially released yet, I’m staying off it for now.
  • Facade/Services
    • I’m actually inconsistent here.  I think I prefer that the M call into a facade, which then calls into a service, but I sometimes shortcut this.  What’s the difference?  Not sure in terms of importance, but something like this: suppose a standard eCommerce site.  I might have a facade layer for the shopping experience (adding to cart, applying discounts if I’m a logged-in user with certain blah blah blah) and a different one for the ordering experience, for instance.  And the facade is a cover/combiner of various services.  Or, I might just call into the services directly.  I go back and forth between the two, and am not sure if it is because of whimsy, or because I can’t tell if one is really better than the other, or laziness, or stupidity, or what.
  • Repositories
    • Here’s where the data stuff with the database happens.  But there’s a ‘complication’ (it isn’t really complicated, but just how I do it).
  • Domain Objects/DTO/Data Objects/Mapping
    • Domain Objects: the objects that actually do all the stuff and have all (or most, or at least some) of the behavior.  These objects are basically POCO, and these are heavily tested (when I don’t cheat) since these are the objects that do the stuff that matters.
    • DTO: the objects that are used to populate the presentation layer, and pass data back from the presentation layer.
    • Data Objects: the data access stuff, more below.
    • Mapping: obviously, two types:
      • DTO-Domain
      • Domain-Data
      • One way to think about this is a parallel between the other layers
        • DTO: maps to the presentation layer
        • Domain: maps to the facade/services layer
        • Data: maps to the repository layer
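
In rough code, the MVP wiring described above looks something like this (all of these types are invented for illustration; this is one way of doing it, not the only one):

```csharp
using System;

// A view exposed as an interface: the presenter is constructed
// with the view passed in, and wires itself to the view's events.
public interface IOrderView
{
    event EventHandler SubmitClicked;
    string CustomerName { get; }
    void ShowStatus(string message);
}

public class OrderModel
{
    // Gets the "stuff" the view needs; in a real app this would
    // call into a facade or service layer rather than build a string.
    public string LoadStatusFor(string customerName)
    {
        return "Order pending for " + customerName;
    }
}

public class OrderPresenter
{
    private readonly IOrderView view;
    private readonly OrderModel model;

    public OrderPresenter(IOrderView view, OrderModel model)
    {
        this.view = view;
        this.model = model;
        // Wire the presenter to the view's events.
        this.view.SubmitClicked += OnSubmitClicked;
    }

    private void OnSubmitClicked(object sender, EventArgs e)
    {
        // Co-ordinate M and V: pull from the view, ask the model,
        // push the result back to the view for display.
        view.ShowStatus(model.LoadStatusFor(view.CustomerName));
    }
}
```

The aspx page (or a user control) would implement IOrderView and raise SubmitClicked from its button handler.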

 

Okay, so the data access stuff is pretty straightforward, and I’ll use LINQ to SQL as the example.  Can you use L2S as your domain model?  I don’t know.  Someone smart enough could probably figure it out.  But, I’m not that smart and I don’t want to worry about it.  So, I create Data Objects the standard way.  Drag and drop the fu&*er.  Or use SqlMetal.  In any case, I don’t worry (that much) about how L2S does what it does.  I don’t want to TDD this area, unless I have edge cases.  If I’m using IRepository<T>, I don’t want to test every instance of T here.  To me, it is akin to testing whether int works in the CLR.  You assume the framework does what it does properly.  Similarly, I assume that once you get the basic case of IRepository<T> working, it will work across the board.
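
For reference, the shape of IRepository<T> I have in mind is something like the following.  This is a sketch: the in-memory implementation stands in for the L2S one (which would wrap DataContext’s GetTable<T>/InsertOnSubmit/SubmitChanges) so the ‘basic case’ can be shown without a database.

```csharp
using System.Collections.Generic;

// The common repository abstraction the rest of the stack codes against.
public interface IRepository<T> where T : class
{
    void Add(T entity);
    IList<T> GetAll();
}

// In-memory stand-in for the basic case.  A LINQ to SQL version
// would take a DataContext and forward to GetTable<T>(); once that
// works for one T, you trust it for the rest.
public class InMemoryRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> items = new List<T>();

    public void Add(T entity)
    {
        items.Add(entity);
    }

    public IList<T> GetAll()
    {
        // Return a copy so callers can't mutate the backing store.
        return new List<T>(items);
    }
}
```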

I do want to test the mapping.  Since I may refactor my domain objects, I need to know that the mapping between my domain objects and my data objects continues to work.  This allows me to continually refactor my domain and then test the mappings to know they are in sync, without having to test *everything* down the stack.
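
A minimal sketch of what such a mapping test looks like (the two Customer types and the mapper are invented for illustration): round-trip the mapping and assert nothing was lost.

```csharp
// Domain object: POCO, carries the behavior (none shown here).
public class Customer
{
    public string Name { get; set; }
}

// Data object: stands in for the generated L2S entity.
public class CustomerData
{
    public string Name { get; set; }
}

public static class CustomerMapper
{
    public static CustomerData ToData(Customer c)
    {
        return new CustomerData { Name = c.Name };
    }

    public static Customer ToDomain(CustomerData d)
    {
        return new Customer { Name = d.Name };
    }
}

// The test: if a domain refactoring breaks the mapping, this
// catches it without touching the database or the rest of the stack.
public static class MappingTest
{
    public static bool RoundTripPreservesName()
    {
        var original = new Customer { Name = "Colin" };
        var roundTripped =
            CustomerMapper.ToDomain(CustomerMapper.ToData(original));
        return roundTripped.Name == original.Name;
    }
}
```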

An obvious question is whether or not this whole thing is a good idea.  Could I improve the different layers and how they interact?  Obviously there are times when I might say yes to this question, but I judge the ‘is it a good idea’ question by how easy it is for me to implement functionality. And because I have a comfort level with the way I’m doing it now, I go with it.

The bottom line is that the way I’m doing it now works efficiently for me.  I’m used to it.  The places or junction points where I am not so sure about, I build tests.  The rest, I trust that the framework will test for me.

posted @ Tuesday, November 25, 2008 9:14 PM | Feedback (5)
Windows Live ID Bug

Okay, this has been going on for…years now.  I just got so used to it that I didn’t really notice it.  That’s not quite accurate.  I noticed it, but got used to it.

Anyway, the bug is very simple.  Every time I need to log into Hotmail or any other Microsoft service that requires a Live ID (most recently, Azure Services), I have to login twice.  The first time, I am told my email address and/or password is incorrect, then I use the same info again, and it works.

Now, it may be that I have some neural glitch such that I incorrectly type my password EVERY SINGLE TIME the first time and then correctly type it EVERY SINGLE TIME the second time, but I doubt it.  I tend to always save my email address so I only have to enter my password.  And I use Maxthon as my IE-type browser instead of IE itself.  I should remember to test different combinations of browsers.

But, it is really weird. 

posted @ Tuesday, November 25, 2008 7:40 PM | Feedback (2)
This guy is still in the league (NHL version)?

As a Canadian (dual citizen of US and Canada), I think it is required that I have the Center Ice package.

Anyway, was watching a Predators-Hurricanes game the other day, when I heard a Radek Bonk reference. 

He’s still in the league?  Really?  I specifically remember when he was drafted, because he had scored something like 80 goals for Las Vegas (they have hockey there?) and was compared to Jaromir Jagr because….he had the same mullet.

Seriously, I think that was the unspoken reason that people had to compare him with Jagr.  He was European, and he had the identical mullet.

And he’s in his 13th year in the NHL.  God bless him.

posted @ Monday, November 24, 2008 8:18 PM | Feedback (0)
These guys are in the League?

So, watching the Bears mop up the Rams (not a surprise), when the Rams send in Brock Berlin to finish up the loss.  Brock Berlin?  They let this guy in?

As a proud alumnus of ‘The U’ (though we got smacked by Georgia Tech last Thursday, and our coach is a whiner….you really think Florida ran up the score by kicking that late field goal?  Jimmy Johnson would have faked the field goal and scored a touchdown….but I digress), I was a bit stunned, as he wasn’t really all that hot in college.  Then again, Ken Dorsey won a national championship and I didn’t think he was all that hot either, and he’s *still* in the NFL.

As a joke, I said to myself ‘I guess this means Kyle Wright is in the league.’  Holy crap.  He made it?  Looks like he isn’t actually on the active roster for the 49ers, maybe he’s on the practice squad?  That’s amazing.

Here comes Robert Marve.

posted @ Sunday, November 23, 2008 3:01 PM | Feedback (0)
More on TDD

Dimecasts is a site created and run by fellow Chicago Alt.Net group ‘founder, by which we mean we were in the bar when we decided to do it’ Derik Whittaker that hosts a decent number of relatively short webcasts on topics that Alt.Net type people might be interested in.  They are all supposed to be under 10 minutes but, as I like to heckle Derik about, they usually run from 11 minutes up to about 15.  No matter, really, but heckling is fun and cheap.

Anyway, Episode #65 is a ‘demo’ of sorts of both TDD and Pair Programming, as Derik and Kyle Baley begin to implement a basic new feature for the Dimecasts site, one which will email the author of a Dimecast episode any comments that are added to it (the site already allows comments).  So, pretty basic stuff.  Kyle (with whom I have exchanged email on occasion, but whom I don’t know anywhere near as well as I do Derik) is remote, so there is lag and some static in the webcast, blah blah blah, but the production values aren’t really important, as the two are able to do what they want to do, and it is easy to follow.  And taking 15 minutes to test out a design for a feature seems like a decent amount of time to do it, so no worries there.

Having said that, watching this webcast, it worries me that any competent developer who might see this will reject TDD out of hand.

To put in a few caveats/disclaimers about what I *don’t* mean by this:

- as my own hobo-like presentation at the Chicago Alt.Net group will attest to, I embrace the idea of programming to interfaces, so the fact that there are four interfaces in the design doesn’t bother me.

- while I personally don’t have a love affair with the underscore naming convention, I understand how the sorts of reports that can be generated off of them can be much easier to understand in various situations, so that’s cool.

- rap master douchebag hasn’t issued any ultimatums about this episode, so I assume it is at least a reasonable presentation of TDD in this space.

Having said all that, there are obvious problems here.  The piece of functionality under test is that an email is sent to the author of an episode if someone comments.  As designed, the test requires *8* vars to do so.  Each ‘var’ is a place for possible error and often requires a proper understanding of the mock framework.  That’s a hell of a lot of code required to test a single assert.  Imagine if the requirement was actually difficult.  Related to this, the code in the test bears basically no resemblance to the code that will actually perform the functionality as required in the *real* code (which hasn’t been written yet, but will be in the next episode, numbered whatever).

From a philosophical standpoint, <insert gratuitous comment>I have multiple degrees in Philosophy, the core of which involves logic</insert gratuitous comment>, TDD makes a lot of sense.  What is my code supposed to do, and what is its API supposed to be?  Great.  But, maybe it is the syntax, maybe it is the process, but the fact of the matter is, if I really want to know if an email is sent when a comment is added, I can write 10x less code and just run an integration test.  “OOOO, integration tests are slow!!!!!!”.  Sure (though I immediately want to ask, compared to what, and what’s your confidence that your 74 vars in your test code actually test what will happen in production??), but slow is better than inaccurate. 

It is easy to get addicted to the fast results of unit tests.  If you’ve ever had to switch between them and long-running integration-style tests, it is *painful*.  Extremely so.  But integration-style tests, when done properly, test your real code, not some ‘hopefully accurate’ code.
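
To sketch what I mean (and to be clear, none of this is the actual Dimecasts code; every name here is invented), an integration-style test can exercise the real mail path by using SmtpClient’s pickup-directory delivery and then simply checking that the .eml file landed:

```csharp
using System;
using System.IO;
using System.Net.Mail;

// Hypothetical stand-in for the real service under test.
public class CommentService
{
    private readonly SmtpClient smtp;

    public CommentService(SmtpClient smtp)
    {
        this.smtp = smtp;
    }

    public void AddComment(string episodeAuthorEmail, string comment)
    {
        // ...save the comment, then notify the episode's author.
        smtp.Send("site@example.com", episodeAuthorEmail,
                  "New comment", comment);
    }
}

public static class CommentIntegrationTest
{
    public static bool EmailLandsInPickupDirectory()
    {
        // Fresh temp directory so the assertion counts only our mail.
        var dir = Path.Combine(Path.GetTempPath(),
                               Guid.NewGuid().ToString());
        Directory.CreateDirectory(dir);

        // The real SmtpClient runs, but writes .eml files to disk
        // instead of talking to an SMTP server.
        var smtp = new SmtpClient
        {
            DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory,
            PickupDirectoryLocation = dir
        };

        new CommentService(smtp).AddComment("author@example.com",
                                            "Nice cast");

        // One .eml file in the pickup directory == one email sent.
        return Directory.GetFiles(dir, "*.eml").Length == 1;
    }
}
```

No mocks, one assert, and the code path being tested is the code path that ships.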

posted @ Friday, November 21, 2008 8:19 PM | Feedback (11)
Choose a path, stick with it

So, I’m finally done spiking something, and ready to start on a major new project.  I’ve read 40+ blog posts and technical articles a day for the last 3 months, I’ve looked at nearly all the various alternatives that made sense to me, and I think I’m ready to go.

I’m not going to do strict DDD (since it is impossible here), TDD (since it is misguided here) or BDD (since I really can’t stand it and as the main business user and developer, I can’t do it ‘correctly’ anyway, if there was such a thing as doing BDD correctly, which there isn’t, but I digress), but I am going to do a version of something that people have called DDD-lite.

That is, I’m going to have a central POCO domain model, I’m going to use repositories of a common type, I’m going to use services, and I’m going to use either MVP or MVC at the presentation layer (which will involve both Web and Windows Forms, and Silverlight and WPF).  And as hard as it is for me to do, I’m going to surround them all with tests (though I will not strictly write the tests first or aim for 100% code coverage, instead focusing on tests that actually mean something).

Okay, so I’m ready to go.  I think.

Probably the biggest difficulty has been simply choosing amongst alternatives (still debating MVP vs. MVC on the web side).  There are so many different ways of doing IRepository<T>, for instance.  Also, looking at a current project, I can almost map the progression of my (hopefully) learning new things and applying them as I got to different parts of the system.  All of them work, just a little bit differently.

So, what I’m going to do this time, is use the same patterns and techniques in every subsystem.  And if I decide to change a pattern or technique, I am going to require ‘back-porting’ it to all other parts of the system.

I think.  I feel pretty confident about it.  Today.  Right now.  Here we go.

posted @ Saturday, November 15, 2008 7:13 PM | Feedback (2)
Developer Growth Price is Right: From Coder to Craftsman, without going over

Jeremy Miller has a post about the evolution of a developer (in terms of DI/IoC) and also includes something from a previous post of his (but doesn’t directly link):

“I think there is an inflection point where a coder mindlessly spewing out code transforms into a thoughtful software craftsman capable of creating maintainable code.”

Now, I think the idea that this is an off-on switch type of thing is wrong.  Developers like to talk about ‘aha!’ moments (“Take on me!!!!!!!!”….sorry, I digress, but I loved ‘Hunting High and Low’) and they are real.  They are real in other intellectual disciplines, and the experience of them can be pretty powerful (in my Philosophy hey-day, I had more than a few), but I think the experience is also over-rated.  You went from not getting something to getting something (or at least you think so…I think a lot of ‘aha!’ moments are phony), but the development of a developer is much more of a continuum.  And the moronic Alt.NET stereotype of the dumb 9-5 80% lingers in here (more on this in another post soon).

Nit-picking aside, there’s an obvious truth to what he’s talking about, and he aptly describes stages along the way.

But, I think there are times at which this ‘development of a developer’ process is in a negative direction.

digression: you can view videos and content from Uncle Fester’s EchoChamberConf here.  Lots of good stuff.  In order to have Continuous Improvement, you also need to have the notion of ‘a step backward’ and the notion of ‘a completely bad idea that should be abandoned’…but I digress.

To re-use another stupid analogy that I might have used at some point, but am way too lazy to lookup, consider the act of painting.

Have you ever had to paint a room, or had the brilliant idea of painting a room yourself?  As a way of making money in college, one summer I worked on a crew that re-painted dorm rooms (looking back, I think the main point of this arrangement was to give middle-aged ‘manager-type’ men a new pool of young girls to hit on in hopes of finding one or more that was dumb and/or naive…this is also the case at any restaurant like Bennigan’s…but I digress).  It was not particularly enjoyable (students had many interesting ways of using their dorm rooms…in case you ever need to know this, partially digested Cheerios do not decompose, at least not noticeably, but I digress).  At times, you came across a room where the student(s) had actually done some pretty decent work.  And you placed a layer or two of white paint (or off-white or whatever it was) over it, and you were done.  Since it was during the summer, and it was in Houston, it was freaking hot.  But I digress.

Anyway, if you own a house, you might end up painting a room on your own, because you think you can save money and/or you’ve watched too many Lowe’s ads and feel inadequate.  Depending on your skill level, it might actually turn out well (as long as you don’t look at the baseboards too closely, etc.).

However, if you are a business owner and need a room painted, you aren’t likely to call Mary Jane Homeowner, but a professional company that does that sort of thing.  These companies fit on a continuum, but in general you get a better result than if you hired someone like, say, me.  There are steps you need to take to make sure the results are professional.  I think it has something to do with primer.  Bad analogy when I don’t know what professional painters do, but figure it out.

To push the analogy, a developer can live a long time (and I mean a *long* time) just slapping a layer of white paint over a codebase without ever thinking about design principles.  The transition from coder to craftsman is something like transitioning from “I’m painting stuff for some immediate need” to “I’m painting stuff in a way that is repeatable, because I actually care about it.” 

To really push the analogy (to get to the whole ‘without going over’ bit), suppose you are a business that needs a room painted, and all you can hire to do so is Michelangelo.

If I’m Michelangelo, I don’t want to just put up some white paint and make sure the floorboards are painted properly; I want to create a work of art.

But if I’m a business owner, unless I’m painting a room at the Bellagio, I don’t want a work of art.  I want a professionally painted room.

The artists of the development world want to push things like deprecating the database, moving to an object-oriented database system, and other things that are really not applicable in the real world.  It isn’t that you couldn’t build a system on these lines.  You could.  And the artist is best capable of delivering this.  But this isn’t what a business really wants.

When the ‘development of a developer’ pushes from Craftsman to Artist, you have problems.  And it is an interesting problem.  If you abstract the context as much as possible, wouldn’t you want to have Michelangelo instead of just some competent painter?  Seemingly, the answer is obviously yes.  But in the real world of business, the answer is no.  You want someone who will accept the business requirement of the job at hand, and a ‘lesser’ painter/developer is more likely to give you that.

posted @ Thursday, November 13, 2008 9:26 PM | Feedback (7)
What makes code easily deployable?

In a previous post, I asserted that excellent code is easily deployable.  What might that mean specifically?

It depends, of course.  But, let’s try an example.

Separation of concerns is a basic principle that means (paraphrasing) “Don’t try to do too much s$%t at once.”  For instance, if you are working on your UI/Presentation Layer, don’t embed business logic and/or data access logic within it.  An obvious example is to not put ADO.NET code in the code behind of a web forms page.  This makes it harder to test the code (in any sense of ‘test’ that isn’t an end-to-end integration test).  It is painful to only be able to discover that your business logic doesn’t work by firing up IE and watching some interaction with the page fail.  It is, unfortunately, often unavoidable, but life is full of trials and tribulations.
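
As a sketch of that separation (all names here are invented): put the business rule behind an interface instead of inlining ADO.NET in the code behind, and suddenly you can test the rule without ever firing up IE.

```csharp
// The data access concern, reduced to an interface the business
// logic depends on.  A real implementation would hold the ADO.NET
// code that would otherwise live in the code behind.
public interface IOrderData
{
    decimal GetBalance(int customerId);
}

public class CreditChecker
{
    private readonly IOrderData data;

    public CreditChecker(IOrderData data)
    {
        this.data = data;
    }

    // The business rule (a made-up one: a customer can't carry
    // more than a 1000 balance) lives here, not in a web page.
    public bool CanOrder(int customerId, decimal amount)
    {
        return data.GetBalance(customerId) + amount <= 1000m;
    }
}
```

Testing CreditChecker now takes a stub of IOrderData and a plain assert, rather than a browser and a prayer.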

However, there is a way in which separation of concerns can lead to difficulties in deployment.  One of the standard blah blah dictums of software development is that the greatest cost involves maintenance, and deployment is a maintenance cost (especially if there are multiple environments to which code is deployed…if you’ve worked in places where the number of deploy targets is greater than single digits, you really know what I mean).

Suppose you have an application that needs to be scheduled by some means (whether Windows Task Scheduler, SQL Agent Jobs, 3rd party scheduling tools, etc.).  Suppose your application is such that it is parameter based, and so there are multiple instances that need to be configured for scheduling.

I’ve already touched in the previous point on how to minimize difficulties by not doing stupid crap (technical phrase) with configuration, but there are issues beyond that.

From a development standpoint, it would seem really bad to embed the multiple parameters inside of the application.  E.g., it seems much better to have single scheduled instances like “myapp.exe ‘task1’”, “myapp.exe ‘task2’”, “myapp.exe ‘task3’”, “myapp.exe ‘task4’”, as opposed to one scheduled instance “myApp.exe” with tasks 1 through 4 coded within it (and for the nit-pickers like me, just accept that you can’t do a myApp1.exe, myApp2.exe, etc.).  You should cringe at the thought, since you know that you’ll end up cramming tasks 5-74 into the one executable once you go down that path.
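
The parameter-based style above can be sketched like this (the task names and bodies are made up; Main(args) would just forward to Run):

```csharp
using System;
using System.Collections.Generic;

public static class TaskRunner
{
    // Each scheduled instance passes one of these names on the
    // command line: "myapp.exe task1", "myapp.exe task2", etc.
    private static readonly Dictionary<string, Action> Tasks =
        new Dictionary<string, Action>
        {
            { "task1", () => Console.WriteLine("importing orders") },
            { "task2", () => Console.WriteLine("sending invoices") }
        };

    // Returns a non-zero exit code on a bad task name, so the
    // scheduling tool can see (and log) the failure.
    public static int Run(string[] args)
    {
        Action task;
        if (args.Length != 1 || !Tasks.TryGetValue(args[0], out task))
        {
            Console.Error.WriteLine("usage: myapp.exe <taskname>");
            return 1;
        }
        task();
        return 0;
    }
}
```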

Except….

One key rule of software development (which I have yet to think of a catchy name for) is that you have to know the entire life-cycle.  If you are in an environment that has no deployment issues, none of this matters, but in most real-world situations, there are some.  Besides the number of environments involved, you often have to cross teams for deployments, where the DEV team knows next to nothing about the deployment issues specifically, and the deployment team(s) know next to nothing about what an executable does specifically.

In some such cases, where the maintenance cost of deploying the software exceeds the development cost of not separating concerns, you should violate the principle of separation of concerns.

Beyond deployment comes troubleshooting.  Any scheduling tool worth talking about will have some ability to log what happens when a scheduled executable runs.  But different tools do this differently, and knowing how the production scheduling tool handles logging is crucial here, especially when developing the executable in the first place.

It’s easy to (roughly) formalize this into something like the following: if (cost of separating concerns in the executable) > (cost of deployment when separating) + (cost of troubleshooting when separating), then don’t separate concerns in the executable.

In fact, if you know that you can lessen the cost of deployment and troubleshooting by not separating concerns in development, you should do that, even though it is painful as a developer to do it.

The difficulty comes about in determining what the various costs are.  There is rarely/never an exact way to determine this.  The variables involved in the troubleshooting process are many.  If you use EntLib or log4net, you can log things to the EventViewer or to SQL or to SMTP or to your Mom.  Well, when something goes wrong, there’s at least an 85% chance that the developers will have to be called in to fix production issues.  Do the developers have access to the EventViewer?  Probably not.  Read-only access to Production SQL?  Maybe.  And so on and so forth.

The upshot is that when you do software development, you have to keep in mind how your code is going to be deployed, and how it is going to be troubleshot (is that a word?), and if it turns out that by eliminating separate components that are deployed and troubleshot, you end up also eliminating separation of concerns, then you should do so.

Alt.NET people (including myself) like to talk about maintainability, but sometimes, the maintainability for the developer is only one part of the equation.  What is the maintainability for the deployment team and/or the production support team?  In any real-world project, all of these factors have to be considered.

posted @ Monday, November 10, 2008 8:12 PM | Feedback (0)
Excellent Code is Easily Deployable Code

Why I am including this in the rant category, I will leave as an exercise for the reader.

Let’s leave aside all of the things (SOLID principles, for instance) that should be considered basics in any software development process; I want to talk about configuration.

App.config.  Web.config.  Within .NET, almost any project is going to involve having values that will need to change as you deploy code from environment to environment (DEV, DEVINT, QA, PREPROD, PROD…do any of these sound familiar?).

Except they shouldn’t.

I want to advance as a general principle/goal that you should never, ever need to change any values related to configuration as part of your deployment process.  Certainly not manually, and hopefully never even as an automated process (based on string matching for instance).

I started out my professional career as the head of an Operations Department.  Working on ways to automate deployments was always a central goal.  Why?  Because any step that requires manual intervention is a perfect candidate for something to be screwed up.  The guy who knows the magic combinations is out on vacation.  The document listing the required changes is out of date.  The up-to-date document will contain a typo.

None of this should happen.  As always, there are areas where I’m sure this is harder to do than others.  As technology changes, there will perhaps be cases where it isn’t possible to leave config values untouched.  But for the most part, it should be possible.

SQL Connection strings?  Use an alias.  I typically create an alias called ‘Connect’ in every environment, and then set all connection strings to use that as the server name.  Sure, you still have to manually create the alias in each environment, but it is a one-shot deal.

There are many other places that you can set ‘alias’ type variables.  Environment variables and the Registry are obvious places.  Need to have a SQL connection string that has to have SQL authentication?  Instead of putting that in the config file, have code that reads from an encrypted registry key instead.
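>
A hedged sketch of the ‘alias’ idea (the variable name, database name, and fallback here are all invented): resolve the server name through a single indirection, so the deployed config file never changes between environments.

```csharp
using System;

public static class ConnectionStrings
{
    // Each environment defines CONNECT_SERVER once (an environment
    // variable here, standing in for a SQL alias or an encrypted
    // registry key); the config that ships never names a real server.
    public static string Resolve()
    {
        string server = Environment.GetEnvironmentVariable("CONNECT_SERVER")
                        ?? "Connect";   // fall back to the SQL alias name
        return "Data Source=" + server +
               ";Initial Catalog=MyDb;Integrated Security=SSPI";
    }
}
```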

WCF endpoints, MSMQ names, file shares…okay, what about file shares?  The QA server has a different path to where the outputted file needs to go from what you DEV against.  Use the hosts file.  Create a clearly non-standard definition in your hosts file that says that ‘Connect’ means ‘T174WBlah’.

And so on and so forth.

We/I certainly didn’t do this perfectly back in 2000, but we did do it back then, so there is no reason why it can’t be done today.  Watching a software project hit a wall because the code that works in DEV doesn’t work in any other environment, because no one can get all the manual config file changes down perfectly in a deployment, is *painful*.

Again, it is a goal that might not always be perfectible.  It takes iterations of improvement to find all the weak points.

But for the love of God, with not a lot of effort, you can eliminate almost all configuration issues across any environment in your, well, environment if you ‘think backwards.’  If it helps, use config values that seem stupid.  Make the server name ‘Steve’ and then figure out how the config value stays ‘Steve’ from DEV to QA to PROD without needing to be changed, and the code still works.  You can do it.

posted @ Friday, November 07, 2008 7:14 PM | Feedback (0)
Alt.NET Baby-ism

So, Ayende wrote a post criticizing some API related stuff about the ASP.NET MVC Framework.

Phil Haack had a reply that dealt with why the design was done the way it was done, which contained the following funny comment:

“We spent a lot of time thinking about these design decisions and trade-offs, but it goes without saying that it will invite blanket criticisms. Fortunately, part of my job description is to have a thick skin. ;)

In part, by favoring usability in this case, we’ve added a bit of friction for those who are just starting out and have trouble using Google.”

Now, *anyone* with any sense knew that this was a joke.  I mean, if someone commented that Tiger Woods needed to work on his putting, or that Abraham Lincoln needed to work on his leadership skills, you know it was a joke (I like and respect Ayende, but don’t think he’s actually at the level of either Woods or Lincoln, but you get the analogy).

Uncharacteristically, Ayende took the comment seriously:

“Saying that I am a newbie or that I lack the skills of using google is inaccurate and insulting.”

As I commented on his blog, he needs to lighten up.  People say things about me that are actually inaccurate and insulting on at least a weekly basis, and those are from my *friends*.  And Ayende is good friends with people who inaccurately and insultingly question other people’s ethics before they finish their breakfast each day.

Very disappointing reaction from someone who is one of *the* good natured guys in the space.

posted @ Thursday, November 06, 2008 7:31 PM | Feedback (3)