February 2008 Blog Posts
Windows Server 2008 Install

I like it, quite a lot.  Got TFS 2008 running on it and there have been no problems so far.  Because I was doing a complete reinstall rather than an in-place upgrade, I had to recreate a lot of my build settings, but that's okay.

The one problem I had with the install itself was that the server had been set up with Windows Server 2003 with dynamic disks created to provide mirror support for the hard drives (I can't afford hardware RAID for these systems right now...sue me).  This should have been okay, and Windows Server 2008 booted off the DVD without an issue and didn't complain about the partition I selected to install on.  Since I was doing a clean install, I did a quick format and told it to continue, but it refused to copy the boot image it needs to disk (which it then expands as part of the install).  For whatever reason, the install process would not let me get rid of the existing partitions.

So, I did the obvious: booted off of a Windows Server 2003 install disc and deleted all partitions so I could start from a clean slate.  Though it probably wasn't required, I let it copy all the installation files just to make sure the partition was fine, then swapped in the Windows Server 2008 DVD and booted off of that.  Everything went swimmingly after that.

posted @ Friday, February 22, 2008 8:09 PM | Feedback (1)
Visual Studio 2008 - Vista SP 1: IIS 7 Debugging No Longer Works After SP Install

Now, to be clear, I believe I am still running IIS 7 in classic pipeline mode and running VS 2008 as Administrator to get around all the issues that came up when Vista came out, so this may not apply to you.

But, after installing SP1 for Vista, I could no longer debug my web applications running on IIS7.  I worried about some major problem that would take hours if not days to fix.

As it turns out, the WAS (Windows Process Activation) service was not set to auto-start, which prevented any of the application pools from starting, which in turn prevented the WWW service from starting.

I started WAS (and set it to auto-start), checked in IIS 7 Manager to make sure the pools were started, and everything was good.

posted @ Friday, February 22, 2008 8:02 PM | Feedback (0)
Ix-nay on Windows Server 2008

Turns out that the way I have my current public-facing server configured, I can't upgrade to Windows Server 2008.  So, I'll be doing it internally first.  Which, if you think about it, is more logical.

Then again, this tells me that I need to upgrade the public server, and maybe get that Windows Home Server unit.  Hmm.

posted @ Sunday, February 17, 2008 9:02 PM | Feedback (0)
Upgrading to Windows Server 2008

Tip for you.  When upgrading a server, don't make any accommodation for it not working.  And do it after midnight after waking from a deep sleep.

Anyway, when, I mean if, the site isn't reachable (again), it will be related to this.  This reminds me that I need to find a good .NET 3.5 host, so I should move this boring sucker when I do that.  Note to self.

posted @ Sunday, February 10, 2008 12:11 PM | Feedback (0)
ASP.NET, DataContext, and LINQ to SQL: Business Process Model

In my previous post, I pointed out a good article by Rick Strahl on different ways to handle DataContext management and mentioned that I used his 'Business Object' method.  This actually isn't quite accurate, so let me expand on that a little bit.

I'll use a standard e-commerce site as an example.  If you think about most of the data access required when creating such a site, there is little about the DataContext that you really need to worry about.  Product lists, navigation elements, search results, and other items are essentially read-only, so you can create a DataContext, use it, and throw it away.

Even things like user profiles or shopping carts do not require you to carry around a DataContext.  If you need to add something to (or remove something from) a shopping cart, it is much better to recreate the cart, make the adjustments, and immediately submit your changes on the shopping cart page than to hold on to a DataContext, or even the cart itself (if you need the count for display purposes, it is much better to save the count itself in a session variable...you don't need the cart except on your shopping cart page).  The same can be said for user profile information (if you need to worry about elements of a user's profile, it is better to save those elements than the full profile).  Not carrying around complex objects makes a web application significantly more scalable.
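A minimal sketch of the session-variable idea (the entity, table, and variable names here are illustrative assumptions, not from any actual code base):

```csharp
// Illustrative sketch: use a short-lived DataContext, submit immediately,
// and keep only the item count in session rather than the cart object.
using (var context = new MyDataContext())
{
    var cart = context.Carts.Single(c => c.UserID == userId);
    cart.Items.Add(new CartItem { ProductID = productId, Quantity = 1 });
    context.SubmitChanges();

    // Save just the number for header display; drop the cart itself.
    Session["CartItemCount"] = cart.Items.Count;
}
```

The header control then reads Session["CartItemCount"] on every page; only the shopping cart page ever rehydrates the full cart.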

However, when it comes to something like the checkout process, a process that needs to traverse multiple pages (and often both backwards and forwards), managing your DataContext becomes important.  When initially moving a code base over to LINQ to SQL, I tried to use a DataContext per page, and then attempted to detach, attach, pulse until smooth, etc. on the order review page, and it was untenable and unmaintainable.

Instead, what I found does work is to create a 'Business Process' class (for this example, let's call it Checkout) that manages a single DataContext throughout its entire life cycle.

In its constructor, I would do something like:

public Checkout()
{
    _context = new MyDataContext();
}

where _context is a private field exposed only through a public read-only property.  The Checkout class is created at the beginning of the checkout process (typically, when the 'Checkout' button is clicked) and stored in session.
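Fleshed out a little, the skeleton might look something like this (class and member names are illustrative, not the actual code base):

```csharp
// Illustrative sketch of the 'Business Process' class.
public class Checkout
{
    private readonly MyDataContext _context;

    public Checkout()
    {
        _context = new MyDataContext();
    }

    // Read-only access to the process-level DataContext.
    public MyDataContext Context
    {
        get { return _context; }
    }
}

// In the 'Checkout' button click handler (Web Forms):
protected void CheckoutButton_Click(object sender, EventArgs e)
{
    Session["Checkout"] = new Checkout();
    Response.Redirect("~/checkout/shipping.aspx");
}
```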

On each page within the checkout process, if I need to do any data access work, I can use the contained DataContext and since it is carried around throughout the process, I can call SubmitChanges() when the order is submitted, and everything works just fine.

I won't walk through the entire process, but imagine a page where you need to enter shipping information.  Your Address object is likely to want a State object, and at first glance this poses a problem.  When your UI address form contains a dropdown list to let you choose a state, that list is going to be populated from some other DataContext.  But this is okay.  You can recreate the State object from the value of the selected item using your contained DataContext, which you will also use to InsertOnSubmit() the new Address object (or, if it is an existing item chosen from a user-profile-generated list, the DataContext will already contain it and so you can SubmitChanges() just fine).  You do have an extra call to the database doing this, but it's pretty minor (frankly, if your scalability is threatened by this, you have other issues), or you could set the StateID directly on your Address object (if that is how your LINQ classes are set up).  You will also need to create a few extra methods in your data layer partial classes to accept a DataContext, but this is again pretty minor.
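As a sketch of the shipping page (again, the entity, control, and property names are illustrative assumptions about a typical LINQ to SQL model):

```csharp
// Illustrative sketch only: pull the process-level DataContext from session
// and re-fetch the State through it, so the new Address is attached to the
// same context that will eventually submit the order.
var checkout = (Checkout)Session["Checkout"];
var context = checkout.Context;

int stateId = int.Parse(StateDropDown.SelectedValue);
State state = context.States.Single(s => s.StateID == stateId);

var address = new Address
{
    Street = StreetTextBox.Text,
    City = CityTextBox.Text,
    State = state
};
context.Addresses.InsertOnSubmit(address);
// No SubmitChanges() here; it is deferred until the order is submitted.
```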

By focusing your DataContext management on a process instead of at the individual business object level, you can eliminate almost all management entirely (since it is almost always unimportant), implementing a process-level DataContext only where it is needed.

posted @ Thursday, February 07, 2008 12:07 AM | Feedback (7)
LINQ to SQL DataContext and ASP.NET

I was about to write up a long post about this when I saw this post out today by Rick Strahl.  I'm using what he calls the 'Business Object' option, and I like it so far.

His post is better, so read up.


Update: I'm actually using what I would call the Business Process option.  See here.

posted @ Tuesday, February 05, 2008 7:44 PM | Feedback (0)
Best Practices or Personal Preference?

Over at CodeBetter, Jeremy Miller has started a new series of posts about software development, and determining what is good:

" What is best?  How do we decide what is best?  Is there a constant set of basic criteria that we can use to judge software techniques through time?  I say yes."

In the comments, I brought up 'Separation of Concerns' and said that I believed much of what is called 'Best Practices' is 'just' (more on this qualifier in a second) personal preference, and I'd like to expound on that here with a couple of my patently bad analogies.

1)  Database Normalization:

If you need to know what it is, you can find a good overview here. Very roughly, the idea is to eliminate the duplication of data within the tables of a database. To use a standard example, if you have a customer table where you record their address, instead of putting in the name of the state in a column in that table, you would have a State table that contained the details, and then use a StateID column in customer that held the key. Also, in any other table that needed State information, you could just use the key, instead of including the name of the state (so you can avoid different spellings, impose standardization when it comes to displaying the data, etc.)

Tied into this is the notion of a normal form. There are six (apparently, I always thought there were five), with normalization increasing as you go (so second normal form is more normalized than first normal form, etc.).

However, as anyone who has done database work knows, normalization in and of itself is not an ‘ultimate good’ that supersedes all else.  Take the customer table. There are going to be columns that indicate a customer’s name. So why not a Name table, instead of recording ‘John’ for both ‘John Smith’ and ‘John Nuechterlein’? And why not a NameType table that indicates if it is a first name, middle name, surname, etc. and then have a joining table to record the name of a customer?

Besides the negative performance impact that occurs when you have to join to too many tables (‘too many’ depends on the RDBMS in question to an extent), this is much harder to maintain, and to use a highly technical term, ‘stupid.’  No one would actually ever consider doing this.

As an intellectual exercise, I once tried to figure out if you could create a database schema where every table had only two columns, a key column, and a data column. Since I quickly got bored, I’m not sure, but if you take normalization to a ridiculous logical extreme, this would be the result (I think).

Which is also why no one uses 5th normal form, and why somewhere between 2nd and 3rd normal form is the norm in most production systems (at least that I’ve worked with, I’m sure there are others with other experiences). You balance the good of normalization with other concerns. Is there an exact, perfect balance? No, it depends on the context (though I’m more willing to say there might be close to an exact, perfect balance within a context, but I digress).

2) Network Security

A common need in any important computer system is security. Business critical data, privacy information, and many other things need to be kept safe and secure (most people, on a surface level, tend to focus on external threats, but really, most data theft is internal…but I digress). There are many different ways to enforce security.

Let’s suppose the system is an ecommerce solution that contains credit card information (it shouldn’t actually, except maybe the last 4 digits, but I digress). How would you go about securing this information? There’s a lot you can do with networking equipment to enforce routing rules, there’s a lot you can do with encryption, and a host of other things.

But let’s start taking it to a ridiculous logical extreme. You want to increase security? Disconnect the database server from the network entirely, pull the cable. But someone might still log into it locally, you say? Disconnect the keyboard and monitor. Someone could still pull the hard drives and connect to a different system? Power it down and put the hard drives in a vault. Someone could crack the safe? Destroy it terminator style by throwing it into molten steel. There you go, near perfect security (science fiction scenarios excluded as an exercise for the reader).

But this is, again, to use a highly technical term, ‘stupid.’ No one would do this, or even consider doing this. You have to balance the need for security against other needs (the system actually working, being highly available, performant, etc.).

3)  Separation of Concerns

Okay, so let’s apply the same notion of a ridiculous logical extreme to SoC. What would that look like? Although no one would ever actually consider doing this, sometimes I wonder if the ‘perfect’ implementation of SoC would end up like the ‘ultimate’ database normalization, a series of classes with one method each.

Now, since no one would ever consider doing any of these things, what’s the point?

Well, to start, saying that “Separation of Concerns is good” (Jeremy has stated that SoC is “the ‘alpha and omega’ of software design”) doesn’t really tell you much by itself. Sure, it is good in the sense that normalization is good.  Which, I think, is really to say that having a system with NO normalization is bad. So, you normalize to the extent you need to until the system is no longer bad. Similarly, I think having a system with NO separation of concerns is bad, and so you separate your concerns to the extent that the system is no longer bad in this regard.

The question then becomes, when do you reach this point? And it is here that I think a certain amount of personal preference comes into play. What are you comfortable with? How much time would be too much time to spend given the deadlines you have?

This isn’t to say that all personal preferences are equal or that they are equivalent to whim. Far from it. You would tend to lean towards the personal preferences of experienced, seasoned software developers. But even then, there is often little agreement.

Since I have a little bit of experience with it, how would an experienced, seasoned software developer implement MVP/MVC? No one needs to spend a lot of time searching blogs to discover that there is a wide and varied set of opinions on the matter. How humble is your view? Do you like events or not? Should the view invoke the presenter/controller or not?

Moreover, should you always use MVP/MVC? I think the answer is an obvious no, but others would probably say it is an obvious yes.

To go a different direction, SoC might be viewed as being anti-YAGNI when taken too far. Do I really need DI? It’s seen as a best practice, but I’m skeptical (as are others). Do I need to use mocks when testing? I don’t know. I want to know that my system in production is going to work, and if I end up with fragile mock tests, maybe I’m worse off than if I just created integration tests (slow as they are), since then I’m testing what I really want to know.

4)  To wrap up

So, are there best practices in software development? Well, I think there are very clearly worst practices, things to be avoided. And I do actually think that there are techniques that can help you to avoid worst practices, but I also think they need to be considered skeptically. There is a common dictum to the effect that the worst system you will ever create is your second one, because you will over-compensate for what you see as the deficiencies of your first system, but I think that it is probably the case that you are constantly doing this.

Which is why even (I would say ‘especially’) experienced, seasoned developers have a tendency to look at their last code base and see all the ways it could be better (this is why it is apparently an empirical fact that you have to be competent at something to know when you are being incompetent…the incompetent people aren’t competent enough to know). But at some point, perfect is the enemy of good.

To pull out something from my philosophical past, Hilary Putnam used an analogy concerning knives. Roughly paraphrasing from memory, it doesn’t make any sense to ask what the “best” knife is. There is no such thing. But, there is an objective answer to the question “Is this knife good enough for this job?” and I would say you could say something similar about ‘best’ practices in software development.

posted @ Tuesday, February 05, 2008 10:56 AM | Feedback (2)
T-SQL: List of information_schema views

From here:

http://haacked.com/archive/2006/07/05/bulletproofsqlchangescriptsusinginformation_schemaviews.aspx

Name - Returns
CHECK_CONSTRAINTS - Every check constraint.
COLUMN_DOMAIN_USAGE - Every column that has a user-defined data type.
COLUMN_PRIVILEGES - Every column with a privilege granted to or by the current user in the current database.
COLUMNS - Every column in the database.
CONSTRAINT_COLUMN_USAGE - Every column that has a constraint defined on it.
CONSTRAINT_TABLE_USAGE - Every table that has a constraint defined on it.
DOMAIN_CONSTRAINTS - Every user-defined data type with a rule bound to it.
DOMAINS - Every user-defined data type.
KEY_COLUMN_USAGE - Every column that is constrained as a key.
PARAMETERS - Every parameter for every user-defined function or stored procedure in the database. For functions, this returns one row with return value information.
REFERENTIAL_CONSTRAINTS - Every foreign key constraint in the system.
ROUTINE_COLUMNS - Every column returned by table-valued functions.
ROUTINES - Every stored procedure and function in the database.
SCHEMATA - Every schema in the database (every database on SQL Server 2000).
TABLE_CONSTRAINTS - Every table constraint.
TABLE_PRIVILEGES - Every table privilege granted to or by the current user.
TABLES - Every table and view in the database.
VIEW_COLUMN_USAGE - Every column used in a view definition.
VIEW_TABLE_USAGE - Every table used in a view definition.
VIEWS - Every view in the database.

When selecting rows from these views, the view name must be prefixed with information_schema, as in SELECT * FROM information_schema.tables.

posted @ Monday, February 04, 2008 11:37 AM | Feedback (1)
Versioning Databases

K. Scott Allen from OdeToCode has a great series on versioning databases.  Definite must-read stuff for SQL development.

http://odetocode.com/Blogs/scott/archive/2008/01/30/11702.aspx

http://odetocode.com/Blogs/scott/archive/2008/01/31/11710.aspx

http://odetocode.com/Blogs/scott/archive/2008/02/02/11721.aspx

http://odetocode.com/Blogs/scott/archive/2008/02/02/11737.aspx

http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx


posted @ Monday, February 04, 2008 11:00 AM | Feedback (0)