In previous posts, I talked about my initial impressions of both the EF Profiler and the Linq to SQL Profiler. I'm going to focus here on the latter. From the post title, you can guess correctly that there's some bad here, but there's also some good, so I'm going to start with that.
“This thing will pay for itself in an hour”
This is one of the lines in an email I sent to Ayende, focusing on the promise of the profiler and how I see the clear benefits of it.
Here’s a standard output (I’m using a test project, reasons for which I’ll get to later, if I remember to):
The Data contexts tab on the upper left shows each Data Context that was used, and when you select one, you get all of the queries it executed (again, in my simple test there is only one query, but go with the idea). Where it gets even better is when you click on the Analysis tab and select accordingly:
As you can see, all of your calls are grouped by method (again, pretend there are more queries per method), which is fantastic.
Where I believe the Profiler is worth the cost is probably best explained by contrasting how I might accomplish the same tasks without the tool. Since at this point I’m purely a SQL Server guy, I’ll talk about that case.
If I wanted to set a baseline for the SQL calls that a web app makes, I would naturally use the SQL Server Profiler that ships with it. I would create a trace in that tool, start it profiling, and then hit a page I wanted to baseline. I would then stop the profiler and export the trace to SQL tables (you can do this automatically, but I prefer to export it for no other reason than that's what I'm used to). You can spend some time setting up the trace so that it only logs the calls your app is making, or, as I normally do, just do some filtering and grouping afterwards to get the same statistics that the Linq to SQL Profiler gives you. So, from a pure data perspective, you get the same data.
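To make the "filtering and grouping" step concrete, here's a rough sketch of the kind of aggregation I mean, written in Python over made-up rows. The column names (TextData, Duration, Reads) mirror the standard SQL Server trace columns, but the queries and numbers are invented for illustration; in practice you'd run an equivalent GROUP BY against the exported trace tables.

```python
# Group trace rows by query text and total up duration and reads,
# approximating the per-query statistics the Linq to SQL Profiler shows.
# The data below is fabricated for illustration only.
from collections import defaultdict

trace_rows = [
    {"TextData": "SELECT * FROM Orders WHERE CustomerId = @p0", "Duration": 120, "Reads": 40},
    {"TextData": "SELECT * FROM Orders WHERE CustomerId = @p0", "Duration": 95,  "Reads": 38},
    {"TextData": "SELECT * FROM Customers WHERE Id = @p0",      "Duration": 30,  "Reads": 6},
]

stats = defaultdict(lambda: {"count": 0, "total_duration": 0, "total_reads": 0})
for row in trace_rows:
    s = stats[row["TextData"]]
    s["count"] += 1
    s["total_duration"] += row["Duration"]
    s["total_reads"] += row["Reads"]

# Most expensive queries first.
for query, s in sorted(stats.items(), key=lambda kv: kv[1]["total_duration"], reverse=True):
    print(f"{s['count']:>3}x  {s['total_duration']:>5}  {s['total_reads']:>4} reads  {query}")
```

The key difference, as described below, is that a raw trace groups only by query text; it has no idea which of your methods issued each query.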
What is really efficient about the promise of the Profiler is that you get all of this very quickly and very easily, and, more importantly, your SQL calls are tied back to the methods that generate them. This makes it very easy to see which methods are the most inefficient. You can tie the individual queries back to your SQL instance if you care to (though I haven't done that yet), but having this centralized view of your application and how it interacts with your data store is incredibly useful. It would take very few iterations of seeing the statistics and diving down into the methods to more than cover the time it would take to use SQL Server Profiler alone (which won't tie the queries back to your methods anyway).
So all of that is good. And where it really rocks for me is if you imagine the following workflow:
1) Identify a page you want to optimize.
2) Run the profiler on that page and get all of the statistics in all their glory.
3) Identify your most expensive/problematic areas, and optimize your code.
4) Rerun the profiler, see how it has improved. Rinse and repeat.
Again, you could do most of this with SQL Server Profiler, but UberProf makes it really easy to do. In theory.
For the real app that I am using UberProf with, it’s a perfect test of its capabilities. The app was built (with some code probably two years old) without any eye to optimization at all and so it is a performance dog. Within an hour, my goal was/is to show how easy it is to improve an application’s performance (and not because I told Ayende I was going to do it, just because I think the promise of the tool is that compelling).
Now, let’s talk about the not quite so good part.
“If I can ever get it to work for more than a minute at a time”
At some point during the Skype session with Ayende where he was trying to troubleshoot the problem I was having, I said something like "Dude, I told you I wasn't making this up."
Here’s a simulation of the problem:
Notice how at the top of the app window, it shows the L2SProfTestSite as still being profiled. Note that the statistics have not cleared in the bottom left and that no data is showing in the Data Contexts tab.
To the consternation of Ayende and myself, we can't determine why the Profiler stops profiling. Doing an iisreset should do the trick. Recycling the Application Pool (since this is a web app) should do the trick. But neither does.
Note that I mentioned that the above pic is a simulation of the problem. To the double consternation of Ayende and myself, we can’t recreate the issue outside of the particular web app I bought UberProf to fix.
After some back and forth on the Google support group, we set up a Skype conversation and tried to troubleshoot the problem. I shared my desktop and recreated the issue immediately, to which Ayende responded (I'm paraphrasing of course since we didn't record it) "It should not be doing that." Over the course of a little more than an hour, he changed internal L2SProf code and sent me various debug builds, and try as we might, we could not come up with any explainable reason for why it behaved the way that it did. We looked at how I was creating Data Contexts, how the App Pools were defined, and other things. Ayende shared his desktop and showed that he couldn't recreate the issue. And in my test app, I can't recreate it either, but there's nothing in the web app in question that explains the behavior (which I've recreated on three separate machines to eliminate machine-specific issues).
The only way I can get L2SProf to benchmark separate pages is to get stats for the first page, then add a space somewhere in the solution and rebuild and recycle. At that point, I can get stats for the second page I’m benchmarking. Rinse and repeat for the third page.
Because of the time difference, after an hour or so, we concluded the Skype session, and he asked me to try and recreate the issue in a test app, which I’ve been unable to do.
I have not yet done a side-by-side comparison of the statistics that Linq to SQL Profiler generates versus SQL Server Profiler, but that's actually the least of the things I worry about. It meets the eyeball test in the quick comparisons I've run; I'll be more exacting in my tests, but I'm fairly confident about that part (which in the end is the important part).
What's maddening is the high friction in what I would consider to be a normal use case. I have the workaround, but I shouldn't have to add a space to the code base and rebuild just to profile a second page. Ayende agreed, but couldn't reproduce the behavior on his end. The fact that I can reproduce it, at will, on three separate machines tells me there is something basic being missed.
That Ayende was willing to Skype in and troubleshoot is a good thing (try doing that without a support contract with other software vendors), but that he couldn't fix the issue isn't so good. As a paying customer, I want the thing to work in what I believe we both agreed was a 'not crazy' use case. Combined with the constant errors (mentioned in previous posts, I think) that are generated when upgrading to a new build, this gives off the impression that the tool is amateurish, which I don't think it is.
So, it’s frustrating all around.
Since I have the ugly workaround, I will now be able to test the promise of the tool on the application I bought it for. That will be the topic of my next post.