Oct 01, 2015 | by: james | Tags: Tuning | Categories: Tech Tips

Don’t Tune the Wrong Thing

One of my favorite questions to ask interview candidates (especially if they bring it up first) is about performance tuning.  In fact, a recent candidate and I got into some questions about computational complexity, big O notation, and space/time tradeoffs, which is definitely not the norm.

Instead, what I commonly hear from many (I would daresay most) people when they talk about performance tuning is that their process looks something like this: look at a piece of code, essentially surmise, divine, or use the force to determine where it is slow, and then rewrite it using that same intuition (usually this is the same person who wrote the poorly performing code in the first place).  No tools, no measurements, just guesswork . . . maybe they’ve seen some patterns that tell them to avoid deep recursion or CSQs, or maybe someone has mentioned the *DREADED* “we should put NOLOCK hints everywhere” approach.  But really nothing that says: here’s how to find the cause of the problem, and here’s how to remediate that specific problem.

To me, the biggest thing with tuning is to start with data.  How long does a given method, statement, etc. actually take to run?  In SQL, I use SQL Profiler a lot; I keep it running almost all the time if I can.  Anything that runs more than 1000 ms, or reads a disproportionate number of rows relative to the rows being returned, gets a second look.  There are similar tools on the app side.  I’ve used a trace tool called dynaTrace before, which hooks into the .NET Profiler API, but I understand Visual Studio has some nice profiling tools as well.  Starting with that data, I can see exactly where something is running slowly.  Hopefully it’s not a whole bunch of 10 ms calls, but one guilty call that dominates the time in the call stack.  From there, it’s relatively easy to drill into that one call and focus attention there.
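Even before reaching for a full profiler, something as simple as a Stopwatch around a suspect call gives you real numbers instead of intuition.  A minimal sketch (the method name here is made up for illustration):

var sw = System.Diagnostics.Stopwatch.StartNew();
var results = LoadAndDeduplicatePatients(); // hypothetical suspect call
sw.Stop();
Console.WriteLine("LoadAndDeduplicatePatients: {0} ms", sw.ElapsedMilliseconds);

It’s crude compared to a profiler’s call tree, but it’s enough to tell you whether the time is actually going where you think it is.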

Well, it happened to me today — my software dev force failed me.  I was convinced (yes, the force is strong with this one) that a LINQ statement I had written, which grouped a list of objects by some property, was going to be terribly slow (it was a pretty big list of 100k+ patients).  It’s unordered, I reasoned.  It’s unindexed, I said.  It *MUST* be slow.  So, I quickly rewrote it.  That data is backed by the DB anyway; I’ll put an index on the medical record number column, and that query will be sub-second.  And I implemented it.  The query ran sub-second, just as I knew it would.  But my application was still pokey.  Why hadn’t this solved the problem?
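For context, the statement in question was roughly this shape (the Patient type and MedicalRecordNumber property are hypothetical stand-ins for the real names):

// Hypothetical shape of the in-memory grouping; a LINQ-to-Objects GroupBy
// is a single hash-based pass over the list, about 100k items here.
List<Patient> patients = LoadPatients();
var byMrn = patients
    .GroupBy(p => p.MedicalRecordNumber)
    .ToList();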

It probably did to some degree . . . I probably reduced a 1 second operation to a 100 ms operation.  Yay!  That’s a huge performance improvement!  Well, too bad that wasn’t the minute+ long operation I was trying to improve.

Turns out the problem wasn’t there at all; it was in a nested loop that was pruning out those duplicates, iterating over a 100k item list 10k times.  In fact, truth be told, it was doing that over two 100k item lists.  It looked pretty benign:

foreach (var item in itemsToRemove)
{
    aBigList.RemoveAll(x => x.Id == item.Id);
    anotherBigList.RemoveAll(x => x.Id == item.Id);
}

But I should have been looking at data to tell me where the bottleneck was.  Once I did that, it was clear that just that part of the code was dominating the time spent in the call.  And if you look at it and deconstruct it, it becomes pretty obvious.  Each call to RemoveAll scans a 100k item list from start to end, and the loop does that against two such lists 10,000 times.  That’s ~2,000,000,000 iterations, with a comparison done at each one, and some list management thrown in, say, every tenth item.  Even if it’s not heavy lifting, that’s a whole lot of light lifting.

After seeing this, I replaced that code by first throwing those lists into dictionaries keyed by Id and then handling the removal logic against the dictionaries.  That got it down from ~2,000,000,000 iterations to ~10k lookups.  And constant time lookup to boot.  Run time went from 150+ seconds to 0.05 s.  I’ll take that!
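Roughly, the shape of that fix looks like this (a sketch, assuming Id is unique within each list; the real code differs in the details):

var bigById = aBigList.ToDictionary(x => x.Id);
var otherById = anotherBigList.ToDictionary(x => x.Id);

foreach (var item in itemsToRemove)
{
    // O(1) removal by key instead of a full scan of each list
    bigById.Remove(item.Id);
    otherById.Remove(item.Id);
}

aBigList = bigById.Values.ToList();
anotherBigList = otherById.Values.ToList();

A HashSet of the ids to remove and a single RemoveAll pass per list would get you to roughly the same place; the point is to stop paying for a full list scan on every single removal.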