A real-life table variable performance problem.


I’m sure we’ve all heard and read about the differences between table variables and temporary tables. In case you haven’t, here’s one of many articles you can read on the subject.
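As a quick refresher, here’s a minimal sketch of the two constructs (table and column names are purely illustrative):

```sql
-- Table variable: scoped to the batch; SQL Server keeps no
-- column statistics on it, so the optimizer guesses row counts
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, Amount money);
INSERT INTO @Orders (OrderID, Amount) VALUES (1, 10.00), (2, 25.50);
SELECT OrderID, Amount FROM @Orders;

-- Temporary table: lives in tempdb, gets statistics, and can be
-- indexed after creation – the optimizer can estimate it accurately
CREATE TABLE #Orders (OrderID int PRIMARY KEY, Amount money);
INSERT INTO #Orders (OrderID, Amount) VALUES (1, 10.00), (2, 25.50);
SELECT OrderID, Amount FROM #Orders;
DROP TABLE #Orders;
```

Syntactically they’re near twins, which is exactly why it’s tempting to treat them as interchangeable.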

Anyway, there is a business process in one of my databases that calculates results via table variables. The amount of data contained in these table variables is generally fewer than 10 rows per table. There are, however, quite a few table variables in this process, but again the amount of data in each is pretty small.

Now, one of our business users identified a very subtle edge-case bug in the process. They went to see the developer of the code, and together they fixed the bug. During testing, small amounts of data were run through the table variables – maybe 10 rows – then the amount of data was increased a little – maybe around 100 rows – but still pretty small, and everything performed well.

When the business users were satisfied that the bug was fixed, they decided to reprocess all the data that had potentially fallen victim to it – this was done in test, on a copy of the production database.

The user who was testing this started the process late one evening. Estimating that it would take quite some time, they left it running overnight – there were approximately 370,000 records to reprocess.

The next morning when I got into work, I found myself being asked why the process had not completed. Upon investigation I found that it was, in fact, still running.

This was the first I’d heard of the process, and I was curious.

I looked at the T-SQL code that was running – it created at least 7 table variables, joining some of them to themselves and others to permanent tables.

I then asked how much data was being processed and was told approximately 370,000 rows. This immediately set off alarm bells, as I suspected that amount of data in a table variable was at least contributing to the amazingly slow performance – SQL Server maintains no statistics on table variables, so the optimizer typically estimates a single row and chooses join strategies that collapse at that scale.
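You can see the misestimate for yourself with a sketch along these lines (illustrative names; I didn’t capture the actual plan from the original process):

```sql
-- Load a table variable with a large number of rows
DECLARE @Big TABLE (ID int PRIMARY KEY);

INSERT INTO @Big (ID)
SELECT TOP (370000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;

-- With "Include Actual Execution Plan" enabled, compare the
-- Estimated Number of Rows for @Big (typically 1) against the
-- Actual Number of Rows (370,000). Joins planned around an
-- estimate of 1 row tend to use nested loops, which fall apart
-- badly at this volume.
SELECT COUNT(*) FROM @Big;
```

(Note that SQL Server 2019’s deferred compilation for table variables softens this particular problem, but that wasn’t available here.)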

I took a new copy of the original production database and restored it to another server. I then took the script that was being used, converted all the table variables to temp tables, and altered any references to them elsewhere in the code.
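The conversion itself is largely mechanical – a hedged before/after sketch with made-up names:

```sql
-- Before: table variable
DECLARE @Results TABLE (CustomerID int, Total money);
INSERT INTO @Results (CustomerID, Total)
SELECT CustomerID, SUM(Amount)
FROM dbo.Orders
GROUP BY CustomerID;

-- After: temp table. Every later reference to @Results in the
-- script becomes #Results, and the table is dropped when done.
CREATE TABLE #Results (CustomerID int, Total money);
INSERT INTO #Results (CustomerID, Total)
SELECT CustomerID, SUM(Amount)
FROM dbo.Orders
GROUP BY CustomerID;

-- ...rest of the process, now joining against #Results...
DROP TABLE #Results;
```

The payoff is that #Results carries statistics, so joins against it are planned with realistic row counts.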

I then ran the script; at first it appeared that it was going to take some time.

But then it finished – after 2 minutes 27 seconds. Wow.

I left the original version running just to see how long it would take, but eventually killed it after 28 hours, when it still had not finished – that’s some major difference.

This was perhaps the biggest performance difference I have ever personally witnessed – down from 28+ hours to 2 minutes 27 seconds. To say the business users were delighted is an understatement.

Have a good day,



About Martin Catherall

Martin Catherall is Senior SQL consultant at SQL Down Under, based in Melbourne, Australia. Martin is also a Microsoft Data Platform MVP and Regional Mentor (RM) for the Professional Association for SQL Server. Prior to relocating to Australia he was extremely active in the Christchurch, New Zealand data community - founding the local SQL Server user group and organizing SQL Saturday Christchurch - which later became SQL Saturday South Island. He likes learning interesting stuff about processing and storing data and passing on his knowledge. In his spare time he likes to learn guitar and hang out with his two young sons Callum Glen and Robert Michael.