SSIS was born for parallel processing, while SQL Server wasn't; well, not from the user's perspective. Yes, SQL Server breaks a user's query into parallel processes, but there is no way I can 'fork' within a stored procedure or make an asynchronous call. I originally designed a process, which I outline in "Parallel Processing in SQL", that works beautifully if the work to be done is in fact broken into equal parts.
As an example, if you have 100 units of work and decide to divide it up between 5 threads, each thread would have 20 units of work. If each unit takes 1 minute, then the entire process will take 20 minutes. Now, let's say that two of the 20 units assigned to thread 4 (say tasks #15 and #17) are different and take 10 minutes each. Rather than finishing in 20 minutes, the process will finish in 38 minutes: when the other threads finish at the 20-minute mark, thread 4 will still be working for an extra 18 minutes (+9 minutes for task #15 and +9 minutes for task #17). This happens because with the modulo strategy each thread is assigned a workload that cannot be transferred. If a thread happens to get stuck behind one or more long-running pieces of work, the entire process suffers. Having seen this in some of my load processes, I came up with a new strategy that divides the work evenly, allowing the other threads to pick up work if one or more lag behind.
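For concreteness, the modulo strategy can be sketched as the query below; the table and column names (work_queue, unit_id) and the parameter names are illustrative, not from the original process:

```sql
-- Static (modulo) assignment: each thread owns a fixed slice of the pool.
-- @thread_id runs from 0 to @thread_count - 1; names here are assumptions.
declare @thread_count int = 5, @thread_id int = 4;

select unit_id
from work_queue
where unit_id % @thread_count = @thread_id;
-- Thread 4 keeps its slow units; no other thread can take them over.
```

The flaw described above lives in that WHERE clause: the slice is fixed at assignment time, so a slow unit blocks only the thread that owns it, yet the whole run waits for that thread.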
The new strategy creates a pool of work. Let's go back to the 100 units of work. I create a table that contains 100 rows, each with a number of columns representing the variables for the work I need to do. In the example I give, I am extracting a statement_account and a rowcount. If you've ever programmed in a multi-threaded language you will be familiar with a semaphore or latch, a nice mechanism to control access to a specific section of code. For us, it will protect that table of work, which for lack of a better term I'll call work_queue.
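A minimal version of the work_queue table might look like the following; the column names echo the statement_account and rowcount variables mentioned above, but the exact names and types are assumptions on my part:

```sql
-- One row per unit of work. Deliberately left as a heap (no clustered
-- index or primary key), so a DELETE TOP (1) can grab any available row.
create table work_queue
(
    statement_acct char(8) not null,  -- the account a thread will process
    row_cnt        int     not null   -- expected row count for that unit
);
```

Populating it is just an INSERT of the 100 units before the threads start; each row sits in the pool until some thread claims and deletes it.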
There is no way within SQL Server to prevent a stored procedure from being called by two or more connections, but I can control access to a table by way of locks. I devised a rather efficient way, within a procedure, of taking one record off the work queue for processing. In effect I have used the work_queue table as a semaphore to make sure each thread gets a single unit of work, without holding up the others for an extended period and without duplicating work. Here is the procedure:
create procedure usp_workunit
(@rcount int output,
@statement char(8) output)
as
begin
    set nocount on;
    declare @tblvar as table (stmt char(8));
    delete top (1) work_queue
    output deleted.statement_acct into @tblvar;
    select @rcount = @@rowcount;
    select @statement = stmt from @tblvar;
end
The important piece is the output clause within the delete. As you see, I can remove the top record from the work_queue table and extract its value without needing to explicitly lock the table (with a begin tran), select one record, delete that record, and then release the lock; the single delete statement does it all atomically. Also note that work_queue is simply a heap. The statement and rowcount are exported to SSIS. I use the rowcount variable to determine when there is no more work to extract and end that thread's execution loop. Since there is no pre-assigned work, each thread simply retrieves the next available record; there is no backup, and only the thread that picked the short straw (a longer work unit) will be slowed. I will show the structure I use within SSIS that couples with the work unit in my next post.
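For illustration, a single thread's execution loop could call the procedure like this; the loop shape is a sketch of the pattern described above, not the actual SSIS package, and the `print` stands in for the real per-unit work:

```sql
-- Sketch of one worker's loop: keep pulling units until the pool is empty.
declare @rcount int = 1,
        @statement char(8);

while @rcount > 0
begin
    exec usp_workunit @rcount output, @statement output;

    if @rcount > 0
    begin
        -- Process @statement here; this placeholder just echoes it.
        print @statement;
    end
end
```

Because every thread runs this same loop against the shared work_queue, a thread that draws a 10-minute unit simply claims fewer rows overall, while the idle threads drain the rest of the pool.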