INNER JOIN vs. CROSS APPLY

Reference: http://explainextended.com/2009/07/16/inner-join-vs-cross-apply/

INNER JOIN is the most commonly used construct in SQL: it joins two tables together, selecting only those row combinations for which the JOIN condition is true.

This query:

SELECT  *
FROM    table1
JOIN    table2
ON      table2.b = table1.a

reads:

For each row from table1, select all rows from table2 where the value of field b is equal to that of field a

Note that this condition can be rewritten as this:

SELECT  *
FROM    table1, table2
WHERE   table2.b = table1.a


in which case it reads as follows:

Make a set of all possible combinations of rows from table1 and table2 and of this set select only those rows where the value of field b is equal to that of field a

These two queries are worded differently, but they yield the same result, and database engines are aware of that: usually both are optimized to use the same execution plan.
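If you want to check this yourself, SQL Server can display the plans without executing the statements. A minimal sketch using standard T-SQL options, assuming the abstract table1 and table2 above exist (this step is not shown in the original article); on a typical schema both statements produce the same plan:

-- SET SHOWPLAN_TEXT must be the only statement in its batch,
-- hence the GO separators
SET SHOWPLAN_TEXT ON
GO
SELECT  *
FROM    table1
JOIN    table2
ON      table2.b = table1.a

SELECT  *
FROM    table1, table2
WHERE   table2.b = table1.a
GO
SET SHOWPLAN_TEXT OFF
GO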

The former syntax is called ANSI JOIN syntax; it is generally considered more readable and is the recommended form.

However, it did not make it into Oracle until fairly recently, which is why many hardcore Oracle developers are simply used to the latter syntax.

Actually, it's a matter of taste.

To use JOINs (with whatever syntax), both sets you are joining must be self-sufficient, i.e. the sets should not depend on each other: you can query either set without knowing anything about the contents of the other.

But for some tasks the sets are not self-sufficient. For instance, let's consider the following task:

We have table1 and table2. table1 has a column called rowcount.

For each row from table1, we need to select the first rowcount rows from table2, ordered by table2.id.

We cannot formulate a join condition here. The join condition, should it exist, would involve the row number, which is not present in table2, and there is no way to calculate a row number from the values of the columns of any given row in table2 alone.

That's where CROSS APPLY can be used.

CROSS APPLY is a Microsoft extension to SQL which was originally intended to be used with table-valued functions (TVFs).
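For illustration, here is a minimal sketch of that TVF-oriented usage, assuming the abstract table1 and table2 from the example above exist; the function name fn_top_rows is hypothetical:

CREATE FUNCTION dbo.fn_top_rows (@cnt INT)
RETURNS TABLE
AS
RETURN
        (
        SELECT  TOP (@cnt) *
        FROM    table2
        ORDER BY
                id
        )
GO

-- CROSS APPLY evaluates the TVF once per row of table1, passing that row's
-- value as the argument ([rowcount] is bracketed here because ROWCOUNT is a
-- reserved word in T-SQL)
SELECT  *
FROM    table1
CROSS APPLY dbo.fn_top_rows(table1.[rowcount]) t2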

Applied directly to our task, without a TVF, the query looks like this:

SELECT  *
FROM    table1
CROSS APPLY
        (
        SELECT  TOP (table1.rowcount) *
        FROM    table2
        ORDER BY
                id
        ) t2


For each row from table1, select the first table1.rowcount rows from table2, ordered by id.

The sets here are not self-sufficient: the query uses values from table1 to define the second set, not to JOIN with it.

The exact contents of t2 are not known until the corresponding row from table1 is selected.
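You can see the lack of self-sufficiency by trying to run the inner query on its own (same abstract tables as above):

-- Run standalone, this is not a valid, self-contained query: table1 is not
-- in scope here, so there is nothing to supply a value for rowcount
SELECT  TOP (table1.rowcount) *
FROM    table2
ORDER BY
        id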

I previously said that there is no way to join these two sets, which is true as long as we consider the sets as is. However, we can change the second set a little so that we get an additional computed field we can later join on.

The first option is to simply count all the preceding rows in a subquery:

SELECT  *
FROM    table1 t1
JOIN    (
        SELECT  t2o.*,
                (
                SELECT  COUNT(*)
                FROM    table2 t2i
                WHERE   t2i.id <= t2o.id
                ) AS rn
        FROM    table2 t2o
        ) t2
ON      t2.rn <= t1.rowcount


The second option is to use a window function, also available in SQL Server since version 2005:

SELECT  *
FROM    table1 t1
JOIN    (
        SELECT  t2o.*, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM    table2 t2o
        ) t2
ON      t2.rn <= t1.rowcount


This function returns the ordinal number a row would have if the ORDER BY condition used in the function were applied to the whole query.

This is essentially the same result as the subquery used in the previous query.
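As a quick illustration of what the derived table t2 contains (assuming table2 holds sequential ids, as in the test data created below):

SELECT  id, ROW_NUMBER() OVER (ORDER BY id) AS rn
FROM    table2
-- id   rn
-- 1    1
-- 2    2
-- 3    3
-- ...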

Now, let's create the sample tables and check all these solutions for efficiency:

SET NOCOUNT ON
GO
DROP TABLE [20090716_cross].table1
DROP TABLE [20090716_cross].table2
DROP SCHEMA [20090716_cross]
GO
CREATE SCHEMA [20090716_cross]
CREATE TABLE table1
        (
        id INT NOT NULL PRIMARY KEY,
        row_count INT NOT NULL
        )
CREATE TABLE table2
        (
        id INT NOT NULL PRIMARY KEY,
        value VARCHAR(20) NOT NULL
        )
GO
BEGIN TRANSACTION
DECLARE @cnt INT
SET @cnt = 1
WHILE @cnt <= 100000
BEGIN
        INSERT
        INTO    [20090716_cross].table2 (id, value)
        VALUES  (@cnt, 'Value ' + CAST(@cnt AS VARCHAR))
        SET @cnt = @cnt + 1
END
INSERT
INTO    [20090716_cross].table1 (id, row_count)
SELECT  TOP 5
        id, id % 2 + 1
FROM    [20090716_cross].table2
ORDER BY
        id
COMMIT
GO
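A couple of optional sanity checks on the generated data, plus the standard SET options that produce the I/O and timing figures quoted below (these are not part of the original script, just plain T-SQL):

SELECT  COUNT(*) AS row_cnt          -- expected: 100000
FROM    [20090716_cross].table2

SELECT  *                            -- the five rows listed below
FROM    [20090716_cross].table1
ORDER BY
        id

SET STATISTICS IO ON
SET STATISTICS TIME ON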

table2 contains 100,000 rows with sequential ids.

table1 contains the following:

id  row_count
1   2
2   1
3   2
4   1
5   2

Now let's run the first query (with COUNT):

SELECT  *
FROM    [20090716_cross].table1 t1
JOIN    (
        SELECT  t2o.*,
                (
                SELECT  COUNT(*)
                FROM    [20090716_cross].table2 t2i
                WHERE   t2i.id <= t2o.id
                ) AS rn
        FROM    [20090716_cross].table2 t2o
        ) t2
ON      t2.rn <= t1.row_count
ORDER BY
        t1.id, t2.id

id  row_count  id  value    rn
1   2          1   Value 1  1
1   2          2   Value 2  2
2   1          1   Value 1  1
3   2          1   Value 1  1
3   2          2   Value 2  2
4   1          1   Value 1  1
5   2          1   Value 1  1
5   2          2   Value 2  2
8 rows fetched in 0.0000s (498.4063s)
Table 'table1'. Scan count 2, logical reads 200002, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'Worktable'. Scan count 100000, logical reads 8389920, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'table2'. Scan count 4, logical reads 1077, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
   CPU time = 947655 ms,  elapsed time = 498385 ms. 

As was to be expected, this query is far from optimal: it runs for almost 500 seconds.

Here's the query plan:

SELECT
  Sort
    Compute Scalar
      Parallelism (Gather Streams)
        Inner Join (Nested Loops)
          Inner Join (Nested Loops)
            Clustered Index Scan ([20090716_cross].[table2])
            Compute Scalar
              Stream Aggregate
                Eager Spool
                  Clustered Index Scan ([20090716_cross].[table2])
          Clustered Index Scan ([20090716_cross].[table1])

For each row selected from table2, it counts all the preceding rows again and again, never recording the intermediate result. The complexity of such an algorithm is O(n^2), which is why it takes so long.
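A rough back-of-the-envelope estimate of what that means here (my arithmetic, assuming one pass over the preceding rows per outer row):

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \approx 5 \times 10^{9} \quad \text{row comparisons for } n = 100{,}000$$

which is consistent with the hundreds of seconds of CPU time reported above.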

Let's run the second query, which uses ROW_NUMBER():

SELECT  *
FROM    [20090716_cross].table1 t1
JOIN    (
        SELECT  t2o.*, ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM    [20090716_cross].table2 t2o
        ) t2
ON      t2.rn <= t1.row_count
ORDER BY
        t1.id, t2.id


id  row_count  id  value    rn
1   2          1   Value 1  1
1   2          2   Value 2  2
2   1          1   Value 1  1
3   2          1   Value 1  1
3   2          2   Value 2  2
4   1          1   Value 1  1
5   2          1   Value 1  1
5   2          2   Value 2  2
8 rows fetched in 0.0006s (0.5781s)
Table 'Worktable'. Scan count 1, logical reads 214093, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'table2'. Scan count 1, logical reads 522, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'table1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
   CPU time = 578 ms,  elapsed time = 579 ms. 

This is much faster: only about 0.58 seconds.

Let's look into the query plan:

SELECT
  Inner Join (Nested Loops)
    Clustered Index Scan ([20090716_cross].[table1])
  Lazy Spool
    Sequence Project (Compute Scalar)
      Compute Scalar
        Segment
          Clustered Index Scan ([20090716_cross].[table2])

This is much better, since this query plan keeps the intermediate results while calculating the ROW_NUMBER.

However, it still calculates ROW_NUMBERs for all 100,000 rows in table2, then puts them into a temporary index over rn created by the Lazy Spool, and uses this index in a nested loop to range over the rns for each row from table1.

Calculating and indexing all the ROW_NUMBERs is quite expensive, which is why we see 214,093 logical reads in the query statistics.

Finally, let's try a CROSS APPLY:

SELECT  *
FROM    [20090716_cross].table1 t1
CROSS APPLY
        (
        SELECT  TOP (t1.row_count) *
        FROM    [20090716_cross].table2
        ORDER BY
                id
        ) t2
ORDER BY
        t1.id, t2.id


id  row_count  id  value
1   2          1   Value 1
1   2          2   Value 2
2   1          1   Value 1
3   2          1   Value 1
3   2          2   Value 2
4   1          1   Value 1
5   2          1   Value 1
5   2          2   Value 2
8 rows fetched in 0.0004s (0.0008s)
Table 'table2'. Scan count 5, logical reads 10, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 
Table 'table1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. 

SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 1 ms. 

This query is instant, as it should be.

The plan is quite simple:

SELECT
  Inner Join (Nested Loops)
    Clustered Index Scan ([20090716_cross].[table1])
    Top
      Clustered Index Scan ([20090716_cross].[table2])

For each row from table1, it just takes the first row_count rows from table2. So simple and so fast.

Summary:

While most queries which employ CROSS APPLY can be rewritten using an INNER JOIN, CROSS APPLY can yield a better execution plan and better performance, since it can limit the set being joined before the join occurs.
