Why does this query want over 1.3 TB of memory when it should not need a memory grant at all?
The SQL Server version is 2014 Developer SP1. Trace flags
272, 610, 1118, 1206, 1222, 8048, 9481
are enabled globally. We had to turn on flag 9481 because the new cardinality estimator was hurting many query plans badly. The query contains no sorts, hashes, or parallelism. When I run it, nothing comes back for several minutes.
While it was executing, the sys.dm_exec_query_memory_grants DMV also reported more than 1.3 TB of ideal memory. (WhyGiganticDesiredMemory.queryanalysis)
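For reference, a minimal sketch (not from the original post) of watching the grant from another session while the query runs; ideal_memory_kb is the column behind the "ideal memory" figure:

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       ideal_memory_kb
FROM sys.dm_exec_query_memory_grants;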
I ran a very similar query. In ActualPlan.queryanalysis the desired memory was 124 GB with a 44 GB grant, while in ActualPlanWithLessMemoryGrant.queryanalysis the desired memory was only 10 MB.
The only difference between them is the leftmost index seek below the nested loops, which was removed from the latter (I added an impossible filter, 1 = 0, to the join condition to eliminate the table from the plan).
My query looks like this:
SELECT CAST('C' AS CHAR(1)) AS [Action],
       f.LoadId,
       u.UnvPclId,
       p.* -- [about 480 columns, Daniel Hutmacher's edit]
FROM [tExtract].[ExtractCounty] f
INNER JOIN [tControl].[VersionControl] vc
    ON Id = 1
INNER JOIN [tTax].[Property] pk WITH (NOLOCK, FORCESEEK (UpdateVersion_CntyCd (UpdateVersion)))
    ON pk.UpdateVersion > vc.StartRowVersion
   AND pk.UpdateVersion <= vc.EndRowVersion
   AND pk.CntyCd = f.CntyCd
INNER JOIN [tTax].[Property] p WITH (NOLOCK, FORCESEEK, INDEX = 1)
    ON p.[CntyCd] = pk.[CntyCd]
   AND p.[PclId] = pk.[PclId]
   AND p.[PclSeqNbr] = pk.[PclSeqNbr]
LEFT OUTER JOIN tCommon.UnvPclId u WITH (NOLOCK, FORCESEEK (1 (CntyCd, Edition, PclId, PclSeqNbr)))
    ON u.Edition = f.Edition
   AND u.CntyCd = f.CntyCd
   AND u.PclId = p.PclId
   AND u.PclSeqNbr = p.PclSeqNbr
   AND 1 = 0
WHERE f.SchemaId = 1 /*tTax*/
  AND f.FullExtract = 0
  AND EXISTS (SELECT 1
              FROM [tExtract].[TableNotCompleted] dt
              WHERE dt.TableId = 57 /*Property*/)
WhyGiganticDesiredMemory.queryanalysis
The memory grant is used for prefetching and batch sorting on the nested loops joins. See the Optimized and WithUnorderedPrefetch properties.
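If it helps to find other affected statements, here is a sketch (not from the original answer; adapt the TOP and filters to your environment) that searches the plan cache for nested loops joins carrying the Optimized attribute:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       cp.usecounts,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//NestedLoops[@Optimized="true"]') = 1;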
There is a documented trace flag to turn Optimized off: TF 2340. Otherwise, the optimizer makes the decision based on its cardinality estimates. From SQL Server 2016 SP1 onward, you can also use the query hint DISABLE_OPTIMIZED_NESTED_LOOP. You could tune the query to lower the expected row count so that the optimizer decides not to prefetch or batch sort; such a rewrite might also remove the "need" for so many hints.
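For illustration, a sketch against hypothetical tables (dbo.DimSmall and dbo.FactLarge are placeholders) showing both ways to turn the behavior off for a single statement:

-- SQL Server 2016 SP1 and later: the documented USE HINT form.
SELECT d.name, f.amount
FROM dbo.DimSmall AS d
JOIN dbo.FactLarge AS f ON f.dim_id = d.id
OPTION (USE HINT ('DISABLE_OPTIMIZED_NESTED_LOOP'));

-- Earlier builds: documented trace flag 2340, scoped to the statement.
-- QUERYTRACEON requires sysadmin unless wrapped in a plan guide.
SELECT d.name, f.amount
FROM dbo.DimSmall AS d
JOIN dbo.FactLarge AS f ON f.dim_id = d.id
OPTION (QUERYTRACEON 2340);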
More information:
The underlying problem was addressed in SQL Server 2016 and will be back-ported to SQL Server 2014 at some stage. From the linked article by Pedro Lopes:
…in SQL Server 2016 RC0, we changed the behavior to keep the benefit of the optimization, but now the maximum grant limit is based on the available memory grant space.
My guess is that when you see a badly shaped execution plan against a huge table, your statistics are probably stale. When statistics are out of date, a poor execution plan gets used, which often hurts SQL Server performance badly. By default, the auto update statistics option updates statistics once 20% + 500 records have been modified, so if you have 100 million records, statistics will not be refreshed until 20 million of them have changed, which can take a long time (depending on how often rows are inserted, updated, and deleted before the statistics are refreshed). I recently ran into a similar problem and had to build a maintenance job that reindexes, reorganizes, and updates statistics, sampling a certain percentage of rows.
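As a hedged illustration (the threshold formula is the documented pre-2016 default; the filter is mine), sys.dm_db_stats_properties makes it easy to compare the modification counter against that 20% + 500 threshold:

SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatName,
       sp.last_updated,
       sp.rows,
       sp.modification_counter,
       sp.rows * 0.20 + 500 AS default_auto_update_threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;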
My maintenance job takes care of checking fragmentation and statistics. I use this to check index fragmentation and then iterate over each result:
SELECT t.name AS TableName,
       sch.name AS SchemaName,
       i.name AS IndexName,
       s.avg_fragmentation_in_percent,
       ROW_NUMBER() OVER (PARTITION BY t.name ORDER BY t.name) AS RowNum,
       TR.RowCnt
FROM sys.tables t
INNER JOIN sys.schemas sch ON t.schema_id = sch.schema_id
JOIN sys.indexes i ON t.object_id = i.object_id
JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) s
    ON t.object_id = s.object_id AND i.index_id = s.index_id
LEFT JOIN (SELECT o.object_id, ddps.row_count AS RowCnt
           FROM sys.indexes AS i
           INNER JOIN sys.objects AS o ON i.object_id = o.object_id
           INNER JOIN sys.dm_db_partition_stats AS ddps
               ON i.object_id = ddps.object_id AND i.index_id = ddps.index_id
           WHERE i.index_id < 2 AND o.is_ms_shipped = 0) TR
    ON TR.object_id = t.object_id
WHERE t.type = 'U'
  AND s.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL
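A minimal sketch of that iteration (my own, over a simplified version of the query above; the 30% rebuild cutoff is a common rule of thumb, not from the original answer):

DECLARE @SchemaName sysname, @TableName sysname, @IndexName sysname,
        @Frag float, @Sql nvarchar(max);

DECLARE frag_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT sch.name, t.name, i.name, s.avg_fragmentation_in_percent
    FROM sys.tables t
    INNER JOIN sys.schemas sch ON t.schema_id = sch.schema_id
    INNER JOIN sys.indexes i ON t.object_id = i.object_id
    INNER JOIN sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) s
        ON t.object_id = s.object_id AND i.index_id = s.index_id
    WHERE t.type = 'U'
      AND i.name IS NOT NULL
      AND s.avg_fragmentation_in_percent > 5;

OPEN frag_cur;
FETCH NEXT FROM frag_cur INTO @SchemaName, @TableName, @IndexName, @Frag;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Reorganize between 5% and 30% fragmentation, rebuild above 30%.
    SET @Sql = N'ALTER INDEX ' + QUOTENAME(@IndexName)
             + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
             + CASE WHEN @Frag > 30 THEN N' REBUILD;' ELSE N' REORGANIZE;' END;
    EXEC sys.sp_executesql @Sql;
    FETCH NEXT FROM frag_cur INTO @SchemaName, @TableName, @IndexName, @Frag;
END;
CLOSE frag_cur;
DEALLOCATE frag_cur;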
Then I use this to walk the statistics and update the ones that need it:
SELECT OBJECT_NAME(sp.object_id) AS TableName,
       sch.name AS SchemaName,
       s.name AS StatName,
       CASE
           WHEN pa.rows_in_table BETWEEN 500 AND 500000 AND sp.modification_counter > (pa.rows_in_table * 0.15) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 100
           WHEN pa.rows_in_table BETWEEN 500000 AND 1000000 AND sp.modification_counter > (pa.rows_in_table * 0.10) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 50
           WHEN pa.rows_in_table BETWEEN 1000001 AND 5000000 AND sp.modification_counter > (pa.rows_in_table * 0.05) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 25
           WHEN pa.rows_in_table BETWEEN 5000001 AND 10000000 AND sp.modification_counter > (pa.rows_in_table * 0.025) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 10
           WHEN pa.rows_in_table BETWEEN 10000001 AND 50000000 AND sp.modification_counter > (pa.rows_in_table * 0.02) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 2
           WHEN pa.rows_in_table BETWEEN 50000001 AND 100000000 AND sp.modification_counter > (pa.rows_in_table * 0.01) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 1
           WHEN pa.rows_in_table > 100000000 AND sp.modification_counter > (pa.rows_in_table * 0.005) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 3000000
           ELSE 0
       END AS StatsToUpdate
FROM sys.stats AS s
OUTER APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
INNER JOIN (SELECT ta.object_id, ta.schema_id, SUM(pa.rows) AS rows_in_table
            FROM sys.tables ta
            INNER JOIN sys.partitions pa ON pa.object_id = ta.object_id
            INNER JOIN sys.schemas sc ON ta.schema_id = sc.schema_id
            WHERE ta.is_ms_shipped = 0
              AND pa.index_id IN (1, 0)
            GROUP BY ta.object_id, ta.schema_id) AS pa
    ON pa.object_id = s.object_id
INNER JOIN sys.schemas sch ON pa.schema_id = sch.schema_id
WHERE ISNULL(sp.modification_counter, 0) <> 0
  -- the CASE expression must be repeated here: a column alias cannot be
  -- referenced in the WHERE clause
  AND CASE
          WHEN pa.rows_in_table BETWEEN 500 AND 500000 AND sp.modification_counter > (pa.rows_in_table * 0.15) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 100
          WHEN pa.rows_in_table BETWEEN 500000 AND 1000000 AND sp.modification_counter > (pa.rows_in_table * 0.10) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 50
          WHEN pa.rows_in_table BETWEEN 1000001 AND 5000000 AND sp.modification_counter > (pa.rows_in_table * 0.05) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 25
          WHEN pa.rows_in_table BETWEEN 5000001 AND 10000000 AND sp.modification_counter > (pa.rows_in_table * 0.025) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 10
          WHEN pa.rows_in_table BETWEEN 10000001 AND 50000000 AND sp.modification_counter > (pa.rows_in_table * 0.02) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 2
          WHEN pa.rows_in_table BETWEEN 50000001 AND 100000000 AND sp.modification_counter > (pa.rows_in_table * 0.01) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 1
          WHEN pa.rows_in_table > 100000000 AND sp.modification_counter > (pa.rows_in_table * 0.005) AND ISNULL(sp.last_updated, '1900-01-01') < DATEADD(DAY, -3, GETDATE()) THEN 3000000
          ELSE 0
      END > 0
Note that I chose these numbers as the sampling rates for updating statistics, but they need to be adjusted case by case.
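As a hedged illustration of how the StatsToUpdate value above could drive the actual update (my interpretation: values up to 100 are SAMPLE ... PERCENT, and the 3000000 bucket is a fixed SAMPLE ... ROWS; the table and statistic names are placeholders):

DECLARE @SchemaName sysname = N'dbo',       -- values would come from the query above
        @TableName sysname = N'BigTable',   -- hypothetical table
        @StatName sysname = N'IX_BigTable', -- hypothetical statistic
        @StatsToUpdate bigint = 25,
        @Sql nvarchar(max);

SET @Sql = N'UPDATE STATISTICS '
         + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
         + N' ' + QUOTENAME(@StatName)
         + CASE WHEN @StatsToUpdate > 100
                THEN N' WITH SAMPLE ' + CAST(@StatsToUpdate AS nvarchar(20)) + N' ROWS;'
                ELSE N' WITH SAMPLE ' + CAST(@StatsToUpdate AS nvarchar(20)) + N' PERCENT;'
           END;

EXEC sys.sp_executesql @Sql;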