PostgreSQL

PostgreSQL recursive query runs slowly under low load with almost no data

  • April 7, 2017

I am building a simple ACL system, with Postgres 9.6 as the data store. Permissions are assigned to groups, which can have child groups, and people are assigned to groups.

I have a poorly performing query that retrieves all permissions relevant to a user, and I need your help optimizing/rewriting it.

The simplified (but relevant) data model involved in the slow query is as follows:

CREATE TABLE IF NOT EXISTS "acl_group" (id BIGSERIAL PRIMARY KEY, parent_id BIGINT, name TEXT NOT NULL);
CREATE TABLE IF NOT EXISTS "acl_group_membership" (group_id BIGINT NOT NULL, subject_id TEXT NOT NULL);
CREATE TABLE IF NOT EXISTS "acl" (id BIGSERIAL PRIMARY KEY, group_id BIGINT NOT NULL, service TEXT NOT NULL, action TEXT NOT NULL, resources TEXT NOT NULL);

There are foreign keys and unique constraints, plus two additional indexes:

CREATE INDEX ON "acl_group_membership" (subject_id);
CREATE INDEX ON "acl" (group_id);

The query:

WITH RECURSIVE group_tree AS (
SELECT acl_group.id, acl_group.parent_id, 1 AS level
FROM acl_group
INNER JOIN acl_group_membership AS agm ON agm.group_id = acl_group.id
WHERE agm.subject_id = $1
UNION ALL
SELECT c.id, c.parent_id, p.level + 1
FROM acl_group c
INNER JOIN group_tree p on c.id = p.parent_id
)
SELECT acl.*
FROM acl
INNER JOIN group_tree on group_tree.id = acl.group_id
ORDER BY group_tree.level asc, acl.id asc;

During my very gentle stress test (50 req/s on average), with Postgres running in a Linux docker container on localhost, only this query hitting the database (so no writes at all), and very little data in it (fewer than 100 rows across all tables), I get the following results:

  1. Roughly 10% of requests are slow, taking about 200 ms or more
  2. More clock in at around 60 ms
  3. Only about 50% clock in at <10 ms

Query plan of a fast execution (https://explain.depesz.com/s/M5tO):

Sort  (cost=355.67..356.08 rows=161 width=50) (actual time=0.988..1.065 rows=26 loops=1)
 Sort Key: group_tree.level, acl.id
 Sort Method: quicksort  Memory: 27kB
 CTE group_tree
   ->  Recursive Union  (cost=13.74..324.76 rows=606 width=20) (actual time=0.110..0.412 rows=3 loops=1)
         ->  Hash Join  (cost=13.74..38.51 rows=6 width=20) (actual time=0.103..0.138 rows=1 loops=1)
               Hash Cond: (acl_group.id = agm.group_id)
               ->  Seq Scan on acl_group  (cost=0.00..20.70 rows=1070 width=16) (actual time=0.007..0.040 rows=11 loops=1)
               ->  Hash  (cost=13.67..13.67 rows=6 width=8) (actual time=0.043..0.043 rows=1 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Bitmap Heap Scan on acl_group_membership agm  (cost=4.20..13.67 rows=6 width=8) (actual time=0.021..0.024 rows=1 loops=1)
                           Recheck Cond: (subject_id = 'team_df'::text)
                           Heap Blocks: exact=1
                           ->  Bitmap Index Scan on acl_group_membership_subject_id_idx  (cost=0.00..4.20 rows=6 width=0) (actual time=0.011..0.011 rows=1 loops=1)
                                 Index Cond: (subject_id = 'team_df'::text)
         ->  Hash Join  (cost=1.95..27.41 rows=60 width=20) (actual time=0.036..0.078 rows=1 loops=3)
               Hash Cond: (c.id = p.parent_id)
               ->  Seq Scan on acl_group c  (cost=0.00..20.70 rows=1070 width=16) (actual time=0.005..0.038 rows=11 loops=2)
               ->  Hash  (cost=1.20..1.20 rows=60 width=12) (actual time=0.017..0.017 rows=1 loops=3)
                     Buckets: 1024  Batches: 1  Memory Usage: 8kB
                     ->  WorkTable Scan on group_tree p  (cost=0.00..1.20 rows=60 width=12) (actual time=0.003..0.006 rows=1 loops=3)
 ->  Hash Join  (cost=2.19..25.01 rows=161 width=50) (actual time=0.469..0.881 rows=26 loops=1)
       Hash Cond: (group_tree.id = acl.group_id)
       ->  CTE Scan on group_tree  (cost=0.00..12.12 rows=606 width=12) (actual time=0.118..0.440 rows=3 loops=1)
       ->  Hash  (cost=1.53..1.53 rows=53 width=46) (actual time=0.333..0.333 rows=53 loops=1)
             Buckets: 1024  Batches: 1  Memory Usage: 13kB
             ->  Seq Scan on acl  (cost=0.00..1.53 rows=53 width=46) (actual time=0.004..0.158 rows=53 loops=1)
Planning time: 0.454 ms
Execution time: 1.216 ms

Query plan of a slow execution (182 ms, https://explain.depesz.com/s/pZ4e):

Sort  (cost=355.67..356.08 rows=161 width=50) (actual time=92.671..182.171 rows=26 loops=1)
 Sort Key: group_tree.level, acl.id
 Sort Method: quicksort  Memory: 27kB
 CTE group_tree
   ->  Recursive Union  (cost=13.74..324.76 rows=606 width=20) (actual time=0.225..0.491 rows=3 loops=1)
         ->  Hash Join  (cost=13.74..38.51 rows=6 width=20) (actual time=0.216..0.252 rows=1 loops=1)
               Hash Cond: (acl_group.id = agm.group_id)
               ->  Seq Scan on acl_group  (cost=0.00..20.70 rows=1070 width=16) (actual time=0.030..0.092 rows=11 loops=1)
               ->  Hash  (cost=13.67..13.67 rows=6 width=8) (actual time=0.094..0.094 rows=1 loops=1)
                     Buckets: 1024  Batches: 1  Memory Usage: 9kB
                     ->  Bitmap Heap Scan on acl_group_membership agm  (cost=4.20..13.67 rows=6 width=8) (actual time=0.061..0.066 rows=1 loops=1)
                           Recheck Cond: (subject_id = 'team_df'::text)
                           Heap Blocks: exact=1
                           ->  Bitmap Index Scan on acl_group_membership_subject_id_idx  (cost=0.00..4.20 rows=6 width=0) (actual time=0.016..0.016 rows=1 loops=1)
                                 Index Cond: (subject_id = 'team_df'::text)
         ->  Hash Join  (cost=1.95..27.41 rows=60 width=20) (actual time=0.031..0.067 rows=1 loops=3)
               Hash Cond: (c.id = p.parent_id)
               ->  Seq Scan on acl_group c  (cost=0.00..20.70 rows=1070 width=16) (actual time=0.005..0.032 rows=11 loops=2)
               ->  Hash  (cost=1.20..1.20 rows=60 width=12) (actual time=0.014..0.014 rows=1 loops=3)
                     Buckets: 1024  Batches: 1  Memory Usage: 8kB
                     ->  WorkTable Scan on group_tree p  (cost=0.00..1.20 rows=60 width=12) (actual time=0.003..0.006 rows=1 loops=3)
 ->  Hash Join  (cost=2.19..25.01 rows=161 width=50) (actual time=92.140..92.506 rows=26 loops=1)
       Hash Cond: (group_tree.id = acl.group_id)
       ->  CTE Scan on group_tree  (cost=0.00..12.12 rows=606 width=12) (actual time=0.326..0.610 rows=3 loops=1)
       ->  Hash  (cost=1.53..1.53 rows=53 width=46) (actual time=91.781..91.781 rows=53 loops=1)
             Buckets: 1024  Batches: 1  Memory Usage: 13kB
             ->  Seq Scan on acl  (cost=0.00..1.53 rows=53 width=46) (actual time=0.012..10.226 rows=53 loops=1)

I am new to the Postgres world, but I don't think such a simple query should run this slowly with almost no data and nothing but reads, so I suspect there is some macro-level mistake I am not seeing.

Do you see a way to optimize/rewrite this query?

This is Linux docker running on a physical MacBook Pro, but we got similar results on an Amazon RDS instance.

The main problem is probably something else happening behind the scenes on your server, as dezso commented. A couple of observations, though:

Your query looks fine to me. This one is somewhat simpler and may run faster. Not sure, test it:

WITH RECURSIVE group_tree AS (
  SELECT group_id, 1 AS level
  FROM   acl_group_membership
  WHERE  subject_id = $1

  UNION  ALL
  SELECT ag.parent_id, gt.level + 1
  FROM   group_tree gt
  JOIN   acl_group  ag ON ag.id = gt.group_id
  WHERE  ag.parent_id IS NOT NULL
)
SELECT acl.*
FROM   group_tree gt
JOIN   acl USING (group_id)
ORDER  BY gt.level, acl.id;
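As a quick sanity check of the rewritten CTE's logic (not of its Postgres performance), here is a minimal, self-contained sketch using Python's sqlite3 module, which also supports WITH RECURSIVE. The sample groups, subject and permissions are invented for illustration:

```python
import sqlite3

# In-memory database with the same (simplified) schema as above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE acl_group (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT NOT NULL);
CREATE TABLE acl_group_membership (group_id INTEGER NOT NULL, subject_id TEXT NOT NULL);
CREATE TABLE acl (id INTEGER PRIMARY KEY, group_id INTEGER NOT NULL,
                  service TEXT NOT NULL, action TEXT NOT NULL, resources TEXT NOT NULL);

-- group 2 is a child of group 1; the user belongs to group 2
INSERT INTO acl_group VALUES (1, NULL, 'root'), (2, 1, 'team_df');
INSERT INTO acl_group_membership VALUES (2, 'alice');
INSERT INTO acl VALUES (10, 1, 's3', 'read', '*'), (11, 2, 's3', 'write', 'bucket1');
""")

# The rewritten recursive query: walk up from the user's groups,
# then collect the permissions of every group on the path.
rows = cur.execute("""
WITH RECURSIVE group_tree AS (
  SELECT group_id, 1 AS level
  FROM   acl_group_membership
  WHERE  subject_id = ?
  UNION ALL
  SELECT ag.parent_id, gt.level + 1
  FROM   group_tree gt
  JOIN   acl_group ag ON ag.id = gt.group_id
  WHERE  ag.parent_id IS NOT NULL
)
SELECT acl.id, acl.service, acl.action
FROM   group_tree gt
JOIN   acl ON acl.group_id = gt.group_id
ORDER  BY gt.level, acl.id
""", ("alice",)).fetchall()

print(rows)  # → [(11, 's3', 'write'), (10, 's3', 'read')]
```

The direct group's permissions come out first (level 1), the inherited ones after, matching the ORDER BY in the answer.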

If you need to optimize read performance further and don't have many writes (you mentioned reads only), a materialized view is an option.

With fewer than 100 rows in all tables, we certainly should not be seeing any index or bitmap index scans, only sequential scans. Either some configuration settings are off (prime suspect: the cost settings) or the table statistics are misleading.

Once you have fixed the server configuration and/or autovacuum settings and filled the tables with many rows (thousands? millions?), these indexes will help a read-only load like the one you describe (the same for either query):

CREATE INDEX ON acl_group_membership (subject_id, group_id);
CREATE INDEX ON acl_group            (id, parent_id);
CREATE INDEX ON acl                  (group_id);

This aims at index-only scans on acl_group_membership and acl_group.

My remark *"thousands? millions?"* was inspired by the bigint columns in your schema. Typically, plain integer is big enough - and smaller and faster. The same goes for the smaller, faster indexes.

Hypothetically, if you created one new row per second, it would take you 68 years to exhaust the positive half of a plain integer (int4) column:

2^31 / (3600 * 24 * 365) = 68.1
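That back-of-the-envelope figure is easy to verify:

```python
# Positive half of a signed 32-bit integer, divided by the number of
# seconds in a (non-leap) year, gives the years until overflow at
# one insert per second.
seconds_per_year = 3600 * 24 * 365   # 31,536,000
years = 2**31 / seconds_per_year
print(round(years, 1))  # → 68.1
```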

I doubt your design sees anywhere near that many writes.

Quoted from: https://dba.stackexchange.com/questions/169279