PostgreSQL

Slow Postgres aggregate query

  • September 28, 2019

I need to calculate values over a series of measurements. The measurements are divided into 1000 groups of consecutive rows, because each invocation of the query must output a series of 1000 values. A typical input set contains 10,000 to 1,000,000 rows. The solution below is the best idea I could come up with. It works correctly, but it is quite slow. Since one of my requirements is that the calculation is triggered frequently, I need to optimize the execution time. Unfortunately, pre-computing the per-group values is not an option, because every new measurement row affects the group sizes.

Schema setup

create table devices
(
   id varchar not null
       constraint devices_pkey
           primary key
);


create table processes
(
   id        integer not null,
   device_id varchar not null
       constraint fk_processes_devices
           references devices
           on delete cascade,

   constraint processes_pkey
       primary key (id, device_id)
);

create index processes_device_id_idx on processes (device_id);


create table measurements
(
   timestamp  timestamp with time zone not null,
   current    real                     not null,
   process_id integer                  not null,
   device_id  varchar                  not null,

   constraint measurements_pkey
       primary key (timestamp, process_id, device_id),

   constraint fk_measurements_processes
       foreign key (process_id, device_id) references processes
           on delete cascade
);

create index measurements_process_id_device_id_idx on measurements (device_id, process_id);


INSERT INTO devices (id) VALUES ('123');

INSERT INTO processes (id, device_id) VALUES (456, '123');

WITH numbers AS (
 SELECT *
 FROM generate_series(1, 1000000)
)
INSERT INTO measurements (timestamp, current, process_id, device_id)
SELECT NOW() + (generate_series * interval '1 second'), generate_series * random(), 456, '123'
FROM numbers;

Query

select min(timestamp) as timestamp,
      case when sum(current) < 0 then -SQRT(AVG(POWER(current, 2))) else SQRT(AVG(POWER(current, 2))) end,
      456            as process_id,
      '123'          as device_id,
      index / 1000   as group_index
from (select timestamp,
            current,
            row_number() over (order by timestamp) as index
     from measurements
     where device_id = '123'
       and process_id = 456
     order by timestamp) as subquery
group by group_index
order by group_index;
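
One detail worth flagging before optimizing: row_number() starts at 1, so index / 1000 puts rows 1 through 999 into group 0, and the plan below accordingly reports 1001 output rows instead of 1000. If the intent is exactly 1000 groups of 1000 rows each, a minimal sketch of a fix is to shift the index before dividing:

-- same query, with the row index shifted so rows 1-1000 fall into
-- group 0, rows 1001-2000 into group 1, and so on
select min(timestamp) as timestamp,
       case when sum(current) < 0 then -SQRT(AVG(POWER(current, 2))) else SQRT(AVG(POWER(current, 2))) end,
       456                as process_id,
       '123'              as device_id,
       (index - 1) / 1000 as group_index
from (select timestamp,
             current,
             row_number() over (order by timestamp) as index
      from measurements
      where device_id = '123'
        and process_id = 456
      order by timestamp) as subquery
group by group_index
order by group_index;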

DB Fiddle: https://www.db-fiddle.com/f/uVTcf9Q2JDEkPf3S5hgvfB/2

Query plan visualization: http://tatiyants.com/pev/#/plans/plan_1569689658707

How can I optimize this query?

Query plan

Sort  (cost=100157.88..100158.38 rows=200 width=60) (actual time=927.340..927.402 rows=1001 loops=1)
 Sort Key: ((subquery.index / 1000))
 Sort Method: quicksort  Memory: 103kB
 ->  HashAggregate  (cost=100144.74..100150.24 rows=200 width=60) (actual time=926.828..927.036 rows=1001 loops=1)
       Group Key: (subquery.index / 1000)
       ->  Subquery Scan on subquery  (cost=0.42..77644.74 rows=1000000 width=20) (actual time=0.049..704.478 rows=1000000 loops=1)
             ->  WindowAgg  (cost=0.42..65144.74 rows=1000000 width=20) (actual time=0.046..576.692 rows=1000000 loops=1)
                   ->  Index Scan using measurements_pkey on measurements  (cost=0.42..50144.74 rows=1000000 width=12) (actual time=0.029..219.951 rows=1000000 loops=1)
                         Index Cond: ((process_id = 456) AND ((device_id)::text = '123'::text))
Planning Time: 0.378 ms
Execution Time: 927.591 ms

There is nothing obviously wrong in the plan, so there is no obvious big optimization to be made. You are simply doing a lot of work, and that takes a lot of time.

I would probably start by taking a step back and looking at your business case. Why do you need this output? Might you "need" something that is easier to optimize instead? Something where whole chunks slide past at once, say, rather than individual rows slowly creeping from one chunk to the next on their way out? Or partitions defined by predictable timestamp slices, rather than by exactly 1000 rows per partition?
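
To illustrate the timestamp-slice idea (a sketch only; the one-minute interval is an assumption, not something from the question): buckets defined by the clock never change shape when new rows arrive, so each finished bucket could be computed once and cached.

-- sketch: group by fixed one-minute timestamp slices instead of
-- exactly-1000-row groups; pick an interval matching your data rate
select date_trunc('minute', timestamp) as bucket,
       case when sum(current) < 0
            then -sqrt(avg(power(current, 2)))
            else sqrt(avg(power(current, 2))) end as value,
       456   as process_id,
       '123' as device_id
from measurements
where device_id = '123'
  and process_id = 456
group by bucket
order by bucket;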

You don't say which version you are on, but your fiddle uses 9.5. If I use the latest version with JIT (just-in-time) compilation turned on, I get about a 15% improvement over having JIT off.
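
For reference, a sketch of how to reproduce that comparison on PostgreSQL 11 or newer (jit and jit_above_cost are standard settings; a threshold of 0 simply forces JIT on for this query while testing):

show server_version;

set jit = on;            -- enable JIT compilation for this session
set jit_above_cost = 0;  -- force JIT even for cheap plans (testing only)

explain (analyze)        -- the output gains a "JIT:" summary section
select index / 1000 as group_index,
       sqrt(avg(power(current, 2)))
from (select current,
             row_number() over (order by timestamp) as index
      from measurements
      where device_id = '123'
        and process_id = 456) as subquery
group by group_index;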

If I build an index suited to an index-only scan, I don't get much improvement from it; it is sometimes faster and sometimes slower (it also interacts with JIT). However, if not everything is in memory, it could be a huge improvement:

create index on measurements (device_id, process_id, timestamp, current);
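
A sketch of how to verify that the index-only scan actually happens (index-only scans also depend on the visibility map, which VACUUM maintains, so the benefit can fade on a heavily updated table):

vacuum analyze measurements;  -- refresh the visibility map and planner stats

explain (analyze, buffers)
select timestamp, current
from measurements
where device_id = '123'
  and process_id = 456
order by timestamp;
-- look for an "Index Only Scan" node using the new index,
-- with "Heap Fetches: 0" in the output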

If you are looking for a more involved solution, you could write a process that connects to the database and listens for new INSERTs, then updates the data incrementally by sliding rows out of one partition and into the next, rather than recomputing every partition from scratch. This is similar to how moving aggregates work, except that those only operate between frames within a single statement, not between statements as you would need. It should be very fast. Strict accuracy would depend on each inserted row being committed in the same order as its timestamp column, a requirement that is usually hard to guarantee in general but may be easy in your exact situation.
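
A minimal sketch of the notification half of that idea, using LISTEN/NOTIFY (the channel name new_measurement is made up here; the incremental bookkeeping itself would live in the connected process):

-- hypothetical trigger: announce each new measurement so an external
-- listener can shift affected rows between groups incrementally
create function notify_new_measurement() returns trigger as $$
begin
    perform pg_notify('new_measurement',
                      new.device_id || '/' || new.process_id || '/' || new.timestamp);
    return new;
end;
$$ language plpgsql;

create trigger measurements_notify
    after insert on measurements
    for each row
execute procedure notify_new_measurement();

-- the external process subscribes with: LISTEN new_measurement;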

Quoted from: https://dba.stackexchange.com/questions/249853