MongoDB aaS - managing volume/disk size - consequences of a full volume - new file allocation failure
We run MongoDB (currently 3.0.6) as a service. MongoDB runs in a Docker container with a small 8 GB volume in which the mongod data files are persistently stored. The volume cannot be extended; this is an automation and business constraint.
Customers cannot see the disk size (df -h); they only have the dbOwner role, so a db.stats() does not work for them:

    > db.getUser("rfJpoljpiG7rIn9Q")
    {
        "_id": "RuhojEtHMBnSaiKC.rfJpoljpiG7rIn9Q",
        "user": "rfJpoljpiG7rIn9Q",
        "db": "RuhojEtHMBnSaiKC",
        "roles": [
            {
                "role": "dbOwner",
                "db": "RuhojEtHMBnSaiKC"
            }
        ]
    }
I tested what happens when the volume fills up, and ran into problems.

After creating an empty database:
    # df -h .
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vdm        7.8G  6.2G  1.2G  84% /data/19a39418-320e-4557-8495-2e79fcbe1ca4
I ran a loop of GridFS uploads, with various sizes and different data.
    $ mongofiles -h localhost --port 3000 -d xxx -u xxx -p xx put test.pdf
    2016-06-03T14:26:49.244+0200    connected to: localhost:3000
    added file: test.pdf
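The loop itself is not shown; purely as a sketch, it could have looked like this (file names follow the test-NN.tmp pattern that shows up in the errors later; the real loop used varying sizes and data):

    # Hypothetical upload loop: keep pushing files into GridFS
    # until the volume fills up.
    for i in $(seq 1 200); do
        cp test.pdf "test-$i.tmp"
        mongofiles -h localhost --port 3000 -d xxx -u xxx -p xx put "test-$i.tmp"
    done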
Soon afterwards I see this in the log:
    2016-06-03T13:04:51.731+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
    2016-06-03T13:04:51.744+0000 I STORAGE  [FileAllocator] FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back
    2016-06-03T13:04:51.748+0000 I STORAGE  [FileAllocator] error: failed to allocate new file: /data/work/mongodb/data/RuhojEtHMBnSaiKC.7 size: 2146435072 failure creating new datafile; lseek failed for fd 25 with errno: errno:2 No such file or directory. will try again in 10 seconds
    2016-06-03T13:05:01.749+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
    2016-06-03T13:05:01.756+0000 I STO^C

    # df -h .
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/vdm        7.8G  6.2G  1.2G  84% /data/19a39418-320e-4557-8495-2e79fcbe1ca4
and found this explanation:
Because our ObjectRocket instances run with the smallfiles option, the first extent is allocated at 16MB. These extents double in size until they reach 512MB, after which every extent is allocated as a 512MB file. So our example "ocean" database has a file structure as follows:

These extents store the data and indexes of our database. With MongoDB, as soon as any data is written to an extent, the next logical extent is allocated. So with the structure above, ocean.6 probably holds no data yet, but was preallocated when ocean.5 became full. Once any data is written to ocean.6, a new 512MB extent, ocean.7, will again be preallocated. When data is deleted from a MongoDB database, the space is not freed until you compact, so over time these data files can become fragmented as data is removed (or if a document outgrows its original storage location because additional keys were added). A compaction defragments these data files, because during a compact the data is copied from another member of the replica set and the data files are recreated from scratch.
Filesystem view:
    # ls -alh
    total 6.2G
    drwxr-xr-x. 5 chrony ssh_keys 4.0K Jun  3 15:00 .
    drwxr-xr-x. 5 chrony ssh_keys 4.0K Jun  3 13:20 ..
    drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 13:20 admin
    -rw-------. 1 chrony ssh_keys  64M Jun  3 14:01 admin.0
    -rw-------. 1 chrony ssh_keys  16M Jun  3 14:01 admin.ns
    drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 13:20 local
    -rw-------. 1 chrony ssh_keys  64M Jun  3 13:20 local.0
    -rw-------. 1 chrony ssh_keys  16M Jun  3 13:20 local.ns
    -rwxr-xr-x. 1 chrony ssh_keys    2 Jun  3 13:20 mongod.lock
    -rw-------. 1 chrony ssh_keys  64M Jun  3 15:58 RuhojEtHMBnSaiKC.0
    -rw-------. 1 chrony ssh_keys 128M Jun  3 15:58 RuhojEtHMBnSaiKC.1
    -rw-------. 1 chrony ssh_keys 256M Jun  3 15:58 RuhojEtHMBnSaiKC.2
    -rw-------. 1 chrony ssh_keys 512M Jun  3 15:58 RuhojEtHMBnSaiKC.3
    -rw-------. 1 chrony ssh_keys 1.0G Jun  3 15:58 RuhojEtHMBnSaiKC.4
    -rw-------. 1 chrony ssh_keys 2.0G Jun  3 15:26 RuhojEtHMBnSaiKC.5
    -rw-------. 1 chrony ssh_keys 2.0G Jun  3 15:58 RuhojEtHMBnSaiKC.6
    -rw-------. 1 chrony ssh_keys  16M Jun  3 15:58 RuhojEtHMBnSaiKC.ns
    -rw-r--r--. 1 chrony ssh_keys   69 Jun  3 13:20 storage.bson
    drwxr-xr-x. 2 chrony ssh_keys 4.0K Jun  3 16:03 _tmp
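This listing matches the default (non-smallfiles) progression, in which data files start at 64MB and double up to a cap of roughly 2GB per file; a quick sanity check in the mongo shell:

    // File sizes .0 through .6 in MB (the "2.0G" files are really ~2047 MB,
    // matching the 2146435072-byte allocation attempt in the log above).
    var files = [64, 128, 256, 512, 1024, 2047, 2047];
    var totalMB = files.reduce(function (a, b) { return a + b; }, 0);
    print(totalMB + " MB allocated");  // 6078 MB, i.e. the fileSize of 6223872 KB reported by db.stats(1024) below
    // The next file, .7, would need another ~2047 MB, but only ~1.2 GB is free.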
I then saw this error from GridFS (even though a few GB of storage were available and I only uploaded 1 MB files in the loop):
    2016-06-03T16:34:42.454+0200    Failed: error while storing 'test-94.tmp' into GridFS: new file allocation failure
    2016-06-03T16:34:42.623+0200    connected to: localhost:3000
    2016-06-03T16:34:42.917+0200    Failed: error while storing 'test-95.tmp' into GridFS: new file allocation failure
    2016-06-03T16:34:43.090+0200    connected to: localhost:3000
    [... the same "new file allocation failure" repeats for test-96.tmp through test-123.tmp ...]
    2016-06-03T16:34:54.743+0200    connected to: localhost:3000
    2016-06-03T16:34:55.048+0200    Failed: error while storing 'test-124.tmp' into GridFS: new file allocation failure
Why?
It is also strange that there are no new entries in mongodb.log. Why? The log still ends with:

    2016-06-03T13:04:43.996+0000 I ACCESS   [conn9] Unauthorized not authorized on admin to execute command { serverStatus: 1.0 }
    2016-06-03T13:04:51.731+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
    2016-06-03T13:04:51.744+0000 I STORAGE  [FileAllocator] FileAllocator: posix_fallocate failed: errno:28 No space left on device falling back
    2016-06-03T13:04:51.748+0000 I STORAGE  [FileAllocator] error: failed to allocate new file: /data/work/mongodb/data/RuhojEtHMBnSaiKC.7 size: 2146435072 failure creating new datafile; lseek failed for fd 25 with errno: errno:2 No such file or directory. will try again in 10 seconds
    2016-06-03T13:05:01.749+0000 I STORAGE  [FileAllocator] allocating new datafile /data/work/mongodb/data/RuhojEtHMBnSaiKC.7, filling with zeroes...
    2016-06-03T13:05:01.756+0000 I STO
There should be a log entry for each new connection, but after hours there are no new lines. Yet the database is online, at least via the mongo shell.
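To confirm that the server still responds even though nothing new is being logged, a minimal liveness check (a sketch, reusing the same connection string as elsewhere in this post):

    # Ping the server; this should succeed as long as mongod accepts
    # connections, even while the FileAllocator keeps retrying.
    mongo mongodb://xxx:xxx@localhost:3000/RuhojEtHMBnSaiKC --eval "printjson(db.runCommand({ ping: 1 }))"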
I decided to insert random data, using this example script:
    $ mongo mongodb://xxx:xxx@localhost:3000/RuhojEtHMBnSaiKC --eval "var arg1=50000000;arg2=1" create_random_data.js
    Job#1 inserted 49400000 documents.
    Job#1 inserted 49500000 documents.
    Job#1 inserted 49600000 documents.
    Job#1 inserted 49700000 documents.
    Job#1 inserted 49800000 documents.
    Job#1 inserted 49900000 documents.
    Job#1 inserted 50000000 documents.
    Job#1 inserted 50000000 in 1538.035s
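The script itself is referenced rather than shown; purely as a sketch, a bulk random-insert job along these lines could look like this (the collection name randomData is taken from the collection listing below; the batch size is an assumption):

    // Hypothetical reconstruction of a create_random_data.js-style job.
    // arg1 (total documents) and arg2 (job number) are injected via --eval.
    var batch = [];
    for (var n = 1; n <= arg1; n++) {
        batch.push({ job: arg2, n: n, v: Math.random() });
        if (n % 1000 === 0) {          // insert in batches of 1000 documents
            db.randomData.insert(batch);
            batch = [];
        }
        if (n % 100000 === 0) {
            print("Job#" + arg2 + " inserted " + n + " documents.");
        }
    }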
And another example script with random strings:
    function randomString() {
        var chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz";
        var randomstring = '';
        var string_length = 10000000;
        for (var i = 0; i < string_length; i++) {
            var rnum = Math.floor(Math.random() * chars.length);
            randomstring += chars.substring(rnum, rnum + 1);
        }
        return randomstring;
    }
    for (var i = 0; i < 2000000; i++) { db.test.save({ x: i, data: randomString() }); }

    Inserted 1 record(s) in 3199ms
    Inserted 1 record(s) in 3059ms
    Inserted 1 record(s) in 3264ms
    Inserted 1 record(s) in 3279ms
    Inserted 1 record(s) in 3187ms
    Inserted 1 record(s) in 3133ms
    Inserted 1 record(s) in 2999ms
    Inserted 1 record(s) in 3220ms
    Inserted 1 record(s) in 2966ms
    Inserted 1 record(s) in 3161ms
    Inserted 1 record(s) in 3165ms
    Inserted 1 record(s) in 3154ms
    Inserted 1 record(s) in 3362ms
    Inserted 1 record(s) in 3288ms
    Inserted 1 record(s) in 3184ms
    new file allocation failure
    new file allocation failure
    new file allocation failure
    new file allocation failure
    new file allocation failure
    new file allocation failure
    new file allocation failure
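As an aside, part of the ~3 s per insert here is likely spent growing a 10 MB string one character at a time; a sketch of a variant that fills a pre-sized array and joins once, which is usually noticeably faster in the shell:

    // Build the random string in an array and join once, instead of
    // performing 10,000,000 string concatenations.
    function randomString(len) {
        var chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz";
        var out = new Array(len);
        for (var i = 0; i < len; i++) {
            out[i] = chars.charAt(Math.floor(Math.random() * chars.length));
        }
        return out.join('');
    }
    // Usage: db.test.save({ x: 0, data: randomString(10000000) });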
Read-only access still works:
    > db.test.find();
    { "_id": ObjectId("5751a595b9f7999857650c13"), "x": 0, "data": "xFmUATFIEWao4moOZ0SknNo56dg49TTyQcVgGBTeyE2RUKr7WQ6s0BpmhvSlrAuTBDGpZfPDGtfRrNLSpA8PcbNMkWfCoFFMevCC" }
    { "_id": ObjectId("5751a595b9f7999857650c14"), "x": 1, "data": "IKsbGictFAtcgfMfUggzfHZSiPreWW3Tm8ik8tgLDERWUo2P1Lh2RKBardHUhaEZfuaaM7ofFRGKKHSFwGNcUQA051mMgOxpNvbN" }
    { "_id": ObjectId("5751a595b9f7999857650c15"), "x": 2, "data": "MXQySK5RsMrXTw8JuRzxIeAaxSgNhXdkFzOhcbZZcsTSU7T1sBLTyps7mw0vlGaOzCvJQz08BKr9ALXEPKpl3REUGZMTAx3wccur" }
    [... 16 similar documents (x: 3 through x: 18) omitted ...]
    { "_id": ObjectId("5751a596b9f7999857650c26"), "x": 19, "data": "x1WE7ccb1Dyis4ggEGHNPcTez4BqT6TbiT0d9fXnr1bkXe2XZTTC1ZGnLxP4DRPtgeQ6aZ32kpiyrM4IUOAqcx7EkKKZIbMPNm68" }
    Fetched 20 record(s) in 72ms -- More[true]
I can even insert small documents:
    > db.users.insertMany([
        { name: "bob", age: 42, status: "A" },
        { name: "ahn", age: 22, status: "A" },
        { name: "xi",  age: 34, status: "D" }
      ])
    {
        "acknowledged": true,
        "insertedIds": [
            ObjectId("5751a807758c56125f57a556"),
            ObjectId("5751a807758c56125f57a557"),
            ObjectId("5751a807758c56125f57a558")
        ]
    }
    > db.stats(1024);
    {
        "db": "RuhojEtHMBnSaiKC",
        "collections": 8,
        "objects": 12364085,
        "avgObjSize": 442.12763435385637,
        "dataSize": 5338382.47265625,
        "storageSize": 5535032,
        "numExtents": 50,
        "indexes": 6,
        "indexSize": 392639.625,
        "fileSize": 6223872,
        "nsSizeMB": 16,
        "extentFreeList": {
            "num": 0,
            "totalSize": 0
        },
        "dataFileVersion": {
            "major": 4,
            "minor": 22
        },
        "ok": 1
    }
File descriptors inside the container:
    dr-x------. 2 mongod mongod  0 Jun  3 11:20 .
    dr-xr-xr-x. 8 mongod mongod  0 Jun  3 11:20 ..
    lr-x------. 1 mongod mongod 64 Jun  3 11:20 0 -> /dev/null
    l-wx------. 1 mongod mongod 64 Jun  3 11:20 1 -> pipe:[1433257418]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 10 -> /data/work/mongodb/data/admin.0
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 11 -> /data/work/mongodb/data/local.ns
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 12 -> /data/work/mongodb/data/local.0
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 13 -> socket:[1437446183]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 14 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.ns
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 15 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.0
    lrwx------. 1 mongod mongod 64 Jun  3 16:01 16 -> socket:[1438081191]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 17 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.1
    l-wx------. 1 mongod mongod 64 Jun  3 11:20 2 -> pipe:[1433257419]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 20 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.2
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 21 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.3
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 22 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.4
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 23 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.5
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 24 -> /data/work/mongodb/data/RuhojEtHMBnSaiKC.6
    lr-x------. 1 mongod mongod 64 Jun  3 11:20 3 -> /dev/urandom
    l-wx------. 1 mongod mongod 64 Jun  3 11:20 4 -> /data/work/mongodb/logs/mongod.log
    lr-x------. 1 mongod mongod 64 Jun  3 11:20 5 -> /dev/urandom
    lrwx------. 1 mongod mongod 64 Jun  3 11:20 6 -> socket:[1433261184]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 7 -> socket:[1433261185]
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 8 -> /data/work/mongodb/data/mongod.lock
    lrwx------. 1 mongod mongod 64 Jun  3 16:00 9 -> /data/work/mongodb/data/admin.ns
    bash-4.2$ pwd
    /proc/1/fd
Then I dropped a collection:
    > show collections
    fs.chunks       → 3733.953MB / 3736.602MB
    fs.files        →    0.021MB /    0.039MB
    randomData      → 1318.577MB / 1506.949MB
    system.indexes  →    0.001MB /    0.008MB
    system.profile  →    0.105MB /    1.000MB
    test            →  160.600MB /  160.664MB
    users           →    0.008MB /    0.039MB
    > db.test.drop();
    true
Now my random-data script no longer works:
    function randomString() {
        var chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXTZabcdefghiklmnopqrstuvwxyz";
        var randomstring = '';
        var string_length = 100000000;
        for (var i = 0; i < string_length; i++) {
            var rnum = Math.floor(Math.random() * chars.length);
            randomstring += chars.substring(rnum, rnum + 1);
        }
        return randomstring;
    }
    for (var i = 0; i < 2000000; i++) { db.test.save({ x: i, data: randomString() }); }
The test collection is not created. (Note that string_length is now 100000000, i.e. a ~100 MB string per document, which by itself exceeds MongoDB's 16 MB BSON document limit even with free disk space.)
Why can I only use 5.12 GB of the 7.8 GB total?
    > db.stats(1024);
    {
        "db": "RuhojEtHMBnSaiKC",
        "collections": 7,
        "objects": 12360157,
        "avgObjSize": 428.64359376664873,
        "dataSize": 5173927.84765625,
        "storageSize": 5370512,
        "numExtents": 45,
        "indexes": 5,
        "indexSize": 392503.890625,
        "fileSize": 6223872,
        "nsSizeMB": 16,
        "extentFreeList": {
            "num": 7,
            "totalSize": 165160
        },
        "dataFileVersion": {
            "major": 4,
            "minor": 22
        },
        "ok": 1
    }
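To read these numbers: fileSize is what mongod has allocated on disk, dataSize is what the documents actually occupy, and extentFreeList is space returned by the drop that only this database can reuse. A quick breakdown in the shell (all values in KB because of the 1024 scale factor):

    // Where the allocated space went, using the same scale factor (KB).
    var s = db.stats(1024);
    print("allocated on disk (fileSize):     " + s.fileSize);
    print("occupied by documents (dataSize): " + s.dataSize);
    print("freed by drops (extentFreeList):  " + s.extentFreeList.totalSize);
    print("allocated but currently unused:   " + (s.fileSize - s.dataSize));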
The database seems to be in a strange state.

How can Ops manage this in bulk, with minimal effort and satisfied customers?
You mention the smallfiles option (from the ObjectRocket docs), but your ls output suggests you are not actually using it. If you were, your maximum file size would be 512MB, yet you have 2GB files (the default). That also explains your problem. Once you fill your existing data files and another write comes in (it is a bit more complicated than that, but this is a good way to think about it), MongoDB will try to allocate a new data file, again 2GB in size. You do not have enough space for another 2GB file, so you get the errors and failures.
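For reference, smallfiles is a mongod startup option; a sketch of both spellings, with the dbpath taken from your logs. Note that already-allocated 2GB files are not rewritten; the option only affects files allocated afterwards:

    # Command-line flag (MMAPv1, MongoDB 3.0):
    mongod --smallfiles --dbpath /data/work/mongodb/data
    # Equivalent YAML config:
    #   storage:
    #     mmapv1:
    #       smallFiles: true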
So if you turn on smallfiles, you will be able to use more space and get closer to the volume's maximum usage. The preallocation options can also be tuned, but with 3.0 they matter less than in older versions (MMAP preallocation was reworked in later releases). Finally, as mentioned elsewhere, you could also try WiredTiger, though I would recommend upgrading to 3.2 first (it is now the default storage engine). WiredTiger can optionally use compression, with snappy enabled by default and more aggressive options available, so you can essentially trade CPU for disk-space efficiency (for reference, I analyzed the impact of the various options here a while back).
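A sketch of the corresponding startup options for the WiredTiger route (the data directory must be empty or migrated first, e.g. via mongodump/mongorestore, since the on-disk formats are incompatible):

    # WiredTiger with block compression; snappy is the default, zlib trades
    # more CPU for a smaller on-disk footprint.
    mongod --storageEngine wiredTiger \
           --wiredTigerCollectionBlockCompressor zlib \
           --dbpath /data/work/mongodb/data-wt   # hypothetical fresh data dir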