From: Yaoxing (yaoxing.zhang@gmail.com)
Date: Thu Dec 23 2010 - 04:45:23 CST
Thank you very much for your detailed explanation. I've made some comments below.
2010/12/23 16:06, Stan Hoeppner:
> Yaoxing put forth on 12/22/2010 9:59 PM:
>> 3. 3.2MB/s disk IO write, 0.01MB/s read.
> MB/s throughput isn't usually a factor, but IOPS definitely can be.
> What's in the iostat tps column for the device your mail queues reside on?
Is this what you're talking about?
Device:            tps   Blk_read/s   Blk_wrtn/s    Blk_read      Blk_wrtn
sda             148.63        27.55      6550.60   523033469  124353201092
sda1              0.00         0.00         0.00        2524           116
sda2            148.63        27.55      6550.59   523027626  124353101816
sda3              0.00         0.00         0.01        2895         99160
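As a side note, the sda tps above already sits essentially at the ~150 seeks/sec figure quoted in this thread for a single 7.2k rpm SATA disk. A quick check (using the numbers from this thread, not a benchmark):

```python
# Compare the measured tps for sda (from the iostat output above)
# against the ~150 IOPS ceiling quoted in this thread for a single
# 7.2k rpm SATA disk under optimal conditions.
measured_tps = 148.63
single_sata_iops = 150.0

utilization = measured_tps / single_sata_iops
print(f"approximate IOPS utilization: {utilization:.0%}")  # prints: approximate IOPS utilization: 99%
```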
> If your mail queue resides on a single mechanical disk spindle you may
> be running into an IOPS limit. A single 7.2k rpm SATA disk can only
> sink approximately 150 seeks/sec (IOPS) under optimal conditions. You
> didn't mention the size of your subscriber base, but you mentioned that
> you queue mail to those subscriber addresses every 4 seconds. If using
> a single SATA disk as described above, you will run out of IOPS with
> ~600 subscribers. If all of your queues and your log files, and your
> entire *nix system, resides on such a single disk, you'll run out of
> IOPS well below 600 subscribers.
I don't fully understand how this 600-subscriber figure is calculated.
Maybe I should describe my situation like this: it's a weekly promotion
mail, and no matter how many subscribers there are, I send 1 email every
4 seconds.
Despite all I said above, do you mean that if I have 600 subscribers,
and I send all of them 1 mail every 4 seconds (which is 600 mails per
4-second interval, i.e. 150 mails/sec), I'll run out of IOPS?
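If I follow the reasoning, the arithmetic would be something like this (assuming, as the quoted text implies, roughly one disk seek/write per queued message):

```python
# One mailing burst every 4 seconds; a single 7.2k rpm SATA disk can
# absorb ~150 seeks/sec, so within one interval it can queue at most:
single_sata_iops = 150
interval_seconds = 4

max_messages_per_interval = single_sata_iops * interval_seconds
print(max_messages_per_interval)  # prints: 600
```

So the 600-subscriber figure would be 150 IOPS x 4 seconds: 600 messages queued per interval is the most such a disk can absorb.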
> After investigating, if this is indeed the cause of your queue
> performance problem, the solution is to put the Postfix queues on a
> separate storage device, chosen for both performance and cost.
> Using a dedicated device for the queue files, a single 15k SAS disk can
> sink approximately 300 seeks/sec (IOPS) which would allow for
> approximately 1200 subscribers with a 4 sec interval between mailings.
> A RAID 10 set of 4 such drives will allow double that amount, or 2400
> subscribers.
> The cost of a good quality high performance SSD for Postfix queues is
> much less than 15k rpm SAS drives and far less than RAID 10, and will
> give you tens of thousands of IOPS, allowing for many thousands of
> subscribers, limited by SSD capacity. If your problem is limited IOPS,
> I suggest putting your queues on one of these:
> If your total queue size is exceeding 40GB before mail can be flushed to
> the destinations, OCZ offers both 60GB and 90GB models for less than
> $200 USD. IOPS performance for all 3 sizes of OCZ devices is similar:
> approximately 45,000-50,000 IOPS for 4 KB aligned random writes.
> An SSD such as this is by far the best price/performance solution to
> such a queue IOPS problem, if that is indeed your queue performance issue.
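The same arithmetic, applied to each device mentioned above (using the thread's rough IOPS approximations, not benchmarks), gives the subscriber figures quoted:

```python
# Subscribers supportable per device = IOPS ceiling * 4-second interval,
# assuming ~1 queue-file write per message (IOPS figures from this thread).
interval_seconds = 4
devices = [
    ("single 7.2k SATA", 150),
    ("single 15k SAS", 300),
    ("RAID 10 of four 15k SAS", 600),
    ("good SSD (4k random write)", 45_000),
]
for name, iops in devices:
    print(f"{name}: ~{iops * interval_seconds} subscribers")
```

The SSD row comes out in the hundreds of thousands, which matches the "many thousands of subscribers, limited by SSD capacity" point in the quoted text.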
I read more of the Postfix documentation today, and according to my
understanding the active queue is held in memory. Thus, if I were running
out of IOPS, I shouldn't be seeing a full active queue (tell me if I'm
wrong). But I actually do. So it's more likely a network limitation, or
perhaps a limit on the number of processes, but both of those seem to be
fine. That's why I'm