From: Robert Klemme (shortcuttergooglemail.com)
Date: Fri Apr 27 2007 - 07:19:11 CDT
On 27.04.07, Zuylen, G. van <GvZuylenysl.nl> wrote:
> I have connected the DB server (MaxDB) to a NetApp filer (FAS3020) via an NFS mount. The database files are located on the NFS volume on the NetApp. Under heavy I/O load, however, NFS performance becomes very poor: on the database I see I/O response times above 100 ms. The normal response time, under normal load, is around 8 ms (also not very fast).
> The overall performance of the filer is good, and NetApp also can't locate a performance problem on our filer.
I am involved in tuning an Oracle 10g database (Solaris, filesystems
on a NetApp filer) and I/O does not seem very fast there either - but
there is no indication of the filer being slow. In our case there
seems to be some overhead created by a virtual volume management layer
(I forgot the name), though.
> Different DB parameters have been changed to optimize for NFS (USE_OPEN_DIRECT = YES).
> On the Linux system (SuSE SLES 9 SP3 - (x86_64) - kernel 2.6.5-7.244-smp) different parameters have also been modified:
> sysctl.conf:
> net.core.netdev_max_backlog = 3000
> net.ipv4.tcp_rmem = 4096 87380 8388608
> net.ipv4.tcp_wmem = 4096 65536 8388608
> net.ipv4.tcp_mem = 4096 4096 4096
> The connection to the filer is a dual 1 Gb link (bonding). There are no network errors on this connection. Testing this interface with, for instance, ftp, the maximum throughput is OK for this kind of interface (+/- 100 MByte/sec).
That's a completely different workload than a DB generates. It is
fairly easy for about any storage and file system to sequentially read
or write a single large file at high bandwidth; random small-block I/O
is a much harder test.
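For reference, ~100 MByte/sec is roughly what a single saturated 1 Gb
link delivers anyway, so the ftp test mostly confirms raw bandwidth,
not DB-style I/O. A quick back-of-the-envelope check (a sketch; the
~10% protocol-overhead figure is my assumption, not a measurement):

```python
# Raw wire speed of one 1 Gbit/s link, in MByte/s (1 Gbit = 10**9 bits).
raw_mbyte_per_s = 1e9 / 8 / 1e6            # 125.0 MByte/s

# Assume roughly 10% lost to Ethernet/IP/TCP framing (assumption).
usable_mbyte_per_s = raw_mbyte_per_s * 0.9

print(f"one 1 Gb link, usable: ~{usable_mbyte_per_s:.0f} MByte/s")
```

So the observed +/- 100 MByte/sec is consistent with one link of the
bond running flat out on a sequential transfer.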
> Used mount options :
> san-merb-back:/vol/vol_sap_YZO/sapdata /sapdb/YZQ/sapdata nfs rsize=32768,wsize=32768,intr,rw,nolock,nfsvers=3,hard 0 0
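Note that rsize/wsize also bound what one synchronous request stream
can achieve: with 32 KB per NFS read and the response times you quote,
a single outstanding request cannot move much data. A rough sketch of
that arithmetic (single in-flight request per stream is the assumption):

```python
def per_stream_mbyte_per_s(io_bytes: int, latency_s: float) -> float:
    """Throughput of one synchronous stream: one I/O per round trip."""
    return io_bytes / latency_s / 1e6

# rsize=32768 from the mount options, latencies from the post.
print(per_stream_mbyte_per_s(32768, 0.008))  # at 8 ms:  ~4.1 MByte/s
print(per_stream_mbyte_per_s(32768, 0.100))  # at 100 ms: ~0.33 MByte/s
```

So even at your "normal" 8 ms, one synchronous stream sees only a few
MByte/s - nowhere near what the ftp test suggests the wire can do.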
> Has anyone had the same kind of problems, or can anyone give me advice on how to solve this?
My *impression* (which I cannot back with any concrete figures, as I
did not do specific performance tests with different setups): network-
attached storage is not ideal for databases, especially for OLTP. My
reasoning goes like this: in order to retrieve a block from the remote
filesystem, a lot more has to happen with network-attached storage
than with local storage (e.g. through a fast SCSI host adapter): there
is network protocol overhead, network hardware etc. Filers might be
good at streaming large data files while concurrently maintaining high
bandwidth. But since not all DB operations can make use of multiblock
reads or similar streaming-like features, my impression is that the
increased latency of network-attached storage kills you in those
scenarios.
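To put numbers on that latency argument, here is a sketch using the
response times from the original post (single-threaded access with one
8 KB block per round trip is my simplifying assumption):

```python
def single_thread_iops(latency_s: float) -> float:
    # One synchronous block read completes per round trip.
    return 1.0 / latency_s

BLOCK_BYTES = 8192  # assumed 8 KB DB block size

for name, lat in [("normal (8 ms)", 0.008), ("loaded (100 ms)", 0.100)]:
    iops = single_thread_iops(lat)
    mbyte_s = iops * BLOCK_BYTES / 1e6
    print(f"{name}: {iops:.0f} IOPS, ~{mbyte_s:.2f} MByte/s")
```

A single session doing random reads drops from ~125 to ~10 IOPS when
latency goes from 8 ms to 100 ms - which is exactly where a latency-
sensitive OLTP workload suffers, regardless of how much sequential
bandwidth the filer has.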
MaxDB Discussion Mailing List
For list archives: http://lists.mysql.com/maxdb