From: Jan Kirchhoff (kirchygmx.de)
Date: Tue Jan 29 2008 - 17:09:51 CST
David Schneider-Joseph wrote:
> Hi all,
> I am attempting to convert a very large table (~23 million rows) from
> MyISAM to InnoDB. If I do it in chunks of one million at a time, the
> first million are very fast (approx. 3 minutes or so), and then it
> gets progressively worse, until by the time I get even to the fourth
> chunk, it's taking 15-20 minutes, and continuing to worsen. This is
> much worse degradation than the O(log(N)) that you would expect.
> This problem can even be reproduced in a very simple test case, where
> I continuously insert approximately 1 million rows into a table, with
> random data. `big_table` can be any table with approximately one
> million rows in id range 1 through 1000000 (we're not actually using
> any data from it):
> Any ideas, anyone?
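The chunked conversion described above can be sketched as a sequence of bounded INSERT ... SELECT statements keyed on the id range. The snippet below just generates those statements; the table names `big_table` and `big_table_innodb` are assumptions for illustration, not taken from the original post:

```python
def chunk_statements(total_rows: int, chunk: int) -> list[str]:
    """Build one INSERT ... SELECT per id-range chunk.

    Illustrative only: assumes a contiguous integer `id` column from 1
    to total_rows, and a pre-created InnoDB target table.
    """
    stmts = []
    for lo in range(1, total_rows + 1, chunk):
        hi = min(lo + chunk - 1, total_rows)
        stmts.append(
            f"INSERT INTO big_table_innodb "
            f"SELECT * FROM big_table WHERE id BETWEEN {lo} AND {hi};"
        )
    return stmts

# ~23 million rows in chunks of one million -> 23 statements
statements = chunk_statements(23_000_000, 1_000_000)
```

Each chunk touches a disjoint id range, so the per-chunk work on the source side stays constant; the slowdown the poster sees comes from the target side as the InnoDB table and its indexes grow.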
What hardware are you running on, and how much memory do you have? What
version of MySQL?
How did you set innodb_buffer_pool_size? You might want to read up on it
and do some tuning.
If that doesn't help, you'll need to post more info on your config.
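For reference, the buffer pool is set in my.cnf under the [mysqld] section. The value below is purely illustrative, not a recommendation for the poster's (unknown) hardware:

```ini
[mysqld]
# Illustrative value only -- on a dedicated InnoDB server this is
# typically sized to a large fraction of available RAM. Too small a
# pool forces index pages to disk as the table grows, which matches
# the progressive slowdown described above.
innodb_buffer_pool_size = 1G
```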
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql