From: Douglas Pearson (biz@sunnyhome.org)
Date: Thu Aug 23 2007 - 16:10:04 CDT
We know that we won't need to do those sorts of queries except for
statistical analysis, which can happen offline (and for that we'll assemble
the data back into a single table).
Each table is for a specific user and there's no need to run queries across
users (for this data).
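[Editor's note: the offline reassembly step described above could look like the following sketch. It uses SQLite (via Python's sqlite3) purely for self-containment; the real system is MySQL 5.0/InnoDB, and the user ids and the `cnt` column name (standing in for the schema's `count`) are illustrative.]

```python
import sqlite3

# In-memory database standing in for the per-user MySQL tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

user_ids = [1, 2, 3]  # hypothetical user ids
for uid in user_ids:
    cur.execute(f"CREATE TABLE user_{uid} (rank INTEGER, item_id INTEGER, cnt INTEGER)")
    cur.executemany(f"INSERT INTO user_{uid} VALUES (?, ?, ?)",
                    [(r, r * 10, 1) for r in range(100)])

# Reassemble into a single table for offline statistical analysis,
# re-adding the user_id column that the per-user tables leave implicit.
cur.execute("CREATE TABLE all_users (user_id INTEGER, rank INTEGER, item_id INTEGER, cnt INTEGER)")
for uid in user_ids:
    cur.execute(f"INSERT INTO all_users SELECT {uid}, rank, item_id, cnt FROM user_{uid}")

cur.execute("SELECT COUNT(*) FROM all_users")
print(cur.fetchone()[0])  # 300 rows: 3 users x 100 records each
```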
From: Jerry Schwartz [mailto:jschwartz@the-infoshop.com]
Sent: Thursday, August 23, 2007 2:02 PM
To: 'Douglas Pearson'; mysql@lists.mysql.com
Subject: RE: Dynamic tables--always a bad idea?
How are you going to do queries that join or merge thousands of tables? Or
won't that be necessary?
The Infoshop by Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
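[Editor's note: to make the cost behind this question concrete, in the per-user design any cross-user query has to be assembled dynamically as a UNION ALL over every table. A sketch, using SQLite via Python's sqlite3 for self-containment (the real system is MySQL; user ids and the `cnt` column name are hypothetical):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical per-user tables, as in the dynamic-table design.
for uid in (1, 2, 3):
    cur.execute(f"CREATE TABLE user_{uid} (item_id INTEGER, cnt INTEGER)")
    cur.executemany(f"INSERT INTO user_{uid} VALUES (?, ?)",
                    [(i, uid) for i in range(5)])

# A query across users must be assembled as a UNION ALL over every table;
# with 100,000 tables this string (and the planner's work) grows with the
# table count, which is the crux of the question above.
union = " UNION ALL ".join(
    f"SELECT {uid} AS user_id, item_id, cnt FROM user_{uid}" for uid in (1, 2, 3)
)
cur.execute(f"SELECT SUM(cnt) FROM ({union})")
print(cur.fetchone()[0])  # 5*1 + 5*2 + 5*3 = 30
```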
> -----Original Message-----
> From: Douglas Pearson [mailto:biz@sunnyhome.org]
> Sent: Thursday, August 23, 2007 4:35 PM
> To: mysql@lists.mysql.com
> Subject: Dynamic tables--always a bad idea?
> We're trying to figure out how to design a particularly critical table
> in our database schema. The choices are to use a single large table or
> a series of dynamically created small tables.
> This table will receive the majority of traffic (queries and updates)
> in the database, so it's a key part of the design. The data set means
> we're either looking at 1 table with perhaps 10 million records or
> 100,000 tables each with about 100 records.
> "Standard" SQL theory seems to say we should use a single table. It's
> more flexible, and some queries simply aren't possible across multiple
> tables (or at least not efficiently). But in this case we're happy to
> live with reduced flexibility if it gives us substantially better
> performance.
> Early empirical testing with 100,000 records suggests the single large
> table becomes progressively slower to access as it grows in size
> (average access time goes from ~4ms/transaction up to around ~80ms for
> our test cases--MySQL 5.0 on CentOS). The multiple dynamic tables don't
> seem to have this property--access remains pretty much constant, as you
> might expect.
> So the question is: even given this 20x performance benefit, are we
> still fools to consider the dynamic table model? Are we going to run
> into max-tables or max-file-handle limits or other problems that will
> eventually bite us? Or is this speed difference just an artifact of
> poor indexing choices or similar? Or are dynamic tables OK sometimes?
> P.S. Here's the table in question:
> CREATE TABLE one_big_table (
> rank bigint not null auto_increment unique,
> item_id int not null,
> user_id int not null,
> count smallint not null default 1,
> added datetime not null,
> primary key(rank, user_id)
> ) engine=InnoDB;
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
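[Editor's note: the question above asks whether the slowdown could be an indexing artifact. One thing worth ruling out (an assumption, not a diagnosis) is that per-user lookups on one_big_table have no index leading with user_id, since the primary key leads with rank. A candidate secondary index:]

```sql
-- Hypothetical: lets per-user range scans walk an index ordered by
-- (user_id, rank) instead of the rank-leading primary key; untested
-- against the poster's workload.
ALTER TABLE one_big_table ADD INDEX idx_user_rank (user_id, rank);
```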