From: Jerry Schwartz (jschwartz@the-infoshop.com)
Date: Thu Sep 04 2008 - 10:10:34 CDT
>From: Brent Baisley [mailto:brenttech@gmail.com]
>Sent: Wednesday, September 03, 2008 5:35 PM
>To: Jim Leavitt
>Subject: Re: Large Query Question.
>That's a lot of data to return; make sure you factor in data load and
>transfer time. You may try breaking your query into smaller parts and
>recombining the results in a scripting language. If you are searching
>on a range (e.g. a date range), break the range into smaller parts and
>run multiple queries.
>Divide and conquer; it will scale better.
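Brent's divide-and-conquer idea could be sketched roughly as below. This is a minimal illustration only: Python's sqlite3 stands in for MySQL, and the `orders` table, `created` column, and seven-day chunk size are invented for the example.

```python
# Split one big date-range query into several smaller ones and
# recombine the rows client-side, so each result set stays small.
# sqlite3 is used only as a stand-in for MySQL; table and column
# names (orders, created) are made up for illustration.
import sqlite3
from datetime import date, timedelta

def chunked_ranges(start, end, days):
    """Yield (lo, hi) date pairs covering [start, end) in days-sized steps."""
    lo = start
    while lo < end:
        hi = min(lo + timedelta(days=days), end)
        yield lo, hi
        lo = hi

def fetch_in_chunks(conn, start, end, days=7):
    rows = []
    for lo, hi in chunked_ranges(start, end, days):
        # Each query covers only a small slice of the full range.
        rows.extend(conn.execute(
            "SELECT id, created FROM orders WHERE created >= ? AND created < ?",
            (lo.isoformat(), hi.isoformat())).fetchall())
    return rows

# Demo against an in-memory table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, created TEXT)")
start = date(2008, 8, 1)
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, (start + timedelta(days=i % 30)).isoformat())
                  for i in range(300)])
chunked = fetch_in_chunks(conn, start, date(2008, 9, 1), days=7)
single = conn.execute(
    "SELECT id, created FROM orders WHERE created >= ? AND created < ?",
    ("2008-08-01", "2008-09-01")).fetchall()
assert sorted(chunked) == sorted(single)
```

The chunked fetch returns the same rows as the single big query; the difference is that no individual result set is ever large, which keeps per-query memory and transfer time down.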
[JS] I'm considering changing one of my programs so that it leaves the
result set on the server and pulls one record at a time. Do you have any
sense of how much that might hurt me? We're talking about fewer than 100,000
records, but they are relatively chunky.
In this case, it's the memory usage for the result set that is a concern. I
have to keep increasing the amount of memory available for PHP.
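A rough illustration of the approach JS describes, pulling rows one at a time rather than materialising the whole result set. Python's sqlite3 stands in for MySQL here; in PHP the analogous call would be mysql_unbuffered_query(), which streams rows from the server instead of buffering them all client-side first.

```python
# Iterate the cursor instead of calling fetchall(), so only one row
# is held in client memory at a time. sqlite3 is a stand-in for MySQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO big VALUES (?, ?)",
                 [(i, "x" * 100) for i in range(100_000)])

# fetchall() would build a 100,000-element list up front; iterating
# the cursor pulls one row per loop iteration instead.
total = 0
cur = conn.execute("SELECT id, payload FROM big")
for row in cur:  # one row at a time
    total += 1
assert total == 100_000
```

The trade-off is the usual one: each row costs a round trip of fetch overhead, but peak client memory stays flat regardless of result-set size, which is exactly the PHP memory-limit problem described above.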
>On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt <jim@treefrog.ca> wrote:
>> Greetings List,
>> We have a medium-large database application which we are trying to
>> optimize, and I have a few questions.
>> Server Specs
>> 1 Dual Core 2.6 GHz
>> 2GB Ram
>> Database Specs
>> 51 Tables
>> Min 100,000 rows, Max 1,000,000 rows
>> Total size approx 2GB
>> key_buffer = 512M
>> sort_buffer_size = 512M
>> read_buffer_size = 512M
>> read_rnd_buffer_size = 256M
>> record_buffer = 256M
>> myisam_sort_buffer_size = 512M
>> thread_cache = 128
>> query_cache_limit = 1M
>> query_cache_type = 1
>> query_cache_size = 32M
>> join_buffer = 512M
>> table_cache = 512
>> We are having trouble with certain queries which are returning
>> 100,000 - 300,000 rows. Total query time is approx 1 - 2 minutes
>> depending on load. Is there anything in our conf file which could
>> improve our performance? Are there any hardware recommendations that
>> could improve the speed? Would more memory help us? Any comments or
>> recommendations are greatly appreciated.
>> Thanks much.
>> Jim Leavitt
>> Treefrog Interactive Inc. (www.treefrog.ca)
>> "Bringing the Internet to Life"
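One observation on the config quoted above, offered as a sketch rather than a tuned recommendation: in MySQL, sort_buffer_size, read_buffer_size, read_rnd_buffer_size, and join_buffer are allocated per connection (some per sort or join operation), not shared globally, so setting them to 512M each on a 2GB box can exhaust memory with only a handful of concurrent queries. Only key_buffer (the MyISAM index cache) is a shared global buffer. Per-session values in roughly this range are more typical:

```ini
# Hedged sketch only -- not a tuned config for this workload.
# key_buffer is global and shared; the per-session buffers below are
# allocated per connection, so they should stay small.
key_buffer              = 512M
sort_buffer_size        = 4M
read_buffer_size        = 2M
read_rnd_buffer_size    = 4M
join_buffer             = 2M
myisam_sort_buffer_size = 64M
query_cache_size        = 32M
table_cache             = 512
```

With 128 cached threads, the original 512M per-session buffers could in principle commit many times the machine's 2GB of RAM, which may matter more to these slow queries than raw hardware.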
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe: http://lists.mysql.com/mysql?unsub=jschwartzthe-