Need advice

Ruslan Zasukhin sunshine at public.kherson.ua
Tue Apr 18 23:16:36 CDT 2006


On 4/18/06 8:25 PM, "Brendan Murphy" <bmurf at comcast.net> wrote:

Hi Brendan,

>>> With version 2.3, the internals have changed, so I am assuming my
>>> previous measurements don't apply. Starting over...
>> 
>> In fact it should be the same as 1.x.
>> What exactly do you need to find?
> 
> With 1.1 file sizes exploded proportionally with segment size.

> With 2.3 I would characterize it as leveling off as the number
> of records increases.

> In other words, there is no significant
> difference in file sizes for 100,000 records using segments sizes
> of 4k, 8k, 16k, and 32K.

And this IS the correct and expected result.
Hmm, I believe 1.x worked in the same way, Brendan.


> There is a significant difference (percentage-wise) when there are only 10,
> 100, 1,000, and 10,000 records. So this is definitely different behavior
> from version 1.1.

This is still not entirely clear to me, but I think it is not very important.

Again, the initial size of the db can be even a few MB. ONLY WHEN you insert a
few MB of data will it start to grow.
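The leveling-off Brendan describes falls out of a simple model (a hypothetical sketch only, not Valentina's actual allocation logic; the constants below are invented for illustration): if each file carries a fixed overhead of pre-allocated segments, that overhead scales with the segment size, while the data itself does not. Percentage-wise, the overhead dominates for tiny record counts and becomes negligible for large ones.

```python
# Hypothetical model of file size vs. record count. NOT Valentina's real
# allocator -- BYTES_PER_RECORD and SEGMENTS_RESERVED are assumed values
# chosen purely to illustrate the fixed-overhead effect.

BYTES_PER_RECORD = 100    # assumed average record size
SEGMENTS_RESERVED = 64    # assumed number of segments pre-allocated per file

def file_size(records, segment_size):
    overhead = SEGMENTS_RESERVED * segment_size   # fixed cost, scales with segment size
    data = records * BYTES_PER_RECORD             # grows with record count only
    return overhead + data

for records in (10, 1_000, 100_000):
    sizes = [file_size(records, s) for s in (4096, 8192, 16384, 32768)]
    ratio = max(sizes) / min(sizes)               # 32K file vs. 4K file
    print(f"{records:>7} records: ratio 32K/4K = {ratio:.2f}")
```

With 10 records the 32K file is several times larger than the 4K file, but at 100,000 records the difference shrinks to a few percent, matching the "no significant difference at 100,000 records" observation.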
 
> I notice a slow down when using 4k segment sizes for very large
> (100,000 records) files, but no difference in speed for 8k, 16k,
> and 32k segment sizes. So it looks like 8k seems to be optimal.

I think you have not yet reached the point where 8K also becomes visibly
slower than 16K  :-)


-- 
Best regards,

Ruslan Zasukhin
VP Engineering and New Technology
Paradigma Software, Inc

Valentina - Joining Worlds of Information
http://www.paradigmasoft.com

[I feel the need: the need for speed]
