[Coco] Question for those familiar with rbf.mn
Gene Heskett
gheskett at wdtv.com
Sun Feb 1 03:21:44 EST 2015
On Sunday 01 February 2015 02:36:21 Allen Huffman did opine
And Gene did reply:
> > On Jan 31, 2015, at 1:06 PM, Gene Heskett <gheskett at wdtv.com> wrote:
> >
> > By then I had been campaigning to
> > get folks to raise the default value of it.sas in their rbf device
> > descriptors from $09 to at least $10, and I was using from $20 to $FF
> > in order to assure non-fragmented files AND, at the same time, to
> > prevent ever coming close to a segment list full error, because if
> > you do, NitrOS-9's recovery from it isn't pretty.
>
> Gene,
>
> What happens if a block of that size is not found? Does it just fail,
> or will it use the largest it can find?
Generally, that write will fail, but if it did, I would not kill the
process; I would hop over to another window and cut the sas in half with
dmode, effectively making the smaller-than-normal-sas-sized bits of the
drive usable.
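For the curious, here is the back-of-the-envelope arithmetic behind the
segment list full error, as I read the RBF docs: a file descriptor is a
single 256-byte sector, and after the 16-byte header there is room for 48
five-byte segment entries, so 48 extents is the hard ceiling. A
pessimistic sketch in plain Python (nothing OS-9 specific; the 48-entry
figure is my reading, so check your own docs):

  # Pessimistic model: every extent the system finds is exactly SAS
  # sectors long.  RBF merges adjacent extents, so real files usually
  # do better than this, but it shows which way the knob pushes.

  SECTOR = 256        # bytes per RBF sector
  FD_SEGMENTS = 48    # segment entries in one file descriptor

  def worst_case_segments(file_bytes, sas):
      sectors = -(-file_bytes // SECTOR)   # ceiling divide
      return -(-sectors // sas)            # extents of sas sectors each

  for sas in (0x09, 0x10, 0x20, 0xFF):
      segs = worst_case_segments(720 * 1024, sas)  # a 720 KB file
      verdict = "ok" if segs <= FD_SEGMENTS else "segment list full"
      print(f"sas=${sas:02X}: {segs:4d} extents -> {verdict}")

And dmode is how you poke a new sas into a live descriptor, something
like "dmode /dd sas=20", if memory serves on the syntax.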
> I ran into an issue tonight that I believe is caused by having a
> nearly full hard drive. Somehow things went from "zippy" to
> "molasses" real quick, so I'm still not quite sure if that is
> the root cause. I am running the JWT Enterprises "Optimize"
> program on my drive right now. It's a defragmenter, not an
> optimizer, so it just ensures files have no more than X segments
> (whatever you specify, default 1). As a bonus, it will pack
> directories, cleaning up deleted entries, and can optionally float
> directories to the top.
>
> You can even run it on files in just one directory. It's a great
> thing, and was lightspeed compared to a true defragmenter program like
> Burke & Burke's (though that did a better job overall once you let it
> run for 20 hours).
>
The last time I ran it, it was way more than 20 hours. But I haven't run
it in probably 10 years now.
> I made a .DSK image of a floppy drive last night using a B09 program I
> wrote. I see that image is in 19 segments (though so far it is the
> only file found that has any fragments).
>
I suspect the majority of the time is being spent looking for a sas-sized
place to put it. 128 megs (134-odd million bytes in decimal) is the
largest that can be represented in the FAT when the cluster size is 1;
the sketch below runs the numbers. The system does, or did, keep track of
where it left off so it did not have to search the whole FAT, except for
the first write after a reboot. After that first write it got noticeably
faster, back when I was still using a Maxtor 7120S drive on a 4n1
controller.
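For anybody who wants the arithmetic behind that ceiling, here is my
understanding: the allocation-map size field in the ID sector is two
bytes, each bit in the map marks one cluster, and a sector is 256 bytes.
A quick sketch (plain Python; the two-byte map-size field is my reading
of the RBF spec, so check me):

  # Largest RBF volume the allocation bitmap can describe, assuming a
  # two-byte map-size field (max 65,535 bytes of bitmap), one bit per
  # cluster, and 256-byte sectors.

  MAP_BYTES = 0xFFFF   # biggest bitmap a 16-bit size field can name
  SECTOR = 256         # bytes per sector

  for cluster in (1, 4, 16):
      capacity = MAP_BYTES * 8 * cluster * SECTOR
      print(f"cluster={cluster:2d}: {capacity:13,d} bytes "
            f"(~{capacity / 2**20:,.0f} MiB)")

Which is also why the two Hawks below behave: a 4-sector cluster covers
about half a gig, and a 16-sector cluster covers a full gigabyte with
headroom.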
I have two identical Seagate Hawk 1 GB drives on my system, one of which
is only formatted for about 490 megs for OS-9, the remainder available
for HDBDOS and vdisks, so if I ever wanted to, I could set up several
more HDBDOS "partitions" of 256 disks each, since each such allocation is
just over 80 megs. The other I formatted entirely for OS-9. Drive 0 uses
a cluster size of 4 sectors, while drive 1 (/s1 in my lashup) uses a
16-sector cluster. Neither seems to be suffering from the nearly-full FAT
so far.
> I am going to adjust my min allocation. Thank you for the repeated
> encouragement. I did not realize I would fill 128MB as quickly as I
> did :)
>
> --
> Allen Huffman - PO Box 22031 - Clive IA 50325 - 515-999-0227 (vmail/TXT
> only) Sub-Etha Software - http://www.subethasoftware.com - Established
> 1990! Sent from my MacBook.
>
> P.S. Since 4/15/14, I have earned OVER $600 in Amazon gift cards via
> Swagbucks! Use my link and I get credit:
> http://swagbucks.com/refer/allenhuffman
Cheers, Gene Heskett
--
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
Genes Web page <http://geneslinuxbox.net:6309/gene>
US V Castleman, SCOTUS, Mar 2014 is grounds for Impeaching SCOTUS