[Coco] quick basic09 copy program
L. Curtis Boyle
curtisboyle at sasktel.net
Fri Feb 1 09:09:30 EST 2008
On Fri, 01 Feb 2008 03:47:21 -0600, Willard Goosey <goosey at virgo.sdc.org>
wrote:
> On Thu, Jan 31, 2008 at 12:23:46PM -0600, L. Curtis Boyle wrote:
>> Using the I$ calls also means you can DIM a byte array for the
>> largest size you need,
>
> Humm, buffer size could become a delicate choice as the program grew
> and used more code & data.
Not really. Since you can use a GetStat call to get the file size, you
just divide the file size by the byte array size and load/save the file in
chunks (the last chunk usually being smaller). This way you could also
reduce the chunk size if you were running on a 128K machine. For example,
if you were copying a 62,000 byte file and your byte array was 20,000
bytes, you would do 3 reads/writes of 20,000 and 1 read/write of 2,000
(all controlled by the value you pass in register Y to the I$Read and
I$Write syscalls).
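To make that concrete, here is roughly what the loop looks like in BASIC09.
This is a sketch only: it assumes the SysCall module supplied with BASIC09,
the file names and the 20,000 byte buffer are made up, and error checking
is left out (normally you would check the carry bit in regs.cc after each
call).

PROCEDURE chunkcopy
(* Sketch only: file names and buffer size are made-up examples *)
(* Needs a data area bigger than the default, e.g. run it with "#32k" *)
TYPE registers=cc,a,b,dp:BYTE; x,y,u:INTEGER
DIM regs:registers
DIM callcode:BYTE
DIM buffer(20000):BYTE
DIM inpath,outpath:BYTE
DIM bufsize,chunk:INTEGER
DIM size,left:REAL
bufsize=20000
OPEN #inpath,"sourcefile":READ
CREATE #outpath,"destfile":WRITE
(* I$GetStt ($8D) with SS.SIZ ($02): A=path, B=status code *)
(* the file size comes back in X (msw) and U (lsw) *)
regs.a=inpath
regs.b=$02
callcode=$8D
RUN SysCall(callcode,regs)
(* X and U are signed 16 bit integers in BASIC09, so fix up U *)
size=regs.x*65536.0
IF regs.u<0 THEN
  size=size+65536.0+regs.u
ELSE
  size=size+regs.u
ENDIF
(* copy in bufsize chunks; the last chunk is whatever is left over *)
left=size
WHILE left>0 DO
  IF left<bufsize THEN
    chunk=left
  ELSE
    chunk=bufsize
  ENDIF
  (* I$Read ($89): A=path, X=buffer address, Y=byte count *)
  regs.a=inpath
  regs.x=ADDR(buffer)
  regs.y=chunk
  callcode=$89
  RUN SysCall(callcode,regs)
  (* I$Write ($8A): same registers, same byte count *)
  regs.a=outpath
  regs.x=ADDR(buffer)
  regs.y=chunk
  callcode=$8A
  RUN SysCall(callcode,regs)
  left=left-chunk
ENDWHILE
CLOSE #inpath
CLOSE #outpath
END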
>
>> but you can control the # of bytes read/written every time
>> you call it (unlike GET/PUT, which simply uses the variable's
>> size).
>
> Yeah, BASIC-09's file I/O is a bit... strange. It's the BASIC
> influence. I never really grokked BASIC file I/O.
I wish they had built variable-sized GET/PUT into BASIC09, since the OS
itself works that way. That's one of the things planned for Nitros9 that
never got implemented.
>
> I had another question, but then I realized I'd have to read up on
> BASIC-09 to know if the question even makes sense under the language.
>
Fire away anyway... I'm curious to see what it is.
>> I used this a lot when we still ran Coco's here at work for
>> processing data...
>
> CoCos doing real work? Shocking! :-)
We used to run 8 terminals at 4800 baud, three 600 line per minute line
printers, and one 20 page per minute laser printer off of 1 Coco 3 with a
multipak (and some custom boards), 1 MB of RAM, Nitros9 (from version 1 to
version 2.01 before it retired), and an Eliminator system with 2 hard
drives totalling 120MB, plus a 720K floppy. The terminals weren't in
constant use, so they didn't slow down the system too much. Ran great...
I still miss it.
I remember one time a client sent us a file to process that was almost
20 MB zipped and would expand to larger than either of our hard drives (an
80MB and a 40MB), but we only needed a small subset of the data. I ended
up writing a BASIC09 program that used pipes: we had UNZIP extract to the
pipe, and the BASIC09 program would read a line at a time from the pipe,
parse out the data we really needed, and save it line by line to the 2nd
hard drive so we could process it. Slower than normal, but it worked!
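The filter end of it was simple: something along these lines (a sketch,
not the real program; the /h1 path, the field positions, and the unzip
command line are made-up examples):

PROCEDURE pipefilter
(* Sketch only: run on the end of a pipeline, something like *)
(*   unzip <options> bigfile ! pipefilter                    *)
(* (the exact unzip options depend on which port you have)   *)
DIM rec,keep:STRING[255]
DIM outpath:BYTE
(* output goes to a file on the second hard drive *)
CREATE #outpath,"/h1/extract.dat":WRITE
(* any error (normally end-of-file on the pipe) ends the loop *)
ON ERROR GOTO 100
LOOP
  (* path #0 is standard input, which the shell connects to the pipe *)
  READ #0,rec
  (* keep only the piece of each record we actually need *)
  keep=MID$(rec,1,20)
  WRITE #outpath,keep
ENDLOOP
100 CLOSE #outpath
END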
> Willard
--
L. Curtis Boyle