[Coco] Linville's ramblings on assembly vs machine code
L. Curtis Boyle
curtisboyle at sasktel.net
Tue Jul 11 13:35:48 EDT 2017
You can do self-modifying code “legally” in OS-9 - but you have to copy the routine(s) into your data area and execute them from there. The reason it doesn’t work “normally” is that a program module can be shared by multiple processes (they all run the exact same code, just with different data areas and stacks) - and it wouldn’t work well if a second instance of a program thinks it has one version of the self-modifying code when the first instance has already changed it to something else. If you went with the more memory-bloated Unix/Linux model (a full physical copy in RAM for each running copy of the same program), you might be able to get away with it (although things like program CRC checks would fail unless you updated the CRC each time the code self-modified), but for an 8-bit machine like the Coco, that would suck way too much RAM.
Since the data area is unique to each process, you can do self-modifying code in that area, and it won’t corrupt any other running copies of that program.
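A rough sketch of the idea in 6809 assembly (the names RTNBUF and Patch, the buffer size, and the patched value are only illustrative; it assumes the usual OS-9 entry conditions, with U pointing at the start of your data area and position-independent code):

* Private buffer in the per-process data area (part of a normal OS-9 vsect)
        vsect
RTNBUF  rmb     16              big enough to hold the routine below
        endsect

* Copy the template routine out of the shared module into our data area
        leax    Patch,pcr       source: template inside the code segment
        leay    RTNBUF,u        destination: buffer in our data area
        ldb     #PATLEN         length of the template
CopyLp  lda     ,x+
        sta     ,y+
        decb
        bne     CopyLp

* Patch only the private copy - the shared module (and its CRC) stays
* untouched, so other processes running the same program never see the change
        lda     #$20            illustrative new operand value
        sta     RTNBUF+1,u      overwrite the immediate operand of the LDA below
        jsr     RTNBUF,u        execute the modified copy from the data area

* Template routine; the byte at offset 1 is what gets patched above
Patch   lda     #$00
        rts
PATLEN  equ     *-Patch

The same pattern works for bigger routines - the only rule is that the bytes you change live in your data area, never in the module itself.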
L. Curtis Boyle
curtisboyle at sasktel.net
> On Jul 11, 2017, at 11:30 AM, RETRO Innovations <go4retro at go4retro.com> wrote:
>
> To clarify, I am not suggesting that self-modifying code is relevant or should be condoned on today's pipelined and multi-threaded CPUs. It also may not have relevance in older systems like OS-9. But for maximum performance on smaller CPUs, it is a useful tool that should not be dismissed out of hand.
>
> Predictably, people tend to take today's best practices and automatically apply them retroactively:
>
> "Jim, do you recommend self modifying code in this Linux driver?"
> "Absolutely not. Self Modifying Code is fraught with support concerns, won't work in the Linux driver model anyway, and is considered bad programming form"
> "Aha! How do you then explain using self modifying code in your latest Coco application? Two faced liar!"
> "Uh, can I smack you now?"
>
> For a given CPU architecture, there is a set of tricks and tips useful for wringing the best performance out of the CPU. Competent developers know and consider all of them when developing for that platform.
>
> That said, today's concerns around the security of IoT devices may force vendors to eliminate the Von Neumann architecture from the embedded CPUs used in those devices, and with it, any self-modifying code capability.
>
> --------------------------------------------------------------------------------
>
> On the main topic:
>
> Dave et al, it looks like CocoNut John Linville has a distinct (and perhaps dismissive) perspective on our ramblings on the list:
>
> For those not subscribed to the CoCo mailing list, there is a raging
> discussion there that was inspired by my tech segment in Episode 25.
> It seems that not everyone accepts my premise that "machine
> language" and "assembly language" are really just two forms of the
> same base language!
>
> I'll probably wait until Episode 26 to respond, but in the meantime
> it is kinda fun watching people contort themselves to 'prove' that
> knowing that there is more than one form of a given instruction, or
> being able to distinguish code sections from data sections, is
> simultaneously vital to "machine language" and somehow
> completely foreign to "assembly language"...
>
>
> https://www.facebook.com/groups/1606095809633762/?hc_ref=NEWSFEED
>
> We are evidently the Coco Contortionists!
>
> Be careful, there is some trolling afoot, methinks...
>
> Jim
>
> --
> Coco mailing list
> Coco at maltedmedia.com
> https://pairlist5.pair.net/mailman/listinfo/coco
>
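As a concrete illustration of the "more than one form of a given instruction" point quoted above: on the 6809, the same LDA mnemonic assembles to a different opcode and length for each addressing mode, so a single assembly statement can stand for several distinct machine-code encodings.

* Same mnemonic, four different machine-code forms on the 6809:
        lda     #$10            assembles to 86 10      (immediate, 2 bytes)
        lda     <$10            assembles to 96 10      (direct page, 2 bytes)
        lda     $1000           assembles to B6 10 00   (extended, 3 bytes)
        lda     ,x              assembles to A6 84      (indexed, 2 bytes)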