Red Hat Diaries/0038
X is not Unix
Subject: Re: X is not Unix
Date: Tue, 12 Feb 2002 15:14:45 +0000 (GMT Standard Time)
From: Rickster
To: Sargon
>Maybe someone at Apple needs a little history lesson.
I don't know what they're up to. But there are countless examples of
something going wrong.
Consider the following, gleaned from my studies of Carbon and Cocoa.
All interface controls - menu items, dialog push buttons etc. - need
both a string application signature and a control ID. The control ID
is a 32-bit number - so far so good - but the application signature,
also called a creator code, is not. It is a four-character text
string.
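(To make that concrete, here's a rough C sketch - mine, not Apple's
headers - of how four ASCII characters end up as one 32-bit value.
The 'RICK' code and the function name are invented for illustration.)

    /* Rough sketch, not Apple's headers: a creator code is just four
       ASCII bytes packed big-endian into a single 32-bit value. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t pack_creator(const char c[4])
    {
        return ((uint32_t)(unsigned char)c[0] << 24) |
               ((uint32_t)(unsigned char)c[1] << 16) |
               ((uint32_t)(unsigned char)c[2] <<  8) |
                (uint32_t)(unsigned char)c[3];
    }

    int main(void)
    {
        /* 'RICK' is an invented creator code, for illustration only */
        printf("0x%08X\n", (unsigned)pack_creator("RICK"));  /* 0x5249434B */
        return 0;
    }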
Now here comes the punch: These creator codes have to be _unique
across the board_. You may not have a creator code that is already
in use somewhere else in the system.
How do you ensure that your creator code is unique? Why, by visiting
the Apple site and applying for one! Also at the Apple site you can
see the list of currently registered creator codes.
The character space starts at 20h. The range of visible, easily
typed and standardised characters normally ends at 7Eh. That gives
you 5Eh, or 94, possible characters - including punctuation marks,
brackets, braces and parentheses, all of which are most likely
disallowed.
It gets worse. All _lower case creator codes_ are co-opted by Apple
themselves. That cuts a big chunk out of the possible creator codes
left to the public. In the nightmare scenario, therefore, you have
basically the American English alphabet - 26 characters, times
almost two for the two cases (all lower case not being allowed) -
raised to the power of four possible creator codes, right?
To make this easier to compute, let's say we have 50 possibilities
for each of the four characters. That gives us 6,250,000 possible
creator codes - and then that's it, that's the upper boundary.
_There may not be more than 6,250,000 applications ever written for
the Mac._
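(The arithmetic, if you want to check it yourself - fifty to the
fourth power, nothing fancier:)

    /* Upper bound on creator codes under the assumptions above:
       50 usable characters in each of the four positions. */
    #include <stdio.h>

    int main(void)
    {
        long total = 1;
        for (int i = 0; i < 4; i++)
            total *= 50;
        printf("%ld possible creator codes\n", total);  /* 6250000 */
        return 0;
    }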
Sounds wimpy, doesn't it? You bet. But the knock-out punch is right
under your nose, and you probably don't see it yet. For why in the
name of all that is sacred would you need this junk anyway? The only
possible explanation must be that the I/O on X is very confused. Let
me explain.
Before the days of NT, back when 'cooperative multitasking' was the
only way one could go, everything was basically one big glob in
memory. The 'operating system' per se, Windows 3 or Mac OS 9.2,
was/is basically just another application trying to get into the
CPU. There were no threads, there was no true multitasking, there
was no 'kernel mode' to speak of. If your application did not call
GetMessage in Windoze, you could not relinquish control back to the
operating system. The GetMessage call was taken care of _'at the
operating system's leisure'_, i.e. first the OS would clean up, make
sure all visible windows were updated, do other housekeeping chores
etc. before giving you back a message. Because of this, and only
because of this, systems like Windows 3 and Mac OS 9.2 could work.
And because it was one big glob of memory, everything was accessible
to anyone. In fact, MS began warning vendors such as Adobe way back
then to stop their dorky programming practices, as with the advent
of NT and 9x these questionable techniques would soon no longer
work. (For the record, Adobe did not listen. That's another story.)
And back then, a message was a message was a message. And there was
only ONE message queue for the entire system. Which is why these
16-bit systems can bring the house down.
The system changed radically with 32-bit systems, however. Not only
did Cutler see to it that C2 was operative, at least in the sense
that each and every process experienced itself as being alone in the
computer - using very sophisticated virtual memory, page tables, the
like - he also saw to it that the input queues were quickly expedited
and messages transferred at lightning speed to _thread-specific
message queues_.
Which is how NT freaks in the beginning could deliberately hang a
botched app, then move the cursor out of the bad app's window, and
the hour glass would disappear and the NW arrow would return. For
already at the kernel level the messages had been dispensed to their
destinations, and after that it was up to each and every application
to deal with its queue on its own.
Queues are funny things. What basically happens is that an app -
explicitly on Windoze, transparently ('done for you') on Mac OS as on
Unix - plucks a message from the queue and _dispatches_ it. There
might be some preliminary processing first, which is where Windoze's
explicitness pays off. For example, you might want to convert key
down and key up messages into character messages. (Then again you
might not.) You might also want to run the message through an
accelerator table (for keyboard shortcuts). Etc.
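(For the curious, this is roughly what that explicit Windoze pump
looks like - a minimal sketch, with the function name and parameters
my own invention:)

    #include <windows.h>

    /* Minimal message pump: pull messages off this thread's queue,
       run them through the accelerator table (keyboard shortcuts),
       turn key messages into character messages, then dispatch. */
    int run_message_pump(HWND main_wnd, HACCEL accel)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)
        {
            if (accel && TranslateAccelerator(main_wnd, accel, &msg))
                continue;                /* shortcut handled, next message */
            TranslateMessage(&msg);      /* WM_KEYDOWN/WM_KEYUP -> WM_CHAR */
            DispatchMessage(&msg);       /* hands it to the window procedure */
        }
        return (int)msg.wParam;          /* exit code from PostQuitMessage */
    }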
When the message pump, as it is called, dispatches your message, the
message is sent through a dedicated API to the 'window procedure'
which handles windows of the class to which the window named by the
handle in the message belongs. The message is _sent_ - meaning your
call to the dispatching API will not return until the window
procedure has taken care of it. And for what it's worth (normally
not a thing), the return value from your window procedure becomes
the return value from your call to the dispatching API - not that
this is ever used of course: there is nothing you could conceivably
use it for, but nonetheless: it is there.
What all this means is that your specific application can in fact
lock itself up by never returning from your dispatch call, and as
more and more messages enter your queue through the graces of the
kernel-level I/O routines, your queue can become overloaded. But
what is important to understand here is that only your application
will be affected - all the other windows on your desktop will
continue to perform admirably, as all these buggers are being
effectively rationed CPU time slices by your kernel schedulers (NT
has two).
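(Here's the other half of the sketch - a window procedure. The
deliberate Sleep stands in for code that never comes back; the
procedure name is again made up.)

    #include <windows.h>

    /* Sketch of a window procedure. If a handler blocks - the Sleep
       below stands in for hung code - DispatchMessage does not return,
       this thread's queue backs up, and only this app's windows freeze.
       The rest of the desktop keeps its own queues and time slices. */
    LRESULT CALLBACK MyWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg)
        {
        case WM_COMMAND:
            Sleep(30000);        /* hang *this* application for 30 seconds */
            return 0;            /* becomes DispatchMessage's return value */
        case WM_DESTROY:
            PostQuitMessage(0);  /* makes GetMessage return 0, ends the pump */
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);
    }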
NT crashes today, but when it does, it is most often due to screwy
code running in the graphics drivers. Cutler had to move the
graphics back into the kernel, against his own explicit design
principles, because the assholes who dreamed up the shell namespace
and Explorer had screwed him bad: Their asshole idea was so dumb
that it threatened to bring NT to a grinding halt, and Cutler knew
he'd always had a speed advantage over those assholes and he wanted
to keep it. Preliminary tests of NT4 showed, for example, that its
graphics were about ten times as fast as 95, and Cutler liked it
like that.
Here's the other aspect of the same thing: when registering window
classes to run an application, Microsoft insists you choose names
that are unique system-wide - but they are not reading their own
documentation, for these names need only be unique on a per-process
basis.
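(Again as a sketch - the class name here, like everything else in
it, is invented; the point is that it only has to be unique inside
this one process:)

    #include <windows.h>

    /* Registering a window class. The class name need only be unique
       within this process - no other process can see this memory,
       let alone this string. */
    ATOM register_main_class(HINSTANCE inst)
    {
        WNDCLASSEX wc = {0};
        wc.cbSize        = sizeof(wc);
        wc.lpfnWndProc   = DefWindowProc;            /* or your own WndProc */
        wc.hInstance     = inst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("MyAppMainWindow");  /* per-process name */
        return RegisterClassEx(&wc);
    }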
Why? Because with true 32-bit multitasking, _other processes cannot
see your memory_. And this is a _sine qua non_ of any true OS. Now X
has not been certified (although Jobs might be if he keeps this up)
but the assumption has to be that X is at least as secure, at least
on paper, as NT. (It's Unix - FreeBSD - or at least it's supposed to
be. Any other hypothesis would be ridiculous. And there is no way
you can construct a true multitasking OS without these guarantees.
Supposedly...)
But this is where the cruncher comes in. This is where the referee
waves his hand and shouts 'NINE! TEN! YOU'RE OUT!' For if X is truly
multitasking, then the names of all these dorky controls - their
creator codes and control IDs - are totally immaterial. For in such
case, the system is performing proper channeling of incoming
messages and dispersing them summarily to thread-specific message
queues and there is no conflict possible.
If the system is performing true multitasking, and if the system has
at least C2 stability and security (virtual memory decently
implemented, each process thinks it's alone in the system), _then
none of the above could happen_.
Yet time and again we see that spinning beach ball - not over just
one window either. OVER THE ENTIRE SCREEN.
QED.