DTACK GROUNDED, The Junk Mail Flyer
16 July 1986 Not Copyrighted by Digital Acoustics Inc

TEMPUS DOTH FUGIT:

Five years have passed since I started a little rag called DTACK GROUNDED. At that time, the best microprocessor Motorola had in production was over twice as fast as the best microprocessor Intel had in production. That's still true today, but the comparison is no longer 68000-8086 but 68020-80286. (The 80386 is not in production.) Yep, that's right, folks: the 68000 is no longer the hottest production micro. I guess I'll have to change the logo, where it says "The Journal of Simple 68000 Systems." Oops! I did that already about 9 months ago! Well, the 68000 was top dog for a goodly while, and the 68020 is just a faster 68000 with lots more pins.

But to show you just how much has changed in those five years, let me quote from page 4 of issue #1: "As I write this, there are about 300,000 systems out there using the 6502 and (Bellevue) BASIC. Nearly all are Apples or PETS." WHAT? ONLY 300,000? Why, there are now over 5 million C-64s and over 5 million Apple IIs out there alone! Obviously, the personal computer explosion took place while I was busy writing that rag.

I wonder what the installed base of 680X0 machines will be five years hence, and how many of them will be running HBASIC? In the personal computer industry, five years is a LONG LONG time!

MODESTY DEPT:

Some of you may have noticed page 5 of the last junk mailer. I just had to prove that Digital Acoustics really does own a LaserJet+ with an F font cartridge! The subject matter was chosen at random, of course.

AN EXERCISE:

There are more than 20 million personal computers out there, not counting electronic doorstops like the TIMEX 1000. About 10 million of them are used in time-is-money environments. Most of those 10 million are either IBM PC/XT/ATs or their clones or Apple IIs. All of those IBMs/clones/Apple IIs have OSs (operating systems) written in assembly, which is why they are being used in time-is-money environments.

Purely as an exercise, let us figure what the life-cycle cost would be if an infestation of Computer Science sacerdotes swarmed on the scene and smoted all of those 10 million time-is-money machines with OSs busted in an HLL. The average useful life of a personal micro is about 4 years. The average personal micro is used 5 hours a week. So the average micro gets used about 1000 hours during its life. Times 10 million machines, that's 10 billion hours. Assuming that more Indians use those machines than Chiefs, we estimate the average hourly rate in this time-is-money environment is $16/hour. That's $160 billion.

If the only result of being smoted with an HLL OS is a 3% decrease in average performance (and we would bet it would be more than that), then the cost to the time-is-money folks would be $4.8 billion, or $480 per machine. The boys from Bellevue charge a measly $60 per copy for their assembly-based MS-DOS!

In other words, MS-DOS, DOS 3.3 and ProDOS have saved existing consumers $4.8 billion by having been written in assembly. It did not cost that much to write (program?) those operating systems! (Leading candidate for understatement of the year.)
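
For the arithmetic-averse, here is that whole calculation in a few lines of garden-variety BASIC. The machine count, hours, rate and 3% penalty are the assumptions above, nothing more:

 10 REM life-cycle cost of an HLL operating system, per the assumptions above
 20 M = 10E6        : REM time-is-money machines
 30 H = 1000        : REM hours per machine (4 years at 5 hours/week, rounded)
 40 R = 16          : REM dollars per hour
 50 P = .03         : REM assumed 3% performance penalty
 60 C = M * H * R * P
 70 PRINT "Total:"; C          : REM $4.8 billion
 80 PRINT "Per machine:"; C / M : REM $480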

Now, as an exercise for the student, you work out what the cost would be if that infestation of Computer Scientists had instead smoted the interactive BASICs (AppleSoft and BASICA) with HLL overhead. I'll wait.

Dum dum de dum... yawn... You have that worked out? O.K., now you tell me how much it is worth to a Mack, Amiga or ST owner to get a real, assembly-based BASIC! (This flyer is real heavy on the commercials, folks. Well, some flyers haven't been.)

STRUCTURED PROGRAMMING:

Some of you folk may have gotten the idea that I hate structured programming. I don't. In fact, I use that technique occasionally, when it seems appropriate. And I can't figure out how a really big programming project, one involving dozens of programmers, can be managed at all unless some form of structured programming is used.

What I do hate is the assertion of certain demented computer scientists and their acolytes that structured programming is the only way programming should be done! The magazine Computer Language has just initiated a column by P. J. Plauger entitled Programming on Purpose. Plauger is the president of Whitesmiths Ltd, a sometimes science fiction author, and has written at least three C compilers. He also has 25 years' experience at programming, much of it in the 'real' computer world. If his first 'column' (actually a damn long article) is typical, anyone with any interest in language theory is going to have to subscribe to that magazine.

Plauger concludes that structure and top-down is indeed a very good way to program, but that sometimes it is not the best way and sometimes it cannot initially be used at all. However, Plauger notes that top-down and structure can always be used to program something one already knows how to program! In that same, very interesting first column he tackles yet another shibboleth of the Computer Science religion. Subscribe and read.

The ACM? In DTACK?

Terry P. has called my attention to an article in the June Communications of the ACM on a very different aspect of structured programming. "The ACM?", you ask? "Hey, I know what the ACM is gonna say about structured programming!", you say? Well, you're wrong this time! The title is "Impact of the Technological Environment on Programmer/Analyst Job Outcomes".

Job Outcomes? A peculiar title, that. This is an absolutely serious article which focuses on the impact of the use of structured programming on the retention of programmer/analysts in their present job assignment/company! It seems that someone has noticed the substantial job turnover of programmer/analysts. Them guys/gals is hard to find and harder to keep, it seems. Existing studies are cited, studies which favor the use of structure. Yet two of those studies which are nominally favorable to the use of structure also note that: "...the use of structured design methods will reduce skill variety, task identity and autonomy." Translation: the programmer/analysts will soon be headed for the door.

And anyone - thee and me, even - can see that programmer/analysts head for the door early and often. Hmmm. (This article even questions whether the use of structured programming really does increase productivity, and it notes that "...we know of no tests of (the) conjecture" that maintenance of structured code would be easier than of unstructured code.)

It worries me that my long-held opinions of structured programming are suddenly supported in part by P.J. Plauger and even by an article in the Communications of the ACM. In the ACM? Where have I gone wrong???

QUICKIES:

A survey of Japanese 32-bit micro system-makers reveals that 42.5% have selected the 68020, 21.2% the 80386, 2.4% the 32032, and 1.6% the NEC V70 (EDN May 29). Well, nobody ever said the Japanese were stupid. One assumes that the missing 30%+ are planning to go with bit-slice technology.

The May IEEE SPECTRUM contains a series of articles on the personal computer industry titled "Lessons Learned." There is a really cute full-page artistic rendition of a computer graveyard, complete with LISA's tombstone and the "PC2"'s tombstone bearing the epitaph "Never born - Died of Excess Speculation." (Yes, the word born was not capitalized but Died was - shame on the artist!)

But the tombstone I got the biggest kick out of was: "Commodore PET b. Sep 77 d. May 85 With Love from Kindly Uncle Jack." KINDLY UNCLE JACK??? My, that nickname does get around, doesn't it? Take a bow, Nils. (I am not going to follow Nils' suggestion as to what I can do with his 700-page HALGOL manual.)

Last Sunday night I was sipping some Mumm's Extra Dry while reading the final pages of the paperback edition of Niven/Pournelle's FOOTFALL. I almost choked when, on page 507, I saw Mumm's misspelled as Mum's. Tsk. I think I'll blame that one on Niven, for two reasons. One is that Niven does not publish a couple of regular columns in publications which buy ink by the 55 gallon drum. I won't tell you what the other reason is.

Another observer of the personal computer marketplace asserted to me that the Amiga, Mack, and ST should be presenting a united front against the IBM/Intel world. I agree. So why does Commodore persist in trying to market the Amiga as an overpriced, vaguely-compatible MS-DOS machine?

Now that Sculley has firmly grasped the reins at Apple, how long will it be before we see a PC clone with an Apple logo? Remember, Pepsi is nothing but a Coca-Cola clone. And that would leave Kindly Uncle Jack as the Lone Ranger - the only major personal computer maker not flogging a PC clone.

Three years ago, in DTACK #21, I stuck my neck out and predicted a massive blood-bath in the personal computer industry based on 80186 machines. The blood-bath is now fully under way, but based on 8088 and 80286 machines, not the 80186! I'm going to have to recalibrate the Intel sector of my crystal ball. The few survivors of this blood-bath will be the low-cost vendors. You will remember that IBM has repeatedly asserted in recent years that it wants to be the low-cost PC vendor. Guess what?

Chips and Technologies is a company which produces custom VLSI chip sets which can be used to produce simpler, cheaper PCs and ATs. Because a few custom gate arrays replace about 60 TTL chips in the IBM designs, that's why. The vanguard of super-cheap PC clones using these chip sets has already arrived. None of them bear the IBM logo. IBM is not, now, the low-cost vendor!

So why should folks pay 2 or 3 times as much for that IBM logo? For reliability and service? Ha ha ha. You obviously have just returned from a 2-year stay on the planet Zorn, and so have missed the hilarious PC/AT hard disk fiasco which was and is. IBM itself makes just about the lousiest quality AT available. And they are not cleaning up the mess. They won't even acknowledge it. Ed McMahon: "Just how unreliable is your AT hard disk, Johnny?" Johnny Carson: "Why, my AT hard disk is soooo unreliable..."

My my. IBM is not the low-cost vendor and they do not make a reliable product and they do not even stand behind their product when it is obviously defective! I wonder if that has anything to do with the fact that folks are now buying more clones than the real fazoola? It will be interesting to see how IBM handles this somewhat undesirable (from IBM's point of view) state of affairs. Remember, the mainframe-brains are back in charge at ESD...

A rare and unaccountable outbreak of intelligence and rationality has suddenly struck InfoWorld. In two consecutive issues there were three articles, including a Kee opinion column, pointing out that 95% of the PC owner/users needed multitasking like they needed AIDS. And that there was fierce resistance to MS-DOS versions 4.0 and 5.0, which are now in beta-site evaluation. MS-DOS 4.0 and 5.0 require that all those programs written for MS-DOS 2.X and 3.X be re-written. For some reason, a lot of software companies don't want to do that for a lousy 5% of the market. And when the absence of that software is noted, that market is gonna be a lot smaller than 5%.

This rare and unaccountable outbreak of intelligence and rationality has not yet infected the staff of PC magazine. For instance, Jim Seymour has just informed PC's readers that the real importance of the Intel 386 is that the mass personal computer marketplace can now have a virtual-memory computer! That makes Jim the leading candidate for the 1986 Jeanyates Award.

In the latest issue of InfoWorld, there is a front-page story wherein one of the principal authors of the Mackintosh operating system admits that Mack's OS is 'way too complicated and needs to be fixed. NOOOOOOOOO! REALLY?

IBM has this FORTRAN compiler which it licenses to its mainframe sites for $12,500 per month. That's $450,000 over a three-year period. Rumor has it that there is a heated internal dispute over whether to adapt this same compiler for the PC and 'license' it for a one-time fee of about $200. I don't make these things up, folks!

1 to 2 MILLION 386s ???

Stan Baker has a regular column in EET in which he covers doings in the chip world. His June 30 column on 32-bit micros was intentionally hilarious. National claims it shipped 30,000 in '85 and Motorola claims 51,000 in '85. Both Baker and Dataquest endorse those figures. Let's see now: Motorola began shipments in Aug, which means it shipped for just 5 months, or 10,000 per month. National shipped all 12 months - it was first out the door, remember - so it averaged 2,500 per month. Yet National's advertising "continues to crown itself as the 32-bit leader, which gets some Motorolans foaming at the mouth (and seriously thinking of) legal action..."

Baker continues, "National says it will triple shipments to 90,000 this year and everybody, including Dataquest, chooses to disagree." Motorola is projecting shipments of 250,000 68020s in '86, which Baker reports without comment.

The really funny part of that column is what Baker has to say about Intel's David House projecting shipments of 1 to 2 million - yes, million - 80386's in '87. But heck, I can't steal Stan's whole column.

THE LAWYERS GET RICHER:

Hitachi, having failed to get a second-source license for the 68020, is going to introduce its own 32-bit version of the 68000. Please don't laugh; the recently introduced CMOS 68000 was developed entirely by Hitachi, a fact which Motorola is somehow not emphasizing. It was originally to be called the 63000, which is why Hitachi's CMOS graphics chip bears the part number 63484.

But this is certain to cause another Intel-NEC type lawsuit over whether portions of the microcode were stolen from the 68000. I can see Nick Tredennick, who's with IBM these days, giving depositions now. Nick designed the 68000's microcode.

HARD DECISIONS:

The U.S. and Japanese are negotiating hard over DRAM pricing. One day an agreement is announced, the next day its opposite, the day after that etc. etc. There are a couple of fundamental problems which those negotiators continually bump up against.

One is that there aren't any national economies any more, just one world economy. Imposing duties on DRAMs which are shipped only from Japan won't work, because Japan has plants all over the world, including a place called Dallas, Texas. That's right, Japanese DRAMs, a lot of them, are made in Dallas. T.I. DRAMs, 80% of them, are made in Japan and packaged in Malaysia. That's why the present legal proceeding excludes Japanese DRAMs which are packaged in Malaysia. Hitachi, for example, used to bake its DRAM dies in JAPAN, like T.I. Then it shipped the dies to Malaysia to be packaged, like T.I. Then it did functional testing in Malaysia, like T.I. Then, unlike T.I., it shipped the packaged product back to Japan for final quality assurance testing. (T.I. ships to Dallas for final QC.)

So all Hitachi did was move a few QC engineers and technicians and the test equipment to Malaysia and ship directly from Malaysia. Other Japanese makers did pretty much the same thing. So the legal proceedings which are now drawing to a climax will have no effect on shipments from Malaysia at all! For some reason, the Commerce Department folks think the Japanese are cheating!

But the Japanese have packaging plants all over the world. To bar Japanese DRAM, the U.S. will have to go back to FORTRESS AMERICA, an idea which was popular in the 1930s but got discredited a little later. I forget why. (For the youngsters out there, FORTRESS AMERICA means that we 'build' a symbolic wall around our country and refuse to import anything from anywhere. That saves lots of U.S. jobs. It also means that we can't export anything, which loses lots of U.S. jobs.)

And remember, the whole idea is to protect U.S. DRAM makers. The biggest U.S. DRAM maker, T.I., produces 80% of its DRAMs in the far east!

Last year the trading companies flew to Japan bearing bank drafts on Japanese banks. This year the trading companies fly to Malaysia bearing Malaysian bank drafts. Next year the trading companies may fly to Silesia or Samarkand... Since the Commerce Department has figured out that it cannot bar Japanese DRAM without imposing duties on all other countries, its negotiating position is that it wants to control the price of Japanese DRAM all over the world!

The other fundamental problem is that one of the negotiating points which the U.S. Department of Commerce is trying to achieve is flatly illegal under U.S. laws! Commerce wants Japan to agree to purchase more U.S. semiconductor products. Japan asks, "how much more?" The Commerce official verbally tosses out a figure. The figure gets batted around for a while. Eventually an agreement of sorts is reached and the Japanese negotiators suggest reducing the agreement to writing, as in "the U.S. and Japan are agreed that XY% of the semiconductors used in Japan shall be purchased from U.S. vendors." At this point, the U.S. types jump up in the air and shriek, "No no! You can't do that!" Why? Because such an agreement is an obvious conspiracy to restrain trade under the Sherman Anti-trust Act, that's why!

So the U.S. wants to negotiate a figure but is unwilling to place that figure in writing. The Japanese think the Americans are crazy. I am inclined to agree.

Remember that the whole point of the exercise is to boost U.S. chip-maker profits to the detriment of folks like IBM and Digital Acoustics who buy those chips. And those higher profits naturally would get passed on to lucky you when you buy your next computer!

There are only two scenarios which resolve these problems and I don't believe in either of them: 1) FORTRESS AMERICA. 2) The Japanese let the U.S. Department of Commerce prepare their price lists.

HILARITY DEPARTMENT:

A large U.S. business firm has 40,000 automatic telephone dialers gathering dust on over 17 miles of shelves. That firm had planned to sell those devices to the public for $8,000 each, but the public for some inexplicable reason ain't buying. So the large U.S. business firm has gone to plans B and C. Plan B involves selling an 8088-based add-on board which runs IBM PC programs. They had originally planned to sell this board for $995 but since the dust is getting really thick on that unsold inventory they will sell it for only 95 cents if you buy the $8,000 automatic telephone dialer. So for a mere $8,000.95 you can have a box which will run some IBM PC programs. There are a few that won't run, though. Like Microsoft Word, DBASE III, Framework II, Symphony 1.1, Microsoft Chart, and... Honest, folks, I really don't make these things up!

Oh, yes: plan C. Since folks don't seem to be buying that box to sit on their personal desk, that large business firm has decided that the automatic telephone dialer is a multi-user machine. That's what that ubiquitous full-page ad is all about, the one with shadowy minicomputers in the background.

In a separate development, that very same large U.S. business firm has released an improved version of an operating system which it offers to the public. Since the OS is improved, it is naturally worth more, right? So the price, for source or binary licenses, has been jacked up by another 60%. This is the operating system which was going to take over the personal 16-bit computer world yesterday but somehow didn't. Maybe the personal 32-bit computer world tomorrow?

SADNESS DEPARTMENT:

When a circus clown slips on a banana peel once, it's funny. The fourth time, it's sad. A certain writer has predicted that Kindly Uncle Jack's forthcoming 32-bit desktop micro will be based on the AT&T micro chip set. I'll let you guess who that writer is. Hint: he's been that far off base several times before.

SLEEPER COMPARTMENT:

A couple of flyers back I told you I thought the Fairchild Clipper just might be a sleeper. I don't think it will appear in a KUJ $995 rock-shooting toy in the foreseeable future, but that is not its intended market. Its intended market is the upcoming 32-bit engineering workstation market, where it will buck heads with the MicroVax II, the IBM RT, and the merchant-micro machines based on the 68020 and the 80386. The Clipper can outperform the 68020 and the 80386 by about a factor of 2, the MicroVax II by a factor of 4, and the IBM RT by a factor of 6 or more. And its price of $1500 per 3-chip board is not too steep for a machine which bucks heads with $17-$27K machines from other vendors.

Intergraph has just become the first company to introduce an engineering workstation based on the Clipper. It will not be the last. For $25K you get an 80 megabyte disk, 6 megabytes of RAM (50% more than I have on my DTACK/IBM attached processor), a mega-pixel non-interlaced color display with a now-standard color palette, and some other stuff. A keyboard, for instance. And Intergraph is big enough to back that machine up with marketing and software. I loved the 2-full-page ad in my latest Electronic News.

I think the $27K Sun Microsystems 68020-based engineering workstations might be in trouble. I know the IBM RT is in trouble, but then it already was. IBM itself crippled the RT as sold by folks other than IBM's own sales force by subtracting all the useful software and also subtracting its 5080 megapixel display. As a result of this foolishness, the RT's sales are right up there with the UNIX PC's. And that was before the Intergraph announcement.

None of this stuff upsets me personally because my interests lie in stuff priced from about $6000 down. In other words, I am interested in the personal computer market where folks use their own money to buy their own personal computer. But I am interested in the engineering workstation market because, as president of Digital Acoustics, I might someday have to make a decision to buy one of those workstations. It would be a company, not personal, computer...

So let's discuss some more reasons why I like the Clipper, and think the 68020 and the 80386 are going to have rough going competing with it. The secret is its twin cache. Let me quote from Fairchild marketing director Tom Miller in EET:

"Single-user systems, running a single task at a time, can do well with a simple direct-mapped one-way associative cache system like that used on many previous-generation workstations. But as DEC realized when it went to the VAX architecture, this simple caching strategy can cripple applications running under a multi-tasking operating system like UNIX. They cause what's called cache-thrashing - senseless filling and refilling.

"...Clipper uses separate caches for instructions and data, each tightly coupled to the CPU, organized two-way set associative, and including both write-through and copy-back caching strategies." Translation: Howard Sachs is one smart fellow when he's designing CPUs instead of suggesting that 1-2-3 be rewritten in C.

THE NEXT BORING SUBJECT:

The next subject which I am going to hammer on until all of you get sick and tired of it is the desperate need, in these days of 16 to 33MHz CPU clock rates, for an efficient CPU-to-multi-megabyte DRAM memory interface.

There are two ways to achieve a better interface with DRAM. One is large, properly-designed caches, one for instructions and one for data; Clipper has this. The other way is built-in logic to take advantage of nibble mode, which provides (typically) a 33% speed increase when reading. And reads are about six or seven times more common than writes. Clipper has built-in logic to take advantage of nibble mode. And that is why the Clipper is about twice as fast as a 68020 or 80386. Without the caches and the nibble DRAM access mode, the 33MHz clock rate would be useless because of the infamous von Neumann bottleneck - the CPU-to-memory bus-bandwidth.
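
For the curious, here is what that nibble-mode figure works out to overall, in a couple of lines of BASIC. The six-to-one read/write ratio and the 33% read speedup come from the paragraph above; the straight weighted average is my own back-of-the-envelope simplification:

 10 REM overall memory speedup from nibble-mode reads (rough estimate)
 20 R = 6 : W = 1             : REM reads outnumber writes about six to one
 30 G = 1.33                  : REM nibble mode makes reads about 33% faster
 40 T = (R / G + W) / (R + W) : REM relative time per average memory access
 50 PRINT "Overall speedup:"; 1 / T : REM about 1.27 - call it 25 to 30 percent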

If either the 020 or the 386 are going to try to knock heads with the Clipper in the $17K-$27K bracket then they are most certainly going to have to have large, dual caches which are two-way set associative. Translation: $$$$$$$$$.

Here is where the $1500 Clipper has the $250 020 and 386 whipped: its caches and ancillary logic have been reduced to two identical VLSI chips having about a quarter-million device-equivalents each. So each of those cache chips is about as complex as an 020 or a 386! And, being VLSI, the production cost is going to be a hell of a lot less than a cache built up of discrete chips! In other words, the Clipper in its intended marketplace is more cost-effective at $1500 than the 020 or 386 are at $250. That fact is so important that it is almost, but not quite, irrelevant that the Clipper has built-in nibble-mode logic and the 020 and 386 do not.

I learned a long time ago that in the electronics business ONE of something costs a million dollars, but a MILLION of something costs a buck apiece. It will be impossible for workstation vendors who each have to design their own caches and build them with discrete chips to compete against the mass-production, already-designed VLSI cache chips which come as standard equipment on that $1500 CPU board.

With that background, I can now point out a crucial difference between the 68020 and the 80386 - aside from the fact that the 68020 is in full mass production and the 80386 isn't, that is.

LOW-END 32-BIT SYSTEMS:

If we move down into the mass personal computer marketplace we can forget about the $1500 Clipper and we can forget about external caches because they are plain too expensive. What we are going to see is a direct CPU-DRAM connection right on the motherboard (local memory), just like the Beaucoup Grande. High-speed PALs can be used for decode to get the fastest response from the standard 120 nsec 256K DRAMs which are the fastest affordable chips available today.

The 68020 and 386 are both nominally 16MHz chips. The 68020 requires 3 clocks minimum for an external memory cycle. Intel boasts that the 386 only requires 2 clocks minimum for an external memory cycle, which provides a 50% increase in bus-bandwidth over the 020. In the mass personal computer marketplace none of that matters because, at 16MHz, both chips will require at least four clock cycles to access affordable DRAM!

Well, that leaves the 020 and the 386 on equal footing, doesn't it? They are both 32-bit micros, they will both work with the same commodity DRAM, and they are both nominally 16MHz parts? Wrong, of course! The 020 has a built-in 256-byte instruction cache. For well-ordered programs (i.e. programs written in assembly by human beings rather than compilers) it turns out that that little cache achieves a 65% hit-rate. For each hit, the "memory" cycle uses only two clocks. So for every 100 memory cycles, 35 use four clocks and 65 use two clocks (the internal cache). That is a total of 270 clocks, or an average of 2.7 clocks per memory cycle. The Intel chip has no equivalent internal cache, so it will run at 4 clocks per memory cycle all the time. According to my arithmetic, that makes the 020 48% faster than the 386 in the mass personal computer marketplace. (Motorola claims 40%, which means they factored in the write cycles.)
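
Here is that arithmetic in BASIC, in case you want to try other hit-rates. The 65% hit-rate and the 2- and 4-clock cycle counts are the figures just quoted:

 10 REM average clocks per memory cycle with the 020's on-chip cache
 20 H = .65                   : REM cache hit-rate for well-ordered programs
 30 A = H * 2 + (1 - H) * 4   : REM 2-clock hits, 4-clock misses
 40 PRINT "020 clocks/cycle:"; A      : REM 2.7
 50 PRINT "386 clocks/cycle:"; 4      : REM no cache: 4 clocks every time
 60 PRINT "020 advantage:"; 4 / A     : REM 1.48, i.e. 48% faster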

What this means is that the Intel/MS-DOS folk are due for a shock when the first personal computer 386 machines begin to ship a year from now. The 020 machines are 14 months earlier and 40% faster...

WHAT TO DO?

Several years back Intel's Andy Grove publicly questioned what the heck the chip vendors were going to do with a million transistors per chip when that level of integration became available? Besides make megabit DRAMs, that is? One of the answers is that it will become possible to package a complete 32-bit computer architecture on a single chip. The CPU, dual caches, a double-precision floating point logic unit, and full memory management. The industry is not quite at that point yet, which is why the Clipper comes as a 3-chip set with a total of about three-quarters of a million transistor-equivalents (the Clipper is not a complete real-world computer architecture).

Both the 68020 and the 386 use roughly a third of a million transistor-equivalents, so neither can even come close to being complete computers as a single chip. Both use external floating point units; the 68881 is in production and the 387 is a piece of paper. Very nice paper. But a third of a million transistor-equivalents is more than one needs for just a CPU; what shall we do with the remaining available logic?

Intel, following the policy it established with the 80286, made the decision to spend the remaining silicon on internal memory management, which is highly desirable in a multiuser/multitasking environment. The mass personal computer market is a single-user, single-tasking environment. How many PC/ATs ever run in the memory-protected mode? As many as 17? Memory management is as useless as tits on a boar hog in the mass personal computer market. If you say otherwise then you're describing a business computer!

Motorola made the decision to go with an internal instruction cache. I don't think they did that with us personal-computer users in mind, but the result is that the 020 is better for us - you and me - than the 80386.

I can think of three reasons the 020 is superior to the 386 as an engine for a personal computer: 1) The 020 is in mass production now, and the 386 isn't. 2) The 020 has an internal instruction cache to improve the memory bus-bandwidth and the 386 doesn't. 3) The FP math chip coprocessor for the 020 is in mass production and the one for the 386 is a piece of paper.

I can think of two reasons the 386 is superior to the 020 as an engine for a personal computer: 1) There is still lots of 8080 software out there which was written in assembly. The 386, like the 286, retains a mode in which it can run transliterated 8080 assembly code. Low-end personal computers which are based on the 386 will run in this mode and in this mode only, just like the PC/AT. 2) IBM still owns a largish chunk of Intel and that, along with software continuity, will assure that IBM will build a next-generation PC based on the 386. And that means that lots of software will be ported to that IBM machine.

I wonder if IBM will attempt to introduce its 386-based PC as another multi-user machine like the AT was intended to be, or whether they have wised up that folks buy personal computers for personal use?

CAUTION DEPT:

The Clipper is great hardware and highly competitive in its intended niche... as hardware. There is about as much software available for Clipper as there was for the 68000 in 1980. For that reason, the companies looking for early success with Clipper had better have sizeable resources. The MicroVax II and Sun 68020 machines are not going to disappear overnight.

Comparing the performance of the 68020 and the 386 is a complex matter. The correct answer to "Which is faster?" is "Both of them!" A comparison has to be based on a highly specific configuration. The comparison I made a few paragraphs back was just that.

If unlimited funds are thrown at those two CPUs to gussy them up, the 386 comes out ahead because of two cycles per external cache memory fetch and because Intel has commissioned Weitek to develop an interface chip which will tie Weitek's double-precision math chips to the 386. Such a configuration would be a lot faster than a 68020/68881 with an external cache! But the price tag would be so heavy that I just might want to look at the far-cheaper $1500 Clipper at that point...

68K ASSEMBLY BASICS:

A couple of folk have taken me to task for asserting that there weren't any assembly-based 68000 BASICs out there. There are. And I knew it. But they were in the multiuser arena, as in 37 terminals on a poor, overworked 68000, and I usually don't pay much attention to those. The first two were the Alpha Micro and Pick 68000 BASICs. I think Alpha Micro came first.

The Alpha Micro 68000 BASIC is incrementally compiled and is available only on Alpha Micro multiuser/multitasking systems. So the incrementally compiled part is similar to HBASIC. The Pick system is really an integrated BASIC-cum-OS-cum-data base system. The integration of the BASIC and the OS is similar to HBASIC. Both of these systems are written in assembly and yes, I knew about them but forgot. I talked to the guy at Pick who was developing their 68000 BASIC over four years ago, and Jim Rea filled me in about the Alpha Micro system nearly two years ago.

Like I said: those are humongous multiuser systems and so I plain overlooked them.

But an outfit in Chicago called Softworks Limited decided to piggyback Alpha Micro a long while back, and wrote an APL, and a FORTRAN, and a BASIC. The BASIC is a compiler which is ten million percent compatible with Alpha Micro's incrementally compiled BASIC. Compilers are faster than incremental compilers; SL claims a 40% advantage over Alpha Micro, which is a reasonable number where heavy-duty number crunching is not involved. On the other hand, compilers are not interactive, but incremental compilers can be and most - including HBASIC - are. So that makes three assembly-based 68000 BASICs, or four with HBASIC.

Let me say this again: SL's BASIC is ten million percent compatible with Alpha Micro's BASIC. How compatible is it with Bellevue BASIC? Don't ask. It has 1.5 precision floating point (6 bytes). It doesn't have signed integers, but it does have one to five byte unsigned binary. But you should know that this BASIC is now available for the Atari ST! It runs under GEM/TOS but does not use windows. And the price is a munificent $82 including S & H in the U.S. Mastercharge and VISA. An Amiga version is due this fall.

SOFTWORKS LIMITED
2944 N. Broadway
Chicago IL 60657
ph. (312) 975-4030

I got a letter from the guy who programmed that BASIC, which is why I know all this. He first took three years (one guy) to write APL in assembly. Then he took two years to write FORTRAN in 68000 assembly. By the time he got around to BASIC he got the time down to 20 months! All of these were 1-man efforts and all were in assembly.

This #&*$@>% guy asserted in his letter that only novices prefer interactive BASICs! Geez! And here I've been programming since 1969! Not 1979, 1969. And while I am not the most experienced programmer around, neither am I a *(@#%$ novice! And I greatly prefer interactive BASICs! I think I'll have my two attack teddy bears visit Chicago...

BEAUCOUP BLUE SKY:

The high density floppies we now have, and which I have been using for over 6 months, are so large and so fast that I almost don't want anything larger or faster. Almost. I have been tentatively looking at hard disk specifications of late. A brief review:

Your vanilla real-computer floppy disk, the one on the IBM PC, holds 360K formatted. It rotates at 300 rpm and transfers data at 250Kbits/sec.

The HD floppy we are using, which has been in common use in Japan for about 3 years now, is essentially the same as the one in the PC/AT. Which, by the way, nobody seems to use... in the AT. It holds 1.2 megs the way IBM formats it and 1.28 megs the way we format it. It rotates at 360 rpm and transfers data at 500Kbits/second.

Vanilla real-world floppies are almost never used in an efficient manner, so the average throughput when reading large files is on the order of 10 to 12Kbytes/sec. Us folks at Digital Acoustics are very fond of using hardware efficiently, so we get an average throughput of 40Kbytes/sec from our HD floppy when reading large files.

The typical small-computer type Winchester rotates at 3600 rpm, exactly ten times as fast as our HD floppy. It transfers data at 5Mbits/sec, exactly ten times as fast as our HD floppy. So a DMA-equipped 68000-based computer, with a 16-bit DMA port, can naturally read large files at an average throughput of 400Kbytes/sec? HO HO HO!

With a rotation every 16.7 msec instead of 167 msec, the track-to-track stepping and settling time becomes very important. There is about a 5-1 range in performance in this parameter in Winchesters. There are 3 types of head positioning mechanisms, and you gets what you pays for. But more important than that, for some reason hardly anybody reads a track in sequence. A specific example:

Digital Acoustics has a Haba disk for the Atari ST, an early version which we bought last August. It works. As I reported 'way back then, it loaded consecutive 30K files at a rate of almost 2 per second for an average throughput of 56K/second. Well, that included the seek time and probably the catalog read time. James took off the cover and fired up an oscilloscope and this is what he found:

The Haba disk reads a 512-byte sector in .765 msec. Let's call that 3/4 of a millisecond, O.K.? Then it does nothing for 3 milliseconds, which is 4 sector times. Then it reads another sector. In other words, it reads every fifth sector. So instead of being able to read an 8K track in 16.7 msec (one rotation) it reads that track in 83.3 msec (five rotations). So if we ignore the track-to-track stepping and settling time, the Haba hard disk has a maximum throughput of only 96Kbytes/sec instead of 480Kbytes/second. Neither of those rates can actually be achieved; how close one comes depends on how much money one spends on the head-positioning mechanism.
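
You can check James' numbers yourself in a few lines of BASIC, calling a track 8K as above:

 10 REM Haba hard disk throughput, from the scope measurements above
 20 T = 8000                 : REM call a track 8K bytes (16 sectors of 512 bytes)
 30 R = 16.7                 : REM msec per rotation at 3600 rpm
 40 PRINT "One rotation per track:"; T / R; "Kbytes/sec"     : REM about 480
 50 PRINT "Every 5th sector:"; T / (5 * R); "Kbytes/sec"     : REM about 96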

But since I already have a (floppy) disk which has an actual throughput of 40Kbytes/sec and which I can back up in 66 seconds, if I am going to put my precious data on a hard-to-back-up Winchester, I want to be trying to get as close to 480K/sec as I can, not a lousy 96K/sec.

There may be some hard disks out there which are like what I want. Quantum, for instance, makes a hard disk-cum-controller which has a track buffer. I think it reads a track at a time. But when we called Quantum and asked about the average throughput, the kid they had on the phone didn't know anything but "5 megabits per second," a phrase which he parroted repeatedly. Well, we have the same problem: nobody can afford to put anybody who knows anything on the phone line.

A REQUEST:

Since we mail about 850 of these junk flyers, maybe some of you folk out there have some info about hard disks and controllers and track buffers. Stuff which is not proprietary, that is. If so, would you mind photocopying it and sending it to me, please?

ALTERNATIVES:

There are many ways to skin a cat. How to run faster than what we now have? Hmmm... parallel processing. Those HD floppies cost only a tad over $100 a copy. Four of them could read 160Kbytes per second, more than twice the throughput of the Haba hard disk. Eight of them would hold ten megabytes and could read 320Kbytes per second...

I have four megabytes of DRAM on my DTACK system right now. In about a year I will have ten megabytes. So I boot into RAM disk every morning (maybe from parallel floppies?) and do everything from RAM. 68000-based RAM disks are amazingly fast. Every hour or so when I get up to stretch and sip a cup of water or whatever I hit the 'backup' button and the floppy is updated. Perhaps this idea should be investigated further.

One could read the latest product release which is going to cure this problem forever and sit back and wait for that product to be shipped tomorrow afternoon at 3 o'clock. In 1982 that product was Syquest's cheap Winchester with removable media. In 1983 it was the Sony ten-megabyte 3.5 inch floppy using vertical recording techniques. In 1984 both Amlyn (using Dysan's money) and Drivetec (using Kodak's money) actually produced pilot runs of multi-megabyte 5.25 inch floppy drives. Neither of them panned out. In January of 1985 3M announced its 5.25-inch erasable optical disk, the one which held 300 to 500 megabytes. And Sir Clive was going to ship his semiconductor disk-on-a-wafer. In 1986 Kodak is trying to resurrect Drivetec and Toshiba has announced a 4 megabyte 3.5 inch floppy with a 1Mbit/sec data transfer rate using vertical recording. And in 1980, '81, '82, '83, and '84 bubble memory was going to take over the world. All of those breakthroughs listed above came with very nice brochures. So far, none of them got shipped at 3 o'clock the next day.

Waiting for the latest breakthrough can take a while...

There is an extremely simple solution to the problem of a big fast hard disk and a fast reliable backup. It's available now. It was available last year, and the year before that, and... what's that? You want to know what this extremely simple solution is? Well, it's spelled "$$$$$$$$."

For an appropriate sum of money one can get a very good, very fast hard disk and a very fast and reasonably reliable tape backup. But that "sum of money what am appropriate" is for business, not personal, budgets. What I'm looking for is a faster disk which one can easily back up and which is priced appropriately for a personal computer.

THE OLD PARADIGM:

Paradigm is a rather unlovely word which is commonly used in technical writings when the author wants to obscure the fact that there are no facts in his writings. Psychologists and psychiatrists are especially fond of the word. In 1980, the paradigm of a CP/M personal computer was a keyboard, Z80 with 64K RAM, 24 X 80 text display, and two floppy disks, each holding 256 to 512Kbytes. Naturally, all of the DOSs of the day featured sequential files, for the simple reason that only a little bit of the data on those disks could be loaded into RAM at a time.

In Jan '84, Apple introduced an insanely great computer which attempted to bust that paradigm, because there was provision for only one floppy disk in that insane, er, insanely great computer. I had a conversation with an editor of an Apple-related publication in which I asserted that the first thing people were going to do was buy an add-on floppy disk drive for Mack.

You see, that early wimp-Mack had only 128K RAM and the disk held a lot more than that. So to copy a floppy disk one incurred serious wear and tear on the old wrist as one swap swap swap swapped! Anybody who had one of those one-disk wimp-Macks did a lot of swapping! The problem still existed when a slightly less wimpy Mack was introduced with 512K RAM and a single 800K+ floppy. I believed then and still do now that Steve Jobs had to be a (deleted) idiot not to build space for two floppies into Mack's case.

Heck, everybody knows that the paradigm of a serious floppy-based personal computer has two floppy drives. Isn't that right?

THE NEW PARADIGM:

The $2500 Mack Plus and the $1000 1040 ST each come with a megabyte as standard equipment. Each comes with space for just a single floppy drive. Isn't that just terrible?

Uh, well, actually, no. Times change and old paradigms slink off to the paradigm graveyard, to be replaced by new ones. Here is the new paradigm: If your floppy holds nearly a megabyte and you have more DRAM than the capacity of that floppy then you do not need a second floppy! Not for a personal computer, that is.

Why? Very simple. A utility can be written which can copy a floppy disk using a single step. Read the old disk into RAM. Write to the new disk. Done. Or write to another new disk. And another.
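
Here is a sketch of such a utility in BASIC-flavored pseudocode. TRACKREAD and TRACKWRITE are hypothetical stand-ins for whatever raw-track primitives a real DOS would supply, and the geometry is illustrative:

 10 REM one-pass floppy copy - under the new paradigm the whole disk fits in RAM
 20 REM TRACKREAD/TRACKWRITE are hypothetical raw-track primitives, not real DOS calls
 30 N = 80                       : REM tracks per disk (illustrative geometry)
 40 DIM T$(N)                    : REM one buffer per track
 50 FOR I = 1 TO N : TRACKREAD I, T$(I) : NEXT I
 60 PRINT "Swap disks, then hit a key" : WHILE INKEY$ = "" : WEND
 70 FOR I = 1 TO N : TRACKWRITE I, T$(I) : NEXT I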

Hardware evolves with astonishing swiftness. All that hardware is totally useless without software. Software, especially operating systems, evolves with glacial slowness. So although both Apple and Atari are making 68000-based systems which are useful with a single floppy, all the operating systems out there are still based on pulling a little piece of data into local RAM at a time. The OS thinks the year is still 1980, when floppies held 8 times more data than the total available RAM. Have you ever watched a personal computer running an ISAM-type data base program? The floppy grinds and it grinds and it interminably grinds away. Hours later - many hours later - the program is finished.

Obviously, the way to run a personal-computer data base program today is to load the entire floppy into RAM and then let your 68000, with its huge linear address space, massage that data damn quickly. Naturally, this technique is unavailable to the MS-DOS folks for a number of reasons, but then the PC is 1981 hardware, not 1986 hardware. But the new paradigm is so very new that most folks have not caught on yet, even in the 1040 or Mack+ world.

I AIN'T GUESSING!

How do I know that a single floppy is usable over the long term? Simple. I have been using one constantly for over six months now! Sitting beside my host computer is a FLOPPY DISK SERVICES case-cum-power supply for two half-height 5.25 inch floppies. There are two HD floppies in that case. I have never used that second floppy, even though it would only take two hours, at the most, to modify the current HBASIC DOS to recognize that drive. To repeat, I have been using a single floppy disk for over six months even though a second disk is not only readily available but is physically present! And you have learned by now how fond I am of large memories, fast efficient CPUs etc. There is, for real, a new paradigm.

A NEW, BETTER DOS:

That means that a new kind of DOS is needed for handling data efficiently in big chunks, not sequentially feeding a tiny piece at a time. Guess what? That happens to be exactly how the HBASIC DOS is written. I have seen smart, experienced computer types scratching their head over the way HBASIC handles data files. That's because they are still locked into the old paradigm where the disk is bigger than RAM. A lot bigger! Here's how one loads a 720K data file containing 92,160 8-byte floating point numbers into an Atari 1040 ST using HBASIC:

 10 DIM A[90,1024]:DATALOAD filename,A[]

That is it! Finished! Done! When you are done massaging those 92,160 double-precision numbers you simply

 900 DATASAVE filename,A[]

Prices of 1 megabit DRAMs are now where they were for 256Ks in the spring of 1984. If this trend continues, the crossover from the 256K chips to the 1 megabit chips might occur in the early fall of 1987. In the second half of 1987 a high-end personal computer could have a 32-bit processor and a minimum of 4 megabytes of RAM, expandable to 16 megabytes if one uses 128 chips. We will have completely filled the linear addressing range of a 68000! What I am talking about is only a little over a year away from reality (plus maybe a little production lead time).

And three years after that, the 4 megabit chips will arrive, so we will have a minimum of 16 megabytes, expandable to 64 megabytes if we use 128 chips. Folks, what I am describing is a personal computer. A high-end personal computer but a personal computer nevertheless.
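
The chip-count arithmetic, for the skeptical (each DRAM chip contributes one bit per access, so megabytes = chips times megabits-per-chip divided by 8):

 10 REM DRAM arithmetic: megabytes = chips * megabits-per-chip / 8
 20 PRINT 32 * 1 / 8         : REM 32 one-megabit chips: the 4 megabyte minimum
 30 PRINT 128 * 1 / 8        : REM 128 chips: 16 megabytes, a full 68000 address space
 40 PRINT 128 * 4 / 8        : REM 128 four-megabit chips: 64 megabytes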

IT JUST HAPPENED!

On Jan 1 1986 there weren't any personal computers with 700K+ floppy disks which had more RAM than disk capacity. DTACK boards and 520ST upgrades and Mackish upgrades don't count; I'm talking about what you can buy across the retail counter. Both the 1040ST and the Mack+ began selling across the counter after Jan 1. So the new paradigm has arrived very recently; you might not even have noticed it yet.

But this change is permanent; from this time forward (into the foreseeable future) personal computers are going to have more RAM than disk capacity and so will need only one disk drive. What is less obvious is that an operating system which efficiently uses 'big chunk' files, not sequential files, is going to be needed in the personal computer environment.

Business computers, even the small ones that PC magazine writes about, will always (into the foreseeable future) have more disk capacity than RAM and so will always need a DOS with sequential files.

I think we are seeing the end of an era. Up until now the arrogant, fatuous, stupid minicomputer folks argued that what they had (UNIX) was just what a personal computer user needed. More recently, PC magazine could assert that the small business computers it writes about - the ones with all the slots filled and 76 megabytes of hard disk and ten coresident utilities and (tomorrow afternoon at 3 o'clock) virtual memory - were just what a personal computer owner needed. The folks at PC magazine have been doing that because, like the minicomputer folk, they don't know any better. (Isn't it obvious that PC should be named PBC?)

But we have a new paradigm in the personal computer world. It is not merely different but obviously different from the minicomputer world and from the personal business computer world. It will force the creation and use of a very different kind of DOS. As always, some folks will catch on sooner than others. Even today, a lot of minicomputer folk think they are going to gather those personal computer users into the fold via networking and departmental computers. Uh uh. Ain't never gonna happen.

It took a year for cheap 256K DRAM to translate into mass-produced 1 megabyte rock-shooting toys (1040ST) and 1 megabyte yuppie adult toys (Mack+). So it will be five years before 16-megabyte rock-shooting toys appear. Climb into your time machine and go forward five years. Now explain to an owner of a 16 megabyte 68040-based personal computer how absolutely vital it is that his DOS have sequential files...

(Actually, sequential files are not, and never have been, wonderful. Like toilet tissue, sequential files are a regrettable necessity.)

WORD PERFECT ?

This junk mailer resides on Word Perfect 4.1 files. All of the text up until this point was originally entered on the Eagle II at home, and transferred to the IBM via RS232 at 1200 baud. Naturally, all of the formatting codes had to be re-entered, a miserably time-consuming task. So I'm going to have to get me an IBM PC for use at home. Sigh.

Word Perfect is widely regarded as among the very fastest of the IBM word processors. And the 8088 is much faster than the Z80 the Eagle uses. Yet Word Perfect is at least 2.5 times slower than Spellbinder running on my Eagle! Looks like I'm going to have to retract my assertion that the IBM PC had several word processors written in assembly. And it looks like I'm going to have to buy an AT, not a PC, because I can outrun WP's cursor in certain modes.

Jim Seymour isn't always wrong. In PC WEEK he asserts that Word Perfect and Microsoft Word are going to divide up the biggest part of the PC word processor market. I think he's right. But it really worries me that Word is widely regarded as being a lot slower than Word Perfect...

HBASIC REPORT:

I have finished implementing general expressions. There are no known bugs at this time, but I have just begun exercising the various possible combinations of expressions, nested arrays and functions etc. This time I'm doing this by writing a program which calculates the value of successive expressions and compares that value with predetermined correct results. Sometimes when they don't match it turns out that HBASIC is right and the predeterminer is wrong!

That's not all. There has been a big change in the way the strings and string functions are handled. They are now almost indistinguishable from Bellevue BASIC! The reason I don't like dynamic strings is that they spray the variables all over the place, go away for a long time for garbage collection, often have fatal garbage collection errors (Applesoft, Waterloo APL), and prevent contiguous machine code from being loaded, manipulated, and run as string arrays.

I had in the past believed that getting rid of those problems meant that strings had to work differently. Uh uh! So HBASIC strings still don't have the problems listed above and yet they are almost identical in operation to Bellevue BASIC!

Here's what I did. Each Bellevue BASIC string or array element has associated with it a byte containing the length of the element and a pointer to the element. In some versions, there is also a down-link pointer stored with the element to speed up the garbage collection process. (I am most familiar with that in its Commodore BASIC 4.0 guise.)

HBASIC had in the past a pointer and a maximum length byte associated with each string. In the case of string arrays, there was a single byte (actually a word) which specified the maximum length of all of the string elements. These have been retained. But a byte has been added which contains the current length of the string, just like Bellevue BASIC.

In the case of a string array, there is a 'current length' byte for each element of the array. These bytes are not stored alongside each array element as in Bellevue BASIC because that would screw up using string arrays as executable machine code. So HBASIC uses a 'shadow array' which is associated with each string array. The shadow array has the same dimensions as the string array except that each element is a single byte, where the string array itself can have 1 to 255 bytes for each element. (No, that doesn't create undue overhead in locating string array elements - the address of the 'shadow array' element is essentially "free" at the penultimate step of calculating the array element address.)

HBASIC retains the property that each string or string array element has a maximum length. That's why HBASIC strings are fixed in memory. But the entire objective of being compatible with Bellevue BASIC is to simplify conversion of programs written for the BASICA or Applesoft environment into HBASIC. So we simply change the default maximum string length to 255! Since Bellevue BASIC's strings can't be longer than that, we are now fully compatible!

"Aha!", you exclaim? "But that is going to take up a lot more memory space!" True. But what we are talking about is importing programs written for machines with less than 64K combined program and data space into an environment which will never be smaller than 512K and will usually be a megabyte or greater! So who cares?

Besides, that default length is (now) programmable and each string array's maximum element length is separately specified by the DIM statement, as it was in the past. Voila! Strings which are (nearly) indistinguishable from Bellevue's, except that ours work better!
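
If you want to see the bookkeeping, here is the shadow-array address arithmetic sketched in garden-variety BASIC. The base addresses and dimensions are illustrative only, not HBASIC's actual internals:

 10 REM locating element (I,J) of a string array and its shadow-array length byte
 20 ML = 255                  : REM maximum length of every element
 30 D2 = 50                   : REM second dimension of the array (illustrative)
 40 B = 65536 : S = 131072    : REM string storage base, shadow array base (made up)
 50 I = 3 : J = 7
 60 E = I * D2 + J            : REM element number - the 'penultimate step'
 70 LA = S + E                : REM shadow length byte: one more add, essentially free
 80 SA = B + E * ML           : REM the string slot itself, fixed in memory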

It is somewhat hilarious that HBASIC, which started out being almost totally unlike Bellevue BASIC, is winding up a lot more compatible than, for instance, Softworks Limited's compiled BASIC.

Have no fear. HBASIC is not ever going to store and retrieve numeric data to and from disk by PRINTing and INPUTing each numeric data element...

FONT SIZES:

I'm learning more about fonts and stuff than I ever intended to learn. What you are reading is Times Roman-8 in 8 point type. The bold headers are Times Roman-8 in 10 point bold. The F font cartridge does not have 8-point italics, so I have to use underline for emphasis. This is a little too small a font for best readability; it turns out that most magazines and newspapers and such use 9 point. But HP only offers 8 or 10 (or 12, or 14... ) point.

A point is 1/72nd of an inch. 8 point type is 1/9 of an inch high. 12 point is 1/6 of an inch high. If I have to learn this stuff I'm gonna inflict it on you too.

Another difference is that the F font cartridge uses proportional spacing. That's great for text but it makes for absolutely lousy program listings. The brief pseudo-listings above look awful, especially the empty array brackets. So we will have to keep our paper cutter and waxer (we use hot wax, not glue, to piece together the masters for this junk mailer - the DTACK logo, for instance). Proportional spacing for program listings is out.

Hal W. Hardenbergh, his mark:

X

The world's most literate junk mail writer