GT Tutor
************************************************************************
The following text file was captured by me as a result of my call to Jim Davis' Retreat (713 497-2306) in Houston, Texas. I went to his board to download GTCTL and GTLOG - two utilities used with GT PowerComm. Jim came on the line to assist as I experienced transmission problems. I took the opportunity to ask questions about GT PowerComm and PC communications. Jim's response is presented here as an aid to other 'Neophytes' to PC communications. << Raymond Wood >>

... In the vernacular of the communications industry, there are a few concepts that need to be understood before the 'HOW' of communications can be understood. For example, the word BAUD. This essentially means 'bits per second'. In fact, it means something a little different than that, but for openers, let's say that's what it means. Now, whenever two machines are going to try to communicate with each other, a couple of things have to be done by both. They must both be set to send and receive at the same frequencies, for example. The most often used frequency today is 1200 baud. That means 1200 bits per second, as I said before.

Well, most users have no idea what bits are involved in a file transfer or a message transfer. Let's look at another standard word: BYTE. There are 8 bits of information contained in a byte. That is, a byte is merely a set of 8 bits. Within a set of 8 bits there are 256 permutations available, from all zeroes to all ones. Each letter of the alphabet, each digit, and each other special character is represented by a predetermined pattern of those 8 bits. A capital 'J' has a different pattern than a lower case 'j', for example. Given that, it is easy to see that no more than 128 of the total possible patterns would be necessary to represent any text. Thus, we have another 128 that may be used for 'special purposes'. What, for example? I'll get to that.

The sending of bits (on or off, high or low - in other words, binary information) is, by definition, a binary process. That is, the computers need only recognize one of two states. The telephone, on the other hand, carries information that is other than binary. It can faithfully represent different tones, pitch, and volume. This is called analog rather than binary. The almost sole purpose of a modem is to translate binary signals into analog and vice versa. When you are going to send a set of bits across a telephone you will have to convert those binary 'states' into some form of sound (which is, after all, what the telephone is designed to best carry). Modulating a signal from binary to analog is the 'Mo' in Modem. Demodulating an analog signal back into binary is the reverse and is the 'Dem' in Modem.

If we want the transmission to be highly reliable then we must do more than simply send the binary information (modulated). We have all heard 'noise' on a telephone line, and without doing more than demodulating into bits, the receiver will no doubt have a virtually impossible time telling which sounds are bits and which are just plain noise. In some applications, we don't really care all that much. Examples include the transmission of plain text files. Recall that all that was necessary to send any letter, many special symbols and any digit was a capability that required no more than 128 different combinations of bits. 7 bits are sufficient to represent 128 permutations. That is, if a byte were only 7 bits long then it could still contain as many as 128 different sets of bits being on or off. However, a byte is 8 bits long by definition.
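To make the bit patterns concrete, here is a minimal C sketch that prints the patterns for 'J' and 'j'. Notice that the high bit of both is zero; with only 7 of the 8 bits needed for text, the 8th bit is free for other work.

    #include <stdio.h>

    /* Print the 8-bit pattern of a character, most significant bit first. */
    static void show_bits(unsigned char c)
    {
        int i;
        printf("'%c' = ", c);
        for (i = 7; i >= 0; i--)
            printf("%d", (c >> i) & 1);
        printf("\n");
    }

    int main(void)
    {
        show_bits('J');   /* prints 'J' = 01001010 */
        show_bits('j');   /* prints 'j' = 01101010 */
        return 0;
    }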
So, in what is called ASCII (American Standard Code for Information Interchange) transmission we can use the first 7 of those bits to represent data and the 8th bit as a form of insurance, an integrity check that the first 7 were received as they were sent. This is called using 'PARITY'. You can establish a convention between the sender and the receiver that every byte WILL have an even number of one-bits (or odd) and use the 8th bit to make it so at the sending end. If the receiving end ever gets a byte that has odd parity, it knows that it received the byte in error (some bit or bits were either added or lost). That is all there is to parity checking in an ASCII transmission. Not at all very good, but sufficient for most text.
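A minimal sketch of that convention, computed in software here even though in practice the serial hardware does it:

    #include <stdio.h>

    /* Sender: set the 8th (high) bit so the byte carries an even
       number of 1 bits in total - "even parity". */
    static unsigned char add_even_parity(unsigned char c)
    {
        int i, ones = 0;
        for (i = 0; i < 7; i++)
            if ((c >> i) & 1)
                ones++;
        return (ones & 1) ? (unsigned char)(c | 0x80) : c;
    }

    /* Receiver: recount all 8 bits; an odd total means the byte was
       damaged in transit.  Note that TWO flipped bits slip through,
       which is why parity is "not at all very good". */
    static int parity_ok(unsigned char c)
    {
        int i, ones = 0;
        for (i = 0; i < 8; i++)
            if ((c >> i) & 1)
                ones++;
        return (ones & 1) == 0;
    }

    int main(void)
    {
        unsigned char sent = add_even_parity('J');
        printf("clean byte ok?   %d\n", parity_ok(sent));      /* 1 */
        printf("one bit flipped? %d\n", parity_ok(sent ^ 4));  /* 0 */
        return 0;
    }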
Program files, data files, and even text files that have been compressed (ARChived) in some way use all 8 bits in every byte to represent information. So, we have lost the ability to use parity as an integrity check vehicle. Instead, in every protocol other than ASCII we add either one or two full bytes to the end of a 'block' of bytes. The block is a fixed length (usually 128 bytes). The purpose of those one or two bytes is to contain what is called a Cyclic Redundancy Check (CRC) character or word. Like parity, the CRC is constructed at the sending end to create a pattern of bits that demonstrates that the entire preceding block of bytes has been received with integrity. The receiving end dynamically creates its own CRC from the information received and compares it to the byte or bytes received at the end of a block. If it doesn't match then the block must be rebroadcast (requested by sending the sender a signal that says "Negative Acknowledge" - NAK). If it was OK then it sends an ACK, meaning "Acknowledge", and the next block is sent.

Now, let's go back to the idea of baud. At 1200 baud, the modems are able to send and receive 1,200 bits per second. How many bits per byte? Yes, 8, but not on a telephone line if you are using modems! Instead, we bracket a byte by sending what is called a start bit before the 8 bits of data and ending with what we call a stop bit (sometimes 2, at 300 baud). So, every byte requires 10 bits, not 8. Thus, at 1200 baud your maximum possible data transfer rate is 120 characters (bytes) per second!

OK. Now we know what we have to send, how many bits are required, and that there is a response from the receiver called either an ACK or NAK. So why don't we get 120 bytes per second transfers using 1200 baud modems? Well, we already saw that for every 128 bytes of data, in most protocols, we send an additional one or two bytes of CRC. We DO NOT count the CRC byte(s) as data! Yet it takes time to transmit. Also, it takes time for most protocols to turn around and react to the ACK or NAK. For example, assuming all is well, the sender has a few hundred blocks to upload to the receiver. After the first block is sent he, by convention, must wait for the receiver to analyse the CRC and decide if it is going to respond with an ACK or a NAK. Then it takes a moment to send that to the sender who, in turn, has to receive it, verify that it got there properly (was not just noise) and decide whether to send the next block or to resend the last one that was improperly received by the receiver. That takes time.

All time used as described above is called 'overhead'. Overhead does not include the transmission of DATA, only control bits and time. Thus, it is impossible to get to an effective DATA transmission rate of even 118 characters per second, let alone 120 (CRC, etc). But we know that the telephone is capable of carrying sound in both directions simultaneously. So, why should the sender have to wait for the receiver's ACK or NAK? (This wait-for-reply mode of operation is often called 1/2 duplex, by the way.) The answer, of course, is that it does so only by convention. Newer protocols do not wait. They assume that a transmission will be successful and will result in getting an ACK. So they go immediately to the task of sending the next block - always listening, of course, for that ACK or NAK. When it is received as an ACK, all is well and we have gained performance. If not, the software has to decide which block or blocks have to be rebroadcast. In order to do that, it should be obvious that the ACK or NAK is not simply a single byte. Rather, it includes a byte that is called the packet number (0 to 255), and possibly more information. If an ACK is received, the recipient knows which of a series of blocks (packets) it is referring to. Similarly it would know with a NAK. Yep, more bits and more overhead!

Well, then, let me get to a few more contemporary terms and information more practical to know at this time. For example, almost nobody uses ASCII transfers any more. Why should they, when ASCII transfers are so poorly controlled and when you realize that ONLY un-compressed raw text can be sent that way? Still, a great many first time communications users try to do so. And, while the transmissions will appear to work, the resulting files will be garbage, of course. Only 7 of the 8 bits are being transmitted in each byte! Many comm programs will allow you to use ASCII even when they should know that the result will be unsatisfactory. For example, if a filename ends with COM or EXE then, again by convention, that file is an executable program. ALL such programs use 8 bits in every byte and could not, therefore, be transmitted via ASCII. Some comm programs will not let you try to do something that stupid (obvious, of course, only to a knowledgeable user).

What are the protocols that currently exist in widespread usage across the country? The most frequently seen is called XMODEM. This protocol is quite reliable (about 96%) and uses blocks of 128 bytes plus one CRC character at the end of every block. It is because it uses only one CRC character that the reliability is only 96%. Another is called XMODEM/CRC. This is exactly the same as XMODEM but it uses two CRC characters. The result is that the effective performance is reduced insignificantly (1/130th), but the reliability is increased to about 99.6%. In any case where you have a choice between the two you would, of course, opt for XMODEM/CRC.
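To tie the pieces together, here is a rough sketch of what one XMODEM/CRC block looks like on the wire and how its CRC word can be computed (the polynomial shown, 0x1021, is the one commonly used with XMODEM/CRC; treat the code as a simplified illustration rather than a complete implementation):

    #include <stdio.h>

    /* The shape of one XMODEM/CRC block on the wire (simplified). */
    struct xmodem_block {
        unsigned char soh;        /* 0x01, announces a 128-byte block      */
        unsigned char num;        /* block number, starting at 1, wrapping */
        unsigned char num_comp;   /* 255 minus num: a cheap header check   */
        unsigned char data[128];  /* the payload; last block is padded     */
        unsigned char crc[2];     /* the CRC word (plain XMODEM sends a    */
                                  /* single check byte here instead)       */
    };

    /* 16-bit CRC over the data bytes: shift each byte through, folding
       in the polynomial 0x1021 whenever a 1 falls off the top. */
    static unsigned short crc16(const unsigned char *buf, int len)
    {
        unsigned short crc = 0;
        int i, bit;
        for (i = 0; i < len; i++) {
            crc ^= (unsigned short)(buf[i]) << 8;
            for (bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (unsigned short)((crc << 1) ^ 0x1021)
                                     : (unsigned short)(crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        struct xmodem_block b = {0};
        /* 133 bytes of line traffic carry 128 bytes of data; the
           receiver recomputes the CRC and answers ACK or NAK. */
        printf("%d bytes per block, CRC=%04X\n",
               (int)sizeof b, crc16(b.data, 128));
        return 0;
    }

The block number and its complement let the receiver catch a duplicated or skipped block before it even looks at the data.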
Then, and this is particularly true in environments where one of the computers involved is either a mini or a mainframe, there is a protocol called Kermit. I believe it uses 128 byte blocks and other overhead such as a 'header block' (block zero) that provides control information. It is also very reliable (99.6%, I believe) but it is SLOW!!! It is used only if it is the only protocol available.

Then there is what is called YMODEM. This protocol differs from the earlier ones in that it sends 8 128-byte blocks together as a 'super block' before it sends the two byte CRC word. As a result it is the fastest protocol that I have ever seen for micro computers that use 'dumb' modems (ie, non self-correcting ones). There are two times when one should not use this protocol if there is a choice: 1) when the line noise is great on the telephone (for a retransmission of a 'block' that failed involves 1024+2 bytes even if only one bit was gained or lost - that is a lot of overhead!), and 2) in an environment like PC-PURSUIT that involves long duration handshaking turnaround delays.

Another protocol is called Telink. Telink uses 128 byte blocks but has an advantage over the other ones. It results in a file that is exactly the same size and has the same date and time stamp on it as the one being sent. Ymodem, for example, adds to (pads) a block until it is exactly 1024 bytes (the last record) even if that record only contains a few bytes of data. GT PowerComm has a unique protocol called 1kTelink. It is the same as Telink except it uses 1024 byte blocks and is therefore more efficient. Like YMODEM, 1kTelink should only be used on clean phone lines for performance, but unlike YMODEM it can be used on even a short file with efficiency. In the case of GT, and then only if communicating GT to GT, if either YMODEM or 1kTelink experiences a set of 6 errors during the transmission of a single file then it will automatically fall back to 128 byte blocks to greatly increase the odds that the transmission can be completed and to greatly increase the efficiency on what is presumed to be a noisy line!!! Neat!!!

The BEST protocol at this time for use in a PC-PURSUIT environment is called Wxmodem, which stands for 'Windowing Xmodem'. This uses 128 byte blocks but it does not wait between blocks for a response. It is always listening for those ACKs and NAKs, of course. Extremely high performance is the result, relative to Xmodem or the other 1/2 duplex protocols. Wxmodem tries to stay 4 blocks ahead of the receiver at all times while the receiver tries to get 'caught up'. The difference between the block being sent and the most recently received ACK or NAK is called the window (a number between 1 and 4). A toy model of this windowing appears after the protocol survey below.

Then there are two more odd protocols that have become relatively visible of late. These are called ZMODEM and Batch-YAM. ZMODEM was designed for use in a PC-PURSUIT like environment. Like WXMODEM, the best protocol for use in that environment, ZMODEM does not wait for ACKs and NAKs. Unlike WXMODEM, ZMODEM is relatively slow. For one reason, it uses no buffering. Thus every 512 bytes of data it must make another disk access. Batch-YAM is much like YMODEM except that it allows you to specify a set of file names (a 'batch' of them). It is slower than YMODEM except, possibly, on PC-PURSUIT.

What must a user know to do a file transfer? What protocol is available on BOTH ends of the transmission, the file name of the file on his end, and the file name on the other end. That is, if the receiving end of a transmission already has a file with the name of the file you want to send to it, naturally you will call the new file something else. Thus, every comm program allows the specification of the file name on your end and then the name on the other end. (It is not just an irritant that you 'already' typed that in; it is necessary.) Having said that I must make an exception - Telink and 1kTelink. These protocols allow batch names, like Batch-YAM, but the receiving end and transmitting end file names are the same.
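Here is that toy model of a windowing sender, in C. The replies are simulated (a real implementation would be reading them from the serial port), so treat it purely as a picture of the window logic: transmit freely, but never get more than WINDOW blocks ahead of the last acknowledgment.

    #include <stdio.h>

    #define WINDOW 4    /* WXmodem's limit                */
    #define TOTAL  10   /* blocks in this pretend file    */

    int main(void)
    {
        int next  = 0;    /* next block to transmit       */
        int acked = -1;   /* highest block acknowledged   */

        while (acked < TOTAL - 1) {
            if (next < TOTAL && next - acked <= WINDOW) {
                printf("send block %d\n", next++);   /* don't wait! */
            } else {
                /* window full (or all sent): now we must wait */
                acked++;                 /* simulate an ACK arriving */
                printf("   got ACK for block %d\n", acked);
            }
        }
        return 0;
    }

On a NAK, a real sender backs up to the failed block, which is why the ACK/NAK must carry a packet number.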
That's it for now.

Wood: I have a few questions, OK?

Davis: Sure.

Wood: Four to be exact. 1 - You mention a date/time stamp in one of your protocol descriptions but did not define its use prior to that. What is this and what is it used for?

Davis: PC-DOS or MS-DOS marks every file with the date and time that file was created or last modified. So, let's say I want to send you a copy of my transmission log that was dated 12/31/86 (by DOS). If I use any protocol other than Telink, the resulting file on your end will be dated with the date and time it was created (ON YOUR SYSTEM!) - today, now. Telink creates that file and leaves it on your system with my date and time stamp still intact.

Wood: When I receive an ARCed file, this time/date stamp is in the EXE module somewhere?

Davis: It is in several places in that example. The directory record on your disk is the formal residence of the stamp. So, in the case of an ARC file, it has a date and time stamp. Additionally, within the ARC file each record, which is merely another way of saying 'each file within the ARC file', has the date and time that THAT file had in its directory record BEFORE it had been ARCed into the ARC file. When you unARC, the resulting file will not have today's date and time as a stamp but the one recorded within the ARC file for it.

Wood: Good, I understand perfectly. I can relate it to what we sometimes do on the mainframe. 2 - You mentioned padding with YMODEM. What is this? Does the receiving end recognize the padding and discard it automatically?

Davis: Let's say the file you want to send is exactly 1025 bytes long. Each block transmitted by YMODEM contains 1024 bytes of data plus 2 bytes of CRC. It will, therefore, take two blocks to send that file. The second block will contain only 1 byte of data plus 1023 padded "blanks" - actually End Of File marks. YMODEM sends 1024 bytes every time! The receiver does not automatically strip those padded bytes. Indeed, it passes them to the resulting file so that it will always be an even multiple of 1024. Thus, you sent a 1025 byte file and it becomes a 2048 byte file!!
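The padding arithmetic from that answer, as a small sketch:

    #include <stdio.h>

    int main(void)
    {
        long size   = 1025L;   /* bytes in the file being sent  */
        long block  = 1024L;   /* YMODEM data bytes per block   */
        long blocks = (size + block - 1) / block;   /* round up  */

        printf("blocks sent:   %ld\n", blocks);                 /* 2    */
        printf("received size: %ld\n", blocks * block);         /* 2048 */
        printf("padding bytes: %ld\n", blocks * block - size);  /* 1023 */
        return 0;
    }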
Wood: OK - 3... You came to a conclusion without what I thought was the necessary support when you said "...thus 512 bytes result in a disk access with ZMODEM...". I did not follow the conclusion. Help!

Davis: Sure. As we discussed before the tutorial when we talked about buffers, a buffer is a fixed length (amount) of memory, sufficient to contain some number of blocks of data. In the case of ZMODEM, a block is 256 bytes, by the way. If the protocol used buffers there could be some large multiple of 'blocks' in memory awaiting transmission. Instead, ZMODEM does not use a buffer. Thus, it must have in memory only one sector of data at a time. In the PC world, a sector is 512 bytes, or two blocks of data as far as ZMODEM is concerned. Again, since that is the case, after two blocks (512 bytes), ZMODEM must go back to the disk to get more data to transmit.

Wood: One of the first things we learned in programming school 20+ years ago was that you could do things a lot faster with more than one buffer. We typically (or the system) use at least two. Why would ZMODEM not use any? Is there a memory problem?

Davis: I can't speak for the authors of ZMODEM, but I will say that it is typically not a protocol that is written into a program like GT PowerComm (as Xmodem or Wxmodem, etc. are). Instead, it comes rather conveniently in the form of an EXE program that can be run independently of the comm package, or by a simple shell out of the comm package to it. In the latter case, there is no way to know how much memory might be available in the majority of systems. The program itself could, of course, simply find out. But you will recall that BOTH ends of a transmission are highly dependent upon compatible software. It might be that the author of ZMODEM simply took the easy way out. I don't know.

Wood: This leads nicely into my final question, which deals with today's comm packages. When I first bought my PC I did the necessary research by reading reviews and magazines like Software Digest. I rejected XTALK and settled on HYPERACCESS. After I started using it I discovered Shareware. I have come to the conclusion that there are two classes of products in the Micro world today: commercially developed and other. My company uses XTALK. In the corporate environment you order a comm package and you get what the corporate gurus decide is best for you. I like ProComm. I do not like to feel that I was ripped off by buying HyperAccess; I just feel that I was uninformed at the time. In this area ProComm seems to reign as King with the majority of PC users. 4 - What are the advantages of GT over ProComm?

Davis: Excellent question. Let me try to deal with it professionally instead of from the bias I would naturally have for GT PowerComm. (When I wrote the documentation for GT I twice called it ProComm - how embarrassing it would have been if I had released it without an edit.)

Let's go back a little in time. Before the era of the PC, virtually all micro computers were 8 bit in design rather than 16. At that time the undisputed King in the area of comm packages was Crosstalk. It enjoyed an excellent reputation and was well supported. Further, it was not terribly expensive and it was one of the only comm packages that supported what was to become a whole set of protocol transfer methods (it had an XMODEM protocol). Well, in those days if your comm package didn't work reliably and you were not sure if it was a hardware problem or a software problem, you simply put up Crosstalk. If it worked, the conclusion was that the problem was software. It was THAT reliable.

Along came the PC's. Crosstalk was ported to the 16 bit world, but in a way that made very little progress in terms of adapting to the capabilities of the PC's. To this very day, I believe it is impossible to change directories in Crosstalk, though I could be wrong. In essence, Crosstalk continues to be available, and though it runs reliably in a 16 bit environment, it runs like it was in a CP/M environment, not a DOS one.

Then there was a leading contender from the shareware world called QMODEM. It enjoyed an excellent following and was remarkably efficient by comparison to Crosstalk - MUCH faster, in fact. And it had a couple of contemporary protocols not available in Crosstalk. It took off and has been a very successful product ever since. In my opinion it would still be a champion product save only for a few 'author' problems. It is a great program, nonetheless.

About the same time, Hayes, the modem manufacturer, introduced SmartComm II as a commercial product, and it was being shipped with many of their modems. By brand identification it was accepted. This despite the fact that it is the clumsiest of all the comm packages I have ever seen. It was, furthermore, not very efficient by comparison to QMODEM. It has essentially been unchanged since its introduction. (Sound like Crosstalk all over again?)

A new comm package hit the scene called ProComm.
In this program the author paid a great deal of attention to 'image'. He used imaginative ideas like a whistle that announced the opening and closing of windows, the windows themselves were innovative, etc. It was nowhere near as efficient as QMODEM, but it captured the imagination of the users. And, like QMODEM, the price was right - $0 to try it out, and then, if you decided to, you sent them a small check - but that's shareware. ProComm has advanced far faster than QMODEM in terms of incorporating different protocols and the incorporation of what is called a Host mode, or unattended mode of operation (autoanswer of the modem, etc.). It became King, as you call it, by being both innovative and current - but not by being efficient, though it is quite respectable.

GT PowerComm was only formally announced to the shareware world on the 21st of last month!!! (2/21/87). It includes 8 protocols, not counting the also-present ASCII, of course. At 2400 baud, I routinely establish DATA transfer rates of 235.5 characters per second with it, while the best I ever got with Qmodem was about 220 and with ProComm about 218. Actually, I did get a 225 once with Qmodem, but only once. So, in terms of performance, nothing has come close to being as fast as GT PowerComm.

But that, as we saw with ProComm, is not all that the user is looking for. We have incorporated an extremely rich function called the LOG. Into that log is recorded all connects, disconnects, messages to the host, passwords used to gain access, bad passwords tried, and, even more interesting, the name of and time to transmit every file that goes from GT to or from another computer, along with the total bytes involved and the name of the protocol used in the transmission and, finally, manually created notes and messages. So what, you might ask. I would answer that if you were the Sysop of a board, or of a corporate system, you MUST be able to determine who sent you a file or a message and when. (Yes, date and time stamps are included in all entries in the log.) For example, what would be your reaction if you found that a program on your disk was a trojan horse and you could not determine where it came from? Or, say you created a proforma for your department and it has been downloaded by 18 different executives before you discover a major error in it. Wouldn't you want to be able to determine who has received that file? All those kinds of questions are automatically answered via GT's log and GTLOG.

The main reason for feeling that there is a substantial difference between GT and ProComm for the user is in the area of SUPPORT. I take it that it has occurred to you that I have been talking to you for more than three hours already? And I don't even know if you are a registered user of GT. Well, I am only one of two of us that do exactly the same thing. The author of GT PowerComm, Paul Meiners, provides 24 hour a day access to his system as I do (as the author of the companion software). We have provided many new versions of GT PowerComm over the past year and are about to provide release 12.10 only two weeks after announcing 12.00 on the 21st! Why? Because we are constantly enhancing the products and our users want us to do so. We have several major clients already, including one of the major oil companies, one of the major airlines and one of the country's largest school districts!!! Finally, nobody has a better Host mode than GT PowerComm!!! I run a BBS using nothing else. That is power and function! Try it, you'll love it!!
Wood: I can't wait to put the system together! Rest assured that I will register the program. As an ex-programmer I know what is involved. I wish the product much luck. Did you say 3 hours?

Davis: I believe so. I don't remember, but I have reset the 1 hour time limit I gave you twice now, possibly three times. By the way, as a favor to me in exchange for the time, would you mind terribly ARCing your capture file and sending me a copy? I can make it available as a tutorial to others. And if you will make it available to others as well, it is possible that they will come to know GT PowerComm too.

Wood: No problem. I will not be able to do this for a couple of days, however. My modem is on the blink and I am waiting for a replacement. I will upload GT and the Log and CTL files to all of the bulletin boards that I normally deal with. I have already uploaded it to the corporate BBS. I do expect to get some healthy ribbing from the ProComm lovers, which is why I asked the question that I did. For now, though, I would like to get the Log file.

Davis: Thanks for the opportunity to be of help. I too must get to work. So, I'll take you out of chat mode. Don't forget to 'close' your capture file.

Jim Davis' Retreat
Voice 713 558-5015
Data  713 497-2306

A Review of GTpowerComm ver 1200, GThost. 3/16/87

GTpowerComm, from P & M Software, has had a passable Host mode for some time now. Until now it was pretty much a typical host in that it would allow someone to dial in to your computer and would provide support for downloading and uploading files with several file transfer protocols. The only thing unique was that it also supported a "call-back" mode which would allow someone to have practically free long distance access to the host system - with your approval, of course. Although this was nice, it did not appear to me to be any big asset to the BBS community in general.

With the release of Version 1200, however, I felt that a review of the Host mode was certainly appropriate. The Host mode now is what I would term a REAL mini-BBS. It presently supports not only the file security which I felt had been lacking in previous releases but also a real message base for general BBS chit-chat. Additionally, it supports the other features we have grown accustomed to with most BBS's - things like bulletins, comments, printer support, ANSI screen support, and much more.

To assist us as SYSOPS operating the GThost system as a BBS, James Davis, the author of the excellent GTLOG utilities, has again collaborated with Paul Meiners, the author, and provided another program called GTCTL. This program is a support program for handling the message base, user log, files and so forth. Again, a very nice feature provided in the one package.

Several individuals in the Houston area have started BBS's now using GThost. I know of one individual in Round Rock who is in the process of setting up a system there. I have had a test system available on a request basis, in addition to my own Buy A boaT, Genesis system. I am not implying by this review that the complete program is "bug-free" or that it is so simple that a two-year-old can run it. It is, however, an excellent, total communications package. The next version, which may have been released by the time you are reading this article, will contain even more: multiple message bases, user selectable passwords, and fixes for some known and repeatable bugs. Overall, in my opinion, this is an excellent alternative to the traditional BBS programs now out.
GTpowerComm ver 1200 is user-supported software available from P & M Software Company, 9350 Country Creek #30, Houston, Texas 77036. It is available locally on Buy A boaT, RGS, (512) 263-9731. The suggested donation is $40.00 for complete license and registration, and $10.00 for updates to previously registered users. My connection with P & M Software is that of a registered user. The system used for the evaluation of the program consisted of a BAB-PC w/512K memory, 20 meg Seagate HD w/Winchester controller, single floppy, and a USR2400-PC internal modem (300/1200/2400 baud).

Tom Scallorn
Buy A boaT, RGS
(512) 263-9731  24 hrs  300/1200, 8-N-1
Austin, Texas

************************************************************************
Following is a second conversational 'chat' between James Davis and Raymond Wood, designed as a follow-up to the first one. It again takes the form of a tutorial, due to the high number of requests for one following the first release. NOTE: this is an update of the original text; I discovered an inadvertent error in the original that confused SEAlink and Zmodem relative to their implementation of network flow control.
************************************************************************

D: Shall we start this off with a kind of outline as to where I think we will go with it? We discussed many fundamentals involved with communications in the first tutorial and ended up discussing several of the more popular file transfer protocols. This session will go farther into the area of file transfer protocols; technology such as the 9600 bit per second modems and error correcting modems with MNP or ARQ; and how one goes about intelligently selecting a protocol given a basic understanding of one's environment. For example, while Ymodem was described as the 'King of the hill' when it comes to performance, that is not true if you are using one of the packet switching networks. It is also not true at 9600 bits per second.

W: You mentioned 9600 and MNP. I thought that there was no industry standard for 9600 and that it is only practical if the other end is talking the same language with the same hardware? Also that MNP was implemented in the hardware of the modem... where am I wrong?

D: You're not wrong. GT PowerComm (12.20) now supports 9600 baud. I believe the newest version of Qmodem (3.0) does as well. Paul Meiners, the author of GT PowerComm, has a USRobotics HST 9600 baud modem and he is using it every day. I, too, have a USR HST 9600 as well as a Microcom MNP modem that I am testing. There are two quite different error correction methods in use at this time: MNP (Microcom Networking Protocol), which was developed by Microcom, and ARQ (a general term used by USR to mean Automatic Retry Request protocols - theirs being specifically called USR-HST [High Speed Technology]), and these two methods are totally incompatible. Even the methods used to modulate 9600 baud signals appear to be incompatible. However, we have successfully connected these two different brands of modems in 'reliability' mode. The USR has the ability to 'fall back' to MNP at 1200 or 2400 baud, where MNP has established a standard. (Of course, that makes sense for our PCP users.) We have also connected with other USR HST 9600 modems and seen that we have outstanding performance at 9600 baud. (We have cruised along at about 945 cps during transfers of more than 3 million bytes so far.)
Further, GT is such an efficient comm program that we are able to drive these modems at 19,200 bits per second from the systems while the modem is communicating at 9600 to another modem - for additional performance. It is for this very reason that we had to implement flow control - so the transmitting modem is not overrun. I will discuss this in more detail a little later in this tutorial. So, while you are correct that there is no standard at 9600 baud, that does not mean that 9600 baud modems are necessarily impractical. We are determining to what extent it is a problem. What concerns me the most is the different modulation methods. Nevertheless, it will not stop our support of 9600 baud.

Finally, you are right again, MNP (ARQ) is a hardware function - but it can and should be a transparent one. I note, for example, that since I began testing these modems I have connected with several (many) others and, as a result, totally eliminated the line noise that was present prior to the MNP connection - ie, there appears to be more to MNP than just error free file transfers. Thus, we must look at it. And, in doing so, we will test the various non-error-checking protocols that are used in such environments (Ymodem-G, for example). It is as much a learning curve for us as for the users - we just MUST do it behind the scenes for credibility's sake.

W: I understand the necessity to stay up with technological advances affecting your product. What I am not too clear on is exactly what MNP and ARQ are and why they have come about. Can you shed some light on this?

D: Since 2400 baud modems are NOT really 2400 'baud' - they are 2400 bits per second, 1200 baud modems - it has been clear that the limit of reliable communications, in terms of speed, using the bandwidth of the existing telephone circuitry has not been reached. However, it is also clear that as we more densely pack information within that bandwidth, the incidence of errors increases. The manufacturers investigated, starting with Microcom, various error detection and recovery methods that were hardware assisted. That was the birth of MNP (Microcom Networking Protocol). There has been an evolution in that technology which results in several 'levels' of MNP available today. The higher the level, the more function is included.

At any level, MNP merely ensures that the data received by the modem is what was sent by the sending modem. That is INSUFFICIENT, in my opinion. The only valid scenario is one in which the receiving COMPUTER is assured that it received accurately what the sending COMPUTER sent. There are cables, ports, circuits, timings, etc. that MNP DOES NOT CHECK. Thus, it seems that a combination of software and hardware error detection and correction methods is necessary. Almost all file transfer protocols check what I believe is necessary - computer to computer accuracy.

What, then, is the advantage of MNP? Well, to begin with, it SHOULD be more efficient. If the software need only be concerned with data bytes and not CRC and other control bytes, then it should be faster. Further, the newer levels of MNP are more efficient than you might have guessed. They strip off the start bit and the stop bits from each byte, for example, and that increases transfer performance by 20% (8 bits per byte rather than 10). Further, they send 'compressed' data via internal algorithms, which increases performance even more.
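That 20% figure falls straight out of the framing arithmetic. A sketch (the roughly 1100 cps quoted in a moment is this synchronous ceiling minus the modem's own framing overhead):

    #include <stdio.h>

    int main(void)
    {
        double line = 9600.0;   /* bits per second on the wire */

        /* asynchronous: start bit + 8 data bits + stop bit */
        printf("10 bits/byte: %4.0f cps\n", line / 10.0);   /*  960 */

        /* synchronous (MNP/ARQ): the framing bits are stripped */
        printf(" 8 bits/byte: %4.0f cps\n", line / 8.0);    /* 1200 */
        return 0;
    }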
On the other side of the ledger, MNP and ARQ technology has some built-in disadvantages from a performance point of view: these modems are, after all, no longer just high speed pipes but are now full computers (usually Z80's) and are prone to modest slowdowns at the higher speeds. Nevertheless, at 9600 'baud' it is possible to obtain about 1100 cps rather than 960, and at 2400 'baud' it is possible to obtain upwards of 290 cps rather than 240. Not to forget, as I mentioned earlier, MNP is active at all times while protocol transfers are active only during a transfer - thus, line noise is effectively filtered out even while we are chatting. There are several possible advantages, and a few disadvantages - not the least of which is the lack of standards.

W: Jim, I understand what you just said, and from that it would seem that MNP is needed at both ends to do the job. Is that correct? Also, is MNP proprietary, for just Microcom modems?

D: It is obviously true that MNP (or ARQ) must exist on both ends to be functional. When my Microcom modem connects with a non-MNP modem it recognizes that fact and reverts to being a standard Hayes compatible modem. Further, when the USR HST connects with a Microcom that has MNP, there is a fallback in baud rates to 2400 baud in both modems so that they can communicate using MNP. That is likely to be overridden by the users, however, via disabling MNP or ARQ in such situations. (My opinion only.) However, it is reasonably certain that 9600 baud connections cannot be established without error correction being functional. Further, while Microcom MNP is more widely used than ARQ (USR's method), the USR method of supporting both (at different baud rates) is more flexible and argues for USR. It may be that we obtained the wrong 9600 baud modems at this time. It is part of the testing and learning process.

As to the proprietary nature of MNP, according to USRobotics, Microcom has placed at least the first three levels of MNP into the public domain. It is certain that they have been generous in licensing out at least the lower 'levels' to other manufacturers. What alternative do they have? Unless a standard evolves, these are contests that damage the future, not advance it.

W: It seems obvious that standards in this area are to the advantage of all concerned. Is there a standards organization looking into this? I would like to have 9600 baud capability and error free transmission. However, I would also like to communicate with whomever I please without having to worry about what's at the other end. Do you see what I am concerned about?

D: Of course. It is a paraphrase of my earlier discussion. I think the only 'standards organization' that is effective is called the marketplace. The huge power of the Hayes organization, because of its modem standard, is likely to deliver the telling blow to the other manufacturers - whatever Hayes finally puts out as its own 9600 baud technology may well become the new standard. Because of this I believe it is premature to buy 'long' in such security issues as USRobotics and Microcom.

W: Whenever I talk to the Hayes people at a convention or trade show, they know or say nothing about 9600 development. I do not know if this is just policy or not. I think that when they do introduce 9600, it would not necessarily mean that whatever they do will be the standard. I may be naive, but I would like to believe that will be the case. I say this only because others are active in meeting a need and they are not, or appear not to be.

D: No argument there.
My point remains valid only if Hayes does something in the near term. Intel saw what happens when you get overconfident and let the competition pass you by when they first put the 8080 micro-computer chip into the marketplace. They had it made, save only that the Z80 took it ALL away from them. It was an awfully long time before they were able to come back, and Motorola nearly did it to them again. So, while Hayes has by far the largest visible shelf space in the industry at the moment, USR (my guess) or Microcom could steal it away from lack of responsive attention on their part.

W: It would seem that you need compatible hardware above 2400 baud, and compatible software as well, for truly effective and increased performance. Does Paul Meiners' Megalink protocol tie into this somehow?

D: Megalink is an extremely efficient protocol particularly designed for the network environments like PCP and the higher baud rates. It is 'network-friendly', which means that it recognizes and honors flow control imposed by the network. For efficiency it uses 512 byte packets (4 blocks); it is a full streaming protocol, which means it does not ever stop sending unless it receives a NAK saying a packet was received in error; and it is batch oriented. It uses block 0 header information, as do all the '...link' protocols, so that the resulting file is the same size and properly time and date stamped, and it uses 32-bit CRC rather than 16.
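The 'block 0' header just mentioned is, in rough shape, something like the C struct below. The field layout here is illustrative only (the real Telink/Megalink layouts differ in their details); the point is that the exact size and the DOS stamps travel ahead of the data, so the receiver can trim any padding and restore the original date and time.

    #include <stdio.h>

    /* Illustrative 'block 0' header for the ...link protocols.
       Field sizes and order are assumptions, not the real layout. */
    struct link_header {
        char           name[16];   /* original file name                  */
        long           size;       /* exact byte count, so padding can be */
                                   /* trimmed from the final block        */
        unsigned short dos_date;   /* DOS date stamp of the original file */
        unsigned short dos_time;   /* DOS time stamp of the original file */
    };

    int main(void)
    {
        /* hypothetical example values */
        struct link_header h = { "GTLOG.ARC", 41279L, 0, 0 };
        printf("%s, %ld bytes\n", h.name, h.size);
        return 0;
    }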
I think it is time to go back to the earlier tutorial and add some more concepts. Since our last discussion there has been increased popularity in two relatively new file protocols. The first of these is called SEAlink and the second is Zmodem. You will recall from the earlier discussion that 'windowing' techniques are beginning to become available in the file transfer protocols. There is now a Windowing Kermit, for example, as well as WXmodem. These programs attempt to obtain better performance by avoiding the start-stop approach used by earlier protocols, where after sending a packet of data the transmitter would stop and wait for an acknowledgment that the packet had been properly received before sending the next one. Windowing protocols assume that the packets are being received without error and do not wait between packets. The receiving systems DO send ACK signals; it's just that the transmitter is not waiting for them. Assuming all is well, time has been saved as a result. When an error does occur, a NAK is returned to the transmitter, and associated with that signal is the number of the packet that was in error. Assuming the transmitter still has that packet at its disposal, it merely retransmits it and proceeds.

That is the limit, of course. In order to be able to retransmit a packet it must still be in the transmit buffer, and the buffer has a finite length. All windowing protocols set a maximum 'window size'. This means that there can be no more than 'x' packets sent without a reply before the transmitter is forced to wait for that reply, else error recovery would not work. This is no big deal at 1200 baud, but at 2400 and above it is really quite limiting.

SEAlink is a windowing protocol. It has two important features as advantages over WXmodem, for example: it uses a 3 byte CRC for increased reliability and it uses a window size of 6 rather than the 4 used by WXmodem. It is NOT 'network-friendly'.

What is 'network-friendly'? It is a design that recognizes and honors XON/XOFF signals that are placed on a packet switching network when that network (like PC Pursuit) becomes so busy that it is nearly choking on data. When the network places an XOFF on the line, a network-friendly protocol recognizes it for what it is, rather than a coincidental configuration of bits in a byte of data, and stops sending data! It stops until it receives an XON from the network. Why is that important? Well, it is my experience that a huge number of subscribers now exists for PCP. Forcing a network to exceed its ability to handle data could only crash the network. PCP would not allow that. They have intelligent node controllers that will selectively abort a 'hog' link that does not honor its earlier 'request' to wait a little (via XOFF). Thus, using a protocol that is not network-friendly is like saying: "I don't care if I am a hog. And, if you don't like it, then abort me." As usage continues to increase, the network will oblige that attitude.

The result of being network-friendly is twofold in terms of 'hits' against performance: 1) while you are waiting for the network to send you an XON you are not sending data, and 2) there are MANY extra bytes of control information that definitionally must be sent along with your data. Let me explain that last point, as it is not obvious, I know. XOFF and XON are simply bytes, just like the letter 'A' or the digit '4'. If no data file contained those bytes then it would be easy to implement a network-friendly protocol. Recall, however, that it is almost always true that data is sent in some form of archived or compressed format. The resulting bytes can have ANY configuration, despite what the un-archived or un-compressed file looks like. In other words, the odds are essentially 100% that the data files you send contain many bytes that look like XOFF or XON. That cannot be allowed onto the network raw. The protocol finds all such bytes and encapsulates them in what is called an escape sequence: a special byte (usually the DLE character) followed by a 'folded' duplicate of the byte that needed to be camouflaged (the XON or XOFF). Folding merely means that the byte is transmogrified in some way (usually by being sent as its complement - XORed with all 1's). Further, the DLE character itself must also be escape sequenced for this method to work. It is a random process that results in indeterminate performance for any particular file. That is, if a file had none of these three special bytes in it, then the time to transmit it would be minimal, whereas a file that happened to have many of them would have that many more bytes to send in order to escape sequence it. In such a case the second file would take longer to transmit than the first. Same protocol, different performance.

On balance, the designers of SEAlink did an excellent job. The performance of SEAlink is essentially as good as WXmodem, yet it is more reliable and uses the 'link' header. Why is SEAlink becoming so popular? Because it is a protocol supported under a BBS system called OPUS, which is quickly replacing most of the old FIDO systems all over the country. It is a good protocol. Because it is not network-friendly it does not bother with (it doesn't have to) escape coding anything. That is probably a fatal mistake for its future, particularly as the networks get crowded.
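Here is a minimal sketch of that escape sequencing - the machinery SEAlink skips and the network-friendly protocols below rely on. It follows the folding rule described above (the complement); real protocols differ in the exact fold and in which bytes they protect. The temporary file stands in for the phone line.

    #include <stdio.h>

    #define XON  0x11   /* network's "resume" byte      */
    #define XOFF 0x13   /* network's "pause" byte       */
    #define DLE  0x10   /* the escape character itself  */

    /* Sender: any data byte that looks like flow control (or like
       DLE) goes out as DLE plus a folded copy of itself. */
    static void send_escaped(unsigned char c, FILE *line)
    {
        if (c == XON || c == XOFF || c == DLE) {
            fputc(DLE, line);
            fputc(c ^ 0xFF, line);   /* fold: complement the byte */
        } else {
            fputc(c, line);
        }
    }

    /* Receiver: a DLE means "unfold the next byte". */
    static int recv_unescaped(FILE *line)
    {
        int c = fgetc(line);
        return (c == DLE) ? (fgetc(line) ^ 0xFF) : c;
    }

    int main(void)
    {
        FILE *line = tmpfile();      /* stand-in for the phone line */
        send_escaped('A', line);     /* harmless byte: sent as-is   */
        send_escaped(XOFF, line);    /* would have paused the net!  */
        rewind(line);
        int a = recv_unescaped(line);
        int b = recv_unescaped(line);
        printf("%c %02X\n", a, b);   /* prints: A 13 */
        return 0;
    }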
The next protocol of interest is called Zmodem. It is almost always found as an external protocol. That means it is included in a file (DSZ.EXE) that is shelled to by the host or terminal communications program when it is needed. As such, it requires a lot of memory compared to the internal protocols. But because of that, it is easy to install as a protocol offering on many BBS systems. There is another, more significant difference between Zmodem and the other protocols discussed so far. Instead of being start-stop in nature, and instead of being windowing, it is a streaming protocol. A streaming protocol does not expect to get ANY ACK signals back from the receiver until the transfer is complete and successful. If an error occurs it will receive a NAK, and it is up to the transmitter to ensure that it can recover from any NAK received. Thus, because it is not a windowed protocol, it never stops transmitting unless there is an error. That means it should be faster than even the windowing protocols.

Zmodem uses 32-bit CRC for reliability and it is network-friendly. In some ways it is not very user-friendly, however. For example, in every other protocol there is a way to terminate the transfer should you wish to do so while it is in progress. The usual manner is to press Ctrl-X one or two times and wait till the other end recognizes the abort request and finally stops. In the case of Zmodem you must do so 5 (!) times in a row to stop it. I suggest that not 1 user in a thousand knows that. It is a popular protocol as a result of its performance on the packet switching networks. Incidentally, they also escape sequenced a fourth byte - the SYN. It is for rather obscure reasons and, I believe, a mistake.

Included in GT PowerComm 12.20 is the newest file transfer protocol. It is called MegaLink. It uses 32-bit CRC, it is network-friendly, it is faster than SEAlink, and, like all the 'link' named protocols, it uses a header record that results in exact size and proper time and date stamping of the resulting file when received. Most interesting about MegaLink is how well it performs at the very highest baud rates. Running comparative tests of five different protocols, all sending the same 880K file to the same machine at 9600 baud, I obtained the following results:

    WXmodem     60.4 % efficiency     580 cps
    SEAlink     75.6 %                725 cps
    Ymodem      77.6 %                744 cps
    Zmodem      unsuccessful*
    MegaLink    98.5 %                945 cps

    * explained below

In order: WXmodem did so poorly for two reasons. First, at 9600 baud its window limit of 4 is the same as not having a windowing technique at all. Second, there are ACK signals coming back for each packet sent, and in the 9600 baud arena the transmission is only 9600 baud in one direction and only 300 baud in the other! (It is transparent, more or less, to the users, as the modems automatically change which direction is at 9600 baud based on the volume of data that needs to be sent in each direction at any one time.) Further, while one character (the ACK itself) at 300 baud is not significant, the ACK/NAK response is actually either two or three bytes rather than one, as you might expect. The additional byte(s) carry the packet number (and its complement).

SEAlink is being driven about as fast as it can go. It is not as fast as Ymodem here because of the small window it uses (like WXmodem) and because of the extra control bytes it must transmit. Ymodem is going as fast as it can. It is affected primarily by the start-stop nature of its function and the fact that the ACK/NAKs are coming back at 300 baud. Here we see clearly an indication that the days of the start-stop protocols are numbered.
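A back-of-the-envelope model of the WXmodem number - a deliberately crude one, since it treats each packet-plus-reply as strictly serial and ignores the modems' direction-switching delays, yet it lands near the measured 60.4%:

    #include <stdio.h>

    int main(void)
    {
        /* ~133 bytes per packet (128 data + header/CRC), 10 bits each */
        double t_pkt = 133.0 * 10.0 / 9600.0;   /* ~0.139 s, data path  */
        double t_ack =   3.0 * 10.0 /  300.0;   /*  0.100 s, reply path */

        /* prints about 58% */
        printf("efficiency ~ %.0f%%\n", 100.0 * t_pkt / (t_pkt + t_ack));
        return 0;
    }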
As an aside, Ymodem-G would have performed MUCH better because it has no error control whatever; thus it has fewer bytes to transmit and no turnaround delays. Remember, however, that error correcting modems are only capable of ensuring that the data sent from one modem is received reliably by the other. As will be seen in the discussion below of Zmodem's total failure, Ymodem-G would not have worked reliably in this test.

It is interesting that Zmodem failed altogether at 9600 baud. The reason is a little subtle, and it leads to the next thing I wanted to discuss anyway. I earlier mentioned that the MNP and ARQ modems are able to strip the start and stop bits from bytes (they must, thus, be in synchronous mode rather than asynchronous), and that they may also use a form of compression beyond that for performance reasons. I further stated that at 9600 baud the modem I was using was able to perform at 1100 cps rather than 960. This may have caused you to ponder: if the modem is connected to the computer at 9600 baud, the computer can only send 960 characters per second to the modem for subsequent transmission. So how can the modem send it any faster than it receives it? The answer is that it cannot do so. The method used to obtain these extraordinary performances is to connect your computer to the modem at 19,200 baud and utilize a buffer in the modem to match up the input with the output. Naturally, as the data is arriving at the modem much faster than it is leaving, there must be a way to stop the input. Well, you guessed it, we use flow control, just like the networks when they are getting choked. In particular, we sense whether the modem's Clear To Send signal is on or off. When off, we stop sending data to it, and when on, we instantly start cramming data at it at 19,200 baud. In this way, the modem is able to send data at 1100 cps. Naturally, the modem must be able to control its CTS signal for this to work. The US Robotics HST is capable of doing so.

I showed you what happened to Zmodem when we tried to transfer to it in excess of 9600 baud - it failed. That is not entirely the fault of Zmodem, however. Unless the receiving system is of the AT class of computers, you will probably find that, regardless of what kind of software you are using with it, the modem is faster than the computer's ability to feed it or eat from it!! Now that is amazing, isn't it? We now have modems that are paced by the computer they are attached to instead of the other way around. Incidentally, unless the receiving computer is connected to the receiving modem at 19,200 instead of 9600 baud, and has implemented some form of flow control to signal the sending modem that its buffer is full, 1100 cps transmissions to it will naturally fail when the buffer is overflowed.
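In DOS-era C, sensing CTS before each byte might look roughly like the sketch below. It assumes Microsoft C's inp()/outp() port functions (Turbo C spells them inportb()/outportb()) and the standard 8250 UART register layout for COM1; a real driver would do this under interrupts rather than by polling.

    #include <conio.h>   /* inp()/outp() port I/O (Microsoft C) */

    #define COM1  0x3F8
    #define THR   (COM1 + 0)    /* Transmit Holding Register      */
    #define LSR   (COM1 + 5)    /* Line Status Register           */
    #define MSR   (COM1 + 6)    /* Modem Status Register          */
    #define THRE  0x20          /* LSR bit: transmitter ready     */
    #define CTS   0x10          /* MSR bit: modem's Clear To Send */

    /* Hold each byte until the modem raises CTS (its buffer has
       room) and the UART itself is ready to accept another byte. */
    void send_with_flow_control(const unsigned char *buf, int len)
    {
        int i;
        for (i = 0; i < len; i++) {
            while (!(inp(MSR) & CTS) || !(inp(LSR) & THRE))
                ;   /* modem buffer full or UART busy: wait */
            outp(THR, buf[i]);
        }
    }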
************************************************************************
This is the third in a series of tutorials that I hope will be found to be useful to both new and experienced users of communications facilities.
************************************************************************

Q: Why is it that I experience so much more line noise than the people I call? It seems that I see noise on my screen with some frequency, but if I ask the party that I have called if he sees it too, I'm usually told his screen is clean. Is there something wrong with my system?

A: You are twice as likely to see line noise when you place a call to a computer as when a computer calls you. It is normal and easily explainable. While it is true that the odds are twice as great that you will experience or know about noise in the case where you have initiated the call, the incidence of noise is the same regardless of who places that call (assuming the same lines and circuits are being used in both cases). The reason for this is that when you are in Terminal mode (placing the call), your system is set to full-duplex operation, and when it is in Host mode (auto answer), it is in half duplex. Full duplex means that whatever you type on your keyboard does not get sent directly to your screen. It is sent, instead, to the communications port, and from there it travels through your modem, along the telephone lines to an answering modem, and then to a host system. The host system then sends it back to you. In half duplex, on the other hand, whatever you type is sent to both your communications port and to your screen. From this it is obvious that every character seen on your screen when you have placed a call has gone through the telephone system, while only half of what is seen on the host system's screen has been on the telephone circuit before it got there. Further, line noise can be unidirectional. That is, it may appear as data travels in only one direction or the other. Regardless, it will be seen by the terminal mode user (data must go both ways before it reaches his screen), and if it appears only on the link from the host to the terminal user, it will never be seen by the host.

Q: The last tutorial you wrote told us about MNP and ARQ modems being able to eliminate most line noise. How do they do that?

A: Part of that answer is still a mystery to me, but I know how it is done in theory at least. I will tell you why part of the answer remains a mystery in a moment. First, recall the discussion we had about file transfer protocols. All of them utilize some form of CRC mechanism to ensure that the receiving system has received all of the contents of a packet of information without having dropped any bits or picked up any extra bits. The CRC is a byte or a word of data, the result of an algorithm that 'folds' every byte in the data packet onto itself in such a way as to produce a pattern of bits that can be calculated by the receiving system as each byte of data is received and then compared with the CRC that is subsequently received. If there is a mismatch, then the data (or the CRC byte) did not get received correctly.

The MNP and ARQ modems implement this strategy within themselves. All data that is transmitted from one of these modems is re-packaged into what the modem manufacturers call 'frames' (packets) before being transmitted. Each frame is followed by a CRC byte or word that is stripped off by the receiving modem and used to determine if the frame was received correctly. Line noise simply makes that CRC check fail, and the result is an automatic retransmission of the frame.

As you can see from the above, the modem is now acting just like your computer does during file transmissions using a protocol transfer method. This is not done for 'free'. The overhead of doing so results in less than rated speeds in every case. That is, the theoretical maximum data rate of a 1200 baud modem is 120 characters per second, but MNP and ARQ modems are sending more characters between themselves than the sending system itself sent.
If there are errors, and thus an automatic retransmission of a frame, the sending modem is very likely to have to ask the sending computer to wait for it. It is estimated that this overhead (even without errors) results in a degradation of about 12% in terms of the maximum possible performance of the modem, yielding about 106 characters per second of possible throughput. To counter that built-in degradation, the modems strip the start and stop bits from each byte and send only 8 bits rather than the 10 (or eleven) that are sent by non-error-correcting modems. This increases the efficiency by about 20%. The net effect, assuming no errors, is the possibility of about 108% of rated performance. (It is possible to get about 130 characters per second rather than 120 if there are no errors - and this does not account for the additional 'compression' methods built into some of these modems.)

So, where is the confusion? Well, the above assumes there is a stream of data being sent that can be 'framed'. How the modems function when a user is merely typing one or two characters or words at a time before the other side responds is a mystery. Indeed, as each character is typed it is sent down-line. Presumably there is a timeout of some kind in the modem that says that if another character is not entered within x milliseconds, the frame is presumed complete and is sent along with its CRC. However it does it in practice, it does seem to be effective at eliminating line noise.

Q: So MNP and ARQ modems are faster and eliminate line noise. Sounds like the way to go. Are there any negatives to their usage?

A: Interesting question. Assuming that you use protocol transfer methods in addition to the error detection and correction logic of the modems themselves, I can only think of a couple of negatives at the moment. The first, of course, is the lack of standards, particularly at the higher baud rates. Second is the fact that every time you use one to call a system that does not use MNP or ARQ (the vast majority do not), you automatically lose part of their opening screen. Let me explain that. When an MNP or ARQ modem first connects with another modem, the calling modem issues a sequence of bytes asking the answering modem if it is also MNP or ARQ. These bytes include an id and an indication of the level of MNP, for example, that the caller is using. The first set of characters that come back from the called modem are then consumed by the calling modem rather than passed through to the user's screen. Thus, they are lost to your system. Very often it is necessary for the calling system user to press his Enter key in order to cause subsequent characters to be passed through the modem (telling it, in effect, to turn off MNP or ARQ). This is an annoyance to the terminal mode user, but it can be worse for the host system.

With the introduction of release 12.20 of GT PowerComm there has been some controversy as to the existence of the opening prompt it issues, in which it asks if the caller wants to use ANSI graphics or not. Many users seem mildly annoyed that their selection is not recorded somewhere so they don't have to answer that prompt more than once. What they fail to understand is that the prompt is there for several reasons. MNP is a good example of what I mean, as is the possibility of noise on the line. When an MNP call comes in, those initial characters I just mentioned 'hit' the prompt and result in its reissuance.
Q: So MNP and ARQ modems are faster and eliminate line noise. Sounds like the way to go. Are there any negatives to their usage?

A: Interesting question. Assuming that you use protocol transfer methods in addition to the error detection and correction logic of the modems themselves, I can only think of a couple of negatives at the moment. The first, of course, is the lack of standards, particularly at the higher baud rates. Second is the fact that every time you use one to call a system that does not use MNP or ARQ (the vast majority do not), you automatically lose part of its opening screen. Let me explain. When an MNP or ARQ modem first connects with another modem, the calling modem issues a sequence of bytes asking the answering modem whether it, too, is MNP or ARQ. These bytes include an id and an indication of the level of MNP, for example, that the caller is using. The first set of characters that comes back from the called modem is then consumed by the calling modem rather than passed through to the user's screen. Thus, they are lost to your system. Very often the calling user must press his Enter key to cause subsequent characters to be passed through the modem (telling it, in effect, to turn off MNP or ARQ). This is an annoyance to the terminal mode user, but it can be worse for the host system.

With the introduction of release 12.20 of GT PowerComm there has been some controversy over the opening prompt it issues, in which it asks whether the caller wants to use ANSI graphics. Many users seem mildly annoyed that their selection is not recorded somewhere so that they need not answer that prompt more than once. What they fail to understand is that the prompt is there for several reasons. MNP is a good example of what I mean, as is the possibility of noise on the line. When an MNP call comes in, those initial characters I just mentioned 'hit' the prompt and cause it to be reissued. We do not permit a default to that prompt precisely so that noise or MNP cannot carry a caller past it. By the time a Y or N is entered, the MNP handshake signalling is done. If we did not have that initial prompt, the first question the user would be asked would be his first name. Ask any Sysop how many garbage names he has in his user base. If there are any, then I can reasonably assure you that his system does not have a leading prompt such as ours to protect him from noisy incoming calls (or MNP).

Q: Is 9600 baud the theoretical limit to technology in terms of modems?

A: Hardly. It appears that 9600 'baud' stretches the reliability limits of today's unconditioned telephone system, but modems already exist that are much, much faster than that. 19,200 bits per second modems are functional on conditioned lines even now. As to limits, well, did you know that satellite communications capabilities already exist that permit the transfer of over a million bits per second? Over the past 20 years there has been a rather constant rate of improvement in all aspects of data processing technology. As a rule of thumb that is pretty close, consider this: every four years there has been a threefold improvement in performance/capacity for only a twofold increase in price. Sometimes we forget how long this trend has been in effect, but an IBM advertisement a few years back made it pretty clear. The ad suggested that if the automobile industry had enjoyed the same rate of improvement over the preceding 20 years that the data processing industry had, then every adult in this country could afford to own a Rolls Royce: it would cost only about $20 and, incidentally, it would get about 2,000,000 miles to the gallon of gasoline. For a more contemporary example, we need only look back at the original IBM PC. That machine had 320K disk drives and a 4.77 MHz clock. Today you can buy a Compaq 386 that is 17 times faster than the original PC (in throughput), and you can get it off the shelf with a 130 megabyte hard disk. The price of this newer machine is less than three times that of the original PC, closer to twice the price. No, we are not at the limit of technology, not by a long shot.
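As an aside, that four-year rule of thumb compounds dramatically, which is where numbers like the ones in the IBM ad come from. A small sketch (the 3x and 2x figures are the rule of thumb itself, not measured data):

    periods = 20 // 4            # five 4-year generations in 20 years
    performance = 3 ** periods   # 243 times the performance/capacity
    price = 2 ** periods         # 32 times the price
    print(performance / price)   # ~7.6 times the computing per dollar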
************************************************************************
Part four in a series of tutorials concerning communications and the micro-computer industry. This particular tutorial was created during a call from Mr. Bruce Aldrich and deals with the concepts of multi-tasking and the future of Local Area Networks.
************************************************************************

A: James, a few questions regarding the future of PC multi-tasking. 1) Are you familiar with a new software product for the 386 called PC-MOS (I believe)? 2) Are you familiar with DRI's Concurrent DOS (as well as a just-out update by a different name which currently escapes me)? Obviously, I am interested in exploring the area of multi-tasking.

D: If you will permit me, I would like to digress and establish some fundamentals before answering directly. I wish to talk about what has happened in the micro-computer world, and why, in order to develop a reasonable insight into where it is going.

About the time micro-computers were introduced into the marketplace, the manufacturers had recognized a long term trend that needed to be continued: about every four years the industry was providing three times the prior performance/capacity for price increases of only about double the prior cost - unit costs were decreasing. The trend had carried computers into the early seventies in three broad forms:

1) The largest form were (and still are) called mainframes. Entry level cost was about a million dollars. For several million you could get a mainframe that could support several thousand simultaneous users. It cost another million or so per year to maintain that system (air conditioning, raised floor and other capital cost amortizations, systems engineers, programmers, librarians, etc.).

2) For an order of magnitude less money, about one hundred thousand dollars, you could buy a relatively small mini-computer. For several hundred thousand dollars you could buy a mini-computer that could support several hundred simultaneous users. It took about a hundred thousand dollars a year to support the initial investment (environment, supplies, programmers, operators, etc.).

3) Finally, an order of magnitude less was required to purchase a micro-computer. Unlike the other two kinds of systems, the micro-computers could not support multiple users. On the other hand, unlike the other kinds of computers, it did not take thousands of dollars per year to maintain such a computer. There were no environmental requirements. There were no major programmer or operator costs. In fact, what you could buy for about $10,000 was a single user computer. Thus, they became known as personal computers.

Well, that all may have been obvious, but looking under the covers, so to speak, it is interesting to note that the manufacturers of these micro-computers did not expect the business community to be interested in them. Their real intent in bringing these machines to market was to gain enough manufacturing experience to get in front of the efficiency ramp-up curve and, through volume sales, to generate economies of manufacturing scale such that they could manufacture yet another generation of computers below the micro-computer level, and to do so not for one order of magnitude price decrease, but two! They wanted to be able to produce and sell micro-computers (chips, actually) priced not at $1,000 each, but at something closer to $100. The layman seems not to appreciate the fact that they were completely successful in their efforts. You cannot buy a $30,000 automobile today that does not have at least one $25 computer in it. To get the volume of sales high, the manufacturers introduced those computers as game players for home use. That generated the initial bad name for micros and the initial resistance by business to use them. Who would risk part of his business career on a decision to use game players for serious work? An interesting thing happened soon after these micro-computers began to sell in volume; they were found to be MORE reliable than their larger cousins - of course, they had much less complicated circuitry, many fewer components, and required little power. Further, and this is the MOST IMPORTANT DEVELOPMENT OF ALL, there emerged an INDUSTRY STANDARD called the S-100 bus.
This permitted many different manufacturers to enter the marketplace with new peripherals, clones and, most important, with software that ran on almost all of these early 8-bit machines. Clever engineers recognized that these machines could perform several hundred thousand machine instructions per second while the vast majority of that capability went unused, and they looked for ways to use it. Let me explain that a little more. In these early machines the typical computer program (game or wordprocessor) was doing something like this: "I need a character from the keyboard; I'll see if one has been typed. No, nothing yet. Well, I need a character from the keyboard; I'll see if one has been typed. No, ..." In other words, though it could perform several hundred thousand instructions per second, it was almost always crawling along at the speed of the human operator's typing skill - SLOW! The engineers built clever new control programs that kept several different programs in memory at the same time. Of course only one of them could actually run at any instant, but whenever the running program needed to read a character from the keyboard and found that none had yet been typed, or needed to print a character to the screen or a printer only to find that the output device was not ready, that program would be temporarily stopped and the system would move on to the next program in memory to see if it was capable of doing some work. This process of appearing to run more than one program at a time is called multi-tasking. In human terms it was extremely fast. The result: the computers were performing much more work than they had prior to multi-tasking.
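Here is a minimal sketch of that juggling act in Python (illustration only; the 70% 'no keystroke yet' odds and the two toy programs are invented):

    import random
    from collections import deque

    def program(name, steps):
        # Stands in for an early application: it repeatedly waits for a
        # keystroke, yielding the CPU whenever none has arrived yet
        # instead of spinning in a loop.
        for _ in range(steps):
            while random.random() < 0.7:   # no character typed yet
                yield                      # give another program a turn
            print(name, "got a character and did some work")

    def multitask(programs):
        # The clever control program: round-robin through everything in
        # memory, running each until it voluntarily yields or finishes.
        ready = deque(programs)
        while ready:
            prog = ready.popleft()
            try:
                next(prog)            # run until it yields...
                ready.append(prog)    # ...then send it to the back of the line
            except StopIteration:
                pass                  # that program is finished; drop it

    multitask([program("editor", 3), program("game", 3)])

Each yield is the point where the original single program would have been spinning on the keyboard; here the CPU simply moves on to the next program in memory.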
But there were some real limits to this approach. After all, there was only one central processing unit (CPU) and at most you could have only 64K of memory to be shared by all the programs. Remember I just mentioned that an important industry standard called the S-100 bus emerged from the mass sale of micros? Well, this was the enabling event that made possible the next method of increasing the performance of these small machines. S-100 bus cards were introduced that contained, on a single card, both a separate CPU and 64K of memory dedicated to that CPU. These cards became known as 'slave systems' and were plugged into the same S-100 bus as the original CPU and memory. (A 'bus' is merely a set of parallel wires with connectors on them that allow every card plugged into them to have exactly the same data available at each of its pins as do all the other cards connected to those lines.) The original CPU became known as the 'master system' or 'server' and became responsible for parceling work out to each of the other CPUs on the bus (that is why they were called 'slaves'). In this way a great deal more performance was made available from that micro than previously. I must add that it also required a special and more complicated Operating System. What I just described was the beginning of what the industry called 'tightly coupled' systems. These systems had common access to all the system memory and all the peripheral devices attached to the system. Further, a terminal was attached to each of the slave CPUs and, thus, several simultaneous users of one system were now a reality. Recall, only a few years earlier it would have cost several hundred thousand dollars worth of computer to support more than one simultaneous user. This was a breakthrough that brought serious attention from the business community.

Looking more closely at those configurations, what was happening was that all of the expensive devices that once were dedicated to a single user were now being shared. Printers and disks are the most obvious examples. Along with the sharing of devices came the obvious problems of controlling shared devices. It would not do at all if two users, having only one printer between them, were both allowed to print reports at the same time. Thus were born the buffers and sequencers that let a user think he was printing while in reality his output was re-directed to disk someplace, to await a time when the printer was no longer being used by another user. These were called 'spoolers', and the concept is fundamental to the successful sharing of printers even on the mainframes of today. Disk sharing brought with it a more subtle problem: the possibility that two users might inadvertently corrupt the information required by each other. For example, assume that there is a client file on a disk drive that contains only one record, and that that record contains the current balance due by that client to the firm. Suppose that an accounts receivable clerk at one terminal wants to post a $1,000 payment just received, while a sales entry clerk tries to post a new credit purchase of $500. Finally, assume the record starts with a balance due of $2,000. If both of these clerks happen to read the current balance due at almost the same time, each puts into memory a record saying the starting balance due is $2,000. Let's say the sales entry clerk posts the new purchase of $500 to that record and saves the resulting $2,500 record back onto the disk. Then the accounts receivable clerk posts the $1,000 check received to her copy of the current balance (which still says $2,000) and saves the resulting new balance due of $1,000 back to the disk (right on top of the other update). The obvious result is that the record of the new sale has been lost, for when either of these clerks next reads the disk record it will show $1,000 rather than the $1,500 it should. This insidious problem is solved with the implementation of what are called file and record interlocks. With these functioning properly, no more than one person may ever modify a given disk record at a time. Unfortunately, even today, most software does not consider that this might be a problem and does NOT use file and record interlocks!
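The lost update and its cure are easy to demonstrate. A minimal sketch in Python, using threads and a lock to stand in for the two clerks and the record interlock (the figures are the ones from the example above):

    import threading

    balance = 2000                    # starting balance due
    record_lock = threading.Lock()    # the record interlock

    def post(amount):
        # The read-modify-write cycle. With the interlock held, only one
        # clerk at a time may be between the read and the write; without
        # it, the two reads can interleave and one posting silently
        # overwrites the other.
        global balance
        with record_lock:
            current = balance             # read the record
            balance = current + amount    # post and write it back

    sales = threading.Thread(target=post, args=(+500,))        # credit purchase
    receivable = threading.Thread(target=post, args=(-1000,))  # payment received
    sales.start(); receivable.start(); sales.join(); receivable.join()
    print(balance)    # 1500, as it should be - the sale cannot be lost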
Along came the 16-bit machines now generically called the PC's (because IBM called their machines Personal Computers rather than game players - good move!). Besides having faster CPUs, these machines broke through the 64K maximum memory barrier and thus had much more satisfactory performance available for both multi-tasking and multi-user operation. But though they were much better equipped to handle these performance oriented uses of the computer, the software did not support it! Indeed, the user community was set back almost four years as a result of the failure of the hardware industry to work closely with the software industry to meet the needs of their users. Many believe it was a conscious effort designed to sell as much 'iron' as possible before the prices of that equipment fell due to competition. Remember, in a single user environment there are no shared devices. Every user that wanted to print something had to buy a printer. For several years after the introduction of the 16-bit micro-computers, the manufacturers of the 'old' 8-bit machines continued to advance their capabilities. As I mentioned earlier, the ability to support multiple users at the same time had been introduced BEFORE the 16-bit machines were even announced. Development continued in a new direction, however. It had already been seen that there were limits to how many cards could be plugged into the same S-100 bus without dramatically impacting the performance of that single bus through conflicts and simultaneous demands on it. The new approach was called 'loosely coupled' systems. In this method of sharing resources (notably printers and disk devices), complete computers were tied together via coax cable and elegant software. Messages were routed from one computer to the next over those cables. Sometimes they were arranged in loops or rings, without 'ends'. Sometimes they were arranged with a central computer in the middle and the remotes at the ends of cables, as 'stars' or 'spokes'. Whatever the configuration of the cabling, the result was that a user of any of the computers connected to the cable could send print jobs to shared printers (spooling as necessary) and read or write files on disk drives located on other computers (typically the master or server). And as you might expect, before long there evolved the ability to connect one such NETWORK of computers to another one located either nearby or sometimes several thousand miles away via telephone connections (these connections were called 'gateways'). And what were the bigger 16-bit computer manufacturers doing all this while? They were selling a lot of micro-computers, based largely on their names (IBM, DEC, WANG). Finally, several companies that had pioneered the development of multi-user capabilities on the 8-bit machines, along with network component manufacturers, upgraded to the 16-bit machines. 3COM, Novell, and several others announced NETWORK capability for the PC's (as if it were the best thing since night baseball and something new). It was not met with great enthusiasm by the business community. Primarily because IBM and the other computer manufacturers had been so successful in convincing these buyers that a machine on every desk was the wave of the future, and because those manufacturers did not have their own networking capability at the time, it was pronounced 'premature' to leap into a potentially 'non-standard', potentially 'dead-ended' approach such as the network manufacturers were offering. IBM soon announced their own network architecture, which was the worst of the bunch, and a few years later they admitted that there was a better way and 'pre-announced' exactly how businesses should wire their buildings to prepare for their new network architecture. It turns out that that architecture changed after IBM's more loyal clients had done as IBM recommended, but that is another story altogether. The message I am trying to get to is that it is still premature to bet on a specific network architecture (loosely coupled systems). Further, tightly coupled systems have been TOTALLY ignored by IBM, as they are simply too efficient and do not result in more than incremental hardware sales.
Finally, with the introduction of Intel's 80286 CPU the 16-bit micro-computers were able to operate very efficiently in multi-tasking mode (remember how it all started with the 8-bit machines). IBM did not make software available to run multi-tasking on their machines that was compatible with the existing software (PC-DOS). Instead, they 'introduced' UNIX as the software that permitted multi-tasking of their machines. Indeed, several 'flavors' of this massive and highly inefficient Operating System soon became available (XENIX is a UNIX derivative or clone). In other words, the industry as led by IBM began to forget the value of standards and started pushing software that was incompatible and which, not coincidentally, required major increases in the amount of memory and disk space available to it in order to operate. That strategy has not been well received, and PC-DOS and MS-DOS are still by far the dominant Operating Systems on the PC/XT/AT micro-computers. Multi-tasking needs are usually supported via off-brand software houses with products such as DoubleDOS, DESQview, and Multi-Link. IBM introduced - late, poorly done, and inefficient as usual - a multi-tasker called TopView, described by IBM as an emerging standard but in reality a failure in the marketplace. Other multi-taskers, such as Windows, are also available today. And then came the super micros, which use the Intel 80386 CPU. This machine can often run as much as 17 times faster than IBM's original PC and can support a dozen megabytes of memory and more. It is clearly the mainframe-in-miniature dreamed about only a few years ago. And still there is no standard available for multi-tasking or for tightly coupled or loosely coupled multi-user needs, and one full generation of CPU (the 286) has had important capabilities (virtual memory and protected software shells) that may never be supported with software. That is, the 286 family of machines exists almost exclusively as faster PC's, and so too do the installed 386's. Why? Well, perhaps the fact that IBM announced the PS/2 (Personal System/2) family of micro-computers has something to do with that. Perhaps in their zeal to force proprietary 3 1/2" floppy disks onto the public in order to force the clone manufacturers out of business, they failed to consider the needs of the existing computer owners. Perhaps IBM has all the answers and is about to bring them to the market after all. Which reminds me of the story of the most unfortunate woman who had been married three times and still claimed to be a virgin. Asked how that was possible, she answered that the first time she married she was young and so was her husband. On the wedding night they had partied a bit too much and, after a tragic car accident, she was left widowed and the marriage had not been consummated. The second time she married, it was her decision to play it safe. She married an older man who was financially secure and stable - didn't even drink. Unfortunately he was a bit too old, and on their wedding night he died of a heart attack just after she had removed her clothes - widowed for a second time without consummating the relationship. The third time she married an IBM salesman. He was in his mid twenties, healthy, good looking, and apparently eager. They had been married for six months before she filed for divorce, as the marriage remained unconsummated. Asked why, she said that every night it would be the same old thing: he would get into bed and tell her how good it was going to be.
Yes, Bruce, I have heard of the products you mentioned. In my opinion you should be more interested in the established Local Area Networks than in those products to satisfy the longer range needs of your client. 3COM Plus and Novell are the leading contenders, and both have highly reliable and highly efficient LAN capabilities. Multi-tasking is supported only on the server, where it is reserved for the support of the slaves, not for end-user work. Tightly coupled systems are more cost effective than a LAN but suffer from a lack of standards and a finite (small) number of simultaneous users. Wish I could be more helpful.