RISKS-LIST: RISKS-FORUM Digest  Wednesday 1 February 1989  Volume 8 : Issue 19

   FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS
   ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Contents:
  Massachusetts limits disclosure of driver's license database (Jon Jacky)
  Dead Code Maintenance (Douglas Jones)
  Re: Structured Programming (Eric Roskos)
  Random Thoughts on Redundancy (Earl Boebert)
  One last word about probabilities (Dr Robert Frederking)
  Independence and probabilities (PGN)
  Counting Engines (Mike Bell)
  Talk by Roy Saltman on computerized vote tallying (Charles Youman)

----------------------------------------------------------------------

Date: Tue, 31 Jan 89 08:53:23 PST
From: jon@june.cs.washington.edu
Subject: Massachusetts limits disclosure of driver's license database

This was sent to me by a friend who works at DEC.  What I find notable about this story is the linkage between selling personal information in government databases to anyone who asks and legitimate law enforcement activities.  It seems that in this case it is felt you cannot limit the first without hampering the second.  I can't tell from this account whether that is a technical consequence of the way the database works, follows somehow from the legalities, or is just a misconception.

- Jon Jacky, University of Washington

------- Forwarded Message

From THE BOSTON GLOBE, January 22, 1989, p. 30

                  Registry Can Share Data With Police

The Massachusetts Registry of Motor Vehicles may continue to give computerized information to law enforcement agencies, at least until there is a final ruling on a privacy suit challenging that practice, the Massachusetts Appeals Court has ruled.  The Appeals Court action modified an injunction issued last week by Superior Court Judge Joseph Mitchell.

At issue is a challenge to a decades-old Registry practice of which most holders of driver's licenses were not aware.  For a small fee, the Registry has sold information about Massachusetts motorists, including Social Security numbers, to private businesses and anyone else who asked.  The Registry - which has perhaps the largest computerized database in the state - also routinely shares data with hundreds of law enforcement agencies and with registries in other states.

Citizens concerned about privacy filed suit against the Registry almost four years ago in Middlesex Superior Court to block dissemination of their Social Security numbers and other personal data.  Judge William Welch threw the case out on the grounds that the data collected and stored by the agency are "a public record."  But in September, the Appeals Court disagreed and gave the Registry 60 days to show in Superior Court why the information should not be kept confidential.

Last week in Middlesex Superior Court, Judge Mitchell issued a permanent injunction declaring the Registry's information about a motorist's age, height, or Social Security number "personal data" that may not be disclosed.  The Registry was banned from "distributing, offering, selling or making available" the data to anyone outside the Registry.

That ruling alarmed state officials, who said it would cripple law enforcement efforts if the Registry could not share information with police agencies.  The attorney general's office, representing the Registry, warned that the agency would have no choice but to disconnect completely from the Criminal Justice Information System, which is connected to 500 Massachusetts and local police agencies.
The system handles 125,000 requests for information a day - 25,000 of them involving Registry data.  "If the permanent injunction is not stayed, there would be no effective enforcement of the motor vehicle laws within this state or any other state," testified Peter Larkowich, general counsel to the state agency that runs the information system.  Without the data, he said, police could not identify motorists who cause or witness accidents and could not issue tickets "with any degree of certainty."

Robert Hernandez, the attorney representing the citizens in the privacy suit, said his clients did not want to appear to be "cop killers," so they negotiated a partial stay of Mitchell's ruling.  "Basically, the feeling was that no judge was going to allow something to go on that would endanger the lives of law enforcement officers," Hernandez said.

He said the Registry would now have to warn motorists seeking a new license or renewing an old one that some of the information will be available to police.  The Registry will also have to inform motorists that they can request a randomly chosen number for their license number rather than their Social Security number.  The Appeals Court also said it would hear appeals from the citizens and the Registry on an expedited schedule.

------------------------------

Date: Tue, 31 Jan 89 13:04:56 CST
From: Douglas Jones
Subject: Dead Code Maintenance

One of the benefits I get from living in Iowa City is that many of my students have worked for one or the other of the local divisions of Rockwell International.  One of them, who had worked for the Government Avionics Division on the Global Positioning System project, related the following tale to me.

Global Positioning System receivers are boxes that use information broadcast by a system of satellites to deduce the latitude, longitude, and altitude of the receiver.  These boxes are built into a variety of weapons systems now in use by the United States and its allies.  The box contains a radio receiver to listen to the satellites, and a fairly powerful computer to interpret the radio signals.  The computers in the current production GPS receivers are programmed in Jovial, although a new generation programmed in Ada will no doubt appear someday.

My student was part of one of the teams that maintained the GPS code.  After some time on the job, he began to realize that the code his team maintained was never executed and had never been executed in the memory of any team member.  That is, an entire team of programmers was being paid to maintain dead code.  Despite the fact that the code was dead, the team was required to produce the entire range of documents supporting each release of the code, and to react to various engineering change requests.

Not too surprisingly, my student became demoralized and left the company, but not before learning enough to form the following hypothesis about how his situation had come to be.  He guesses that, once upon a time, there was a prototype GPS system in which his module actually served some purpose and came to be executed from time to time.  The structure of this system was presumably used to define Rockwell's contractual relationship with the Department of Defense, and as a result, his module gained a legal standing that was quite independent of its function in the GPS system.  As time passed, the actual calls to procedures in his module were eliminated from the GPS system, for one reason or another, until the code was dead.  At first, nobody knew it was dead.
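[A side note, not part of Jones's account: finding such a module is, in principle, a call-graph reachability check - mark everything reachable from the program's entry points and whatever is left over is dead.  A minimal sketch of the idea in Python follows; the toy call graph and procedure names are invented for illustration and have nothing to do with the actual GPS code.]

    # Illustrative dead-code check: find procedures unreachable from the entry points.
    # The call graph below is a made-up toy example, not the GPS system's structure.
    call_graph = {
        "main":            ["read_satellites", "compute_fix"],
        "read_satellites": ["decode_signal"],
        "compute_fix":     ["decode_signal"],
        "decode_signal":   [],
        "legacy_module":   ["legacy_helper"],   # nothing calls this any more
        "legacy_helper":   [],
    }

    def reachable(graph, entry_points):
        """Return the set of procedures reachable from the given entry points."""
        seen = set()
        stack = list(entry_points)
        while stack:
            proc = stack.pop()
            if proc not in seen:
                seen.add(proc)
                stack.extend(graph.get(proc, []))
        return seen

    live = reachable(call_graph, ["main"])
    dead = set(call_graph) - live
    print("dead procedures:", sorted(dead))   # -> ['legacy_helper', 'legacy_module']

[As the story goes on to explain, the reason nobody noticed had more to do with how change notices and contracts were organized than with any technical difficulty in detecting dead code.]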
The project was big enough that it was not uncommon for the people working on one module to have at best infrequent communication with those who called the procedures in the module, and engineering change notices that required changes to the module kept everybody busy.  Engineering change notices would not have arrived if the actual structure of the program had been used to determine who needed to participate in a change.  In fact, the notices were distributed based on many other criteria, including the contractual descriptions of the modules.  The team was kept quite busy bringing their code up to date with the changes, testing changes using locally developed scaffolding, and waiting for any report of failures from the global system tests.  The discovery that the code was dead appears to have resulted from its passing global system tests even when it was obviously in error.

Once my student found that the code was dead, he asked his managers why his efforts were being wasted on it.  Their answer was that it was less expensive to maintain dead code than it was to rewrite the contract with the Department of Defense to eliminate the job.

Douglas W. Jones, Department of Computer Science, University of Iowa

------------------------------

Date: Wed, 1 Feb 89 09:45:05 EST
From: roskos@ida.org (Eric Roskos)
Subject: Re: Structured Programming

> What REALLY happens when a group of structured programmers tries to
> develop a large program?  Usually they argue about how the program should
> be indented, what the comments should be like, how the subroutines
> should be nested, ... etc.

If one believes that this is what "structured programming" is about, it is no wonder that there are such problems with it.

I wish I could give you some "war stories" about unstructured vs. structured programming, but unfortunately all the software I've worked on has been proprietary, and I've encountered only a few insightful people who could tell from the outside which of the very large-selling software packages were internally well-structured and which were not.  Suffice it to say that there is often a strong correlation between how well-structured a program is and how easily it can be adapted to meet new requirements and host system enhancements.  Often it shows in the product architecture (how the features visible to the user interrelate) too.

There are at least two problems.  First, "structured programming" seems to be one of those things you can learn only through experience; you discover it slowly through years of practice, during which time some of the things that are taught as "structured programming" start to make some sense, but only as a superficial veneer over what is really involved.  Second, the sentiment expressed in the original article - that "structured programming" is a bad thing in the development environment - is widespread, and the resulting disasters (which are very real to those who have had to maintain large programs) are often covered over by the developers, so that they are unknown on the outside.  Sometimes a carefully worded manual or an assortment of appealing new features can hide irreparable flaws in a program.

Problems can result because, for instance, a programmer hardcoded a machine-specific feature throughout a several-hundred-thousand-line program instead of isolating it in one place.  Or because the code was made dependent throughout on the size of some data structure, which was always referenced with a hard-coded constant offset rather than through some more "structured" reference.
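[To make the second kind of problem concrete, here is a small hypothetical sketch in Python; the record layout, field names, and offsets are invented for illustration and are not from any program Roskos describes.]

    # Hypothetical record layout: [id, x, y, altitude].  The offsets are invented.
    record = [42, 1.5, 2.5, 10000.0]

    # Unstructured style: the magic offset 3 is repeated wherever altitude is needed.
    # If a field is ever inserted before it, every such reference must be found and fixed.
    if record[3] > 9000.0:
        print("high altitude:", record[3])

    # More structured style: the layout is named in exactly one place.
    ALTITUDE = 3   # single point of change if the record layout ever shifts
    if record[ALTITUDE] > 9000.0:
        print("high altitude:", record[ALTITUDE])

[The two versions behave identically; the difference only appears when the layout changes, which is exactly the maintainability point being made here.]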
There are a lot of examples of this sort of problem, which has nothing to do with whether the program is "provably correct" or with how widely it sells.  But it does have a significant bearing on how long the program will last without having to be rewritten.

Lately there seem to be new paradigms emerging (such as "object-oriented programming") which are intended to make some of these principles more obvious.  They seem to have pros and cons, particularly in terms of efficiency, but perhaps the fact that there is so much programming to be done, and so few really experienced programmers out there, makes it necessary that some easier-to-understand concept take the place of "structured programming" for the most part.

Just as there are not very many really well-designed products of any sort, there are not very many well-structured programs, and thus people tend to blame "structured programming" for what is, in reality, simply bad programming in a superficially neat and tidy style.

Eric Roskos, IDA (roskos@CS.IDA.ORG or Roskos@DOCKMASTER.ARPA)

------------------------------

Date: Wed, 25 Jan 89 14:22 EST
From: Boebert@DOCKMASTER.ARPA
Subject: Random Thoughts on Redundancy

From "Flight" magazine, many years ago:  The Chairman of Rolls-Royce [which makes aircraft engines] was asked why he always flew the Atlantic in four-engined aircraft.  His reply: "Because there are no five-engined aircraft."

The same magazine once noted that a mechanical engineer looks out an aircraft window, sees four engines, and relaxes with a drink; the expert on fuel contamination looks at the same sight and tightens his or her seat belt.  So maybe the only fundamental truth is that we are all prisoners of our metaphors, and never more so than in the software business.

On a less philosophical note, people interested in the issue of engine redundancy might find it worthwhile to look up the chapter in "Spirit of St. Louis" where Lindbergh presents the tradeoffs that led him to choose a single-engined aircraft for his attempt.

To get an idea of how such tradeoffs go, first consider my experience working on the software design for a generic triple-redundant autopilot, where I discovered that 85% of the logic was in redundancy management.  Is this a step forward in reliability?

Then look at the Honeywell autopilot for the Saab JA37b fighter.  This was, as far as I have been able to tell, the first digital fly-by-wire system ever put in an operational aircraft.  It was a single-channel system, with flight-critical functions backed up in the air data computer, and analyzed to a fare-thee-well (Honeywell had to *demo* that all possible short circuits between two arbitrary pins resulted in an orderly transition to the backup mode).  Last I heard, a couple of hundred of them had been flying for over a decade without incident.

So redundancy is neat, but simple and well-understood ain't bad either.

------------------------------

Date: Wed, 1 Feb 89 17:59:14 -0100
From: ref@ztivax.siemens.com (Dr Robert Frederking)
Subject: One last word about probabilities
Organization: Siemens AG in Munich, W-Germany

At great personal RISK to my ego, let me suggest that nobody has gotten the numbers right yet, even ignoring questions of whether independence, etc., holds in this case.  For a three-engine plane, where p is the probability that a single engine fails:

    P(all three fail) = p**3
    P(any 2 fail)     = 3p**2(1-p)
    P(any 1 fails)    = 3p(1-p)**2
    P(none fail)      = (1-p)**3

This has the rather important property that all the probabilities add to 1.
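[A quick numerical check of these formulas - not part of Frederking's posting, and the per-engine failure probability below is an arbitrary illustrative value:]

    # Illustrative check that the four cases cover everything and sum to 1.
    p = 0.01  # assumed per-engine failure probability, chosen only for illustration

    p3 = p**3                  # all three engines fail
    p2 = 3 * p**2 * (1 - p)    # exactly two fail
    p1 = 3 * p * (1 - p)**2    # exactly one fails
    p0 = (1 - p)**3            # none fail

    assert abs((p3 + p2 + p1 + p0) - 1.0) < 1e-12

    # If the plane can fly on any two engines, it is lost only when two or more fail:
    p_crash_3 = p2 + p3        # algebraically 3p**2 - 2p**3
    print(p_crash_3)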
The key is to realize that the plane can be in eight states with respect to engine failure, that each state's probability is obtained by multiplication, and that you add together all states that are essentially equivalent (differing only in which engine(s) are out).  Thus P(crashing) = 3p**2 - 2p**3, if the plane can fly on 2 engines.

Similarly, for two engines:

    P(both fail) = p**2
    P(one fails) = 2p(1-p)
    P(none fail) = (1-p)**2

which also happily adds to 1.  P(one or both out) = 2p - p**2, which (I believe) is always bigger.

As an aside, as I understand it, the FAA requires all airliners in the US to have more than one engine, and to be able to fly to a safe landing with one out.

Robert Frederking                        Siemens AG/ZFE F2 INF 23
ARPA: ref%ztivax@siemens.siemens.com     Otto-Hahn-Ring 6
  or: unido!ztivax!ref@uunet.UU.NET      D-8000 Munich 83, West Germany
UUCP: ...!unido!ztivax!ref               Phone: (-89) 636 47129

------------------------------

Date: Wed, 1 Feb 1989 10:20:55 PST
From: Peter Neumann
Subject: Independence and probabilities

It must be remembered throughout that the classical binomial probabilities assume independence.  Cross-wiring throws that assumption for a loop.  Subsequent to the 8 January crash of the British Midland Airways 737 (where speculation still focuses on a wiring defect), FAA inspections have now turned up cross-wiring in engine or cargo-hold fire warning systems in at least four other planes.  This is a particularly insidious type of problem, because it would normally be significant only in time of emergency, and under normal operation would have no effect (and remain undetected).

------------------------------

Date: 27 Jan 89 12:39:34 GMT
From: Mike Bell
Subject: Re: Counting engines
Organization: Cambridge Consultants Ltd., Cambridge, UK

Of course, an increase in the number of aircraft engines actually *increases* the chance of catastrophic failure: if each engine has a probability p of losing a turbine blade in such a way that fuel lines are severed and a major fire ensues, then an aircraft with N engines has a failure probability of clearly N*p, so a one-engined jet is clearly safer, and a glider is safer still. :-)

And then again, the complexity of the systems interconnecting the engines increases non-linearly: you can't have cross-wiring faults between engines on a single-engined aircraft.

The point is simple: duplicating part of a system doesn't *guarantee* an improvement in overall safety, and indeed can reduce it.  ("This nuclear sub has two reactors so that if one should melt down, the second can...")

-- Mike Bell -- , or even <...!ukc!camcon!mb>

------------------------------

Date: Wed, 01 Feb 89 11:14:38 EST
From: Charles Youman (youman@mitre.org)
Subject: Talk by Roy Saltman on computerized vote tallying

Roy G. Saltman of the National Institute of Standards and Technology (formerly NBS) will be speaking on the topic "Accuracy, Integrity, and Security in Computerized Vote Tallying" at the February meeting of the Washington, DC Chapter of the ACM.  The meeting will be held on Thursday, February 16, 1989, at the Rosslyn Holiday Inn, 1850 North Fort Myer Drive, Arlington, Virginia.  The talk will begin at 8:00 p.m.  There is an optional dinner preceding the talk, starting at 7:00 p.m.  Reservations are required only for the dinner (cost $14) and can be made by calling (202) 659-2319 by noon on Tuesday, February 14.  The talk summarizes an extensive report on this subject recently published by Mr. Saltman at NIST.
The talk concerns protections against manipulation and fraud in the use of computer programs and hardware in computerized vote tallying.  Recommendations concerning hardware, software, operational procedures, and internal control concepts are presented.

   [Saltman's excellent report was cited in RISKS-7.52.  PGN]

------------------------------

End of RISKS-FORUM Digest 8.19
************************