Polymath

A (mostly) technical weblog for Archivale.com

February 26, 2010

Recapture Rotary’s Spirit and Strength

Filed under: Uncategorized — piolenc @ 7:45 am

I wrote this six years ago. I am happy to say that some things have improved, at least as far as our tiny club is concerned, but there is much left to accomplish before Rotary is Rotary again.

Glenn E. Estess, Sr.
Rotary International President, R.Y. 2004-2005
c/o Rotary International
Evanston, Illinois, USA

Thursday, May 13, 2004
Iligan City, Philippines

Dear RI incoming President Estess,

Although I doubt that this letter will ever reach your desk, I am addressing it to you because you will soon have executive responsibility for the actions of Rotary International, and therefore will be responsible for recognizing and correcting its failings.

Today I received an urgent email message from the incumbent President of my Club, forwarding something called the 2004-2005 Club Goal Report Form. I opened the file eagerly, expecting to find guidance in formulating and implementing my Club’s service goals for the period of my upcoming presidency. Instead I found yet another demand for money. This was the RI equivalent of the U.S. Internal Revenue Service’s form 1040-EZ, inviting me to multiply the number of members in my Club by 100 dollars per member and to commit my fellow members to cough up the result, which the form arrogantly terms a “minimum goal.” This, in a country where a skilled worker makes $3 per day and a qualified medical doctor gets $2 per consultation. Our members’ generosity in past and current Rotary Years is not even mentioned; instead, Rotary International demands more of us, as if by right.

I hope you will forgive me for stating what should be obvious, but this is not what Rotary is about. Rotary consists of individual Rotarians, who are members of their respective Rotary Clubs. Rotary International was created by the Rotary Clubs to serve them and their members, and I am quite certain that our founder would be very disappointed by the current state of that relationship. The servant, it appears, considers itself the master. Not a very good one, alas.

The administrative services which are RI’s primary job seem to have been downgraded to the status of an afterthought. In the short time that I have been a Rotarian, my Club has been terminated three times for alleged non-payment of dues and suspended once for alleged failure to pay subscription fees for a local Rotary publication. In all but one case of termination for nonpayment of dues, the cause was Rotary International’s inability, or more accurately unwillingness, to correct errors in its own records; this was also an aggravating factor in the one instance in which my Club actually owed money. The burden of correcting those records was placed on us, yet our documentation was, as often as not, simply ignored. When a correction was finally made, the respite was brief, as new errors would be introduced the following semester. If the cost of long distance telephone calls, faxes and photocopies were added to our tally of contributions, I assure you that the per capita figure would be quite impressive! Yet we never have anything to show for it but more grief and neglect.

I have only been a Rotarian for four years, yet in that time the deterioration in RI’s service and its degeneration into a money mill have been marked. As a new Rotarian, I have attended as many Rotary functions as I could, and I cannot help noticing that each new DISTASS and DISCON has given less emphasis to the delivery of services to our respective communities, and more exhortations to feed the beast in Evanston. Rotary Clubs are now rarely praised for their effective projects; only their capacity to elicit donations to the Rotary Foundation counts. They are recognized, if at all, only for the number of dollars that they send far away from the scenes of their good works.

At first, I checked my growing irritation by reminding myself that the money comes back in the form of grants, and that my Club, or rather the communities that it serves, had been a major beneficiary of the Rotary Foundation in the past. But as the demands for remittance have increased in frequency and intensity, it has also become harder and harder to get  anything back. The final insult was delivered at the recent DISCON, which was almost entirely given over to fundraising for the Rotary Foundation. When the time finally came to report on the Rotary Foundation’s generosity toward us, we were reminded that 3H grants had been suspended indefinitely, and baldly informed that delivery of the Matching Grants in which we reposed special hope was also suspended. It seems that a Rotary District in Pakistan had failed to submit a required report in a timely fashion. Rotary International, we learned, is now practicing collective punishment—penalizing Rotarians by denying funding to their approved projects over delinquencies occurring thousands of miles away and beyond their control.

I am a very angry Rotarian, Mr. President, and I strongly suspect that others share my anger. My fellow Rotarians and I joined Rotary so that we could participate in good works that were beyond our individual means to implement, and in some cases beyond our individual capacities even to conceive. We are proud of having played our small part in eradicating polio, happy to use scarce vacation time to squirt Vitamin A into the mouths of schoolchildren, thus protecting them from precocious blindness. Every time we climb Mount Agad-Agad, we see growing evidence of the success of our reforestation project there. Our own projects—that is, those carried out with local resources only—include adopting two local public schools and providing books and equipment needed to set up a vocational training program.

We did not imagine that Paul Harris’ vision would be usurped by an incompetent, gluttonous bureaucracy, and not all of us are prepared to accept it.

Here, then, are MY goals for the coming Rotary Year:

I intend to complete my Paul Harris Fellow status by donating $400 to the Rotary Foundation. That is my personal goal—a matter of personal pride in finishing what I have started. I cannot in good conscience, however, exhort my fellow Rotarians to do likewise, or to contribute any amount to an  organization that has manifestly forgotten its responsibility to us. If they choose to contribute, well and good, but I will not lend my name to the process, because I no longer believe in its legitimacy or its compatibility with the Rotary ethic. Accordingly, I have entered zeros in the 2004-2005 Club Goal Report Form, forwarded herewith. This is intended to make the point that it is not Rotary International’s prerogative to set goals for the Rotary Clubs that it was created to serve; rather, the reverse is true.

As President, I will encourage our Board and our members, in everything that they contemplate, to seek out and meet local needs, relying for that purpose primarily on local resources. While I will not actively discourage applications for Matching Grants for our various purposes, I will not rely on Rotary International for help. As the proverb says, “He who expects little is rarely disappointed.” We will of course continue to lend our efforts to the worldwide, funded programs of Rotary International (e.g. Polio Plus) insofar as they are compatible with local needs.

Having refused to set arbitrary goals for financial contributions, I will also refrain from participating in the membership numbers game. The reason is simple: Rotarians recruited for the sake of boosting membership numbers rarely stay the course. The only reliable means of attracting serious members is the Club’s service activities—it is our reforestation project on the summit of Mount Agad-Agad, for instance, that initially attracted me. More services delivered means more solid members in the medium term, so here again my emphasis will be on creative solutions to local problems, implemented with local resources. Build a better project (and more of them) and they will come. I will also push hard for the establishment of at least one Rotaract club, to serve as a training ground for future Rotarians.

While I prepare for a year of furthering the goals of Rotary despite Rotary International, I hope that you will prepare to dedicate your RI Presidency to restoring the service ethic to Rotary International, thus making it once again worthy of our contributions.

Very sincerely,

F. Marc de Piolenc
President (2004-2005)
Rotary Club of Metro Iligan
R.I. District 3870
Iligan City, Philippines

February 23, 2010

Education for a Strong Republic

Filed under: Personal — piolenc @ 11:00 am

Address at Puga-an High School
Barangay Puga-an, Iligan City
March 31, 2004

Principal Dante Sumagang; staff and faculty of Puga-an High School; students; graduates; ladies and gentlemen:

Today it is my honor and pleasure to address you on the topic of “Education for a Strong Republic.” This is a question of vital importance to the future of this and many other countries.

The word “republic” has had various meanings through the ages since it was first used in the ancient world. Even today, it has different meanings in different places, so it behooves us to define our terms carefully before proceeding. Very simply, a republic is a country governed by its people.

A republic has a government, but that government has no authority or power of its own. Instead, it is the agent and servant of the people. Like any agent in the business world, it has a contract with the people that defines its duties and powers. We call that contract a constitution.

So one distinguishing feature of a republic is the relationship between the people and their government, which is that of master to servant, or better still of employer to employee. Let’s borrow an example from the world of employment to illustrate that relationship, and to see what can go wrong with it.

I like to use the analogy of a security guard hired by a department store. At first, his job is straightforward: he scans the crowd in the store for pickpockets and purse-snatchers, and makes sure nobody brings weapons or explosives into the store. If a customer in the canteen gets drunk and starts to annoy the other patrons, the guard gently removes the rowdy from the store.

So far, so good. The guard learns his duties and performs them conscientiously and discreetly. The customers and salespeople go about their business, hardly noticing the guard unless there is a disturbance. And there are very few disturbances, because the guard is vigilant. But the owner is not satisfied. After all, that guard is being paid for full-time work, but he spends most of his time just sitting and staring. Pretty soon, the owner starts to assign the guard other tasks “to keep him busy.” Customers often come into the store burdened with purchases that they have made elsewhere. For their comfort and to discourage shoplifting, the owner sets up a check-room where customers can deposit their packages as they come in and retrieve them when leaving. Who will accept the parcels and later give them back? The guard, of course—after all, he isn’t doing anything most of the time. The owner congratulates himself that he is saving money, because he has not been obliged to hire a new employee to provide the new service.

All seems well until one day, while the guard is busy checking in packages, a notorious thief enters the store and steals a lady customer’s handbag while she is paying for her purchases at the checkout counter. The thief escapes before the guard can be alerted. Human nature being what it is, the owner does not blame himself; instead, he scolds the security guard! When another theft occurs, he fires that guard and hires another. The new guard is just as overburdened as the old one, so the same thing continues to happen. At last, the owner admits that the guard has too many tasks to perform…and hires an additional guard. Naturally, this increases the store’s operating costs, so the customers are required to pay higher prices for the goods that they buy there. If he had considered the matter logically, the owner would have set up a self-service checkstand – perhaps with lockers – and given the guard only his original duties to perform, then lowered his prices to their former levels. The result would have been greater security at a lower cost. Not to mention better sales due to more competitive pricing.

We see the same error being committed on a much larger scale in the world at large. In every republic on Earth, citizens are demanding more of their governments than they can reasonably perform. When government fails to meet expectations, as it inevitably must, citizens assign it more resources…and even more responsibility. Few of us pause to consider whether we could perform these functions for ourselves at a lesser cost, and fewer still consider that a government that can give us everything we want must necessarily be able to take from us everything that we have. Like the store owner in my little fable, we are always demanding more, always getting somewhat less than we demand, and always paying more for it than it is worth.

In the process, we are creating a weak republic. Actually, we end up with something that is not a republic at all, but …something else. A strong republic, then, is one whose government has very few tasks to perform. Those tasks are tasks that the citizens either can not or should not perform for themselves. Because the government’s duties are limited and clearly defined, it is able to learn them and do them effectively and consistently. A strong republic, unlike those we know, has very few laws, but those laws apply equally to everybody and are enforced consistently and fairly.

A strong republic must be founded on strong citizens. I don’t mean that we should all be weight-lifters, but we must be good citizens, and that means understanding our responsibilities to our fellow citizens and our prerogatives with respect to our government. And that, ladies and gentlemen, is where education comes in.

What can education do to promote a strong republic? Quite simply – everything. Not all of education occurs in school, of course. The absolute fundamentals of a free man or woman’s preparation for life—respect for the lives and property of others, honesty, maturity, being true to one’s given word—all of these moral and ethical traits are best learned at home, by the example of one’s parents and one’s elders. But the schoolroom is still the training ground of the effective future citizen.

The history of Mankind is, by and large, the history of failure. Failure to act appropriately and to make the right choices, but above all failure to learn from the earlier failures of others. The academic subject that teaches us to avoid repeating these failures is of course History, which would seem to make it the most important subject. But it isn’t. If we are to learn the lessons of history—or of any other subject matter—we must first have the language skills that make us able to understand what we read and hear, analyze it and discuss it intelligently with others. If we can read effectively and with full understanding, then we can teach ourselves the things that our teachers and our parents cannot teach us or don’t have the time to teach us. Without language skills, nothing else is learnable. By my reckoning, the preschool teacher who gives a child his earliest acquaintance with letters and numbers is perhaps the most important link in the long chain of instruction that leads him to graduation.

But which language? I believe it must be English. I say that knowing that this claim will not please everybody.

Am I claiming that English is a better language than any other that we might learn? No. The science of linguistics teaches us that all natural human languages are equivalent, and my limited experience seems to confirm that finding.

Do I believe that English speakers are endowed with superhuman intellect, that they are somehow better than those who do not speak English? Again, no. I’m sure that any one of us could think of several stupid people who speak English very well indeed. I see some of them on television almost every night.

What the English speaker does have, and what the non-English-speaker is deprived of, is access to the largest body of literature, on every subject, that has ever existed. English has also become the de facto language of commerce and of diplomacy, where it has displaced French. It is the official language of air commerce; when a Nigerian airline pilot asks for clearance to land at Orly, near Paris, he makes his request and receives his instructions in English. English is also the language of the sea. If you go to sea, and expect to do anything more rewarding than swabbing the deck or cleaning out the head, you must know English.

Those responsible for the education system of this country recognize these facts, though they may not be happy about them. And so, as a result, the language of higher education—university and postgraduate studies—is also English.

I see that English is taught as a separate subject in most schools, and taught as well as the resources of the school permit. But another vital subject—Civics (which now includes what little history is included in most curricula)—is taught in another language that is just as foreign to 30% of the population as English is. Still other subjects are taught in a mixture of English, Tagalog and the local language.

What does this do to our hopes for a strong republic? One of the few lasting benefits of the colonial period was the imposition of a common language for law, instruction and commerce. Better yet, that language as of 1946 was English, then as now a window to the world. Since that time, education has been Balkanized—fragmented. When English was the lingua franca, every student, no matter what dialect he spoke at home, had the same obstacles to overcome to master his formal schooling, and the first was learning English. As he gained confidence in that language, he found greater and greater ease in studying all his other subjects, which gave him a very strong incentive to master English.

Now, however, when a non-Tagalog-speaker enters school he finds that he must learn two foreign languages to finish his studies. What is more, he has little incentive to do well in English, because a good grade in English contributes only one-fifth or so of his grade average. To pass Civics he must do well in Tagalog, and in most other subjects he can be sure that his teachers will switch to the vernacular to explain any difficult point. Learning a foreign language is difficult; to call forth his best effort, a young student must see an immediate benefit. That benefit is not offered in the non-Tagalog-speaking provinces.

And now consider the burden on schools. In the United States of America, which spends more on public schooling per student than any other nation on earth, most public Elementary, Junior High and High Schools have given up teaching even one foreign language. Yet every provincial school here—even those whose students must build their own desks—is expected to teach two. What amazes me, and astounds the Peace Corps workers I have talked to, is how well many schools manage to perform this demanding task.

Even so, provincial High School graduates who do not speak Tagalog at home are at a disadvantage in seeking higher education and better opportunities, unless of course their parents were wealthy enough to send them to private schools or to hire language tutors. But it is the poor who need higher education most; it is they who need to qualify for better jobs. And the poor of this and other remote provinces are held back not so much by their poverty (there are scholarships available, after all!) but by the extra difficulty that they have in qualifying for entrance to university, or for a high-paying job with promotion potential.

This linguistic ghetto—created, ironically enough, in the name of national unity—contributes to discontent and a feeling of “us versus them” that has no place in a free, strong republic. It limits access to the many sources of instruction that make it possible for any determined citizen to fit himself for his role in the republic. In short, it ensures that much of the country’s population consists of second class citizens, who will have a more difficult time than most in improving their lot. This creates the temptation to employ short cuts, and some of those short cuts involve banditry and other forms of crime and delinquency. These in turn make it necessary to devote more resources to government, deprive the citizens of more of their liberties and thus further weaken the body politic.

What can educators do? Well, resources may be unevenly distributed, but ability and ambition are not. Some of the faculty members and administrators here today will someday gain high positions in the Department of Education—positions from which they can influence policy and determine the nation’s future by forming its future leaders. Some of the students and graduates seated here will gain government office; they, too can have an influence.

I hope that they will remember, then, how vital to our future it is to have a country ruled wisely by its people, and administered by a government that does few things, but does those things very well indeed.

Thank you!

Address to the School of Computer Studies, MSU/IIT

Filed under: Engineering,Personal — piolenc @ 10:16 am

This was written and delivered nearly six years ago. Unfortunately, the part about the consequences of using unverifiable software in life- and mission-critical applications has come tragically true. My other prediction—that this problem would be recognized and that packagers and end-users would insist on open-source software—has still not come true, and there is no sign of it happening.

Think about that the next time you go in for a CT scan.

Address to the graduates of the School of Computer Studies, Mindanao State University/Iligan Institute of Technology
Recognition Night
29 March 2004

Current trends in information technology, and their implications for young professionals and end-users of all ages.

Good evening.

While preparing these remarks, I realized that it has been almost 35 years since I first sat down in front of a computer. Actually, that is not strictly true; the computer was 100 miles away, at Dartmouth College’s computer center. I was sitting in front of a very tired Teletype terminal at my prep school in Andover, Massachusetts, laboriously typing a 20-line BASIC program on its user-hostile keyboard, anxiously watching the fuzzy letters – all uppercase, of course – appear on the rough roll of yellow paper. At the same time, a pattern of holes was being punched in a narrow strip of paper tape. When I had finished this work, which was done offline, of course, I fed the tape through the reader attached to the terminal, where it was converted into frequency-shift keying and sent to the expensive giant at Dartmouth. The terminal typed “READY.” I gulped and typed “RUN.” A few seconds later, the terminal came to life. I awaited with bated breath the outcome of my first computer job. At last, the print head moved, and printed

JOB TERMINATED
CPU USAGE 3.0 SECONDS

…my entire CPU time allocation for that month.

I had programmed my first endless loop.

In those days, it was clearly understood that “real” computers would always be large and expensive and require a dedicated staff to keep them running and to see that they were put to the most profitable use. For most computers, that meant “time sharing”—allowing remote users like me to submit batch jobs and obtain the results for a set cost per CPU second. The trend in computer design was toward larger, more powerful machines capable of accommodating a larger number of users, and thus of making more money for their owners.

If anybody had made the prediction at that time that within ten years, computers would appear that would give a single user – the owner – more computing power on his desktop than the biggest 1969 mainframes, he would have been laughed out of the room. That was science fiction – and not very good science fiction. It was the same kind of implausible literature that gave Dick Tracy a videophone that he could wear on his wrist. Nonsense!

But that is exactly what happened. In my collection, still running but no longer in use, is a Polymorphic 8813, which came out in 1977. It came with a BASIC interpreter, a FORTRAN compiler, and even some prepackaged business software, including WordMaster, a primitive word processor. Its 8-bit microprocessor was clocked at 256 KILOhertz – about a thousand times slower than today’s machines. There was no hard drive. Later models had a 5-MB capacity eight-inch hard drive available as a special option, in a separate cabinet with its own power supply, but the cost was too high for most users, including me. Even so, it worked, and I continued to use it even after I had bought a more modern machine—a Kaypro 4 running CP/M. The “Poly” continued to serve until I assembled my first IBM AT-compatible machine in 1990. I even bought a 300-baud Hayes modem for it so that I could do my university programming projects at home, upload them to the VAX at UCSD and get back the results. Finding a parking space at UCSD was almost impossible at times and the computer lab was always crowded, so this early foray into telecomputing saved much time and aggravation and encouraged my early experiments with bulletin boards before I defected to the Internet in 1995.

I keep that old Poly to remind myself how quickly and drastically the dominant paradigm of information technology has changed. The Poly came out right on the cusp of the first big change, from multiple remote users and batch processing on a mainframe to a single user and interactive processing on a cheap desktop box. The next big change—packaged programs—was just starting; the Poly’s owners were still expected to churn out most of their own applications.

By the time I bought the Kaypro, that had changed. Most computer owners were users only – they bought packaged software and used it to perform whatever task needed doing. The hobbyists who had sustained the early development of microcomputers were still around, some of them working on commercial software applications, but many still following their original hobbyist inclinations and uploading ingenious, compact utility programs to specialized bulletin boards, from which appreciative users could download them free of charge. In many cases, they published the source code so that users could “port” the software to their particular platform, because this was still the heroic phase of microcomputer development and there were many competing architectures and operating systems.

The so-called IBM PC changed all that. In a bewildering series of switches, IBM had first dismissed microcomputers as mere toys, then endorsed the S-100 passive-backplane hardware standard (the basic architecture of my old Polymorphic, updated to allow for a future 32-bit data buss). Then it had come out with its own brand of microcomputer, with a motherboard-based architecture designed by Intel that didn’t have the slightest connection with the S-100 buss. Never mind. It was IBM, and it could no more be questioned than you could question the wholesomeness of a Disney cartoon. The final nail in the coffin of all the alternatives except Apple was, oddly enough, a setback for IBM. For some reason that I still don’t know, there was no intellectual-property protection on the motherboard architecture or the IBM PC expansion buss. The only thing protected by copyright was the firmware – the BIOS written on a ROM housed on the motherboard. When first one, then another clone company figured out how to make a BIOS ROM that was functionally identical to the original IBM BIOS, but contained no pirated IBM code, the way was open for other manufacturers to make computers that were functionally identical to the IBM PC, but cost a great deal less. The market for software that could run on the IBM PC expanded, and with it the number of programmers willing to enter that market. The commercial software publishers quickly realized that identical hardware platforms meant that they need only license the executable code to the user; the source code could be kept secret. The era of unaccountable, unverifiable software had begun.

Now forgive me, but I have to break the chronological sequence to go back to the early 70’s and another important development that affects us now very much. That, of course, was the development of the Unix operating system. Up until that time, an operating system was always understood to be an executive program that served as an intermediary between applications software and the basic system hardware, besides performing some housekeeping functions. Almost by definition, an operating system was programmed in machine language or in the assembly code of the particular, proprietary hardware architecture that it was meant to serve. Unix, on the other hand, was designed at the outset for portability—it was intended to be used on a variety of platforms. To make Unix portable, it had to be coded in a high-level language, not in proprietary, machine-dependent assembly code. To accomplish this, the developers of Unix had to create another pioneering work whose full significance was not properly appreciated at the time—the C programming language.

Existing high-level languages assumed that any system-level instructions would either be handled by the operating system or by the sloppy expedient of inline assembly code. There was no provision in FORTRAN, ALGOL or any other language for manipulating system-level entities. Clearly, however, a high-level language that would be used for writing an operating system had to have that capability, so C was created to provide it. Professional programmers and enthusiastic amateurs fell in love with it—its immense power, recondite syntax and general illegibility appealed to what I believe is an ingrained masochistic streak in computer geeks. Pretty soon it became the generally accepted programming language, almost eclipsing all predecessors. C and its daughters Java and JavaScript clearly dominate the languages market today.

I’m sure that the original creators of C did not intend to create a monster, but that is what their creation became when it left the mostly professional, competent circle of Unix geeks and got into the hands of amateurs and job-shop code butchers who were paid by the line. You see, to give a programmer access to the system from a high-level language, C had to discard most of the safeguards and error traps that by then were standard in compilers for other languages. It has to be possible, for instance, to assign a logical quantity to a floating-point variable or vice versa, something that is frowned on in any other context! That power appeals to nearly everybody, but giving it to any but the most experienced programmers is like giving a loaded .45-caliber pistol to a three-year-old child.
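To make the point concrete, here is a tiny illustration of the latitude C gives the programmer. The snippet and its variable names are mine, invented purely for this aside, and it typically compiles without complaint on a compiler's default settings:

/* Toy example of the freedom C allows: implicit numeric conversions and
   unchecked memory access pass silently; the out-of-bounds write is
   undefined behavior that no error trap will catch at run time. */
#include <stdio.h>

int main(void)
{
    int sensor_ok = 1;            /* a "logical" quantity */
    double reading = sensor_ok;   /* silently promoted to 1.0 */
    sensor_ok = reading * 2.5;    /* silently truncated back to an int */

    int buffer[4];
    buffer[7] = 42;               /* past the end of the array, no bounds check */

    printf("reading=%f sensor_ok=%d\n", reading, sensor_ok);
    return 0;
}

In a stricter language the narrowing assignment would at least require an explicit cast, and the out-of-bounds write would be trapped at run time; C accepts both without a murmur.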

Unix, too, has its dangers. It is modular, highly versatile and very tolerant of shenanigans that other OS’s won’t put up with. It lends itself readily to networking and can allow a user on one networked workstation or mainframe transparent access to resources on another, possibly very distant platform. This is one of the factors that encouraged its early porting to microcomputers despite its hunger for system resources—one of the things that computer users lost when they migrated from mainframes to micros was the ability to easily share resources. Networking has made up for that. Unfortunately, properly configuring and administering a Unix system is not a simple matter, and moving Unix to the individual, minimally trained end-user’s desktop will require a good deal of adaptation and efficient automatic configuration.

Most of the commercial software—systems and application—in use today is programmed in C, and all of it is defective to some degree. Mind you, any large, complex project inevitably has some flaws. But add the power of C, and you have a recipe, not only for serious functional defects but also for deadly, lurking vulnerabilities that give a malicious programmer opportunities for unprecedented mischief. As this software moves into mission-critical applications—control of power dispatching over continent-wide grids, medical radiation therapy machines, life support, military applications, guidance systems—the potential for harm multiplies.

Believe it or not, this excursion into recent IT history really has a purpose, and that is to provide the factual background for a prediction. So far, I have set the stage, and all the actors in the digital drama that will be played out over the next few years are in costume and in character. We have:

· A systems software market dominated by proprietary operating systems programmed (mostly) in C; their source code is kept secret—only the binary code is distributed. The users find out about flaws in the code when something goes badly wrong with it, or when a malicious programmer detects and exploits a vulnerability.
· The alternatives that are being offered are various flavors of Unix, all programmed in C. Some are proprietary, some are not, some are disputed; some of the proprietary versions are “open source,” others not.
· The operating environment of software, already complicated by the many configuration options available to individual users on isolated workstations, is further complicated by the ubiquity of networking.
· We see growing concern among users about the dangers inherent in defective software, and growing pressure to impose standards and require testing, at least for certain critical applications.

Now for the prediction. Right now there is a growing sense of disgust with the unverifiability of commercial software. That disgust has long pervaded low-level users, but now bigger players are getting irritated, and their worries carry more weight. The US government has, for many years, sought to produce or to buy a “trusted” operating system – one that would allow concurrent processing of non-sensitive and highly classified material on the same machine or even over a network, simultaneous users being allowed access only to the material that they are authorized to work with. I first heard about this project more than ten years ago. There is still no trusted operating system available, and none in prospect. Recently, three cancer patients in South America suffered fatal overdoses of radiation when the software furnished with the radiation therapy machine gave plausible—but totally false—results and those results were uncritically accepted. This will continue to happen, and happen more and more often, until fundamental changes are made in what software comes to market, how it gets there, and what happens afterward.

The fundamental problem is software testing, software verification. Up until now, the model used for testing has been borrowed from other fields of engineering, where it has worked fairly well; this is functional testing. You take a new car model, and you run the prototype over a torture track and see what breaks. You figure out why it broke, redesign and repeat the test until you have a suspension that you can trust. A test pilot puts a new jet through harrowing maneuvers with part of its systems shut down, disabled or impaired to prove that certain essential functions continue to operate.

What is slowly being realized is that this doesn’t work in the digital world. The model simply doesn’t fit. An aircraft or automobile is fully characterized by a seemingly large, but still finite and knowable, set of parameters, all of them under the control of the designer and builder. That is not true of a computer. A general purpose digital computer is whatever its software tells it to be—a powerful calculator, a typesetter, a 6-piece orchestra—and whatever its connected peripherals allow it to be. A computer cannot be exhaustively tested; at best, it can be programmed to perform certain standard tasks—the Sieve of Eratosthenes, for example – and its performance of those tasks measured. This is where the so-called “benchmarks” come from that are published in the computer press. This is fairly obvious and well understood.
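For readers who have not met it, the Sieve of Eratosthenes is exactly the sort of fixed, well-defined task that makes a convenient yardstick. A minimal sketch in C, written here only for illustration, looks like this:

/* Minimal Sieve of Eratosthenes: mark the multiples of each prime,
   then count what is left unmarked. */
#include <stdio.h>
#include <string.h>

#define LIMIT 1000

int main(void)
{
    char composite[LIMIT + 1];
    memset(composite, 0, sizeof composite);

    for (int i = 2; (long)i * i <= LIMIT; i++)
        if (!composite[i])
            for (int j = i * i; j <= LIMIT; j += i)
                composite[j] = 1;        /* j is a multiple of the prime i */

    int count = 0;
    for (int i = 2; i <= LIMIT; i++)
        if (!composite[i])
            count++;

    printf("%d primes up to %d\n", count, LIMIT);   /* 168 primes up to 1000 */
    return 0;
}

Timing how long a given machine takes to run such a fixed chore, usually with a much larger limit and repeated many times, is the essence of the classic benchmark.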

What is not as clearly realized is that the same holds true of software. In order to justify the expenditure on programming labor, software must be able to serve the largest possible group of users. Because those users have many configuration options available to them, the software must tolerate those variations and still run. It is fairly easy to prove that a program does what it is intended to do, at least in a certain environment. The tasks it is intended to perform are known, so the software can be given the correct input for each task, and it is easy to check that the proper result is returned. Even so, the operating environment of modern commercial microcomputer software is so complex, and changing so rapidly, that even basic functional testing is rarely complete when a program is released. During the life of a given software version, incompatibilities with configuration options, peripherals, drivers and so on will be discovered. The reputable software publishers maintain compatibility lists and keep their customers apprised of what works and what doesn’t. Even so, these problems are quickly discovered and obvious, and so are relatively benign.

The more serious problems arise when a program receives input that the writers never envisioned, and therefore never tested. In other words, the problem here is to test that a given program not only DOES what it is intended to do, but also does NOT do what it should not do.

That second task requires that the publisher subject the program to all possible incorrect input and to all possible incompatible operating environments, and thereby prove by exhaustion that the program will do no harm—or at least, no intolerable harm. This is impossible, for the simple and obvious reason that the set of incorrect and unexpected inputs is infinite.

Well, then, if functional testing won’t work, what is left? Clearly, we have to be allowed to look under the hood. In other words the source code must be published. This does not mean that the source code must be put into the public domain—after all, the novel that I bought today at the bookstore has its text published (there would be nothing to sell otherwise), but the copyright holder retains his right to his work. Why “open source?” Simply because no software publisher, however great his resources, has the time or the personnel to run every possible “what if..?” scenario on a chunk of code. Publishing the code and inviting critique makes the entire world your testing laboratory, every interested professional a member of your Quality Assurance staff. Of course, it makes every kibitzer a licensed critic too, and that can be irritating, but in my not-so-humble opinion the gains outweigh the inconvenience. Now you may receive a message in halting English from Vladivostok saying something like “I am not finding error trapping for input buffer overflowing…,” which is much better than fielding your product, only to have a criminal hacker in Podunk discover the vulnerability and exploit it. It also encourages the production of compact, well-documented code —nobody likes the entire world seeing his dirty underwear. That in itself is a benefit, because sloppy coding makes a source-code vulnerability analysis that much more difficult.

Will Open Source solve all our problems? No. We will still be human, still capable of making mistakes. Lousy Open Source software is still lousy software. The key advantage of Open Source is that it makes the detection of errors at an early stage more probable, success more easily achievable, and disasters less likely—provided of course that the valid criticisms are recognized, heeded and acted upon.

I venture to predict that, within five years, open source software will be mandatory for mission-critical applications in government and medicine. After that, corporate IT chiefs and server administrators will make the same demand, and the practice will quickly spread to the ordinary user’s desktop.

As for WHAT Open Source solution wins, my crystal ball is a bit cloudy. All the alternatives to the secret-source Microsoft/Apple axis are currently Unix versions, but we’ve seen that Unix is not necessarily the answer to a maiden’s prayer. I sometimes have weird dreams in which Bill Gates has an ecstatic vision, emerges wide-eyed from his inner office, and says “PUBLISH!” There will soon come a time when that will be the only way for him to preserve his market share; the question is, will he recognize the fact and act in time?

February 18, 2010

Personal Submarines, Bottom Crawlers, Amphibians

Filed under: Engineering,Hydronautics — piolenc @ 10:49 pm

These are chapter notes for a book to be entitled Personal Mobility in an Age of Restriction.

3. Water

c. Submersibles and Submersible Amphibians

It might surprise many readers to know that many people, in different countries and from various walks of life, own personal submarines, most of them built by the owner. There are even organizations devoted exclusively to this underwater sport.

See for instance: http://tech.groups.yahoo.com/group/international_psubs_minisubs/

and: http://www.psubs.org/

True, these craft are mostly designed for brief excursions in sheltered water, not for cruising, but their mere existence proves that safe submergence is not beyond the skill of “ordinary” people, which puts them within the scope of this book.

As we will see, the need to safely submerge, operate underwater and surface again imposes constraints far beyond those that apply to surface vessels and surface skimmers. To justify our interest, then, there must be compensating advantages.

We can start with the most obvious one: stealth. Even on the surface, a careful choice of materials, shapes and paint schemes can make a small submarine very difficult to detect, whether with radar or visually. With surface vessels, the only protection that the owner has is that private yachts are fairly common, and the ocean is wide. If an armed vessel chooses to stop him—even in international waters—there isn’t much that he can do about it except run if he can. It’s hard to assert the freedom of the high seas with an Oerlikon gun pointed at you. A submersible has some hope of not being found in the first place, and if found, of escaping. This is all the more true if the submersible is disguised as a surface vessel, in which case its submergence after being challenged on the surface may pass as a sinking or scuttling. Such a disguise is not too far-fetched, as we will see.

A less obvious advantage is immunity to bad weather offshore. Only a few wave heights below the surface, deep water is essentially undisturbed. A submarine submerged at that depth, running at just enough speed to maintain control to stretch battery life, can ride out the most severe storm in comfort and safety.

A still less obvious advantage is access to the bottom and to underwater features and installations. Here we are mostly talking about the bottom-crawling subs pioneered by Simon Lake, which were equipped with airlocks that allowed divers to leave the vessel and explore wrecks, collect shellfish, perform salvage or just sightsee. With the negative-buoyancy tanks dry, such subs behave just like conventional ones.

[Illustration: sectional view of Simon Lake’s submarine Protector]

Simon Lake’s submarine Protector was perhaps the highest known development of this type, having a retractable two-wheel undercarriage and a lockout chamber, as well as a fair hull for regular surface and submerged cruising.

Why would freedom-seekers be interested in such a machine? Well, one reason might be emplacement of and access to stores of fuel and other provisions on the bottom, perhaps hidden among wrecks or other known (and therefore unremarkable) hazards to navigation. A small submarine has limited storage and bunkers, so being able to replenish without risking the shore or a harbor might be a good thing.

[Illustration: submarine supply station]

If the circumstances were right (sheltering feature located close to land, “friendly” parties living on the seafront), buried cables and even hoses might be run out, allowing direct replenishment and access to the communications and power lines.

What’s wrong with submarines is, basically, that they have to submerge. This means that they have to be able to take on sufficient water to cancel their reserve buoyancy, and even to become negatively buoyant if rapid submergence or the ability to rest or crawl on the bottom is needed. That water ballast takes up space. The more reserve buoyancy the sub has, the less usable space it has inside! But shaving any vessel’s buoyancy margin is dangerous; a slight error in loading, or a wave running over an open hatch can send it to the bottom. Many early submarines ended this way. A narrow margin also makes a submarine very “wet” in heavy seas, because instead of rising to a steep wave it punches through. This is less important today because a deck watch is probably not needed any more, when a video camera mounted on a snorkel- or mast-head can see farther than any watch-stander. But the fundamental objections to low reserve buoyancy—vulnerability to even trivial accidents and intolerance of overloads—stand.

Simon Lake, whom we’ve already mentioned, may have been the first to solve this problem by a method variously called the “double hull” and “saddle tanks.” This layout relies on the fact that a submarine’s main ballast tanks have two normal conditions: full and empty. They are never filled partway, other than during blowing and flooding, which are transient conditions. Therefore they need not resist external pressure. No matter how deep the boat dives, the pressure outside and inside these tanks can be the same, both because water is substantially incompressible and because the tanks can be left free-flooding until it is time to blow them empty to surface.

Therefore, they can be housed outside the pressure hull, in a lightweight casing that sometimes surrounds it completely, but often just straddles the top, like a saddle, and in modern submarines is usually housed at the bow and stern, fitting into the overall hydrodynamic contours of the hull. Other stores can go there, too; fuel oil floats on water, so fuel tanks can be provided with openings at the bottom that allow seawater to replace the volume of fuel that is consumed (these days it would make sense to enclose the fuel in a bladder, to completely separate it from the water). Again, pressures are equal inside and out at all times. In naval submarines, spare torpedoes and other stores that tolerate exposure to seawater could also go in the saddle, and with the casing serving as a hydrodynamic fairing, piping and other equipment that would otherwise take up space inside the pressure hull could go outboard.

In one stroke, Lake solved the central problem of submarine utility (ironically, although he illustrated this innovation in patent drawings, he never claimed protection for it in the text), and this has been the standard configuration for fleet or cruising submarines ever since.

It allows very rapid diving. The Kingston valves, which admit water to the bottom of the tank, are first opened, ready for diving, but water does not fill the tanks immediately because of the air trapped inside. When the sub is ready to dive the air vents are opened and the tanks finish filling quickly. It is important to fill the main ballast tanks quickly even in non-naval applications because, while the tanks are partly full, water is free to slosh fore and aft and the sub may be difficult to control. When surfacing, air vents are closed, the Kingstons are opened and compressed air is introduced to blow out the water. Once the sub is “decks awash” a low-pressure air pump is used to remove the last of the water. Or the Kingstons can be closed and the remaining water pumped overboard with a conventional bilge pump. The latter solution is more efficient, at least in theory, but requires more plumbing.

[Added later: Simon Lake didn’t patent saddle tanks because he was not the first to use them; that honor belongs, apparently, to the French engineer Laubeuf, another talented early sub designer.]

Some tankage has to stay inside the pressure hull, namely the trim tanks – fore and aft – and the negative-buoyancy tank if there is one. The trim tanks are for fine adjustments to ensure that the boat is neutrally buoyant and level with the main ballast tanks flooded. Theoretically, their capacity is equal to the difference between the maximum and the minimum loading of the sub. In practice, they need to be somewhat larger because no fore-and-aft adjustment is possible if the two tanks are either completely full or completely empty. In either of those conditions solid ballast or stores would have to be shifted to trim the boat. Housing the trim tanks inside the pressure hull is mandatory because generally, they will have in them some air, which is compressible. Since water is admitted or pumped out only on the surface or during a very shallow “trim dive,” the pressure differential between the trim tanks and the interior of the pressure hull is always low, and can be made nil by venting air from the tanks into the boat with the outside valves closed. Because the tanks are housed inside the pressure hull, compressed air should not be used for pushing out water; instead, water should be pumped overboard.

Finally, there is the negative tank – a special case if ever there was one. It may be flooded or emptied at any depth within the submersible’s operating range, so it requires special arrangements. If it is filled at depth to allow the sub to settle to the bottom, the water cannot be admitted to it at full ambient pressure or the internal partition between the tank and the sub’s interior would rupture. Admission of water has to be through a restriction, with a relief valve venting air into the boat as needed. Likewise, the tank cannot be blown with compressed air like the main ballast tanks; instead, water has to be pumped out of the boat through the pressure hull. That pump, operating in reverse with a brake on it, can serve as the intake restriction. In a bottom-crawler, the capacity of the negative tank is determined by the bottom pressure required to get adequate traction for the drive wheels; in a regular sub, by the maximum static rate of descent needed for crash dives.

Now here’s the beauty of the outer casing: it doesn’t have to withstand pressure, so it can have any shape desired, as long as it provides the necessary volume for ballast tanks and bunkers. It can look like a submarine…or like something entirely different and much less remarkable. It makes good sense for somebody who wants to be discreet to make his casing look ordinary.

[Illustration: Simon Lake’s Argonaut II]

Lake chose the shape of a sailing sloop (though the two “masts” – snorkels, really – ahead and abaft the “deckhouse” would have raised some nautical eyebrows). For us, a motor yacht would probably make more sense, though in some places a trawler or some other kind of workboat would be more appropriate, and would make the snorkels easier to conceal or disguise. The hull simulated should be a displacement type, preferably with round chines, because the sub will definitely not be capable of planing! We want to keep submerged drag as low as possible, so excrescences should be minimized. A traditional displacement motor yacht, with a low deckhouse, should do the trick.

COST.

It makes sense to measure cost in relation to a surface vessel of equivalent volume or payload. Because of tankage, the total volume of the pressure hull will need to be about 40% larger than the living quarters of a surface boat to get equivalent usable space. This in itself increases cost. The cost per unit of displacement will also be higher because of the greater amount of machinery used. Now add the casing, and it is probably safe to assume a cost multiplier of about three.

Bottom-Crawlers and Amphibians

With the possible exception of rumored clandestine reconnaissance craft built under the Soviet régime and the recently retired US Navy research sub NR-1, modern submarines have completely abandoned the bottom-crawling mode of operation championed with great success by Simon Lake. Lake’s brainchildren remain of interest to us for two reasons: our application requires the boat to operate in shallow water much of the time, and shallow water is the enemy of conventional subs; and adding wheels opens up the possibility of limited amphibious operation. This is very much in keeping with our preference for bi- and multimode operation. Here, however, the emphasis is on the word “limited.” An amphibious submarine, unlike a surface-bound amphibian, can never pass as an ordinary highway vehicle, simply because it is too heavy. To prove this, imagine an amphibious sub built to the maximum practical highway size of 40 ft x 8.5 ft x 8 ft overall, and give it a fullness coefficient of .60 for the pressure hull. That gives us a submerged displacement of about 55 tons, which is roughly what the beast has to weigh out of water. Putting all that weight on two axles violates every highway regulation in the civilized world.
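As a rough check of that figure (my own arithmetic, assuming seawater at 64 lb/ft³ and counting only the pressure-hull volume):

/* Back-of-the-envelope check of the submerged displacement quoted above.
   Assumptions are mine: seawater at 64 lb/ft^3, displacement taken as the
   pressure-hull volume alone, ignoring the free-flooding casing. */
#include <stdio.h>

int main(void)
{
    double length = 40.0, beam = 8.5, height = 8.0;  /* ft, the legal highway envelope */
    double fullness = 0.60;                          /* fullness coefficient of the pressure hull */
    double rho = 64.0;                               /* lb/ft^3, seawater */

    double volume = length * beam * height * fullness;   /* about 1630 ft^3 */
    double weight_lb = volume * rho;                      /* about 104,000 lb */

    printf("volume %.0f ft^3, displacement %.0f lb (about %.0f short tons)\n",
           volume, weight_lb, weight_lb / 2000.0);
    return 0;
}

That comes out to roughly fifty tons, the same order as the figure above; the casing structure and outboard gear add a little more.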

[Illustration: the German Seeteufel (Elefant) amphibious submarine]

One remedy, adopted in the late-WW2 German Seeteufel (Sea Devil) amphibious sub, is to use caterpillar treads to better distribute the weight, but the weight, cost and maintenance burden of a crawler rig makes it unattractive. The next best thing is low-pressure tires, but pneumatic tires are impractical for underwater operation; at a (shallow) depth corresponding to their inflation pressure, they will collapse. One could envision using pneumatic tires as part of the main ballast system, and pumping water into them when it’s time to submerge, but that would entail a good bit of complication. We’re back to Lake’s formula of solid-rimmed wheels, but with the addition of solid-rubber tires shaped and formulated for low ground pressure. This is relatively easy to do because of rubber’s unique and “tunable” properties. Hysteresis heating of the rubber will limit ground speeds out of water, but we’ve already seen that this thing isn’t going on the freeway in any case. Solid rubber is substantially incompressible, so there should be no change of buoyancy and trim with depth. Despite these limitations, the ability to leave the water, even if only to the extent of coming up on a beach or boat ramp, gives an operational flexibility that we’ve already commented on in connection with hovercraft. The ability to load and discharge cargo and passengers on dry land, independent of port facilities, lighters and stevedores, shortens turnaround time. To a commercial operator that translates into greater productivity; for us, it means greater security. The ability to operate in estuaries, over gravel beds, mud flats (with caution, because it is possible to get mired) and sandbars, without fear of going aground, is another big plus. And imagine the commotion when a cabin cruiser comes up the local boat ramp, under its own power, when nobody saw it approach. Amphibians do incur one penalty: To come out on land, we need at least three wheels distributed over two axles. Underwater, we could make do with two wheels in line, as Lake did in his later designs.

DESIGN.

The design of any submersible is more complicated than the design of a surface vessel, for a number of reasons, some of which have already been discussed. Stability, in particular, has to be verified for at least three conditions: surfaced, submerged and the transitional phase, during submergence and surfacing, when the main ballast tanks are partly full and the water in them is free to slosh. In the case of a bottom-crawler with a bicycle undercarriage, the sub’s immunity to tipping also needs to be verified.

Taking the simplest condition first, fully submerged stability requires only that the center of buoyancy be above the center of gravity. This condition is usually very easy to satisfy.

On the surface, the c.b. is nearly always below the c.g. This is permissible, provided that the metacenter – the point where a vertical line through the new center of buoyancy intersects the plane of symmetry of the vessel when the vessel is heeled slightly – is above the c.g. This calculation is well covered in regular naval architecture texts, but has to be done very carefully for submarines because they can be marginal for lateral stability on the surface, depending on the presence and shape of the saddle tank. A more boat-like casing tends to give better stability than the casing forms that are optimized for underwater cruising.
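For orientation, the surfaced check boils down to the metacentric height GM = KB + BM - KG, where BM = I/V and I is the transverse moment of inertia of the waterplane. Here is a simplified sketch with made-up numbers, not taken from any actual design:

/* Simplified surfaced-stability check for a box-shaped waterplane.
   All dimensions are invented for illustration only. */
#include <stdio.h>

int main(void)
{
    double L = 35.0, B = 7.0;      /* waterplane length and beam, ft (assumed) */
    double V = 1200.0;             /* surfaced displaced volume, ft^3 (assumed) */
    double KB = 2.6, KG = 3.4;     /* heights of c.b. and c.g. above the keel, ft (assumed) */

    double I  = L * B * B * B / 12.0;   /* transverse moment of inertia of the waterplane */
    double BM = I / V;                  /* metacentric radius */
    double GM = KB + BM - KG;           /* metacentric height; must be positive */

    printf("BM = %.2f ft, GM = %.2f ft (%s)\n",
           BM, GM, GM > 0.0 ? "stable" : "unstable");
    return 0;
}

With those arbitrary numbers GM comes out barely positive, which is just the sort of marginal surfaced stability described above.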

The transitional condition is usually dealt with by subdividing the main ballast tanks fore and aft to limit sloshing, and by lateral equalizing pipes to ensure that the corresponding port and starboard tanks fill and empty at the same rate. Aside from that, generously-sized valves and air vents ensure that the tanks fill quickly so that the transition is brief. Surfacing is usually done by powering up to periscope depth using the hydroplanes, then surfacing quickly from there. The critical transition condition occurs when the sub must surface statically – that is, by blowing its tanks – from a considerable depth. This might be the case if a bottom-crawler needed to surface from a tight spot where it was unsafe to drive or swim ahead. In this case, there might not be enough air to completely blow the tanks at depth. Instead, enough air would be released into the tanks to completely blow them at periscope depth. That air will only occupy part of the tank volume at depth, and will gradually expand as the sub rises and ambient pressure decreases. This leaves a possibly long period during which there is a free surface in the ballast tanks, making it especially important to get this phase right for bottom-crawlers.
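The static-surfacing case is essentially Boyle’s law. A rough isothermal sketch, with an assumed blow-complete depth of 20 ft and purely illustrative depths, shows how little of the tank the released air occupies early in the ascent:

# Isothermal (Boyle's law) sketch of a static ascent.
# The blow-complete depth and the depths listed are illustrative assumptions.
ATM_FSW = 33.0   # one atmosphere expressed in feet of seawater, approximately

def air_fraction(depth_ft, blow_depth_ft=20.0):
    # Fraction of tank volume occupied by air that just fills the tank at blow_depth_ft.
    return (ATM_FSW + blow_depth_ft) / (ATM_FSW + depth_ft)

for depth_ft in (200, 100, 50, 20):
    print(depth_ft, round(air_fraction(depth_ft), 2))
# 200 ft: ~0.23 (three quarters of the tank is still free surface),
# 100 ft: ~0.40, 50 ft: ~0.64, 20 ft: 1.0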

Bottom crawling on a bicycle undercarriage requires checking that, when the boat is heeled, the moment of its net weight (basically, the weight of water in the negative tank) about the line through the wheel contacts is less than the restoring moment of its buoyancy. This condition is fairly easy to satisfy and to verify.
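A minimal version of that check, taking moments about the line through the wheel contacts and using purely illustrative numbers, might look like this:

# Tipping check for a bottom-crawler on a two-wheel (bicycle) undercarriage.
# Heights are measured above the wheel contact line; all figures are illustrative.
import math

def stays_upright(buoyancy_lb, weight_lb, h_cb_ft, h_cg_ft, heel_deg):
    heel = math.radians(heel_deg)
    righting = buoyancy_lb * h_cb_ft * math.sin(heel)   # buoyancy acting at the c.b.
    upsetting = weight_lb * h_cg_ft * math.sin(heel)    # total weight acting at the c.g.
    return righting > upsetting

# The net weight (the negative-tank water) is only about 2,000 lb here, and the
# c.b. sits above the c.g., so there is a comfortable margin at 15 degrees of heel.
print(stays_upright(104000, 106000, h_cb_ft=4.5, h_cg_ft=3.8, heel_deg=15))  # True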

Detail design is concerned with keeping costs down, ensuring safety and minimizing crew workload. Cost control is primarily a matter of maximizing the utilization of expensive machinery. There are, for example, many tasks requiring the use of a pump; it pays to arrange for as many of them as possible to be done by the SAME pump. Here, as in many other design tasks, software can help: several commercial packages written to optimize chemical processing plants by minimizing piping runs and avoiding duplicated machinery are equally useful for laying out machinery aboard a submarine.
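As a toy illustration of the one-pump-many-duties idea, the following sketch checks which duties a single candidate pump could cover; the task names and figures are invented for the example:

# Toy illustration of sharing one pump across several duties.
# Task names, flow rates and heads are invented, not design values.
tasks = {
    "trim transfer": {"gpm": 10, "head_ft": 30},
    "bilge":         {"gpm": 20, "head_ft": 25},
    "negative tank": {"gpm": 15, "head_ft": 40},
    "deck washdown": {"gpm": 25, "head_ft": 60},
}

pump = {"gpm": 25, "head_ft": 45}   # candidate shared pump

def can_serve(pump, req):
    return pump["gpm"] >= req["gpm"] and pump["head_ft"] >= req["head_ft"]

shared = [name for name, req in tasks.items() if can_serve(pump, req)]
print(shared)   # everything except deck washdown can run off the one pump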

February 13, 2010

Book Review: Leichter als Luft

Filed under: Aeronautics,Engineering,Lighter than Air,Propulsion,Structures — piolenc @ 5:37 pm

Leichter als Luft

Transport- und Traegersysteme
Ballone, Luftschiffe, Plattformen

by Juergen K. Bock and Berthold Knauer

reviewed for Aerostation by F. Marc de Piolenc

Hildburghausen: Verlag Frankenschwelle KG, 2003; ISBN 3-86180-139-6, price: 39.80 Euros. 21.5 x 24 cm, 504 pages, single color, many line illustrations and halftone photographs, technical term index, symbol table, figure credits, catalog of LTA transport and lifting systems.

Summary of Contents

1. General fundamentals of lighter-than-air transport and lifting systems
2. Physical fundamentals
3. Design of airships and balloons
4. Reference information for construction
5. LTA structural mechanics
6. Flight guidance
7. Ground installations
8. Economic indicators
9. Prospects

Appendices:

A. Time chart
B. Selective type tables of operating lighter than air flight systems
C. Development concepts of recent decades
D. Systems under development or under test
E. Author index
F. Table of abbreviations
G. Symbol table
H. Illustration credits
I. List of technical terms
J. Brief [author] biographies

In LTA, which has seen only two book-length general works appear since Burgess’ Airship Design (1927), comparisons are inevitable despite a language barrier. It is therefore quite pleasing to note that the authors of this book have consciously set themselves a task that complements the work embodied in Khoury and Gillett’s Airship Technology [1]. While Khoury’s work is a review of the current state of the art, the present book provides

“…a scientific, technical and economic basis for a methodical, consistent procedure in developing new lighter than air flight systems as well as a catalog and appraisal of prior solutions and achievements.”

as stated in the preface by Dr.-Ing. Joachim Szodruch of the DGLR. This is amplified in the authors’ Foreword:

“The observations contained herein are future-oriented and encompass without euphoria the current state of science and technology.”

This is in contrast to Khoury and Gillett’s introduction to Airship Technology, which reads in part:

“This book is intended as a technical guide to those interested in designing, building and flying the airship of today.”

The body of the book is completely consistent with its stated purpose, looking always toward the future and emphasizing how things should be done rather than how they have been done. Where examples of actual hardware and operations are needed, they are drawn from the most recent available, and meticulously documented.

Considering the authors’ long association with the LTA Technical Branch of the DGLR, it is not surprising to find that much of the material, and many of the collaborating authors listed in the Foreword, are drawn from the many Airship Colloquia held by that Branch over the years. Yet the style is seamless; there is nothing to suggest to this admittedly non-native reader where one contribution ends and another begins; style is consistent from paragraph to paragraph, and across chapter boundaries. What is more, the authors seem to have made a conscious effort to make the text accessible to non-Germans by keeping sentence structure simple and straightforward. The three-column-inch sentences, gravid with nested subordinate clauses, so beloved of the Frankfurter Allgemeine Zeitung, for example, are not to be found here, much to this reviewer’s relief.

It is compulsory to say something about the thoroughness of the book’s coverage. It is, however, difficult to formulate a “completeness” criterion for LTA, which is now more than ever an open-ended field, in which, as the authors correctly point out, the possible types are still far from exhausted, despite the antiquity of aerostatic flight. It is to the book’s credit that its presentation, too, is open-ended; that is, the authors have avoided presenting the usual narrow typology of LTA craft and their almost equally narrow applications. Instead, and in keeping with modern practice, they take a systems approach to LTA, situating it within the field of aeronautics and providing the tools that the reader needs to translate his own requirements into appropriate technology.

The only omission that might be considered significant concerns tethered aerostats: the authors appear to have neglected both tethered-body dynamics and cable dynamics in their technical and mathematical treatments. Tethered balloons as a type are mentioned, but that seems to be all the coverage that they get. Admittedly, long-tether applications have poor prospects because of potential operational and safety problems, but short-tether dynamics have caused problems in some relevant applications, including balloon logging, so coverage of that end of the scale would have been welcome. Tethers also play a role in some existing and proposed stratospheric balloon systems, including the exotic NASA Trajectory Control System or TCS.

This, however, is the only flaw in an otherwise comprehensive LTA design/analysis toolkit.

One especially notable and praiseworthy inclusion is subchapter 1.4 regarding regulation and certification. This topic, though a concomitant of any aeronautical project, is one that most technically oriented authors would prefer to avoid or to give only summary treatment, but Bock and Knauer dive into it fearlessly, setting forth in considerable detail, and with the help of flowcharts, German, Dutch, British and American certification categories and procedures, with reference to the governing documents. Not surprisingly, there is more detail about the German process, with which both authors have considerable experience. They also review the history and evolution of the European Joint Aviation Requirements (JAR), which are keyed to, and sometimes based on, corresponding Parts of the US Federal Aviation Regulations (FAR).

They do not flinch even from discussing certification costs and fees. Although they admit that the general policy of regulatory authorities is to require payments to government from the applicant that offset the costs incurred in administering and examining a certification application, they conclude that, compared with the cost of development of an airship, the regulatory fees charged are of only minor importance. It is not clear whether they consider here the costs incurred by idling the works while some bureaucrat makes up his mind! Perhaps it hasn’t happened to them…

Typography, binding and book design

The basic layout is in two columns, with generous leading and gutters, making the somewhat smaller than usual typeface easy to follow and to read. Equations are set in a slightly larger, bolder font and occupy the full width of the page, avoiding a common legibility problem with two-column layouts. There are no drop-outs to be found anywhere. The eggshell-white paper is thin enough to keep the book’s 500-plus pages within a thickness of less than an inch (2.5 cm), yet the paper is completely opaque, without bleedthrough and with perfect reproduction of fine-screen halftones. A color section is mentioned in the table of contents, but all pages in the review copy are single-color. The cover is paper, rather than cloth covered, printed front, spine and back in white on a dark blue background (reproduced in reverse for this review). This type of cover is less durable than the traditional cloth, but is in widespread use for textbooks and technical works despite this.

Second (English) Edition

Work is now in progress on a second edition, which will be published in English by Atlantis Productions. Note that this will not simply be a translation of the first, German edition but a new work, composed ab initio and incorporating whatever revisions seem appropriate in light of the response to the first edition. Both of the authors have a very strong command of English, so there is no reason to fear the damage that some excellent German technical works have suffered at the hands of translators (Eck’s treatise on Fans comes to mind).

A “must have” in either language.

[1] While a more thorough and detailed comparison of the two books would have been desirable, it is unfortunately not possible, as Aerostation never received a review copy of Airship Technology. Such comparisons as can be made here are based on brief access to that book during a consulting stint.

This review originally appeared in Aerostation, Volume 27, No. 3, Fall 2004

February 7, 2010

Safety and Risk

Filed under: Engineering,Structures — piolenc @ 5:55 pm

This is taken from chapter notes for a book project about ropeways (aerial tramways) for use in mountainous areas of the Third World.

Safety and Risk

Inasmuch as an unreasonable standard of safety can kill a meritorious ropeway project, it is worth devoting the necessary space to a discussion of the related—but very different—concepts of risk and safety.

Risk is quantifiable, provided that the necessary data are available. It is simply the probability that a certain type of loss, or a certain level of loss, will occur over a certain span of operating time or output. Actuaries compile these figures and use them to compute, among other things, the premiums to be charged for insurance against the loss whose probability they have computed.
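A small numerical illustration of that definition, using invented figures, shows how a loss rate per unit of output translates into an insurance premium:

# Risk expressed as an expected loss rate per unit of output, and the
# "pure premium" it implies. All numbers are invented for illustration.
passenger_trips_per_year = 50_000
expected_injuries_per_year = 0.5            # hypothetical historical average
risk_per_trip = expected_injuries_per_year / passenger_trips_per_year
print(risk_per_trip)                        # 1e-05: one injury per 100,000 trips

payout_per_injury = 200_000                 # insured loss per event
pure_premium_per_trip = risk_per_trip * payout_per_injury
print(pure_premium_per_trip)                # 2.0 per trip, before expense and profit loading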

Safety, however, is not the inverse of risk. Risk, as we have seen, is quantifiable and objective, while safety ultimately rests on a value judgment—a subjective appraisal that will differ from place to place and from individual to individual. Typically, a standard of safety is expressed in terms of a maximum risk level deemed acceptable by the individual or organization concerned, and is determined by comparison to available alternatives.

For example, suppose that we are offered a ride on a single span, single car, to-and-fro ropeway with an open car that carries the rider over a deep gorge swept by high, cold winds. Such a ropeway, if installed in a developed country, would likely carry only goods if it were allowed to exist at all; there would likely be other, more comfortable and less risky alternatives available for carrying passengers, and the rickety mechanism would be condemned out of hand as “unsafe.” Transplant the same rig to a remote corner of Nepal or Bhutan, where the only alternative is a five-hour walk on a narrow, icy windswept path with a vertical cliff face on one side and a sheer drop on the other, and it will be praised as the acme of safety and comfort! The risk is the same in both hypothetical cases, but the “safety” value judgment is very different.

None of this causes a problem, so long as the individuals and groups directly concerned are free to choose the risk levels that they will accept. Unfortunately, we live in an age where government has arrogated to itself the authority to make these decisions for us, even in countries generally considered “free.” The result is that government workers with secure, high-paying jobs, living and working in relatively low-risk environments, are making risk-acceptance decisions for people in very different circumstances. In most cases the bureaucrats mean well, but have little knowledge of conditions in the areas affected by their decisions and do not understand the adverse consequences of risk-averse regulation.

Tragically, one consistent consequence of applying arbitrary “safety” standards is higher risk. This paradoxical result arises as follows.

1. A novel, previously unapproved transport method is proposed, usually to supplant or supplement an existing transport medium. For our example, let the new method be a ropeway across a gorge, and the existing one a footpath and ford.

2. The new method is not part of the traditional infrastructure, so it must be studied and approved by competent authority. Said authority imposes safety requirements that it deems reasonable, including the provision of safety interlocks to prevent the ropeway from operating unless the car’s loading gate is latched, high factors of safety for the cables, redundant brake mechanisms and so forth.

3. The proponents of the ropeway find that they cannot afford to build to the standards imposed. In some cases, they may find that supporting infrastructure (e.g. electrical power), costing many times the price of the ropeway itself, will have to be provided to meet the requirements.

4. Result: the existing method remains the only one available, even though it is far more risky than even a very crude ropeway. Inevitably, some people will die in falls or by drowning who would have survived if the ropeway had been available, and they will die because someone living far, far away had the power to deprive them of a less risky alternative…in the name of safety.

Keeping What is Yours

Filed under: Engineering,Personal — piolenc @ 3:03 pm

Personal Security in the Age of Intrusion

This post is taken from chapter notes for a book project with the same title as the post, and for another called The Tropical House.

The book that got me started in designing fortified hiding places was called How to Hide Almost Anything by David Krotz, published by William Morrow in the Seventies. Another book, with a title like “Secret Hiding Places,” from one of the small alternative publishers, was also helpful. Both emphasize concealment and disguise, but not resistance to forced entry once the hiding place is compromised. I’m fairly sure that Krotz also doesn’t cover the “dual port” problem, though I don’t know because both books are in my container. As mentioned in a chapter draft for my Tropical House book, there are three main considerations in providing secure storage under your own control:

1. concealment and disguise
2. resistance to forced entry
3. ease, rapidity and frequency of access

The nature of the threats that you face will dictate how much weight to give each factor, and that will in turn constrain your choice of solutions.

1. In the extreme case of secure storage for a vacation home that is vacant for most of the year, concealment is very important because thieves have a lot of time in which to work on penetrating your security measures. If they can’t find your strong-room to begin with, they can’t penetrate it, and if you’ve kept a low profile—so that nobody even suspects you have a cache—that’s better still. This point may be much less important in the case of a dwelling that is occupied full-time, unless of course you are forced to locate storage near areas accessible to strangers, like utility meters or service entries.

2. This is self-explanatory. How resistant you make your arrangements depends on what class of thief you are expecting, and how long he might have to work in the worst case. Also relevant is how severe the consequences of a successful penetration would be for you. Here the possibility of duress has to be considered, too. If it is possible for somebody to be holding a gun on you or a loved one and demanding that you open up, special provisions need to be made. Here again, discretion is a great help. If the bad guys don’t know you’re hiding something, they won’t demand that you reveal it.

3. This is often neglected in planning, but it’s of paramount importance. Suppose you live in one of the many countries that severely limit personal firearms ownership, and prohibit firearms ownership by foreigners entirely. Now suppose that you are fortunate enough to obtain a firearm with which to defend your household. You have to conceal it, but it does you no good if it’s locked in an impenetrable concealed gun safe that takes a half-hour to open. Here ease and speed of access are going to weigh more heavily than security factors. Likewise if you keep a “bug-out kit” of travel documents and ready cash, which you might need to get your hands on in a great hurry. In an extreme case, like the gun locker that I designed for a motor home belonging to a fellow who made frequent forays into less savory areas of Central America, you might have to settle for concealment as the only security measure, with a magnetic latch the only impediment to opening the locker.

I mentioned the “dual port” problem. This is an outgrowth of the fact that the biggest criminals in the world today are governments, and it is very easy in many parts of the world (including places in which the rule of law prevailed until quite recently) to get unlimited authority to search a home or business on the flimsiest of pretexts. In such an event, the outcome is foregone: your hiding place WILL be found and it WILL ultimately be opened. Here the only effective countermeasure is provision for unloading the cache from another access or “port” so that the cache will be found empty, or better yet loaded with only innocuous items of personal and pecuniary value. This is tied in with the duress problem generally, with the added complication that there is no point in providing a duress code that alerts the authorities, because they’re the ones doing the stealing.

Disguise is a popular way to conceal small objects, with novelty houses selling fake switchboxes and outlets, and booksafes. The problem, aside from limited capacity, is that these subterfuges are well known. I would never use a switchbox, junction box or any other electrical or plumbing item for concealment unless it could still function in its ostensible capacity: you should be able to plug something in or flip the switch or turn the valve and have it work. The item should also match all the others of its kind in your house in brand name, model and degree of wear, which is hard to arrange. I particularly distrust booksafes. The commercial types are completely useless because they stand out on nearly any normal person’s shelves. This is an item you HAVE to make yourself from a real book that fits in with the rest of your collection. Aside from the pain involved in committing vandalism on a useful piece of literature, the risk of a perfectly innocent person opening the “book” and discovering your secret is fairly high in most cases.

One might easily be tempted to use modern hi-fi equipment as hidden storage; much of the equipment sold today is packaged in big, expensive cases to look as substantial and imposing as older vacuum-tube equipment, but consists mostly of empty space at the back, with a few tiny printed-circuit boards crowded with solid state components just behind the front panel. There is an obvious security problem here, namely that a thief might make off with your stereo for its own sake, not knowing that he has taken your stash with it. Another problem is cooling the equipment (remember, it has to work as advertised). You may have to install a pancake cooling fan or two to make sure your amplifier gets enough air…and plan to do any servicing and cleaning yourself!

Over the years since I got interested in this, I’ve worked out a generic two-layer strong room or safe design which works well in a wide variety of circumstances. The outer layer is designed for minimal security but rapid access, while the inner layer incorporates more robust security measures, and can also be made fire resistant. The outer layer is secured by an electronic lock, while the inner layer would typically have a mechanical combination lock. The outer layer holds rapid-access items like defensive firearms; the inner layer holds deeds and share certificates, bullion and bullion coins, your draft Memoirs, compromising pictures of the First Lady and the stableboy, etc. As a practical matter it is not usually possible to fireproof the outer layer, so any documents and currency stored there may have to be enclosed in their own fire resistant pouches. There is another obvious weakness to the outer layer, namely the need for electrical power to operate it. A backup battery may have to be provided, and it will have to be located inside the strong room so as not to give away its existence. In the past, finding a way to conceal or disguise the keypad that operates the outer lock, while ensuring easy access, was a problem, but there are ways to deal with that today that also give much faster access than tapping a four-digit code.
