Polymath: A (mostly) technical weblog for Archivale.com

November 8, 2013

Oblique Convergence – the Two Roots of Modern Unmanned Aircraft

Filed under: Aeronautics,Engineering,Propulsion — piolenc @ 5:40 am

Unlike most high-tech industries, which start with a single core idea and a small coterie of self-educated practitioners before branching out into diverse applications, ours has at least two quite distinct roots and two very different seed groups. One root is the traditional target drone – reconnaissance drone – armed drone – UAV progression based on aeronautical experts qualified in aerodynamics, structures, propulsion and control systems, starting with primitive analog automation and progressing to digital systems of ever-growing sophistication. It traces its origin to the experiments of Sperry and others as early as the 1920s.

The other root of modern UAVs is embodied in multicopters – ugly, crude, primitive-looking things designed mostly by electronics hobbyists with only the vaguest connection to aeronautics.

The former group build competent, elegant air vehicles – fixed-wing or rotorcraft – and then try to endow them with the control systems that they need to perform their missions without a human being on board. For them, the object is the aircraft and its mission; the control hardware and software are extensions of the aircraft that allow it some autonomy.

The latter group are hardware and software hackers (in the original sense of the word – meaning an expert, not a criminal) who are looking for the cheapest, simplest platform that can pick up their micro-controller board and its code and fly it around. For them, the object is the code, and the ‘copter is merely an extension of the hardware platform on which that code runs – basically a peripheral that allows them to have fun with computers outdoors and burn off all those Twinkies and potato chips.

Neither group has much regard for the other, or much interest in the other’s preoccupations, but a funny thing is happening: as the traditional UAVs tend toward cheapness and ubiquity, the digital eggbeaters of the multicopter hobbyists are becoming more capable: more payload, longer endurance, and very advanced software (because that’s their strength, don’t you know).

Pretty soon the two groups will either be collaborating on or competing for missions that are within the reach of both of them, though they’ll be approaching those tasks from very different perspectives.

What missions, for instance? Well, deliveries for one. We’ve already seen the pizza delivery demo, but I’m thinking of more sensitive deliveries – the kind you probably wouldn’t entrust to a high-school student on a motorbike under any circumstances. The kind where you lose a lot more than twenty dollars plus tips if something goes wrong.

Here’s one:

We think of the mining industry as being concentrated into huge, self-contained operations, from which a very crude, low-value raw material – the ore – is sent to be refined remotely into a valuable and compact commodity. But the reality can be different. Highly valuable resources – we might as well say gold or diamonds because that’s what we’re talking about – often occur, not in a concentrated vein, but in pockets spread over a large area. We speak of gold or diamond fields. Another characteristic of these resources is that they don’t need refining – at least not in the same sense as, say, copper ore – only extraction from a matrix. Gold occurs as the pure metal rather than as an oxide or sulfide like some other metals, and diamond is…diamond. So these resources are essentially finished products at the mining site, and are already very valuable and, because of their portability, very vulnerable.

In a typical gold field operation, small quantities are collected from various small mining sites (in the case of panned gold, there could be hundreds of pans or small dredges strung out along a river) into a more or less central location within the gold field, then transported to a permanent installation for assay, refining to .999 purity, and casting into bullion or minting into coins. On the way there, however, it is subject to theft and to hijacking.

Here’s where UAVs come in. Autonomous UAVs have the capability of taking off from a secure base, flying to a remote location under GPS guidance, and landing within a few meters of their target. There, a payload can be loaded aboard, and the machine refueled or recharged for the return journey. It then takes off and returns to base with the payload. Autonomy means that the craft can’t be electronically hijacked by interference with a communication link to the ground; typically there would be no such link at all, which also carries an economic benefit, namely elimination of the ground station and its operators. Arriving at its base, the craft lands inside the secure perimeter and is safely unloaded, then serviced for the next pickup.

Obviously, we are talking about vertical takeoff and landing here. The VTOL field has many different types of craft within it, but helicopters have the most attractive characteristics for this mission because their low-disc-loading rotors allow a given load to be lifted at the lowest cost. Speed is not an issue – only security – so a helicopter’s relatively modest top speed won’t be a problem.
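
A quick way to see why: classical momentum theory gives the ideal hover power for thrust $T$, rotor disc area $A$ and air density $\rho$ as

$$P_{ideal} = \sqrt{\frac{T^3}{2\rho A}} = T\sqrt{\frac{T/A}{2\rho}}$$

so at a fixed weight the power bill grows as the square root of the disc loading $T/A$: doubling the rotor area cuts ideal hover power by about 30 percent, and with it the fuel or battery mass.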

The avoided loss in securely delivering one typical gold shipment – twenty kilos – is nearly a million of the green pieces of paper we laughingly call “dollars.” On the other hand, the direct operating cost of shipping by autonomous UAV, with no pilot or ground crew salaries to pay, comes down to amortization of the initial cost of the vehicle and its support equipment. It follows that a significant investment in this technology can be justified. Imagining a first cost of one million dollars, that investment is almost fully recovered in one avoided hijacking. Actually, payback is quicker because conventional shipping methods involve substantial personnel costs, mostly connected with guarding the shipment, even if nothing at all goes wrong.
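
A back-of-the-envelope check on those numbers, assuming a gold price of roughly \$1,300 per troy ounce (about where it stood in late 2013):

$$20\ \mathrm{kg} \approx 643\ \mathrm{ozt}, \qquad 643 \times \$1{,}300 \approx \$840{,}000$$

hence “nearly a million” per twenty-kilo shipment.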

Converging on this opportunity are two very different technologies. The drone crowd will offer an autonomous helicopter – essentially a scaled-down version of a manned helicopter design equipped with a combustion engine and a simplified version of conventional flight-control hardware like the rotor head. From the other side will come proposals for a multicopter that will be a scaled-up version of the ones we see buzzing around on YouTube, equipped with one electric motor per rotor and a battery pack.

There will already be some technological convergence: both machines will likely be controlled by programmable micro-controller boards of purely civil, hobbyist origin running private-origin code. This will be for reasons of cost, but also because the control systems that would normally equip the “drone” machine for, say, a military mission are not legally exportable from their countries of origin. We might see this as the drone people learning from the multicopter hobbyists.

In the present state of battery development, however, it will probably be necessary for the multicopter crowd to adopt technology from the drones, namely combustion engines. This is because storable liquid fuels have much higher specific energy than the very best batteries. The easiest way to incorporate a combustion engine into a multicopter will be to have it drive an alternator to recharge the on-board battery through a rectifier/filter in the usual way. The battery would then drive the motors as if the combustion engine weren’t there. In essence, the multicopter would take its recharging station with it, and the payload penalty of doing so would be partly compensated by a much lighter battery. One operational advantage of this arrangement is that the machine can be refueled at the remote site and be instantly ready for flight. Hooking it up to a generator to gradually recharge the internal battery won’t be needed.
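
The gap is easy to quantify with round numbers (assumptions, not measurements): gasoline stores about 12 kWh/kg of chemical energy, and even at a modest 25% engine-plus-alternator efficiency that yields about 3 kWh of electricity per kilogram of fuel, against roughly 0.2 kWh/kg for the best lithium batteries of the day:

$$\frac{12\ \mathrm{kWh/kg} \times 0.25}{0.2\ \mathrm{kWh/kg}} \approx 15$$

An order-of-magnitude advantage, which is why the generator rides along.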

What else can the multicopters learn from the drone people? Well, a lot. Multicopters are pretty straightforward to control when hovering or moving slowly, but they run into trouble when trying to build up a significant cruise speed. This is the result of the trailing rotors operating in the downwash of the ones ahead. In a conventional tandem-rotor helicopter, this is compensated by increasing the collective pitch of the rear rotor, but no such option exists in a multicopter – the trailing rotors have to turn faster. This works up to a point, where the limit of speed control is reached and the multicopter pitches up abruptly, braking its forward motion. Judging from what’s on YouTube, the speed limit for multicopters looks to be about 70 km/h at present. This may actually be adequate for the mission under consideration, but some means needs to be found for improving it without sacrificing the essential mechanical simplicity of the multicopter. Ideally, that means should not involve additional control channels. That solution, whatever it may be, will likely come from people with conventional helicopter experience.
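
To make the mechanism concrete, here is a minimal sketch, in C, of a “+”-configuration quadcopter pitch mixer. The numbers are illustrative only and are not taken from any real autopilot; the point is the clamp:

    #include <stdio.h>

    /* Minimal sketch of a '+'-configuration quadcopter pitch mixer.
       Illustrative numbers only - not taken from any real autopilot. */

    #define CMD_MAX 1.0   /* largest command the speed controller accepts */
    #define CMD_MIN 0.0

    static double clamp(double x)
    {
        if (x > CMD_MAX) return CMD_MAX;
        if (x < CMD_MIN) return CMD_MIN;
        return x;
    }

    int main(void)
    {
        const double throttle = 0.7;       /* collective thrust command */

        for (int i = 0; i <= 7; i++) {
            double pitch = 0.1 * i;        /* nose-down moment demand */

            /* Holding the nose down means slowing the front rotor and
               speeding up the rear one - the one flying in the downwash. */
            double front = clamp(throttle - pitch);
            double rear  = clamp(throttle + pitch);   /* saturates first */

            /* Pitch moment ~ (rear - front); total thrust ~ front + rear.
               Once rear is pinned at CMD_MAX, more moment can come only
               from slowing the front rotor, which also bleeds thrust. */
            printf("pitch %.1f  front %.2f  rear %.2f  moment %.2f  thrust %.2f%s\n",
                   pitch, front, rear, rear - front, front + rear,
                   (throttle + pitch > CMD_MAX + 1e-9) ? "  <- rear saturated" : "");
        }
        return 0;
    }

Once the rear command pins at the limit, the mixer has lost half its pitch authority and is trading thrust for the rest; past that point the airframe pitches up and brakes, as described above.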

Another rotor-related problem is the vibration that occurs in a rigid rotor (propeller) in crossflow. You can hear this in the fluttery hum that multis make when moving in translation. This represents a loss of efficiency, and in the long run might lead to unpredictable rotor failure. Again, the conventional aero backgrounds of the drone people will help, with a bolt-on solution in the form of a teetering, flapping or even feathering rotor being the most likely result.

Net result – a much upgraded multicopter and/or a more economical, exportable helicopter drone…and happy miners.

August 6, 2010

The Helium Question

Filed under: Aeronautics,Engineering,Lighter than Air,Materials — piolenc @ 11:21 pm

[This piece first appeared in the Fall/Winter 2006 issue of Aerostation magazine]

LTA: 2006 and the Helium Question

The year 2006 was much the same as any other recent year, at least as far as lighter-than-air flight is concerned. Hopes were raised, then dashed. Projects were mooted, then cancelled. Brave talk was uttered, then swallowed.

If Ought Six is remembered for anything, perhaps it will be that the first visible cracks appeared in the ever-rickety edifice of helium supply. For the first time that I can recall, some users of helium were told that they could not have any at any price, because one of the world’s few helium extraction plants was undergoing refurbishment. Presumably extraction has since resumed—there are no panic-stricken “Brother, can you spare some gas?” posts on the LTA-related lists—but this little hiccup is a harbinger of things to come.

Those of you who have followed my rants over the years may want to skip the rest of this piece, but some points deserve to be reviewed. As commodities go, helium is extremely unusual—perhaps unique. It is a by-product of the extraction of another commodity—natural gas—whose unit value is much lower, but whose aggregate value is orders of magnitude greater. This means that the usual assumptions about supply and demand do not apply.

To make it clear why this is so, imagine a commodity—say, a precious metal—that exists in a concentration of a fraction of a percent in a matrix that has no market value. If the market value of the metal justifies it, somebody will extract the tiny metal moiety from the huge mass of matrix, refine and market the metal and dispose of the now completely valueless matrix. The key fact is that, in this hypothetical but typical case, the extraction is driven solely by the market for the metal. If the market value of the metal drops below the cost of bringing it to market, extraction ceases and the metal remains in the ground, in its matrix, waiting for changing conditions to again make it profitable to exploit it.

Contrast this with what will happen if there is a significant increase in demand for helium. Helium, as we know, exists as a tiny fraction of natural gas; some deposits contain more helium than others, and gas from some of those favored deposits passes through a helium-extraction plant on its way to market. The helium in natural gas that reaches the market without passing through an extraction plant is gone forever, wasted.

Now suppose that there is an increase in demand for helium. Once stored helium stock is exhausted, the only way to meet the greater helium demand is to increase the production of natural gas. This may be done, up to the point that storage capacity for natural gas awaiting delivery to consumers is completely used up. At that point, helium production is capped at a rate proportional to the current demand for natural gas, irrespective of demand for helium. Nobody is going to flare off natural gas to accommodate helium users!

Now the operation of a free market, when production of a commodity is fixed and demand for it is increasing, is to raise prices until demand drops to match the supply. This gives a bidding advantage to users who are well-funded and use relatively small quantities of helium. LTA doesn’t match that description, and never will. But there’s worse.

Another effect of a free market is that rising prices of a commodity encourage capital to move into production of that commodity, leading to increased capacity, which in turn tends to put the brakes on price increases. Can we expect this to happen with helium? One can imagine that, with helium prices skyrocketing, producers of natural gas from fields less favored by Nature than those now being exploited might install helium extraction plants at their fields, thus intercepting streams of helium now going up the stack. And then again, maybe not. Helium extraction is capital-intensive—essentially, it requires that all gases except helium be liquefied, leaving only helium in gaseous form. Depending on the projected exhaustion of the field, the helium concentration in the gas and their estimate of the persistence of increased demand, the field’s exploiters may or may not feel that they can expect an adequate return on their investment in new helium plant. Even assuming that the answer is always affirmative, there is a definite physical limit to this capacity increase, which is imposed by the rate of production of natural gas. What is more, each increment of production will be smaller than the last and cost more per unit of capacity, as poorer fields are added to the helium production stream.

The best scenario that we can expect, then, in the event of a true rebirth of helium-based LTA, is a steady rise in price, possibly restrained (but not cancelled) by capacity increases. It is a safe bet, however, that any major increase in helium demand will run up against a hard capacity limit, whether imposed by the reluctance of field exploiters to install expensive helium extraction or by the finite and tiny concentration of helium present in the natural gas stream.

When such an absolute limit is reached, modern governments show a deep reluctance to let market forces operate. Instead, rationing and price controls are imposed and favored users are given priority for supply. In the USA, at least, there is no doubt of LTA’s position in the hierarchy of government favor: near bottom, perhaps just one step above party balloons, perhaps even one step below (what would a political campaign be without balloons, eh?).

In the long run, the Earth’s supply of helium will be exhausted when we run out of natural gas, regardless of the level of demand for helium. Helium is the end product of a long chain of radioactive decay, and for practical purposes “they ain’t makin’ any more of it.”

Of course, I’ve been oversimplifying, by assuming that a revival of large-scale helium LTA would occur in the first place. In fact, no prudent investor would invest in commercial helium-lift LTA without considering the prospects for gas supply, and with the certainty of price increases and the uncertainty of future supply at any price, he will put his money elsewhere.

The plain fact is that helium is already too expensive. Its 6% gross lift penalty compared to hydrogen comes directly out of useful lift, imposing a net penalty around 20% depending on payload ratio. Its cost constrains airship operations by limiting operating altitude or fullness (hence lift) to avoid valving gas and by forcing operators to operate at very low purity to delay “shooting” gas as long as possible. Both constraints further reduce the economic viability of an already marginal transport medium.
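
The arithmetic behind that net figure, with assumed round numbers: the whole gross-lift penalty falls on the useful lift, so if structure, machinery and crew absorb, say, 70% of gross lift, leaving 30% useful,

$$\text{net penalty} = \frac{0.06\,L_{gross}}{0.30\,L_{gross}} = 20\%$$

The heavier the ship is relative to its payload, the worse helium looks.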

If large-scale LTA is to survive, there will have to be a transition to hydrogen as the lifting gas. The only question that is open is: when? If LTA is ever to be used for transport of goods or people, that revival will have to be based on hydrogen lift. And it must be soon.

The time to prepare the transition is now, while there are people still living who have handled hydrogen in an LTA context, and who can instruct others. Hydrogen is more dangerous than helium, but there is no alternative. There are obstacles to be overcome in using it, and the sooner we start overcoming them the sooner we will have viable commercial LTA. The principal obstacles are:

• Lack of trained personnel.  Solution: train some.

• Lack of insurance cover.  Solution: insurance companies will insure anything for which they have reliable actuarial statistics. Only experience can produce those statistics. Until they are available, operators will have to self-insure. It has been done before, and it can be done again.

• Government regulations and statutes.  Solution: a stroke of a pen.

• Hindenburg Syndrome.  Solution: education and exposure.

Priced out of the market, or forced out. Those are the only possible fates of LTA if we persist in considering only helium as a lifting gas. It is, in the current jargon, unsustainable.

FMP

February 23, 2010

Address to the School of Computer Studies, MSU/IIT

Filed under: Engineering,Personal — piolenc @ 10:16 am

This was written and delivered nearly six years ago. Unfortunately, the part about the consequences of using unverifiable software in life- and mission-critical applications has come tragically true. My other prediction—that this problem would be recognized and that packagers and end-users would insist on open-source software—has still not come true, and there is no sign of it happening.

Think about that the next time you go in for a CT scan.

Address to the graduates of the School of Computer Studies, Mindanao State University/Iligan Institute of Technology
Recognition Night
29 March 2004

Current trends in information technology, and their implications for young professionals and end-users of all ages.

Good evening.

While preparing these remarks, I realized that it has been almost 35 years since I first sat down in front of a computer. Actually, that is not strictly true; the computer was 100 miles away, at Dartmouth College’s computer center. I was sitting in front of a very tired Teletype terminal at my prep school in Andover, Massachusetts, laboriously typing a 20-line BASIC program on its user-hostile keyboard, anxiously watching the fuzzy letters – all uppercase, of course – appear on the rough roll of yellow paper. At the same time, a pattern of holes was being punched in a narrow strip of paper tape. When I had finished this work, which was done offline, of course, I fed the tape through the reader attached to the terminal, where it was converted into frequency-shift keying and sent to the expensive giant at Dartmouth. The terminal typed “READY.” I gulped and typed “RUN.” A few seconds later, the terminal came to life. I awaited with bated breath the outcome of my first computer job. At last, the print head moved, and printed

JOB TERMINATED
CPU USAGE 3.0 SECONDS

…my entire CPU time allocation for that month.

I had programmed my first endless loop.

In those days, it was clearly understood that “real” computers would always be large and expensive and require a dedicated staff to keep them running and to see that they were put to the most profitable use. For most computers, that meant “time sharing”—allowing remote users like me to submit batch jobs and obtain the results for a set cost per CPU second. The trend in computer design was toward larger, more powerful machines capable of accommodating a larger number of users, and thus of making more money for their owners.

If anybody had made the prediction at that time that within ten years, computers would appear that would give a single user – the owner – more computing power on his desktop than the biggest 1969 mainframes, he would have been laughed out of the room. That was science fiction – and not very good science fiction. It was the same kind of implausible literature that gave Dick Tracy a videophone that he could wear on his wrist. Nonsense!

But that is exactly what happened. In my collection, still running but no longer in use, is a Polymorphic 8813, which came out in 1977. It came with a BASIC interpreter, a FORTRAN compiler, and even some prepackaged business software, including WordMaster, a primitive word processor. Its 8-bit microprocessor was clocked at 256 KILOhertz – about a thousand times slower than today’s machines. There was no hard drive. Later models had a 5-MB capacity eight-inch hard drive available as a special option, in a separate cabinet with its own power supply, but the cost was too high for most users, including me. Even so, it worked, and I continued to use it even after I had bought a more modern machine—a Kaypro 4 running CP/M. The “Poly” continued to serve until I assembled my first IBM AT-compatible machine in 1990. I even bought a 300-baud Hayes modem for it so that I could do my university programming projects at home, upload them to the VAX at UCSD and get back the results. Finding a parking space at UCSD was almost impossible at times and the computer lab was always crowded, so this early foray into telecomputing saved much time and aggravation and encouraged my early experiments with bulletin boards before I defected to the Internet in 1995.

I keep that old Poly to remind myself how quickly and drastically the dominant paradigm of information technology has changed. The Poly came out right on the cusp of the first big change, from multiple remote users and batch processing on a mainframe to a single user and interactive processing on a cheap desktop box. The next big change—packaged programs—was just starting; the Poly’s owners were still expected to churn out most of their own applications.

By the time I bought the Kaypro, that had changed. Most computer owners were users only – they bought packaged software and used it to perform whatever task needed doing. The hobbyists who had sustained the early development of microcomputers were still around, some of them working on commercial software applications, but many still following their original hobbyist inclinations and uploading ingenious, compact utility programs to specialized bulletin boards, from which appreciative users could download them free of charge. In many cases, they published the source code so that users could “port” the software to their particular platform, because this was still the heroic phase of microcomputer development and there were many competing architectures and operating systems.

The so-called IBM PC changed all that. In a bewildering series of switches, IBM had first dismissed microcomputers as mere toys, then endorsed the S-100 passive-backplane hardware standard (the basic architecture of my old Polymorphic, updated to allow for a future 32-bit data bus). Then it had come out with its own brand of microcomputer, with a motherboard-based architecture built around an Intel processor that didn’t have the slightest connection with the S-100 bus. Never mind. It was IBM, and it could no more be questioned than you could question the wholesomeness of a Disney cartoon.

The final nail in the coffin of all the alternatives except Apple was, oddly enough, a setback for IBM. For some reason that I still don’t know, there was no intellectual-property protection on the motherboard architecture or the IBM PC expansion bus. The only thing protected by copyright was the firmware – the BIOS written on a ROM housed on the motherboard. When first one, then another clone company figured out how to make a BIOS ROM that was functionally identical to the original IBM BIOS, but contained no pirated IBM code, the way was open for other manufacturers to make computers that were functionally identical to the IBM PC, but cost a great deal less. The market for software that could run on the IBM PC expanded, and with it the number of programmers willing to enter that market. The commercial software publishers quickly realized that identical hardware platforms meant that they need only license the executable code to the user; the source code could be kept secret. The era of unaccountable, unverifiable software had begun.

Now forgive me, but I have to break the chronological sequence to go back to the early 1970s and another important development that affects us now very much. That, of course, was the development of the Unix operating system. Up until that time, an operating system was always understood to be an executive program that served as an intermediary between applications software and the basic system hardware, besides performing some housekeeping functions. Almost by definition, an operating system was programmed in machine language or in the assembly code of the particular, proprietary hardware architecture that it was meant to serve. Unix, on the other hand, was designed at the outset for portability—it was intended to be used on a variety of platforms. To make Unix portable, it had to be coded in a high-level language, not in proprietary, machine-dependent assembly code. To accomplish this, the developers of Unix had to create another pioneering work whose full significance was not properly appreciated at the time—the C programming language.

Existing high-level languages assumed that any system-level instructions would either be handled by the operating system or by the sloppy expedient of inline assembly code. There was no provision in FORTRAN, ALGOL or any other language for manipulating system-level entities. Clearly, however, a high-level language that would be used for writing an operating system had to have that capability, so C was created to provide it. Professional programmers and enthusiastic amateurs fell in love with it—its immense power, recondite syntax and general illegibility appealed to what I believe is an ingrained masochistic streak in computer geeks. Pretty soon it became the generally accepted programming language, almost eclipsing all predecessors. C and its daughters Java and JavaScript clearly dominate the languages market today.

I’m sure that the original creators of C did not intend to create a monster, but that is what their creation became when it left the mostly professional, competent circle of Unix geeks and got into the hands of amateurs and job-shop code butchers who were paid by the line. You see, to give a programmer access to the system from a high-level language, C had to discard most of the safeguards and error traps that by then were standard in compilers for other languages. It has to be possible, for instance, to assign a logical quantity to a floating-point variable or vice versa, something that is frowned on in any other context! That power appeals to nearly everybody, but giving it to any but the most experienced programmers is like giving a loaded .45-caliber pistol to a three-year-old child.
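
By way of illustration, a contrived fragment – but one that a typical C compiler at its default settings accepts without a murmur:

    #include <stdio.h>

    /* Contrived, but every line below gets past a typical C compiler
       at its default settings without complaint. */

    int main(void)
    {
        int ok = (2 > 1);          /* a logical (Boolean) quantity...        */
        float f = ok;              /* ...silently assigned to floating point */

        double dose = 3.9;
        int units = dose;          /* 3.9 silently truncated to 3            */

        char buf[4];
        *(int *)buf = 0x41414141;  /* raw bytes reinterpreted as an int; this
                                      happens to fill buf exactly - shrink the
                                      array and it scribbles past the end,
                                      still with no compile-time complaint    */

        printf("%f %d %c\n", f, units, buf[0]);
        return 0;
    }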

Unix, too, has its dangers. It is modular, highly versatile and very tolerant of shenanigans that other OS’s won’t put up with. It lends itself readily to networking and can allow a user on one networked workstation or mainframe transparent access to resources on another, possibly very distant platform. This is one of the factors that encouraged its early porting to microcomputers despite its hunger for system resources—one of the things that computer users lost when they migrated from mainframes to micros was the ability to easily share resources. Networking has made up for that. Unfortunately, properly configuring and administering a Unix system is not a simple matter, and moving Unix to the individual, minimally trained end-user’s desktop will require a good deal of adaptation and efficient automatic configuration.

Most of the commercial software—systems and applications—in use today is programmed in C, and all of it is defective to some degree. Mind you, any large, complex project inevitably has some flaws. But add the power of C, and you have a recipe, not only for serious functional defects but also for deadly, lurking vulnerabilities that give a malicious programmer opportunities for unprecedented mischief. As this software moves into mission-critical applications—control of power dispatching over continent-wide grids, medical radiation therapy machines, life support, military applications, guidance systems—the potential for harm multiplies.

Believe it or not, this excursion into recent IT history really has a purpose, and that is to provide the factual background for a prediction. So far, I have set the stage, and all the actors in the digital drama that will be played out over the next few years are in costume and in character. We have:

· A systems software market dominated by proprietary operating systems programmed (mostly) in C; their source code is kept secret—only the binary code is distributed. The users find out about flaws in the code when something goes badly wrong with it, or when a malicious programmer detects and exploits a vulnerability.
· The alternatives that are being offered are various flavors of Unix, all programmed in C. Some are proprietary, some are not, some are disputed; some of the proprietary versions are “open source,” others not.
· The operating environment of software, already complicated by the many configuration options available to individual users on isolated workstations, is further complicated by the ubiquity of networking.
· We see growing concern among users about the dangers inherent in defective software, and growing pressure to impose standards and require testing, at least for certain critical applications.

Now for the prediction. Right now there is a growing sense of disgust with the unverifiability of commercial software. That disgust has long pervaded low-level users, but now bigger players are getting irritated, and their worries carry more weight. The US government has, for many years, sought to produce or to buy a “trusted” operating system – one that would allow concurrent processing of non-sensitive and highly classified material on the same machine or even over a network, simultaneous users being allowed access only to the material that they are authorized to work with. I first heard about this project more than ten years ago. There is still no trusted operating system available, and none in prospect. Recently, three cancer patients in South America suffered fatal overdoses of radiation when the software furnished with the radiation therapy machine gave plausible—but totally false—results and those results were uncritically accepted. This will continue to happen, and happen more and more often, until fundamental changes are made in what software comes to market, how it gets there, and what happens afterward.

The fundamental problem is software testing, software verification. Up until now, the model used for testing has been borrowed from other fields of engineering, where it has worked fairly well; this is functional testing. You take a new car model, and you run the prototype over a torture track and see what breaks. You figure out why it broke, redesign and repeat the test until you have a suspension that you can trust. A test pilot puts a new jet through harrowing maneuvers with part of its systems shut down, disabled or impaired to prove that certain essential functions continue to operate.

What is slowly being realized is that this doesn’t work in the digital world. The model simply doesn’t fit. An aircraft or automobile is fully characterized by a seemingly large, but still finite and knowable, set of parameters, all of them under the control of the designer and builder. That is not true of a computer. A general purpose digital computer is whatever its software tells it to be—a powerful calculator, a typesetter, a 6-piece orchestra—and whatever its connected peripherals allow it to be. A computer cannot be exhaustively tested; at best, it can be programmed to perform certain standard tasks—the Sieve of Eratosthenes, for example – and its performance of those tasks measured. This is where the so-called “benchmarks” come from that are published in the computer press. This is fairly obvious and well understood.

What is not as clearly realized is that the same holds true of software. In order to justify the expenditure on programming labor, software must be able to serve the largest possible group of users. Because those users have many configuration options available to them, the software must tolerate those variations and still run. It is fairly easy to prove that a program does what it is intended to do, at least in a certain environment. The tasks it is intended to perform are known, so the software can be given the correct input for each task, and it is easy to check that the proper result is returned. Even so, the operating environment of modern commercial microcomputer software is so complex, and changing so rapidly, that even basic functional testing is rarely complete when a program is released. During the life of a given software version, incompatibilities with configuration options, peripherals, drivers and so on will be discovered. The reputable software publishers maintain compatibility lists and keep their customers apprised of what works and what doesn’t. Even so, these problems are quickly discovered and obvious, and so are relatively benign.

The more serious problems arise when a program receives input that the writers never envisioned, and therefore never tested. In other words, the problem here is to test that a given program not only DOES what it is intended to do, but also does NOT do what it should not do.

That second task requires that the publisher subject the program to all possible incorrect input and to all possible incompatible operating environments, and thereby prove by exhaustion that the program will do no harm—or at least, no intolerable harm. This is impossible, for the simple and obvious reason that the set of incorrect and unexpected inputs is infinite.

Well, then, if functional testing won’t work, what is left? Clearly, we have to be allowed to look under the hood. In other words, the source code must be published. This does not mean that the source code must be put into the public domain—after all, the novel that I bought today at the bookstore has its text published (there would be nothing to sell otherwise), but the copyright holder retains his right to his work. Why “open source”? Simply because no software publisher, however great his resources, has the time or the personnel to run every possible “what if..?” scenario on a chunk of code. Publishing the code and inviting critique makes the entire world your testing laboratory, every interested professional a member of your Quality Assurance staff. Of course, it makes every kibitzer a licensed critic too, and that can be irritating, but in my not-so-humble opinion the gains outweigh the inconvenience. Now you may receive a message in halting English from Vladivostok saying something like “I am not finding error trapping for input buffer overflowing…,” which is much better than fielding your product, only to have a criminal hacker in Podunk discover the vulnerability and exploit it. It also encourages the production of compact, well-documented code—nobody likes the entire world seeing his dirty underwear. That in itself is a benefit, because sloppy coding makes a source-code vulnerability analysis that much more difficult.

Will Open Source solve all our problems? No. We will still be human, still capable of making mistakes. Lousy Open Source software is still lousy software. The key advantage of Open Source is that it makes the detection of errors at an early stage more probable, success more easily achievable, and disasters less likely—provided of course that the valid criticisms are recognized, heeded and acted upon.

I venture to predict that, within five years, open source software will be mandatory for mission-critical applications in government and medicine. After that, corporate IT chiefs and server administrators will make the same demand, and the practice will quickly spread to the ordinary user’s desktop.

As for WHAT Open Source solution wins, my crystal ball is a bit cloudy. All the alternatives to the secret-source Microsoft/Apple axis are currently Unix versions, but we’ve seen that Unix is not necessarily the answer to a maiden’s prayer. I sometimes have weird dreams in which Bill Gates has an ecstatic vision, emerges wide-eyed from his inner office, and says “PUBLISH!” There will soon come a time when that will be the only way for him to preserve his market share; the question is, will he recognize the fact and act in time?

February 18, 2010

Personal Submarines, Bottom Crawlers, Amphibians

Filed under: Engineering,Hydronautics — piolenc @ 10:49 pm

These are chapter notes for a book to be entitled Personal Mobility in an Age of Restriction.

3. Water

c. Submersibles and Submersible Amphibians

It might surprise many readers to learn that people in many countries and from various walks of life own personal submarines, most of them built by their owners. There are even organizations devoted exclusively to this underwater sport.

See for instance: http://tech.groups.yahoo.com/group/international_psubs_minisubs/

and: http://www.psubs.org/

True, these craft are mostly designed for brief excursions in sheltered water, not for cruising, but their mere existence proves that safe submergence is not beyond the skill of “ordinary” people, which puts them within the scope of this book.

As we will see, the need to safely submerge, operate underwater and surface again imposes constraints far beyond those that apply to surface vessels and surface skimmers. To justify our interest, then, there must be compensating advantages.

We can start with the most obvious one: stealth. Even on the surface, a careful choice of materials, shapes and paint schemes can make a small submarine very difficult to detect, whether with radar or visually. With surface vessels, the only protection that the owner has is that private yachts are fairly common, and the ocean is wide. If an armed vessel chooses to stop him—even in international waters—there isn’t much that he can do about it except run if he can. It’s hard to assert the freedom of the high seas with an Oerlikon gun pointed at you. A submersible has some hope of not being found in the first place, and if found, of escaping. This is all the more true if the submersible is disguised as a surface vessel, in which case its submergence after being challenged on the surface may pass as a sinking or scuttling. Such a disguise is not too far-fetched, as we will see.

A less obvious advantage is immunity to bad weather offshore. Only a few wave heights below the surface, deep water is essentially undisturbed. A submarine submerged at that depth, running at just enough speed to maintain control so as to stretch battery life, can ride out the most severe storm in comfort and safety.

A still less obvious advantage is access to the bottom and to underwater features and installations. Here we are mostly talking about the bottom-crawling subs pioneered by Simon Lake, which were equipped with airlocks that allowed divers to leave the vessel and explore wrecks, collect shellfish, perform salvage or just sightsee. With the negative-buoyancy tanks dry, such subs behave just like conventional ones.

[Figure: sectional drawing of Simon Lake’s Protector]

Simon Lake’s submarine Protector was perhaps the highest known development of this type, having a retractable two-wheel undercarriage and a lockout chamber, as well as a fair hull for regular surface and submerged cruising.

Why would freedom-seekers be interested in such a machine? Well, one reason might be emplacement of and access to stores of fuel and other provisions on the bottom, perhaps hidden among wrecks or other known (and therefore unremarkable) hazards to navigation. A small submarine has limited storage and bunkers, so being able to replenish without risking the shore or a harbor might be a good thing.

[Figure: submarine supply station]

If the circumstances were right (sheltering feature located close to land, “friendly” parties living on the seafront), buried cables and even hoses might be run out, allowing direct replenishment and access to the communications and power lines.

What’s wrong with submarines is, basically, that they have to submerge. This means that they have to be able to take on sufficient water to cancel their reserve buoyancy, and even to become negatively buoyant if rapid submergence or the ability to rest or crawl on the bottom is needed. That water ballast takes up space. The more reserve buoyancy the sub has, the less usable space it has inside! But shaving any vessel’s buoyancy margin is dangerous; a slight error in loading, or a wave running over an open hatch can send it to the bottom. Many early submarines ended this way. A narrow margin also makes a submarine very “wet” in heavy seas, because instead of rising to a steep wave it punches through. This is less important today because a deck watch is probably not needed any more, when a video camera mounted on a snorkel- or mast-head can see farther than any watch-stander. But the fundamental objections to low reserve buoyancy—vulnerability to even trivial accidents and intolerance of overloads—stand.

Simon Lake, whom we’ve already mentioned, may have been the first to solve this problem by a method variously called the “double hull” and “saddle tanks.” This layout relies on the fact that a submarine’s main ballast tanks have two normal conditions: full and empty. They are never filled partway, other than during blowing and flooding, which are transient conditions. Therefore they need not resist external pressure. No matter how deep the boat dives, the pressure outside and inside these tanks can be the same, both because water is substantially incompressible and because the tanks can be left free-flooding until it is time to blow them empty to surface. Therefore, they can be housed outside the pressure hull, in a lightweight casing that sometimes surrounds it completely, but often just straddles the top, like a saddle, and in modern submarines is usually housed at the bow and stern, fitting into the overall hydrodynamic contours of the hull.

Other stores can go there, too; fuel oil floats on water, so fuel tanks can be provided with openings at the bottom that allow seawater to replace the volume of fuel that is consumed (these days it would make sense to enclose the fuel in a bladder, to completely separate it from the water). Again, pressures are equal inside and out at all times. In naval submarines, spare torpedoes and other stores that tolerate exposure to seawater could also go in the saddle, and with the casing serving as a hydrodynamic fairing, piping and other equipment that would otherwise take up space inside the pressure hull could go outboard. In one stroke, Lake solved the central problem of submarine utility (ironically, although he illustrated this innovation in patent drawings, he never claimed protection for it in the text), and this has been the standard configuration for fleet or cruising submarines ever since.

This configuration also allows very rapid diving. The Kingston valves, which admit water to the bottom of the tank, are first opened, ready for diving, but water does not fill the tanks immediately because of the air trapped inside. When the sub is ready to dive the air vents are opened and the tanks finish filling quickly. It is important to fill the main ballast tanks quickly even in non-naval applications because, while the tanks are partly full, water is free to slosh fore and aft and the sub may be difficult to control. When surfacing, air vents are closed, the Kingstons are opened and compressed air is introduced to blow out the water. Once the sub is “decks awash” a low-pressure air pump is used to remove the last of the water. Or the Kingstons can be closed and the remaining water pumped overboard with a conventional bilge pump. The latter solution is more efficient, at least in theory, but requires more plumbing.

[Added later: Simon Lake didn’t patent saddle tanks because he was not the first to use them; that honor belongs, apparently, to the French engineer Laubeuf, another talented early sub designer.]

Some tankage has to stay inside the pressure hull, namely the trim tanks – fore and aft – and the negative-buoyancy tank if there is one. The trim tanks are for fine adjustments to ensure that the boat is neutrally buoyant and level with the main ballast tanks flooded. Theoretically, their capacity is equal to the difference between the maximum and the minimum loading of the sub. In practice, they need to be somewhat larger because no fore-and-aft adjustment is possible if the two tanks are either completely full or completely empty. In either of those conditions solid ballast or stores would have to be shifted to trim the boat. Housing the trim tanks inside the pressure hull is mandatory because they will generally contain some air, which is compressible. Since water is admitted or pumped out only on the surface or during a very shallow “trim dive,” the pressure differential between the trim tanks and the interior of the pressure hull is always low, and can be made nil by venting air from the tanks into the boat with the outside valves closed. Because the tanks are housed inside the pressure hull, compressed air should not be used for pushing out water; instead, water should be pumped overboard.

Finally, there is the negative tank – a special case if ever there was one. It may be flooded or emptied at any depth within the submersible’s operating range, so it requires special arrangements. If it is filled at depth to allow the sub to settle to the bottom, the water cannot be admitted to it at full ambient pressure or the internal partition between the tank and the sub’s interior would rupture. Admission of water has to be through a restriction, with a relief valve venting air into the boat as needed. Likewise, the tank cannot be blown with compressed air like the main ballast tanks; instead, water has to be pumped out of the boat through the pressure hull. That pump, operating in reverse with a brake on it, can serve as the intake restriction. In a bottom-crawler, the capacity of the negative tank is determined by the bottom pressure required to get adequate traction for the drive wheels; in a regular sub, by the maximum static rate of descent needed for crash dives.

Now here’s the beauty of the outer casing: it doesn’t have to withstand pressure, so it can have any shape desired, as long as it provides the necessary volume for ballast tanks and bunkers. It can look like a submarine…or like something entirely different and much less remarkable. It makes good sense for somebody who wants to be discreet to make his casing look ordinary.

[Figure: Simon Lake’s Argonaut II]

Lake chose the shape of a sailing sloop (though the two “masts” – snorkels, really – ahead and abaft the “deckhouse” would have raised some nautical eyebrows). For us, a motor yacht would probably make more sense, though in some places a trawler or some other kind of workboat would be more appropriate, and would make the snorkels easier to conceal or disguise. The hull simulated should be a displacement type, preferably with round chines, because the sub will definitely not be capable of planing! We want to keep submerged drag as low as possible, so excrescences should be minimized. A traditional displacement motor yacht, with a low deckhouse, should do the trick.

COST.

It makes sense to measure cost in relation to a surface vessel of equivalent volume or payload. Because of tankage, the total volume of the pressure hull will need to be about 40% larger than the living quarters of a surface boat to get equivalent usable space. This in itself increases cost. The cost per unit of displacement will also be higher because of the greater amount of machinery used. Now add the casing, and it is probably safe to assume a cost multiplier of about three.

Bottom-Crawlers and Amphibians

With the possible exception of rumored clandestine reconnaissance craft built under the Soviet régime and the recently retired US Navy research sub NR-1, modern submarines have completely abandoned the bottom-crawling mode of operation championed with great success by Simon Lake. Lake’s brainchildren remain of interest to us for two reasons: our application requires the boat to operate in shallow water much of the time, and shallow water is the enemy of conventional subs; and adding wheels opens up the possibility of limited amphibious operation. This is very much in keeping with our preference for bi- and multimode operation. Here, however, the emphasis is on the word “limited.” An amphibious submarine, unlike a surface-bound amphibian, can never pass as an ordinary highway vehicle, simply because it is too heavy. To prove this, imagine an amphibious sub built to the maximum practical highway size of 40 ft x 8.5 ft x 8 ft overall, and give it a fullness coefficient of .60 for the pressure hull. That gives us a submerged displacement of about 55 tons, which is roughly what the beast has to weigh out of water. Putting all that weight on two axles violates every highway regulation in the civilized world.
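
A quick check of that figure, taking seawater at 64 lb/ft³:

$$V = 40 \times 8.5 \times 8 \times 0.60 = 1632\ \mathrm{ft}^3, \qquad \Delta = \frac{1632 \times 64}{2240} \approx 47\ \mathrm{long\ tons}$$

call it 50 tons and change once the undercarriage and outboard gear are counted – the same order as the round figure above, and far beyond any legal axle load.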

[Figure: the German Seeteufel (Elefant) amphibious submarine]

One remedy, adopted in the late-WW2 German Seeteufel (Sea Devil) amphibious sub, is to use caterpillar treads to better distribute the weight, but the weight, cost and maintenance burden of a crawler rig make it unattractive. The next best thing is low-pressure tires, but pneumatic tires are impractical for underwater operation; at a (shallow) depth corresponding to their inflation pressure, they will collapse. One could envision using pneumatic tires as part of the main ballast system, and pumping water into them when it’s time to submerge, but that would entail a good bit of complication. We’re back to Lake’s formula of solid-rimmed wheels, but with the addition of solid-rubber tires shaped and formulated for low ground pressure. This is relatively easy to do because of rubber’s unique and “tunable” properties. Hysteresis heating of the rubber will limit ground speeds out of water, but we’ve already seen that this thing isn’t going on the freeway in any case. Solid rubber is substantially incompressible, so there should be no change of buoyancy and trim with depth.

Despite these limitations, the ability to leave the water, even if only to the extent of coming up on a beach or boat ramp, gives an operational flexibility that we’ve already commented on in connection with hovercraft. The ability to load and discharge cargo and passengers on dry land, independent of port facilities, lighters and stevedores, shortens turnaround time. To a commercial operator that translates into greater productivity; for us, it means greater security. The ability to operate in estuaries, over gravel beds, mud flats (with caution, because it is possible to get mired) and sandbars, without fear of going aground, is another big plus. And imagine the commotion when a cabin cruiser comes up the local boat ramp, under its own power, when nobody saw it approach.

Amphibians do incur one penalty: to come out on land, we need at least three wheels distributed over two axles. Underwater, we could make do with two wheels in line, as Lake did in his later designs.

DESIGN.

The design of any submersible is more complicated than the design of a surface vessel, for a number of reasons, some of which have already been discussed. Stability, in particular, has to be verified for at least three conditions: surfaced, submerged and the transitional phase, during submergence and surfacing, when the main ballast tanks are partly full and the water in them is free to slosh. In the case of a bottom-crawler with a bicycle undercarriage, the sub’s immunity to tipping also needs to be verified.

Taking the simplest condition first, fully submerged stability requires only that the center of buoyancy be above the center of gravity. This condition is usually very easy to satisfy.

On the surface, the c.b. is nearly always below the c.g. This is permissible, provided that the metacenter – the point where a vertical line through the new center of buoyancy intersects the plane of symmetry of the vessel when the vessel is heeled slightly – is above the c.g. This calculation is well covered in regular naval architecture texts, but has to be done very carefully for submarines because they can be marginal for lateral stability on the surface, depending on the presence and shape of the saddle tank. A more boat-like casing tends to give better stability than the casing forms that are optimized for underwater cruising.
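
For reference, the standard criterion from those texts, with $K$ the keel reference point, $B$ the center of buoyancy, $G$ the center of gravity, $I_T$ the transverse moment of inertia of the waterplane and $\nabla$ the displaced volume:

$$\overline{GM} = \overline{KB} + \overline{BM} - \overline{KG} > 0, \qquad \overline{BM} = \frac{I_T}{\nabla}$$

A submarine’s narrow waterplane makes $I_T$ – and with it $\overline{BM}$ – small, which is exactly why the surfaced condition is the marginal one.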

The transitional condition is usually dealt with by subdividing the main ballast tanks fore and aft to limit sloshing, and by lateral equalizing pipes to ensure that the corresponding port and starboard tanks fill and empty at the same rate. Aside from that, generously-sized valves and air vents ensure that the tanks fill quickly so that the transition is brief. Surfacing is usually done by powering up to periscope depth using the hydroplanes, then surfacing quickly from there. The critical transition condition occurs when the sub must surface statically – that is, by blowing its tanks – from a considerable depth. This might be the case if a bottom-crawler needed to surface from a tight spot where it was unsafe to drive or swim ahead. In this case, there might not be enough air to completely blow the tanks at depth. Instead, enough air would be released into the tanks to completely blow them at periscope depth. That air will only occupy part of the tank volume at depth, and will gradually expand as the sub rises and ambient pressure decreases. This leaves a possibly long period during which there is a free surface in the ballast tanks, making it especially important to get this phase right for bottom-crawlers.
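
The expansion during that static ascent is just Boyle’s law: ambient pressure in seawater is roughly $p \approx 1 + d/10$ atmospheres at depth $d$ meters, so a fixed charge of air occupies

$$V(d) = V_{pd}\,\frac{1 + d_{pd}/10}{1 + d/10}$$

where $V_{pd}$ is the volume it will fill at periscope depth $d_{pd}$. A charge sized to fill the tanks at 10 m (2 atm) fills only a third of them at 50 m (6 atm) – hence the long period with a free surface in the tanks.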

Bottom crawling on a bicycle undercarriage means ensuring that the moment of the boat’s net weight (basically, the weight of water in the negative tank) is less than the moment of the boat’s buoyancy when the boat is heeled. This is fairly easy to ensure and to verify.

Detail design is concerned with keeping costs down, ensuring safety and minimizing crew workload. Cost control is primarily a matter of maximizing the utilization of expensive machinery. There are, for example, many tasks requiring the use of a pump; it pays to arrange for as many of them as possible to be done by the SAME pump. Here, as in many other design tasks, software can help: several commercial packages created to optimize chemical processing plants by minimizing piping runs and avoiding duplication of machinery could equally well be applied to laying out machinery aboard a submarine.

February 13, 2010

Book Review: Leichter als Luft

Filed under: Aeronautics,Engineering,Lighter than Air,Propulsion,Structures — piolenc @ 5:37 pm

Leichter als Luft

Transport- und Traegersysteme
Ballone, Luftschiffe, Plattformen

by Juergen K. Bock and Berthold Knauer

reviewed for Aerostation by F. Marc de Piolenc

Hildburghausen: Verlag Frankenschwelle KG, 2003; ISBN 3-86180-139-6, price: 39.80 Euros. 21.5 x 24 cm, 504 pages, single color, many line illustrations and halftone photographs, technical term index, symbol table, figure credits, catalog of LTA transport and lifting systems.

Summary of Contents

1. General fundamentals of lighter-than-air transport and lifting systems
2. Physical fundamentals
3. Design of airships and balloons
4. Reference information for construction
5. LTA structural mechanics
6. Flight guidance
7. Ground installations
8. Economic indicators
9. Prospects

Appendices:

A. Time chart
B. Selective type tables of operating lighter than air flight systems
C. Development concepts of recent decades
D. Systems under development or under test
E. Author index
F. Table of abbreviations
G. Symbol table
H. Illustration credits
I. List of technical terms
J. Brief [author] biographies

In LTA, which has seen only two book-length general works appear since Burgess’ Airship Design (1927), comparisons are inevitable despite a language barrier. It is therefore quite pleasing to note that the authors of this book have consciously set themselves a task that complements the work embodied in Khoury and Gillett’s Airship Technology¹. While Khoury’s work is a review of the current state of the art, the present book provides

“…a scientific, technical and economic basis for a methodical, consistent procedure in developing new lighter than air flight systems as well as a catalog and appraisal of prior solutions and achievements.”

as stated in the preface by Dr.-Ing Joachim Szodruch of the DGLR. This is amplified in the authors’ Foreword:

“The observations contained herein are future-oriented and encompass without euphoria the current state of science and technology.”

This is in contrast to Khoury and Gillett’s introduction to Airship Technology, which reads in part:

“This book is intended as a technical guide to those interested in designing, building and flying the airship of today.”

The body of the book is completely consistent with its stated purpose, looking always toward the future and emphasizing how things should be done rather than how they have been done. Where examples of actual hardware and operations are needed, they are drawn from the most recent available, and meticulously documented.

Considering the authors’ long association with the LTA Technical Branch of the DGLR, it is not surprising to find that much of the material, and many of the collaborating authors listed in the Foreword, are drawn from the many Airship Colloquia held by that Branch over the years. Yet the style is seamless; there is nothing to suggest to this admittedly non-native reader where one contribution ends and another begins; style is consistent from paragraph to paragraph, and across chapter boundaries. What is more, the authors seem to have made a conscious effort to make the text accessible to non-Germans by keeping sentence structure simple and straightforward. The three-column-inch sentences, gravid with nested subordinate clauses, so beloved of the Frankfurter Allgemeine Zeitung, for example, are not to be found here, much to this reviewer’s relief.

It is compulsory to say something about the thoroughness of the book’s coverage. It is, however, difficult to formulate a “completeness” criterion for LTA, which is now more than ever an open-ended field in which, as the authors correctly point out, the possible types are still far from exhausted, despite the antiquity of aerostatic flight. It is to the book’s credit that its presentation, too, is open-ended; that is, the authors have avoided presenting the usual narrow typology of LTA craft and their almost equally narrow applications. Instead, and in keeping with modern practice, they take a systems approach to LTA, situating it within the field of aeronautics and providing the tools that the reader needs to translate his own requirements into appropriate technology.

The only omission that might be considered significant concerns tethered aerostats: the authors appear to have neglected both tethered-body dynamics and cable dynamics in their technical and mathematical treatments. Tethered balloons as a type are mentioned, but that seems to be all the coverage that they get. Admittedly, long-tether applications have poor prospects because of potential operational and safety problems, but short-tether dynamics have caused problems in some applications that are relevant, including balloon logging, so coverage of that end of the scale would have been welcome. Tethers also play a role in some existing and proposed stratospheric balloon systems, including the exotic NASA Trajectory Control System or TCS.

This, however, is the only flaw in an otherwise comprehensive LTA design/analysis toolkit.

One especially notable and praiseworthy inclusion is subchapter 1.4 regarding regulation and certification. This topic, though a concomitant of any aeronautical project, is one that most technically oriented authors would prefer to avoid or to give only summary treatment, but Bock and Knauer dive into it fearlessly, setting forth in considerable detail, and with the help of flowcharts, the German, Dutch, British and American certification categories and procedures, with reference to the governing documents. Not surprisingly, there is more detail about the German process, with which both authors have considerable experience. They also review the history and evolution of the European Joint Aviation Requirements (JAR), which are keyed to—and sometimes based on—corresponding Parts of the US Federal Aviation Regulations (FAR).

They do not flinch even from discussing certification costs and fees. Although they admit that the general policy of regulatory authorities is to require payments to government from the applicant that offset the costs incurred in administering and examining a certification application, they conclude that, compared with the cost of development of an airship, the regulatory fees charged are of only minor importance. It is not clear whether they consider here the costs incurred by idling the works while some bureaucrat makes up his mind! Perhaps it hasn’t happened to them…

Typography, binding and book design

The basic layout is in two columns, with generous leading and gutters, making the somewhat smaller than usual typeface easy to follow and to read. Equations are set in a slightly larger, bolder font and occupy the full width of the page, avoiding a common legibility problem with two-column layouts. There are no drop-outs to be found anywhere. The eggshell-white paper is thin enough to keep the book’s 500-plus pages within a thickness of less than an inch (2.5 cm), yet the paper is completely opaque, without bleedthrough and with perfect reproduction of fine-screen halftones. A color section is mentioned in the table of contents, but all pages in the review copy are single-color. The cover is paper rather than cloth-covered, printed front, spine and back in white on a dark blue background (reproduced in reverse for this review). This type of cover is less durable than the traditional cloth, but is in widespread use for textbooks and technical works despite this.

Second (English) Edition

Work is now in progress on a second edition, which will be published in English by Atlantis Productions. Note that this will not simply be a translation of the first, German edition but a new work, composed ab initio and including whatever revisions might seem appropriate considering response to the first edition. Both of the authors have a very strong command of English, so there is no reason to fear the damage that some excellent German technical works have suffered at the hands of translators (Eck’s treatise on Fans comes to mind).

A “must have” in either language.

¹ While a more thorough and detailed comparison of the two books would have been desirable, it is unfortunately not possible, as Aerostation never received a review copy of Airship Technology. Such comparisons as can be made here are based on brief access to that book during a consulting stint.

This review originally appeared in Aerostation, Volume 27, No. 3, Fall 2004

February 7, 2010

Safety and Risk

Filed under: Engineering,Structures — piolenc @ 5:55 pm

This is taken from chapter notes for a book project about ropeways (aerial tramways) for use in mountainous areas of the Third World.

Safety and Risk

Inasmuch as an unreasonable standard of safety can kill a meritorious ropeway project, it is worth devoting the necessary space to a discussion of the related—but very different—concepts of risk and safety.

Risk is quantifiable, provided that the necessary data are available. It is simply the probability that a certain type of loss, or a certain level of loss, will occur over a certain span of operating time or output. Actuaries compile these figures and use them to compute, among other things, the premiums to be charged for insurance against the loss whose probability they have computed.

Safety, however, is not the inverse of risk. Risk, as we have seen, is quantifiable and objective, while safety ultimately rests on a value judgment—a subjective appraisal that will differ from place to place and from individual to individual. Typically, a standard of safety is expressed in terms of a maximum risk level deemed acceptable by the individual or organization concerned, and is determined by comparison to available alternatives.

For example, suppose that we are offered a ride on a single span, single car, to-and-fro ropeway with an open car that carries the rider over a deep gorge swept by high, cold winds. Such a ropeway, if installed in a developed country, would likely carry only goods if it were allowed to exist at all; there would likely be other, more comfortable and less risky alternatives available for carrying passengers, and the rickety mechanism would be condemned out of hand as “unsafe.” Transplant the same rig to a remote corner of Nepal or Bhutan, where the only alternative is a five-hour walk on a narrow, icy windswept path with a vertical cliff face on one side and a sheer drop on the other, and it will be praised as the acme of safety and comfort! The risk is the same in both hypothetical cases, but the “safety” value judgment is very different.

None of this causes a problem, so long as the individuals and groups directly concerned are free to choose the risk levels that they will accept. Unfortunately, we live in an age where government has arrogated to itself the authority to make these decisions for us, even in countries generally considered “free.” The result is that government workers with secure, high-paying jobs, living and working in relatively low-risk environments, are making risk-acceptance decisions for people in very different circumstances. In most cases the bureaucrats mean well, but they have little knowledge of conditions in the areas affected by their decisions and do not understand the adverse consequences of risk-averse regulation.

Tragically, one consistent consequence of applying arbitrary “safety” standards is higher risk. This paradoxical result arises as follows.

1. A novel, previously unapproved transport method is proposed, usually to supplant or supplement an existing transport medium. For our example, let the new method be a ropeway across a gorge, and the existing one a footpath and ford.

2. The new method is not part of the traditional infrastructure, so it must be studied and approved by competent authority. Said authority imposes safety requirements that it deems reasonable, including the provision of safety interlocks to prevent the ropeway from operating unless the car’s loading gate is latched, high factors of safety for the cables, redundant brake mechanisms and so forth.

3. The proponents of the ropeway find that they cannot afford to build to the standards imposed. In some cases, they may find that supporting infrastructure (e.g. electrical power), costing many times the price of the ropeway itself, will have to be provided to meet the requirements.

4. Result: the existing method remains the only one available, even though it is far more risky than even a very crude ropeway. Inevitably, some people will die in falls or by drowning who would have survived if the ropeway had been available, and they will die because someone living far, far away had the power to deprive them of a less risky alternative…in the name of safety.

Keeping What is Yours

Filed under: Engineering,Personal — piolenc @ 3:03 pm

Personal Security in the Age of Intrusion

This post is taken from chapter notes for a book project with the same title as the post, and for another called The Tropical House.

The book that got me started in designing fortified hiding places was called How to Hide Almost Anything by David Krotz, published by William Morrow in the Seventies. Another book, with a title like “Secret Hiding Places,” from one of the small alternative publishers, was also helpful. Both emphasize concealment and disguise, but not resistance to forced entry once the hiding place is compromised. I’m fairly sure that Krotz also doesn’t cover the “dual port” problem, though I can’t check because both books are in my container. As mentioned in a chapter draft for my Tropical House book, there are three main considerations in providing secure storage under your own control:

1. concealment and disguise
2. resistance to forced entry
3. ease, rapidity and frequency of access

The nature of the threats that you face will dictate how much weight to give each factor, and that will in turn constrain your choice of solutions.

1. In the extreme case of secure storage for a vacation home that is vacant for most of the year, concealment is very important because thieves have a lot of time in which to work on penetrating your security measures. If they can’t find your strong-room to begin with, they can’t penetrate it, and if you’ve kept a low profile—so that nobody even suspects you have a cache—that’s better still. This point may be much less important in the case of a dwelling that is occupied full-time, unless of course you are forced to locate storage near areas accessible to strangers, like utility meters or service entries.

2. This is self-explanatory. How resistant you make your arrangements depends on what class of thief you are expecting, and how long he might have to work in the worst case. Also: how severe the consequences of a successful penetration would be for you. The possibility of duress has to be considered, too. If it is possible for somebody to be holding a gun on you or a loved one and demanding that you open up, special provisions need to be made. Here again, discretion is a great help. If the bad guys don’t know you’re hiding something, they won’t demand that you reveal it.

3. This is often neglected in planning, but it’s of paramount importance. Suppose you live in one of the many countries that severely limit personal firearms ownership, and prohibit firearms ownership by foreigners entirely. Now suppose that you are fortunate enough to obtain a firearm with which to defend your household. You have to conceal it, but it does you no good if it’s locked in an impenetrable concealed gun safe that takes a half-hour to open. Here ease and speed of access are going to weigh more heavily than security factors. Likewise if you keep a “bug-out kit” of travel documents and ready cash, which you might need to get your hands on in a great hurry. In an extreme case, like the gun locker that I designed for a motor home belonging to a fellow who made frequent forays into less savory areas of Central America, you might have to settle for concealment as the only security measure, with a magnetic latch the only impediment to opening the locker.

I mentioned the “dual port” problem. This is an outgrowth of the fact that the biggest criminals in the world today are governments, and it is very easy in many parts of the world (including places in which the rule of law prevailed until quite recently) to get unlimited authority to search a home or business on the flimsiest of pretexts. In such an event, the outcome is a foregone conclusion: your hiding place WILL be found and it WILL ultimately be opened. Here the only effective countermeasure is provision for unloading the cache from another access or “port,” so that the cache will be found empty or, better yet, loaded only with innocuous items of personal and pecuniary value. This ties in with the duress problem generally, with the added complication that there is no point in providing a duress code that alerts the authorities, because they’re the ones doing the stealing.

Disguise is a popular way to conceal small objects, with novelty houses selling fake switchboxes, fake outlets, and booksafes. The problem, aside from limited capacity, is that these subterfuges are well known. I would never use a switchbox, junction box or any other electrical or plumbing item for concealment unless it could still function in its ostensible capacity: you should be able to plug something in or flip the switch or turn the valve and have it work. The item should also match all the others of its kind in your house in brand name, model and degree of wear, which is hard to arrange.

I particularly distrust booksafes. The commercial types are completely useless because they stand out on nearly any normal person’s shelves. This is an item you HAVE to make yourself, from a real book that fits in with the rest of your collection. Aside from the pain involved in committing vandalism on a useful piece of literature, the risk of a perfectly innocent person opening the “book” and discovering your secret is fairly high in most cases.

One might easily be tempted to use modern hi-fi equipment as hidden storage; much of the equipment sold today is packaged in big, expensive cases to look as substantial and imposing as older vacuum-tube equipment, but consists mostly of empty space at the back, with a few tiny printed-circuit boards crowded with solid-state components just behind the front panel. There is an obvious security problem here, namely that a thief might make off with your stereo for its own sake, not knowing that he has taken your stash with it. Another problem is cooling the equipment (remember, it has to work as advertised). You may have to install a pancake cooling fan or two to make sure your amplifier gets enough air…and plan to do any servicing and cleaning yourself!

Over the years since I got interested in this, I’ve worked out a generic two-layer strong room or safe design which works well in a wide variety of circumstances. The outer layer is designed for minimal security but rapid access, while the inner layer incorporates more robust security measures, and can also be made fire resistant. The outer layer is secured by an electronic lock, while the inner layer would typically have a mechanical combination lock. The outer layer holds rapid-access items like defensive firearms; the inner layer holds deeds and share certificates, bullion and bullion coins, your draft Memoirs, compromising pictures of the First Lady and the stableboy, etc. As a practical matter it is not usually possible to fireproof the outer layer, so any documents and currency stored there may have to be enclosed in their own fire resistant pouches. There is another obvious weakness to the outer layer, namely the need for electrical power to operate it. A backup battery may have to be provided, and it will have to be located inside the strong room so as not to give away its existence. In the past, finding a way to conceal or disguise the keypad that operates the outer lock, while ensuring easy access, was a problem, but there are ways to deal with that today that also give much faster access than tapping a four-digit code.

January 15, 2010

Research Resources: Lighter Than (LTA) Air Flight


LTA Research Resources

compiled by F. Marc de Piolenc

To suggest resources not listed here, or to correct errors, please leave a comment below.


Libraries & Special Collections

Embry-Riddle University Library
Daytona Beach, FL 32014
(904) 239-6931

Northrop University Library – Pacific Aeronautical Collection
5800 W. Arbor Vitae St.
Los Angeles, CA 90045
(213) 641-3470
Documentation on West Coast aeronautical activity, including LTA. Photographs.

National Air and Space Museum Library
Smithsonian Institution-A157203
Washington, DC 20560
In addition to its collection of books and documents, NASM also has an extensive graphic archive, much of it digitized.

University of Akron – Arnstein Collection
The University of Akron, University Libraries
Polsky Building, 225 South Main Street, Room LL10
Akron, OH 44325-1702
Tel: (330) 972-7670; Fax: (330) 972-6170
Email: jvmiller@uakron.edu
http://www.uakron.edu/archival/arnstein/arnstein.htm
Papers of the late Dr. Karl Arnstein of Goodyear-Zeppelin Corp. The papers have been listed; the lists and some photographs are available on the University’s Web site. See Internet Resources for on-line access and use information.

University of Texas – Charles E. Rosendahl Collection and Douglas H. Robinson Collection
The University of Texas at Dallas
Special Collections Department
P.O. Box 830643
Richardson, Texas 75083-0643
Phone: 972-883-2570
Dr. Erik D. Carlson, Department Head for Special Collections (carlson@utdallas.edu)
Papers of the late VAdm Charles Rosendahl and the late author Douglas H. Robinson were donated to UT.

Zeppelin Archive (Luftschiffbau Zeppelin GmbH)
c/o Zeppelin Museum
Seestraße 22
D-88045 Friedrichshafen
Germany
Contact: Barbara Waibel (waibel@zeppelin-museum.de)
Phone: 0049 7541 3801 70
Fax: 0049 7541 3786 249
Housed in the same building as the Zeppelin Museum, this is a Zeppelin/LTA archive with about 500 linear meters of papers, 7,000 plan sheets and about 17,000 photographs. A large collection of books is housed with the Zeppelin Company archives. Hours are Tuesday to Thursday 9 am-12 pm and 1-5 pm, but an appointment is required.

Museums

Zeppelin Museum Friedrichshafen
Seestraße 22
88045 Friedrichshafen
Germany
Tel: +49 / 7541 / 3801-0
Fax: +49 / 7541 / 3801-81
Email: zeppelin@zeppelin-museum.de
The Zeppelin museum. Open May-October Tuesday-Sunday, 10 am to 6 pm; November-April Tuesday-Sunday, 10 am to 5 pm.

Aeronauticum, Nordholz (Deutsches Luftschiff- und Marinefliegermuseum)
Peter-Strasser-Platz 3, 27637 Nordholz
Postfach 68, 27633 Nordholz
Telephone: 04741-941074 or 0700-AERONAUTICUM
Telefax: 04741-941090
Email: info@aeronauticum.de, aeronauticum@t-online.de
Located at the site of a former military airship base; collaborator of the Heinz Urban museum at Meersburg mentioned elsewhere in these pages. Has custody of the archives of the now-defunct Marine Luftschiffer Kameradschaft.
Hours: 1 March-30 June and 1 Sept-31 October, Mon-Sat 1300-1700, Sun and holidays 1000-1800; 1 July-31 Aug, daily 1000-1800; 26 Dec-10 Jan, daily 1100-1700; other times open for groups by appointment.

New England Air Museum
Bradley International Airport
Hartford, Connecticut
USA
…a true gem and a little treasure of LTA stuff. They have displays and materials on the Hindenburg, various balloons, a CM-5 engine nacelle (French WWI airship used by US), a large model of the R-100, a Packard engine designed for the Shenandoah, and the K-28 control car undergoing restoration. [Airship-List]

Point Sur Lighthouse
Big Sur, California
USA
The lighthouse has a nice display of Macon material: a model, diagrams of where the wreck lies, and a short video. Overall it is worth the trip.

Maritime Museum of Monterey
Stanton Center
Monterey, California
USA
…has a good little area on the Macon, including some recovered artifacts, models, and multiple videos which include interviews with Gordon Wiley, son of CDR Wiley. Well worth a visit if you are in the area. [Airship-List]

Moffett Field
Moffett Field (near Sunnyvale, California)
USA
The hangar looks great. You can sometimes gain entrance through the small museum. This museum is a real treasure. Carol Henderson and her docents have assembled the most impressive museum I have ever seen. It truly rivals any professionally run museum such as Smithsonian ones. [Airship-List]

Deutsches Museum
Museumsinsel 1
D-80538 München
Tel: (089) 2 17 91
Fax: (089) 2 17 93 24
Answering machine: (089) 2 17 94 33
Covers all fields of technology, but reported by Siegfried Geist to have “a worthwhile section devoted to LTA.” Open daily (except holidays) from 9:00 am to 5:00 pm.

Stadt Gersthofen Ballonmuseum
Bahnhofstraße 10
86368 Gersthofen
Tel: (0821) 2491 135 or 101
Five floors of ballooning history, technology and artifacts. Videos of current aerostatics activity, and a special exhibit on balloons as a decorative theme. Open Weds 2-6 pm; Sat, Sun and holidays 10 am to 6 pm.

Zeppelin-Museum, Meersburg am Bodensee
Schloßplatz 8
D-88709 Meersburg am Bodensee
Germany
Tel: 07532 7909
After hours: 07532 41042
Small private museum run by Heinz Urban, specializing in technical Zeppelin artifacts. The collection includes a spark transmitter from a naval Zeppelin, the complete bomb-release panel of LZ6 and many other technical items. Open March through mid-November daily, 10 am to 6 pm. Guided tours by appointment.

Albert-Sammt-Zeppelin-Museum
Hauptstraße
D-97996 Niederstetten
Germany
Small museum honoring a commercial Zeppelin officer of local birth who rose from helmsman in 1924 to command of LZ130. Multimedia presentation on Zeps.

Zeppelin-Museum Zeppelinheim (near Frankfurt/Main)
Kapitän-Lehmann-Straße 2
63263 Zeppelinheim
Germany
A small Zeppelin museum housed in a municipal building in a Frankfurt suburb, near the airport. When I was there in ’80, the Curator was an old Zeppelin-Reederei Maschinist.

Tønder Zeppelin Museum
Manfred Petersen
Museerne i Tønder
Kongevej 55
DK-6270 Tønder
Denmark
Tel: (0045) 74 72 26 57 or (0045) 40 59 62 41
This is the old “Tondern” Zeppelin base.

Central Museum of Aviation & Cosmonautics
Krasnoarmeyskaya 14
Moscow
Russia

NAS Richmond Museum
c/o Ford U. Ross
11020 SW 15th Manor
Davie, FL 33324
USA
Display commemorating Navy blimp ASW activity in World War II.

Soukup & Thomas Balloon Museum
700 N. Main St.
Mitchell, SD 57301
Tel: (605) 996-2311
Fax: (605) 996-2218
Museum Director, Becky Pope: beckyp@btigate.com

Museum of Flight
East Fortune Airfield
North Berwick
East Lothian EH39 5LF
Scotland
Tel: 062 088308 or 0131 225 7534
Models of the R100 and R34, plus the Lion Rampant Standard which adorned the front of the R34. There is also a plaque commemorating R34’s [transatlantic] flight to be seen [East Lothian was the point of departure]. Several other LTA items are featured, including film excerpts, handouts and bits of Zeppelin frame. [Ian Paterson]

Organizations

Association of Balloon & Airship Constructors (ABAC)
P.O. Box 3841
City of Industry, California 91744
Email: abac@archivale.com
Publishes the quarterly Aerostation (now part of LTAI’s Airshipworld journal).

Airship Heritage Trust
c/o Shuttleworth College
Old Warden Park
Biggleswade, Bedfordshire SG18 9EA
UK
Tel: +44 (0)1767 627195
Charitable organisation with a large collection of airship artefacts and photographs relating to the British Airship Programme, from its early days at the turn of the century to the Skyships of the 1980s.

The Airship Association [UK]
The Secretary, The Airship Association
6 Kings Road
Cheriton, Folkestone, Kent CT20 3LG, England
Email: info@airship-association.org
Premier UK-based LTA association. Publishes the quarterly magazine Airship.

Balloon Federation of America
Box 400
Indianola, IA 50125
Tel: (515) 961-8809
Fax: (515) 961-3537
Publishes the bimonthly Balloon Life.

The Bombard Society
6727 Currant Street
McLean, VA 22101
Association of upmarket hot-air balloon operators.

Experimental Balloon and Airship Association
Brian Boland
PO Box 51
Post Mills Airport
Post Mills, VT 05058
Free membership for anyone interested in experimental balloons or airships.

Fédération Française de l’Aérostation
3 bis, square Antoine Arnauld
75016 Paris
France

LTA Society
Box 6191
Johannesburg
2000 Republic of South Africa

Japan Buoyant Flight Association
Kyoritsu Kenkyru
402 Hitotsumatsu Bldg 1
2-3-14 Shiba Daimon, Minato-ku
Tokyo
Japan
33-433-2541

The Lighter Than Air Society
1436 Triplett Blvd
Akron, OH 44306
Tel: (847) 384-0215 (Robert Hunter)
Fax: (330) 668-1105 (Attn: E. Brothers)
Publishes Buoyant Flight.

National Balloon Racing Association
Rt 11, Box 97
Statesville, NC 28677
(740) 876-1237

Naval Airship Association
901 Pillow Drive
Virginia Beach, VA 23454
(757) 481-1563
Publishes the newsletter The Noon Balloon.

Scandinavian LTA Society
Drevkarlsstigen 2-4
Sollentuna
S-191 53 Sweden

Zeppelin Kameradschaft
Kapitän-Lehmann Str. 2
Zeppelinheim 6078
Germany

Internet Resources

World Wide Web (WWW) Sites

Association of Balloon and Airship Constructors
Direct access to the 1600+ item Library List of LTA technical documents available as reprints. The LL can also be downloaded in ASCII or PDF format. Links to other LTA organizations.

AIRSHIP – The Home Page for Lighter-Than-Air Craft
Hosted at the University of Colorado’s Web server by John Dziadecki, this is truly the central reference for LTA on the Web.

The Airship Association (UK)
Announces AA meetings and other LTA activities, esp. in Britain, plus membership and subscription information. It has many links to other LTA resources.

Airship & Blimp Resources
Maintained by a young Swiss studying in the USA, it has many links to other LTA resources, including photo archives.

Balloon Technology Database
NASA-funded database of balloon technology, with 2300 documents indexed as of 1997. Check the “Balloon Technology” box before beginning your search.

Promotions Dirigeables
Web site of a Paris-based LTA organization. Pages are bilingual (English/French).

Lighter-Than-Air Technical Committee, American Institute of Aeronautics and Astronautics
Announces LTA TC activities. Note that permission may be required for attendance by other than TC members; email first.

Lighter-Than-Air Society [USA]
LTA organization with a primary emphasis on LTA history. Web page has membership information, announcements and an email link.

Naval Airship Association
Organization of former US Navy airshipmen dedicated to preserving the memory of USN airship anti-submarine activity in WW II. Helps maintain the LTA exhibits at the Naval Aviation Museum, Pensacola, Florida. Page has announcements and membership information.

University of Akron Archival Services
Information on how to use the University’s archival services. U. of Akron is the custodian of the Karl Arnstein Papers.

Alan Gross (Airship Al)
Independent consultant and lighter-than-air archivist.
Email Lists

Airship-List-Server (listproc@lists.Colorado.edu)
World-wide discussion group about airships sponsored by the [UK] Airship Association. To subscribe, send email to the address above with the words “subscribe airship-list” in the message body.

LTA-builder (lta-builder@launch.net)
The emphasis in this list is on airships. To subscribe, send an email message with the word “subscribe” in the subject line.

Balloon Mailing List (majordomo@lboro.ac.uk)
Hosts discussion of balloons, both gas and hot-air. To subscribe, send a message to the address above with “subscribe balloon [your email address]” in the body of the message.

AirshipList
To subscribe, send a blank message to AirshipList-subscribe@yahoogroups.com

Indexes and Bibliographies

Kent O’Grady
36 Martinglen Way NE
Calgary, Alberta T3J 3H9
Canada
Email: kogrady@cadvision.com

Index of Buoyant Flight Bulletin – Lighter Than Air Society. 260 pp. Cost: $23.00 US for orders from the USA; $28.00 CDN for orders within Canada; $30.00 CDN for orders from any other country (surface); $45.00 CDN for orders from any other country (airmail).

Index of Dirigible – Airship Heritage Trust. 23 pp. Cost: $4.50 US for orders from the USA; $6.00 CDN for orders within Canada; $8.00 CDN for orders from any other country (surface); $14.00 CDN for orders from any other country (airmail).

ABAC – Acq. #126: Index of Daniel Guggenheim Airship Institute Report file. This is a different body of work from the papers that appeared in the DGAI’s three Publications. Now if we only knew where to get our hands on the reports themselves…

ABAC – Acq. #301: LTA Society Preliminary Inventory [this is a list of what LTAS donated to the University of Akron, which appears to have retained the Arnstein papers and donated the books to a county library]

ABAC – Acq. #439: Index of LTA Articles in Military Review

ABAC – Acq. #1427: Bibliography of LTA Articles in the US Naval Institute Proceedings, 1912-60

ABAC – Acq. #463: David Taylor Model Basin tests of airship models

ABAC – Acq. #713: BuAer Technical Notes, 1916-1924. Another obscure report series.

ABAC – Acq. #802: Index of Aerostation through Volume 7 Number 3 [current volume is 22]. Kent O’Grady (see above) is preparing an up-to-date index.

ABAC – Acq. #946: Index of Airship Nos. 51-65 (Mar 81-Sep 84)

ABAC – Acq. #1409: Index of US Army Air Corps LTA Information Circulars


December 27, 2009

Introduction to Curve Fitting

Filed under: Engineering — piolenc @ 12:59 pm

When a body of empirical data—data derived from experiment or observation—must be used in design or analysis, it is often inconvenient to use the data in their raw form, for two reasons:

1. The data are discontinuous; that is, there isn’t a y for every possible value of x.

2. There is scatter; that is, there is more than one y for some values of x.

To make the data useful, the designer or analyst must fit an empirical equation to them. The equation will, if correctly formulated, give him a single, most probable value of y for each value of the argument x within the range of validity of the data. This can then be used in building the larger, comprehensive mathematical model that will contribute to rapid and accurate design decisions. What’s needed, then, is a technique, or collection of techniques, for constructing such equations and for verifying that they are indeed the best possible fit to the data.

The discipline is called curve fitting. It is as important and useful as ever, but it is generally neglected in engineering curricula. Like dimensional analysis, it is being set aside at just the time when convenient and cheap computing power is making it easier to use and more effective.

Curve fitting proceeds in three stages: first rectification, to establish the type of equation that best fits the data set, then determination of the coefficients of that equation, then testing to verify conformity with the data and usefulness for design. (In the examples that occur in the post “Putting Numbers to Your Cooling System,” the data had already been rectified by plotting on a log-log grid, and our task was only to determine the coefficients of the equations.)

RECTIFICATION

By rectification, we mean plotting the data on special scales until we find a set of scales that plots the data as a straight line. The nature of the special scales needed to accomplish this then tells us what kind of equation we should choose to fit the data.
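(Before taking the cases one by one, here is one way to see rectification in action: a minimal Python sketch that tries the three transforms discussed below and reports which one is straightest, judged by the magnitude of the linear correlation coefficient introduced later in this post. It assumes positive data and no additive constant; the sample values are hypothetical.)

import numpy as np

def straightness(x, y):
    """Magnitude of the Pearson r of a straight-line fit to (x, y)."""
    return abs(np.corrcoef(x, y)[0, 1])

def best_rectification(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    candidates = {
        "linear  (y = mx + b)":   straightness(x, y),
        "semilog (y = a*10^mx)":  straightness(x, np.log10(y)),
        "log-log (y = a*x^m)":    straightness(np.log10(x), np.log10(y)),
    }
    return max(candidates.items(), key=lambda kv: kv[1])

x = [1, 2, 4, 8, 16]
y = [3.1, 8.7, 24.2, 68.9, 190.0]   # roughly y = 3*x^1.5 with a little noise
print(best_rectification(x, y))      # -> the log-log transform wins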

Linear Equation. The first step in any curve fit is plotting the data on a linear graph (x and y scales both linear). In the simplest case, this plot gives a straight line. In this case, the best fit to the data is a linear equation of the form y = mx + b, where m is the slope and b is the y-intercept of the line. This is also the easiest case for determining the coefficients, since m and b can often be read off the graph by inspection, and the least squares method can be applied directly to refining those values.

Effect of Log and Semilog Plotting

Plotting data on nonlinear scales implies a change of dependent variable. When plotting y = f(x) on a log or semi-log graph, for example, that amounts to substituting (log y) for y and taking the logarithm of f(x).

Exponential Relations y = a(10)^mx

Supposing for instance that you plotted y = a(10)^mx on a semilog graph; the result displayed is log y = mx + log a, a straight line with slope m and y-intercept a when y values are plotted on a logarithmic scale and the x scale remains linear. This type of plot rectifies this type of equation, so if your data were rectified by plotting on a log-linear graph you could expect an exponential relation like this to fit them. A more general form of this relation, y = a(10)^mx + c, requires that the equation first be rearranged so that the constant c is on the left: y − c = a(10)^mx. The new dependent variable is then log(y − c) = mx + log a, which makes the rectification process a bit more difficult, in that just plotting y values on a log scale won’t automatically rectify the data if they are shifted by a constant value c.
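(A minimal sketch of this fit on synthetic data, using NumPy’s stock least-squares line fit for brevity in place of the hand method described below; a = 2.5 and m = 0.30 are the assumed constants being recovered.)

import numpy as np

# Synthetic exponential data: y = a*10^(mx) with a = 2.5, m = 0.30
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.5 * 10 ** (0.30 * x)

# Rectify by taking log10 of y, then fit a straight line to (x, log y)
m, log_a = np.polyfit(x, np.log10(y), 1)
print(m, 10 ** log_a)   # -> 0.30 and 2.5 recovered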

Do we have to generalize still further by replacing 10 with an arbitrary base number B? No, because of the property of logarithms that says that log_B x = log₁₀ x / log₁₀ B. The two logs differ only by a constant factor 1/log₁₀ B, which is captured in the constant m of the transformed equation, so plotting on a decimal log scale will rectify any exponential relation, provided that the additive constant c is taken care of. The only effect of a different base number is on the slope of the rectified plot.

Relations of the Form y = ax^m

Now imagine that our data are related as y = ax^m, as were the heat transfer and fluid flow data in the post “Putting Numbers to Your Cooling System.” Taking the logarithms of both sides of the relation gives log y = m log x + log a. This will give us a straight line of slope m and y-intercept a when plotted on a log-log graph. If our data are rectified by plotting both x and y on logarithmic scales, then it is a relation of this type we should be looking for. As before, an additive constant complicates things; if the true relation is y = ax^m + c, then we have to consider (y − c) the new dependent variable and we need to find the value of c to fully rectify the data.
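(The same trick on log-log axes, again as a sketch with synthetic data; with an additive constant c, its value would first have to be estimated and subtracted, as noted above.)

import numpy as np

# Synthetic power-law data: y = a*x^m with a = 3.0, m = 1.5
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 3.0 * x ** 1.5

# Rectify by taking logs of both variables, then fit a straight line
m, log_a = np.polyfit(np.log10(x), np.log10(y), 1)
print(m, 10 ** log_a)   # -> 1.5 and 3.0 recovered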

COMPUTING THE COEFFICIENTS

At this point, we have a linear equation in the form Y = mX + b, where X and Y represent either the original variables x and y, if the data are best represented by a straight line, or transformed variables log x and/or log y or log(y − c) if the data fit a more complex relation. We can get a tentative value for m by choosing two widely separated points and computing the slope of the line connecting them, and of b by measuring the Y-intercept of the line. If the data are scattered, however, this tentative set of coefficient values needs to be refined.

The “least squares” method allows us to arrive at the coefficients of a linear equation that best represent the data to which it is fitted. Theoretically, the fact that we’re operating on functions of the original variables skews the probability distribution for scattered data, but in most instances that fact won’t give us any trouble and can be ignored. The method gives us the means to determine two coefficients m and b in the transformed equation Y = mX + b; the additive constant c, if there is one, is determined during rectification. The two equations are:

ΣY – mΣX – nb = 0

Σ(XY) – mΣX² – bΣX = 0

where n is the number of data points. (The method takes its name from the fact that it minimizes the sum of the squares of the residuals: the differences between the Y values in the data and the corresponding values generated by our tentative fit.) The Greek letter Σ (sigma) represents summation of the variable or combination of variables that follows it. To conveniently test adjusted values of m and b, we can set up a spreadsheet with the original data, their transformed values X and Y, cells for the various summations, and finally for the two equations above. Once the spreadsheet is in place we can quickly tell what changes in the coefficients move the results of the least squares equations closer to zero. To make the process even faster, the spreadsheet should incorporate an embedded chart plotting both the raw data points and the current line fit, plotted on the rectifying scales.
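(The trial-and-error spreadsheet works, but the same two equations can also be solved directly for m and b. Here is a sketch of the spreadsheet’s job in a few lines of Python, with NumPy supplying the sums and the 2 x 2 solve; the sample data are the log-log values from the power-law example above.)

import numpy as np

def least_squares_line(X, Y):
    """Solve  ΣY = mΣX + nb  and  Σ(XY) = mΣX² + bΣX  directly for m and b."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = len(X)
    A = np.array([[X.sum(), n],
                  [(X * X).sum(), X.sum()]])
    rhs = np.array([Y.sum(), (X * Y).sum()])
    m, b = np.linalg.solve(A, rhs)
    return m, b

X = np.log10([1.0, 2.0, 4.0, 8.0, 16.0])
Y = np.log10([3.0, 8.49, 24.0, 67.9, 192.0])
print(least_squares_line(X, Y))   # -> slope ~1.5, intercept ~log10(3)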

TESTING CORRELATION

Finally, we test the result we’ve achieved for fit and fitness—how well have we actually managed to model the data at hand? The tool we use for this purpose is the correlation coefficient r, defined as follows:
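For n points of the transformed variables X and Y, the standard Pearson form is

r = [nΣ(XY) − (ΣX)(ΣY)] / √([nΣX² − (ΣX)²][nΣY² − (ΣY)²])

and the fit is good when |r| is close to 1 over the range of the data. As a quick check in code (NumPy’s corrcoef computes the same quantity; the sample values repeat the log-log example above):

import numpy as np

X = np.log10([1.0, 2.0, 4.0, 8.0, 16.0])
Y = np.log10([3.1, 8.7, 24.2, 68.9, 190.0])

r = np.corrcoef(X, Y)[0, 1]   # Pearson correlation coefficient
print(r)                      # ~0.9999: the log-log line fits these data well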

RECURSIVE CURVE FITTING

What if we have the best fit achievable with one of our standard equation forms, but the fit isn’t quite good enough? We may need an equation of the form y = F(x) + G(x), where we already have the function F(x) from our standard curve fit, which takes care of most of the fit, but still want to find another function G(x) to account for a wobble or a dip in the data that F(x) doesn’t take into account. We proceed by tabulating the residuals of F(x)—the differences between the local values of F(x) and the tabulated data; we may already have these in our least squares spreadsheet. Those residuals become our new data set, and we proceed as above to fit an equation to them. That equation gives us our G(x), which we add to F(x). We then test the new, two-term fit by computing its correlation coefficient, which should be better than that of the original fit, and hopefully good enough for our purpose. If the original function F(x) was chosen on the basis of theoretical considerations, the form and amplitude of G(x) may reveal the presence of other, previously unsuspected phenomena at work in the system being observed or tested. It might also give insight into the nature of experimental or sensor error that is skewing the measurements.
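(A sketch of the procedure on contrived data: the “wobble” here is an exact quadratic, so the second pass recovers it completely; real residuals would be messier and the gain in correlation smaller.)

import numpy as np

x = np.linspace(0.0, 10.0, 21)
y = 2.0 * x + 1.0 + 0.15 * (x - 5.0) ** 2   # line plus a contrived quadratic dip

mF, bF = np.polyfit(x, y, 1)     # F(x): the first-pass straight-line fit
F = mF * x + bF
resid = y - F                     # the residuals become the new data set

cG = np.polyfit(x, resid, 2)     # G(x): fitted to the residuals
G = np.polyval(cG, x)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(y, F), corr(y, F + G))   # the two-term fit correlates better (~1.0)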

EXAMPLE: SOLUBILITY OF NITROGEN AND CARBON DIOXIDE IN WATER

For our example dataset we chose the solubilities of some gases in water, as tabulated in the 32nd Edition of the Handbook of Chemistry and Physics (Chemical Rubber Publishing, 1950), page 1478. We’re cheating a little here, because of course there is no scatter in these data; the most probable value has already been assigned to each point by the scientists who compiled the data.

More reading:

Fogel, Charles M.: Introduction to Engineering Computations. (International Textbook Company, 1962)

Buy a photocopy from Archivale.com

Search for a used copy on Bookfinder

December 22, 2009

Early "Seabasing" Concepts – Still Relevant

Filed under: Aeronautics,Engineering,Floating Structures,Materials,Structures — piolenc @ 6:25 pm

Recently, thanks to the efforts of a friend in the States, a report collection that was formerly available only on 35mm microfilm has been scanned into PDF files. While entering the 400 or so reports into my catalog I came across a 1934 critique, by Charles P. Burgess of the US Navy’s Bureau of Aeronautics, of a proposal by Edward R. Armstrong for a chain of floating airstrips called “seadromes.” These were to consist of an overhead deck and a submerged ballast tank, connected by a double row of vertical cylinders. If that sounds familiar, it should – it’s more or less the standard configuration for modern Very Large Floating Structures (VLFS), including the US Navy’s proposed SeaBase platforms. That was a bit of a surprise to me, because none of the articles on VLFS or sea basing that I’ve seen has acknowledged Armstrong’s much earlier work, which began during WW1 and continued until his death in 1955.

But it gets more interesting, because Burgess’ critique and alternative are just as applicable to the modern proposals as they were to Armstrong’s. Noting that a small waterplane area is the ultimate reason for the stability under wave action of Armstrong’s seadromes, Burgess proposed a more shiplike unitary hull with an anvil-shaped cross section – swollen at the bottom to accommodate ballast, spreading at the top into a wide flight deck – giving a small and very fine waterplane area and much lower resistance to forward movement than the multiple prisms of Armstrong’s concept. In the process, he created a configuration now known by the acronym SWASH – Small Waterplane Area Single Hull – about thirty years before its time. Burgess seems to have been more conscious than Armstrong of the difficulties of deep-ocean anchorage; his concept emphasizes powered station-keeping, which is facilitated by the hydrodynamically favorable hull. Burgess also anticipates modern seabasing proposals, emphasizing the value of a shiplike configuration in getting out of harm’s way if the area starts to “heat up.” I’ve uploaded Burgess’ report to the Files area of the Nation-Builders group on Yahoogroups (file name is BA157.pdf).

A good article on Armstrong and his platform proposals:
http://www.americanheritage.com/articles/magazine/it/2001/1/2001_1_32_print.shtml

The back-issue archive at Popular Science magazine’s http://www.popsci.com also has many articles and news items about Armstrong’s work.

The main difference between Armstrong’s proposal (and Burgess’ counterproposal) and what is mooted now is the current emphasis on modularity. Both Armstrong and Burgess proposed unitary platforms, while nowadays the ability to assemble large units from small, identical components is highly prized – one VLFS concept even involves dynamic assembly and disassembly in situ to suit changing conditions! Armstrong’s configuration is implicitly modular – it consists largely of identical units repeating at equal intervals – which explains its prevalence in modern proposals. Burgess the naval architect, on the other hand, gives his SWASH a beautiful continuously-curved waterline in plan, so his hull could only be built as a single unit. It turns out, though, that minor changes would make Burgess’ configuration “modularizable,” and at the same time cheapen its construction considerably, without compromising its main advantages.

The main change is redesigning the load waterline to consist of a long parallel section, tapered abruptly and symmetrically at both ends. This allows the hull to consist of a variable number of identical “center” units capped with identical “end” units at bow and stern. The end units would have identical propulsion units built in, each capable of giving the whole shebang steerage way and not much more. You end up with the SWASH equivalent of a double-ended ferry, but with only enough installed power for station-keeping. Substituting waterjets with orientable nozzles for conventional screw propellers would allow even very large assemblies to be maneuvered without tugboats. The center units, containing no machinery, could be manufactured in rudimentary facilities much less well-equipped than standard shipyards. It might be advantageous to make the end units in regular shipbuilders’ yards.

Taking the whole idea one step further, the individual units could be built with double hulls, providing enough reserve flotation to allow them to float, albeit with little reserve buoyancy and with decks awash, even when fully flooded. This would allow them to be assembled into complete vessels or platforms on the water. End units would even be navigable under their own power when unmated and fully flooded – the machinery spaces, located in the ballast tank area, would be sealed and connected with the deck by a narrow trunk like the conning tower of an old-style submarine. This in turn would allow end units and center units to be assembled in separate areas, the end units, mated in pairs, being driven under their own power to where their center units awaited them. The mating operation itself could be carried out in open water, with the end units connecting, independently, with center units one by one until they had enough between them; then the two half-vessels would maneuver to join up.

When newly assembled, the new platform would look like a monitor without the gun-turret, deck flush with the water, but with the hull complete it would gradually be pumped dry inside, ready for fitting-out. It might even be possible to equip the propulsion units to serve as high volume, low pressure pumps, at least in the initial stages of pumping-out.

Materials and manufacturing technology are pretty much ad lib. – steel or aluminum, riveted or welded, are all feasible, but my favorite is of course ferrocement, which if properly executed can be longer-lived than any other material. The joining method for mating the sections is also up in the air. If the sections are made of steel and are intended to remain assembled, welding would be the obvious method of choice; bolts are the obvious reversible method, but they are very expensive and would have to be fitted, in our hypothetical open-water assembly method, by divers working underwater and in very poor visibility. One technique that appeals to me is adapted from a system developed for assembling buildings from prefabricated panels in earthquake-prone areas, namely lacing the structure together with steel cables. For permanent assembly, the cables can be grouted into their channels; otherwise they can be secured with cable thimbles at their ends. Post-tensioning would then be possible, which would relieve bending loads on very long assemblies.

Armstrong’s patents:
Canada:
253,140
628,310

US:
1,378,394
1,511,153
1,892,125
2,248,051
2,399,611
2,963,868

France:
532,360
572,543

Burgess’ critique: US Navy Bureau of Aeronautics, Lighter than Air Section, Design Memorandum No. 157, February 1934, “A Proposal for a Single Hulled Seadrome,” by C. P. Burgess. Available from the Files section of the Nation-Builders group on Yahoogroups (see link above).
