From 1995 until my discharge in 2002, I managed and directed aircraft systems operational tests. Doing such a thing for so long wasn't exactly good for my career progression, especially since I had already done six years of quality and project management before that; BUT, I found testing extremely satisfying and more enjoyable than I can tell you. It was so much fun, in fact, that I was reluctant to leave it for the more "mundane" work of the Air Force flight line. So, I didn't; I stayed in testing until the very end of my military career.
My brand of test was operational. That means I made sure that whatever was assigned to me to test—whether it be a component, a procedure, a system, or even an entire airplane—was ready for REAL airmen to fly and to fix and to fly again; and as operational testers, we used REAL airmen to do this.
If the thing was new and about to go into full-rate production, by the time it got to me it had already been tested by a whole host of development people. I was, in effect, the last tester at the end of a years-long battery of testing. You would think that by the time this stuff got to me, all that would be required was a tweak here and there. That's NOT what usually happened though. I never ceased to be amazed by the obvious problems we'd uncover that had to be addressed before final fielding.
Operational test directors are usually recruited right off real-world flight lines and from flying squadrons. I was an exception, since I had come from years of doing other staff projects, such as aircraft modification and quality management.
In both test squadrons that I served with, my cubicle was in a room full of other cubicles. We probably spent less than half our time in them though, since our tests took us all over the world. I was an E8, a senior master sergeant, an enlisted fellow, and one of three enlisted avionics technician "experts," but we had officer counterparts as well. Usually, we worked our tests as "teams," so that if a pilot needed enlisted maintenance expertise to test their systems, we were available; AND, if we needed pilot knowledge, the same held true for them. It was a unique situation in the deeply rank-conscious military world, because as testers, we were pretty much equals regardless of rank.
Most of the enlisted directors were E7s and E8s, while the officer test directors—pilots and navigators for the most part—were captains and majors. The idea was to use our long experience in our respective fields to leave no stone unturned in making sure that what we tested was actually going to work in ALL intended settings and situations, to include extremes such as those found in arctic and desert environments.
The concept of operational testing would seem easy in theory, but it never is in practice. When I signed up for it in the summer of '95, I figured all I'd need to do was put the system or component through its paces and see what happened; and in decades past, that was exactly how new weapons and systems WERE tested, with horrible and sometimes deadly results.
Over the decades, the military learned to use scientific methods to test, and that means understanding statistics. I lost track of the number of classes I attended on the subject of scientific testing and all its jargon—sample size, mean, median, standard deviation, statistical significance, et al.—and, most importantly, how to arrive at conclusions that excluded all the "noise."
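For anyone who never sat through those classes, here is a toy sketch of what that jargon buys you—written in Python, with numbers I made up purely for illustration (they are NOT from any real test): given two small samples of, say, repair times, is the improvement real or just noise?

    # Toy illustration of the statistics behind operational testing.
    # All numbers are invented for this example; none come from a real test.
    from statistics import mean, median, stdev
    from scipy.stats import ttest_ind  # Welch's t-test

    # Hypothetical repair times (hours) before and after a proposed "fix"
    baseline = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.0]
    modified = [3.9, 4.1, 3.5, 4.8, 4.0, 4.3, 3.7, 4.2]

    for name, data in (("baseline", baseline), ("modified", modified)):
        print(f"{name}: n={len(data)}  mean={mean(data):.2f}  "
              f"median={median(data):.2f}  stdev={stdev(data):.2f}")

    # Statistical significance: could the difference be random scatter?
    t_stat, p_value = ttest_ind(baseline, modified, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (conventionally below 0.05) suggests the improvement
    # probably isn't just noise in a small sample.

That last comment is the whole game: the bigger the sample, the less the noise can fool you, which is why sample size came up in every one of those classes.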
By the time a new weapons system got to us operational testers, it was a foregone conclusion that it WOULD be fielded, no matter what we found. Ours, then, was NOT to provide a go or no-go, but to point out any last-minute ways to improve our test subject before full production. Even so, I was almost always amazed at some of the obvious problems we'd discover so late in the production schedule. This was usually because, to reduce costs, developers love to use simulation, especially when it comes to software. It's ironic really, since they are using software to test other software. Sometimes, though, you just have to put what you've got through a REAL scenario, in the ACTUAL places, with the REAL airmen who will eventually use the equipment.
Speaking of which, my most interesting and memorable test resulted from the failure of computer simulation to catch a software glitch in what was at the time a fairly new flight director on the venerable C-130 cargo transport aircraft. The failure was in a system called the SCNS, or Self-Contained Navigation System. When this problem was discovered by accident and assigned to us, the 33rd Flight Test & Evaluation Detachment, to help test the "fix" for it, I found myself, over a year's time, flying with one of the MOST professional flying organizations I've ever been associated with in the United States Air Force: the 109th Airlift Wing.
More on that in my next post…
1 comment:
As a product design engineer, I always feel as if something I've designed is going to be perfect during the pilot runs we do to test things much like what you describe. Things have never gone perfectly, and like you said, I sometimes wonder how I missed something that was so obvious to others.
I've found over the years that I'll never catch everything, and that is why I value others' input on my designs and don't take it personally when they find problems. Sometimes we get conditioned to believe something is the best possible design, when someone with a new set of eyes can see it in a much different light.
On another note, I apologize if I offended you on my blog. It was not my intent to make something personal, only to make a point that needed to be made. I value your input and you are welcome back anytime. If not, I also understand. Peace.