Tag Archives: work

1977-12-00: Job Hunting

After I returned from Australia, I had to find a job. I visited a local recruiting agency (Quest Systems?), where they reviewed my background and arranged interviews at a half-dozen companies in the Washington area, on the Maryland side of the Potomac.

When the interviews were finished, I had more offers than companies I interviewed with. I selected a position with Sperry, to begin work in mid-January. On the way home from a meeting where I accepted their offer, I was driving our Saab 99 over a road slick from recent snow. I crested a hill near home a bit too fast and lost control of the car on the downhill, hitting an obstacle on the side of the road. The car was still (barely) drivable, so I gingerly drove it the rest of the way home. We arranged for it to be taken to a body shop, where they worked on the exterior damage while a replacement rack (part of the rack-and-pinion steering) was ordered. The Saab is a Swedish car, and the required part was not available in the US, so it had to be shipped (by ship) from Sweden. When the rack arrived, it turned out to be a roof rack, thanks to the local mechanics’ unfamiliarity with the Saab. Eventually the right part was procured and the repairs were completed. We didn’t keep it long after that.

 

2015-12-11: Office Posters

I was looking around my cubicle, wondering how long it will take to move out everything I want to keep. I realized there are a bunch of “motivational” posters that I don’t need to pack. Here are the posters (quotes) I’ve printed and put up over the years. Some were left up for a short time, some for a long time. These are in no particular order.

(Poster gallery: AddSimplification, Arithmetic, BeautifulStrategy, BrainWonderfulOrgan_Frost, ComfortZone, Count, CrazyOnes, Experiment, ForeverYoung, freeproduct, GardenOfYourMind, LearnFromExperience, mindfire, QuestionAuthority, StupidityGotUsIntoThisMess, TenGoodIdeas, ToAchieveSuccess, TreatEarthWell)

1978-01-07: Unisys

In 1986, Sperry and Burroughs (another computer maker) merged to form Unisys Corporation. In the period of uncertainty after the merger, managers became reluctant to take chances, including investing in the pursuit of ongoing work at GSFC. Sperry had over 20 years’ experience under a series of contracts, but the end was in sight. As the expiration of our contract approached, GSFC management decided to combine multiple support contracts under a single umbrella contract. Unisys chose not to compete for the overall contract, and offered to provide our expertise to whichever company won it. This strategy resulted in an exodus of programmers, some of whom moved to the company that won the contract. I remained with Unisys, which managed to get a short-lived sub-contract to support NASCOM. After a short time, I was made a supervisor over another contract at GSFC for a few months, while that contract also wound down. By early 1990, plans had been made to close the Unisys office that supported GSFC. At that time, I was receiving three weeks of vacation each year, and the vacation year started on April 1. By combining two years’ vacation, I could take six weeks for our trip to France.

When I returned from vacation, I was assigned to support marketing. This involved reading requests for proposals, writing sections of proposals, and setting up demos of proposed equipment. For the remainder of my time at Unisys, I worked on 11 proposals; Unisys won just one of these, a terrible record. In January 1995, I was laid off, along with many others. Unisys layoff policy was to grant one week of leave for every year of service, and I had 17 years of service. I was fortunate to find a position relatively quickly, and to start working in May; I had nearly four months of ‘retirement’ before I joined IITRI.

Previous: Knowledge Engineering

 

1978-01-05: Silver Snoopy

One of my favorite aspects of working at NASCOM was the requirement/opportunity to be on-site during critical operations, so that we could help recover the systems in the (thankfully rare) event of a failure in the communications network. This included satellite launches, the re-entry from orbit of Skylab in 1979, and of course the launch and landing of Space Shuttle missions, beginning in 1981.

On-site support consisted primarily of sitting in a back room, trying to work on whatever programming task was current and waiting for the phone to ring. For special events, such as the actual launch and landing of a Space Shuttle, we would congregate by the windows of the NASCOM control center, and watch the video monitors with the NASA feed. I distinctly remember the first launch of Columbia, and watching as the payload bay doors opened, revealing a bunch of missing tiles on the Orbital Maneuvering System pods at the rear of the orbiter. The concern for the rest of the flight was palpable among everyone there, until the orbiter had safely passed through the extreme heating of reentry, and was gliding to its first landing.

Although I was interested in all aspects of space flight, including Earth-orbiting satellites and interplanetary probes, manned space flight is certainly the most dramatic and inspiring. I think everyone who works with NASA feels the same.

It turned out that my reputation was considered sufficient to earn a coveted award. The Silver Snoopy is an award granted by the astronauts themselves. It consists of a silver lapel pin depicting Snoopy in a space suit, and an associated certificate. The award is personally presented by an astronaut. The ceremony took place at the GSFC auditorium on December 7, 1984. I received the award from astronaut Bryan O’Connor.

The pictures show my pin (actual size about 1/2 inch), the Silver Snoopy certificate, and an autographed picture of astronaut Story Musgrave. The January 1985 issue of Goddard News described the event. (local copy)

(Images: Silver Snoopy pin, Silver Snoopy certificate, and autographed Story Musgrave photo)

Previous: Operating System

Next: Knowledge Engineering

 

1978-01-06: GSFC – Knowledge Engineering

Since the 1950s, computer scientists have confidently predicted that computers would one day (real soon now!) be able to think like people. The field devoted to achieving this goal has been called ‘Artificial Intelligence’ (AI) all this time. By the mid-1980s, one of the achievements of the field was the development of ‘expert systems’, which could assess numeric and symbolic information and make deductions based on sets of rules that reflect the knowledge of experts in some domain. An example of the time was an expert system that could diagnose blood diseases as well as the best experts in that field. Such systems are not programmed in low-level languages, but in a special-purpose rule-based language, in which a set of rules is applied to some input data by an ‘inference engine’. The rules might determine that additional information is needed to make a diagnosis, and ask questions to gather more data, or request that certain tests be run. A very capable expert system might have 50,000 rules, derived from extensive collaboration with experts in the domain under study. This specialized subfield of AI was called ‘knowledge engineering’, and its practitioners, knowledge engineers.
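To make the rule-plus-inference-engine idea concrete, here is a minimal forward-chaining sketch in Python; the facts and rules are invented for illustration and are not taken from any real diagnostic system or from the tools we actually used:

# Minimal forward-chaining inference sketch (illustrative only; facts and rules are invented).
def forward_chain(facts, rules):
    """Keep applying rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Each rule: (facts that must all be known, fact to conclude).
rules = [
    ({"high white cell count"}, "infection"),               # intermediate deduction
    ({"fever", "infection"}, "suspect bacterial disease"),  # toy diagnostic rule
]

print(forward_chain({"fever", "high white cell count"}, rules))

A real system of that era differed mainly in scale (thousands of rules), in asking the operator for missing facts or tests, and in being able to explain how it reached its conclusions.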

The programming language that such inference engines were written in is Lisp. It is not widely used, or even known, by most programmers. However, it was the preferred language for AI research and the development of expert systems. I had studied it to a small extent in school, but had never used it in practice. Proponents of the language had developed specialized computers, called ‘Lisp Machines’, designed to efficiently execute programs written in Lisp, even writing their operating systems in Lisp.

In 1984, Sperry jumped on the expert system bandwagon. They formed partnerships with related hardware and software providers. They funded the training of perhaps a hundred programmers in the Common Lisp language, principles of expert system development, and special-purpose tools to support such development. I was selected for the first cohort of 20 students. We were sent to Sperry’s training center in Princeton, New Jersey for about six weeks of classes.

Following the general training, those of us in the Washington, D.C. area reported to an office in Northern Virginia for more training. We were to develop examples of small expert systems (without benefit of actual experts) relevant to our customers. I worked on one to diagnose problems with NASCOM’s high-speed circuits. Attending these sessions meant that for the first time I missed covering a Space Shuttle launch, on January 28, 1986. While in class, I received a call from Susan that the Challenger had exploded during launch.

Sperry salesmen, with my support explaining the potential of expert systems, convinced NASCOM management to buy two Sperry Explorers, a re-branded Lisp Machine from Texas Instruments, along with Intellicorp’s Knowledge Engineering Environment (KEE). I spent much of 1986 working with NASCOM experts and capturing their rules for diagnosing problems. This work is amazingly difficult, because many highly skilled people don’t realize what knowledge they’re using when they do their work, and can’t just write down their rules of inference. Knowledge engineering is a combination of observing an expert at work, asking what they’re thinking, and probing into the underlying concepts and relationships that enter into their decisions. The concepts and rules, once elicited from an expert, can be added to the system, or edited to refine the expert system’s decision-making. Extensive testing and validation with the expert is critical to the success of the system.

Eventually, we placed one of the systems on the production floor, where it could be used by ordinary operators to trouble-shoot problems, or to help less-experienced technicians diagnose and correct problems. I don’t think it was actually used very much.

Another programmer and I convinced Sperry management to purchase an Apple Macintosh II with a Lisp machine co-processor, along with Gold Hill’s Golden Common Lisp and some other software. This was much cheaper than a dedicated Lisp machine; however, KEE was not available for it, and it was much less capable than the Explorer/KEE combination. Nonetheless, the other programmer and I worked on it to develop small expert systems and other AI-related programs. She was particularly interested in programs that could understand and generate plain English sentences as the interface between human operators and systems.

The impact of expert systems on NASCOM and other Sperry customers was minimal. The technology had been over-hyped, and was followed by an ‘AI Winter’, a period in which the failure of reality to match expectations was followed by distrust and lack of support. The AI field has suffered a number of AI Winters, and 1987 saw the end of the Lisp machines and efforts such as NASCOM’s diagnostic expert. I sometimes wonder what happened to the machines at GSFC.

Sperry’s foray into knowledge engineering was a high point of my time with the company. It made me more aware of research and the difficulty of transforming interesting ideas into viable products. The fact of my selection also made me more aware of the importance of reputation in making an impact on an organization. Even today, I remain interested in the AI field, though I don’t have much expectation of contributing to it.

One benefit from this work was an opportunity to attend a conference at the Microelectronics and Computer Technology Corporation (MCC). This was a well-funded (largely through the Defense Department) research organization in Austin, Texas. At the conference, I saw several interesting demos, including one that inspired my work on the Meta-Dimensional Inspector.

Previous: Silver Snoopy

Next: Unisys

 

 

1978-01-04: GSFC – Operating System

Sometime around 1982, NASCOM replaced the aging 494 computers with more modern 1100s. They did not replace the communication processors that handled the low-speed circuits, known (if memory serves) as C2100; these were old, but still serviceable. Although I had not been involved with the applications that handled the low-speed circuits, I had demonstrated my ability to work on system-related tasks. The new configuration required a new interface program, called a ‘driver’, to allow the 1100 to work with the C2100. I was assigned this task.

The program that controls the hardware components of a computer system, such as its disk drives, tape drives, printers, and other peripherals, is called an operating system (or OS). For the 1100, this was called OS1100. OS1100 has the ability to handle a wide variety of peripheral devices, but must be customized for each specific type, such as the C2100. Once the driver was available, application programmers could write programs to perform useful tasks, such as receiving and passing on messages received over the circuits connected to the C2100.
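The general shape of a driver is easier to see in a sketch than in prose. The Python toy below is not OS1100 code and all of its names are invented; it only illustrates the division of labor: the driver knows one kind of hardware, the OS dispatches to whichever driver is registered for a device, and application code talks only to the OS.

# Conceptual sketch of the driver idea (invented names; not OS1100 code).
class C2100Driver:
    """Device-specific code: knows how to talk to one kind of hardware."""
    def read_message(self):
        # A real driver would issue channel commands and service interrupts here.
        return b"message from a low-speed circuit"

class OperatingSystem:
    """Generic layer: applications ask the OS, the OS asks the registered driver."""
    def __init__(self):
        self.drivers = {}

    def register(self, device_name, driver):
        self.drivers[device_name] = driver

    def read(self, device_name):
        return self.drivers[device_name].read_message()

system = OperatingSystem()
system.register("C2100", C2100Driver())
print(system.read("C2100"))   # the application never touches the hardware details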

I, along with a couple of other programmers, attended specialized classes related to the internal working of OS1100. Using this knowledge, I soon wrote the driver for the C2100, and the message-processing application programmers could do their work. This was another feather in my cap, and prepared me for the best assignment in my time at Sperry. But first I received some special recognition.

Previous: Enhancing development tools

Next: Silver Snoopy

1978-01-03: GSFC – Enhancing Development Tools

All of NASCOM’s computers at this time were programmed in machine-specific low-level languages, collectively called assembly languages. Writing code in these languages required a separate line of text for each instruction in the resulting program, and each instruction was very limited. An example of an assembly language instruction is:

L1 LR R5,COUNTER

This represents an instruction at a location in memory denoted by ‘L1’, and the instruction loads a register (a storage area that can be acted on directly by the processor), ‘R5’, from a memory location denoted by the symbol ‘COUNTER’. A program that does anything useful contains thousands of such instructions. A program called an assembler translates these symbolic instructions into actual machine instructions, one-for-one.

A Meta (or Macro) Assembler, such as MASM, can be programmed to generate different types of instructions for different types of computers, such as the 494 and the 3760. In addition, it can be set up to generate multiple instructions from a single symbolic line of text. It often happens that certain patterns of instructions are used over and over again, with slight changes.

Another approach to programming uses high-level languages, such as FORTRAN or COBOL. These languages are typically useful for scientific or commercial applications, but don’t generate code efficient enough for very demanding tasks, such as handling communication circuits.

I proposed a set of MASM macros that would implement control structures similar to those of high-level languages, allowing IF-THEN-ELSE statements and LOOPs to be expressed easily. With help from two other programmers, we implemented these control structures. Once they were adopted by the NASCOM team, reliability of the code and programmers’ productivity were improved.
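To give the flavor of what such macros do, here is a toy macro expander written in Python; the mnemonics and labels are invented, not real 494 or 3760 assembly, and the actual macros were of course written in MASM itself.

# Toy illustration of a control-structure macro: one IF/ELSE construct expands
# into the compare-and-branch instructions the programmer would otherwise write
# by hand. Mnemonics, labels, and layout are invented.
label_count = 0

def expand_if(condition_reg, then_lines, else_lines):
    """Expand an IF/ELSE construct into straight-line code with branches."""
    global label_count
    label_count += 1
    else_label, end_label = f"ELSE{label_count}", f"ENDIF{label_count}"
    code = [f"      JZ   {condition_reg},{else_label}   . branch if condition is false"]
    code += then_lines
    code += [f"      J    {end_label}", f"{else_label}"]
    code += else_lines
    code += [f"{end_label}"]
    return code

for line in expand_if("R5",
                      ["      LR   R6,TRUEVAL"],
                      ["      LR   R6,FALSEVAL"]):
    print(line)

Hiding the branch and label bookkeeping behind a single macro call is exactly what reduced the bugs in hand-written assembly.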

I found this effort rewarding for a couple of reasons. First, it was a real contribution to the efficiency of our team. Second, it was an application of lessons I had learned in studying computer science at school, and totally unanticipated by most of the programmers on our team; really ‘outside the box’. This initiative further improved my reputation in Sperry.

Previous: Learning and enhancing 3760/CPx

Next: Operating System

1978-01-02: GSFC – 3760/SIOC

As I dug into it, the operation of the 3760 was fairly straightforward. The machine was basically a minicomputer, internally the same as a computer Sperry made for the U.S. Navy, known as the AN/UYK-20. It was well documented, and all of the NASCOM code was documented and available for me to read. The SIOC was not well documented. The only thing resembling a program was a listing of octal numbers (hundreds of lines of 6- or 12-digit numbers using the digits 0 through 7) representing the code loaded into the SIOC processors when they were powered on. This code was stored on cassette tapes (the same as audio cassettes of the time). There was also a very brief manual that listed the instructions the processors could execute.

While waiting for an actual assignment, I took it on myself to annotate a copy of the octal listing. I used the manual to interpret the octal listing as instructions, and wrote the instruction code next to each line of octal numbers, an incredibly tedious process. Once I could read the instructions, I then identified the segments of straight-line code, conditional branches, loops, subroutines, and data areas. Repeated passes over this “disassembled” listing gave me a basic outline and understanding of what the code was doing.
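The annotation step amounted to hand-disassembly. The Python toy below shows the idea on invented data; the opcode values, field layout, and mnemonics are all made up, since I no longer have the SIOC manual.

# Toy hand-disassembly: split each octal word into opcode and operand fields
# and name the opcode. All values and mnemonics here are invented.
OPCODES = {0o12: "LR", 0o24: "JZ", 0o30: "ADD"}

def annotate(octal_word):
    """Return the octal word with a guessed mnemonic and operand beside it."""
    word = int(octal_word, 8)
    opcode, operand = word >> 6, word & 0o77      # assumed field layout
    return f"{octal_word}   {OPCODES.get(opcode, '???'):4} {operand:o}"

for word in ["1205", "2417", "3002"]:
    print(annotate(word))

Doing this by hand for hundreds of lines, then tracing the branches, loops, and data areas, is what made the process so tedious.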

One of the development tools that ran on the 494 was called MASM, for Meta-Assembler. This was a program that could be configured to generate code for different kinds of computers, and was already configured to generate either 494 or 3760 code. I was able to add the ability to generate code for the SIOC, and then to write a MASM program that re-generated the octal listing for the SIOC. Of course, this wasn’t exactly the same as the original code, since I had no idea what names the original programmers had used for their data areas, branch targets, and subroutines; and it had no comments to explain it. Nonetheless, I could provide suggestive names based on what I surmised about the functioning of the program.

Eventually, I was given programming tasks for the 3760, and deepened my understanding of the programs in that computer, as well as the interaction between the 3760 and SIOC. A key aspect was the way in which buffers (temporary storage areas for messages) were obtained from and returned to the memory pool of the 3760. One of the problems of this system was an occasional “crash” in which the 3760 ran out of memory, and had to reboot. Rebooting took a couple of minutes, and could cause the loss of many packets of data through the system. There were five or six 3760 programmers at this time, all more experienced than me, of course. However, none of them had ever looked into the SIOC code, and none could figure out the source of the crashes. My analysis revealed an error in the way the buffers were allocated, such that occasionally a buffer was never returned to the pool, which nowadays is called a “memory leak”. This was rare enough that the system could run for a long time before losing so many buffers that none were available when a new one was needed. The actual time between crashes depended on the amount of data being processed, but was typically several days. One work-around was to restart the computers on a regular schedule; however, this was not always convenient, especially during a mission of some sort.
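The shape of the bug is easy to show in miniature. This Python sketch is not the 3760 code (the pool size and the one-in-a-thousand error rate are invented); it only shows how a rarely taken path that forgets to return its buffer eventually exhausts a fixed pool.

# Sketch of a buffer-pool memory leak (invented sizes and error rate).
class BufferPool:
    def __init__(self, count):
        self.free = list(range(count))     # fixed memory: a pool of buffer indices

    def acquire(self):
        if not self.free:
            raise MemoryError("no buffers left -- time to reboot")
        return self.free.pop()

    def release(self, buf):
        self.free.append(buf)

def handle_packet(pool, malformed):
    buf = pool.acquire()
    if malformed:
        return                             # BUG: early return skips release()
    pool.release(buf)

pool = BufferPool(100)
packets = 0
try:
    while True:
        handle_packet(pool, malformed=(packets % 1000 == 999))
        packets += 1
except MemoryError:
    print(f"pool exhausted after {packets} packets")

With 100 buffers and one leak per thousand packets, the sketch runs for about 100,000 packets before dying, which mirrors how the real system could run for days before crashing.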

I developed an approach to modifying the SIOC code to avoid the problem. However, I could not implement it because changes to that part of the system were only authorized by the original development group in Salt Lake City. My proposal was sent to them, and a trip was arranged for me to meet with them. When I explained what I had found, and how I had found it, they accepted my proposal, and updated the code. In addition, they gave me additional documentation and MASM code that would allow our group to make changes without having to make trips to Salt Lake City. This episode greatly enhanced my reputation in Sperry.

Later, I noticed a way to reduce the number of buffers that needed to be initially allocated, so that buffers could be allocated to a circuit only after data had been detected on it. This effectively doubled the number of buffers available to the system in the same amount of memory. Unfortunately, I implemented this enhancement at the same time that Sperry’s salesmen were promoting a doubling of 3760 memory. When NASCOM management realized they didn’t need to spend money to buy more memory, they stopped that deal. This episode hurt my reputation in Sperry, at least with the salesmen.

This was my first work experience in a team of programmers working together, and I was quite pleased with myself to find my work appreciated, and to be looked to as an expert in the 3760/SIOC and related areas. It was also my first experience with the potential conflict between solving a problem with software and solving it by buying more hardware.

The SIOC was highly specialized, and not many customers used it. One group that did was a U.S. Army network technology unit located at Fort Huachuca, in Sierra Vista, Arizona. Once it was known that I had enhanced the SIOC at GSFC, it was arranged that I would go to Fort Huachuca to explain what I had learned, and to advise their programmers on techniques for programming it. Thinking back on the events, this might have stepped on the toes of the SIOC development group in Salt Lake City; however, I never heard of any complaints.

Previous: NASCOM

Next: Enhancing development tools

1978-01-01: GSFC – NASCOM

In January 1978, I started work for Sperry Rand (later Sperry, Sperry Univac, and Unisys), in the group that supported NASA’s Communication (NASCOM) organization at the Goddard Space Flight Center (GSFC) in Greenbelt, Maryland; our office was off-site a few miles away. My badge is shown below (with my Social Security Number obscured; we were so innocent in 1978).

(Image: Sperry badge, with SSN obscured)

NASCOM supported all of NASA’s manned and unmanned missions. When I started, NASCOM included three world-wide networks: voice, low-speed teletype, and high-speed data circuits. The Sperry staff was divided into three groups for these different types of communications, and I was assigned to the 3760/CPx group supporting the high-speed network.

When I arrived, NASA was just preparing for the launch of a satellite (Seasat) that would use new high-speed communication capabilities, and everyone in the group was very busy supporting last-minute testing. With nothing much to do, I spent my time reading manuals and looking at code, trying to understand what these computers did, and how they did it.

The operational complement was two 3760s; two more served as “hot” backups (meaning they could be switched on-line immediately in the event of a problem with the primary set), and a fifth was used for development and test activities. A 3760 was the size of a desk. In fact, you could sit at it; all of the processor and memory circuits were in the space occupied by the drawers in a normal desk. There was a terminal and keyboard on the desktop, and a pair of tape-cassette drives for loading software. Next to the desk was a refrigerator-sized cabinet housing the specialized I/O processing unit, called an SIOC. Each cabinet had 20 or 30 circuit boards, about 18-24 inches square, each with two or four processors. Each processor connected to a single high-speed data line. In 1978, high-speed meant something up to 56,000 bits per second (56kbps), about the speed of the last dial-up modem you might have had. These data lines carried data in packets of two sizes: 1200 bits or 4800 bits. This was different from the low-speed data lines, which handled variable-length messages containing individual characters. Although the I/O was handled by the SIOC, the memory of the 3760 held the data buffers that the data passed through. When a packet came into a buffer, the 3760 examined its header and determined which output lines it should be sent out on; most data packets were sent to multiple destinations (up to six, as I recall). The memory of the 3760s was quite limited, with only enough space for about 100 packets at any moment, supporting about 30 data lines.
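The routing step can be summarized in a few lines of Python. The header format and routing table here are invented; the point is only that the 3760 looked up the packet’s destination code and copied the packet to every output line listed for it.

# Sketch of store-and-forward fan-out routing (invented header format and table).
ROUTING_TABLE = {
    # destination code in the packet header -> output lines to copy it onto
    "HOUSTON": ["line07", "line12"],
    "MADRID":  ["line03"],
}

def route_packet(packet):
    """Return (output line, payload) pairs for every line the packet goes out on."""
    header, payload = packet
    return [(line, payload) for line in ROUTING_TABLE.get(header, [])]

print(route_packet(("HOUSTON", b"1200-bit data block")))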

This routing function is essentially the same as what your cable modem does. It has multiple connections (mine from Verizon has a main network connection to the cable, four ports to plug in ethernet cables, and a WiFi network), and can accept input data from any of the connections, and route it to the appropriate output connection. In this way, any of the computers, phones, tablets, and printers in my house can communicate with any of the others and the internet, without needing dedicated wires connecting each one to all of the others.

In addition to the 3760s, NASCOM used Univac 494 mainframe computers. These were large computers, each occupying several refrigerator-sized cabinets and containing powerful processors, memory, large tape and disk storage, and many I/O connections. There was one online system and a hot backup that was also used for software development. The 494 ran the development software for all of NASCOM’s computers.

Previous: Sperry/Unisys/GSFC – Overview

Next: Learning and enhancing the 3760/CPx

 

1978-01-00: Sperry/Unisys/GSFC – Overview

My time working for Sperry at Goddard Space Flight Center (GSFC) was a pivotal period in my working life, from January 1978 to September 1990. I want to describe it in some detail, and I have divided the description into seven articles, representing successive stages of my time with Sperry/Unisys. I dated the articles as 1978-01-0x, to keep them grouped together and in order, although the events run through 1995. The articles cover the following topics:

  • NASCOM
  • Learning and enhancing the 3760/CPx
  • Enhancing development tools
  • Operating system
  • Silver Snoopy
  • Knowledge engineering
  • Unisys

Next: Introduction to NASCOM

 

 

1960-06-01: My So-called Career

My work experience has included the following “positions”:

  • In Barstow, I sometimes made a little money mowing lawns (at the urging of my parents).
  • Beginning about 1960 in Barstow, while I was in junior high school, I occasionally helped Dad with the parking lot sweeping business. I don’t recall being paid for this, but it was more like work than play, though the sweepers were pretty neat. Also the trips to the dump, populated with feral goats.
  • Dad later partnered in a house-maintenance business, and sometimes I helped out. I think I got paid for this.
  • In high school, I passed the San Bernardino County civil service exam, and worked part-time in the library in Highland, mostly shelving books. It wasn’t as interesting as I’d hoped.
  • At Carnegie Tech, I worked part-time in the cafeteria, and as a User Consultant in the computer center, helping other students with their programs and keypunch jams. In summer 1969, I worked in the computer center as a programmer on the Univac 1108 operating system.
  • Also at Carnegie, I worked in the cafeteria and in the library. The library was better.
  • In the summer of 1967, I worked a couple of weeks as a door-to-door encyclopedia salesman. (Notice I didn’t say I sold encyclopedias door-to-door.)
  • By 1968, Uncle Bob made his mother, who hired for FEDCO, hire me as summer help (Family comes first!). This lasted through 1970, and became full-time in 1971. To work there, I had to join the Teamsters Union!
  • In January 1972, I started working as a contractor for the Jet Propulsion Laboratory, in Pasadena, California.
  • At the University of New Hampshire, I was a teaching assistant and a research assistant, culminating in a field trip to Australia.
  • In January 1978, I started working for Sperry Rand (which eventually became Unisys). Awarded Silver Snoopy.
  • In May 1995, I started working for the Tax Systems Modernization Institute of the Illinois Institute of Technology Research Institute. Note the word “Institute” appears three times in the organization’s name. Eventually, IITRI was reorganized as Alion, and became an employee-owned company.
  • In 2005, I started working for Management Systems Designers, which was acquired by Lockheed Martin in December 2006.
  • In 2009, I started Castle Knob Publishing.
  • In August 2010, I became an employee of the federal government, in the Internal Revenue Service.
  •  If I’m lucky, I can retire near the end of 2016; if not, maybe 2017. [update: I retired October 28, 2016.]

1977-10-00: Alice Springs, Australia

After leaving UNH, I still had one outstanding obligation to Prof. Chupp, to support his part in the expedition to Australia to test his gamma-ray telescope. I left Susan in Maryland and flew from Dulles to San Francisco, where Gary met me in the airport for a brief visit between flights. I flew on a Qantas 747 via Honolulu (in the middle of the night) to Sydney, then changed planes to Alice Springs via Adelaide, on Ansett Air. In those days, airliners had smoking sections. Apparently the Australian sense of egalitarianism dictated that the division between smoking and non-smoking was the aisle; as I recall, the non-smoking section was the right side of the plane.

The rest of the team (I’ve lost track of their names, but I recall three other physicists from Prof. Chupp’s lab, besides myself) arrived about the same time; we stayed in the Oasis Motel.

Chupp and his team had participated in previous balloon flights in Palestine, Texas. They knew the procedures and equipment involved. This expedition was in the southern hemisphere, and involved (if I recall correctly) about ten groups planning to fly various instruments to observe the southern skies. The launch truck and related equipment had been brought from the US, and were to be left for future Australian use.

Each group’s instrumentation and related equipment was delivered to a hangar at the Alice Springs airport, and had to be assembled, tested and readied for flight. On the first day, the Australian manager of the expedition gave a safety lecture, emphasizing the hazards of reaching under pallets, etc., where snakes might shelter. He said the local snake’s poison could kill a person in 20 minutes, but not to worry too much, because it was only 18 minutes to the hospital.

As each team declared its readiness for flight, it was put on the preference list. When the launch director determined a day was suitable for launch, the teams were given the opportunity to launch or decline in order of their readiness.

During this time, Carl was working in Tehran. When he got some time off, rather than going home to Maryland, he flew to Alice via Hong Kong and hung out with us for a couple of weeks. There was little time for sightseeing, but I did get a chance to go to a local glider club and had a flight in one of their two-place planes, a staggered-seat Australian design called the Kookaburra. The back seat person’s legs were beside the front seat. One day when we were at the glider port, someone was attempting to set a world record for altitude in a Piper Supercub; I think he appeared to have succeeded, but these things require review of instruments and documentation to be official. The day I flew, there was a brush fire approaching the glider port, and the club members dragged all of their planes out of the hangar onto bare ground where they couldn’t be burned. When I flew, the pilot took advantage of the rising hot air from the fire to gain altitude in the Kookaburra. It was a little smoky, but fun anyway.

Carl rented a Mini Moke (like a jeep), and we drove around some of the surrounding countryside. It seemed a lot like Arizona to me: desert with dry grass everywhere. The rock formations looked interesting, though we didn’t get to the famous Ayers Rock. I think the Moke had “roo bars” on the front, to protect the grill and lights in the event of hitting a kangaroo.

The winds in the stratosphere blow at high speed from east to west for half the year, then reverse direction for the next half year. While the direction is reversing, the winds are slow enough to allow a balloon to remain in the vicinity of its launch site for many hours or even some days. The ability to track the balloon and to receive data sent by radio from the instruments to their ground stations limits the duration of a flight.

I think four or five teams were ready before us, but even so, Chupp declined some opportunities hoping for better wind forecasts. We finally launched after about seven weeks of preparation and waiting. The balloon drifted to the northwest, and eventually we could no longer receive data. I was chosen to board a twin-engine Piper and fly out to terminate the flight. We flew out and landed at a muddy airstrip at a station (ranch), known as “Beasley International”. The balloon was still drifting northwest, so we took off again, with some concern about being able to break free of the mud. We landed once more at another station, where a woman and her two children lived. The father was away from the house, tending cattle. These people get very few visitors. The woman served us tea, and we chatted for a while. They had a radiotelephone that served to connect the kids to their school. They had their books at home, and the teacher gave instruction to widely distributed students by radio.

We used their radiotelephone to verify that it was safe to terminate the flight. It is not a good idea to have an empty balloon fall into the path of an airliner. Given the OK, I flipped a switch on a little box we were carrying. This sent a signal to the balloon that separated the parachute and instrument package from the balloon, and ripped a hole in the envelope. Then we got back in the plane and flew out to find where it landed so it could be recovered. The pilots were skilled at this task, and quickly spotted the white and orange parachute on the ground, marking its location on a map. They asked me if I could find it again from the ground. I had my doubts, but it turned out that wasn’t going to be my job anyway.

The next day, I made arrangements to go home, spending one day in Sydney. An observer from NASA, who sponsored some of the teams, was also leaving that day, and we went to the Sydney zoo, crossing the harbor on a ferry right next to the famous opera house, and with a good view of the famous bridge.

My flight home was on a 747-SP (Special Performance). This version has a shorter fuselage but holds the same amount of fuel as a standard 747, so it can fly fewer passengers over a longer distance. We flew directly from Sydney to Los Angeles. This flight wasn’t possible in the other direction with normal winds. The departure was delayed after we boarded, and the captain said one of the restrooms needed a repair: “This is an eleven hour flight, and we aren’t leaving until all the restrooms are working.”

I was glad to get home. Alice was nice enough, but it was a long time to be away. When Susan met me at Dulles, she said she had forgotten I had a beard.

 

2013-10-10: Generation Zero

Perhaps the most interesting work experience I’ve had was during my first real job, at JPL: the programming of the first proof-of-concept robot planetary rover.

In 2013, I decided to use 3D animation technology to create a reconstruction of the demo that resulted from that work, as well as a document explaining how the work was done. I called the work Generation Zero: My Best Job Ever, since it came before the first real generation of robot projects at JPL. I used Blender to construct and animate the robot and its environment, and iMovie on the Mac to edit the segments into the final movie.

Though published by Castle Knob, this title was not made to be sold. The story is freely available from the Castle Knob site, and the video is freely available on Vimeo.

1972-01-00: JPL

When Susan and I married in October 1971, I didn’t have a job. Her former boss, Dr. Agnes Stroud, arranged an interview at the Jet Propulsion Laboratory that resulted in my being hired as a contractor in January 1972, to perform computer-related tasks as needed.

I’ve documented the high point of my JPL “career” elsewhere. In summary, the tasks I worked on were:

  • Helping Dr. Len Jaffe, the Principal Scientist of the Lunar Surveyor program, analyze the size distribution of lunar fine particles returned to earth by the Apollo 12 mission.
  • Developing the control software for JPL’s first proof-of-concept planetary rover.
  • Working in the Image Processing Laboratory, applying (and developing, in a minor way) techniques to analyze digital images (from planetary probes, ERTS (now Landsat), and aerial photography). We also enhanced the launch images for the Skylab launch, to try to determine how much damage occurred when a solar panel tore loose during the launch. A lot of the work was determining land use and vegetation types in the Verde Valley, Arizona, and on the shore of Lake Mead.

While I was working on the robot, a representative of General Electric came to JPL to demonstrate a teleoperated manipulator. I was one of the few selected to actually try it out after its “handler” demonstrated it. He used it to pick up a large steel plate, and to use a large plastic basketball backboard like a ping-pong paddle, batting around a basketball. I simply moved the unloaded arm around for a couple of minutes, but they gave me a picture/certificate! There is a website with a video of the Man-Mate in action, featuring the same man who demo’d it at JPL.

I left JPL in August 1973.

(Images: Man-Mate demonstration photo and certificate)