Tag Archives: ai

2016-08-24: Cyc

I first became aware of the Cyc artificial intelligence (AI) engine around 1984, while working for Unisys, and have followed its progress ever since. From time to time I have downloaded and fiddled with the OpenCyc version; this basically contains a relatively large ontology knowledge base. More recently I’ve learned that the more-complete ResearchCyc can be used for non-commercial purposes. According to Wikipedia, “In addition to the taxonomic information contained in OpenCyc, ResearchCyc includes significantly more semantic knowledge (i.e., additional facts) about the concepts in its knowledge base, and includes a large lexicon, English parsing and generation tools, and Java based interfaces for knowledge editing and querying. In addition it contains a system for Ontology-based data integration.”

I am interested in exploring these additional capabilities. One way to explore them might be to try to implement Asimov’s Three Laws of Robotics. However, I would start with more limited knowledge entry and query exercises.
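
To give a flavor of the kind of knowledge entry and query exercise I have in mind, here is a toy sketch in Common Lisp. It is not the Cyc API (Cyc has its own representation language and interfaces); the relations and constants below are simply invented names, loosely in the spirit of Cyc's isa and genls predicates.

;;; Toy knowledge entry and query.  Not the Cyc API; all names here are
;;; invented for illustration.

(defvar *kb* '()
  "A tiny knowledge base: a list of (relation arg1 arg2) triples.")

(defun assert-fact (relation arg1 arg2)
  "Record a single fact in the knowledge base."
  (push (list relation arg1 arg2) *kb*))

(defun direct-values (relation arg1)
  "Return every X such that (RELATION ARG1 X) has been asserted."
  (loop for (rel a b) in *kb*
        when (and (eq rel relation) (eq a arg1))
          collect b))

(defun genls? (sub super)
  "True if SUB is SUPER or a (transitive) specialization of it."
  (or (eq sub super)
      (some (lambda (c) (genls? c super))
            (direct-values 'genls sub))))

(defun isa? (thing collection)
  "True if THING is an instance of COLLECTION or of a specialization of it."
  (some (lambda (c) (genls? c collection))
        (direct-values 'isa thing)))

;; Knowledge entry ...
(assert-fact 'genls 'Robot 'Agent)
(assert-fact 'isa 'R2D2 'Robot)

;; ... and a query.
(isa? 'R2D2 'Agent)   ; => T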

 

1978-01-06: GSFC – Knowledge Engineering

Since the 1950s, computer scientists have confidently predicted that computers would one day (real soon now!) be able to think like people. The field devoted to achieving this goal has been called ‘Artificial Intelligence’ (AI) all this time. By the mid-1980s, one of the field’s achievements was the development of ‘expert systems’, which could assess numeric and symbolic information and make deductions based on sets of rules reflecting the knowledge of experts in some domain. An example of the time was an expert system that could diagnose blood diseases as well as the best experts in that field. Such systems are not programmed in low-level languages but in a special-purpose rule-based language, in which a set of rules is applied to input data by an ‘inference engine’. The rules might determine that additional information is needed to make a diagnosis and prompt the user for more data or for certain tests to be run. A very capable expert system might have 50,000 rules, derived from extensive collaboration with experts in the domain under study. This specialized subfield of AI was called ‘knowledge engineering’, and its practitioners, knowledge engineers.

The programming language in which such inference engines were written is called Lisp. It is not widely used, or even known to most programmers, but it was the preferred language for AI research and for the development of expert systems. I had studied it to a small extent in school, but had never used it in practice. Proponents of the language had developed specialized computers, called ‘Lisp machines’, designed to execute Lisp programs efficiently; even their operating systems were written in Lisp.
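
To make these ideas concrete, here is a toy forward-chaining engine in Common Lisp. It is only a sketch with invented facts and rules; the shells of that era carried thousands of rules, plus uncertainty handling and explanation facilities that this omits.

;;; A toy forward-chaining inference engine.  The rules and facts are
;;; invented for illustration; real expert-system shells were far richer.

(defvar *facts* '()
  "Facts known so far, each a simple symbol.")

(defvar *rules*
  '(((fever headache)                -> (possible-infection))
    ((possible-infection stiff-neck) -> (order-more-tests)))
  "Rules of the form ((antecedents...) -> (consequents...)).")

(defun rule-fires-p (rule)
  "A rule fires when all of its antecedents are already known."
  (every (lambda (f) (member f *facts*)) (first rule)))

(defun forward-chain ()
  "Apply the rules repeatedly until no rule adds a new fact."
  (loop for added = nil
        do (dolist (rule *rules*)
             (when (rule-fires-p rule)
               (dolist (new (third rule))
                 (unless (member new *facts*)
                   (push new *facts*)
                   (setf added t)))))
        while added)
  *facts*)

;; Example: assert two observations, then let the engine draw conclusions.
(setf *facts* '(fever headache stiff-neck))
(forward-chain)   ; => (ORDER-MORE-TESTS POSSIBLE-INFECTION FEVER HEADACHE STIFF-NECK)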

In 1984, Sperry jumped on the expert system bandwagon. They formed partnerships with related hardware and software providers, and funded the training of perhaps a hundred programmers in the Common Lisp language, the principles of expert system development, and special-purpose tools to support such development. I was selected for the first cohort of 20 students. We were sent to Sperry’s training center in Princeton, New Jersey, for about six weeks of classes.

Following the general training, those of us in the Washington, D.C. area reported to an office in Northern Virginia for more training. We were to develop examples of small expert systems (without benefit of actual experts) relevant to our customers. I worked on one to diagnose problems with NASCOM’s high-speed circuits. Attending these sessions meant that for the first time I missed covering a Space Shuttle launch, on January 28, 1986. While in class, I received a call from Susan that the Challenger had exploded during launch.

Sperry salesmen, with my support explaining the potential of expert systems, convinced NASCOM management to buy two Sperry Explorers, a re-branded Lisp Machine from Texas Instruments, along with Intellicorp’s Knowledge Engineering Environment (KEE). I spent much of 1986 working with NASCOM experts and capturing their rules for diagnosing problems. This work is amazingly difficult, because many highly skilled people don’t realize what knowledge they’re using when they do their work, and can’t just write down their rules of inference. Knowledge engineering is a combination of observing an expert at work, asking what they’re thinking, and probing into the underlying concepts and relationships that enter into their decisions. The concepts and rules, once elicited from an expert, can be added to the system, or edited to refine the expert system’s decision-making. Extensive testing and validation with the expert is critical to the success of the system.
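
As a purely hypothetical illustration, a rule elicited from a technician might end up looking something like the one below, written in the toy format sketched earlier; the conditions and conclusion are invented, and the real rules were far more detailed.

;;; Hypothetical elicited rule, in the toy ((antecedents) -> (consequents))
;;; format from the earlier sketch.  The specifics are invented.

(defvar *elicited-rule*
  '((high-bit-error-rate carrier-present loopback-test-fails)
    ->
    (suspect-local-modem)))

;; After review with the expert, the rule joins the rule base and the
;; system is re-tested against known past failures.
(push *elicited-rule* *rules*)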

Eventually, we placed one of the systems on the production floor, where ordinary operators could use it to troubleshoot problems and less-experienced technicians could get help diagnosing and correcting them. I don’t think it was actually used very much.

Another programmer and I convinced Sperry management to purchase an Apple Macintosh II with a Lisp machine co-processor, along with Gold Hill’s Golden Common Lisp and some other software. This was much cheaper than a dedicated Lisp machine; however, KEE was not available for it, and it was much less capable than the Explorer/KEE combination. Nonetheless, the two of us used it to develop small expert systems and other AI-related programs. She was particularly interested in programs that could understand and generate plain English sentences as the interface between human operators and systems.

The impact of expert systems on NASCOM and other Sperry customers was minimal. The technology had been over-hyped, and an ‘AI Winter’ followed: a period in which reality’s failure to match expectations bred distrust and a lack of support. The AI field has suffered a number of AI Winters, and 1987 saw the end of the Lisp machines and of efforts such as NASCOM’s diagnostic expert system. I sometimes wonder what happened to the machines at GSFC.

Sperry’s foray into knowledge engineering was a high point of my time with the company. It made me more aware of research and the difficulty of transforming interesting ideas into viable products. The fact of my selection also made me more aware of the importance of reputation in making an impact on an organization. Even today, I remain interested in the AI field, though I don’t have much expectation of contributing to it.

One benefit from this work was an opportunity to attend a conference at the Microelectronics and Computer Technology Corporation (MCC), a well-funded research organization in Austin, Texas, supported largely through the Defense Department. At the conference, I saw several interesting demos, including one that inspired my work on the Meta-Dimensional Inspector.

Previous: Silver Snoopy

Next: Unisys