1992-02-15: Mirror Worlds

Mirror Worlds, or the Day Software Puts the Universe in a Shoebox… How It Will Happen and What It Will Mean (1991)

by David Gelernter (1955-)

The following are notes and extracts from this interesting book.  The ideas are grand, but the prophecies are incomplete and seem unlikely.

Terminology

He coined a term for the understanding of the big picture, in the face of too many details to comprehend at once: topsight.  “If insight is the illumination to be achieved by penetrating inner depths, topsight is what comes from a far-overhead vantage point, from a bird’s-eye view that reveals the whole – the big picture; how the parts fit together.  (“Overview” comes fairly close to what I mean.  But an “overview” is something you either have or you don’t.  Topsight is something that, like insight, you pursue avidly and continuously, and achieve gradually.)”

The idea behind Linda is to construct ensembles of information machines.  (Gelernter calls them infomachines; I think informats, or infomats, sounds better.)

The Linda Architecture

He uses the idea of tuples in tuple space, and the ways each infomachine can use them.  There are four basic instructions:

out – creates a tuple, and sets it adrift in tuple space

in – grabs a tuple out of tuple space

read – reads the content of a tuple, but leaves it in tuple space

eval – creates a new infomachine (which eventually becomes a tuple)

A given informat can address a problem by harnessing a number of other informats to attack parts of the problem.  The tuples are a communication standard by which problem specs and results can be transferred between informats.
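The four operations are easy to picture in code.  Below is a minimal sketch in Python of a Linda-style tuple space; the book gives no implementation, so the class name, the None-as-wildcard matching, and the threading details are all my illustrative assumptions (in is spelled in_ because in is a Python keyword):

```python
import threading

class TupleSpace:
    """Illustrative Linda-style tuple space, not from the book."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        """Create a tuple and set it adrift in tuple space."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    @staticmethod
    def _match(pattern, tup):
        # None in the pattern acts as a wildcard field (an assumption here).
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def in_(self, *pattern):
        """Grab a matching tuple out of tuple space (blocks until one exists)."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

    def read(self, *pattern):
        """Read a matching tuple but leave it in tuple space."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        return tup
                self._cond.wait()

    def eval(self, tag, fn, *args):
        """Spawn a new infomachine whose result eventually becomes a tuple."""
        def worker():
            self.out(tag, fn(*args))
        threading.Thread(target=worker).start()

ts = TupleSpace()
ts.eval("square", lambda x: x * x, 7)   # spawn an infomachine
print(ts.in_("square", None))           # blocks, then -> ('square', 49)
```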

Design

An interesting sidelight on software architecture vis-a-vis design:

Software architecture is no medium for untrammeled whimsey.  It imposes ironclad discipline on the designer:  The point is to solve a hard problem efficiently, not to make art.  But good designers in any medium make art despite themselves; whether they work in steel or concrete or software or silicon, that’s precisely how you recognize them.  Some of the best art being produced today is “applied art” in exactly these tough media – because art, after all, requires discipline.  You can’t push if nothing is pushing back.  The popular belief that you get art by pushing against society’s assumptions and expectations used to make sense but is now quaintly obsolete, because after all, these expectations and assumptions – about content and form – collapsed years ago.  You can offend people, sure, but you can’t surprise them.  (Attention Connoisseurs:  If you fling yourself in a screaming massive assault at a supposedly locked door that turns out, in the event, to be wide open, what you get is called “slapstick,” not “art.”)  Technology, on the other hand, still pushes back.

Damned hard, too.  You’d better believe it.

Trellis Machines

An ensemble monitoring many streams of raw data and generating responses to actuators is described as a Trellis.  The informats at the lower levels perform low-level data filtering and actuator commanding, and the higher-level ones perform more abstract analyses and make goal-driven decisions to be implemented by lower levels.
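A minimal sketch of that division of labor, under assumed representations (streams as lists of samples; the function names, window size, and limit are illustrative, not from the book):

```python
# Bottom-level informats filter raw streams; an upper-level informat
# combines their outputs into a more abstract, goal-level judgment.
def low_level(samples):
    """Low-level filtering: smooth the last few raw samples."""
    window = samples[-5:]
    return sum(window) / len(window)

def high_level(filtered_values, limit=100.0):
    """Abstract analysis across streams: flag when any runs hot."""
    return "alarm" if max(filtered_values) > limit else "normal"

streams = [[98.0, 99.5, 101.2, 102.7, 104.1],
           [70.1, 70.3, 70.0, 69.8, 70.2]]
print(high_level([low_level(s) for s in streams]))  # -> "alarm"
```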

An interesting monitor concept is described for displaying the status of a Trellis:  A horizontal plane, seen in perspective as a receding surface, is divided into tiles, one for each informat, with location in the plane correlating with position in the (2-d) Trellis (e.g., x separates different data streams and y distinguishes high/low levels).  Each informat that is in a ‘normal’ state is represented by a flat tile in a ‘cool’ color (blue in the example), with no other distinguishing marks.  As an informat’s state exceeds certain thresholds, its tile tips up toward the vertical orientation, its color changes to a ‘hotter’ color (yellow/orange/red), and a legend is printed on the tile.  This display is called a Trellis Dashboard.
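A hypothetical rendering rule for a single tile, following that description; the thresholds, the [0, 1] state scale, and the function name are my assumptions:

```python
def tile_appearance(state, thresholds=(0.5, 0.7, 0.9)):
    """Map an informat's state in [0, 1] to (tilt_degrees, color, legend)."""
    if state < thresholds[0]:
        return 0.0, "blue", ""            # normal: flat, cool, unmarked
    # How far past the first threshold, as a fraction of the remaining range.
    severity = (state - thresholds[0]) / (1.0 - thresholds[0])
    tilt = 90.0 * severity                # tips up toward the vertical
    if state < thresholds[1]:
        color = "yellow"
    elif state < thresholds[2]:
        color = "orange"
    else:
        color = "red"
    return tilt, color, f"state={state:.2f}"
```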

Fetch and Generalize

He describes two basic operations for making sense of special cases in light of past experience.  These are Fetch and Generalize (he usually calls them plunge and squish).  He also describes the use of these operations in finding precedents for cases, and making comments on cases (specifically in a medical diagnosis context).

Fetch is the process of submitting a case (represented as a set of attribute/value pairs) to a database search engine.  The engine compares all known cases to the presented one, and finds those which are ‘close’ to the presented case.  Closeness is based on matches or similarity in attributes, modulated by evocativeness or appropriateness to the purpose at hand (diagnosis or pizza preferences).  The relevant close matches serve as precedents for the given case, or as sources of conclusions drawn from the matching cases.
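A minimal sketch of Fetch under assumed representations: cases as attribute-to-value dicts, and closeness as the (optionally weighted) fraction of matching attribute/value pairs, where the weights stand in for evocativeness.  Every name here is illustrative:

```python
def closeness(case, candidate, weights=None):
    """Weighted fraction of the case's attributes the candidate matches."""
    weights = weights or {}
    score = sum(weights.get(attr, 1.0)
                for attr, value in case.items()
                if candidate.get(attr) == value)
    total = sum(weights.get(attr, 1.0) for attr in case)
    return score / total if total else 0.0

def fetch(case, database, k=3, weights=None):
    """Return the k stored cases closest to the presented case."""
    ranked = sorted(database, key=lambda c: closeness(case, c, weights),
                    reverse=True)
    return ranked[:k]

past = [{"fever": "high", "cough": "dry", "dx": "flu"},
        {"fever": "none", "cough": "wet", "dx": "bronchitis"}]
print(fetch({"fever": "high", "cough": "dry"}, past, k=1))  # -> the flu case
```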

Generalize is the process of taking a set of cases and combining them into a super-case with composite attributes.  This allows comments of two kinds: conclusions and speculative guesses.  Both of these are ways of filling in the blanks of the presented case.  Speculation occurs when a generalized case has attributes that don’t appear in the sample case.  A program could speculate that the generalized value(s) might appear for that attribute in the sample case, suggesting a test to perform or an inquiry to make, and cite precedents for comparison.  An attribute might be possible (appears in many cases), definite (appears in nearly all cases), or surprising (never or seldom appears).

(There is a third process in his FGP program: Project allows you to pick what you want out of a generalization.)
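Continuing the sketch: Generalize folds a set of cases into a super-case whose values are labeled definite, possible, or surprising by how often they recur, and Project picks out the attributes of interest.  The frequency cutoffs below are my illustrative assumptions, not the book’s:

```python
def generalize(cases):
    """Combine cases into a super-case of (value, label) composites."""
    counts = {}
    for case in cases:
        for attr, value in case.items():
            counts.setdefault(attr, {})
            counts[attr][value] = counts[attr].get(value, 0) + 1
    super_case = {}
    for attr, values in counts.items():
        value, n = max(values.items(), key=lambda kv: kv[1])
        freq = n / len(cases)
        if freq > 0.9:
            label = "definite"     # appears in nearly all cases
        elif freq > 0.5:
            label = "possible"     # appears in many cases
        else:
            label = "surprising"   # seldom appears
        super_case[attr] = (value, label)
    return super_case

def project(super_case, attrs):
    """Pick what you want out of a generalization."""
    return {a: super_case[a] for a in attrs if a in super_case}
```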

Speculation requires the capacity to be distracted by an interesting possibility.  One way to achieve this is to temporarily set aside the presented case, take the interesting attribute/value, treat it as a separate case, and perform Fetch and Generalize on it.  The result of this generalization might be a super-case with some well-defined attributes worth speculating on.  If the new generalization is a close enough match to the original test case, speculate on the differences.

The generalization process might repeatedly form the same generalization.  If there is a mechanism for retaining generalizations for a time, and noticing their recurrence, they can become useful cases in their own right, by virtue of encapsulating a lot of experience.  These might form the kernel for a ‘common sense’ effect.

Forgetting is also a mechanism for reducing complexity.  Suppose there are two or more cases that are very similar (under some measure).  If their difference is below some threshold, they might be individually forgotten and replaced by their generalization.  Differences can include the relative ages of two cases, which grow closer as time passes (e.g., when compared as ratios).
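A sketch of that merge-and-forget step, again over attribute-to-value dicts; the difference measure and the threshold are stand-ins of my own:

```python
def difference(a, b):
    """Fraction of attributes on which two cases disagree."""
    keys = set(a) | set(b)
    if not keys:
        return 0.0
    return sum(a.get(k) != b.get(k) for k in keys) / len(keys)

def merge(a, b):
    """The pair's generalization: keep only the attributes they agree on."""
    return {k: v for k, v in a.items() if b.get(k) == v}

def forget(cases, threshold=0.2):
    """Replace near-duplicate cases with their generalization."""
    kept = []
    for case in cases:
        for i, other in enumerate(kept):
            if difference(case, other) < threshold:
                kept[i] = merge(case, other)   # both originals are forgotten
                break
        else:
            kept.append(case)
    return kept
```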

The generalization and forgetting mechanism could be a way to automatically generate rules of inference, scripts for scenarios, etc.

Gelernter says, “the process of bringing knowledge to bear on a problem appears to be logically identical to the process of accumulating knowledge in the first place.”

He also supposes a ‘concentration knob’, which determines how evocative something must be in order to be distracting.  Lowering the concentration level can have three effects: laziness about generalizing (allowing evocativeness to be less focused), a lower distraction threshold, and a tendency to continue excursions rather than immediately returning to the previous case.  The spectrum of approaches resulting from the settings of this knob runs from reasoning to free association, with a range of non-rational thinking styles in between.
