Intelligent Machinery Week 1 Summary

Menabrea’s work begins with an explanation of Charles Babbage’s machine, which is similar in structure to a Turing machine, except that a needle indicates position rather than some sort of marker on a tape.  More specifically, these machines appear to be calculators for specific purposes, particularly the production of multiplication tables.  As such, they are limited to finite computational methods of mathematical analysis, rather than any sort of asymptotic or infinite analysis.  Arguably, though, all mathematical operations are numerically composable from these elementary operations, so this covers an infinite number of possible calculations.  He goes on to recognize explicitly that this machine does not necessarily have any form of consciousness.  The author refers to tabulation by seventh-order differences as the particularly complicated calculation for which this machine was designed.  Fascinatingly, he prototypes the idea that any general sort of data can be represented numerically and operated upon.

A comparison is then drawn to the Difference Engine, which Menabrea argues does not “weave algebraic patterns” in the way the Analytical Engine does.  The Difference Engine is described as capable only of synthesis, not analysis.
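To make the contrast concrete, here is a minimal Python sketch (my own illustration, not from the readings) of the method of differences that the Difference Engine mechanized: a degree-n polynomial has constant n-th differences, so once the initial differences are set up, every further table entry follows by addition alone.

```python
# Tabulating a polynomial by the method of differences.
# After the initial registers are loaded, each new value needs
# only additions -- the operation the Difference Engine mechanized.

def tabulate(first_values, steps):
    # Build the initial difference columns from the first few values.
    diffs = [first_values[:]]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    regs = [col[0] for col in diffs]    # one register per difference order
    table = []
    for _ in range(steps):
        table.append(regs[0])
        for i in range(len(regs) - 1):  # each register absorbs the next
            regs[i] += regs[i + 1]      # addition only, no multiplication
    return table

f = lambda x: x * x + x + 41            # Euler's prime-generating quadratic
print(tabulate([f(0), f(1), f(2)], 8))
# -> [41, 43, 47, 53, 61, 71, 83, 97]
```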

Returning to Lovelace’s notes, the annotations observe that her description of the Analytical Engine effectively allows the creation of a state machine.  In Note D, Lovelace demonstrates the capability of such state machines for complex calculation.  As further annotated, she also covers loops, variables, and conditional branching; as an example, a connection is drawn to the many-body problem.
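A minimal sketch of the loop-with-conditional-branching style those notes describe might look like the following; the specific computation here is my own hypothetical illustration, not Lovelace’s program.

```python
# A hypothetical illustration of the control structures attributed to
# Lovelace's notes: variables, a loop, and a conditional branch.
# Example task: sum the series 1/1 + 1/2 + ... until terms become small.

term_index = 1          # a "variable" held in the engine's store
total = 0.0
while True:             # the loop: repeating a block of operation cards
    term = 1.0 / term_index
    if term < 1e-3:     # the conditional branch: exit when terms are small
        break
    total += term
    term_index += 1
print(total)
```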

Babbage’s piece discusses Prony’s response to the challenge, posed by the French government, of deriving massive mathematical tables, seemingly as a way to eventually introduce his own Difference and Analytical Engines.  To describe the division of mental labor, he separates mathematical work into functions, as one might within some other enterprise, and presents his engines as enhancements to those functions.

Padua’s illustration clarifies the nature of the Analytical Engine as a mechanism that attaches a store of information to an adding machine, so as to expand the scope of problems it can solve.  Padua emphasizes that, prior to Lovelace’s analysis, Babbage did not associate his Engines with the possibility of symbolic calculation (as opposed to numerical calculation).  Padua goes on to explain that the punched cards in these engines are the equivalent of machine code in modern computers.

The James chapter on psychology extends Babbage’s point, arguing that human brains are not optimized for formal logic or arithmetic.  James sketches a model of the brain in which each neuron is characterized as a gate whose state depends on the values of the neurons preceding it.  In formulating these “circuits,” he characterizes association through the ideas of total and partial recall.

The introduction to the McCulloch-Pitts paper expands on this idea by characterizing the neuron directly as the quantal, all-or-none element of these circuits.  One contrast with James’ initial connectionist model is its focus on the capability for parallel computation.  In the end, their simplified computational model of the neuron proved more representative of what later became important as logic gates than of biological neurons.
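As a concrete illustration (my own, with illustrative weights and thresholds), a McCulloch-Pitts unit fires exactly when the weighted sum of its binary inputs reaches a threshold, which is already enough to realize ordinary logic gates:

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, a threshold,
# and an all-or-none output.  Weights and thresholds are my choices.

def mp_unit(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mp_unit([a, b], [1, 1], 2)   # fires only if both fire
OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)   # fires if either fires
NOT = lambda a:    mp_unit([a],    [-1],   0)   # an inhibitory input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```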

The next McCulloch-Pitts paper focuses on audiovisual perception, and so references an idea of apparitions as constant representatives of objects across images or any other sensory experience.  One inferred simplification that has not held up is the idea that there is some sort of scanning and selection process in various regions of the brain; this has clearly not turned out to be true.

The Rosenblatt paper presents the Perceptron, a single-layer neural network capable of learning more advanced pattern matching than previous computational statistical methods.  It has served as the root of the growing “intelligence” of neural nets.

From my interpretation, Block’s modifications of the Perceptron focus on reducing bias by restricting weight updates to misclassified examples.  This works well but produces a very long tail before error-free classification is achieved.
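A minimal sketch of this error-driven update (the standard perceptron rule on made-up data, not Block’s exact formulation): the weights move only when an example is misclassified, and stop moving once the data are separated.

```python
import random

# A single-layer perceptron trained with the error-driven rule:
# weights change only when an example is misclassified.

def train(data, n_features, epochs=100, lr=1.0):
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, target in data:          # target is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != target:          # update only on a mistake
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

# Toy linearly separable data: an OR-like labeling.
data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train(data, n_features=2))
```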

Second to last, the Hinton paper (with Rumelhart and Williams) seminally introduces backpropagation, the fundamental algorithm enabling the training of neural nets.  Similar methods were arrived at independently, earlier by Paul Werbos and around the same time by Yann LeCun.
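A compressed sketch of the idea (my own minimal version, not the paper’s notation): a forward pass computes activations, and the chain rule carries error gradients backward through each layer.  Here a tiny sigmoid network learns XOR, the classic task a single-layer perceptron cannot solve; convergence depends on the random initialization.

```python
import math, random

# Backpropagation on a tiny 2-H-1 sigmoid network, trained on XOR
# by stochastic gradient descent.

random.seed(1)
H = 3                                   # hidden units (illustrative choice)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
lr = 0.5

def forward(x):
    h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sig(sum(W2[j] * h[j] for j in range(H)) + b2)
    return h, y

for _ in range(30000):
    x, t = random.choice(data)
    h, y = forward(x)
    # Backward pass: chain rule, with sigmoid derivative s * (1 - s).
    dy = (y - t) * y * (1 - y)
    for j in range(H):
        dh = dy * W2[j] * h[j] * (1 - h[j])   # uses W2[j] before updating it
        W2[j] -= lr * dy * h[j]
        b1[j] -= lr * dh
        W1[j][0] -= lr * dh * x[0]
        W1[j][1] -= lr * dh * x[1]
    b2 -= lr * dy

for x, t in data:                       # outputs usually near 0/1
    print(x, t, round(forward(x)[1], 2))
```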

Finally, Feynman’s piece heads in an entirely different direction: the possibility of fabricating the minute physical systems that make up modern computers.  He proposes both externally controlled equipment for doing so and chemical synthesis, in the way that chemists already prepare experimental compounds.  All things considered, he makes some fantastic predictions of what electronics manufacturing actually turned out to be like and of how electron microscopes would continue to develop.
