escarpment

on a teleology of cybernetics

This afternoon, while discussing Peter Godfrey-Smith's "Living on Earth," I learned about a teleological aspect of cybernetics. My prompt to Number 1:

Reading this part now:

“Darwinism, when it appeared in the next century, was sometimes seen as reviving a role for purpose in nature, because "natural selection" is a bit like choice, and sometimes seen as continuing teleology's banishment, because natural selection is a physical process with no conscious guidance. I think the right interpretation is that teleological ideas did get a low-key revival within Darwinism, and something similar happened within cybernetics, a field that arose a century or so later.”

Tell me everything you know about that teleological moment in cybernetics?

...and ChatGPT 5's response:

That’s a rich seam to mine, Lloyd — one that links cybernetics’ formative years to exactly the kind of philosophical territory Godfrey-Smith is hinting at.
  1. Historical backdrop: teleology’s exile and its return

    • Classical teleology: Aristotle’s final causes (things have intrinsic purposes — acorns “aim” to become oak trees).
    • Mechanistic banishment: 17th–18th century science, especially Newtonian mechanics, sought to expel purpose-talk from natural philosophy, treating phenomena as matter-in-motion under impersonal laws.
    • Darwin’s selective revival: Darwin’s natural selection reintroduced purpose-like language — eyes for seeing — but grounded it in a blind, non-intentional process. This is what philosophers call “teleonomy”: function without foresight.
  2. The teleological pivot in cybernetics (1940s–60s)
    Cybernetics arose in a moment when “purpose” became safe again in scientific discourse — not as metaphysics, but as control and feedback.

    • Norbert Wiener & the Macy Conferences: In Cybernetics (1948), Wiener framed purposiveness as a behavioral pattern definable in terms of goal-directed feedback loops.
    • Ross Ashby’s “homeostat”: Ashby’s adaptive machine could shift internal parameters to maintain stability — a mechanical analogue of “seeking a goal.” Purpose wasn’t mystical; it was negative feedback acting against perturbation (a toy sketch follows this list).
    • Rosenblueth, Wiener, and Bigelow (1943): Their paper “Behavior, Purpose, and Teleology” was pivotal — arguing that “purposeful behavior” could be defined as behavior directed toward the attainment of a goal, measurable in terms of the system’s output trajectory relative to that goal.
    • Key move: make purpose an operational concept — no need to assume conscious intention.
    • They distinguished behavior by nested dichotomies: active vs. passive, purposeful vs. random, feedback-governed (teleological) vs. non-teleological, and predictive (extrapolative) vs. non-predictive.
    • Claude Shannon & information theory: Although not overtly teleological, Shannon’s framework dovetailed with cybernetics’ goal-oriented communication models (minimizing uncertainty toward a target message).
    • Grey Walter’s tortoises: Simple robots with phototaxis (light-seeking) and self-charging behavior. The public saw them as “alive” because they seemed to want to survive.
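
To make Ashby's move concrete, here is a minimal sketch of my own: a toy in the spirit of the homeostat, not a reconstruction of his hardware, and all the numbers are made up. One "essential variable" is disturbed by noise, and the unit's only trick, whenever that variable leaves its safe range, is to randomly re-draw its own feedback parameter until a stable setting happens to stick.

    import random

    SAFE_LOW, SAFE_HIGH = -1.0, 1.0   # viable range for the "essential variable"

    def run_homeostat(steps=2000, seed=0):
        rng = random.Random(seed)
        x = 0.0                          # the essential variable
        gain = rng.uniform(-1.5, 1.5)    # the unit's own feedback parameter
        resets = 0
        for _ in range(steps):
            disturbance = rng.uniform(-0.1, 0.1)
            x = gain * x + disturbance   # dynamics under the current "wiring"
            if not (SAFE_LOW < x < SAFE_HIGH):
                # Ultrastability: no model, no intention, just a random step
                # change of the parameter whenever the variable escapes its range.
                gain = rng.uniform(-1.5, 1.5)
                x = max(min(x, SAFE_HIGH), SAFE_LOW)
                resets += 1
        return x, gain, resets

    x, gain, resets = run_homeostat()
    print(f"final x={x:+.3f}  settled gain={gain:+.2f}  parameter resets={resets}")

Nothing in the loop represents the goal; stability simply persists once a workable parameter is found, and that persistence is all the "seeking" there is.
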
  3. Teleology reframed as teleonomy
    Cyberneticians replaced Aristotelian “final cause” with feedback-defined goals:

    • In engineering: a thermostat “wants” to keep the temperature at 22°C (a toy sketch follows this list).
    • In biology: an organism “seeks” homeostasis.
    • The word teleonomy gained traction in the 1960s (e.g., Ernst Mayr in biology) to describe systems whose apparent purposefulness is fully explainable by physical laws + feedback + selection history.
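
The thermostat is the same idea stripped to its simplest form. A toy sketch, with made-up numbers for heater power and heat loss: the "wanting" is nothing but a switch driven by the gap between the reading and the setpoint.

    SETPOINT_C = 22.0

    def thermostat_step(temp_c, outside_c=10.0, heater_power=2.0, loss=0.1):
        heating = heater_power if temp_c < SETPOINT_C else 0.0   # heat only below the setpoint
        cooling = loss * (temp_c - outside_c)                    # passive drift toward outside air
        return temp_c + heating - cooling

    temp = 15.0
    for _ in range(40):
        temp = thermostat_step(temp)
    print(f"temperature after 40 steps: {temp:.1f} C")   # hovers near the 22 C setpoint
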
  4. Why it mattered

    • It re-legitimized purpose-language in science without metaphysical baggage.
    • In AI precursors, it enabled the idea that machines could “have goals” in a strictly behavioral sense.
    • In philosophy of mind, it blurred the line between living and mechanical systems — making possible later discussions about machine intentionality.
  5. Resonance with today’s AI

    • Modern reinforcement learning, control theory, and even large language models can be analyzed in teleonomic terms: optimization toward a goal state in a dynamic environment (a toy example follows this list).
    • The cyberneticians’ move — defining purpose in purely functional, feedback-driven terms — is the ancestor of how AI research today talks about “objectives” without assuming inner sentience.
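
The same teleonomic reading carries over to a modern learner. A toy sketch, with a made-up goal state and objective: score states against the goal, keep only the changes that improve the score, and the system "heads for" the goal with nothing resembling intention anywhere in the loop.

    import random

    GOAL = (3.0, -2.0)   # target state in a made-up 2-D environment

    def objective(state):
        # Higher is better: negative squared distance to the goal state.
        return -((state[0] - GOAL[0]) ** 2 + (state[1] - GOAL[1]) ** 2)

    def hill_climb(steps=2000, step_size=0.1, seed=1):
        rng = random.Random(seed)
        state = (0.0, 0.0)
        for _ in range(steps):
            proposal = (state[0] + rng.uniform(-step_size, step_size),
                        state[1] + rng.uniform(-step_size, step_size))
            if objective(proposal) > objective(state):   # keep only improvements
                state = proposal
        return state

    print(hill_climb())   # ends up close to (3.0, -2.0)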

If you like, I can reconstruct the 1943 Rosenblueth–Wiener–Bigelow taxonomy in its original form, because it’s almost a Rosetta Stone for how “purpose” was smuggled back into the scientific lexicon without breaking the mechanistic taboo. It’s surprisingly poetic in its sparseness.

...and so, on it went.


[ Walking/reading, along the Greenwich Steps between Leavenworth and Hyde... ]