Musings on Cognitive Compression, Generality, and Autonomy

In a discussion about whether simple developmental AIs with rudimentary self-programming are “alive” or not, I wanted a way to distinguish them from big switch statements (BSSs) that do the same thing. The agents and their environments are so simple (refined) that, given any single environment, writing a BSS for it is trivial. The distinguishing idea is obvious: the AIs learn through experimentation and, simple and limited as they are, can learn to thrive in many more environments than a BSS of similar length.
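To make the contrast concrete, here is a minimal toy sketch (my own illustration, not from the original discussion): the hypothetical bss_agent hard-codes the right action for one particular environment, while the hypothetical LearningAgent estimates action values by trial and error, so the same short program can come to thrive in environments it was never written for.

```python
# Toy illustration only: hard-coded BSS vs. a tiny learning agent.
# The observations, actions, and reward signal are made up for this sketch.
import random


def bss_agent(observation):
    # "Big switch statement": the right action for one specific environment
    # is baked in; a different environment needs a different BSS.
    if observation == "food_left":
        return "move_left"
    elif observation == "food_right":
        return "move_right"
    else:
        return "wait"


class LearningAgent:
    """Epsilon-greedy learner: keeps a running reward estimate per (observation, action)."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.estimates = {}  # (observation, action) -> running average reward

    def act(self, observation):
        if random.random() < self.epsilon:
            return random.choice(self.actions)  # explore
        # Exploit: pick the action with the best estimate so far.
        return max(self.actions,
                   key=lambda a: self.estimates.get((observation, a), 0.0))

    def learn(self, observation, action, reward, rate=0.2):
        key = (observation, action)
        old = self.estimates.get(key, 0.0)
        self.estimates[key] = old + rate * (reward - old)
```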

The property I was looking for is that the AI is ‘simpler’ than the sum of the environments it can thrive in. (For code, simplicity can be roughly equated with code length.) The degree to which it is simpler can be called “cognitive compression” (named by Ben when I asked him what this property is called >.<). CC is, in a sense, a measure of generality. Are there limits on the generality of a system without autonomy? That is, viewed through the CC lens, is there a class of environments a system can’t thrive in without autonomy?
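One way to make this concrete (a rough formalization of my own, not a standard definition): write $\ell(x)$ for the code length of $x$, $T(A)$ for the set of environments agent $A$ thrives in, and $\mathrm{BSS}_E$ for the shortest big switch statement that thrives in environment $E$ alone. Then

$$\mathrm{CC}(A) \;\approx\; \frac{\sum_{E \in T(A)} \ell(\mathrm{BSS}_E)}{\ell(A)},$$

so $\mathrm{CC}(A) > 1$ means the agent covers more environments than its code length “should” allow.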

In the same discussion, a definition of an autonomous agent by Stan Franklin came up:

An autonomous agent is a system situated within and part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to affect what it senses in the future.

Of course, even a non-autonomous agent acting in an environment will have to sense that environment and act on it to be more than a remote-controlled tool. So the big point here is having its own agenda (goals) and checking whether its actions work toward those goals or not.

The standard idea of moral autonomy is also enlightening here: the ability of an agent to weigh multiple goals and their estimated outcomes to see which course of action best fits its agenda. For example, I want that apple and I don’t want to spend money, but I want to avoid conflict with cops even more than I don’t want to spend money. Autonomy is the flexibility to sacrifice some (sub)goals for the big picture or the long-term agenda. (And when there’s no good way to compare the goals, indecisiveness ensues :D.)
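A toy sketch of that weighing, with made-up goals, weights, and scores (purely illustrative, not anyone’s actual architecture):

```python
# How much the agent cares about each goal (illustrative numbers).
weights = {"get_apple": 1.0, "keep_money": 0.5, "avoid_cops": 2.0}

# How well each candidate plan serves each goal (1 = fully, 0 = not at all).
plans = {
    "buy_apple":   {"get_apple": 1, "keep_money": 0, "avoid_cops": 1},
    "steal_apple": {"get_apple": 1, "keep_money": 1, "avoid_cops": 0},
    "walk_away":   {"get_apple": 0, "keep_money": 1, "avoid_cops": 1},
}


def best_plan(plans, weights):
    # Pick the plan whose weighted fit to the overall agenda is highest.
    return max(plans, key=lambda p: sum(weights[g] * plans[p][g] for g in weights))


print(best_plan(plans, weights))
# -> "buy_apple": the keep_money subgoal is sacrificed because
#    avoiding the cops matters more to the overall agenda.
```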

So, is autonomy actually useful then? Yes. Complex environments without perfect solutions require the ability to weigh options and make allowances. Having the goals and the success verification inside the system makes it more robust; otherwise the system has to rely on an external source for motivation and reward, which adds inefficiencies and room for failure. It also restricts the set of environments the agent can thrive in to those that provide an external goal source (or come with a list of environment-specific goals).

The flip side of the argument is that if you give your general problem-solving agent these capabilities, it’ll basically be autonomous. Give it a scheme for dealing with potentially conflicting tasks and some long-term concurrent tasks, and you’ve got a system with moral autonomy and goals. For efficiency, throw in the ability to check how it’s doing with its tasks, and you’ve made your agent autonomous. (Note that this may not be an advisable way to build an autonomous general problem-solving agent.)
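A minimal sketch of what that construction might look like (hypothetical names and structure, assuming Python; per the note above, not a recommended design): the agent keeps its own long-term tasks, resolves conflicts by weighing priority against self-assessed progress, and verifies its own success instead of waiting for an external reward.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List


@dataclass
class Task:
    name: str
    priority: float
    is_done: Callable[[Any], bool]    # self-checked success criterion
    progress: Callable[[Any], float]  # self-assessed progress in [0, 1]


@dataclass
class AutonomousSolver:
    tasks: List[Task] = field(default_factory=list)

    def step(self, world_state):
        # Retire tasks whose success the agent can verify on its own.
        self.tasks = [t for t in self.tasks if not t.is_done(world_state)]
        if not self.tasks:
            return None

        # Conflict scheme: weigh priority against self-assessed progress,
        # so stalled high-priority tasks bubble up.
        def urgency(task):
            return task.priority * (1.0 - task.progress(world_state))

        current = max(self.tasks, key=urgency)
        return self.plan_action_for(current, world_state)

    def plan_action_for(self, task, world_state):
        # Placeholder for the underlying general problem solver.
        return f"work_on:{task.name}"
```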

Autonomy is a beneficial feature and a basic part of cognitive compression :-D