Making cute and interesting digital life.
We’ve decomposed the problem of intelligence into two broad fields: Individual and System intelligence, i.e. that of humans or agents, and that of general systems like evolution or human society or an ant colony.
Basically, the complex dynamics leading to intelligent behavior / adaptation to environments / problems have to be somewhere.
Further: Dogs are alive. Dogs are cute. Dogs have lots of subtle, fairly intelligent aspects to their behavior. However, their potential is questionable. Dogs can learn a few hundred words. They can emotionally understand us pretty well (at least at a coarse level, though perhaps more subtly in some ways than most of us). However, if you put a dog in an interesting and complex enough environment / universe, it probably won’t develop greater and greater intelligence. In fact, even if you have a whole society of dogs, it won’t. With evolution, over a looong, looong time, it may. But the potential there is that of evolution.
How interesting is a one-off digital life form that has such a limited scope? Of course, it will be a lot of fun (and practical uses abound) to make use of its subtle touch. Interacting with its sensual sensitivities can be heart-warming, encouraging and more. But, nonetheless, disappointing.
It’s kind of interesting to notice, actually. If you want the potential for significant intelligence development, even an entity at the level of a dog ultimately falls back on the intelligence potential of evolutionary dynamics rather than its individual intelligence potential. (Caveat: if you don’t mind life forms with obviously bounded potential, dogs are bundles of loving fun ~<:3)
What about humans? Well, shucks. Individual humans may also fail the potentiality (generality?) test. Even if you ignore the limited lifespan issue, the intelligence expansion of humans seems to plateau. Even worse, mindsets / concepts / paradigms seem to progressively stagnate in most humans too. “Science progresses one funeral at a time” (Max Planck). So, then, wouldn’t a human-level digital being also, well, disappoint? Kinda. Yeah.
The hope is that a human mind in a more transparent + easily modifiable substrate won’t disappoint. A human mind upload may qualify, although it may want additional tools for increased mental transparency (as it could still be pretty difficult for a human to understand its own workings, even if it could ‘look’ at them). In this case, the human-level mind could analyze its own mind and make intelligent ‘external’ adjustments that would actually elevate its intelligence beyond the plateau.
– Wouldn’t there just be another kind of plateau waiting for it?
—- We’ll have to find out, won’t we ;-)
– Hard physical / theoretical limits?
A basic idea here seems to be that mind-internal dynamics will reach intelligence plateaus – or just skill plateaus. Mind-external adjustments can overcome these plateaus.
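(A toy analogy of my own, nothing above depends on it: picture a hill-climber whose ‘mind-internal dynamics’ are a fixed, small search step on a bumpy landscape. It stalls at a local optimum – a plateau – until something outside its own loop changes the step size. The landscape, step sizes, and numbers below are arbitrary illustrative assumptions, a sketch of the idea rather than a claim about real minds.)

    import math
    import random

    random.seed(0)  # deterministic toy run

    def fitness(x):
        # A rugged slope: globally increasing, but bumpy enough to trap small steps.
        return x - 3.0 * abs(math.sin(x))

    def climb(x, step, iters=2000):
        # "Mind-internal dynamics": propose a nearby point, keep it only if it's better.
        # The step size is fixed from the inside -- the climber can't change it itself.
        for _ in range(iters):
            cand = x + random.uniform(-step, step)
            if fitness(cand) > fitness(x):
                x = cand
        return x

    x = climb(0.0, step=0.1)   # stalls at a local optimum: the plateau
    print("before external adjustment:", round(fitness(x), 2))

    x = climb(x, step=3.0)     # a "mind-external adjustment": the operator is changed from outside
    print("after external adjustment: ", round(fitness(x), 2))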
Evolutionary dynamics provide one route to mind-external adjustments, albeit on a trans-individual scale.
Direct brain / cognitive substrate / cognitive architecture modifications are another route, not yet implementable.
However, civilization / social networks / social collaboration seem to be another method of overcoming the individual plateaus. Humans will soon gain the ability to directly augment their minds, but that’s not the work of an individual human. Nonetheless, through our trans-individual global mind we are overcoming the intelligence potential plateau!
The above is a case of System Intelligence, though. Even in humans, System Intelligence ultimately wins out as the carrier of potentiality – though disentangle it from our individual intelligences we cannot.
(Err, heh, we’re pretty slow too.)
– What about intelligence potential plateaus for intelligent systems?
—- Do they reach them and overcome them through individual breakthroughs?
—- The dual of individual intelligence potential plateaus :O
– Or are they inherently / generally more capable of “mind-external adjustments”?
—- What would a mind-external adjustment be for evolutionary dynamics?
—- Or for the (human) global mind?
Okiddly doki, this reeks of a mythical goose that lays golden eggs.
What if a digital mind’s mind-internal dynamics were somehow intimately tied to mind-external adjustments – or, better, to mind-architecture adjustments? Would it in principle be less susceptible to intelligence potential plateaus? (Especially if in a moderately supportive milieu.)
Or, even if that’s ‘possible’, would it most probably just fuck itself up in one of many possible ways?
I guess it’s fairly likely that the easiest way to have such a system not fuck itself up is to have it be reasonably intelligent, like, say, at least smart-human level : | . Not being able to casually modify your own mental architecture is generally a good safety feature for less intelligent minds, even if it induces intelligence plateaus.
However, I’d bet that once there are AGIs, or far more intelligent augmented humans, or w/e with a greater understanding of mental dynamics, they will be able to engineer ‘simpler’ minds that nonetheless have this potential without fucking themselves up. Maybe they wouldn’t stay dog-level for that long if the environment stimulated them to grow, but what if it were cozy (dogs are royalty)?
Hmm . . .
There’s also the issue that the self-modifying mind would generally need the intelligence to comprehend and intentionally modify itself. If its methods of self-modification were implicit and not based on such explicit understanding, would those methods themselves run into limits? Or is there some weird way past that strict intelligence threshold? . . .