I’m a big proponent of uplifting animals, preferably individuals and not just species, on grounds of both compassion and curiosity. That is, I want to increase the intelligence of dogs, rats, donkeys, and probably even centipedes.
However, to the degree possible, I also want the choice to be voluntary. Yet how can one explain the concept to a dog? Perhaps we will develop tests that gauge the degree to which they want to understand more of the world, and extrapolate that they would agree if they could.
What if this is not good enough? Must we ask whether the dog would be glad it was uplifted? Perhaps something like Coherent Extrapolated Volition (CEV)? Yet, there are many ways to uplift a dog. Could one rig up an uplift that nearly guarantees the uplifted animal is ‘glad’ for it after the fact?
Contemporary humans would view it as monstrous to alter a woman’s mind so that she consents prior to sex, or worse yet, to alter it afterward so she is glad she was raped. How different is this, really?
Is there a choice-preserving way of uplifting an animal? One where the animal will (demonstrably) make the same choices it would have made before? Asking more: is there one that doesn’t limit the uplifted animal’s ability to use its newly granted cognitive abilities to make choices?
Next, what if we can’t develop the theory sufficiently without actually uplifting some animals and asking them? That could be the only way of really getting the needed data.
In the end, we may just have to make the choice on our own. For them. Muahaha, we’re doing that already as we exterminate species after species. :’(
Now what about humans?
Some of us want to greatly transcend human intelligence; others don’t. But does either party satisfactorily understand the ramifications? Likely not.
The same dilemma as with the rest of the animals rears its head. Might my choice to uplift myself be like a contract signed under duress?
A potentially sad scenario: Augmented Intelligence Humans turn out to be a bad idea. They are inherently unstable, their violent tendencies never quite leave, and they come to the strong realization that ignorance really was bliss. There are Minds far surpassing humans that are satisfied, but there is no self-continuous transformation from human minds to any of these classes of minds. The choice of a human to become one of these Minds is akin to suicide – little better than using one’s remains to grow a tree.
Of course, while that would be quite tragic for me (and I doubt it is the case), a theory of the optimal level of intelligence for a class of minds (how do classes of minds correspond with types/species of animals?) would be interesting.