Dictator Bob's Valuism

Since 2011, a significant portion of my personal philosophy has been grounded in values, which, wondrously and frustratingly, do not seem readily reducible. However, values can change, which leads to the interesting question of value dynamics; it reminds me a bit of Omohundro’s basic AI drives (pretty cool, but we need to delve deeper).

Many areas of philosophy are greatly simplified and clarified by being grounded in values.

It occurred to me to call it Valuism, and similar ideas can be found in The Philosophy of Individual Valuism. Alas, this term is associated with many things I’d rather not associate myself with (see the Facebook page or website). And then someone else has tried to define “Valuism” with biases for certain values built in, though he seems to be heading in the right direction at least (imo :p).

Of course, these ideas have most certainly been had before. An easy example is Hume’s keen separation of matters-of-fact and matters-of-value, which he did extend to ethics, etc.

I may not even be original in how I frame things. Yet acceptance of these ideas at a de facto level seems all too rare. And, meh, why not write down my take :-3

Ethics

The first step is to give up on the search for some absolute moral / ethical truths. Basically, this amounts to a sort of moral relativism, but stopping at descriptive moral relativism is a bit unsatisfactory: what, then, do morals depend on? Values.

“Ethics depend on values” is a good start, but how do we get an ethical policy from this? We just need a way of telling / estimating which policies are conducive to bringing about states of the world where our values are satisfied. Something like

E[satisfaction(values vs, world state ws) | policy p]

(The expected satisfaction of values vs in world state ws given policy p is enacted.) Kind of like a utility function :p.
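Here’s a minimal toy sketch of estimating that expectation by sampling, just to pin down the shape of the computation. Everything in it (the Value class, sample_world_state, the policy names, the numbers) is a hypothetical stand-in, not a claim about how to actually measure satisfaction:

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

# Toy sketch: Monte Carlo estimate of E[satisfaction(values vs, world state ws) | policy p],
# assuming (1) a model that samples world states a policy might bring about and
# (2) a satisfaction function per value, mapping a world state to roughly [0, 1].

@dataclass
class Value:
    name: str
    satisfaction: Callable[[Dict[str, float]], float]

def expected_satisfaction(values: List[Value],
                          sample_world_state: Callable[[str], Dict[str, float]],
                          policy: str,
                          n_samples: int = 10_000) -> float:
    total = 0.0
    for _ in range(n_samples):
        ws = sample_world_state(policy)                    # a world state the policy might produce
        total += sum(v.satisfaction(ws) for v in values)   # how satisfied each value is in that state
    return total / n_samples

# Invented example: one value and a crude outcome model for two candidate policies.
values = [Value("health", lambda ws: ws["health"])]

def sample_world_state(policy: str) -> Dict[str, float]:
    mean = {"exercise_daily": 0.7, "stay_on_couch": 0.4}[policy]
    return {"health": min(1.0, max(0.0, random.gauss(mean, 0.1)))}

best = max(["exercise_daily", "stay_on_couch"],
           key=lambda p: expected_satisfaction(values, sample_world_state, p))
print(best)  # almost certainly "exercise_daily"
```

The “ethical policy” then just falls out as whichever candidate scores best under your values and your outcome model.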

If you want to be thorough and empirically grounded, then you will need to formulate a clearly measurable satisfaction metric for each value you care about, as well as muster every bit of research you can on how the policies under consideration affect the state of the world. (Doing this runs into issues similar to those encountered when studying psychological states D;) This provides a surprisingly simple yet important realization: an ethical policy depends on values and on one’s policy-evaluation methods.

(Hey, this looks like “value + evaluation -> policy”, which is similar to “context + schema -> goal,” something out of OpenCog theory of mind :p)

Thus two people or groups with the same values can support contradictory ethical policies (the sketch after the list below illustrates this). Someone supporting policies that are utterly reprehensible to you does not necessarily mean there is an incommensurable difference between your values and theirs. There could be, but more likely than not, they have simply done a very bad job of finding an ethical policy. Not very surprising, as humans often suck at knowing what’s good for them. We are easily compelled to smoke, to eat unhealthily, to chase infatuation, etc. Why should we be good at figuring out which actions / policies will be good for what we value? And that’s it. You still have two hard tasks:

  • What do you value?

  • What can you do to bring about a world where those values are satisfied?
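A quick, made-up illustration of the “same values, contradictory policies” point: two agents share the exact same value, but because their policy-evaluation methods (here, their outcome models) differ, they endorse opposite policies. The names, models, and numbers are all invented:

```python
import random

# Made-up illustration: identical values, different policy-evaluation methods
# (different outcome models), contradictory conclusions about the best policy.

def satisfaction(ws):                 # the one value both agents share
    return ws["wellbeing"]

def alices_model(policy):             # Alice thinks policy_A tends to raise wellbeing
    mean = {"policy_A": 0.7, "policy_B": 0.5}[policy]
    return {"wellbeing": random.gauss(mean, 0.1)}

def bobs_model(policy):               # Bob thinks policy_A tends to backfire
    mean = {"policy_A": 0.4, "policy_B": 0.6}[policy]
    return {"wellbeing": random.gauss(mean, 0.1)}

def preferred_policy(model, n=5_000):
    def expected(p):
        return sum(satisfaction(model(p)) for _ in range(n)) / n
    return max(["policy_A", "policy_B"], key=expected)

print(preferred_policy(alices_model), preferred_policy(bobs_model))
# almost certainly: policy_A policy_B
```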

Your task is made much clearer, though, when seen through the value-based ethics lens.

Q) Can’t you just take a scientific approach to analyzing your values too? A) Not that simply. For instance, a naive approach could end up concluding that you value sugar. Do you really want to value everything you currently value? Everything you’re biologically predisposed to value? (Value dynamics may not be that simple.) Using our scientific self-knowledge helps, though . . .

“Soul-searching” to find what you ‘really’ want is a notoriously hard task. We often don’t fully know how we feel about something until we experience it (and even then).

Perhaps we’re seeing part of why ethics and morals are an endless morass we can never quite seem to be satisfied with: we’re never quite fully sure what we want / value in the first place!

(Side note: this inability to concretely pin down our values may be a quirk of our cognitive architecture, knowledge representation, etc. :p) Q) Don’t we objectively seem to have fairly similar values? A) Yes, it would seem so. The variety we actually see doesn’t really measure up to what can be imagined.

This makes our tasks easier.

Unless you’re a super-powerful dictator, warlord, oligarch, or whatnot, you have to take other people in your society / community into account when devising an ethical policy. For all intents and purposes, a value_X of yours and a value_X of someone else will be merged, and their ‘small differences’ will be smoothed out. You only really have to deal with coarse-grained approximations to your values, thanks to having to deal with your peers.
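One toy way to picture this smoothing-out (purely illustrative; the satisfaction functions and weights are invented) is simply averaging two people’s versions of value_X into a coarse-grained shared one:

```python
# Purely illustrative: "merging" two people's versions of the same value into a
# coarse-grained shared value by averaging their satisfaction functions.

def my_value_x(ws):        # my take on fairness, with a particular emphasis
    return 0.7 * ws["equal_opportunity"] + 0.3 * ws["equal_outcome"]

def your_value_x(ws):      # your take on "the same" value, weighted a bit differently
    return 0.5 * ws["equal_opportunity"] + 0.5 * ws["equal_outcome"]

def shared_value_x(ws):    # the coarse-grained version we actually negotiate policies with
    return (my_value_x(ws) + your_value_x(ws)) / 2

ws = {"equal_opportunity": 0.8, "equal_outcome": 0.4}
print(my_value_x(ws), your_value_x(ws), shared_value_x(ws))  # 0.68 0.6 0.64
```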

And in practice, we can constantly revise our ethical policies on the finer points as they come up and we feel our way through them.

(Alas, some sort of fight / conflict-resolution takes place when groups have incommensurable values :|.) Q) Isn’t this a bit ego-centric? Couldn’t one empirically argue that moral systems tend to support group-fitness? (The wording is inspired by a Facebook conversation with Eray Ozkural.) A) Technically, one may indeed be able to argue that. Or is that just the kind of story we tell ourselves as we try to sell our ethical policies to others? More to the point, it will be hard to tell for sure: an ethical policy pursued as a member of a group has to take the group into account. Individuals can value other individuals, and they can value groups.

Perhaps more interestingly, one could characterize groups as having their own values as well: how do the values of the members come together?

That is, are human moral systems often about group-fitness (directly or indirectly) because they are actual ethical policies of groups? They are attempts to realize the group’s values.

It’s then not surprising that groups often value themselves.

Yet, as above, it doesn’t make sense to set group values or ethical policies in stone. Not that this particular value (a group valuing itself) seems likely to change, but more powerful, clear perspectives are, imo, preferable to narrow and concrete ones (even if currently correct).

All in all, I find the perspective of Valuism provides satisfactory clarity. I find exploring value -> ethical policy relations pretty interesting: given some value system, what ethical policies seem good? Surprisingly, as in a recent blog post, seemingly different value-systems may lead to similar ethical policies!
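To make that last point concrete in the same toy style as before (everything here is invented): flip the earlier sketch, keep the outcome model fixed, and vary the values. Quite different satisfaction functions can still rank the candidate policies the same way:

```python
# Toy illustration (invented numbers): two rather different value systems,
# evaluated with the same outcome model, can still favor the same policy.

outcomes = {                     # a fixed, made-up model of what each policy brings about
    "policy_A": {"leisure": 0.8, "wealth": 0.3, "community": 0.7},
    "policy_B": {"leisure": 0.2, "wealth": 0.9, "community": 0.3},
}

def hedonist_satisfaction(ws):        # values leisure above all
    return ws["leisure"]

def communitarian_satisfaction(ws):   # values community, with a little leisure
    return 0.8 * ws["community"] + 0.2 * ws["leisure"]

for satisfaction in (hedonist_satisfaction, communitarian_satisfaction):
    best = max(outcomes, key=lambda p: satisfaction(outcomes[p]))
    print(satisfaction.__name__, "->", best)   # both print policy_A
```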

~__~ Zzz

I was planning on saying more, but my washing and drying machine went and flooded the kitchen on me D:<