In response to John’s post, I’d like to point out that values are not necessarily heuristics standing in for more thorough utilitarian analyses. People often think an act is wrong inherently, not because they believe it will reduce human welfare. Indeed, if a value is based on welfare, isn’t it really a position supported by a deeper underlying value, namely that increasing welfare is good? It seems to me that a defining characteristic of a moral value is that it cannot be reduced to more fundamental beliefs about what is right and wrong.

As people become more informed, then, their values themselves will not be updated by the new information. However, when formulating positions on the issues of the day, they may find the newly available utilitarian reasoning on a particular issue more compelling than the guidance provided by their values and take a different position than they would have before. Another way to look at it is that deciding your policy preferences based on a utilitarian rationale is difficult and relatively expensive (in time and mental effort) when you are uninformed. Of course, somebody might stick to their values regardless of new information, but I would expect that if you made a given sample of people better informed, some would switch to voting based on utilitarian concerns. So I come to the same conclusions — I’m just quibbling over details.
