
Decision Science News gives a tip on generating less biased estimates of your own future behavior:

When asked to make a forecast, 1) generate an answer under ideal conditions, then 2) generate your forecast. Though you'd think the 'ideal conditions' step would skew your forecast upwards due to anchoring, it does not. In fact, it causes you to generate more realistic forecasts of your own behavior.

This is not too difficult to do. I believe the logic behind it is that simply asking someone to "forecast X" generates results extremely similar to asking someone to "forecast X under ideal conditions". Asking someone (or yourself) to do both makes it very salient that "non-ideal" conditions apply to the second forecast, and thus makes you more likely to take that fact into account. I suspect this technique is also useful for things like forecasting project completion times.


I have noticed something about myself in the last year or so: I fear being proven wrong. When I see some study on a topic about which I have a prior opinion, I get a light dose of fear. To give a more specific example, after I finish writing a post for this blog, I consider reading it over in order to revise it. My immediate reaction to that suggestion is to fear that my belief that I have written an eloquent post is wrong. Ironically, actually discovering errors does not cause me much pain.

This seems like a clear case of confirmation bias, and being biased is bad. Bias leads to costly mistakes, like writing bad posts and making unwise investments. Luckily, I have come to recognize this feeling of fear relatively easily, and recognizing it helps me to identify when I might be prone to confirmation bias and to look out for possible disconfirming evidence. This feeling of fear is my own special marker for confirmation bias.

When I recognize the feeling, I know that I should think carefully about my choices about what evidence to seek. I make sure not to avoid looking at evidence simply because it might force me to change my beliefs, and I often try to actively seek possible disconfirming evidence.

I am sure I don’t always notice my fear, and overcoming even light fear can be difficult, but I am glad that confirmation bias has a relatively clear marker. I don’t know whether other people get this same fear emotion, but I wouldn’t be surprised if it is a general phenomenon.

Consider two ways for an individual to arrive at their policy preferences. First, an individual can consider the inherent goodness or badness of a policy. For example, an individual can consider banning drugs to be good because it is inherently good to prohibit people from using drugs. I’ll call this method Specific Value evaluation. Alternatively, an individual can consider the results of a (rough) utilitarian calculus. For example, an individual can consider banning drugs to be good because they judge that it will improve overall human welfare by reducing drug-related suffering. I’ll call this method Utilitarian evaluation.

Here is my question: Would a person who was required to learn a lot about a certain policy rely more on a Utilitarian evaluation of that policy, and less on judgments about its inherent goodness or badness, than a person who was not required to learn about it?

My intuition is that yes, greater information leads to judgments based more on Utilitarian evaluation than on Specific Value evaluation, because Specific Values are not values in themselves but simply very simplistic Utilitarian evaluations. If this is the case, then the thought that prohibiting drug use is inherently good is simply a way of expressing the thought that prohibiting drug use would very obviously improve human welfare. However, I am not very sure about this.

I am interested in this because I am curious about what sort of politics Professional Voting would lead to. I obviously hope that greater information leads people to make more Utilitarian judgments, because my own preferences are quite utilitarian.

If anyone could point me to relevant research, I would be very grateful.

Your brain doesn’t treat words as logical definitions with no empirical consequences, and so neither should you.  The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity.  Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories.  Notice how I said “you” and “your brain” as if they were different things?

I am sure that I have been guilty of ignoring this in the past, probably even on this blog.