Category Archives: Advice

What *exactly* do you want to know? Risk tolerance versus loss reactivity

You might not know it, but one of the core components of traditional financial advice is hotly debated, and rightly so. Risk tolerance has a couple of definitions, but all are broadly about how much risk a person is willing to bear in exchange for a higher expected return on investment. While this should be a core means of tailoring portfolios to clients, the question of whether we can accurately gauge it, and how we should do so, is one of the most contested issues both in academic circles and among practitioners. Many financial advisors lament that risk tolerance questionnaires aren’t accurate, and many clients resent being put through time-consuming quizzes which all seem the same.

It’s easy to make both the controversy and the frustration go away. Traditional definitions of risk tolerance conflate two different aspects of a person’s psyche, giving inaccurate measurements of both. Once you separate risk tolerance from loss reactivity, you’ll have a much more useful view of the individual.

The first conception of risk tolerance comes from academic economists, and has been around for a long time. For economists, it’s a matter of how you make decisions about future outcomes: it’s all about uncertainty over where you’ll end up at the end of an investment horizon, say 20 years from now. In formal decision models, risk tolerance is combined with expected returns and expected risk to reach a decision about whether or not to invest. Using these three inputs, there are three reasons someone may decrease their stock allocation:

  1. Their expectation of benefit (returns) has gone down.

  2. Their expectation of risk has gone up.

  3. Their tradeoff measure (risk tolerance) has gone down.

A key point is that, to an economist or decision scientist, a good measure of risk tolerance doesn’t move at all in response to market changes, so option three shouldn’t occur. By definition it can’t: risk tolerance is a fixed personality trait which helps you make decisions in varying environments or across different options.
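To make those three inputs concrete, here is a minimal sketch of one common textbook formalisation of the decision (mean-variance utility with a fixed risk-aversion coefficient); the model choice and the numbers are my own illustration, not anything taken from a specific advisory process:

```python
# A common textbook formalisation: mean-variance utility U = E[r_p] - 0.5*A*Var(r_p),
# where A is a risk-aversion coefficient (the inverse of risk tolerance).
def optimal_risky_weight(expected_return, risk_free, volatility, risk_aversion):
    """Utility-maximising fraction in the risky asset: w* = (E[r] - rf) / (A * sigma^2)."""
    return (expected_return - risk_free) / (risk_aversion * volatility ** 2)

baseline        = optimal_risky_weight(0.07, 0.02, 0.18, risk_aversion=3.0)
lower_returns   = optimal_risky_weight(0.05, 0.02, 0.18, risk_aversion=3.0)  # reason 1
higher_risk     = optimal_risky_weight(0.07, 0.02, 0.24, risk_aversion=3.0)  # reason 2
lower_tolerance = optimal_risky_weight(0.07, 0.02, 0.18, risk_aversion=5.0)  # reason 3

print(f"baseline {baseline:.0%}, lower returns {lower_returns:.0%}, "
      f"higher risk {higher_risk:.0%}, lower tolerance {lower_tolerance:.0%}")
# baseline 51%, lower returns 31%, higher risk 29%, lower tolerance 31%
```

All three changes cut the stock allocation, but only the last reflects a change in the person rather than in the market.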

Thanks to psychologist Daniel Kahneman, we have a fairly good answer to these weightings: generally, people weight a loss about twice as much as an equivalent gain when they make decisions. A very small set of people are risk neutral – they don’t weigh losses any more heavily than gains. And some people weight losses much more heavily – they are not very risk tolerant. So among academic economists, there isn’t too much controversy over risk tolerance (or aversion) as a measure.
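As a rough illustration of that roughly two-to-one weighting, here is the Kahneman-Tversky style value function; the loss-aversion and curvature parameters below are Tversky and Kahneman’s published 1992 estimates, used here purely for illustration:

```python
def prospect_value(x, alpha=0.88, loss_aversion=2.25):
    """Kahneman-Tversky style value function: losses loom larger than equivalent gains."""
    return x ** alpha if x >= 0 else -loss_aversion * (-x) ** alpha

# A 50/50 gamble to win or lose $100 has zero expected dollar value...
psychological_value = 0.5 * prospect_value(100) + 0.5 * prospect_value(-100)
print(round(psychological_value, 1))  # negative: the possible loss outweighs the equal gain
```

The gamble is worth zero in expected dollars, but its psychological value is firmly negative, which is why most people turn it down.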

However, what most financial advisors want to know is “how will this person react to a loss in their portfolio of a given size?” At the center of the concern is that emotionally jumping out of the market after a loss, and hopping back in after a period of gains, is hardly a winning strategy. This emotional market timing reduces investor returns over time, by around 1.6% a year. So good advisors want to minimize their clients’ jumps. At the same time, the reward for bearing risk is higher returns, so they want their client to bear as much risk as possible, subject to not jumping. Advisors care about the risk of the journey, not just the risk at the destination.
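A quick back-of-the-envelope compounding check shows why that roughly 1.6% a year matters; the 7% market return below is an assumed figure for illustration, not a forecast:

```python
years = 20
market_return = 0.07   # assumed average annual return, purely for illustration
behaviour_gap = 0.016  # the ~1.6% a year lost to emotional market timing

per_dollar_buy_and_hold = (1 + market_return) ** years
per_dollar_with_jumping = (1 + market_return - behaviour_gap) ** years
print(f"{per_dollar_buy_and_hold:.2f} vs {per_dollar_with_jumping:.2f} per dollar invested")
# ~3.87 vs ~2.86: roughly a quarter less wealth after 20 years
```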

I call this second measure loss reactivity, and it’s much more in line with what advisors want to know. Research I did while at Barclays Wealth found it to be correlated with risk tolerance, but it is independent enough that you cannot reliably treat individuals as the same on both dimensions. Put a highly risk-tolerant individual who is also highly loss-reactive into a high-risk portfolio, and they’re more likely to be stressed and jump than a non-loss-reactive individual.

Loss reactivity may also be more valuable, as it has more practical uses than the academic definition of risk tolerance. Correctly identifying an individual’s risk tolerance is difficult and often results in false precision. A great example is this graphic from Isaac & James (2000), which shows that the way you elicit a risk aversion coefficient is almost completely responsible for whether individuals are classified as risk-averse or risk-seeking, a huge difference. Each arc represents an individual’s assessment using one method (above 1, risk-seeking) and then using a second method (below 1, risk-averse).
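For anyone unfamiliar with what that coefficient is, here is a rough sketch of how a number like the ones on that chart can be backed out of a single lottery question, assuming power utility u(x) = x^r; this is my own simplified illustration of the general idea, not Isaac & James’s actual procedures:

```python
import math

def implied_risk_coefficient(certainty_equivalent, prize=100.0, win_prob=0.5):
    """Back out r in u(x) = x**r from a stated certainty equivalent for a
    win_prob chance at `prize` (and nothing otherwise): CE**r = win_prob * prize**r."""
    return math.log(win_prob) / math.log(certainty_equivalent / prize)

for ce in (40, 50, 60):
    r = implied_risk_coefficient(ce)
    label = "risk-seeking" if r > 1 else "risk-averse" if r < 1 else "risk-neutral"
    print(f"certainty equivalent ${ce}: r = {r:.2f} ({label})")
```

Different elicitation procedures nudge the stated answer around, and with it the classification flips across 1, which is why a single measured coefficient carries far less precision than it appears to.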

That said, it’s not clear that psychometric risk tolerance beats straightforward financial measures for determining how much risk someone should take, with loss reactivity then managed to ensure they actually get from A to B without jumping.

The good news is that we’re already fairly far along the path to diagnosing it ahead of time, and thinking about how to prevent it from harming investors. More to come.

When to recommend active management?

From the FT:

Under what circumstances would you advise somebody to use active money managers as opposed to index funds?

SA: I can think of many cases in which I would recommend active money managers over index funds. For example, I might be giving the advice to someone I hate, or—and this happens a lot—someone I expect to hate later. I would also recommend active money managers if I were accepting bribes to do so, if I were an active money manager myself, or if it were April Fools’ Day. And let’s also consider the possibility that I might be drunk, stupid or forced to say things at gunpoint. I’ve also heard good things about a German emotion called schadenfreude, so that could be a factor too.

Scott Adams is the creator of the Dilbert comic strip, which appears in thousands of newspapers worldwide and at www.dilbert.com.

Financial advisors make us dumb? Not so fast…

Full disclosure: I work for a firm that provides financial advice. I wouldn’t do it if I didn’t think we were helping our clients, but I still cannot pretend to be an impartial party. I hope, though, that it also makes my opinion a little more informed.


A recent paper on how we use expert financial advice has been grabbing a bit of attention lately; Dan Ariely called the results “troublesome, perhaps even frightening”. In a Wired article titled “Given ‘Expert’ Advice, Brains Shut Down”, the paper’s author states:

When the expert’s advice made the least sense, that’s where we could see the behavioral effect…

In this world, you take advice, integrate it with your own information, and come to a decision. If that were true, we’d have seen activity in regions that track decisions. But what we found is that when someone receives advice, those relationships went away.

Yikes! We stop thinking when someone gives us advice! I need to read the article.

So I did, and you can too (bless PLoS). Reaction? Don’t believe the hype. What follows is about the choices people made; criticism of the neuroimaging analysis is beyond my skill set.

  1. The “expert advice” was a single word, “Accept” or “Reject”, indicating what “the expert would do”. The expert was an economist who explicitly made conservative recommendations¹.
  2. If you tell me someone is an expert and then tell me their opinion, I’d believe you too, and that would affect my behaviour. Questioning someone’s expertise is almost a separate issue from how you integrate advice you take to be genuinely expert into your decisions.
  3. The graph below (“Effect on probability function”) depicts the two estimated probability weighting functions with and without expert advice. There is a statistically significant difference between them (we can reliably tell one from the other), but the change appears pretty small: the maximum difference between the functions, at an objective probability of 0.8, looks to be about 0.05, or a maximum effect of 5% (see the sketch below this list). Quoting the paper: “the expert’s advice led to a significant change … in the direction of the expert’s advice.” So the respondents listened to the expert a bit, but didn’t do anything really stupid. Is that supposed to be surprising or interesting?
  4. The effect on choices was as follows: when not told what the expert would do, 64% of choices matched what he would have recommended; when told his recommendation, 72% did what he recommended. That’s right: a difference of eight percentage points, against a baseline agreement of 64%.

Effect on decisions
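For readers who haven’t met probability weighting functions before, here is a rough sketch using the Tversky-Kahneman functional form; the two gamma values are chosen purely for illustration so the curves differ by roughly the 0.05 read off the figure at p = 0.8, and they are not the paper’s estimated parameters:

```python
def tk_weight(p, gamma):
    """Tversky-Kahneman probability weighting: w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

p = 0.8
without_advice = tk_weight(p, gamma=0.65)  # illustrative parameters only,
with_advice    = tk_weight(p, gamma=0.72)  # not the paper's estimates
print(f"w(0.8): {without_advice:.3f} vs {with_advice:.3f}, "
      f"gap {abs(with_advice - without_advice):.3f}")  # gap of roughly 0.05
```

A gap of that size is detectable statistically, but as the choice data in point 4 show, it moves actual decisions only modestly.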

Thoughts…

These aren’t as impressive results as I thought they’d be. The comparison of the “harm” the advice did was benchmarked against expected utility theory, so it’s questionable whether subjects were “harmed” by it at all. And the “expert” explicitly states he’s giving conservative advice. This is interesting because in actual financial advisory settings, you are never sued for advising a client to take on too little risk. To my knowledge, every article you will read is about financial advisors advising too much risk. This paper, in fact, effectively defines harm as not taking on enough risk!

I do think there is a great study in this, à la the Stanford prison experiment: how far will we follow experts’ advice, including when it’s obviously harmful or wrong? But this experiment doesn’t really ask those questions.

Experts provide advice about things we supposedly know less about than they do, so that we don’t have to know everything they know to come to an equally informed conclusion. While I do think everyone should assess expertise critically², I think this paper should have been titled “Expert advice influences choices and decreases cognitive load”, which is pretty much what we go to experts for.


1. The quote I found was:

Though the recommendations were delivered under his imprimatur, Noussair himself wouldn’t necessarily follow it. The advice was extremely conservative, often urging students to accept tiny guaranteed payouts rather than playing a lottery with great odds and a high payout.

2. Which is exactly why I read the articles myself.

Engelmann JB, Capra CM, Noussair C, Berns GS. Expert Financial Advice Neurobiologically “Offloads” Financial Decision-Making under Risk. PLoS ONE. 2009;4(3):e4957.