Episodes

  • May Contain Lies...
    2024/09/11

    If you’ve been around the block, you will likely have seen some eye-rolling use of evidence during meetings. Evidence can be used badly for many reasons: a misunderstanding of what conclusions can be drawn from it, or cherry-picking to support a particular position.

    In this episode, we unpick these issues with Professor Alex Edmans of London Business School. Alex recently published a book, ‘May Contain Lies’, which discusses the methodological, psychological and incentive problems surrounding evidence use.

    We spend a decent amount of time on a core idea from the book, ‘The Ladder of Misinference’. If you think scientifically, there are no earth-shattering revelations here, but I really like it because it is a simple teachable framework that groups can adopt. Alex gives some great examples that everyone can understand and internalise.

    The Ladder deals with the challenges of method, but that’s only half the story. We also have to overcome the behavioural hand that nature has dealt us, e.g. confirmation bias.

    And even if we beat the first two traps, incentives can nudge us away from saying what we really believe.

    Key insights:

    - How Alex tries to move beyond black-and-white thinking and engage with complexity - getting the right mix of data and stories

    - Why bad ideas stick - do you still 'Power Pose'?

    - Changing minds – the power of good questions (there’s a great experiment on pianos and toilets that you can try at home).

    - Trading off the short term and the long term - why he chose the most critical agent to help him publish his book.

    - Understanding neurological carrots and sticks - what happens when we put people in a brain scanner and give them statements they like and ones they don’t?

    - The state of debate around ESG and DEI – ideology, identity and pressures to conform.


    48 min
  • Pants on fire
    2024/06/18

    Lying – it’s something that all humans do. Most of the lies we tell are small and harmless. But deceptive behaviour in the investment industry lowers trust and increases costs and complexity.

    We are deceptive for many reasons and one of them is that we can get away with it. This is because, despite what we might believe, most of us are pretty terrible at spotting lying – including highly experienced financial analysts.

    But what would happen if we all had access to AI-powered technology on our phones that could spot deception with a high degree of accuracy? Would that change how the industry behaves?

    This is no idle speculation – in this episode of Decision Nerds, we explore research that suggests that AI is significantly better at spotting lying than humans. And as we all know, AI has a habit of surprising us by appearing in the wild far faster than we might expect.

    How would this technology impact the investment industry? We discuss:

    𝙏𝙝𝙚 𝙢𝙤𝙩𝙞𝙫𝙖𝙩𝙞𝙤𝙣 𝙛𝙤𝙧 𝙙𝙚𝙘𝙚𝙥𝙩𝙞𝙤𝙣 𝙞𝙣 𝙩𝙝𝙚 𝙞𝙣𝙙𝙪𝙨𝙩𝙧𝙮 – the entirely logical reasons that we don’t always tell the truth

    𝘿𝙞𝙛𝙛𝙚𝙧𝙚𝙣𝙩 𝙠𝙞𝙣𝙙𝙨 𝙤𝙛 𝙙𝙚𝙘𝙚𝙥𝙩𝙞𝙤𝙣 𝙖𝙣𝙙 𝙩𝙝𝙚𝙞𝙧 𝙧𝙚𝙡𝙖𝙩𝙞𝙫𝙚 𝙞𝙢𝙥𝙖𝙘𝙩𝙨 – what are the traps that managers fall into and why

    𝙅𝙪𝙨𝙩 𝙝𝙤𝙬 𝙢𝙪𝙘𝙝 𝙗𝙚𝙩𝙩𝙚𝙧 𝙞𝙨 𝘼𝙄? – the results might surprise you

    𝙒𝙤𝙪𝙡𝙙 𝙖 𝙩𝙧𝙪𝙩𝙝 𝙢𝙖𝙘𝙝𝙞𝙣𝙚 𝙙𝙚𝙨𝙩𝙧𝙤𝙮 𝙩𝙝𝙚 𝙞𝙣𝙙𝙪𝙨𝙩𝙧𝙮 𝙤𝙧 𝙢𝙖𝙠𝙚 𝙞𝙩 𝙗𝙚𝙩𝙩𝙚𝙧? – our take on ‘creative destruction’

    𝙏𝙝𝙚𝙧𝙚’𝙨 𝙣𝙤 𝙩𝙧𝙪𝙩𝙝 𝙢𝙖𝙘𝙝𝙞𝙣𝙚 𝙮𝙚𝙩 - we discuss a few better questions that we can use today.

    Affectiva facial recognition demo
    Paper on analysts' ability to spot CEO deception
    Paper on AI's ability to spot CEO deception
    Lying on CVs

    41 min
  • Underperformance - everyone's got a plan until they're hit in the face
    2024/04/23

    𝗛𝗼𝘄 𝗺𝘂𝗰𝗵 𝘁𝗶𝗺𝗲 𝗱𝗼 𝘆𝗼𝘂 𝗴𝗶𝘃𝗲 𝗮𝗻 𝘂𝗻𝗱𝗲𝗿𝗽𝗲𝗿𝗳𝗼𝗿𝗺𝗶𝗻𝗴 𝗶𝗻𝘃𝗲𝘀𝘁𝗺𝗲𝗻𝘁 𝗺𝗮𝗻𝗮𝗴𝗲𝗿?

    1. Hire a manager after a period of strong performance.

    2. Watch in horror as you don’t experience that outcome, and maybe even the opposite.

    3. Spend a huge amount of time and emotional labour deciding whether your initial thesis was wrong, something has changed at the manager, or market dynamics have shifted.

    4. After sucking up a huge amount of governance time, decide to sack the manager.

    5. Rinse and repeat with potentially similar outcomes.


    Now that might not be you, but it is a story that plays out regularly.


    Experiencing underperformance is one of the unavoidable realities of hiring an active manager. And it’s painful for everyone: clients, managers and advisers. Badly managed pain creates some predictably bad outcomes for all parties.


    One important and manageable issue is time horizon mismatch, and it’s what Joe and I explore in the latest episode of Decision Nerds. We discuss:


    𝗪𝗵𝘆 𝗶𝗻𝘃𝗲𝘀𝘁𝗺𝗲𝗻𝘁 𝗲𝗱𝗴𝗲 𝗶𝘀 𝗻𝗼𝘁 𝗲𝗻𝗼𝘂𝗴𝗵 – managers need an appropriate amount of time to let their edge play out. It may be longer than you think.


    𝗧𝗵𝗲 𝗕𝘂𝘅𝘁𝗼𝗻 𝗜𝗻𝗱𝗲𝘅 – a simple way of articulating time frames that would help everyone.


    𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗵𝗮𝘀 𝗮 𝗽𝗹𝗮𝗻 𝘂𝗻𝘁𝗶𝗹 𝘁𝗵𝗲𝘆 𝗮𝗿𝗲 𝗵𝗶𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗳𝗮𝗰𝗲 – we posit that most people’s predictions of how they will handle the pressure of underperformance won’t match reality when things get tough.

    We talk about the distinct behavioural pressures facing clients, advisers and managers, and what they might consider doing to make things easier. A couple of takeaways if you don’t have time to listen:


    𝗖𝗹𝗶𝗲𝗻𝘁𝘀 - what is your real capacity for tolerating underperformance, and how do you know?

    𝗠𝗮𝗻𝗮𝗴𝗲𝗿𝘀 - have a clear view of the timeframe your results should be judged over. Whilst this is a tough problem, if you don’t have a view, one will be forced on you.

    𝗔𝗱𝘃𝗶𝘀𝗲𝗿𝘀 - is your role to be a behavioural bulwark and help clients through this period, or just keep them happy? They are not always the same thing.


    More information on the Buxton Index is here: https://lnkd.in/eHXDfkfe




    39 min
  • The other F word
    2024/02/20

    Thinking and talking about failure can be tough, especially when we are the ones failing. But it’s important for us as individuals and investment teams to find an effective way to do this that allows us both to learn and to evolve. In this episode of Decision Nerds…

    - Joe pits his inner Brian Blessed against an AI auto-editor

    - We examine Joe’s framework for thinking about errors around investment beliefs, processes and outcomes.

    - We then set this in the context of broader research on different typologies of error: basic, complex and intelligent.

    - Finally, we discuss the practical steps that can make life easier.

    36 min
  • Christmas behaviour
    2023/12/12

    We downed some eggnog and decided to record an impromptu Christmas special. We have a chat about how insights from behavioural science might help the festive holidays proceed more smoothly:

    😂 Why Joe's team might think (in his own words) that he’s an ‘egotistical a*&hole’.

    😧 A simple way to feel better about Christmas (but you might not like it).

    😠 The nudges that retailers use to get us to buy.

    💵 Gift cards or cash as a present – to constrain or not to constrain?

    🤔 Do New Year’s resolutions get a bad rap, and how can we structure them to give a better chance of success?

    🍿 Our favourite Christmas movie as a metaphor for decision-making and a gift that helps people understand their value to the world.

    Wherever you are, however you celebrate (or don’t), we hope that you get some high-quality time with family and friends over the festive period.

    We’ll see you on the other side.

    22 min
  • Principled? Does power corrupt and absolute power corrupt absolutely?
    2023/11/13

    Following the publication of a new book, 'The Fund', Ray Dalio and Bridgewater have been getting attention for all the wrong reasons. The book alleges a toxic culture where the reality of day-to-day life doesn’t match that described in Dalio’s book, ‘Principles’.

    In this episode of Decision Nerds, we dive into the sludge and extract some key points that are relevant for organisations and teams. Key takeaways:

    #𝟭 𝗗𝗼𝗻’𝘁 𝗮𝘀𝘀𝘂𝗺𝗲 𝘄𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗳𝗼𝗿 𝗼𝘁𝗵𝗲𝗿𝘀 𝘄𝗶𝗹𝗹 𝘄𝗼𝗿𝗸 𝗳𝗼𝗿 𝘆𝗼𝘂
    It’s tempting to look at something that appears to be working for someone else and try and replicate it. This is harder than we think – especially when it involves culture.

    #𝟮 𝗛𝗼𝘄 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗶𝘀 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵𝗲𝗱 𝗺𝗮𝘁𝘁𝗲𝗿𝘀
    We all need feedback to learn and develop - how it's given matters. Systems like 'Radical Transparency' may work for some people, but not all people. Probably not most people.

    #𝟯 𝗔𝗰𝗰𝘂𝗿𝗮𝘁𝗲 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝗶𝘀 𝗽𝗿𝗼𝗯𝗮𝗯𝗹𝘆 𝗺𝗼𝘀𝘁 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁 𝗳𝗼𝗿 𝗹𝗲𝗮𝗱𝗲𝗿𝘀
    We often focus feedback on helping junior colleagues learn. Of course that is valuable. However, leaders make key decisions and set the cultural tone and strategic direction. Their greater impact means that they need clear feedback on whether what they are doing is working and where they could develop.

    #𝟰 𝗣𝗼𝘄𝗲𝗿 𝗮𝘀𝘆𝗺𝗺𝗲𝘁𝗿𝗶𝗲𝘀 𝗺𝗲𝗮𝗻 𝘁𝗵𝗮𝘁 𝗹𝗲𝗮𝗱𝗲𝗿𝘀 𝗼𝗳𝘁𝗲𝗻 𝗱𝗼𝗻'𝘁 𝗴𝗲𝘁 𝘁𝗵𝗲 𝗳𝗲𝗲𝗱𝗯𝗮𝗰𝗸 𝘁𝗵𝗲𝘆 𝗻𝗲𝗲𝗱
    Almost every person giving feedback upwards will be making some conscious, or unconscious, calculation of career risk. Leaders need to recognise this. For some leaders this will be a positive (they don’t want that feedback). Those who do want it need to recognise that they still might not get it, and may need to go the (emotional) extra mile to convince people that they really want to hear what they could do better.

    #𝟱 𝗪𝗲 𝘀𝗵𝗼𝘂𝗹𝗱 𝗱𝗲𝘀𝗶𝗴𝗻 𝗳𝗼𝗿 𝗵𝘂𝗺𝗮𝗻𝘀 𝘄𝗶𝘁𝗵 𝗮𝗹𝗹 𝗼𝘂𝗿 𝗻𝗲𝗲𝗱𝘀 𝗮𝗻𝗱 𝗳𝗹𝗮𝘄𝘀
    For many of us, it can be a struggle to give and receive feedback and to acknowledge weaknesses and mistakes. We can try to design cultures such as Radical Transparency and recruit people who can live with them, or try simpler approaches that reflect individual needs.

    One simple tool is a feedback profile, which can be found here.

    17 min
  • Seeing the future
    2023/10/09

    In this episode we discuss how we can make better forecasts with Professor Paul Goodwin.

    We tackle everything from the techniques available to the human problems of understanding, bias and incentives. See the chapter headings for full details.


    Notes
    Philip Tetlock's book is Superforecasting: The Art and Science of Prediction

    A paper that describes the key concepts can be found here.

    50 min
  • Regrette, je regrette beaucoup de choses
    2023/08/08

    Does fear of getting things wrong stop you from making decisions?

    If so, you’re not alone. Regret can have a profound impact on our choices.

    In this short episode, we unpack the challenges, discuss some solutions, and use thematic investment funds as a timely case in point. We tackle:

    The psychological nature of regret – why it can be useful, but also harmful.

    How regret can impact the best and brightest of us – smarts and market knowledge are no defence.

    How it can hurt our choices, from FOMO to delayed decisions – we discuss the case of thematic funds.

    Some simple tweaks that can reduce the fear of getting things wrong – the value of processes, monitoring choices and justifiability.

    How we can help other people help us – self-knowledge, and the challenges Joe and I face in this area, including Joe’s parents’ day race travails…



    23 min