When Do Natural Language Metaphors Influence Reasoning? A Follow-Up Study to Thibodeau and Boroditsky (2013) by Gerard J. Steen, W. Gudrun Reijnierse, and Christian Burgers.
In this article, we offer a critical view of Thibodeau and Boroditsky, who report an effect of metaphorical framing on readers’ preference for political measures after exposure to a short text on the increase of crime in a fictitious town: when crime was metaphorically presented as a beast, readers became more enforcement-oriented than when crime was metaphorically framed as a virus. We argue that the design of their study leaves room for alternative explanations. We report four experiments comprising a follow-up study, remedying several shortcomings in the original design while collecting more encompassing sets of data. Our experiments include three additions to the original studies: (1) a non-metaphorical control condition, contrasted with the two metaphorical framing conditions used by Thibodeau and Boroditsky; (2) text versions that omit the other, potentially supporting, metaphors of the original stimulus texts; and (3) a pre-exposure measure of political preference (Experiments 1–2). We do not find a metaphorical framing effect but instead show that another process is at play across the board, one that presumably has to do with simple exposure to textual information: reading about crime increases people’s preference for enforcement irrespective of the metaphorical frame or the metaphorical support for the frame. These findings suggest the existence of boundary conditions under which metaphors can have differential effects on reasoning. Thus, our four experiments provide converging evidence raising questions about when metaphors do and do not influence reasoning.
The influence of metaphors on reasoning raises an interesting question for those attempting to duplicate the human brain in silicon: Can a previously recorded metaphor influence the outcome of AI reasoning?
Or can hearing the same information multiple times from different sources influence an AI’s assessment of that information’s validity? (In a non-AI context, that question is relevant to the discussion of the Michael Brown grand jury.)
On its own merits, this is a very good read, recommended to anyone who enjoys language issues.