Hierarchical integration in action observation
This project investigates the sources of information that people make use of during social perception and how they are integrated. In other words, if action observation is predictive and under top-down control, where do these predictions come from? Our research so far suggests a hierarchical integration of two types of information.
One source of information is the knowledge we have about the other individual, ranging from knowledge of someone's short-term goals within a situation to the long-term behaviour patterns that describe the individuals we know (e.g. their generosity, musical preferences, or that they tend to pick their nose in public). We have found that both types of person knowledge - short-term goals and longer-term behaviour patterns - play a crucial role in social perception, directly affecting how these individuals' actions are perceived, judged and responded to.
As described under Predictive Social Perception, attributing short-term goals to others affects how their behaviour is perceived, inducing subtle visual illusions. For example, seeing someone state that they want an object changes subsequent perception of their action, such that their hand appears closer to this object than it actually is, compared to when the person said they did not want it (Hudson et al., 2016; 2016). In recent work (Hudson et al., 2017), we have established that these predictions track the knowledge we have about the person: we rely on them more if the person - in previous encounters - usually did what they said, compared to when they were more unreliable.
Others' longer-term behaviour patterns have similar effects, biasing action observation towards the actions these individuals typically perform. For example, even when participants cannot readily verbalise others' behaviour tendencies, they nevertheless identify others' actions more quickly if those actions are typical for that person in the given situation, compared to actions typically carried out by someone else (Schenke, Wyer & Bach, 2017). Similarly, simply seeing a face look repeatedly at one type of object (e.g. foods) but not at others (drinks) guides our own attention to these objects when this face is seen again, as if we already anticipate the action of this individual (Joyce et al., 2015). These person-based action predictions can even affect one's own behaviour: seeing people we associate with particular actions - such as famous tennis or football players - affects how we carry out these actions ourselves, suggesting a direct route from person model to action anticipation (Bach & Tipper, 2007; Tipper & Bach, 2011).
Person information cannot predict actions directly. Not all goals can be realised in all situations, and each situation differs in how – with what actions – it affords goal achievement. People can only still their hunger, for example, when food is available, and they can only be altruistic when there is the option of, perhaps, helping a homeless person or contributing to a charity. Person information therefore needs to interface with information about the action possibilities afforded by the given situation.
Situation models provide information about which objects are available to the other person in the given situation (e.g. a hammer), which goals can be achieved with them (hammering in a nail), and how these objects need to be manipulated to achieve this goal (a swinging motion). As soon as such a situation model is established, it can provide an interface between knowledge of the other person's goals and their observed actions (straight arrows in the figure). For example, if we know that someone wants to hang up a picture, we predict that they will reach for the object that helps them do that (a hammer), and will soon use it with a swinging motion, exerting force on the nail held between their fingers. In contrast, seeing a different behaviour can cause us to revise the prior goal attribution.
Several studies from our and others' labs show this top-down effect that goals have on action knowledge about objects. We have shown, for example, using both behavioural (Bach et al., 2005) and fMRI data (Bach et al., 2010), that seeing pairs of objects that imply action goals (e.g. screwdriver and screw; key and keyhole) evokes the motor actions that need to be executed, effectively predicting how these objects will be used together. Similar data come from automatic imitation experiments, in which participants only imitate observed lower-level action components (e.g. reach directions) if the seen action properties (e.g. a small hand grip) match the available goal object (a small object; Bach, Bayliss & Tipper, 2011).
Hudson, M., Nicholson, T., & Bach, P. (2017). You said you would! The predictability of others' behavior from their intentions determines predictive biases in action perception. Journal of Experimental Psychology: Human Perception and Performance. Publisher – PDF – Data
Hudson, M., Nicholson, T., Simpson, W. A., Ellis, R., & Bach, P. (2016). One step ahead: The perceived kinematics of others’ actions are biased toward expected goals. Journal of Experimental Psychology: General, 145(1), 1-7. Publisher – PDF – Data
Hudson, M., Nicholson, T., Ellis, R., & Bach, P. (2016). I see what you say: Prior knowledge of others' goals automatically biases the perception of their actions. Cognition, 146, 245-250. Publisher – PDF – Data
Tipper, S.P. & Bach, P. (2011). The face inhibition effect: Social contrast or motor competition? Journal of Cognitive Psychology, 23(1), 45-51. PDF
Tipper, S.P. & Bach, P. (2008). Your own actions influence how you perceive other persons: a misattribution of action appraisals. Journal of Experimental Social Psychology. 44, (4), 1082-1090. PDF
Bach, P. & Tipper, S.P. (2007). Implicit action encoding influences personal-trait judgments. Cognition, 102, 151-178. PDF
Bach, P. & Tipper, S.P. (2006). Bend it like Beckham: embodying the motor skills of famous athletes. Quarterly Journal of Experimental Psychology, 59(12), 2033-2039. PDF
-- featured in BPS research digest newsletter, CNET, Metro, etc.