Avatars are a special case, as their movement does not depend on the Model-to-View "translateTo" message: for avatars, the view moves first and then the model gets updated. (For other objects, it is the other way around.) So the avatar pawn ignores the translateTo message and friends sent from the actor. – Yoshiki Ohshima post ⇒ Avatar Data Experiment
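
As a rough illustration of that asymmetry, here is a minimal TypeScript sketch. The class and message names (Actor, Pawn, AvatarPawn, translateTo, setTranslationFromView) are hypothetical and are not the actual Croquet/Worldcore API: an ordinary pawn applies the actor's translateTo, while the avatar pawn ignores it, moves its view from local input first, and only then writes the new translation back into its actor.

```typescript
// Hypothetical sketch of the avatar asymmetry described above.
// Names are illustrative only, not the real Croquet/Worldcore API.

type Vec3 = [number, number, number];

// The replicated model side of an object.
class Actor {
  translation: Vec3 = [0, 0, 0];
  private listeners: Array<(t: Vec3) => void> = [];

  onTranslateTo(handler: (t: Vec3) => void): void {
    this.listeners.push(handler);
  }

  // Model-first path: the model moves, then tells its view.
  translateTo(t: Vec3): void {
    this.translation = t;
    this.listeners.forEach((h) => h(t));
  }

  // View-first path: an avatar view reports where it has already moved.
  setTranslationFromView(t: Vec3): void {
    this.translation = t; // update the model without echoing back to the view
  }
}

// An ordinary pawn follows its actor: model first, view second.
class Pawn {
  protected viewTranslation: Vec3 = [0, 0, 0];

  constructor(protected actor: Actor) {
    actor.onTranslateTo((t) => this.handleTranslateTo(t));
  }

  protected handleTranslateTo(t: Vec3): void {
    this.viewTranslation = t; // render position is driven by the model
  }
}

// The avatar pawn ignores translateTo; it moves the view first,
// then pushes the result back into the model.
class AvatarPawn extends Pawn {
  protected handleTranslateTo(_t: Vec3): void {
    // Intentionally ignored: local input has already moved the view.
  }

  moveFromLocalInput(t: Vec3): void {
    this.viewTranslation = t;             // 1. view moves immediately
    this.actor.setTranslationFromView(t); // 2. model catches up afterwards
  }
}

// Usage: an ordinary pawn lags its actor; the avatar pawn leads it.
const crateActor = new Actor();
const cratePawn = new Pawn(crateActor);
crateActor.translateTo([1, 0, 0]);        // model -> view

const avatarActor = new Actor();
const avatarPawn = new AvatarPawn(avatarActor);
avatarPawn.moveFromLocalInput([0, 0, 5]); // view -> model
avatarActor.translateTo([9, 9, 9]);       // ignored by the avatar pawn
```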
~
AREND, Matthias G. and MÜSSELER, Jochen, 2021. Object affordances from the perspective of an avatar. Consciousness and Cognition. 1 July 2021. Vol. 92, p. 103133. DOI 10.1016/j.concog.2021.103133. Humans often interact with avatars in video gaming, workplace, or health applications, for instance. The present research studied object affordances from an avatar’s perspective. In two experiments, participants responded to objects with a left/right keypress, indicating whether the objects were upright or inverted. Task-irrelevant objects’ handles were aligned with either the left or right hand of the actor and/or avatar. We hypothesized that actors respond faster when the handles are aligned, as compared to non-aligned, with the respective avatar hand (spatial alignment effect or object-based Simon effect). In Experiment 1, the spatial alignment effect was increased through the presentation of avatar hands as compared to when no hands were presented. In Experiment 2, the avatar perspective was rotated by 90° to the right and left of the actor’s view. Here, the spatial alignment effect was guided by the avatar, suggesting that the actors took its perspective when perceiving objects’ affordances.