This website provides an overview of a mass-participant experiment conducted to examine how consequential sounds produced by robots impact human-perception of robots, and thus influence human-robot interactions and robot adoption in shared spaces. The detailed results of this research appear in the following two publications:
Robots make machine sounds known as 'consequential sounds' as they move and operate. How do these sounds affect people's perception of robots and their desire to colocate with robots?
For this experiment, 182 participants experienced videos of 5 different robots, and answered questionnaires on their perception of the robots.
Half the participants experienced the robot consequential sounds, whilst the other half formed the control group and experienced the same videos played completely silently.
Questionnaires used for this research can be found here.
Diversity of cohort characteristics for the 182 participants can be seen in the below figures.
Age-group of participants
Participant genders
Frequency that participants experience robots
For further details on the experiment, see the above papers.
Consequential sounds are the unintentional noises that a machine produces itself as it moves and operates, i.e. 'sounds that are generated by the operating of the product itself'. In terms of robots, consequential sounds are the audible 'noises' produced by the actuators of the robot as it performs its normal operation; the robot cannot function without making these sounds.
Participants were exposed to 5 videos of robots which contained robot consequential sounds (CS condition) or were completely silent (control NS condition). A playlist of these 10 trial stimulus videos is available on YouTube.
Sound features (as heard in the videos) are visualised in the below spectrogram, which shows the frequency spectra of the sound stimuli used in the experiment. Each line represents a different robot/trial. The first 5 seconds display the environmental sound baseline, followed by a black strip as a no-sound separator. The remaining 19-20 seconds are the sounds heard by participants (robot + environment sounds) for each trial. Sound intensities between rows have been scaled to allow comparison between robots.
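A frequency-vs-time view like the one described above can be produced with standard tools. The sketch below is illustrative only: it uses a synthetic signal (5 s of ambience followed by 20 s of a tone mixed with ambience) as a stand-in for the real stimulus audio, and the sample rate is an assumption, not a detail from the experiment.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for one trial's audio: 5 s ambient-only baseline,
# then 20 s of "robot" sound mixed with ambience (real stimuli differ).
fs = 16000  # assumed sample rate
rng = np.random.default_rng(0)
ambient = 0.05 * rng.standard_normal(5 * fs)
t = np.arange(20 * fs) / fs
robot = 0.5 * np.sin(2 * np.pi * 440 * t) + 0.05 * rng.standard_normal(20 * fs)
audio = np.concatenate([ambient, robot])

# Frequency bins, time bins, and power for a spectrogram row
f, times, Sxx = spectrogram(audio, fs=fs, nperseg=1024)
Sxx_db = 10 * np.log10(Sxx + 1e-12)  # log scale, as typically plotted
# (e.g. matplotlib's pcolormesh(times, f, Sxx_db) would render the figure)
```

Scaling `Sxx_db` per row, as the caption describes, allows intensity comparison across robots with different absolute loudness.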
Robots make compulsory machine sounds, known as 'consequential sounds', as they move and operate. As robots become more prevalent in workplaces, homes and public spaces, understanding how sounds produced by robots affect human-perception of these robots is becoming increasingly important to creating positive human-robot interactions (HRI).
This paper presents the results from 182 participants (858 trials) investigating how human-perception of robots is changed by consequential sounds. In a between-participants study, participants in the sound condition were shown 5 videos of different robots and asked their opinions on the robots and the sounds they made. This was compared to participants in the control condition who viewed silent videos.
Consequential sounds correlated with significantly more negative perceptions of robots, including increased negative 'associated affects', feeling more distracted, and being less willing to colocate in a shared environment with robots.
This paper tests the following two hypotheses:
Hypothesis 1 (H1): Consequential sounds lead to more negative human perceptions of the robot such as: negative feelings (e.g. being uncomfortable), distraction, less willingness to colocate with robots, and/or disliking the robot.
Hypothesis 2 (H2): Some robots are perceived more negatively than other robots due to their consequential sounds.
Eleven Likert questions ranging from negative (1) to positive (7) perception were grouped into 4 thematic scales representing critical aspects of perception towards robots. The following scales were used for the analysis:
Least-squares linear regression was used to analyse the effect of the participant condition (sound versus no sound) and robot predictors on the 4 human-perception of robots scales. The interaction between the two predictors was initially included, with results showing that almost all interaction effects were non-significant p ∈ [.194, .934], except for two borderline significant effects for 'Associated Affect' (Quadrotor with sound) (β = -0.413, t = -1.98, p = .048) and 'Distracted' (Pepper with sound) (β = 0.447, t = 1.87, p = .061). As the interaction effects were not significant, they were removed from the regression model, following best practice to improve model precision.
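A regression of this general shape (categorical condition and robot predictors, interaction fitted then dropped) can be sketched as below. This is not the authors' analysis code: the data are simulated, the robot list mixes names from the paper (Pepper, Quadrotor) with placeholders, and the effect sizes are invented purely to make the example run.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial-level data standing in for the real responses.
rng = np.random.default_rng(1)
robots = ["Pepper", "Quadrotor", "RobotA", "RobotB", "RobotC"]  # last three are placeholders
n = 400
df = pd.DataFrame({
    "condition": rng.choice(["CS", "NS"], size=n),   # sound vs no-sound
    "robot": rng.choice(robots, size=n),
})
# A 1-7 perception scale with an invented small negative sound effect
df["associated_affect"] = (
    4.5 - 0.4 * (df["condition"] == "CS") + rng.normal(0, 1, n)
).clip(1, 7)

# Full model with the condition x robot interaction...
full = smf.ols("associated_affect ~ C(condition) * C(robot)", data=df).fit()
# ...then the reduced main-effects model after dropping the interaction
reduced = smf.ols("associated_affect ~ C(condition) + C(robot)", data=df).fit()
print(reduced.summary())
```

Inspecting `full.pvalues` for the interaction terms before refitting the reduced model mirrors the check described above.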
Please see the paper for full details on the regression analysis methodology.
A summary table of regression results appears below. Detailed regression results can be downloaded here.
Hypothesis 1 is supported.
Consequential sounds were found to significantly affect how people perceive the robots, leading to people feeling more distracted, experiencing stronger negative affects, and reducing desire to colocate with robots.
However, the presence of sound showed no significant effect on how much the robots were liked.
These significant negative human-perceptions of robots are likely to have an unwanted effect on robot adoption in shared spaces.
There was insufficient evidence to support Hypothesis 2, which suggested that some robots would have a more extreme effect on human-perception due to their consequential sounds. The interaction effects between robot and condition showed only two borderline significant effects: 'associated affect' (Quadrotor with sound) and 'distracted' (Pepper with sound). Further research is required to understand the effects of consequential sounds at an individual-robot level.
The below plots show the data distributions across robots and 'Consequential Sound' versus 'No Sound' (control) conditions for all 4 scales (a) 'Associated Affect' induced by the robot, (b) 'Distracted' by the robot, (c) 'Colocate' with the robot, and (d) 'Like' the robot. All scales are from (1) = negative perception to (7) = positive perception. Means (coloured dots) and 95% confidence intervals (vertical lines) are shown between condition pairs. Overlaid data-points show the scale values for each trial. As these were calculated from multiple questions, some points have Likert scores at 1/4 or 1/3 between integer values when scales included 4 or 3 questions respectively.
Data distributions between sound-condition across the 4 human-perception scales
Data distributions between sound-condition for each of the 5 robots across the 4 human-perception scales
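The fractional data points mentioned in the caption follow directly from how scale scores are formed: each scale is the mean of its 3 or 4 Likert items, so values land on third- or quarter-steps between integers. A minimal sketch of that arithmetic (the item values are made up for illustration):

```python
def scale_score(item_responses):
    """Average 1-7 Likert item responses into one scale value."""
    return sum(item_responses) / len(item_responses)

# A 4-item scale moves in steps of 1/4 ...
four_item = scale_score([3, 4, 4, 4])    # 3.75
# ... and a 3-item scale in steps of 1/3
three_item = scale_score([5, 5, 6])      # 5.333...
```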
The below boxen plots provide alternate visualisations to supplement the above figures and the main figures in the paper. Tails of distributions are shown as progressively smaller boxes. Dots represent outliers ranging from a single point (light dot) to small single-digit concentrations of outliers (darker dots).
Distribution by Sound-condition
Distribution by Robot plus Sound-condition
@article{Allen2025RAL,
author={Allen, Aimee and Drummond, Tom and Kulić, Dana},
journal={IEEE Robotics and Automation Letters},
title={Robots Have Been Seen and Not Heard: Effects of Consequential Sounds on Human-Perception of Robots},
year={2025},
volume={10},
number={4},
pages={3980-3987},
doi={10.1109/LRA.2025.3546097}
}
Positive human-perception of robots is critical to achieving sustained use of robots in shared environments. One key factor affecting human-perception of robots is their sound, especially the consequential sounds which robots (as machines) must produce as they operate.
This paper explores qualitative responses from 182 participants to gain insight into human-perception of robot consequential sounds. Participants viewed videos of different robots performing their typical movements, and responded to an online survey regarding their perceptions of robots and the sounds they produce. Topic analysis was used to identify common properties of robot consequential sounds that participants expressed liking, disliking, wanting or wanting to avoid being produced by robots.
Alongside expected reports of disliking high pitched and loud sounds, many participants preferred informative and audible sounds (over no sound) to provide predictability of purpose and trajectory of the robot. Rhythmic sounds were preferred over acute or continuous sounds, and many participants wanted more natural sounds (such as wind or cat purrs) in-place of machine-like noise.
The results presented in this paper support future research on improving the consequential sounds produced by robots, by highlighting sound features that cause negative perceptions and providing insights into sound-profile changes that improve human-perception of robots, thus enhancing human-robot interaction.
Robots must make consequential sounds; however, the exact sounds produced can potentially be modified, e.g. by dampening individual components, noise cancelling, or sound augmentation. Psychoacoustics research shows that humans do not notice every sound around them, i.e. adding some new sounds can hide or enhance others.
Understanding what sound properties are liked (wanted) versus disliked (should be avoided) is necessary to effectively implement any of these solutions.
Research Question 1 (RQ1): What properties of robot consequential sounds affect human-perceptions of robots, either negatively or positively?
Research Question 2 (RQ2): How would people prefer robots to sound instead?
Exploratory analysis of the qualitative data was conducted using the NVivo QDA software.
Category analysis (topic modelling) methods were used to code the data on 3 dimensions - question priming, response valence and sound property topic code. Details on each dimension appear below.
Question Priming (2 codes):
Sound Primed versus General Questions (not specifically asking about sound)
Valence (4 codes):
Using the above 3 dimensions, 35% of the data was manually coded, before autocoding the rest of the data, and finally checking samples for accuracy. Each coded reference was counted, forming a quantitative representation of participant views expressed in their qualitative comments.
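The counting step described above, turning coded qualitative references into quantitative tallies, can be sketched with a simple counter. The triples below are invented examples, not actual participant data; the code names are shorthand stand-ins for the coding dimensions.

```python
from collections import Counter

# Hypothetical coded references: (priming, valence, sound_property),
# mirroring the 3 coding dimensions described above.
coded_refs = [
    ("sound_primed", "dislike", "high_pitched"),
    ("sound_primed", "dislike", "loud"),
    ("general", "like", "rhythmic"),
    ("sound_primed", "want", "natural"),
    ("sound_primed", "dislike", "high_pitched"),
]

# Full-triple counts feed condition-split figures; marginal counts per
# sound property give the overall valence tallies.
counts = Counter(coded_refs)
by_property = Counter(prop for _, _, prop in coded_refs)
print(by_property.most_common())
```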
Research findings provide the following insights for robot designers and researchers into promising robot consequential-sound profile changes for improvement of human-perception of robots and HRI:
The below two sets of plots help to visualise the above summary as well as many other insights.
The following four plots separate responses by question priming (sound primed versus general) and whether participants were responding to existing robot consequential sounds (which sound properties are liked versus disliked) or providing preferences for how they would prefer robots to sound (properties to 'avoid' and what people 'want instead'). Preferences for how robots should sound are additionally separated by sound (CS) versus no-sound (NS) participant condition.
The following plots separate the same data presented in the paper into a separate graph for each of the 16 sound property codes, thus facilitating easy comparison of participant valences towards each sound property. These additional visualisations are supplementary to the main figures from the paper, and are presented to assist people when designing modifications for sounds produced by robots.
Acute refers to sounds which are sudden
Constant refers to sounds which stay the same over a period of time
Rhythmic sounds have a noticeable pattern
Gradually increasing volume before the main sound, then decreasing volume afterwards
Sounds are audible (not completely silent)
Natural or Animal sounds include: wind, water, cat, dog etc
Informational sounds contain details on the state of the robot such as position or task
*Not consequential sounds, but were specifically mentioned unprompted
*Not consequential sounds, but were specifically mentioned unprompted
Example subjective qualities include: meditative, friendly, creepy, distracting (see paper for details)
@inproceedings{Allen2025HRI,
author = {Allen, Aimee and Drummond, Tom and Kuli\'{c}, Dana},
title = {Sound Judgment: Properties of Consequential Sounds Affecting Human-Perception of Robots},
year = {2025},
publisher = {IEEE Press},
booktitle = {Proceedings of the 2025 ACM/IEEE International Conference on Human-Robot Interaction},
pages = {94-102},
numpages = {9},
location = {Melbourne, Australia},
series = {HRI '25}
}