Even now, the robots of the movies don’t exist (although we wake up checking the news every day to see if the breakfast droid has been invented yet). So when the idea of the humanoid robot was taking shape in past decades, people had to work with what they had. Real human voice recordings were piped through strange effects to make them sound less human, and often more metallic. It’s interesting to hear the answer to the question “What does a computer sound like?” in early representations of robot voices from the silver screen.

So how did early sound engineers make voices sound less human?

Pitch is one of the first factors played with in the history of robot voices. Using a harmonizer, originally invented for musical applications, you could get a 'doubled' voice, thickening the audio so it sounds overlapped and phasey. A robotic voice doesn't need to sound exactly like a human's, so a very low or high pitch is acceptable, giving us the sense of a mechanical being rather than one of flesh and bone.
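As a rough sketch of the doubling idea (not the actual harmonizer circuitry), you can detune a copy of a recording with a naive resampling pitch shift and layer it under the dry signal. This example assumes numpy is available; the function names and the synthetic sine "voice" are stand-ins for illustration only.

```python
import numpy as np

def pitch_shift_naive(signal, semitones):
    """Shift pitch by resampling; this also changes duration,
    which is acceptable for a rough doubling demo."""
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(signal) - 1, ratio)
    return np.interp(idx, np.arange(len(signal)), signal)

def double_voice(signal, semitones=-0.5):
    """Mix the dry signal with a slightly detuned copy to get
    the thick, phasey 'doubled' robot-voice sound."""
    shifted = pitch_shift_naive(signal, semitones)
    n = min(len(signal), len(shifted))
    return 0.5 * signal[:n] + 0.5 * shifted[:n]

# Stand-in for a voice recording: one second of a 220 Hz tone.
sr = 8000
voice = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
doubled = double_voice(voice, semitones=-0.5)
```

A real harmonizer keeps duration constant while shifting pitch (via time-domain or phase-vocoder tricks); the naive version above trades that away for simplicity.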

Distortion is another effect closely linked to the robot. Early ideas of artificial beings would have mirrored the loudspeakers used in consumer products like radios and televisions. And to show that a sound is coming from an electronic circuit, what better way than to make it a circuit that's malfunctioning and distorting?
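The simplest form of that "malfunctioning circuit" sound is hard clipping: any sample louder than a threshold is flattened, adding harsh harmonics. A minimal sketch, again assuming numpy and using a sine tone as a stand-in for a voice:

```python
import numpy as np

def hard_clip(signal, threshold=0.3):
    """Flatten any sample beyond +/- threshold, then rescale to
    full range -- mimicking an overdriven output stage."""
    return np.clip(signal, -threshold, threshold) / threshold

sr = 8000
voice = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in for speech
robotic = hard_clip(voice, threshold=0.3)
```

Lower thresholds clip more of the waveform and sound harsher; a gentler alternative is soft clipping (e.g. `np.tanh`), which rounds the peaks off instead of slicing them.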

Phaser effects process a sound by sending it through a chain of all-pass filters that slightly shift the phase of different parts of the audio signal; mixing the result back with the original creates sweeping notches in the spectrum. A phaser is a great way to keep a voice recording clear and intelligible while still retaining a synthetic quality.

Let’s dive into how you can create a robot voice changer free in Audacity.