Let’s Make a Planetarium Show:  Part 2 – The Voice


You’ve got your script in hand.  It’s been run through the Educators and the Editor.  It’s all on the up-and-up.  Now you just need to get it recorded by a voice-over artist.

Before I became the Planetarium Producer, I was the Master Projectionist, Assistant Technician, and the audio specialist for all our content.  The Planetarium Producers we’ve had in the past would mainly work on the visual aspects of the show: the animations, the Digital Sky stuff, the programming, etc.  I was responsible for the audio recordings, the music, and the sound effects.  I came into the job with an audio engineering background, having released a couple of albums with some bands.  I also had a background in writing, as it was part of my college degree, and I had published a few things in the past.  So, when the job of Producer became available, it was natural for me to slip into the role.


I used to record all the audio for our shows here at the Museum.  We built a little studio space by my office desk and ran some wires up into a makeshift voice-over isolation booth located behind the dome.  It was quite difficult doing a recording session when I was one floor below the person doing the voice-over acting.  Not only that, but the microphone I used was so sensitive that any truck passing along the River Road running alongside us would be picked up.

When I became the Producer, I moved into a new office and my go-to voice-over artist moved on to other things.  This meant I had to find another narrator, preferably one who had their own studio for quieter recordings.


I found Stephanie Murphy through a recommendation by a friend.  She can also be found on Voices.com, which is the same website I used to find George W Robinson who ended up doing the narration for my “The Worlds Within Stars Wars” show.

Dealing with a voice-over artist in a different location is much easier than it sounds.

The first thing I do is put a pronunciation guide in my script.  These are little footnotes along the way that phonetically spell out how each weird word is pronounced and where the stressed syllable lies.  For example, the star name Deneb is pronounced “din-EBB.”

Of course, with many astronomical names, there are a variety of ways to “officially” pronounce them.  But, over the years, we’ve stuck to a certain way of saying certain words.  The star name Vega can be pronounced either “VAY-guh” or “VEE-guh,” but we’ve gone with “VAY-guh.”

I then have to tell the voice-over artist what settings my projects use in my Digital Audio Workstation (DAW).

When crafting a planetarium show’s audio, you have to mix in not only the voice but all the music and sound effects as well.  For that, I mainly use Cubase.  Since my audio is going onto a certain system, I always have my project settings at 24-bit and 44.1 kHz.


When the voice-over artist records their performance on their end, they need to set their project up the same way I have mine.  That way, when I pull in their audio, it’ll work perfectly with my current settings.
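
If you ever want to double-check that a delivered file actually matches those settings, the WAV header will tell you.  Here’s a quick sketch of the idea in Python using only the standard library; it’s just an illustration, not part of my actual workflow, and the filename is made up.

```python
# A quick sketch (not part of the workflow above) of checking a delivered WAV
# against the project settings mentioned here: 24-bit, 44.1 kHz.
import wave

EXPECTED_RATE = 44100    # 44.1 kHz
EXPECTED_WIDTH = 3       # bytes per sample; 3 bytes = 24-bit

def check_delivery(path: str) -> bool:
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        width = wav.getsampwidth()
        channels = wav.getnchannels()
    print(f"{path}: {rate} Hz, {width * 8}-bit, {channels} channel(s)")
    return rate == EXPECTED_RATE and width == EXPECTED_WIDTH

# Example call with a made-up filename:
# check_delivery("narration_take1.wav")
```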

So, I send off the script with the pronunciation guide, notes on the kind of performance I’m looking for, and a project-settings guide.

While waiting for the voice-over recording to come back, I have a little bit of time to begin work in my Digital Sky program and with the storyboard I created for each scene.

Digital Sky is my main tool for crafting the majority of the show.

There’s a huge learning curve to getting into the software.  It’s not something you can watch YouTube video tutorials on; it’s not something you can go to school for.  You’ve got to read the manual they provide, do the tutorials they provide, take the Digital Sky Academy courses they provide, and just work at it, work at it, work at it.

Creating a flight path from Earth to Mars is one thing, but you have to keep in mind that in order for the audience to see what you want them to see, you have to provide a camera.  So, you have to place a camera on Earth and have it pointed a certain way, lift the camera off Earth, point it at where Mars is located, fly to within a certain distance of Mars, slow down, approach, and orbit.

I mentioned before that everything in Digital Sky is mapped in 3D space.  This means that if I want to fly to Mars, I can’t travel at the speed an airplane flies, or even the speed a space shuttle would fly.  That would make the Sky Tonight show far too long and boring to watch as we waited to get there.  Instead, in the planetarium, we have the ability to travel in increments far exceeding the speed of light.  We can travel in parsecs, megaparsecs, Astronomical Units, and so on.  That makes getting to Mars, the Andromeda Galaxy, even the edge of the observable universe a lot faster, possible in a matter of seconds.
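
Just to put rough numbers on that (the distances below are rounded, and the ten-second flight time is only an example, not a setting from the show):

```python
# A back-of-the-envelope sketch of why planetarium "flights" have to move far
# faster than light. Distances are rough, rounded values; the 10-second flight
# time is just an example.
C = 2.998e8            # speed of light, m/s
AU = 1.496e11          # astronomical unit, m
LY = 9.461e15          # light-year, m

destinations = {
    "Mars (on the order of 0.5 AU away)": 0.5 * AU,
    "Andromeda Galaxy (~2.5 million ly)": 2.5e6 * LY,
    "Edge of the observable universe (~46 billion ly)": 4.6e10 * LY,
}

flight_seconds = 10.0
for name, distance_m in destinations.items():
    speed = distance_m / flight_seconds        # m/s needed to get there in 10 s
    print(f"{name}: about {speed / C:.2e} times the speed of light")
```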

But this all requires lines and lines of code.  Code which tells the computer to orbit the planet this many degrees, place the camera in this location of space, travel at this speed for this amount of time, stop at this location in space at a certain moment in Earth time, so on and so on.

It sounds tedious, but if you get good at it and configure a set of commands to illustrate what you want people to see, you can save these commands to create a button which will perform those lines of script over and over again.
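Conceptually, a saved flight is just an ordered list of commands that you can replay with one button press.  The sketch below is purely illustrative; the command names and numbers are invented for the example and are not Digital Sky syntax.

```python
# A purely illustrative sketch of the idea of a saved flight: a list of camera
# commands replayed as a single "button." The command names and values are
# invented for this example; they are not Digital Sky syntax.
from dataclasses import dataclass

@dataclass
class Command:
    action: str        # e.g. "point_at", "fly_to", "orbit"
    target: str        # object the command refers to
    value: float       # degrees, AU, or seconds depending on the action
    unit: str

mars_flight = [
    Command("place_camera", "Earth", 0.0, ""),     # start on Earth's surface
    Command("point_at", "Mars", 0.0, ""),          # aim the camera
    Command("fly_to", "Mars", 8.0, "seconds"),     # travel time for the leg
    Command("stop_at", "Mars", 0.01, "AU"),        # hold at this distance
    Command("orbit", "Mars", 180.0, "degrees"),    # half an orbit around it
]

def run(flight):
    """Replay a saved command list, like pressing the button over and over."""
    for cmd in flight:
        print(f"{cmd.action:>12}  {cmd.target:<6} {cmd.value} {cmd.unit}")

run(mars_flight)
```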

However, the length of those flights depends greatly on the timing of the voice-over artist’s read and how the Producer chooses to pace the show.  So, I start pre-programming flight paths while waiting for the voice-over to come in, but I have to wait for the edited version of the voice-over before I can finally start timing all the Digital Sky segments I’ve been working on.  The voice-over helps me fit the scene together: the audience should see us leave Earth and finally arrive at Mars as the voice-over artist mentions leaving Earth and finally arriving at Mars.  It wouldn’t make any sense to hear “…here we are arriving at Mars” while we’ve already been in orbit around Mars for 10 seconds.
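
For example, if the edited narration mentions leaving Earth at one timestamp and arriving at Mars at another, the flight has to fill exactly that gap.  The cue times below are made up, but the arithmetic is the whole trick:

```python
# A small sketch of timing a flight to the edited narration. The cue times are
# invented for the example; in practice they come from the edited voice-over.
cues = {
    "leaving Earth": 12.0,     # seconds into the scene's narration
    "arriving at Mars": 42.0,
}

flight_duration = cues["arriving at Mars"] - cues["leaving Earth"]
print(f"The Earth-to-Mars flight needs to run about {flight_duration:.1f} seconds")
```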

In a couple of days, I’ll get an email with a link to the voice-over audio Stephanie Murphy has recorded, usually stored somewhere like Dropbox as a .wav file (not a compressed .mp3), or attached directly to the email.

I instruct Stephanie not to “normalize” the audio after she records.  Normalizing is something done to a track whose volume level is a bit low: the whole thing gets ramped up to a normal level.  I like to do any volume adjustments on my own, as opposed to taking an audio track that has been normalized and trying to bring it back down to a certain level.  I like to keep a good voice-over reference track on hand to compare volume level, EQ, and plug-in adjustments against the unprocessed voice-over track I’m working with.
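
If “normalizing” is an unfamiliar term: peak normalization scales the entire take, room noise included, so that its loudest sample hits some target level.  Here’s a quick numpy sketch of the operation; the target level is just an example, and this isn’t my actual tool chain.

```python
# A rough sketch of what peak normalization does to a track: scale the whole
# recording so its loudest sample hits a target level. The target value is
# just an example.
import numpy as np

def peak_normalize(samples: np.ndarray, target_peak: float = 0.891) -> np.ndarray:
    """Scale samples so the loudest one sits at target_peak (~ -1 dBFS)."""
    current_peak = np.max(np.abs(samples))
    if current_peak == 0:
        return samples                      # silence; nothing to scale
    return samples * (target_peak / current_peak)

# A quiet, made-up "recording": everything gets multiplied by the same factor,
# including the room noise between words.
quiet_take = np.array([0.02, -0.05, 0.10, -0.08, 0.01])
print(peak_normalize(quiet_take))
```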

<image – audio recording>

I also have Stephanie not edit her final track at all.  I like to edit it myself and retain the natural vocal flow and rhythm she records at.  I don’t like to cut and move things in tighter for a quicker read.  I also don’t like to use plug-ins to eliminate background noise.  I like to manually go in and eliminate all background noise myself.  I think using plug-ins to clean up audio affects the original audio in some way.  By doing it manually, I can retain the original audio’s integrity for any fine adjustments later.  It’s just my preference, but everyone likes to work differently.

Editing voice-over tracks is a tedious process.

I like to edit out all breaths and cut close to the transients.  For me, once you notice breathing in a voice-over performance, you can never un-notice it.  I just like hearing a cleaner performance without room noise and without breathing before every sentence.
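
If you wanted to speed up finding those spots, one option is to flag stretches of very low level (breaths, room tone) and then trim them by hand.  The sketch below is just one way to do that kind of flagging; the frame size and threshold are arbitrary example values, and it isn’t the process I actually use.

```python
# A rough sketch of flagging quiet stretches (breaths, room tone) in a mono
# track so they can be reviewed and trimmed by hand. Frame size and threshold
# are arbitrary example values, not settings from an actual session.
import numpy as np

def quiet_regions(samples: np.ndarray, rate: int,
                  frame_ms: float = 20.0, threshold: float = 0.01):
    """Yield (start_sec, end_sec) spans whose RMS level falls below threshold."""
    frame_len = int(rate * frame_ms / 1000)
    start = None
    for i in range(0, len(samples) - frame_len, frame_len):
        rms = np.sqrt(np.mean(samples[i:i + frame_len] ** 2))
        if rms < threshold and start is None:
            start = i
        elif rms >= threshold and start is not None:
            yield (start / rate, i / rate)
            start = None
    if start is not None:
        yield (start / rate, len(samples) / rate)

# Made-up example: one second of "audio" with a quiet gap in the middle.
rate = 44100
track = np.concatenate([np.random.uniform(-0.3, 0.3, rate // 2),
                        np.random.uniform(-0.005, 0.005, rate // 4),
                        np.random.uniform(-0.3, 0.3, rate // 4)])
for start, end in quiet_regions(track, rate):
    print(f"quiet from {start:.2f}s to {end:.2f}s")
```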

Once I have the voice-over edited to my liking, I’ll apply a little bit of EQ to the voice to bring up the bottom end and curb any top end stuff.  A good use of EQ can make a voice sparkle a bit.  I also add a plug-in called a “De-Esser” which calms down any excessive sibilant “S” noise.  That’s the hissing, sizzling sound you can get sometimes from words that start or end in the letter S or Z.  I also like to apply a small amount of reverb to the voice once the sunset in the show concludes and we’re out looking at the stars.  This is around the time that the voice comes in and music is present.  I think the small amount of reverb not only helps the voice settle into the mix with the music but also gives the impression that we’re out in space.
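
For a feel of what “a small amount of reverb” means at the mix level: the reverb gets blended in at a low wet percentage under the dry voice.  Here’s a toy sketch using a synthetic decaying-noise impulse response; the decay time and wet amount are made-up values, not the plug-ins or settings from the actual show.

```python
# A toy sketch of blending a small amount of reverb under a dry voice track:
# convolve the voice with a short decaying-noise impulse response and mix the
# result back in at a low "wet" level. Decay time and wet amount are made-up
# example values.
import numpy as np
from scipy.signal import fftconvolve

def add_small_reverb(dry: np.ndarray, rate: int,
                     decay_sec: float = 1.2, wet: float = 0.12) -> np.ndarray:
    t = np.arange(int(rate * decay_sec)) / rate
    impulse = np.random.randn(len(t)) * np.exp(-4.0 * t)    # decaying-noise "room"
    impulse /= np.max(np.abs(impulse))
    tail = fftconvolve(dry, impulse)[: len(dry)]
    tail /= max(np.max(np.abs(tail)), 1e-9)                  # keep the tail in range
    return (1.0 - wet) * dry + wet * tail                    # mostly dry, a little wet

# Made-up example signal standing in for a voice clip.
rate = 44100
voice = np.random.uniform(-0.5, 0.5, rate)                   # one second of "audio"
mixed = add_small_reverb(voice, rate)
```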

<image – colorized sections>

And when all that is done, I like to colorize the voice-over sections according to “scene” so I can render the audio in parts, one per scene.  This way, I can work on each scene by importing its audio into my “Render” PC, the PC I use to do all my Digital Sky and animation work.  These renders are effectively my guide tracks, helping me pace when certain things occur in animation or Digital Sky.
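
The slicing itself is simple once you have the scene boundaries.  Here’s a small sketch of the idea; the scene names, times, and filename are invented for the example.

```python
# A small sketch of the "render in parts" idea: slice one long voice-over WAV
# into per-scene guide tracks from a table of scene start/end times (seconds).
# The scene names, times, and filename are invented for this example.
import wave

def split_scenes(src_path: str, scenes) -> None:
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        for name, start, end in scenes:
            src.setpos(int(start * rate))
            frames = src.readframes(int((end - start) * rate))
            with wave.open(f"{name}.wav", "wb") as out:
                out.setparams(params)
                out.writeframes(frames)

example_scenes = [
    ("01_sunset",           0.0,  45.0),
    ("02_tonights_sky",    45.0, 120.0),
    ("03_flight_to_mars", 120.0, 210.0),
]
# Example call with a made-up source file:
# split_scenes("narration_full.wav", example_scenes)
```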

Well, now that our voice-over is edited, in place, colorized by section, and rendered in parts, we can really get in there and start working on animations.
