Q&A with Chris Bell: Grammy-Nominated Recording & Mixing Engineer
Based in Texas, Chris Bell has been professionally recording, mixing, and producing music for nearly three decades. In recent years, he has become a passionate advocate for the Dolby Atmos immersive music format.
Chris Bell is a veteran producer and engineer based out of Austin, Texas. His long list of clients includes U2, Erykah Badu, Everclear, Destiny’s Child, Fuel, The Polyphonic Spree, Peter Gabriel, Earth, Wind & Fire, The Eagles, Don Henley, Brian Blade, Dave Matthews, and Jakob Dylan. He’s also President of the Recording Academy’s Texas Chapter and Co-Chair of the P&E Wing for the Memphis Chapter. Many of his current projects involve immersive audio, specifically the Dolby Atmos format.
We had a chance to chat with Chris in between mixing sessions about his early career, what kind of immersive projects he’s currently working on, and how he thinks recent strides in immersive audio will shape the music industry in the years to come.
Tell us about your early career. What made you want to pursue a career in audio production/engineering?
I basically wanted to be a guitarist, and that didn’t work out. I was recording my own stuff on a four-track machine, trying to figure out what I wanted to do for a living, and then a girl I was dating at the time said, “Well, why don’t you record people? You’re already good at it.” My reaction was, “That’s a job you can make money doing?” So for me, it was just a hobby at first.
I pretty much jumped into it from there: I got an internship and slowly worked my way up the ladder, getting food for people and stuff, until I started engineering.
I’ve heard similar stories from a lot of producers and engineers. You start off as an assistant delivering coffees, then they eventually give you a shot.
Yeah, basically what happened was the owner of the studio was going through a divorce. This was in the early ‘90s. I was working with a hair metal band and he was like, “Hey, can you start punching vocals?” I was nervous at first, but I pulled it off and the band liked working with me.
That’s usually what happens. You just hang out long enough until the engineer above you is sick or something and then you get thrown in there.
The studio you worked at was still using an analog tape machine?
Oh, yeah. We didn’t have computers in the control room. It was a console and two tape machines. Sometimes we used DAT instead of 2-inch 24-track.
The last project I did all-analog was a jazz record, Brian Blade’s Landmarks (2014), where we recorded on two-inch and mixed on half-inch. We never went into a computer. I was doing a lot of two-inch tape edits, and I felt pretty comfortable doing it just because you can always piece it back together.
You’ve worked with dozens of famous artists across a variety of genres. Is there a particular album or artist you’re most proud to have worked on or with?
I’ve enjoyed almost everything I’ve worked on, but The Eagles’ Long Road Out Of Eden (2007) was pretty cool, just because it was The Eagles.
How did you become aware of immersive audio?
I used to work at a studio in Dallas called Luminous Sound. We had put a J-series SSL desk in there, which had a 5.1 monitoring section.
I had previously done some sound design in 5.1 while I was working at another studio called Dallas Sound Lab. There was a company behind us called HD-Vision, who were pioneering high-definition television long before it caught on. We started off using a Dolby Pro-Logic encoder, then later we upgraded to the Dolby Digital (AC-3) encoder for 5.1, which was super-expensive at the time. I was mixing on a Tascam multitrack machine, printing the six discrete tracks individually. It’s so much easier to do 5.1 now.
While I was learning how to do surround for video and picture at Luminous, I started listening to a bunch of 5.1 music discs and decided I wanted to try that as well. For almost every stereo mix I did at that time, I would also tell the band I was gonna run them a 5.1 mix for free. It was good practice for me. They’d hear it and be like, “This is amazing.” Most of those mixes couldn’t be released at the time due to budgetary constraints, so the bands probably still have them. I told them to hold on to these because the format will probably end up taking off and they can release them later. Honestly, you could create them pretty quickly, because you have so much more space to work with instead of cramming all the audio into two speakers. If something doesn’t fit, you can just reposition it instead of applying more EQ.
How does mixing in Dolby Atmos differ from mixing in 5.1?
When you have the height element, that makes all the difference in the world actually. It’s object-based, so you can position elements anywhere in a three-dimensional space.
Tell us about remixing Mr. Big’s Lean Into It (1991) in 5.1 & Dolby Atmos. How did you become involved in the project? Was it difficult to replicate the reverbs, balances, and other effects used in the original stereo mix?
I only mixed it in Atmos. The 5.1 on the SACD is actually a fold-down. That being said, Michael Romanowski did a great job mastering it. I listened back to what he did and it sounds great. The full Atmos mix is streaming on Apple Music, though I’m hoping they release it on Blu-Ray also.
The label, Evo88, contacted me because I’m listed on the Dolby site. I was a good fit since I worked with a lot of hair metal bands back in the ‘90s. I was a huge fan of Paul Gilbert and knew Mr. Big’s music well. I had actually sent them a picture of one of my Racer X albums and they were like, “Okay, the job is yours.”
We put so much work into those mixes. They were syncing multiple two-inch machines back then, so I had a ton of separate elements. There were probably 40 tracks per song. They’d have a lot of harmonies and a lot of overdubbed guitar stuff. Kevin Elson, the producer, was really good about tracking room ambience on separate channels as well, which lent itself perfectly to Atmos. We’d take the drum room tracks and put them in the overhead speakers, so it sounds like you’re right there in the room with the drum kit.
There are so many different immersive mixing styles used by engineers. Some are more conservative and use the extra speakers primarily for back-of-the-hall ambience, while others are more experimental and place isolated elements behind or above the listener. Can you take us through your decision-making process when using the extra channels?
Well, I think my approach would be different depending on what type of music it is.
In this case, I was trying to stay true to Kevin Elson’s original production. I referenced the original stereo mixes a lot. My goal was to take what he already did and just spread it out. I actually spoke to Kevin on the phone, and fortunately he remembered exactly what type of gear he was using for reverb and compression 30 years ago.
If there’s a shaker or some extra percussion, those would go to the back. If Paul Gilbert has an acoustic that’s subtly placed in there, I’d pull that out of the mix and put it in the back speakers. I went a bit further with some of Gilbert’s solos spinning around your head. I wanted to show off the format, but not to the point where you’re making people nauseous.
What are your thoughts on Apple and Tidal’s Atmos streaming platforms? Do you see this as ultimately becoming a mainstream success, unlike past attempts to market surround music?
It’s great. I was really excited when Apple jumped on board, because, if you think about it, they’re the ones who actually killed 5.1 discs with the iPod. I am a little concerned about the binaural version of Atmos they’re promoting, since I don’t think the engineers who mixed on physical speakers ever planned on their work being heard that way. Some mixes sound great in spatial audio on headphones, and others don’t translate as well.
Any current and/or future projects involving immersive audio you can tell us about?
You know, I’m really pushing the format hoping that everyone I work with gets on board. The problem right now is that a lot of the artists are so strapped for money, it’s hard for them to justify adding more production costs. I’m trying to find ways to make it more affordable for them to do so. It’s gonna take a minute. We’re going through growing pains right now.
I am just really excited that we’re not going to have to brickwall everything now. There are actually going to be dynamics back in music again. The Atmos thing kind of fixes the ‘loudness wars’ issue that we’ve been dealing with for the last 10-15 years. If you flip back and forth on Apple Music, you’ll notice the Atmos mixes are usually significantly quieter than their stereo counterparts. We’re also gonna be listening to higher-resolution music on streaming now.
A lot of the Atmos material on Apple Music tends to come in the form of singles rather than full albums. Do you think the issue is purely cost-related or are there difficulties locating and compiling the source?
For some records that I worked on back in the day, it would be almost impossible to pull all the multitracks together and have them remixed in Atmos. The tapes would get sent to different studios to be worked on by different engineers, and some portions may have been mixed digitally in Pro Tools. It was right around the time when the industry was transitioning from tape to digital.
The fun part is, when we do new records now, we can plan ahead for the Atmos and experiment with different ways of miking. I think that in the future, we’re gonna see some really incredible-sounding records in the Atmos or spatial format that are just gonna blow people’s minds. I think it’s gonna be great.