Mert Özcan is a Los Angeles-based recording and mix engineer and co-founder of The Record House, a collective of sonic artists known for crafting musical and audiovisual experiences with artists, directors, and brands. He’s also among the first to explore Dolby Atmos music creation, having begun working in the format in 2020.
After getting his degree in Music Production and Engineering from Berklee College of Music, Mert went on to work at some of the most prestigious studios around the world such as Abbey Road, British Grove, Capitol and Interscope Studios. After running the House of Rock studios with Beto Vargas, the two started The Record House.
We had the chance to chat with Mert about his career path, experience with immersive mixing, and where he thinks the industry is headed.
Tell us a bit about your early career in audio. How did you first get into engineering and producing music?
I went to the Berklee College of Music to study music production and engineering. That was 2011. I then graduated in the summer of 2013 and moved out to Los Angeles.
Prior to that, I didn’t have much of a technical background. I would record simple demos with a synth on my laptop growing up, but I really learned everything about production, recording, and engineering at school. I had an internship at Blue Microphones in Los Angeles while I was at Berklee, which opened a lot of doors for me. So I stayed in LA and worked at Interscope for a while, then started my own music production company called The Record House.
The idea of The Record House was to form a creative collective. I already knew some composers from Berklee that were doing great things, so they would handle the scoring while myself and others could offer additional services like sound design and ADR for films.
I've been doing that for the past ten years now, producing artists, doing music for commercials, and also music and post-production for film.
Fast-forward to early 2020, just before the pandemic, and you guys are one of the first studios in the country to have a certified Dolby Atmos mixing room. How did that come about?
Yeah, exactly. I went to Mix Magazine's Sound For Film Conference at Sony, and there was a representative from Netflix talking about how they wanted all shows to be mixed in Atmos from that point forward. That’s kind of when the lightbulb went off and I thought “oh, we should probably invest in this now.”
We built the studio to be a mid-size post-production room to do home entertainment TV mixes, so it’s not really a theatrical space. Construction was finished maybe three days before the first COVID lockdown. Once that happened, there obviously wasn’t any post-production work to be had as everything was shut down. Everyone wanted remote work at that point.
The silver lining was that Dolby’s certification process was complete, so we were one of just 15 studios that were cleared to mix Atmos for home entertainment at the time. They were just starting to push Dolby Atmos music with Universal and other labels, so I was in a unique position to accept that kind of work.
I also reached out on my own to indie labels and managers that I knew with offers to mix in the new format. Most of them were unfamiliar with Atmos and didn’t know how it was gonna play going forward.
Once Apple launched Spatial Audio in June 2021, all those people came back to me asking for Atmos mixes. So it was a journey from the start of the pandemic until that point, but I’ve done close to 500 tracks in Atmos over the past two-and-a-half years.
Wow, that many! Do you know if all of them were released? I was only aware of St. Vincent’s Daddy’s Home (2021) and Rhye’s Home (2021).
I think so. I don’t have the whole catalog in front of me, but there should be close to 30 albums that I mixed in Atmos up on Apple Music. They’re not all listed on the website though, just to keep things a bit more concise.
It is definitely frustrating that Apple Music doesn’t display immersive mixer credits, though Tidal sometimes does.
Yeah. On Tidal, you should be able to see my name on the St. Vincent album and the others I did for Loma Vista. Most of the Atmos mixes I did for Atlantic and Warner don’t seem to have that metadata though. I’ve talked to some A&R people about this and I can’t imagine it’s high up on their list of priorities, but at least they’re aware.
Even looking from the outside in, the immersive rollout from the major labels seems like a massive operation. I remember reading a press release from Universal way back in 2019 announcing that they’d already stockpiled 200 or more songs in Dolby Atmos. It seemed hard to believe at the time, as Blu-Ray disc was the only release option then, but when immersive streaming launched with Tidal in 2020 – and then Apple the following year – it all made sense.
Yeah, exactly. Some early Atmos mixes were issued on Blu-Ray, but now it’s all about streaming. When I first started doing these mixes, Apple Music hadn’t launched spatial audio yet. There was a question of “how do we get this out there to people?” When Rhye’s Home came out in January 2021, we uploaded the binaural mix to YouTube and that’s what was used for the album launch listening party.
It was difficult in the beginning, which is why more labels were reluctant to spend money on this, but everything changed once Apple got on board.
How did you send Atmos mixes out to clients during the lockdown? I guess it’d be possible to audition the binaural rendering over something like AudioMovers, but certainly not the full 7.1.4 experience.
Yeah, I was sending headphone mixes out to the artists. It was mostly remote reviews, and often still is.
When it comes to mixing music in Dolby Atmos, do you have a specific philosophy or approach? Some mixers seem to opt for a more conservative route and use the additional speakers primarily for reverberation, while others are more experimental and place isolated instruments behind or above the listener.
I mostly go for the second one, because I generally find it more ‘immersive’ and interesting. I’m a bit more careful with simpler acoustic songs though, as pulling apart something like just piano, guitar, and vocals too much breaks up the listening experience.
With a more involved production, you can have things like a Hammond or strings swirl around a bit to make it more exciting. I try to avoid relying on too much movement, though I did end up panning a lot of stuff on that St. Vincent album. That record kind of lent itself to that treatment though, since it has more psychedelic elements like phasers and other effects.
Nowadays I like to spread things around, but I'm not necessarily zig-zagging guitar solos all over the place. That's too much.
I’d almost argue they want you to play around more with movement in Atmos though, especially given how easy it is to create tempo-based moves and sequences with the Dolby Atmos Music Panner plugin. At least in Pro Tools, it’s so much easier than having to use the automation lanes to create a smooth circular pan when working in 5.1.
Yeah, the Music Panner’s been around since the early days of Atmos. I definitely did some of that time-based panning, but I also like almost freehand drawing certain effects with the mouse. It’s in two parts, first the lateral movement and then the height action on top. If you want to create something a bit more unique and spacey, sometimes you have to just put in the time and do the automation. So I don't necessarily mind spending time on a pan move, it's just part of creating that experience.
I also love the Sound Particles Energy Panner, you can make some really interesting shapes with that. It starts doing the move when the signal comes in, then waits when it drops out and restarts. Since it doesn’t reset to the same position, it’s really helpful to do fast movements or swirls.
Do you mix primarily with objects, or a combination of beds and objects?
Both, but I use some of my objects as if they’re static bed positions. The regular bed is mostly for reverb returns, but there are also some static objects in different places that I’ll feed an auxiliary track to as if they’re part of the 7.1.2 configuration. Those objects each have a different position, size, and binaural setting, but they don’t move.
Working this way gives me more flexibility, because the regular bed is always set to ‘mid’. I then have this separate ‘ring’ of objects that kind of feeds both the top and bottom sections of the soundfield. In my opinion, this helps with the timbre change that sometimes happens with object movement when translated into binaural.
I work the same way, with a series of static stereo objects for each speaker pair (front height, rear height, top middle, sides, rear, etc) plus a few extras reserved for sequencer moves. How do you get around not being able to use bus compression on the entire mix?
The way I have it set up, a traditional bus compression scheme just wouldn’t work the same way. I kind of like having a more dynamic mix, and I wouldn’t necessarily want the compression on my drum sets also acting on the synth part I’m swirling around.
If the original stereo master was pushed hard in mastering in terms of limiting and EQ, then I try to manipulate the stems individually to get it as close as I can. If you’re listening on headphones, you can definitely tell the difference. Sometimes the clients want it to be a bit more ‘in-your-face’ and punchy like the stereo master, but I’m able to achieve that effect by tweaking individual elements rather than the whole mix.
At the beginning of the process, what kind of assets do you typically receive from the client? Is it just a Pro Tools session – where you cross your fingers and hope you have all the same plugins they used – or are you getting stems with effects like reverb and compression committed?
It was different every time in the beginning, but now I ask people to print stems with all their bus processing. I ask for well-separated stems though: not just drums and bass on a single stereo track. I’ll usually ask for the snare, overheads, and percussion broken out if possible. With guitars, different parts like an acoustic melody or a solo electric should be kept separate too. It’s essentially a multitrack, but with all the processing baked in.
If it went through an analog mastering stage or was mixed on a board, I work with whatever I can get and try to match the additional processing by ear.
Once you’re done mixing in Atmos, is there an additional mastering stage or is it simply released as-is?
No, I’m technically both the Atmos mix and mastering engineer. There are some mastering engineers getting Atmos projects to master, but I do it as a single person. I try to match the sonic qualities of the stereo master and make sure it’s up to spec in terms of loudness, because with Atmos there’s no final step where you’re going through a two-bus compression or parallel EQ.
The spec is based on overall integrated loudness, and -18 LKFS is as high as you can go. So I’ll make sure that’s where the loudest track on the album sits, with the remaining track levels set relative to it.
It’s almost like a film mix, where you’re working to deliver the final product. There’s no extra step at the end.
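As a rough sketch of that leveling step (the track names and loudness readings below are hypothetical, just to illustrate the arithmetic):

```python
# Hypothetical integrated-loudness readings (LKFS) for an album's
# Atmos mixes, as read off the renderer's loudness meter.
measured = {"track1": -16.2, "track2": -18.5, "track3": -20.1}

TARGET = -18.0  # -18 LKFS integrated, the ceiling mentioned above

# One gain offset, applied to every track, brings the loudest track
# to the target while preserving the album's relative levels.
offset = TARGET - max(measured.values())

adjusted = {name: round(lkfs + offset, 1) for name, lkfs in measured.items()}
print(adjusted)  # the loudest track lands at -18.0; the rest sit below it
```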
With the introduction of Apple’s “Spatial Audio” format, it seems most listeners are experiencing a binaural approximation of Atmos over headphones rather than the full immersive experience on a home theater setup. Is it difficult to simultaneously achieve effective results on both speakers and headphones? How do you deal with the differences between the Dolby Renderer’s binaural engine and Apple’s proprietary version?
It was difficult for me at first; I had to figure out my workflows, and Dolby has improved their binaural rendering since we started. Apple is constantly updating theirs as well.
In the beginning, I found things like overheads, hi-hats, and vocals to sound overly bright and sibilant on headphones. When you put something up in the height channels, the timbre changed too much for me. It’s fine for sound design, but for music I really felt it was compromising the EQ and intended sound of the instrument.
So I was making EQ changes to get a better result on headphones, but that was altering the speaker mix I was happy with in the first place, which didn’t really make sense. They ended up changing their binaural model to compensate for this, and it got better.
Going back to my workflow with objects positioned in a ‘ring’ between the ground and heights, that ‘blend’ helps negate the timbral change.
Apple’s is a bit different. There’s still that extra step with having to print an MP4 and then listen back on an Apple device, but they’ve been very open in terms of feedback.
The process of figuring out what I liked on the Dolby binaural and then getting that to translate well into the Apple binaural was really a matter of ear training and experience, but the good thing is that the technology has improved vastly since I’ve been doing this.
In Logic, I think you can switch between the Apple and Dolby binaural renderings in real time. It'd be great to see Apple partner with Avid to implement that feature in Pro Tools, but I suppose it’s not in their interest to help out a competitor [laughs].
Exactly. Logic users got a real benefit with that built-in renderer. I’m on a Pro Tools system, with two machines running. My renderer is on a separate computer, all hooked up with Dante, so I’m still on that old workflow of having to print and then listen.
Going back to your comments on loudness, one thing I’ve found odd when mixing in Atmos is that the binaural master output tends to go ‘in the red’ even when you’re sticking to a target of -18 or even quieter for speaker playback. Is it considered bad practice to trigger the built-in limiter on the binaural master?
Not necessarily. If you’re mixing to -18, you should be in a pretty good spot. But I also pay attention to how hard that limiter is hitting the binaural meter. If it’s going past 6 dB, you should probably back off a little. I try to stay around 3 dB. You’re inevitably going to end up ‘in the red’ because it’s trying to fit 12 channels of -18 into a stereo output.
Sometimes I get stereo mixes where the master is technically clipping, like it’s over digital zero because they’ve intentionally pushed it to that point. If it doesn't sound like it's actually clipping and you don't hear distortion, there’s probably no issue.
As a good rule of thumb though, I wouldn’t go past 6 dB of gain reduction on your binaural limiting. Any more than that and you’ll notice the pumping effect and the mix getting squashed.
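A back-of-the-envelope way to see why that fold-down meter runs hot (generic channel-summing math, not Dolby’s actual binaural renderer):

```python
import math

# Summing many channels into two raises the level: uncorrelated
# signals sum in power, adding 10*log10(N) dB, while fully
# correlated signals sum in amplitude, adding 20*log10(N) dB
# in the worst case.
N = 12  # the 7.1.4 channel count mentioned above

uncorrelated_db = 10 * math.log10(N)  # ~10.8 dB of level buildup
correlated_db = 20 * math.log10(N)    # ~21.6 dB in the worst case

print(f"{uncorrelated_db:.1f} dB to {correlated_db:.1f} dB of buildup")
```

Even a mix sitting at -18 on the speakers can therefore push a stereo fold-down well into limiting.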
Do you believe there’s a significant difference in sound quality between the 768 kbps Dolby Digital Plus codec used by the streaming services for Atmos delivery and the ADM masters? Have you heard of any plans to eventually increase the streaming bitrate?
I don't know if that will happen. ADM files can be anywhere from 600 MB to 2 or even 4 GB, depending on how many objects you use. You can’t stream a file that big to people’s cell phones. All things considered, I think the quality of it is surprisingly decent. When you think about how much data compression is being used to shrink a 2 GB file to 30 MB, I’m shocked it’s not awful. You’re right though, they could make it better.
In the IAA shop, we sell albums in the MKV format with Dolby Atmos audio encoded in lossless Dolby TrueHD. TrueHD files are significantly smaller than ADM masters but larger than an MP4, averaging around 300-400 MB per song. I wonder if it would be possible to implement this into a streaming service?
I need to brush up on my codecs. Does that include the objects too? I remember they were saying that at least one of the alternative codecs didn’t support the object binaural information and speaker metadata, which is why they used the MP4 format.
Upping the file size from even 30 to 300 MB is still a big jump though, especially with a catalog of 20,000 to 40,000 songs. I’m not really certain, but maybe they’ll figure it out eventually.
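Running the numbers from this exchange (the per-song sizes are the rough estimates quoted above, not measured values):

```python
# A 4-minute song at the 768 kbps Dolby Digital Plus streaming rate:
kbps, seconds = 768, 4 * 60
mp4_mb = kbps * seconds / 8 / 1000  # ~23 MB, in line with the "30 MB" figure

# Catalog storage at ~30 MB per song (DD+ MP4) versus ~300 MB per
# song (lossless TrueHD), for the catalog sizes mentioned above:
for mb_per_song in (30, 300):
    for songs in (20_000, 40_000):
        tb = songs * mb_per_song / 1_000_000  # decimal terabytes
        print(f"{mb_per_song} MB/song x {songs:,} songs = {tb:.1f} TB")
```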
Do you have a personal “wishlist” of albums you’d like to mix in Dolby Atmos?
I’m really excited to mix new music, because you can do anything with it. It would have been fun to do an older title, like Pink Floyd’s Dark Side of The Moon, but you end up in this impossible position of having to satisfy people who grew up with and love that record.
So there’s definitely a bit more creative freedom with newer releases. With catalog, you have to be a bit more respectful and cautious in certain cases. Overall though, I think any records with psychedelic or spatial properties lend themselves well to the format.
The other tricky thing with catalog remixes – especially older albums from the ‘60s or ‘70s that were recorded on tape – is that they may have committed multiple elements to a single channel when tracking, so you can’t separate them.
Yeah, exactly. I did a few David Guetta singles from the early 2000s in Atmos, and they had Pro Tools sessions that wouldn’t fully open because the plugins were discontinued. It’s a challenge, because you have to hope that the original recordings were preserved in a good format.
I know sometimes people do weird things at the last minute, like adding an effect or harmony line during the final print, so it doesn’t exist on the multitrack, just in the finished stereo mix.
Catalog stuff is definitely fun to work with in theory – I’d love to do Pink Floyd or Funkadelic, for instance – but you don’t know what you’re getting in terms of files or if you can even do the kinds of things you’re envisioning for the Atmos mix.
Do you feel optimistic about the future of this format? Are you confident that it’ll take hold not just with audiophiles, but also the mainstream market?
I have no doubt about that. The music world is just getting into it, but Atmos isn’t really a new technology. It’s been around in the film industry for close to ten years now. Most feature films have an Atmos mix. HBO, Netflix, and all the other video streamers are offering Atmos content.
I don't know if there is a TV broadcaster in the US yet, but Sky in the UK has football matches broadcast in Atmos. The whole ecosystem is growing. There are even cars with Atmos systems now. If you don’t have speakers at home, why not listen in an immersive format during your hour-long commute? The car is almost the perfect studio for the consumer, because it's a closed and controlled environment.
Can you tease any upcoming immersive projects you’re involved with?
The last project that I worked on was Sigur Rós’ new album; they’re going out on tour. I did a few of their older records, 2002’s ( ) and 2012’s Valtari, in Atmos as well.
Another upcoming project is Prateek Kuhad. He's signed with Elektra/Atlantic and released an album in English last year called The Way That Lovers Do. I kind of discovered him when they asked me to do the Atmos mixes; turns out he’s a great songwriter. It's the one-year anniversary of the album, so there’s a deluxe edition with acoustic versions in Atmos coming out soon, and they're really good.