I'm guessing your studio experience was some time ago - 24-bit recording (and the 32/64-bit floating-point audio engines) means there's essentially no noise floor in the digital realm. Obviously if you're capturing analog sources then getting the gain structure right and minimising signal-chain and source noise is part of the game (shit in = shit out) - and compression raises the noise floor [of an individual recorded track] in most cases.
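To put rough numbers on the "essentially no noise floor" claim (my back-of-envelope figures, not anything from the thread): the theoretical dynamic range of integer PCM is about 6.02 dB per bit plus 1.76 dB, which puts 24-bit well below the noise of any real analog front end:

```python
# Theoretical dynamic range of integer PCM: ~6.02 dB per bit + 1.76 dB.
# (Rule-of-thumb paper spec; real converters fall a fair way short of it.)
def pcm_dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(round(pcm_dynamic_range_db(16), 1))  # 98.1 dB - CD audio
print(round(pcm_dynamic_range_db(24), 1))  # 146.2 dB - beyond any analog signal chain
```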
Other than that you both seem to be arguing the same point - audiophile 'purity' is nonsense.
The only place where really high paper-spec gear is used in music production is mastering, and I suspect most of the magic there is in the people carrying out the work!

Edit: and I think you're both right about who is worrying about this - friends in the industry just make music and use their ears; it's dorks on places like SoS who obsess over jitter and speak authoritatively about £30k stereo converters (at least until you realise they've never made music professionally).
No-one sets up their studio with any of that stuff in mind.
Have you ever been in a studio whilst it's in use, or conversed with sound engineers?
Noise floor... it's all about reducing the noise floor. You're going to have a lot of channels mixed together, each bringing its own background noise. It doesn't matter so much where in the chain you go from analog to digital - that's largely about trade-offs in control, tooling and sound - but the constant battle is the noise floor.
Yes... reduce the noise floor, and then compress it out of the final mix. If the noise floor rises into the listening range of the composition, nothing can help you.
Oh, I should include a picture...
Source: https://libremusicproduction.com/answer/noise-floor.html
All that hiss and noise, even when not audible on a single channel, combines to create noise that, once there's enough of it, rises into the audible range. This stuff is like magic: think of ghost notes, where several notes played together give rise to hearing a note that wasn't played... well, this happens with noise too. Individual samples and channels may have no perceivable noise, and yet combined in certain ways they give rise to notes that don't exist but can be heard. Fighting the noise floor is hard. I've been in a studio with PJ Harvey and Flood and saw them spend a whole day just trying to find which sample was causing one minor symptom of a raised noise floor.
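The "notes that don't exist" effect is easy to reproduce numerically: pass two tones through a mild even-order nonlinearity (a toy stand-in for the ear or a saturating stage - my illustration, not the poster's) and a difference tone appears at a frequency neither source contains:

```python
import cmath
import math

SR = 48_000          # sample rate
N = 4_800            # 0.1 s window; every frequency below is a whole number of cycles
f1, f2 = 1000.0, 1200.0

def tone(f):
    return [math.sin(2 * math.pi * f * n / SR) for n in range(N)]

clean = [a + b for a, b in zip(tone(f1), tone(f2))]
# Mild even-order nonlinearity - stand-in for the ear or saturating gear.
bent = [x + 0.1 * x * x for x in clean]

def magnitude_at(signal, f):
    # Single-bin DFT: correlate the signal with a complex exponential at f.
    acc = sum(s * cmath.exp(-2j * math.pi * f * n / SR)
              for n, s in enumerate(signal))
    return abs(acc) / len(signal)

diff = f2 - f1  # 200 Hz: a "note" neither source contains
print(magnitude_at(clean, diff))  # essentially zero
print(magnitude_at(bent, diff))   # ~0.05: a difference tone has appeared
```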
The academic, hypothetical purity of any single component is just irrelevant. It may matter for reproduction of audio and processing of single signals (games, VoIP, etc)... but it's totally irrelevant when it comes to recording, production and mastering, because so long as the word clocks are synchronised everything can be lined up just fine.
The battle in a studio, the concern for those who record, is combining their many channels together without increasing the noise floor.
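That battle has a simple statistical core: uncorrelated noise adds in power, so mixing N equally noisy channels raises the floor by 10·log10(N) dB - 16 channels cost you roughly 12 dB. A quick sketch (my numbers, not the poster's):

```python
import math
import random

def noise_floor_rise_db(n_channels: int) -> float:
    # Uncorrelated sources add in power: N equal channels -> 10*log10(N) dB rise.
    return 10 * math.log10(n_channels)

# Numerical check: mix 16 channels of independent Gaussian "hiss".
random.seed(1)
N, samples = 16, 50_000
channels = [[random.gauss(0.0, 1.0) for _ in range(samples)] for _ in range(N)]
mix = [sum(frame) for frame in zip(*channels)]

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

measured = 20 * math.log10(rms(mix) / rms(channels[0]))
print(round(noise_floor_rise_db(N), 2))  # 12.04
print(round(measured, 2))                # close to 12 dB
```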
Sheesh, even the links you cited mentioned "reproduction" and "playback"... single source, and nothing to do with music creation, production or mastering.
There are some areas I'll defer to you on, but this one where you specifically stated "most music is produced on a Mac mini / iMac"? Nah, you're talking out of your arse quite loudly.