-
No-one sets up their studio with any of that stuff in mind.
Have you ever been in a studio whilst it's in use, or conversed with sound engineers?
Noise floor... it's all about reducing the noise floor. You're going to have a lot of channels mixed together, each bringing its own background noise. It doesn't matter so much where in the chain you go from analog to digital - that's mostly about trade-offs in control, tooling and sound... but it's the noise floor that matters.
The most important thing for quality is how music is captured, mixed, processed and laid down into its final recording.
Yes... reduce the noise floor, and then compress it out of the final mix. If the noise floor rises into the final listening range of the composition, nothing can help you.
Oh, I should include a picture...
Source: https://libremusicproduction.com/answer/noise-floor.html

All that hiss and noise, even when inaudible on any single channel, combines until it rises into the audible range. This stuff is like magic: think of ghost notes, where several notes played together give rise to hearing a note that wasn't played... well, this happens with noise too. Individual samples and channels may have no perceivable noise, and yet combined in certain ways they give rise to tones that were never played but can be heard. Fighting the noise floor is hard. I've been in a studio with PJ Harvey and Flood and watched them spend a whole day just trying to find which sample was causing one minor symptom of a raised noise floor.
The academic and hypothetical purity of any single component is just irrelevant. It may matter for reproduction of audio and processing of single signals (games, VoIP, etc.), but it's totally irrelevant when it comes to recording, production and mastering, because so long as the word clocks are synchronised everything can be lined up just fine.
The battle in a studio, the concern for those who record, is combining their many channels together without increasing the noise floor.
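That combining behaviour is easy to put numbers on: uncorrelated noise sums in power, so every doubling of channel count lifts the combined floor by roughly 3 dB. A minimal numpy sketch (the 32-channel count and the -80 dBFS hiss level are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_db(x):
    """RMS level in dBFS (relative to full scale = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

n_channels = 32
length = 48_000  # one second at 48 kHz
# Each "channel" is just low-level uncorrelated hiss at about -80 dBFS RMS
channels = [1e-4 * rng.standard_normal(length) for _ in range(n_channels)]

single = rms_db(channels[0])
mix = rms_db(np.sum(channels, axis=0))
print(f"one channel: {single:.1f} dBFS, {n_channels}-channel mix: {mix:.1f} dBFS")
# Uncorrelated noise sums in power, so expect a rise of about 10*log10(32) ≈ +15 dB
```

So a floor that is inaudible per channel can climb by 15 dB in a 32-channel mix, which is exactly the battle described above.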
Sheesh, even the links you cited mentioned "reproduction" and "playback"... single source, and nothing to do with music creation, production or mastering.
There are some areas I'll defer to you on, but this one where you specifically stated "most music is produced on a Mac mini / iMac"? Nah, you're talking out of your arse quite loudly.
-
I'm guessing your studio experience was some time ago - 24-bit recording (and the 32/64-bit floating-point audio engines) means there's essentially no noise floor in the digital realm. Obviously if you're capturing analog sources then getting the gain structure right and minimising signal-chain and source noise is part of the game (shit in = shit out) - and compression raises the noise floor [of an individual recorded track] in most cases.
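For a sense of why 24-bit capture effectively removes the digital floor, the textbook figure for ideal PCM quantisation is about 6.02 dB per bit plus 1.76 dB - a back-of-envelope sketch (real converters land somewhat short of the theoretical number):

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of ideal PCM quantisation: 6.02*bits + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit PCM: ~{dynamic_range_db(bits):.0f} dB below full scale")
# 16-bit lands around 98 dB; 24-bit around 146 dB,
# which is far below the self-noise of any analog chain feeding it
```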
Other than that you both seem to be arguing the same point - audiophile 'purity' is nonsense.
The only place where really high paper-spec gear is used in music production is in mastering, and I suspect most of the magic there is in the people carrying out the work!

Edit: and I think you're both right about who is worrying about this - friends in the industry just make music and use their ears; it's dorks on places like SoS who obsess over jitter and speak authoritatively about £30k stereo converters (at least until you realise they've never made music professionally).
Ok cool, yes PCs too, but you’ve entirely proved my (imperfectly argued) point which is that there is a huge amount of juju around the ‘purity’ of digital sources, usb cables and power supplies when applied upstream of the DAC on playback. I’ve even seen claims that the speed you rip a CD at makes a difference 😂
The most important thing for quality is how music is captured, mixed, processed and laid down into its final recording. If mastering is done on a computer of some sort, and in most cases it is, then why is a computer unsuitable to perform the very simple task of extracting the data and buffering it to a digital stream?
I’m not doubting @StevePeel’s ears but equally I think I understand reasonably well stuff like clock syncing and jitter. When being mastered, most devices in the studio containing an ADC will have their clocks synchronised, but that’s because you’re dealing with multiple different embedded OSs, ADCs and connection methods (optical, USB etc). On playback however, clock syncing / jitter is rarely the cause of poor audio unless you’re dealing with a very poor quality stream converter or DAC. It’s far more likely to be caused by poor galvanic isolation or software drivers.
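The scale of the jitter problem is easy to quantify with the standard back-of-envelope formula for a full-scale sine, SNR ≈ -20·log10(2π·f·t_j); the 10 kHz tone and the jitter figures below are just illustrative:

```python
import math

def jitter_noise_db(signal_freq_hz: float, jitter_s: float) -> float:
    """Worst-case noise from sampling-clock jitter on a full-scale sine,
    relative to the signal: 20*log10(2*pi*f*t_j).
    A standard rule of thumb, not a full phase-noise model."""
    return 20 * math.log10(2 * math.pi * signal_freq_hz * jitter_s)

# Even a mediocre 1 ns of jitter on a 10 kHz tone sits around -84 dB...
print(f"{jitter_noise_db(10_000, 1e-9):.1f} dB")
# ...and a ~100 ps clock, common in modern DACs, pushes it to roughly -104 dB
print(f"{jitter_noise_db(10_000, 100e-12):.1f} dB")
```

Either figure is well below audibility on playback, which is why a grossly broken converter or driver is the more plausible culprit when playback sounds bad.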
TL;DR: perfect streaming with a sub -120 dB noise floor is quite achievable on a (properly functioning) computer, Windows or Mac.
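For a sense of scale, -120 dB as a linear amplitude ratio (nothing assumed here beyond the standard dB definition):

```python
def db_to_amplitude(db: float) -> float:
    """Convert a level in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

# -120 dB is one millionth of full scale
print(db_to_amplitude(-120.0))
```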
Sources: http://archimago.blogspot.com/2013/04/measurements-laptop-audio-survey-apple.html
https://www.audiosciencereview.com/forum/index.php?threads/auralic-leo-gx-dac-clock-review.11001/