AV 9000 Checklist Item Under Test:
For audio conference systems, adjust microphone input gain to demonstrate that a "standard talker" (60 dB SPL at 1 m), positioned at each talker position in the room, produces a "0 dB" level at the mixer-bus input meter of the audio conference DSP. If there is local reinforcement ("mix-minus"), AGC and ALC may need to be restricted. Record test results as pass/fail. Record the level across the analog telephone line. Inspect the DSP mixer telephone line levels, both transmit and receive, during normal speech in the room.
DSP manufacturers deploy different types of gain controls for their microphone inputs. One might be called “Coarse Gain” or “Input Type” and adjusts the level while the signal is still in its analog form. The other might be called “Fine Gain” or “Level” and adjusts the level after the signal has been converted to digital. A common rule of thumb says that most of the gain should happen in the analog stage, and the digital gain should be used minimally. Understanding why this rule of thumb was created will help the operator set the system gain structure properly. If the analog gain is set too high and the signal distorts, attempting to attenuate the level at the digital gain stage will only decrease the level of that distorted signal; it does nothing to remove the distortion. To make matters worse, the DSP meters may show the level as a “perfect 0 dB” signal in the digital realm, even though the signal hit the rails in the analog realm.
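This failure mode is easy to demonstrate numerically. The following is a minimal NumPy sketch, not any manufacturer's actual signal path: the "analog" stage is modeled as gain followed by a hard clip at the converter rails, the "digital" stage as a pure multiply, and both paths are trimmed so a digital RMS meter reads the same level. All names, gain values, and the THD estimator are illustrative.

```python
import numpy as np

FS = 48_000                     # sample rate, Hz (1 s of signal -> 1 Hz FFT bins)
F0 = 1_000                      # test tone frequency, Hz

t = np.arange(FS) / FS
tone = 0.1 * np.sin(2 * np.pi * F0 * t)        # quiet "microphone" signal

def analog_stage(x, gain_db, rail=1.0):
    """Analog gain: amplify, then hard-clip at the converter's rails."""
    return np.clip(x * 10 ** (gain_db / 20), -rail, rail)

def digital_trim(x, target_rms=0.5):
    """Digital gain: scale so the DSP meter shows the same level (no clipping)."""
    return x * target_rms / np.sqrt(np.mean(x ** 2))

def thd_percent(x):
    """Rough THD: energy in harmonics 2..5 relative to the fundamental."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    harm = np.sqrt(sum(spec[k * F0] ** 2 for k in range(2, 6)))
    return 100 * harm / spec[F0]

clean = digital_trim(analog_stage(tone, 10))   # modest analog gain: no clipping
dirty = digital_trim(analog_stage(tone, 30))   # analog stage slams the rails

# Both paths read identically on a digital RMS meter...
print(f"clean: RMS {np.sqrt(np.mean(clean**2)):.2f}, THD {thd_percent(clean):.4f}%")
print(f"dirty: RMS {np.sqrt(np.mean(dirty**2)):.2f}, THD {thd_percent(dirty):.1f}%")
# ...but only the dirty path is distorted; attenuation did not remove the clipping.
```

Running this shows both paths at the same meter level while the clipped path carries orders of magnitude more harmonic distortion, which is exactly the "perfect 0 dB but distorted" trap described above.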
The top gain settings on the DSP meter are set properly, indicated by the 0.0788% THD measurement on the attached meter picture. The bottom gain, although also reading 0 dB in the DSP, is not set properly, indicated by the 22.4% THD measurement. This is because the signal is distorting during the “Coarse Gain” stage, and the distorted signal is being attenuated at the “Fine Gain” stage.
I always tell our AV specialists that setting gain structure with a DSP is very easy. As long as you get 0 dB in and 0 dB out with standard talkers and sources, the entire system should be well behaved. Then, to confirm this good behavior, measure the signal-to-noise ratio (SN) and total harmonic distortion (THD) of the system…just to be sure. For simple systems, it should take all of 20 minutes to get a conference room well tuned. You can imagine my surprise when one of our most promising new specialists called me from a system he was setting up and he was barely intelligible.
We had drilled on easy setups like this before, and it should have been no problem. He showed me the levels in the room, and everything looked great. A standard talker produced 0 dB at all microphones. His output to the far end was 0 dB. My input from the far end was 0 dB. The references were all 0 dB. There was plenty of headroom everywhere. I had no idea where this distortion was coming from. When something like this happens, before calling the manufacturer, I like to take a small piece of the system and start the tuning process from scratch as a spot check. I took one microphone, routed it to the far end, and reset the gain. The microphone inputs had two gain settings: “Input Type” and “Fine Gain”. The Input Type had options of Line (0 dB), Electret (+30 dB), and Dynamic (+50 dB). The microphone was a wireless receiver whose output was kind of line level, but kind of not (roughly -20 dBu). Since the rule of thumb was to use the Fine Gain as little as possible, the “proper” setting was to set the Input Type to Electret (+30 dB) and then attenuate the signal 10 dB at the Fine Gain stage. If you want to use the Fine Gain as little as possible, it was “better” to apply 10 dB of attenuation there than to apply 20 dB of gain.
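The arithmetic behind that choice fits in a few lines. This back-of-the-envelope sketch uses the numbers from the scenario above (the roughly -20 dBu receiver output and the DSP's fixed Input Type gains); gains in decibels simply add along the chain.

```python
# A nominally line-level (~ -20 dBu) wireless receiver, two ways to reach 0 dB.
SOURCE_DB = -20                      # roughly line level, but not quite

# "Rule of thumb" path: Electret input (+30 dB analog), then -10 dB Fine Gain.
rule_of_thumb = SOURCE_DB + 30 - 10  # analog gain, then digital attenuation

# Alternate path: Line input (0 dB analog), then +20 dB Fine Gain.
line_input = SOURCE_DB + 0 + 20      # digital gain does all the work

# Both land at the same 0 dB meter reading -- the meter cannot tell them apart.
print(rule_of_thumb, line_input)     # -> 0 0
```

The meter reads 0 dB either way; the difference, as the rest of the story shows, is what happens to the signal before it is digitized.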
Because we were starting from scratch, and because the output of the wireless microphone was supposed to be line level, I set the Input Type to Line (0 dB), and then used the Fine Gain to bring the level up to 0 dB on the meter (a digital gain of 20 dB). It sounded great. There was no distortion. That’s where I found what was going on.
Originally, by applying 30 dB of analog gain (Electret Input Type), the new specialist was distorting the signal before it was digitized. When he applied 10 dB of attenuation, he got 0 dB on the DSP meter…but the signal was already distorted and in the system. No amount of digital algorithms could fix that signal. It was doomed from the beginning. As they say in film editing: “Poop in, poop out”.
I unknowingly broke the rule of thumb, and by doing so, fixed the problem. The issue isn’t using the digital gain as little as possible. The issue is maintaining a “clean signal” (no distortion) through the analog gain stage. Said a different way: if the digital gain is being used to attenuate the signal after an analog gain, this may indicate that the signal is being distorted in the analog stage. Ideally, there would be a meter at each gain stage to confirm that the signal is not distorting anywhere. Oftentimes, there’s not. Ideally, there would be a block diagram of the different gain stages of the mixer so an operator can understand exactly what’s happening to the signal. Oftentimes, there's not. In an effort to simplify the interface, manufacturers have stopped including these very helpful tools.
The takeaway from this experience was to make sure analog and digital gain are used properly. In general, the analog gain should be used as much as possible while still allowing for adequate headroom. The digital gain can then be used to make the mixer happy (0 dB). In fact, we haven’t found any issues with applying as much digital gain as possible. The system SN and THD are not affected by the digital amplification. I prefer understanding exactly what is going on in the equipment to rules of thumb, but I suppose they have their place. So, perhaps the new rule of thumb should be, “Much like the Bhut Jolokia (Ghost Pepper), digital attenuation in DSPs should be used sparingly, but like bacon, digital gain can be used anywhere.”
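As a sanity check on the claim that digital gain leaves SN untouched, here is a minimal NumPy sketch. It assumes a floating-point DSP, where digital gain is a single multiply with essentially unlimited headroom; fixed-point devices can behave differently, and the tone level, noise level, and SNR estimator here are all illustrative.

```python
import numpy as np

FS, F0 = 48_000, 1_000                    # sample rate and test tone, Hz
rng = np.random.default_rng(0)

t = np.arange(FS) / FS                    # 1 s of signal -> 1 Hz FFT bins
x = 0.05 * np.sin(2 * np.pi * F0 * t) + 1e-4 * rng.standard_normal(FS)

def snr_db(x):
    """SNR estimate: fundamental bin (plus leakage) vs. everything else."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    sig = spec[F0 - 2:F0 + 3].sum()       # fundamental +/- Hann leakage bins
    return 10 * np.log10(sig / (spec.sum() - sig))

for gain_db in (0, 20, 40):
    y = x * 10 ** (gain_db / 20)          # pure digital gain: one multiply
    print(f"+{gain_db:2d} dB digital gain -> SNR {snr_db(y):.1f} dB")
# The multiply scales signal and noise together, so the ratio cannot change.
```

The printed SNR is identical at every gain setting, which is the point: in the floating-point domain, gain is just arithmetic, and the ratios that define SN and THD ride along unchanged.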
Originally appeared in Sound and Communications Magazine 11/1/13