63 Independent Listeners
693 Scored Data Points
11 Perceptual Parameters
30-Point Gap — Both Days
Three systems.
One variable.
The electromagnetic environment.
The Design
Three headphone systems. Identical amplifiers, identical DACs, identical headphones. The only variable: the electromagnetic field environment surrounding each rack — from stock and uncontrolled, to foundation-level SR treatment, to a full reference-level implementation.
The Condition
An open convention floor at CanJam NYC 2026. Crowd noise. Competing audio systems in adjacent rooms. Three simultaneous listeners, each selecting their own music. The single worst acoustic environment available. By design.
The Scoring
A differential scale from −5 to +5 against Rack A as zero baseline. Each point represents 20% of maximum possible improvement. Eleven perceptual parameters drawn from established Head-Fi vocabulary — width, depth, imaging, micro-detail, transient response, decay, background silence, timbre, treble, bass, and musicality.
The Disclosure
Every methodological constraint is documented in full — listener awareness, listener-controlled volume levels, the Day 1 network failure, equipment tampering between sessions. Negative scores are retained. Skeptical listeners are named. Nothing was excluded from the dataset.
The findings that require explanation.
The Invariant Gap
The gap between the Foundation and Reference electromagnetic environments held at exactly 30 percentage points on both days — Day 1 with 15 listeners, Day 2 with 48 listeners drawn from a harder, more skeptical walk-on convention population. If expectation bias were driving the scores, the gap should have shifted as the listener population changed. It did not move by a decimal point.
The Binary Question
Listeners were asked: if Rack A and Rack C have nearly identical RLC measurements, does the spec sheet fully describe what you just heard? Across 37 clear answers, 61% answered NO — concluding that identical electrical measurements do not capture what they experienced. That result held as the listener pool grew larger and more skeptical on Day 2.
Musicality — The Highest-Scoring Parameter
Across both days, musicality returned the highest differential score of any parameter. This is not incidental. Musicality is the aggregate of everything the electromagnetic environment affects — timing relationships, harmonic density, spatial coherence, dynamic contrast. It is also the parameter most resistant to any measurement protocol currently available.
“This is not proof. It is the most transparent perceptual dataset this industry has produced on electromagnetic field effects in audio. Read it and tell us what's missing.”
— Ted Denney III, Founder & Lead Designer, Synergistic Research
Every constraint. Documented.
This study was not designed as a laboratory experiment. It was designed to answer a single field-condition question: can 63 experienced listeners, in a real-world environment, perceive a consistent and repeatable difference between three electromagnetic environments built around identical hardware?
The scoring instrument uses a differential scale rather than an absolute scale. Rack A — stock hardware, no SR treatment — is defined as zero. Listeners score Rack B and Rack C against that baseline. Each integer point represents 20% of maximum possible improvement. The scale accommodates negative scores for any parameter where SR treatment was perceived as a degradation.
Documented constraints, disclosed in full:
Listeners were not blind to the rack configurations. The study's argument rests on gap invariance across populations — not listener naivety. The gap held regardless.
Volume levels were self-selected per listener and per track. With three simultaneous listeners choosing their own music from Qobuz, mastering-level gain differences baked into the source recordings made level-matching across racks impossible by design.
A network infrastructure failure on Day 1 reduced the listener count to 15 and degraded source quality on the SR-equipped racks. This is documented, directionally assessed, and retained in the dataset rather than excluded.
Between Day 1 and Day 2, a rack component was interfered with by an unknown party. Documented and disclosed.
The AI analysis was given one instruction: report everything. It disclosed every unfavorable finding without prompting. The complete dataset — including negative scores, outliers, and skeptical binary responses — is reproduced in full in the white paper.
All 11 parameters.
Both days.
| Parameter | Rack B Avg | Rack C Avg | Delta | C vs Maximum |
|---|---|---|---|---|
| Width | +1.5 | +2.8 | +1.3 | 56% |
| Depth | +1.6 | +2.9 | +1.3 | 58% |
| Imaging | +1.7 | +3.2 | +1.5 | 64% |
| Micro-Detail | +1.7 | +3.3 | +1.6 | 66% |
| Transient Response | +1.2 | +2.9 | +1.7 | 58% |
| Decay | +1.4 | +2.8 | +1.4 | 56% |
| Background Silence | +1.5 | +2.6 | +1.1 | 52% |
| Timbre | +1.9 | +3.3 | +1.4 | 66% |
| Treble | +1.6 | +3.2 | +1.6 | 64% |
| Bass | +1.6 | +3.2 | +1.6 | 64% |
| Musicality | +1.7 | +3.4 | +1.7 | 68% |
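The table's arithmetic follows directly from the scale definition in the methodology: one integer point equals 20% of maximum possible improvement. A minimal sketch of the conversion, using two rows copied from the scorecard above (the variable names and structure are illustrative, not taken from the white paper):

```python
# Differential averages (Rack A = 0 baseline), copied from the scorecard table.
scores = {
    "Width":      (1.5, 2.8),   # (Rack B avg, Rack C avg)
    "Musicality": (1.7, 3.4),
}

POINT_PCT = 20  # one scale point = 20% of maximum possible improvement

for name, (rack_b, rack_c) in scores.items():
    delta = round(rack_c - rack_b, 1)      # Rack C's advantage over Rack B
    c_vs_max = round(rack_c * POINT_PCT)   # Rack C as a percent of maximum
    print(f"{name}: delta {delta:+.1f}, Rack C at {c_vs_max}% of maximum")
# → Width: delta +1.3, Rack C at 56% of maximum
# → Musicality: delta +1.7, Rack C at 68% of maximum
```

The same conversion applied to every row gives the percent-of-maximum figures in the table's final column.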
Full Technical Report
Read it.
Tell us what's missing.
The complete white paper includes full raw scorecard data, listener-level analysis, methodology detail, and the physics framework underlying SR's field-based engineering philosophy.
PDF · Synergistic Research · CanJam NYC 2026