Tuesday, July 05, 2011

Distance and Signal Strength RD Tests: A Case of the Tail Wagging the Dog?


Updated: 23 Apr 14, By Veil Guy


I have been meaning to publish an article on this subject for years, and I had been sitting on the finishing touches until I came across a comment posted online this evening.

The forum member had just read the latest annual test from a long-established RD testing website and incorrectly interpreted its results. That misreading further reinforced my concern about performance comparison reviews that have been presented in a fashion that confuses the ordinary consumer about how different radar detectors actually perform relative to one another.

As a result of that post, I decided it was high time to dust off this article, apply some finishing touches, and publish it. (I also want to note that it is not my intention to offend anyone in the following article, but I do believe the story needs to be told, for the benefit of BOTH the manufacturers and the consumers of radar detectors.)

So here we go...

Straight Talking Radar Detector Comparison Tests: Distance vs Signal Strength vs Sensitivity

When I think back to the early passive radar detectors of the 70's, the biggest issue besides their lack of sensitivity was their inability to provide signal-strength feedback that would allow the driver to accurately assess the sense of urgency: Should I let off the gas pedal and apply a light touch to the brakes, or do I need to stand on the brakes immediately?

Many early radar detector adopters were truck drivers, and the most common tool for a truck driver was his or her CB radio. So it came as no surprise when Escort introduced their first superheterodyne model that it shared something in common with the CB radio: both employed a signal-strength meter that truckers had come to depend on over the years. Beyond alerting you to an approaching radar threat, a radar detector must also convey an accurate sense of urgency about that threat.

When the threat is sufficiently far away, a radar detector should alert with a level of urgency that informs the driver that while there is a threat ahead, he or she has enough time to react appropriately and safely (i.e., gradually) in the event any speed "corrections" need to be made.

On the other hand, if the threat is imminent, the detector must inform the driver that attention is needed immediately.

When the approaching threat is somewhere between these two extremes, the detector should indicate this as well with a varying degree of urgency.

In each of the three conditions cited above, a radar detector which conveys this sort of information through its alerting mechanism accurately and gradually is said to provide a good (smooth) signal ramp. A radar detector that does not, provides a poor signal ramp and is far less desirable.
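To make the idea of a "smooth" signal ramp concrete, here is a minimal illustrative sketch (entirely my own construction, not any manufacturer's actual firmware): a hypothetical linear ramp that maps received signal strength to a Geiger-style beep rate, so urgency grows in proportion to the threat.

```python
# Illustrative sketch only: a hypothetical linear "signal ramp" mapping
# received signal strength (0.0 = barely detectable, 1.0 = saturated)
# to a Geiger-style alert rate in beeps per second.
def linear_ramp(signal_strength: float,
                min_rate: float = 1.0,
                max_rate: float = 20.0) -> float:
    """Return an alert rate that grows proportionally with signal strength."""
    s = max(0.0, min(1.0, signal_strength))  # clamp to [0, 1]
    return min_rate + s * (max_rate - min_rate)

# A weak, distant signal produces a relaxed alert...
print(linear_ramp(0.1))  # ~2.9 beeps/sec
# ...while a strong, nearby signal demands immediate attention.
print(linear_ramp(0.9))  # ~18.1 beeps/sec
```

An "aggressive" ramp, by contrast, would push even that weak 0.1 signal toward the top of the beep-rate range, which is exactly the behavior described below.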

But something happened to one of my favorite brands of detectors. Their silky-smooth ramp-up (of nearly 30 years) gave way to a ramp-up that was, shall we say, less than desirable. I noticed and commented on this phenomenon when I reviewed an early Escort Passport 9500i (Escort's first GPS-enabled dashmount). My hope and expectation was that the observed behavior was an early production "glitch" (for lack of a better word).

Unfortunately, that did not turn out to be the case, as subsequent models exhibited similarly aggressive signal ramps (including newer revisions of existing detectors). An aggressive signal ramp occurs when a radar detector alerts to a weak signal but conveys an urgent Geiger rate. When an aggressive signal ramp is combined with a very sensitive radar detector, one ends up with a near full-strength alert for miles (depending on terrain), well before one's speed could even be clocked with police radar.

The opposite of an aggressive ramp is one that provides a low-level warning for the majority of an encounter, only to ramp up at the very tail end, after you're already well inside the "red zone." Cobra radar detectors tend to act in this manner (their dynamic range is far too broad).

My preference is a ramp-up that is somewhere in between. Maybe I was read the story of Goldilocks and the Three Bears one too many times as a child, but there may be a lesson in it. I have repeatedly commented on my observations in both written and video form, and I continue to hope that this recent trend toward overly aggressive signal ramps will be reversed, bringing ramps more in line with some recent Beltronics models (such as the all-time best, the Beltronics STi-R Plus) and Escort's pre-M4-platform (S7) models.

For years now I have asked myself why such an approach to signal ramp-ups and ramp-downs would be abandoned in favor of something far less useful at conveying information. I had not been able to come up with any plausible answer until recently, when I was reviewing historical test results from a long-standing RD test site.

The year was 2005. A particular radar detector test caught my eye and started the wheels turning in my mind. For the first time, the test (and the ones that followed in subsequent years, to this very day) reported detection results not just as distances, but accompanied by the signal strength indicated at each, with the outright suggestion that detectors reporting higher signal strengths were more sensitive than those reporting lower ones.

Having read that year's published results and the ones that followed, I believe these reports implied that any given radar detector alerting at a higher signal-strength level (as a percentage of its maximum) demonstrated a higher level of sensitivity than one that was also alerting, but with a lower level of urgency.

To the untrained eye or uneducated consumer, such an implication may sound plausible on the surface, but it is completely untrue.

It is conceivable that, between two identically designed models, the one consistently alerting at a higher signal strength at any given distance could properly be viewed as more sensitive. But making such comparisons, or drawing such conclusions, across different models (let alone different manufacturers) is egregiously wrong, because there is no standard to which everyone adheres.
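The fallacy is easy to demonstrate with a toy example (both ramp curves here are invented for illustration): two detectors with identical RF sensitivity receive the same weak signal, yet report very different strength percentages simply because their ramp curves differ.

```python
# Hypothetical example: two detectors with IDENTICAL sensitivity but
# different signal-ramp curves. Both receive the same weak signal, yet
# the "aggressive" unit displays a much higher strength percentage.
def aggressive_ramp(s: float) -> float:
    """Saturates quickly: weak signals already read near full strength."""
    return min(1.0, s ** 0.25)

def proportional_ramp(s: float) -> float:
    """Reports strength in proportion to the received signal."""
    return min(1.0, s)

received = 0.2  # the same weak signal, at the same distance, for both units
print(f"Aggressive unit reads:   {aggressive_ramp(received):.0%}")    # ~67%
print(f"Proportional unit reads: {proportional_ramp(received):.0%}")  # 20%
```

Ranking these two units by displayed strength would "prove" the aggressive one is more sensitive, when in fact it is merely louder about the same signal.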

To be candid, I fell prey to the same faulty conclusion when I road-tested one of the first Escort Passport 9500cis and came across a radar trap that was quite a distance away. The ci alerted at nearly full strength. It was so startling that I slammed on my brakes, thinking I was within speed-clocking range (typically less than 2000 feet). When I realized he was actually quite a bit farther away than that, my initial impression was: "Wow, what a sensitive detector, to be alerting at full strength from such a great distance." Even with all my years of driving with radar detectors, I was duped too. So I fully understand and empathize with those who come to the same faulty conclusion.

So, why would such a test be orchestrated and results be reported in such a manner?

Well, I thought about this too and I am putting forth the following speculation.

Over the many years of detector manufacturing (three decades plus and counting), the separation in performance against constant-on (continuously firing) radar, as evidenced on certain staged test courses (i.e., extreme long-distance ones), has compressed significantly as detectors in general have improved considerably, rendering such tests less meaningful and less relevant.

There was a time, not so long ago, when such a test could clearly demonstrate the superiority of one brand versus another. But as time has marched on and improvements have been made across all models from all manufacturers, such a test no longer reveals the meaningful differences that still exist between detectors.

Such long-distance test courses have been extended to 9+ miles and beyond**, to the point that the curvature of the earth comes into play and becomes the limiting factor. The problem then becomes: if all or most of the detectors are alerting at these extreme distances in a remote section of the United States, how can any distinctions be made for the reader to determine which ones are best?
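The curvature point can be checked with a back-of-the-envelope geometric calculation. The antenna heights below are my own assumptions, and real radio propagation (atmospheric refraction, terrain elevation along a course) can extend the range well beyond this purely geometric figure; the sketch only shows that earth curvature imposes a hard ceiling at near-ground heights.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def horizon_distance_m(height_m: float) -> float:
    """Geometric distance to the horizon for an antenna at height_m,
    using the small-height approximation d = sqrt(2 * R * h)."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m)

# Assumed heights: a dash-mounted detector (~1.2 m) and a roadside
# radar antenna (~1.0 m). Both values are illustrative guesses.
detector_h, radar_h = 1.2, 1.0
max_los_m = horizon_distance_m(detector_h) + horizon_distance_m(radar_h)
print(f"Max geometric line-of-sight: {max_los_m / 1609.34:.1f} miles")  # ~4.6
```

Anything detected beyond that geometric limit is riding on refraction or favorable terrain, which is exactly why such extreme-distance courses stop discriminating between detectors.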

Enter the signal strength level and the myth that higher signal strengths indicated at any given distance equate to higher sensitivity and performance levels.

Since readers (and especially manufacturers) expect winners and losers in any "test," what better and easier way to do this than by adding a seemingly important additional indicator?

The problem is, the logic is faulty as are the conclusions made from such a test.

Going back to the beginning of this article, an essential feature of a radar detector is the ability to accurately convey the level of any threat to its owner in real time. A detector that reaches its maximum alert either too early or too late is doing the driver a complete disservice: nearly every alert will either feel immediate, or never feel immediate enough, respectively.

While I absolutely appreciate the sheer sensitivity and performance of the current Escort line of detectors (such as the 9500ci mentioned earlier or the dashmount Redline), it is their overly aggressive signal ramp that has me personally favoring the Beltronics camp (and their brilliant STi-R Plus, with its far more useful signal ramp) and appreciating, even more so, other (less sensitive) detectors that have stayed true to accurate signal-strength reporting, such as the Valentine One, some Whistler models, and the earlier (S7-platform) Escort and Beltronics dashmounts.

Perhaps it is merely an unfortunate coincidence, but detectors incorporating far too aggressive, non-linear signal ramps, while appearing "good" on paper in tabulated form to some (uninformed) readers, in reality provide little or no utility in accurately conveying situational awareness.

To be clear, I am not suggesting (and I have no knowledge) that there is a direct connection between any given manufacturer's chosen signal ramp and such tests, nor am I condemning any specific reviewer's testing methods; I fully appreciate the time, expense, and effort required to conduct them. It is simply my assertion that reviewers as a whole (professional or amateur) should not place emphasis on signal strength and tie it to the appearance of improved sensitivity in a manner that encourages manufacturers to deviate from sound design approaches just for the sake of looking especially good on a test which, in the final analysis, serves little purpose other than to crown an arbitrary group of "winners and losers" and has no basis in reality.

Happy and safe motoring.

** Footnote: Anyone who has driven with a radar detector out West in the remote desert, or on a long flat bridge (like those of the Florida Keys or the Chesapeake Bay Bridge-Tunnel), and encountered a radar signal can tell you (as I will) that multi-mile alerts to radar are not only useless, they are annoying as hell. And despite claims to the contrary by some testing sites, extreme-distance detection capability (to constant-on radar) often doesn't translate into extreme alerting differences in the real world of everyday driving encounters; with the very same detectors, detection distances are generally far less than what such tests would suggest. There are many other attributes of detector performance (including environmental ones) that are just as important, if not more so.