Monday, July 27, 2009

Photo Enforcement Safety Benefits: Separating Fact from Fiction

Speed Camera Photo Enforcement

Photo Enforcement Safety Benefits: Separating Fact from Fiction (Part II)

Evaluation of the Executive Summary of the Report Entitled "Evaluation of the City of Scottsdale's Loop 101 Photo Enforcement Program" (Draft Summary).

Updated: 28 JUL 09, 1840EST

In my previous blog post, I laid the groundwork and context for this series of related posts.

In this part, I look at the Executive Summary of the preliminary analysis results of the fixed speed-enforcement camera demonstration program (SEP) that was "conducted" on Arizona State Route 101 (a.k.a. Loop 101) during the period of January 2006 through October 2006 (a duration of approximately 10 months).

The stated purpose of this "evaluation" program was to quantify the following five elements:
  • The impact of the SEP (photo enforcement program) on speeding detections (76 mph or faster)
  • The impact of the SEP (photo enforcement program) on average speeds
  • The effect of the SEP (photo enforcement program) on traffic safety (i.e., motor vehicle crashes)
  • The expected economic costs and benefits of the SEP (photo enforcement program)
  • The financial and public-perception impacts of the program
The "evaluation" was administered (but not conducted) by the Arizona Department of Transportation (ADOT) and utilized data from a variety of sources, namely the Arizona Department of Public Safety (for crash report data), ADOT itself (for data, traffic volume, and speed data), the Arizona Crash Outcome Data Evaluation Systems (for crash data and crash costs), the National Highway [Traffic] Safety Administration (NHTSA) and Redflex (for detection rates and traffic speeds).

In essence, the vendor standing to benefit most from a "favorable" outcome of this "evaluation" program was Redflex itself. Does this sound a bit like asking the proverbial fox to guard the hen house?


OK. Here is where the fun begins.

The evaluation program was to look at four key time periods:
  • The before-SEP (photo enforcement program) period
  • The SEP (photo enforcement program) warning period, during which only warnings were issued to drivers
  • The SEP (photo enforcement program) violation period, during which actual citations were issued to drivers
  • The after-SEP (photo enforcement program) period, during which the system was no longer utilized for the remainder of the evaluation study
The Scottsdale SEP program employed 6 speed cameras on a 6.5-mile section of Loop 101 within the city limits of Scottsdale.

Now keep in mind that the primary objective of the SEP program was to assess the potential safety and cost benefits of reduced speeding detections, in terms of an assumed consequent reduction in overall crash and injury rates.

The very periods likely to have the highest rates of crashes (particularly multi-vehicle accidents, or MVAs) or associated injuries are, for obvious reasons, the periods of high vehicle density, like the morning and evening rush hours.

Would you be surprised if I told you that those periods were specifically left out of the speed detection portion of the program?

The reason? Simple.

The report itself stated that during peak-hour traffic, speeds were constrained by congestion, and therefore it was highly unlikely that high speeds (defined as more than 11 mph over the PSL of 65 mph) were even possible!

For this "reason" speed data were discarded or not measured. In other words, the times when crashes were potentially at their highest rates, speeding could have not be a direct causation factor, but other dynamics were more likely to be, such as excessive traffic density, driver inattentiveness, poor vehicle maintenance, and/or inadequate highway engineering designs.

I find this a remarkable admission.

I suspect that had those data been included, the overall "beneficial effects" of "reduced crash rates" as they related to speed (i.e., the positive effects of the SEP program) would have been further marginalized.

For convenience, and so that Redflex could call its program a "success," speed data from the most dangerous periods were completely ignored! Furthermore, the report stated that its empirical data sets were either extremely limited or completely non-existent.

To make up for this deficit of real, concrete (independently generated) empirical data, the "authors" instead relied upon limited data from another section of the highway (not even within the city limits of Scottsdale!) and upon complex statistical models, the kind that only individuals with Ph.D.s would use.

By using data from an entirely different portion of the highway and relying on "hocus-pocus" math, the report severely undermined the integrity of its own conclusions.

As if this wasn't enough...

Even with the limited data and the massaged statistical analyses, the findings suggested a 33% increase in rear-end multi-vehicle (MVA) crash rates (not at all surprising considering the unsafe dynamics these photo enforcement systems create, such as "traffic porpoising"), though the report also offered the somewhat paradoxical finding that the related injuries were reduced (by 12.57%).
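
For readers who want to see the arithmetic behind a figure like "a 33% increase," here is a minimal sketch (in Python) of how a before/after crash-rate comparison is typically computed once crash counts are normalized by traffic exposure. The crash counts and vehicle-mile figures below are invented for illustration; they are not taken from the report.

    # Hypothetical illustration only: the crash counts and traffic volumes
    # below are invented, not taken from the Scottsdale report.

    def crash_rate(crashes, vehicle_miles):
        """Crashes per 100 million vehicle-miles traveled (VMT)."""
        return crashes / vehicle_miles * 100_000_000

    def percent_change(before, after):
        """Relative change from the before-period rate to the after-period rate."""
        return (after - before) / before * 100

    # Before-SEP period vs. violation period (made-up numbers).
    rate_before = crash_rate(crashes=60, vehicle_miles=250_000_000)
    rate_after = crash_rate(crashes=80, vehicle_miles=250_000_000)

    print(f"Rear-end crash rate change: {percent_change(rate_before, rate_after):+.0f}%")
    # -> Rear-end crash rate change: +33%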

In reality, one would expect injuries associated with this crash type (along with offset or head-on collisions) to be of the most serious nature (i.e., spinal/neck).

The executive summary also suggested that the increases in these rear-end crash types were swapped for decreases in other (i.e., potentially less severe) types. In plain English, this means the promised crash reductions did not necessarily occur; rather, a shift toward different (i.e., more severe) crash types did.

I find this an even more remarkable admission.

However, the report did indicate that this comparison of the program evaluation periods to the "before-program" period did not consider other factors that could have accounted for the varying accident rates observed or calculated, such as weather or other roadway conditions (like construction zones or lane restrictions).

Even with its limited amount of concrete empirical data, the report basically concluded that the SEP program reduced average speeds by approximately 9.5 mph during its implementation.

Now that may sound like a lot, until one considers that highway posted limits should be set at the 85th percentile speed (according to ITE engineering standards).

Therefore, if Loop 101 had a more appropriate posted limit of 70 mph or 75 mph (again, speeds only obtainable during non-peak times), then these "speed detection reductions" would have been entirely irrelevant.

Stated another way, it appears to me that the SEP program only confirmed that the PSL (posted speed limit) should be raised in accordance with normal traffic flow rates (and that it remains 5 mph to 10 mph too low).
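
For the curious, here is a minimal sketch of the 85th-percentile calculation that ITE-style speed studies rely on when setting posted limits. The speed samples are made up for illustration and are not measurements from Loop 101.

    import numpy as np

    # Invented free-flow (non-peak) speed observations in mph; not Loop 101 data.
    free_flow_speeds_mph = np.array([
        62, 64, 65, 66, 67, 68, 68, 69, 70, 70,
        71, 71, 72, 72, 73, 73, 74, 75, 76, 78,
    ])

    p85 = np.percentile(free_flow_speeds_mph, 85)
    print(f"85th-percentile speed: {p85:.1f} mph")
    # With these made-up samples the result is about 74 mph, which would argue
    # for a posted limit of 70-75 mph rather than 65 mph.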

The executive summary of the SEP report does "come clean" in certain respects:
  • The "results" were based on small and incomplete [data] samples. (ie; insufficient data)
  • The "results" were based upon incomplete time (ie; insufficient time where random fluctuations of crash were common and could have influenced the results substantially)
  • Crash result trends were made at another site and crash data was used from high-peak periods (ie; rush-hour) even though speed detections were ignored for same periods.
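
To see why that second bullet matters, here is a rough back-of-the-envelope sketch, assuming crash counts behave approximately like a Poisson process (an assumption of mine, not something the report states), showing how large purely random fluctuations can be when crash samples are small:

    from math import sqrt

    # Hypothetical crash counts; if counts behave roughly like a Poisson
    # process, a count of N carries a statistical uncertainty of about sqrt(N).
    for n in (25, 50, 100, 400):
        rel_noise = sqrt(n) / n * 100
        print(f"{n:>4} crashes -> ~{rel_noise:.0f}% random fluctuation (one sigma)")

    # With only a few dozen crashes per evaluation period, swings of 10-20% can
    # appear or vanish by chance alone -- comparable in size to the changes the
    # report treats as program effects.
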
Another interesting golden nugget suggested in the executive summary of this preliminary report of the Loop 101 SEP evaluation program was the admission that the Scottsdale stretch of Loop 101 was already statistically safer to drive than other highways throughout the country!

Perhaps this was the report's most remarkable admission, for several reasons.

This acknowledgment suggests the obvious: that speed, in and of itself, was not a major contributor to highway crash or injury rates, or at most had a limited adverse impact on overall highway safety.

Why would a city that was being actively lobbied at the time by Redflex and/or ATS conduct such a test [for/with] these very same companies, which were (at the same time) forecasting huge potential profits from 'exploitation' of the marketplace?

Why would the findings (i.e., conclusions) from this preliminary report be used to drive legislative policy nearly two years before the final report was to see the light of day?

Answer (to both questions): Follow the money.

Describing the SEP as an "unqualified" success (at least by anyone not financially connected to its findings) would be a bit of a stretch.

Concluding (as the report later did) that a 33% increase in rear-end crashes as a result of the SEP was a "negligible" increase and an "equitable" exchange for less-severe accidents would be more than a bit of a stretch.

That's enough analysis for now, because if you are like me, your head may be dizzy from all the spin.

©2009, all rights reserved. No portion of this article may be reproduced without the expressed consent of the author.
