DRAFT AMBULANCE SERVICE PERFORMANCE STANDARDS

KERN COUNTY EMS DEPARTMENT

JAN DE LEEUW

Department of Statistics, University of California, Los Angeles, CA

 

 Date: November 15, 2006.

 

I consulted both Draft Version 5 and Draft Feedback/Comments Version 2.

 

This note comments briefly on Section IX of these standards, which regulates response time performance.

 

Compliance: 

The standard for compliance is set at 90%, i.e. 90% or more of the responses have to be within the specified time limits. There is no argument supporting this percentage, which seems arbitrary. How does it compare with other counties?

Note that KCFD in its comments suggests 93%.

Zones:

Required response times differ per zone (Metro/Urban/Suburban/Rural/Wilderness).

It is unclear from the standards document how zones are defined, but the whole concept seems somewhat counter-intuitive.

Any location in the EOA can be described by its distance to the nearest ambulance station (or the nearest three ambulance stations). Response times are more naturally defined in terms of these geographical coordinates.
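As a rough illustration, such distances can be computed directly from coordinates. The following sketch uses made-up station coordinates (not actual Kern County stations) and straight-line great-circle distances, so it only shows the idea:

    import math

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles between two (latitude, longitude) points."""
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical station coordinates (latitude, longitude), for illustration only.
    stations = [(35.37, -119.02), (35.41, -118.95), (35.05, -118.17)]

    def nearest_station_distances(lat, lon, k=3):
        """Distances (miles) from a location to its k nearest stations, sorted."""
        dists = sorted(haversine_miles(lat, lon, s_lat, s_lon) for s_lat, s_lon in stations)
        return dists[:k]

    print(nearest_station_distances(35.39, -119.00))

Road distances or drive times would of course be more realistic than straight-line distances, but the point is that response time expectations can be tied to such a continuous measure rather than to zone labels.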

Accountability:

If the provider does not meet the standards in a particular month, nothing seems to happen. If the provider does not meet the standards in a particular year, the EMS department may modify the zoning for a particular area (after consulting with the provider). This does not look like enforcement of compliance at all.

One would, at the very least, have to tie the costs of the permits or the level of county taxes to performance.

It says that "The ambulance provider will take precautions to assure that no zone within the EOA is chronically underserved." But this does not define "chronically" and it does not specify any form of sanction. 

Measurement:

Response time is the time between "call" and "arrival at scene." There is no problem with reliably establishing the "call" time, since it will be in the dispatch records. But the "arrival at scene" time is more difficult to measure reliably.

For various reasons, there may be a time interval between "wheels stopped" and "contact with the patient," and it is obvious that "contact with the patient" is the more important time point.

Also, "arrive at the scene" time is established when the dispatcher is notified by ambulance crew.  There should be information in the document about the validity and reliability of the "arrive at the scene" time measurement.

Multiple Ambulances:

If more than one ambulance is required, the response of the first arriving ambulance determines compliance. Although this is probably a rare event, it does imply some bias in the measurements; it would make more sense to use the response time of the last arriving ambulance. For instance, if two ambulances are needed and the second one does not arrive at all, that could not reasonably be called compliance.
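A minimal sketch of the difference between the two rules (the response times and the 10-minute limit below are hypothetical):

    def compliant_first(arrival_minutes, limit):
        """Current rule: the first-arriving ambulance determines compliance."""
        return min(arrival_minutes) <= limit

    def compliant_last(arrival_minutes, limit):
        """Suggested rule: the last-arriving ambulance determines compliance."""
        return max(arrival_minutes) <= limit

    # Two ambulances are needed; one arrives after 8 minutes, the other never.
    times = [8.0, float("inf")]
    print(compliant_first(times, limit=10))  # True under the current rule
    print(compliant_last(times, limit=10))   # False under the suggested rule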

 

Exceptions:

There are clearly valid reasons to grant exceptions to the response time standards. Nevertheless, the EMS Department has to be very careful here. For instance, if an exception is requested based on "weather and roadway conditions", the Department has to make sure these conditions actually were severe enough to warrant the exception.

This has to take into account, for example, that rural areas already have considerably longer response times, and that rural areas demand appropriately customized ambulances and equipment.

If suitable infrastructure is available, then "weather and roadway conditions" should not be used as a reason for an exception (for the same reason that equipment failures and traffic congestion cannot be used; see IX F3).

Overload:

The "overload score" for an hour is defined as the mean plus 1.5 times the standard deviation for priority 15  calls dispatched at that hour for the past 20 weeks.

So if there have been 10 calls on average at 4 p.m. over the last 20 weeks, with standard deviation 4, then 10 + 1.5 × 4 = 16 is the overload score.

Thus a provider handling more than 16 calls is in overload and can get a response time exception. In the case of an approximately normal distribution of calls, this means there is overload about 7% of the time (the probability of exceeding the mean by more than 1.5 standard deviations).

If the load over the 20 weeks is approximately constant, then the standard deviation is almost zero, and the provider is in overload any time the call volume is above the mean (which will happen about 50% of the time).
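A small sketch of the rule, using the numbers from the example above (the near-constant case uses a hypothetical standard deviation of 0.1):

    from math import erf, sqrt

    def overload_score(mean, sd):
        """Overload score: mean plus 1.5 times the standard deviation of hourly call counts."""
        return mean + 1.5 * sd

    def prob_above(z=1.5):
        """P(X > mean + z * sd) for a normally distributed call volume."""
        return 1 - 0.5 * (1 + erf(z / sqrt(2)))

    print(overload_score(10, 4))    # 16.0: the example in the text
    print(prob_above(1.5))          # about 0.067, i.e. roughly 7% of hours

    # Near-constant load: the threshold sits barely above the mean,
    # so roughly half of all hours would count as "overload".
    print(overload_score(10, 0.1))  # 10.15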

In order to form an opinion about the reasonableness of this rule, one would have to have an idea of the actual values of the mean and standard deviation.

Clearly, if the standard deviation is really small, the rule is not reasonable, because it provides an exception in situations that are not at all rare.

100 Response Rule:

Compliance on a monthly basis in any priority category is only required if there are more than 100 responses in that priority category. Otherwise, the last 100 responses will be used to establish compliance.

This means, for instance, that wilderness areas may not have accumulated 100 responses in total, so there are no compliance requirements.

It also means that if service deteriorates in low-density areas, it will take a relatively long time until the provider is out of compliance.
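A sketch of what the rolling window implies for a low-volume zone (the response records below are hypothetical; the 90% threshold and 100-response window are taken from the draft):

    def fraction_on_time(on_time_flags):
        """Fraction of the given responses that met the time limit."""
        return sum(on_time_flags) / len(on_time_flags)

    def rolling_100_compliance(history):
        """Compliance computed over the most recent 100 responses."""
        return fraction_on_time(history[-100:])

    # Low-volume zone: about 10 responses per month. Even if every response in
    # the current month is late, the rolling window still contains 90 older
    # on-time responses, so measured compliance only drops to exactly 90%.
    history = [True] * 90 + [False] * 10
    print(rolling_100_compliance(history))   # 0.9
    print(fraction_on_time([False] * 10))    # 0.0 for the current month alone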

Again, to evaluate the reasonableness of this rule, one would need to know the number of responses in the various zones over time.

Data:

In order to find out whether the 90% limit, the overload rule, and the 100 response rule are appropriate, or are appropriate for any given area, one would need detailed monthly information about call volume and response times from EMS.

In other words, one would need to have the data mentioned in Section X on Records and Reports. I don't know if these data are publicly available in a suitable electronic format.

 

Department of Statistics, University of California, Los Angeles, CA 90095-1554. Email address, Jan de Leeuw: deleeuw@stat.ucla.edu. URL, Jan de Leeuw: http://gifi.stat.ucla.edu

 

Data derived from http://www.co.kern.ca.us/ems/news.asp