
Why Consumer UBI Approaches Just Don’t Cut it for Commercial Insurance

Plus, Key Questions to Ask Regarding Driver Scorecards

Applying UBI to commercial insurance requires a very different approach than the typical consumer UBI we are all familiar with. Commercial use of UBI technology needs to identify very different factors of driving risk to deliver underwriting insight and provide the foundation for an effective loss prevention program. Commercial insurance applications have been slow on the uptake thus far because vendors have simply not delivered the value commercial insurers require: the predictive risk associated with driving behavior. This is the missing piece of underwriting insight for commercial insurers. Just how different are commercial and personal lines solutions?

Commercial vs. Personal Lines UBI

Unlike personal lines,

  • Commercial risk prediction is not aimed at delivering a “discount” to an opt-in (self-selected) participant. Value is delivered as incremental underwriting insight for the insurer as well as an effective loss prevention tool for the fleet, enabling both parties to participate financially: the insurer gains underwriting insight and the fleet can implement an effective loss prevention program.
  • Commercial application typically involves mandated full participation by the driving population. Every fleet driver can be compared to every other driver, and every fleet to every other fleet, giving the insurer effective comparisons across its book of business and giving fleets effective comparisons among all of their drivers.
  • Commercial application is more focused on highly accurate driver risk prediction for underwriting insight. Driver behavior is the most actionable incremental insight the insurer can use to accurately predict future risk.
  • Commercial insurance is more focused on higher-value, higher-premium vehicles with much higher loss experience, so an effective loss prevention program, along with accurate risk identification, offers a more attractive and attainable ROI and, ultimately, an improved loss history.
  • Commercial insurers desire device neutrality, so insureds can use the technology they have already chosen (telematics device, video camera, smartphone App) while still providing underwriting insight to the insurer. The smartphone adds the value of being deployable across the insurer’s entire book of business at an attractive price, whereas individual devices will never provide a telematics view across all insureds.
  • Commercial insurance is more focused on the loss prevention efforts of fleets, while personal lines solutions place little focus on driver improvement. Insurers commonly incent or co-fund fleet loss prevention initiatives, since both the fleet and the insurer benefit from improved fleet loss experience. In commercial applications, the motivation for driver improvement comes from fleet safety as well as the insurer; in consumer applications, the motivation is typically a discount in insurance cost, a status only a minority of drivers will ever qualify for.
  • Commercial auto is a loss leader; increased volume does not move the profit dial, but accurately identifying, removing, or appropriately pricing the worst risks does. Driver risk profiling and the associated risk pricing are therefore much more important to commercial insurers.
  • Just like personal lines, negative customer selection is a huge motivation to accurately identify fleet and driver risk exposure—let your worst risks go to your competition (or price them appropriately).

Achieving More Risk-Predictive Scoring

Given that the commercial fleet insurance space is far different from the consumer space, and that the focus is much more clearly on the identification of driving risk, there are key questions that should be answered about the ability of various solutions to deliver truly risk-predictive value from driving behavior.
The understanding of actual driver behavior is typically the missing piece in the insurer’s quest to underwrite future risk. Industry actuarial insight is available, and specific customer loss histories are available. It is a fairly simple process (and typically already within the insurer’s actuarial capabilities) to determine the “risk of the driven road”: the risk impact of roads, road conditions, weather, time of day, geography, and the inherent safety of the vehicle. The missing piece in the underwriting equation, the differentiator between similar types of insureds, is driving behavior calculated accurately.
Since most TSP, video, and smartphone App providers that seek to monetize telematics data do so on the basis of questionable logic about risk determination, the insurer should understand the real predictive value of risk associated with driving and what kind of data processing is required to deliver it.


Key Questions to Ask Regarding Driver Scorecards

While it is obvious that all drivers are not created equal, it may be less obvious that all driver scorecards are not created equal either. There are very significant differences between scorecards, and those differences directly impact how actionable the scoring is and how much predictive risk value it offers. The world is in a rush to “monetize telematics value,” yet the value of telematics data is directly related to its ability to accurately predict risk, most notably the risk associated with driver behavior.

Given the vast potential of identifying driver risk through the interpretation of vehicle motion, the cornerstone of any insurance telematics assessment ought to be the accuracy of that risk identification. In reality, however, we find very little questioning or understanding of the accuracy of the typical driver dashboard.

This guide provides a framework for understanding the accuracy of driver scoring solutions and aims to identify the type of scoring insurers can count on to be most predictive of future driving risk, and hence the optimal addition to underwriting operations, pre-renewal processes, and loss prevention programs.

Driver scorecards are available with virtually every solution from video providers as well as telematics suppliers (TSPs). These scores attempt to quantify the risk of behavior identified predominantly from vehicle motion, typically derived either from accelerometer data or from calculating changes in speed/momentum over time. As customers wanted more insight into identifying and correcting driving risk, TSPs, typically relying on telematics devices provided by a small group of suppliers, used the device’s ability to set an arbitrary g-force level and keep track of how many times a driver exceeded it. This became the foundation of the “risky event.” The g-force threshold was often set simply to produce a “manageable/reportable” number of events, again an arbitrary distinction based on expediency. From a safety perspective, it was concluded that the number of times a driver exceeded an arbitrary g-force momentum change was a good depiction of that driver’s risky driving.
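
To make concrete what that device-level approach actually computes, here is a minimal sketch of threshold-based event counting. The 0.3 g threshold and the sample trip are purely illustrative assumptions, not any vendor’s actual implementation.

    G = 9.81  # m/s^2 per g

    def count_threshold_events(accel_samples_ms2, threshold_g=0.3):
        # Count samples whose absolute acceleration exceeds the g-force threshold.
        return sum(1 for a in accel_samples_ms2 if abs(a) / G > threshold_g)

    # One-second longitudinal acceleration samples (m/s^2) from a hypothetical trip.
    trip = [0.4, -1.2, -3.5, -0.8, 2.9, -4.1, 0.2]
    print(count_threshold_events(trip))  # -> 2 "events", with no sense of severity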

Among video solution providers, early market leaders relied on marking an erratic event (determined by a g-force level being exceeded) to be retrieved later and used as the basis of corrective driver feedback. They likewise made a real-time determination of what g-force threshold would constitute a risky event and then captured a short video clip before and after the “triggering event” (the g-force level being exceeded, causing the clip to be recorded). With declining costs it became possible to record video continuously, but the concept of marking relevant events is still pervasive, so the fleet has a tool to get at the most relevant video examples more efficiently.
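
For readers who want to see the mechanics, the following is a rough sketch of that trigger-and-buffer pattern: keep a rolling buffer of recent frames and, when a g-force trigger fires, save the seconds before and after it. The frame rate, buffer lengths, and trigger rule are illustrative assumptions only.

    from collections import deque

    FPS = 10                              # frames per second (illustrative)
    PRE_SECONDS, POST_SECONDS = 8, 8      # seconds of video kept around the trigger

    def capture_clips(frames, is_trigger):
        # Collect [pre-trigger + trigger + post-trigger] clips around each trigger.
        pre = deque(maxlen=PRE_SECONDS * FPS)   # rolling buffer of recent frames
        clips, current, post_remaining = [], None, 0
        for frame in frames:
            if post_remaining:                  # still filling the post-trigger portion
                current.append(frame)
                post_remaining -= 1
                if post_remaining == 0:
                    clips.append(current)
                    current = None
            elif is_trigger(frame):             # e.g. a g-force threshold was exceeded
                current = list(pre) + [frame]   # snapshot the seconds before the trigger
                post_remaining = POST_SECONDS * FPS
            pre.append(frame)
        return clips

    # Fake "frames": each frame is just its g-force reading here.
    readings = [0.1] * 100 + [0.6] + [0.1] * 100
    print(len(capture_clips(readings, lambda g: g > 0.5)[0]))  # 80 + 1 + 80 = 161 frames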

So what’s wrong with this? While “good enough” or “better than nothing” scoring has been pervasive, it is not accurate enough to be relied upon to predict future driving risk from past performance. Applying better science to the interpretation of risk is not significantly more expensive; it is just better. Better science produces more predictive risk scoring for insurers and forms the basis of a more accurate loss prevention and driver behavior improvement tool for the fleet.

When Evaluating Driver Scorecards and Their Accuracy, the Customer Should Ask the Scorecard Provider the Following Questions:

1. How do you define a risky event? If the event is determined by exceeding a simple g-force or delta-v/delta-t threshold, the inherent inaccuracy is easy to describe. Accelerometers or delta-v/delta-t calculations measure the relative change in vehicle momentum and correlate that to risk. The issue, however, is that it is very easy to achieve a significant change in relative momentum at low speed. At high speed you can have a very significant braking event, for example, yet the change in relative momentum is relatively small. This results in a preponderance of low-speed events and very few high-speed braking events, which is exactly the opposite of what it should be, since higher-speed events are inherently riskier. And speed is only one of the variables that must be properly compensated for to achieve accurate driver risk scoring.
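
A toy comparison makes the problem visible. In the sketch below, a hard stop in a parking lot trips a raw deceleration threshold while a meaningful braking event at highway speed does not; the simple speed-aware weighting is our own illustrative assumption, not any particular vendor’s model.

    def raw_event(decel_g, threshold_g=0.35):
        # Device-style rule: any deceleration above the fixed threshold is an "event".
        return decel_g > threshold_g

    def speed_weighted_risk(decel_g, speed_kph):
        # Assumed weighting: the same deceleration matters more at higher travel speed.
        return decel_g * (speed_kph / 100.0)

    parking_lot = dict(decel_g=0.40, speed_kph=15)    # trips the raw threshold
    highway     = dict(decel_g=0.30, speed_kph=110)   # misses the raw threshold

    print(raw_event(parking_lot["decel_g"]), raw_event(highway["decel_g"]))    # True False
    print(speed_weighted_risk(**parking_lot), speed_weighted_risk(**highway))  # ~0.06 vs ~0.33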

2. When your vendor speaks of a driver safety system, what is the focus of that program? Vendors will often cite improvements in speed management or the identification of non-driving (contextual) events as important to driver scoring. The essence of driver scoring, however, should be the identification of risk specifically in how the driver drives. Correlating location information with contextual insight (weather, time of day, road conditions, the accident history of the road, the inherent safety of the vehicle, etc.) has value, but these are all factors the insurer has refined internally over many years. From a safety perspective, there is very little you can do to address the “risk of the driven road.” The vast majority of the opportunity lies in correcting weaknesses in driver behavior, and that can only be done through the most accurate assessment of risk possible.

3. Does your vendor talk exclusively about g-forces, or is it understood that changes in momentum are not necessarily significant unless viewed in the context of the entire driving situation? The focus should be on energy displacement, not on exceeding a pre-determined momentum-change threshold. Energy displacement cannot be determined in real time at the device level; continuous “big” data must be interpreted on the back end so all appropriate compensations can be applied to yield the most accurate risk scoring.
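
A worked example shows why the distinction matters. The sketch below compares momentum change (mass times delta-v) with kinetic-energy change (one half mass times velocity squared) for the same 20 km/h speed drop; the truck mass and speeds are illustrative assumptions.

    def kmh_to_ms(v_kmh):
        return v_kmh / 3.6

    def delta_momentum(mass_kg, v0_kmh, v1_kmh):
        # Momentum change in kg*m/s: scales linearly with delta-v.
        return mass_kg * abs(kmh_to_ms(v0_kmh) - kmh_to_ms(v1_kmh))

    def delta_kinetic_energy(mass_kg, v0_kmh, v1_kmh):
        # Kinetic-energy change in joules: scales with the square of speed.
        return 0.5 * mass_kg * abs(kmh_to_ms(v0_kmh) ** 2 - kmh_to_ms(v1_kmh) ** 2)

    mass = 12000  # kg, a hypothetical medium-duty truck

    print(delta_momentum(mass, 30, 10), delta_momentum(mass, 110, 90))              # ~66,667 each
    print(delta_kinetic_energy(mass, 30, 10), delta_kinetic_energy(mass, 110, 90))  # ~370 kJ vs ~1,852 kJ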

4. Does the solution identify the severity of events? If you simply declare a risky event whenever an arbitrary threshold is exceeded, you are keeping a tally of how many times that threshold is crossed and deriving a driver risk score from that tally. Your safety program does not exist below that arbitrary threshold, and there is no segmentation of event severity above it. The problem is that there is a world of difference between an event that barely exceeds the threshold and the less frequent, truly dangerous event, and you want to know the difference. In insurance terms, most driver dashboards give you a driver risk score based on (poorly defined) frequency only; there is no ability to understand the true severity of any particular event. If appropriate compensations are not applied to the data, there is no way of knowing the risk of an event from a g-force reading alone; it is entirely possible for an event at 0.06 g to be more dangerous than another event at 0.08 g. Knowing both the frequency and the severity of events is critical to determining risk, and this is only done by understanding the entire context of the driving situation.
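
As a simple illustration of the difference, the sketch below contrasts a frequency-only tally with a severity-weighted score; the severity bands and weights are illustrative assumptions, not a calibrated actuarial model.

    SEVERITY_WEIGHTS = {"minor": 1, "moderate": 4, "severe": 20}  # assumed weights

    def frequency_only_score(events):
        # Every threshold crossing counts the same.
        return len(events)

    def severity_weighted_score(events):
        # Weight each event by its severity band.
        return sum(SEVERITY_WEIGHTS[e["severity"]] for e in events)

    driver_a = [{"severity": "minor"}] * 6                      # many small events
    driver_b = [{"severity": "minor"}, {"severity": "severe"}]  # one truly dangerous event

    print(frequency_only_score(driver_a), frequency_only_score(driver_b))        # 6 vs 2
    print(severity_weighted_score(driver_a), severity_weighted_score(driver_b))  # 6 vs 21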

5. Does the system identify your best drivers as well as your worst? Typical telematics solutions use a (flawed) calculation of driver risk but can at least generate a score for every driver. Many solutions (especially video-oriented ones) focus on identifying the worst examples of risky driving and intervening aggressively with those drivers. The problem is that little reporting or feedback is available to the rest of the driver population to help them understand their behavior and make the subtler corrections that might improve their driving safety. Driver acceptance of the overall program is greatly enhanced when the majority of drivers actually receive feedback about how good they are. Good drivers take pride in this aspect of their performance and make an even greater effort to improve their scores. While this does not move overall company risk as much, every bit of improved driving helps and is important to positive acceptance of the program within the fleet.

6. Does the solution make the specific compensations to the data needed to create an accurate depiction of risk? Earlier, we noted that a change in vehicle momentum is only meaningful if you know the speed of the vehicle at the moment the event happened. In similar fashion, a heavier vehicle takes longer to adjust to any given driving situation, requires more braking energy to stop in the same distance, and carries more destructive potential in its kinetic energy. Therefore, the weight of the vehicle is needed to determine what safe driving looks like. The heavier the vehicle, the smaller the margin for error, so the scoring must be more critical of heavier vehicles going through the same circumstances at the same speed, with the same degree of momentum change, and so on.
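
A minimal sketch of one possible mass compensation is shown below: the same speed drop dissipates proportionally more kinetic energy in a heavier vehicle, so an identical maneuver can be scored more critically as gross vehicle weight rises. The normalization against a light passenger vehicle is an assumption made for illustration.

    def kinetic_energy_change_joules(mass_kg, v0_ms, v1_ms):
        return 0.5 * mass_kg * abs(v0_ms ** 2 - v1_ms ** 2)

    def mass_adjusted_severity(mass_kg, v0_ms, v1_ms, reference_mass_kg=1500):
        # Normalize against a light passenger vehicle (assumed baseline) so heavier
        # vehicles receive proportionally higher severity for the same maneuver.
        same_maneuver = kinetic_energy_change_joules(reference_mass_kg, v0_ms, v1_ms)
        return kinetic_energy_change_joules(mass_kg, v0_ms, v1_ms) / same_maneuver

    print(mass_adjusted_severity(1500, 25, 15))   # 1.0  (sedan baseline)
    print(mass_adjusted_severity(18000, 25, 15))  # 12.0 (loaded straight truck)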

7. Does the system compensate for trip length? A long, uneventful trip (two hours on a freeway with good traffic flow) will be biased toward a better score than a shorter, stop-and-go trip. Is there a method of normalizing those two trip results so they can be accurately compared from a safety perspective? How is the favorable bias toward longer trips compensated for?
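
One possible normalization, sketched below, is to score risky seconds per hour of active driving rather than per trip; the choice of denominator is an illustrative assumption.

    def raw_trip_score(risky_seconds):
        # Raw event tally: ignores how much driving produced those risky seconds.
        return risky_seconds

    def exposure_normalized_score(risky_seconds, active_driving_minutes):
        # Risky seconds per hour of active (non-idle) driving.
        return risky_seconds * 60.0 / active_driving_minutes

    freeway  = dict(risky_seconds=6, active_driving_minutes=120)  # long, uneventful trip
    city_run = dict(risky_seconds=5, active_driving_minutes=20)   # short, stop-and-go trip

    print(raw_trip_score(6), raw_trip_score(5))   # 6 vs 5: the freeway trip looks "worse"
    print(exposure_normalized_score(**freeway))   # 3.0 risky seconds per hour
    print(exposure_normalized_score(**city_run))  # 15.0 risky seconds per hour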

8. Does your scorecard take into account the duration of an event? If the system is “event based” (looking to declare an event in real time at the device level), there is no view of the overall risk associated with a longer-duration event, for example a long braking event in which no particular moment exceeds the threshold level. Second-by-second analysis offers the ability to identify the risk in one second, interpret the risk of the next contiguous second, determine whether it is a continuation of the event, and add up the risky segments to arrive at an overall risk score. In an event-based system, no single point in time necessarily exceeded the pre-determined threshold, so this wouldn’t even show up as an event.
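
The sketch below illustrates this kind of back-end aggregation: contiguous risky seconds are grouped into a single event even though no individual second crosses the device-level trigger. Both thresholds are illustrative assumptions.

    DEVICE_TRIGGER_G = 0.35  # hypothetical real-time device trigger
    RISKY_SECOND_G   = 0.20  # hypothetical per-second threshold used on the back end

    def backend_events(per_second_decel_g):
        # Group contiguous risky seconds; return (start_index, duration_s, total_g_seconds).
        events, start, total = [], None, 0.0
        for i, g in enumerate(per_second_decel_g + [0.0]):  # sentinel closes the last run
            if g >= RISKY_SECOND_G:
                start = i if start is None else start
                total += g
            elif start is not None:
                events.append((start, i - start, round(total, 2)))
                start, total = None, 0.0
        return events

    trace = [0.05, 0.25, 0.30, 0.28, 0.26, 0.10]  # long braking, never above 0.35 g
    print(any(g > DEVICE_TRIGGER_G for g in trace))  # False: no device-level "event" at all
    print(backend_events(trace))                     # [(1, 4, 1.09)]: one 4-second event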

9. If you are relying on accelerometers to determine momentum change, are you sure those x, y, and z readings are accurate? A small distortion, or drift, in those readings can cause significant misinterpretation of data. How do you determine if accelerometer data is true?
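
One common sanity check, sketched below, is to estimate a constant bias from readings taken while the vehicle is known to be stationary and subtract it from later samples. This is a deliberately minimal illustration; production calibration also has to handle orientation, temperature drift, and gravity separation.

    def estimate_bias(stationary_samples_ms2):
        # Average reading while parked; a well-calibrated axis should read ~0.
        return sum(stationary_samples_ms2) / len(stationary_samples_ms2)

    def debias(samples_ms2, bias):
        return [s - bias for s in samples_ms2]

    stationary = [0.06, 0.05, 0.07, 0.06]  # m/s^2 readings while the vehicle is parked
    bias = estimate_bias(stationary)
    print(bias)                               # ~0.06 m/s^2 of constant offset
    print(debias([0.06, -3.44, 1.26], bias))  # readings corrected before scoring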

10. In the interest of expediency, most driver scorecards and risk prediction programs rely on an “event” being captured in real time at the device level. More accuracy is achieved through the evaluation of continuous data samples. Is your vendor delivering event summaries, or is the complete data fully evaluated on the back end? From a “change in momentum” point of view, even one-second samples contain a lot of noise, typically requiring sub-sampling within the second. How does the solution balance the need for accurate driver risk prediction with an economical data approach?
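
As an illustration of why sub-sampling matters, the sketch below smooths noisy 10 Hz samples with a simple moving average before summarizing the second; the sampling rate and window size are illustrative assumptions.

    def moving_average(samples, window=5):
        # Simple trailing moving average over sub-second samples.
        out = []
        for i in range(len(samples)):
            chunk = samples[max(0, i - window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    # 10 Hz longitudinal acceleration (m/s^2) within one second: noisy spikes
    raw_10hz = [-1.0, -4.8, -0.6, -3.9, -1.2, -4.5, -0.9, -4.2, -1.1, -4.0]

    print(max(abs(s) for s in raw_10hz))                  # 4.8: a single noisy spike
    print(max(abs(s) for s in moving_average(raw_10hz)))  # ~3.0: smoothed view of the second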

11. Is your smartphone App really smart? While the potential use of smartphones is obvious for large scale commercial deployments, there are really only a handful of smartphone App providers that have the capability to effectively use smartphones to generate the data needed to do sophisticated scoring. These providers apply advanced artificial intelligence and signal processing to differentiate between motion related to the actual vehicle vs. motion of the smartphone within the vehicle. How is the determination of risky driving behavior/events made with the use of the App?
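
As a toy illustration of the kind of plausibility check involved, the sketch below compares the speed change implied by accelerometer data with the speed change reported by GPS over the same window; large disagreement suggests the phone moved independently of the vehicle. Real providers use far more sophisticated signal processing; the tolerance here is an assumption.

    def implied_speed_change(accel_ms2, dt=1.0):
        # Integrate accelerometer readings over the window (1 Hz samples assumed).
        return sum(a * dt for a in accel_ms2)

    def phone_moved_independently(accel_ms2, gps_speed_start, gps_speed_end, tol_ms=2.0):
        # Flag the window if accelerometer-implied and GPS-derived speed change disagree.
        return abs(implied_speed_change(accel_ms2) - (gps_speed_end - gps_speed_start)) > tol_ms

    # Vehicle braking smoothly while the phone sits in a mount:
    print(phone_moved_independently([-1.0, -1.2, -0.9], 20.0, 16.9))  # False
    # Phone picked up and handled while vehicle speed barely changes:
    print(phone_moved_independently([3.5, -4.0, 5.2], 15.0, 14.8))    # True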

Summary

With an event-based approach to driver scoring, all momentum-change conclusions are flawed to some degree. They may create a rough picture of driving risk, which is better than nothing, but you can do much better. For maximum accuracy of driver risk prediction, continuous data should be interpreted on the back end, where the appropriate compensations can be made to guarantee the integrity of the data and to determine the risk of momentum changes in the context of the entire driving situation.

Alan Mann is President of Accuscore, a San Diego-based company specializing in the accurate determination of driving risk through the application of superior science. More predictive driving risk enables commercial insurers to underwrite better while also enabling the fleet to implement a cost effective and efficient driver behavior improvement program.