SNOW SPORTS INJURY STUDIES - WHAT ARE THEY ALL ABOUT?!

The first studies on skiing injuries were conducted in the mid to late 1960s in America. Probably the best-known alpine research group in the world is the team led by Bob Johnson, Carl Ettlinger and Jake Shealy at Sugarbush in Vermont, USA. Their case-control study has been ongoing since 1972 and is still the envy of people like me! They have been instrumental in shaping new developments in ski equipment and ski area design, and in making sure that alpine sports are as safe and enjoyable as possible. A classic example is the development of releasable ski bindings. It was the realisation, through injury studies, that many skiers were sustaining fractures of the lower leg in twisting falls that paved the way for this significant development.

Nowadays, there's so much going on in the world of alpine sports that there's no shortage of topics to keep us researchers busy! The questions people want answers to now are:-

1. Can helmets definitely reduce injury severity?

2. Which specific type of wrist protection should snowboarders wear?

3. What are the injury risks associated with the latest crazes - skiboards, mega-sidecut skis etc.?

4. What more can ski areas do to improve slope safety?

5. What are the major factors associated with injury?

Hopefully, you'll find the answers to most of these questions on this website - but remember, I only have the information because someone, somewhere, has done all the hard work. So how do the studies come up with the information we all need to guide us? Well, read on and find out... For the sake of clarity I am going to use the terms 'ski', 'skier' and 'skiing' to cover all alpine sports (snowboarding, skiboarding etc). In particular, "skier days" means everyone on the slopes, be they skiers, snowboarders, skiboarders or tele/XC skiers. Apologies to those I offend but I simply can't be fagged with all the typing.
 

The fundamentals of ski injury research


Not surprisingly, to carry out injury research you need injured people! Where you select your injured people from will influence the results you obtain. The highest injury rates come from 'self-reported' studies. Examples of these are written questionnaire studies of ski clubs or, in recent years, web-based studies. There are several problems associated with this type of study though:-

  1. Data accuracy - difficult to ascertain if the information given is accurate as there is no way to verify it.
  2. Diagnostic accuracy - who made the diagnosis and is it reliable?
  3. Hypochondriasis - there is a tendency for people to report all sorts of trivia, which may distort the results; this generally accounts for the higher rates seen in this type of study.

You can click here to see the current ski patrol injury report form used in Scotland.

At the other end of the scale are the hospital-based studies. Not surprisingly, these tend to have the lowest injury rates of all because they tend to include only the more serious injuries. A lot of mild to moderate injuries are treated elsewhere and are not included in their data set. On the plus side, though, diagnostic accuracy should be pretty good!

In the middle of the pack, perhaps the most common basis for a study is to use either ski patrol data or data from an on-site medical centre. Both of these data sets are likely to give a more accurate assessment of injury rates. Diagnostic accuracy has been quoted as a problem with ski patrol-based studies. For our own study here in Scotland, I use ski patrol data and subsequently follow up those injuries that had an uncertain diagnosis at initial presentation. Scottish ski patrollers seem to get the diagnosis right more often than not in my experience.


 

Case-control studies


In many ways the 'gold standard' in epidemiological injury studies is to perform a so-called case-control study. This means the researchers not only collect information from injured skiers but also similar data from uninjured skiers for comparison. This gives a study a lot more power and makes the results far more meaningful. For example, to find that 75% of injured snowboarders are male might seem to suggest that males are more at risk of injury. But, if the control data collected from the uninjured snowboarder population shows that 75% of them are also male, then the answer is that more males are injured simply because there are more of them on the slopes. If, however, males made up only 25% of the uninjured population, then it would imply roughly a three-fold excess risk amongst male snowboarders compared with the average snowboarder.

This principle can be extended to look at all sorts of things - the use of wrist guards, for example. If fewer people in the population with upper limb injuries are wearing guards compared to the uninjured population, then it implies that guards are protective against upper limb injury. If the converse were true, then it would suggest guards are associated with a higher risk of injury. It is important to remember, though, that an association does not necessarily imply causation - the guards themselves might not be the cause - there might be another factor at play - see multivariate analysis later on the page. And you thought this study lark was simple, eh?
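
If you like to see the arithmetic spelled out, here is a minimal sketch in Python of the male/female example above - the two samples of 100 are made-up numbers purely for illustration:

    # Hypothetical case-control samples: 100 injured snowboarders, 100 uninjured controls
    injured   = {"male": 75, "female": 25}
    uninjured = {"male": 25, "female": 75}

    # Odds of being male within each group
    odds_cases    = injured["male"] / injured["female"]        # 75/25 = 3.0
    odds_controls = uninjured["male"] / uninjured["female"]    # 25/75 = 0.33

    # Odds ratio comparing males directly with females
    odds_ratio = odds_cases / odds_controls
    print(f"Odds ratio, male vs female: {odds_ratio:.1f}")     # 9.0

    # Compared with the *average* snowboarder, males carry roughly a
    # 3-fold excess (75% of the injured vs 25% of the controls)
    excess_vs_average = (injured["male"] / 100) / (uninjured["male"] / 100)
    print(f"Excess risk vs the average snowboarder: {excess_vs_average:.1f}")   # 3.0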

The reason a lot of studies don't collect control data is that it adds considerably to the logistical problems of a study. It takes time to collect the control data, uninjured skiers need to be persuaded to take the time to answer the questions, and the data must be collected completely at random - at a variety of sites, on a variety of days, in all sorts of weather throughout the season - to reduce the chances of bias. For example, if you just stood and collected control data from skiers at the bottom of a beginner lift, and only on sunny days, that population is not likely to be representative of the entire population at the ski area in question. So control data collection is very important but adds considerably to the workload. Don't I know it....

The dreaded denominator - skier numbers.


To calculate an injury rate for a ski area or a particular snow sport, you not only need to know the number of injuries that occur in a set period of time but also the denominator - how many skiers or snowboarders were there in total during the same time frame? This can be a tricky one! It's easiest if the ski area in question has only one point of access (e.g. a cable car or gondola) that can measure the number of skiers who go up each day. Even then, there is no guarantee that everyone who goes up will be skiing or snowboarding - some may be walkers, staff, whatever. Using daily ticket sale counts can be problematic too - what about season ticket holders, and monitoring how many times they ski?! So you can see that, once again, it's not as simple as it seems. Once you have a denominator (and any injury study worth its salt will describe in its methods section how they arrived at their figure), then you can work out an injury rate. Most use the term 'skier day' - the easiest way to understand this is as one person skiing or boarding for one day. 'Skier visits' is another term you'll see which basically means the same thing. As commented on below, skier days have their problems, but it's difficult to see a practical alternative at present!
 

Calculating an overall injury rate


The traditional way to describe ski injury rates has been in terms of  "injuries per thousand skier days" (IPTSD) where:

IPTSD = (No. of injuries/No. of skier days) x 1000.

Say the IPTSD total was 3.2. In simple terms, this means that for every 1000 people on the slopes in a day, just over three will be injured. So, if there are 10,000 skiers on the hill, you would expect to treat around 32 people that day. You can now see that this sort of information is quite useful to ski areas and patrol teams as it allows them to plan (roughly) how busy they are likely to be.
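
For the programmatically minded, the sum is trivial - a minimal sketch in Python using the figures above:

    def iptsd(injuries, skier_days):
        """Injuries per thousand skier days."""
        return injuries / skier_days * 1000

    # The worked example above: 32 injuries among 10,000 skiers in a day
    print(f"{iptsd(32, 10_000):.1f} injuries per thousand skier days")   # 3.2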

The problems associated with using the "skier day" concept have been debated ad nauseam - the basic problem is that it assumes that when a lift ticket is bought, or a person goes up on a lift, they will be skiing or boarding all day - obviously not always a correct assumption. The challenge is to come up with a better solution. The Norwegians define a skier day as 10 lift transports - this may be true for some skiers but certainly not all - even at my age I would regard only 10 lift rides in a day as a bit meagre!

Other solutions have been suggested, including measuring the distance travelled by a calculation derived from individual lift use. The problem with this is that at some resorts ascending one lift might lead to several runs of differing lengths - so when someone goes up the lift you really have little idea which route they took down (and hence how far they travelled). It has been suggested that GPS could be used to measure how much ground skiers and snowboarders cover in a day to derive an average figure. Not a bad idea, but fairly big bucks would be needed to carry this one out.

Perhaps we would be better to use injuries per skier hour - it shouldn't be that difficult - just ask all the people in the study (injured and uninjured) how long they have been skiing for. The problem is knowing whether this information is really any more accurate.

So whilst there are problems with skier days, it still seems to be the best measurement tool at present and certainly is (and has been) the most widely used - this allows a degree of comparison between studies.


 

Calculating a rate for a specific injury


If we were to use IPTSD for individual injuries that occur fairly infrequently we would end up with unworkably small numbers - 0.004 IPTSD, for example. A better system, originally suggested and introduced by the Vermont group, is to use the term "mean days between injury" (MDBI). This is calculated as follows:-

MDBI = (No. of skier days*/Number of injuries$)

[* - for the sport in question    $- for the specific injury in question]

So to calculate the MDBI for a wrist fracture in snowboarding, you need to know the total number of wrist fractures sustained by snowboarders and also the total number of snowboarder days on the mountain for the defined time period of the study. So far, we only have a figure for the total number of all participants in alpine sports. What is needed is some way to calculate the proportion of the total mountain population who snowboard, alpine ski etc. In Scotland we do this through a series of random counts at a variety of positions at ski areas throughout the season. For example, we calculated last season that snowboarders made up 26% of the mountain population - so the mathematics go like this:-


 Total number of snowboarder days = Total skier days* x Snowboard fraction

= 263,317 x 0.26

= 68,462 snowboarder days

(*  - Confusingly, "total skier days" here means all visits by skiers, snowboarders, skiboarders and tele/XC skiers!)
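
To finish the sum off, here is a minimal sketch in Python - note that the wrist fracture count (150) is an invented figure purely to show how the MDBI falls out:

    total_skier_days   = 263_317    # all slope users for the season (from above)
    snowboard_fraction = 0.26       # proportion of slope users who snowboard

    snowboarder_days = total_skier_days * snowboard_fraction   # ~68,462
    wrist_fractures  = 150                                      # hypothetical count
    mdbi = snowboarder_days / wrist_fractures
    print(f"MDBI for a snowboarder wrist fracture: {mdbi:.0f} snowboarder days")   # ~456

So, with those (partly invented) numbers, a snowboarder wrist fracture would turn up roughly once every 456 snowboarder days.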

 

Statistical significance


The level of statistical significance of a finding indicates the degree of confidence that the results did not just occur by chance. Usually, significance is indicated by way of "p values" - typically you will see something like "p<0.001". Most studies use a cut-off of p<0.05 to indicate a result that is statistically significant. A p value of 0.05 means that, if chance alone were at work, a difference as big as the one observed would be expected to crop up only about 5 times in every 100 identical studies. So it follows that the lower the p value, the less likely the finding is a fluke and the more statistically significant the result. There are many different computer statistical packages on the market - our data first goes into MS Access, then gets imported into MS Excel and finally we use the SPSS stats package to chew the numbers.
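
For those curious about what the computer actually does with the numbers, here is a minimal sketch in Python of the sort of significance test that produces a p value - it uses scipy rather than SPSS, and the 2x2 table is entirely made up:

    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: wrist-guard use among injured vs uninjured snowboarders
    #           guard   no guard
    table = [[   8,       72],    # injured
             [ 150,      650]]    # uninjured controls

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")
    # A p value below the usual 0.05 cut-off would be reported as statistically significant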
 

Multi-variate analysis


This is where my head really begins to hurt and I'm glad I have a statistician to help me do the complex stuff! Multivariate analysis involves using a computer program to assess all the variables that initially looked significant, to see which ones really are true risk factors for (in our case) injury. For example, the facts that more beginners are injured and that more children are injured might be linked by the fact that children may be more likely to be beginners. If a factor remains significant once a multivariate analysis has been performed, then that factor is an independent risk factor for injury. This is usually expressed as an "odds ratio" relative to something else. So, for example, if the odds ratio for a beginner snowboarder injuring their wrist is 2.2 compared to an advanced boarder, this means (not surprisingly) that, taking all other factors into account, beginners are about 2.2 times as likely as advanced boarders to injure their wrist.
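
Purely as an illustration, here is a minimal sketch in Python of how a multivariate (logistic regression) analysis spits out adjusted odds ratios - it uses the statsmodels package rather than SPSS, the data are simulated rather than our Scottish figures, and the effect sizes are invented so that the beginner odds ratio comes out in the region of 2:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    beginner = rng.integers(0, 2, n)    # 1 = beginner, 0 = advanced (simulated)
    under_16 = rng.integers(0, 2, n)    # 1 = under 16 years old (simulated)

    # Simulate injury status with invented effect sizes
    log_odds = -3 + 0.8 * beginner + 0.5 * under_16
    injured  = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

    X   = sm.add_constant(np.column_stack([beginner, under_16]))
    fit = sm.Logit(injured, X).fit(disp=False)

    # Exponentiated coefficients are the adjusted odds ratios
    print(np.exp(fit.params))    # roughly [baseline, ~2.2 for beginners, ~1.6 for under-16s]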



Our most recent multivariate studies have shown that age under 16 years, less than 5 days' experience that season, and being on one's first day are such independent risk factors. This has shaped our ongoing research as we look at each of these groups in more detail.

AND THAT'S ALL THERE IS TO IT FOLKS!!!
(Now you know why I have to drink as much malt whisky as I do.....)



 

 


