Sports Performance Measurements – Which data matter?

I love that we are collecting more data than ever before. New devices and technologies appear all the time, aiming to help us better understand training and performance. We have everything from low-end consumer devices, like the Jawbone and Fitbit, all the way up to high-end, research-grade equipment. This means we have access to more sports performance measurements, in more environments and more sports, than ever before. More people than ever have access to measurement devices, too. The consumer-aimed devices I mentioned are rarely more than a few hundred dollars or so. Even some devices that are entirely open-ended (that don't place proprietary restrictions on the measurements you can take) are CHEAP! This makes them easily accessible to the measurement-oriented person who might want to collect new data.

One of the major hurdles we have to cross is what to do with all of this data. There are all kinds of metrics that can be obtained from devices, but we have a long way to go in determining what significance these measurements have. We can collect all of the data in the world, using all kinds of unique variables and metrics, but ultimately, if those metrics don't actually mean anything to sports performance, if they don't mean anything in the real world outside the lab, they are not terribly useful.

I should point out that lots of research and measurements do not have direct application to the "real world," and yet are still very important and valuable. I do not mean to imply that mechanistic research is not valuable. Applied research is almost always built on the backs of both experience and mechanistic research that has no immediate application of its own. One great example is Robert Goddard, whose work on liquid-fueled rockets in the early twentieth century made huge strides toward making spaceflight possible. Despite the modern importance of his work, Goddard was widely ridiculed in the media for his efforts. Like Goddard's research, the benefits of mechanistic research and analysis are not always obvious at first, but may become important later.

Trendy laptop optional for data collection

Anyway, at the heart of the matter is that we have a glut of data, but we need to keep figuring out what it all means. We also need to know whether certain data are worth our time. Part of this process is getting the devices out there and taking a close look at how they perform. One example (PDF) of where we tried to do this was with some simple accelerometers from PASCO. We attached these to the ends of barbells and measured snatch attempts with some of our weightlifters last year. In particular, I was curious to see whether the vertical acceleration data could tell me anything about whether one of the lifters made or missed attempts. We chopped up the acceleration-time trace in a whole bunch of different ways, only to find a big fat nothing. In this case, vertical acceleration and the variables calculated from the acceleration-time trace weren't terribly useful. While this was only a case study, it was decent enough evidence to lead me to believe we need 3D acceleration to get more meaningful information. The cost of such devices might end up being a little higher, but the PASCO data weren't comprehensive enough to tell us much. Not a very exciting conclusion to draw, but in my opinion, an important one.
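
To make that concrete, here is a minimal sketch of the kind of feature extraction we did on each attempt's trace. The function name, the input format, and the particular features are illustrative assumptions on my part, not the actual PASCO export or the exact analysis we ran:

```python
import numpy as np

def extract_features(time, accel_z):
    """Pull simple summary features from the vertical
    acceleration-time trace of a single snatch attempt.

    time    -- 1D array of timestamps (s)
    accel_z -- 1D array of vertical acceleration (m/s^2)
    """
    dt = np.diff(time)
    # Trapezoidal integration of acceleration approximates the
    # velocity change over the pull.
    velocity = np.cumsum(0.5 * (accel_z[1:] + accel_z[:-1]) * dt)
    return {
        "peak_accel": np.max(accel_z),
        "min_accel": np.min(accel_z),
        "mean_accel": np.mean(accel_z),
        "peak_delta_v": np.max(velocity),
        "time_to_peak": time[np.argmax(accel_z)] - time[0],
    }

# Hypothetical usage: 'attempts' would be a list of
# (time, accel_z, made) tuples parsed from the device export.
# made_feats   = [extract_features(t, a) for t, a, made in attempts if made]
# missed_feats = [extract_features(t, a) for t, a, made in attempts if not made]
```

With features like these in hand, the question is simply whether their distributions separate made from missed attempts. In our case, nothing we computed from the single vertical axis did, which is what led me to suspect the one-dimensional measurement itself was the limitation.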

One area in particular that seems to be growing quickly is velocity-based training, something Bryan Mann and Mladen Jovanovic have been talking about for quite a while. It seems like a really neat way to optimize the training loads we give our athletes, but the topic is still really young. We need a larger body of research to give us direction on how useful velocity is for guiding training, and also on the specifics. For example, what velocity ranges should we be shooting for? And if we hit those ranges, does it actually matter to training outcomes? Personally, I think this is an area that will bear lots of fruit and give us a great way to optimize our training prescriptions. However, we absolutely have to confirm this with rigorous testing; otherwise we are stumbling around in the dark on educated guesses rather than research. While I doubt it, this area could very well turn out to be useless, and we won't know until multiple parties have thoroughly vetted it.
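
As a sketch of how velocity feedback might guide loading in practice: suppose we have a target velocity zone for a given training quality and a measured mean concentric velocity for each set. The zone boundaries, the 2.5% adjustment step, and the function below are placeholder assumptions for illustration, exactly the kind of numbers that still need to be validated by research:

```python
def suggest_load_change(mean_velocity, target_low, target_high, step=0.025):
    """Suggest a fractional load adjustment from a set's mean
    concentric velocity (m/s) against a target velocity zone.

    The zone limits and the 2.5% step are illustrative
    placeholders, not validated prescriptions.
    """
    if mean_velocity > target_high:
        return +step   # bar moving too fast: add load
    if mean_velocity < target_low:
        return -step   # bar moving too slow: reduce load
    return 0.0         # within the zone: hold load

# Example: targeting a hypothetical 0.75-1.00 m/s zone with
# 100 kg on the bar and a measured set velocity of 1.08 m/s.
load = 100.0
change = suggest_load_change(1.08, 0.75, 1.00)
print(f"Adjust load by {change * load:+.1f} kg")  # -> Adjust load by +2.5 kg
```

Whether a simple rule like this actually improves training outcomes, and what the right zones and step sizes are, is precisely the open question.
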
These are just a couple of the many examples of new data being collected that need further study. Regardless of how slick the user interfaces are, how trendy the technology might be, or how much we care about our pet projects, all of these new areas need thorough vetting. I have my "pet" technologies and "favorite" variables just as much as anybody, but the fact remains: these all need scrutiny to ensure we are measuring something worthwhile.

Which technologies do you think need the most effort to work out the kinks? Any thoughts on how to do it?