In previous blogs (see Part 1 and Part 2) I’ve touted the features and potential uses of the PUSH Band. The utility and implications for monitoring are good food for thought. That said, it is always important to recognize the limitations of any piece of technology. In terms of fitness wearables for performance (not lifestyle), my main concern is whether the technology is useful to a coach or someone capable of self-coaching. Because PUSH is not attempting to gauge limb and body position for correction of form or muscle activation, it fits a unique role in performance enhancement – but only if it works. So does it?
It’s important to note that velocity based training is still a developing research area. This isn’t like heart rate training zones, VO2max, and intensities, which the Cooper Institute has decades of database information on. In contrast, velocity based training is more theoretically grounded. Much of the research on velocity based training (VBT) refers to work by Bryan Mann, González-Badillo, Izquierdo, and Flanagan & Jovanovic (González-Badillo et al., 2015; Izquierdo et al., 2006; Jovanovic & Flanagan, 2014; Mann, Thyfault, Ivey, & Sayers, 2010).
The research I’m aware of on the sensor itself has mostly been conducted by the Sato lab at East Tennessee State University (Sato et al., 2015). They published a short communication showing that velocity from the band agrees very well with a criterion method, a 3D motion analysis system. They used 5 subjects (a small sample pool, if that’s a concern) doing 2 exercises for 4 sets of 10 reps. You could argue that a low sample pool makes for a limited study, but my view is that the human subject acts as a phantom, just producing the action to be measured. The same concept is applied to metabolic carts that measure carbon dioxide production and oxygen consumption, which are commonly validated by burning propane or ethanol: the phantom rate of combustion, i.e. the stoichiometric rate of carbon dioxide production and oxygen consumption, is known. There is some influence of subject height and the average proportion of limb lengths relative to it (De Leva, 1996; Plagenhoef, Evans, & Abdelnour, 1983), but if we dig too deep into the weeds it becomes a discussion of whether VBT itself is valid, since the velocity of human limbs is largely determined by limb length as well as muscular force characteristics. That is a separate discussion, and the research is ongoing.
Since I’m personally not concerned with the human subject part of things, I acted as the phantom. Because I have no criterion method to compare velocity against, I was unable to check accuracy. I can only check reliability, using a modified Bland-Altman plot and correlation (Bland & Altman, 1999; McClain, Sisson, & Tudor-Locke, 2007). In my mind, this is more important. From a practical standpoint, it doesn’t matter if the device is off by 0.1 meters per second with every repetition of a movement, as long as it is consistently off. Reliability is what allows a user to progress the speed component of the movement, whereas accuracy helps you hit a specific velocity objective. I’m not trying to call the device’s accuracy into question, but rather acknowledging that I have no objective way to assess it and that it might not be as important as some would believe.
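If you want to run the same kind of check on your own data, the core Bland-Altman numbers are just the mean and spread of the paired differences. A minimal sketch in Python — the velocity readings below are illustrative placeholders, not my actual measurements:

```python
import numpy as np

def bland_altman_stats(a, b):
    """Return the bias (mean difference) and 95% limits of
    agreement between two paired measurement series."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired average-velocity readings (m/s) from two devices
unit_a = [0.82, 0.95, 1.10, 0.78, 1.02, 0.90]
unit_b = [0.80, 0.96, 1.08, 0.80, 1.00, 0.91]

bias, lower, upper = bland_altman_stats(unit_a, unit_b)
print(round(bias, 3), round(lower, 3), round(upper, 3))
```

A flat, narrow band between the limits of agreement is what "consistently off" looks like on the plot; a wide band means the device disagrees with itself rep to rep.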
Since velocity is just displacement over time, you can get some sense of velocity accuracy if you can check how far off the device is on either the displacement of the load or the time it takes to displace it. In my first test, I did several countermovement jumps using the PUSH Band; this is one movement where it reports vertical displacement. I measured the jumps with both the PUSH Band and the Just Jump mat.
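For context, the Just Jump mat works from flight time rather than measuring displacement directly. The standard flight-time conversion is basic projectile kinematics (that the mat uses exactly this formula is my assumption about its internals): the center of mass rises for half the flight time, so h = g·t²/8.

```python
def jump_height_from_flight_time(t, g=9.81):
    """Estimate vertical jump height (m) from flight time t (s).

    Assumes takeoff and landing posture are identical, so the
    center of mass rises for exactly t/2 seconds:
        h = (1/2) * g * (t/2)**2 = g * t**2 / 8
    """
    return g * t * t / 8.0

# A 0.5 s flight time works out to about 0.31 m (~12 inches)
print(round(jump_height_from_flight_time(0.5), 3))  # 0.307
```

This is also why flight-time devices are sensitive to landing technique: tucking the legs inflates flight time, and the squared term inflates the estimated height even more.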
Normally when one method is tested against another, the intent is to determine whether one method tends to read higher or lower, and whether that bias changes with the magnitude of the measurement. To make results comparable across studies, this is usually expressed as a percent difference from a tried and true method. So for example, if a police radar gun clocks a 20 mile per hour car at 20 mph, but clocks a car traveling at 50 mph as 56 mph, we would call that a positive magnitude bias. If it clocked the 50 mph car at 46 mph, that would be a negative magnitude bias.
In the case of the PUSH Band, it has a flat bias when measuring vertical jump height. On average the difference is 2” (+/- 0.7”). The slope of the differences across 12 readings is very flat, which is encouraging. A point in the PUSH Band’s favor is that these determinations happen largely on the software side, not the hardware side. While there might be more accurate accelerometers they could build the device from, most of the calculations are performed in the app, shuffled through algorithms that account for many variables, such as the length of the arm relative to the rest of the body. The software can be upgraded and the algorithms adjusted to increase the accuracy of the calculation, if that is the culprit behind the bias.
Velocity and Time
In the case of tracking velocity, I can’t use another measurement method for comparison with the PUSH Band. What I was able to do, however, is assess inter-unit variability by testing one PUSH Band against another. I configured the units in three conditions to compare time, average velocity, and peak velocity:
- One where I perform unilateral bicep curls (dumbbell) with one unit on the forearm and another at the wrist. Note that the wrist is not the recommended placement.
- Another was configured for unilateral bicep curls (dumbbell) with 2 units placed side by side on the forearm.
- The last condition had one PUSH band on the left arm and one PUSH band on the right arm as a bilateral bicep curl (barbell) was performed.
Ideally, we should see a positive correlation, where increased velocity on one PUSH Band shows up as increased velocity on the other. Rather than hash out the differences in each measurement condition, I’ll point out the following single subject observations:
- Because the unit likely measures angular velocity and converts it to linear velocity, placement at a high or low position doesn’t affect a bicep curl’s velocity. For more complicated movements this would likely make an impact, so in that respect it is likely better to use the manufacturer prescribed sensor location. I would think forearm girth would make a difference as well but we have not tested this theory.
- Changing the sensor orientation on the arm can lead to inconsistent readings; putting the sensor on upside down would presumably produce erratic readings, though I didn’t test this.
- Comparing one sensor to another produced a high correlation in readings of average velocity and peak velocity. The slopes of the correlations were 1.01 (average velocity) and 1.05 (peak velocity), and the r values were above 0.9 (1 being a perfect correlation). This shows that both sensors track increasing velocities fairly evenly, close to a 1-for-1 increase per unit increase in velocity.
- The average velocity raw difference from the two sensors tested was 0.01 m/s (+/- 0.07 m/s)
- Time measurements matched up well when compared.
- There is certainly a skill limitation. I purposely tried to execute some jumps more shallow than others, but the unit seemed to interpret my deliberate deceleration as something else and drastically under- or over-estimated the measurement. The same was true of the bicep curls. If the skill isn’t fully learned and muscle activation isn’t purposefully concentric, the unit detects it and estimates incorrectly. This has implications for newer athletes who are still learning complex movements. Those who already have a good base in using the unit will only have to slightly change their pattern before and after the lift to produce meaningful metrics.
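To illustrate the first observation above: if the unit measures angular velocity about the elbow and converts it to linear velocity using a fixed assumed limb length, then where the sensor sits along the forearm shouldn’t matter, because every point on a rigid rotating segment has the same angular velocity. That mechanism is my assumption about the internals, not a documented PUSH algorithm, but the kinematics can be sketched simply:

```python
import math

def linear_speed(omega, radius):
    """Linear speed (m/s) of a point rotating at omega (rad/s)
    about the elbow, at distance radius (m) from the joint."""
    return omega * radius

# Same curl: elbow sweeps 90 degrees in 0.5 s.
# omega is identical at any sensor position on the forearm.
omega = math.radians(90) / 0.5  # rad/s

# Raw linear speed at the sensor differs with placement, but if the
# unit works from omega and a fixed assumed limb length, the reported
# bar velocity comes out the same either way.
print(round(linear_speed(omega, 0.15), 3))  # sensor mid-forearm
print(round(linear_speed(omega, 0.25), 3))  # sensor near wrist
```

This would also explain why forearm girth might matter: a thicker forearm changes the sensor’s effective radius and orientation relative to the segment’s long axis, though as noted, we haven’t tested that.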
I also made comparisons of force and power, but with no way to cross-reference the measurements, I would essentially be reverse engineering the math and likely arriving at the same result PUSH calculates. There’s no sense in comparing one theoretical number to another. One last point of interest: I was able to do all of this using the PUSH Portal export function. The PUSH Portal is the online portal for users to track workouts, or in our case the entire facility. This feature exposes a good bit of the inner workings of the PUSH Band. That much transparency opens PUSH up to far more scrutiny and gives valuable information to anyone working with analytics and biomechanics. PUSH deserves credit for putting themselves out there like that.
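As an example of the kind of analysis the export makes possible, here’s a sketch of pulling rep-level velocities into pandas and computing velocity loss across a set, a common VBT fatigue marker. The column names and values are hypothetical stand-ins, not the actual PUSH Portal schema:

```python
import io
import pandas as pd

# Hypothetical export snippet -- these column names are illustrative,
# not the real PUSH Portal CSV layout
csv = io.StringIO("""rep,avg_velocity,peak_velocity
1,0.85,1.20
2,0.83,1.18
3,0.80,1.15
""")
df = pd.read_csv(csv)

# Velocity loss from first to last rep of the set
loss = 1 - df["avg_velocity"].iloc[-1] / df["avg_velocity"].iloc[0]
print(f"{loss:.1%}")  # 5.9%
```

With a real export you would point `pd.read_csv` at the downloaded file instead of the inline string, but the rep-level granularity is what makes this sort of set-by-set analysis possible at all.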
In terms of velocity analysis, the PUSH Band is pretty good. It gives you metrics you can act on up front, and the extraneous details on the back end. If we’re managing our expectations, it does well on the things that so many other wearables cannot do. If you use it and put forth the effort to understand how it works and how it will help you, you may be able to take advantage of its potential. It isn’t for everyone, just as pedometers and calorie trackers aren’t for everyone. Wearables for lifestyle modification are nice, but it’s great to finally have something that serves a performance enhancement end and is reliable.
- Bland, J. M., & Altman, D. G. (1999). Measuring agreement in method comparison studies. Statistical Methods in Medical Research, 8(99), 135–160. http://doi.org/10.1191/096228099673819272
- De Leva, P. (1996). Adjustments to zatsiorsky-seluyanov’s segment inertia parameters. Journal of Biomechanics, 29(9), 1223–1230. http://doi.org/10.1016/0021-9290(95)00178-6
- González-Badillo, J. J., Pareja-Blanco, F., Rodríguez-Rosell, D., Abad-Herencia, J. L., del Ojo-López, J. J., & Sánchez-Medina, L. (2015). Effects of Velocity-Based Resistance Training on Young Soccer Players of Different Ages. Journal of Strength and Conditioning Research, 29(5), 1329–1338. http://doi.org/10.1519/JSC.0000000000000764
- Izquierdo, M., González-Badillo, J. J., Häkkinen, K., Ibáñez, J., Kraemer, W. J., Altadill, A., … Gorostiaga, E. M. (2006). Effect of loading on unintentional lifting velocity declines during single sets of repetitions to failure during upper and lower extremity muscle actions. International Journal of Sports Medicine, 27(9), 718–724. http://doi.org/10.1055/s-2005-872825
- Jovanovic, M., & Flanagan, E. (2014). Researched applications of velocity based strength training. Journal of Australian Strength & Conditioning, 21(1), 58–69.
- Mann, J. B., Thyfault, J. P., Ivey, P. A., & Sayers, S. P. (2010). The effect of autoregulatory progressive resistance exercise vs. linear periodization on strength improvement in college athletes. Journal of Strength and Conditioning Research, 24(7), 1718–1723. http://doi.org/10.1519/JSC.0b013e3181def4a6
- McClain, J. J., Sisson, S. B., & Tudor-Locke, C. (2007). Actigraph accelerometer interinstrument reliability during free-living in adults. Medicine and Science in Sports and Exercise, 39, 1509–1514. http://doi.org/10.1249/mss.0b013e3180dc9954
- Plagenhoef, S., Evans, F. G., & Abdelnour, T. (1983). Anatomical Data for Analyzing Human Motion. Research Quarterly for Exercise and Sport, 54(2), 169–178. http://doi.org/10.1080/02701367.1983.10605290
- Sato, K., Beckham, G. K., Bazyler, C. D., Carroll, K., … Haff, G. G. (2015). Validity of wireless device measuring velocity of resistance exercises. Journal of Trainology, 4(1), 15–18. http://doi.org/10.17338/trainology.4.1