I love James Bond movies as they are a nice blend of story, action, gadgets, and one-liners. Taking a mental break, I watched Skyfall and loved the plot and, of course, the fresh look at a darker James Bond. In the story, without giving away spoilers, James is injured and returns. It’s an interesting take on the pros and cons of technology, testing validity, resourcefulness, and experience. If you have seen it or watch it, this blog post will resonate. After watching the movie and reflecting on coaching, I realized that applied sport science has a big problem when we are too reliant on the wrong technology and don’t have the field experience to execute a complicated real-life plan. Reading the Matt Jordan and Mladen blogs, it seems that readiness and monitoring are challenges we all face, and it’s worth asking how we can do a better job with them.
Developing power, monitoring fatigue, and gauging readiness is both an art and a science, but an effective combination of the two seems rare. Many left-brained people have good programs on paper but have problems applying them in the real world. Many right-brained people can do an amazing job applying training programs but don’t leverage science to get maximal results. Combining both requires talent and experience. Here are my thoughts on developing power, monitoring, and gauging readiness; Part 1 covers developing power.
Developing Power- The resources of time and energy must be recorded to see why the program is working and to compare intra- and inter-athlete. Being the king of your own kingdom is easy; try comparing numbers to other programs and look at the context. The idea of not testing power for various reasons such as fatigue, contraction, and transmutation is understandable, but if you collect the data consistently you will see that patterns arise and conclusions are solid. Auditing the program is possible, but developing power is seasonal, not in block format. I have added 100 pounds to squats and had athletes run the same times as the previous year because the resources spent on driving the numbers up were too great. I have been too conservative and cautious, and the athletes flatlined shortly after spring. I have improved all speed and power metrics and had the guys race poorly compared to what they could do in testing. The issue is how the program, not just the blocks or phases, develops power longitudinally without robbing resources from the skills and competition. Being influenced by Vertical Integration and Hakan Andersson, I feel that training must have benchmarks with a distribution of risk among the specific training (track) and the support work (weights and conditioning). Transfer from the weight room to the track is very poor, but like all training, it’s a bad but necessary investment. Sports training puts in a pound to get an ounce, and all of the hundreds of hours gets 1-2%, an accepted outcome in our sport.
Testing power can be done with an array of options. Some say the vertical jump family (SJ, CMJ, DJ) is not sensitive enough to see if one is getting better. True, acutely this may be an issue, but shouldn’t something change and run parallel to the track compared to previous seasons? During heavy training verticals do go down, but this is in the hope that later they will be higher, even if that is months from now. When you test the vertical, the corresponding speed tests should show improvement. If the labcoats are seeing changes in programs, shouldn’t the coaches see something, even if it takes longer because of more periodized (read: peaking higher later) programs? Lifts expressed relative to bodyweight make excellent ratios, and I believe the following ranges work in my program:
Snatch: 0.8-1.1 x BW
Clean: 1.3-1.6 x BW
Back Squat: 2.0-2.5 x BW
Front Squat: 1.8-2.2 x BW
Those are best training numbers, as I don’t test the lifts besides doing rested singles. While this may be poor use of terminology, as those thresholds and methods sound very similar to testing, I think the hybrid of half testing, half training is practical. To me it works, getting the benefits without risking a fresh athlete chasing numbers. Athletes need to be guided in the right direction, not pointed to a wide-open goal without benchmarks. Notice the lifts above are not jump-related and have no upper body scores. The lifts are for sprinters and hurdlers, and have worked for jumpers so far. Single-leg exercises are support lifts and should not be tested like bilateral lifts. We are testing raw power, not artificially combining stabilizer contributions. I find that the above lifts reduce risk to muscles designed to stabilize joints, not create prime mover actions. While muscles have different roles, wisdom hints that some have better roles than others. With the barbell hip thrust, I would place it as a secondary movement, ahead of single-leg lifts, as the EMG and torque research is strong. I have unfairly been too harsh on Bret, namely because of early T-Nation articles, but I find the specific load to hamstrings and glutes from sprinting to be so deep that I would put the barbell hip thrust as secondary, though it may be pivotal for HS athletes to gain glute and posterior work as a way to hack and accelerate development.
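For coaches who track these numbers in a spreadsheet or script, the bodyweight ratios above reduce to simple arithmetic. Here is a minimal sketch of that check; the function name and the sample athlete numbers are my own hypothetical illustration, but the ranges are the ones listed above:

```python
# Benchmark ratios (best lift / bodyweight) from the ranges listed above.
BENCHMARKS = {
    "snatch": (0.8, 1.1),
    "clean": (1.3, 1.6),
    "back_squat": (2.0, 2.5),
    "front_squat": (1.8, 2.2),
}

def check_ratios(bodyweight_kg, best_lifts_kg):
    """Return each lift's bodyweight ratio and where it sits vs. the benchmark."""
    report = {}
    for lift, (low, high) in BENCHMARKS.items():
        if lift not in best_lifts_kg:
            continue  # only report lifts the athlete actually has numbers for
        ratio = best_lifts_kg[lift] / bodyweight_kg
        if ratio < low:
            status = "below range"
        elif ratio > high:
            status = "above range"
        else:
            status = "in range"
        report[lift] = (round(ratio, 2), status)
    return report

# Hypothetical 80 kg sprinter with best training singles in kg.
print(check_ratios(80, {"snatch": 70, "clean": 110, "back_squat": 170}))
```

The point of a check like this is the "guided in the right direction" idea: the output flags which benchmark an athlete is under, rather than inviting a max-out session to chase a number.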
The jump tests are interesting because the skill involved with landing on horizontal jumps often factors into scoring, so I prefer vertical jumps as they are safer, unless one is a jumper in track. I like submaximal jumps based on NFL combine numbers, working up to 90% of distances with a focus on landings. Repeat bounding and hopping or double-leg jumps are fine if the athlete is prepared, but plyos are deceptively demanding and lead to a lot of joint and tissue problems, because the strain on the body is harder to see than with weights. The drop jump test is fine for some athletes, but basketball and American football would be a little tough. I have not seen soccer do this at major leagues for very long, as the CMJ and SJ are easier to get repeatability with. Testing jumps more than weekly is hard to justify, as the tests are not creating stress to adapt to. This is why testing must be minimal if it isn’t training. Testing snatch bar speed at submaximal loads does create a training adaptation, but three countermovement jumps aren’t going to add slabs of muscle to linemen or help a forward develop speed. Testing power too frequently is foolish because even the best programs take weeks and weeks to improve. Monitoring is interesting and I don’t have an answer for that direction, but I have other metrics that are available to me and other coaches.
Speed tests like the 10-20m accel, 20-30m fly, and longer runs for speed endurance are less likely to show improvements as fast as the strength, power, and jump scores. Speed kills because it’s rare and less available. The 5-10-5 and other agility and deceleration tests require massive eccentric abilities and lean more on elastic power and single-leg actions; such work, be it side to side or front to back, has helped many athletes, but I don’t have solid data to conclude what is best. After 2-3 seasons one should see improvement, and even if the tests flatline it may show up in competition before testing. Most of the time I have seen improvement in practice indicating the possible improvement in meets. I would say 8 of 10 coaches would say they see improvement earlier in practice, with good workouts, rather than getting shocked by meet performance, unless it was a hidden or accidental taper.
With each season being approximately the same length, the question is whether repeating a similar program shows improvement on the same tests. Some people will change programs year to year by hopping on fads. I have changed slowly over time, as I am cautious about revolutions in training when most of the modalities are running, lifting weights, and doing general exercises. What has changed is the precision and sequence of things, but people are still doing pull-ups like they did centuries ago, and rings have been around just as long, so the TRX people should calm down. I look for the same percentage of improvement as the previous year, except the starting point is higher from that year’s improvement. The pattern of power may undulate from training and peaking, but it should shift up slightly, or stay the same with less effort. Testing all the power tests can be done with an LPT like GymAware, and the key is to see improvement without forcing it with screaming or tapering. Resting is necessary, but the changes should be noted. In the next post I will go over best practices with HRV based on research, and show how real questions and conversations can form an informal or formal questionnaire.
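The "same percentage of improvement, higher starting point" expectation above is just compounding, and it is worth seeing how modest the yearly numbers look when written out. A minimal sketch (the function name and the sample jump numbers are hypothetical, chosen only to illustrate the arithmetic):

```python
# Sketch of the year-over-year expectation: the same percentage of
# improvement each season, applied to the previous season's (higher) score.

def project_seasons(baseline, pct_improvement, seasons):
    """Compound a yearly percentage improvement over several seasons."""
    scores = [baseline]
    for _ in range(seasons):
        scores.append(round(scores[-1] * (1 + pct_improvement), 2))
    return scores

# e.g. a hypothetical 60 cm countermovement jump improving 3% per season
print(project_seasons(60.0, 0.03, 3))
```

Plotting a few years of real test data against a projection like this makes flatlining obvious without anyone screaming at the athlete or sneaking in a taper.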