
Measurement and assessment of specific sports skills

By: Vivian · Views: 475

There is no universal "standard answer" to the core question of how to measure and evaluate sport-specific skills. Every reliable evaluation system starts by anchoring two core variables, the essential characteristics of the sport and the target scenario of the evaluation, then balances the weight of quantitative data against experienced judgment, and finally lands on a customized solution. That is the most practical conclusion I have reached after eight years of sports evaluation work and countless pitfalls along the way.


Last year I helped a provincial tennis team evaluate the training results of its U14 squad. To save effort, I initially copied a generic tennis skills evaluation form issued by the Institute of Physical Education and measured hard indicators such as forehand and backhand success rate, the 30-meter shuttle run, and serve speed. The pass rate came in above 90%. When I took the report to the head coach for sign-off, he tossed the form back at me: "Every kid you tested met the standard, yet only two of them made the round of 32 in the Southern Division last week. Can you explain that?" I was mortified, and realized for the first time that no matter how good the data looks, an evaluation divorced from the sport's actual competitive scenario is just useless paper.

After that incident I read a lot of material and talked with colleagues in the field, and found that the disagreement on this question has existed for a long time. Colleagues in sports and human-movement science lean toward fully quantitative assessment. Motion capture, electromyography (EMG) sensors, and wearable devices keep getting more accurate: the angular velocity of a basketball player's wrist during a shot, the share of work done by a tennis player's trunk during a serve, even the push-off angle of each stride in a sprint can all be measured precisely. Nobody can argue with the numbers, and reliability and validity are high. But the limitation is equally obvious: the core "soft abilities" in many sports cannot be measured at all, such as "touch" in table tennis, positional awareness in football, and race sense in short-track speed skating. You cannot capture those even if you cover an athlete in sensors, to say nothing of the "big-match players" whose training numbers are mediocre but who come alive in competition. Purely quantitative assessment easily misses exactly that kind of talent.

On the other hand, the veteran coaches who have led teams for 20 or 30 years do not trust these fancy instruments at all. Their method is simple and blunt: put the kids on the court for 20 minutes and the quality is obvious at a glance. I know an old coach on a provincial short-track speed skating team who does not even need the timing board; standing at the rail and watching your center-of-gravity posture through the first turn, he can call your lap time to within 0.1 seconds of the clock. His eye is that sharp. But the weaknesses of this experience-based judgment are just as prominent: it depends entirely on one person's ability and cannot be replicated at scale. Young coaches may not acquire it in a decade, and it easily carries subjective bias. Many old coaches, for instance, hold the deep-rooted belief that small players cannot play basketball. At 14, Chen Jianghua stood just over 1.6 meters and was rejected by countless local teams; had we relied on experience alone, we might have missed a guard talented enough to go head-to-head with Kobe Bryant.

So the industry's current default is a dual-track compromise in which each side takes a step back. There is no single standard; everything depends on the evaluation goal. When I helped the municipal football association select a U12 team last year, I set the weights deliberately: 60% went to quantitative hard indicators, where the 12-minute run, slalom-dribble shooting accuracy, and 30-meter long-pass placement rate were all measured against hard thresholds, precisely to stop coaches from cutting players on personal preference; the remaining 40% went to the coaching staff as "match-performance points", judging off-ball runs, decision-making on the ball, and composure when playing from behind in small-sided games, none of which can be quantified. The squad selected that way finished runner-up in the U12 group of the provincial youth football championship last year, clearly better than the previous two years' squads picked on quantitative data alone.
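The dual-track weighting described above boils down to a simple blend of two scores. Here is a minimal sketch; the indicator names, normalization ranges, and the 60/40 split are illustrative assumptions for demonstration, not the association's actual selection formula.

```python
# Illustrative sketch of a 60/40 dual-track score: quantitative hard
# indicators blended with the coaching staff's match-performance rating.
# All indicator names, ranges, and weights here are assumptions.

def normalize(value, worst, best):
    """Map a raw measurement onto 0..1 between a worst and best anchor."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def combined_score(quant, coach_points, quant_weight=0.6):
    """Blend the mean of normalized indicators with the coaches' rating."""
    quant_avg = sum(quant.values()) / len(quant)
    return quant_weight * quant_avg + (1 - quant_weight) * coach_points

# Hypothetical U12 candidate: each raw measurement normalized to 0..1.
quant = {
    "twelve_min_run": normalize(2600, worst=2000, best=3000),  # meters
    "slalom_shot_accuracy": normalize(0.70, worst=0.0, best=1.0),
    "long_pass_placement": normalize(0.55, worst=0.0, best=1.0),
}
coach_points = 0.8  # coaching staff's 0..1 match-performance rating

total = combined_score(quant, coach_points)
print(round(total, 3))
```

The key design point is the hard thresholds on the quantitative side: a candidate below the cutoff on any hard indicator would be rejected before the blended score is even computed, which is what prevents preference-based selection.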

Of course, I have seen plenty of pitfalls too. The most common is treating evaluation indicators as training goals, which is pure teaching to the test. In many places, for example, the basketball component of the senior-high-school PE entrance exam times a round-trip dribble-and-layup drill. To raise scores, many schools skip the basic movement fundamentals entirely and just have students grind that drill over and over, never mind carrying the ball or travelling, as long as the clock is beaten. The students get their points, then step onto a real court unable to execute a basic dribble. The same happens with movement assessments for recreational players graded against professional templates: a badminton stroke must match Chen Long's wrist angle exactly, a table-tennis loop must draw more than 60% of its force from the waist and legs. Nobody asks whether ordinary people who just want to have fun ever intended to turn professional. The result is a pile of assessments whose only effect is to wreck people's confidence.

After all these years, my biggest takeaway is this: before running a sport-specific evaluation, don't rush to grab a form or an instrument. First ask yourself three questions. One, what is this sport's core competitive ability? In weightlifting the core is absolute strength, so focus on testing explosive power; in a sport like fencing, reaction speed and anticipation matter far more than absolute strength, so you can't just measure bench press and squat. Two, why am I testing? Talent identification should focus on potential, training-effect assessment on progress, and technique correction for enthusiasts on safety and practicality; different goals mean very different standards. Three, what will the results be used for? If they feed an athlete's training plan, measure more data so weaknesses can be pinpointed precisely; if they drive selection, leave enough room for experienced judgment.
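The three questions above amount to choosing an evaluation profile up front, before any measuring happens. A minimal sketch of that idea follows; the purpose names, weight splits, and focus labels are invented for illustration and would need calibration per sport and age group.

```python
# Sketch of the "ask three questions first" framework: the evaluation
# purpose determines the quantitative/judgment split and the focus.
# Purpose names, weights, and focus strings are illustrative assumptions.

EVALUATION_PROFILES = {
    # purpose: (quantitative weight, judgment weight, primary focus)
    "talent_selection":  (0.6, 0.4, "potential"),
    "training_progress": (0.8, 0.2, "improvement over baseline"),
    "enthusiast_review": (0.5, 0.5, "safety and practicality"),
}

def pick_profile(purpose):
    """Return (quant weight, judgment weight, focus) for a purpose."""
    if purpose not in EVALUATION_PROFILES:
        raise ValueError(f"unknown evaluation purpose: {purpose}")
    return EVALUATION_PROFILES[purpose]

quant_w, judge_w, focus = pick_profile("talent_selection")
print(quant_w, judge_w, focus)
```

The point of making the profile explicit is the third question: a selection profile reserves real weight for experienced judgment, while a training-progress profile leans on data because its job is to locate weaknesses.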

A while ago I had dinner with an old professor from Beijing Sport University, and I fully agree with something he said: "Sports skill assessment is ultimately about looking at the person. Data is a tool, not a ruler; you can't measure a living person's ability with a dead ruler." AI motion recognition and wearable devices are iterating faster and faster, and many people say that in the future all assessment will be done by AI and coaches will be out of a job. I feel the opposite: the more advanced the technology, the more we need practitioners who understand both the sport and the people to build the framework. After all, what we measure has never been cold movement data, but the possibilities of each living athlete.
