The ideal scoring setup: Choosing scoresheet categories
Written By Akim Mansaray and Sherman Charles
Graphics by Akim Mansaray and Sherman Charles
When deciding on a scoring setup for your contest, there are several things to consider: the scoresheet, the judges, and the awards, just to name a few. Regardless, the primary focus must be fostering educational goals and developmental progress. Assessing artistic performance is understandably difficult because of the varied views and subjective factors that shape performance curricula. Thus, the goal of this blog post is to lay out what you need to consider to run a quality, transparent contest with integrity, where education is the prize.
This series of four posts covers 1) Choosing scoresheet categories, 2) Caption weights and calculating results, 3) Choosing judges, and 4) Awards and time penalties. This post is the first of the four. If you have any questions about this post’s content, or if you would like to share your view on this topic, feel free to add your comments at the bottom of this page. Don’t forget to read the other three posts!
Choosing scoresheet categories
The most prominent feature of any scoring system is its scoresheet, or rubric (to be consistent with the wording throughout Carmen’s documentation and resources, “scoresheet” is used to refer to the list of captions and criteria against which a performance is evaluated).
From an educational perspective, this is an invaluable part of competing in a contest. The scores each judge awards to a given performance inform the directors and their students about not only what they could improve upon but also what they did well. Thus, the scoresheet functions as a scored music assessment tool. As a result, it is crucial that the scoresheet highlights learned and applied skills relevant to producing an educated and skillful performance.
Scored music assessments are an attempt to capture essential feedback on a performance in a condensed yet meaningful way. The value of the feedback gained depends on the composition of the assessment tool used. If an assessment tool is used in competitive formats, great care should be taken to ensure that valid conclusions can be drawn from it. A good music assessment should yield feedback that can be used to improve ensemble performance and deepen artistry.
In other words, this tool should result in formative assessment rather than summative assessment. To achieve this, a scoresheet must attempt to capture educational feedback in an easily accessible way. The design of a scoresheet requires multiple considerations that work in tandem to ensure that valid interpretations can be gathered from its usage.
Categories
Some criteria are obvious, such as intonation, rhythmic accuracy, synchronized choreography, and uniform movement, but others are much more difficult to define and are vulnerable to subjective interpretation and preference, like tone quality, musicality, choreographic content, and entertainment value. For example, a group of judges might disagree about what makes a performance entertaining, but generally all judges can agree when a performance has issues staying in tune. Assessing something as nebulous as artistic expression or creativity, particularly given the general lack of agreement on performance curricula, is notoriously difficult, yet it is an essential part of art education. Assessment of subjective criteria such as these helps contextualize what is needed for a skillful and artistic performance. Thus, it is important that a scoresheet include both objective and subjective criteria, since both are essential components of art education.
The categories you choose should also reflect aspects of the performance that the students can control and improve upon, meaning the items enumerated in the scoresheet should focus on learnable, applicable skills. However, determining what the students actually control is harder than it appears: sometimes students are given the responsibility to design the show, choreograph it, write the charts and arrangements, choose or even design and make the costumes, and run rehearsals. Even when the materials are handed to them, the students are ultimately the performers presenting those materials. Thus, it is reasonable to include categories like costuming, as long as the focus is not the design of the costumes themselves but whether the students present the costumes well and with intent. For categories like Difficulty and Innovation/Creativity, the students might not have much say, but students who successfully perform more difficult material or more creative and complex shows should certainly be rewarded for it. So the majority of the categories should focus on learnable skills that can be refined, and aspects that are worth rewarding, like creativity and skill level, should comprise only a small portion of the overall scoresheet.
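As a rough illustration of that balance, here is a minimal sketch in Python; every category name and point value below is hypothetical, chosen only to show the proportions, not drawn from any real scoresheet:

```python
# A minimal sketch of the weighting principle described above. Every category
# name and point value here is hypothetical, not taken from a real scoresheet.

# Learnable, refinable skills carry the bulk of the points...
learnable = {
    "Intonation": 10,
    "Tone Quality": 10,
    "Vocal Technique": 10,
    "Rhythmic Accuracy": 10,
    "Musicality": 10,
    "Choreographic Execution": 10,
    "Uniformity of Movement": 10,
    "Stage Presence": 10,
    "Costume Presentation": 5,
}

# ...while reward-style categories stay a small slice of the total.
reward = {
    "Difficulty": 8,
    "Innovation/Creativity": 7,
}

total = sum(learnable.values()) + sum(reward.values())
share = sum(reward.values()) / total
print(f"Reward categories: {sum(reward.values())} of {total} points ({share:.0%})")
# Prints: Reward categories: 15 of 100 points (15%)
```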
Another factor to consider when choosing the categories for a scoresheet is the skill level of the performances. A scoresheet meant for evaluating high-skill performances is inappropriate for evaluating lower-skill performances: an assessment tool designed for an advanced performance will not yield valuable educational feedback for a novice performance, and vice versa. For lower-level groups, the scoresheet categories should focus on the fundamentals of the performance to encourage the establishment of basic techniques. For higher-level groups, the scoresheet should expand to more advanced skills with nuanced elements that push them even further in their development. An analog is designing a rubric for a writing assignment appropriate for a primary school student versus a secondary school student versus a university student, and so on. Additionally, we recommend that the scoresheets for the proficiency levels you select be connected, showing a related, incremental progression of skills. Doing so gives judges further clarity when assessing differences between levels and gives ensembles more actionable feedback. We must remember that ultimately the scoresheet is an educational tool that provides encouragement, not a weapon of punishment.
Lastly, to keep expectations clear for both the adjudicator and the adjudicated, be sure to provide precise descriptions of each category.
While many categories might seem self-explanatory, it is still a good idea to set boundaries for everyone on what falls under each item. Many rubrics out there have very broad categories with vague descriptions that encompass a wide variety of smaller details, and these categories are often assigned arbitrarily large point values that make up large portions of the total possible points on a scoresheet. These scoresheets are popular with some because they give the judges a degree of freedom and lots of wiggle room, but that freedom comes at the expense of providing quality feedback to the performers.

To paint a picture, suppose a hypothetical scoresheet has three categories: Vocals worth 50 points, Visuals worth 40 points, and Show Design worth 10 points, with a description that says something like, “Vocals: tone, technique, musicality, intonation, etc.” With a scoresheet like this, there is little to no frame of reference for understanding what went well in a performance and what didn’t. For example, a group may have done really well in intonation but lacked a sense of musicality. How is the judge supposed to communicate this to that group with this scoresheet? Perhaps the judges could provide written or recorded feedback… perhaps, but how does a judge justify the score they give for that broad category? How do they know how many points belong to each detail of the category? Although it is much more tedious for a judge to assign values to 15 categories worth 10 points each instead of 3 worth larger values, the educational value of the detailed sheet is vastly more helpful to directors and students. In sum, avoid designing scoresheets in this format and provide interpretable descriptions of all criteria used.
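To make the contrast concrete, here is a minimal sketch of the two designs; the detailed captions, point values, and scores below are hypothetical, picked only to show how much more the detailed sheet communicates:

```python
# A minimal sketch contrasting the broad and detailed scoresheet designs
# described above. All captions, point values, and scores are hypothetical.

def report(sheet: dict, scores: dict) -> None:
    """Print each category as earned/maximum points so strengths and
    weaknesses are visible at a glance."""
    for category, max_points in sheet.items():
        print(f"{category}: {scores.get(category, 0)}/{max_points}")

# The broad design: three large categories.
broad_sheet = {"Vocals": 50, "Visuals": 40, "Show Design": 10}

# A detailed design for just the vocal captions.
detailed_vocals = {
    "Intonation": 10,
    "Tone Quality": 10,
    "Vocal Technique": 10,
    "Musicality": 10,
    "Diction": 10,
}

# On the broad sheet, strong intonation and weak musicality collapse into
# one uninterpretable number:
report(broad_sheet, {"Vocals": 38, "Visuals": 30, "Show Design": 7})

# On the detailed sheet, the same performance tells the group exactly
# where to focus:
report(detailed_vocals, {"Intonation": 9, "Tone Quality": 8,
                         "Vocal Technique": 8, "Musicality": 5, "Diction": 8})
```

Either way the judge awards roughly the same vocal total; only the detailed version explains the score.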
If you find yourself a bit lost even after reading the previous paragraphs, the good news is that you do not have to reinvent the wheel. All of the scoresheets that have been used in the Carmen Scoring System by contests all over the US and UK are available to you here. Feel free to peruse them and use them as a resource, either by adopting any of them at your contest or by picking and choosing what you want for your own custom scoresheet. We highly encourage you to coordinate with other contests in your locale to decide on a scoresheet that everyone uses; regions that have already done so have seen the benefit of comparing feedback from contest to contest over their season (e.g., New England, Tyson Showchoir). If you need help, we are more than happy to meet with you to discuss your scoresheet as part of our scoring services. Send us a message to get started today. With that said, we have already put in a massive effort to create what we think are the ideal scoresheets for both lower-level and higher-level showchoirs, as well as showbands, concert choirs, solo singing, and tech crew. You can find them all here, and you are more than welcome to adopt them at your contests.
Written and audio comments
In addition to the scoresheet, contests often have their judges provide written and/or audio recorded feedback. This form of feedback supplements and supports the information gained from the scores awarded, but it can often be sparse and unhelpful.
Written feedback can help judges provide a broader perspective on the performance. It allows them to comment on the whole rather than its parts while also filling in details that are not thoroughly captured by the scoresheet.
Recorded comments contextualize the feedback within specific moments of the performance. In the recording, directors can hear the judges’ thoughts alongside what is happening in real time. Furthermore, judges will often provide a holistic analysis, plus any additional details, at the end of the recording to supplement everything they have said throughout the show.
Although written and recorded feedback can provide valuable information, the quality of the feedback is highly dependent on the judges themselves. The judges do not have the structure that is available in the scoresheet to organize and quantify their impressions. As a result, written and audio feedback can be hit or miss.
Since this is a less reliable form of feedback for directors and students, we recommend asking the judges to provide either written or audio comments, not both. That way, if they do provide substantive feedback, they have the time between performances to do so.
Lastly, during my tenures as a student and as an assistant director of a competitive showchoir, we rarely, if ever, listened to the audio comments provided by contests. Out of curiosity, I informally polled directors from all over the US, and about 70% of them never listen to the recordings. When I asked why, they said it’s mostly a waste of time to sit through five 20-minute recordings from one week’s judges only to gain as much information as they could by simply looking at the scoresheet and written comments. The other 30% were shocked, even appalled, that most directors don’t listen to the audio recordings.
So, what do you do with this information? It’s our opinion that written comments alone are the way to go. Written-only feedback saves judges the pain of making the recordings (imagine how tired their voices get after 18 hours of judging), most people don’t listen to the recordings anyway, and written comments plus the scoresheet provide all of the information that directors and students need. To be clear, this is not to discourage you from having your judges make recorded comments; it’s just food for thought.
Recommended Reading
Much of what is written above is inspired by the following publications. We encourage you all to take the time to read through these thoughtful papers to gain better insight into music assessment, its purpose, its value, and its intended outcomes. You can find many of them by simply searching for them in your favorite search engine or by clicking the DOI links below.
DeLuca, C., & Bolden, B. (2014). Music performance assessment. Music Educators Journal, 101(1), 70–76. https://doi.org/10.1177/0027432114540336
Denis, J. M. (2018). Assessment in music: A practitioner introduction to assessing students. Update: Applications of Research in Music Education, 36(3), 20–28. https://doi.org/10.1177/8755123317741489
Pellegrino, K., Conway, C. M., & Russell, J. A. (2015). Assessment in performance-based secondary music classes. Music Educators Journal, 102(1), 48–55. https://doi.org/10.1177/0027432115590183
Scott, S. J. (2012). Rethinking the roles of assessment in music education. Music Educators Journal, 98(3), 31–35. https://doi.org/10.1177/0027432111434742
Wesolowski, B. C. (2012). Understanding and developing rubrics for music performance assessment. Music Educators Journal, 98(3), 36–42. https://doi.org/10.1177/0027432111432524