Encouraging Student Agency through Alternative Assessments

Research suggests that when we give students qualitative feedback and withhold quantitative grades, they are better able to absorb that feedback and improve their learning (Schinske & Tanner, 2014). Alternative assessments can support that process.

Building trust with your students is foundational for developing alternative assessment practices that work. It’s important to explain to students why you chose your assessment methods. Wherever possible, provide clarity and transparency around your decisions and expectations. Keep in mind that your students may have spent most of their academic careers growing familiar and comfortable with standard assessments.

Questions to inform your assessment decisions:

  • What does it look like when students are learning?
  • How can you clearly communicate your expectations to students?
  • How can you give students opportunities to reflect and assess their own learning?

Reflecting on your answers to these questions can help you prioritize where and how to include alternative assessments in your course.

Alternative Assessment Examples

The Five-Point System

The five-point system is a compromise between students who want everything graded and instructors who know that grades can distract from learning. Students earn five out of five points for every assignment, big or small, as long as they do three things:

  • Submit it on time.
  • Follow the directions.
  • Do the work completely.

The work doesn’t have to be correct or high quality; it just has to be done. Students receive not just points but feedback in multiple forms throughout the term: peer review, instructor comments, self-reflections, and comparisons of their work to models and examples. This helps students learn from their effort and progress. Points averaged over the term become the “effort grade.” At the end of the term, students write a final reflection letter to the instructor explaining what they’ve learned and making the case for the grade they think they deserve. Their proposed grade is averaged with their effort grade, and that is the grade they earn for the class.
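
To make the arithmetic concrete, here is a minimal sketch of how the final grade might be computed, assuming the effort grade is the percentage of possible points earned and the student’s proposed grade is expressed on the same 0–100 scale (the system itself doesn’t prescribe a particular conversion):

  # Minimal sketch of the five-point system's final grade calculation (Python).
  # Assumptions, not part of the original description: effort is tracked as
  # 0-5 scores per assignment, and the proposed grade is a 0-100 percentage.

  def effort_grade(scores):
      """Average the per-assignment scores (each 0-5) as a percentage."""
      return sum(scores) / (5 * len(scores)) * 100

  def final_grade(scores, proposed_grade):
      """Average the effort grade with the student's self-proposed grade."""
      return (effort_grade(scores) + proposed_grade) / 2

  # Example: 5/5 on nine of ten assignments (one missed) and a proposed grade of 95.
  print(final_grade([5] * 9 + [0], 95))   # -> 92.5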

Contract Grading

Popularized by Danielewicz and Elbow (2009), contract grading clearly lays out — at the beginning of the class — the tasks students must complete and the behaviors expected of them. The instructor attaches a grade to the contract so students know exactly what to do to earn that grade.

Peer Reviews

Asking students to give feedback to their peers gives them an opportunity to learn from one another and think critically about their work. Consider giving students specific elements to focus on during peer review and model what good feedback looks like. It helps to provide students with a rubric, checklist, or series of questions to answer as they examine their peers’ work.

Student Self-Assessment

Possibly the most radical of alternative assessment methods, student self-assessment requires a paradigm shift for both the instructor and the student. It requires students to truly believe they have the freedom to determine their own grade and the instructor to truly believe that students are capable of doing so fairly. Students can self-assess in large or small courses when they are taught how, and doing so deepens their learning and motivates them to keep learning. Self-assessment can happen in a variety of formats:

  • Exam wrappers
  • e-Journals
  • Blogs
  • Weekly reflections/check-ins
  • e-Portfolios

Implementing Alternative Assessments

Any use of alternative assessments is likely to be new for at least some of your students. As you implement alternative assessments in your course, it helps to communicate clearly about students’ process, their progress, and your expectations, even when you and your students have designed those pieces collaboratively.

Feedback

Give students regular feedback on their process work that is not linked to points or grades. Students are more likely to remember feedback and incorporate it into future work when it is not paired with a grade. Low-stakes, formative assessments that give you and your students feedback about their learning come in many forms:

  • Practice quizzes
  • Zoom polls during presentations
  • Follow-up surveys
  • Collecting reading quotes and questions in a shared Google Doc
  • Student-led Q&A in a discussion forum
  • One-on-one check-ins

Guidelines

Give students clear guidelines, examples, and rubrics that describe the desired learning outcomes for a given assessment. Allow time and flexibility for students to ask questions and make suggestions about how they might meet the learning outcomes in more than one way. Employing a Universal Design for Learning approach will help you serve a diverse range of learners.

References

Danielewicz, J., & Elbow, P. (2009). A Unilateral Grading Contract to Improve Learning and Teaching. College Composition and Communication, 61(2), 244–268. https://stats.lib.pdx.edu/proxy.php?url=https://www.jstor.org/stable/40593442

Schinske, J., & Tanner, K. (2014). Teaching More by Grading Less (or Differently). CBE Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.cbe-14-03-0054

Understanding Assessment Methods

Assessment is the process of discovering students’ knowledge, skills, attitudes, competencies, and habits of mind, and comparing them to what is expected as a result of participating in your course and program of study. Ideally, you discover these things soon enough to redirect the course of study; this is called formative assessment.

Be transparent in your expectations for students by placing learning outcomes on each course syllabus, and sharing program outcomes on all program websites.

Quality Learning Outcomes

An outcome must be measurable, meaningful, and manageable. It specifies what you want the student to know or do. A good outcome statement also uses active verbs. An outcome has three components:

  • Audience (A) = Person doing or expressing
  • Behavior (B) = What audience will do or report
  • Condition (C) = What audience needs to do to succeed

Examples of learning outcomes:

  • Students in an introductory science course will be able to recall at least five of the seven periods of the periodic table.
  • Students in a psychology program will design a research experiment to carry out in their capstone course.
  • Students in a service-learning leadership program will demonstrate increased leadership skills by completing a leadership skills inventory, as indicated by a score of at least 80 percent.

A helpful and frequently used resource for writing learning outcomes is Bloom’s Taxonomy of Cognitive Skills. It associates verbs with a ranking of thinking skills, moving from less complex at the knowledge level to more complex at the evaluation level. Make sure to set the level of the outcome to match the level at which you teach the content.

Assessment Techniques

With so many ways to measure what students know and can do, why limit yourself to just one or two? Here are just a few assessment techniques:

  • Course and homework assignments
  • Multiple choice examinations and quizzes
  • Essay examinations
  • Term papers and reports
  • Observations of field work, internship performance, service learning, or clinical experiences
  • Research projects
  • Class discussions
  • Artistic performances
  • Personal essays
  • Journal entries
  • Computational exercises and problems
  • Case studies

Angelo and Cross (1993) outline the main characteristics of classroom assessment techniques:

  • Learner centered. Focus on observing and improving learning, for example by surfacing prior knowledge, misconceptions, or misunderstandings students may have about course content.
  • Instructor directed. Decide what to assess, how to assess, and how to respond to what you find from assessment.
  • Formative. Use assessment feedback to allow students to improve, rather than assigning grades. Feedback is ongoing and iterative, giving you and students useful information for evaluation and improvement.
  • Mutually beneficial. Students reinforce their grasp of course concepts and strengthen their skills at self-assessment, while you sharpen your teaching focus.
  • Context situated. Assessment targets the particular needs and priorities of you and your students, as well as the discipline in which it is applied.
  • Best practice based. Build assessment on current standards to make learning and teaching more systematic, flexible, and frequent.
    • Assessing before instruction helps you tailor class activities to student needs.
    • Assessment during a class helps you ensure students are learning the content satisfactorily.
    • Using a classroom assessment technique immediately after instruction helps reinforce the material and uncover misunderstandings before they become barriers to progress.

Assessment Tools

The two most common assessment tools are rubrics and tests.

Rubrics

Rubrics are used to assess capstone projects, collections of student work (e.g., portfolios), direct observations of student behavior, evaluations of performance, external juried review of student projects, photo and music analysis, and student performance, to name a few. Rubrics help standardize assessment of more subjective learning outcomes, such as critical thinking or interpersonal skills, and are easy for practitioners to use and understand. Rubrics clearly articulate the criteria used to evaluate students.

You can create a rubric from scratch or use a pre-existing one (as-is or modified) if it fits your context. Start with the end in mind: What do you want students to know or do as a result of your effort? What evidence do you need to observe to know that students have achieved it? These questions lead to the main components of a rubric (a brief sketch follows the list):

  • A description of a task students are expected to produce or perform
  • A scale (and scoring) that describes the level of mastery (e.g., exceeds expectations, meets expectations, does not meet expectations)
  • Components or dimensions students must demonstrate in completing assignments or tasks (e.g., types of skills or knowledge)
  • A description of the performance quality (performance descriptor) of the components or dimensions at each level of mastery
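
To make these pieces concrete, here is a minimal sketch of how a rubric’s task, scale, dimensions, and performance descriptors might be organized and scored. The dimension names, levels, and point values are illustrative assumptions rather than a prescribed format:

  # Minimal sketch of a rubric in Python: a task, a three-level scale with points,
  # and a performance descriptor for each dimension at each level.
  # All names and point values below are illustrative assumptions.

  SCALE = {"exceeds expectations": 3, "meets expectations": 2, "does not meet expectations": 1}

  RUBRIC = {
      "task": "Write a research proposal",
      "dimensions": {
          "argument": {
              "exceeds expectations": "Claim is precise and fully supported by evidence.",
              "meets expectations": "Claim is clear and mostly supported by evidence.",
              "does not meet expectations": "Claim is unclear or unsupported.",
          },
          "organization": {
              "exceeds expectations": "Sections build logically with clear transitions.",
              "meets expectations": "Sections are ordered logically.",
              "does not meet expectations": "Order of sections obscures the argument.",
          },
      },
  }

  def score(ratings):
      """Sum the points for the chosen level on each dimension."""
      return sum(SCALE[level] for level in ratings.values())

  # Example: one rater's judgments for a single student submission.
  ratings = {"argument": "exceeds expectations", "organization": "meets expectations"}
  for dimension, level in ratings.items():
      # Report the performance descriptor as written feedback for each dimension.
      print(dimension, "-", RUBRIC["dimensions"][dimension][level])
  print("score:", score(ratings), "of", len(ratings) * max(SCALE.values()))   # -> score: 5 of 6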

Steps in rubric development:

  • Identify the outcome areas, also known as components or dimensions. What must students demonstrate (skills, knowledge, behaviors, etc.)?
  • Determine the scale. Identify how many levels are needed to assess performance components or dimensions. Decide what score to allocate for each level.
  • Develop performance descriptors for each scale level. Use Bloom’s taxonomy as a starting point. Begin with the end points and define their descriptors (for example, “does not meet expectations” and “exceeds expectations”), then fill in the levels between. Decide whether to score overall or by dimension.
  • Train raters and pilot test. For consistent and reliable rating, raters need to be familiar with the rubric and must interpret and apply it in the same way. Pilot-test the rubric with a few sample papers, gather feedback from your colleagues (and students), and revise the rubric as needed.

Tests

There is no one way to develop a classroom-level test. However, there are commonly agreed-upon standards of quality that apply to all test development. The higher the stakes of the test used for decision-making (e.g., course grades, final exams, and placement exams), the more attention you must pay to these three standards:

  • Does the test measure what you intend?
  • Does the test adequately represent or sample the outcomes, content, skills, abilities, or knowledge you will measure?
  • Will the test results be useful in informing your teaching and give sufficient evidence of student learning?

In selecting a test, take care to match its content with the course curriculum. The Standards for Educational and Psychological Testing (1999) offers a strict set of guidelines and notes that although it applies “most directly to standardized measures generally recognized as ‘tests’ such as measures of ability, aptitude, achievement, attitudes, interests, personality, cognitive functioning, and mental health, it may also be usefully applied in varying degrees to a broad range of less formal assessment techniques” (p. 3). These are the general procedures for test development laid out in the Standards:

  • Specify the purpose of the test and the inferences to be drawn.
  • Develop frameworks describing the knowledge and skills to be tested.
  • Build test specifications.
  • Create potential test items and scoring rubrics.
  • Review and pilot test items.
  • Evaluate the quality of items.

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. American Educational Research Association. https://search.library.pdx.edu/permalink/f/p82vj0/CP7195947060001451

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass. https://search.library.pdx.edu/permalink/f/p82vj0/CP71104374450001451
