Assessment is the process of discovering students’ knowledge, skills, attitudes, competencies, and habits of mind, and comparing them to what is expected as a result of participating in your course and in a program of study. Ideally, you discover these things soon enough to redirect a course of study. We call this formative assessment.

Be transparent in your expectations for students by placing learning outcomes on each course syllabus, and sharing program outcomes on all program websites.

Quality Learning Outcomes

An outcome must be measurable, meaningful, and manageable. It specifies what you want the student to know or do. A good outcome statement also uses active verbs. An outcome has three components:

  • Audience (A) = The person doing or expressing the behavior
  • Behavior (B) = What the audience will do or report
  • Condition (C) = What the audience needs to do to succeed

Examples of learning outcomes:

  • Students in an introductory science course will be able to recall at least five of the seven periods of the periodic table.
  • Students in a psychology program will design a research experiment to carry out in their capstone course.
  • Students in a service-learning leadership program will demonstrate increased leadership skills by completing a leadership skills inventory, as indicated by a score of at least 80 percent.

A helpful and frequently used resource for writing learning outcomes is Bloom’s Taxonomy of Cognitive Skills. It associates verbs with a ranking of thinking skills, moving from less complex at the knowledge level to more complex at the evaluation level. Make sure to set the level of the outcome to match the level at which you teach the content.

Assessment Techniques

With so many ways to measure what students know and can do, why limit yourself to just one or two? Here are just a few assessment techniques:

  • Course and homework assignments
  • Multiple choice examinations and quizzes
  • Essay examinations
  • Term papers and reports
  • Observations of field work, internship performance, service learning, or clinical experiences
  • Research projects
  • Class discussions
  • Artistic performances
  • Personal essays
  • Journal entries
  • Computational exercises and problems
  • Case studies

Angelo and Cross (1993) outline the main characteristics of classroom assessment techniques:

  • Learner centered. Focus on the observation and improvement of learning, e.g., prior knowledge, misconceptions, or misunderstandings students may have about course content.
  • Instructor directed. Decide what to assess, how to assess, and how to respond to what you find from assessment.
  • Formative. Use assessment feedback to help students improve, rather than to assign grades. Feedback is ongoing and iterative, giving you and your students useful information for evaluation and improvement.
  • Mutually beneficial. Students reinforce their grasp of course concepts and strengthen their own skills at self-assessment, while you sharpen your teaching focus.
  • Context situated. Assessment targets the particular needs and priorities of you and your students, as well as the discipline in which it is applied.
  • Best practice based. Build assessment on current standards to make learning and teaching more systematic, flexible, and frequent.
    • Assessing before instruction helps you tailor class activities to student needs.
    • Assessment during a class helps you ensure students are learning the content satisfactorily.
    • Using a classroom assessment technique immediately after instruction helps reinforce the material and uncover any misunderstandings before they become barriers to progress.

Assessment Tools

The two most common assessment tools are rubrics and tests.

Rubrics

Rubrics are used to assess capstone projects, collections of student work (e.g., portfolios), direct observations of student behavior, evaluations of performance, external juried review of student projects, photo and music analysis, and student performance, to name a few. Rubrics help standardize assessment of more subjective learning outcomes, such as critical thinking or interpersonal skills, and are easy for practitioners to use and understand. Rubrics clearly articulate the criteria used to evaluate students.

You can create a rubric from scratch or use a pre-existing one (as-is or modified) if it fits your context. Start with the end in mind: What do you want students to know or do as a result of your effort? What evidence do you need to observe to know that students got it? These questions lead to the main components of a rubric:

  • A description of a task students are expected to produce or perform
  • A scale (and scoring) that describes the level of mastery (e.g., exceeds expectations, meets expectations, does not meet expectations)
  • Components or dimensions students must meet in completing assignments or tasks (e.g., types of skills, knowledge, etc.)
  • A description of the performance quality (performance descriptor) of the components or dimensions at each level of mastery
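These components translate naturally into a small data structure. The sketch below, in Python, models a hypothetical rubric with two dimensions and a three-level scale, then totals a rater's scores; the dimension names, descriptors, and point values are illustrative assumptions, not a prescribed standard.

```python
# A minimal rubric model: each dimension maps every scale level to a
# performance descriptor, and the scale maps levels to points.
# All names and values here are illustrative only.

SCALE = {"exceeds expectations": 3,
         "meets expectations": 2,
         "does not meet expectations": 1}

RUBRIC = {
    "argument quality": {
        "exceeds expectations": "Claims are precise and fully supported by evidence.",
        "meets expectations": "Claims are mostly clear and supported.",
        "does not meet expectations": "Claims are vague or unsupported.",
    },
    "organization": {
        "exceeds expectations": "Structure strengthens the argument throughout.",
        "meets expectations": "Structure is logical with minor lapses.",
        "does not meet expectations": "Structure obscures the argument.",
    },
}

def score(ratings):
    """Total a rater's per-dimension ratings against the scale."""
    return sum(SCALE[level] for level in ratings.values())

ratings = {"argument quality": "exceeds expectations",
           "organization": "meets expectations"}
print(score(ratings))  # 3 + 2 = 5 out of a possible 6
```

Keeping the descriptors alongside the scores makes it easy to report to a student not just a number but the descriptor that justifies it.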

Steps in rubric development:

  • Identify the outcome areas, also known as components or dimensions. What must students demonstrate (skills, knowledge, behaviors, etc.)?
  • Determine the scale. Identify how many levels are needed to assess performance components or dimensions. Decide what score to allocate for each level.
  • Develop performance descriptors at each scale level. Use Bloom’s taxonomy as a starting point. Start with the end points and define their descriptors (for example, define “does not meet expectations” and “exceeds expectations”). Decide whether to score overall or by dimension.
  • Train raters and pilot test. For consistent and reliable rating, raters need to be familiar with the rubric and need to interpret and apply the rubric in the same way. Train them by pilot-testing the rubric with a few sample papers and/or get feedback from your colleagues (and students). Revise the rubric as needed.
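One simple way to check whether trained raters interpret and apply the rubric in the same way is percent agreement across the pilot papers. The Python sketch below is a minimal illustration with made-up ratings; for high-stakes use you would likely prefer a chance-corrected statistic such as Cohen’s kappa.

```python
def percent_agreement(rater_a, rater_b):
    """Share of pilot papers on which two raters assigned the same level."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical ratings of five pilot papers on a 1-3 scale.
rater_a = [3, 2, 2, 1, 3]
rater_b = [3, 2, 1, 1, 3]
print(percent_agreement(rater_a, rater_b))  # 4 of 5 papers match: 0.8
```

If agreement is low on particular papers, discussing those specific disagreements with raters is often the fastest way to surface ambiguous descriptors that need revision.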

Tests

There is no one way to develop a classroom-level test. However, there are commonly agreed-upon standards of quality that apply to all test development. The higher the stakes of the decisions a test informs (e.g., course grades, final exams, and placement exams), the greater the attention you must pay to these three standards:

  • Does the test measure what you intend?
  • Does the test adequately represent or sample the outcomes, content, skills, abilities, or knowledge you will measure?
  • Will the test results be useful in informing your teaching and give sufficient evidence of student learning?

In selecting a test, take care to match its content with the course curriculum. The Standards for Educational and Psychological Testing (1999) provide a strict set of guidelines; while they apply “most directly to standardized measures generally recognized as ‘tests’ such as measures of ability, aptitude, achievement, attitudes, interests, personality, cognitive functioning, and mental health,” they “may also be usefully applied in varying degrees to a broad range of less formal assessment techniques” (p. 3). These are the general procedures for test development laid out in the Standards:

  • Specify the purpose of the test and the inferences to be drawn.
  • Develop frameworks describing the knowledge and skills to be tested.
  • Build test specifications.
  • Create potential test items and scoring rubrics.
  • Review and pilot test items.
  • Evaluate the quality of items.


American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. American Educational Research Association.

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass.
