Remote Lab Kit

Science labs are often either integrated components of larger lecture courses (lab sections) or smaller, self-contained courses. Either way, it’s worth defining what you want a lab to achieve before selecting an online alternative.

Here are three possible scenarios based on lab focus. If your labs combine these scenarios, you could likewise combine recommendations — keeping in mind the appropriate time commitment for the combined activities.

Learning Techniques

If the focus is on learning techniques and their application to specific experimental situations, consider having students engage in online simulations that may cover at least portions of a protocol, if not the whole thing.

  • Harvard’s LabXchange has a suite of lab simulations with assessments that focus on basic molecular biology techniques.
  • MERLOT offers a collection of virtual labs in several science disciplines.
  • PhET offers interactive simulations that let students vary experimental parameters.

Many textbooks also list interactive, lab-based resources.

Consider having your students watch videos of experiments. Ask them to first make predictions and then discuss the results.

Interpreting Experimental Data

If the focus is on interpreting experimental data, consider using datasets from the published literature that align with the experiments students would have encountered in the lab, along with problem sets focused on interpreting that data.

You could also intersperse the experimental protocols with questions that explore the reasons behind specific steps. Instead of actually performing the experiment, students gain a critique-based understanding of the method, followed by data interpretation.

You could give students a randomized sequence of steps in the experimental methodology and ask them to arrange the steps in the correct logical order. This requires students to understand why each step must precede the next in a protocol.

You could also give students a blank step to fill in for themselves once they identify which step is missing. Here’s an example from LabXchange. (Select “Design” from the “Context” menu.)

Project-based Lab Research

If the focus is on project-based lab research (as is often the case in lab courses), your students have already been working on their projects since the start of the term. They may have a capstone assignment in the form of a final paper, grant application, and/or poster that describes their work, with both context and future directions defined.

Consider asking your students to shift their focus to the capstone assignment, with an emphasis on interpreting the data they have already gathered — or, if they have not yet generated their own data, on predicting their experimental outcomes and designing the next experimental steps in detail.

Divide the rest of the semester into draft submissions of capstone sections. This will allow you to give formative feedback and enable your students to experience experimental design, further hypothesis building, and predictive data analysis. This approach aligns especially well with a written capstone styled like a grant application.

Modified with permission from The Derek Bok Center for Teaching and Learning, Harvard University.

Understanding Assessment Methods

Assessment is the process of discovering students’ knowledge, skills, attitudes, competencies, and habits of mind, and comparing them to what is expected as a result of participating in your course and in a program of study. Ideally, you discover these things soon enough to redirect a course of study; this is formative assessment.

Be transparent in your expectations for students by placing learning outcomes on each course syllabus, and sharing program outcomes on all program websites.

Quality Learning Outcomes

An outcome must be measurable, meaningful, and manageable. It specifies, using active verbs, what you want the student to know or do. An outcome has three components:

  • Audience (A) = Person doing or expressing
  • Behavior (B) = What audience will do or report
  • Condition (C) = What audience needs to do to succeed

Examples of learning outcomes:

  • Students in an introductory science course will be able to recall at least five of the seven periods of the periodic table.
  • Students in a psychology program will design a research experiment to carry out in their capstone course.
  • Students in a service-learning leadership program will demonstrate increased leadership skills by completing a leadership skills inventory, as indicated by a score of at least 80 percent.

A helpful and frequently used resource for writing learning outcomes is Bloom’s Taxonomy of Cognitive Skills. It associates verbs with a ranking of thinking skills, moving from less complex at the knowledge level to more complex at the evaluation level. Make sure to set the level of the outcome to match the level at which you teach the content.

Assessment Techniques

With so many ways to measure what students know and can do, why limit yourself to just one or two? Here are just a few assessment techniques:

  • Course and homework assignments
  • Multiple choice examinations and quizzes
  • Essay examinations
  • Term papers and reports
  • Observations of field work, internship performance, service learning, or clinical experiences
  • Research projects
  • Class discussions
  • Artistic performances
  • Personal essays
  • Journal entries
  • Computational exercises and problems
  • Case studies

Angelo and Cross (1993) outline the main characteristics of classroom assessment techniques:

  • Learner centered. Focus on the observation and improvement of learning — e.g., prior knowledge, misconceptions, or misunderstandings students may have about course content.
  • Instructor directed. Decide what to assess, how to assess, and how to respond to what you find from assessment.
  • Formative. Use assessment feedback to help students improve, rather than to assign grades. Feedback is ongoing and iterative, giving you and your students useful information for evaluation and improvement.
  • Mutually beneficial. Students reinforce their grasp of course concepts and strengthen their skills at self-assessment, while you sharpen your teaching focus.
  • Context situated. Assessment targets the particular needs and priorities of you and your students, as well as the discipline in which they are applied.
  • Best practice based. Build assessment on current standards to make learning and teaching more systematic, flexible, and frequent.
    • Assessing before instruction helps you tailor class activities to student needs.
    • Assessment during a class helps you ensure students are learning the content satisfactorily.
    • Using classroom assessment techniques immediately after instruction helps reinforce the material and uncover any misunderstanding before it becomes a barrier to progress.

Assessment Tools

The two most common assessment tools are rubrics and tests.

Rubrics

Rubrics are used to assess capstone projects, collections of student work (e.g., portfolios), direct observations of student behavior, evaluations of performance, external juried reviews of student projects, and photo and music analyses, to name a few. Rubrics help standardize assessment of more subjective learning outcomes, such as critical thinking or interpersonal skills, and are easy for practitioners to use and understand. Rubrics clearly articulate the criteria used to evaluate students.

You can create a rubric from scratch or use a pre-existing one (as-is or modified) if it fits your context. Start with the end in mind: What do you want students to know or do as a result of your effort? What evidence do you need to observe to know that students got it? These questions lead to the main components of a rubric:

  • A description of a task students are expected to produce or perform
  • A scale (and scoring) that describes the level of mastery (e.g., exceeds expectations, meets expectations, does not meet expectations)
  • Components or dimensions students must meet in completing assignments or tasks (e.g., types of skills, knowledge, etc.)
  • A description of the performance quality (performance descriptor) of the components or dimensions at each level of mastery

Steps in rubric development:

  • Identify the outcome areas, also known as components or dimensions. What must students demonstrate (skills, knowledge, behaviors, etc.)?
  • Determine the scale. Identify how many levels are needed to assess performance components or dimensions. Decide what score to allocate for each level.
  • Develop performance descriptors at each scale level. Use Bloom’s taxonomy as a starting point. Start at end points and define their descriptors. (For example, define “does not meet expectations” and “exceeds expectations.”) Develop scoring overall or by dimension.
  • Train raters and pilot test. For consistent and reliable rating, raters need to be familiar with the rubric and need to interpret and apply the rubric in the same way. Train them by pilot-testing the rubric with a few sample papers and/or get feedback from your colleagues (and students). Revise the rubric as needed.

Tests

There is no one way to develop a classroom-level test. However, there are commonly agreed-upon standards of quality that apply to all test development. The higher the stakes of the decisions the test informs (e.g., course grades, final exams, and placement exams), the more attention you must pay to these three standards:

  • Does the test measure what you intend?
  • Does the test adequately represent or sample the outcomes, content, skills, abilities, or knowledge you will measure?
  • Will the test results be useful in informing your teaching and give sufficient evidence of student learning?

In selecting a test, take care to match its content with the course curriculum. The Standards for Educational and Psychological Testing (1999) provide a strict set of guidelines. Although they apply “most directly to standardized measures generally recognized as ‘tests’ such as measures of ability, aptitude, achievement, attitudes, interests, personality, cognitive functioning, and mental health,” the Standards note that they “may also be usefully applied in varying degrees to a broad range of less formal assessment techniques” (p. 3). These are the general procedures for test development laid out in the Standards:

  • Specify the purpose of the test and the inferences to be drawn.
  • Develop frameworks describing the knowledge and skills to be tested.
  • Build test specifications.
  • Create potential test items and scoring rubrics.
  • Review and pilot test items.
  • Evaluate the quality of items.

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. American Educational Research Association. https://search.library.pdx.edu/permalink/f/p82vj0/CP7195947060001451

Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). Jossey-Bass. https://search.library.pdx.edu/permalink/f/p82vj0/CP71104374450001451
