
Hypothesize how students best learn through instructional activities. 

One key part of the science- and evidence-based approach that we employ is the notion that we can treat instructional activities as hypotheses about how students will best learn a given outcome. For example, if I'm designing a statistics course, one of the outcomes that students may need to achieve is: decide when to use and calculate the mean and the median. My hypothesis is that the best way for students to learn to make this distinction is to work through a series of problems in which they decide when to use and calculate each measure, with appropriate examples, support, hints and targeted feedback, and with common misconceptions and areas of student struggle specifically highlighted.

If we capture data on how students work through these problems, we can better understand how accurate our initial hypothesis was:

  • Were the practice opportunities adequate?  
  • Did we anticipate the right misconceptions?
  • Were there enough hints to support struggling students?  

Based on this data, we can refine our hypothesis and improve the instructional materials, repeating this cycle with more students until the data indicates that students are able to distinguish between mean and median after using our activities (that is, validating our hypothesis!).

This mean/median example focuses on a very specific outcome and a single set of instructional activities.  But this same approach can be used across the full spectrum of instruction, from individual assessment questions to modules to entire courses.  At larger grain sizes, our instructional activities become more complicated and numerous, but the underlying assumption is the same: the prediction that the best way to support and measure students' attainment of a course's learning outcomes is a specific set of materials (readings, lectures, videos), learning activities (discussions, exercises, writing assignments, problems) and assessments (quizzes, check-points, exams), combined with a system to encourage students to engage with these appropriately (schedule, course structure, grades).

The data does not have to be fine-grained. 

Fine-grained, robust data and an explicit course design allow for the use of some very sophisticated tools in considering where your course is most effective and where it might be improved.  But data at all levels can be useful in getting started with this process, and can highlight where it might be most valuable to take greater care in your data collection moving forward.  A gradebook, for example, can offer a very useful set of data for considering the design of your course: How do different learning activities support students in succeeding on your assessments?  Are intermediate assessments like quizzes good predictors of exam success?  Do certain types of assignments, such as participation, seem especially strongly or weakly related to course success?  Is there a specific exercise or quiz that stands out as being very effective or in need of significant improvement?
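
To make this concrete, here is a minimal sketch of that kind of gradebook exploration, assuming you can export your gradebook as a CSV with one row per student; the file name and column names (quiz_avg, participation, homework_avg, final_exam, hw_*) are illustrative assumptions, not a standard LMS format:

```python
# Sketch of exploring a gradebook export (hypothetical file and column names).
import pandas as pd

# One row per student; columns are assumed here, not prescribed by any LMS.
grades = pd.read_csv("gradebook.csv")

# How strongly do quiz averages, participation and homework relate to the final exam?
predictors = ["quiz_avg", "participation", "homework_avg"]
correlations = grades[predictors + ["final_exam"]].corr()["final_exam"].drop("final_exam")
print(correlations.sort_values(ascending=False))

# Which individual assignments stand out as unusually weak or strong
# predictors of exam success?  (Assumes homework columns are named hw_*.)
for col in [c for c in grades.columns if c.startswith("hw_")]:
    r = grades[col].corr(grades["final_exam"])
    print(f"{col}: r = {r:.2f}")
```

Even a rough pass like this can point to assignments worth a closer look, without requiring anything beyond an exported spreadsheet.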

The more extensive your gradebook data and the more robust your course design, the more useful and interesting the questions you can explore; a gradebook with only midterm and final exam grades doesn't offer a lot of information to explore.  But even this minimal set of information can provide some guidance on where to focus your attention for future course offerings; if students do well on the midterm but then perform poorly on the final, spending some time creating activities that will support students and generate data for the second half of the course is a good first step toward evidence-based improvement.  And the kinds of activities that generate useful data also tend to be activities that, purposefully designed, offer good feedback and target common misconceptions: exactly the characteristics that we know create conditions for robust learning!

Beyond considering learning activities, we can also take a closer look at assessment information.  While aggregate exam and quiz scores provide a useful touchstone for relating to learning activities, information about individual assessment questions can offer a different type of look into your instruction.  Are specific items especially easy or hard?  Are certain answers problematic? Are certain learning outcomes being effectively assessed? Fortunately, most modern learning management systems make it easy to capture relatively detailed information about assessments; simply by delivering quizzes and exams through your LMS, you can normally export assessment information that can be easily analyzed.  An LMS isn't absolutely required, though: even information from paper exams can be captured, collated and analyzed.
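
As one way of looking at item-level data, here is a small sketch of a basic item analysis, assuming an export with one row per student and one column per question scored 0/1; the file name, the student_id index column and the 0/1 scoring are assumptions about your export, not a fixed standard:

```python
# Sketch of a basic item analysis on an exported quiz (assumed 0/1 scoring).
import pandas as pd

# Assumes every non-index column is a question score for one student.
items = pd.read_csv("quiz_export.csv", index_col="student_id")
total = items.sum(axis=1)

for question in items.columns:
    difficulty = items[question].mean()         # proportion of students answering correctly
    rest = total - items[question]              # total score excluding this item
    discrimination = items[question].corr(rest) # item-rest correlation
    print(f"{question}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```

Items that nearly everyone gets right (or wrong), or whose scores correlate weakly with overall performance, are good candidates for revision or a closer look at how the outcome is being assessed.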

Get started. 

Some ways to get started with leveraging the data that you do have (and planning for ways to expand the data that you capture):

  1. Think about what information and data you already have, especially from earlier offerings of a course.  Do you have gradebook information? How detailed and in what format are these gradebooks?
  2. Can you define consistent categories for the grades that you are capturing?
  3. Do you have information at a finer grain size than gradebooks, such as data on individual quiz or exam questions?  Rubrics from assignments?
  4. What format is your data in?  Can you migrate or transcribe paper-based grades or feedback into an easier-to-analyze electronic format, such as spreadsheets?  (A small sketch of one such format follows this list.)
  5. If you've used an LMS in the past, are there tools for exporting information that was captured from that usage?  Most LMSs offer reports and export functionality; explore them and see what's available.
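
For item 4, one workable target is a single "long" table with one row per student per graded item.  The sketch below shows that layout; the column names and categories are illustrative assumptions, and the handful of records stand in for whatever paper or scattered grades you are transcribing:

```python
# Sketch of consolidating grades into one consistent, analyzable table.
# Column names, categories and the sample records are illustrative assumptions.
import pandas as pd

records = [
    # student_id, item, category, score, max_score
    ("s001", "quiz_1",  "quiz", 8,  10),
    ("s001", "midterm", "exam", 72, 100),
    ("s002", "quiz_1",  "quiz", 6,  10),
    ("s002", "midterm", "exam", 85, 100),
]
grades = pd.DataFrame(records, columns=["student_id", "item", "category", "score", "max_score"])
grades["percent"] = grades["score"] / grades["max_score"]

# With everything in one consistent format, summaries become one-liners.
print(grades.groupby("category")["percent"].mean())
grades.to_csv("grades_long.csv", index=False)
```

The same layout works whether the rows come from paper rubrics, spreadsheets or an LMS export, which is what makes the consistent categories in item 2 worth the effort.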

As you consider your historical data, this is also a good opportunity to consider how you might better capture data from upcoming classes: are there things you might have done differently in the past that would have offered a cleaner dataset?  Can you make better use of the LMS and other electronic tools to automate the process of capturing data?  Can you make some changes to how you format your learning activities, feedback and scores to make for easier data use in the future?