By Dan Roberts, Lead Instructor, Interim Director of Education
In a previous post we discussed the problem of teaching inheritance in object-oriented programming. Ada’s curriculum relies heavily on inheritance, so we asked three questions:
- What about teaching inheritance is hard?
- How is inheritance used in the real world?
- Given those constraints, how can we teach inheritance more effectively?
We identified that inheritance solves a complex problem, and that the way inheritance is typically taught doesn’t correspond well with how inheritance is used, particularly by engineers early in their career. Recognizing this discrepancy, Ada rolled out an updated version of our inheritance curriculum this cohort, one focused on realistic applications.
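To make that contrast concrete: textbook lessons tend to build toy hierarchies (the classic Animal and Dog) from scratch, while engineers early in their career mostly inherit from classes a framework hands to them. Here is a minimal sketch of that second style, in Python for brevity even though our curriculum is Ruby-based; the class names are hypothetical, loosely echoing the Rails controller pattern:

```python
# A framework typically supplies a base class full of shared machinery...
class Controller:
    """Framework-provided base: routing, template rendering, and so on."""
    def render(self, template, **context):
        print(f"rendering {template} with {context}")

# ...and application code inherits from it, overriding only the hooks
# it needs. This "extend the framework" style is how most early-career
# engineers first use inheritance in earnest.
class BooksController(Controller):
    def index(self):
        books = ["POODR", "The Well-Grounded Rubyist"]
        self.render("books/index", books=books)

BooksController().index()
```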
Results
Anecdotally, all four classroom instructors found the revised curriculum to be highly effective. Students seemed to have a much stronger grasp of inheritance than in previous cohorts, and the beginning of our Ruby on Rails unit was as smooth as it’s ever been.
However, we don’t have empirical support for this result. Part of this is a problem with experiment design, namely that we didn’t do any. If we had been thinking ahead, we would have designed an assessment to measure students’ knowledge of the theory and application of inheritance at a specific point in the curriculum, and given the same assessment to each cohort. We also would have run the experiment in a controlled fashion, making this the only large change to our curriculum between cohorts.
What we have instead is a random smattering of weekly quizzes and assessments that touch on inheritance only tangentially, many of which changed substantially between C10 and C11. We also made several medium-to-large changes to our curriculum for C11, any one of which could have influenced the results.
I did spend an afternoon massaging assessment data, trying to eliminate questions that were irrelevant or that changed from cohort to cohort. Even so, I was not able to find a statistically significant difference in performance between the two groups of students.
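For the curious, that comparison boils down to a two-sample significance test on per-student scores. A sketch of the shape of it, with invented numbers standing in for our actual assessment data:

```python
from scipy import stats

# Illustrative per-student scores on comparable inheritance questions;
# the real data came from hand-cleaned weekly quizzes and assessments.
c10_scores = [0.62, 0.71, 0.55, 0.80, 0.68, 0.74, 0.59, 0.66]
c11_scores = [0.70, 0.65, 0.77, 0.61, 0.82, 0.69, 0.73, 0.64]

# Welch's t-test, which doesn't assume the cohorts have equal variance.
t_stat, p_value = stats.ttest_ind(c10_scores, c11_scores, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p > 0.05: no significant difference
```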
I also looked at project feedback from the first two individual Rails projects, to try to confirm the theory that this new approach to inheritance would ease the transition into working with a framework. We give each student an overall “grade” of red, yellow or green for each project; I converted these into numbers 1-3 and ran a quick regression. Again the results were inconclusive: no statistically significant difference between the scores for the two cohorts.
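In case it helps, here is roughly what that quick pass could look like, again with made-up grades rather than our real results:

```python
from scipy import stats

# Map the overall project "grade" onto a 1-3 scale.
grade_to_num = {"red": 1, "yellow": 2, "green": 3}

# Invented grades for one individual Rails project, per cohort.
c10_grades = ["green", "yellow", "green", "red", "yellow", "green"]
c11_grades = ["green", "green", "yellow", "green", "yellow", "red"]

# Encode cohort as 0/1 and regress the numeric grade on it; the slope's
# p-value says whether cohort membership predicts project outcome.
x = [0] * len(c10_grades) + [1] * len(c11_grades)
y = [grade_to_num[g] for g in c10_grades + c11_grades]
result = stats.linregress(x, y)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.3f}")
```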
This lack of results could have a number of explanations. It could be that the inheritance changes worked, but other changes we made to the curriculum cancelled out the gains. It could be that in the absence of an explicit points-based grading system for projects instructors allow their expectations to adjust to the current class, snapping results to an implicit curve. Or it could be that the new curriculum truly wasn’t any more effective than the old one.
So we find ourselves at an impasse. On the one side we have our teachers’ intuition, that students understood and retained inheritance more effectively with the adjusted curriculum. On the other side we have a lack of concrete data. It feels like we’re onto something worthwhile here, but we don’t have any way to prove it, to ourselves or to anyone else.
Moving Forward
In future cohorts we’ll continue using the new version of our inheritance curriculum. Instructors found the new curriculum easier to teach, and we’re reasonably certain it didn’t have a negative effect on students, so there’s no reason to revert.
While the lack of positive results is disappointing, to some extent I believe this is a sign of the maturity of Ada’s curriculum. Most of the obvious, highly beneficial work has already been done, and future changes are not guaranteed to be improvements. If we want to continue to innovate and discover better ways to teach our students, we have to be deliberate and intentional in our work.
Specifically, before embarking on significant changes to our curriculum or teaching practices, we need to ask ourselves the following questions:
- What problem are we addressing, and why?
- What will we change, and why?
- How will we know whether the change worked?
- What else might affect the results?
This matches my intuition about software development. Early in a product’s life cycle the positive impact of work is easy to see: implementing missing features and fixing obvious bugs are clear wins. Once a product is mature, big changes become riskier, and scientific development practices such as A/B testing and performance benchmarking are needed to ensure forward progress.
In both cases, working on the more mature version essentially follows the scientific method:
- Observation (something about the curriculum / product isn’t working)
- Hypothesis (what is the root cause of the problem?)
- Prediction (if we change X, then Y will be better)
- Controlled experiment (measure Y before and after, control for confounding variables)
The challenge facing our instructional team over the next year or two is to pivot our development practices from the fast-paced, fly-by-the-seat-of-your-pants approach that has gotten us this far, to a more measured style that will help move us forward. The trick will be to do so without abandoning the fire, passion and excitement that we currently bring to our work. Building out process and practices will largely fall on our newly hired Director of Education, but everyone on the team will have a part to play. I am confident that we are up to the task.