Exploring Depth Of Knowledge In Maths

If it isn’t broken, should we break it?

There is a year 6 student in my class who keeps making me look bad when I’m teaching Maths, and I think he is fantastic. We were recently working on a problem not dissimilar to the open middle examples discussed below. When it appeared we had our best answer, I confidently announced that we probably wouldn’t find a better one, and we moved on with the lesson. But 15 minutes later, “Jack” popped up and suggested he had found a better answer… Sure enough, he had. Damn it, Jack! This was the point in the lesson where I handed the whiteboard marker over to him and went to sit down with the other learners.

Jack saw something we couldn’t and persevered when the rest of us had given up. The funny thing is, he came into my class thinking he wasn’t very good at maths, but that is a blog post for another day. The point is, the complex, open middle question we were tackling allowed Jack to search for his own best answer and display a depth of knowledge too often missing from our day-to-day learning, and, I suspect, from many others’.

Our composite year 6/7 class is tasked with satisfying the following Australian Curriculum proficiency and content strands in Maths:

The proficiency strands understanding, fluency, problem-solving and reasoning are an integral part of mathematics content across the three content strands: number and algebra, measurement and geometry, and statistics and probability.

We regularly begin lessons with 5-minute mental challenges aimed at different proficiency strands. We use 3-act maths tasks, Booker Building Numeracy resources, Baker Natural Maths, and “real world” projects that I’ve either made or adapted from other sources. We don’t grade work, but we do assess proficiency and growth regularly. We revise and resubmit often. We still need to reflect more regularly, possibly in a similar way to the Journey Journals we use in English.

Like most classrooms, we have students at very different stages of their learning journey. About half the class have had the *cough* pleasure of my company the previous year, so they’ve heard my full repertoire of witty analogies and Dad jokes. I try to offer a continuum of inquiry tasks that allows students to choose their own starting point and work either independently or collaboratively. They can opt in or out of explicit mini-lessons as they see fit.

All learning tasks are practice (formative), and we use at least one performance task (summative) for each unit of work, which is kept as evidence of learning in our portfolios. Students can start with a performance task if they like, but usually they find they have more learning or revision to do before they can complete it successfully. The only deadline for collecting evidence of learning is the end of semester, when we need to conference and report a grade. It isn’t perfect by any stretch, and it is messy at times, but it works for us.

Always on the lookout for our new better, we recently discovered Open Middle Maths problems, which we have incorporated into our routine. Students really like these problems.

Dan Meyer introduced us to the idea of “open middle” problems during his presentation on “Video Games & Making Math More Like Things Students Like” by explaining what makes them unique:

  • they have a “closed beginning” meaning that they all start with the same initial problem.
  • they have a “closed end” meaning that they all end with the same answer.
  • they have an “open middle” meaning that there are multiple ways to approach and ultimately solve the problem.

Open middle problems require a higher depth of knowledge than most problems that assess procedural and conceptual understanding.

(From the About Us page of Open Middle)

This led me to question how much I really know about depth of knowledge (DOK) in Maths. In recent years, I’ve focused on finding or creating Maths tasks that use higher order thinking skills from Bloom’s Taxonomy. I think this has helped increase engagement in tasks and has positively influenced student outcomes. While I don’t put much stock in standardized testing results, they can be a decent indicator of growth for individual students across time. Our recent year seven NAPLAN data revealed that the majority of our class displayed growth beyond the expected level, in the upper region. Data is not the same as evidence, so even this result is taken with a grain of salt, but it is a positive indicator.

Which leads me back to my opening question: If it isn’t broken, should we break it?

My answer right now is… Yep. Let’s pull that sucker apart and see what really makes it tick. What we are learning seems to be working okay at the moment, but questions remain about whether we truly have a deep understanding of concepts or merely surface-level knowledge. While we had excellent growth in year seven NAPLAN results, we didn’t set the world on fire with our achievement result, which was middle of the road. Is there also an opportunity in the year levels below to build depth of knowledge so that we avoid having yet another “Jack” turn up in year six thinking he is bad at Maths?

This led me to Robert Kaplinsky’s blog, in particular his archive of DOK articles. To better understand what is meant by the term, the following example comes (with permission) straight from a blog post called Why Depth of Knowledge is Critical to Implement.

To understand whether one of his year eight students had a “deep, authentic command” of the achievement standard, Kaplinsky asked:

“What is the perimeter of a rectangle that measures 8 units by 4 units?”

The student knew how to find the perimeter of the rectangle (procedural skill and fluency) and understood that the perimeter was the sum of the rectangle’s sides (conceptual understanding). This was of little surprise, as the question is aimed at a year four standard. Kaplinsky then asked a remarkably similar question from the same standard:

“List the dimensions of a rectangle with a perimeter of 24 units.”

The difference between the two outcomes? As Kaplinsky explains:

Procedural skills and fluency? Nope.

Conceptual understanding? Nope.

I hope you are wondering what the heck just happened. Is it possible that we have students like him in our classes that initially appear to have a “deep, authentic command” but actually have superficial understandings? What was so different about the first question that allowed the student to appear to demonstrate procedural skill and fluency as well as conceptual understanding?

The answer? Depth of knowledge.
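
To make the asymmetry concrete for myself, here is a minimal sketch of the two questions (my own illustration, not Kaplinsky’s, and restricted to whole-number sides purely to keep the list finite). The first question is one procedure with one answer; the second reverses it and has many correct answers to reason towards:

```python
# Question 1: a single procedure and a single answer.
# perimeter = 2 * (length + width) = 2 * (8 + 4) = 24 units.

# Question 2: many rectangles share a perimeter of 24 units.
# Whole-number sides only, just to keep the list finite.
perimeter = 24
half = perimeter // 2  # length + width must equal 12

for length in range(1, half // 2 + 1):
    width = half - length
    print(f"{length} x {width} -> perimeter {2 * (length + width)}")
```

Even with that restriction the student has six correct answers to find (1 × 11 through 6 × 6), and infinitely many once decimals are allowed, which is exactly the kind of reasoning the first question never asks for.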

I don’t currently have the answers I’m looking for, but I’ve started asking questions:

If I break down the achievement standard so that students can collect evidence of their learning and self-report grades, how can I be sure they have more than a surface level of understanding?

Are our performance tasks providing enough opportunity to display depth of knowledge?

How valuable would it be to pull learning tasks apart with students to explore DOK? What is the most student-friendly way to do this?

Update: Kaplinsky’s post My Four Most Asked Questions On Depth Of Knowledge addresses questions like: What level should I start my students off at?

The example problems above demonstrate the difference in complexity between DOK 1 and DOK 3. What the DOK 3 question does is avoid the potential for shallow thinking.

Kaplinsky explored this topic further in another blog post titled Is Depth of Knowledge Complex or Complicated? Here he explains his own DOK matrix and explores a verb wheel, Penny Lund’s DOK posters and a DOK flowchart by Tracy Watanabe. From a teacher’s perspective, there is no question that Kaplinsky’s matrix is the more detailed and cognitively challenging scaffold. The other scaffolds would appear to have cross-curricular application, whereas the matrix is Maths specific.

Below are two performance tasks I have created that we are currently working on for a unit on volume and area. Where would the following tasks fall on the DOK continuum?  Are they complicated or complex?

[embeddoc url="https://blogmoore2017.edublogs.org/files/2017/08/Super-Mario-Swimming-Pool-Volume-Assignment-dragged-195r3uk-1s7p2s4.pdf" height="600px" download="all" viewer="google" ]

[embeddoc url="https://blogmoore2017.edublogs.org/files/2017/08/Super-Mario-Swimming-Pool-Volume-Assignment-dragged-1-2jd7rjd-1lo9sdl.pdf" height="600px" download="all" viewer="google" ]

My initial feeling is that the Yoshi task is DOK 1. Even though there are multi-step elements to the task, it appears to be more complicated than complex. The Luigi pool design is more likely DOK 3, as there is no single best answer but countless possible designs. I think this would be an interesting conversation to have with teachers and students alike.
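
To illustrate that complicated/complex distinction (using made-up pool dimensions, not the actual numbers from either task), a complicated question hands over the dimensions and asks for one answer, while a complex one fixes a target and opens up the design space:

```python
# Hypothetical stand-in for the pool tasks; the real assignments are in
# the PDFs above. "Complicated" (DOK 1): one procedure, one answer.
def pool_volume(length_m: int, width_m: int, depth_m: int) -> int:
    """Volume of a rectangular pool in cubic metres."""
    return length_m * width_m * depth_m

print(pool_volume(10, 4, 2))  # 80 cubic metres, end of story

# "Complex" (DOK 3): fix the target and open the middle.
# How many whole-metre designs hold exactly 80 cubic metres?
target = 80
designs = [
    (length, width, depth)
    for length in range(1, target + 1)
    for width in range(1, target + 1)
    for depth in range(1, target + 1)
    if length * width * depth == target
]
print(len(designs), "whole-metre designs before decimals are even allowed")
```

Every design in that list is “correct”, so the conversation shifts from “What is the answer?” to “Which design is best, and why?”, which is where the depth of knowledge lives.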

Finally, the last gem I discovered was a revision and reflection scaffold offered on the Open Middle website and used here with permission from Robert Kaplinsky. Another great blog post outlines how he uses it to integrate depth of knowledge into Maths. I have thoroughly enjoyed building on my limited understanding of depth of knowledge in Maths and hope others can find some value in these resources too.

[embeddoc url="https://blogmoore2017.edublogs.org/files/2017/08/Open-Middle-Revise-and-Reflect-Sheet-2e0bygm-1ziyvt9.pdf" height="800px" download="all" viewer="google" ]

[embeddoc url="https://blogmoore2017.edublogs.org/files/2017/08/DOKMatrixElementaryMath-2d31e9u-shjp07.pdf" height="600px" download="all" viewer="google" ]
