Using Metacognitive Resolution in Learning Design to Promote Competency

by Cia Kessler, Instructional Designer, EdTech

Consider this: You’re teaching a class filled with challenging content. You have spent several weeks explaining the material through a variety of approaches, including extensive direction on the core concepts and how to study. You are pleased with the assessment you’ve devised: it is not only fair but aligned with Bloom’s taxonomy for the task at hand. You wait hopefully for the results, only to find them worse than you anticipated, especially given all of the scaffolding you provided. What could have gone wrong? Before you return the tests, you ask the class how the exam seemed to them. It was easy, they all agree. You are puzzled by the mismatch between their perception and their performance, and unsure of where to go next.

Metacognition, defined as the ability to be aware of what we know and how we learn, is an essential part of a successful student’s learning process (McDaniel, 1970). Often we focus on the broader idea of metacognition: our ability to understand our own learning and thinking processes. Yet it is also important for any learner to retroactively distinguish correct answers from lucky guesses, a process called metacognitive resolution (Vuorre & Metcalfe, 2021). A student who is metacognitively aware and can resolve their answers is more efficient because they understand the gaps in their own understanding and can address them without judgment (Vuorre & Metcalfe, 2021; Dweck, 2023). This neutral mindset is an important precursor to viewing challenges as opportunities for growth, as well as to seeing one’s own learning gaps clearly.

Learning psychologists have described the process of identifying areas of weakness as the “Four Stages of Competence” (Momsen et al., 2013; Adams, 2021). Originally conceived as a model for understanding the progression of skill development in adult corporate training, this model can be adapted to facilitate a student’s understanding of their own metacognition and learning needs. Including metacognitive awareness along with competency awareness in your course design as a routine practice can

  1. empower students by reinforcing personal responsibility in the learning process,
  2. support better long-term retention,
  3. deepen the quality of learning, and
  4. improve the efficiency of their studying by allowing them to self-regulate their learning.

One simple way to include competency awareness and metacognitive resolution in an assessment is to add a post-exercise reflection to any major assignment. What is proposed below is a more strictly defined approach that can provide granular information about a student’s competency and learning gaps.

Recognizing Cognitive Complexity and Bloom’s Taxonomy

The importance of Bloom’s cognitive hierarchy has long been recognized as critical to the design of educational goals, and by extension to instructional design (Bloom, 1956). You may recall that Bloom’s work categorized the complexity of cognitive tasks within a pyramid, from the lowest order (recalling, recognizing) to the most complex (creating, designing) (Bloom, 1956). Each level of the hierarchy can be associated with a verb or gerund that points to a cognitive process associated with mastery (Bloom, 1956).

Later in his career, Bloom also examined the educational structures that support learning. It was then that he reported on a phenomenon he called the “2 sigma problem”: students who had access to one-on-one tutoring performed two standard deviations better on assessments than students in a standard classroom, meaning the average tutored student outscored roughly 98 percent of conventionally taught peers (Bloom, 1984). Recognition of the two-sigma problem suggested that thoughtful instructional design by itself was insufficient to promote competency in the absence of radical changes to the classroom environment, many of which are not feasible. The obvious question posed in response to “2 sigma” is: what does a one-on-one tutoring situation provide that is missing from a typical classroom? One response is the promotion of mastery learning, which foregrounds an individual student’s progress toward mastery of course concepts. Another may be an element of accountability and a deeper student self-awareness, particularly with regard to their own learning needs. For this, we need a method for judging understanding that improves student self-awareness and self-regulation.
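That “98 percent” figure follows directly from the properties of the normal curve. As a quick check, here is a minimal Python sketch, assuming (as Bloom’s comparison implicitly does) that classroom scores are roughly normally distributed:

    from statistics import NormalDist

    # Bloom's "2 sigma" effect: the average tutored student scored two
    # standard deviations above the mean of the conventional classroom.
    effect_size = 2.0

    # Share of the conventional class scoring below the average tutored
    # student, under an assumed standard normal grade distribution:
    percentile = NormalDist(mu=0.0, sigma=1.0).cdf(effect_size)
    print(f"Average tutored student outperforms {percentile:.1%} of peers")
    # -> Average tutored student outperforms 97.7% of peers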

Knowing What We Know

Guiding students to mastery is not only a function of instructional delivery and assessment selection. It is also closely tied to student motivation, self-awareness, and critical self-reflection (Marshik, 2015). One common way we encourage this type of self-reflection is by asking students to identify their learning style as auditory, visual, or kinesthetic, yet studies have shown that learning styles are not meaningful predictors of learning (Marshik, 2015). Instead, students need to connect their approach to mastering course content, on a granular level, with the criteria we use in assessments, so that they can fairly evaluate their own learning gaps, misconceptions, and strengths. So how can we promote evidence-based critical self-reflection among students?

The competency model suggests that mastery of any content can be understood as an exercise in skill acquisition: a progression from incompetence to competence across a series of tasks. It is used extensively in adult corporate training and is the basis for some models of online adaptive learning (Christiansen, 2019). The data derived from the model helps training departments quantify mastery and drives the algorithms that efficiently strengthen core knowledge (Christiansen, 2019). Learners self-assess, rating their own confidence alongside their knowledge. When the assessment is evaluated, each item is marked correct or incorrect and is also rated according to the categories below (Figure 1; a minimal code sketch of this classification follows the figure):

  • Conscious competence – you know what you know
  • Unconscious competence – you can guess your way to the answer
  • Conscious incompetence – you know what you don’t know
  • Unconscious incompetence – misconceptions, misunderstandings

Figure 1. The four stages of competence shown as quadrants: conscious competence (able to do tasks broken into steps) and conscious incompetence (knowing you don’t know) sit above the dividing line; unconscious competence (able to correctly guess) and unconscious incompetence (lack of understanding, misconceptions) sit below it.
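The pairing of correctness with confidence can be written as a short decision rule. The sketch below is illustrative only; the function name and the simple two-level confidence scale are assumptions made for the example, not part of any published instrument:

    def competence_quadrant(correct: bool, confident: bool) -> str:
        """Map the correctness of an answer and the learner's self-rated
        confidence onto the four stages of competence from Figure 1."""
        if correct and confident:
            return "conscious competence"      # you know what you know
        if correct:
            return "unconscious competence"    # you guessed your way there
        if not confident:
            return "conscious incompetence"    # you know what you don't know
        return "unconscious incompetence"      # confidently wrong: a misconception

    # Example: a correct answer the learner was unsure about
    print(competence_quadrant(correct=True, confident=False))
    # -> unconscious competence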

The goal is to arrive above the line: to be conscious of what we know and don’t know, or, at the highest level, to reproduce a skill or apply content knowledge with little conscious thought because we have arrived at “automaticity” (Hardy, 2015). A secondary goal is to identify areas of unconscious incompetence, places where our thinking is driven by misconceptions or misunderstandings. The real advantage of this model lies in identifying the areas that fall below the “conscious/unconscious” dividing line. Through a granular examination of their own responses, learners can connect their level of perceived confidence with the correctness of their answers. This type of self-awareness can be difficult to achieve in a classroom setting, but with deliberate design we can encourage students toward it as a practice (Naidu, 1997). In other words, a learner is able to identify the questions they got right because they guessed, and to distinguish the questions they got wrong because of a misunderstanding from those they got wrong because of a lack of knowledge.

Timely instructor feedback, a core value of Universal Design for Learning (UDL), can promote a student’s self-awareness, providing direct evidence not only of the additional content that needs to be mastered but also of the effectiveness of their method of studying. The problem is that a grade by itself can never lead to the kind of granular self-reflection that facilitates an understanding of learning gaps and how to address them.

Tying Unconscious Competence to Mastery to Drive Achievement

The method proposed below is an attempt at a more student-aligned vision of the assessment process, one that foregrounds metacognition as well as critical self-reflection on a student’s level of competency. Underneath each question on any assessment, add the following series of checkboxes (a sketch of how the follow-up comparison could be automated appears after the list):

  • I know it (consciously competent)
  • I don’t know it (consciously incompetent)
  • I think I know it (unconsciously competent)
  • Not sure (unconsciously incompetent)
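Once the test is graded, each checkbox can be compared with the result. For courses delivered through an LMS, that comparison can even be automated. The sketch below is a hypothetical illustration: it assumes each response has been exported with the box the student checked and whether the graded answer was correct, and it flags the two “unconscious” categories for follow-up:

    # Hypothetical per-question records: the box a student checked at
    # exam time and whether the graded answer turned out to be correct.
    responses = [
        {"q": 1, "box": "I know it",       "correct": True},
        {"q": 2, "box": "Not sure",        "correct": True},
        {"q": 3, "box": "I know it",       "correct": False},
        {"q": 4, "box": "I don't know it", "correct": False},
    ]

    CONFIDENT = {"I know it", "I think I know it"}

    for r in responses:
        if r["correct"] and r["box"] not in CONFIDENT:
            note = "guessed right (unconscious competence) -- restudy to confirm"
        elif not r["correct"] and r["box"] in CONFIDENT:
            note = "confidently wrong (unconscious incompetence) -- likely misconception"
        else:
            note = "self-assessment aligned with the result"
        print(f"Q{r['q']}: checked '{r['box']}', correct={r['correct']} -> {note}")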

It is important to foreground the idea of understanding our own effectiveness in a non-judgmental way. Tell students to be honest in rating themselves because they will not be penalized in any way; instead, they will learn about the effectiveness of their own studying and will be better able to prepare for future exams. When their test is returned, ask them to compare the box they checked with the correctness of their answer, focusing particularly on the two “unconscious” categories. These two categories, as explained above, allow us to see where we missed the mark because of a misconception or misunderstanding, and where we got the right answer for the wrong reason (a lucky guess). You can incentivize this process further by offering partial credit if students later return the test along with a short reflection that includes a plan to address their learning gaps and misunderstandings.

Focusing on student learning at a granular level also permits faculty to measure the effectiveness of their instructional design by looking, at the aggregate level, at questions where correctness does not align with student self-assessment. For example, where many students report guessing their way to a correct answer (unconscious competence): is the question confusing? Should the material be presented in a different way?
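A rough sketch of that aggregate check, again assuming hypothetical exported records; the 50 percent review threshold is arbitrary and used only for illustration:

    from collections import defaultdict

    # Hypothetical class-wide records: (question id, box checked, correct?)
    class_responses = [
        (1, "Not sure", True), (1, "Not sure", True), (1, "I know it", True),
        (2, "I know it", True), (2, "I know it", False), (2, "Not sure", False),
    ]

    UNSURE = {"Not sure", "I think I know it"}
    guessed_right = defaultdict(int)
    total = defaultdict(int)

    for q, box, correct in class_responses:
        total[q] += 1
        if correct and box in UNSURE:
            guessed_right[q] += 1

    for q in sorted(total):
        rate = guessed_right[q] / total[q]
        if rate > 0.5:  # illustrative review threshold, not a research-backed cutoff
            print(f"Question {q}: {rate:.0%} of students guessed their way to a "
                  "correct answer; consider revising the question or the lesson")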

Understanding Bloom’s taxonomy and using it effectively can be essential to designing courses, but without self-awareness on the part of the student, its effectiveness may be limited. Incorporating granular student reflection into assessments can increase students’ understanding of the effectiveness of their studying, as well as improve their automaticity with fundamental concepts.

References

Adams, L. (2021). Learning a new skill is easier said than done. Gordon Training International. https://www.gordontraining.com/free-workplace-articles/learning-a-new-skill-is-easier-said-than-done

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay Company.

Bloom, B. S. (1984, July). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Syracuse University Faculty Center. https://facultycenter.ischool.syr.edu/wp-content/uploads/2012/02/2-sigma.pdf

Christiansen, U. J. (2019). What is adaptive learning & how can it improve your e-learning outcomes. Area9 Lyceum. https://cdn2.hubspot.net/hubfs/2353984/What%20is%20Adaptive%20Learning%20Infographic.pdf

Dweck, C. (2023, March 27). Carol Dweck revisits the “growth mindset” (opinion). Education Week. http://www.edweek.org/leadership/opinion-carol-dweck-revisits-the-growth-mindset/2015/09

Hardy, B. (2015, April 5). How to learn a new skill well enough to do it automatically. Fast Company. https://www.fastcompany.com/3058572/how-to-learn-a-new-skill-well-enough-to-do-it-automaticall

Marshik, T. (2015, April 2). Learning styles & the importance of critical self-reflection [Video]. TEDxUWLaCrosse. YouTube. https://www.youtube.com/watch?v=855Now8h5Rs

McDaniel, R. (1970, February 9). Metacognition. Vanderbilt University Center for Teaching. https://cft.vanderbilt.edu/guides-sub-pages/metacognition/

Momsen, J., Offerdahl, E., Kryjevskaia, M., Montplaisir, L., Anderson, E., & Grosz, N. (2013). Using assessments to investigate and compare the nature of learning in undergraduate science courses. CBE Life Sciences Education. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3671651/

Naidu, S. (1997). Collaborative reflective practice: An instructional design architecture for the Internet. Distance Education, 18(2), 257–283. https://doi.org/10.1080/0158791970180206

Vuorre, M., & Metcalfe, J. (2021). Measures of relative metacognitive accuracy are confounded with task performance in tasks that permit guessing. Metacognition and Learning. https://link.springer.com/article/10.1007/s11409-020-09257-1
