Journal

Week 8

3 highlights of this course

Even though they were really challenging, the course blog, the mid-term wiki, and the final website were amazing projects in different ways. I learned so much from each, both technically and theoretically, both collaboratively and independently. Putting everything together into a complex, creative, summative product requires the total package of thinking (many Bloom levels!) and many breaks for chocolate.

2 new understandings

(1) Formative assessment!!! Its importance in learning and teaching, and ways to do it online. (2) Technology: I’m not an expert, but I am already using new tools: VoiceThread, FlipGrid, Google docs for collaboration, PBworks–and I’ve gotten much better with WordPress.

Thank you!

“Thank you for ALL your kindness” by Constanza on Flickr is licensed under CC BY-NC 2.0

I am so grateful to be in UW-Stout’s E-learning Certificate Program and so grateful to have enrolled in this class! Maggie, you are a wonderful coach, guide, and teacher–always there when we needed you but holding us to the highest standards. And to my classmates, you all helped me so much with your comments and your example! This class is full of smart, creative, ambitious, disciplined teachers who are dedicated to making the world a better place. I am proud to be part of this group. Good luck to you all!

Lorna@Lynchburg

Week 7

3 insights about creating the final project for EDUC 762

  • Deep, authentic learning is rare; this project constitutes an opportunity for true constructivist learning. It’s really hard because it’s complex and time is short!
  • Thinking through a concept map myself really helped me understand how I could use this project, rather than just go through the motions of creating a toolbox as a final assessment. It forced me to see the parts of the website and read the rubric more carefully. The concept map helped make the project more integrated.
  • There are still pieces I don’t fully grok because eight weeks! But because I kind of started over on a few learning objectives, the overall work is much more unified, and I’ll be able to use components more effectively in the future. Also, the review process is helpful, especially looking at sites that have been carefully considered.

Revisit your SAMR self-analysis and describe 2 insights about how close you came to the goal you set for yourself at the beginning of the course.

Here is my original post on SAMR:

SAMR should be used to assess technology use, not change the content learned—I think. SAMR is a big learning hurdle for me. First, there’s a lot of confusion, as I pointed out in my primary post, citing Hamilton et al. Even on our DB forum, posts show pressure to move up the ladder, and people talk about transforming learning with new tech, when, really, they are transforming the skills and content learned; for example, a research project resulting in a paper vs. a PowerPoint show. In this case, the mode of presentation changes the nature of what is learned. Research may remain the same but presenting research through writing ≠ presenting research through a slideshow. Each mode is valuable, but each teaches different skills. So that’s a problem, at least for me, especially in a department unfortunately averse to digital writing and multimodal projects, not to mention universal design and newfangled technology in general.
It took about a week of reading and mulling this over to realize that using SAMR as an assessment of how I use technology implies no pressure to move up the ladder and that the model can be useful, despite the real questions about its empirical validity. “How SAMR and Tech Can Help Teachers Truly Transform Assessment” by cognitive psychologist and educator Lindsay Portnoy helped me to sort this dilemma out. In contrasting SAMR with Bloom’s Taxonomy, Portnoy writes:
“Bloom’s taxonomy is essential for teachers to identify student’s levels of thinking, whereas Puentedura’s taxonomy is essential for teachers to identify the tools that can be used to innovate on instruction. The two are not synonymous and point to two potential views of technology and education: one where the technology guides instruction, and the other where skillful teachers guide instruction supported by technological tools.”

To paraphrase, I think Portnoy is making an important point: Use Bloom to identify students’ thinking levels. Use SAMR to innovate teaching.

*Technology should not guide instruction. Technology should support dynamic instruction.*

It sounds easy, yet I think we’ve been grappling with this postulate all semester: Are we assessing students’ use of Prezi or what they are presenting on Prezi? Are students learning the technology rather than the course objectives? Am I changing my teaching just to introduce a cool tool?

The more challenging but more effective path, I think, is to use the principles of Universal Design for Learning to choose tools for teaching and learning that provide multiple means of engagement, multiple means of representation, and multiple means of expression.

UDL

Regarding SAMR, I am where I need to be: evolving, thinking critically, and learning—and flying under the radar, when needed, to rethink and redesign my courses.

1 more question

TBH, the Assessment Taxonomy Chart is still kind of a mystery to me. First, you have to get past the general misapprehension pointed out by Grant Wiggins (cited in an earlier post) that higher on Bloom’s scale means better and that Bloom’s taxonomy is a strict hierarchy. Here’s what Wikipedia has to say about it:

Criticism of the taxonomy

As Morshead (1965) pointed out on the publication of the second volume, the classification was not a properly constructed taxonomy, as it lacked a systemic rationale of construction.
This was subsequently acknowledged in the discussion of the original taxonomy in its 2001 revision,[9] and the taxonomy was reestablished on more systematic lines. It is generally considered[citation needed] that the role the taxonomy played in systematising a field was more important than any perceived lack of rigour in its construction.
Some critiques of the taxonomy’s cognitive domain admit the existence of these six categories but question the existence of a sequential, hierarchical link.[13] Often, educators view the taxonomy as a hierarchy and may mistakenly dismiss the lowest levels as unworthy of teaching.[14][15] The learning of the lower levels enables the building of skills in the higher levels of the taxonomy, and in some fields, the most important skills are in the lower levels (such as identification of species of plants and animals in the field of natural history).[14][15] Instructional scaffolding of higher-level skills from lower-level skills is an application of Vygotskian constructivism.[16][17]
Some consider the three lowest levels as hierarchically ordered, but the three higher levels as parallel.[9] Others say that it is sometimes better to move to Application before introducing concepts,[citation needed] the idea is to create a learning environment where the real world context comes first and the theory second to promote the student’s grasp of the phenomenon, concept or event. This thinking would seem to relate to the method of problem-based learning.
Furthermore, the distinction between the categories can be seen as artificial since any given cognitive task may entail a number of processes. It could even be argued that any attempt to nicely categorize cognitive processes into clean, cut-and-dried classifications undermines the holistic, highly connective and interrelated nature of cognition.[18] This is a criticism that can be directed at taxonomies of mental processes in general.


Bloom’s Taxonomy by K. Ainsqatsi is licensed under CC BY-SA 3.0


Week 6

3 course adjustments to reflect this week’s readings

In “Cybercoaching: Rubrics, Feedback, & Metacognition, Oh My!” Naomi Peterson stresses the importance of aligning assessments with goals and standards (14). She writes that in “aligning activities with objectives, none of these criteria is appropriate unless it was established as an objective of the course” (9). In “E-Learning and Constructivism: From Theory to Application,” the authors explain that in order for distance education to be learner-centered, three kinds of assessment are critical: self-assessment, team assessment, and facilitator assessment (95). Constructivist learning requires authentic, collaborative projects, something I will continue to work on. In the meantime, I can more quickly do the following:

1. Work backward from course goals for all assessments.

2. Double-check to make sure I’m not assessing technology use that isn’t part of the course objectives.

3. Develop self-assessments for students on various assignments and tasks. This is especially important so they can monitor their learning and become more autonomous in understanding objectives.

Koohang, Alex, et al. “E-Learning and Constructivism: From Theory to Application.” Interdisciplinary Journal of E-Learning and Learning Objects, vol. 5, 2009, pp. 91-109.

Peterson, Naomi. “Cybercoaching: Rubrics, Feedback, & Metacognition, Oh My!” Paper presented at the E.C. Moore Symposium on Scholarship of Teaching and Learning, Indiana University, 25 Feb. 2005.

2 insights about using rubrics and pre-course surveys

1. It’s important to have a rubric for most assignments. Single-point rubrics and instructional rubrics can be very helpful for students. I also want to explore student-created rubrics.

2. Pre-course surveys are wonderful tools to get to know students a little bit and to set up initial groups. It’s important to let people know how the information will be used and to include directions.

1 more question

My big question this week is about the timing of the virtual writing conference via VoiceThread. This project now has a learning objective and rubric, as well as a VoiceThread demo. While lots of smaller feedback and revision processes can be used, this major operation will probably only be possible once during a semester, so it’s important to think about the timing of the conference, when it will be most beneficial to students, the type of feedback, how students can best use the conference, and so on. I just read a fantastic article by two business professors who found, in a small study, that students preferred to get physical, marked-up papers back, along with audio feedback accompanying handwritten e-notes they could read on any device. (They don’t want much, do they?)

Student Feedback Preference

And I am going to try to compare these findings with the article below, a more general piece that offers a historical overview of writing assessment in college English.

Toward a New Discourse of Assessment

Right now, my most detailed online writing feedback is a returned paper with comments in Word, both detailed and general, including links to resources, and a Vocaroo recording, all via email. I know there’s a better way . . . It may be VoiceThread; it may be something else. That is my question for the week–when and how to deliver the most effective feedback.

 

Week 5

3 things I learned

1. Writing measurable learning objectives is hard. It didn’t help that I was on the road for part of this week.

The Objective Builder from the University of Central Florida that you (Maggie) gave us helped.

Here’s one more automated approach from Arizona State University.

One more helpful handout adds the SMART criteria; it helps ensure the objective is actually achievable.

2. From the Assessment Taxonomy Table activity: I want to increase formative assessment and use appropriate technology to do so, but I don’t want to make students learn unnecessary new technology for assessment. If it’s a technology related to the course, such as Google Docs, especially if it will empower them beyond the course or improve accessibility, then that seems justifiable. However, adding to their sometimes already considerable technology burden needs to be carefully examined. Many of my online students have trouble when they must do *anything* new with technology, such as a discussion board post or even clicking on a hyperlink—seriously. It takes explicit, step-by-step directions and sometimes one-on-one work for even basic functions. We don’t have strong IT support (we’re small), and anyone can register for online courses. Adding to the technology burden can be overwhelming for some of these students. They’re not ideal candidates for online classes, but they usually have no choice if they want to attend college.

So having students learn to use Jeopardy Labs, for example, might be questionable. I should learn to use it, not students. In my assessment taxonomy, having rethought the technologies, I can justify students learning to use VoiceThread because of its great utility and Vocaroo because it’s so easy and accessible (and there will be alternatives), but not necessarily a wiki or Jeopardy Labs, which would both take longer to learn and not necessarily be of use beyond a single assignment.

3. If having students learn to use a new technology can be justified, then the 2008 digital iteration of Bloom’s Taxonomy is helpful: Bloom’s Digital Taxonomy


Another source for Bloom’s Digital Taxonomy is Global Digital Citizen. (This page is much easier to read in Firefox using the Reader view to get rid of clutter.)

2 insights about using Bloom’s Taxonomy

1. This isn’t my insight; I went looking for some expert commentary and found “Five Unfortunate Misunderstandings that Almost All Educators Have about Bloom’s Taxonomy” by Grant Wiggins. The key point here, as with the SAMR Model, is that higher-order thinking doesn’t mean better; in fact, the terms “higher-order” and “lower-order” don’t even appear in Bloom’s work. Knowledge, according to Wiggins, may be considered lower-order, but comprehension certainly cannot, as it includes so many kinds of complex and interrelated thinking.

2. Part of what makes writing learning objectives so challenging is choosing verbs that match the right level of Bloom’s Taxonomy, so it helps to have a chart of verbs one likes.

1 more question

Where is there a bottleneck in learning in my courses? That is where I should focus more formative assessment. This article discusses a disciplinary approach to analyzing a learning hurdle and teaching students to think as experts in the field do.


Week 4: Jagged Learning Toolbox

3 things I learned

1. Collaboration from a new perspective. The mid-term project was indeed challenging but also rewarding. As Curtis Bonk points out in Effective Online Teaching Tips, collaboration should be considered from the vantage points of both pedagogy and technology. This project was the best-integrated collaborative project I’ve ever experienced in terms of both categories, and as Bonk emphasizes, the success of a collaborative project must be based on intensive planning by the instructor.

It’s kind of amazing that we were able to complete our research and create a wiki in such a short time, and despite some very real anxiety and frustration with communication challenges, I still ended up learning more than I could possibly have learned alone and creating a better product with the talents of the group to draw on. In the end, the excellent communication skills of one member of my group and the outstanding tech skills of the other proved invaluable, and I hope to remember this experience—along with the intricate and comprehensive planning I imagine it took to set up the project—the next time I think about assigning group work. Yes, it’s a formidable amount of work all the way around, and at many points, things can go awry, but the potential for a unique, authentic, profound learning experience justifies the risk.

2. So much learning from the mid-term! Specifically, the necessity of getting communication times and modes structured first; learning how and why to insert navigational links at the top and bottom of each page on a wiki or website; how to use a table to stabilize objects on a website; all about VoiceThread—this tool is already being incorporated into plans for student self-assessments later this semester.

3. Continuing growth in my understanding of assessment, with the focus this week on performance-based assessment. Professor Linda Darling-Hammond of Stanford characterizes American students as “over-tested and underexamined” because of the unending over-reliance on high-stakes standardized testing (Ellis). It’s hard to argue with the project-based approach that nurtures and assesses deep learning, as practiced, for example, by The Urban Academy in New York City. As for the criticism that project-based learning and performance-based assessment harm standards, Grant Wiggins maintains that assessment must be local, evidence-based, and rigorously tied to learning objectives (Ellis).

Ellis, Ken, director. Assessment Overview: Beyond Standardized Testing. Edutopia, 2002, https://www.youtube.com/watch?v=b9OBhKzh1BM.

2 insights about working in online teams

1. If the project isn’t well-designed and well-guided, chances are that it will flop. Because Maggie checked in at the right times and sent messages in different ways (DB, email, videos on the news section of the course), eventually everyone on the team became more informed about what we had to do. The success of a project depends on its design—even the self-assessment form supported learning. Using it called for thoughtful reflection on how we worked as a team.

2. Communication is everything. You must get that down first–how to talk with each other and when to do it; otherwise, collaboration won’t be possible, or it will be very limited.

1 piece of advice for working in an online learning team

Get away from the discussion board ASAP to communicate as a team. We had 61 posts, and the timing was wack! At one point, I wrote, “I feel like I’m in that episode of I Love Lucy.”


Week 3

3 things I learned, or learned better

  1. How to write a specific learning objective. The Objective Builder from the University of Central Florida really helps. Learning with this model and the easy practice it affords, and then Maggie’s feedback, seem to be working . . . This activity, like the rest of this course so far, makes my understanding of the essential nature of assessment much clearer. It’s a little overwhelming to think about breaking e-v-e-r-y learning task into ABCD (audience, behavior, condition, degree) to structure instruction and assessment, but how else to evaluate student learning and my own effectiveness?
  2. Theory vs. Real Life. The tension between authentic assessment and assessment as I must practice it at work (my department requires students to produce a specified number of argumentative essays that constitute 75 percent of their grade) can’t really be resolved or dismissed. Here’s the definition of authentic assessment from Jon Mueller’s Authentic Assessment Toolbox:

Authentic Assessment

Repeatedly composing basically the same kind of essay over the course of two semesters, an assignment designed to be graded by a single reader for the purpose of sorting students into artificial categories, has little to do with “real-world tasks” (Mueller) or “the kinds of problems faced by adult citizens” (Wiggins). The problem may start with an interpretation of curriculum, but more important, this over-reliance on one kind of assessment and almost total focus on summative assessment isn’t working; our measurements tell us that student writing at our college isn’t improving much, if at all.

  3. “Talent Is Always Jagged,” Chapter 4 in Todd Rose’s The End of Average: Unlocking Our Potential by Embracing What Makes Us Different, reinforced the importance of differentiating assessment and providing choice and variety in assessment. For me, this chapter is inseparable from Universal Design for Learning (UDL) and Howard Gardner’s theory of Multiple Intelligences. As with Rose’s thesis, both UDL and Gardner’s theory have implications for assessment.

2 things I’d like to learn more about

  1. Assessing student writing, in theory and practice, as a foundation for changing my approach to assessment and using technology to increase formative assessment.
  2. The big picture of online assessment, to be able to tie it all together. I guess I need a textbook 🙂

1 more question

Still working on the difference between a task and an assessment. This has been a point of confusion since Week 1 because in my entrenched way of thinking, a task and assessment of the task are two different things. However, in authentic assessment, these terms are synonymous (Mueller).

Authentic Task

Weeks One and Two in Assessment in E-Learning

3 things I learned

  1. I have a lot to learn about assessment. So far, this course has taught me that K-12 teachers know a lot about assessment, and that the primary purpose of assessment should be to support learning. Our discussion posts provide so much rich content, full of creative and practical ideas about using assessment to aid learning, things I would never think of as a post-secondary educator. Still, after Week One and the foundation in traditional vs. authentic assessment, I became confused. See 2, below.
  2. SAMR should be used to assess technology use, not change the content learned—I think. SAMR is a big learning hurdle for me. First, there’s a lot of confusion, as I pointed out in my primary post, citing Hamilton et al. Even on our DB forum, posts show pressure to move up the ladder, and people talk about transforming learning with new tech, when, really, they are transforming the skills and content learned; for example, a research project resulting in a paper vs. a PowerPoint show. In this case, the mode of presentation changes the nature of what is learned. Research may remain the same, but presenting research through writing ≠ presenting research through a slideshow. Each mode is valuable, but each teaches different skills. So that’s a problem, at least for me, especially in a department unfortunately averse to digital writing and multimodal projects, not to mention universal design and newfangled technology in general.

It took about a week of reading and mulling this over to realize that using SAMR as an assessment of how I use technology implies no pressure to move up the ladder and that the model can be useful, despite the real questions about its empirical validity. “How SAMR and Tech Can Help Teachers Truly Transform Assessment” by cognitive psychologist and educator Lindsay Portnoy helped me to sort this dilemma out. In contrasting SAMR with Bloom’s Taxonomy, Portnoy writes:

“Bloom’s taxonomy is essential for teachers to identify student’s levels of thinking, whereas Puentedura’s taxonomy is essential for teachers to identify the tools that can be used to innovate on instruction. The two are not synonymous and point to two potential views of technology and education: one where the technology guides instruction, and the other where skillful teachers guide instruction supported by technological tools.”

By focusing on the use of novel technology that provides means of assessing learning as students learn, Portnoy shows the relationship between SAMR and assessment.

  3. Through the example of Maggie’s use of rubrics, I am experiencing firsthand the value of creating rubrics for every assignment. From the student’s perspective, they really help, especially on the DB.

2 things I’d like to learn more about

  1. Using surveys for reflective self-assessment in writing to increase autonomy in student learning. I need to increase authentic, formative assessment and decrease the time I spend providing feedback. (I respond to both drafts and final papers and allow rewrites of some papers; I teach 17 hours a semester; this is not sustainable.)
  2. Backward design

1 more question

What is the difference between authentic and formative assessment?

 

Work Cited

Hamilton, Erica, et al. “The Substitution Augmentation Modification Redefinition (SAMR) Model: A Critical Review and Suggestions for Its Use.” Techtrends: Linking Research & Practice to Improve Learning, vol. 60, no. 5, Sept. 2016, pp. 433-441. EBSCOhost, doi:10.1007/s11528-016-0091-y.

 

Responsibility to yourself means refusing to let others do your thinking, talking, and naming for you . . . –Adrienne Rich

 


3 thoughts on “Journal”

  1. Thanks for sharing, Lorna. You found some useful resources. The assessment guide is a nice summary of all the things we will do in this course! You ask about tasks and assessments. Let me give you an example. The midterm project is an assessment. Yes, it is a task or activity, but it is a means to an end. It is a way for me to assess your learning or your ability to synthesize what you’ve learned. The midterm rubric is a guide that will help me judge the degree to which you met the criteria. ~Maggie


  2. Great insights, Lorna. I agree that communication is key and finding the right tool is key as well. Perhaps the DB didn’t work for you. Consider a collaborative tool. How about Skype? Or Adobe Connect? So much to choose from! ~Maggie


  3. Thanks for sharing, Lorna! Wow, lots of great insights. Regarding the criticism of Bloom’s, I take things with a grain of salt. I use Bloom’s as a guide, but it is not the be-all and end-all. Some assignments will always be on the lower levels, and that is OK. Students should be able to recall information before they can create and analyze. It is common sense. My main takeaway from Bloom’s and SAMR is that we should have variety in our assessments. We should also not be afraid to push our students to think and analyze at a higher level. We should push ourselves to infuse technology (when needed or appropriate).

