Thursday, March 29, 2012

Encouraging reflection on practice while grading an artifact: A thought on badges


When I started teaching I thought back to all of those teachers who made me write meaningless papers into which I put little effort and for which I received stellar grades, and I vowed not to be that teacher. I promised myself and my future students that we – as equals – would discuss the literature as relevant historical artifacts that are still being read because the authors still have something to say about today’s society.
But then I stepped into the classroom and faced opposition from my colleagues who thought my methods would not provide students with the opportunities to master the knowledge of the standards. Worst of all, some teachers actually punished students who came from my class because they “knew” the students had not learned how to write or analyze since I did not give traditional tests or grade in a traditional way. 

Wednesday, March 21, 2012

Flipping Classrooms or Transforming Education?

Dan Hickey and John Walsh
Surely you have heard about it by now.  Find (or make) the perfect online video lecture for teaching particular concepts and have students watch it before class.  Then use the class for more interactive discussion.  In advance of presenting at Ben Motz’s Pedagogy Seminar at Indiana University on March 22, we are going to raise some questions about this practice.  We will then describe a comprehensive alternative that leads to a rather different way of using online videos, while still accommodating prevailing expectations for coverage, class structure, and accountability.

Compared to What?
A March 21 webinar by Jonathan Bergman that was hosted by e-School News (and sponsored by Camtasia web-capture software) described flipped classrooms as places where “educators are actively transferring the responsibility and ownership of learning from the teacher to the students.”  That sounds pretty appealing when Bergman compares it to “teachers as dispensers of facts” and students as “receptacles of information.”



Sunday, March 18, 2012

Some Things about Assessment that Badge Developers Might Find Helpful

Erin Knight, Director of Learning at the Mozilla Foundation, was kind enough to introduce me to Greg Wilson, the founder of the non-profit Software Carpentry. Mozilla is supporting their efforts to teach basic computer skills to scientists to help them manage their data and be more productive. Greg and I discussed the challenges and opportunities in assessing the impact of their hybrid mix of face-to-face workshops and online courses. More about that later.
Greg is as passionate about education as he is about programming. We discussed Audrey Watters’s recent tweet regarding “things every techie should know about education.” But the subject of “education” seemed too vast for me right now. Watching the debate unfold around the DML badges competition suggested something more modest and tentative. I have been trying to figure out how the existing research literature on assessment, accountability, and validity is (and is not) relevant to the funded and unfunded badge development proposals. In particular I want to explore whether distinctions that are widely held in the assessment community can help illuminate some of the concerns that people have raised about badges (nicely captured in David Theo Goldberg’s “Threading the Needle…” DML post). Greg’s inspiration resulted in six pages, which I managed to trim (!) back to the following with a focus on badges. (An abbreviated version is posted at the HASTAC blog.)




Sunday, March 11, 2012

Initial Consequences of the DML 2012 Badges for Lifelong Learning Competition

Daniel T. Hickey

The announcement of the final awards in MacArthur’s Badges for Lifelong Learning competition on March 2 was quite exciting. It concluded one of the most innovative (and complicated) research competitions ever seen in education-related research. Of course there was some grumbling about the complexity and the reviewing process. And of course the finalists who did not come away with awards were disappointed. But has there ever been a competition without grumbling about the process or the outcome?

A Complicated Competition
The competition was complicated. There were over 300 initial submissions a few months back; a Teacher Mastery category was added at the last minute. Dozens of winners of Stage 1 (Content and Program) and Stage 2 (Design and Tech) went to San Francisco before the DML conference to pitch their ideas to a panel of esteemed judges.

Thursday, March 1, 2012

Open Badges and the Future of Assessment

Of course I followed the rollout of MacArthur’s Badges for Lifelong Learning competition quite closely. I have studied participatory approaches to assessment and motivation for many years.  

EXCITEMENT OVER BADGES
While the Digital Media and Learning program committed a relatively modest sum (initially $2M), it generated massive attention and energy.  I was not the only one who was surprised by the scope of the Badges initiative.  In September 2011, one week before the launch of the competition, I was meeting with an education program officer at the National Science Foundation.  I asked her if she had heard about the upcoming press conference/webinar.  Turns out she had been reading the press release just before our meeting.  She indicated that the NSF had learned about the competition and many of the program officers were asking about it.  Like me, many of them were impressed that Education Secretary Duncan and the heads of several other federal agencies were scheduled to speak at the launch event at the Hirshhorn museum.

THE DEBATE OVER BADGES AND REWARDS
As the competition unfolded, I followed the inevitable debate over the consequences of “extrinsic rewards” like badges on student motivation.  Thanks in part to Daniel Pink’s widely read book Drive, many worried that badges would trivialize deep learning and leave learners with decreased intrinsic motivation to learn. The debate played out nicely (and objectively) at the HASTAC blog via posts from Mitch Resnick and Cathy Davidson.   I have been arguing in obscure academic journals for years that sociocultural views of learning call for an agnostic stance towards incentives.  In particular I believe that the negative impact of rewards and competition says more about the lack of feedback and opportunity to improve in traditional classrooms than about the rewards themselves.  There is a brief summary of these issues in a chapter on sociocultural and situative theories of motivation that Education.com commissioned me to write a few years ago.  One of the things I tried to do in that article and the other articles it references is show why rewards like badges are fundamentally problematic for constructionists like Mitch, and how newer situative theories of motivation promise to resolve that tension.  One of the things that has been overlooked in the debate is that situative theories reveal the value of rewards without resorting to simplistic behaviorist theories of reinforcing and punishing desired behaviors.

Saturday, February 4, 2012

School Creativity Indices: Measurement Folly or Overdue Response to Test-Based Accountability?


Daniel T. Hickey
A February 2 article in Education Week surveyed efforts in California, Oklahoma, and other states to gauge the opportunities for creative and innovative work. One of our main targets here at Remediating Assessment is pointing out the folly of efforts to standardize and measure “21st Century Skills.” So of course this caught our attention.
What might come of Oklahoma Gov. Mary Smith’s search for a “public measurement of the opportunities for our students to engage in innovative work” or California’s proposed Creativity and Innovative Education index?
Mercifully, they don’t appear to be pushing the inclusion of standardized measures of creativity within high stakes tests. Promisingly, proponents argue for a focus on “inputs” such as arts education, science fairs, and film clubs, rather than “outputs” like test scores, and for voluntary frameworks instead of punitive indexes. Indeed, many of these efforts are described as a necessary response to the crush of high stakes testing. Given the looming train-wreck of “value-added” merit pay under Race to the Top, we predict that these efforts are not going to get very far. We will watch them closely and hope some good comes from them. 
What is most discouraging is what the article never mentioned. The words “digital,” “network,” and “writing” don’t appear in the article, and there is no consideration of the contexts in which creativity is fostered. Schools continue to filter any website with user-generated content, obstructing the pioneering educators who appreciate that digital knowledge networks are an accessible and important context for creative and knowledgeable engagement. 

Thursday, February 2, 2012

Finnish Lessons: Start a Conversation


Rebecca C. Itow and Daniel T. Hickey
In the world of education, we often talk of holding ourselves to “high standards,” and in order to ensure we are meeting these high standards, students take carefully written standardized exams at the state and national level. These tests are then used to determine the efficacy of our schools, curriculum, and teachers. Now, with more and more states tying these scores to value-added measures of teaching, these tests are having more impact than ever. But being so tied to the standards can be a detriment to classroom learning and national educational success.
Dr. Pasi Sahlberg of Finland spoke at Indiana University on January 20, 2012 to discuss accounts of Finnish educational excellence in publications like The Atlantic and the New York Times, and promote his new book, Finnish Lessons: What Can the World Learn from Educational Change in Finland? One of his main points was that the constant testing and accountability to which the U.S.'s students and teachers are subjected do not raise scores. He argued that frequent testing lowers scores because teachers must focus on a test that captures numerous little things, rather than delving more deeply into a smaller number of topics.

Saturday, December 17, 2011

Another Misuse of Standardized Tests: Color Coded ID Cards?


An October 4, 2011 Orange County Register article reports a California high school’s policy of color coding student ID cards based on performance on state exams, a practice that raises several real concerns, including student privacy. Anthony Cody, in his blog post “Color Coded High School ID Cards Sort Students By Test Performance,” published on October 6, 2011 in Education Week Teacher, writes that “[s]tudents [at a La Palma, CA high school] who perform at the highest levels in all subjects receive a black or platinum ID card, while those who score a mix of proficient and advanced receive a gold card. Students who score "basic" or below receive a white ID card.” These cards come with privileges and are meant to increase motivation to perform well on state standardized exams. Commenters on the blog raise concerns about “fixing identity” and worry that the practice conveys the idea that “learning and achievement isn't reward in itself. … You're not worth anything unless WE tell you are based on this one metric.” These are valid concerns, but the larger issue being highlighted here is the misuse and misapplication of the standardized tests themselves.

Tuesday, December 13, 2011

Introducing Rebecca

It has been just about six months since I closed up my classroom in sunny Southern California, picked up my life, and moved to Bloomington, Indiana to pursue my PhD in Learning Sciences. A year ago I certainly did not think I would be posting on a blog about Re-Mediating Assessment. I didn't think I would be writing up my research or helping teachers develop and discuss curriculum that fosters more participation and learning in their classrooms. But here I am.

In fact, a year ago I was celebrating Banned Books Week with my AP Language and Composition and Honors 9 English classes, preparing my Mock Trial team for another year of success, starting a competitive forensics team, chairing the AP department, and generally trying to convince my colleagues that my lack of “traditional” tests and use of technology in my almost-paperless classroom were not only good ideas, but actually enhanced learning. A year ago I was living life in sunny Southern California as normal ... then I decided to take the GRE. And I am so glad I did. It has been an interesting journey getting to this moment.

I never thought I would become a teacher. I have an AA in Dance, an AA in Liberal Arts, and a BA in Theatre Directing, but I found that working in the top 99-seat theatre in Los Angeles left me wanting more. When I went back for my MAEd and teaching credential, I was the only one who was surprised. Teaching students, I learned, is very much like directing actors: we want them to come to conclusions, but they need to come to them in their own way in order for the outcome to be authentic.

I have worked as a choreographer, director, and actor. I have taught 10-minute playwriting and directed festivals, as well as developed curriculum around this theme. I studied Tourette Syndrome under Dr. David Commings at the Beckman Research Institute at City of Hope, and informally counseled TS students at the high school. I am a classical dancer and recently picked up circus arts as a hobby.

Each of these very different interests contributed to my teaching. We explored literature through discussion, and often took on the roles of the characters to discuss what a piece said about the society in which it was written and its relevance today. Quite often administrators walked in while students were debating the ethics of the latest redaction of Adventures of Huckleberry Finn or discussing Fitzgerald’s symbolism while dressed to the nines at a Gatsby picnic. Still, I came up against resistance when presenting my methods and ideas to my colleagues; they didn't think that I was teaching if I wasn't giving traditional tests. I had too many A's and too few F's. I knew that I could effect greater change, but I wasn't sure how. Then the opportunity to come to Indiana University and work with Dan Hickey arose, and I had to take it.

Now I am in Bloomington, reflecting on a semester of writing, learning, studying, and creating curriculum. I have immersed myself in the school and culture and work here, and have found smaller networks of people with whom I can engage, play, think, debate, and grow. I am excited and encouraged by the adventures that await in this chapter, and am looking forward.

Monday, December 12, 2011

RMA is back!

After an extended hiatus, Re-Mediating Assessment is back.  In the meantime, lots has happened.  Michelle Honeyford completed her PhD and joined the faculty at the University of Manitoba in Winnipeg.  Jenna McWilliam has moved on to Joshua Danish's lab and is focusing more directly on critical theory in new media contexts.  She renamed her blog too.

Lots of other things have happened that my students and I will be writing about.  I promise to write shorter posts and focus more on commentary regarding assessment-related events.  I have a bunch of awesome new doctoral students and collaborators who are lined up to start posting regularly about assessment-related issues.


For now I want to let everybody know that today is the official release day of a new volume on formative assessment that Penny Noyce and I edited.  It has some great chapters.  On the Harvard Education Press website announcing the book, my assessment hero Dylan Wiliam said:
"This is an extraordinary book. The chapters cover practical applications of formative assessment in mathematics, science, and language arts, including the roles of technology and teachers’ professional learning. I found my own thinking about formative assessment constantly being stretched and challenged. Anyone who is involved in education will find something of value in this book."
Lorrie Shepard's foreword is a nice update on the state of assessment.  David Foster writes about using the tools from Mathematics Assessment Resources Services in the Silicon Valley Mathematics Initiative.  Dan Damelin and Kimberle Koile from the Concord Consortium write about using formative assessment with cutting-edge technology.  (And we appreciate that the Concord Consortium is featuring the book on their website.)

For me the best part was the chapter from Paul Horwitz of the Concord Consortium.  Paul wrote a nice review of his work with Thinker Tools and GenScope and the implications of that work for assessment.  Paul's chapter provided a nice context for me to summarize my ten-year collaboration with him around GenScope.  That chapter is perhaps the most readable description of participatory assessment that I have managed to write.  A much more detailed account of our collaboration was just accepted for publication by the Journal of the Learning Sciences and will appear in 2012.

I promise you will be hearing from us regularly starting in the new year.  We hope you will comment and share this with others.  And if you have posts or links that you think we should comment on, please let us know.  I will let the rest of the team introduce themselves and add their bios to the blog as they start posting.