As UMUC faculty move toward detailed definition and assessment of student learning outcomes, grading rubrics and other holistic assessment tools are taking on increasing prominence. An enthusiastic advocate is UMUC Graduate School's Stella Porto. Prof. Porto has been researching, using, and spreading the word about the value of rubrics since the summer of 2002. That year, she was invited to give a 1-day course on assessment in São Paulo, Brazil, at an international conference held by the Brazilian Distance Education Association. While preparing for that event, she found much of the work by T.A. Angelo and K.P. Cross inspiring, and she adapted many of their assessment practices to the online environment. Her active use of rubrics began as a by-product of her research and has become an increasingly indispensable element of her teaching. During these 2 years of implementing and improving her rubrics, Prof. Porto has been gratified by overwhelmingly positive student response to her assessment specifications and her comprehensive feedback on assignments, as well as by a reduction in student questions about assignment expectations and grading.
Students continue to demand and enroll in online courses, but are not always satisfied with their experiences. The purpose of this study was to determine whether students' responses to evaluations of online courses could be used to identify faculty actions that might lead to improved evaluation scores in teaching effectiveness and overall course value. Controversy continues over the validity of student evaluations as a measure of faculty effectiveness and overall course quality, and faculty do not always use the collected data to improve their teaching. Results indicate that stimulation of learning had the greatest effect on perceptions of teaching effectiveness, and that useful and relevant assignments correlated most strongly with overall course value.
Mr. Kaplan, who runs one of the highest-achieving schools in the state, has been evaluating teachers since the education commissioner was a teenager. No matter. He is required by Nassau County officials to attend 10 training sessions, as is Carol Burris, the principal of South Side High School here, who was named the 2010 Educator of the Year by the School Administrators Association of New York State. “It’s education by humiliation,” Mr. Kaplan said. “I’ve never seen teachers and principals so degraded.”
A longtime friend on the school board of one of the largest school systems in America did something that few public servants are willing to do. He took versions of his state’s high-stakes standardized math and reading tests for 10th graders, and said he’d make his scores public.
Here's a question guaranteed to make your stomach lurch: "Would you mind if I gave you some feedback?" What that actually means is "Would you mind if I gave you some negative feedback, wrapped in the guise of constructive criticism, whether you want it or not?" The problem with criticism is that it challenges our sense of value. Criticism implies judgment and we all recoil from feeling judged. As Daniel Goleman has noted, threats to our esteem in the eyes of others are so potent they can literally feel like threats to our very survival.
"Welcome to the Graduate School of Management and Technology (GSMT) Assessment in Writing and English (AWE). This free tool is specifically designed to help graduate students evaluate their current skill in grammar, language conventions, written expression, and reading comprehension—competencies that are essential for success in graduate courses."
The best way to eliminate grade inflation is to take professors out of the grading process: Replace them with professional evaluators who never meet the students, and who don't worry that students will punish harsh grades with poor reviews. That's the argument made by leaders of Western Governors University, which has hired 300 adjunct professors who do nothing but grade student work.
"Multiple choice decisions are much more common in daily life than we realize. When we look around at all the information that we pass in our daily activities, it's obvious that we are constantly being required to sort, categorize and prioritize information in order to make it useful. Most of these decisions are fairly inconsequential and minor, but some can have immediate or long-lasting repercussions. An overabundance of information can result in information overload. When we process information we must break it down into a digestible size and organize it so that we can understand it. Multitasking is only useful if we can effectively process the amount and various types of information we are being presented with. The same decision-making process that we use to process information effectively throughout our day can also be applied to a specific area of endeavor, such as test or exam writing. By sorting out information presented to us on a test, we can greatly increase our effectiveness. Here are a few quick rules to greatly increase your multiple choice (multiple guess) test-taking effectiveness."
Alfie Kohn writes and speaks widely on human behavior, education, and parenting. The author of twelve books and scores of articles, he lectures at education conferences and universities as well as to parent groups and corporations.
Kohn's criticisms of competition and rewards have been widely discussed and debated, and he has been described in Time magazine as "perhaps the country's most outspoken critic of education's fixation on grades [and] test scores."
"The National Center for Fair & Open Testing (FairTest) works to end the misuses and flaws of standardized testing and to ensure that evaluation of students, teachers and schools is fair, open, valid and educationally beneficial."
"AWE, the Assessing Women and Men in Engineering Project, provides assessment tools for people involved in K-16 formal and informal educational outreach activities. AWE assessment tools provide researchers and evaluators with high quality data and the possibility of meta-data based on comparisons of responses to consistent quantitative surveys from a variety of organizations or activities."
"OF all the goals of the education reform movement, none is more elusive than developing an objective method to assess teachers. Studies show that over time, test scores do not provide a consistent means of separating good from bad instructors. Test scores are an inadequate proxy for quality because too many factors outside of the teachers’ control can influence student performance from year to year — or even from classroom to classroom during the same year. Often, more than half of those teachers identified as the poorest performers one year will be judged average or above average the next, and the results are almost as bad for teachers with multiple classes during the same year. Fortunately, there’s a far more direct approach: measuring the amount of time a teacher spends delivering relevant instruction — in other words, how much teaching a teacher actually gets done in a school day. "
"We can ask students to self-report average time spent studying on a course (they are surprisingly honest!). We can scan and save student writing samples to demonstrate that students are not prepared, or that they have learnt something, but it may not be what they were supposed to learn for the course (mine usually write better at the end -- at least "write a well-argued analytical essay that answers a historical question, supported with specific detail" is one of our outcomes!). But I ask you -- how can assessment be meaningful if we spend as much time teaching students to be students as we do our subject? And how can the teaching in our subject not suffer if we are taking so much time away from it to give students the skills they need to succeed (to a point -- if students really are clueless and hopeless, I will ask them to drop)? I have colleagues who simply fail such students, but there has to be a better resolution."
"If the purpose of a college education is for students to learn, academe is failing, according to Academically Adrift: Limited Learning on College Campuses, a book being released today by University of Chicago Press. The book cites data from student surveys and transcript analysis to show that many college students have minimal classwork expectations -- and then it tracks the academic gains (or stagnation) of 2,300 students of traditional college age enrolled at a range of four-year colleges and universities. The students took the Collegiate Learning Assessment (which is designed to measure gains in critical thinking, analytic reasoning and other "higher level" skills taught at college) at various points before and during their college educations, and the results are not encouraging:
- 45 percent of students "did not demonstrate any significant improvement in learning" during the first two years of college.
- 36 percent of students "did not demonstrate any significant improvement in learning" over four years of college.
- Those students who do show improvements tend to show only modest improvements. Students improved on average only 0.18 standard deviations over the first two years of college and 0.47 over four years.
What this means is that a student who entered college in the 50th percentile of students in his or her cohort would move up to the 68th percentile four years later -- but that's the 68th percentile of a new group of freshmen who haven't experienced any college learning."
"GSMT has developed the Assessment in Writing and English (AWE) to help you be more successful at UMUC. AWE is a free tool designed to evaluate your current skills in grammar, language conventions, written expression, and reading comprehension. Your participation is voluntary and your AWE results will not impact the grades in any of your degree or certificate classes. We strongly recommend, however, that you complete the assessment. "
"“In reviewing about 100-some-odd accreditation reports in the last few months, it has been useful in our work here at Washington State University to distinguish ‘stuff’ from evidence. We have adopted an understanding that evidence is material or data that has been analyzed and that can be used, as dictionary definitions state, as ‘proof.’ A student gathers ‘stuff’ in the ePortfolio, selects, reflects, etc., and presents evidence that makes a case (or not)… The use of this distinction has been indispensable here. An embarrassing amount of academic assessment work culminates in the presentation of ‘stuff’ that has not been analyzed--student evaluations, grades, pass rates, retention, etc. After reading these ‘self studies,’ we ask the stumping question--fine, but what have you learned? Much of the ‘evidence’ we review has been presented without thought or with the general assumption that it is somehow self-evident… But too often that kind of evidence has not focused on an issue or problem or question. It is evidence that provides proof of nothing. (And I am aware of and distinguishing here the research usage of proof, in which even a rigorous design with meticulous statistics that demonstrate a significant gain suggest but do not prove…)” [Gary Brown, Washington State University, private e-mail, 9-26-10, used with permission]"
“Enough is enough,” say faculty members reviewing portfolio reports that resemble scrapbooks. “Where is the analysis?” they ask. “Where is the thinking?” Evidence-based learning concepts offer a way to re-frame the portfolio process so it produces meaningful and assessable evidence of achievement.
"How to assess and match tools to academic needs, and how to understand academic and administrative considerations when selecting specific applications. What do I need to know to use social media tools for learning productively and effectively, without spending an enormous amount of time learning about the tool or becoming a social media guru?"
"As the pace of technology change continues unabated, institutions are faced with numerous decisions and choices with respect to support for teaching and learning. With many options and constrained budgets, faculty and administrators must make careful decisions about what practices to adopt and about where to invest their time, effort, and fiscal resources. As critical as these decisions are, the information available about the impact of these innovations is often scarce, uneven, or both. What evidence do we have that these changes and innovations are having the impact we hope for? What are the current effective practices that would enable us to collect that evidence? With the advent of Web 2.0, the themes of collaboration, participation, and openness have greatly changed the teaching and learning landscape. In light of these changes, what new methods for collecting evidence of impact might need to be developed? Established practices and good data have made inroads in these areas. Often, however, they are scattered, disconnected, and at times in competition, making it challenging for the teaching and learning community to discover and compare their merits. Bringing these practices together and encouraging the invention of new ones will enable more institutions to measure impacts and produce data, providing a richer, evidence-based picture of the teaching and learning landscape at both the national and international levels. The ELI announces a program intended to bring the teaching and learning community into a discussion about ways of gathering evidence of the impact of our innovations and current practices. We hope to bring all types of higher education institutions and professional associations into a conversation on this theme. We envision an inclusive discussion that brings together faculty members, instructional support professionals, librarians, students, and research experts in a collaborative exchange of insights and ideas."
What Are the Praxis II Tests?
Praxis II Subject Assessments measure knowledge of specific subjects that K–12 educators will teach, as well as general and subject-specific teaching skills and knowledge.
Who Takes the Tests and Why?
Individuals entering the teaching profession take the Praxis II tests as part of the teacher licensing and certification process required by many states. Some professional associations and organizations require Praxis II tests as a criterion for professional licensing decisions.
How Are the Praxis II Tests Given?
Praxis II tests are available in paper-based format only and are offered on pre-scheduled dates throughout the year at universities, high schools and other locations throughout the world.
Groups interested in assessment