
INFORMAL ASSESSMENT

Usefulness of Norm-Referenced Tests

Norm-referenced tests are standardized methods for assessing skills or behaviors. They are useful for making statistically accurate comparisons between the performance of a target student and the performances of other students of the same age or grade level. With the results of norm-referenced tests, it is possible to determine whether a target student’s performance is typical or atypical, based on national norms.

Problems with Norm-Referenced Tests

Norm-referenced tests are useful in assessing students for eligibility for special education, but they have significant limitations, such as:
  • May not accurately reflect the curriculum being taught
  • Have limited numbers of alternative forms, so problems exist with “test wiseness”
  • May not be sensitive to small gains in academic growth
  • Problems of bias exist in the selection of items

Informal Assessment

Informal assessment means using non-standardized methods of diagnosing learning problems and measuring student progress.

Advantages of Informal Assessment

Informal assessment has the following advantages over norm-referenced testing:
  • Can be more closely related to the curriculum
  • More sensitive to small gains
  • Less cumbersome to administer and score
  • Relates directly to planning instruction and teaching
  • Can identify specific error patterns

Comprehensive Assessment = Comprehensive Test Battery

Types of Informal Assessment

  • criterion-related assessment--an assessment that involves comparing a student’s performance to a given criterion rather than to a norm group (also called mastery testing)
  • curriculum-based assessment--tests that use excerpts from the general education curriculum as the subject matter for testing
  • direct measurement--measuring progress by using the same instructional materials or tasks that are used in the classroom
  • probes--brief tests used to assess mastery of a specific skill or sub-skill

Teacher-Made Tests

Research has shown that, when designing informal instruments, teachers are prone to errors that may skew the results (e.g., with matching items, followed by completion, essay, and true/false items). Teachers may write test items at different levels of learning, although many teachers use items at the knowledge level because they are easier to write. Such items require the student merely to recall, recognize, or match the material. Items tapping higher-order thinking skills are needed to assess a student’s ability to sequence, apply information, analyze, synthesize, infer, or deduce.

Criterion-Referenced Testing

Criterion-referenced tests (CRTs) compare the performance of a student to a given criterion for mastery. Criterion-referenced testing can be used to determine the examinee’s position along the continuum from acquisition to mastery. To be accurate, criterion-referenced tests must have “item density”: enough items in each domain to make sure that the topic is covered adequately. The advantages of CRTs include:
  • Practical
  • Fair
  • Assist with measuring educational accountability

Sources for CRTs

  • Adapt existing norm-referenced instruments
  • Use published criterion-referenced tests (like the Brigance Inventories)
  • Design teacher-made CRTs
    • Curriculum-based
    • Direct measurement

Establishing Criterion

With published CRTs, the authors provide a criterion for mastery on their instrument. When a teacher designs a CRT, the teacher must determine an appropriate mastery criterion. Some tasks require 100% mastery (e.g., math facts), and others can tolerate a lower standard such as 80% or 90% (e.g., reading comprehension). Typical criteria for mastery are listed below; a code sketch of these cutoffs follows the list:
  • More than 95% = mastery of objective
  • 90% to 95% = instructional level
  • 76% to 89% = difficult level
  • Less than 76% = failure level
  • (See Activity 5.6, p. 165)
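
For teachers who track these scores electronically, the cutoffs above are easy to encode. A minimal Python sketch (the function name is mine; the bands are taken from the list above):

```python
def mastery_level(percent_correct: float) -> str:
    """Classify a CRT score using the mastery bands listed above."""
    if percent_correct > 95:
        return "mastery of objective"
    if percent_correct >= 90:
        return "instructional level"
    if percent_correct >= 76:
        return "difficult level"
    return "failure level"

print(mastery_level(92))  # -> instructional level
```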

What Does Mastery Mean?

If a student is able to reach a mastery-level score just once on a particular criterion-referenced instrument, this does not necessarily mean that the student actually has mastered the skills being tested. To establish mastery with some certainty, the student would need to be tested over multiple trials.

Beyond Mastery: Other Considerations

  • Does passing the test mean that the student is proficient and will maintain the skills?
  • Is the student ready to progress to the next level in the curriculum?
  • Will the student be able to generalize and apply the skills in other contexts?
  • Would the student pass the mastery test if it were given again at a later date?

Brigance Inventories

The Brigance is a standardized assessment system that provides criterion-related assessment of basic academic skills. There are three age levels of Brigance Inventories:
  • Brigance Diagnostic Inventory of Early Development (birth to age 7)
  • Brigance Diagnostic Inventory of Basic Skills (elementary-aged students)
  • Brigance Diagnostic Inventory of Essential Skills (intermediate and secondary students)

Brigance Diagnostic Inventory of Early Development

The Brigance Inventory of Early Development, Revised (IED) is a criterion-referenced scale for young children. It contains ten skill clusters:
  • Pre-ambulatory motor
  • Gross motor
  • Fine motor
  • Self-help
  • Speech and language
  • General knowledge and comprehension
  • Readiness
  • Basic reading
  • Manuscript writing
  • Basic math
The IED is designed to help teachers identify present levels of performance and measure progress, and it serves as an instructional guide with written objectives for developing intervention programs.

Brigance Diagnostic Inventory of Basic Skills

  • Author: Albert Brigance (1991)
  • Publisher: Curriculum Associates
  • Description of Test: The test is presented in a plastic ring binder that is designed to be laid open and placed between the examiner and the student. A separate student booklet provided for the student’s answers is designed so that the skills range from easy to difficult; thus, the teacher can quickly ascertain the skill level the student has achieved.
  • Administration Time: Specific time limits are listed on many tests; others are untimed.
  • Age/Grade Levels: Grades K through 9. It is also used for academic assessment of older students who are functioning below sixth-grade academic levels.
  • Subtest Information: There are four subtests comprising 143 pencil-and-paper or oral-response tests: readiness, reading, language arts, and mathematics.

Brigance Comprehensive Diagnostic Inventory of Basic Skills-Revised (CIBS-R)

The CIBS-R is a criterion-referenced inventory designed for use with elementary and middle school students. The inventory includes subtests in the following domains: readiness, reading (word recognition, passage reading, word analysis, vocabulary), language arts (handwriting, grammar mechanics, spelling, and reference skills), and math (grade-level math, numbers, operations, and measurement). For each area, several subskills are assessed. For each of the items assessed within a subskill, objectives are included. If a student fails to show mastery of a particular subskill, the objective for that subskill can be used for educational planning. Some of the tests on the CIBS-R have been normed, and results can be expressed as standard scores, percentile ranks, and grade equivalents. A computer program, the CIBS-R Standardized Scoring Conversion Software, is available to assist in the scoring.

Brigance Inventory of Essential Skills

The criterion-referenced, individually administered Brigance Diagnostic Inventory of Essential Skills covers academic skill areas and life skills. The former includes reading/language arts, math, and study skills. Life skill subtests include food and clothing, money and finance, travel and transportation, and communication and telephone skills. The Inventory of Essential Skills also includes rating scales for measuring health and attitude, responsibility and self-discipline, job interview preparation, communication, and auto safety. Inventory materials include a student record book that records competency levels and defines instructional objectives, and a class record book that provides a matrix of skills assessed, skills mastered, and objectives for a group of up to 15 students. The inventory is widely used to assess secondary-level students and adult learners with special needs.

Strengths of the Brigance

The Brigance is considered one of the most comprehensive criterion-referenced instruments. It is also viewed as being well suited to determining mastery of very specific learning objectives. The test manual states that results of the Brigance should be considered in conjunction with the student’s classroom performance, classroom observations, and scrutiny of actual curriculum goals. The specific strengths of the Brigance include:
  • Helps to determine what a student has or has not learned
  • Contains suggestions for specific instructional objectives
  • Requires no testing expertise
  • Can help with referral decisions

Curriculum-Based Assessment

Curriculum-based assessment (CBA) means using materials and tasks from the general curriculum to diagnose learning problems or to measure student progress. Curriculum-based assessments are usually given at the end of an instructional period (summative). CBA assesses mastery of the specific content or skill taught during an academic period. Students’ results are compared against a standard of mastery (e.g., the student must pass with 80% of items correct).

What Is Curriculum-Based Measurement?

Curriculum-Based Measurement (CBM) is a method of monitoring student progress through direct, continuous assessment of basic skills. CBM is used to assess skills such as reading fluency, comprehension, spelling, mathematics, and written expression. Measures of early literacy skills (phonics and phonological awareness) are similar and are downward extensions of CBM.

CBM Is Formative Assessment

With curriculum-based measurement, the student is measured from the beginning of instruction against the ultimate goal for the student’s learning. For example, a child in third grade would be measured against a year-end goal in reading (e.g., beginning fourth grade), even though at the beginning of the year the child would not be expected to have mastered the goal. Throughout the school year, the student is measured against the year-end goal to see if the student is making reasonable progress. Measuring progress during instruction is called formative assessment. Formative assessment allows the teacher to make changes in instruction based upon the student’s academic performance. Thus, the teacher is able to make quick adjustments so the student does not “get stuck” and continues to make progress toward the ultimate learning goal.

How Valid Is CBM?

CBM assessment practices are based on 25 years of scientific research at the University of Minnesota and elsewhere (Deno, 1985; Deno, Marston, & Mirkin, 1982; Deno, Marston, Shinn, & Tindal, 1983). These informal tests are time efficient and inexpensive, yet produce accurate charts of student growth over time.

What Is a CBM Probe Like?

CBM probes last from 1 to 5 minutes, depending on the skill being measured, and student performance is scored for speed and accuracy to determine proficiency. Because CBM probes are quick to administer and simple to score, they can be given frequently to provide continuous progress data. The results are charted and provide for timely evaluation based on hard data.

What Is the Content of CBM?

As the name implies, CBM materials have historically been derived from individual school curricula. Currently, the CBM field is moving towards standard general-curriculum probes to increase standardization and allow more accurate comparisons. This is especially helpful when the curriculum changes over time.

Rule of Thumb

Teachers can design their own curriculum-based assessments using classroom materials. There are some basic guidelines for developing curriculum-based assessments. Below are some “rules” for assessment design in various academic areas (two of the scoring units are sketched in code after this list):
  • In reading, students read aloud from reading materials for 1 minute. The number of words read correctly per minute (WCPM) constitutes the basic decision-making unit.
  • In spelling, students write words that are dictated at specific intervals (either 5, 7, or 10 seconds) for 2 minutes. The number of correct letter sequences and words spelled correctly are counted.
  • In written expression, students write a story for 3 minutes after viewing a story starter. The number of words written, words spelled correctly, and/or correct word sequences are counted.
  • In mathematics, students write answers to computation problems in 2-minute probes. The number of correctly written digits in the correct position is counted.
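
A minimal Python sketch of two of these scoring units (the function names are mine). The letter-sequence count here uses simple positional matching, which undercounts when letters are inserted or omitted; formal CBM scoring guides handle those cases with alignment rules:

```python
def wcpm(words_attempted: int, errors: int, minutes: float = 1.0) -> float:
    """Words read correctly per minute: the basic CBM decision unit."""
    return (words_attempted - errors) / minutes

def correct_letter_sequences(target: str, response: str) -> int:
    """Simplified correct-letter-sequence count for one dictated word.

    Pads both words with boundary markers and counts adjacent letter
    pairs that match by position; a correctly spelled n-letter word
    earns n + 1 sequences.
    """
    t, r = f"^{target}$", f"^{response}$"
    return sum(1 for i in range(min(len(t), len(r)) - 1)
               if t[i:i + 2] == r[i:i + 2])

print(wcpm(words_attempted=62, errors=7))              # -> 55.0
print(correct_letter_sequences("because", "because"))  # -> 8
print(correct_letter_sequences("because", "becuase"))  # -> 5
```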

Baseline and Goal

In order to set up a measurement process, the teacher first determines a baseline in the skills to be taught. For example, the teacher would give three probes of a skill or set of skills. The scores for the three probes are averaged, or the median score can be selected as the baseline score. Once the baseline number has been determined, the teacher can estimate the goal (e.g., number of words read correctly, number of problems solved correctly, number of words spelled correctly) to be reached by the end of the year. The research literature provides guidance on reasonable yearly gains by grade level. For example, a second grader with a baseline of 55 correctly read words per minute can be expected to increase oral reading proficiency by approximately 38 words by the end of the year. The goal, then, is 93 correctly read words per minute.
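
The baseline-and-goal arithmetic can be sketched in a few lines of Python (the probe scores are hypothetical; the 38-word gain is the second-grade figure from the paragraph above):

```python
from statistics import median

def baseline_score(probe_scores: list[float]) -> float:
    """Baseline = median (or mean) of the three initial probes."""
    return median(probe_scores)

def year_end_goal(baseline: float, expected_gain: float) -> float:
    """Goal = baseline plus the research-based expected yearly gain."""
    return baseline + expected_gain

base = baseline_score([52, 55, 57])           # hypothetical probes -> 55
print(year_end_goal(base, expected_gain=38))  # -> 93
```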

Aimline

The aimline is the goal line against which progress is measured in curriculum-based measurement. To plot the aimline, the teacher begins at the baseline score and draws a line to the goal. To monitor the instruction, the data are plotted twice per week. When a student falls below the aimline for three consecutive measures, the instruction should be adjusted. When the student scores above the aimline for three consecutive measures, the instruction should be made more challenging.
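
The three-consecutive-measures rule is mechanical enough to sketch in code. A minimal Python version (the names and sample scores are mine; the decision rule is the one described above):

```python
def aimline(baseline: float, goal: float, points: int) -> list[float]:
    """Straight line from the baseline score to the goal."""
    step = (goal - baseline) / (points - 1)
    return [baseline + step * i for i in range(points)]

def instructional_decision(scores: list[float], aim: list[float]) -> str:
    """Apply the three-consecutive-measures rule described above."""
    below = above = 0
    for score, target in zip(scores, aim):
        below = below + 1 if score < target else 0
        above = above + 1 if score > target else 0
        if below == 3:
            return "adjust instruction"
        if above == 3:
            return "make instruction more challenging"
    return "continue current instruction"

aim = aimline(baseline=55, goal=93, points=8)  # e.g., 4 weeks at 2x/week
print(instructional_decision([56, 58, 57, 60, 59, 58], aim))
# -> adjust instruction (measures 2-4 fall below the aimline)
```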

Content for CBA

When a teacher is designing a curriculum-based assessment, a good source of information for what to include in the assessment is a scope and sequence. A scope and sequence is a formal listing of the range of skills (the scope) and the order in which those skills must be learned (the sequence) in a particular academic domain (e.g., reading, written expression, mathematics). To see an example of a scope and sequence, go to the Document Sharing section of the course.

Task Analysis

Sometimes when developing a CBA, the test designer will look at a specific skill or sub-skill and try to address it with items that constitute the steps toward completing a task or skill. The name for this process is task analysis. Task analysis simply means analyzing a task by breaking it down into the smallest steps or sub-skills.

Task Analysis Example

Let’s assume, for example, that the teacher wants to assess the following skill:
  • Recognizes initial consonant sounds and their association with the consonants in the alphabet
A task analysis for recognizing initial consonant sounds might include steps like these:
  • Test single letters (uppercase) for identification in this order--M, T, S, F, D, G, L, H, C, B, N, K, V, W, J, P
  • Test single letters (lowercase) for identification in this order--m, t, s, f, d, r, g, l, h, c, b, n, k, v, w, j, p
  • Test matching of upper- and lowercase consonants
  • Test matching most common sounds with consonants
  • Test initial consonant sound identification in CVC words

Error Analysis

Error analysis is one of the best ways to determine what types of academic problems a student may be having. The teacher looks for patterns of errors. Once the error patterns are discovered, the teacher can then develop instructional lessons to correct the errors. (See Activity 5.9, p. 167)

CBA/CBM Summary

Curriculum-based assessment is a relatively simple process, involving a thorough analysis of the requirements of the curriculum in a particular domain, the development of items to cover the domain, and arrangement of those items in order from the simplest (or easiest) to the most complex (or most difficult). Using CBM allows the teacher to keep track of a student’s progress in the curriculum and to compare one student’s scores to those of other classmates learning the same curriculum.

Value of Curriculum-Based Assessment

  • Provides more direct feedback to students
  • Supports increases in student achievement
  • Provides accurate screening information for eligibility
  • Provides useful data to determine when students are ready to return to the general education program
  • Is appropriate for assessing medication effects
  • Is useful in designing instructional programs

Cautions in Using CBM

  • Limited to measuring discrete skills; can’t measure global skills like creativity
  • More sensitive to changes in rote learning than in higher level thinking skills

Issues in Informal Testing

  • Are standards appropriate for student in terms of race, culture and gender?
  • Are test items free from cultural bias?
  • Is the language appropriate for the student?
  • Does the measure bypass the limitations imposed by the disability?
  • Are CBA measures of sufficient technical quality?
  • Does the CBA measure thoroughly cover the skill range?
  • Is the test long enough to provide enough information on the student’s performance?

Informal Assessment of Reading

Decoding, word recognition, fluency, and comprehension are the broad areas of reading that teachers typically assess using informal methods.
  • Decoding--the ability to associate sounds and symbols
  • Word Recognition--the ability to read words instantly on sight
  • Fluency--the rate and ease with which a student reads orally
  • Comprehension--the ability to derive meaning from written language
Informal techniques for assessing reading, by area:
  • Decoding: match letters and sounds; read isolated letters, blends, syllables, and real words; sound out nonsense words; sound out combinations of vowel sounds and patterns, consonant blends, and digraphs; read sentences that contain new words
  • Word Recognition: read grade-level word lists; read Dolch words
  • Fluency: timed oral reading of letters, word lists, phrases, sentences, and paragraphs
  • Comprehension: answering questions; paraphrasing; story retelling; cloze; maze (both sketched below); sentence verification; vocabulary identification
  • Attitudes Toward Reading: interview; questionnaire; student history
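
Cloze and maze tasks are simple to construct: words are deleted from a passage, and the student supplies (cloze) or chooses (maze) the missing words. A minimal Python sketch of a cloze generator, assuming the common every-fifth-word deletion convention (published procedures vary):

```python
def make_cloze(passage: str, nth: int = 5) -> str:
    """Replace every nth word of a passage with a blank."""
    words = passage.split()
    for i in range(nth - 1, len(words), nth):
        words[i] = "_____"
    return " ".join(words)

print(make_cloze("The ball came right over the plate and I hit it hard"))
# -> The ball came right _____ the plate and I _____ it hard
```

A maze task is built the same way, except each blank is replaced with the correct word plus two or three distractors for the student to choose from.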

Informal Reading Inventories

Informal reading inventories are informal assessments that usually contain graded word lists for testing word recognition ability and graded reading passages for evaluating oral reading, silent reading, and comprehension. Teachers may select from among many commercially available informal reading inventories or design one of their own. To learn more about commercial informal reading inventories, conduct a search with the ETS Test File at http://ericae.net/testcol.htm using the keywords reading inventories. Below is a description of one commercial inventory.

Informal Reading Inventory, 5th Edition (IRI-5)

The Informal Reading Inventory by Burns and Roe introduces reading passages with a sentence that provides a purpose for reading. The student reads the selection orally and then responds to eight comprehension questions read by the examiner. The number and percentage of word recognition and comprehension errors made by the student are recorded to determine whether the selection falls at the student’s Independent, Instructional, or Frustration reading level.

Basic Sight Words (a.k.a. High-Frequency Words, or the Dolch List)

Sample Informal Reading Inventory

Motivation Statement: Imagine how you would feel if you were up to bat and this was your team’s last chance to win the game! Please read this story.

Passage: Whiz! The baseball went right by me, and I struck at the air! “Strike one!” called the man. I could feel my legs begin to shake! Whiz! The ball went by me again, and I began to feel bad. “Strike two,” screamed the man. I held the bat back because this time I would kill the ball! I would hit it right out of the park! I was so scared that I bit down on my lip. My knees shook and my hands grew wet. Swish! The ball came right over the plate. Crack! I hit it a good one! Then I ran like the wind. Everyone was yelling for me because I was now a baseball star!

Comprehension Questions and Possible Answers

  • 1. What is this story about? (Main idea--a baseball game; someone who gets two strikes and finally gets a hit)
  • 2. After the second strike, what did the batter plan to do? (Factual--hit the ball right out of the park)
  • 3. Who is the “man” in this story who called strikes? (Inferential--the umpire)
  • 4. In this story, what was meant when the batter said, “I would kill the ball”? (Terminology--hit it very hard)
  • 5. Why was the last pitch a good one? (Cause and effect--because it went right over the plate)
  • 6. What did the batter do after the last pitch? (Cause and effect--the batter hit it a good one and ran like the wind)

Scoring an IRI

Error Count:
  • Omissions _____ Aided words _____
  • Insertions _____ Repetitions _____
  • Substitutions _____ Reversals _____
Scoring Guide (word recognition errors / comprehension errors; a code sketch follows):
  • Independent: 1 / 0
  • Instructional: 6 / 1-2
  • Frustration: 12+ / 3+
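
Turning the scoring guide into a classifier makes the cutoffs concrete. A minimal Python sketch (the thresholds come from the guide above; the guide leaves the band between 7 and 11 word recognition errors to teacher judgment, and this sketch conservatively treats it as Frustration):

```python
def iri_level(word_recognition_errors: int, comprehension_errors: int) -> str:
    """Classify a passage using the IRI scoring guide above."""
    if word_recognition_errors <= 1 and comprehension_errors == 0:
        return "Independent"
    if word_recognition_errors <= 6 and comprehension_errors <= 2:
        return "Instructional"
    # 7+ word recognition errors or 3+ comprehension errors
    return "Frustration"

print(iri_level(word_recognition_errors=4, comprehension_errors=1))
# -> Instructional
```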

IRI Reading Levels

The results obtained from IRIs are grade-level scores. Typically, informal inventories provide three reading levels: Independent Level, Instructional Level, and Frustration Level. A student’s Independent Level is the level of graded reading materials that can be read easily with a high degree of comprehension and few errors in decoding. At this level, the student reads independently, without instruction or assistance from the teacher. Reading materials at the student’s Instructional Level are somewhat more difficult; this is the level appropriate for reading instruction. Materials at the Frustration Level are too difficult for the student; decoding errors are too frequent and comprehension too poor for instruction to occur.

Criteria for Reading Levels

According to Kirk, Kliebhan, and Lerner (1978), the usual criteria for determining independent, instructional, and frustration levels are as follows:
  • Independent Reading Level: word recognition 98% to 100%; comprehension 90% to 100%
  • Instructional Reading Level: word recognition 95%; comprehension 75%
  • Frustration Reading Level: word recognition less than 90%; comprehension less than 50%
These levels have been criticized for being too stringent. For example, Spache (1972) warned that “if the teacher employs an Informal Reading Inventory (IRI) for his estimate of instructional level, he may be expecting children to read with a very unrealistic degree of oral accuracy.”

Error Analysis in Reading

Error analysis is generally used to investigate decoding mistakes in oral reading. The teacher records deviations from the printed text that the student makes while reading orally. Several types of errors can occur when students read connected text. Most systems of error analysis include at least four classes of errors:
  • Additions--the reader adds words or parts of words to the text
  • Substitutions--the reader mispronounces a word or parts of words; this type of error is also called a mispronunciation (e.g., want for what)
  • Omissions--the reader fails to pronounce words or parts of words. This error occurs when readers skip words, when they hesitate in responding, or when they say they do not know a word
  • Reversals--the reader changes the order of the words in a phrase or sentence, or the order of sounds within a word

Miscue Analysis

An alternate method of error analysis takes into account the quality of the errors that readers make. This is called miscue analysis. Miscues are analyzed to determine whether they represent a change in meaning from the original text. For example, the substitution of hold for fight in “fight back the tears” is semantically correct and does not alter meaning. However, the substitution of ready for right in “he’ll be all right” does change the sense of the passage. Miscues that produce changes in meaning can be further analyzed. For example, the student’s miscue and the original text can be compared in these three ways:
  • Graphic Similarity: How much do the two words look alike?
  • Sound Similarity: How much do the two words sound alike?
  • Grammatical Function: Is the grammatical function of the reader’s word the same as the grammatical function of the text word?

Informal Assessment of Mathematics

Math is a relatively easy subject to assess using informal methods. The areas that are usually assessed include:
  • Math facts
  • Computation
  • Math reasoning
  • Math applications
The assessment should be combined with both task analysis and error analysis to determine specific problem areas. These problem areas should be further assessed by using probes to determine the specific difficulty. Interviewing the student is also helpful in determining how the student is reasoning through a problem.

Methods of Informal Math Assessment

  • Informal Inventories—Informal inventories survey a variety of skills to determine where the student’s strengths and weaknesses lie. Inventories usually have only one or two examples of each type of math problem, so errors must be analyzed further with more specific probes.
  • Criterion-Referenced Tests—CRTs are used to assess mastery of specific mathematics skills (e.g., multiplication by 9).
  • Error Analysis—Error analysis is a process of looking at the student’s responses to determine why a mistake was made and to see if there is a pattern of repeated types of errors. Error analysis differentiates between systematic computation errors and errors that are random or careless mistakes.
  • Diagnostic Probes—Probes are in-depth assessments of the mastery of a specific skill or sub-skill; typically a probe contains several items focused on the same skill.
  • Clinical Math Interviews—Clinical interviews elicit information about the procedures that students use to arrive at their answers. The student is observed going about the mathematics task and then the student is interviewed to find out the cognitive strategies he or she used to accomplish the task.
  • Portfolio Assessment—A portfolio should contain several examples of the student’s work, including classroom quizzes or assignments, group or individual projects, written math reports or math logs, or artwork related to mathematics. Portfolios may also contain results of standardized tests and informal assessments, student self-assessments, and student interest surveys and questions. Teachers might include checklists of student progress, graphs of results from CBA measures and records of clinical math interviews.

Example of an Informal Inventory

  • Addition: 6+2, 3+5, 4+0, 10+5, 8+3, 11+4, 17+5, 33+15, 67+71
  • Subtraction: 6-4, 3-3, 4-0, 17-3, 98-4, 47-32, 10-3, 14-6, 27-24
  • Multiplication: 3x2, 2x2, 2x8, 6x0, 33x4, 22x4, 3x22, 232x3, 204x4

Example of a Math CRT (Criterion for Mastery: 100%, 10/10 correct)

  • Directions: Round off each number to the nearest
  • hundred (100).
  • 721 __________
  • 7,879 __________
  • 6,834 __________
  • 881 __________
  • 8,502 __________
  • 13,782 __________
  • 789,332 __________
  • 3,055 __________
  • 803 __________
  • 419 __________

Math Error Analysis

  • The teacher examines the student’s work and observes how the student goes about solving the problems. The teacher can then analyze what types of errors the student is making. Common types of errors include:
    • Incorrect operation
    • Incorrect number fact
    • Incorrect algorithm
    • Errors in place value
    • Failure to follow the correct sequence of steps or placement (e.g., not working from right to left)
    • Copying or handwriting errors
    • Random errors

Error Analysis Practice

For each of the following problems, analyze and describe the types of errors the student is making. Note that within the same box, all of the problems display the same error. (A code sketch for checking hypothesized error patterns follows the list.)
  • A: 83 + 67 = 1410 and 66 + 29 = 815
  • B: 476 + 851 = 148 and 753 + 693 = 1113
  • C: 67 + 31 = 17 and 58 + 12 = 16
  • D: 627 - 486 = 261 and 861 - 489 = 428
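
One way to test a hypothesized error pattern is to encode it and check whether it reproduces the student’s answers. A minimal Python sketch for two common patterns (the function names are mine, and the box labels reflect my reading of the grid above):

```python
def no_carry_addition(a: int, b: int) -> int:
    """Write each column sum in full instead of regrouping.

    For example, 83 + 67 -> '14' and '10' -> 1410, matching box A.
    """
    width = max(len(str(a)), len(str(b)))
    sa, sb = str(a).zfill(width), str(b).zfill(width)
    return int("".join(str(int(da) + int(db)) for da, db in zip(sa, sb)))

def smaller_from_larger(a: int, b: int) -> int:
    """Always subtract the smaller digit from the larger, never borrowing.

    For example, 627 - 486 -> 261, matching box D.
    """
    width = max(len(str(a)), len(str(b)))
    sa, sb = str(a).zfill(width), str(b).zfill(width)
    return int("".join(str(abs(int(da) - int(db))) for da, db in zip(sa, sb)))

print(no_carry_addition(83, 67))      # -> 1410
print(smaller_from_larger(861, 489))  # -> 428
```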

More Error Analysis

  • Subtraction (boxes E and F): 175 - 54 = 1111, 185 - 22 = 1513, 632 - 147 = 495, 523 - 366 = 167, 563 - 382 = 181
  • Multiplication (box G): 17 x 4 = 128, 46 x 8 = 648
  • Division (box H): 1206 ÷ 6 = 21

Teacher-Made Probes

Teacher-made probes can be used to identify specific problem areas. Mixed probes are used to locate areas that need further assessment or instruction. In the probe shown in the next section, each of the following categories has nine items:
  • basic addition facts with sums to 9 (first item and then every fourth item),
  • two-digit numbers plus two-digit numbers with no regrouping (second item and then every fourth item),
  • two-digit numbers plus one-digit numbers with no regrouping (third item and then every fourth item), and
  • basic addition facts with sums to 18 (fourth item and then every fourth item).
When scoring a probe, the student receives one point for every correct digit in the correct place. On this probe, the student can obtain a maximum score of 63 correct digits with no errors. After three administrations, a high score of 40 or more correct digits per minute with no errors is a reasonable criterion for diagnostic purposes.
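
The correct-digit scoring rule can be sketched in a few lines of Python (the function name is mine; answers are compared right-aligned, units with units, as they would be written in the probe):

```python
def correct_digits(correct_answer: int, student_answer: int) -> int:
    """Count digits that are correct and in the correct place value."""
    key = str(correct_answer)[::-1]    # reverse so index 0 is the units place
    given = str(student_answer)[::-1]
    return sum(1 for k, g in zip(key, given) if k == g)

# 43 + 36 = 79: an answer of 79 earns 2 points; an answer of 75 earns
# 1 point, because the 7 in the tens place is still correct.
print(correct_digits(79, 79))  # -> 2
print(correct_digits(79, 75))  # -> 1
```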

Example of a Math Probe

  • Row 1: 4+3, 22+41, 33+6, 9+7, 6+2, 36+62, 41+3, 6+5, 8+0
  • Row 2: 53+44, 78+1, 5+8, 7+2, 43+36, 82+5, 7+4, 5+3, 61+37
  • Row 3: 7+6, 5+2, 82+13, 37+2, 6+6, 4+4, 31+18, 57+32, 7+9
  • Patterns: 0-9 facts, 2D + 2D, 2D + 1D, and 0-18 facts
  • Number of Correct Digits: ___________

Error Analysis of Probe Results

If the student fails to reach the criterion on a mixed probe, it is important to analyze the responses and locate the errors in the items missed. This analysis provides the teacher with information for further assessment with specific skill probes (such as 0-9 facts). Also, specific skill probes can be used to monitor the daily progress of the student.

For what grade level do you think this probe was designed?

Analysis of Math Observation and Clinical Interview

  • What previous knowledge did the student bring to this problem?
  • To what extent are the ideas accurate and complete?
  • What strategy did the student employ to solve the problem?
  • Could the student do the steps of the problem in the proper sequence?
  • Were the student’s calculations accurate?

Diagnostic Questions for Analysis of Problem Solving

  • Agree Disagree
  • ______ ______ Decodes words correctly in story problem
  • ______ ______ Understands the meaning of the situation described in the story problem
  • ______ ______ Identifies the relevant and irrelevant information in the problem
  • ______ ______ Can illustrate the components of the problem
  • ______ ______ Selects the appropriate operation (addition, subtraction, multiplication or division)
  • ______ ______ Writes down the computational problem correctly
  • ______ ______ Remembers number facts correctly
  • ______ ______ Selects the appropriate computational algorithm
  • ______ ______ Estimates the correct answer
  • ______ ______ Determines if the answer “makes sense”

Informal Assessment of Written Expression

Written expression includes a complex array of skills, all of which must be working relatively well in order for the written product to be legible, understandable, and persuasive. Informal probes of writing skills can be done in each of the important areas, including handwriting, writing mechanics, spelling, and composition. Below is a chart showing common methods of informal writing assessment by category.

Common Informal Methods of Assessing Writing

  • Handwriting: analysis of a handwriting sample (copy a 100-word passage); rating a handwriting sample according to a grade-level template (e.g., shape, slant, spacing, size, smoothness); inventories and CRTs (e.g., Denver Handwriting Analysis, Brigance); writing samples; work samples
  • Writing Mechanics: CRTs of punctuation, capitalization, and grammar; informal surveys of punctuation, capitalization, and grammar; rating scales; work samples
  • Spelling: paper-and-pencil dictation test; oral spelling; multiple-choice format (choose the correct spelling); spontaneous writing sample; inventories of regular words, irregular words, and homophones; criterion-referenced tests (usually of grade-level words)
  • Composition: rating scales and checklists of skills; writing sample analysis (write for 15 minutes on a topic or from a story starter); observation and clinical interview; work samples

Writing Sample

the peopol of englind didn’t the cherch roals. So a group of pepol got to gether and desided to live. So after a lot of comfermising. The king gov them 3 ships and they set sail for a mew land. They sailed a long ways for a to long tine. Then they saw it land it was North amareca. They landid on plymouth rock. There they started to beld the ferst coliny. The firs winter wase the hardes a lot peopl dide from being sick. Afte the winter was over the ingin’s becom frinds with them and to them how to hunt and grow food.

What kinds of errors do you see in this composition? What is your error analysis?

Excerpt from a Writing Checklist

  • Agree Disagree
  • Content
  • ______ ______ Does the writing clearly communicate an idea or ideas to the reader?
  • ______ ______ Is the content adequately developed?
  • ______ ______ Is the content interesting to the potential reader?
  • Vocabulary
  • ______ ______ Does the writer select appropriate words to communicate his/her ideas?
  • ______ ______ Does the writer use precise/vivid vocabulary?
  • ______ ______ Does the writer effectively use verbs, nouns, adjectives and adverbs?
  • ______ ______ Does the vocabulary meet acceptable standards for written English (e.g., isn’t vs. ain’t)?
  • Sentences
  • ______ ______ Are the sentences complete (subject and predicate)?
  • ______ ______ Are run-on sentences avoided?
  • ______ ______ Are exceptionally complex sentences avoided?
  • ______ ______ Are the sentences grammatically correct (e.g., word order, subject-verb agreement)?
  • Paragraphs
  • ______ ______ Do the sentences in the paragraph relate to one topic?
  • ______ ______ Are the sentences organized to reflect the relationships between ideas within the paragraph?
  • ______ ______ Does the paragraph include a topical, introductory or transition sentence?

Sample Writing Interview Questions

  • You’ve finished your composition. Tell me about what you’ve written.
  • When you finished, did you read over what you had written? Did you make any changes?
  • What did you change?
  • Did you have anyone else read your paper? Did you change your paper on the basis of suggestions that someone else made?
  • While you were writing, what did you think about? Did you consider the ideas you were writing about? What should come first, second, and so on? Choosing the exact words to express your meaning? Spelling the words correctly, using correct punctuation, and following all the rules for correct grammar?
  • Do you think that you’ve accomplished your purpose in writing? Why or why not? If not, what do you need to change?
  • Do you think your writing will be understandable for your audience? Is the vocabulary suitable? The tone? If not, what do you need to change?

Spelling Assessment

A short informal spelling test can be designed by selecting grade-level words from a frequency-of-use word list. The student is asked to spell on paper words from each grade list until three words in a grade list are missed. The student’s spelling level can be estimated as the highest grade level at which two or fewer words are missed.
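
This stop rule is simple enough to sketch in Python (the function name and the sample results are mine):

```python
def spelling_level(missed_by_grade: dict[int, int]) -> int | None:
    """Estimate spelling level from graded word-list results.

    `missed_by_grade` maps each grade list given to the number of
    words missed on it; testing stops once a list has 3 misses.
    The estimated level is the highest grade with 2 or fewer misses.
    """
    level = None
    for grade in sorted(missed_by_grade):
        if missed_by_grade[grade] <= 2:
            level = grade
        else:
            break  # 3 misses: testing stopped here
    return level

# Hypothetical results: the student missed 0, 1, 2, and then 3 words.
print(spelling_level({2: 0, 3: 1, 4: 2, 5: 3}))  # -> 4
```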

Common Types of Spelling Errors

  • Dysphonetic errors. Spelling errors that reflect inaccurate spellings without regard to phonics. Words may have some correct letters, but the letters are placed in bizarre positions, such as ronaeg for orange. Students with this problem read and spell primarily through visualization.
  • Dyseidetic errors. Spelling errors that reflect phonic-equivalent errors (e.g., pese for peace, det for debt).
What kinds of spelling errors do you see in the writing sample shown earlier?

Summary of Methods of Informal Assessment

  • Criterion-Referenced Tests—tests of one skill with a designated level of accuracy in order for the skill to be considered mastered
  • Curriculum-Based Assessments—informal tests using content from the curriculum
  • Probes—tests of specific skills or sub-skills with multiple examples of the same skill to determine strengths and weaknesses of the student
  • Checklists—lists of academic or behavioral skills
  • Questionnaires—questions about a student’s behavior or academic performance that can be answered by the student or by a parent or teacher
  • Work Samples—samples of a student’s classroom work
  • Permanent Products—products made by the student that can be analyzed for academic or behavioral performance
  • Performance assessment--assessment that requires the student to create an answer or product to demonstrate knowledge
  • Authentic assessment--assessment that requires the student to apply knowledge in the real world
  • Portfolio assessment--evaluating student progress, strengths, and weaknesses using a collection of different measurements and work samples

Portfolio Contents

  • daily assignments
  • samples of student writing
  • examples of student problem solving
  • informal inventories and probes
  • criterion-referenced tests
  • results of standardized tests
  • story maps
  • reading log or reading list
  • math log
  • vocabulary journal
  • artwork, project papers, photographs, and other products
  • group work, papers, projects, and products
  • daily journal
  • writing ideas
  • letters to pen pals, letters exchanged with teacher
  • out-of-school writing and artwork
  • unit and lesson tests
  • multi-media products
  • independent research
  • self-evaluations
  • handwriting samples
  • math computation exercises

Elements of High Quality Assessment

  • Authentic and valid
  • Encompass the whole child
  • Involve repeated observations of patterns of behavior
  • Ongoing and continuous
  • Use a variety of methods
  • Provide a means for feedback
  • Provide an opportunity for students, teachers, and parents to discuss progress

