Monday, July 5, 2010

Assessing Students with Learning Disabilities

The reauthorized Individuals with Disabilities Education Improvement Act of 2004 (IDEA, 2004) includes provisions that address how specific disabilities are identified. Acknowledging criticism of the previous Act, the revision sought to improve the procedures and criteria used in assessing learning disabilities (LD).
That criticism focused on discrepancies between conceptual definitions, which are multi-faceted, and operational definitions, which have traditionally reduced the construct of learning disabilities to a single dimension (Reschly & Hosp, 2004). Learning disabilities, which are intrinsic to the individual, involve disorders of learning and cognition that significantly affect a relatively narrow range of academic and performance outcomes (Bradley, Danielson, & Hallahan, 2002).
Understanding the components of the conceptual definition of LD is important, and identifying measures that assess those components is the focus of this review of the literature on educational assessments for students with learning disabilities. Educational assessments document a student’s knowledge, skills, and abilities, usually in measurable terms.
Butler and McMunn (2006) define assessment as the act of collecting information about individuals or groups of individuals in order to understand them. Assessment can take many forms, such as day-to-day observation, tests and quizzes, essays, self-assessments, and journaling, to name a few. Assessments should cover the following areas: student work at all stages of development, student process, knowledge and skill, programmatic processes, and instructional methods.
Student evaluation is an ongoing process that includes both formative and summative components, providing accurate feedback on teaching methods, the types of activities used, student response, and, as a result, student performance. This paper presents a brief review of the literature on assessing students with learning disabilities (LD) and identifies relevant validity, reliability, ethical, and legal issues.
Learning disabilities and assessments
A learning disability is a disorder in one or more of the basic psychological processes involved in understanding or using language, spoken or written, and affects how individuals receive, analyze, store, and retrieve information (Hammill, Leigh, McNutt & Larsen, 1987). Learning disabilities manifest themselves through difficulties experienced over time and can present in areas such as organization, time management, and attention (Salzer, Wick & Rogers, 2008). Learning disabilities vary among individuals and create difficulty with skills such as reading, writing, speaking, reasoning, and doing math. Students with learning disabilities typically struggle with traditional and standardized testing, which yields inaccurate assessments when required accommodations are not provided.
Nagy (2000) points to three roles or functions of assessment: gatekeeping, in which assessment determines who is granted a privilege such as admission or graduation; accountability, in which assessment is used to decide whether schools are working well; and instructional diagnosis, in which assessment is used to discover what students do and do not know and what to do about it. Black (1998) points out that the desired purpose of an assessment shapes both the instrument selected and the interpretation of its results. Different purposes require vastly different approaches, and mixing purposes is likely to ensure that none of them is well served. It is not possible for one assessment process to serve the many purposes educational institutions want it to fulfill. It is necessary to first determine the purpose and population for the assessment and then design the assessment program to fit that purpose and population (Gipps, 1994).
Assessing and testing students with learning disabilities can be a challenging and complex process. A student's Individualized Education Program (IEP) should be in place within 60 days of the student being referred for learning disability testing. Because learning disabilities vary greatly among students, gathering information in all areas related to a student's suspected learning disability is difficult. The following are examples of testing that has proven reliable in assessing learning disabled students.
Literature Review
Lopes (2007) conducted a study investigating the presence of emotional, behavioral, and academic problems in seventh-grade students. Behavioral data were gathered at the beginning and end of the school year and analyzed using regression, and academic data were collected six times throughout the year.
The study found an increase in performance under the pass/fail system and an increase in emotional and behavioral problems by the end of the school year. It also tracked the academic grades of students who exhibited emotional and behavioral problems and concluded that their progress had worsened significantly by the end of the school year compared with peers without such problems: their academic achievement had declined while their emotional and behavioral problems had increased. Regression results indicated that academic achievement better predicts emotional problems than behavioral problems, and odds ratios showed that externalizing and internalizing problems were more likely in students with lower levels of academic achievement.
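To illustrate the kind of odds ratio such analyses report, the statistic can be computed from a simple 2x2 contingency table. The counts below are hypothetical, chosen only to show the calculation; they are not data from the Lopes (2007) study.

```python
# Illustrative only: hypothetical counts, not data from the Lopes (2007) study.
# Odds ratio for behavior problems among low- vs. higher-achieving students.

def odds_ratio(exposed_cases, exposed_noncases, control_cases, control_noncases):
    """OR = (a/b) / (c/d) for a 2x2 contingency table."""
    return (exposed_cases / exposed_noncases) / (control_cases / control_noncases)

# Hypothetical table: 20 of 50 low-achieving students show behavior problems,
# versus 10 of 100 higher-achieving students.
or_value = odds_ratio(20, 30, 10, 90)
print(round(or_value, 2))  # 6.0 -> behavior problems are far more likely
```

An odds ratio above 1 indicates the problem is more likely in the lower-achieving group, which is the pattern the study reports.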
The validity of the study's instruments was examined using regression analysis, which tests for change over time in the population and simultaneously assesses linear and quadratic time effects. A potential bias in this research is the study's failure to consider changes to the mean that result from changes in the population when regression analysis is used. Greater proportions of students with higher ability scores were identified, and discrepancy-based identification may have favored students with average reading scores over the smaller share of the population with very low reading scores.
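A regression with both linear and quadratic time terms, as described above, can be sketched with ordinary least squares. The six assessment waves and mean scores below are hypothetical, chosen only to show how the two time effects are estimated.

```python
# Sketch of testing linear and quadratic time effects with ordinary least
# squares; the scores below are hypothetical, not from the study.
import numpy as np

time = np.array([0, 1, 2, 3, 4, 5], dtype=float)          # six assessment waves
scores = np.array([50, 54, 57, 59, 60, 60], dtype=float)  # hypothetical means

# Design matrix with intercept, linear, and quadratic time columns.
X = np.column_stack([np.ones_like(time), time, time ** 2])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
intercept, linear, quadratic = coef

# A positive linear and negative quadratic coefficient together indicate
# growth that levels off over the school year.
print(linear > 0, quadratic < 0)  # True True
```

The sign and size of the quadratic term are what distinguish steady growth from growth that flattens or reverses by year's end.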
A meta-analysis by Burns and Wagner (2008) relied on assessment data to describe learning problems and offer potential solutions. The analysis covered 55 students in grades two through six. Of these, six were identified as learning disabled, seven were labeled with behavioral disorders, and 12 were identified with mental retardation, while the remaining 30 students were not identified with any disability.
Meta-analytic procedures were used to analyze the link between skill proficiency and interventions categorized as addressing acquisition or fluency needs. Results suggest that the skill-by-treatment paradigm may be useful for matching skill levels in reading to successful interventions. The validity of the study rested on the meta-analytic procedures and their presumption that effect sizes based on different measures are directly comparable.
Recent theoretical work has shown that an invariance condition—universe score, or construct, validity invariance—must hold for either observed score or reliability-corrected effect sizes based on different measures to be directly comparable (Nugent, 2008). Studies using meta-analytic procedures suggest that considerable variability in effect sizes can exist across measurement procedures that fail to meet universe score validity invariance, and that this variability has the potential to negatively affect meta-analytic results.
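The comparability problem can be made concrete with a small sketch: the same raw group difference yields different standardized effect sizes (Cohen's d) on two measures with different score spreads. All numbers below are hypothetical.

```python
# Sketch: the same underlying group difference can yield different
# standardized effect sizes on two measures of the same construct.
# All numbers are hypothetical.
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Measure 1: treatment mean 105 vs. control 100, SD 10 in both groups.
d1 = cohens_d(105, 100, 10, 10, 30, 30)   # d = 0.5
# Measure 2 of the same construct, noisier (SD 20), same raw gain of 5 points.
d2 = cohens_d(105, 100, 20, 20, 30, 30)   # d = 0.25
print(round(d1, 2), round(d2, 2))  # 0.5 0.25
```

Pooling such effect sizes as if they were interchangeable is exactly the presumption that universe score validity invariance is meant to justify.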
Fletcher, Francis, O’Malley, Copeland, Mehta, Caldwell, Kalinowski, Young, and Vaughn (2009) conducted a study to assess the effects of bundling in educational assessments. This study investigated the efficacy of a bundle of accommodations for poor readers in Grade 7. Learning disabled students who participated in this study were randomly selected to take a high-stakes reading comprehension test.
The test results showed that the accommodations helped both poor and average readers. One bias inherent in this study is that the accommodations were provided as a package, so the value added by structured extended time over the read-aloud accommodation could not be assessed. Another is that the study was limited to students in Grade 7, so it is unknown whether the accommodations would be fair or effective for younger students. These biases were addressed by comparing the results with a previous study by Fletcher et al. (2006), which evaluated the bundled accommodations with younger (elementary school) students and determined the value-added effects of extended time over the read-aloud accommodations.
The study's validity rests on its use of students with an identified learning disability for whom the accommodations were designed to address that specific disability. The instrument used, the Texas Assessment of Knowledge and Skills, has proven construct validity (TAKS, 2007). It is aligned with grade-based standards from the Texas Essential Knowledge and Skills and is based on an iterative item development process in which items are field tested and evaluated for reliability, validity, and bias.
Assessment tools measure students' skills, abilities, and knowledge attainment in all academic areas, and the results serve as a baseline for measuring the effectiveness of educational programs. A major weakness inherent in educational assessments is bias.
Bias is inherent in many assessment tools; even with measures taken to make assessments reliable and valid, bias may remain an issue to overcome. Another weakness is the cost associated with developing and delivering an assessment. Finally, though the list of weaknesses is not exhaustive, there is the issue of whether evaluation results will be accepted as a baseline for measuring the effectiveness of educational programs.
Assessments should be both valid and reliable. In educational assessment, reliability and consistency are synonymous (Popham, 2009). Reliability refers to the quality of the evidence, and validity refers to the inferences made based on that evidence. When creating a reliable and valid assessment, a teacher should use instruments that achieve consistent results and that assess the right information.
A valid assessment method assesses what it claims to assess and thus produces results that can support valid inferences usable in decision making. Reliability is the capacity of an assessment method to perform in a consistent and stable manner (Hargreaves, 2007). Content validity offers a practical approach to assessment development: if an assessment samples the instructional material in proportion to its importance in the course, interpretations of test scores are likely to have greater validity.
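One common way to quantify reliability as consistency is the test-retest correlation between two administrations of the same instrument. The scores below are hypothetical, used only to show the calculation.

```python
# Sketch: reliability as consistency, estimated as the Pearson correlation
# between two administrations of the same test (hypothetical scores).
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [78, 85, 62, 90, 70]   # first administration (hypothetical)
test2 = [80, 83, 65, 88, 72]   # retest two weeks later (hypothetical)
r = pearson_r(test1, test2)
print(r > 0.9)  # True: highly consistent scores -> high reliability estimate
```

A high correlation indicates consistent performance across administrations; it says nothing by itself about validity, which concerns what the scores mean.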
Conclusion
While no single assessment can serve every purpose in measuring performance, educators should remember that multiple, valid assessment instruments are necessary to address learning goals and the organization's mission of knowledge attainment for students. Formative and summative assessments are both important to the process, and while assessment tools should take into consideration the individual needs of the learning disabled student, simply increasing the amount of assessment will not enhance learning.
One way to include students with learning disabilities is to allow them to take tests under nonstandard conditions, using various types of testing modifications or accommodations. The studies outlined above identify various means of accommodating learning disabled students while ensuring that knowledge is attained and that the goals set out in curriculum development are transferred to the student in a valid and reliable manner despite the individual's learning disability.





References
Barkley, R. A., Fischer, M., Smallish, L., & Fletcher, K. (2004). Young adult follow-up of hyperactive children: Antisocial activities and drug use. Journal of Child Psychology and Psychiatry, 45, 195–211.
Bedrosian, J., Lasker, J., Speidel, K., & Politsch, A. (2003). Enhancing the written narrative skills of an AAC student with autism: Evidence-based research issues. Topics in Language Disorders, 23, 305–324.
Black, P. (1998). Testing: Friend or foe? Theory and practice of assessment and testing. London: Falmer Press.
Bradley, R., Danielson, L., & Hallahan, D. P. (Eds.). (2002). Identification of learning disabilities: Research to practice. Mahwah, NJ: Lawrence Erlbaum Associates.
Burns, M. K., & Wagner, D. (2008). Determining an effective intervention within a brief experimental analysis for reading: A meta-analytic review. School Psychology Review, 37, 126–136.
Butler, S. M., & McMunn, N. D. (2006). A teacher's guide to classroom assessment: Understanding and using assessments to improve student learning. San Francisco: Jossey-Bass.
Fletcher, J. M., Francis, D. J., Boudousquie, A., Copeland, K., Young, V., Kalinowski, S., et al. (2006). Effects of accommodations on high-stakes testing for students with reading disabilities. Exceptional Children, 72, 136–152.
Fletcher, J. M., Francis, D. J., O'Malley, K., Copeland, K., Mehta, P., Caldwell, C. J., Kalinowski, S., Young, V., & Vaughn, S. (2009). Effects of a bundled accommodations package on high-stakes testing for middle school students with reading disabilities. Exceptional Children, 75(4), 447–463.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: The Falmer Press.
Hammill, D. D., Leigh, J. E., McNutt, G., & Larsen, S. C. (1987). A new definition of learning disabilities. Journal of Learning Disabilities, 20, 109–113.
Hargreaves, E. (2007). The validity of collaborative assessment for learning. Assessment in Education, 14(2), 185–199.
Lopes, J. A. (2007). Prevalence of emotional, behavioral and learning problems: A study of 7th-grade students. Education and Treatment of Children, 30(4), 165–181.
Mellard, D. (2003). Understanding responsiveness to intervention in learning disabilities determination. Retrieved from http://www.nrcld.org/publications/papers/mellard.shtml
Nagy, P. (2000). The three roles of assessment: Gatekeeping, accountability, and instructional diagnosis. Canadian Journal of Education, 25(2).
Nugent, W. R. (2008). Construct validity invariance and discrepancies in meta-analytic effect sizes based on different measures: A simulation study. Educational and Psychological Measurement, 69(1), 62–78.
Popham, W. J. (2006). Assessment for educational leaders. Boston: Pearson/Allyn and Bacon.
Reschly, D. J., & Hosp, J. L. (2004). State SLD identification policies and practices. Learning Disability Quarterly, 27(4), 197–213.
Salzer, M. S., Wick, L. C., & Rogers, J. A. (2008). Familiarity with and use of accommodations and supports among postsecondary students with mental illnesses. Psychiatric Services, 59, 370–375.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Stone, W. L., & Hogan, K. L. (2003). A structured parent interview regarding cooperative learning for children with learning disabilities. Journal of Learning Disabilities and Developmental Disorders, 23, 639–652.
Texas Assessment of Knowledge and Skills. (2007). Retrieved July 3, 2010, from http://www.tea.state.tx.us/student.assessment/resources/release/taks/2006/gr7taks.pdf
Vaughn, S., Bos, C. S., & Schumm, J. S. (2000). Teaching exceptional, diverse, and at-risk students in the general education classroom (2nd ed.). Boston: Allyn and Bacon.
Wenzel, T. (2000). Cooperative student activities as learning devices. Analytical Chemistry, 72, 293A–296A.
