Saturday, July 31, 2010

One to read

I am always looking for great books to read, so when there was a book fair at my job, of course I had to go. I was browsing through the books, many of which I had read before, when I found something very interesting. "The Blackbird Papers" by Dr. Ian Smith was my treasure find. I picked it up, read the front and back covers, and was amazed that YES, it was THE Dr. Ian Smith. You know, the one who wrote that diet book that I have been meaning to buy, read, and follow for several years now (just haven't gotten around to it). I thought, HUH! This should be interesting. While I thought the book started out slow, and was disappointed that the pages were so packed with words in small print (my sight isn't as good as it used to be, even with reading glasses), I couldn't put the book down. I read it from cover to cover, and it was fabulous, right up to its startling ending (I won't give anything away in this post). Then I immediately sat down at my computer to research whether a sequel had been written. Sadly, it has not. :-( I highly recommend this book to those of you who love a good mystery and detective story. Come on, Dr. Smith, I can't wait for the sequel. I love the characters and would even recommend writing a prequel; what preceded the brothers' relationship would make a great book. No one says you have to write in sequence; go back and give me more of these characters' earlier lives and relationships.

Monday, July 5, 2010

Assessing Students with Learning Disabilities

The reauthorized Individuals with Disabilities Education Improvement Act of 2004 (IDEA, 2004) includes provisions that address the ways in which specific disabilities are identified. Responding to criticism of the previous Act, the changes target the procedures and criteria used to determine the components of assessing learning disabilities (LD).
That criticism focused on discrepancies between conceptual definitions of learning disabilities, which are multi-faceted, and operational definitions, which have traditionally reduced the construct to a single dimension (Reschly & Hosp, 2004). Learning disabilities, which are intrinsic to the individual, involve learning and cognition disorders that significantly affect a relatively narrow range of academic and performance outcomes (Bradley, Danielson, & Hallahan, 2002).
Understanding the components of the conceptual definition of LD is important, and identifying measures that assess those components is the focus of this review of the literature on educational assessments for students with learning disabilities. Educational assessments document the student's knowledge, skills, and abilities, usually in measurable terms.
Butler and McMunn (2006) define assessment as the act of collecting information about individuals or groups of individuals in order to understand them. Assessment can take many forms, such as day-to-day observation, tests and quizzes, essays, self-assessments, and journaling, to name a few. Assessments should cover the following areas: student work at all stages of development, student process, knowledge and skill, programmatic processes, and instructional methods.
Student evaluation is an ongoing process that includes both formative and summative evaluations, providing accurate feedback on teaching methods, the types of activities used, student response, and, as a result, student performance. This paper presents a brief review of the literature on assessing students with learning disabilities (LD) and identifies issues relevant to validity, reliability, and ethical and legal concerns.
Learning disabilities and assessments
A learning disability is a disorder in one or more of the basic psychological processes involved in understanding or using language, spoken or written, and affects how individuals receive, analyze, store, and retrieve information (Hammill, Leigh, McNutt & Larsen, 1987). Learning disabilities manifest themselves through difficulties experienced over time and can present in areas such as organization, time management, and attention (Salzer, Wick & Rogers, 2008). Learning disabilities vary among individuals and present difficulties with skills such as reading, writing, speaking, reasoning, and doing math. Students with learning disabilities typically struggle with traditional and standardized testing, which yields misleading results when required accommodations are not provided.
Nagy (2000) points to three roles or functions that assessments serve: gatekeeping, in which assessment determines who is granted a privilege such as admission or graduation; ensuring accountability, in which assessment is used to decide whether schools are working well; and instructional diagnosis, in which assessment is used to discover what students do and do not know and what to do about it. Black (1998) points out that the desired purpose of an assessment shapes both the instrument selected and the interpretation of its results. Different purposes require vastly different approaches, and mixing purposes is likely to ensure that none of them is well served. A single assessment process cannot serve all the purposes educational institutions want it to fulfill; it is necessary first to determine the purpose and population for the assessment and then to design the assessment program to fit them (Gipps, 1994).
Assessing and testing students with learning disabilities can be a challenging and complex process. A learning-disabled student's Individualized Education Program (IEP) should be in place within 60 days of the student being referred for learning disability testing. Because learning disabilities vary greatly among students, gathering information in all areas related to a student's suspected learning disability can prove difficult. The following are examples of testing approaches that have proven reliable in assessing learning-disabled students.
Literature Review
Lopes (2007) conducted a study that investigated the presence of emotional, behavioral, and academic problems in seventh-grade students. Behavioral data were gathered at the beginning and end of the school year, academic data were collected six times throughout the year, and the combined data were analyzed using regression.
The study concluded that performance increased under the pass/fail system while emotional and behavioral problems increased by the end of the school year. The study also tracked the academic grades of students who exhibited emotional and behavioral problems and found that, compared with peers without such problems, their achievement had declined significantly by the end of the school year while their emotional and behavioral problems had increased. Regression results indicated that academic achievement predicts emotional problems better than behavioral problems, and odds ratios showed that externalizing and internalizing problems were more likely in students with lower levels of academic achievement.
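To make the odds-ratio finding concrete, here is a minimal sketch in Python using invented counts (the numbers are illustrative only and are not taken from Lopes's data): the odds of exhibiting problems are computed separately for lower- and higher-achieving groups, and their ratio is taken.

```python
# Hypothetical 2x2 table: problem status by achievement level.
# All counts are invented for illustration.
low_with_problems, low_without = 30, 70      # lower-achieving students
high_with_problems, high_without = 10, 90    # higher-achieving students

odds_low = low_with_problems / low_without      # 30/70, about 0.43
odds_high = high_with_problems / high_without   # 10/90, about 0.11

odds_ratio = odds_low / odds_high
print(f"odds ratio = {odds_ratio:.2f}")  # about 3.86
# A ratio well above 1 means problems are more likely among
# lower-achieving students, the direction the study reports.
```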
Assessment instrument validity in this study was examined using regression analysis, which tests for change over time in the population and simultaneously assesses linear and quadratic time effects. A potential bias inherent in this research is the study's failure to consider changes to the mean that result from changes in the population under the regression approach. Greater proportions of students with higher ability scores were identified, and students with average reading scores may have been identified with discrepancies at higher rates than the smaller percentage of students with very low reading scores.
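A hedged sketch of this growth-model idea follows: simulated scores (not the study's data) are fit with a second-degree polynomial so that the linear and quadratic time effects described above are estimated together.

```python
import numpy as np

# Simulated achievement scores at six measurement occasions;
# the actual study collected academic data six times per year.
time = np.arange(6)
scores = np.array([62.0, 60.0, 59.0, 57.0, 56.0, 56.0])

# Fit score = b0 + b1*time + b2*time**2 by least squares.
b2, b1, b0 = np.polyfit(time, scores, deg=2)
print(f"intercept={b0:.2f}, linear={b1:.2f}, quadratic={b2:.2f}")
# A negative linear coefficient indicates declining achievement over
# the year; the quadratic coefficient shows whether the decline
# accelerates or levels off.
```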
A meta-analysis by Burns and Wagner (2008) relied on assessment data to describe learning problems and offer potential solutions. The analysis covered 55 students in grades two through six. Of these students, six were identified as learning disabled, seven were labeled with behavioral disorders, and 12 were identified with mental retardation, whereas the remaining 30 students were not identified with any disability.
Meta-analytic procedures were used to analyze the link between skill proficiency and interventions categorized as addressing acquisition or fluency needs. Results suggest that the skill-by-treatment paradigm may be useful for matching reading skill levels to successful interventions. The validity of the study rested on the meta-analytic procedures and their assumption that effect sizes based on different measures are directly comparable.
Recent theoretical work has shown that an invariance condition (universe score, or construct, validity invariance) must hold for either observed-score or reliability-corrected effect sizes based on different measures to be directly comparable (Nugent, 2008). Studies using meta-analytic procedures suggest that considerable variability in effect sizes can exist across measurement procedures that fail to meet universe score validity invariance, and that this variability has the potential to negatively affect meta-analytic results.
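To illustrate why comparability of effect sizes matters, the sketch below computes a standardized mean difference (Cohen's d) per study and pools the values with inverse-variance weights, the generic fixed-effect procedure; the summary statistics are hypothetical and are not Burns and Wagner's data.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference using the pooled standard deviation."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

# Hypothetical per-study summaries (treatment vs. control).
studies = [
    dict(mean_t=85, mean_c=78, sd_t=10, sd_c=11, n_t=20, n_c=20),
    dict(mean_t=72, mean_c=70, sd_t=8, sd_c=9, n_t=15, n_c=15),
]

weighted_sum = weight_total = 0.0
for s in studies:
    d = cohens_d(**s)
    n_t, n_c = s["n_t"], s["n_c"]
    # Approximate sampling variance of d, then inverse-variance weight.
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    weighted_sum += d / var_d
    weight_total += 1 / var_d

print(f"pooled effect size = {weighted_sum / weight_total:.2f}")
# Pooling assumes the ds from different measures estimate the same
# construct, which is exactly the invariance condition Nugent (2008)
# questions.
```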
Fletcher, Francis, O'Malley, Copeland, Mehta, Caldwell, Kalinowski, Young, and Vaughn (2009) conducted a study to assess the effects of bundling in educational assessments. The study investigated the efficacy of a bundle of accommodations for poor readers in Grade 7. Learning-disabled students who participated were randomly selected to take a high-stakes reading comprehension test.
The test results showed that the accommodations helped both poor and average readers. One bias inherent in this study is that the accommodations were provided as a package, so the value added by structured extended time over the read-aloud accommodations could not be assessed. Another is that the study was limited to students in grade seven, leaving it unknown whether the accommodations would be fair or effective for younger students. These biases were addressed by comparing the results with a previous study by Fletcher et al. (2006), which evaluated the bundled accommodations with younger (elementary school) students and determined the value added by extended-time accommodations over the read-aloud accommodations.
Validity in this study was supported by using students with an identified learning disability whose accommodations were designed to address their specific disability. The instrument used, the Texas Assessment of Knowledge and Skills, has proven construct validity (TAKS, 2007). It is aligned with grade-based standards from the Texas Essential Knowledge and Skills and is built through an item development process in which items are field-tested and evaluated for reliability, validity, and bias.
Assessment tools measure students' skills, abilities, and knowledge attainment in all academic areas, and the results serve as a baseline for measuring the effectiveness of educational programs. A major weakness inherent in educational assessments is bias.
Bias is inherent in many assessment tools; even when measures are taken to make assessments reliable and valid, bias may remain an issue to overcome. Another weakness can be the cost associated with developing and delivering an assessment. Finally, though the list of weaknesses is not exhaustive, there is the question of whether evaluation results are accepted as a baseline for measuring the effectiveness of educational programs.
Assessments should be both valid and reliable. In discussions of educational assessment, reliability is synonymous with consistency (Popham, 2006). Reliability refers to the quality of the evidence, and validity refers to the inferences made based on that evidence. To create an assessment that is reliable and valid, a teacher should use instruments that achieve consistent results and that assess the right information.
A valid assessment method assesses what it claims to assess and thus produces results that can support valid inferences usable in decision making. Reliability is the capacity of an assessment method to perform in a consistent and stable manner (Hargreaves, 2007). Content validity offers a practical approach to assessment development: if assessments are designed to sample the instructional material in proportion to its importance in the course, then interpretations of test scores are likely to have greater validity.
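As one concrete illustration of reliability as consistency, the sketch below computes Cronbach's alpha, a standard internal-consistency estimate, from a small matrix of hypothetical item scores; a high alpha speaks to reliability, while validity must still be judged against what the items claim to measure.

```python
import numpy as np

def cronbach_alpha(scores):
    """Internal consistency for a (students x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: five students on four test items.
scores = [[3, 4, 3, 4],
          [2, 2, 3, 2],
          [4, 4, 5, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
# Alpha near 1 means the items yield consistent results; whether the
# items assess the right content is a separate validity question.
```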
Conclusion
While no single assessment can serve as a one-size-fits-all solution for measuring performance, educators should remember that multiple assessments and valid assessment instruments are necessary to address learning goals and the organization's mission of knowledge attainment for students. Formative and summative assessments are both important to the process, and assessment tools should take into consideration the individual needs of the learning-disabled student; simply increasing the amount of assessment will not enhance learning.
One way to include students with learning disabilities is to allow them to take tests under nonstandard conditions, using various types of testing modifications or accommodations. The studies outlined above identify various means of accommodating learning-disabled students while ensuring that knowledge is attained and that the goals set out in curriculum development are transferred to the student in a valid and reliable manner despite the individual's learning disability.

References
Barkley, R. A., Fischer, M., Smallish, L., & Fletcher, K. (2004). Young adult follow-up of hyperactive children: Antisocial activities and drug use. Journal of Child Psychology and Psychiatry, 45, 195–211.
Bedrosian, J., Lasker, J., Speidel, K., & Politsch, A. (2003). Enhancing the written narrative skills of an AAC student with autism: Evidence-based research issues. Topics in Language Disorders, 23, 305–324.
Black, P. (1998). Testing: Friend or foe? Theory and practice of assessment and testing. London: Falmer Press.
Bradley, R., Danielson, L., & Hallahan, D. P. (Eds.). (2002). Identification of learning disabilities: Research to practice. Mahwah, NJ: Lawrence Erlbaum Associates.
Burns, M. K., & Wagner, D. (2008). Determining an effective intervention within a brief experimental analysis for reading: A meta-analytic review. School Psychology Review, 37, 126–136.
Butler, S. M., & McMunn, N. D. (2006). A teacher's guide to classroom assessment: Understanding and using assessments to improve student learning. San Francisco: Jossey-Bass.
Fletcher, J. M., Francis, D. J., Boudousquie, A., Copeland, K., Young, V., Kalinowski, S., et al. (2006). Effects of accommodations on high-stakes testing for students with reading disabilities. Exceptional Children, 72, 136–152.
Fletcher, J. M., Francis, D. J., O'Malley, K., Copeland, K., Mehta, P., Caldwell, C. J., Kalinowski, S., Young, V., & Vaughn, S. (2009). Effects of a bundled accommodations package on high-stakes testing for middle school students with reading disabilities. Exceptional Children, 75(4), 447–463.
Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: The Falmer Press.
Hammill, D. D., Leigh, J. E., McNutt, G., & Larsen, S. C. (1987). A new definition of learning disabilities. Journal of Learning Disabilities, 20, 109–113.
Hargreaves, E. (2007). The validity of collaborative assessment for learning. Assessment in Education, 14(2), 185–199.
Lopes, J. A. (2007). Prevalence of emotional, behavioral and learning problems: A study of 7th-grade students. Education and Treatment of Children, 30(4), 165–181.
Mellard, D. (2003). Understanding responsiveness to intervention in learning disabilities determination. Retrieved from http://www.nrcld.org/publications/papers/mellard.shtml
Nagy, P. (2000). The three roles of assessment: Gatekeeping, accountability, and instructional diagnosis. Canadian Journal of Education, 25(2).
Nugent, W. R. (2008). Construct validity invariance and discrepancies in meta-analytic effect sizes based on different measures: A simulation study. Educational and Psychological Measurement, 69(1), 62–78.
Popham, W. J. (2006). Assessment for educational leaders. Boston: Pearson/Allyn and Bacon.
Reschly, D. J., & Hosp, J. L. (2004). State SLD identification policies and practices. Learning Disability Quarterly, 27(4), 197–213.
Salzer, M. S., Wick, L. C., & Rogers, J. A. (2008). Familiarity with and use of accommodations and supports among postsecondary students with mental illnesses. Psychiatric Services, 59, 370–375.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Stone, W. L., & Hogan, K. L. (2003). A structured parent interview regarding cooperative learning for children with learning disabilities. Journal of Learning Disabilities and Developmental Disorders, 23, 639–652.
Texas Assessment of Knowledge and Skills. (2007). Retrieved July 3, 2010, from http://www.tea.state.tx.us/student.assessment/resources/release/taks/2006/gr7taks.pdf
Vaughn, S., Bos, C. S., & Schumm, J. S. (2000). Teaching exceptional, diverse, and at-risk students in the general education classroom (2nd ed.). Boston: Allyn and Bacon.
Wenzel, T. (2000). Cooperative student activities as learning devices. Analytical Chemistry, 72, 293A–296A.

Assessment Goals and Assumptions

Butler and McMunn (2006) define assessment as the act of collecting information about individuals or groups of individuals in order to understand them. Student evaluation is an ongoing process that includes both formative and summative evaluations. Evaluations are mostly summative, but they must also contain elements of formative evaluation (Butler & McMunn, 2006).
By comparison with education, a formative evaluation in business corresponds to an interim evaluation, and a summative evaluation corresponds to the annual review. Educational assessment provides equitable feedback on student learning, the success of the instructional material, and the weak areas requiring attention and revision (Reynolds, Livingston, & Willson, 2006). Evaluations in business occur in a results-oriented performance culture in which management and employees work together to clarify priorities for performance by focusing on accomplishments, concentrating on results, and showing how their work directly helps the organization accomplish its mission (Grote, 2002). Employee evaluations in business should motivate employees to achieve higher levels of performance.
In this paper, the writer will identify the goals and assumptions inherent in the employee assessment process and instruments used for employees of the Department of Housing and Urban Development (HUD). Additionally, the writer will analyze the instruments' quality and appropriateness.
HUD Employee Assessment Process
The foundation of any organization's success is the individual employee. Performance appraisals can promote both the institutional development of the organization and the personal development of the people working in it. The Department of Housing and Urban Development (HUD) has established an employee evaluation plan that relies upon three strategic goals for human capital: remaining mission focused; maintaining a high-quality workforce of employees who have the knowledge, skills, and abilities they need to do their jobs and who are held accountable for their performance; and providing an effective succession planning process (Nelson, 2008).
The Office of Personnel Management (OPM), which provides oversight for all federal agencies, has issued a directive requiring all federal agencies to implement methods aimed at improving individual and organizational effectiveness. These methods rest on four principles: being results-oriented; aligning employee performance with results that contribute to the success of the organization; making rewards depend on performance and results; and involving employees in planning and in identifying critical elements and performance standards. Effective performance standards are accurately developed and applied, which should result in good employee morale and can advance the agency's mission.
HUD views this as a results-oriented performance culture in which management and employees work together to clarify priorities for performance by focusing on accomplishments. By focusing on accomplishments, employees concentrate on results and on how their work directly helps the organization accomplish its mission. When management and employees jointly establish the performance criteria and standards the employee will be accountable for achieving, the employee can maximize productivity and fulfill his or her potential; this also enhances planning, performance appraisals, rewards, and the identification of areas for improvement.
HUD has three performance management systems: the Employee Performance Planning and Evaluation System (EPPES), the performance system for non-supervisory bargaining-unit and non-bargaining-unit employees; the Performance Accountability and Communication System (PACS), the performance system for managers and supervisors; and the Executive Performance Accountability and Communication System (EPACS), the performance system for senior executives (see Figure 2). A performance plan, consisting of job elements and performance standards, is a key component of performance management (Latham & Wexley, 1994). Job elements should link to the agency's mission and goals, and performance standards must clearly indicate how each individual's performance will be measured.
The technique HUD uses to develop performance standards is the SMART method. Under this methodology, performance standards are written to be Specific, Measurable, Attainable, Relevant, and Time-bound, describing how results will be obtained and measured and when the work needs to be done (Nelson, 2008).
The performance management process consists of five basic components: planning, monitoring, developing, rating, and rewarding. During the planning phase, performance expectations are outlined, discussed, and agreed upon between management and the employee. Planning requires setting performance expectations and goals for individuals and groups that support organizational goals. The planning phase also identifies how the employee will accomplish the tasks, the performance expectations, and the criteria for each level of performance evaluation. An important part of the planning process is communicating organizational and individual performance expectations to employees; HUD uses the SMART methodology to communicate these expectations.
This phase is followed by monitoring the employee's performance. Monitoring means continually and consistently measuring performance and providing ongoing feedback to the employee and the work group on progress toward reaching goals. Monitoring gives management an opportunity to observe and note needed training and changes to assist the employee with capacity development and performance improvement.
The developing phase provides the employee the opportunity to seek training, take on new assignments, improve work processes, or receive coaching. In the rating phase, the employee's performance is evaluated against the standards in his or her performance plan and assigned an annual rating of record. The final phase is rewarding the employee for good performance; HUD uses a cash reward system to motivate employees and supervisors toward increased productivity and creativity.
Two evaluation systems, EPPES and PACS
HUD uses five rating levels for EPPES; the same levels are used for the PACS and EPACS systems (see Figure 2). The appraisal categories applied to each critical element are: Outstanding, meaning the employee has significantly exceeded the established performance standards for the critical element and the achievement is of exceptionally high quality; Excellent, meaning the employee has produced consistently high quality and quantity of work; Fully Successful, meaning the employee performs the duties and responsibilities of the job and has met the fully successful level of performance described in the performance plan; and Minimally Satisfactory or Unsatisfactory, meaning the employee has barely met or has failed to meet the established performance standards.
Goals and Assumptions
The goals of the SMART evaluation plan, as identified in the SMART Performance Standards Guide, are to identify changes in the performance management system; determine how the work unit and individual employees support HUD's strategic goals; discuss the importance of monitoring, developing, reviewing, and rewarding employees; and implement strategies to communicate performance expectations to employees.
The assumptions presented in the performance guide indicate that evaluators and employees should remember that employees rated as Outstanding are not perfect employees, but rather those whose efforts, services, and products are extraordinary and have a substantial effect on mission accomplishment. The Outstanding rating should represent excellence in performance and therefore should be very difficult to achieve. A rating of Fully Successful should not be perceived by the employee as a negative evaluation, nor should such evaluations be viewed as mediocre; an employee performing at the Fully Successful level has performed at the level intended by the established standard, which is acceptable to the organization. The final assumption related to the evaluation process is that the retention standard must be clear and cannot be absolute.
Analysis of Instrument's Quality and Appropriateness
The assessment instruments used by the Department of HUD are justifiable for their intended purposes and based on the sound methodology of the SMART standards. The assessment instrument supports the quality of the evaluation process and supports an acceptable level of difficulty in distinguishing between the various levels of evaluation. The instruments used for both employee and management evaluations are appropriate because the interpretation of the assessment instrument will reflect employee performance customized to both the performance standards and the organization’s mission and goals.
The assessment results are a summative evaluation of employee knowledge, skills, and abilities as evidenced by accomplishments tied to the organization's mission and management plan goals. The organization's summative evaluations are useful for predicting how well the evaluation instrument aligns with the organization's mission and goals (Nelson, 2008).
Conclusion
Diagnostic assessments are designed to determine knowledge, skills, or misconceptions prior to planning instruction and can include both performances and products (Butler & McMunn, 2006). Both formative and summative evaluations are instrumental in the evaluation process of the employees and management of the organization. The performance management process used by HUD consists of five basic components that are planning, monitoring, developing, rating, and rewarding. This process can be viewed as a performance management system that allows the employee, jointly with management, to define the purpose of the job and relate the performance to the goals of the organization.

References

Butler, S. M., & McMunn, N. D. (2006). A teacher's guide to classroom assessment: Understanding and using assessments to improve student learning. San Francisco: Jossey-Bass.
Grote, R. C. (2002). The performance appraisal question and answer book: A survival guide for managers. New York: AMACOM Books.
Latham, G. P., & Wexley, K. N. (1994). Increasing productivity through performance appraisal (2nd ed.). Reading, MA: Addison-Wesley.
Nelson, K. A. (2008). Strategic human capital management: Revised human capital plan for fiscal years 2008–2009. Retrieved June 21, 2010, from www.hud.gov/po/a/administration
Reynolds, C. R., Livingston, R. B., & Willson, V. (2006). Measurement and assessment in education. Boston: Pearson Education.
SMART Performance Standards Course. (2007). Retrieved June 19, 2010, from hudatwork.hud.gov/po/a/dasops/hihrts/eprfmnc/SMARTGuide.pdf


Figure 1: HUD Strategic Goals
HUD Mission: Increase homeownership, support community development, and increase access to affordable housing free from discrimination.
Retrieved from hudatwork.hud.gov/po/a/dasops/hihrts/eprfmnc/SMARTGuide.pdf - 2007-10-02

PROGRAMMATIC GOALS

Increase homeownership opportunities
• Expand national homeownership opportunities.
• Increase minority homeownership.
• Make the home buying process less complicated and less expensive.
• Fight practices that permit predatory lending.
• Help HUD-assisted renters become homeowners.
• Keep existing homeowners from losing their homes.

Promote decent affordable housing
• Expand access to affordable rental housing.
• Improve the physical quality and management accountability of public and assisted housing.
• Increase housing opportunities for the elderly and persons with disabilities.
• Help HUD-assisted renters make progress toward self-sufficiency.

Strengthen communities
• Provide capital and resources to improve economic conditions in distressed communities.
• Help organizations access the resources they need to make their communities more livable.
• End chronic homelessness and move homeless families and individuals to permanent housing.
• Mitigate housing conditions that threaten health.

CROSSCUTTING GOALS

Ensure equal opportunity in housing
• Resolve discrimination complaints on a timely basis.
• Promote public awareness of Fair Housing laws.
• Improve housing accessibility for persons with disabilities.

Embrace high standards of ethics, management and accountability
• Rebuild HUD's human capital and further diversify its workforce.
• Improve HUD's management, internal controls and systems and resolve audit issues.
• Improve accountability, service delivery and customer service of HUD and its partners.
• Ensure program compliance.
• Improve internal communications and employee involvement.

Promote participation of faith-based and community organizations
• Reduce regulatory barriers to participation by faith-based and community organizations.
• Conduct outreach to inform potential partners of HUD opportunities.
• Expand technical assistance resources deployed to faith-based and community organizations.
• Encourage partnerships between faith-based/community organizations and HUD's traditional grantees.

Figure 2: SMART Performance Standards Training Manual
Retrieved from hudatwork.hud.gov/po/a/dasops/hihrts/eprfmnc/SMARTGuide.pdf - 2007-10-02

EPPES, PACS, and EPACS share the same rating cycle (October 1 – September 30) and the same five performance rating levels: O – Outstanding, E – Excellent, FS – Fully Successful, MS – Minimally Satisfactory, and US – Unsatisfactory. They differ in the basis for evaluation: EPPES evaluates critical elements and performance standards, while PACS and EPACS evaluate critical elements (strategic goals) and performance objectives.

Assessment Purposes, Strengths, and Weaknesses

Educational assessments document the student's knowledge, skills, and abilities, usually in measurable terms. Butler and McMunn (2006) define assessment as the act of collecting information about individuals or groups of individuals in order to understand them. Student evaluation is an ongoing process that includes both formative and summative evaluations.
Both formative and summative evaluation should be conducted to provide accurate feedback on the teaching methods, type of activities used, student response, and as a result, student performance. This paper will present a discussion on the purpose of assessments, and identify their strengths and weaknesses in general terms.
Types, Design and Uses of Assessments
The three types of assessments are student, program, and system assessments (Davis, 1993). Student assessments assist in identifying students' knowledge, skills and abilities, and performance; their applied process, or how they go about the tasks of doing their work; and their motivation, or how they feel about their work.
The functions of assessment include diagnostic, formative, and summative. Diagnostic assessments identify knowledge the students need to obtain; formative assessments identify progress in performance; and summative assessments provide a final assessment of progress. Educational assessment provides equitable feedback on student learning, the success of the instructional material, and the weak areas requiring attention and revision (Reynolds, Livingston, & Willson, 2006). Evaluations are mostly a summative process; however, they must also contain elements of formative evaluation (Butler & McMunn, 2006).
Assessment can take many forms, such as day-to-day observation, tests and quizzes, essays, self-assessments, and journaling, to name a few. Assessments should cover the following areas: student work at all stages of development, student process, knowledge and skill, programmatic processes, and instructional methods.
During the assessment design process, the teacher determines the basic elements of the assessment. These include the purpose (formative or summative), the time frame associated with the assessment, the goals, the resources, and the type of tool to be used (Black, Harrison, Lee, Marshall, & Wiliam, 2003). Testing should be learner-centered, reflecting students' achievement at a point in time, but cannot be used as a means to evaluate the curriculum. The determination of these elements depends on factors such as the institution's methodology and learning goals. Other considerations include applied learning theories, the desired results being sought, the teaching strategies followed, the number of students, and the constraints of the educational system, such as cost and other state and federal mandates (Pellegrino, Chudowsky, & Glaser, 2001).
The teacher, student, and other stakeholders are all involved in the assessment process, including parents. The results of the assessment are used to improve the focus of the teaching and instructional methods, identify areas of weakness that require improvements related to student knowledge, skills and abilities as well as areas of motivation, improve program planning, and reporting of results.
Teachers can adapt the curriculum to meet their students' needs and give appropriate feedback and support to the student as part of classroom instruction. They can also modify the course objectives: if a class is weak, they must rely on other materials to bring the class up to the expected level before they can focus on the course objectives; if the class is more advanced, they must make sure the course objectives are met.
How to Improve Assessments
Testing is emotionally charged and anxiety-producing, but it should be effective in motivating, measuring, and reinforcing learning. Tests serve at least four functions: helping to evaluate whether students are learning what is expected, motivating academic effort through well-designed instruments, improving understanding of the material presented, and reinforcing learning by identifying concepts that still need to be mastered (Sadler, 1998).
To improve assessments, teachers should invest adequate time in developing their tests. During development, a decision needs to be made about how the test will help educators make better instructional decisions and which desired outcomes it should measure. The assessment should be designed to capture the intended range of difficulty, the time allotted for the assessment, the format of the assessment, and the desired scoring procedures (Palomba & Banta, 1999).
Assessments should be consistent with the content of the instruction. Assessment content should identify and address the desired student skill level (Jacobs & Chase, 1992). Ideally, the assessment should measure students' academic achievements.
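One concrete way to check that a test captures the intended range of difficulty is a classical item analysis; the sketch below, using invented right/wrong responses, computes each item's difficulty (proportion correct) and a simple upper-minus-lower discrimination index.

```python
# Classical item analysis on hypothetical 1/0 (right/wrong) responses.
responses = [          # each row: one student; each column: one item
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
]

n = len(responses)
totals = [sum(row) for row in responses]
# Rank students by total score, then split into upper and lower halves.
order = sorted(range(n), key=lambda i: totals[i], reverse=True)
upper, lower = order[:n // 2], order[n // 2:]

for item in range(len(responses[0])):
    difficulty = sum(row[item] for row in responses) / n
    disc = (sum(responses[i][item] for i in upper) / len(upper)
            - sum(responses[i][item] for i in lower) / len(lower))
    print(f"item {item + 1}: difficulty={difficulty:.2f}, "
          f"discrimination={disc:+.2f}")
# Items that nearly everyone passes or fails, or with discrimination
# near zero, are candidates for revision before the test is used.
```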

Assessments should be both valid and reliable. When creating an assessment that is reliable and valid, a teacher should use assessment instruments that achieve consistent results and that assess the right information. A valid assessment method assesses what it claims to assess and thus produces results that can support valid inferences usable in decision making.
Reliability is the capacity of an assessment method to perform in a consistent and stable manner (Hargreaves, 2007). Content validity offers a practical approach to assessment development. If assessments are designed to include information in the instructional material in proportion to their importance in the course, then the interpretations of test scores are likely to have greater validity.
Strengths and Weaknesses of Assessments
Student assessment information is used in planning and decision-making that includes the mission and goals of the institution; academic programs; student support services; resource allocation; and faculty evaluation and rewards (Banta, 1985). The results of assessments can be analyzed not only for what they say about individual students but also for what they show about the strengths and weaknesses of a program.
Assessment tools measure students' skills, abilities, and knowledge attainment in all academic areas, and the results serve as a baseline for measuring the effectiveness of educational programs. A major weakness inherent in educational assessments is bias.
Bias is inherent in many assessment tools; even when measures are taken to make assessments reliable and valid, bias may still remain an issue to overcome. Another weakness can be the cost associated with developing and delivering an assessment. Finally, though the list of weaknesses is not exhaustive, there is the question of whether evaluation results are accepted as a baseline for measuring the effectiveness of educational programs.
Conclusion
Assessment purposes are instrumental in assessment design. In designing assessments, educators must determine what the assessment is intended to measure and how the results will be used to shape the programmatic and cognitive learning process directed at student academic achievement.
The decisions considered in educational assessment should include selection, evaluation, and instruction. Additionally, educators must decide whether a relative or absolute interpretation of students' test results will be most useful. Finally, likely item-content sources must be considered so that tasks eliciting students' knowledge, skills, and affect can be incorporated into an assessment instrument (Popham, 2006).

References
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Berkshire, England: Open University Press.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.
Butler, S. M., & McMunn, N. D. (2006). A teacher's guide to classroom assessment: Understanding and using assessments to improve student learning. San Francisco: Jossey-Bass.
Ebel, R. L., & Frisbie, D. A. (1990). Essentials of educational measurement (5th ed.). Englewood Cliffs, NJ: Prentice-Hall.
Gronlund, N. E., & Linn, R. (1990). Measurement and evaluation in teaching (6th ed.). New York: Macmillan.
Hargreaves, E. (2007). The validity of collaborative assessment for learning. Assessment in Education, 14(2), 185–199.
Mehrens, W. A., & Lehmann, I. J. (1991). Measurement and evaluation in education and psychology (4th ed.). New York: Holt, Rinehart & Winston.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving assessment in higher education. San Francisco: Jossey-Bass.
Popham, W. J. (2006). Assessment for educational leaders. Boston: Pearson/Allyn and Bacon.
Reynolds, C. R., Livingston, R. B., & Willson, V. (2006). Measurement and assessment in education. Boston: Pearson Education.
Sadler, D. R. (1998). Formative assessment: Revisiting the territory. Assessment in Education, 5(1), 77–84.