This week I would first like to address our class activity last week in the computer lab. The problems we were given were very familiar to me and actually brought back some excitement. During both of my previous statistics classes, I loved the parts that related to math and being able to perform calculations. After my second class I was able to take it a step further and use those numbers to interpret data and my results. The more comfortable I become with this information, the better I feel about doing research and using data as a school counselor. When I was able to help my classmates work through Excel and successfully finish all of the problems in class, I felt a great sense of pride and accomplishment. That feeling was very motivating and something I have been looking for this year in the program. It also made me realize that if I had taken Appraisal before Guidance Program Development, things might have fit together better for me. However, I do have the advantage of seeing a bigger picture because of the order in which I completed the classes. Appraisal fits into the overall picture I have already created, and it seems to be filling in some of the holes. I am excited to see how much more confidence I can gain by the end of the semester and this class.
Both chapters for this week were again somewhat of a review from my previous classes; however, they took a new approach that I was not as familiar with. In the reliability chapter, the sources of measurement error were organized into time-sampling error, content-sampling error, and interrater differences. This organization gave me a greater understanding of reliability and of the methods used to estimate it. The validity chapter also organized the information in a more modern way than I had previously been taught. I have always learned and understood validity as being broken up into content, criterion, and construct validity. Although the chapter did include sections on all three, it described all three as falling under construct validity. Construct validity is used as an umbrella term that is broken down into five sources (Drummond & Jones, 2010). It became clearer to me that the purpose is to establish a relationship between assessment scores and other variables. We are trying to determine whether the claims and decisions made on the basis of a particular assessment are meaningful and useful for what they are supposed to accomplish (Drummond & Jones, 2010).
Another aspect I appreciated from our text was the brief discussion of the fairness of certain assessments. "Validity also refers to the adequacy and appropriateness of the uses of assessment results" (Drummond & Jones, 2010, p. 100). My recent work with multicultural students and counseling has made me interested in how fair certain parts of the educational system are for their success. The book points out that a lack of fairness is a lack of validity, and this would also show a lack of reliability. This tells us that we should not use such an assessment to make educated decisions about any student, and in particular about students from unique backgrounds.
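The three sources of measurement error from the reliability chapter each line up with a different way of estimating reliability, and a rough sketch helped me connect them: a test-retest correlation speaks to time-sampling error, Cronbach's alpha speaks to content-sampling error, and rater agreement speaks to interrater differences. All of the scores below are made up for illustration, not data from our class activity:

```python
# Illustration of the three sources of measurement error and the
# reliability estimates that address them. All scores are hypothetical.

def pearson_r(x, y):
    """Pearson correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def variance(vals):
    """Sample variance (n - 1 denominator)."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)

# Time-sampling error: test-retest reliability is the correlation
# between the same students' scores on two administrations.
test1 = [88, 75, 92, 66, 80]
test2 = [85, 78, 90, 70, 79]
print(round(pearson_r(test1, test2), 2))  # 0.99

# Content-sampling error: Cronbach's alpha from item-level scores
# (rows = students, columns = items on the same test).
items = [
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
]
k = len(items[0])
item_vars = [variance([row[j] for row in items]) for j in range(k)]
total_var = variance([sum(row) for row in items])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.92

# Interrater differences: simple percent agreement between two raters.
rater_a = ["pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "pass", "fail", "fail"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(agreement)  # 0.8
```

Seeing each estimate tied to the kind of error it is meant to catch made the chapter's organization feel much less arbitrary to me.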
Drummond, R. J., & Jones, K. (2010). Assessment procedures for counselors and helping professionals. Upper Saddle River, NJ: Pearson Education.