Question List
1
refers to the degree to which systematic error influences the measurement.
Bias
2
Refers to the proportion of the total variance attributed to true variance. The greater the proportion of the total variance attributed to true variance, the more reliable the test.
Reliability
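Note: in classical test theory notation this is often written rxx = σ²tr / σ², the ratio of true variance to total observed variance.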
3
is based on the idea that a person’s test scores vary from testing to testing because of variables in the testing situation.
Generalizability Theory
4
A statistic useful in describing sources of test score variability is the variance (σ²)—the standard deviation squared.
Variance
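Note: total variance can be decomposed as σ² = σ²tr + σ²e, where σ²tr is true variance and σ²e is error variance.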
5
an estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of observed test scores are equal.
Parallel Forms Reliability
6
the simplest way of determining the degree of consistency among scorers in the scoring of a test is to calculate a coefficient of correlation. This is referred to as a
Coefficient of Inter-scorer Reliability
7
seeks to estimate the extent to which specific sources of variation under defined conditions are contributing to the test score.
Domain Sampling Theory
8
Refers to consistency in measurement; something that produces similar results, not necessarily consistently good or bad, but simply consistent.
Reliability
9
Is obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
Split-Half
11
Variance from irrelevant, random sources is
Error Variance
12
may be thought of as the mean of all possible split-half correlations, corrected by the Spearman–Brown formula.
Coefficient Alpha
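Note: a standard computing formula (Cronbach's alpha) is α = [k / (k − 1)] × [1 − Σσ²i / σ²X], where k is the number of items, σ²i the variance of item i, and σ²X the variance of total test scores.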
13
Consists of unpredictable fluctuations and inconsistencies of other variables in the measurement process. This source of error fluctuates from one testing situation to another with no discernible pattern that would systematically raise or lower scores.
Random Error
14
An estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test.
Test-Retest
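Note: in practice this is typically the Pearson correlation between the two sets of scores, r = Σ(x − x̄)(y − ȳ) / √[Σ(x − x̄)² Σ(y − ȳ)²].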
15
The Nature of the Test
dynamic or static
16
is the tool used to estimate or infer the extent to which an observed score deviates from a true score.
Standard Error of Measurement
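Note: under classical test theory, SEM = σ√(1 − rxx), where σ is the standard deviation of the test scores and rxx is the reliability coefficient.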
17
also referred to as the true score (or classical) model of measurement.
Classical Test Theory (CTT)
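Note: the model's basic equation is X = T + E, where X is the observed score, T the true score, and E the error component.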
18
allows a test developer or user to estimate internal consistency reliability from a correlation between two halves of a test
Spearman-Brown Formula
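Note: the general form is rSB = n·rxy / [1 + (n − 1)·rxy], where n is the factor by which test length is changed; for the split-half case (n = 2) this reduces to rSB = 2rhh / (1 + rhh), with rhh the correlation between the two halves.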
19
Variance from true differences
True Variance
20
a statistical measure that can aid a test user in determining how large a difference should be before it is considered statistically significant.
Standard Error of the Difference
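Note: for two scores on the same scale, σdiff = √(σ²meas1 + σ²meas2), which is equivalent to σ√(2 − r1 − r2) when both tests have standard deviation σ and reliabilities r1 and r2.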
22
a person’s standing on a theoretical variable independent of any particular measurement.
Construct Score
23
Other Sources of Error
Sampling Method
24
refers to the inherent uncertainty associated with any measurement, even after care has been taken to minimize preventable mistakes (Taylor, 1997, p. 3).
Measurement Error
25
are potential sources of error variance. The examiner’s physical appearance and demeanor—even the presence or absence of an examiner—are some factors for consideration here.
Examiner-Related Variables
26
a range or band of test scores that is likely to contain the true score.
Confidence Interval
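Note: a common construction is the observed score ± z·SEM; for a 95% interval, z ≈ 1.96, i.e., roughly two standard errors of measurement on either side of the observed score.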
27
One source of variance during test construction is ______, a term that refers to variation among items within a test as well as to variation among items between tests.
Item Sampling
28
an estimate of the extent to which different forms of the same test have been affected by item sampling error or other error.
Alternate Forms Reliability
29
Is a statistic that quantifies reliability, ranging from 0 (not at all reliable) to 1 (perfectly reliable).
Reliability Coefficient
30
Does not cancel out because it influences test scores in a consistent direction; it either consistently inflates scores or consistently deflates them.
Systematic Error
31
refers to the degree of correlation among all the items on a scale. A measure of inter-item consistency is calculated from a single administration of a single form of a test.
Inter-Item Consistency
32
are measurement processes that alter what is measured
Carryover Effects
33
Tied to the measurement instrument used; reliable tests yield observed scores that closely approximate it, and without reliability a test cannot be valid.
True Score
34
Pressing emotional problems, physical discomfort, lack of sleep, and the effects of drugs or medication can all be sources of error variance
Testtaker Variables
35
Variously referred to as scorer reliability, judge reliability, observer reliability, and interrater reliability; the degree of agreement or consistency between two or more scorers (or judges, or raters) with regard to a particular measure.
Inter-Scorer Reliability
36
provides a way to model the probability that a person with X ability will be able to perform at a level of Y.
Item Response Theory (IRT)
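Note: one widely used example is the two-parameter logistic model, P(correct | θ) = 1 / (1 + e^(−a(θ − b))), where θ is the test taker's ability, b the item's difficulty, and a its discrimination.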
37
When the interval between testing is greater than six months, the estimate of test-retest reliability is often referred to as the
Coefficient of Stability