
psyass 6&7
valerie · 100 questions · 1 year ago

    Question list

  • 1

    legal terminology: if it is “executed with the proper formalities.”

    valid

  • 2

    , a judgment or estimate of how well a test measures what it purports to measure in a particular context. ● judgment based on evidence about the appropriateness of inferences drawn from test scores.

    validity

  • 3

    No test technique is “_______”. Tests may be shown to be valid within what we would characterize as “________”.

    universally valid, reasonable boundaries

  • 4

    Characterizations of a test’s validity

    acceptable, weak

  • 5

    , a logical result or deduction.

    inference

  • 6

    , process of gathering and evaluating evidence about validity.

    validation

  • 7

    role during validation is to supply validity evidence in the test manual

    test developer

  • 8

    conduct their own validation studies

    test user

  • 9

    , are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test. ● yield insights regarding a particular population of testtakers compared to the norming sample.

    local validation studies

  • 10

    it might be useful to visualize construct validity as being “_______” because every other variety of validity falls under it.

    umbrella validity

  • 11

    , different plans of attack

    strategies

  • 12

    based on an evaluation of the subjects, topics, or content covered by the items in the test. ● a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.

    content validity

  • 13

    , the universe of behavior

    assertive

  • 14

    , a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test etc. ● culmination of efforts to adequately sample the universe of content areas

    test blueprint

  • 15

    what constitutes historical fact depends to some extent on who is _______

    writing the history

  • 16

    obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures. ● a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest—the measure of interest being the criterion.

    criterion-related validity

  • 17

    the standard against which a test or a test score is evaluated.

    criterion

  • 18

    Characteristics of a criterion

    relevant, valid, uncontaminated

  • 19

    characteristic of a criterion that is pertinent or applicable to the matter at hand.

    relevant

  • 20

    characteristics of a criterion: If one test (X) is being used as the criterion to validate a second test (Y), then evidence should exist that X is valid.

    valid

  • 21

    , a criterion measure that has been based, at least in part, on predictor measures. Ex. If the guards’ opinions were used both as a predictor and as a criterion, then we would say that criterion contamination had occurred.

    criterion contamination

  • 22

    a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.

    validity coefficient

  • 23

    Two types of statistical evidence of criterion-related validity

    validity coefficient, incremental validity

  • 24

    statistic typically used to compute the validity coefficient between the two measures

    Pearson correlation coefficient

  • 25

    correlation coefficient used when the measures are rankings (e.g., self-rankings)

    Spearman rho rank-order correlation
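The two correlation statistics above can be sketched in Python. This is a minimal illustration (the function names are ours, and the rank helper ignores ties):

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    # validity coefficient: Pearson correlation between test and criterion scores
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def ranks(values):
    # convert raw scores to ranks (1 = lowest); ties are not handled in this sketch
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    # Spearman rho is simply Pearson r computed on the ranks
    return pearson_r(ranks(xs), ranks(ys))
```

A perfectly linear pair of score lists yields `pearson_r` of 1.0, while `spearman_rho` only cares about agreement in ordering.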

  • 26

    degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.

    incremental validity

  • 27

    Incremental validity is ____ when a predictor is strongly correlated with the criterion and minimally correlated with other predictors.

    highest

  • 28

    helps us decide whether the additional information a variable provides is worth the time, effort, and expense of measuring it.

    incremental validity

  • 29

    , a statistical procedure used to quantify incremental validity

    hierarchical regression
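The hierarchical idea — enter one predictor, then see how much R² rises when a second is added — can be illustrated with the standard two-predictor multiple-correlation formula. A sketch (function names are ours):

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def incremental_validity(criterion, predictor1, predictor2):
    r_y1 = pearson_r(criterion, predictor1)
    r_y2 = pearson_r(criterion, predictor2)
    r_12 = pearson_r(predictor1, predictor2)
    # step 1: variance in the criterion explained by predictor 1 alone
    r2_step1 = r_y1 ** 2
    # step 2: R^2 with both predictors (two-predictor multiple correlation)
    r2_step2 = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
    # incremental validity: what predictor 2 adds beyond predictor 1
    return r2_step2 - r2_step1
```

Note how the gain is largest when `r_y2` is high but `r_12` is near zero — exactly the condition stated in card 27.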

  • 30

    , an index of the degree to which a test score is related to some criterion measure obtained at the same time ● If test scores are obtained at about the same time as the criterion measures are obtained

    concurrent validity

  • 31

    , an index of the degree to which a test score predicts some criterion measure. ● test scores may be obtained at one time and the criterion measures obtained at a future time

    predictive validity

  • 32

    the extent to which a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion).

    base rate

  • 33

    proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute.

    hit rate

  • 34

    proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute.

    miss rate

  • 35

    , a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not.

    false positive

  • 36

    , a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did.

    false negative
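The base-rate, hit, miss, false-positive, and false-negative terms above are easy to compute from paired predictions and actual statuses. A minimal sketch (here “hit” counts any correct classification, positive or negative):

```python
def classification_rates(predicted, actual):
    """predicted/actual: parallel lists of booleans --
    does the test say / does the person really possess the attribute?"""
    n = len(actual)
    hits = sum(p == a for p, a in zip(predicted, actual))
    false_pos = sum(p and not a for p, a in zip(predicted, actual))
    false_neg = sum(a and not p for p, a in zip(predicted, actual))
    return {
        "base rate": sum(actual) / n,          # proportion who actually possess it
        "hit rate": hits / n,                  # proportion classified correctly
        "miss rate": (false_pos + false_neg) / n,
        "false positive rate": false_pos / n,
        "false negative rate": false_neg / n,
    }
```

By construction, hit rate plus miss rate equals 1, and the miss rate splits into the false-positive and false-negative rates.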

  • 37

    a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.

    construct validity

  • 38

    unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance. ● the unifying concept for all validity evidence

    construct validity

  • 39

    an informed, scientific idea developed or hypothesized to describe or explain behavior.

    construct

  • 40

    , how uniform a test is in measuring a single concept.

    homogeneity

  • 41

    evidence that a test uniformly measures a single concept (its homogeneity) provides this kind of construct-validity evidence.

    evidence of homogeneity

  • 42

    , If a test score purports to be a measure of a construct that could be expected to change over time, then the test score, too, should show the same progressive changes with age

    evidence of changes with age

  • 43

    , evidence that test scores change as a result of some experience between a pretest and a posttest can be evidence of construct validity.

    evidence of pretest–posttest changes

  • 44

    if a test is a valid measure of a particular construct, then test scores from groups of people who would be presumed to differ with respect to that construct should have correspondingly different test scores.

    Evidence from distinct groups

  • 45

    , if scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, and already validated tests designed to measure the same (or a similar) construct

    convergent evidence

  • 46

    , a validity coefficient showing little (statistically insignificant) relationship between test scores and/or other variables with which scores on the test being construct-validated should not theoretically be correlated provides this

    discriminant evidence

  • 47

    an experimental technique useful for examining both convergent and discriminant validity evidence ● the matrix or table that results from correlating variables (traits) within and between methods.

    multitrait-multimethod matrix

  • 48

    the correlation between measures of the same trait obtained by different methods.

    convergent validity

  • 49

    , the similarity in scores due to the use of the same method. ● correlations of different traits via the same method represents this

    method variance

  • 50

    , shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ ● can be used to give a more precise evaluation of the homogeneity/unidimensionality of the test.

    factor analysis

  • 51

    , entails estimating or extracting factors; deciding how many factors to retain; and rotating factors to an interpretable orientation.

    exploratory factor analysis

  • 52

    , researchers test the degree to which a hypothetical model (which includes factors) fits the actual data.

    confirmatory factor analysis

  • 53

    , each test is thought of as a vehicle carrying a certain amount of one or more abilities ● conveys information about the extent to which the factor determines the test score or scores.

    factor loading

  • 54

    condemned the trinitarian view of validity as fragmented and incomplete.

    Messick

  • 55

    , one that takes into account everything from the implications of test scores in terms of societal values to the consequences of test use.

    unitary view of validity

  • 56

    , a judgment regarding how well a test measures what it purports to measure at the time and place that the variable being measured (behavior, cognition, or emotion) is actually emitted.

    ecological validity

  • 57

    the greater the ecological validity of a test or other measurement procedure, the ______ of the measurement

    greater the generalizability

  • 58

    in-the-moment and in-the-place evaluation of targeted variables (behaviors, cognitions, and emotions) in a natural, naturalistic, or real-life context.

    ecological momentary assessment (EMA)

  • 59

    relates more to what a test appears to measure to the person being tested than to what the test actually measures.

    face validity

  • 60

    a lack of face validity contributes to a ________ in the effectiveness of the test

    lack of confidence

  • 61

    , if a test definitely appears to measure what it purports to measure “on the face of it.”

    High in face validity

  • 62

    factor inherent in a test that systematically prevents accurate, impartial measurement

    test bias

  • 63

    , when the use of a predictor results in consistent underprediction or overprediction of a specific group’s performance or outcomes.

    intercept bias

  • 64

    when a predictor has a weaker correlation with an outcome for specific groups.

    slope bias

  • 65

    a judgment resulting from the intentional or unintentional misuse of a rating scale

    rating error

  • 66

    , a numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by a rating scale.

    rating

  • 67

    a scale of numerical or word descriptors.

    rating scale

  • 68

    , an error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading.

    leniency error

  • 69

    , less than accurate rating or error in evaluation due to the rater’s tendency to be overly critical

    severity error

  • 70

    , the rater exhibits a general and systematic reluctance to giving ratings at either the positive or the negative extreme.

    central tendency error

  • 71

    one way to overcome the restriction-of-range rating errors (central tendency, leniency, severity errors)

    rankings

  • 72

    a procedure that requires the rater to measure individuals against one another instead of against an absolute scale. ● the rater is forced to select first, second, third choices, and so forth.

    rankings

  • 73

    (central tendency, leniency, severity errors)

    restriction-of-range rating errors

  • 74

    , tendency to give a particular ratee a higher rating than the ratee objectively deserves because of the rater’s failure to discriminate among conceptually distinct and potentially independent aspects of a ratee’s behavior.

    halo effect

  • 75

    the extent to which a test is used in an impartial, just, and equitable way.

    test fairness

  • 76

    , practical value of using a test to aid in decision making and improve efficiency. ● can also refer to the usefulness or practical value of a training program or intervention.

    test utility

  • 77

    anything from a single test to a large-scale testing program that employs a battery of tests.

    testing

  • 78

    disadvantages, losses, or expenses in both economic and noneconomic terms. ● relating to expenditures associated with testing or not testing.

    costs

  • 79

    the higher the ______ of test scores for making a particular decision, the higher the utility of the test is likely to be.

    criterion-related validity

  • 80

    costs in terms of loss.

    noneconomic costs

  • 81

    , profits, gains, or advantages.

    Benefits

  • 82

    family of techniques that entail a cost–benefit analysis designed to yield information relevant to a decision about the usefulness and/or practical value of a tool of assessment. ● umbrella term covering various possible methods

    utility analysis

  • 83

    , can provide an indication of the likelihood that a testtaker will score within some interval of scores on a criterion measure—an interval that may be categorized as “passing,” “acceptable,” or “failing.”

    expectancy table
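A simple expectancy table can be built by tallying criterion outcomes within test-score intervals and converting the counts to proportions. A sketch with illustrative conventions (lower bound inclusive, upper bound exclusive):

```python
from collections import defaultdict

def expectancy_table(scores, outcomes, bins):
    """scores: test scores; outcomes: criterion category per person,
    e.g. 'passing' / 'failing'; bins: list of (low, high) score intervals."""
    table = {b: defaultdict(int) for b in bins}
    for s, o in zip(scores, outcomes):
        for low, high in bins:
            if low <= s < high:
                table[(low, high)][o] += 1
                break
    # convert counts to proportions within each score interval
    result = {}
    for b, counts in table.items():
        total = sum(counts.values())
        result[b] = {o: c / total for o, c in counts.items()} if total else {}
    return result
```

Reading a row then answers the expectancy question directly: of testtakers scoring in this interval, what proportion ended up “passing,” “acceptable,” or “failing” on the criterion?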

  • 84

    provide an estimate of the extent to which inclusion of a particular test in the selection system will improve selection. ● provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs, given different combinations of three variables: the test’s validity, the selection ratio used, and the base rate.

    Taylor-Russell tables

  • 85

    determining the increase over current procedures

    Taylor-Russell tables

  • 86

    , numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired.

    selection ratio

  • 87

    , obtaining the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures. ● determining the increase in average score on some criterion measure

    Naylor-Shine tables

  • 88

    used to calculate the dollar amount of a utility gain resulting from the use of a particular selection instrument under specified conditions.

    Brogden-Cronbach-Gleser formula
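One common statement of the formula puts the utility gain at (number selected) × (average tenure) × (validity coefficient) × (SD of job performance in monetary terms) × (mean standardized test score of those selected), minus the cost of the testing program. The parameterization below — in particular folding the cost term into one total — is our simplification for illustration:

```python
def bcg_utility_gain(n_selected, tenure_years, validity,
                     sd_performance_dollars, mean_z_selected,
                     total_testing_cost):
    # productivity gain attributable to selection, minus testing costs
    return (n_selected * tenure_years * validity
            * sd_performance_dollars * mean_z_selected
            - total_testing_cost)
```

For example, selecting 10 hires with 2-year tenure using a test of validity .50, with performance SD of $10,000, mean selected z of 1.0, and $20,000 in testing costs gives a gain of $80,000.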

  • 89

    an estimate of the benefit (monetary) of using a particular test or selection method

    utility gain

  • 90

    a test obviously has ______ if the hit rate is _____ without using it.

    no value

  • 91

    , reference point derived as a result of a judgment and used to divide a set of data into two or more classifications, with some action to be taken or some inference to be made on the basis of these classifications.

    cutoff score/cut score

  • 92

    , a reference point—in a distribution of test scores used to divide a set of data into two or more classifications— that is set based on norm-related considerations rather than on the relationship of test scores to a criterion.

    relative cut score

  • 93

    , a reference point that is typically set with reference to a judgment concerning a minimum level of proficiency required to be included in a particular classification

    fixed cut score

  • 94

    , the use of two or more cut scores with reference to one predictor for the purpose of categorizing testtakers.

    multiple cut scores

  • 95

    , one collective element of a multistage decision-making process in which the achievement of a particular cut score on one test is necessary in order to advance to the next stage of evaluation in the selection process.

    multiple hurdles

  • 96

    , assumption is made that high scores on one attribute can, in fact, “balance out” or compensate for low scores on another attribute. ● a person strong in some areas and weak in others can perform as successfully in a position as a person with moderate abilities in all areas relevant to the position in question.

    compensatory model of selection

  • 97

    , statistical tool ideally suited for making selection decisions within the framework of a compensatory model.

    multiple regression

  • 98

    cut scores are typically set based on testtakers’ performance across all the items on the test; some portion of the total number of items on the test must be scored “correct” in order for the testtaker to “pass” the test.

    classical test theory

  • 99

    can be applied to personnel selection tasks as well as to questions regarding the presence or absence of a particular trait, attribute, or ability.

    Angoff Method (William Angoff)
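In the Angoff method, each judge estimates the probability that a minimally competent testtaker would answer each item correctly; the cut score is the sum of the item-level average estimates. A sketch:

```python
from statistics import mean

def angoff_cut_score(judge_ratings):
    """judge_ratings[j][i]: judge j's estimated probability that a minimally
    competent testtaker answers item i correctly."""
    n_items = len(judge_ratings[0])
    # average the judges' estimates item by item, then sum across items
    return sum(mean(j[i] for j in judge_ratings) for i in range(n_items))
```

With two judges rating a two-item test as [.5, 1.0] and [.5, 0.0], the item averages are .5 and .5, giving a cut score of 1 item correct.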

  • 100

    collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest. ● cut score is set on the test that best discriminates the two groups’ test performance.

    Known Groups Method

  • AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    valerie · 100問 · 2年前

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    100問 • 2年前
    valerie

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    valerie · 100問 · 2年前

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    AB PSY CHAPTER 7: Eating and Sleep-Wake Disorders

    100問 • 2年前
    valerie

    AB PSY CHAP 4 ANXIETY

    AB PSY CHAP 4 ANXIETY

    valerie · 71問 · 2年前

    AB PSY CHAP 4 ANXIETY

    AB PSY CHAP 4 ANXIETY

    71問 • 2年前
    valerie

    ITC PERSON-CENTERED

    ITC PERSON-CENTERED

    valerie · 34問 · 2年前

    ITC PERSON-CENTERED

    ITC PERSON-CENTERED

    34問 • 2年前
    valerie

    itc

    itc

    valerie · 15問 · 2年前

    itc

    itc

    15問 • 2年前
    valerie

    io 2

    io 2

    valerie · 80問 · 2年前

    io 2

    io 2

    80問 • 2年前
    valerie

    psy ass 3

    psy ass 3

    valerie · 50問 · 2年前

    psy ass 3

    psy ass 3

    50問 • 2年前
    valerie

    C3 PSYASS

    C3 PSYASS

    valerie · 91問 · 2年前

    C3 PSYASS

    C3 PSYASS

    91問 • 2年前
    valerie

    io 5

    io 5

    valerie · 44問 · 2年前

    io 5

    io 5

    44問 • 2年前
    valerie

    psyass 4

    psyass 4

    valerie · 57問 · 2年前

    psyass 4

    psyass 4

    57問 • 2年前
    valerie

    psyass 5

    psyass 5

    valerie · 65問 · 2年前

    psyass 5

    psyass 5

    65問 • 2年前
    valerie

    psyass 6&7 pt2

    psyass 6&7 pt2

    valerie · 7問 · 1年前

    psyass 6&7 pt2

    psyass 6&7 pt2

    7問 • 1年前
    valerie

    io 7

    io 7

    valerie · 44問 · 1年前

    io 7

    io 7

    44問 • 1年前
    valerie

    io 6

    io 6

    valerie · 47問 · 1年前

    io 6

    io 6

    47問 • 1年前
    valerie

    io 8

    io 8

    valerie · 58問 · 1年前

    io 8

    io 8

    58問 • 1年前
    valerie

    io 9

    io 9

    valerie · 27問 · 1年前

    io 9

    io 9

    27問 • 1年前
    valerie

    io 9.2

    io 9.2

    valerie · 45問 · 1年前

    io 9.2

    io 9.2

    45問 • 1年前
    valerie

    io 10

    io 10

    valerie · 24問 · 1年前

    io 10

    io 10

    24問 • 1年前
    valerie

    DMH 1

    DMH 1

    valerie · 31問 · 1年前

    DMH 1

    DMH 1

    31問 • 1年前
    valerie

    io 10.2

    io 10.2

    valerie · 13問 · 1年前

    io 10.2

    io 10.2

    13問 • 1年前
    valerie

    io 11

    io 11

    valerie · 45問 · 1年前

    io 11

    io 11

    45問 • 1年前
    valerie

    DMH 2

    DMH 2

    valerie · 38問 · 1年前

    DMH 2

    DMH 2

    38問 • 1年前
    valerie

    io 11.2

    io 11.2

    valerie · 32問 · 1年前

    io 11.2

    io 11.2

    32問 • 1年前
    valerie

    io 12

    io 12

    valerie · 27問 · 1年前

    io 12

    io 12

    27問 • 1年前
    valerie

    psyass 11

    psyass 11

    valerie · 65問 · 1年前

    psyass 11

    psyass 11

    65問 • 1年前
    valerie

    io 13

    io 13

    valerie · 45問 · 1年前

    io 13

    io 13

    45問 • 1年前
    valerie

    io 13.2

    io 13.2

    valerie · 34問 · 1年前

    io 13.2

    io 13.2

    34問 • 1年前
    valerie

    io 14

    io 14

    valerie · 28問 · 1年前

    io 14

    io 14

    28問 • 1年前
    valerie

    14.2

    14.2

    valerie · 33問 · 1年前

    14.2

    14.2

    33問 • 1年前
    valerie

    PFA

    PFA

    valerie · 8問 · 1年前

    PFA

    PFA

    8問 • 1年前
    valerie

    AUDIT 1

    AUDIT 1

    valerie · 67問 · 1年前

    AUDIT 1

    AUDIT 1

    67問 • 1年前
    valerie

    audit 2

    audit 2

    valerie · 70問 · 1年前

    audit 2

    audit 2

    70問 • 1年前
    valerie

    audit 3

    audit 3

    valerie · 47問 · 1年前

    audit 3

    audit 3

    47問 • 1年前
    valerie

    問題一覧

  • 1

    legal terminology: if it is “executed with the proper formalities

    valid

  • 2

    , a judgment or estimate of how well a test measures what it purports to measure in a particular context. ● judgment based on evidence about the appropriateness of inferences drawn from test scores.

    validity

  • 3

    No test technique is “_______”. Tests may be shown to be valid within what we would characterize as “________”.

    universally valid, reasonable boundaries

  • 4

    Characterizations of the validity test

    acceptable, weak

  • 5

    , a logical result or deduction.

    inference

  • 6

    , process of gathering and evaluating evidence about validity.

    validation

  • 7

    role is to supply validity evidence in the test manual in validation

    test developer

  • 8

    conduct their own validation studies

    test user

  • 9

    , are absolutely necessary when the test user plans to alter in some way the format, instructions, language, or content of the test. ● yield insights regarding a particular population of testtakers compared to the norming sample.

    local validation studies

  • 10

    it might be useful to visualize construct validity as being “_______” because every other variety of validity falls under it.

    umbrella validity

  • 11

    , different plans of attack

    strategies

  • 12

    based on an evaluation of the subjects, topics, or content covered by the items in the test. ● a judgment of how adequately a test samples behavior representative of the universe of behavior that the test was designed to sample.

    content validity

  • 13

    , the universe of behavior

    assertive

  • 14

    , a plan regarding the types of information to be covered by the items, the number of items tapping each area of coverage, the organization of the items in the test etc. ● culmination of efforts to adequately sample the universe of content areas

    test blueprint

  • 15

    what constitutes historical fact depends to some extent on who is _______

    writing the history

  • 16

    obtained by evaluating the relationship of scores obtained on the test to scores on other tests or measures. ● a judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest—the measure of interest being the criterion.

    criterion-related validity

  • 17

    the standard against which a test or a test score is evaluated.

    criterion

  • 18

    Characteristics of a criterion

    relevant, valid, uncontaminated

  • 19

    Characteristics of a criterion that is pertinent or applicable to the matter at hand.

    relevant

  • 20

    characteristics of a criterion: If one test (X) is being used as the criterion to validate a second test (Y), then evidence should exist that X is valid.

    valid

  • 21

    , a criterion measure that has been based, at least in part, on predictor measures. Ex. If the guards’ opinions were used both as a predictor and as a criterion, then we would say that criterion contamination had occurred.

    criterion contamination

  • 22

    a correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure.

    validity coefficient

  • 23

    Two types of statistical evidence of Criterion-related validity

    validity coefficient, incremental validity

  • 24

    used to determine the validity between the two measures in the validity coefficient

    Pearson correlation coefficient

  • 25

    self-ranking =

    Spearman rho rank-order correlation

  • 26

    degree to which an additional predictor explains something about the criterion measure that is not explained by predictors already in use.

    incremental validity

  • 27

    Incremental validity is ____ when a predictor is strongly correlated with the criterion and minimally correlated with other predictors.

    highest

  • 28

    helps us decide whether the additional information a variable provides is worth the time, effort, and expense of measuring it.

    incremental validity

  • 29

    , a statistical procedure to estimate the quantitative of incremental validity

    hierarchical regression

  • 30

    , an index of the degree to which a test score is related to some criterion measure obtained at the same time ● If test scores are obtained at about the same time as the criterion measures are obtained

    concurrent validity

  • 31

    , an index of the degree to which a test score predicts some criterion measure. ● test scores may be obtained at one time and the criterion measures obtained at a future time

    predictive validity

  • 32

    a particular trait, behavior, characteristic, or attribute exists in the population (expressed as a proportion).

    base rate

  • 33

    proportion of people a test accurately identifies as possessing or exhibiting a particular trait, behavior, characteristic, or attribute.

    hit rate

  • 34

    proportion of people the test fails to identify as having, or not having, a particular characteristic or attribute.

    miss rate

  • 35

    , a miss wherein the test predicted that the testtaker did possess the particular characteristic or attribute being measured when in fact the testtaker did not.

    false positive

  • 36

    , a miss wherein the test predicted that the testtaker did not possess the particular characteristic or attribute being measured when the testtaker actually did.

    false negative

  • 37

    a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called a construct.

    construct validity

  • 38

    unobservable, presupposed (underlying) traits that a test developer may invoke to describe test behavior or criterion performance. ● the unifying concept for all validity evidence

    construct validity

  • 39

    an informed, scientific idea developed or hypothesized to describe or explain behavior.

    construct

  • 40

    , how uniform a test is in measuring a single concept.

    homogeneity

  • 41

    homogeneity, how uniform a test is in measuring a single concept.

    evidence of homogeneity

  • 42

    , If a test score purports to be a measure of a construct that could be expected to change over time, then the test score, too, should show the same progressive changes with age

    evidence of changes with age

  • 43

    , evidence that test scores change as a result of some experience between a pretest and a posttest can be evidence of construct validity.

    evidence of pretest–posttest changes

  • 44

    if a test is a valid measure of a particular construct, then test scores from groups of people who would be presumed to differ with respect to that construct should have correspondingly different test scores.

    Evidence from distinct groups

  • 45

    , if scores on the test undergoing construct validation tend to correlate highly in the predicted direction with scores on older, more established, and already validated tests designed to measure the same (or a similar) construct

    convergent evidence

  • 46

    , (0) a validity coefficient showing little (statistically insignificant) relationship between test scores and/or other variables with which scores on the test being construct-validated should not theoretically be correlated provides this

    discriminant evidence

  • 47

    an experimental technique useful for examining both convergent and discriminant validity evidence ● the matrix or table that results from correlating variables (traits) within and between methods.

    multitrait-multimethod matrix

  • 48

    is the correlation between measures of the same trait but different methods.

    convergent validity

  • 49

    , the similarity in scores due to the use of the same method. ● correlations of different traits via the same method represents this

    method variance

  • 50

    , shorthand term for a class of mathematical procedures designed to identify factors or specific variables that are typically attributes, characteristics, or dimensions on which people may differ ● can be used to give a more precise evaluation of the homogeneity/unidimensionality of the test.

    factor analysis

  • 51

    , estimating, or extracting factors; deciding how many factors to retain; and rotating factors to an interpretable orientation.

    exploratory factor analysis

  • 52

    , researchers test the degree to which a hypothetical model (which includes factors) fits the actual data.

    confirmatory factor analysis

  • 53

    , each test is thought of as a vehicle carrying a certain amount of one or more abilities ● conveys information about the extent to which the factor determines the test score or scores.

    factor loading

  • 54

    condemned trinitarian view as fragmented and incomplete.

    Messick

  • 55

    , one that takes into account everything from the implications of test scores in terms of societal values to the consequences of test use.

    unitary view of validity

  • 56

    , a judgment regarding how well a test measures what it purports to measure at the time and place that the variable being measured (behavior, cognition, or emotion) is actually emitted.

    ecological validity

  • 57

    d greater the ecological validity of a test or other measurement procedure, the ______ of the measurement

    greater the generalizability

  • 58

    in-the-moment and in-the-place evaluation of targeted variables (behaviors, cognitions, and emotions) in a natural, naturalistic, or real-life context.

    ecological momentary assessment (EMA)

  • 59

    what a test appears to measure to the person being tested than to what the test actually measures.

    face validity

  • 60

    lack of face validity contribute to a ________ in the effectiveness of the test

    lack of confidence

  • 61

    , if a test definitely appears to measure what it purports to measure “on the face of it,”.

    High in face validity

  • 62

    factor inherent in a test that systematically prevents accurate, impartial measurement

    test Bias

  • 63

    , when the use of a predictor results in consistent underprediction or overprediction of a specific group’s performance or outcomes.

    intercept bias

  • 64

    when a predictor has a weaker correlation with an outcome for specific groups.

    slope bias

  • 65

    a judgment resulting from the intentional or unintentional misuse of a rating scale

    rating error

  • 66

    , a numerical or verbal judgment (or both) that places a person or an attribute along a continuum identified by rating scale.

    rating

  • 67

    a scale of numerical or word descriptors.

    rating scale

  • 68

    , an error in rating that arises from the tendency on the part of the rater to be lenient in scoring, marking, and/or grading.

    leniency error

  • 69

    , less than accurate rating or error in evaluation due to the rater’s tendency to be overly critical

    severity error

  • 70

    , the rater exhibits a general and systematic reluctance to give ratings at either the positive or the negative extreme.

    central tendency error

  • 71

    one way to overcome the restriction-of-range rating errors (central tendency, leniency, severity errors)

    rankings

  • 72

    a procedure that requires the rater to measure individuals against one another instead of against an absolute scale. ● the rater is forced to select first, second, third choices, and so forth.

    rankings

  • 73

    (central tendency, leniency, severity errors)

    restriction-of-range rating errors

  • 74

    , tendency to give a particular ratee a higher rating than the ratee objectively deserves because of the rater’s failure to discriminate among conceptually distinct and potentially independent aspects of a ratee’s behavior.

    halo effect

  • 75

    the extent to which a test is used in an impartial, just, and equitable way.

    test fairness

  • 76

    , practical value of using a test to aid in decision making and improve efficiency. ● also refer to the usefulness or practical value of a training program or intervention.

    test utility

  • 77

    anything from a single test to a large-scale testing program that employs a battery of tests.

    testing

  • 78

    disadvantages, losses, or expenses in both economic and noneconomic terms. ● relating to expenditures associated with testing or not testing.

    costs

  • 79

    the higher the ______ of test scores for making a particular decision, the higher the utility of the test is likely to be.

    criterion-related validity

  • 80

    , costs in terms of loss rather than monetary expense.

    noneconomic costs

  • 81

    , profits, gains, or advantages.

    Benefits

  • 82

    family of techniques that entail a cost–benefit analysis designed to yield information relevant to a decision about the usefulness and/or practical value of a tool of assessment. ● umbrella term covering various possible methods

    utility analysis

  • 83

    , can provide an indication of the likelihood that a testtaker will score within some interval of scores on a criterion measure—an interval that may be categorized as “passing,” “acceptable,” or “failing.”

    expectancy table
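As a minimal sketch of how such a table is built (all score intervals, outcome labels, and counts below are hypothetical), tabulate the percentage of testtakers in each score interval who later fell into each criterion category:

```python
from collections import Counter

# Hypothetical (score interval, criterion outcome) pairs for nine testtakers.
records = [
    ("90-99", "passing"), ("90-99", "passing"), ("90-99", "failing"),
    ("80-89", "passing"), ("80-89", "failing"), ("80-89", "failing"),
    ("70-79", "passing"), ("70-79", "failing"), ("70-79", "failing"),
]

counts = Counter(records)                               # (interval, outcome) -> n
totals = Counter(interval for interval, _ in records)   # interval -> n

# Expectancy table: percentage of each score interval reaching each outcome.
table = {
    (interval, outcome): round(100 * n / totals[interval])
    for (interval, outcome), n in counts.items()
}
print(table[("90-99", "passing")])  # 67
```

Each cell reads as a likelihood statement, e.g. "67% of testtakers scoring 90–99 went on to a passing criterion outcome."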

  • 84

    provide an estimate of the extent to which inclusion of a particular test in the selection system will improve selection. ● provide an estimate of the percentage of employees hired by the use of a particular test who will be successful at their jobs, given different combinations of three variables: the test’s validity, the selection ratio used, and the base rate.

    Taylor-Russell tables

  • 85

    used for determining the likely increase over current selection procedures in the percentage of successful hires

    Taylor-Russell tables

  • 86

    , numerical value that reflects the relationship between the number of people to be hired and the number of people available to be hired.

    selection ratio
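The ratio itself is simple arithmetic; a sketch with hypothetical hiring figures:

```python
def selection_ratio(n_to_hire: int, n_available: int) -> float:
    """Selection ratio = number of people to be hired /
    number of people available to be hired."""
    if n_available <= 0:
        raise ValueError("need at least one available applicant")
    return n_to_hire / n_available

# Hypothetical example: 5 openings, 50 applicants.
print(selection_ratio(5, 50))  # 0.1
```

A low ratio (many applicants per opening) generally lets a valid test improve selection more than a high one.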

  • 87

    , obtaining the difference between the means of the selected and unselected groups to derive an index of what the test (or some other tool of assessment) is adding to already established procedures. ● determining the increase in average score on some criterion measure

    Naylor-Shine tables

  • 88

    used to calculate the dollar amount of a utility gain resulting from the use of a particular selection instrument under specified conditions.

    Brogden-Cronbach-Gleser formula
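A sketch of the formula in its common presentation: utility gain = (N)(T)(r_xy)(SD_y)(Z̄) − (N)(C). All figures below are hypothetical, and note that presentations vary in exactly which cost figure enters the second term:

```python
def bcg_utility_gain(n_selected, tenure_years, validity, sd_y, mean_z, cost_per_person):
    """Brogden-Cronbach-Gleser utility gain: dollar benefit of using a test.

    n_selected      N: number of applicants selected
    tenure_years    T: expected average tenure of those selected
    validity        r_xy: criterion-related validity of the test
    sd_y            SD_y: SD of job performance expressed in dollars
    mean_z          Z-bar: mean standardized test score of selectees
    cost_per_person C: cost of testing per person
    """
    return (n_selected * tenure_years * validity * sd_y * mean_z
            - n_selected * cost_per_person)

# Hypothetical: 10 hires, 2-year tenure, validity .40,
# SD_y = $10,000, mean selectee z = 1.0, $500 testing cost each.
print(bcg_utility_gain(10, 2, 0.40, 10_000, 1.0, 500))  # 75000.0
```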

  • 89

    an estimate of the benefit (monetary) of using a particular test or selection method

    utility gain

  • 90

    a test has obviously ______ if the hit rate is ______ without using it.

    no value, the same
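A minimal numeric sketch of the comparison (hypothetical outcomes, coded 1 = successful, 0 = not): the test adds value only to the extent that the hit rate among those it selects exceeds the base rate of success when everyone is taken:

```python
# Hypothetical outcomes if everyone were hired (base rate of success) ...
outcomes_all = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
# ... versus outcomes among only those who passed the test.
outcomes_test_selected = [1, 1, 0, 1, 1]

base_rate = sum(outcomes_all) / len(outcomes_all)                     # 0.5
hit_rate = sum(outcomes_test_selected) / len(outcomes_test_selected)  # 0.8

# Positive difference = the test improves on selecting without it.
print(round(hit_rate - base_rate, 2))  # 0.3
```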

  • 91

    , reference point derived as a result of a judgment and used to divide a set of data into two or more classifications, with some action to be taken or some inference to be made on the basis of these classifications.

    cutoff score/cut score

  • 92

    , a reference point—in a distribution of test scores used to divide a set of data into two or more classifications— that is set based on norm-related considerations rather than on the relationship of test scores to a criterion.

    relative cut score

  • 93

    , a reference point that is typically set with reference to a judgment concerning a minimum level of proficiency required to be included in a particular classification

    fixed cut score

  • 94

    , the use of two or more cut scores with reference to one predictor for the purpose of categorizing testtakers.

    multiple cut scores

  • 95

    , one collective element of a multistage decision-making process in which the achievement of a particular cut score on one test is necessary in order to advance to the next stage of evaluation in the selection process.

    multiple hurdles
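The stage-by-stage logic can be sketched as follows (stage names and cut scores are hypothetical): failing any single hurdle ends the process, regardless of performance elsewhere.

```python
# Hypothetical multistage screen: each stage is (name, cut score).
hurdles = [("screening test", 60), ("work sample", 70), ("interview", 75)]

def passes_all_hurdles(scores):
    """Advance stage by stage; a score below any cut ends the process."""
    for stage, cut in hurdles:
        if scores.get(stage, 0) < cut:
            return False
    return True

print(passes_all_hurdles({"screening test": 65, "work sample": 72, "interview": 80}))  # True
print(passes_all_hurdles({"screening test": 65, "work sample": 68}))  # False
```

Contrast this with the compensatory model below, where a weak stage can be offset by a strong one.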

  • 96

    , assumption is made that high scores on one attribute can, in fact, “balance out” or compensate for low scores on another attribute. ● a person strong in some areas and weak in others can perform as successfully in a position as a person with moderate abilities in all areas relevant to the position in question.

    compensatory model of selection

  • 97

    , statistical tool that is ideally suited for making such selection decisions within the framework of a compensatory model.

    multiple regression
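A sketch of why regression fits the compensatory model, using hypothetical regression weights (as if obtained from a prior validation study): each predictor contributes to the predicted criterion in proportion to its weight, so a high score on one can offset a low score on another.

```python
# Hypothetical weights from a validation study:
# predicted criterion = 10 + 0.5*verbal + 0.3*quantitative
def predicted_criterion(verbal, quantitative):
    """Compensatory prediction: weighted sum of predictor scores."""
    return 10 + 0.5 * verbal + 0.3 * quantitative

# A strong verbal score compensates for a weak quantitative score:
print(predicted_criterion(90, 50))  # 70.0
# A balanced profile can reach the same predicted criterion level.
print(predicted_criterion(70, 80))
```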

  • 98

    cut scores are typically set based on testtakers’ performance across all the items on the test; some portion of the total number of items on the test must be scored “correct” in order for the testtaker to “pass” the test.

    classical test theory

  • 99

    can be applied to personnel selection tasks as well as to questions regarding the presence or absence of a particular trait, attribute, or ability.

    Angoff Method (William Angoff)
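A sketch of the core computation (judges' probability estimates below are hypothetical): each judge estimates, per item, the probability that a minimally competent testtaker answers correctly; the cut score is the sum of the per-item mean estimates.

```python
# Hypothetical Angoff ratings: rows = items, columns = judges.
# Each value: estimated probability a minimally competent testtaker
# answers that item correctly.
ratings = [
    [0.8, 0.9, 0.7],   # item 1
    [0.5, 0.6, 0.4],   # item 2
    [0.9, 0.8, 1.0],   # item 3
]

def angoff_cut_score(ratings):
    """Cut score = sum over items of the judges' mean probability estimate."""
    return sum(sum(item) / len(item) for item in ratings)

print(round(angoff_cut_score(ratings), 2))  # 2.2
```

Here a testtaker would need roughly 2.2 of the 3 items correct, i.e. 3 of 3 in whole-item terms, to pass.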

  • 100

    collection of data on the predictor of interest from groups known to possess, and not to possess, a trait, attribute, or ability of interest. ● cut score is set on the test that best discriminates the two groups’ test performance.

    Known Groups Method
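One simple way to operationalize "best discriminates the two groups" (scores below are hypothetical) is to try each observed score as a candidate cut and keep the one that classifies the two known groups most accurately:

```python
# Hypothetical scores: one group known to possess the attribute, one known not to.
has_trait = [78, 82, 85, 90, 74]
lacks_trait = [55, 60, 66, 70, 58]

def best_cut_score(positives, negatives):
    """Sweep observed scores as candidate cuts; keep the cut that classifies
    the two known groups most accurately (score >= cut -> 'has trait')."""
    candidates = sorted(set(positives + negatives))
    def accuracy(cut):
        hits = sum(s >= cut for s in positives) + sum(s < cut for s in negatives)
        return hits / (len(positives) + len(negatives))
    return max(candidates, key=accuracy)

print(best_cut_score(has_trait, lacks_trait))  # 74
```

With these data a cut of 74 separates the groups perfectly; real distributions usually overlap, so the chosen cut trades off false positives against false negatives.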