Introduction to Psychological Testing
Expert-defined terms from the Professional Certificate in Psychological Testing course at Greenwich School of Business and Finance. Free to read, free to share, paired with a globally recognised certification pathway.
Introduction to Psychological Testing
Psychological testing is a crucial component of the field of psychology that involves the use of standardized instruments to measure an individual's mental functions and behavior. These tests are designed to measure specific traits, behaviors, or cognitive abilities and provide valuable information that can aid in diagnosis, treatment planning, and research. In the Professional Certificate in Psychological Testing course, students are introduced to the fundamental concepts, principles, and methods of psychological testing.
Assessment
Assessment refers to the process of gathering and evaluating information about an individual's psychological functioning. It involves the systematic collection of data to make informed decisions about diagnosis, treatment planning, and intervention strategies.
Norms
Norms are the established standards or reference points used to interpret test scores. Norms provide a frame of reference for understanding test results and determining the significance of an individual's scores in relation to the general population.
Reliability
Reliability refers to the consistency and stability of test scores over time and across administrations. A reliable test produces consistent results when administered repeatedly to the same individual or group under similar conditions. Reliability is essential for ensuring that test scores accurately reflect an individual's true abilities or characteristics.
Validity
Validity refers to the extent to which a test measures what it is intended to measure. Validity is a critical aspect of psychological testing as it ensures that test results are meaningful, relevant, and applicable to the intended purpose of the assessment.
Standardization
Standardization involves the development of uniform procedures for administering and scoring a test. Standardized tests are designed to be administered and scored in a consistent manner to facilitate accurate comparisons across individuals and groups.
Intelligence Testing
Intelligence testing is a form of psychological assessment that aims to measure an individual's cognitive abilities. Intelligence tests, such as the Wechsler Adult Intelligence Scale (WAIS) and the Stanford-Binet Intelligence Scale, are commonly used to assess intellectual abilities in clinical, educational, and research settings.
Personality Testing
Personality testing is a type of psychological assessment that focuses on evaluating an individual's characteristic patterns of thinking, feeling, and behaving. Personality tests, such as the Minnesota Multiphasic Personality Inventory (MMPI) and the Big Five Inventory (BFI), are used to assess various aspects of personality and provide valuable insights into an individual's emotional, interpersonal, and behavioral tendencies.
Projective Testing
Projective testing is a method of psychological assessment that involves presenting ambiguous stimuli to which individuals respond freely. Projective tests, such as the Rorschach Inkblot Test and the Thematic Apperception Test (TAT), are designed to elicit unconscious thoughts, feelings, and motivations that may not be readily accessible through traditional assessment methods.
Neuropsychological Testing
Neuropsychological testing is a specialized form of psychological assessment that evaluates cognitive functioning in relation to brain structure and function. Neuropsychological tests, such as the Trail Making Test and the Wisconsin Card Sorting Test, are used to assess the impact of brain injuries, neurodevelopmental disorders, and neurodegenerative diseases on an individual's cognitive abilities.
Computerized Testing
Computerized testing refers to the administration of psychological tests using computers or other digital devices. Computerized tests offer several advantages, including standardized administration, automated scoring, immediate feedback, and the ability to customize test content based on the individual's responses. Computerized testing is increasingly used in clinical, educational, and occupational settings to streamline the assessment process and enhance the accuracy and efficiency of test administration.
Adaptive Testing
Adaptive testing is a form of computerized testing that adjusts the difficulty level of test items based on the individual's previous responses. Adaptive tests use sophisticated algorithms to tailor the test content to the individual's ability level, allowing for more precise and efficient measurement of the individual's skills or abilities. Adaptive testing is particularly useful for assessing cognitive abilities, academic achievement, and job-related skills in a personalized and dynamic manner.
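As a toy illustration of the adjustment logic only — operational adaptive tests select items with IRT-based algorithms, not a fixed step — here is a minimal sketch; the function name and step rule are invented for this example:

```python
def next_difficulty(current, was_correct, step=1):
    """Toy adaptive rule: raise the difficulty level after a correct
    answer, lower it after an incorrect one."""
    return current + step if was_correct else current - step

level = 5
level = next_difficulty(level, was_correct=True)   # -> 6
level = next_difficulty(level, was_correct=False)  # -> 5
```

Real systems replace the fixed step with an item-selection rule that maximizes information about the test-taker's estimated ability.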
Clinical Assessment
Clinical assessment is the process of evaluating an individual's psychological functioning for diagnostic and treatment purposes. Clinical assessments typically involve the use of standardized tests, interviews, observations, and self-report measures to gather relevant information about the individual's symptoms, strengths, and challenges.
Diagnostic Assessment
Diagnostic assessment is a specific type of clinical assessment that focuses on identifying and classifying mental health conditions according to established diagnostic criteria. Diagnostic assessments aim to provide accurate and reliable diagnoses to guide treatment decisions, monitor progress, and communicate with other healthcare professionals.
Screening Assessment
Screening assessment is a brief and preliminary evaluation of an individual's psychological functioning. Screening assessments are commonly used in healthcare, educational, and organizational settings to quickly screen for symptoms, behaviors, or problems that may warrant more in-depth evaluation.
Intelligence Quotient (IQ)
The Intelligence Quotient (IQ) is a numerical score derived from intelligence tests that represents an individual's cognitive ability relative to a normative group. IQ scores are typically standardized with a mean of 100 and a standard deviation of 15, with higher scores indicating greater intellectual ability and lower scores indicating lower intellectual ability.
Emotional Intelligence (EI)
Emotional Intelligence (EI) refers to the ability to perceive, understand, regulate, and use emotions effectively in oneself and others. EI encompasses skills such as empathy, self-awareness, social competence, and emotional resilience, which are essential for building positive relationships, managing stress, and making informed decisions in various personal and professional contexts.
Validity Scales
Validity scales are special scales included in psychological tests to assess the credibility and consistency of an individual's responses. Validity scales provide valuable information about the individual's test-taking attitude, response style, and the credibility of their test scores, helping to ensure the accuracy and validity of the assessment results.
Reliability Coefficients
Reliability coefficients are statistical measures used to assess the consistency and stability of test scores. Common reliability coefficients include Cronbach's alpha, test-retest reliability, inter-rater reliability, and split-half reliability, which provide information about the internal consistency, temporal stability, and equivalence of test scores.
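Of these, Cronbach's alpha is simple enough to compute by hand. A minimal sketch in Python — the function name and the one-list-per-item data layout are choices made for this example, and population variance is used consistently throughout:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of k lists, one per test
    item, each holding the scores of the same n respondents."""
    k = len(items)
    # Variance of each item across respondents.
    item_vars = [pvariance(col) for col in items]
    # Variance of each respondent's total score.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Three items answered by four respondents (made-up data):
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]])
```

Values closer to 1 indicate that the items vary together, i.e. higher internal consistency.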
Factor Analysis
Factor analysis is a statistical technique used to identify underlying factors or dimensions that explain the patterns of correlations among a set of variables. Factor analysis helps to reduce the complexity of data, identify latent constructs, and interpret the relationships among variables in a test or assessment instrument. Factor analysis is often used in test construction, validation, and scale development to explore the structure of psychological constructs.
Item Analysis
Item analysis is a method used to evaluate the quality, difficulty, and discrimination of individual test items. Item analysis involves examining the item difficulty, item discrimination, item-total correlations, and item response patterns to identify problematic items, improve test reliability, and enhance the validity of the test scores.
Standard Error of Measurement (SEM)
The Standard Error of Measurement (SEM) is a statistical estimate of the amount of error associated with an individual's observed test score. The SEM provides a range of possible scores within which an individual's true score is likely to fall, taking into account the precision and reliability of the test.
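In classical test theory the SEM follows directly from the score standard deviation and the test's reliability coefficient; a minimal sketch, with illustrative IQ-style numbers:

```python
import math

def standard_error_of_measurement(sd, reliability):
    """Classical-test-theory estimate: SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# An IQ-style scale (SD = 15) with reliability .91:
sem = standard_error_of_measurement(15, 0.91)  # 15 * sqrt(.09), approx. 4.5
```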
Criterion-Related Validity
Criterion-related validity is a type of validity evidence that examines the relationship between test scores and external criteria or outcomes to determine the effectiveness of the test in predicting relevant behaviors, traits, or performance. Criterion-related validity can be established through concurrent validity or predictive validity, depending on whether the criterion is measured at the same time as the test or at a later point.
Content Validity
Content validity is a type of validity evidence that assesses the extent to which the test items adequately represent the content domain of the construct being measured. Content validity involves examining the relevance, representativeness, and comprehensiveness of the test items to ensure that they adequately cover the intended content and capture the key aspects of the construct.
Construct Validity
Construct validity is a type of validity evidence that evaluates the degree to which a test measures the theoretical construct it claims to measure. Construct validity involves establishing the relationship between the test scores and other measures of the same construct, demonstrating convergent and discriminant validity, and confirming the underlying theoretical framework of the test.
Concurrent Validity
Concurrent validity is a type of criterion-related validity that assesses the degree to which test scores are correlated with external criteria measured at the same time. Concurrent validity involves comparing the test scores with other measures or observations obtained concurrently to determine the extent to which the test accurately predicts or reflects the individual's current status or performance.
Predictive Validity
Predictive validity is a type of criterion-related validity that examines the ability of test scores to predict future criteria or outcomes that are measured at a later point in time. Predictive validity involves establishing the relationship between the test scores and future performance, behaviors, or events to determine the test's effectiveness in forecasting long-term outcomes or behaviors.
Construct Underrepresentation
Construct underrepresentation occurs when a test fails to adequately capture all important aspects or dimensions of the construct being measured. Construct underrepresentation can result in biased or misleading test scores that do not reflect the full range of the construct, compromising the validity and utility of the test.
Construct Irrelevance
Construct irrelevance refers to the inclusion of items or content in a test that are unrelated to the construct being measured. Construct irrelevance can introduce noise, error, or confusion into the assessment process, undermining the accuracy and interpretability of the test results.
Construct-Deficient Validity
Construct-deficient validity occurs when a test fails to measure all essential aspects or components of the construct being assessed, resulting in an incomplete or limited representation of the construct in the test scores. Construct-deficient validity can lead to inadequate or misleading conclusions about the individual's abilities, traits, or characteristics, compromising the validity and utility of the test.
Criterion Contamination
Criterion contamination refers to the presence of irrelevant or confounding factors in the criterion measure that artificially influence its relationship with the test scores. Criterion contamination can distort the relationship between the test scores and the external criteria, undermining the validity of the test results and compromising the interpretation of the assessment findings.
Cross-Validation
Cross-validation is a method used to assess the generalizability and stability of test scores by examining the consistency of results across different samples, populations, or settings. Cross-validation involves splitting the data into multiple subsets, conducting analyses on each subset, and comparing the results to evaluate the robustness and reliability of the test scores across diverse conditions.
Test Bias
Test bias refers to the systematic error or unfairness in a test that results in different scores for individuals from different groups who have the same level of the construct being measured. Test bias can lead to inaccurate or discriminatory interpretations of test scores, affecting the validity, reliability, and fairness of the assessment process.
Item Bias
Item bias occurs when specific test items disproportionately favor or disadvantage individuals from particular demographic or cultural groups. Item bias can introduce measurement error, inflate or deflate test scores, and compromise the validity and fairness of the assessment results for individuals from diverse backgrounds.
Mode of Administration
The mode of administration refers to the method or format used to deliver a psychological test, such as paper-and-pencil, computer-based, oral, or group administration. The mode of administration can influence the validity, reliability, and accessibility of the test, as well as the individual's comfort, engagement, and performance during the assessment process.
Response Format
The response format refers to the structure or options provided for individuals to record their answers, such as multiple-choice items, rating scales, or open-ended responses. The response format can impact the ease of responding, the accuracy of the responses, and the interpretability of the test scores, depending on the cognitive demands and requirements of the task.
Scoring System
The scoring system of a psychological test determines how the individual's responses are converted into numerical scores for interpretation. Scoring systems can vary in complexity, objectivity, and interpretability, affecting the accuracy, reliability, and utility of the test scores for diagnostic, decision-making, and research purposes.
Item Difficulty
Item difficulty is a measure of how easy or difficult individual test items are for the population of test-takers. Item difficulty is typically expressed as the proportion or percentage of individuals who answer the item correctly, providing information about the discriminative power, appropriateness, and relevance of the item to the construct being measured.
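The proportion-correct index (often called the item's p-value) is a one-liner; a minimal sketch with made-up response data:

```python
def item_difficulty(responses):
    """Proportion of test-takers answering the item correctly
    (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

# Eight test-takers, six of whom answered correctly:
p = item_difficulty([1, 1, 1, 0, 1, 1, 0, 1])  # -> 0.75
```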
Item Discrimination
Item discrimination is a statistical index that measures the extent to which individual test items differentiate between high-scoring and low-scoring test-takers. Item discrimination reflects the ability of the item to discriminate between individuals with different levels of the construct being measured, indicating the item's efficacy in differentiating between individuals with varying abilities or traits.
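One common discrimination index is the item-total correlation; a minimal Pearson-correlation sketch with invented data (function name chosen for this example):

```python
from statistics import mean

def item_total_correlation(item, totals):
    """Pearson correlation between one item's scores and the
    test-takers' total test scores."""
    mx, my = mean(item), mean(totals)
    cov = sum((x - mx) * (y - my) for x, y in zip(item, totals))
    sx = sum((x - mx) ** 2 for x in item) ** 0.5
    sy = sum((y - my) ** 2 for y in totals) ** 0.5
    return cov / (sx * sy)

# An item passed only by the two highest scorers discriminates well:
r = item_total_correlation([0, 0, 1, 1], [2, 3, 8, 9])
```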
Item Response Theory (IRT)
Item Response Theory (IRT) is a psychometric model used to analyze the relationship between individuals' latent traits or abilities and their responses to test items. IRT models the probability of individuals' responses to test items based on their underlying abilities, providing insights into item difficulty, discrimination, and the precision of the test in measuring the construct of interest.
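The two-parameter logistic (2PL) model is a standard IRT formulation of this probability; a minimal sketch:

```python
import math

def irt_2pl(theta, a, b):
    """2PL model: probability of a correct response for a person of
    ability theta on an item with discrimination a and difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the probability is .50:
p = irt_2pl(theta=0.0, a=1.2, b=0.0)  # -> 0.5
```

Higher values of `a` make the probability curve steeper around `b`, i.e. the item separates abilities near its difficulty more sharply.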
Classical Test Theory (CTT)
Classical Test Theory (CTT) is a traditional approach to psychometric theory that focuses on the properties of observed test scores. CTT assumes that test scores are composed of a true score and random error, providing a framework for understanding reliability, validity, and test score interpretation in psychological testing.
Raw Score
A raw score is the total number or sum of correct responses or points obtained by an individual on a test, before any transformation or standardization. Raw scores provide a basic measure of the individual's performance on the test items and serve as the basis for calculating scaled scores, percentiles, or other standardized scores for interpretation and comparison purposes.
Scaled Score
A scaled score is a standardized numerical score derived from a raw score to facilitate comparison across different tests or normative groups. Scaled scores are typically transformed onto a common scale with a mean of 100 and a standard deviation of 15, allowing for accurate comparisons of test performance relative to the general population.
Percentile Rank
A percentile rank is a statistical measure that indicates the percentage of individuals in a normative group who scored at or below a particular score. Percentile ranks provide information about an individual's relative standing or performance compared to others in the normative group, with higher percentile ranks indicating better performance and lower percentile ranks indicating poorer performance.
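Computed directly against a normative sample, this looks as follows — a minimal sketch using the at-or-below convention (percentile-rank formulas vary slightly across sources):

```python
def percentile_rank(score, norm_scores):
    """Percentage of the normative group scoring at or below `score`."""
    at_or_below = sum(1 for s in norm_scores if s <= score)
    return 100 * at_or_below / len(norm_scores)

# A score of 14 against a tiny made-up norm group of five:
pr = percentile_rank(14, [10, 12, 14, 16, 18])  # -> 60.0
```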
Standard Score
A standard score is a transformed score that represents an individual's performance in terms of its distance from the mean of a normative group. Standard scores are typically standardized with a mean of 100 and a standard deviation of 15, allowing for comparisons of an individual's performance to the general population in terms of deviations from the mean.
Z-Score
A Z-score is a standardized score that indicates how many standard deviations an individual's score is above or below the mean of a reference group. Z-scores provide information about the relative position or standing of an individual's score in relation to the distribution of scores in the normative group, allowing for comparisons of performance across different tests or populations.
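The computation is a single formula; a minimal sketch with IQ-style numbers:

```python
def z_score(score, mean, sd):
    """Standard-deviation units above (+) or below (-) the group mean."""
    return (score - mean) / sd

# 115 on a scale with mean 100, SD 15 is one SD above the mean:
z = z_score(115, mean=100, sd=15)  # -> 1.0
```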
T-Score
A T-score is a standardized score that has a mean of 50 and a standard deviation of 10, typically used in psychological testing to compare an individual's performance to a reference group. T-scores are easy to interpret and provide information about an individual's performance relative to the normative sample, with higher T-scores indicating better performance and lower T-scores indicating poorer performance.
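A T-score is a linear rescaling of the Z-score onto the 50/10 scale; a minimal sketch:

```python
def t_score(score, mean, sd):
    """T = 50 + 10z, giving a scale with mean 50 and SD 10."""
    z = (score - mean) / sd
    return 50 + 10 * z

# One SD above the mean becomes a T-score of 60:
t = t_score(115, mean=100, sd=15)  # -> 60.0
```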
Confidence Interval
A confidence interval is a range of values around an observed score that is likely to contain the individual's true score at a specified level of probability. Confidence intervals provide information about the precision and reliability of the test scores, indicating the degree of uncertainty or variability in the individual's performance estimate based on the sample data and measurement error.
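In practice such an interval is often built from the Standard Error of Measurement; a minimal sketch (function name and numbers are illustrative):

```python
def score_confidence_interval(observed, sem, z=1.96):
    """Interval observed +/- z * SEM; z = 1.96 gives ~95% coverage
    under a normal error model."""
    return observed - z * sem, observed + z * sem

# Observed score 110 with SEM 4.5:
low, high = score_confidence_interval(110, sem=4.5)  # approx. (101.18, 118.82)
```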
Error of Measurement
Error of measurement refers to the amount of random error or variability in an individual's test scores that is unrelated to the construct being measured. Error of measurement can affect the accuracy, precision, and stability of test scores, influencing the interpretation and reliability of the assessment results.
Test-Retest Reliability
Test-retest reliability is a type of reliability estimate that assesses the consistency and stability of test scores when the same test is administered to the same individuals on two separate occasions. Test-retest reliability provides information about the temporal stability, reliability, and consistency of the test scores over time, indicating the extent to which the scores are free from random fluctuations or measurement error.
Inter-Rater Reliability
Inter-rater reliability is a type of reliability estimate that evaluates the consistency and agreement between different raters or judges who score the same test responses or observations independently. Inter-rater reliability provides information about the objectivity, consistency, and accuracy of the scoring process, ensuring that the test scores are reliable and free from subjective bias or variability.
Split-Half Reliability
Split-half reliability is a type of reliability estimate that assesses the internal consistency and stability of a test by splitting the test items into two halves and comparing the scores obtained from each half. Split-half reliability provides information about the homogeneity, reliability, and internal reliability of the test items, indicating the extent to which the test measures the same construct consistently.
Internal Consistency
Internal consistency is a measure of the extent to which the items in a test are interrelated and measure the same underlying construct. Internal consistency estimates, such as Cronbach's alpha, provide information about the reliability, homogeneity, and coherence of the test items, indicating the degree to which the test measures the intended construct consistently.