Motor-Free Visual Perception Test (MVPT)
Purpose
The Motor-Free Visual Perception Test (MVPT) is a widely used, standardized test of visual perception. Unlike other typical visual perception measures, this measure is meant to assess visual perception independent of motor ability. It was originally developed for use with children (Colarusso & Hammill, 1972); however, it has been used extensively with adults. The most recent version of the measure, the MVPT-3, can be administered to children (over 3 years), adolescents, and adults (up to 95 years) (Colarusso & Hammill, 2003).
In-Depth Review
Purpose of the measure
The Motor-Free Visual Perception Test (MVPT) is a widely used, standardized test of visual perception. Unlike other typical visual perception measures, this measure is meant to assess visual perception independent of motor ability. It was originally developed for use with children (Colarusso & Hammill, 1972); however, it has been used extensively with adults. The most recent version of the measure, the MVPT-3, can be administered to children (over 3 years), adolescents, and adults (up to 95 years) (Colarusso & Hammill, 2003).
The MVPT can be used to determine differences in visual perception across several different diagnostic groups, and is often used by occupational therapists to screen clients with stroke.
Available versions
Original MVPT
The original MVPT was published by Colarusso and Hammill in 1972.
MVPT – Revised Version (MVPT-R)
The MVPT-R was published by Colarusso and Hammill in 1996. In this version, four new items were added to the original MVPT (40 items in total). Age-range norms (U.S.) were also added, extending coverage to children up to the age of 12. No adult data were collected when the scale was developed; however, the MVPT-R has been used with both pediatric and adult populations (Brown, Rodger, & Davis, 2003). While the MVPT-R has been reported to have an excellent correlation with the original MVPT (r = 0.85; Colarusso & Hammill, 1996), Brown et al. (2003) caution that no other reliability and validity data have been reported for this version.
MVPT – 3rd Edition (MVPT-3)
The MVPT-3 was published by Colarusso and Hammill in 2003. It was a major revision of the MVPT-R and includes additional test items that allow for the assessment of visual perception in adolescents and adults. The MVPT-3 is intended for individuals aged 4 to 95 years and takes approximately 25 minutes to administer (http://www4.parinc.com/Products/Product.aspx?ProductId=MVPT-3).
Features of the measure
Items:
The original MVPT, MVPT-R, and MVPT-3 consist of items representing five visual domains:
Source: Colarusso & Hammill, 1996
Visual Discrimination | The ability to discriminate dominant features in different objects; for example, the ability to discriminate position, shapes, forms, colors and letter-like positions. |
Visual Figure-Ground | The ability to distinguish an object from its background. |
Visual Memory | The ability to recall dominant features of one stimulus item or to remember the sequence of several items. |
Visual Closure | The ability to identify incomplete figures when only fragments are presented. |
Visual Spatial | The ability to orient one’s body in space and to perceive the positions of objects in relation to oneself and to other objects. |
Note: The five domains do not represent separate subscales or subtests, so individual domain scores cannot be derived.
Original MVPT
Contains 36 items.
MVPT-R
Contains 40 items. Four items were added to the original MVPT to accommodate the expanded age range (children up to 12 years old) covered by the MVPT-R norms.
MVPT-3
Contains 65 items. Before administering the MVPT-3, the examiner must ask for the patient’s date of birth and compute their age. This will determine where in the test one should begin. Children between the ages of 4-10 begin with the first example item and complete items 1-40. Individuals between the ages of 11-95 begin with the third example item and complete items 14-65. All of the items that fall within an individual’s age group must be administered.
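To make the age-based starting rule concrete, here is a minimal sketch (in Python, using a hypothetical helper name; not part of the published test materials) of how the item range could be determined from a patient's date of birth, based on the rules described above.

```python
from datetime import date

def mvpt3_item_range(date_of_birth: date, test_date: date) -> tuple[int, int]:
    """Return the (first_item, last_item) range for the MVPT-3 based on age.

    Hypothetical helper illustrating the age-based starting rule described
    above; actual administration must follow the MVPT-3 manual.
    """
    # Compute the patient's age in whole years on the test date.
    age = test_date.year - date_of_birth.year - (
        (test_date.month, test_date.day) < (date_of_birth.month, date_of_birth.day)
    )
    if age < 4:
        raise ValueError("The MVPT-3 is not intended for children under 4 years.")
    if age <= 10:
        return (1, 40)   # begin with the first example item, administer items 1-40
    if age <= 95:
        return (14, 65)  # begin with the third example item, administer items 14-65
    raise ValueError("The MVPT-3 norms extend only to 95 years.")

# Example: a 9-year-old tested in 2024 completes items 1-40.
print(mvpt3_item_range(date(2015, 6, 1), date(2024, 9, 15)))  # -> (1, 40)
```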
Each item consists of a black-and-white line drawing stimulus, along with four multiple-choice response options (A, B, C, D) from which the patient chooses the option that matches the stimulus. For most items, the stimulus and response choices appear on the same page, with the stimulus drawing at the top of the page above a row of four multiple-choice options.
Items assessing visual memory have the stimulus and multiple-choice options presented on separate pages. For these items, the stimulus page is presented for 5 seconds, removed, and the options page is then presented. Items with similar instructions are grouped together in order of increasing difficulty. The patient points to or says the letter that corresponds to the desired answer (Su et al. 2000). The examiner records each response on the recording form.
To ensure that the patient understands the task instructions, example items are presented for each new set of instructions. Examiners must ensure that the patient understands these directions before proceeding to the next domain.
Subscales:
None.
Equipment:
Original MVPT and MVPT-R
Materials for the test include the manual that describes the administration and scoring procedures, the test plate book, score sheet, stopwatch and a pencil (Brown et al., 2003).
MVPT-3
Materials for the test include the manual that describes administration and scoring procedures, a recording form to record patient responses, and a spiral-bound test plates easel.
Training:
Various health professionals, including occupational therapists, teachers, school psychologists, and optometrists, can administer all versions of the MVPT. Only individuals familiar with both the psychometric properties and the score limitations of the test should conduct interpretations (Colarusso & Hammill, 2003).
Time:
Original MVPT and MVPT-R
The test takes 10-15 minutes to administer, and 5 minutes to score (Brown et al., 2003).
MVPT-3
According to the manual, the MVPT-3 takes approximately 20 to 30 minutes to administer and approximately 10 minutes to score.
Scoring:
Original MVPT and MVPT-R
One point is given for each correct response. Raw scores are then converted to perceptual age and perceptual quotient scores, allowing the patient’s performance to be compared with that of a normative group of same-aged peers.
MVPT-3
A single raw score is calculated, representing the patient’s overall visual perceptual ability. The raw score is computed by subtracting the number of errors made from the number of the last item attempted. The raw score can then be converted to standard scores (ranging from 55 to 145), age equivalents, and percentile ranks using the norm tables provided in the manual, allowing the patient’s performance to be compared with that of a normative group of same-aged peers. Higher scores reflect fewer deficits in general visual perceptual function.
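As an illustration of the scoring rule just described, the following minimal Python sketch (a hypothetical helper, not part of the published test materials) computes the MVPT-3 raw score; conversion to standard scores, age equivalents, and percentile ranks still requires the norm tables in the manual.

```python
def mvpt3_raw_score(last_item_attempted: int, errors: int) -> int:
    """Raw score = number of the last item attempted minus the number of errors.

    Minimal sketch of the scoring rule described above; standard scores
    (55-145), age equivalents, and percentile ranks come from norm tables.
    """
    if errors < 0 or errors > last_item_attempted:
        raise ValueError("Error count must lie between 0 and the last item attempted.")
    return last_item_attempted - errors

# Example: an adult who attempts through item 65 and makes 7 errors
# receives a raw score of 58, which is then looked up in the norm tables.
print(mvpt3_raw_score(65, 7))  # -> 58
```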
Alternative form of the MVPT
- MVPT – Vertical Version (MVPT-V) (Mercier, Hebert, Colarusso, & Hammill, 1996).
Response sets are presented in a vertical layout rather than the horizontal layout found in other versions of the MVPT. This layout allows for an accurate assessment of visual perceptual abilities in adults who have hemifield visual neglect, which is commonly found in patients with stroke or traumatic brain injury. These patients are unable to attend to a portion of the visual field and may therefore miss answer choices presented in that part of the visual field when the options are laid out horizontally. The MVPT-V contains 36 items. Mercier, Hebert, and Gauthier (1995) reported excellent test-retest reliability for the MVPT-V (ICC = 0.92).
Note: The MVPT-V removes unilateral visual neglect as a variable in test performance and therefore should not be used to assess driving ability (Mazer, Korner-Bitensky, & Sofer, 1998).
Client suitability
Can be used with:
- Patients with stroke.
- Patients with expressive aphasia, if they are able to understand the instructions and the various task requirements.
Should not be used in:
- Children under the age of 4.
- The MVPT calculates a global score and thus provides less information regarding specific visual dysfunction than a scale that provides domain-specific scores (Su et al., 2000). To assess various domains of visual perceptual ability, an alternative with good psychometric properties is the Rivermead Perceptual Assessment Battery, which has 16 subtests assessing various aspects of visual perception, takes 45-50 minutes to administer, has established reliability (Bhavnani, Cockburn, Whiting, & Lincoln, 1983) and validity (Whiting, Lincoln, Bhavnani, & Cockburn, 1985), and was designed to assess visual perception problems in patients with stroke (Whiting et al., 1985).
- The MVPT is administered via direct observation of task completion and cannot be used with a proxy respondent.
- The MVPT-3 should be used only as a screening instrument with 4-year-old children but can be used for diagnostic purposes in all other age groups (Colarusso & Hammill, 2003).
- McCane (2006) argues that, although Colarusso and Hammill (2003) state that the MVPT-3 can be used as a diagnostic tool in all age groups other than 4-year-olds, even more cautious interpretation is needed. This caution is based on the generally accepted notion that the reliability of a tool should be > 0.90 for diagnostic and decision-making purposes (Sattler, 2001). Therefore, the MVPT-3 should only be used as a diagnostic tool in adolescents between 14 and 18 years old, because this is the only age group in which reliability exceeds 0.90.
- The MVPT-V removes unilateral visual neglect as a variable in test performance and therefore should not be used to assess driving ability (Mazer et al., 1998).
In what languages is the measure available?
No information is currently available regarding the languages in which the instructions to the MVPT have been translated.
Note: Because the test requires no verbal response from the respondent, it can be used with minimal language, provided the clinician can determine through the practice items that the individual understands the task requirements.
Summary
What does the tool measure? | Visual perception independent of motor ability. |
What types of clients can the tool be used for? | The MVPT can be used to determine differences in visual perception across several different diagnostic groups, and is often used by occupational therapists to screen clients with stroke. The MVPT was originally developed for use with children; however, it has been used extensively with adults. |
Is this a screening or assessment tool? | Assessment. |
Time to administer | The MVPT and MVPT-R takes 10-15 minutes to administer, and 5 minutes to score. The MVPT-3 takes approximately 20 to 30 minutes to administer and approximately 10 minutes to score. |
Versions | Original MVPT; MVPT Revised version (MVPT-R); MVPT 3rd edition (MVPT-3); MVPT Vertical Version (MVPT-V). |
Other Languages | No information is currently available regarding the languages into which the MVPT instructions have been translated. Note: Because the test requires no verbal response from the respondent, it can be used with minimal language, provided the clinician can determine through the practice items that the individual understands the task requirements. |
Measurement Properties | |
Reliability | Internal consistency: The test developers reported internal consistency for the original MVPT (Colarusso & Hammill, 1996) and for the MVPT-3, with alpha coefficients ranging from poor to excellent (alpha = 0.69 to 0.90; Colarusso & Hammill, 2003). Test-retest: Test-retest reliability ranges from adequate to excellent for the original MVPT (r = 0.77 to 0.83), the MVPT-R (ICC = 0.63 to 0.86), and the MVPT-3 (0.87 and 0.92). |
Validity | Content: Only the content validity of the original MVPT has been reported; items were based on item analyses and the five visual perceptual categories of Chalfant and Scheffelin (1969). Criterion: Concurrent correlations with other measures of visual perception, intelligence, and achievement range from poor to excellent. Predictive: MVPT scores have been found to predict on-road driving outcomes and at-fault motor vehicle collisions, although predictive values vary across studies. Construct: Correlations with the Loewenstein Occupational Therapy Cognitive Assessment and the Rivermead Perceptual Assessment Battery range from adequate to excellent. Known Groups: The MVPT distinguishes patients with stroke from individuals without stroke, and the MVPT-3 distinguishes individuals with developmental delay, head injury, or learning disability from the general population. |
Does the tool detect change in patients? | No studies have examined the responsiveness of the MVPT. |
Acceptability | The MVPT is a short and simple measure and has been reported as well tolerated by patients. The test is administered by direct observation and is not suitable for proxy use. |
Feasibility | The MVPT has standardized instructions for administration in an adult population and requires the test plates and manual. Only individuals familiar with both the psychometric properties and the score limitations of the test should conduct interpretations. |
How to obtain the tool? | The MVPT can be purchased from: https://www.therapro.com/ |
Psychometric Properties
Overview
The reliability and validity of the MVPT have not been well studied. To our knowledge, the creators of the MVPT have personally gathered the majority of the psychometric data currently published on the scale. In addition, the majority of existing psychometric studies have been conducted using the original MVPT only, and few studies have examined the validity of the MVPT-R and MVPT-3. Further investigation of the reliability and validity of the original MVPT, MVPT-R, and MVPT-3 is therefore recommended.
Reliability
Original MVPT
Internal consistency:
Colarusso and Hammill (1996) calculated the internal consistency reliability of the original MVPT.
Test-retest:
Colarusso and Hammill (1972) examined the test-retest reliability of the original MVPT; coefficients ranged from r = 0.77 to r = 0.83 across age levels, with a mean coefficient of r = 0.81 for the total sample.
Inter-rater:
Has not been investigated.
MVPT-R
Internal consistency:
Has not been investigated.
Test-retest:
Only one study evaluating the reliability of the MVPT-R has been reported in the literature. Burtner, Qualls, Ortega, Morris, and Scott (2002) administered the MVPT-R to a group of 38 children with learning disabilities and 37 children with age-appropriate development (aged 7 to 10 years) on two separate occasions within 2.5 weeks. Intraclass correlation coefficients (ICCs) for perceptual quotient scores ranged from adequate to excellent (ICC = 0.63 to ICC = 0.79), as did ICCs for perceptual age scores (ICC = 0.69 to ICC = 0.86). Pearson product-moment correlations for perceptual quotient scores ranged from adequate to excellent (r = 0.70 to r = 0.80), and those for perceptual age scores were excellent (r = 0.77 to r = 0.87). These results suggest that the MVPT-R has adequate test-retest reliability, with greater stability in visual perceptual scores for children with learning disabilities.
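As a brief aside on the statistics used in these studies, the sketch below (Python with SciPy; the scores are fabricated for illustration and are not data from Burtner et al., 2002) shows how a Pearson product-moment correlation between two administrations is computed as one index of test-retest reliability.

```python
import numpy as np
from scipy.stats import pearsonr

# Fabricated perceptual quotient scores for the same children at two sessions.
time1 = np.array([92, 105, 88, 110, 97, 101, 85, 94])
time2 = np.array([95, 103, 90, 108, 99, 98, 88, 96])

# Pearson product-moment correlation between the two administrations,
# a common index of test-retest reliability.
r, p_value = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p_value:.3f})")
```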
Inter-rater:
Has not been investigated.
MVPT-3
Internal consistency:
Colarusso and Hammill (2003) computed Cronbach’s alpha coefficients for each age group. Alpha coefficients ranged from poor to excellent (alpha = 0.69 to alpha = 0.90). In children aged 4, 5, and 7 years, alpha coefficients were 0.69, 0.76, and 0.73, respectively. Reliability coefficients for all other age groups were excellent (alphas exceeded 0.80).
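For readers unfamiliar with the statistic, the short Python sketch below shows how a Cronbach's alpha coefficient of the kind reported above is computed from item-level data; the dichotomous item scores are fabricated for illustration only.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy example: 6 respondents, 4 dichotomous (0/1) items.
scores = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(scores), 2))  # -> 0.58 for these fabricated data
```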
Test-retest:
Colarusso and Hammill (2003) examined the test-retest reliability of the MVPT-3 and reported coefficients of 0.87 and 0.92, suggesting that the MVPT-3 is relatively stable over time.
Inter-rater:
Has not been investigated.
Validity
Content:
Only the content validity of the original MVPT has been reported.
The content of the MVPT was based on item analyses as well as the five visual perceptual categories proposed by Chalfant and Scheffelin (1969). The authors examined item bias, including the effects of gender, residence, and ethnicity. Performance on each item was compared for differing groups to determine any biased content. Only three items appeared to function differently based on group membership. The authors examined these items and chose not to eliminate the items based on other psychometric data.
Criterion:
Concurrent:
Original MVPT
The following information is from a review article by Brown, Rodger, and Davis (2003):
- Correlations between the MVPT and the Frostig Developmental Test of Visual Perception ranged from adequate to excellent, from r = 0.38 to r = 0.60 (Frostig, Lefever, & Whittlesey, 1966).
- Correlations between the MVPT and the Developmental Test of Visual Perception ranged from poor to excellent, r = 0.27 to r = 0.74 (Hammill, Pearson, & Voress, 1993).
- Correlation between the MVPT and the Matching subscale of the Metropolitan Readiness Tests was adequate, r = 0.40 (Hildreth, Griffiths, & McGauvran, 1965).
- Correlations between the MVPT and the Word Study Skills and Arithmetic subscales of the Stanford Achievement Tests (Primary) were adequate, from r = 0.37 to r = 0.42 (Kelly, Madden, Gardner, & Rudman, 1964).
- Correlations between the MVPT and the Durrell Analysis of Reading Difficulties were adequate, ranging from r = 0.33 to r = 0.46 (Durrell, 1955).
- Correlation between the MVPT and the Slosson Intelligence Test was adequate, r = 0.31 (Slosson, 1963).
- Correlation between the MVPT and the Pintner-Cunningham Primary Test was adequate, r = 0.32 (Pintner & Cunningham, 1965).
Colarusso and Hammill (1996) concluded that the MVPT measures the construct of visual perception adequately because the MVPT correlated more highly with measures of visual perception (median r = 0.49) than it did with tests of intelligence (median r = 0.31) or school performance (median r = 0.38).
Predictive:
Original MVPT
Mazer et al. (1998) examined whether the MVPT could predict on-road driving outcomes in 84 patients with stroke. Patients were given a pass or fail based on their driving behavior. Of the perceptual tests administered (the Complex Reaction Timer, the Single Letter Cancellation Test, the Double Letter Cancellation Test, the Money Road Map Test of Direction Sense, the Trail Making Test A and B, the Bells Test, and the Charron Test), the MVPT was found to be the most predictive of driving outcome. Patients who scored < 30 on the MVPT were 8.7 times more likely to fail the on-road evaluation than those who scored > 30 (positive predictive value = 86.1%; right-sided lesions = 94%; left-sided lesions = 80%). Furthermore, patients who performed poorly on both the MVPT and the Trail Making Test B (a test of visual conceptual and visuomotor tracking) were 22 times more likely to fail the on-road evaluation compared with those who did well on both tests. However, the MVPT was not highly predictive of a pass: even at the highest possible scores, half of the subjects passed and half failed the on-road evaluation.
Korner-Bitensky et al. (2000) also examined whether the MVPT could predict on-road driving test outcome in 269 patients with stroke. A cutoff of < 30 was used to indicate poor visual perception and > 30 to indicate good perception. A low positive predictive value of 60.9% (the proportion of people who had a low score on the MVPT and failed the driving test) and a low negative predictive value of 64.2% (the proportion of people who had a high score on the MVPT and passed the driving test) were found. Logistic regression revealed the best predictors of driving failure to be increased age, a right hemisphere lesion, and a low MVPT score. The results of this study suggest that the MVPT may not be as highly predictive of driving ability in patients with stroke as previous research had indicated.
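The positive and negative predictive values reported above are simple proportions taken from a two-by-two table of screening result (MVPT score below or above the cut-off of 30) against on-road outcome. The Python sketch below shows the calculation using illustrative counts that are not the study data.

```python
def predictive_values(true_pos: int, false_pos: int, true_neg: int, false_neg: int):
    """Positive and negative predictive values from a 2x2 screening table.

    A 'positive' screen here means an MVPT score below the cut-off (< 30),
    and the outcome of interest is failing the on-road evaluation.
    """
    ppv = true_pos / (true_pos + false_pos)  # P(fails road test | low MVPT score)
    npv = true_neg / (true_neg + false_neg)  # P(passes road test | high MVPT score)
    return ppv, npv

# Illustrative counts only (not data from Korner-Bitensky et al., 2000):
# 30 low scorers failed, 20 low scorers passed, 45 high scorers passed, 25 high scorers failed.
ppv, npv = predictive_values(true_pos=30, false_pos=20, true_neg=45, false_neg=25)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # -> PPV = 60.0%, NPV = 64.3%
```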
Ball et al. (2006) examined whether scores on the visual closure items of the MVPT were predictive of future at-fault motor vehicle collisions in a cohort of older drivers (over the age of 55). The items were found to be predictive: individuals who made four or more errors were 2.10 times more likely to crash than those who made three or fewer errors.
Construct:
Convergent/Discriminant:
Original MVPT
Su et al. (2000) found excellent correlations between the MVPT and the Loewenstein Occupational Therapy Cognitive Assessment subscales of Visuo-motor Organization and Thinking Operations (r = 0.70 and r = 0.72, respectively). Adequate correlations were found with the Rivermead Perceptual Assessment Battery subscales of Sequencing (r = 0.39) and Figure-Ground Discrimination (r = 0.41), and an excellent correlation was observed with the Spatial Awareness subscale of the Rivermead Perceptual Assessment Battery (r = 0.72).
Cate and Richards (2000) investigated the relationship between basic visual functions (acuity, visual field deficits, oculomotor skills, and visual attention/scanning) and higher-level visual-perceptual processing skills (visual closure and figure-ground discrimination) in patients with stroke using Pearson product-moment correlation analysis. An excellent correlation of r = 0.75 was observed between vision screening scores and MVPT scores.
Known groups:
Original MVPT
Su et al. (2000) compared the perceptual performance of 22 patients with intracerebral hemorrhage to that of 22 patients with ischemia early after stroke. The MVPT was not found to be discriminatively sensitive to side of lesion (left or right) or type of lesion (intracerebral hemorrhage vs. ischemic).
York and Cermak (1995) examined the performance of 45 individuals with right cerebrovascular accident, left cerebrovascular accident, or no cerebrovascular accident using the MVPT. Patients with right hemisphere lesions demonstrated poorer performance on the MVPT than patients with left hemisphere lesions and the non-stroke group. The differences between the mean scores of the groups corresponded to moderate effect sizes (ES = 0.67 and 0.54, respectively), suggesting that the MVPT can discriminate between patients with stroke and individuals without stroke.
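The effect sizes cited above are standardized differences between group means. The Python sketch below illustrates this calculation (Cohen's d with a pooled standard deviation) using fabricated MVPT scores, not the data of York and Cermak (1995).

```python
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Effect size as the standardized difference between two group means."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                         (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

# Fabricated MVPT scores for a non-stroke group and a right-lesion group.
controls = np.array([33, 29, 34, 30, 31, 32])
right_lesion = np.array([30, 28, 33, 27, 31, 29])
print(round(cohens_d(controls, right_lesion), 2))
```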
MVPT-3
Colarusso and Hammill (2003) examined MVPT-3 performance differences among individuals who were developmentally delayed, had experienced a head injury, or had a learning disability, comparing each group’s performance to the general population mean MVPT-3 score of 100. It was hypothesized that each of these groups would display lower scores on the MVPT-3 than the general population. Individuals classified as developmentally delayed had a mean MVPT-3 score of 69.46, which falls more than two standard deviations below the mean. Individuals with head injury had a mean score of 80.16, falling approximately 1.33 standard deviations below the mean. The group with learning disability had a mean score of 88.24, falling approximately 0.8 standard deviations below the mean. The lower MVPT-3 scores for each of these three groups lend support for the construct validity of the test.
Responsiveness
Not applicable.
References
- Ball, K. K., Roenker, D. L., Wadley, V. G., Edwards, J. D., Roth, D. L., McGwin, G., Raleigh, R., Joyce, J. J., Cissell, G. M., Dube, T. (2006). Can high-risk older drivers be identified through performance-based measures in a department of motor vehicles setting? Journal of the American Geriatrics Society, 54, 77-84.
- Bhavnani, G., Cockburn, J., Whiting, S., Lincoln, N. (1983). The reliability of the Rivermead perceptual assessment. British Journal of Occupational Therapy, 52, 17-19.
- Bouska, M. J., Kwatny, E. (1982). Manual for the application of the motor-free visual perception test to the adult population. Philadelphia (PA): Temple University Rehabilitation Research and Training Center.
- Brown, T. G., Rodger, S., Davis, A. (2003). Motor-Free Visual Perception Test – Revised: An overview and critique. British Journal of Occupational Therapy, 66(4), 159-167.
- Burtner, P. A., Qualls, C., Ortega, S. G., Morris, C. G., Scott, K. (2002). Test-retest of the Motor-Free Visual Perception Test Revised (MVPT-R) in children with and without learning disabilities. Physical and Occupational Therapy in Pediatrics, 22(3-4), 23-36.
- Cate, Y., Richards, L. (2000). Relationship between performance on tests of basic visual functions and visual-perceptual processing in persons after brain injury. Am J Occup Ther, 54(3), 326-334.
- Chalfant, J. C., Scheffelin, M. A. (1969). Task force III. Central processing dysfunctions in children: a review of research. Bethesda, MD: US Department of Health, Education and Welfare.
- Colarusso, R. P., Hammill, D. D. (1972). Motor-free visual perception test. Novato CA: Academic Therapy Publications.
- Colarusso, R. P., Hammill, D. D. (1996). Motor-free visual perception test–revised. Novato CA: Academic Therapy Publications.
- Colarusso, R. P., Hammill, D. D. (2003). The Motor-Free Visual Perception Test (MVPT-3). Novato, CA: Academic Therapy Publications.
- Durrell, D. (1955). Durrell Analysis of Reading Difficulty. Tarrytown, New York: Harcourt, Brace and World.
- Frostig, M., Lefever, D. W., Whittlesey, J. R. B. (1966). Administration and scoring manual for the Marianne Frostig Developmental Test of Visual Perception. Palo Alto, CA: Consulting Psychologists Press.
- Hammill, D. D., Pearson, N. A., Voress, J. K. (1993). Developmental Test of Visual Perception. 2nd ed. Austin, TX: Pro Ed.
- Hildreth, G. H., Griffiths, N. L., McGauvran, M. E. (1965). Metropolitan Readiness Tests. New York: Harcourt, Brace and World.
- Korner-Bitensky, N. A., Mazer, B. L., Sofer, S., Gelinas, I., Meyer, M. B., Morrison, C., Tritch, L., Roelke, M. A., White, M. (2000). Visual testing for readiness to drive after stroke: A multicenter study. Am J Phys Med Rehabil, 79(3), 253-259.
- Kelly, T. L., Madden, R., Gardner, E. F., Rudman, H. C. (1964). Stanford Achievement Tests. New York: Harcourt, Brace and World.
- Mazer, B. L., Korner-Bitensky, N., Sofer, S. (1998). Predicting ability to drive after stroke. Arch of Phys Med Rehabil, 79, 743-750.
- McCane, S. J. (2006). Test review: Motor-Free Visual Perception Test. Journal of Psychoeducational Assessment, 24, 265-272.
- Mercier, L., Hebert, R., Gauthier, L. (1995). Motor-free visual perception test: Impact of vertical answer cards position on performance of adults with hemispatial neglect. Occup Ther J Res, 15, 223-226.
- Mercier, L., Hebert, R., Colarusso, R., Hammill, D. (1996). Motor-Free Visual Perception Test – Vertical. Novato, CA: Academic Therapy Publications.
- Pintner, R., Cunningham, B. V. (1965). Pintner-Cunningham Primary Test. New York: Harcourt, Brace and World.
- Sattler, J.M. (2001). Assessment of children: Cognitive applications (4th ed.). San Diego, CA: Jerome M. Sattler.
- Slosson, R. L. (1963). Slosson Intelligence Test. East Aurora, NY: Slosson Educational Publications.
- Su, C-Y., Chang, J-J., Chen, H-M., Su, C-J., Chien, T-H., Huang, M-H. (2000). Perceptual differences between stroke patients with cerebral infarction and intracerebral hemorrhage. Arch Phys Med Rehabil, 81, 706-714.
- Whiting, S., Lincoln, N., Bhavnani, G., Cockburn, J. (1985). The Rivermead perceptual assessment battery. Windsor: NFER-Nelson.
- York, C. D., Cermak, S. A. (1995). Visual perception and praxis in adults after stroke. Am J Occup Ther, 49(6), 543-550.
See the measure
The MVPT can be purchased from: https://www.therapro.com/