Albert’s Test

Evidence Reviewed as of before: 26-11-2010
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

Albert’s Test is a screening tool used to detect the presence of unilateral spatial neglect (USN) in patients with stroke. In this test, patients must cross out lines that are placed in random orientations on a piece of paper. USN is indicated when lines are left uncrossed on the same side of the page as the patient’s motor deficit or brain lesion is located.

In-Depth Review

Purpose of the measure

Albert’s Test is a screening tool used to detect the presence of unilateral spatial neglect (USN) in patients with stroke. In this test, patients must cross out lines that are placed in random orientations on a piece of paper. USN is indicated when lines are left uncrossed on the same side of the page as the patient’s motor deficit or brain lesion is located.

Available versions

The original Albert’s Test was published by Albert in 1973.

Features of the measure

Items:
The modified Albert’s Test is the preferred version of the test and varies only slightly from the original description, in which 41 lines were placed on a slightly smaller sheet of paper. In the modified version, a series of 40 black lines, each about 2 cm long, are randomly oriented in 6 rows on a sheet of white 11 x 8.5-inch paper. The test sheet is presented to the patient at their midline. Some of the lines are pointed out to him/her, including those to the extreme right and extreme left. The examiner asks the patient to cross out all of the lines, and demonstrates what is required by crossing out the 5 central lines him/herself. The patient is encouraged to cross out all the lines until he/she is satisfied that they have all been crossed.

Scoring:
The presence or absence of USN is based on the number of lines left uncrossed on each side of the test sheet. If any lines are left uncrossed, and more than 70% of the uncrossed lines are on the same side as the motor deficit, USN is indicated. This may be quantified in terms of the percentage of lines left uncrossed (Fullerton, McSherry, & Stout, 1986).
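
To make this decision rule concrete, here is a minimal sketch in Python (an illustration of the rule described above, not an official scoring program; the function name and inputs are assumed for the example):

    def alberts_test_screen(uncrossed_left, uncrossed_right, deficit_side):
        """Illustrative scoring of Albert's Test.

        uncrossed_left / uncrossed_right: number of lines left uncrossed on each
        half of the test sheet. deficit_side: 'left' or 'right' (side of the
        motor deficit). Returns (USN indicated?, % of uncrossed lines on that side).
        """
        total = uncrossed_left + uncrossed_right
        if total == 0:
            return False, 0.0  # no omissions, no indication of neglect
        on_deficit_side = uncrossed_left if deficit_side == "left" else uncrossed_right
        percent = 100.0 * on_deficit_side / total
        # USN is indicated when lines are left uncrossed and more than 70% of the
        # uncrossed lines lie on the same side as the motor deficit.
        return percent > 70.0, percent

    # Example: 6 omissions on the left, 1 on the right, left-sided motor deficit.
    print(alberts_test_screen(6, 1, "left"))  # (True, 85.71...)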

Time:
Less than 5 minutes.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • 11 x 8.5-inch sheet of paper with 41 lines, each 2 cm in length.
  • Pencil

Alternative forms of the Albert’s Test

Modified version of Albert’s Test. This version varies only slightly from the original version and consists of 40 black lines (25 mm long, 0.5 or 1.2 mm thick) of various orientations dispersed randomly on a 297 x 210 mm sheet of white paper. Each side of the stimulus sheet contains 18 lines divided into 3 columns of 6 lines. The columns are numbered 1 to 6 from left to right.

Client suitability

Can be used with:

  • Patients with stroke.
  • Patients must be able to hold a pencil to complete the test (the presence of apraxia may impair this ability).

Should not be used with:

  • Albert’s Test should be used with caution in the clinical diagnosis of spatial neglect. Performance on Albert’s Test may be influenced by or may be indicative of other syndromes besides spatial neglect, such as hemianopia (damage to the optic pathways that results in loss of vision in half of the visual field) (Ferber & Karnath, 2001). The use of a clinical expert system for assessment of perceptual disorders may be useful for interpreting results and forming a diagnosis (e.g. McSherry & Fullerton, 1985).

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN).
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer: Less than 5 minutes.
Versions: Modified version of Albert’s Test.
Other Languages: Not applicable.
Measurement Properties
Reliability
Test-retest:
One study examined the test-retest reliability of Albert’s Test and reported excellent test-retest reliability.
Validity
Criterion:
Predictive:
Albert’s Test significantly predicted functional outcome at 6 months post-stroke.

Construct:
Convergent:
Excellent correlations reported between Albert’s Test and the Line Bisection Test, the Wundt-Jastrow Area Illusion test, and the Catherine Bergego Scale. An adequate correlation has been reported between Albert’s Test and the Star Cancellation Test.
Known groups:
Albert’s Test can distinguish patients with neglect from patients without neglect.

Does the tool detect change in patients? Not applicable.
Acceptability: Albert’s Test should be used as a screening tool rather than for clinical diagnosis of USN. Performance may be influenced by or may be indicative of other syndromes besides spatial neglect, such as hemianopia. This test cannot be completed by proxy.
Feasibility: Albert’s Test requires no specialized training to administer and only simple equipment is required (an 11 x 8.5-inch sheet of paper with 41 lines, each 2 cm in length, and a pencil). The clinician must present the test sheet to the patient at their midline. Some of the lines are pointed out to him/her, including those to the extreme right and extreme left. The clinician asks the patient to cross out all of the lines, and demonstrates what is required by crossing out the 5 central lines him/herself. The patient is encouraged to cross out all the lines until he/she is satisfied that they have all been crossed.
How to obtain the tool?

Please click here.

Psychometric Properties

Overview

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of Albert’s Test.

Reliability

Test-retest:
Chen-Sea and Henderson (1994) reported that Albert’s Test has an excellent test-retest reliability of r = 0.79.

Validity

Criterion:
Predictive:
Fullerton, McSherry, and Stout (1986) found that scores on Albert’s Test administered to 205 patients with stroke within 48 hours of hospital admission significantly predicted functional outcome at 6 months post-stroke (as measured by a crude 4-point scale). This study specifically found that 56.8% of individuals identified with visual neglect using Albert’s Test were true cases of neglect (true positives). Approximately 4.3% of individuals without neglect were also screened negative on Albert’s Test (true negatives). However, more than 35% of the individuals were unable to comply with the test because of an altered state of consciousness or dysphasia during this early phase of recovery.

Construct:
Convergent:
Agrell, Dehlin, and Dahlgren (1997) compared the performance of 57 elderly patients with stroke on 5 different tests for visuo-spatial neglect (Star Cancellation, Line Crossing, Line Bisection, Clock Drawing Task and Copy A Cross). Albert’s Test had an excellent correlation with the Line Bisection Test (r = 0.85) and correlated adequately with the Star Cancellation Test (r = 0.63).

Massironi, Antonucci, Pizzamiglio, Vitale, and Zoccolotti (1988) found an excellent correlation between the Wundt-Jastrow Area Illusion test and Albert’s Test (r = 0.64).

Deloche et al. (1996) reported an excellent correlation between the Catherine Bergego Scale and Albert’s Test (Spearman’s r = 0.73).

Known groups:
Potter, Deighton, Patel, Fairhurst, Guest, and Donnelly (2000) examined a computer-based method of administering the Albert’s Test in 30 patients with stroke and neglect, 57 patients with stroke and without neglect, and 13 age-matched control subjects. Significant differences were found between subjects with neglect and those without neglect, as well as subjects with neglect and age-matched controls. No difference between patients without neglect and age-matched controls was observed.

Responsiveness

Not applicable.

References

  • Agrell, B. M., Dehlin, O. I., Dahlgren, C. J. (1997). Neglect in elderly stroke patients: a comparison of five tests. Psychiatry Clin Neurosci, 51(5), 295-300.
  • Albert, M. L. (1973). A simple test of visual neglect. Neurology, 23, 658-664.
  • Chen-Sea, M. J., Henderson, A. (1994). The reliability and validity of visuospatial inattention tests with stroke patients. Occup Ther Int, 1, 36-48.
  • Deloche, G., Azouvi, P., Bergego, C., Marchal, F., Samuel, C., Morin, L., Renard, C., Louis-Dreyfus, A., Jokic, C., Wiart, L., Pradat-Diehl, P. (1996). Functional consequences and awareness of unilateral neglect: Study of an evaluation scale. Neuropsychol Rehabil, 6, 133-150.
  • Fullerton, K. J., McSherry, D., Stout, R. W. (1986). Albert’s Test: A neglected test of perceptual neglect. The Lancet, 1(8478), 430-432.
  • Massironi, M., Antonucci, G., Pizzamiglio, L., Vitale, M. V., Zoccolotti, P. (1988). The Wundt-Jastrow illusion in the study of spatial hemi-inattention. Neuropsychologia, 26(1), 161-166.
  • McSherry, D., Fullerton, K. (1985). Preceptor: A shell for medical expert systems and its applications in a study of prognostic indices in stroke. Expert Systems, 2, 140-145.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Plummer, P., Morris, M. E., Dunai, J. (2003). Assessment of unilateral neglect. Phys Ther, 83(8), 732-740.
  • Potter, J., Deighton, T., Patel, M., Fairhurst, M., Guest, R., Donnelly, N. (2000). Computer recording of standard tests of visual neglect in stroke patients. Clinical Rehabilitation, 14(4), 441-446.
  • Na, D. L., Adair, J. C., Kang, Y., Chung, C. S., Lee, K. H., Heilman, K. M. (1999). Motor perseverative behavior on a line cancellation task. Neurology, 52, 1569-1576.

See the measure

How to obtain Albert’s Test

Click here to get a copy of Albert’s test.

Arnadottir OT-ADL Neurobehavioural Evaluation (A-ONE)

Evidence Reviewed as of before: 09-01-2012
Author(s)*: Annabel McDermott, OT
Editor(s): Nicol Korner-Bitensky, PhD OT

Purpose

The Arnadottir OT-ADL Neurobehavioural Evaluation (A-One) evaluates the impact of neurobehavioural impairment on functional performance of activities of daily living (ADL).

In-Depth Review

Purpose of the measure

The Arnadottir OT-ADL Neurobehavioural Evaluation (A-One) is a standardized, performance-based measure that identifies the impact of neurobehavioural impairment on functional performance of ADL. The measure allows observation of ADL and evaluation of the level of assistance required for ADL performance (Arnadottir et al., 2009). Accordingly, the A-ONE provides the therapist with an ecologically-relevant assessment of the consequences of neurobehavioural impairments through clinical observation of ADL tasks using a ‘top-down’ (occupation-based) approach (Arnadottir et al., 2009; Bottari et al., 2006; Carswell et al., 1992; Cooke et al., 2006).

The A-ONE is comprised of two parts: (a) assessment of the individual’s independence in ADL tasks and the type of assistance required; and (b) identification of the type and severity of neurobehavioural impairment that is limiting the individual’s independence in these tasks (Gardarsdottir & Kaplan, 2002).

The A-ONE can be used to assist therapists in goal setting and treatment planning (Gardarsdottir & Kaplan, 2002).

Available versions

The A-ONE was previously named the Arnadottir Occupational Therapy – ADL (OT-ADL) Neurobehavioural Evaluation.

Features of the measure

Items:
The A-ONE is comprised of 2 scales: the Functional Independence scale, more commonly referred to as the Activities of Daily Living Scale (ADL scale), and the Neurobehavioural Impairment scale (NBI scale).

The ADL scale measures 5 ADL domains (dressing; grooming and hygiene; transfers and mobility; feeding; and communication) using 20 everyday tasks.

1. Dressing

  • i. Put on shirt
  • ii. Put on pants
  • iii. Put on socks
  • iv. Put on shoes
  • v. Manipulate fastenings

2. Grooming and hygiene

  • i. Wash face
  • ii. Comb hair
  • iii. Brush teeth
  • iv. Shave beard/apply cosmetics
  • v. Perform toilet hygiene
  • vi. Bathe

3. Transfers and mobility

  • i. Sit up in bed
  • ii. Transfer from sitting
  • iii. Maneuver around
  • iv. Transfer to toilet
  • v. Transfer to tub

4. Feeding

  • i. Drink from glass/cup
  • ii. Use fingers to bring food to mouth
  • iii. Bring food to mouth by fork or spoon
  • iv. Use knife to cut and spread

5. Communication

The NBI scale consists of items to assist the therapist in identifying the probable site of cortical dysfunction based on the observed neurological behaviours. The NBI scale is comprised of 2 subscales:

1. The Neurobehavioural Specific Impairment Subscale (NBSIS)

  • i. Motor apraxia
  • ii. Ideational apraxia
  • iii. Unilateral body neglect
  • iv. Spatial relations
  • v. Unilateral spatial neglect
  • vi. Organization and sequencing
  • vii. Perseveration
  • viii. Topographical disorientation (transfers and mobility)
  • ix. Sensory aphasia (communication)
  • x. Anomia (communication)
  • xi. Paraphasia (communication)
  • xii. Expressive aphasia (communication)

2. The Neurobehavioural Pervasive Impairment Subscale (NBPIS)

  • i. Lability
  • ii. Apathy
  • iii. Depression
  • iv. Irritability
  • v. Frustration
  • vi. Restlessness
  • vii. Insight
  • viii. Judgement
  • ix. Confusion
  • x. Attention
  • xi. Distraction
  • xii. Initiative
  • xiii. Motivation
  • xiv. Performance latency
  • xv. Working memory
  • xvi. Confabulation

Description of tasks:
The therapist observes the patient performing the listed ADL tasks and determines the level of assistance required to complete the tasks (see scoring below). Errors in task performance are an indication of underlying neurobehavioural impairments. Different neurobehavioural impairments manifest as different errors or difficulties in ADL task performance. The therapist observes for the presence and severity of neurobehavioural impairments, according to how much the impairment impacts on the individual’s ability to perform the ADL task independently (Gardarsdottir & Kaplan, 2002; Arnadottir et al., 2009).

What to consider before beginning:
The A-ONE should be performed in the clinical setting (Bottari et al., 2006).

Scoring and Score Interpretation:
The ADL and the Neurobehavioural linear scales were developed as criterion-referenced rating scales of the ordinal type by application of Rasch analysis (Arnadottir & Fisher, 2008; Arnadottir et al., 2010).

The ADL scale measures the individual’s need for assistance to overcome neurobehavioural impairments during ADL task performance. Arnadottir et al. (2008) examined the original 5-point rating scale structure and noted that thresholds were disordered. This disorder was eliminated when score 2 (verbal assistance) and score 3 (supervision) were combined, resulting in a 4-point rating scale:

0 = Full assistance needed
1 = Minimum to considerable physical assistance needed
2 = Verbal assistance/supervision needed
3 = Independent

Scores can be summed within each ADL domain, but domain scores should not be added together to produce a total ADL score. Individuals are not penalized for using assistive devices when performing ADL tasks (Arnadottir et al., 2008).
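
For illustration only, a minimal sketch of how ratings on this 4-point scale might be summed within, but not across, ADL domains (the abbreviated item names are assumptions, not the official A-ONE score sheet):

    # Illustrative only: abbreviated item names, not the official A-ONE form.
    ADL_DOMAINS = {
        "dressing": ["shirt", "pants", "socks", "shoes", "fastenings"],
        "feeding": ["drink", "fingers", "fork_or_spoon", "knife"],
    }

    def adl_domain_scores(ratings):
        """Sum the 0-3 ratings within each ADL domain; no grand total is computed,
        since domain scores are not added together."""
        return {domain: sum(ratings[item] for item in items)
                for domain, items in ADL_DOMAINS.items()}

    ratings = {"shirt": 3, "pants": 2, "socks": 2, "shoes": 3, "fastenings": 1,
               "drink": 3, "fingers": 3, "fork_or_spoon": 3, "knife": 2}
    print(adl_domain_scores(ratings))  # {'dressing': 11, 'feeding': 11}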

Scoring of the NBI scale is based on the extent to which the neurobehavioural impairment interferes with ADL task performance, not the severity of the impairment. Most items of the NBSIS are rated several times:

  1. Motor apraxia*
  2. Ideational apraxia*
  3. Unilateral body neglect*
  4. Spatial relations*
  5. Unilateral spatial neglect*
  6. Organization and sequencing*
  7. Perseveration^
  8. Topographical disorientation (transfers and mobility)
  9. Sensory aphasia (communication)
  10. Anomia (communication)
  11. Paraphasia (communication)
  12. Expressive aphasia (communication)

* scored 4 times (during dressing; grooming and hygiene; transfers and mobility; and feeding ADL tasks)

^ scored 5 times (during all ADL tasks)

Items that are rated more than once are scored using a 5-point ordinal rating scale, from 0 = the particular neurobehavioural impairment is not observed, to 4 = the patient is unable to perform the task due to the neurobehavioural impairment. All other items (including the communication items from the NBSIS and all items from the NBPIS) are rated dichotomously: 0 = absent or 1 = present during ADL task performance (Arnadottir et al., 2009).
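
The two rating formats described above can be sketched as follows (illustrative only; item and domain names are abbreviated and the example ratings are invented, not taken from the A-ONE manual):

    def summarize_repeated_item(ratings_by_domain):
        """For an item rated once per ADL domain on the 0-4 scale, report whether the
        impairment was observed and how strongly it interfered with task performance
        (0 = not observed, 4 = unable to perform the task)."""
        return {"observed": any(r > 0 for r in ratings_by_domain.values()),
                "max_interference": max(ratings_by_domain.values())}

    # Example (invented): motor apraxia interferes most with dressing.
    motor_apraxia = {"dressing": 3, "grooming_hygiene": 1,
                     "transfers_mobility": 0, "feeding": 1}
    print(summarize_repeated_item(motor_apraxia))  # {'observed': True, 'max_interference': 3}

    # Communication items of the NBSIS and all NBPIS items are dichotomous:
    # 0 = absent, 1 = present during ADL task performance.
    nbpis_ratings = {"lability": 1, "apathy": 0, "restlessness": 1}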

The manual includes conceptual and operational definitions for all items as well as standardized instruction and detailed criteria for administration and scoring of the instrument.

Time:
Time taken to administer the A-ONE has not been reported.

Equipment:
Not reported

Alternative forms of the assessment

The author has developed several variations of the NBI scale including 2 global scales and 4 psychometrically-sound diagnostic-specific scales (Arnadottir, 2010):

  • Common global scale (NBI-CVA, 53 items)
  • NBI common short form scale (29 items)
  • Left-hemisphere CVA (NBI-LCVA, 42 items)
  • Right-hemisphere CVA (NBI-RCVA, 51 items)
  • Dementia Alzheimers Type (NBI-DAT, 49 items)
  • Dementia Unspecified (NBI-DU, 40 items).

The scales contain different items and hierarchical structures. These versions can be used across diagnostic groups but should not be used to compare different diagnostic groups (Arnadottir, 2010).

Client suitability

Can be used with:

  • The A-ONE can be used with patients with dementia and other neurological disorders (Gardarsdottir & Kaplan, 2002).

Should not be used with:

  • As the A-ONE has been designed for use with adults with neurobehavioural disorders, it is not recommended for use with individuals with other diagnoses or disorders.

Languages of the measure

English, Dutch

Summary

What does the tool measure? The Arnadottir OT-ADL Neurobehavioural Evaluation (A-One) measures the impact of neurobehavioural impairment on functional performance of ADLs. The A-ONE is used to assess (a) independence in ADL tasks and (b) neurobehavioural impairments that limit the individual’s independence in ADL tasks.
What types of clients can the tool be used for? The A-ONE is designed for use with adults with neurological dysfunction of cortical origin, including stroke, dementia, Alzheimer’s disease and other neurological disorders.
Is this a screening or assessment tool? Assessment tool.
Time to administer: The A-ONE ADL scale takes approximately 25 minutes to administer.
Versions

There is only 1 version of the ADL scale but there are several versions of the neurobehavioural scale:

  • Common global scale (NBI-CVA)
  • NBI common short form scale
  • Left-hemisphere CVA (NBI-LCVA)
  • Right-hemisphere CVA (NBI-RCVA)
  • Dementia Alzheimers Type (NBI-DAT)
  • Dementia Unspecified (NBI-DU).

There is a Dutch version of the A-ONE.

Other Languages: English and Dutch.
Measurement Properties
Reliability
Internal consistency:
– One study reported adequate internal consistency of the ADL scale (α = 0.75 – 0.79), poor to adequate internal consistency of the NBSIS scale (α = 0.69 – 0.75), and poor internal consistency of the NBPIS scale (α = 0.59 – 0.63) using Cronbach’s alpha.
– One study reported excellent internal consistency (item separation reliability coefficient = 0.98, item separation index = 8.02; person separation reliability coefficient = 0.90, person separation index = 2.93) using Rasch analysis.

Test-retest:
One study reported excellent one-week test-retest reliability of the A-ONE (agreement of 0.85 or higher for all items).

Intra-rater:
No studies have reported on intra-rater reliability of the A-ONE.

Inter-rater:
Two studies have reported excellent inter-rater reliability for the A-ONE ADL scale (kappa = 0.83; weighted kappa = 0.90; ICC = 0.98; Kendall’s r = 0.92), the NB scale (kappa = 0.85), and the NBSIS scale (ICC = 0.93, weighted kappa = 0.74).

Validity
– One study reported logical ordering of ADL items according to difficulty, but noted large gaps in the hierarchy of item difficulty in some NBI scales.
– One study reported that the ADL scale may not be well targeted to higher functioning individuals.
– One study reported a moderate inverse relationship between ADL and neurobehavioural impairment scales, using Pearson product moment correlation (r=-0.57).

Content:
Content validity of the A-ONE is based on literature review and expert opinion.
– Internal validity of the A-ONE is determined by examination of goodness of fit for items, logical hierarchical ordering of items, targeting, and PCA analysis.
– One study reported unidimensionality of the ADL scale and logical hierarchical ordering of ADL items.
– One study reported unidimensionality of the NBI scales.

Criterion:
Concurrent:
One study reported excellent correlations between the A-ONE ADL scale and the Barthel Index (r = 0.70), and between the A-ONE neurobehavioural scores and the MMSE (r = 0.85).

Predictive:
No studies have reported on the predictive validity of the A-ONE.

Construct:
One study reported that the ADL scale has three factors and the NBSIS scale has 2 factors. A later study reported that the NBSIS scale has a third factor.

Convergent/Discriminant:
No studies have reported on the convergent or discriminant validity of the A-ONE.

Known Groups:
– One study reported no significant difference in the extent of the impact of neurobehavioural impairment on ADL between patients with right CVA and left CVA.
– One study reported significant differences between adults with right and left CVA on the ADL scale and the NBSIS scale.

Floor/Ceiling Effects: The A-ONE ADL scale and neurobehavioural scales demonstrate potential floor/ceiling effects. The ADL scale should be restricted to individuals who are not independent in ADLs. The NBI-CVA should be used clinically with patients with LCVA due to ceiling effects of the NBI-LCVA scale.
Sensitivity/Specificity: No studies have reported on the sensitivity/specificity of the A-ONE.
Does the tool detect change in patients? The ADL ordinal rating scale can be used as an interval scale, which allows measurement of change in ADL task performance over time.
Acceptability: No studies have reported on the acceptability of the A-ONE.
Feasibility: No studies have reported on the feasibility of the A-ONE.
How to obtain the tool? The A-ONE can be found in the textbook The Brain and Behavior: Assessing Cortical Dysfunction Through Activities of Daily Living.
For more information email: a-one@islandia.is

Psychometric Properties

Overview

A literature search was conducted to identify all relevant publications on the psychometric properties of the Arnadottir OT-ADL Neurobehavioural Evaluation (A-ONE). Nine articles were reviewed.

Floor/Ceiling Effects

In a study of the psychometric properties of the ADL scale Arnadottir et al. (2008) reported that 9 of 209 participants with left hemisphere stroke (LHS) reached maximum scores on all items, indicating possible ceiling effects of the scale. The authors concluded that use of the A-ONE should be restricted to individuals who are not independent in ADLs (where maximum scores indicate independence in ADLs). However, while a patient may achieve maximum scores on the ADL scale, they may demonstrate neurobehavioural impairments that can be detected using the NBI scale (Arnadottir, 2010).

In a study of the psychometric properties of the neurobehavioural scales Arnadottir et al. (2009) reported potential floor/ceiling effects of the measure. Two patients with LCVA and 6 patients with RCVA attained extreme (minimum) scores. Extreme (maximum) measures were seen on 10 items when used with patients with LCVA and on 4 items when used with patients with RCVA.

Following refinement of the NBI scales and development of additional versions of the scale, further analysis was conducted using Rasch analysis (Arnadottir, 2010). Results indicated that 30 of 422 patients with CVA or dementia achieved maximum or minimum scores on the 29-item NBI-Common scale; 6 of 215 patients with left-hemisphere or right-hemisphere CVA achieved maximum or minimum scores on the 53-item NBI-CVA scale; 9 of 114 patients with LCVA achieved maximum or minimum scores on the 42-item NBI-LCVA scale; and 0 of 108 patients with RCVA achieved maximum or minimum scores on the 51-item NBI-RCVA scale. Arnadottir (2010) recommended the NBI-CVA for clinical use with patients with LCVA due to the possible ceiling effect when using the NBI-LCVA scale.

Reliability

Internal consistency:
Arnadottir (1990) investigated the internal consistency of the A-ONE and reported adequate internal consistency of the ADL scale (α= 0.75 – 0.79), poor to adequate internal consistency of the NBSIS scale (α= 0.69 – 0.75), and poor internal consistency of the NBPIS scale (α= 0.59 – 0.63).

Arnadottir et al. (2008) examined the reliability of the ADL scale by performing a Rasch analysis with retrospective data from 209 patients with neurological conditions (dementia, n = 111; CVA, n = 95; other, n = 3). Item separation reliability was 0.98 and the item separation index was 8.02, indicating reliable differentiation of items into at least 9 strata of difficulty. A person separation reliability coefficient of 0.90 and a person separation index of 2.93 were found, indicating reliable differentiation of the sample into at least 3 statistically distinct strata of ADL ability.
Note: Item separation reliability is the ratio of the “true” (observed minus error) variance to the obtained variance. The smaller the error, the higher the ratio will be. It ranges from 0.00 to 1.00 and is interpreted in the same way as Cronbach’s alpha. A separation index > 2.00 is equivalent to a Cronbach’s alpha of 0.80 or greater (excellent).
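
As a small worked example of the relationship mentioned in this note, the standard Rasch conversion between a separation reliability coefficient R and a separation index G is G = sqrt(R / (1 - R)); the following sketch applies it to round figures (an illustration, not a calculation reproduced from the A-ONE studies):

    import math

    def separation_index(reliability):
        """Convert a Rasch separation reliability coefficient (0-1) into a separation index."""
        return math.sqrt(reliability / (1.0 - reliability))

    print(round(separation_index(0.80), 2))  # 2.0, the alpha = 0.80 equivalence noted above
    print(round(separation_index(0.90), 2))  # 3.0, close to the person separation index of 2.93
    print(round(separation_index(0.98), 2))  # 7.0, same order as the item separation index of 8.02
                                             # (differences reflect rounding of the coefficients)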

Test-retest:
Gardarsdottir & Kaplan (2002) reported that one-week test-retest reliability of the A-ONE was excellent (agreement of 0.85 or higher for all items).

Intra-rater:
No studies have reported on intra-rater reliability of the A-ONE.

Inter-rater:
Arnadottir (1990) reported excellent inter-rater reliability for the A-ONE ADL scale (average kappa coefficient = 0.83) and the NB scale (kappa = 0.85).

Further analysis by Arnadottir (2008) reiterated excellent inter-rater reliability for the A-ONE ADL scale (ICC=0.98; Kendall’s r=0.92, weighted kappa=0.90) and the NBSIS scale (ICC=0.93, weighted kappa=0.74).

Validity

Content:
Internal validation of the A-ONE was performed by examination of goodness of fit for items, logical hierarchical ordering of items, targeting, and PCA analysis (Arnadottir, 2010).

Arnadottir et al. (2008) performed factor analysis of the A-ONE ADL scale using retrospective data from 209 patients with neurological conditions (CVA, n= 95; dementia, n= 111, other diagnosis, n=3). Analysis of all 22 ADL items revealed that the two communication items (expression, comprehension) and one feeding item (‘use knife’) did not demonstrate acceptable goodness of fit (total of 13.6% item misfit). Following removal of the two communication items, the item ‘use knife’ demonstrated substantially reduced misfit to an acceptable rate (≤ 5%), and as such was maintained. With removal of the two communication items, 84% of total variance was explained by the measures, with 3.6% of the unexplained variance accounted for by first contrast. These results support unidimensionality of the ADL scale.

Arnadottir et al. (2009) performed factor analysis of nonmotor neurobehavioural items (34 NBSIS items and 16 NBPIS items) using retrospective data from 206 patients with CVA and dementia. After four items (anomia, expressive aphasia, working memory, motivation) were removed due to outfit misfit, 56.8% of variance was explained by the Rasch factor (global measure of neurobehavioural impairments), with 4.9% of the unexplained variance accounted for by the first contrast. These results indicate that the neurobehavioural impairment items can be viewed as unidimensional – i.e. belonging to the same construct. The authors proceeded to conduct a principal component analysis (PCA) of global hierarchies according to diagnosis (LCVA, n=36; RCVA, n=37; dementia, n=111). After removal of misfit items (LCVA group – 2 items; RCVA and dementia groups – 3 items), improved results were seen for all diagnostic groups (Rasch factors: LCVA group = 85.5%, RCVA group = 83.3%, dementia group = 79.2%; unexplained variance in first contrast: LCVA group = 2.4%, RCVA group = 3.4%, dementia group = 1.7%). These results indicate that the hierarchical structure of the dimension varies across diagnostic groups.

Arnadottir (2010) reported on factor analysis conducted in development of the NBI common short form scale. All diagnosis-specific versions of the NBI scales demonstrate unidimensionality, as confirmed by PCA analysis (Arnadottir, 2010). The original NBSIS scale included neurobehavioural motor items that measured left- and right-sided performance separately, which were collapsed into single motor items. The resulting 33 neurobehavioural items were common to all 4 diagnostic groups (LCVA, RCVA, Dementia Alzheimers type, Dementia). Four of the 33 items were omitted due to misfit by Rasch analysis and the remaining 29 items demonstrated acceptable goodness of fit. PCA analysis revealed that 72.8% of variance was explained by the Rasch factor, supporting unidimensionality.

Arnadottir et al. (2008) examined the hierarchical ordering of difficulty of ADL items using retrospective data from individuals with dementia (n=111), CVA (n=95), or other neurological conditions (n=3) and reported logical ordering according to item difficulty. However, Arnadottir (2010) conducted an evaluation of the targeting of person ability to item difficulty and identified that the ADL scale may not be well targeted to higher functioning individuals (discrepancy between mean measures = 1.61 logits).

Arnadottir (2010) reported that some NBI scales have large gaps in the hierarchy of item difficulty as there are few items that evaluate neurobehavioural impairments of higher-functioning individuals (mean person measure = -1.74, SD = 1.34).

Arnadottir et al. (2010) examined the relationship between ADL ability and the impact of neurobehavioural impairments on ADL using retrospective data from 215 patients with stroke. A moderate inverse relationship was found between ADL ability and the extent of neurobehavioural impairment impacting ADL, using Pearson product moment correlation (r=-0.57).

Arnadottir (2010) reported that the NBI-CVA scale demonstrates acceptable goodness of fit statistics for all retained items (MnSq ≤1.4, z < 2) and acceptable PCA.

Criterion:
Concurrent:
Steultjens (1998) examined the concurrent validity of the A-ONE. Comparison of the A-ONE ADL scale with the Barthel Index and comparison of NB scores with the MMSE revealed excellent correlations (r=0.70 and r=0.85 respectively).

Predictive:
No studies have reported on the predictive validity of the A-ONE.

Construct:
Convergent/Discriminant:
No studies have reported on the convergent or discriminant validity of the A-ONE.

Arnadottir (1990) conducted exploratory factor analysis and reported that the ADL scale has 3 factors and the NBSIS scale has 2 factors.

Arnadottir et al. (2009) performed factor analysis of the A-ONE neurobehavioural items and reported that an additional factor is formed by neurobehavioural impairments that reflect occupational errors representative of lateralized motor impairments (e.g. tone).

Known Group:
Arnadottir et al. (2010) examined whether patients with right or left CVA differ in the extent to which their neurobehavioural impairments impact performance of ADLs, using retrospective data from 215 patients with stroke. No significant difference in the extent of the impact of neurobehavioural impairment on ADL was seen between patients with right CVA (n=103) and patients with left CVA (n=112).

Gardarsdottir & Kaplan (2002) examined the construct validity of the A-ONE ADL scale and Neurobehavioural Specific Impairment Scale (NBSIS) in adults with right CVA (n=19) and left CVA (n=23). Mann-Whitney U tests identified significant differences between the groups for only 3 of 18 ADL tasks: shave/makeup (p=0.013), comprehension (p=0.005), and speech (p=0.001), whereby patients with left CVA were more dependent than patients with right CVA for these tasks. Mann-Whitney U and chi-square tests revealed significant between-group differences for 13 of 46 neurobehavioural impairments, all within the three NBSIS categories of motor apraxia, unilateral body neglect and abnormal tone. Results indicated that patients with left CVA demonstrated greater severity of motor apraxia in dressing (p=0.022), grooming and hygiene (p=0.001) and feeding (p=0.002) than patients with right CVA. Patients with left CVA also demonstrated greater severity of abnormal tone on both sides of the body in grooming and hygiene tasks (p=0.001) than patients with right CVA, whereas patients with right CVA demonstrated greater severity of abnormal tone on both sides of the body during performance of dressing (p=0.001), transfers and mobility (p=0.001) and feeding (p=0.001). Patients with right CVA also demonstrated greater severity of unilateral body neglect in grooming and hygiene tasks (p=0.002) than patients with left CVA.

Responsiveness

Principal component analysis of the ADL domain supported unidimensionality, enabling conversion of the ordinal rating scale to an interval scale, which would allow measurement of change in ADL task performance over time (Arnadottir, 1990).

References

  • Arnadottir, G. (2010). Measuring the impact of body functions on occupational performance: Validation of the ADL-focused Occupation-based Neurobehavioural Evaluation (A-ONE). (Doctoral dissertation). Retrieved from Swedish Dissertations database.
  • Arnadottir, G. (1990). The brain and behavior: Assessing cortical dysfunction through activities of daily living. St. Louis, MO: Mosby.
  • Arnadottir, G. & Fisher, A.G. (2008). Rasch analysis of the ADL scale of the A-ONE. The American Journal of Occupational Therapy, 62, 51-60.
  • Arnadottir, G., Fisher, A.G., & Löfgren, B. (2009). Dimensionality of nonmotor neurobehavioural impairments when observed in the natural contexts of ADL task performance. Neurorehabilitation and Neural Repair, 23(6), 579-86.
  • Arnadottir, G., Löfgren, B., & Fisher, A.G. (2010). Difference in impact of neurobehavioural dysfunction on activities of daily living performance between right and left hemisphere stroke. Journal of Rehabilitation Medicine, 42, 903-7.
  • Bottari, C., Dutil, É., Dassa, C., & Rainville, C. (2006). Choosing the most appropriate environment to evaluate independence in everyday activities: Home or clinic? Australian Occupational Therapy Journal, 53, 98-106.
  • Carswell, A., Carson, L.J., Walop, W. & Zgola, J. (1992). A theoretical model of functional performance in persons with Alzheimer disease. Canadian Journal of Occupational Therapy, 59(3), 132-40.
  • Cooke, D.M., McKenna, K. & Fleming, J. (2005). Development of a standardized occupational therapy screening tool for visual perception in adults. Scandinavian Journal of Occupational Therapy, 12, 59-71.
  • Gardarsdottir, S. & Kaplan, S. (2002). Validity of the Árnadottir OT-ADL Neurobehavioral Evaluation (A-ONE): Performance in activities of daily living and neurobehavioural impairments of persons with left and right hemisphere damage. American Journal of Occupational Therapy, 56, 499-508.
  • Steultjens, E.M. (1998). A-ONE: De Nederlands Versie [A-ONE: The Dutch version]. Nederlands Tidskrift for Ergoterapie, 26, 100-4.

See the measure

How to obtain the assessment?

The A-ONE assessment is in the textbook: The Brain and Behavior: Assessing Cortical Dysfunction Through Activities of Daily Living.

For more information email: a-one@islandia.is

Behavioral Inattention Test (BIT)

Evidence Reviewed as of before: 12-10-2011
Author(s)*: Sabrina Figueiredo, BSc
Editor(s): Anita Menon, MSc; Nicol Korner-Bitensky, PhD OT

Purpose

The Behavioral Inattention Test (BIT), initially called the Rivermead Behavioral Inattention Test, is a short screening battery of tests to assess the presence and the extent of visual neglect on a sample of everyday problems faced by patients with visual inattention (Wilson, Cockburn, & Halligan, 1987).

In-Depth Review

Purpose of the measure

The Behavioral Inattention Test (BIT), initially called the Rivermead Behavioral Inattention Test, is a short screening battery of tests to assess the presence and the extent of visual neglect on a sample of everyday problems faced by patients with visual inattention (Wilson, Cockburn, & Halligan, 1987).

Available versions

The BIT was developed in 1987 by Barbara Wilson, Janet Cockburn and Peter Halligan.

Features of the measure

Items:
The BIT is divided into two subtests: Conventional and Behavioral. The BIT Conventional subtest (BITC) consists of 6 items: line crossing, letter cancellation, star cancellation, figure and shape copying, line bisection, and representational drawing. The BIT Behavioral subtest (BITB) consists of 9 items: picture scanning, telephone dialing, menu reading, article reading, telling and setting the time, coin sorting, address and sentence copying, map navigation, and card sorting. To minimize practice and learning effects upon re-testing, parallel versions of the test were created (Wilson et al., 1987).

A client is diagnosed with visual neglect based on failure to attend to the target stimuli and on the relative spatial location of the omitted targets (Wilson et al., 1987).

The BIT items are as follows (Halligan, Cockburn, & Wilson, 1991):

  • BITC subtest
Line crossing: Patients are required to detect and cross out all target lines on a page. When administering the test, the examiner demonstrates the nature of the task by crossing out two of the four lines located in a central column, and then instructs the patient to cross out all lines they can see on the page.
Letter Cancellation: A paper and pencil test in which patients are required to scan, locate, and cross out designated targets from a background of distractor letters. The test consists of 5 rows of 34 upper case letters presented on a rectangular page. Forty target stimuli are positioned such that each appears in equal number on both sides of the page. Each letter is 6 mm high and positioned 2 mm apart.
Star Cancellation: This test consists of a random array of verbal and non-verbal stimuli: 52 large stars (14 mm), 13 randomly positioned letters, and 19 short (3-4 letter) words interspersed with 56 smaller stars (8 mm), which comprise the target stimuli. The patient is instructed to cancel all the small stars.
Figure and Shape copying: The patient is required to copy three separate, simple drawings from the left side of the page. The three drawings (a four-pointed star, a cube, and a daisy) are arranged vertically and are clearly indicated to the patient. The second part of the test requires the patient to copy a group of three geometric shapes presented on a separate stimulus sheet. Unlike the previous items, the contents of the page are not pointed out to the patient.
Line Bisection: Patients are required to estimate and indicate the midpoint of a horizontal line. The expectation is that a patient with left neglect will choose a midpoint to the right of true center. Each patient is presented with three horizontal, 8-inch black lines, 1 mm thick, displayed in a staircase fashion across the page. The extent of each line is clearly pointed out to the patient, who is then instructed to mark the center.
Representational Drawing: The patient is asked to draw pictures of a clock face, together with the numbers and a setting of the hands; a man or a woman; and a simple outline drawing of a butterfly. The task is designed to assess the patient’s visual imagery independent of sensory input. Patients with left-sided neglect typically use the right side of the page and their drawings often contain major omissions of features on the left-hand side. Drawings of a clock face, the human form and a butterfly have shown themselves to be clinically sensitive.
  • BITB subtest
Picture Scanning: Three large photographs (a meal, a wash basin and toiletries, and a large room flanked by various pieces of furniture and hospital aids), each measuring 357 x 278 mm, are presented one at a time. Each photograph is placed in front of the seated patient, who is not permitted to move it. The patient is instructed to name and/or point to the main items in each picture.
Telephone Dialing: A telephone with a numbered dial or a push-button keyboard is presented. Each number is placed directly in front of the telephone and the patient is instructed to dial the number sequence presented.
Menu reading: A menu “open-out” page (420 x 297 mm) containing 18 common food items arranged in 4 adjacent columns (2 on the left and 2 on the right) is presented. The food items are presented in 6 mm high letters. The patient is instructed to open the menu and read out all the items. Language-impaired patients are permitted to point to all the words they can see.
Article Reading: Three short columns of text are presented, which patients are then instructed to read.
Telling and Setting the time: This test has three parts. First, the patient is required to read the time from photographed settings on a digital clock face. Second, the patient is required to read the time from three settings on an analogue clock face. Finally, the patient is instructed to set times on the analogue clock face as they are called out by the examiner.
Coin Sorting: An array of familiar coins is presented. The client is then asked to indicate the locations of the coin type called out by the examiner. This task requires selective scanning of the coin array in order not to miss any instance of the named denomination.
Address and Sentence copying: The patient is required to copy an address and a sentence on separate pages.
Map Navigation: The patient is required to follow and locate spatial points (letters) positioned on a network of pathways on a sheet of paper. More specifically, after having been shown the junctions of each pathway, patients are instructed to use their fingers to trace out the sequence of letters said by the examiner.
Card Sorting: Sixteen playing cards are presented in a 4×4 matrix. Initially, each card is pointed out to the patient, who is then required to point to each of the card types present as the examiner calls them out.

Scoring:
The BIT total score, as well as the sub-scores for the BITC and BITB, is obtained by adding the subtest scores together. Maximum scores for the BIT, BITC and BITB are 227, 146, and 81, respectively. Lower scores are indicative of more severe visual neglect (Menon & Korner-Bitensky, 2004).

  • BITC subtest
Line crossing: The four central lines are not included in the score and neglect is diagnosed if any lines are missed by the patient. A score sheet is provided to note the nature of the neglect.
Letter Cancellation: The maximum score is 40, and a scoring template allows the scorer to divide the total array into four columns, two on the left and two on the right. On completion of the task, the total number of omitted target letters is calculated, and the location of the omissions is noted.
Star Cancellation: As with the letter cancellation task, the test sheet can be subdivided into columns to calculate the number and location of errors.
Figure and Shape copying: Scoring is based on the completeness of each drawing. Neglect is defined as an omission or gross distortion of any major contralesional component of the drawing.
Line Bisection: The test is scored by measuring deviations from the true midpoint. Deviations to the left are scored as negative; deviations to the right as positive. The deviation score is calculated using the normative data obtained from the age-matched controls. Each of the three lines is scored out of a maximum of three. Using data from the control sample, score values between 0 and 3 are assigned to the patient’s performance.
Representational Drawing: Scoring is similar to the copying tasks, where neglect is defined as the omission or gross distortion of any major contralesional component of the drawing.
  • BITB subtest
Picture Scanning: Only omissions are scored, though errors of identification are also noted. Scoring of this and all other BITB tests is out of a total of nine and is calculated from the total number of omissions recorded.
Telephone Dialing: The dialing sequence is recorded, together with the number and location of omissions or substitutions.
Menu reading: Each of the 18 items is scored as correct or incorrect, where an incorrect response refers to a partial or entire word substitution or omission.
Article Reading: Scoring is based on the percentage of words omitted across all three columns. Word omissions and partial or entire word substitutions are scored as errors.
Telling and Setting the time: All three parts are scored according to the number of omissions or substitutions made.
Coin Sorting: Scoring is based on the number of omissions.
Address and Sentence copying: The score is calculated from the number of letters omitted or substituted from each side of the page.
Map Navigation: Failure to complete any segment of the route sequence incurs a penalty deduction of one point, down to a minimum of zero for each trial.
Card Sorting: To score, the position and total number of omissions are recorded.

Cut-off scores for the BIT, BITC, and BITB are respectively 196 out of 227, 129 out of 146, and 67 out of 81 (Halligan et al., 1991; Menon & Korner-Bitensky, 2004).

To score the relative spatial location component, the number of screening tests that demonstrated an overall lateralized performance is calculated. If half of the tests show lateralized performance and half do not, the index of lateralized performance is then determined by the total number of omissions/errors made on each side. The severity of visual neglect can be calculated based on the client’s performance on the 6 BITC tests. This score is determined by the number of conventional tests on which a given client demonstrates visual neglect. Severity scores range from 1 to 6, with higher scores indicating more severe visual neglect (Halligan et al., 1991).
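
To make the arithmetic concrete, here is a minimal sketch of the cut-off comparison and BITC severity count described above (illustrative only; the function names are assumptions, and it is assumed that scores below the cut-off indicate neglect, consistent with lower scores reflecting more omissions):

    # Maximum possible scores: BIT 227, BITC 146, BITB 81.
    BIT_CUTOFFS = {"BIT": 196, "BITC": 129, "BITB": 67}

    def bit_screen(bitc_items, bitb_items):
        """Sum the 6 BITC and 9 BITB item scores and compare them to the cut-offs.

        Assumption: a score below the cut-off is interpreted as indicating neglect.
        """
        scores = {"BITC": sum(bitc_items), "BITB": sum(bitb_items)}
        scores["BIT"] = scores["BITC"] + scores["BITB"]
        return {name: (score, score < BIT_CUTOFFS[name]) for name, score in scores.items()}

    def bitc_severity(neglect_shown_per_conventional_test):
        """Severity (1-6) = number of the 6 BITC tests on which neglect is demonstrated."""
        return sum(bool(shown) for shown in neglect_shown_per_conventional_test)

    # Example with invented item scores: reduced performance on several tests.
    print(bit_screen([20, 30, 40, 3, 6, 2], [7] * 9))
    # {'BITC': (101, True), 'BITB': (63, True), 'BIT': (164, True)}
    print(bitc_severity([True, False, True, False, False, False]))  # 2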

Time:
The BIT takes approximately 30 to 40 minutes to administer (Menon & Korner-Bitensky, 2004).

Subscales:
BITC – BIT Conventional subtest

BITB – BIT Behavioral subtest

Equipment:

  • Forms for the BITC and BITB;
  • Photographs of 1) a meal, 2) wash-basin and toiletries, 3) a large room with pieces of furniture and hospital aids and 4) different settings of a digital clock;
  • Open-out menu;
  • Analogue clock;
  • Six different types of coins;
  • Playing cards;
  • Paper and pencil.

Training:
Not required.

Alternative forms of the BIT

  • BIT – shortened version: Developed by Stone, Wilson, & Rose in 1987, this version comprises three Conventional (BITC) subtests (line crossing, star cancellation, figure copying) and five Behavioral (BITB) subtests (scanning a picture, reading a menu, eating a meal, reading an article, sorting coins). This version takes, on average, 10 to 15 minutes to administer (Menon & Korner-Bitensky, 2004).

Client suitability

Can be used with:

  • Clients with stroke

Should not be used with:

  • The BIT should not be used with clients who have difficulty communicating (e.g. apraxia or aphasia).

In what languages is the measure available?

English, Chinese.

Summary

What does the tool measure? The BIT estimates the presence and the extent of visual neglect.
What types of clients can the tool be used for? The BIT can be used with, but is not limited to clients with stroke.
Is this a screening or assessment tool? Assessment.
Time to administer: The BIT takes 30 to 40 minutes to administer.
Versions: BIT; BIT shortened version.
Other Languages: English, Chinese.
Measurement Properties
Reliability
Internal consistency:
No studies have examined the internal consistency of the BIT.

Test-retest:
Two studies have examined the test-retest reliability of the BIT and both reported excellent test-retest reliability using the Pearson correlation coefficient.

Intra-rater:
No studies have examined the intra-rater reliability of the BIT.

Inter-rater:
Two studies have examined the inter-rater reliability of the BIT and reported excellent inter-rater reliability using Pearson correlation coefficient.

Validity
Content:
One study examined the content validity of the BIT and reported the item generation process when creating the measure.

Criterion:
Concurrent:
No studies have reported the concurrent validity of the BIT in clients with stroke.

Predictive:
One study examined the predictive validity of the BITB and reported that BITB scores measured at 10 days post-stroke are an excellent predictor of poor functional outcome at 3, 6 and 12 months post-stroke.

Construct:
Convergent/Divergent:
Three studies have examined the convergent validity of the BIT and reported excellent correlations between the BIT, the Occupational Therapy Checklist and the Barthel Index, along with adequate correlations with the Rivermead Activities of Daily Living Assessment.

Known Groups:
One study reported that BITC scores were able to distinguish between those with and without visual neglect.

Floor/Ceiling Effects: No studies have reported floor/ceiling effects of the BIT in clients with stroke.
Sensitivity/Specificity: One study examined the specificity and sensitivity of the BIT and reported that both subtests are able to accurately identify individuals with visual neglect.
Does the tool detect change in patients? No studies have reported the responsiveness of the BIT in clients with stroke.
Acceptability: The tests are easy and simple to administer.
Feasibility: The tests are easy to administer, simple and relatively unambiguous to score, and are sufficiently wide ranging to detect different forms of visual neglect.
How to obtain the tool?

The BIT can be obtained from the website:
http://www.pearsonclinical.com/psychology/products/100000138/behavioral-inattention-test-bit.html?Pid=015-8054-628&Mode=summary

Psychometric Properties

Overview

We conducted a literature search to identify all relevant publications on the psychometric properties of the Behavioral Inattention Test (BIT) in individuals with stroke. We identified 5 studies.

Floor/Ceiling Effects

No studies have reported floor/ceiling effects of the BIT in clients with stroke.

Reliability

Internal Consistency:
No studies have reported the internal consistency of the BIT in clients with stroke.

Test-retest:
Wilson, Cockburn, and Halligan (1987) examined the test-retest reliability of the BIT in 28 clients with stroke and 14 healthy individuals. Participants were re-assessed within 1 week. The test-retest reliability for the BIT, as calculated using Pearson Correlation Coefficient, was excellent (r = 0.83).

Halligan, Cockburn, and Wilsom (1991) estimated the test-retest reliability of the BIT conventional subtest (BITC) and the BIT behavioral subtest (BITB) in 10 clients with stroke. Participants were re-assessed within 15 days. The test-retest reliability was excellent for both BITC and BITB (r = 0.89; r = 0.97, respectively).

Intra-rater:
No studies have reported the intra-rater reliability of the BIT in clients with stroke.

Inter-rater:
Wilson et al. (1987) assessed the inter-rater reliability of the BIT in 7 clients with stroke. Two raters assessed participants simultaneously. A 100% agreement level was found between raters.

Halligan et al. (1991) verified the inter-rater reliability of the BIT conventional subtest (BITC) and the BIT behavioral subtest (BITB) in 13 clients with stroke. Two independent raters scored participants separately but simultaneously. The correlation between raters’ mean scores, as calculated using the Pearson Correlation Coefficient, was excellent (r = 0.99) for both the BITC and BITB.

Validity

Content:
Wilson et al. (1987) obtained information about clients’ everyday difficulties in order to construct a brief battery of tests that included ‘real world’ experiences of patients recovering from stroke. Information was retrieved from published cases, behavioral observation of patients with neglect, as well as from discussions with occupational therapists, physiotherapists, clinical psychologists, and neurologists, all of whom had worked with patients with visual neglect. The final selection of items was determined based on the results of a pilot study.

Criterion:
Concurrent:
The concurrent validity of the BIT has not been examined in clients with stroke.

Predictive:
Jehkonen, Ahonen, Dastidar, Koivisto, Laippala, Vilkki et al. (2000) examined 50 clients with stroke to determine whether visual neglect measured 10 days post-stroke was predictive of poor functional outcomes at 3, 6 and 12 months post-stroke. Visual neglect was measured with the BIT and functional outcomes with the Frenchay Activities Index (FAI) (Holbrook & Skilbeck, 1983). Linear regression analysis indicated that the BIT is an excellent predictor of poor functional outcomes, accounting for 73%, 64% and 61% of the total variance of the FAI at 3, 6 and 12 months respectively.

Halligan et al. (1991) analyzed the percentage of people that were correctly classified as having visual neglect using the BITC and the BITB. This study included 80 clients with stroke. Results were as follows:

  • BITC
Item                          Right brain damaged clients (n = 26)   Left brain damaged clients (n = 54)
                              Sensitivity / Specificity              Sensitivity / Specificity
Line crossing                 65% / 76%                              75% / 96%
Letter cancellation           77% / 82%                              100% / 95%
Star cancellation             100% / 64%                             100% / 77%
Figure copying                96% / 97%                              100% / 91%
Line bisection                65% / 76%                              75% / 96%
Representational drawing      42% / 64%                              0% / 85%
  • BITB
Item                          Right brain damaged clients (n = 26)   Left brain damaged clients (n = 54)
                              Sensitivity / Specificity              Sensitivity / Specificity
Picture scanning              65% / 76%                              25% / 88%
Telephone dialing             57% / 72%                              25% / 88%
Menu reading                  65% / 76%                              75% / 96%
Article reading               38% / 64%                              50% / 92%
Telling time                  69% / 78%                              100% / 100%
Coin sorting                  100% / 100%                            100% / 95%
Address and sentence copying  65% / 76%                              50% / 92%
Map navigation                46% / 67%                              100% / 95%
Card sorting                  54% / 70%                              25% / 88%

Construct:
Convergent/Divergent:
Halligan et al. (1991) examined the convergent validity of the BIT by comparing it to the Occupational Therapist Checklist and the Rivermead Activities of Daily Living Assessment (Whiting & Lincoln, 1980) in 80 clients with stroke. Excellent correlations were found between the BIT and the Occupational Therapist Checklist (r = -0.65); adequate correlations between the BIT and the Rivermead Activities of Daily Living Assessment (r = 0.55).

Hartman-Maeir and Katz (1995) verified the convergent validity of the BIT Behavioral subtest by comparing it to a checklist of activities of daily living (ADL). The correlation, as calculated using the Pearson Correlation Coefficient, was excellent (r = 0.77).

Cassidy, Bruce, Lewis, and Gray (1999) evaluated the convergent validity of the BIT by comparing it to the Barthel Index (Mahoney & Barthel, 1965) in 44 clients with stroke. The correlation between the two measures was excellent (r = 0.64).

Known Groups:
Halligan et al. (1991) studied 80 clients with stroke to determine whether the BITC subtest was able to distinguish persons with visual neglect from healthy individuals. Individuals with visual neglect performed significantly worse on the BITC than healthy individuals (p < 0.001; calculated using the Kruskal-Wallis test). Therefore, the BITC is capable of discriminating between known groups.

Responsiveness

No studies have reported the responsiveness of the BIT in clients with stroke.

References

  • Albert, M. L. (1973). A simple test of visual neglect. Neurology, 23, 658-664.
  • Beschin, N., Robertson, I. H. (1997). Personal versus extrapersonal neglect: a group study of their dissociation using a reliable clinical test. Cortex. 33, 379-384.
  • Brunila, T., Jalas, M., Lindell, J.A., Tenovuo, O., Hamalainen, H. (2003). The two part picture in detection of visuospatial neglect. Clin Neuropsychol, 17, 45-53.
  • Cassidy, T.P., Bruce, D.W., Lewis, S., & Gray, S.G. (1999). The association of visual field deficits and visuospatial neglect in acute right hemisphere stroke patients. Age Ageing, 28, 257-260.
  • Diller, L., Ben-Yishay, Y., Gerstman, L. J., Goodin, R., Gordon, W., Weinberg, J. (1974). Studies in scanning behavior in hemiplegia. Rehabilitation Monograph No. 50, Studies in cognition and rehabilitation in hemiplegia. New York: New York University Medical Center, Institute of Rehabilitation Medicine.
  • Goodenough, F. L. (1926). The measurement of intelligence by drawing. New York: World Books.
  • Halligan, P., Cockburn, J., Wilson, B. (1991). The Behavioural Assessment of Visual Neglect. Neuropsychological Rehabilitation 1, 5-32.
  • Hartman-Maeir, A., Katz, N. (1995). Validity of the Behavioral Inattention Test: relationship with functional tasks. Am J Occup Therapy, 49, 507-516.
  • Holbrook, M., Skilbeck, C. E. (1983). An activities index for use with stroke patients. Age and Ageing, 12(2), 166-170.
  • Jehkonen, M., Ahonen, J.P., Dastidar, P., et al. (2000). Visual neglect as a predictor of functional outcome one year after stroke. Acta Neurol Scand, 101, 195-201.
  • Mahoney, F. I., Barthel, D. W. (1965). Functional evaluation: The Barthel Index. Md State Med J, 14, 61-5.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: working your way through the maze of assessment choices. Top Stroke Rehabil 11, 41-66.
  • Whiting, S. & Lincoln, N. (1980). An A.D.L. assessment for stroke patients. British Journal of Occupational Therapy, 43, 44-46.
  • Wilson, B., Cockburn, J., Halligan, P. (1987). Development of a behavioral test of visuospatial neglect. Arch Phys Med Rehabil, 68, 98-102.
  • Zoccolotti, P., Antonucci, G., Judica, A. (1992). Psychometric characteristics of two semi-structured scales for the functional evaluation of hemi-inattention in extrapersonal and personal space. Neuropsychological Rehabilitation, 2, 179-191.

See the measure

How to obtain the BIT

The BIT can be obtained from the website:
http://www.pearsonclinical.com/psychology/products/100000138/behavioral-inattention-test-bit.html?Pid=015-8054-628&Mode=summary

The BIT Complete Kit, which includes the manual, 25 record forms, various stimulus, test and playing cards, and a clock face, costs US$339.00. A package containing only the record forms (n = 25) costs US$42.00.


Bells Test

Evidence Reviewed as of before: 11-01-2011
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Bells Test is a cancellation test that allows for a quantitative and qualitative assessment of visual neglect in the near extrapersonal space.

In-Depth Review

Purpose of the measure

The Bells Test is a cancellation test that allows for a quantitative and qualitative assessment of visual neglect in the near extrapersonal space.

Available versions

The Bells Test was developed by Gauthier, Dehaut, and Joanette in 1989.

Features of the measure

Items:
There are no actual items to the Bells Test.

In the Bells Test, the patient is asked to circle with a pencil all 35 bells embedded within 280 distractors (houses, horses, etc.) on an 11 x 8.5 – inch page (Figure 1). All drawings are black. The page is placed at the patient’s midline.

Figure 1. Bells Test

The objects appear to be presented in a random order but are actually distributed equally across 7 columns, each containing 5 targets and 40 distractors. A black dot at the bottom of the page indicates where the page should be placed in relation to the patient’s midsagittal plane. Of the 7 columns, 3 are on the left side of the sheet, 1 is in the middle, and 3 are on the right. If the patient omits only bells in the outermost left column, the neglect can be estimated as mild; omissions in columns closer to the center suggest more severe neglect of the left side of space.

The examiner is seated facing the patient. First a demonstration sheet is presented to the patient. This sheet contains an oversized version of each of the distractors and one circled bell. The patient is asked to name the images indicated by the examiner in order to ensure proper object recognition. If the patient experiences language difficulties or if the examiner suspects comprehension problems, the patient can instead place a card representing that object over the actual image.

The examiner gives the following instructions: “Your task will consist of circling with the pencil all the bells that you will find on the sheet that I will place in front of you without losing time. You will start when I say “go” and stop when you feel you have circled all the bells. I will also ask you to avoid moving or bending your trunk if possible.” If a comprehension problem is present, the examiner has to demonstrate the task.

The test is then placed in front of the patient with the black dot (see arrow on Figure 1) on the subject’s side, centered on his midsagittal plane (divides the body into right and left halves). The test sheet is given after the instructions.

The examiner holds the scoring sheet (Figure 2) away from the patient’s view, making sure the middle dot is towards the patient. This upside-down position will make scoring easier for the examiner. After the patient begins the test, the examiner records the order of the bells circled by the patient by numbering the order on his/her scoring sheet (e.g. 1, 2, 3,…). If the patient circles another image (an image that is not a bell), the examiner indicates on his/her scoring sheet the appropriate number and the approximate location. The subsequent bell receives the next number.

Figure 2. Examiner scoring sheet.

If the patient stops before all the bells are circled, the examiner gives only one warning by saying “are you sure all the bells are now circled? Verify again.” After that, the order of numbering continues, but the numbers are circled or underlined. The task is completed when the patient stops his/her activity.

Scoring:
The total number of circled bells is recorded, as well as the time taken to complete the test. The maximum score is 35. An omission of 6 or more bells on the right or left half of the page indicates USN. Judging by the spatial distribution of the omitted targets, the evaluator can then determine the severity of the visual neglect and the hemispace affected (i.e. left or right).

The sequence by which the patient proceeds during the scanning task can be determined by connecting the bells of the scoring sheet according to the order of the numbering.
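
As a minimal sketch of the scoring rule described above (in Python; the per-column bookkeeping and the treatment of the middle column as belonging to neither half are assumptions made for illustration, not part of the published procedure):

```python
TARGETS_PER_COLUMN = 5  # 7 columns x 5 bells = 35 targets in total

def score_bells(circled_per_column: dict[int, int]) -> dict:
    """Columns are numbered 1 (leftmost) to 7 (rightmost); column 4 is the middle.
    `circled_per_column` maps each column to the number of bells circled in it (0-5)."""
    omissions = {col: TARGETS_PER_COLUMN - circled_per_column.get(col, 0)
                 for col in range(1, 8)}
    left_omissions = sum(omissions[c] for c in (1, 2, 3))    # left half of the page
    right_omissions = sum(omissions[c] for c in (5, 6, 7))   # right half of the page
    return {
        "total_circled": sum(circled_per_column.get(c, 0) for c in range(1, 8)),  # max 35
        "left_omissions": left_omissions,
        "right_omissions": right_omissions,
        # An omission of 6 or more bells on one half of the page indicates USN.
        "usn_indicated": left_omissions >= 6 or right_omissions >= 6,
    }

# Example: omissions concentrated in the two leftmost columns suggest left-sided neglect.
print(score_bells({1: 0, 2: 1, 3: 4, 4: 5, 5: 5, 6: 5, 7: 5}))
```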

Time:
Less than 5 minutes.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • The test paper (8.5″x11″ page with 35 bells embedded within 280 distractors).
  • Pencil
  • Score sheet
  • Stopwatch

Alternative forms of the Bell’s Test

None.

Client suitability

Can be used with:
Patients with stroke.

  • Patients must be able to hold a pencil to complete the test (the presence of apraxia may impair this ability).
  • Patients must be able to visually discriminate between distractor items, such as the images of houses and horses, and the bells that are to be cancelled.

Should not be used with:

  • As with other cancellation tests, the Bells Test cannot be used to differentiate between sensory neglect and motor neglect because it requires both visual search and manual exploration (Làdavas, 1994).
  • The Bells Test cannot be completed by proxy.

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the near extrapersonal space.
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions None.
Other Languages Not applicable.
Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the Bells Test.

Test-retest:
No studies have examined the test-retest reliability of the Bells Test.

Validity Criterion:
One study reported that the Bells Test is more likely to identify the presence of neglect than the Albert’s Test in patients with stroke. One study found that the Bells Test and the Letter Cancellation Test were more likely to detect the presence of neglect than the Line Bisection Test in 35 patients with spatial neglect.

Construct:
Known groups:
The Bells Test was able to discriminate between patients with right cerebral lesions and patients with left cerebral lesions.

Does the tool detect change in patients? Not applicable.
Acceptability The Bells Test should be used as a screening tool rather than for clinical diagnosis of USN. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients must be able to hold a pencil and visually discriminate between distractor items to complete. The measure cannot be used to differentiate between sensory neglect and motor neglect.
Feasibility The Bells Test requires no specialized training to administer and only minimal equipment is required (a pencil, a stopwatch, the test paper and scoring sheet). The test is simple to score and interpret. The test is placed at the patient’s midline and a demonstration sheet is used to familiarize the patient with the images used in the test. The examiner is required to follow along with the patient as they circle each bell, and record on the scoring sheet the order in which the bells are cancelled. Upon completion of the test, the examiner must count the number of bells cancelled out of a total of 35, and record the time taken by the patient.
An omission of 6 or more bells on the right or left half of the page indicates USN.
How to obtain the tool?

You can download a copy of the Bells Test.

Psychometric Properties

Overview

Reviews and comparative studies of the Bells Test report that it is more sensitive in detecting neglect than the Line Bisection Test (Marsh & Kersel, 1993; Azouvi et al., 2002).

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Bells Test.

Reliability

Internal consistency:
No studies have examined the internal consistency of the Bells Test.

Test-retest:
No studies have examined the test-retest reliability of the Bells Test.

Validity

Criterion:
Vanier et al. (1990) administered the Bells Test and the Albert’s Test to 40 neurologically healthy adults and 47 patients with right brain stroke. It was found that 38.3% of patients were diagnosed with USN using the Bells Test (with a cutoff score of greater than or equal to 4), compared with only 10.6% with the Albert’s Test (using a cutoff score greater than or equal to 2). The results of this study suggest that the Bells Test is more likely to identify the presence of neglect than the Albert’s Test in patients with stroke.

Ferber and Karnath (2001) examined the ability of various cancellation and line bisection tests to detect the presence of neglect in 35 patients with spatial neglect. The Bells Test detected a significantly higher percentage of omitted targets than the other tests (Line Bisection Test, and 3 cancellation tests: Letter Cancellation Test, Star Cancellation Test and Line Crossing Test). The Line Bisection Test missed 40% of patients with spatial neglect. The Letter Cancellation Test and the Bells Test missed only 6% of the cases.

Construct:
Known groups:
Gauthier et al. (1989) examined the Bells Test in 59 subjects, of which 20 were controls, 19 had right cerebral lesions and 20 had left cerebral lesions. A statistically significant difference in mean scores between the group with right cerebral lesions and the group with left cerebral lesions was observed.

Responsiveness

No studies have examined the responsiveness of the Bells Test.

References

  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., et al. (2002). Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. J Neurol Neurosurg Psychiatry, 73, 160 -166.
  • Ferber, S., Karnath, H. O. (2001). How to assess spatial neglect–Line Bisection or Cancellation Tests? J Clin Exp Neuropsychol, 23, 599-607.
  • Gauthier, L., Dehaut, F., Joanette, Y. (1989). The Bells Test: a quantitative and qualitative test for visual neglect. Int J Clin Neuropsychol, 11, 49-54.
  • Làdavas, E. (1994). The role of visual attention in neglect: A dissociation between perceptual and directional motor neglect. Neuropsychological Rehabilitation, 4, 155-159.
  • Marsh, N. V., Kersel, D. A. (1993). Screening tests for visual neglect following stroke. Neuropsychological Rehabilitation, 3, 245-257.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Vanier, M., et al. (1990). Evaluation of left visuospatial neglect: Norms and discrimination power of two tests. Neuropsychology, 4, 87-96.

See the measure

How to obtain the Bell’s Test

You can download a copy of the Bells Test.


Catherine Bergego Scale (CBS)

Evidence Reviewed as of before: 20-01-2012
Author(s)*: Annabel McDermott, OT
Editor(s): Nicol Korner-Bitensky, PhD OT

Purpose

The Catherine Bergego Scale is a standardized checklist to detect presence and degree of unilateral neglect during observation of everyday life situations. The scale also measures self-awareness of behavioral neglect (anosognosia).

In-Depth Review

Purpose of the measure

The Catherine Bergego Scale is a standardized checklist to detect presence and degree of neglect during observation of everyday life situations. The scale also provides a measure of neglect self-awareness (anosognosia).

Available versions

There is only one version of the CBS. The CBS is comprised of a 10-item checklist for use by clinicians, and a corresponding patient-administered questionnaire that can be used to measure self-awareness of neglect (anosognosia).

Features of the measure

Items:
The CBS is comprised of 10 everyday tasks that the therapist observes during performance of self-care activities. The therapist scores the patient on the following items:

  1. Forgets to groom or shave the left part of his/her face
  2. Experiences difficulty in adjusting his/her left sleeve or slipper
  3. Forgets to eat food on the left side of his/her plate
  4. Forgets to clean the left side of his/her mouth after eating
  5. Experiences difficulty in looking towards the left
  6. Forgets about a left part of his/her body (e.g. forgets to put his/her upper limb on the armrest, or his/her left foot on the wheelchair rest, or forgets to use his/her left arm when he/she needs to)
  7. Has difficulty in paying attention to noise or people addressing him/her from the left
  8. Collides with people or objects on the left side, such as doors or furniture (either while walking or driving a wheelchair)
  9. Experiences difficulty in finding his/her way towards the left when traveling in familiar places or in the rehabilitation unit
  10. Experiences difficulty finding his/her personal belongings in the room or bathroom when they are on the left side

There is a corresponding patient/carer questionnaire that can be used to assess anosognosia (i.e. self-awareness of neglect). The questionnaire is comprised of 10 questions that correspond with CBS items. For instance, in accordance with the first item the clinician would ask the patient: “Do you sometimes forget to groom or shave the left side of your face?” If the patient identifies that the difficulty is present, the clinician asks: “Do you find this difficulty mild, moderate or severe?”

Scoring:
The CBS uses a 4-point rating scale to indicate the severity of neglect for each item:

0 = no neglect
1 = mild neglect (patient always explores the right hemispace first and slowly or hesitantly explores the left side)
2 = moderate neglect (patient demonstrates constant and clear left-sided omissions or collisions)
3 = severe neglect (patient is only able to explore the right hemispace)

This results in a total score out of 30.

Azouvi et al. (2002, 2003) have reported arbitrary ratings of neglect severity according to total scores:

0 = No behavioral neglect
1-10 = Mild behavioral neglect
11-20 = Moderate behavioral neglect
21-30 = Severe behavioral neglect

In cases of severe impairment the patient may not be able to perform an item of the CBS. In these instances the item is considered invalid, is not scored, and is not included in the final score. The total score is then prorated so that it remains out of 30 (i.e. the sum of the individual item scores divided by the number of valid items, multiplied by 10).

The patient questionnaire also uses a 4-point rating scale according to the following levels of difficulty experienced by the patient:

0 = no difficulty
1 = mild difficulty
2 = moderate difficulty
3 = severe difficulty

The anosognosia score is then calculated as the difference between the clinician’s total score and the patient’s self-assessment score (Azouvi et al., 2003):

Anosognosia score = clinician’s CBS score – patient’s self-assessment score
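
To make the scoring arithmetic concrete, here is a minimal Python sketch of the total score, the proration rule for invalid items, and the anosognosia score. Function names are illustrative, and mapping a (possibly non-integer) prorated total onto Azouvi et al.'s arbitrary severity bands is an assumption of this sketch:

```python
from typing import Optional, Sequence

def cbs_total(item_scores: Sequence[Optional[int]]) -> float:
    """Prorated CBS total out of 30; items are scored 0-3, and None marks an item
    the patient could not perform (invalid, excluded from scoring)."""
    valid = [s for s in item_scores if s is not None]
    if not valid:
        raise ValueError("no valid items to score")
    # Sum of valid item scores divided by the number of valid items, multiplied by 10.
    return sum(valid) / len(valid) * 10

def severity_band(total: float) -> str:
    """Arbitrary severity ratings reported by Azouvi et al. (2002, 2003)."""
    if total == 0:
        return "no behavioral neglect"
    if total <= 10:
        return "mild behavioral neglect"
    if total <= 20:
        return "moderate behavioral neglect"
    return "severe behavioral neglect"

def anosognosia_score(clinician_total: float, self_assessment_total: float) -> float:
    """Anosognosia score = clinician's CBS score - patient's self-assessment score."""
    return clinician_total - self_assessment_total

# Example: one item (e.g. dressing) could not be performed and is excluded.
scores = [2, None, 1, 2, 3, 2, 1, 2, 2, 1]
total = cbs_total(scores)                     # 16 / 9 * 10 ≈ 17.8
print(round(total, 1), severity_band(total))  # 17.8 moderate behavioral neglect
print(anosognosia_score(total, 8.0))          # positive value -> patient underestimates difficulties
```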

Description of tasks:
The clinician observes the patient performing self-care activities and provides a score for each of the 10 items according to observations of neglect behaviors (Plummer et al., 2003).

What to consider before beginning:
The items of the CBS vary in their degree of difficulty (tasks listed from least difficult to most difficult, with item numbers indicated in parentheses): cleaning mouth after a meal (item 4); grooming (1) / auditory attention (7); eating (3); spatial orientation (9); gaze orientation (5); finding personal belongings (10); collides when moving (8); left limb knowledge (6); dressing (2). Note that two items (grooming, auditory attention) share the same level of difficulty (Azouvi et al., 2003).

Time:
The CBS takes approximately 30 minutes to administer.

Training requirements:
There are no formal training requirements for administration of the CBS.

Equipment:
The clinician requires the form, a pen and household equipment used to perform the tasks (e.g. razor, brush, toothbrush, clothing, mealtime utensils, serviette). The test can be administered in the patient’s own environment or in the rehabilitation setting.

Alternative forms of the assessment

There are no alternative forms of the CBS.

Client suitability

Can be used with:

  • Patients with stroke and hemispatial neglect. While the authors specify use of the CBS with patients with right hemispatial neglect, it may be modified for use with individuals with left hemispatial neglect (Plummer et al., 2003).

Should not be used with:

  • The CBS requires patients to perform upper limb and lower limb movements in various testing positions for approximately 30 minutes (Menon & Korner-Bitensky, 2004). While scoring can be adjusted for patients who are not able to perform all tasks, the clinician must consider whether these difficulties are due to neglect or other neurological deficits such as apraxia (Azouvi et al., 1996; Menon & Korner-Bitensky, 2004).

Languages of the measure

English, French, Portuguese

Summary

What does the tool measure? The CBS measures unilateral behavioural neglect.
What types of clients can the tool be used for? The CBS is designed for use with individuals with stroke who have hemispatial neglect.
Is this a screening or assessment tool? Assessment.
Time to administer The CBS takes approximately 30 minutes to administer.
Versions There is one version of the CBS checklist for use by clinicians. There is also a patient-administered questionnaire that can be used to measure self-awareness of neglect (anosognosia).
Languages English, French, Portuguese
Measurement Properties
Reliability Internal consistency:
Five studies have examined the internal consistency of the CBS and have reported adequate to excellent internal consistency, using Spearman rank correlations.
Test-retest:
No studies have reported on the test-retest reliability of the CBS.
Intra-rater:
No studies have reported on the intra-rater reliability of the CBS.
Inter-rater:
One study examined the inter-rater reliability of the CBS and reported adequate to excellent inter-rater reliability, using kappa and correlation coefficients.
Validity Internal validity:
One study has examined the internal validity of the CBS and found unidimensionality of items by Rasch analysis.
Content:
No studies have reported on the content validity of the CBS.
Criterion:
Concurrent:
– Five studies have examined the concurrent validity of the CBS with other neglect tasks in patients with right hemisphere stroke and reported excellent correlations with Albert’s Test, Behavioral Inattention Test subtests, the Bells test, a reading task and a writing task. Adequate to excellent correlations were reported with Ogden’s scene drawing task, overlapping figures and a daisy drawing task. Poor to adequate correlations were reported with Bisiach et al.’s (1986) scale of awareness of visual and motor neglect.
– One study has examined the concurrent validity of the CBS with other neglect tasks in patients with left hemisphere stroke and reported adequate correlations with the Bells test, but no significant correlation with a line bisection test.
Predictive:
No studies have reported on the predictive validity of the CBS.
Construct:
One study conducted a factor analysis and revealed one underlying factor.
Convergent/Discriminant:
Two studies have examined the convergent validity of the CBS and reported an excellent correlation with the Barthel Index and an adequate correlation with the Functional Independence Measure and the Postural Assessment for Stroke Scale, using Spearman’s rank correlation coefficient.
Known Groups:
Two studies have examined the known groups validity of the CBS and reported that the CBS is able to distinguish between patients with/without neglect and patients with/without visual field deficits. The CBS does not differentiate between patients with/without depression.
Floor/Ceiling Effects

One study reported adequate floor/ceiling effects.

Does the tool detect change in patients? Two studies have reported that the CBS can detect change in neglect.
Acceptability Not reported
Feasibility The CBS is straightforward and easy to administer. While no formal training is required, administration of the CBS requires clinicians to possess a sound understanding of neglect and its behavioral manifestations. There is no manual to aid administration or scoring.
How to obtain the tool? The CBS can be accessed in the article by Azouvi et al. (1996).

Psychometric Properties

Overview

A literature search was conducted to identify all relevant publications on the psychometric properties of the Catherine Bergego Scale (CBS). Eight articles were reviewed.

Floor/Ceiling Effects

Azouvi et al. (2003) reported that 3.6% of patients with right hemisphere stroke achieved a total CBS score of 0 (no neglect), indicating an adequate floor/ceiling effect.

Reliability

Internal consistency:
Bergego et al. (1995) examined the internal consistency of the CBS with 18 patients with right hemisphere stroke and reported adequate to excellent correlations between the total score and all items (rho range = 0.48 – 0.87, p<0.05).

Azouvi et al. (1996) examined the internal consistency of the CBS with 50 patients with right hemisphere stroke, using Spearman rank correlation coefficients. An adequate correlation was found between the CBS total score and the “mouth cleaning” item (rho=0.58, p<0.0001). An excellent correlation was found between the total score and all other items (rho range from 0.69 – 0.88, p<0.0001). An adequate correlation was found between the therapist’s score and the patient’s self-evaluation (rho = 0.52, p<0.001). There was an excellent correlation between the therapist’s score and the anosognosia score (i.e. therapist’s total score – patient’s self-evaluation score) (rho = 0.75, p<0.0001).

Azouvi et al. (2002) examined the anosognosia component of the CBS in 69 patients with subacute right hemisphere stroke. Patients’ self-assessment score was significantly lower than the examiner’s score (p<0.0001). There was an excellent correlation between the therapist’s score (neglect severity) and the anosognosia score (r=0.82, p<0.0001).

Azouvi et al. (2003) examined the internal consistency of the CBS with 83 patients with right hemisphere stroke and reported adequate to excellent correlations between items (correlation coefficient range = 0.48 – 0.73, p<0.0001). Patients’ self-assessment scores were significantly lower than examiners’ total CBS scores (p<0.0001). An excellent correlation was found between the therapist’s score (neglect severity) and the anosognosia score (r=0.79, p<0.0001). Patients with moderate or severe neglect achieved high anosognosia scores, whereas patients with mild or no neglect achieved negative scores, indicating higher self-rating of severity than scores attributed by the therapist.

Luukkainen-Markkula et al. (2011) examined the internal consistency of the CBS using Spearman correlations. There were adequate to excellent correlations between the CBS total score and all item scores, except for the “eating” item (grooming: r=0.64, p<0.05; mouth cleaning: r=0.73, p<0.05; gaze orientation: r=0.80, p<0.01; knowledge of left limbs: r=0.61, p<0.05; auditory attention: r=0.89, p<0.01; collisions when moving: r=0.89, p<0.01; spatial orientation: r=0.89, p<0.01; dressing: r=0.51, p<0.05; finding personal belongings: r=-0.66, p<0.05). Adequate to excellent correlations were also reported between many CBS items (r range from 0.51 to 0.94, p<0.05 to p<0.01).

Azouvi et al. (2003) examined the reliability of the CBS. The item reliability index was 0.93, resulting in 3.5 strata of significantly different difficulty (p<0.05). Although most items vary in their degree of difficulty, 2 items (grooming, auditory attention), share the same level of difficulty. The person reliability index was 0.88, resulting in 2.7 statistically different levels of ability (p<0.05).

Azouvi et al. (2003) examined the internal structure of the CBS by Rasch analysis. Item 2 (dressing) obtained an outlier outfit value (mnsq = 0.58), indicating potential redundancy of this item due to too little variance. Overall mean fit scores were very close to 1.00, supporting unidimensionality. This is further supported by high positive-point biserial correlation coefficients (i.e. the extent to which the items correlate with the linear measure) between each item score, and cumulative scores obtained across the whole sample.

Test-retest:
No studies have reported on the test-retest reliability of the CBS.

Intra-rater:
No studies have reported on the intra-rater reliability of the CBS.

Inter-rater:
Bergego et al. (1995) examined the inter-rater reliability of the CBS among 18 patients and found adequate to excellent inter-rater reliability for the 10 items (kappa coefficient range = 0.59 – 0.99). Correlation between the two examiners’ total scores, measured by Spearman rank correlation coefficient, was excellent (rho = 0.96, p<0.0001).

Validity

Content:
No studies have reported on the content validity of the CBS.

Criterion:
Concurrent:
Bergego et al. (1995) compared the CBS total score with other conventional neglect tests in 18 patients with right hemisphere stroke, using Spearman rank correlation coefficients. Excellent correlations were reported between the CBS and the Bells test, Ogden’s scene drawing task, writing task, reading task and Albert’s Test (rho = 0.72, 0.72, 0.72, 0.70 and 0.67 respectively, p<0.01). There was no statistically significant correlation with a flower drawing task.

Azouvi et al. (1996) compared the CBS total score with other conventional neglect tests among 50 patients with right hemisphere stroke, using Spearman rank correlation coefficients. Excellent correlations were found between the CBS and the Bells test (rho = 0.74, p<0.0001), Albert’s Test (line cancellation) (rho = 0.73, p<0.0001) and a reading task (rho = 0.61, p<0.0001). Adequate correlations were found with Ogden’s scene drawing task (rho = 0.56, p<0.001) and a daisy drawing task (rho = 0.50, p<0.001). Adequate correlations were found between the CBS anosognosia score and conventional tests of neglect (rho range = 0.45 – 0.58, p<0.01). A weak correlation was found between the CBS anosognosia score and Bisiach et al.’s (1986) scale of patients’ awareness of neurological deficit (rho = 0.31, p<0.05).

Azouvi et al. (2002) compared the CBS with a battery of conventional pen and paper neglect tasks among 69 patients with right hemisphere stroke. Conventional neglect tasks included the Bells test, a figure-copying task, clock drawing, line bisection tasks (5cm, 20cm), overlapping figures test, reading task and a writing task. There were adequate to excellent correlations between the CBS and all conventional neglect tasks (r range = 0.49 – 0.77, p<0.0001), except for the short (5cm) line bisection task. The strongest correlation was seen with the Bells test (total number of omissions). Further stepwise multiple regression analysis revealed that four conventional neglect tasks – the total number of omissions in the Bells test, starting point in the Bells test, figure copying task and the clock drawing task – significantly predicted behavioral neglect. Adequate to excellent correlations were found between the CBS anosognosia score and all conventional neglect tasks (r range = 0.47 – 0.70, p<0.0001), except the short (5cm) line bisection task. Comparison with Bisiach et al.’s (1986) scale of visual and motor anosognosia found an adequate correlation with visual anosognosia (r=0.37, p<0.05), but a poor correlation with motor anosognosia (r=0.29, p<0.05).

Azouvi et al. (2003) compared the CBS total score with three conventional neglect tasks in 83 patients with right hemisphere stroke. Excellent correlations were found with the Bells test (number of omissions) (r=0.76, p<0.0001) and a figure-copying task (r=0.70, p<0.0001), and an adequate correlation was found with a short reading task (number of omissions) (r=0.54, p<0.0001). Comparison of the CBS anosognosia scores with conventional neglect tasks revealed adequate to excellent correlations (r range = 0.43 – 0.72, p<0.01).

Azouvi et al. (2006) reported on unpublished data from 54 patients with left hemisphere stroke. There was an adequate correlation between the CBS score and the Bells test – total omissions (r=0.41, p<0.01) and right minus left omissions (r=0.34, p<0.01). There was no significant correlation with a line bisection test.

Luukkainen-Markkula et al. (2011) compared the CBS with the conventional subtests of the Behavioral Inattention Test (BIT C) among 17 patients with right hemisphere stroke and hemispatial neglect. Only the BIT C line bisection subtest showed statistically significant correlations with the CBS, demonstrating excellent correlations with the grooming (r=-0.64, p≤0.01) and gaze orientation (r=-0.61, p≤0.01) items, and adequate correlations with the auditory attention (r=-0.56, p≤0.05) and spatial orientation (r=-0.54, p≤0.05) items and the CBS total score (r=-0.54, p≤0.05). Conversely, the CBS eating item was the only item to demonstrate a statistically significant correlation with the BIT C, revealing excellent correlations with the line crossing (r=-0.95, p≤0.01), letter cancellation (r=0.83, p≤0.01) and star cancellation (r=-0.83, p≤0.01) subtests and the BIT C total score (r=-0.83, p≤0.01).

Despite significant correlations between the CBS and traditional visual neglect tests, individual patients can demonstrate dissociations between visual and behavioral neglect (Bergego et al., 1995; Azouvi et al., 2002, 2006; Luukkainen-Markkula et al., 2011).

Predictive:
No studies have reported on the predictive ability of the CBS.

Construct:
Azouvi et al. (2003) conducted a conventional factor analysis on raw scores, revealing a single underlying factor that explained 65.8% of total variance. The factor matrix showed that all 10 items obtained a high loading on this factor (range = 0.77 – 0.84). Further, principal component analysis on standardized residuals after the linear method was extracted showed that no strong factors remained hidden in the residuals between observed and expected scores.

Convergent/Discriminant:
Azouvi et al. (1996) examined the ability of the CBS to measure aspects of daily functioning related to neglect by comparing CBS and Barthel Index scores of 50 patients with right hemisphere stroke, using Spearman’s rank correlation coefficient. An excellent correlation was found between CBS total score and the Barthel Index (rho = -0.63, p<0.0001).

Azouvi et al. (2006) reported on unpublished data from 54 patients with left hemisphere stroke. There was an adequate correlation between the CBS score and the Functional Independence Measure (r=-0.48, p<0.01), and the Postural Assessment for Stroke Scale (r=-0.55, p<0.001).

Known Group:
Azouvi et al. (1996) compared behavioral neglect in patients with neglect on conventional tests and patients with no neglect, using Mann Whitney tests. There was a significant difference in total CBS scores between the two groups (p<0.0001). Comparison of anosognosia revealed a significant difference between groups (p<0.001), whereby patients with no neglect tended to overestimate their difficulties (compared to the difficulties reported by the therapist) and patients with neglect tended to underestimate their neglect difficulties.

Azouvi et al. (1996) reported no significant difference in CBS anosognosia scores between patients with depression and patients without depression.

Luukkainen-Markkula et al. (2011) compared behavioral neglect in patients with right hemisphere stroke and hemispatial neglect with visual field deficits (n=8) and patients with intact visual fields (n=9), using Mann Whitney tests. Patients with visual field deficits demonstrated significantly more severe behavioral neglect (i.e. higher CBS total score) than patients with intact visual fields (p=0.03).

Responsiveness

Samuel et al. (2000) reported that the CBS is responsive to clinical change following visuo-spatio-motor cueing intervention in patients with stroke and unilateral spatial neglect.

Sensitivity:
Azouvi et al. (1996) examined the sensitivity of the CBS by determining the ability of each CBS item to detect the presence of neglect (i.e. a score of 1 or more) in a group of 50 patients with right hemisphere stroke. The most sensitive items at detecting neglect were items 2 (dressing), 6 (knowledge of left limbs) and 8 (collisions while moving), which all demonstrated neglect in more than 50% of patients. Sensitivity of other items in descending order was as follows: item 1 (grooming), 3 (eating), 10 (personal belongings), 5 (gaze orientation), 7 (auditory attention), 9 (spatial orientation) and 4 (mouth cleaning). The CBS was more sensitive to neglect than conventional neglect tasks (Bells test, reading task, line cancellation test), which were found to detect neglect in 42 – 49% of patients.

Azouvi et al. (2002) examined the sensitivity of the CBS among 69 patients with right hemisphere neglect. Similar to their earlier study (Azouvi et al., 1996), the most sensitive items were item 6 (knowledge of left limbs), item 8 (collisions) and item 2 (dressing). The authors compared the sensitivity of the CBS with a battery of conventional neglect tasks that comprised the Bells test, a figure copying task, clock drawing, line bisection task, overlapping figures test, reading task and a writing task. Stepwise multiple regression analysis revealed that four conventional neglect tasks – the total number of omissions in the Bells test, starting point in the Bells test, figure copying task and the clock drawing task – significantly predicted behavioral neglect. These tasks revealed neglect in 71.84% of patients, but did not indicate neglect in 16.38% of patients who demonstrated neglect on the CBS. The highest incidence of neglect in conventional tests was 50%, whereas neglect was seen on at least 1 of the 10 CBS items in 76% of patients. This indicates that the CBS was more sensitive to neglect, although the difference in sensitivity between the CBS and the battery of conventional neglect tasks was not statistically significant.

Azouvi et al. (2003) compared the sensitivity of the CBS to three conventional neglect tasks in 83 patients with right hemisphere stroke. Incidence of neglect on individual conventional tasks was 44.3% on a figure copying task, 53.8% on the Bells Test and 64.2% on a reading task; 65.4% of participants showed neglect on at least one conventional neglect task and 32.7% of participants showed neglect on all three tasks. By comparison, 96.4% of participants demonstrated neglect based on total CBS score. Sensitivity of individual CBS items ranged from 49.5% (auditory attention, spatial orientation) to 79.5% (dressing).

Azouvi et al. (2006) reported on unpublished data from 54 patients with left hemisphere stroke, which showed that 77.3% of patients showed neglect on at least one item of the CBS. Only 5.4% of patients showed clinically significant neglect, compared to 36% of patients with right hemisphere neglect reported in an earlier study by Azouvi et al. (2002). The three items “neglect of right limbs”, “dressing” and “mouth cleaning after eating” showed a higher incidence of neglect among patients with left hemisphere stroke, whereas the item “collisions while moving” obtained lower neglect scores. This is in contrast to earlier studies of patients with right hemisphere neglect (Azouvi et al., 1996, 2002).

References

  • Azouvi, P., Bartolomeo, P., Beis, J-M., Perennou, D., Pradat-Diehl, P., & Rousseaux, M. (2006). A battery of tests for the quantitative assessment of unilateral neglect. Restorative Neurology and Neuroscience, 24, 273-85.
  • Azouvi, P., Marchal, F., Samuel, C., Morin, L., Renard, C., Louis-Dreyfus, A., Jokie, C., Wiart, L., Pradat-Diehl, P., Deloche, G., & Bergego, C. (1996). Functional consequences and awareness of unilateral neglect: Study of an evaluation scale. Neuropsychological Rehabilitation, 6(2), 133-150.
  • Azouvi, P., Olivier, S., de Montety, G., Samuel, C., Louis-Dreyfus, A., & Tesio, L. (2003). Behavioral assessment of unilateral neglect: Study of the psychometric properties of the Catherine Bergego Scale. Archives of Physical Medicine and Rehabilitation, 84, 51-7.
  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., Bernati, T., Bartolomeo, P., Beis, J-M., Chokron, S., Leclercq, M., Marchal, F., Martin, Y., de Montety, G., Olivier, S., Perennou, D., Pradat-Diehl, P., Prairial, C., Rode, G., Siéroff, E., Wiart, L., Rousseaux, M. for the French Collaborative Study Group on Assessment of Unilateral Neglect (GEREN/GRECO). (2002). Sensitivity of clinical and behavioral tests of spatial neglect after right hemisphere stroke. Journal of Neurology, Neurosurgery and Psychiatry, 73, 160-6.
  • Bergego, C., Azouvi, P., Samuel, C., Marchal, F., Louis-Dreyfus, A., Jokie, C., Morin, L., Renard, C., Pradat-Diehl, P., & Deloche, G. (1995). Validation d’une échelle d’évaluation fonctionnelle de l’héminegligence dans la vie quotidienne: l’échelle CB. Annales de Readaptation et de Medecine Physique, 38, 183-9.
  • Luukkainen-Markkula, R., Tarkka, I.M., Pitkänen, K., Sivenius, J., & Hämäläinen, H. (2011). Comparison of the Behavioral Inattention Test and the Catherine Bergego Scale in assessment of hemispatial neglect. Neuropsychological Rehabilitation, 21(1), 103-116.
  • Menon, A. & Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Plummer, P., Morris, M. E., & Dunai, J. (2003). Assessment of unilateral neglect. Physical Therapy, 83, 732-40.
  • Samuel, C., Louis-Dreyfus, A., Kaschel, R., Makiela, E., Troubat, M., Anselmi, N., Cannizzo, V., & Azouvi, P. (2000). Rehabilitation of very severe unilateral neglect by visuo-spatio-motor cueing: Two single case studies. Neuropsychological Rehabilitation, 10(4), 385-99.

See the measure

How to obtain the CBS

The Catherine Bergego Scale (CBS) can be viewed in the journal article:
Azouvi, P., Marchal, F., Samuel, C., Morin, L., Renard, C., Louis-Dreyfus, A., Jokie, C., Wiart, L., Pradat-Diehl, P., Deloche, G., & Bergego, C. (1996). Functional consequences and awareness of unilateral neglect: Study of an evaluation scale. Neuropsychological Rehabilitation, 6(2), 133-150.


Clock Drawing Test (CDT)

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose


The CDT is used to quickly assess visuospatial and praxis abilities, and may determine the presence of both attention and executive dysfunctions (Adunsky, Fleissig, Levenkrohn, Arad, & Nov, 2002; Suhr, Grace, Allen, Nadler, & McKenna, 1998; McDowell, & Newell, 1996).

The CDT may be used in addition to other quick screening tests such as the Mini-Mental State Examination (MMSE), and the Functional Independence Measure (FIM).

In-Depth Review

Purpose of the measure

The CDT is used to quickly assess visuospatial and praxis abilities, and may determine the presence of both attention and executive dysfunctions (Adunsky, Fleissig, Levenkrohn, Arad, & Nov, 2002; Suhr, Grace, Allen, Nadler, & McKenna, 1998; McDowell & Newell, 1996).

The CDT may be used in addition to other quick screening tests such as the Mini-Mental State Examination (MMSE), and the Functional Independence Measure (FIM).

Available versions

The CDT is a simple task completion test in its most basic form. There are several variations to the CDT:

Verbal command:

  • Free drawn clock:
    The individual is given a blank sheet of paper and asked first to draw the face of a clock, place the numbers on the clock, and then draw the hands to indicate a given time. To successfully complete this task, the patient must first draw the contour of the clock, then place the numbers 1 through 12 inside, and finally indicate the correct time by drawing in the hands of the clock.
  • Pre-drawn clock:
    Alternatively, some clinicians prefer to provide the individual with a pre-drawn circle and the patient is only required to place the numbers and the hands on the face of the clock. They argue that the patient’s ability to fill in the numbers may be adversely affected if the contour is poorly drawn. In this task, if an individual draws a completely normal clock, it is a fast indication that a number of functions are intact. However, a markedly abnormal clock is an important indication that the individual may have a cognitive deficit, warranting further investigation.

Regardless of which type is used (free drawn or pre-drawn), the verbal command CDT can simultaneously assess a patient’s language function (verbal comprehension); memory function (recall of a visual engram, short-term storage, and recall of time setting instructions); and executive function. The verbal command variation of the CDT is highly sensitive for temporal lobe dysfunction (due to its heavy involvement in both memory and language processes) and frontal lobe dysfunction (due to its mediation of executive planning) (Shah, 2002).

Copy command:

The individual is given a fully drawn clock with a certain time pre-marked and is asked to replicate the drawing as closely as possible. The successful completion of the copy command requires less use of language and memory functions but requires greater reliance on visuospatial and perceptual processes.

Copy command clock

Clock reading test:
A modified version of the copy command CDT simply asks the patient to read aloud the indicated time on a clock drawn by the examiner. The copy command clock-drawing and clock reading tests are good for assessing parietal lobe lesions such as those that may result in hemineglect. It is important to do both the verbal command and the copy command tests for every patient as a patient with a temporal lobe lesion may copy a pre-drawn clock adequately, whereas their clock drawn to verbal command may show poor number spacing and incorrect time setting. Conversely, a patient with a parietal lobe lesion may draw an adequate clock to verbal command, while their clock drawing with the copy command may show obvious signs of neglect.

Clock reading clock

Time-Setting Instructions:

The most common setting chosen by clinicians is “3 O’clock” (Freedman, Leach, Kaplan, Winocur, Shulman, & Delis, 1994). Although this setting adequately assesses comprehension and motor execution, it does not indicate the presence of any left neglect the patient may have because it does not require the left half of the clock to be used at all. The time setting “10 after 11” is an ideal setting (Kaplan, 1988). It forces the patient to attend to the whole clock and requires the recoding of the command “10” to the number “2” on the clock. It also has the added advantage of uncovering any stimulus-bound errors that the patient may make. For example, the presence of the number “10” on the clock may trap some patients and prevent the recoding of the command “10” into the number “2.” Instead of drawing the minute hand towards the number “2” on the clock to indicate “10 after,” patients prone to stimulus-bound errors will fixate and draw the minute hand toward the number “10” on the clock.

Features of the measure

Scoring:

There are a number of different ways to score the CDT. In general, the scores are used to evaluate any errors or distortions such as neglecting to include numbers, putting numbers in the wrong place, or having incorrect spacing (McDowell & Newell, 1996). Scoring systems may be simple or complex, quantitative or qualitative in nature. As a quick preliminary screening tool to simply detect the presence or absence of cognitive impairment, you may wish to use a simple quantitative method (Lorentz et al., 2002). However, if a more complex assessment is required, a qualitative scoring system would be more telling.

Different scoring methods have been found to be better suited for different subject groups (Richardson & Glass, 2002; Heinrik, Solomesh, & Berkman, 2004). In patients with stroke, no single standardized method of scoring exists. Suhr, Grace, Allen, Nadler, and McKenna (1998) examined the utility of the CDT in localizing lesions in 76 patients with stroke and 71 controls. Six scoring systems were used to assess clock drawings (Freedman et al., 1994; Ishiai, Sugishita, Ichikawa, Gono, & Watabiki, 1993; Mendez, Ala, & Underwood, 1992; Rouleau, Salmon, Butters, Kennedy, & McGuire, 1992; Sunderland et al., 1989; Tuokko, Hadjistavropoulos, Miller, & Beattie, 1992; Watson, Arfken, & Birge, 1993; Wolf-Klein et al., 1989). Significant differences were found between controls and patients with stroke on all scoring systems for both quantitative and qualitative features of the CDT. However, quantitative indices were not helpful in differentiating between various stroke groups (left versus right versus bilateral stroke; cortical versus subcortical stroke; anterior versus posterior stroke). Qualitative features were helpful in lateralizing lesion site and differentiating subcortical from cortical groups.

A psychometric study in patients with stroke by South, Greve, Bianchini, and Adams (2001) compared three scoring systems: the Rouleau rating scale (1992); the Freedman scoring system (1994); and the Libon revised system (1993). These scoring systems were found to be reliable in patients with stroke (see the Reliability section below for details of this study).

Subscales:

None typically reported.

Equipment:

Only paper and a pencil are required. Depending on the method chosen, you may need to prepare a circle (about 10 cm in diameter) on the paper for the patient.

Training:

The CDT can be administered by individuals with little or no training in cognitive assessment. Scanlan, Brush, Quijano, and Borson (2002) found that a simple binary rating of clock drawings (normal or abnormal) by untrained raters was surprisingly effective in classifying subjects as having dementia or not. In this study, a common mistake of untrained scorers was failure to recognize incorrect spacing of numbers on the clock face as abnormal. Directing raters’ attention to this type of error should improve concordance between untrained and expert raters.

Time:

All variations of the CDT should take approximately 1-2 minutes to complete (Ruchinskas & Curyto, 2003).

Alternative forms of the CDT

The Clock Drawing Test-Modified and Integrated Approach (CDT-MIA) is a 4-step, 20-item instrument, with a maximum score of 33. The CDT-MIA emphasizes differential scoring of contour, numbers, hands, and center. It integrates 3 existing CDT’s:

  • Freedman et al’s free-drawn clock (1994) on some item definitions
  • Scoring techniques adapted from Paganini-Hill, Clark, Henderson, & Birge (2001)
  • Some items borrowed from Royall, Cordes, & Polk (1998) executive CLOX

The CDT-MIA was found to be reliable and valid in individuals with dementia, however this measure has not been validated in the stroke population (Heinik et al., 2004).

Client suitability

Can be used as a screening instrument with:

Virtually any patient population (Wagner, Nayak, & Fink, 1995). The test appears to be differentially sensitive to some types of disease processes. Particularly, it has proven to be clinically useful in differentiating among normal elderly, patients with neurodegenerative or vascular diseases, and those with psychiatric disorders, such as depression and schizophrenia (Dastoor, Schwartz, & Kurzman, 1991; Heinik, Vainer-Benaiah, Lahav, & Drummer, 1997; Lee & Lawlor, 1995; Shulman, Gold, & Cohen, 1993; Spreen & Strauss, 1991; Tracy, De Leon, Doonan, Musciente, Ballas, & Josiassen, 1996; Wagner et al., 1995; Wolf-Klein, Silverstone, Levy, & Brod, 1989).

Can be used with:

  • Patients with stroke. Because the CDT requires a nonverbal response, it may be administered to those with speech difficulties but who have sufficient comprehension to understand the requirement of the task.

Should not be used in:

  • Patients who cannot understand spoken or written instructions
  • Patients who cannot write

As with many other neuropsychological screening measures, the CDT is affected by age, education, conditions such as visual neglect and hemiparesis, and other factors such as the presence of depression (Ruchinskas & Curyto, 2003; Lorentz, Scanlan, & Borson, 2002). The degree to which these factors affect one’s score depends largely on the scoring method applied (McDowell & Newell, 1996). Moreover, the CDT focuses on right hemisphere function, so it is important to use this test in conjunction with other neuropsychological tests (McDowell & Newell, 1996).

In what languages is the measure available?

The CDT can be conducted in any language. Borson et al. (1999) found that language spoken did not have any direct effect on CDT test performance.

Summary

What does the tool measure? Visuospatial and praxis abilities, and may determine the presence of both attention and executive dysfunctions.
What types of clients can the tool be used for? Virtually any patient population. It has proven to be clinically useful in differentiating among normal elderly, patients with neurodegenerative or vascular diseases, and those with psychiatric disorders, such as depression and schizophrenia.
Is this a screening or assessment tool? Screening
Time to administer All variations of the CDT should take approximately 1-2 minutes to complete.
Versions
  • Verbal command: Free drawn clock; Pre-drawn clock;
  • Copy command: Copy command clock; Clock reading test
  • Time-setting: “10 after 11”
  • The Clock Drawing Test Modified and Integrated Approach (CDT-MIA)
Languages The CDT can be conducted in any language.
Measurement Properties
Reliability Test-retest:
Out of four studies examining test-retest reliability, three reported excellent test-retest reliability and one reported adequate test-retest reliability.
Inter-rater:
Out of seven studies examining inter-rater reliability, six reported excellent inter-rater reliability and one reported adequate (for examiner clocks) to excellent (for free-drawn and pre-drawn clocks) inter-rater reliability.
Validity Criterion:
The CDT predicted lower functional ability and increased need for supervision at hospital discharge; poor physical ability and longer length of stay in geriatric rehabilitation; and activities of daily living at maximal recovery.
Construct:
The CDT correlated adequately with the Mini-Mental State Examination and the Functional Independence Measure.
Known groups:
Significant differences between Alzheimer’s patients and controls detected by CDT.
Does the tool detect change in patients? Not applicable
Acceptability The CDT is short and simple. It is a nonverbal task and may be less threatening to patients than responding to a series of questions.
Feasibility The CDT is inexpensive and highly portable. It can be administered in situations in which longer tests would be impossible or inconvenient. Even the most complex administration and scoring system requires approximately 2 minutes. It can be administered by individuals with minimal training in cognitive assessment.
How to obtain the tool? A pre-drawn circle can be downloaded by clicking on this link: pre-drawn circle

Psychometric Properties

Overview

Until recently, data on the psychometric properties of the CDT were limited. While there are many possible ways to administer and score the CDT, the psychometric properties of all the various systems seem consistent and all forms correlate strongly with other cognitive measures (Scanlan et al., 2002; Ruchinskas & Curyto, 2003; McDowell & Newell, 1996). Further, scoring of the CDT has been found to be both accurate and consistent in patients with stroke (South et al., 2001).

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the more commonly applied scoring methods of the CDT. We then selected to review articles from high impact journals, and from a variety of authors.

Reliability

Test-retest:

Test-retest reliability of the CDT, assessed using Spearman rank order correlations, has been reported by several investigators using a variety of scoring systems:

  • Manos and Wu (1994) reported an “excellent” 2-day test-retest reliability of 0.87 for medical patients and 0.94 for surgical patients.
  • Tuokko et al. (1992) reported an “adequate” test-retest reliability of 0.70 at 4 days.
  • Mendez et al. (1992) reported “excellent” coefficients of 0.78 and 0.76 at 3 and 6 months, respectively.
  • Freedman et al. (1994) reported test-retest reliability as “very low”. However, when the “10 after 11” time setting was used with the examiner clock, which is known to be a more sensitive setting for detecting cognitive dysfunction, test-retest reliability was found to be “excellent” (0.94).

Inter-rater:

Inter-rater reliability of the CDT, as indicated by Spearman rank order correlations (not the preferred method of analyses for assessing inter-rater reliability but one used in earlier measurement research), has also been reported by several investigators:

  • Sunderland et al. (1989) found “excellent” coefficients ranging from 0.86 to 0.97 and found no difference between clinician and non-clinician raters (0.84 and 0.86, respectively).
  • Rouleau et al. (1992) found “excellent” inter-rater reliability, with coefficients ranging from 0.92 to 0.97.
  • Mendez et al. (1992) reported “excellent” inter-rater reliability of 0.94.
  • Tuokko et al. (1992) reported high coefficients ranging from 0.94 to 0.97 across three annual assessments.
  • The modified Shulman scale (Shulman, Gold, Cohen, & Zucchero, 1993) also has “excellent” inter-rater reliability (0.94 at baseline, 0.97 at 6 months, and 0.97 at 12 months).
  • Manos and Wu (1994) obtained “excellent” inter-rater reliability coefficients ranging from 0.88 to 0.96.
  • Freedman et al. (1994) reported coefficients ranging from 0.79 to 0.99 on the free-drawn clocks, 0.84 to 0.85 using the pre-drawn contours, and 0.63 to 0.74 for the examiner clocks, demonstrating adequate to excellent inter-rater reliability.

South et al. (2001) compared the psychometrics of 3 different scoring methods of the CDT (Libon revised system; Rouleau rating scale; and Freedman scoring system) in a sample of 20 patients with stroke. Inter-rater and intra-rater reliability were measured using the intraclass correlation coefficient (ICC). Raters used comparable criteria for each score, demonstrating “excellent” inter-rater reliability, and used similar scoring criteria throughout, demonstrating “excellent” intra-rater reliability. South et al. (2001) concluded that while the Libon scoring system demonstrated a range of reliabilities across different domains, the Rouleau and Freedman systems were in the excellent range.

Validity

In a review, Shulman (2000) reported that most studies achieved sensitivities and specificities of approximately 85% and concluded that the CDT, in conjunction with other widely used tests such as the Mini-Mental State Examination (MMSE), could provide a significant advance in the early detection of dementia. In contrast, Powlishta et al. (2002) concluded from their study that the CDT did not appear to be a useful screening tool for detecting very mild dementia. Other authors have concluded that the CDT should not be used alone as a dementia screening test because of its overall inadequate performance (Borson & Brush, 2002; Storey et al., 2001). However, most of the previous studies were based on relatively small sample sizes or were undertaken in a clinical setting, and their results may not be applicable to a larger community population.

Nishiwaki et al. (2004) studied the validity of the CDT in comparison to the MMSE in a large general elderly population (aged 75 years or older). The sensitivity and specificity of the CDT for detecting moderate-to-severe cognitive impairment (MMSE score ≤ 17) were 77% and 87%, respectively, for nurse administration, and 40% and 91%, respectively, for postal administration. The authors concluded that the CDT may have value as a brief face-to-face screening tool for moderate/severe cognitive impairment in an older community population but is relatively poor at detecting milder cognitive impairment.

Few studies have examined the validity of the CDT specifically in patients with stroke. Adunsky et al. (2002) compared the CDT with the MMSE and the cognitive Functional Independence Measure (FIM) (cognitive tests used for the evaluation of functional outcomes at discharge in elderly patients with stroke). The tests were administered to 151 patients admitted for inpatient rehabilitation following an acute stroke. Pearson correlations between the three cognitive tests yielded r-values ranging from 0.51 to 0.59. Adunsky et al. (2002) concluded that the three tests share a reasonable degree of resemblance to each other, supporting “adequate” concurrent validity.

Bailey, Riddoch, and Crome (2000) evaluated a test battery for hemineglect in elderly patients with stroke and determined that the CDT had questionable validity in the assessment of representational neglect. Further, consistent with previous findings (Ishiai et al., 1993; Kaplan et al., 1991), the utility of the CDT as a screening measure for neglect was not supported by these results. Reasons include the subjectivity in scoring, and questionable validity in that the task may also reflect cognitive impairment (Friedman, 1991), constructional apraxia, or impaired planning ability (Kinsella, Packer, Ng, Olver, & Stark, 1995).

Responsiveness

Not applicable.

References

  • Adunsky, A., Fleissig, Y., Levenkrohn, S., Arad, M., Nov, S. (2002). Clock drawing task, mini-mental state examination and cognitive-functional independence measure: relation to functional outcome of stroke patients. Arch Gerontol Geriatr, 35(2), 153-160.
  • Bailey, M. J., Riddoch, J., Crome, P. (2000). Evaluation of a test battery for hemineglect in elderly stroke patients for use by therapists in clinical practice. NeuroRehabilitation, 14(3), 139-150.
  • Borson, S., Brush, M., Gil, E., Scanlan, J., Vitaliano, P., Chen, J., Cashman, J., Sta Maria, M. M., Barnhart, R., Roques, J. (1999). The Clock Drawing Test: Utility for dementia detection in multiethnic elders. J Gerontol A Biol Sci Med Sci, 54, M534-540.
  • Dastoor, D. P., Schwartz, G., Kurzman, D. (1991). Clock-drawing: An assessment technique in dementia. Journal of Clinical and Experimental Gerontology, 13, 69-85.
  • Freedman, M., Leach, L., Kaplan, E., Winocur, G., Shulman, K. I., Delis, D. C. (1994). Clock Drawing: A Neuropsychological Analysis (pp. 5). New York: Oxford University Press.
  • Friedman, P. J. (1991). Clock drawing in acute stroke. Age and Ageing, 20(2), 140-145.
  • Heinik, J., Vainer-Benaiah, Z., Lahav, D., Drummer, D. (1997). Clock drawing test in elderly schizophrenia patients. International Journal of Geriatric Psychiatry, 12, 653-655.
  • Heinik, J., Solomesh, I., Berkman, P. (2004). Correlation between the CAMCOG, the MMSE and three clock drawing tests in a specialized outpatient psychogeriatric service. Arch Gerontol Geriatr, 38, 77-84.
  • Heinik, J., Solomesh, I., Lin, R., Raikher, B., Goldray, D., Merdler, C., Kemelman, P. (2004). Clock drawing test-modified and integrated approach (CDT-MIA): Description and preliminary examination of its validity and reliability in dementia patients referred to a specialized psychogeriatric setting. J Geriatr Psychiatry Neurol, 17, 73-80.
  • Ishiai, S., Sugishita, M., Ichikawa, T., Gono, S., Watabiki, S. (1993). Clock drawing test and unilateral spatial neglect. Neurology, 43, 106-110.
  • Kaplan, E. (1988). A process approach to neuropsychological assessment. In: T. Bull & B. K. Bryant (Eds.), Clinical neuropsychology and brain function: Research, measurement, and practice (pp. 129-167). Washington, DC: American Psychological Association.
  • Kaplan, R. F., Verfaillie, M., Meadows, M., Caplan, L. R., Pessin, M. S., DeWitt, L. (1991). Changing attentional demands in left hemispatial neglect. Archives of Neurology, 48, 1263-1267.
  • Kinsella, G., Packer, S., Ng, K., Olver, J., Stark, R. (1995). Continuing issues in the assessment of neglect. Neuropsychological Rehabilitation, 5, 239-258.
  • Lee, H., Lawlor, B. A. (1995). State-dependent nature of the Clock Drawing Task in geriatric depression. Journal of the American Geriatrics Society, 43, 796-798.
  • Lorentz, W. J., Scanlan, J. M., Borson, S. (2002). Brief screening tests for dementia. Can J Psychiatry, 47, 723-733.
  • Manos, P. J., Wu, R. (1994). The Ten Point Clock Test: A quick screen and grading system for cognitive impairment in medical and surgical patients. International Journal of Psychiatry in Medicine, 24, 229-244.
  • McDowell, I., Newell, C. (1996). Measuring Health: A Guide to Rating Scales and Questionnaires (2nd ed.). New York: Oxford University Press.
  • Mendez, M. F., Ala, T., Underwood, K. L. (1992). Development of scoring criteria for the clock drawing task in Alzheimer’s disease. Journal of the American Geriatrics Society, 40, 1095-1099.
  • Nishiwaki, Y., Breeze, E., Smeeth, L., Bulpitt, C. J., Peters, R., Fletcher, A. E. (2004). Validity of the Clock-Drawing Test as a screening tool for cognitive impairment in the elderly. American Journal of Epidemiology, 160(8), 797-807.
  • Paganini-Hill, A., Clark, L. J., Henderson, V. W., Birge, S. J. (2001). Clock drawing: Analysis in a retirement community. J Am Geriatr Soc, 49, 941-947.
  • Powlishta, K. K., von Dras, D. D., Stanford, A., Carr, D. B., Tsering, C., Miller, J. P., Morris, J. C. (2002). The Clock Drawing Test is a poor screen for very mild dementia. Neurology, 59, 898-903.
  • Richardson, H. E., Glass, J. N. (2002). A comparison of scoring protocols on the clock drawing test in relation to ease of use, diagnostic group and correlations with mini-mental state examination. Journal of the American Geriatrics Society, 50, 169-173.
  • Rouleau, I., Salmon, D. P., Butters, N., Kennedy, C., McGuire, K. (1992). Quantitative and qualitative analyses of clock drawings in Alzheimer’s and Huntington’s disease. Brain and Cognition, 18, 70-87.
  • Royall, D. R., Cordes, J. A., Polk, M. (1998). CLOX: an executive clock drawing task. J Neurol Neurosurg Psychiatry, 64, 588-594.
  • Ruchinskas, R. A., Curyto, K. J. (2003). Cognitive screening in geriatric rehabilitation. Rehabil Psychol, 48, 14-22.
  • Scanlan, J. M., Brush, M., Quijano, C., Borson, S. (2002). Comparing clock tests for dementia screening: naïve judgments vs formal systems – what is optimal? International Journal of Geriatric Psychiatry, 17(1), 14-21.
  • Shah, J. (2001). Only time will tell: Clock drawing as an early indicator of neurological dysfunction. P&S Medical Review, 7(2), 30-34.
  • Shulman, K. I., Gold, D. P., Cohen, C. A., Zucchero, C. A. (1993). Clock-drawing and dementia in the community: A longitudinal study. International Journal of Geriatric Psychiatry, 8(6), 487-496.
  • Shulman, K. I. (2000). Clock-drawing: Is it the ideal cognitive screening test? International Journal of Geriatric Psychiatry, 15, 548-561.
  • Shulman, K., Shedletsky, R., Silver, I. (1986). The challenge of time: Clock-drawing and cognitive function in the elderly. International Journal of Geriatric Psychiatry, 1, 135-140.
  • South, M. B., Greve, K. W., Bianchini, K. J., Adams, D. (2001). Inter-rater reliability of three Clock Drawing Test scoring systems. Applied Neuropsychology, 8(3), 174-179.
  • Spreen, O., Strauss, E. A. (1991). Compendium of neuropsychological tests: Administration, norms, and commentary. New York: Oxford University Press.
  • Storey, J. E., Rowland, J. T., Basic, D., Conforti, D. A. (2001). A comparison of five clock scoring methods using ROC (receiver operating characteristic) curve analysis. Int J Geriatr Psychiatry, 16, 394-399.
  • Sunderland, T., Hill, J. L., Mellow, A. M., Lawlor, B. A., Gundersheimer, J., Newhouse, P. A., Grafman, J. H. (1989). Clock drawing in Alzheimer’s disease: a novel measure of dementia severity. J Am Geriatr Soc, 37(8), 725-729.
  • Suhr, J., Grace, J., Allen, J., Nadler, J., McKenna, M. (1998). Quantitative and Qualitative Performance of Stroke Versus Normal Elderly on Six Clock Drawing Systems. Archives of Clinical Neuropsychology, 13(6), 495-502.
  • Tracy, J. I., De Leon, J., Doonan, R., Musciente, J., Ballas, T., Josiassen, R. C. (1996). Clock drawing in schizophrenia. Psychological Reports, 79, 923-928.
  • Tuokko, H., Hadjistavropoulos, T., Miller, J. A., Beattie, B. L. (1992). The Clock Test, a sensitive measure to differentiate normal elderly from those with Alzheimer disease. Journal of the American Geriatrics Society, 40, 579-584.
  • Wagner, M. T., Nayak, M., Fink, C. (1995). Bedside screening of neurocognitive function. In: L. A. Cushman & M. J. Scherer (Eds.), Psychological assessment in medical rehabilitation: Measurement and instrumentation in psychology (pp. 145-198). Washington, DC: American Psychological Association.
  • Watson, Y. I., Arfken, C. L., Birge, S. J. (1993). Clock completion: An objective screening test for dementia. J Am Geriatr Soc, 41(11), 1235-1240.
  • Wolf-Klein, G. P., Silverstone, F. A., Levy, A. P., Brod, M. S. (1989). Screening for Alzheimer’s disease by clock drawing. Journal of the American Geriatrics Society, 37, 730-734.

See the measure

Click here to find a pre-drawn circle that can be used when administering the CDT.


Comb and Razor Test

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Comb and Razor Test screens for unilateral spatial neglect (USN) in the client’s personal space by assessing their performance in functional activities, such as using a comb or applying makeup.

In-Depth Review

Purpose of the measure

The Comb and Razor Test screens for unilateral spatial neglect (USN) in the client’s personal space by assessing their performance in functional activities, such as using a comb or applying makeup.

Available versions

The Comb and Razor Test was published by Beschin and Robertson in 1997 and was developed from a test by Zoccolotti and Judica (1991) that comprised three tasks: hair combing, pretend shaving (men) or facial compact use (women), and putting on glasses (Beschin & Robertson, 1997).

Features of the measure

Items:
There are no actual items for the Comb and Razor Test. The patient is asked to demonstrate the use of two common objects for 30 seconds each: 1. comb and 2. razor or powder compact case. Each object is placed at the patient’s midline.

Comb (Males and Females)

  • Examiner sits opposite to the patient and holds the comb up saying: “I would like you to comb your hair, and continue combing until I tell you to stop. Do you understand that? O.K., now begin”.
  • Examiner activates the stopwatch as soon as the patient takes the comb.
  • Examiner observes and records the number of moves on the left side and right side of the head. Any moves that are difficult to categorize are classified as ambiguous.
  • At the end of 30 seconds, the examiner asks the patient to stop, and takes the comb from him or her.

Razor (Males)

  • Examiner sits opposite to the patient and holds the razor up saying: “I am going to give you a razor, and I want you to pretend that you are shaving (razor with shield). I want you to continue shaving until I say stop. Do you understand?”
  • Examiner activates the stopwatch as soon as the patient takes the razor.
  • Examiner observes and records the number of moves on the left side and right side of the head. Any moves that are difficult to categorize are classified as ambiguous.
  • At the end of 30 seconds, the examiner asks the patient to stop, and takes the razor from him.

Powder Compact Case (Females)

  • Examiner sits opposite to the patient and holds the open powder compact case up saying: “I am going to give you a powder compact case and I want you to pretend that you are putting powder on your face. I want you to continue putting powder until I say stop. Do you understand?”
  • Examiner activates the stopwatch as soon as the patient takes the powder compact case.
  • Examiner observes and records the number of touches on the left side and right side of the head. Any touches that are difficult to categorize are classified as ambiguous.
  • At the end of 30 seconds, the examiner asks the patient to stop, and takes the powder compact case from her.

Scoring:
There are two scoring methods available, the original Beschin and Robertson (1997) method and the preferred reformulated McIntosh et al. (2000) scoring method:

Beschin and Robertson (1997) scoring method:

The number of moves made with the razor, comb or powder compact to the left, to the right, or ambiguously is recorded, and these counts are used to calculate the percentage of moves directed to the left:

% left = (left moves) / (left + ambiguous + right moves)

The % left is calculated for the comb and razor/powder compact case, and the scores are combined in the formula below as the index for left personal neglect:

[(razor/compact case % left) + (comb % left)] / 2

A score < 0.35 indicates the presence of left personal neglect. A score > 0.35 indicates the absence of left personal neglect.
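
As a concrete illustration only, the sketch below shows how the % left index might be computed in code; the function names, the example counts, and the Python implementation are our own assumptions, not part of the published test.

```python
def percent_left(left: int, ambiguous: int, right: int) -> float:
    """Proportion of grooming moves directed to the left side."""
    total = left + ambiguous + right
    return left / total if total else 0.0

def left_neglect_index(comb_counts, razor_or_compact_counts, cutoff=0.35):
    """Average the % left of the two tasks; a value below the cutoff
    suggests left personal neglect (Beschin & Robertson, 1997)."""
    index = (percent_left(*comb_counts) + percent_left(*razor_or_compact_counts)) / 2
    return index, index < cutoff

# Hypothetical counts recorded over 30 seconds: (left, ambiguous, right) moves.
index, neglect_suspected = left_neglect_index((3, 2, 10), (2, 1, 9))
print(f"index = {index:.2f}, left personal neglect suspected: {neglect_suspected}")
```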

McIntosh et al. (2000) scoring method:

McIntosh, Brodie, Beschin, and Robertson (2000) developed a reformulated scoring method for the Comb and Razor Test, which is considered the preferred method:

% bias = (left – right moves) / (left + ambiguous + right moves)

The % bias formula yields a score between -1 (total left neglect) and +1 (total right neglect), with symmetrical performance at 0.
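
A similarly hypothetical sketch of the % bias calculation (the function and variable names are ours):

```python
def percent_bias(left: int, ambiguous: int, right: int) -> float:
    """McIntosh et al. (2000) lateral bias score: -1 means total left neglect,
    +1 means total right neglect, and 0 means symmetrical grooming behaviour."""
    total = left + ambiguous + right
    return (left - right) / total if total else 0.0

# Hypothetical counts (left, ambiguous, right): most combing strokes land on the right.
print(round(percent_bias(3, 2, 10), 2))  # -0.47, i.e. behaviour biased away from the left side
```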

Time:
The Comb and Razor Test takes around 5 minutes to complete.

Training:
No training required.

Subscales:
None.

Equipment:

  • One comb
  • One razor with shield
  • One powder compact case
  • Stopwatch

Alternative forms of the Comb and Razor Test

Reformulated Comb and Razor Test (McIntosh, Brodie, Beschin, & Robertson, 2000).

McIntosh et al. (2000) examined a new method for scoring the Comb and Razor Test, which characterizes personal neglect as a lateral bias of behavior, without further assumptions about the direction of that bias, rather than as a lateralized deficit. The original % left formula of Beschin and Robertson (1997) (see Scoring) characterizes personal grooming behavior according to the proportion of the total activity that is directed to the left side of the body. In contrast, the proposed % bias formula of McIntosh et al. (2000) yields a score between -1 (total left neglect) and +1 (total right neglect), with symmetrical performance at 0. The reformulated version of the Comb and Razor Test was able to discriminate 17 right brain damaged patients with stroke and extra personal neglect from 14 right brain damaged patients with stroke without extra personal neglect, and to discriminate both of these groups from left brain damaged patients with stroke and healthy controls. This version was also found to be more sensitive to the behavioral abnormalities of patients with brain damage than the original: with the % left index, 11 patients performed below cut-off, whereas with the % bias cut-off, 20 patients performed below cut-off. Further, all cases of personal neglect that were diagnosed using the % left index were also diagnosed using the % bias index. The test-retest reliability for 40 patients tested twice was excellent (r = 0.95).

Client suitability

Can be used with:

  • Patients with stroke.

Should not be used with:

  • Patients who do not have unilateral voluntary movement and control of the shoulder, elbow, and fingers.
  • Need to rule out the presence of apraxia, given that this may impact the validity of testing results.

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in personal space
What types of clients can the tool be used for? Patients with stroke
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions Reformulated Comb and Razor Test
Other Languages Not applicable.
Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the Comb and Razor Test.

Test-retest:
Two studies have examined the test-retest reliability of the Comb and Razor Test and both studies reported excellent test-retest.

Inter-rater:
No studies have examined the inter-rater reliability of the Comb and Razor Test.

Validity Construct:
Known groups:
The Comb and Razor Test is able to discriminate between different groups of subjects (i.e. patients with right brain stroke, with or without extra personal neglect, patients with left brain stroke, as well as healthy individuals).
Does the tool detect change in patients? Not applicable.
Acceptability The Comb and Razor Test should be used as a screening tool rather than for clinical diagnosis of USN. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients who do not have unilateral voluntary movement and control of the shoulder, elbow, and fingers cannot complete the test.
Feasibility The Comb and Razor Test requires no specialized training to administer and only simple equipment is required (a razor with shield, a compact case, a comb, and a stopwatch). The test is fairly simple to score and interpret, calculated using a mathematical equation. Cutoff scores for the presence of left or right neglect are provided.
How to obtain the tool? Not applicable.
To conduct the Comb and Razor Test, the clinician asks the patient to demonstrate the use of two common objects for 30 seconds each: 1. comb and 2. razor or powder compact case. Each object is placed at the patient’s midline. A dialogue has been created for administering the Comb and Razor Test. See Features of the measures in the in-depth review section.

Psychometric Properties

Overview

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Comb and Razor Test as a measure of USN. Although easy to use, this tool has only minimal evidence of reliability and validity (Menon & Korner-Bitensky, 2004). Further studies examining the psychometric properties of the test have had small sample sizes. More testing is required prior to clinical use.

Reliability

Internal consistency:
No studies have examined the internal consistency of the Comb and Razor Test.

Test-retest:
Beschin and Robertson (1997) examined the reliability of the Comb and Razor Test. Forty-three patients with stroke were assessed twice. In this study, the test-retest reliability of the Comb and Razor Test was excellent (r = 0.94).

McIntosh et al. (2000) examined the reliability of a reformulated version of the Comb and Razor Test and compared it to the original version by Beschin and Robertson (1997). Forty patients who were administered the test were reassessed at a later time. The original Comb and Razor Test had excellent test-retest reliability (r = 0.94).

Inter-rater:
No studies have examined the inter-rater reliability of the Comb and Razor Test.

Validity

Construct:
Known groups:
Beschin and Robertson (1997) examined the psychometric properties of the Comb and Razor Test in 17 patients with right brain stroke and extra personal neglect, 14 without unilateral extra personal neglect, 13 patients with left brain stroke and 17 age-matched controls. An analysis of variance (ANOVA) by group showed that the four samples of subjects differed significantly in their performance [F(3, 57) = 18.0; p < 0.0001]. A series of Fisher’s post hoc exact tests showed significant differences between all pairs of groups, with the exception of the left brain damage group, which did not differ significantly from the control group. Therefore, this tool is able to discriminate between different groups of subjects (i.e. patients with right brain stroke, with or without extra personal neglect, patients with left brain stroke, as well as healthy individuals).

McIntosh et al. (2000) examined the validity of a reformulated version of the Comb and Razor Test in 88 participants: 17 patients with right brain damage and extra personal neglect, 14 with right brain damage and no extra personal neglect, 13 with left brain damage, and 44 age-matched controls. Mean scores for each group were as follows: patients with right brain damage and extra personal neglect scored 0.25, patients with right brain damage and no extra personal neglect scored 0.37, patients with left brain damage scored 0.46, and controls scored 0.43. An ANOVA by group performed on the % left scores was highly significant [F(3, 84) = 27.54; p < 0.0001], and Fisher’s post hoc exact tests found significant differences between all pairs of groups. Therefore, this tool is able to discriminate between different groups of subjects (i.e. patients with right brain stroke, with or without extra personal neglect, patients with left brain stroke, as well as healthy individuals).

Criterion:
No studies have examined the criterion validity of the Comb and Razor Test.

Responsiveness

No studies have examined the responsiveness of the Comb and Razor Test.

References

  • Beschin, N., Robertson, I. H. (1997). Personal versus extrapersonal neglect: a group study of their dissociation using a reliable clinical test. Cortex, 33, 379-384.
  • McIntosh, R. D., Brodie, E. E., Beschin, N., Robertson, I. H. (2000). Improving the clinical diagnosis of personal neglect: a reformulated comb and razor test. Cortex, 36, 289-292.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Zoccolotti, P., Judica, A. (1991). Functional evaluation of hemineglect by means of a semistructured scale: Personal extrapersonal differentiation. Neuropsychological Rehabilitation, 1, 33-34.
  • Zoccolotti, P., Antonucci, G., Judica, A. (1992). Psychometric characteristics of two semi-structured scales for the functional evaluation of hemi-inattention in extrapersonal and personal space. Neuropsychological Rehabilitation, 2, 179-191.

See the measure

To complete the comb and razor test, one simply requires a comb, razor, and powder compact case.


Double Letter Cancellation Test (DLCT)

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Double Letter Cancellation Test (DLCT) is used to evaluate the presence and severity of visual scanning deficits and to evaluate unilateral spatial neglect (USN) in the near extrapersonal space (Diller, Ben-Yishay, Gerstman, Goodkin, Gordon, & Weinberg, 1974).

In-Depth Review

Purpose of the measure

The Double Letter Cancellation Test (DLCT) is used to evaluate the presence and severity of visual scanning deficits and to evaluate unilateral spatial neglect (USN) in the near extrapersonal space (Diller, Ben-Yishay, Gerstman, Goodkin, Gordon, & Weinberg, 1974).

Available versions

The DLCT was published by Diller et al. in 1974.

Features of the measure

Items:
There are no actual items for the DLCT.

The patient is asked to look at an 8.5″ x 11″ sheet of paper containing 6 lines with 52 letters per line. Together, the stimulus letters C and E appear 105 times. The patient is instructed to put a mark through all the letters C and E. The time taken to complete the test is recorded.

This test may be more challenging than cancellation of shapes or colours, since it requires the discrimination of 2 letters (C and E) from rows of letters. Because the letters are arranged in structured rows, the test requires less organizational skill than when forms are randomly scattered on the page. This enables the therapist to examine attention more closely, without the confounding factor of visual organization.

To begin the DLCT, the therapist places the test sheet at the patient’s midline and secures it with tape, and points to the trial line, asking the patient to mark the Cs and Es. If the patient is unable to perform the trial, further instruction is given. If the trial is correctly performed, the therapist will then proceed to give instructions as follows: “Look at the letters on this page. Put one line through each C and E. Ready, begin here”. The therapist points to the first letter in the first row and begins timing the patient.

Scoring:
The score is calculated by subtracting the number of omissions (Cs and Es that were not crossed out) from the possible perfect score of 105. Higher scores indicate better performance. The time taken and the total number of errors should be noted. According to Diller et al. (1974), 13 control subjects had a median error of 1 with a performance time of 100 seconds. Errors on the right side of the page should be compared with those on the left. Randomly scattered errors indicate poor sustained attention, whereas errors concentrated on one half of the page (either left or right) indicate USN. Commissions are rarely seen and are therefore not included in the analyses.

The patient’s general approach to the task should be observed (i.e. does the patient work from left to right or move randomly over the page; is their response time slowed; does the patient selectively choose the correct response or frequently mark non-target letters).

Normative data has been published by sex and age, based on the results from 241 patients with lesions of the right hemisphere (Gordon, Ruckdeschel-Hibbard, Egelko, Diller, Simmens, & Langer, 1984).
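
The scoring rules above can be summarized in a short, purely illustrative sketch; the function name, the laterality summary, and the example counts are our own assumptions rather than part of the published test.

```python
def dlct_score(left_omissions: int, right_omissions: int, max_score: int = 105):
    """Score = 105 minus the number of omitted target letters (Cs and Es).
    The side with more omissions is reported as a rough laterality check;
    the published test gives no numeric cut-off for USN."""
    score = max_score - (left_omissions + right_omissions)
    if left_omissions > right_omissions:
        side = "left"
    elif right_omissions > left_omissions:
        side = "right"
    else:
        side = "none"
    return score, side

# Hypothetical example: 12 targets missed on the left half of the page, 1 on the right.
print(dlct_score(12, 1))  # (92, 'left') -- omissions concentrated on the left half
```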

Time:
Less than 5 minutes.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • 11x 8.5-inch page of paper containing 6 lines with 52 letters per line and the stimulus letters C and E presented 105 times.
  • Pencil
  • Stopwatch

Alternative Forms of the DLCT

  • Number version of the DLCT (Wade, Wood, & Hewer, 1988). A test similar to the DLCT has been created using numbers instead of letters. The patient is asked to cancel all the 1s and 4s from the rows of numbers. Normative data for this form of the DLCT was gathered from 51 elderly individuals (Wade et al., 1988).
  • Letter cancellation subtest of the Behavioural Inattention Test (Wilson, Cockburn, & Halligan, 1987).
    The patient is presented with a sheet of paper with five lines of letters (34 per line). The patient is instructed to put a mark through all the letters E and R. The maximum score is 40 (20 left, 20 right).

Client suitability

Can be used with:

Patients with stroke.

  • Patients must be able to hold a pencil to complete the test (the presence of apraxia may impair this ability).
  • Patients must be able to recognize letters of the alphabet to complete the test.

Should not be used with:

  • The DLCT requires language skills sufficient to identify letters. Therefore the DLCT may not be appropriate for patients with receptive aphasia.
  • The DLCT may not be appropriate for patients with poor vision as the letters may appear too small.
  • The DLCT cannot be used to differentiate between sensory neglect and motor neglect because it requires both visual search and manual exploration (Ladavas, 1994).
  • The DLCT cannot be completed by proxy.

In what languages is the measure available?

The DLCT has been used with English- and French-speaking patients.

A Hebrew version of the DLCT has been used in some studies as part of the Behavioral Inattention Test (e.g. Friedman & Nachman-Katz, 2004).

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the near extrapersonal space.
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions None.
Other Languages The DLCT has been used with English- and French-speaking patients.
A Hebrew version of the DLCT has been used in some studies as part of the Behavioural Inattention Test.
Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the DLCT.

Test-retest:
One study has examined the test-retest reliability of the DLCT and reported adequate test-retest.

Validity

Construct:
Adequate correlation with mean CT-scan damage.

Does the tool detect change in patients? Not applicable.
Acceptability
  • The DLCT should be used as a screening tool rather than for clinical diagnosis of USN.
  • Apraxia must be ruled out as this may affect the validity of test results.
  • This test cannot be completed by proxy.
  • The DLCT is known to be a more taxing measure of USN than the Single Letter Cancellation Test.
  • Patients must be able to hold a pencil.
  • The DLCT requires language skills sufficient to identify letters, and therefore may not be suitable for patients with receptive aphasia.
  • Patients with poor vision may not be able to complete the DLCT as the letters may appear too small.
  • The DLCT cannot be used to differentiate between sensory neglect and motor neglect.
Feasibility

The DLCT requires no specialized training to administer and only minimal equipment is required (a pencil, a stopwatch, and the test paper). The test is simple to score and interpret. No suggested cutoff score for the presence of USN is provided for the DLCT, however normative data has been published (see Gordon, Ruckdeschel-Hibbard, Egelko, Diller, Simmens, & Langer, 1984). The test is placed at the patient’s midline and is secured with tape. The time it takes for the patient to complete the test is recorded.

How to obtain the tool?

Please click here to see a copy of the DLCT.

Psychometric Properties

Overview

In general, cancellation tests are believed to have greater test-retest reliability than line bisection tests and are often more sensitive for detecting USN than line bisection tests (Marsh & Kersel, 1993; Azouvi et al., 2002). The DLCT has been reported to have adequate psychometric properties, including some evidence of reliability and validity, in identifying USN in the near extrapersonal space (Menon & Korner-Bitensky, 2004). For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the DLCT.

Reliability

Internal consistency:
No evidence.

Test-retest:
Gordon et al. (1984) examined the test-retest reliability of the DLCT using a group of 31 subjects and found adequate test-retest (r = 0.62).

Validity

Construct:
Egelko et al. (1988) found that the DLCT correlated adequately with mean CT-scan damage (r = -0.35).

Note: This correlation is negative because a high score on the DLCT indicates better performance, whereas a high CT-scan score indicates more brain damage.

Criterion:
No evidence.

Responsiveness

No evidence.

References

  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., et al. (2002). Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. J Neurol Neurosurg Psychiatry, 73, 160 -166.
  • Diller, L., Ben-Yishay, Y., Gerstman, L. J., Goodkin, R., Gordon, W., Weinberg, J. (1974). Studies in scanning behavior in hemiplegia. Rehabilitation Monograph No. 50, Studies in cognition and rehabilitation in hemiplegia. New York: New York University Medical Center, Institute of Rehabilitation Medicine.
  • Egelko, S., Gordon, W. A., Hibbard, M. R., Diller, L., Lieberman, A., Holliday, R., Ragnarsson, K., Shaver, M. S., Orazem, J. (1988). Relationship among CT scans, neurological exam, and neuropsychological test performance in right-brain-damaged stroke patients. J Clin Exp Neuropsychol, 10, 539-564.
  • Friedmann, N., Nachman-Katz, I. (2004). Developmental neglect dyslexia in a Hebrew-reading child. Cortex, 40(2), 301-313.
  • Gordon, W. A., Ruckdeschel-Hibbard, M., Egelko, S., Diller, L., Simmens, S., Langer, K., Sano, M., Orazem, J., Weinberg, J. (1984). Single Letter Cancellation Test in Evaluation of the Deficits Associated with Right Brain Damage: Normative Data on the Institute of Rehabilitation Medicine Test Battery. pp1-7, New York: New York University Medical Center.
  • Ladavas, E. (1994). The role of visual attention in neglect: A dissociation between perceptual and directional motor neglect. Neuropsychological Rehabilitation, 4, 155-159.
  • Marsh, N. V., Kersel, D. A. (1993). Screening tests for visual neglect following stroke. Neuropsychological Rehabilitation, 3, 245-257.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Wade, D. T., Wood, V. A., Hewer, R. L. (1988). Recovery of cognitive function soon after stroke: A study of visual neglect, attention span and verbal recall. Journal of Neurology, Neurosurgery, and Psychiatry, 51, 10-13.
  • Wilson, B., Cockburn, J., Halligan, P. (1987). Development of a behavioural test of visuospatial neglect. Archives of Physical Medicine and Rehabilitation, 68, 98-102.

See the measure

How to obtain the DLCT?

The DLCT can be purchased as part of the Behavioral Inattention Test from Harcourt Assessment by clicking on the following link: http://www.harcourt-uk.com/product.aspx?skey=2906
Click here to view a copy of the DLCT as it appears in the Behavioral Inattention Test.


Draw-a-Man Test

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Draw-A-Man Test (Goodenough, 1926) has been widely used as a measure of intellectual maturation in children, to elicit personality type and unconscious material, and as part of neuropsychologic test batteries. The test has also been used to identify the presence of unilateral spatial neglect (USN) in adult patients post-stroke.

In-Depth Review

Purpose of the measure

The Draw-A-Man Test (Goodenough, 1926) has been widely used as a measure of intellectual maturation in children, to elicit personality type and unconscious material, and as part of neuropsychologic test batteries. The test has also been used to identify the presence of unilateral spatial neglect (USN) in adult patients post-stroke.

Available versions

The Draw-A-Man test was published by Goodenough in 1926.

Features of the measure

Items:
There are no actual items in the Draw-A-Man Test. Patients are given a blank piece of paper (8.5 x11) entitled “Draw an Entire Man” and pencil, and are asked to draw an entire man from memory.

Scoring:
Chen-Sea (2000) developed a new scoring method for the Draw-A-Man Test specifically for examining the presence of USN. Rather than using the original quantitative 10-point scoring method, which was not able to distinguish patients with personal neglect from healthy controls, this new method scores drawings that show only homogenous unilateral body parts as indicating USN and those with homogenous bilateral body parts as normal. This scoring system was found to have high inter-rater reliability: agreement between two raters was 95.45% for the participants without brain insult and 100% for the participants with stroke. The method was also able to discriminate patients with personal neglect from those without personal neglect.

Time:
It takes less than 5 minutes to complete the Draw-A-Man Test.

Training:
The examiner must be able to distinguish homogenous unilateral body parts (indicates presence of USN) from homogenous bilateral body parts (indicates normal functioning), as drawn by the patient.

Subscales:
None.

Equipment:
A pencil and a blank piece of paper (8.5 x11) entitled “Draw an Entire Man”.

Alternative Forms of the Draw-A-Man Test

Quantitative 10-point scoring method of the Draw-A-Man Test.

  • Using a blank piece of paper and a pencil, the seated patient must draw an entire man. The picture is scored by giving one point for the presence of each of the following body parts: head, trunk, right arm, left arm, right hand, left hand, right leg, left leg, right foot, and left foot. The maximum score of this version of the test is 10 (see Figure 1 and the illustrative sketch following it). This method has not been found to be able to distinguish patients with personal neglect from healthy controls (Chen-Sea, 1995b).

Figure 1. Perfect score of 10 points.

(Source: Chen-Sea, 2000)
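
A minimal, hypothetical sketch of this 10-point tally (the list of body parts follows the description above; the code and names are ours):

```python
BODY_PARTS = ["head", "trunk", "right arm", "left arm", "right hand",
              "left hand", "right leg", "left leg", "right foot", "left foot"]

def ten_point_score(drawn_parts):
    """One point for each of the 10 body parts present in the drawing (maximum 10)."""
    return sum(1 for part in BODY_PARTS if part in drawn_parts)

# Hypothetical drawing that omits all left-sided parts.
print(ten_point_score({"head", "trunk", "right arm", "right hand",
                       "right leg", "right foot"}))  # 6
```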

Client suitability

Can be used with:

  • Patients with stroke.
  • Patients must be able to hold a pencil to complete the test.

Should not be used with:

  • Patients who have had a left stroke and patients who are left handed.
  • Need to rule out the presence of apraxia, given that this may impact the validity of testing results.

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the personal and extrapersonal space (as well as the presence of anosognosia).
Other constructs: intellectual ability/cognitive function/body image.
What types of clients can the tool be used for? Adults with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions Quantitative 10-point scoring method of the Draw-A-Man Test
Other Languages Not applicable.
Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the Draw-A-Man Test.

Test-retest:
Two studies have examined the test-retest reliability of the Draw-A-Man Test and both reported adequate test-retest.

Inter-rater:
One study examined the inter-rater reliability of the Draw-A-Man Test and reported excellent inter-rater.

Validity Construct:
Convergent:
Correlated with activities of daily living (ADL) performance measured by the Klein-Bell ADL Scale.

Known groups:
The Draw-A-Man Test was able to discriminate patients with personal neglect from those without personal neglect.

Does the tool detect change in patients? Not applicable.
Acceptability The Draw-A-Man Test should be used as a screening tool rather than for clinical diagnosis of USN. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients must be able to hold a pencil. Patients with hemiparesis on their dominant side may have difficulty completing the test.
Feasibility The Draw-A-Man Test takes only 5 minutes to complete and requires minimal training to score (must be able to distinguish homogenous unilateral body parts from homogenous bilateral body parts). Only simple equipment is required (a pencil and paper).
How to obtain the tool? All that is required is a blank piece of paper (8.5×11) entitled “Draw an Entire Man”, and a pencil. The patient is asked to draw an entire man from memory.

Psychometric Properties

Overview

The Draw-A-Man Test has rarely been used to detect USN in patients with right-hemisphere stroke. For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Draw-A-Man Test as a measure of USN.

Reliability

Internal consistency:
No evidence.

Test-retest:
Chen-Sea (1995, reported in Chen-Sea, 2000) administered the Draw-A-Man Test to 19 patients with stroke in a pilot study, and found that the test showed adequate test-retest reliability (r = 0.50).

Gordon et al. (1984) also reported an adequate test-retest reliability for the Draw-A-Man Test (r= 0.62) in patients with right brain damage.

Inter-rater:
Chen-Sea (1995, reported in Chen-Sea, 2000) administered the Draw-A-Man Test to 19 patients with stroke in a pilot study, and found that the test showed excellent inter-rater reliability (r = 0.96).

Chen-Sea (2000) administered the Draw-A-Man Test to 51 patients with stroke and 110 age-matched controls without brain injury. The test had a high inter-rater reliability, as two blinded raters had 96% agreement for the patients with neglect and 100% for those without neglect.

Validity

Criterion:
No evidence.

Construct:
Convergent:
Chen-Sea (2000) reported significant correlations of the Draw-A-Man Test with activities of daily living (ADL) performance measured by the Klein-Bell ADL Scale (Klein & Bell, 1979). Subjects with personal neglect had lower scores in five areas of ADL as compared to those without personal neglect.

Known groups:
Chen-Sea (2000) administered the Draw-A-Man Test to 51 patients with stroke and 110 age-matched controls without brain injury and found that all of the controls were correctly classified as normal, and 13 of the 51 stroke patients were categorized as having USN. The results of this study demonstrate the ability of the Draw-A-Man Test to discriminate patients with personal neglect from those without personal neglect.

Responsiveness

No evidence.

References

  • Chen-Sea, M-J. (1995a). Test-retest reliability of Draw-A-Man Test. Unpublished manuscript, National Cheng Kung University, Tainan, Taiwan.
  • Chen-Sea, M-J. (1995b). Performance of normal and right CVA patients on Draw-A-Man Test. Unpublished manuscript, National Cheng Kung University, Tainan, Taiwan.
  • Chen-Sea, MJ. (2000). Validating the Draw-A-Man Test as a personal neglect test. Am J Occup Therap, 54, 391-397.
  • Goodenough, F. L. (1926). The measurement of intelligence by drawing. New York: World Books.
  • Gordon, W. A., Ruckdeschel-Hibbard, M., Egelko, S., Diller, L., Simmens, S., Langer, K. (1984). Single Letter Cancellation Test in Evaluation of the Deficits Associated with Right Brain Damage: Normative Data on the Institute of Rehabilitation Medicine Test Battery. New York: New York University Medical Center.
  • Klein, R. M., Bell, B. J. (1979). Klein-Bell Activity of Daily Living Scale: Manual. Seattle: Division of Occupational Therapy, University of Washington.

See the measure

How to obtain the Draw-A-Man Test?

All that is required is a blank piece of paper (8.5 x11) entitled “Draw an Entire Man”.


Line Bisection Test

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Line Bisection Test is a quick measure used to detect the presence of unilateral spatial neglect (USN). To complete the test, the patient must place a mark with a pencil through the center of a series of horizontal lines. Usually, a displacement of the bisection mark towards the side of the brain lesion is interpreted as a symptom of neglect.

In-Depth Review

Purpose of the measure

The Line Bisection Test is a quick measure used to detect the presence of unilateral spatial neglect (USN). To complete the test, the patient must place a mark with a pencil through the center of a series of horizontal lines. Usually, a displacement of the bisection mark towards the side of the brain lesion is interpreted as a symptom of neglect.

Available versions

There are many versions of the Line Bisection Test, and the procedures are rarely standardized, with the exception of when the Line Bisection Test is used as an item within a standardized test battery (Plummer, Morris, & Dunai, 2003).

The relationship between abnormal line bisection and visual neglect has been observed for over a century (e.g. Axenfeld, 1894; Liepmann & Kalmus, 1900). In 1980, Schenkenberg, Bradford, and Ajax formally evaluated this method of detecting the presence of visual neglect in patients with lesions of the non-dominant hemisphere, and are thought to be the first to statistically evaluate this method.

Features of the measure

Items:
Patients are asked to place a mark with a pencil (with their preferred or unaffected hand) through the center of a series of 18 horizontal lines on an 11x 8.5-inch page.

Scoring:
The test is scored by measuring the deviation of the bisection from the true center of the line. A deviation of more than 6 mm from the midpoint indicates USN. Omission of two or more lines on one half of the page indicates USN.
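
As an illustration only, the sketch below applies the two rules just described. The text does not state whether the 6 mm rule refers to each line or to the mean deviation, so we assume the mean; the function name and example values are ours.

```python
from statistics import mean

def line_bisection_usn(deviations_mm, omitted_left: int, omitted_right: int,
                       cutoff_mm: float = 6.0) -> bool:
    """Flag USN if the mean bisection mark deviates more than 6 mm from the true
    centre (assumption: mean deviation across lines), or if two or more lines
    are omitted on one half of the page."""
    mean_deviation = mean(deviations_mm) if deviations_mm else 0.0
    return abs(mean_deviation) > cutoff_mm or omitted_left >= 2 or omitted_right >= 2

# Hypothetical example: marks placed roughly 8 mm right of centre, two left-sided lines skipped.
print(line_bisection_usn([8.0, 9.5, 7.2], omitted_left=2, omitted_right=0))  # True
```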

Time:
The test takes less than 5 minutes to complete.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • 11x 8.5-inch page of paper with 18 horizontal lines
  • Pencil

Alternative form of the Line Bisection Test

The Line Bisection Test can be presented in various forms. Some studies use 18 horizontal lines, while others have used a single line (Parton, Malhotra & Husain, 2004), or a series of 10 lines (Ferber & Karnath, 2001). The Line Bisection Test is also offered as part of some standardized test batteries such as within the Behavioural Inattention Test (Wilson, Cockburn, Halligan, 1987; Schubert & Spatt, 2001).

Client suitability

Can be used with:

  • Patients with stroke.
  • Patients must be able to hold a pencil in order to complete the task (the presence of apraxia may impair this ability).

Should not be used with:

  • The Line Bisection Test should be used with caution in the clinical diagnosis of spatial neglect:
    Ferber and Karnath (2001) found that deviation in line bisection was not apparent in 40% of the patients in their sample that had severe neglect. In comparison, each of the four cancellation tests administered in this study (Line Crossing, Letter Cancellation, Star Cancellation Test and Bells Test) missed 6% of the subjects and may be preferred over the Line Bisection Test for diagnosing USN.
  • Performance on the Line Bisection Test may be influenced by or may be indicative of other syndromes besides spatial neglect, such as hemianopia (damage of optic pathways that result in loss of vision in half of the visual field) (Ferber & Karnath, 2001). Consequently, the Line Bisection Test is not a highly specific measure of USN.

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the extrapersonal space
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions

There are many versions of the Line Bisection Test, and the procedures are rarely standardized, with the exception of when the Line Bisection Test is used as an item within a standardized test battery such as in the Behavioural Inattention Test.

Other Languages Not applicable.
Measurement Properties
Reliability Test-retest:
Four studies have examined the test-retest reliability of the Line Bisection Test. Three studies reported excellent test-retest and one study reported adequate test-retest.
Validity Criterion:
One study reported that when the Line Bisection Test was compared to other cancellation tests, the sensitivity of the test for detecting visuo-spatial neglect in elderly patients with stroke was 76.4%.

Construct:
Convergent:
Excellent correlations with Albert’s Test and the Baking Tray Task. Adequate correlations with the Star Cancellation Test and with mean CT-scan damage. Poor correlation with the Clock Drawing Test.

Does the tool detect change in patients? Not applicable.
Acceptability The Line Bisection Test should be used as a screening tool rather than for clinical diagnosis of USN. Performance on the Line Bisection Test may be influenced by or may be indicative of other syndromes besides spatial neglect, such as hemianopia. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients must be able to hold a pencil to complete.
Feasibility The Line Bisection Test takes only 5 minutes to complete and is simple to score. Only simple equipment is required (a pencil and paper with 18 horizontal lines).
How to obtain the tool?

The Line Bisection Test can be purchased as part of the Behavioural Inattention Test from Pearson Assessment by clicking on the following link:http://pearsonassess.ca/haiweb/Cultures/en-CA/Products/Product+Detail.htm?CS_Category=&CS_Catalog=TPC-CACatalog&CS_ProductID=749129972

Psychometric Properties

Overview

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Line Bisection Test. The test has been evaluated in many studies for its criterion validity, resulting in evidence of its strong psychometric properties in comparison to other paper-and-pencil tests (Menon & Korner-Bitensky, 2004).

Reliability

Test-retest:
Schenkenberg et al. (1980) examined the test-retest reliability of the Line Bisection Test in patients with right-hemisphere lesions, diffuse lesions, left-hemisphere lesions, and hospital controls, and found that it had excellent test-retest reliability, ranging from r = 0.84 to r = 0.93.

Similarly, Chen-Sea and Henderson (1994) reported an excellent test-retest reliability of r = 0.93 for the Line Bisection Test.

Kinsella, Packer, Ng, Olver, and Stark (1995) found adequate test-retest reliability for the Line Bisection Test (Pearson r = 0.64).

Bailey, Riddoch and Crome (2004) examined the test-retest reliability of the Line Bisection Test in elderly patients with stroke (85 patients with neglect and 83 patients without neglect). Patients repeated the test within the hour. The intraclass correlation coefficient (ICC) was excellent for patients with neglect (ICC = 0.97).

Validity

Criterion:
Bailey, Riddoch, and Crome (2000) found that when the Line Bisection Test was compared to other cancellation tests, the sensitivity of the test for detecting visuo-spatial neglect in elderly patients with stroke was 76.4%.

Construct:
Marsh and Kersel (1993) examined the construct validity of the Line Bisection Test by correlating the test with the Star Cancellation Test using Pearson’s correlation in a sample of 27 rehabilitation patients with a history of stroke. The two measures were found to have an adequate negative correlation (r = -0.40). The correlation is negative because a high score on the Line Bisection Test indicates USN, whereas a score close to 0 on the Star Cancellation Test indicates the absence of USN.

Egelko et al. (1988) correlated Line Bisection Test scores with mean CT-scan damage, and CT-scan damage of temporal lobe, parietal lobe, and occipital lobe. All correlations were found to be adequate (r = -0.44, -0.59, -0.37, and -0.42, respectively).

Friedman (1990) examined whether the Line Bisection Test correlated with functional outcome in 82 elderly patients within 14 days of a non-lacunar stroke. At discharge assessment, patients with impaired line bisection had poorer functional outcome than those with normal line bisection as measured by Barthel Index scores, walking speed and discharge destination. When subjects with impaired line bisection were divided into two groups according to line bisection score, the severely impaired had worse functional outcome than the mildly impaired.

Convergent:
Agrell, Dehlin, and Dahlgren (1997) compared the performance of 57 elderly patients with stroke on 5 different tests for visuo-spatial neglect (Star Cancellation Test, Line Crossing-Albert’s Test, Line Bisection, Clock Drawing Test and Copy A Cross). The Line Bisection Test had an excellent correlation with Line Crossing-Albert’s Test (r = 0.85) and correlated adequately with the Star Cancellation Test (r = -0.33).

Bailey, Riddoch, and Crome (2000) administered the Line Bisection Test and the Baking Tray Task to 107 patients with right or left sided brain damage and 43 age-matched controls. The Baking Tray Task had an excellent correlation with the Line Bisection Test (r = -0.66). This correlation is negative because a high score on the Line Bisection Test indicates the presence of USN, whereas a high score on the Baking Tray Task indicates normal performance.

Binder, Marshall, Lazar, Benjamin, and Mohr (1992) compared performance on line bisection with that on letter cancellation in a group of 34 patients with right-sided brain damage. They found no significant correlation (r = 0.39) between the scores on the two tests.

Similarly, Schubert and Spatt (2001) found no significant correlation between the Line Bisection Test and the Star Cancellation Test in 20 patients with right hemisphere stroke (r = 0.48). Furthermore, five patients with impaired performance on one of the tests were within the normal range on the other.

Ishiai, Sugishita, Ichikawa, Gono, and Watabiki (1993) examined the construct validity of the Clock Drawing Test and found that it had a poor correlation with the Line Bisection Test (r = 0.05).

Known groups:
Schenkenberg et al. (1980) reported that Line Bisection Test performance can discriminate between patients with right-hemisphere lesions and patients with diffuse lesions, patients with left-hemisphere lesions, and hospital controls.

Responsiveness

No evidence.

References

  • Agrell, B. M., Dehlin, O. I., Dahlgren, C. J. (1997). Neglect in elderly stroke patients: a comparison of five tests. Psychiatry Clin Neurosci, 51(5), 295-300.
  • Axenfeld, D. (1894). Eine einfache Methode Hemianopsie zu constatiren. Neurol Centralbl, 437-438.
  • Bailey, M. J., Riddoch, M. J., Crome, P. (2000). Evaluation of a test battery for hemineglect in elderly stroke patients for use by therapists in clinical practice. NeuroRehabilitation, 14, 139-150.
  • Bailey, M. J., Riddoch, M. J., Crome, P. (2004). Test-retest stability of three tests for unilateral visual neglect in patients with stroke: Star Cancellation, Line Bisection, and the Baking Tray Task. Neuropsychological Rehabilitation, 14(4), 403-419.
  • Barton, J. J. S., Black, S. E. (1998). Line bisection in hemianopia. J Neurol Neurosurg Psychiatry, 64, 660-662.
  • Binder, J., Marshall, R., Lazar, R., Benjamin, J., Mohr, J. P. (1992). Distinct syndromes of hemineglect. Archiv Neurology, 49, 1187-1194.
  • Chen-Sea, M. J., Henderson, A. (1994). The reliability and validity of visuospatial inattention tests with stroke patients. Occup Ther Int, 1, 36-48.
  • Egelko, S., Gordon, W. A., Hibbard, M. R., Diller, L., Lieberman, A., Holliday, R., Ragnarsson, K., Shaver, M. S., Orazem, J. (1988). Relationship among CT scans, neurological exam, and neuropsychological test performance in right-brain-damaged stroke patients. J Clin Exp Neuropsychol, 10, 539-564.
  • Ferber, S., Karnath, H. O. (2001). How to assess spatial neglect–Line Bisection or Cancellation Tests? J Clin Exp Neuropsychol, 23, 599-607.
  • Friedman, P. J. (1990). Spatial neglect in acute stroke: the Line Bisection Test. Scand J Rehabil Med, 22, 101-106.
  • Ishiai, S., Sugishita, M., Ichikawa, T., Gono, S., Watabiki, S. (1993). Clock drawing test and unilateral spatial neglect. Neurology, 43, 106-110.
  • Kinsella, G., Packer, S., Ng, K., Olver, J., Stark, R. (1995). Continuing issues in the assessment of neglect. Neuropsychological Rehabilitation, 5(3), 239-258.
  • Liepmann, H., Kalmus, E. (1900). Über eine Augenmaßstörung bei Hemianopikern. Berlin Klin Wochenschr, 38, 838-842.
  • Marsh, N. V., Kersel, D. A. (1993). Screening tests for visual neglect following stroke. Neuropsychological Rehabilitation, 3, 245-257.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Parton, A., Malhotra, P., Husain, M. (2004). Hemispatial neglect. J Neurol Neurosurg Psychiatry, 75, 13-21.
  • Plummer, P., Morris, M. E., Dunai, J. (2003). Assessment of unilateral neglect. Phys Ther, 83(8), 732-740.
  • Schenkenberg, T., Bradford, D. C., Ajax, E. T. (1980). Line bisection and unilateral visual neglect in patients with neurological impairment. Neurology, 30, 509-517.
  • Schubert, F., Spatt, J. (2001). Double dissociations between neglect tests: Possible relation to lesion site. Eur Neurol, 45, 160-164.
  • Wilson, B. A., Cockburn, J., Halligan, P. W. (1987). Behavioural Inattention Test. Titchfield, Hants, England: Thames Valley Test Company Ltd.

See the measure

How to obtain the Line Bisection Test?

Click here to obtain a copy of the Line Bisection Test

The Line Bisection Test can be purchased as part of the Behavioural Inattention Test from Pearson Assessment by clicking on the following link: Pearson Assessment


Semi-Structured Scale for the Functional Evaluation of Hemi-inattention

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Semi-Structured Scale for the Functional Evaluation of Hemi-inattention is a screening tool used to detect the presence of unilateral spatial neglect (USN) in both the personal and extra personal space. In this scale, patients must perform functional activities, such as using a comb or serving tea.

In-Depth Review

Purpose of the measure

The Semi-Structured Scale for the Functional Evaluation of Hemi-inattention is a screening tool used to detect the presence of unilateral spatial neglect (USN) in both the personal and extra personal space. In this scale, patients must perform functional activities, such as using a comb or serving tea.

Available versions

The Semi-Structured Scale for the Functional Evaluation of Hemi-inattention was published by Zoccolotti, Antonucci, and Judica in 1992.

Features of the measure

Items:
Patients are asked to perform different tasks with real objects.

To assess personal neglect, patients must demonstrate the use of three common objects: comb, razor/powder compact, and eyeglasses. The objects are placed at the patient’s midline one at a time, and the patient is asked:

  • “Show me how you comb your hair?”
  • “Show me how to use the razor?” (male) or “Show me how to powder yourself?” (female)
  • “Show me how to put the eyeglasses on?”

To assess extra personal neglect, patients must serve tea, deal cards, describe a picture, and describe an environment. The patient is asked to perform these activities with objects that are provided on a table.

  1. Serving tea.
    The patient is brought to a table with a tray containing 4 cups and saucers, a teapot, a sugar bowl, teaspoons, and paper napkins. Examiners are seated on the right, in front, and to the left of the patient, who is asked to serve tea for him/herself and for those who are with him/her, to distribute napkins and teaspoons, and also to serve the sugar. The examiner, who is seated in front of the patient, asks: “Would you like to serve the tea?”. If the patient serves the tea but not the napkins and/or teaspoons, the examiner asks: “Would you like to give us the teaspoons (napkins)?”.
  2. Card dealing.
    The examiners and the patient are seated the same way as they were for the tea-serving situation. The patient is asked if he/she knows how to play “Scopa”. If necessary, he/she is reminded of the basic rules (3 cards for each player and 4 in the middle of the table).
    Note: As Scopa is an Italian card game, other card games featuring four players can be used as an alternative. The examiner seated in front of the patient asks: “Would you like to deal the cards for a game of Scopa?”.
  3. Picture description.
    A picture is placed in front of the patient and he/she is asked: “Will you describe everything you see in this picture?”. Three pictures are used. Two are cards 3 and 6 (45 x 32 cm) of Set 1 of the Progressive Picture Compositions by Byrne (1967); one is Tissot’s painting ‘The dance on the ship’ (60 x 100 cm). The examiner indicates the persons and objects pointed out by the patient with progressive numbers on a photocopy of the stimulus figure in the order in which they are reported, without soliciting in any way. When the description is finished, the patient is asked: “Well, what does this picture represent?”. The patient’s response is transcribed but it does not contribute to the score.
  4. Description of an environment.
    The patient is placed in a room full of objects on both sides (arm chairs, pictures, lamps) and is asked to describe it. The patient is told: “Will you describe everything you see in this room?”. To facilitate scoring, it is useful to record the elements described by the patient on a schematic drawing of the environment.

Scoring:
Patients receive a score ranging from 0 to 3 for each item, based on the symmetry of their performance. A total score is calculated for each subscale; a scoring sketch follows the list below.

  • Personal neglect subscale: A score of 0 indicates normal performance, 1 indicates slight asymmetry, 2 indicates clear omissions, and 3 indicates significant reduction in space explored. The maximum score that can be achieved is 9. A total score greater than the cutoff of 1 indicates the presence of personal neglect.
  • Extra personal neglect subscale: A score of 0 indicates normal performance, 1 indicates slight asymmetries, uncertainty, or slowness in space explored, 2 indicates clear omissions, and 3 indicates significant reduction in space explored. The maximum score that can be achieved is 12. A total score greater than the cutoff of 3 indicates the presence of extra personal neglect.
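
The subscale scoring described above can be expressed as a short calculation. The following is a minimal sketch in Python, assuming each item is rated from 0 to 3 as described; the function and variable names, and the example ratings, are illustrative only and are not part of the published scale.

def score_subscale(item_scores, cutoff):
    """Sum item ratings (each 0-3) and flag neglect when the total exceeds the cutoff."""
    if not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("Each item must be rated from 0 to 3")
    total = sum(item_scores)
    return total, total > cutoff

# Personal neglect subscale: 3 items (comb, razor/powder compact, eyeglasses), maximum 9, cutoff > 1
personal_total, personal_neglect = score_subscale([0, 2, 1], cutoff=1)

# Extra personal neglect subscale: 4 items (tea, cards, picture, environment), maximum 12, cutoff > 3
extra_total, extra_neglect = score_subscale([1, 1, 2, 0], cutoff=3)

print(personal_total, personal_neglect)  # 3 True -> personal neglect indicated
print(extra_total, extra_neglect)        # 4 True -> extra personal neglect indicated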

Time:
It takes approximately 5 minutes to complete the personal neglect subscale and 15 minutes to complete the extra personal neglect subscale.

Training:
The therapist must be trained on how to use the rating scale.

Subscales:
Personal neglect and extra personal (spatial) neglect.

Equipment:

  • Comb
  • Razor/Powder compact
  • Eyeglasses
  • Tea set
  • Playing cards
  • Picture

Alternative forms of the Semi-Structured Scale for the Functional Evaluation of Hemi-inattention

None.

Client suitability

Can be used with:

  • Patients with stroke.

Should not be used with:

  • Patients who do not have unilateral voluntary movement and control of the shoulder, elbow, and fingers cannot be assessed for the presence of personal neglect.
  • Patients who do not have unilateral voluntary movement and control of the shoulder, elbow, and fingers, or who have deficits in language, cognition, or visual perceptual skills, cannot be assessed for the presence of extra personal neglect. It may be challenging for patients with stroke to perform these high-level activities soon after stroke; however, this scale may become more useful as the patient approaches discharge from acute care (Menon & Korner-Bitensky, 2004).
  • The presence of apraxia must be ruled out, as it may affect the validity of test results.
  • A proxy respondent cannot be used because the measure is dependent on observed completion of each task.

In what languages is the measure available?

Not applicable.

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in both the personal and extra personal space.
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer It takes approximately 5 minutes to complete the personal neglect subscale and 15 minutes to complete the extra personal neglect subscale.
Versions None.
Other Languages Not applicable.
Measurement Properties
Reliability Internal consistency:
One study examined the internal consistency of the Semi-Structured Scale and found adequate internal consistency.

Test-retest:
No studies have examined the test-retest reliability of the Semi-Structured Scale.

Inter-rater:
One study examined the inter-rater reliability of the Semi-Structured Scale and reported excellent inter-rater reliability.

Validity Criterion:
Concurrent:
The extra personal subscale correlated with the Line Cancellation Test, Letter Cancellation Test, Wundt-Jastrow Area Illusion Test, and Sentence Reading Test. The personal subscale did not correlate with these conventional diagnostic tests and requires further validation.
Does the tool detect change in patients? Although the scale is typically used as a screening measure, one study examined the responsiveness of the scale and found that the personal neglect subscale was not responsive to clinical change following rehabilitation; however, the extra personal subscale was responsive to clinical change after rehabilitative treatment.
Acceptability Both subscales cannot be completed by patients who do not have unilateral voluntary movement and control of the shoulder, elbow, and fingers. Furthermore, the extra personal neglect subscale cannot be completed by patients with deficits in language, cognition, or visual perception. Although it may be challenging for patients to perform these high-level activities soon after their stroke, this scale may become more useful to screen patients before they are discharged home from acute care or rehabilitation. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy.
Feasibility The Semi-Structured Scale is one of the longer scales used to detect USN, and the personal neglect subscale requires further validation. The scale is simple to score, however training is required regarding how to use the rating scale. Simple and readily accessible equipment is required to complete the scale (Comb, Razor/Powder compact, Eyeglasses, Tea set, Playing cards, Picture).
How to obtain the tool? Not applicable.
To administer the personal neglect subscale of the Semi-Structured Scale, the clinician asks the patient to demonstrate the use of 3 common objects: comb, razor/powder compact, and eyeglasses. The objects are placed at the patient’s midline one at a time. To administer the extra personal neglect subscale, patients must serve tea, deal cards, describe a picture, and describe an environment. The patient is asked to perform these activities with objects that are provided on a table. A dialogue has been created for administering the Semi-Structured Scale and can be found under the tab ‘in-depth review – features of the measure’.

Psychometric Properties

Overview

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Semi-Structured Scale for the Functional Evaluation of Hemi-inattention as a measure of USN. Although easy to use, this tool has only minimal evidence of validity (Menon & Korner-Bitensky, 2004). More testing is required regarding the reliability and validity of the scale.

Reliability

Internal consistency (inter-item correlations):
Zoccolotti et al. (1992) assessed the inter-item correlations of the scale and found that items within the personal subscale had adequate correlations ranging from r = 0.57 to r = 0.62, and items within the extra personal subscale had adequate correlations ranging from r = 0.44 to r = 0.71.

Test-retest:
No evidence.

Inter-rater:
Zoccolotti et al. (1992) found excellent inter-rater reliability for both the personal neglect items and extra personal neglect items of the scale (r = 0.88 and r = 0.96, respectively). However, in this study, raters underwent intense training, which may limit the generalizability of these findings.

Validity

Criterion:
Concurrent:
Zoccolotti et al. (1992) assessed the concurrent validity of the scale by comparing correlations of the personal and extra personal subscales with performance on four standard diagnostic tests for USN: Line Cancellation Test, Letter Cancellation Test, Wundt-Jastrow Area Illusion Test, and Sentence Reading Test. The extra personal subscale correlated with each conventional test (Kendall’s tau = -0.60; -0.52; 0.20; and -0.40, respectively). Performance on the personal subscale did not correlate with performance on these conventional tests. According to the authors, the failure of the personal subscale to correlate with conventional tests suggests that conventional and personal tests measure different dimensions of neglect. The personal subscale requires further validation.

Responsiveness

Zoccolotti et al. (1992) examined the responsiveness of the scale and found that the personal neglect subscale was not responsive to clinical change following rehabilitation; however, the extra personal subscale was responsive to clinical change after rehabilitative treatment.

References

  • Byrne, D. (1967). Progressive Picture Compositions. Picture Set 1. Burn Mill, Harlow: Longman.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Plummer, P., Morris, M. E., Dunai, J. (2003). Assessment of unilateral neglect. Phys Ther, 83(8), 732-740.
  • Tissot, J. Il ballo sulla nave. Reproduction on canvas. Series: Maestri della Tavolozza, n. 1295 HH, Milano: Amilcare Pizzi.
  • Zoccolotti, P., Judica, A. (1991). Functional evaluation of hemineglect by means of a semistructured scale: Personal extrapersonal differentiation. Neuropsychological Rehabilitation, 1, 33-44.
  • Zoccolotti, P., Antonucci, G., Judica, A. (1992). Psychometric characteristics of two semi-structured scales for the functional evaluation of hemi-inattention in extrapersonal and personal space. Neuropsychological Rehabilitation, 2, 179-191.

See the measure

How to obtain the Semi-Structured Scale for the Functional Evaluation of Hemi-inattention?

Not applicable.

Table of contents

Single Letter Cancellation Test (SLCT)

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Single Letter Cancellation Test (SLCT) is used to evaluate the presence and severity of visual scanning deficits and to evaluate unilateral spatial neglect (USN) in the near extra personal space (Diller, Ben-Yishay, Gerstman, Goodkin, Gordon, & Weinberg, 1974).

In-Depth Review

Purpose of the measure

The Single Letter Cancellation Test (SLCT) is used to evaluate the presence and severity of visual scanning deficits and to evaluate unilateral spatial neglect (USN) in the near extra personal space (Diller, Ben-Yishay, Gerstman, Goodkin, Gordon, & Weinberg, 1974).

Available versions

The SLCT was published by Diller et al. in 1974.

Features of the measure

Items:
There are no actual items for the SLCT.

The test consists of one 8.5 x 11-inch sheet of paper containing 6 lines with 52 letters per line. The stimulus letter H is presented 104 times. The page is placed at the patient’s midline. The patient is told to put a line through each H that is found on the page. The time taken to complete the test is recorded.

Scoring:
The score is calculated by subtracting the number of omissions (H’s that were not crossed out) from the possible perfect score of 104 (0 to 53 on the left and 0 to 51 on the right). Higher scores indicate better performance. Presence of USN can be inferred by calculating the frequency of errors to the left or to the right from the center of the page. Omissions of 4 or more have been found to be pathological (Zoccolotti, Antonucci, Judica, Montenero, Pizzamiglio, & Razzano, 1989). Commissions are rarely seen and are therefore not included in the analyses.
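
As a minimal sketch of the scoring rule described above (Python; the function name and the example omission counts are illustrative only, not part of the published test):

def score_slct(left_omissions, right_omissions):
    """Return the SLCT score (104 minus omissions) and whether omissions reach the pathological criterion."""
    total_omissions = left_omissions + right_omissions  # up to 53 on the left, 51 on the right
    score = 104 - total_omissions                       # higher scores indicate better performance
    pathological = total_omissions >= 4                 # criterion reported by Zoccolotti et al. (1989)
    return score, pathological

# A predominance of omissions on one side of the page suggests USN on that side.
print(score_slct(left_omissions=6, right_omissions=1))  # (97, True)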

Normative data has been published by sex and age, based on the results from 341 patients with lesions of the right hemisphere (Gordon, Ruckdeschel-Hibbard, Egelko, Diller, Simmens, & Langer, 1984).

Time:
Less than 5 minutes.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • 8.5 x 11-inch page of paper containing 6 lines with 52 letters per line and the stimulus letter H presented 104 times (53 times on left, 51 on right).
  • Pencil
  • Stopwatch

Alternative forms of the SLCT

None typically reported.

Client suitability

Can be used with: Patients with stroke.

  • Patients must be able to hold a pencil to complete the test (the presence of apraxia may impair this ability).
  • Patients must be able to recognize letters of the alphabet to complete the test.

Should not be used with:

  • The SLCT cannot be used to differentiate between sensory neglect and motor neglect because it requires both visual search and manual exploration (Ladavas, 1994).
  • The SLCT cannot be completed by proxy.

In what languages is the measure available?

The SLCT has been used with English- and French-speaking patients.

A Hebrew version of the SLCT has been used in some studies as part of the Behavioral Inattention Test (e.g. Friedman & Nachman-Katz, 2004).

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the near extra personal space.
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions None.
Other Languages

The SLCT has been used with English- and French-speaking patients. A Hebrew version of the SLCT has been used in some studies as part of the Behavioral Inattention Test.

Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the SLCT.

Test-retest:
One study has examined the test-retest reliability of the SLCT and reported adequate test-retest reliability.

Validity Construct:
Adequate correlation with mean CT-scan damage; adequate to excellent correlations with Albert’s Test, Sentence Reading Test, and the Wundt-Jastrow Area Illusion Test; significant correlation with the Semi-Structured Scale for the Functional Evaluation of Hemi-inattention.
Does the tool detect change in patients? Not applicable.
Acceptability The SLCT should be used as a screening tool rather than for clinical diagnosis of USN. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients must be able to hold a pencil and recognize letters of the alphabet to complete. The SLCT cannot be used to differentiate between sensory neglect and motor neglect.
Feasibility The SLCT requires no specialized training to administer and only minimal equipment is required (a pencil, a stopwatch, and the test paper). The test is simple to score and interpret. A suggested cutoff score for the presence of USN is provided (omissions of 4 or more). The test is placed at the patient’s midline and the time it takes for the patient to complete the test is recorded.
How to obtain the tool?

Please click here to see a copy of the SLCT.

Psychometric Properties

Overview

In general, cancellation tests are believed to have greater test-retest reliability than line bisection tests and are often more sensitive for detecting USN than line bisection tests (Marsh & Kersel, 1993; Azouvi et al., 2002). The SLCT has been reported to have strong psychometric properties, including reliability and validity, in identifying USN in the near extra personal space (Menon & Korner-Bitensky, 2004). For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of SLCT.

Reliability

Internal consistency:
No evidence.

Test-retest:
Gordon, Ruckdeschel-Hibbard, Egelko, Diller, Simmens, and Langer (1984) examined the test-retest reliability of the SLCT using a group of 31 subjects and found adequate test-retest reliability (r = 0.63).

Validity

Construct:
Egelko et al. (1988) found that the SLCT correlated adequately with mean CT-scan damage (r = -0.35).
Note: This correlation is negative because a high score on the SLCT indicates better performance, whereas a high CT-scan score indicates more damage.

Zoccolotti, Antonucci, Judica, Montenero, Pizzamiglio, and Razzano (1989) found that correlations between the SLCT and other visuo-spatial tests (Albert’s Test, Sentence Reading Test, and the Wundt-Jastrow Area Illusion Test) ranged from adequate to excellent (r = 0.36 to r = 0.69). The SLCT was found to be the most sensitive among these tests in detecting USN (from 4.1% to 25%), which may be due to the high density of stimuli used.

Zoccolotti, Antonucci, and Judica (1992) found that the SLCT correlated with the extra personal subscale of the Semi-Structured Scale for the Functional Evaluation of Hemi-inattention (Kendall’s tau = -0.52).
Note: This correlation is negative because a high score on the SLCT indicates better performance, whereas a high score on the Semi-Structured Scale indicates the presence of USN.

Criterion:
No evidence.

Responsiveness

No evidence.

References

  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., et al. (2002). Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. J Neurol Neurosurg Psychiatry, 73, 160 -166.
  • Diller, L., Ben-Yishay, Y., Gerstman, L. J., Goodkin, R., Gordon, W., Weinberg, J. (1974). Studies in scanning behavior in hemiplegia. Rehabilitation Monograph No. 50, Studies in cognition and rehabilitation in hemiplegia. New York: New York University Medical Center, Institute of Rehabilitation Medicine.
  • Egelko, S., Gordon, W. A., Hibbard, M. R., Diller, L., Lieberman, A., Holliday, R., Ragnarsson, K., Shaver, M. S., Orazem, J. (1988). Relationship among CT scans, neurological exam, and neuropsychological test performance in right-brain-damaged stroke patients. J Clin Exp Neuropsychol, 10, 539-564.
  • Friedman, N., Nachman-Katz, I. (2004). Developmental neglect dyslexia in a hebrew-reading child. Cortex, 40(2), 301-313.
  • Gordon, W.A., Ruckdeschel-Hibbard, M., Egelko, S., Diller, L., Simmens, S., Langer, K. (1984). Single Letter Cancellation Test in Evaluation of the Deficits Associated with Right Brain Damage: Normative Data on the Institute of Rehabilitation Medicine Test Battery. New York: New York University Medical Center.
  • Ladavas, E. (1994). The role of visual attention in neglect: A dissociation between perceptual and directional motor neglect. Neuropsychological Rehabilitation, 4, 155-159.
  • Marsh, N. V., Kersel, D. A. (1993). Screening tests for visual neglect following stroke. Neuropsychological Rehabilitation, 3, 245-257.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Zoccolotti, P., Antonucci, G., Judica, A., Montenero, P., Pizzamiglio, L., Razzano, C. (1989). Incidence and evolution of the hemi-neglect disorder in chronic patients with unilateral right brain damage. Int J Neurosci, 47, 209-216.

See the measure

How to obtain the Single Letter Cancellation Test?

Please click here to see a copy of the SLCT.

Table of contents

Star Cancellation Test

Evidence Reviewed as of before: 19-08-2008
Author(s)*: Lisa Zeltzer, MSc OT; Anita Menon, MSc
Editor(s): Nicol Korner-Bitensky, PhD OT; Elissa Sitcoff, BA BSc

Purpose

The Star Cancellation Test is a screening tool that was developed to detect the presence of unilateral spatial neglect (USN) in the near extra personal space in patients with stroke.

In-Depth Review

Purpose of the measure

The Star Cancellation Test is a screening tool that was developed to detect the presence of unilateral spatial neglect (USN) in the near extra personal space in patients with stroke.

Available versions

The Star Cancellation Test was developed by Wilson, Cockburn, and Halligan in 1987.

Features of the measure

Items:
There are no actual items to the Star Cancellation Test. In the Star Cancellation Test, the stimuli are 52 large stars, 13 letters, and 10 short words interspersed with 56 smaller stars (see figure below). The patient must cross out with a pencil all the small stars on an 8.5″ x 11″ piece of paper. Two small stars in the centre are used for demonstration. The page is placed at the patient’s midline.

Scoring:
The maximum score that can be achieved on the test is 54 points (56 small stars in total minus the 2 used for demonstration). A cutoff of < 44 indicates the presence of USN. A Laterality Index or Star Ratio can be calculated from the ratio of stars cancelled on the left of the page to the total number of stars cancelled. Scores between 0 and 0.46 indicate USN in the left hemispace. Scores between 0.54 and 1 indicate USN in the right hemispace.
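
The total score, cutoff, and Laterality Index described above can be sketched as follows (Python; the function name and the example counts are illustrative only, not part of the published test):

def score_star_cancellation(left_cancelled, right_cancelled):
    """Score out of 54 (cutoff < 44 suggests USN) and lateralize USN from the Star Ratio."""
    total = left_cancelled + right_cancelled                 # small stars cancelled, maximum 54
    usn_present = total < 44
    star_ratio = left_cancelled / total if total else None   # stars cancelled on the left / total cancelled
    if star_ratio is None:
        side = "undetermined"
    elif star_ratio <= 0.46:
        side = "left hemispace"                              # 0-0.46 indicates left-sided USN
    elif star_ratio >= 0.54:
        side = "right hemispace"                             # 0.54-1 indicates right-sided USN
    else:
        side = "no clear lateralization"
    return total, usn_present, star_ratio, side

print(score_star_cancellation(left_cancelled=12, right_cancelled=26))
# (38, True, 0.3157..., 'left hemispace')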

Time:
Less than 5 minutes.

Training:
None typically reported.

Subscales:
None.

Equipment:

  • The test paper (8.5″x11″ page with 52 large stars, 13 letters, and 10 short words interspersed with 56 smaller stars).
  • Pencil

Alternative forms of the Star Cancellation Test

Laterally extended version of the Star Cancellation Test (Small, Cowey, & Ellis, 1994). In this version of the test, the section in the traditional version of the Star Cancellation Test extending from the midline to the right side of the page is duplicated at the right end (i.e. the display area is extended twice as far to the right of midline as to the left). The dimensions of the test paper in this version are 41 cm x 21 cm.

Client suitability

Can be used with: Patients with stroke.

  • Patients must be able to hold a pencil to complete the test (the presence of apraxia may impair this ability).
  • Patients must be able to visually discriminate between distractor items such as the words and big stars, and the small stars that are to be cancelled.

Should not be used with:

  • As with other cancellation tests, the Star Cancellation Test cannot be used to differentiate between sensory neglect and motor neglect because it requires both visual search and manual exploration (Ladavas, 1994).
  • The Star Cancellation Test cannot be completed by proxy.

In what languages is the measure available?

The words included in the Star Cancellation Test can be translated into the patients’ native language (Linden, Samuellson, Skoog, & Blomstrand, 2005).

Summary

What does the tool measure? Unilateral Spatial Neglect (USN) in the near extra personal space.
What types of clients can the tool be used for? Patients with stroke.
Is this a screening or assessment tool? Screening.
Time to administer Less than 5 minutes.
Versions Laterally extended version of the Star Cancellation Test
Other Languages The words can be translated into the patients’ native language.
Measurement Properties
Reliability Internal consistency:
No studies have examined the internal consistency of the Star Cancellation Test.

Test-retest:
One study has examined the test-retest reliability of the Star Cancellation Test and reported excellent test-retest reliability.

Inter-rater:
No studies have examined the inter-rater reliability of the Star Cancellation Test.

Validity Criterion:
Predictive:
One study reported that out of 4 different measures of USN (Line Crossing Test, Line Bisection Test, Star Cancellation Test, and Indented Paragraph), the Star Cancellation Test was the best predictor of functional outcome.

Construct:
Convergent:
The Star Cancellation Test correlated adequately with the Barthel Index, the Line Bisection Test, the Clock Drawing Test, and the Copy A Cross Test. The test had an excellent correlation with the Line Crossing Test and with the Indented Paragraph test.

Does the tool detect change in patients? Not applicable.
Acceptability The Star Cancellation Test should be used as a screening tool rather than for clinical diagnosis of USN. Apraxia must be ruled out as this may affect the validity of test results. This test cannot be completed by proxy. Patients must be able to hold a pencil and visually discriminate between distractor items to complete. The measure cannot be used to differentiate between sensory neglect and motor neglect.
Feasibility The Star Cancellation Test requires no specialized training to administer and only minimal equipment is required (a pencil and the test paper). The test is simple to score and interpret (counting the number of small stars cancelled by patient out of a total of 54). A suggested cutoff score for the presence of USN is provided (< 44 indicates the presence of USN). The test is placed at the patient’s midline and the 2 small stars in the middle are used for demonstration.
How to obtain the tool?

Please click here to see a copy of the Star Cancellation Test.

Psychometric Properties

Overview

A review of the Star Cancellation Test reported that the measure has excellent construct and criterion validity, however, little published data exist on the reliability of this measure (Menon & Korner-Bitensky, 2004). In general, cancellation tests with distractors (e.g. the big stars in the Star Cancellation Test) are thought to be more sensitive measures of USN than cancellation tests without distractors (e.g. Albert’s Test), and are believed to have greater test-retest reliability than line bisection tests (Marsh & Kersel, 1993; Azouvi et al., 2002).

For the purposes of this review, we conducted a literature search to identify all relevant publications on the psychometric properties of the Star Cancellation Test.

Reliability

Internal consistency:
No evidence.

Test-retest:
Bailey, Riddoch, and Crome (2004) examined the test-retest reliability of the Star Cancellation Test in elderly patients post-stroke (85 with neglect, 83 without neglect). For patients with neglect, the test-retest reliability was excellent (Intraclass Correlation Coefficient = 0.89).

Intra-rater:
No evidence.

Inter-rater:
No evidence.

Validity

Criterion:
Marsh and Kersel (1993) examined the sensitivity of four different measures of visual neglect (Line Crossing Test, Line Bisection Test, Star Cancellation Test, and Indented Paragraph) in a sample of elderly patients with stroke. The Star Cancellation Test was found to be the most sensitive measure of visual neglect (100%) when compared with the other tests.

Bailey, Riddoch, and Crome (2000) compared cancellation measures of USN (Star Cancellation Test, Line Bisection Test, Baking Tray Task, Exploratory Motor Task, Copying a Daisy and Clock Drawing Task) that were administered to 107 patients with right or left sided brain damage and 43 age-matched controls. In this study, the Star Cancellation Test and Line Bisection Test had the highest relative sensitivity for visuo-spatial neglect (76.4%) when compared to the other cancellation tests.

Halligan, Wilson, and Cockburn (1990) examined the internal structure of the measures that comprise the Behavioural Inattention Test (BIT). Out of the 15 tests that comprise the BIT, the Letter Cancellation Test and Star Cancellation Test were the most sensitive measures, identifying 74% of patients with neglect.

Predictive:
Marsh and Kersel (1993) examined the predictive validity of four different measures of visual neglect (Line Crossing Test, Line Bisection Test, Star Cancellation Test, and Indented Paragraph) in a sample of elderly patients with stroke. In this study, the Star Cancellation Test was found to be the best predictor of functional outcome (r = 0.55) as measured by the modified Barthel Index of Self-Care (Mahoney & Barthel, 1965).

Construct:
Convergent:
Agrell, Dehlin, and Dahlgren (1997) compared five tests of visuo-spatial neglect (Star Cancellation Test, Line Crossing Test, Line Bisection Test, Clock Drawing Test, and Copy A Cross Test) and the Barthel Index in 57 elderly patients using Spearman’s correlation coefficient. In this study, the Star Cancellation Test correlated adequately with the Barthel Index (r = 0.48), the Line Bisection Test (r = -0.33), the Clock Drawing Test (r = -0.47) and the Copy A Cross Test (r = -0.47). The test had an excellent correlation with the Line Crossing Test score (r = 0.63).
Note: Some correlations are negative because a high score on the Star Cancellation Test indicates normal performance, whereas a high score on some other measures of visual neglect indicates the presence of USN.

Marsh and Kersel (1993) examined correlations between four measures of visual neglect (Line Crossing Test, Line Bisection Test, Star Cancellation Test, and Indented Paragraph) in a sample of elderly patients with stroke. Excellent correlations were found between the Star Cancellation Test and the Line Crossing Test (r = 0.68) and with the Indented Paragraph test (r = -0.60). An adequate correlation was found with the Line Bisection Test (r = -0.40).
Note: Some correlations are negative because a high score on the Star Cancellation Test indicates normal performance, whereas a high score on some other measures of visual neglect indicates the presence of USN.

Sensitivity and Specificity:
Jehkonen et al. (1998) examined the specificity of the Star Cancellation Test in 52 patients with a first-ever single acute right hemisphere stroke. In this study, using the specific cutoff score for detecting the presence of USN, the Star Cancellation Test had a diagnostic sensitivity of 80% and a diagnostic specificity of 91%. Of all the USN measures examined in this study (Star Cancellation; Figure Copying; Letter Cancellation; Representational drawing; Line Crossing; Line Bisection), the Star Cancellation Test was the best single test for diagnosing USN, misclassifying only 4 patients with neglect and 3 patients without neglect.

Responsiveness

No evidence.

References

  • Agrell, B. M., Dehlin, O. I., Dahlgren, C. J. (1997). Neglect in elderly stroke patients: a comparison of five tests. Psychiatry Clin Neurosci, 51(5), 295-300.
  • Azouvi, P., Samuel, C., Louis-Dreyfus, A., et al. (2002). Sensitivity of clinical and behavioural tests of spatial neglect after right hemisphere stroke. J Neurol Neurosurg Psychiatry, 73, 160 -166.
  • Bailey, M. J., Riddoch, M. J., Crome, P. (2000). Evaluation of a test battery for hemineglect in elderly stroke patients for use by therapists in clinical practice. NeuroRehabilitation, 14, 139-150.
  • Bailey, M. J., Riddoch, M. J., Crome, P. (2004). Test-retest stability of three tests for unilateral visual neglect in patients with stroke: Star Cancellation, Line Bisection, and the Baking Tray Task. Neuropsychological Rehabilitation, 14(4), 403-419.
  • Halligan, P., Wilson, B., Cockburn, J. (1990). A short screening test for visual neglect in stroke patients. Int Disabil Stud, 12(3), 95-99.
  • Jehkonen, M., Ahonen, J. P., Dastidar, P., Koivisto, A. M., Laippala, P., Vilkki, J. (1998). How to detect visual neglect in acute stroke. The Lancet, 351, 727.
  • Ladavas, E. (1994). The role of visual attention in neglect: A dissociation between perceptual and directional motor neglect. Neuropsychological Rehabilitation, 4, 155-159.
  • Linden, T., Samuellson, H., Skoog, I., Blomstrand, C. (2005). Visual neglect and cognitive impairment in elderly patients late after stroke. Acta Neurol Scand, 111, 163-168.
  • Mahoney, F. I., Barthel, D. W. (1965). Functional evaluation: The Barthel Index. Md State Med J, 14, 61-5.
  • Mark, V. W., Woods, A. J., Ball, K. K., Roth, D. L., Mennemeier, M. (2004). Disorganized search on cancellation is not a consequence of neglect. Neurology, 63, 78-84.
  • Marsh, N. V., Kersel, D. A. (1993). Screening tests for visual neglect following stroke. Neuropsychological Rehabilitation, 3, 245-257.
  • Menon, A., Korner-Bitensky, N. (2004). Evaluating unilateral spatial neglect post stroke: Working your way through the maze of assessment choices. Topics in Stroke Rehabilitation, 11(3), 41-66.
  • Small, M., Cowey, A., Ellis, S. (1994). How lateralized is visuospatial neglect? Neuropsychologia, 32(4), 449-464.
  • Wilson, B., Cockburn, J., Halligan, P. (1987). Development of a behavioral test of visuospatial neglect. Arch Phys Med Rehabil, 68, 98-101.

See the measure

How to obtain the Star Cancellation Test?

Please click here to see a copy of the measure.

Table of contents

Sunnybrook Neglect Assessment Procedure (SNAP)

Evidence Reviewed as of before: 12-04-2018
Author(s)*: Andréanne Labranche
Editor(s): Annabel McDermott OT
Expert Reviewer: Farrell Leibovitch
Content consistency: Gabriel Plumier

Purpose

The Sunnybrook Neglect Assessment Procedure (SNAP) is a test battery to screen for neglect in patients with acute stroke (Black et al., 2016).

In-Depth Review

Purpose of the measure

The Sunnybrook Neglect Assessment Procedure (SNAP) is a test battery for bedside screening of neglect. It is designed to assess neglect in patients with acute stroke and to monitor recovery of neglect at later stages of stroke recovery.

Available versions

There is one version of the SNAP.

Features of the measure

Items:
The SNAP comprises five paper-and-pencil items that are familiar measures of neglect. The items are administered to the patient in the following order:

  1. Spontaneous drawing of clock and daisy
  2. Line cancellation task
  3. Line bisection task
  4. Copying of clock and daisy
  5. Shape cancellation

Description of tasks:
A1. Drawing task
The patient is asked to draw a clock face and a daisy on a blank piece of paper.

B. Line Cancellation Task
The patient is instructed to cross out all lines on a page.

C. Line Bisection Task
The patient is instructed to draw a mark on a line in order to bisect it. This task is completed using 15 cm and 20 cm lines.

A2. Copying Tasks
The patient is asked to copy a picture of a clock and a daisy.
Note: The assessor does not identify that the pictures are a clock and a daisy.

D. Shape Cancellation Task
The patient is required to circle all the targets on a page.
Note: This is a timed task. The patient is given a different color pencil after every tenth target is circled, to determine the search pattern.

Scoring and score interpretation:
The scoring of the SNAP subtests is based on omissions made on the side contralateral to the brain lesion. Left-side omissions are scored in patients with right hemisphere stroke, and right-side omissions are scored in patients with left hemisphere stroke.

A. Copying and drawing tasks
Drawings with a significant omission of details on the contralateral side are scored as having neglect.

B. Line Cancellation
Each omitted line on the contralateral side of the page is scored as neglect.

C. Line Bisection
The score for this task is based on the mean percent deviation of the patient’s mark from the true midpoint. Percent deviation and average deviation are calculated for the four lines of the task, according to the formula stipulated in the SNAP manual.
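
The exact formula is stipulated in the SNAP manual, which should be consulted for scoring. Purely as a generic illustration (an assumption, not the SNAP manual’s formula), a common line-bisection convention expresses the deviation of the mark from the true midpoint as a percentage of half the line length and averages it across the lines presented:

def percent_deviation(mark_mm, line_length_mm):
    """Deviation of the bisection mark from the true midpoint, as a % of half the line length."""
    midpoint = line_length_mm / 2.0
    return (mark_mm - midpoint) / midpoint * 100.0   # positive = rightward, negative = leftward

# Hypothetical marks (mm from the left end) on two 150 mm and two 200 mm lines.
marks = [(95.0, 150.0), (88.0, 150.0), (128.0, 200.0), (119.0, 200.0)]
deviations = [percent_deviation(mark, length) for mark, length in marks]
mean_deviation = sum(deviations) / len(deviations)
print(round(mean_deviation, 1))                      # mean percent deviation across the four lines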

D. Shape Cancellation Task
All targets omitted on the contralateral side of the page are counted.

Total score interpretation:

Scores are calculated using the SNAP scoring manual. The patient is awarded a score for each subtest, resulting in a total score out of 100.

Leibovitch et al. (2012) recommended classifying severity of neglect according to performance on the SNAP as follows (a scoring sketch follows the classification):

  • SNAP score 0-5: Normal performance
  • SNAP score 6-40: Mild neglect
  • SNAP score 41-100: Severe neglect
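
As a minimal sketch (Python), the severity bands above can be applied to a SNAP total score; the function name and the example scores are illustrative only:

def classify_snap(total_score):
    """Map a SNAP total score (0-100) to the severity bands of Leibovitch et al. (2012)."""
    if not 0 <= total_score <= 100:
        raise ValueError("SNAP total score must be between 0 and 100")
    if total_score <= 5:
        return "normal performance"
    if total_score <= 40:
        return "mild neglect"
    return "severe neglect"

print(classify_snap(3), classify_snap(27), classify_snap(65))
# normal performance mild neglect severe neglect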

Please see the SNAP Administration and Scoring Manual for more details on scoring (Black et al., 2016).

What to consider before beginning:
SNAP test items should always be placed in the individual’s midline.

Time:
Time taken to administer the assessment has not been specified. Time will vary according to the patient’s attention and severity of neglect.

Training:
Training requirements have not been specified.

Subscales:
None.

Equipment:

  • SNAP assessment package
  • Two blank sheets of paper
  • Coloured pens/pencils
  • Stopwatch

Alternative forms of the SNAP

The original version of the Sunnybrook Neglect Assessment contained additional drawing tasks, a paragraph reading task, a writing task and additional visual search tasks. The four items that comprise the SNAP were deemed to be complementary and not redundant, so the additional items were eliminated (Leibovitch et al., 2012).

Client suitability

Can be used with:

  • Patients with stroke.

Should not be used with:

  • Not specified.

In what languages is the measure available?

English.

Summary

What does the tool measure? Hemispatial neglect
What types of clients can the tool be used for? Patients in the acute phase of stroke recovery.
What ICF domain is measured? Impairment.
Is this a screening or assessment tool? Screening.
Time to administer Not specified.
Versions There is one version of the SNAP.
Languages English.
Measurement Properties
Reliability Internal Consistency:
One study reported moderate to excellent correlations between subtests and the total score.

Test-retest:
No studies have reported on test-retest reliability among patients with stroke.

Intra-rater/inter-rater:
One study reported adequate to excellent correlations for subtests and the total score.

Validity Criterion:
Concurrent:
No studies have reported on concurrent validity of the SNAP among patients with stroke.

Predictive:
One study reported that the SNAP significantly predicted neglect (present/absent) on the visual search board (VSB) visual search task.

Construct:
– One study reported a moderate correlation among SNAP subtests.
– Two studies have conducted factor analysis and found that all subtests loaded on one factor that accounted for 69-72% of the variance.

Convergent/Discriminant:
– One study reported adequate correlations with the visual search board (VSB) visual search task.
– One study reported a significant correlation between neglect measured by the SNAP and parietal damage.
– One study reported an excellent correlation between the SNAP and a measure of generalized attentional capacity (digit span forward minus digit span backward).

Known groups:
One study reported significant differences in performance on the SNAP according to side of lesion.

Floor/Ceiling Effects No studies have reported on floor/ceiling effects among patients with stroke.
Sensitivity/ Specificity – One study reported 68% sensitivity and 76% specificity.
– Two studies reported that the shape cancellation task was the most sensitive subtest; a third study reported that the line bisection task was the most sensitive subtest.
– One study reported that the drawing/copying subtests showed highest specificity.
Does the tool detect change in patients? The tool does not detect or measure change but it can be used to monitor change in neglect over time.
Acceptability The SNAP is simple to administer and can be used at the individual’s bedside.
Feasibility The SNAP is portable, quick to administer and requires minimal equipment.
How to obtain the tool?

The SNAP administration and scoring manual and test booklet can be accessed here

Psychometric Properties

Overview

A literature search was conducted to identify all relevant publications on the psychometric properties of the Sunnybrook Neglect Assessment Procedure (SNAP) for use with patients with stroke. Five articles were reviewed.

Floor/Ceiling Effects

Leibovitch et al. (2012) examined psychometric properties of the SNAP in a sample of 224 patients with acute stroke and 100 elderly individuals. Results from the population of elderly individuals without stroke showed no omissions of details on the drawing/copying subtests and no omissions on the line cancellation subtest.

Reliability

Internal Consistency:
Leibovitch et al. (2012) examined internal consistency of the SNAP in a sample of 224 patients with acute stroke, using Cronbach’s coefficient alpha. All subtests showed an excellent correlation with the total neglect score (alpha = 0.84, p < 0.001) and correlations between subtests were moderate (alpha < 0.07, p < 0.0005). This indicates that subtests are measuring the same construct but are not redundant.

Test-retest:
There are no studies of test-retest reliability of the SNAP among patients with stroke.

Intra-rater/inter-rater:
Leibovitch et al. (2012) examined intra-rater and inter-rater reliability in a sample of 12 patients with acute stroke and 12 elderly individuals. Intra-rater reliability was measured with 1 rater and inter-rater reliability was measured between 2 raters. Reliability was measured using Kappa statistics for drawing/copying tasks and intraclass correlation coefficients for other scores. The authors reported adequate to excellent correlations for subtests and the total score (r = 0.73-0.99, p < 0.001); specific statistics were not provided.

Validity

Content:
Development of the SNAP is not reported.

Criterion:
Concurrent:
No studies have reported on concurrent validity of the SNAP among patients with stroke.

Predictive:
Leibovitch et al. (2012) examined predictive validity of the SNAP by comparison with the visual search board (VSB) visual search task (Kimura, 1986) in a sample of 224 patients with acute stroke, using logistic regression. Comparison of test results showed that the SNAP significantly predicted neglect (present/absent) on the VSB.

Construct:
Black et al. (1995) examined construct validity of the SNAP in a sample of 294 patients with acute stroke. SNAP subtest scores correlated significantly (r = 0.4-0.6, p = 0.0001). Initial factor analysis showed that all four tests contributed to a single factor which accounted for 72% of the information contained in the four subtests.

Leibovitch et al. (2012) examined construct validity of the SNAP in a sample of 224 patients with acute stroke, using factor analysis of subtest scores. Results revealed all subtests loaded equally on one factor that accounted for 69.4% of the total variance (eigenvalue = 2.8). Further factor analysis according to side of brain injury similarly showed that, among patients with right hemisphere damage, all four subtests loaded equally on one factor that accounted for 69% of the total variance. Results of patients with left hemisphere damage revealed two factors accounting for 62% of the total variance: the first factor comprised three subtests (drawing/copying, line cancellation and shape cancellation) and accounted for 37% of total variance; the second factor (line bisection) accounted for 25% of the total variance.

Convergent/Discriminant:
Leibovitch et al. (2012) examined convergent validity of the SNAP by comparison with the visual search board (VSB) visual search task (Kimura, 1986) in a sample of 224 patients with acute stroke, using Receiver Operating Characteristic analysis. Results showed adequate validity (area under curve = 0.78).

Leibovitch et al. (1998) examined convergent validity of the SNAP by comparison with location and severity of brain damage on CT and SPECT scans in a sample of 120 patients with acute/subacute stroke, using regression analysis. Results showed that damage to the parietal and anterior cingulate cortex was a significant predictor of neglect using CT data (p<0.05), whereas regional blood flow in the parietal cortex was the best predictor of neglect using SPECT data (p<0.05).

Eskes et al. (2003) examined convergent validity of the SNAP by comparison with a measure of generalized attentional capacity (digit span forward minus digit span backward) in a sample of 9 patients with acute to chronic stroke, using Spearman correlation coefficient. Results showed an excellent correlation between measures (r=0.85, p<0.02).

Known groups:
Black et al. (1995) administered the SNAP to 294 patients with acute stroke. Comparison of incidence of neglect between patients with right hemisphere damage vs. patients with left hemisphere damage was 54% vs. 31% (respectively). Performance on SNAP subtests differed significantly between groups: Shape cancellation: 74% vs. 54% (respectively); Line bisection: 61% vs. 35% (respectively); Line cancellation: 26% vs. 7% (respectively); and Drawing: 25% vs. 3% (respectively).

Responsiveness

Sensitivity/specificity:
Black et al. (1990) used a modified version of the SNAP in a sample of 41 patients with acute stroke (n=21 with right hemisphere damage). The tool comprised the standard SNAP subtests as well as two additional tasks (designs cancellation, visual search). Results showed that the line bisection task was the most sensitive subtest, with neglect observed in 76% and 30% of individuals with right and left hemisphere damage respectively. While the line bisection subtest was most likely to detect mild impairment, the line drawing and line cancellation subtests indicated more severe impairment.

Black et al. (1995) administered the SNAP to 294 patients with acute stroke. Comparison of incidence of neglect between patients with right hemisphere damage vs. left hemisphere damage was 54% vs. 31% (respectively). Results indicate that the shape cancellation subtest was the most sensitive subtest, with neglect observed in 74% and 54% of individuals with right and left hemisphere damage respectively. The line bisection subtest revealed neglect in 61% and 35% of individuals with right and left hemisphere damage respectively.

Leibovitch et al. (2012) evaluated sensitivity and specificity of the SNAP in a sample of 224 patients with acute stroke, using the visual search board (VSB) visual search task (Kimura, 1986) to confirm neglect. Overall, the SNAP showed 68% sensitivity and 76% specificity. The shape cancellation task showed highest sensitivity (70% sensitivity); the drawing/copying tasks showed highest specificity (99% specificity).

References

  • Black, S.E., Vu, B., Martin, D., & Szalai, J.P. (1990). Evaluation of a bedside battery for hemispatial neglect in acute stroke [Abstract]. Journal of Clinical and Experimental Neuropsychology, 12, 109.
  • Black, S., Ebert, P. L., Leibovitch, F., Szalai, J. P., & Blair, N. (1995). Recovery in hemispatial neglect [Abstract]. Neurology, 45(suppl 4), A178.
  • Black, S. E., Leibovitch, F. S., Ebert, P. L., & Beresford, K. L. (2016). SNAP: Sunnybrook Neglect Assessment Procedure Administration and Scoring Manual.
  • Eskes, G.A., Butler, B., McDonald, A., Harrison, E.R., & Phillips, S.J. (2003). Limb activation effects in hemispatial neglect. Archives of Physical Medicine and Rehabilitation, 84, 323-8.
  • Leibovitch, F.S., Black, S.E., Caldwell, C.B., Ebert, P.L., Ehrlich, L.E., & Szalai, J.P. (1998). Brain-behaviour correlations in hemispatial neglect using CT and SPECT: the Sunnybrook stroke study. Neurology, 50, 901-8.
  • Leibovitch, F. S., Vasquez, B. P., Ebert, P. L., Beresford, K. L., & Black, S. E. (2012). A short bedside battery for visuoconstructive hemispatial neglect: Sunnybrook Neglect Assessment Procedure (SNAP). Journal of Clinical and Experimental Neuropsychology, 34(4), 359-68. doi:10.1080/13803395.2011.645016
  • Menon-Nair, A., Korner-Bitensky, N., & Ogourtsova, T. (2007). Occupational Therapists’ identification, assessment, and treatment of unilateral spatial neglect during stroke rehabilitation in Canada. Stroke, 38, 2556-62. DOI: 10.1161/STROKEAHA.107.484857

See the measure

How to obtain the SNAP

The SNAP administration and scoring manual and test booklet can be accessed here.

Table of contents

Visual Impairment Screening Assessment (VISA)

Evidence Reviewed as of before: 20-01-2023
Author(s)*: Annabel McDermott, OT
Editor(s): Annie Rochette
Expert Reviewer: Fiona Rowe

Purpose

The Visual Impairment Screening Assessment (VISA) is designed to identify visual impairment following stroke, to allow referral for specialist visual assessment. The VISA was developed by the VISION research unit, University of Liverpool.

In-Depth Review

Purpose of the measure

The Visual Impairment Screening Assessment (VISA) is designed to identify visual impairment following stroke. The VISA screens for common visual impairments following stroke including impaired central vision, eye movement problems, visual field deficits and visual inattention. The VISA can be used to detect ocular signs separate from reporting of vision symptoms.

Available versions

The VISA was developed from a review of stroke and vision research studies, and in collaboration with a panel of stroke specialists and patients, and validated in a clinical study.

The VISA is available in print and as a software app.

Features of the measure

Items:

The VISA comprises five sections:

  1. Case history – to screen for visual symptoms and observed signs – in person or by proxy.
  2. Visual acuity – to screen central vision at near (33cm) and distance (3m) using LogMAR (Logarithm of the Minimum Angle of Resolution) or Grating acuity; monocular or binocular depending on the ability of the patient.
  3. Ocular alignment and movement – to screen presence/absence of strabismus and eye movement problems.
  4. Visual field* – to screen peripheral and central field of vision by a guided confrontation method.
  5. Visual perception – to screen for visual inattention/neglect using (i) line bisection task, (ii) cancellation task and (iii) clock drawing assessment.

*Visual field assessment – print version: confrontation follows a typical method with the clinician seated directly opposite the patient at a distance of 1m and following stages that involve the patient indicating when a 10mm red target is seen in the periphery of their vision, finger counting in each quadrant of the visual field and comparison of examiner facial features.

*Visual field assessment – app version: a kinetic visual field assessment is undertaken at a test distance of 30cm and a screen width of 24.6cm, allowing an assessment of the 40-degree visual field. The patient is asked to fixate a static fixation point in the corner of the screen while a stimulus moves in from the other edges, and to tap the tablet screen when the stimulus is seen. This is repeated with the fixation target positioned at all four corners of the screen.

Scoring:

Administration of the VISA screening tool does not result in a score. Rather, the tool serves as a guide for referral for specialist visual assessment as per observations outlined in the VISA Instructions for Use booklet.

What to consider before beginning:

The VISA screens for common forms of visual impairment that occur from brain injury but does not screen for all possible visual impairments. As such, a negative screen does not rule out the presence of visual impairment.

The individual may not be able to complete all sections of the VISA at one time (e.g. due to fatigue, cognitive difficulties, communication difficulties). In this instance the VISA can be completed over several visits.

The individual is permitted to wear glasses (if required) for some assessment items.

The VISA must be performed in good lighting conditions.

Time:

The VISA takes approximately 10 minutes to administer, but may take longer if the individual has multiple visual problems or associated cognitive issues.

Equipment:

Equipment is outlined in the VISA Instructions for Use booklet and includes:

  • Pen torch
  • Occlusive tape
  • 10mm red target
  • +3.00 power reading glasses
  • 3 metre string/tape measure
  • Matching card for visual acuity check
  • Visual attention worksheets
  • Pencil
  • VISA recording sheet

Client suitability

Can be used with:

Individuals with acute stroke

Individuals with communication/cognitive difficulties who are unable to comply with any letter test – the assessor can use a grating chart that uses a preferential looking technique.

Stroke patients have reported that it is easier to respond using the touch screen (VISA app) than traditional pen and paper tasks when using their non-dominant hand.

Should not be used with:

The VISA may not be completed in full due to cognitive difficulty or fatigue; in this case, information regarding the individual’s vision history can be gathered from reliable family members. The VISA app cannot be used on small devices such as the iPad Mini or smartphones.

Languages of the measure

English – print and app
Dutch – print
Norwegian – print

Requests for translations are welcome. The VISA researchers will work closely with translators using the WHO-recommended translation process.

Summary

What does the tool measure? Visual impairment
What types of clients can the tool be used for? The Visual Impairment Screening Assessment can be used with individuals with stroke.
Is this a screening or assessment tool? Screening
Time to administer 10 minutes – may be longer if multiple visual problems
ICF Domain Impairment
Versions There is a print version and an app version of the VISA.
Languages English
Dutch
Norwegian
Measurement Properties
Reliability Internal consistency:
No studies have reported on internal consistency of the VISA.
Test-retest:
No studies have reported on test-retest reliability of the VISA.
Intra-rater:
No studies have reported on intra-rater reliability of the VISA.
Inter-rater:
Two studies reported substantial inter-rater agreement for the VISA.
Validity Content:
Pilot validation of the VISA was conducted in collaboration with medical students (naïve screeners) and orthoptists.
Criterion:
Concurrent:
No studies have reported on concurrent validity of the VISA.
Predictive:
No studies have reported on predictive validity of the VISA.
Construct:
Convergent/Discriminant:
Two studies reported poor to substantial test component agreement between the VISA screen and specialist vision assessments.
One study reported perfect agreement between kinetic visual field test using the VISA app and formal perimetry.
Known Groups:
No studies have reported on known group validity of the VISA.
Floor/Ceiling Effects No studies have reported on floor/ceiling effects of the VISA. Two studies noted false positives and false negatives on individual components of the tool.
Does the tool detect change? No studies have reported on responsiveness of the VISA.
Acceptability Two studies reported on acceptability of the VISA, from a sample of stroke patients and orthoptists.
Feasibility The VISA is suitable for administration in various settings. The VISA requires minimal specialist equipment or training.
One study noted that the VISA is time-intensive when used in the hyperacute stage with unwell patients.
How to obtain the tool? The VISA is available in print or as an app (Medicines and Healthcare products Regulatory Agency regulatory approved): https://www.liverpool.ac.uk/population-health/research/groups/vision/visa/?

Further information regarding the tool and administration guidelines can be found here: https://youtu.be/-s6i–PfXNY

Psychometric Properties

Overview

The Visual Impairment Screening Assessment (VISA) was developed by the VISION Research Unit, University of Liverpool in consultation with an expert panel of stroke-specialist clinical orthoptists, stroke research orthoptists, stroke-specialist occupational therapists and neuro-ophthalmologists (Rowe et al., 2018). A literature search was conducted to identify all relevant publications on the psychometric properties of the VISA pertinent to use with participants following stroke. Two studies were identified.

Floor/Ceiling Effects

Floor/ceiling effects of the VISA have not been measured.

Reliability

Internal consistency:
Internal consistency of the VISA has not been measured.

Test-retest:
Test-retest reliability of the VISA has not been measured.

Intra-rater:
Intra-rater reliability of the VISA has not been measured.

Inter-rater:
Rowe et al. (2018) examined inter-rater agreement of the VISA in a sample of 116 individuals with stroke, whereby each individual underwent two vision assessments: a specialist vision assessment performed by orthoptists/ophthalmologists (n=5) and the VISA screening assessment, completed by medical students (n=2) and orthoptists (n=4). Agreement regarding the need to refer to specialist eye services due to visual impairment was measured using kappa values. Overall agreement was substantial (k=0.736, 95% CI 0.602 to 0.870). As expected, higher rates of false positives and false negatives were found among screeners naïve to vision testing (n=2 medical students) than among experienced screeners (n=5 orthoptists/ophthalmologists).

Rowe et al. (2020) examined inter-rater agreement of the VISA in a sample of 221 individuals with stroke, whereby each individual underwent two vision assessments: a specialist vision assessment performed by orthoptists/ophthalmologists and the VISA screening assessment. The outcome was the presence/absence of visual impairment*. Agreement was substantial for the VISA print (k=0.648, 95% CI 0.424 to 0.872) and for the VISA app (k=0.690, 95% CI 0.528 to 0.851).

*Presence/absence of visual impairment was defined as one or more of: reduced distance vision (<0.2 logMAR), reduced near vision (<0.3 logMAR, equivalent to N6), deviated eye position, eye movement abnormality (incomplete eye rotations in any position of gaze), visual field loss, visual inattention with displaced line bisection, a score of <42 on the cancellation task, and/or incomplete/displaced clock drawing.
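
The kappa statistics quoted above index agreement beyond chance between the screener’s decision and the specialist assessment. The sketch below is a minimal illustration only of Cohen’s kappa for a binary refer/do-not-refer decision; all counts are hypothetical and are not taken from the Rowe et al. studies.

    # Minimal sketch of Cohen's kappa for two raters making a binary
    # refer / do-not-refer decision. All counts are hypothetical and are
    # NOT taken from the Rowe et al. studies.

    def cohens_kappa(both_yes, rater1_only, rater2_only, both_no):
        """Cohen's kappa from a 2x2 agreement table of two binary ratings."""
        n = both_yes + rater1_only + rater2_only + both_no
        observed = (both_yes + both_no) / n            # p_o: observed agreement
        p_yes_1 = (both_yes + rater1_only) / n         # marginal "refer" rate, rater 1
        p_yes_2 = (both_yes + rater2_only) / n         # marginal "refer" rate, rater 2
        expected = p_yes_1 * p_yes_2 + (1 - p_yes_1) * (1 - p_yes_2)  # p_e: chance agreement
        return (observed - expected) / (1 - expected)

    # Hypothetical example: 70 patients both raters would refer, 30 neither
    # would refer, and 16 disagreements.
    print(round(cohens_kappa(both_yes=70, rater1_only=9, rater2_only=7, both_no=30), 3))  # 0.687

Kappa values between 0.61 and 0.80 are conventionally described as substantial agreement.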

Validity

Content:

Pilot validation of the VISA was conducted in collaboration with medical students (naïve screeners) and orthoptists; independent specialist vision assessment was performed by orthoptists/ophthalmologists. Written and verbal feedback was gathered from screeners and a thematic approach to analysis of the qualitative data was used. A modified grounded theory approach was adopted to revise themes iteratively as analyses continued (Rowe et al., 2018).

Criterion:

Concurrent:
Concurrent validity of the VISA has not been measured.

Predictive:
Predictive validity of the VISA has not been measured.

Construct:

Convergent/Discriminant:
Rowe et al. (2018) examined test component agreement between the VISA screen and specialist vision assessments (visual acuity, ocular alignment and movement, visual fields, visual perception) in a sample of 116 individuals with stroke, using kappa values. Agreement of items ranged from poor to substantial:

  • Near visual acuity (k=0.682, CI 0.543 to 0.820; 10 false negatives, 7 false positives)
  • Distance visual acuity (k=0.785, CI 0.665 to 0.904; 8 false negatives, 3 false positives)
  • Ocular alignment (k=0.585, CI 0.221 to 0.949; 4 false negatives, 0 false positives)
  • Ocular motility (k=0.120, CI -0.071 to 0.311; 21 false negatives, 6 false positives)
  • Visual fields (k=0.741, CI 0.599 to 0.884; 3 false negatives, 8 false positives)
  • Visual inattention (k=0.361, CI 0.144 to 0.578; 1 false negative, 16 false positives).

Rowe et al. (2020) examined test component agreement of the VISA print and VISA app screening assessments with specialist vision assessments (visual acuity, ocular alignment and movement, visual fields, visual perception) in a sample of 221 individuals with stroke, using kappa values. Agreement of individual components between VISA print and orthoptic vision assessment ranged from poor to moderate:

  • Near visual acuity (k=0.236, CI 0.045 to 0.427; 23 false negatives, 12 false positives)
  • Distance visual acuity (k=0.565, CI 0.405 to 0.725; 9 false negatives, 13 false positives)
  • Ocular alignment (k=0.388, CI 0.110 to 0.667; 5 false negatives, 7 false positives)
  • Ocular motility (k=0.365, CI 0.181 to 0.553; 10 false negatives, 19 false positives)
  • Visual fields (k=0.504, CI 0.339 to 0.668; 7 false negatives, 18 false positives)
  • Visual inattention (k=0.500, CI 0.340 to 0.659; 7 false negatives, 21 false positives).

Agreement of individual components between VISA app and orthoptic vision assessment ranged from fair to substantial:

  • Near visual acuity (k=0.416, CI 0.227 to 0.605; 19 false negatives, 3 false positives)
  • Distance visual acuity (k=0.783, CI 0.656 to 0.910; 6 false negatives, 4 false positives)
  • Visual fields (k=0.701, CI 0.564 to 0.838; 3 false negatives, 12 false positives)
  • Visual inattention (k=0.323, CI 0.108 to 0.538; 6 false negatives, 16 false positives).

Rowe et al. (2020) examined agreement between the kinetic visual field test on the VISA app and formal perimetry (binocular Esterman programme) in 25 individuals with stroke, using kappa values. There was perfect agreement (k=1.00) between measures.

Known Group:
Known group validity of the VISA has not been measured.

Responsiveness

Sensitivity & Specificity:
Rowe et al. (2018) examined sensitivity and specificity of the VISA in a sample of 89 individuals with stroke by comparison with a binary assessment of the presence/absence of visual impairment (low vision <0.2 logMAR, visual field loss, eye movement abnormality, visual perception abnormality). Sensitivity was defined as the proportion of patients with visual impairment who were correctly identified by the screener; sensitivity was 90.24%. Specificity was defined as the proportion of patients without visual impairment who were correctly identified by the screener; specificity was 85.29%. The positive and negative predictive values were 93.67% and 78.36%, respectively.

Rowe et al. (2018) also compared sensitivity and specificity of the VISA when performed by naïve screeners (n=2 medical students) vs. experienced screeners (n=5 orthoptists/ophthalmologists). When used by a naïve screener the VISA screen had a sensitivity of 82.93% and specificity of 80.95%; when used by an experienced screener the VISA screen had a sensitivity of 97.56% and specificity of 92.31%.

Rowe et al. (2020) examined sensitivity and specificity of the VISA in a sample of 221 individuals with stroke. Sensitivity was estimated as the proportion of patients with visual impairment, as diagnosed by the gold-standard clinical examination, who were correctly identified by the screener; sensitivity of the VISA print and VISA app was 97.67% and 88.31%, respectively. Specificity was estimated as the proportion of patients without visual impairment who were correctly identified by the screener; specificity of the VISA print and VISA app was 60.00% and 86.96%, respectively. The positive and negative predictive values of the VISA print were 93.33% and 81.82%; the positive and negative predictive values of the VISA app were 95.77% and 68.97%, respectively.
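
As a worked illustration of how these figures follow from a 2x2 screening table, the sketch below computes sensitivity, specificity and the positive/negative predictive values from true/false positive and negative counts; the counts used are hypothetical and are not the Rowe et al. data.

    # Minimal sketch of the diagnostic-accuracy quantities reported above,
    # computed from a 2x2 table of screening result vs. the specialist
    # (reference) vision assessment. Counts are hypothetical and are NOT
    # the Rowe et al. data.

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Return sensitivity, specificity, PPV and NPV as percentages."""
        sensitivity = tp / (tp + fn) * 100  # impaired patients correctly flagged by the screen
        specificity = tn / (tn + fp) * 100  # unimpaired patients correctly passed by the screen
        ppv = tp / (tp + fp) * 100          # chance impairment is present given a positive screen
        npv = tn / (tn + fn) * 100          # chance impairment is absent given a negative screen
        return sensitivity, specificity, ppv, npv

    # Hypothetical example: 90 true positives, 10 false positives,
    # 10 false negatives, 40 true negatives.
    sens, spec, ppv, npv = diagnostic_accuracy(tp=90, fp=10, fn=10, tn=40)
    print(f"sensitivity {sens:.1f}%, specificity {spec:.1f}%, PPV {ppv:.1f}%, NPV {npv:.1f}%")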

Acceptability:
Rowe et al. (2018) examined acceptability of the VISA tool through a process evaluation of written feedback and interviews with stroke patients and stroke specialists. Qualitative data regarding the number of items, clarity of questions, and time and ease of testing were gathered and analysed using a thematic approach; a modified grounded theory approach was subsequently used to revise themes as interviews and analyses progressed.

Rowe et al. (2020) examined acceptability of the VISA tool through a process evaluation of clinician feedback sheets and interviews with stroke patients. Qualitative feedback regarding the duration of assessment, the presentation of tests on the VISA app and the referral guides was received.

References

Rowe, F.J., Hepworth, L.R., Hanna, K.L., & Howard, C. (2018). Visual Impairment Screening Assessment (VISA) tool: pilot validation. BMJ Open, 8:e020562. doi:10.1136/bmjopen-2017-020562

Rowe, F.J., Hepworth, L., Howard, C., Bruce, A., Smerdon, V., Payne, T., Jimmieson, P., & Burnside, G. (2020). Vision Screening Assessment (VISA) tool: diagnostic accuracy validation of a novel screening tool in detecting visual impairment among stroke survivors. BMJ Open, 10:e033639. doi:10.1136/bmjopen-2019-033639

See the measure

How to obtain the Visual Impairment Screening Assessment (VISA)

The VISA is available in print or as an app (approved by the Medicines and Healthcare products Regulatory Agency).

The VISA Instructions for Use booklet can be found here: https://www.liverpool.ac.uk/population-health/research/groups/vision/visa/?

The VISA Instructions for Use video (“VISA stroke vision screening video”) can be found on YouTube.
