
Psicologia: teoria e prática

versão impressa ISSN 1516-3687

Psicol. teor. prat. vol.26 no.1 São Paulo 2024 Epub Dec 16, 2024

https://doi.org/10.5935/1980-6906/eptppa15263.en 

Psychological Assessment

Specific Learning Disorder Rating Scale: Development and evidence of validity and reliability


1Federal University of Bahia (Universidade Federal da Bahia [UFBA]), Salvador, Bahia, Brazil

2Feira de Santana State University (Universidade Estadual de Feira de Santana [UEFS]), Feira de Santana, Bahia, Brazil


Abstract

Specific Learning Disorder (SLD) affects the acquisition of reading, writing, and mathematics skills, with a prevalence of around 5% in school-age children. In the Brazilian context, there is a shortage of scales for screening SLD symptoms. Therefore, this study aimed to generate the items and present evidence of validity and reliability of the Escala de Avaliação do Transtorno Específico da Aprendizagem (ESATA), a Likert-type scale developed for 2nd- to 5th-grade teachers. Eighty items related to the domains of reading, written expression, and mathematics were developed, and theoretical and empirical analyses were performed. After the expert panel review, 76 items presented a Content Validity Index (CVI) ranging from .86 to 1, with a total CVI of .98. The semantic analysis showed that the items were easily understood. An Exploratory Factor Analysis (EFA), performed on data collected from 308 2nd- to 5th-grade teachers from 19 Brazilian states, showed a better fit of the data in a bifactorial structure: the first factor, named Reading and Writing, with loadings ranging from .44 to .89, and the second factor, named Mathematics, with loadings ranging from .57 to .94. Together, the two factors explained 58% of the variance. Cronbach's α for the full scale was .99, indicating excellent reliability of the ESATA with 74 items.

Keywords: specific learning disorder; dyslexia; dyscalculia; rating scale; neuropsychology


Specific Learning Disorder (SLD) is a neurodevelopmental clinical condition characterized by a marked difficulty in acquiring the academic skills of reading, writing, and mathematics. According to the DSM-5-TR, the symptoms are conceived as phenotypes of cognitive impairments associated with neurobiological and environmental factors, with a prevalence of around 5 to 15% in school-age children (APA, 2022).

Characteristic symptoms of SLD include: 1) inaccurate or slow and effortful reading of words; 2) difficulty in comprehending the meaning of what is read; 3) spelling difficulties; 4) difficulties with written expression; 5) difficulties in mastering numerical sense, arithmetic facts, or calculations; and 6) difficulties in mathematical reasoning. For its diagnosis, the DSM-5-TR stipulates that at least one of these symptoms must be present, with a minimum persistence of 6 months, and that the difficulties are not due to intellectual, physical, or sensory limitations, psychosocial adversities, deprivation, or inadequate pedagogical methods (APA, 2022).

In the current edition of the International Classification of Diseases, ICD-11, SLD, now referred to as Developmental Learning Disorder (6A03), is described as a condition in which the learning of basic academic skills is compromised even when adequate instruction is provided, emphasizing the persistent nature of the disorder (World Health Organization, 2022).

Regarding the potential manifestations, SLD can be specified, according to the compromised domain, as follows: a) impairment in reading, where learning difficulties involve word reading accuracy, fluency, and reading comprehension; b) impairment in written expression, compromising spelling accuracy, grammar, punctuation, and clarity or organization of written expression; and c) impairment in mathematics, when difficulties involve numerical sense, memorization of arithmetic facts, calculation fluency, and mathematical reasoning (APA, 2022).

In clinical and educational practice, these impairments tend to be associated with negative short-term and long-term outcomes, especially when the individual does not receive a timely diagnosis and intervention. In the literature, reported impacts include reduced self-esteem and self-efficacy, social adjustment difficulties, school dropout, and unemployment (Livingston, Siegel & Ribary, 2018).

Accordingly, early diagnosis becomes crucial in determining the extent to which symptoms impair the individual’s functioning, identifying critical difficulties and potentials, and investigating the presence of comorbidities that may exacerbate the condition (Sanfilippo et al., 2020). For example, it is estimated that approximately 50% of individuals with Specific Learning Disorders also fulfill the diagnostic criteria for ADHD, highlighting the importance of a more comprehensive, preferably multidisciplinary assessment (Langer, Benjamin, Becker & Gaab, 2019).

In the context of Neuropsychology, when the purpose is to evaluate suspected SLD, the neuropsychological examination may involve the use of rating scales (APA, 2022; Haberstroh & Schulte-Körne, 2019). Using such instruments is important because it complements the data obtained through testing, lending the assessment greater ecological validity, i.e., a closer approximation to the real-life situations experienced by the individual in their daily functioning. In addition to measuring the frequency and intensity of symptoms, scales allow for monitoring the effects of interventions (Kyriazos & Stalikas, 2018).

Considering the literature on the assessment and diagnostic instruments for SLD, in Brazil, as of the present moment, there are no assessment scales available specifically for screening symptoms in school-age children that allow for the comprehensive measurement of difficulties in all three domains of academic skills and have appropriate psychometric qualities for use (Pinheiro, Marques & Leite, 2018). Therefore, to contribute to the screening of symptoms related to SLD in school-age individuals, in both clinical and educational contexts, as well as in research, the present study aims to construct and seek evidence of the validity and reliability of the Specific Learning Disorder Rating Scale (Escala de Avaliação do Transtorno Específico da Aprendizagem - ESATA).

A Likert-type scale, the ESATA is intended to encompass the symptoms commonly observed in individuals with the disorder, covering the domains of reading, writing, and mathematics. The instrument is designed to be completed by teachers assessing SLD symptoms in children aged 7 to 12 years, students in the 2nd to 5th grades of Elementary School I.

Method

The present study is cross-sectional and employs both qualitative and quantitative procedures. For the research to be conducted, the project was submitted to and approved by the Research Ethics Committee of the Federal University of Bahia.

Participants

In the judges’ analysis, participants were seven professionals with extensive practical experience and theoretical knowledge of SLD, including five psychologists specializing in Neuropsychology and two pediatric neurologists, with the following levels of expertise: one specialist, one MSc holder, and five PhD holders. The judges were selected by convenience and invited to participate in the research. For the semantic analysis, eight teachers from public and private school networks were chosen by convenience (see Table 1).

Table 1 Characterization of the sample of participants in the semantic analysis by educational network and years of experience.

Group             n   Public (%)   Private (%)   Years of experience, M (SD)   Median
2nd-grade group   4   75.0         25.0          10.5 (7.4)                     8.5
5th-grade group   4   50.0         50.0          22.7 (4.2)                    21.0
Total             8   62.5         37.5          16.6 (8.6)                    20.0

For the Exploratory Factor Analysis (EFA) stage, 308 teachers from 19 Brazilian states participated. The sample had a mean of 14.5 years of teaching experience (SD = 9.6), with 67.2% holding some level of specialization. The sample characterization, including teachers and their respective students, is presented in Table 2.

Table 2 Characterization of the EFA participants.

            n (male)   n (female)   Age, M (SD)   Public (%)   Private (%)   Rural (%)   Urban (%)
Teachers    9          299          42.9 (8.7)    69.5         30.5          11.4        88.6
Students    217        91           8.8 (1.5)     –            –             –           –

Procedures

Operationalization of the construct

A literature review was conducted to identify the core symptoms that define SLD to develop candidate items for the ESATA. The search was performed in the PubMed, LILACS, SciELO, and CAPES Periodicals Portal databases, using the terms: specific learning disorder, dyslexia, dyscalculia, diagnosis, and assessment, as verified in the DeCS and MeSH systems to select descriptors that best suited the review’s objective. The following inclusion criteria were adopted: articles related to Specific Learning Disorder, studies involving the age range between six and 12 years, articles published within the previous ten years, and written in English or Portuguese. The following exclusion criteria were adopted: studies involving comorbidities, acquired learning difficulties, analyses exclusively on cognitive, genetic, or neurobiological levels, studies with preschool children, adolescents, or adults.

Judges’ analysis

For the item analysis, the participants were sent an electronic form containing a brief presentation of the disorder, the purpose of the analysis, the characterization of the study and the ESATA, evaluation instructions, and the items. They were asked to individually analyze the items, considering how representative each item would be of SLD and how essential it would be for the scale’s composition. The following response options were provided: “essential,” “useful but not essential,” and “not necessary,” as proposed by Cohen, Swerdlik, and Sturman (2014). In the form, the items were organized according to the domain to which they belonged - Reading, Written Expression, and Mathematics. The judges were also asked to make modifications and suggest items if they deemed it appropriate.

After obtaining the responses, the Content Validity Index, or CVI, was calculated, which is a measure of the degree of agreement among judges regarding the relevance and pertinence of items to the construct and the instrument. As a content validity measure, the CVI allows the analysis of each item individually and of the instrument as a whole. The formula to evaluate individual items consists of dividing the relevant responses by the total responses. For the analysis of the instrument as a whole, one of the alternatives is to calculate the quotient between the total items evaluated as relevant and the total items in the instrument. Concerning the value of the CVI, it is assumed that the closer it is to 1, the more valid the instrument is regarding its content. Generally, an acceptable CVI should have a minimum value of .80 (Yusoff, 2019).
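The CVI arithmetic described above can be sketched in a few lines. The function names are illustrative, and treating "useful but not essential" responses as relevant, as well as the .80 cutoff for the scale-level quotient, are assumptions drawn from the description rather than a published scoring script:

```python
RELEVANT = {"essential", "useful but not essential"}

def item_cvi(ratings):
    # Item-level CVI: proportion of judges who rated the item as relevant.
    return sum(r in RELEVANT for r in ratings) / len(ratings)

def scale_cvi(all_ratings, cutoff=0.80):
    # Scale-level CVI: quotient of items judged relevant (item CVI at or
    # above the cutoff) over the total number of items.
    cvis = [item_cvi(ratings) for ratings in all_ratings]
    return sum(c >= cutoff for c in cvis) / len(cvis)

# With seven judges, a single "not necessary" vote yields 6/7, i.e. the
# .86 value reported for several items.
ratings = ["essential"] * 6 + ["not necessary"]
print(round(item_cvi(ratings), 2))  # 0.86
```

With seven judges, an item can only take CVI values of 7/7 = 1, 6/7 ≈ .86, 5/7 ≈ .71, and so on, which matches the discrete values reported in the Results.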

Semantic analysis

This analysis aimed to assess how easily teachers, the target audience of the instrument, understood the items. As Pasquali (2003) recommended, two groups were formed, with four teachers in the 2nd-grade group and four in the 5th-grade group. After obtaining their consent, two meetings were held, one with the 2nd-grade group and another with the 5th-grade group.

In the meeting, the ESATA items were presented orally, one at a time, and the participants were asked to paraphrase in their own words what they had understood from each item, associating them with their classroom experiences with students. They were also asked to assess the clarity of the terms used and suggest revisions if any word was difficult to understand. If there were discrepancies between the researcher’s intended understanding and what was obtained, the item would undergo a new revision, and if comprehension difficulties persisted, the item would be discarded. Due to social distancing measures in response to the COVID-19 pandemic, the meetings took place virtually, with prior assurance of the participants’ suitable environment and internet connection quality.

Exploratory Factor Analysis and reliability analysis

In this stage, the ESATA was made available as an online form to be answered by teachers from the 2nd to the 5th grade. Each teacher's task was to respond to the ESATA concerning a student in their class, with or without learning difficulties. To recruit teachers, the form's link was distributed through various online channels, including Instagram pages, Facebook and WhatsApp groups, and email. Partnerships were also established with public and private schools in Salvador and Alagoinhas, in Bahia. Data collection took place between December 2019 and July 2021.

The form included a brief description of the study's objectives and procedures, a section requesting the teacher's consent to participate, and information about data confidentiality. There was also a section for the teachers to provide sociodemographic data for themselves and their respective students. After giving consent, participants received instructions for filling out the form and the items. Each item, following the Likert scale model, had response options in the following categories: "never," indicating that the child definitely does not exhibit the characteristic; "rarely," indicating that the child rarely exhibits the characteristic; "sometimes," indicating that the child exhibits the characteristic on occasion; "frequently," indicating that the child exhibits the characteristic most of the time; and "always," indicating that the child definitely exhibits the characteristic.
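The five response categories can be coded numerically for analysis. The 0-4 coding below is an assumption for illustration; the article does not report the scoring key actually used:

```python
# Hypothetical numeric coding of the ESATA response categories (0-4);
# the published scale may use a different key.
LIKERT = {"never": 0, "rarely": 1, "sometimes": 2, "frequently": 3, "always": 4}

def score_responses(responses):
    # Map a teacher's category choices to numeric item scores.
    return [LIKERT[r] for r in responses]

scores = score_responses(["never", "sometimes", "always"])  # → [0, 2, 4]
```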

To verify the factor structure of the ESATA, EFA was conducted using the Principal Axis Factoring (PAF) method and parallel analysis technique for factor retention. Oblimin rotation was chosen assuming some level of correlation between the data. The JASP software, version 0.14.1, was used for the analyses. Based on the data collected for the EFA, internal consistency analysis of the ESATA was conducted using Cronbach’s alpha.

Results

The literature search in the databases resulted in 1,765 articles using the specified descriptors. Out of these, 45 were selected for detailed analysis after applying the exclusion criteria. In addition to the articles and the DSM-5 (APA, 2013), bibliographic references on the topic published in Brazil were also consulted to expand the item options. In total, 80 items were developed, with 31 related to reading impairment, 21 related to writing impairment, and 28 related to mathematics impairment. These items were sent via electronic form for the judges’ analysis.

In the analysis, the judges suggested reformulating some of the terms used (see Table 3). They recommended that the items include examples of related situations to improve clarity and facilitate understanding by the teachers. It was also suggested that the items be organized taking into account the difficulties still expected in the early years of Elementary School I.

Table 3 Reformulations of the ESATA items as suggested by the judges.

Item: "Does not name the letters"
Suggested reformulation: "Has difficulty naming letters (even some)"
Final wording: "Has difficulty naming the letters"

Item: "Does not understand texts just read"
Suggested reformulation: "Does not understand texts just read, even if they are short"
Final wording: "Does not understand texts just read, even if they are short"

Item: "Adds letters to words"
Suggested reformulation: "Adds letters or syllables to words"
Final wording: "Adds letters or syllables to words"

Item: "Does not memorize words and instructions"
Suggested reformulation: "Does not memorize simple words and instructions"
Final wording: "Does not memorize words and instructions, even if they are simple"

Item: "Reads in an incomprehensible manner"
Judge's comment: "The word 'incomprehensible' is very subjective. Define it better to facilitate the teachers' response accuracy."
Final wording: "It is difficult to understand the reading performed by him/her"

Item: "Cannot perform mental calculations"
Suggested reformulation: "Cannot perform mental calculations, even if they are simple"
Final wording: "Has difficulty performing mental calculations, even if they are simple"

The items evaluated as essential or useful for the instrument's composition were considered in calculating the CVI. Based on the acceptance criterion of a CVI equal to or higher than .80, items 23 and 40 were discarded, as both had a CVI of .71. These items were, respectively: "general knowledge is reduced for age" and "uses an eraser or correction fluid due to errors." Item 34, "handwriting is not elaborate," was also excluded because it overlapped with item 35, "handwriting is difficult to understand." Item 52, "makes nominal and/or verbal agreement errors," was likewise excluded, since agreement errors can arise from sociocultural influences and are not specifically a symptom of SLD. Of the remaining 76 items, nine had a CVI of .86 and 67 had a CVI of 1. The total CVI was .98, indicating an adequate level of agreement among the judges regarding the instrument's content (Yusoff, 2019).

The semantic analysis revealed agreement among the teachers regarding the clarity of the terms used. The paraphrases and examples offered by both groups demonstrated that the items did not present comprehension difficulties, and the teachers' feedback was consistent with the expected understanding of each item. For item 24, "cannot associate words that start with the same sound," in the Reading domain, the first group suggested the addition of a situational example, as seen in other items. The example was chosen and evaluated by the group in the same meeting. As a result of these analyses, a preliminary version of the ESATA was obtained, with 30 items in the Reading domain, 18 in Written Expression, and 28 in Mathematics.

In conducting the EFA, a KMO index of .97 was obtained, and Bartlett's sphericity test was significant (p < .001), indicating the adequacy of the data (Taherdoost, Sahibuddin & Jalaliyoon, 2022). The initial analysis resulted in a four-factor solution; however, the data did not fit well, as the factors were poorly discriminated, with items loading on more than one factor. Regarding the proportion of explained variance, the first factor explained 25%, the second 24%, the third 0.9%, and the fourth 0.3%.

Based on this result, a new analysis was conducted, fixing three factors according to the three domains that make up the instrument. Considering the items that loaded on each one, these factors were named Mathematics (1), Reading (2), and Writing (3). The results showed that many items from the Writing factor also loaded on the Reading factor, in some cases with higher loadings. The Mathematics factor items showed a good fit. Factor 1 explained 26% of the variance, Factor 2 explained 27%, and Factor 3 explained 0.7%.

Analyzing the scree plot, it was noted that better discrimination was achieved with only two factors. Subsequently, by conducting a correlation analysis between the ESATA items, it was found that the items in the Reading and Writing domains showed a strong correlation (r = .84, p < .001). Therefore, it was decided to perform a new analysis, fixing two factors.
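The inter-domain correlation reported above is a Pearson coefficient, which can be computed directly from paired score vectors. A minimal sketch with illustrative toy data (not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson product-moment correlation between two paired score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linearly related toy scores give r ≈ 1.
r = pearson_r([1, 2, 3], [2, 4, 6])
```

A value such as the reported r = .84 between the Reading and Writing item sets indicates that the two domains share most of their variance, which motivated testing a two-factor solution.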

The analysis with two factors provided the best fit for the data. In this two-factor structure of the ESATA, Factor 1 was named Reading and Writing and Factor 2 Mathematics, as items related to these domains loaded on these factors, respectively. Factor loadings on both factors were greater than .40, except for item 14 (Reading), which had not loaded on any factor since the initial exploratory analysis, and item 44 (Writing), which had a factor loading of .30. Keeping only items with moderate to strong loadings, as recommended in the literature (Goretzko, Pham & Bühner, 2021), items 14 and 44 were excluded, and the remaining results were unchanged. For the Reading and Writing factor, loadings ranged from .44 to .89, explaining 32% of the variance. For the Mathematics factor, loadings varied from .57 to .94, with 26% of the variance explained. The correlation between the two factors was .76 (p < .001).
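The item-retention rule applied above (keep an item only if its largest absolute loading on some factor meets the .40 threshold) can be sketched as follows; the item identifiers and loading values are illustrative, not the study's loading matrix:

```python
def retain_items(loadings, threshold=0.40):
    # loadings: item id -> tuple of loadings, one per factor.
    # Keep items whose largest absolute loading meets the threshold.
    kept, dropped = [], []
    for item, loads in loadings.items():
        (kept if max(abs(l) for l in loads) >= threshold else dropped).append(item)
    return kept, dropped

# Toy loading matrix mirroring the article's two exclusions: item 14
# loads on no factor and item 44 peaks at .30, so both are dropped.
loadings = {
    "item_1": (0.72, 0.05),
    "item_14": (0.12, 0.08),
    "item_44": (0.30, 0.10),
}
kept, dropped = retain_items(loadings)  # kept: ["item_1"]
```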

Considering the 74 items that showed correlations with the factors, the instrument's internal consistency was analyzed by calculating Cronbach's alpha to measure reliability. The calculation indicated α = .98 for the two factors individually. For the instrument, α = .99 was obtained. These measures demonstrate that the items are highly consistent with each other, as the α coefficient can range from 0 to 1, with 1 indicating that the items are entirely homogeneous and, therefore, introduce fewer errors in the assessment, making the instrument more precise (Goretzko, Pham & Bühner, 2021).

Considering that some items could represent difficulties still expected for children in the 2nd grade of Elementary School, as highlighted in the judges' analysis, these items were reviewed against the competencies provided in the National Common Curricular Base (Base Nacional Comum Curricular - BNCC) for this school year (Brasil, 2017). It was found that 21 items referred to difficulties that are still acceptable for students at this stage of education and should not be treated as manifestations of a possible clinical deficit in academic development. Therefore, in the final version of the scale for assessing 2nd-grade students, these items will be flagged so that the teacher does not score the student's difficulties based on them, since doing so could produce false positives in the assessment of these individuals. In this case, when assessing students at this early elementary stage, the teacher will respond only to items referring to more basic skills, where difficulties could already indicate a risk for developing learning disorders.

Discussion

This study aimed to develop and analyze the items of the ESATA. This instrument intends to fill the current gap in the national context regarding screening scales that satisfactorily encompass the difficulties commonly observed in children with SLD, considering the domains of reading, written expression, and mathematics, and to be answered by the teacher. In practice, it is expected that the instrument can assist healthcare and education professionals, both in early diagnosis and in the design of intervention strategies for school-aged children.

Initially, when developing a total of 80 candidate items, the expectation was that at least half of them could be retained for the composition of the instrument after the analysis. However, most of the items were evaluated as relevant by the panel of judges, which is interesting in terms of the content’s comprehensiveness. Considering the complexity and heterogeneity of manifestations of SLD in each domain of academic skills, it is important that the instrument is capable of identifying the variability of symptomatic profiles in the evaluated individuals, allowing for a better definition of functionality and refinement of the diagnosis.

Regarding the content validity indices, only two items did not present adequate values, falling below the recommended value of .80 (Yusoff, 2019). After opting to exclude two more items that theoretically could present problems of redundancy and specificity in the assessment of children, high CVI values were obtained, both for the remaining 76 items and for the instrument as a whole, indicating a high level of agreement among experts on how well the constructed items represented the construct.

In the semantic analysis, the method used allowed for quick verification of the clarity and ease of understanding of the items for the sample of teachers. One of the advantages observed was the ability to identify and correct divergences in understanding during the meeting itself, emphasizing the active role of the participants in the process. As highlighted by Pasquali (2003), this strategy creates a brainstorming situation in which the researcher can verify how closely the terms and examples used by the interviewees match the expected understanding, according to the wording of the items. In the meetings, the teachers made associations between the items read and situations they had experienced in the classroom, demonstrating the ability of the items to capture the real difficulties presented by children in their daily lives. For some items, when one teacher paraphrased an item, others followed with verbalizations such as "I thought the same thing," "that's exactly what happens," or "I understood it the same way," showing congruence in understanding.

The teachers who participated in the semantic analysis also mentioned that the examples used in the items, as suggested in the judges’ analysis, made the association with classroom occurrences more precise. In fact, they reported having already witnessed the same situation described in the example, such as “Cannot transcribe orally presented calculations (e.g., assembles incorrectly, omits/exchanges symbols or digits),” and “Does not discriminate letter sounds (e.g., does not know ‘m’ from ‘n’).” This indicates that the instrument has appropriate language for the target audience, which is essential in the assessment through scales since comprehension should not be an impediment to the accuracy of the provided information.

When the factor structure of the instrument was investigated through EFA, with the aim of verifying whether the organization of the items would reflect the theoretical structure of SLD in three domains (Reading, Writing, and Mathematics), it was observed that the items supposedly belonging to the Writing factor had higher factor loadings on the Reading factor, confirming a better fit of the variables in a bifactorial structure, with 58% of the variance explained. There was also a strong correlation between the Reading and Writing items. These findings corroborate the evidence of frequent comorbidity between reading and writing impairments in individuals with SLD, with dyslexia being an alternative term for the disorder in this presentation, involving impairments in reading fluency and accuracy as well as spelling difficulties (Fortes et al., 2016).

In a longitudinal study conducted by Diamanti et al. (2018), which aimed to investigate the effects of dyslexia on the development of reading and writing skills, children with and without dyslexia were followed for 18 months and evaluated through a battery involving phonological awareness tests, rapid naming, reading, and writing. The results showed that the group with dyslexia had deficits in all tasks when performance was compared to the same-age control group. Among the discussions, the authors emphasized the relationship between inefficient phonological processing and reading and writing difficulties, mentioning that impairments in phonological development affect the formation and quality of lexical representations, which have adverse consequences on reading efficiency, spelling accuracy, and text comprehension in individuals with dyslexia.

In a sample of adolescents, Chung and Lam (2019) used a battery composed of cognitive-linguistic tasks measuring morphological and phonological awareness, rapid naming, vocabulary, reading, and writing skills, to investigate, among other objectives, the role of morphological awareness in word reading and writing. It was observed that individuals with dyslexia performed worse in the tasks compared to students with typical development, with performance in these skills contributing to reading and writing in both groups. The authors highlighted the implication of morphological awareness in reading and writing, emphasizing that awareness of the lexical structure, i.e., how morphemes compose different words, contributes to reading and writing by facilitating the consolidation and retrieval of words, including more complex ones.

Considering this aspect, Galuschka et al. (2020) conducted a systematic review regarding the effectiveness of writing interventions for individuals with dyslexia. They found that phonological, morphological, and orthographic interventions presented significant effect sizes in both writing and reading, considering that strategies like these facilitate the understanding of the language system, making written language more transparent and aiding in the construction and automation of language structures, thereby reducing cognitive effort in tasks involving reading and writing. These findings demonstrate the consistency in the literature regarding the relationship between reading and writing skills, which are mediated by similar cognitive processes and help explain the grouping of the Reading and Writing domains into a single factor.

In the first analysis, the items in the Mathematics domain showed an excellent fit to a single factor; given that impairment in mathematics constitutes a specific SLD presentation, namely dyscalculia, this result is consistent with the theoretical structure (APA, 2022; World Health Organization, 2022). Furthermore, the Reading and Writing factor and the Mathematics factor were strongly correlated, which is expected since both refer to the same construct.

The analysis showed that item 14 in the Reading domain, “Corrects word pronunciation during reading,” did not correlate with any factor. Item 44 in the Writing domain, “Has difficulty with manual tasks requiring delicacy (e.g., cutting with scissors, coloring with pencils),” had a weak factor loading, probably because it does not represent a core symptom of SLD (APA, 2022). Excluding both items yielded a 74-item version of the ESATA, with 29 items in the Reading domain, 17 in Writing, and 28 in Mathematics. Given the usefulness of an instrument that allows a comprehensive investigation of the phenomenon in question, a 74-item scale for SLD assessment is relevant to the investigative process and to intervention planning, as it broadens the understanding of individual profiles.

Both the individual factors and the full scale presented excellent reliability indices. Since measurement instruments are subject to error when an evaluator measures a construct through its items, reliability analysis seeks to ensure that the score obtained reflects, as closely as possible, the individual’s actual performance or functioning, especially in reevaluation contexts when changes over time are of interest.

It is important to mention that the number of items in each dimension of the ESATA may have influenced the instrument’s reliability coefficient values. However, beyond the number of items, other aspects are considered in the analysis of internal consistency, including sample size, homogeneity of the items, and their correlation with each scale dimension. Therefore, based on the methodological rigor adopted in the development of the items, as well as the analyses performed, the results indicate the psychometric adequacy of the ESATA in terms of validity and reliability.
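The influence of item count on internal consistency can be illustrated with a minimal sketch (an illustrative computation, not the analysis code used in this study): Cronbach’s α computed from a respondent-by-item score matrix, plus the Spearman-Brown prophecy formula, which projects how the reliability of a set of parallel items changes as the test is lengthened.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha; `scores` is a list of respondents,
    each a list of k item ratings (e.g., Likert points)."""
    k = len(scores[0])
    # Variance of each item across respondents (population variance).
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    # Variance of the total (summed) scores.
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

def spearman_brown(alpha, factor):
    """Projected reliability when test length is multiplied by
    `factor`, assuming the added items are parallel to the originals."""
    return factor * alpha / (1 + (factor - 1) * alpha)

# Doubling a test whose alpha is .60 projects a reliability of .75:
projected = spearman_brown(0.60, 2)  # → 0.75
```

The prophecy formula makes the point in the paragraph above concrete: other things being equal, longer scales yield higher α, which is why item count must be weighed alongside sample size, item homogeneity, and item-dimension correlations.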

The fact that the ESATA has the teacher as the respondent is an advantage of the instrument, given the accuracy of this professional in providing information about students’ academic profiles (Helland, Morken, & Helland, 2021). At the same time, there is evidence that, in many cases, teachers lack solid theoretical knowledge of learning disorders, which can hinder the identification of symptoms in the school context (Peries et al., 2021). The ESATA can therefore be a valuable tool for teachers to screen objectively for signs of SLD-related risk and symptoms.

Accordingly, the ESATA is characterized as a helpful instrument, both for teachers who can rely on the scale to identify specific difficulties in students and make appropriate referrals to specialized services, and for other professionals, including psychologists, speech therapists, neuropsychiatrists, and educational psychologists, who can benefit from this measure by having relevant data for assessment and intervention based on the teacher’s responses to the instrument.

Considering that SLD is often associated with unfavorable outcomes, screening for learning difficulties and early identification has important implications in the educational sphere and in social and personal domains, by raising the possibility of preventing and reducing harm. The diagnostic assessment of SLD, as in any neurodevelopmental clinical condition, should be as comprehensive, valid, and reliable as possible. This is particularly crucial because school learning is influenced by many factors, encompassing institutional, socioeconomic, sensory-motor, emotional, and cognitive aspects. Depending on the specific context, these factors in isolation can lead to challenges in acquiring reading, writing, and mathematical skills, emphasizing the need for a thorough differential diagnosis process (APA, 2022). Furthermore, the prevalence of comorbidity among different manifestations of SLD is high, around 7.6%, which reflects a considerable probability that children with reading and writing problems will also have difficulties in mathematics (Fortes et al., 2016). Therefore, characterizing the different profiles that may arise regarding impaired academic development is paramount.

As a complementary instrument in the assessment of SLD, it should be emphasized that the ESATA is not intended to diagnose the disorder solely on the basis of its results. Rather, its purpose is to gather data on how frequently a child exhibits specific symptoms associated with reading, writing, and math difficulties. These data are then used alongside other assessment procedures, such as clinical interviews, behavioral observations, and testing, which require theoretical knowledge and practical experience on the part of the examiner, to determine whether a diagnosis is warranted and, more crucially, to outline appropriate intervention strategies for each individual case.

One limitation of the present study is the pronounced gender imbalance in the student sample, which was composed mainly of males. It is possible that the teachers who responded to the instrument, when assessing students with difficulties, referred more often to male students, as SLD is up to three times more frequent in this population than in females (APA, 2022).

Another limitation concerns the sample size for the EFA. According to the literature, larger samples tend to provide more accurate results in factor analysis (Goretzko, Pham, & Bühner, 2021). However, there is no clear empirical basis for defining an ideal sample size, as other factors influence the stability of the factor solution, including the number of items and the presence of high factor loadings, which make the quality of the analysis depend more on the quality of the instrument than on the sample size itself (Taherdoost, Sahibuddin, & Jalaliyoon, 2022). Additionally, given the context in which the data were collected, i.e., the social isolation resulting from the COVID-19 pandemic, the response rate can be considered excellent in light of the difficulties of conducting research under those conditions. The pandemic also prevented studies with clinical groups, as face-to-face classes were suspended throughout the country.

Considering that a psychometrically appropriate measurement instrument should provide various types of validity and reliability evidence, future studies will investigate validity based on relationships with other variables, to determine how well the scale converges with other measures of the same construct. This will provide more robust evidence of the scale’s adequacy for investigating SLD. It is also important to seek evidence of the ESATA’s ability to predict individual performance from its results, as well as to analyze its sensitivity and specificity, characteristics that will allow a more precise distinction between clinical and non-clinical profiles. Furthermore, data collection will continue so as to allow for Confirmatory Factor Analysis and analyses involving Item Response Theory (IRT). Normative studies will also be conducted to ensure the appropriate use of the instrument in the Brazilian context.
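The sensitivity and specificity analyses planned above reduce to simple ratios over a screening cut-off. A minimal sketch, using entirely hypothetical scores and labels (not data from this study):

```python
def sensitivity_specificity(scores, labels, cutoff):
    """Treat scores >= cutoff as positive screens and compare them
    against true clinical status (True = has the disorder).

    Returns (sensitivity, specificity): the proportion of true
    cases flagged, and the proportion of non-cases correctly passed.
    """
    tp = sum(1 for s, has in zip(scores, labels) if s >= cutoff and has)
    fn = sum(1 for s, has in zip(scores, labels) if s < cutoff and has)
    tn = sum(1 for s, has in zip(scores, labels) if s < cutoff and not has)
    fp = sum(1 for s, has in zip(scores, labels) if s >= cutoff and not has)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: two clinical and two non-clinical profiles,
# screened with a cut-off of 5.
sens, spec = sensitivity_specificity(
    scores=[10, 4, 6, 2],
    labels=[True, True, False, False],
    cutoff=5,
)
```

Choosing the cut-off involves the usual trade-off: lowering it raises sensitivity (fewer missed cases) at the cost of specificity (more false positives), which is why ROC-style analyses across candidate cut-offs are typically reported.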

Financial support: The present study received financial support from the Bahia State Research Support Foundation (Fundação de Amparo à Pesquisa do Estado da Bahia [FAPESB]).

Evaluation system: double-blind review.

References

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association.

American Psychiatric Association. (2022). Diagnostic and statistical manual of mental disorders (5th ed., text rev.). American Psychiatric Association.

Brasil. (2017). Base Nacional Comum Curricular. MEC. Retrieved from http://basenacionalcomum.mec.gov.br/images/BNCC_EI_EF_110518_versaofinal_site.pdf

Chung, K., & Lam, C. (2019). Cognitive-linguistic skills underlying word reading and spelling difficulties in Chinese adolescents with dyslexia. Journal of Learning Disabilities. https://doi.org/10.1177/0022219419882648

Cohen, R. J., Swerdlik, M. E., & Sturman, E. D. (2014). Testagem e avaliação psicológica: Introdução a testes e medidas. AMGH.

Diamanti, V., Goulandris, N., Stuart, M., Campbell, R., & Protopapas, A. (2018). Tracking the effects of dyslexia in reading and spelling development: A longitudinal study of Greek readers. Dyslexia. https://doi.org/10.1002/dys.1578

Fortes, I. S., Paula, C. S., Oliveira, M. C., Bordin, I. A., Mari, J. J., & Rohde, L. A. (2016). A cross-sectional study to assess the prevalence of DSM-5 specific learning disorders in representative school samples from the second to sixth grade in Brazil. European Child & Adolescent Psychiatry, 25, 195-207. https://doi.org/10.1007/s00787-015-0708-2

Galuschka, K., Görgen, R., Kalmar, J., Haberstroh, S., Schmalz, X., & Schulte-Körne, G. (2020). Effectiveness of spelling interventions for learners with dyslexia: A meta-analysis and systematic review. Educational Psychologist, 55(1), 1-20. https://doi.org/10.1080/00461520.2019.1659794

Goretzko, D., Pham, T. T. H., & Bühner, M. (2021). Exploratory factor analysis: Current use, methodological developments and recommendations for good practice. Current Psychology, 40, 3510-3521. https://doi.org/10.1007/s12144-019-00300-2

Haberstroh, S., & Schulte-Körne, G. (2019). Clinical practice guideline: The diagnosis and treatment of dyscalculia. Deutsches Ärzteblatt International, 107-114. https://doi.org/10.3238/arztebl.2019.0107

Helland, T., Morken, F., & Helland, W. A. (2021). Kindergarten screening tools filled out by parents and teachers targeting dyslexia: Predictions and developmental trajectories from age 5 to age 15 years. Dyslexia, 27(4), 413-435. https://doi.org/10.1002/dys.1698

Kyriazos, T. A., & Stalikas, A. (2018). Applied psychometrics: The steps of scale development and standardization process. Psychology, 9, 2531-2560. https://doi.org/10.4236/psych.2018.911145

Langer, N., Benjamin, C., Becker, B. L. C., & Gaab, N. (2019). Comorbidity of reading disabilities and ADHD: Structural and functional brain characteristics. Human Brain Mapping, 40(9), 2677-2698. https://doi.org/10.1002/hbm.24552

Livingston, E. M., Siegel, L. S., & Ribary, U. (2018). Developmental dyslexia: Emotional impact and consequences. Australian Journal of Learning Difficulties, 23(2), 107-135. https://doi.org/10.1080/19404158.2018.1479975

Pasquali, L. (2003). Psicometria: Teoria dos testes na psicologia e na educação. Editora Vozes.

Peries, W. A. N. N., Indrarathne, B., Jayamanne, B. D. W., Wickramasekara, T. D., Alwis, K. A. C., & Jayatilleke, A. U. (2021). Primary school teachers’ readiness in identifying children with dyslexia: A national survey in Sri Lanka. Dyslexia, 27(4), 486-509. https://doi.org/10.1002/dys.1696

Pinheiro, A. M. V., Marques, K. A., & Leite, R. C. D. (2018). Protocolo de avaliação para o diagnóstico diferencial dos transtornos específicos da aprendizagem. Paidéia, 13(19), 13-28.

Sanfilippo, J., Ness, M., Petscher, Y., Rappaport, L., Zuckerman, B., & Gaab, N. (2020). Reintroducing dyslexia: Early identification and implications for pediatric practice. Pediatrics, 146(1), e20193046. https://doi.org/10.1542/peds.2019-3046

Taherdoost, H., Sahibuddin, S., & Jalaliyoon, N. (2022). Exploratory factor analysis: Concepts and theory. Advances in Applied and Pure Mathematics, 27, 375-382.

World Health Organization. (2022). International statistical classification of diseases and related health problems (11th rev.). Retrieved from https://icd.who.int/browse11/l-m/en

Yusoff, M. S. B. (2019). ABC of content validation and content validity index calculation. Education in Medicine Journal, 11(2), 49-54. https://doi.org/10.21315/eimj2019.11.2.6

Received: March 12, 2022; Accepted: April 24, 2023

Section editor: Alexandre Luiz de Oliveira Serpa.
Correspondence concerning this article should be addressed to Edgar Weslei Aragão, Av. 24 de Maio, 264, Centro, Alagoinhas-BA, Brazil. CEP 48000067. Email: ewaragao@gmail.com

Creative Commons License This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.