Specific Learning Disorder (SLD) is a neurodevelopmental clinical condition characterized by a marked difficulty in acquiring the academic skills of reading, writing, and mathematics. According to the DSM-5-TR, the symptoms are conceived as phenotypes of cognitive impairments associated with neurobiological and environmental factors, with a prevalence of around 5 to 15% in school-age children (APA, 2022).
Characteristic symptoms of SLD include: 1) inaccurate or slow and effortful reading of words; 2) difficulty in comprehending the meaning of what is read; 3) spelling difficulties; 4) difficulties with written expression; 5) difficulties in mastering numerical sense, arithmetic facts, or calculations; and 6) difficulties in mathematical reasoning. For its diagnosis, the DSM-5-TR stipulates that at least one of these symptoms must be present, with a minimum persistence of 6 months, and that the difficulties are not due to intellectual, physical, or sensory limitations, psychosocial adversities, deprivation, or inadequate pedagogical methods (APA, 2022).
In the current edition of the International Classification of Diseases, ICD-11, SLD, now referred to as Developmental Learning Disorder (6A03), is described as a condition in which the learning of basic academic skills is compromised even when adequate instruction is provided, emphasizing the persistent nature of the disorder (World Health Organization, 2022).
Regarding the potential manifestations, SLD can be specified, according to the compromised domain, as follows: a) impairment in reading, where learning difficulties involve word reading accuracy, fluency, and reading comprehension; b) impairment in written expression, compromising spelling accuracy, grammar, punctuation, and clarity or organization of written expression; and c) impairment in mathematics, when difficulties involve numerical sense, memorization of arithmetic facts, calculation fluency, and mathematical reasoning (APA, 2022).
In clinical and educational practice, these impairments tend to be associated with negative short-term and long-term outcomes, especially when the individual does not receive a timely diagnosis and intervention. In the literature, reported impacts include reduced self-esteem and self-efficacy, social adjustment difficulties, school dropout, and unemployment (Livingston, Siegel & Ribary, 2018).
Accordingly, early diagnosis becomes crucial in determining the extent to which symptoms impair the individual’s functioning, identifying critical difficulties and potentials, and investigating the presence of comorbidities that may exacerbate the condition (Sanfilippo et al., 2020). For example, it is estimated that approximately 50% of individuals with Specific Learning Disorder also fulfill the diagnostic criteria for ADHD, highlighting the importance of a more comprehensive, preferably multidisciplinary assessment (Langer, Benjamin, Becker & Gaab, 2019).
In the context of Neuropsychology, when the purpose is to evaluate suspected SLD, the neuropsychological examination may involve the use of rating scales (APA, 2022; Haberstroh & Schulte-Körne, 2019). Using such instruments is important because it complements the data obtained through testing, tending to provide the assessment with greater ecological validity, i.e., a closer approximation to the real-life situations experienced by the individual in their daily functioning. In addition to measuring the frequency and intensity of symptoms, scales allow for monitoring the effects of interventions (Kyriazos & Stalikas, 2018).
Considering the literature on the assessment and diagnostic instruments for SLD, in Brazil, as of the present moment, there are no assessment scales available specifically for screening symptoms in school-age children that allow for the comprehensive measurement of difficulties in all three domains of academic skills and have appropriate psychometric qualities for use (Pinheiro, Marques & Leite, 2018). Therefore, to contribute to the screening of symptoms related to SLD in school-age individuals, in both clinical and educational contexts, as well as in research, the present study aims to construct and seek evidence of the validity and reliability of the Specific Learning Disorder Rating Scale (Escala de Avaliação do Transtorno Específico da Aprendizagem - ESATA).
A Likert-type scale, the ESATA is intended to encompass the symptoms commonly observed in individuals with the disorder across the domains of reading, writing, and mathematics. The instrument is designed to be completed by teachers assessing SLD symptoms in children 7 to 12 years of age, enrolled in the 2nd to 5th grades of Elementary School I.
Method
The present study is cross-sectional and employs both qualitative and quantitative procedures. For the research to be conducted, the project was submitted to and approved by the Research Ethics Committee of the Federal University of Bahia.
Participants
In the judges’ analysis, participants were seven professionals with extensive practical experience and theoretical knowledge of SLD, including five psychologists specializing in Neuropsychology and two pediatric neurologists, with the following levels of expertise: one specialist, one MSc holder, and five PhD holders. The judges were selected by convenience and invited to participate in the research. For the semantic analysis, eight teachers from public and private school networks were chosen by convenience (see Table 1).
Table 1 Characterization of the sample of participants in the semantic analysis by educational network and years of experience.

| Group | n | Public Network (%) | Private Network (%) | Years of Experience, M (SD) | Median |
|---|---|---|---|---|---|
| 2nd-year group | 4 | 75.0 | 25.0 | 10.5 (7.4) | 8.5 |
| 5th-year group | 4 | 50.0 | 50.0 | 22.7 (4.2) | 21.0 |
| Total | 8 | 62.5 | 37.5 | 16.6 (8.6) | 20.0 |
For the Exploratory Factor Analysis (EFA) stage, 308 teachers from 19 Brazilian states participated. The sample had a mean of 14.5 years of teaching experience (SD 9.6), with 67.2% having some level of specialization. The sample characterization, including teachers and their respective students, is presented in Table 2.
Procedures
Operationalization of the construct
A literature review was conducted to identify the core symptoms that define SLD and thus develop candidate items for the ESATA. The search was performed in the PubMed, LILACS, SciELO, and CAPES Periodicals Portal databases, using the terms: specific learning disorder, dyslexia, dyscalculia, diagnosis, and assessment, as verified in the DeCS and MeSH systems to select the descriptors best suited to the review’s objective. The following inclusion criteria were adopted: articles related to Specific Learning Disorder, studies involving the age range between 6 and 12 years, articles published within the previous ten years, and articles written in English or Portuguese. The following exclusion criteria were adopted: studies involving comorbidities, acquired learning difficulties, analyses exclusively at the cognitive, genetic, or neurobiological level, and studies with preschool children, adolescents, or adults.
Judges’ analysis
For the item analysis, the participants were sent an electronic form containing a brief presentation of the disorder, the purpose of the analysis, the characterization of the study and the ESATA, evaluation instructions, and the items. They were asked to individually analyze the items, considering how representative each item would be of SLD and how essential it would be for the scale’s composition. The following response options were provided: “essential,” “useful but not essential,” and “not necessary,” as proposed by Cohen, Swerdlik, and Sturman (2014). In the form, the items were organized according to the domain to which they belonged - Reading, Written Expression, and Mathematics. The judges were also asked to make modifications and suggest items if they deemed it appropriate.
After obtaining the responses, the Content Validity Index, or CVI, was calculated, which is a measure of the degree of agreement among judges regarding the relevance and pertinence of items to the construct and the instrument. As a content validity measure, the CVI allows the analysis of each item individually and of the instrument as a whole. The formula to evaluate individual items consists of dividing the relevant responses by the total responses. For the analysis of the instrument as a whole, one of the alternatives is to calculate the quotient between the total items evaluated as relevant and the total items in the instrument. Concerning the value of the CVI, it is assumed that the closer it is to 1, the more valid the instrument is regarding its content. Generally, an acceptable CVI should have a minimum value of .80 (Yusoff, 2019).
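The CVI computation described above can be expressed in a few lines. The sketch below is purely illustrative (the function names are not part of the study); it follows the formulas given in the text, with responses of “essential” or “useful but not essential” counted as relevant:

```python
def item_cvi(ratings, relevant=("essential", "useful but not essential")):
    """Item-level CVI: proportion of judges who rated the item as relevant."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi(item_cvis):
    """Scale-level CVI: mean of the item-level CVIs."""
    return sum(item_cvis) / len(item_cvis)

# With 7 judges, an item rated relevant by 6 of them has CVI = 6/7, about .86,
# while 5 of 7 gives about .71, below the .80 acceptance cutoff.
```

For instance, with seven judges, one dissenting rating still leaves an item above the .80 threshold, whereas two dissenting ratings do not.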
Semantic analysis
This analysis aimed to assess how easily teachers, the target audience of the instrument, understood the items. As Pasquali (2003) recommended, two groups were formed, with four teachers in the 2nd-grade group and four in the 5th-grade group. After obtaining their consent, two meetings were held, one with the 2nd-grade group and another with the 5th-grade group.
In the meeting, the ESATA items were presented orally, one at a time, and the participants were asked to paraphrase in their own words what they had understood from each item, associating them with their classroom experiences with students. They were also asked to assess the clarity of the terms used and suggest revisions if any word was difficult to understand. If there were discrepancies between the researcher’s intended understanding and what was obtained, the item would undergo a new revision, and if comprehension difficulties persisted, the item would be discarded. Due to social distancing measures in response to the COVID-19 pandemic, the meetings took place virtually, with prior assurance of the participants’ suitable environment and internet connection quality.
Exploratory Factor Analysis and reliability analysis
In this stage, the ESATA was made available in an online form to be answered by teachers from the 2nd to the 5th grade. The teacher’s task was to respond to the ESATA concerning a student in their class, with or without learning difficulties. To recruit teachers, the form’s link was distributed through various online channels, including Instagram pages, Facebook and WhatsApp groups, and email. Partnerships were also established with public and private schools in Salvador and Alagoinhas, Bahia. Data collection took place between December 2019 and July 2021.
The form included a brief description of the study objectives and procedures, a section requesting the teacher’s consent to participate in the study, and information about data confidentiality. There was also a section for the teachers to provide sociodemographic data for themselves and their respective students. After giving consent, participants were provided with instructions for filling out the form and the items. Each item, following the Likert scale model, had response options with the following categories: “never,” indicating that the child definitely does not exhibit the characteristic; “rarely,” indicating that the child rarely exhibits the characteristic; “sometimes,” indicating that the child exhibits the characteristic on occasion; “frequently,” indicating that the child exhibits the characteristic most of the time; and “always,” indicating that the child definitely exhibits the characteristic.
To verify the factor structure of the ESATA, EFA was conducted using the Principal Axis Factoring (PAF) method and parallel analysis technique for factor retention. Oblimin rotation was chosen assuming some level of correlation between the data. The JASP software, version 0.14.1, was used for the analyses. Based on the data collected for the EFA, internal consistency analysis of the ESATA was conducted using Cronbach’s alpha.
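Parallel analysis, used here for factor retention, compares the eigenvalues of the observed correlation matrix with those obtained from random data of the same dimensions, retaining only factors whose eigenvalues exceed what chance alone would produce. The analyses themselves were run in JASP; the numpy-only sketch below is offered solely as an illustration of the technique:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: retain as many factors as there are observed
    eigenvalues exceeding the chosen percentile of eigenvalues from random
    normal data of the same shape (n respondents x p items)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, largest first
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > threshold))
```

Applied to data simulated from two latent factors, such a procedure retains two factors, mirroring the kind of decision made in the analyses reported below.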
Results
The literature search in the databases resulted in 1,765 articles using the specified descriptors. Out of these, 45 were selected for detailed analysis after applying the exclusion criteria. In addition to the articles and the DSM-5 (APA, 2013), bibliographic references on the topic published in Brazil were also consulted to expand the item options. In total, 80 items were developed, with 31 related to reading impairment, 21 related to writing impairment, and 28 related to mathematics impairment. These items were sent via electronic form for the judges’ analysis.
In the analysis, the participants suggested reformulating some of the terms used (see Table 3). They recommended that the items include examples of related situations to improve clarity and facilitate understanding by the teachers. It was also suggested that the items be organized taking into account which difficulties are still expected in the early years of Elementary School I.
Table 3 Reformulations of the ESATA items as suggested by the judges.

| Original Item | Suggested Reformulation | Final Wording |
|---|---|---|
| “Does not name the letters” | “Has difficulty naming letters (even some)” | “Has difficulty naming the letters” |
| “Does not understand texts just read” | “Does not understand texts just read, even if they are short” | “Does not understand texts just read, even if they are short” |
| “Adds letters to words” | “Adds letters or syllables to words” | “Adds letters or syllables to words” |
| “Does not memorize words and instructions” | “Does not memorize simple words and instructions” | “Does not memorize words and instructions, even if they are simple” |
| “Reads in an incomprehensible manner” | “The word ‘incomprehensible’ is very subjective. Define it better to facilitate the teachers’ response accuracy.” | “It is difficult to understand the reading performed by him/her” |
| “Cannot perform mental calculations” | “Cannot perform mental calculations, even if they are simple” | “Has difficulty performing mental calculations, even if they are simple” |
The items evaluated as essential or useful for the instrument’s composition were considered relevant when calculating the CVI. Based on the acceptance criterion of a CVI equal to or higher than .80, items 23 and 40 were discarded, as both had a CVI of .71. These items were, respectively: “general knowledge is reduced for age” and “uses an eraser or correction fluid due to errors.” Item 34, “handwriting is not elaborate,” was also excluded because it was judged too similar to item 35, “handwriting is difficult to understand.” Item 52, “makes nominal and/or verbal agreement errors,” was likewise excluded, since agreement errors can occur due to sociocultural influences and are not specifically a symptom of SLD. Of the remaining 76 items, 9 had a CVI of .86 and 67 had a CVI of 1. The total CVI was .98, indicating an adequate level of agreement among judges regarding the instrument’s content (Yusoff, 2019).
The semantic analysis revealed agreement among the teachers regarding the clarity of the terms used. The paraphrases and examples produced by both groups demonstrated that the items did not present comprehension difficulties, and the teachers’ feedback was consistent with the expected understanding of each item. For item 24, “cannot associate words that start with the same sound,” in the Reading domain, the first group suggested the addition of a situational example, as seen in other items. The example was chosen and evaluated by the group in the same meeting. As a result of the analyses conducted, a preliminary version of the ESATA was obtained, with 30 items in the Reading domain, 18 in Written Expression, and 28 in Mathematics.
In conducting the EFA, a KMO index of .97 was obtained, and Bartlett’s sphericity test was significant (p < .001), indicating the data’s adequacy for factor analysis (Taherdoost, Sahibuddin & Jalaliyoon, 2022). The initial analysis resulted in a 4-factor solution; however, the data did not fit well, as the factors were poorly discriminated, with items loading on more than one factor. Regarding the proportion of explained variance, the first factor explained 25%, the second 24%, the third 0.9%, and the fourth 0.3%.
Based on this result, a new analysis was conducted, fixing three factors according to the three domains that make up the instrument. These factors were named, considering the items that loaded on each one, Mathematics (1), Reading (2), and Writing (3). The results showed that many items from the Writing factor also loaded on the Reading factor, in some cases with higher factor loadings. The Mathematics factor items showed a good fit. Factor 1 explained 26% of the variance, Factor 2 explained 27%, and Factor 3 explained 0.7%.
Analyzing the scree plot, it was noted that better discrimination was achieved with only two factors. Subsequently, by conducting a correlation analysis between the ESATA items, it was found that the items in the Reading and Writing domains showed a strong correlation (r = .84, p < .001). Therefore, it was decided to perform a new analysis, fixing two factors.
The analysis with two factors revealed the best fit for the data. In this two-factor structure of the ESATA, Factor 1 was named Reading and Writing and Factor 2 Mathematics, as the items related to these domains loaded on the respective factors. Factor loadings in both factors were greater than .40, except for item 14 (Reading), which had not loaded on any factor since the initial exploratory analysis, and item 44 (Writing), which had a factor loading of .30. Keeping only items with moderate to strong loadings, as recommended in the literature (Goretzko, Pham & Bühner, 2021), items 14 and 44 were excluded, with no change in the remaining results. For the Reading and Writing factor, loadings ranged from .44 to .89, explaining 32% of the variance. For the Mathematics factor, loadings varied from .57 to .94, with 26% of the variance explained. The correlation between the two factors was .76 (p < .001).
Considering the 74 items that showed correlations with the factors, the instrument’s internal consistency was analyzed by calculating Cronbach’s alpha. The calculation indicated α = .98 for each of the two factors individually and α = .99 for the instrument as a whole. These values indicate that the items are highly consistent with one another: the α coefficient ranges from 0 to 1, with values closer to 1 indicating more homogeneous items and, therefore, fewer measurement errors and a more precise instrument (Goretzko, Pham & Bühner, 2021).
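Cronbach’s alpha can be computed directly from the matrix of item scores. The sketch below is only an illustration of the standard formula (it is not the software used in the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: (n_respondents, k_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly redundant items yield α = 1, while mutually independent items yield α near 0, which is why highly homogeneous item sets such as the ESATA’s produce coefficients close to the ceiling.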
Considering that some items could represent difficulties still expected for children in the 2nd grade of Elementary School, as highlighted in the judges’ analysis, these items were reviewed against the competencies established in the National Common Curricular Base (Base Nacional Comum Curricular - BNCC) for this school year (Brasil, 2017). It was found that 21 items referred to difficulties that are still acceptable for students at this stage of education and should not be treated as manifestations of a possible clinical deficit in academic development. Therefore, in the final version of the scale for assessing 2nd-grade students, these items will be differentiated so that the teacher does not score the student’s difficulties based on them, since doing so could produce false positives in the assessment of these individuals. When assessing students at this early elementary stage, the teacher will respond only to items referring to more basic skills, in which difficulties could already indicate a risk of developing learning disorders.
Discussion
This study aimed to develop and analyze the items of the ESATA. This instrument intends to fill the current gap in the national context regarding screening scales that satisfactorily encompass the difficulties commonly observed in children with SLD, considering the domains of reading, written expression, and mathematics, and to be answered by the teacher. In practice, it is expected that the instrument can assist healthcare and education professionals, both in early diagnosis and in the design of intervention strategies for school-aged children.
Initially, when developing a total of 80 candidate items, the expectation was that at least half of them could be retained for the composition of the instrument after the analysis. However, most of the items were evaluated as relevant by the panel of judges, which is interesting in terms of the content’s comprehensiveness. Considering the complexity and heterogeneity of manifestations of SLD in each domain of academic skills, it is important that the instrument is capable of identifying the variability of symptomatic profiles in the evaluated individuals, allowing for a better definition of functionality and refinement of the diagnosis.
Regarding the content validity indices, only two items did not present adequate values, falling below the recommended value of .80 (Yusoff, 2019). After opting to exclude two more items that theoretically could present problems of redundancy and specificity in the assessment of children, high CVI values were obtained, both for the remaining 76 items and for the instrument as a whole, indicating a high level of agreement among experts on how well the constructed items represented the construct.
In the semantic analysis, the method used proved efficient for checking the clarity and ease of understanding of the items for the sample of teachers. One of the advantages observed was the ability to identify and correct divergences in understanding during the meeting itself, emphasizing the active role of the participants in the process. As highlighted by Pasquali (2003), this strategy allows for a brainstorming situation in which the researcher can verify how closely the terms and examples used by the interviewees resemble the expected understanding, according to the wording of the items. In the meetings, the teachers made associations between the items read and situations they had experienced in the classroom, demonstrating the ability of the items to capture the real difficulties presented by children in their daily lives. For some items, when one teacher offered a paraphrase, it was followed by verbalizations from the others, such as “I thought the same thing,” “that’s exactly what happens,” or “I understood it the same way,” showing congruence in understanding.
The teachers who participated in the semantic analysis also mentioned that the examples used in the items, as suggested in the judges’ analysis, made the association with classroom occurrences more precise. In fact, they reported having already witnessed the same situation described in the example, such as “Cannot transcribe orally presented calculations (e.g., assembles incorrectly, omits/exchanges symbols or digits),” and “Does not discriminate letter sounds (e.g., does not know ‘m’ from ‘n’).” This indicates that the instrument has appropriate language for the target audience, which is essential in the assessment through scales since comprehension should not be an impediment to the accuracy of the provided information.
When the factor structure of the instrument was investigated through EFA, with the aim of verifying whether the organization of the items would reflect the theoretical structure of SLD in three domains (Reading, Writing, and Mathematics), the items that supposedly belonged to the Writing factor showed higher factor loadings on the Reading factor. This confirmed a better fit of the variables in a two-factor structure, with 58% of the variance explained. There was also a strong correlation between the Reading and Writing items. These findings corroborate the evidence of frequent comorbidity between reading and writing impairments in individuals with SLD, with dyslexia being an alternative term for the disorder in this presentation, involving impairments in reading fluency and accuracy and spelling difficulties (Fortes et al., 2016).
In a longitudinal study conducted by Diamanti et al. (2018), which aimed to investigate the effects of dyslexia on the development of reading and writing skills, children with and without dyslexia were followed for 18 months and evaluated through a battery involving phonological awareness tests, rapid naming, reading, and writing. The results showed that the group with dyslexia had deficits in all tasks when performance was compared to the same-age control group. Among the discussions, the authors emphasized the relationship between inefficient phonological processing and reading and writing difficulties, mentioning that impairments in phonological development affect the formation and quality of lexical representations, which have adverse consequences on reading efficiency, spelling accuracy, and text comprehension in individuals with dyslexia.
In a sample of adolescents, Chung and Lam (2019) used a battery composed of cognitive-linguistic tasks measuring morphological and phonological awareness, rapid naming, vocabulary, reading, and writing skills, to investigate, among other objectives, the role of morphological awareness in word reading and writing. It was observed that individuals with dyslexia performed worse in the tasks compared to students with typical development, with performance in these skills contributing to reading and writing in both groups. The authors highlighted the implication of morphological awareness in reading and writing, emphasizing that awareness of the lexical structure, i.e., how morphemes compose different words, contributes to reading and writing by facilitating the consolidation and retrieval of words, including more complex ones.
Considering this aspect, Galuschka et al. (2020) conducted a systematic review regarding the effectiveness of writing interventions for individuals with dyslexia. They found that phonological, morphological, and orthographic interventions presented significant effect sizes in both writing and reading, considering that strategies like these facilitate the understanding of the language system, making written language more transparent and aiding in the construction and automation of language structures, thereby reducing cognitive effort in tasks involving reading and writing. These findings demonstrate the consistency in the literature regarding the relationship between reading and writing skills, which are mediated by similar cognitive processes and help explain the grouping of the Reading and Writing domains into a single factor.
From the first analysis, the items in the Mathematics domain showed an excellent fit in a single factor, and considering that impairments in mathematics consist of a specific SLD condition, namely dyscalculia, this result is consistent with the theoretical structure (APA, 2022; World Health Organization, 2022). Furthermore, the Reading and Writing factor and the Mathematics factor showed a strong correlation, which is expected since both refer to the same construct.
The analysis showed that item 14 in the Reading domain, “Corrects word pronunciation during reading,” did not correlate with any factor. Item 44 in the Writing domain, “Has difficulty with manual tasks requiring delicacy (e.g., cutting with scissors, coloring with pencils),” had a weak factor loading, probably because it does not represent a core symptom of SLD (APA, 2022). By excluding both items, a version of the ESATA with 74 items was obtained: 29 in the Reading domain, 17 in Writing, and 28 in Mathematics. Given the utility of an evaluative instrument that allows for a comprehensive investigation of the phenomenon in question, a 74-item scale for SLD assessment is relevant to the investigative process and to intervention planning, by expanding the understanding of individual profiles.
Both the individual factors and the scale presented excellent reliability indices. Since measurement instruments are subject to error whenever an evaluator measures a construct through its items, reliability analysis aims to ensure that the score obtained reflects, as closely as possible, the individual’s actual performance or functioning, especially in reassessment contexts, when changes over time are of interest.
It is important to mention that the number of items in each dimension of the ESATA may have influenced the instrument’s reliability coefficient values. However, beyond the number of items, other aspects are considered in the analysis of internal consistency, including sample size, homogeneity of the items, and their correlation with each scale dimension. Therefore, based on the methodological rigor adopted in the development of the items, as well as the analyses performed, the results indicate the psychometric adequacy of the ESATA in terms of validity and reliability.
The fact that the ESATA has the teacher as the respondent is an advantage of the instrument in assessing children, given this professional’s accuracy in providing information about students’ academic profiles (Helland, Morken & Helland, 2021). At the same time, there is evidence that, in many cases, teachers do not have solid theoretical knowledge of learning disorders, which can hinder the identification of symptoms in the school context (Peries et al., 2021). The ESATA can therefore be a valuable tool for teachers to screen objectively for signs of SLD-related risk and symptoms.
Accordingly, the ESATA is characterized as a helpful instrument, both for teachers who can rely on the scale to identify specific difficulties in students and make appropriate referrals to specialized services, and for other professionals, including psychologists, speech therapists, neuropsychiatrists, and educational psychologists, who can benefit from this measure by having relevant data for assessment and intervention based on the teacher’s responses to the instrument.
Considering that SLD is often associated with unfavorable outcomes, screening for learning difficulties and early identification have important implications in the educational sphere as well as in social and personal spheres, by raising the possibility of preventing and reducing harm. The diagnostic assessment of SLD, as with any neurodevelopmental clinical condition, should be as comprehensive, valid, and reliable as possible. This is particularly crucial because school learning is influenced by many factors, encompassing institutional, socioeconomic, sensory-motor, emotional, and cognitive aspects. Depending on the specific context, these factors can, even in isolation, lead to challenges in acquiring reading, writing, and mathematical skills, emphasizing the need for a thorough differential diagnosis process (APA, 2022). Furthermore, the prevalence of comorbidity among different manifestations of SLD is considerable, around 7.6%, reflecting a substantial probability that children with reading and writing problems will also have difficulties in mathematics (Fortes et al., 2016). Therefore, characterizing the different profiles that may arise in impaired academic development is paramount.
As a complementary instrument in the assessment of SLD, it should be emphasized that the ESATA is not intended to make a diagnosis of the disorder solely based on its results. On the contrary, the purpose of this instrument is to gather data on how frequently a child exhibits specific symptoms associated with reading, writing, and math difficulties. This data is then used alongside other assessment procedures, such as clinical interviews, behavioral observations, and testing, which require theoretical knowledge and practical experience on the part of the examiner to determine whether a diagnosis is warranted and, more crucially, to outline appropriate intervention strategies for each individual case.
One limitation of the present study is the pronounced gender inequality in the student sample, which was composed mainly of males. It is possible that the teachers who responded to the instrument assessing students with difficulties referred more to male individuals, as SLD is up to three times more frequent in this population compared to females (APA, 2022).
Another limitation concerns the sample size for the EFA. According to the literature, larger samples tend to provide more accurate results when performing factor analysis (Goretzko, Pham & Bühner, 2021). However, there are no clear empirical bases for defining what an ideal sample size would be, as other factors influence the stability of the factor solution, including the number of items and the presence of high factor loadings, which make the quality of the analysis depend more on the quality of the instrument than on the sample size itself (Taherdoost, Sahibuddin & Jalaliyoon, 2022). Additionally, given the context in which the data were collected, i.e., the social isolation resulting from the COVID-19 pandemic, it is believed that the response rate was excellent, considering the difficulties encountered in conducting the research in this situation. This factor also prevented studies with clinical groups, as face-to-face classes were suspended throughout the country.
Considering that a psychometrically appropriate measurement instrument should provide various types of validity and reliability evidence, future studies will investigate validity based on relationships with other variables, to determine how well the scale converges with other measures assessing the same construct and thus provide more robust evidence of its adequacy for investigating SLD. It will also be important to examine the ESATA’s ability to predict individual performance, as well as its sensitivity and specificity, characteristics that will allow for a more precise distinction between clinical and non-clinical profiles. Furthermore, data collection will continue so as to allow for Confirmatory Factor Analysis and analyses based on Item Response Theory (IRT). Normative studies will also be conducted to support the appropriate use of the instrument in the Brazilian context.