Training, Development and Education (TD&E) actions have increasingly gained prominence as organizational strategies (Andrade & Zerbini, 2023; Zerbini et al., 2012) to boost organizations’ performance and competitiveness. Thus, investment in TD&E actions has grown in work organizations (Zerbini et al., 2012).
Expanding continuing education programs in the education sector has highlighted the need to evaluate the effects of these actions on daily professional life, especially regarding the performance of school managers (Nascimento et al., 2020). The effectiveness of training offered to basic education managers has been the subject of growing interest, especially when considering its direct impact on improving pedagogical, administrative, and personnel management in public schools. Even so, there is a lack of psychometrically validated instruments that systematically measure the impact of such training on the actual performance of these professionals (Andrade & Zerbini, 2023).
According to Borges-Andrade (2006), the TD&E system is composed of three subsystems: Needs Evaluation, Planning and Execution, and Training Evaluation, the focus of this study. Training Evaluation is the main provider of information, feedback, and improvement of the TD&E system (Borges-Andrade, 2006).
The evaluation of the results of TD&E actions becomes increasingly important, as it allows verifying the occurrence of instructional events at different levels, justifying the investments made in these actions (Zerbini et al., 2012). In the literature, some methodologies and instruments are capable of assisting in the process of evaluating training actions. Classical training evaluation models, such as those proposed by Hamblin (1978) and Kirkpatrick (1976), have brought important contributions to the measurement of reaction, learning, behavior, and results. However, recent studies point to the need for more integrated and contextualized approaches, such as the MAIS (Borges-Andrade, 2006) and IMPACT (Abbad et al., 2012) models, which incorporate individual, contextual, and practical application variables of the content learned at work. These models have been used to evaluate face-to-face, blended, and distance learning training, offering a more robust analysis framework.
Based on the MAIS and IMPACT models, Zerbini and Abbad (2005) developed a model specifically aimed at evaluating distance training offered via the internet. This model focuses on identifying factors that can predict the transfer of training in professional qualification courses; it evaluates the predictive power of individual variables, the study context, and reactions concerning the effective application of what is learned in training in the work environment. Currently, several studies have proposed models to evaluate training carried out in organizations, focusing both on measuring the knowledge transmitted and on how people assimilate, organize, apply, and transfer this knowledge to the organizational context (Andrade & Zerbini, 2023; Eckert et al., 2023; Martins et al., 2019; Okoronkwo et al., 2021).
Faced with a reality of economic turbulence, advances in artificial intelligence, and uncertainty about the viability of achieving intended objectives, it is crucial to investigate the effects on organizations resulting from training offered both in person and remotely. The reason for this is that many organizations use various tools, including digital technologies, to mediate their TD&E processes, to keep up with the growing expansion of distance education as a means of teaching (Martins et al., 2019).
Studies aimed at evaluating the effectiveness of TD&E actions are still scarce (Eckert et al., 2023; Flegl et al., 2022; Martins et al., 2019; Sarfraz et al., 2021) and present inconsistencies with the objectives. The effectiveness of training is related to the study of variables that can influence the results of the educational action in different phases, whether before, during, or after the training process. These variables have the potential to increase or decrease training effectiveness and are traditionally investigated and classified into three categories: individual, training, and organizational (Andrade & Zerbini, 2023; Martins et al., 2019).
The effectiveness of training in the work environment of organizations is measured through three measures: Training Impact at Work (the focus of this study), Organizational Change, and Final Value. By measuring the impact of TD&E actions on work, one evaluates whether these actions have generated improvements in the performance of individuals, groups, and organizations (Martins et al., 2019). The impact of training at work can be measured in two ways: in depth and in breadth.
The impact of training in depth is defined as the direct and specific effects of a TD&E action on the behaviors of the individual in their position in the organization after training completion, taking into account the educational objectives (Zerbini et al., 2012). The impact of training in breadth refers to the indirect influence exerted by the TD&E action on individuals’ overall performance, attitudes, and motivation (Zerbini et al., 2012). This research aims to measure the effective results of the “New Principals” course, offered in a hybrid (semi-presential) format by the Escola de Formação e Aperfeiçoamento dos Profissionais da Educação do Estado de São Paulo (EFAPE).
In this context, Distance Learning (DL) has been widely used in the training of education professionals, who seek to develop new competencies, which influence the expansion of knowledge related to the profession and practical issues (Santos Junio et al., 2022). Actions offered remotely through digital technologies are being presented as a valuable strategy for developing employees’ skills and knowledge (Martins et al., 2019; Shukla et al., 2024). Due to these growing investments in corporate educational actions, it is necessary to evaluate the results obtained, including the effectiveness of the training (Zerbini et al., 2012).
The school is, in fact, an organizational environment in which the work carried out generates a source of income and has a significant impact on people’s behavior (Andrade et al., 2021). Today, schools are recognized as the fundamental unit and the place where the objectives and goals of the educational system are achieved. They occupy a central position in society’s attention, and are of great strategic value for our development (Honorato, 2018; Nascimento et al., 2020; Oliveira et al., 2020; Santos Junio et al., 2022). This situation represents a significant challenge for school managers, who must face new demands (Honorato, 2018; Fonseca & Borges, 2023).
In the Brazilian educational scenario, the role of the school principal is increasingly complex and strategic. The leadership exercised by this professional directly impacts student learning, the organizational climate, and the articulation between community and school (Leithwood et al., 2020). Therefore, it is essential to develop instruments capable of measuring, with validity and reliability, whether training aimed at this population results in real changes in professional performance.
Thinking about the development and preparation of educational managers is essential so that professionals who occupy this role are prepared to deal with contemporary challenges in education, such as the integration of technology, hybrid teaching, the promotion of inclusion, and adaptation to changes in the curriculum and educational policies (Fonseca & Borges, 2023; Leithwood et al., 2020). Their ability to manage resources, make strategic decisions, mobilize teams, and involve the school community is crucial to the overall success of education (Varzoni & Amorim, 2021). The manager who occupies the role of school principal thus has a substantial impact on the learning environment, student motivation, and the quality of teaching, and their effective leadership is a key factor in a school’s educational success. TD&E actions offered in the DL modality have therefore contributed to the development of the competencies necessary for better performance of principals’ work in the school context (Andrade & Zerbini, 2023).
Despite the growing interest in continuing education policies in basic education, there are still few studies that systematically investigate the effects of these actions on school management practices. Parente et al. (2020) highlight the lack of research aimed at measuring the impact of educational programs, especially those aimed at school leadership. Given this scenario, this study aims to fill an important gap in the literature by constructing and seeking evidence of the validity of an instrument that evaluates the impact of training on the work of elementary school managers, contributing to the professionalization of management and the qualification of human development policies in schools.
The relevance of this study is also evident in the fact that the construction and validation of the Impact of In-Depth Job Training for School Managers (ITPGE) instrument can support more effective management practices in basic education. By offering a tool to systematically evaluate the effects of training in the context of school managers’ performance, the study contributes, based on evidence, to the planning of training actions that are more consistent with the demands of the school, promoting an organizational culture that values the strategic use of data, innovation in leadership, and decision-making oriented towards continuous development and a guide of validated activities.
Given the above, the objective was to describe the construction and verify the evidence of psychometric validity of the Impact of In-Depth Job Training for School Managers (ITPGE) scale for basic education, applied to 248 principals of public schools in São Paulo, graduates of a course aimed at new principals, with a focus on the managerial competencies of school principals. The instrument was developed based on observable instructional objectives, extracted from a continuing education course for principals of the São Paulo state public school system.
Method
Participants
This study was developed with a sample of school principals from the public education system in the State of São Paulo. A partnership was established with EFAPE, a pioneering initiative in the country created by the Secretaria de Educação do Estado de São Paulo (SEE-SP). The course evaluated was the “Specific Training Course for New School Principals”, also called “New Principals”, a semi-presential course aimed at those approved and appointed through a public examination. Its purpose is to promote the deepening, complementation, review, and renewal of the knowledge, methodologies, and perspectives present in the training of new principals, as well as to encourage reflection on their professional practice.
The course workload is 360 hours, in a hybrid modality, distributed in eighty-eight (88) hours of Face-to-Face meetings (F2F), and two hundred and seventy-two (272) hours of self-instructional studies, through distance activities in the EFAPE Virtual Learning Environment (VLE-EFAPE). It is structured in 11 modules. According to the presentation material for the New Principals course, the modules are aligned with three principles and dimensions: Team and People Management, Pedagogical Management, and Administrative Management.
The study involved 248 principals who answered the questionnaires sent during the data collection phase. The majority of participants were female (70.6%), married (58.9%), and had children (75.8%). Participants’ mean age was 49 years (SD = 7.9; mode = 46; range = 32 to 70). In total, 71 Education Directorates were represented in the survey, corresponding to 78% of all Directorates in the State of São Paulo, with representatives from both the state capital and inland cities.
The course was analyzed using a “Teaching Material Analysis Guide” developed and adapted to the characteristics of distance learning courses. This documentary analysis aimed to describe the course in its formal characteristics, related to the formulation of instructional objectives, the adequacy of instructional strategies, the compatibility of exercises with the nature and complexity of instructional objectives, the planning of activities, the sequencing of teaching, the sources of information (bibliography and other means) and general data about the course. The information used for the course analysis was extracted from the Pedagogical Support Material and Course Material.
The items analyzed comply with the literature; however, the People and Team Management dimension does not follow a logical sequence in the modules. The content is presented in modules 1, 2, 6, 7, and 8, which can make it difficult for participants to understand these dimensions. Once the instructional objectives had been identified, it was possible to identify performance and behavioral indicators that would make up the in-depth impact instrument, as shown below.
Instruments
The process of constructing the Impact of In-Depth Job Training for School Managers (ITPGE) scale was carried out by transforming the course’s instructional objectives into observable performance objectives. Performance objectives should indicate what behaviors are expected from participants in the instructional action, after the application of teaching and learning situations, as well as after training, in the workplace (Zerbini et al., 2012). In the analysis stage of the course material, 51 learning objectives were found, distributed over the 11 modules. To compose the ITPGE scale items, the inclusion criterion was that items should be based on observable learning objectives, that is, objectives that suggest the permanence of the behaviors described in them in the workplace after the instructional action. For example, from the learning objective “To monitor and guide the improvement of the teaching and learning process in the classroom, through observation and corresponding feedback dialog, adopting criteria and parameters for guiding results”, the item was written: “I monitor and guide the improvement of the teaching process through observation and corresponding feedback dialog, and the use of criteria and parameters for guiding results”. Another example is the learning objective “Develop or deepen leadership competencies and abilities: motivate the team, provide feedback, manage time, maintain focus, know how to prioritize, communicate and guide problem solving”, from which the item was written: “I execute leadership competencies and abilities, namely: motivate the team, provide feedback, manage time, maintain focus, know how to prioritize, communicate and guide problem solving.”
The exclusion criteria were non-observable learning objectives - those related to understanding, attitudes, or concepts that cannot be directly observed. Some of these objectives, however, could be restated as observable performance objectives. Thus, to compose the items of the scale, 26 objectives were identified and reworded to eliminate details or biases (Borges-Andrade, 2006). They were then described in terms of observable performance, with precision in the action verbs describing the behavior expected of the participant, clarity in the description of the object of action, and clarity in defining the conditions for carrying out the expected behaviors. Finally, the 26 observable objectives made up the ITPGE scale. The scale was subjected to semantic validation and judges’ evaluation to increase the likelihood of measuring what was expected in the course. In the semantic validation stage, the instrument was analyzed by education experts and collaborators from the partner institution who were directly involved in creating and evaluating the target course. Next, in the judges’ evaluation, two school principals, one with a Master’s degree and the other with a PhD in Education, acted as evaluators, examining the items against the criteria of objectivity, simplicity, clarity, pertinence, precision, reliability, credibility, and the possibility of behavioral observation. Based on these analyses, only specific adjustments were necessary, which reinforces the instrument’s suitability for its evaluative purpose; thus, only a few changes were made after the semantic validation and the judges’ evaluation.
Procedures
Data collection. Data collection was carried out entirely remotely, via the internet, using Google Forms, a free tool with security and confidentiality mechanisms that guarantee the protection of the information provided by participants, in accordance with ethical research guidelines. On the platform, the instrument was transformed into a questionnaire to be administered online to research participants. The partner institution sent an e-mail to the 1,603 participants in the New Principals course, inviting them to take part in the survey with the link to access the ITPGE questionnaire. The scale remained available for three months, a period determined by the partner institution itself. Out of a population of 1,603 principals who took part in the course, 248 responded to the questionnaire; thus, the return rate was 15.47%.
Data analysis. Tabachnick and Fidell’s (2012) guidelines were used to carry out the data analysis procedures. Statistical analysis was carried out using SPSS (Statistical Package for the Social Sciences, version 23.0), AMOS (21.0.0), and JAMOVI (2.3.2). Descriptive analyses (mean, standard deviation, mode, minimum, and maximum) and exploratory analyses were performed to investigate the accuracy of data entry, the presence of extreme cases, the frequency distribution of variables and sample size, as well as correlational analyses between the criterion variable and the functional variables.
There were no omissions, since responding to all items of the questionnaire was mandatory before submission via the internet. To verify evidence of validity and reliability of the measuring instrument, Confirmatory Factor Analysis (CFA) was performed on its empirical structure. To judge the goodness of fit of the model, the incremental fit measures (Goodness-of-Fit Index [GFI], Comparative Fit Index [CFI], and Tucker-Lewis Index [TLI]) should have values greater than 0.90 (ideally > 0.95). The residual values (Standardized Root Mean Square Residual [SRMR] and Root Mean Square Error of Approximation [RMSEA]) should be less than 0.08 (ideally < 0.05). For the CFA, SPSS AMOS version 21.0 was used.
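As an illustration only (not part of the study’s SPSS/AMOS workflow), the cutoffs above can be expressed as a simple classification rule. The helper below is hypothetical; it treats values at the boundary (e.g., RMSEA = 0.08) as acceptable, a common reading of these conventions.

```python
def judge_fit(indices: dict) -> dict:
    """Classify each fit index as 'good', 'acceptable', or 'poor'.

    Incremental indices (GFI, CFI, TLI) should exceed 0.90 (ideally 0.95);
    residual indices (SRMR, RMSEA) should stay at or below 0.08 (ideally 0.05).
    """
    incremental = {"GFI", "CFI", "TLI"}
    residual = {"SRMR", "RMSEA"}
    verdict = {}
    for name, value in indices.items():
        if name in incremental:
            verdict[name] = ("good" if value >= 0.95
                             else "acceptable" if value >= 0.90
                             else "poor")
        elif name in residual:
            verdict[name] = ("good" if value <= 0.05
                             else "acceptable" if value <= 0.08
                             else "poor")
    return verdict

# Illustrative call with made-up values:
print(judge_fit({"CFI": 0.96, "RMSEA": 0.04}))  # {'CFI': 'good', 'RMSEA': 'good'}
```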
Ethical Considerations
Regarding the ethical aspects for conducting research, the project was submitted and approved by the local Ethics Committee (CAAE nº 40838720.7.0000.5407). Since the survey was carried out online, before the questionnaires were administered, the principals had to agree to the Free and Informed Consent Form (FICF), which ensured that their identity would remain confidential and anonymous, that their participation would be voluntary, and requested their authorization to use their information in the study. Upon acceptance, the participant had access to the questionnaires; otherwise, a new screen appeared, thanking them for their participation.
Results
The ITPGE scale measures the effect of the content learned on the specific performance of principals in their work routine, that is, performances directly related to the competencies acquired in the course. The instrument consists of 26 items, associated with a frequency scale ranging from 1 (Never) to 5 (Always).
Based on the descriptive results, it was possible to observe that the majority of principals stated that the course had a positive impact on their work, since the performances described in the form of specific competencies to be acquired with the New Principals Course received averages ranging from 4.18 to 4.77, with low standard deviation values for all items. This indicates agreement of opinions on the aspects evaluated. Furthermore, in all items, the highest concentration of responses is in the values 4 (four) and 5 (five), with more than 80% of the responses concentrated in these scores.
The items with the highest averages were 25 - “I identify the importance of food in the learning process” (M = 4.77, SD = 0.45), 24 - “I differentiate the sources of financing and the ways of using public and private financial resources, by aligning their use with the objectives of the Pedagogical Proposal of the School Unit” (M = 4.71, SD = 0.51), 23 - “I relate the financial management practices of the school context to the basic principles of public administration” (M = 4.71, SD = 0.51), and 26 - “I identify the importance of offering school transportation, when necessary, in access to school and its contribution to quality education” (M = 4.65, SD = 0.65).
The items that received slightly lower averages were: 17 - “I promote joint empowerment in the construction of the school unit’s identity through the analysis of existing power forces, as well as the values that guide them” (M = 4.39 and SD = 0.71), 16 - “I establish partnerships within and outside the school community to support the school’s actions, based on shared values and responsibilities” (M = 4.29 and SD = 0.80), and 20 - “I promote the elaboration, implementation and monitoring of the In-Service Training Plan, in partnership with the Education Directorate” (M = 4.18 and SD = 1.01).
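For transparency, the descriptive summaries reported here (item mean, standard deviation, and share of responses at scale points 4 and 5) can be reproduced from raw 1-5 responses. The sketch below uses fabricated responses for a single hypothetical item, not the study’s data.

```python
from statistics import mean, stdev

def item_descriptives(responses):
    """Mean, sample SD, and proportion of responses at scale points 4 or 5."""
    top_two = sum(1 for r in responses if r >= 4) / len(responses)
    return round(mean(responses), 2), round(stdev(responses), 2), round(top_two, 2)

# Fabricated responses for one item on the 1-5 frequency scale:
sample = [5, 5, 4, 5, 4, 4, 5, 3, 5, 4]
print(item_descriptives(sample))  # (4.4, 0.7, 0.9)
```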
Confirmatory factor analyses of the ITPGE scale
The 26 items of the ITPGE scale represent observable objectives of the 11 modules of the evaluated course, which are aligned with three principles and dimensions: Team and People Management, Pedagogical Management, and Administrative Management. Thus, for the analysis of the initial hypothetical three-factor model, the empirical structure comprised 9 items for the first factor (People and Team Management), 10 items for the second factor (Pedagogical Management), and 5 items for the third factor (Administrative Management). The initial hypothetical model presented flaws in goodness of fit, indicating that the confirmatory factor analysis for the ITPGE instrument, without the introduction of modification indexes and re-specifications, does not fit well. Regarding normality, the distribution of the items was, in general, within the reference values (between -2 and 2): skewness ranged from -1.41 to -0.64 and kurtosis from -0.27 to 1.80.
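The normality screening applied here (skewness and kurtosis within the |2| reference values) can be illustrated with simple moment estimators. AMOS uses its own estimators, so exact values may differ slightly; the sketch below is illustrative only.

```python
from statistics import mean

def skew_kurtosis(x):
    """Sample skewness and excess kurtosis via simple moment estimators."""
    n = len(x)
    m = mean(x)
    s2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / s2 ** 1.5, m4 / s2 ** 2 - 3  # skewness, excess kurtosis

def roughly_normal(x, bound=2.0):
    """Screening rule used above: |skewness| and |kurtosis| within 2."""
    sk, ku = skew_kurtosis(x)
    return abs(sk) <= bound and abs(ku) <= bound
```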
The analysis of the modification indexes suggested the introduction of a correlation between errors e20 and e21 (r = .33), both from Factor 3, “Administrative Management”: item 25 - “I identify the importance of food in the learning process” - and item 26 - “I identify the importance of providing school transportation, when necessary, in access to school and its contribution to quality education”. Both deal with actions that can influence quality in the educational process. The model’s goodness-of-fit indexes improved in the re-specified 1 model after inserting this error correlation.
However, as the model’s goodness of fit has not yet shown satisfactory indexes, the correlation between errors e3 and e4 (r = .35) was introduced, present in Factor 1 “People and Team Management”: item 7 - “I carry out interventions with the teaching staff to correct the course of educational actions, through monitoring, follow-up and pedagogical guidance strategies” - and item 8 - “I direct the planning and investment in teaching, technological, cultural and training resources that promote the learning of all students through the analysis of educational and school performance indicators”. The results for the re-specified 2 model show improved fit (Table 1 and Figure 1).
Table 1 Fit Indicators for the Empirical and Re-specified Models of the ITPGE Scale - 3 factors
| Model | χ2 | Df | CMIN/DF | GFI | SRMR | CFI | TLI | RMSEA |
|---|---|---|---|---|---|---|---|---|
| Original | 719 | 249 | 2.8 | 0.80 | 0.06 | 0.88 | 0.87 | 0.09 |
| Re-specified 1 | 696 | 248 | 2.8 | 0.80 | 0.06 | 0.88 | 0.88 | 0.09 |
| Re-specified 2 | 670 | 247 | 2.7 | 0.82 | 0.06 | 0.89 | 0.88 | 0.08 |
Note. χ2 (chi-square), Df (degrees of freedom), CMIN/DF (χ2/df), GFI (Goodness-of-Fit Index), SRMR (Standardized Root Mean Square Residual), CFI (Comparative Fit Index), TLI (Tucker-Lewis Index), RMSEA (Root Mean Square Error of Approximation). N = 234; the re-specified 1 model contains the correlation between the error pair e20 and e21; the re-specified 2 model contains the correlations between the error pairs e3-e4 and e20-e21.

Figure 1: Standardized factor loadings, correlation coefficients, and standard errors of the CFA for the ITPGE instrument.
After all the analyses, the ITPGE scale presented 24 items, distributed into three Factors, with factor loadings ranging from 0.42 to 0.89. The internal consistency index was 0.96 - the three factors explain 62.53% of the total variance of the responses to the instrument items.
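Assuming the internal consistency index reported is Cronbach’s alpha (the estimator is not named in the text), its computation can be sketched as follows; the data used in the test of this sketch are illustrative, not the study’s.

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item).

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))
```

Two items that vary identically across respondents yield an alpha of 1.0, the upper bound of the coefficient.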
In the re-specified 2 model, with the inclusion of correlations between the error pairs e3-e4 and e20-e21, better values of the fit indicators were obtained (Table 1 and Figure 1). In the CFA of the ITPGE Scale, the three-factor solution presented satisfactory and acceptable goodness-of-fit indicators. The residual indexes RMSEA and SRMR presented acceptable results (0.08 and 0.06, respectively), as did the CMIN/DF and GFI indexes (2.7 and 0.82). A CMIN/DF value below 3.0 indicates the plausibility of the model, and the literature tolerates values up to 5. The incremental TLI and CFI indicators presented results of 0.88 and 0.89, respectively, slightly below the ideal threshold of 0.90.
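The analyses were run in SPSS AMOS; purely as an illustration, the re-specified 2 model can be written in lavaan-style syntax (as accepted, for example, by the Python package semopy). Item labels below are placeholders (i1-i24 for the 24 retained items, grouped 9/10/5 across the three factors); the mapping of specific items to factors is an assumption, and the two error covariances correspond to the pairs e3-e4 and e20-e21.

```python
# Hypothetical lavaan-style specification of the re-specified 2 model.
# i1-i24 are placeholder item names, not the study's identifiers.
MODEL_SPEC = """
PeopleTeamManagement     =~ i1 + i2 + i3 + i4 + i5 + i6 + i7 + i8 + i9
PedagogicalManagement    =~ i10 + i11 + i12 + i13 + i14 + i15 + i16 + i17 + i18 + i19
AdministrativeManagement =~ i20 + i21 + i22 + i23 + i24
i7  ~~ i8    # error pair e3-e4 (Factor 1)
i23 ~~ i24   # error pair e20-e21 (Factor 3, course items 25 and 26)
"""
print(MODEL_SPEC.count("=~"), MODEL_SPEC.count("~~"))  # prints: 3 2
```

With semopy, this string would be passed to `semopy.Model(MODEL_SPEC)` and fitted to item-level data; here it serves only to document the factor structure and the added error covariances.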
Discussion
School management plays a fundamental role in the functioning and development of an educational institution. The principal is the protagonist of this process, as they are responsible for leading and managing the school. The importance of training for school principals is clear, as it prepares them to deal effectively with the challenges and demands of educational management. Continuous training is also essential to ensure that principals are prepared to meet the constantly evolving demands of the educational scenario. In this context, it is necessary to monitor the process of transferring the competencies acquired in the instructional actions through training evaluation.
The evaluation of the direct effects of a training action is carried out by measuring the impact of in-depth training on the job. This makes it possible to verify whether the knowledge, skills, and attitudes developed by the trainee are being transferred to their work environment. In addition, it is possible to evaluate whether the results are related to the instructional objectives of the course.
Based on the results found, the data indicate that most principals perceived a positive impact of the course on their work, with emphasis on items for which the averages exceed 4.5, such as item 25, related to the importance of nutrition in learning. These results demonstrate the high frequency of the expected behaviors. Even in items with slightly lower averages, such as item 17, the values remained above 4, reflecting satisfactory performance. The greater variations in standard deviations, observed in items such as 20, suggest differences in perception related to the different school contexts experienced by principals.
Therefore, although there is a variation in perceptions, the overall results demonstrate that the impact of the course was largely positive, with high agreement among participants. This panorama reaffirms that the training contributed significantly to the development of the managerial competencies of school principals.
Furthermore, the results showed two valid and reliable solutions. The first was composed of three factors: (1) People and Team Management (9 items); (2) Pedagogical Management (10 items); and (3) Administrative Management (5 items). The second was formed by two factors: (1) People, Team, and Pedagogical Management (19 items); and (2) Administrative Process Management (5 items). Both structures proved to be statistically reliable and valid measures.
Thus, the results obtained when the ITPGE was applied show high frequency indexes in the behavioral manifestations related to the competencies worked on in the course, which is in line with the theoretical models that conceive of competence as the mobilization of knowledge, skills and attitudes in specific professional contexts (Mangabeira & Ferraz, 2024; Marmitt & Bonotto, 2023). In the case of school managers, these competencies are expressed in strategic, pedagogical, and administrative actions that support educational leadership. The high average attributed to items such as the articulation of financial resources with the pedagogical proposal or the promotion of practices aligned with the principles of public administration points to the internalization of knowledge relevant to management practice. These results reinforce the importance of psychometrically valid instruments in measuring formative impact, allowing the evaluation of competencies to go beyond subjective perception and effectively contribute to improving school management.
Regarding psychometric indicators, the structure of the ITPGE Scale, composed of three factors, presented indicators that allow the model to be considered adjusted. The RMSEA (0.08) and SRMR (0.06) are within the limits considered acceptable, according to recommendations in the specialized literature. The CMIN/DF index was 2.7, a value lower than 3.0, and the literature admits tolerance up to 5.0 (Brown, 2015). The incremental indexes TLI (0.88) and CFI (0.89), although slightly below the ideal values (0.90), demonstrate coherence with the theoretical proposal of the instrument and are close to the established reference standards (Hu & Bentler, 1999). Considering the educational context and the nature of the sample, these indicators are satisfactory and reinforce the validity of the model. Therefore, the results support the adequacy of the scale to measure the managerial competencies of school managers, contributing to evaluation processes and professional development in basic education.
Although the results of this study reinforce the psychometric validity of the proposed scale, some limitations in the fit indexes must be considered. These indexes, although satisfactory, are not free from restrictions that may influence the generalization of the findings, especially in contexts different from the one investigated. Furthermore, although high scores on the items may suggest an association with the effectiveness of the course, it is essential to consider that pre-existing competencies of the participants may also have influenced the results. Thus, the proposed scale has great potential as a psychometric tool, but future research should validate its applicability in different contexts and populations. Another limitation was the impossibility of applying ITPGE hetero-evaluation questionnaires, which could have provided results to compare with those obtained in the self-assessment.
Based on these considerations, future research should explore the application of the scale in different contexts and collect data at two points in time (pre and post), to more accurately evaluate the impact of the competencies developed. In addition, it is recommended that the scale items and response format be refined to ensure that the measure is more sensitive to the nuances of the educational context. These actions will help to improve the tool and validate it in a variety of educational scenarios.
This study confirms the importance of developing a culture of evaluating the results of distance training actions to improve the instructional planning used in the course (Zerbini et al., 2012). Evaluating distance training at the level of in-depth impact on the job is essential to gain a full understanding of the training results, verify the transfer of learning, identify necessary improvements, and make informed strategic decisions. This evaluation provides valuable information for improving the effectiveness of training and justifying future investment in people development.
The main contributions of this study focus on the development and psychometric validation of the scale, a promising tool for evaluating competencies in the educational context. The scale could also have significant implications for educational management and training evaluation by offering an accurate analysis of participants’ competencies, helping to improve educational programs. However, its impact can be expanded by testing it in broader and more diverse contexts.
Finally, it is worth noting that the scale constructed and validated in this study proved to be psychometrically reliable and valid, and could be adjusted and applied at different levels of education. Application to other samples and in different contexts is suggested. Furthermore, given the growing supply of distance education actions, it is hoped that the results presented in this study will contribute to the area of evaluation of distance learning TD&E.