European Journal of Educational Research

EU-JER is a peer-reviewed, online academic research journal.


Publisher (HQ)

Eurasian Society of Educational Research
Christiaan Huygensstraat 44, 7533 XB Enschede, The Netherlands
Research Article

Determining Factors Influencing Indonesian Higher Education Students' Intention to Adopt Artificial Intelligence Tools for Self-Directed Learning Management

Darmono , Rizal Justian Setiawan , Khakam Ma'ruf


  • Pub. date: July 15, 2025
  • Online Pub. date: May 29, 2025
  • Pages: 805-828
  • 82 Downloads
  • 459 Views
  • 0 Citations


Abstract:


Artificial intelligence (AI) has revolutionized higher education. The rapid adoption of artificial intelligence in education (AIED) tools has significantly transformed educational management, specifically in self-directed learning (SDL). This study examines the factors influencing Indonesian higher education students' intention to adopt AIED tools for self-directed learning by combining the Theory of Planned Behavior (TPB) with additional theories. A total of 322 university students from diverse academic backgrounds participated in the structured survey. This study utilized a machine learning approach, Artificial Neural Networks (ANN), to analyze nine factors: attitude (AT), subjective norms (SN), perceived behavioral control (PBC), optimism (OP), user innovativeness (UI), perceived usefulness (PUF), facilitating conditions (FC), perception towards AI (PTA), and intention (IT), measured with a total of 41 questionnaire items. The model demonstrated high predictive accuracy, with SN emerging as the most significant predictor of IT, followed by AT, PBC, PUF, FC, OP, and PTA. User innovativeness was the least influential factor, yielding the lowest predictive accuracy. This study provides actionable insights for educators, policymakers, and technology developers by highlighting the critical roles of social influence, supportive infrastructure, and student beliefs in shaping AIED adoption for SDL. This research not only fills an important gap in the literature but also offers a roadmap for designing inclusive, student-centered AI learning environments that empower learners and support the future of SDL in digital education.

Keywords: Artificial intelligence, artificial neural networks, educational management, intention, self-directed learning.


Introduction

Artificial intelligence (AI) has increasingly enhanced human activity by making processes faster and more efficient (Masyitoh et al., 2024; Tai, 2020). In Indonesia, the adoption of AI technologies gained momentum during the COVID-19 pandemic (Nurmaini, 2021), influencing various sectors, including education (Ahmad et al., 2023; Yusriadi et al., 2023). AI tools now support educational management by facilitating learning activities, improving student outcomes, and reducing operational errors (Bit et al., 2024; Costa et al., 2017; Igbokwe, 2023). Moreover, AI has revolutionized education, especially at the higher education level (B. Chen et al., 2023; Chiu, 2023).

Many applications of AI in education (AIED) have emerged (Choi et al., 2022; S. Wang et al., 2024). AI tools are available to support students in completing tasks they thought were beyond their capabilities (Kasneci et al., 2023). Potential applications of AIED span several areas of educational management across three categories: intelligent tutoring systems, personalized learning, and assessment automation (Sain et al., 2024; Wollny et al., 2021). AIED for intelligent tutoring systems, such as Khan Academy and Duolingo (Bicknell et al., 2024; Ocaña-Fernández et al., 2019), can facilitate student learning through the ability to interact actively with students and provide varied feedback (Kamalov et al., 2023). Furthermore, AIED in personalized learning, like ChatGPT and Sora, is possible given the scalability of AI across students, because AI algorithms such as reinforcement learning can dynamically learn and adjust the learning process accordingly (Kamalov et al., 2023; Leh, 2022; C. Weng et al., 2024). In Indonesia, ChatGPT is the most widely used AI application (Helmiatin et al., 2024); this platform is anticipated to greatly improve the quality of the educational experience (Kuhail et al., 2023). Finally, AIED in assessment or ranking automation provides tremendous relief for students, giving them deeper knowledge and analysis of correct and incorrect answers (Kamalov et al., 2023); University E-learning and iFlyTek are parts of this AIED category (iFlyTek, 2024; Janpla & Piriyasurawong, 2020; Rodriguez-Ascaso et al., 2017). Overall, AI tools have revolutionized higher education in the areas of learning, teaching, and assessment (Michel-Villarreal et al., 2023; Ruiz-Rojas et al., 2023).

These AI tools have also been recognized to provide significant benefits in improving the quality of learning in higher education (Zhou & Zhang, 2024). The application of AIED, especially in promoting students' self-directed learning (SDL), has shown significant contributions (Mahendra et al., 2023). Self-directed learning refers to the process in which individuals take initiative and responsibility for their own learning; under this concept, individuals are free to set goals, select resources, determine what is worth learning, and evaluate the results independently (Loeng, 2020). The features offered by AIED can facilitate students in the learning process, including SDL (Foroughi et al., 2023; Yildirim et al., 2023). However, research on the factors that influence student acceptance of AI remains very limited.

Although the adoption of AI tools has expanded rapidly, research on AIED remains limited, specifically in developing countries such as Indonesia. A Scopus data analysis conducted by Helmiatin et al. (2024) indicates that from 2015 to 2024, research on AI has been predominantly led by authors from North America, Europe, and several Asian countries such as China and India. Moreover, most published studies on AI are concentrated in other disciplines such as computer science (23%), engineering (14.8%), and medicine (13.6%). The growing use of AIED has also raised critical questions regarding the factors that influence students' continued and effective utilization of AI tools (Duong et al., 2024). This underscores a notable research gap in the exploration of AI's role in education within Indonesia, particularly in the context of its application for SDL among university students.

There has been no substantial study investigating the behavioral intentions of Indonesian higher education students or the factors influencing their decisions to use AI tools for SDL. To address this, the Theory of Planned Behavior (TPB) offers a relevant framework for understanding how attitudes, subjective norms, and perceived behavioral control shape behavioral intentions and actions (Ajzen, 1991). These three key factors play a significant role in shaping the decision-making process of users (Jiao & Cao, 2024). The use of AI tools in SDL can be seen as a behavior that can be analyzed using the TPB. However, TPB itself may not fully determine the intention of university students in Indonesia to use AI tools in self-learning, as other factors may have a greater influence.

This study also considers the Technology Readiness (TR) theory, developed by Parasuraman (2000), to explore factors influencing technology acceptance. Optimism and innovativeness of the users, which represent positive readiness, are strong predictors of TR (Kampa, 2023). While TR also provides valuable insights like TPB, it may not fully explain students' intentions, as additional factors could play a significant role.

Additionally, the Technology Acceptance Model (TAM) emphasizes the role of perceived usefulness and perceived ease of use in shaping technology acceptance (Davis, 1986; Venkatesh & Davis, 2000). Past research has shown that perceived ease of use influences perceived usefulness but not vice versa (Chau, 1996; Folkinshteyn & Lennon, 2016; Holden & Karsh, 2010; Park & Park, 2020). Furthermore, existing studies have consistently highlighted ease of use as a key driver of AI adoption in education (Algerafi et al., 2023; Almaiah et al., 2022; N. J. Kim & Kim, 2022). Considering this, perceived ease of use was not surveyed again, and only perceived usefulness is included in this study.

Another theory related to technology use that can be considered is the Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2012). UTAUT states that performance expectancy, effort expectancy, social influence, and facilitating conditions are significant determinants of user acceptance and usage behavior. However, the factors in UTAUT have similarities with some factors in other theories, such as perceived usefulness and perceived ease of use in TAM, and subjective norm in TPB (Chang, 2012; Venkatesh et al., 2003). UTAUT incorporates perceived usefulness into performance expectancy construct, perceived ease of use into effort expectancy, and subjective norm into social influence (Holden & Karsh, 2010; Venkatesh et al., 2003). Based on this, only facilitating conditions will be involved in this study.

Lastly, there is a factor that has emerged regarding a person’s intention to use AI tools, namely perception towards AI (Buabbas et al., 2023; Castagno & Khalifa, 2020). Several recent studies have also considered perception towards AI as one of the factors influencing intention to use it (Akudjedu et al., 2023; Al Omari et al., 2024). This factor needs to be considered in educational research, to find out whether this factor has a significant influence like in other fields.

To model these relationships, this study employs Artificial Neural Networks (ANN), a machine learning technique well-suited for identifying nonlinear behavioral patterns (German et al., 2022). By using ANN, this research not only tests the theoretical framework but also explores the relative importance and predictive strength of each variable.

The specific objective of this study is to explore the factors influencing Indonesian higher education students' intention to adopt AIED tools for SDL, using TPB. Various factors such as optimism, personal innovativeness, perceived usefulness, facilitating conditions, and perception toward AI will be considered for this study. This study is one of the pioneering research to explore the use of AIED in SDL among students using behavioral factors and additional factors to determine the intentions of students as AIED users. The results of this study can deepen the understanding of students' intentions in using AI tools in SDL, provide insights for educators and AI tool developers to optimize learning outcomes, and can help the government in considering facilitating more effective AI integration for higher education in Indonesia.

The remainder of this paper is organized as follows: Section 3 describes the research methodology, covering participant recruitment, data collection, and the survey instruments employed. Section 4 focuses on the analysis and presentation of results. Section 5 offers a discussion of the findings, practical implications, research limitations, and recommendations for future studies. Lastly, Section 6 provides the conclusion of this study.

Conceptual Framework

The conceptual framework explores the factors that potentially impact Indonesian higher education students' intentions to use artificial intelligence tools for SDL and underpins all hypotheses of this study. Figure 1 presents the conceptual framework employed in this study, highlighting the key factors incorporated into the extended TPB. This framework emphasizes the intention to utilize technology, specifically tools of Artificial Intelligence in Education (AIED), to support self-directed learning (SDL) among higher education students in Indonesia. From this conceptualization, eight (8) research hypotheses were formulated.


Figure 1. Research Framework

Each of the factors in Figure 1 is hypothesized to have a direct effect on intention, corresponding to hypotheses H1 through H8. The diagram provides a visual alignment of these constructs, illustrating how behavioral, psychological, and environmental elements interact to shape students' adoption behavior. By integrating these dimensions into a unified model, the framework aims to capture both rational decision-making and contextual influences in students’ use of AI-based learning tools.

The TPB is an explanatory model that has been widely applied to the prediction of intention (Ajzen, 1985, 2020) across a variety of behavioral domains (Li et al., 2022). The original components of TPB, including attitude, subjective norm, and perceived behavioral control, have consistently shown their significance in shaping learners' intention to adopt technology applications for learning (Mohr & Kühl, 2021; Sohn & Kwon, 2020). Furthermore, several previous studies have adopted various TPB constructs to explain users' intention to adopt technologies such as e-commerce (Ozkan & Kanat, 2011) and smart products (Jang & Noh, 2017; J. Song et al., 2018).

Attitude is believed to be a factor influencing a person's behavioral intention in using technology (Nie et al., 2020; Valle et al., 2024). Numerous empirical studies have demonstrated a positive correlation between attitude and intention (J. V. Chen et al., 2019; Obaid & Aldammagh, 2021). Some scholars assert that an individual's attitude toward a particular technology is an important prerequisite in their willingness to adopt it (Faham & Asghari, 2019; Mengi et al., 2024). For instance, Mohr and Kühl (2021) and Sohn and Kwon (2020) identified users' attitudes toward innovative technologies as a key factor in predicting their intention to adopt such technologies. Based on this relevant research, the following hypothesis is proposed:

H1. Attitude significantly affects the intention to adopt AIED tools for SDL.

Subjective norms pertain to the social pressure or influence that affects an individual's decision to engage in a specific behavior (Ajzen, 1985). The more support individuals receive from influential people or groups in their social environment, the stronger their intention to engage in the behavior of interest (S. Wang et al., 2024). Previous studies have shown that subjective norms have a positive impact on behavioral intentions in the use of technology, such as the use of e-learning (Chu & Chen, 2016), AI technology in secondary schools (Chai et al., 2020), food delivery using drones (Choe et al., 2021), and patent applications (Lin & Yu, 2018). Sohn and Kwon's (2020) study revealed that AI products are highly sought after even though respondents have limited practical experience, as their adoption is heavily influenced by others' opinions, highlighting the strong impact of subjective norms. Thus, based on this premise, this study proposes the following hypothesis:

H2. Subjective norm significantly affects the intention to adopt AIED tools for SDL.

Perceived behavioral control (PBC) is an individual's assessment of how easy it is to perform a desired behavior (Ajzen, 1985; Polyportis, 2023). Individuals with higher PBC over resources, ease, and ability toward behavioral goals are more likely to perform the behavior (Jiao & Cao, 2024). Evidence from certain studies suggests that PBC serves as a positive predictor of technology adoption, such as mobile English learning (Nie et al., 2020), ChatGPT (Polyportis, 2023), and smart farming systems (Mohr & Kühl, 2021). In the context of AIED, individuals' perceived behavioral control may positively predict their intention to use AIED for SDL. Given these relevant studies, the following hypothesis was established:

H3. Perceived behavioral control significantly affects the intention to adopt AIED tools for SDL.

In the theory of Technology Readiness (TR), the optimism and innovativeness of users can encourage them to adopt new technologies and hold positive intentions toward those technologies (Hassan et al., 2024; Parasuraman, 2000; Yen, 2005). These factors often affect perceived ease of use and perceived usefulness (Buyle et al., 2018; M.-F. Chen & Lin, 2018; T. Kim & Chiu, 2019). Individuals who are optimistic about new technologies generally tend to have positive intentions to use them because they consider new technologies easier to use and more interesting (Jo & Baek, 2023; Madar et al., 2019). The same intention occurs in individuals with user-innovativeness motivations toward technology (Strzelecki, 2023). Based on this, the following two hypotheses are formulated:

H4: Optimism significantly affects the intention to adopt AIED tools for SDL.

H5: User innovativeness significantly affects the intention to adopt AIED tools for SDL.

Perceived usefulness refers to the extent to which an individual holds the belief that utilizing a particular system will improve their job performance and efficiency (Bhattacherjee, 2001; Chatterjee et al., 2021; Venkatesh & Davis, 2000). Perceived usefulness is an important motivator for user intention to use a new technology system (Sun et al., 2022). The perceived usefulness of an AI device depends on the extent to which learners' use of the AI device leads to improvements in their learning performance (Zhou & Zhang, 2024). In studies related to students' interest in using m-learning technology in Jordan (Althunibat, 2015) and e-learning in Indonesia (Mailizar et al., 2021), it was found that perceived usefulness is a factor that greatly influences students' interest in using the technology in learning. Therefore, the following hypothesis is proposed:

H6: Perceived Usefulness significantly affects the intention to adopt AIED tools for SDL.

Facilitating conditions can be defined as the user's assessment of the resources and assistance accessible when engaging in a technological action (Venkatesh et al., 2012). They also comprise organizational and technical infrastructure aspects (Chawla & Joshi, 2019). In several studies, facilitating conditions strongly influence users' intention to utilize technology, such as m-banking (Oliveira et al., 2014), online virtual lectures (Shuhaiber, 2016), mobile-assisted language learning (Ebadi & Raygan, 2023), dynamic mathematics software (Yuan et al., 2023), and AI tools for teachers (Velli & Zafiropoulos, 2024). Based on this potential, the following hypothesis is proposed:

H7: Facilitating condition significantly affects the intention to adopt AIED tools for SDL.

Perception towards AI refers to an individual's or group's understanding and beliefs about AI (Lugito et al., 2024), including the benefits of AI (Brauner et al., 2023). Students' perception towards AI also positively impacts their intention to continue using it in learning, because a good perception of AI makes them trust AI for learning (Liu & Huang, 2024). Conversely, if students' perception towards AI is low, their interest in using AI decreases (Fošner, 2024). Accordingly, the following hypothesis is proposed:

H8: Perception towards AI significantly affects the intention to adopt AIED tools for SDL.

Methodology

The study surveyed Indonesian higher education students to understand their intention to adopt artificial intelligence (AI) tools for self-directed learning (SDL) management. To gain insights from multiple perspectives, we used a purposive sample, targeting individuals aged 18 and above studying in higher education with different academic backgrounds and experiences using AI tools. All participants had experience using AI tools in SDL.

Following the recommendations of Bentler and Chou (1987) and Nicolaou and Masoner (2013), which advocate for a minimum of 5–10 responses per estimated parameter and recognizing the emphasis on robust sample sizes highlighted by Assele et al. (2023) and Bujang and Adnan (2016), the study targeted a minimum of 300 participants. This substantial sample size was designed to enhance the statistical power of the research, enabling the detection of subtle effects and providing more precise estimates of population parameters. Consequently, this approach contributes to the generalizability and reliability of the study's findings.
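As a quick check, the rule of thumb above can be worked out numerically. The following minimal Python sketch assumes the 41 questionnaire items stand in for the estimated parameters; this is an illustrative reading of the 5–10-responses-per-parameter guideline, not the authors' exact calculation:

```python
n_items = 41                        # estimated parameters (questionnaire items)
low, high = 5 * n_items, 10 * n_items   # 5-10 responses per parameter
target = 300                        # the study's minimum target sample

print(low, high, target >= low)     # 205 410 True
```

Under this reading, the 300-participant target clears the lower bound of the guideline, and the achieved sample of 322 does as well.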

Participants

Table 1 presents an overview of the demographic characteristics of the respondents, comprising a total of 322 participants. Most respondents were female (54.35%) and aged 20–24 years (45.03%). Nearly half were pursuing a bachelor’s degree (49.38%), with a significant proportion enrolled in master’s programs (31.99%). The largest academic fields represented were engineering (27.02%) and education (23.29%).

Then, most of the participants (83.85%) were students studying at universities in Java. Although this region covers only 7% of Indonesia's territory, it is the most densely populated and most digitally developed region of the country (Buchori et al., 2017). Furthermore, based on data from the Central Statistics Agency of Indonesia, there are 1,409 universities in Java, which accommodate 5,349,807 students, or 63.18% of all students throughout Indonesia (Badan Pusat Statistik [Central Bureau of Statistics], 2023). Therefore, it is reasonable that the majority of respondents are concentrated in this region.

All participants had prior experience using AIED tools in learning, and the majority reported that the AIED type they first used was personalized learning. Most respondents have less than one year of experience using AIED in self-learning (50.62%) and most use these AIED tools weekly (54.66%).

Table 1. Respondents Demographic (N = 322)

Characteristics Category Amount Percentage (%)
Gender Female 175 54.35
  Male 147 45.65
Age 19 years old or less 38 11.80
  20 – 24 years old 145 45.03
  25 – 29 years old 73 22.67
  30 – 34 years old 28 8.70
  35 – 39 years old 24 7.45
  ≥ 40 years old 14 4.35
Degree Level (On Going) Associate Expert 31 9.63
  Bachelor 159 49.38
  Profession 6 1.86
  Master 103 31.99
  Doctoral 21 6.52
  Specialist 2 0.62
Academic Field(On Going) Arts 16 4.97
  Business 19 5.90
  Education 75 23.29
  Engineering 87 27.02
  Medical 14 4.35
  Natural Science 48 14.91
  Social Science 41 12.73
  Other 22 6.83
University Location (Island) Java 270 83.85
  Kalimantan 11 3.42
  Sumatera 32 9.94
  Sulawesi 8 2.48
  Other 1 0.31
Have you ever used AIED? Yes 322 100
  No 0 0
What type of AIED was first used for self-learning? Intelligent Tutoring System (Ex: Duolingo, Khan Academy) 89 27.64
  Personalized Learning (Ex: ChatGPT, Absorb LMS, Docebo) 218 67.70
  Automated Assessment (Ex: University E-learning, iFlyTek) 15 4.66
Experience using AIED for self-learning < 1 year 163 50.62
  1-2 years 117 36.34
  > 2 years 42 13.04
Frequency of using AIED for self-learning Daily 44 13.66
  Weekly 176 54.66
  Monthly 102 31.68

Questionnaire

This study employed a combination of closed-ended questions (Connor Desai & Reimers, 2019) and self-reflection prompts (Brownhill, 2021) as indicators in an online survey format to identify potential participants for further data analysis. Online surveys offer several advantages, including convenience for respondents, reduced potential for bias, and enhanced privacy protection, particularly when addressing sensitive topics (Kays et al., 2013; McNeeley, 2012). Responses were collected using a five-point Likert scale ranging from “strongly disagree” to “strongly agree.” The questionnaire was administered online to Indonesian higher education students from August to December 2024. To ensure accessibility and mitigate sample bias, the survey was disseminated across various online social media groups using Google Forms at different times and days throughout the data collection period (S. Singh & Sagar, 2021).

Before initiating large-scale data collection, a pilot test of the questionnaire was conducted with 30 respondents to assess its reliability, as recommended by Bujang and Adnan (2016). These participants were selected to represent a range of academic backgrounds and levels of familiarity with AI tools, to ensure that the questionnaire was understandable to a wide range of respondent profiles.

During the pilot test, respondents were asked to complete the full survey and provide qualitative feedback on aspects such as item clarity, question relevance, ambiguity of wording, and appropriateness of the response scale. Some participants noted difficulty interpreting constructs such as "user innovativeness" or "perceived usefulness" without clear definitions on the form. Based on this feedback, we added a brief definition of each construct to the form so that respondents could understand what was meant and answer each item appropriately.

In addition to reviewing the qualitative feedback and following up with improvements, quantitative analysis was also conducted. Reliability checks were performed using IBM SPSS 26, yielding an average Cronbach's alpha of .794 from the pilot survey, exceeding the commonly accepted threshold of .7 (Taber, 2018). These results indicate good internal consistency among the indicators, confirming the suitability of the research instrument for subsequent data collection. Table 2 presents the latent variables and their corresponding indicators utilized in this study.
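The Cronbach's alpha statistic reported above can also be computed outside SPSS. The sketch below implements the standard formula in Python; the 30×5 response matrix is synthetic and purely illustrative, not the study's pilot data:

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                          # number of items in the construct
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative pilot-style data: 30 respondents, 5 items on a 1-5 scale,
# built so the items correlate (a shared base score plus small noise)
rng = np.random.default_rng(42)
base = rng.integers(1, 6, size=(30, 1))
noise = rng.integers(-1, 2, size=(30, 5))
pilot = np.clip(base + noise, 1, 5)
alpha = cronbach_alpha(pilot)
```

A perfectly consistent construct (all items identical) yields alpha = 1, which makes a convenient sanity check for the function.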

Table 2. Latent Variables and Questionnaire Items (Five-Point Likert Scale)

Latent Variable Item Code Items Reference
Attitude AT1 I believe that AI tools will improve my tasks and assignments. Valle et al. (2024)
  AT2 I think that AI tools contribute to more productivity results. Valle et al. (2024)
  AT3 I think using AI tools is positive for my work. Valle et al. (2024)
  AT4 I believe that AI will improve my life as a student. Valle et al. (2024)
  AT5 AI can be used as a quality control system to evaluate the learning. Mengi et al. (2024)
Subjective Norm SN1 My teachers believe it is necessary to learn how to use AI tools in learning. S. Wang et al. (2024)
  SN2 My senior believes it is necessary to learn how to use AI tools in learning. S. Wang et al. (2024)
  SN3 My classmate believes it is necessary to learn how to use AI tools in learning. S. Wang et al. (2024)
  SN4 My family supports me in learning how to use AI tools. S. Wang et al. (2024)
  SN5 People I admire or look up to often use AI, which makes me more likely to do the same. Chai et al. (2020)
Perceived Behavioral Control PBC1 Learning using AI is relatively easy for me. Polyportis (2023)
  PBC2 Using AI Tools as a student is entirely within my control. Polyportis (2023)
  PBC3 I have the knowledge and the ability to make use of AI Tools as a student. Polyportis (2023)
  PBC4 I could accept AI as a teacher and good friend in learning. Suh and Ahn (2022)
Optimism OP1 Using AI tools in learning activities is interesting. Kampa (2023)
  OP2 I am sure AI tools can give me more freedom of mobility in learning. Kampa (2023)
  OP3 I am confident that I can do many tasks on time because of AI. Kampa (2023)
  OP4 I like the idea of using AI tools in education. Kampa (2023)
User Innovativeness PI1 I like to experiment with new AI tools. Strzelecki (2023)
  PI2 When I hear about a new AI tool, I look for ways to experiment and try it in learning. Strzelecki (2023)
  PI3 I am often among the first in my social circle to try out a new AI tool. Strzelecki (2023)
  PI4 Overall, I do not hesitate to try out a new AI tool, especially in learning activities. Strzelecki (2023)


Table 2. Continued

Latent Variable Item Code Items Reference
Perceived Usefulness PUF1 Learning with AI tools can improve my learning efficiency. Zhou and Zhang (2024)
  PUF2 Learning with AI tools can help me with my study tasks. Zhou and Zhang (2024)
  PUF3 My experience of using AI tools for online learning is very satisfactory. Zhou and Zhang (2024)
  PUF4 AI tools produce more good suggestions than bad. Suh and Ahn (2022)
  PUF5 AI tools have features which can solve my problems in understanding a lesson. Suh and Ahn (2022)
Facilitating Condition FC1 AI tools are compatible with the technologies I have and use. Velli and Zafiropoulos (2024)
  FC2 When I have problems using AI, some colleagues or experts are ready to help me. Velli and Zafiropoulos (2024)
  FC3 I have the financial resources necessary to subscribe to AI tools. Shuhaiber (2016)
  FC4 I have access to a suitable place to use Wi-Fi or a device, in case I face issues with my internet connection and personal device. Ebadi and Raygan (2023)
  FC5 My schools provide appropriate lesson and equipment conditions regarding AI in learning. Y. Song and Wang (2024)
Perception Towards AI PTA1 I trust AI because I understand its principles and limitations in education. Buabbas et al. (2023)
  PTA2 I feel all students should study with AI as it will benefit their career. Buabbas et al. (2023)
  PTA3 I feel comfortable using AI in learning. Buabbas et al. (2023)
  PTA4 I believe AI will replace specializations in education within my lifetime. Buabbas et al. (2023)
Intention IT1 I am willing to learn about the experience of AI tools by learning from others. An et al. (2023)
  IT2 I am willing to learn the case of AI education applications from the Internet. An et al. (2023)
  IT3 I am happy to share my AI knowledge and experience in learning activity with others. An et al. (2023)
  IT4 I intend to use AI tools for learning in the future. An et al. (2023)
  IT5 I will keep myself updated with the latest AI tools in learning. Chai et al. (2020)

Artificial Neural Networks (ANN) Model

El-Sefy et al. (2021) highlighted that a neural network can serve as an optimal tool when the model has been properly trained. This study uses an Artificial Neural Network (ANN) instead of traditional Structural Equation Modeling (SEM) to model students' behavioral intention, due to the unique advantages ANN offers in handling complex linear and nonlinear relationships (Aghaei et al., 2023). In addition, Zabukovšek et al. (2018) and Sohaib et al. (2020) reported that ANN can model relationships with higher predictive accuracy than SEM methods. While SEM is well-suited to testing theory-based linear paths and causal structures, it relies on several assumptions, including data normality, linearity, and measurement error constraints. In contrast, ANN does not require such assumptions and is more flexible in capturing nonlinear patterns and interactions between variables, making it very effective for modeling complex human behavior (Guo et al., 2025). In addition, SEM does not perform well on big data, whereas ANN can process it (Grønholdt & Martensen, 2005). Thus, ANN was chosen for this study.


Figure 2. Methodological Flowchart

The researchers utilized Python and the Spyder software to run the ANN model. The model implemented in this study utilized one input layer, one hidden layer, and one output layer (Hegde & Rokseth, 2020; Zhong et al., 2021). Eight nodes were used in the input layer to represent the eight independent variables: AT, SN, PBC, OP, UI, PUF, FC, and PTA. The number of hidden-layer nodes varied from 10 to 50 based on the design of the experiment. These nodes connect to the output layer's single node, which represents the dependent variable IT. The ANN model employed a feed-forward process, considering various activation functions and optimizers for the forward- and back-propagation procedures. Figure 2 presents the methodological flowchart of the detailed process.
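The 8-input, one-hidden-layer, single-output architecture described above can be sketched as a feed-forward pass in NumPy. This is an illustrative skeleton, not the trained model: the random weights, the 10-node hidden layer, and the ReLU/Sigmoid pairing are one combination from the paper's candidate lists, chosen here for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, b1, W2, b2):
    """One hidden layer (ReLU) feeding a single output node (sigmoid) for IT."""
    hidden = relu(X @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

n_inputs, n_hidden = 8, 10   # 8 predictors (AT..PTA); 10-50 hidden nodes were tried
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

# Illustrative inputs: 322 respondents with Likert-scaled construct scores
X = rng.uniform(1.0, 5.0, size=(322, n_inputs))
y_hat = forward(X, W1, b1, W2, b2)   # predicted intention, each value in (0, 1)
```

In practice the weights would be fitted by back-propagation (here, the Adam optimizer named later in the methodology) rather than drawn at random.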

Data Pre-Processing

Before the statistical and machine learning analysis, the collected data were pre-processed. Respondents were required to answer all 41 items; in the Google Form settings, every question was set as mandatory, so a form could not be submitted with unanswered questions. The researchers therefore worked with 13,202 data points (322 respondents × 41 items each). A check in IBM SPSS Statistics confirmed that there were no missing values. The data were then refined through correlation analysis, a statistical method for detecting relationships between variables or datasets, applying a threshold of .20 for the correlation coefficient and a p-value of .05 to identify significant indicators (Franco-Gonçalo et al., 2024).
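The filtering step just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic `strong` and `noise` indicators are assumptions, and the hardcoded 1.967 approximates the two-tailed .05 critical value of Student's t with n − 2 = 320 degrees of freedom.

```python
import numpy as np

def keep_indicator(x, y, r_min=0.20):
    """Keep an indicator only if |r| >= .20 and the correlation is
    significant at p < .05 (t-test on the Pearson coefficient)."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    # t-statistic for Pearson r; 1.967 ~ two-tailed .05 critical value
    # of Student's t with n - 2 = 320 degrees of freedom.
    t = abs(r) * np.sqrt((n - 2) / (1 - r**2))
    return abs(r) >= r_min and t > 1.967

rng = np.random.default_rng(1)
y = rng.normal(size=322)
strong = y + rng.normal(scale=0.5, size=322)  # clearly related indicator
noise = rng.normal(size=322)                  # unrelated indicator

print(keep_indicator(strong, y), keep_indicator(noise, y))
```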

ANN Optimization

Following data pre-processing, the aggregated data was optimized to implement the ANN, a supervised machine learning algorithm that mimics the biological nervous system using artificial neurons (Njock et al., 2021). The neural network mechanism operates through mathematical functions designed to detect patterns within large datasets in a feed-forward manner (Yousefzadeh et al., 2021). The ANN's input layer comprises artificial neurons that represent the aggregated data, which is subsequently processed to identify non-linear relationships as it passes through hidden layers before reaching the output layer. German et al. (2022) highlight that ANN is highly effective for analyzing non-linear correlations and serves as a valuable tool for studying human behavior.

Based on various studies that utilize machine learning algorithms to predict human behavior, as outlined in Table 3, the activation functions evaluated for the hidden layer include Swish (Janjua et al., 2023; Ramachandran et al., 2017), ReLu (Eckle & Schmidt-Hieber, 2019; Janjua et al., 2023), and Tanh (Lederer, 2021; Maurya et al., 2023). For the output layer, the activation functions considered are Softmax (Shatravin et al., 2023; Zheng et al., 2023), and Sigmoid (Elfwing et al., 2018; Shatravin et al., 2022). The Adam optimizer was employed during the initial optimization phase (Irfan et al., 2023; Yang, 2024). Each parameter combination underwent 10 iterations across 150 epochs, taking into account the hidden layer nodes (Gumasing et al., 2023).

The train-test split ratio remained constant at 80/20, aligning with the Pareto principle. To ensure a statistically robust evaluation, a total of 7,200 model runs were conducted. Each parameter combination was executed 10 times, with each run training the model for 150 epochs. This comprehensive approach aims to identify the optimal ANN configuration for this study (Gumasing et al., 2023; Zhong et al., 2021).
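The design-of-experiments loop implied above can be sketched as below. The concrete node grid (10 to 50 in steps of 10) is an assumption for illustration; the paper's actual grid was larger, since it reports 7,200 runs in total, and `train_model` is a hypothetical placeholder for the ANN training call.

```python
import itertools

hidden_activations = ["swish", "relu", "tanh"]  # Table 3
output_activations = ["softmax", "sigmoid"]     # Table 3
optimizers = ["adam"]                           # Table 3
hidden_nodes = range(10, 51, 10)                # illustrative grid
n_repeats, n_epochs = 10, 150                   # 10 iterations x 150 epochs

runs = 0
for act_h, act_o, opt, nodes in itertools.product(
        hidden_activations, output_activations, optimizers, hidden_nodes):
    for repeat in range(n_repeats):
        # train_model(act_h, act_o, opt, nodes, epochs=n_epochs,
        #             split=0.8)  # 80/20 train-test split
        runs += 1

print(runs)  # 3 * 2 * 1 * 5 * 10 = 300 runs for this illustrative grid
```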

Table 3. ANN Parameters

Items Parameters References
Hidden Layer Activation Functions Swish (Janjua et al., 2023; Ramachandran et al., 2017)
  ReLu (Eckle & Schmidt-Hieber, 2019; Janjua et al., 2023)
  Tanh (Lederer, 2021; Maurya et al., 2023)
Output Layer Activation Functions SoftMax (Shatravin et al., 2023; Zheng et al., 2023)
  Sigmoid (Elfwing et al., 2018; Shatravin et al., 2022)
Optimizers Adam (Irfan et al., 2023; Yang, 2024)

Results

Constructs Validity and Reliability

The reliability and validity of the measurement model were evaluated using various statistical metrics. Table 4 provides an overview of the descriptive statistics, including the factor loadings, Cronbach’s alpha, composite reliability, and average variance extracted (AVE) for each latent variable in the model.

The factor loadings indicate the strength of the relationship between each indicator and its corresponding latent variable. All factor loadings exceeded the recommended threshold of .7, confirming that the indicators effectively measured their respective constructs (Hair et al., 2019). Internal consistency and reliability were assessed using Cronbach's alpha and composite reliability, both of which should surpass the minimum threshold of .70 for acceptability (Taber, 2018). As shown in Table 4, all latent variables achieved Cronbach's alpha and composite reliability values ranging from .718 to .926, demonstrating satisfactory reliability. Convergent validity, measured by the average variance extracted (AVE), reflects the extent to which indicators within a construct are correlated. An AVE value above .5 is deemed acceptable, indicating that the indicators adequately capture the intended construct (dos Santos & Cirillo, 2021). The AVE values for all latent variables in this study ranged from .539 to .713, meeting this criterion.

These results affirm the strong psychometric properties of the measurement model. The robust factor loadings, Cronbach's alpha, composite reliability, and AVE values confirm the reliability and validity of the indicators, ensuring the quality of the data for further analysis (Hair et al., 2019).
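As an illustration of the metrics discussed above, the standard formulas for composite reliability, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)), and average variance extracted, AVE = mean(λ²), can be computed directly from standardized loadings. Note that values obtained this way may differ slightly from those reported in Table 4, which depend on the estimator used by the authors' software.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum)^2 + sum of error variances)."""
    lam = np.asarray(loadings)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings)
    return (lam**2).mean()

at = [0.856, 0.831, 0.869, 0.840, 0.775]  # Attitude loadings from Table 4
print(round(ave(at), 3))                   # ~ .697, above the .5 threshold
print(round(composite_reliability(at), 3)) # above the .70 threshold
```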

Table 4. Factor Loading, Cronbach’s Alpha, Composite Reliability, and Average Variance Extracted

Latent Variable Item Code Factor Loading Cronbach’s Alpha Composite Reliability Average Variance Extracted
Attitude AT1 .856 .892 .980 .681
  AT2 .831      
  AT3 .869      
  AT4 .840      
  AT5 .775      
Subjective Norm SN1 .818 .753 .834 .586
  SN2 .730      
  SN3 .738      
  SN4 .850      
  SN5 .822      
Perceived Behavioral Control PBC1 .814 .718 .823 .539
  PBC2 .760      
  PBC3 .721      
  PBC4 .790      
Optimism OP1 .832 .729 .835 .637
  OP2 .834      
  OP3 .711      
  OP4 .750      
User Innovativeness PI1 .822 .794 .888 .678
  PI2 .881      
  PI3 .783      
  PI4 .787      

Table 4. Continued

Latent Variable Item Code Factor Loading Cronbach’s Alpha Composite Reliability Average Variance Extracted
Perceived Usefulness PUF1 .746 .853 .897 .655
  PUF2 .834      
  PUF3 .777      
  PUF4 .859      
  PUF5 .841      
Facilitating Condition FC1 .830 .849 .871 .640
  FC2 .828      
  FC3 .770      
  FC4 .788      
  FC5 .785      
Perception Towards AI PTA1 .772 .835 .869 .660
  PTA2 .716      
  PTA3 .826      
  PTA4 .831      
Intention IT1 .861 .920 .926 .713
  IT2 .781      
  IT3 .886      
  IT4 .834      
  IT5 .853      

Final ANN Model Plots and Results

To determine the optimal ANN architecture, the training process was conducted over a sufficient number of epochs to ensure the stability of the loss function across both the training and test sets. This approach helps prevent overfitting and ensures the selection of a model that generalizes well to unseen data. Figure 3 shows the chosen ANN architecture, which includes multiple input factors, a hidden layer with 80 nodes, and an output node representing IT.


Figure 3. Optimum ANN Model

With the optimized parameters, the model achieved a remarkable peak accuracy of 98.12% on the SN variable. The training and test loss plots for this variable demonstrate the absence of overfitting or underfitting, consistent with the patterns observed for the AT, PBC, PUF, FC, OP, and PTA variables. The progressive decrease in training loss across epochs, accompanied by a similar trend in validation loss with minimal divergence, reflects a well-fitted model (Aliferis & Simon, 2024; Gavrilov et al., 2018). As Figure 4 shows, training and validation loss consistently decrease together with only a small gap between them, the validation loss remaining slightly above the training loss.


Figure 4. Example High-Accuracy Plot (SN)

Table 5 provides a comprehensive overview of the optimization process performed for the ANN, showing the optimal parameters identified for each feature. Drawing on insights from various studies, these parameters were carefully evaluated and ranked by average test performance, which indicates the relative importance of each feature with respect to the dependent variable (German et al., 2022). The ANN model with the highest accuracy and the lowest standard deviation is considered to best reflect the effect of a feature on the dependent variable.

Table 5. ANN Summary of Results

Factors Node H Activation H Activation O Optimizer Average Train Train – Std.Dev. Average Test
SN 50 tanh softmax adam 0.3268 0.0419 98.12%
AT 50 swish softmax adam 0.3046 0.0407 95.91%
PBC 50 swish sigmoid adam 0.3129 0.0288 95.40%
PUF 50 relu sigmoid adam 0.3625 0.0324 94.43%
FC 50 tanh softmax adam 0.3622 0.0382 94.39%
OP 50 swish softmax adam 0.3420 0.0361 93.70%
PTA 50 relu softmax adam 0.3423 0.0523 92.39%
UI 50 swish softmax adam 0.3581 0.0664 83.11%

Each row in Table 5 represents a separate model run for a single predictor, showing the technical configuration and the resulting predictive accuracy. The key parameters include the number of hidden layer nodes (Node H), the activation functions used in both the hidden and output layers, the optimizer (in this case, Adam), and performance metrics such as average training accuracy, training standard deviation, and average test accuracy.

In practical terms, the average test accuracy reflects how well each variable predicts the intention (IT) when evaluated on unseen data. For instance, subjective norms (SN) achieved the highest accuracy at 98.12%, indicating it is the most reliable predictor of students’ behavioral intention in this study. Other variables like attitude (AT), perceived behavioral control (PBC), and perceived usefulness (PUF) also showed high accuracy levels (above 94%), suggesting strong influence and predictive consistency. The low standard deviation values (Train – Std.Dev.) across variables indicate that the model’s performance is stable and not highly sensitive to data fluctuations. Conversely, User Innovativeness (UI) yielded the lowest predictive accuracy (83.11%) and the highest standard deviation, suggesting that this factor may not consistently influence intention and might introduce variability or noise into the model. Overall, the ANN configuration in this study demonstrates robust and reliable performance, especially for variables closely tied to social and motivational factors.

Based on the processed data, seven variables in the ANN model demonstrated high test accuracy, as presented in Table 5. However, one variable, UI, exhibited potential signs of overfitting, as evidenced by the individual test-train loss plot shown in Figure 5. Overfitting occurs when the training loss decreases significantly while the validation loss either increases or plateaus (Aliferis & Simon, 2024). This unstable pattern, in which validation and training loss do not decrease together, indicates the model's inability to capture the underlying patterns, resulting in weak accuracy (Gavrilov et al., 2018). In this study, the lower accuracy and greater variability associated with the UI factor suggest overfitting, meaning the model may have captured patterns specific to this dataset rather than patterns applicable to broader student populations.
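The diagnostic just described, with training loss falling while validation loss stalls or rises, can be expressed as a simple rule of thumb. This sketch is illustrative only; the window size and the synthetic loss curves are assumptions, not the study's data.

```python
def looks_overfit(train_loss, val_loss, window=10):
    """Flag overfitting if, over the last `window` epochs, training loss
    decreased but validation loss did not."""
    train_trend = train_loss[-1] - train_loss[-window]
    val_trend = val_loss[-1] - val_loss[-window]
    return train_trend < 0 and val_trend >= 0

# Well-fitted run (e.g. SN): both losses decrease together.
good_train = [1 / (e + 1) for e in range(150)]
good_val = [1.1 / (e + 1) for e in range(150)]

# Overfit run (e.g. UI): validation loss turns upward late in training.
bad_train = [1 / (e + 1) for e in range(150)]
bad_val = [1 / (e + 1) for e in range(100)] + [0.01 + 0.001 * e for e in range(50)]

print(looks_overfit(good_train, good_val))  # False
print(looks_overfit(bad_train, bad_val))    # True
```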


Figure 5. Example Overfitting Plot (UI)

Result Validation

The ANN model effectively validated the hypothesized relationships between the various factors and IT. Table 6 displays the test accuracy for each hypothesis. Seven relationships achieved high accuracy with low standard deviations, indicating statistically significant results supported by the ANN method. The one relationship below 90% accuracy was UI to IT.

Table 6. Validation of Hypotheses for ANN

H Factors Average Test Test – Std.Dev. Result Hypothesis
H2 SN → IT 98.12% 0.0143 Positive Accepted
H1 AT → IT 95.91% 0.0265 Positive Accepted
H3 PBC → IT 95.40% 0.0206 Positive Accepted
H6 PUF → IT 94.43% 0.0219 Positive Accepted
H7 FC → IT 94.39% 0.0294 Positive Accepted
H4 OP → IT 93.70% 0.0335 Positive Accepted
H8 PTA → IT 92.39% 0.0254 Positive Accepted
H5 UI → IT 83.11% 0.0473 Positive Relative Accepted

To assess the overall performance of the ANN model, a Taylor diagram (Figure 6) was generated. The Taylor diagram has emerged as a valuable tool in the field of machine learning, enabling researchers to compare the performance of various algorithms across multiple metrics (Anwar et al., 2024; Ghalami et al., 2020). This approach is particularly well suited to the present study, as it allows a comprehensive evaluation of the chosen ANN model. The diagram evaluates the similarity between modeled and observed values by considering three key statistics: correlation coefficient, standard deviation, and centered root mean square error (RMSE). From the diagram, the models exhibited robust significance, characterized by correlation values exceeding 80% and RMSE values below 20%. These results underscore consistent and reliable findings, with strong correlations indicating considerable accuracy rates, thus affirming the reliability of the ANN model (Izzaddin et al., 2024).

Furthermore, the Taylor diagram in Figure 6 allows simultaneous comparison of three critical statistical metrics: the standard deviation (plotted along the radial axes), the correlation coefficient (represented by curved lines), and the centered RMSE (shown as concentric red arcs). In this visualization, each symbol represents one predictor variable (e.g., SN, AT, PBC). The closer a point is to the reference point (the observed or ideal value), the higher the correlation and the lower the error, signaling a stronger and more accurate prediction.
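The three statistics a Taylor diagram juxtaposes can be computed directly for a modeled series against an observed one, as in this illustrative sketch (synthetic data, not the study's). For a perfect model the point sits exactly on the reference: correlation 1 and centered RMSE 0.

```python
import numpy as np

def taylor_stats(observed, modeled):
    """Return the Taylor-diagram statistics: Pearson correlation, the two
    standard deviations, and the centered (bias-removed) RMSE."""
    o = np.asarray(observed, dtype=float)
    m = np.asarray(modeled, dtype=float)
    r = np.corrcoef(o, m)[0, 1]
    std_o, std_m = o.std(), m.std()
    # Centered RMSE removes each series' mean before measuring the error.
    crmse = np.sqrt(np.mean(((m - m.mean()) - (o - o.mean())) ** 2))
    return r, std_o, std_m, crmse

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
r, std_o, std_m, crmse = taylor_stats(obs, obs)  # model == observation
print(r, crmse)  # 1.0 0.0
```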


Figure 6. ANN Taylor Diagram

Moreover, as illustrated in the diagram, most variables, including subjective norms (SN), attitude (AT), and perceived behavioral control (PBC), cluster near the reference point, reflecting high correlation coefficients (above .95) and low standard deviation and RMSE values. This confirms the ANN model’s robustness and precision in predicting the intention to adopt AIED tools. Conversely, the user innovativeness (UI) variable is plotted farther from the reference point, indicating weaker predictive accuracy and potential overfitting. Overall, the Taylor diagram provides a concise visual summary that validates the ANN model’s high performance and helps identify which behavioral factors contribute most to the intention to use AIED in self-directed learning.

Discussion

Eight factors were analyzed. Subjective Norm (SN) emerged as the most significant factor affecting IT, at 98.12% accuracy. AT, PBC, PUF, FC, OP, and PTA also influenced IT with high significance, each with an accuracy above 90%. However, UI was found not to be significant for IT (accuracy below 90%), as it showed signs of overfitting in the model. Table 6 presents a summary of the hypotheses accepted on the basis of significance.

The findings of this study reveal several key insights into the behavioral intention of Indonesian higher education students to adopt AIED tools for self-directed learning (SDL). The high predictive accuracies observed in Subjective Norms (SN), Attitude (AT), and Perceived Behavioral Control (PBC) emphasize the strong influence of social, emotional, and control-related factors on students' technology adoption behaviors. These high behavioral intention scores suggest that students are not only aware of AIED tools but are also strongly inclined to integrate them into their learning routines, especially when supported by peers, mentors, and their academic environment.

The significance of SN as the strongest predictor highlights the importance of social influence and peer validation in technology use, particularly in a collectivist culture like Indonesia’s. Educational institutions should therefore focus on leveraging peer networks and instructor endorsements to promote AIED use. This is supported by research by C. Wang et al. (2024), where it was found that subjective norms encourage students to use AI in learning.

Furthermore, the high influence of AT and PBC implies that students’ positive perceptions and confidence in using AIED tools are crucial for adoption, suggesting that training programs and awareness campaigns could further enhance uptake. AT was found to be the second most significant factor, thereby supporting H1. This indicates that students view AIED as something important and enjoyable for SDL. Andrews et al. (2021) stated that attitude has a significant impact on librarians’ intention to adopt AI. In addition, PBC was the third significant factor that influenced the intention of students to use AIED in SDL, thereby supporting H3. This indicates that students have the ability or experience in using AIED, so they do not find it difficult to use and control it. Mohr and Kühl (2021) in their study stated that perceived behavioral control has the most influence on AI acceptance.

PUF was also found to be a significant factor in students' intention to use AIED for SDL, thereby supporting H6. This construct refers to the level of students’ confidence that a specific technology will improve their performance or help them achieve certain goals. In research conducted by Alhashmi et al. (2020), PUF was one of the critical success factors required to implement AI projects in the health sector for patient-monitoring services. The next variable, FC, was found to be a significant factor of intention, thereby supporting H7. This indicates that students have essential resources such as supporting technology, finances, and infrastructure. Xian (2021) stated that facilitating condition is a significant indicator of intention to use AI in leisure. Another factor, OP, proved to be a significant factor in the intention to use AIED in SDL, thereby supporting H4. High results on this factor mean that students are optimistic that AIED will be easy to use and interesting for them. In research conducted by Jo and Baek (2023), optimism affected the continued intention to use AI personal assistants. Finally, PTA proved to be a significant factor in the intention to use AIED tools among Indonesian higher education students, thereby supporting H8. The results indicate that the majority of students hold a positive perception of AIED and trust its use. According to Ajitha et al. (2024), perception towards AI is also related to user satisfaction with AI applications in respondents' daily lives in Karnataka, India, which in turn fosters continued intention to use AI.

UI was found to be the least significant factor of intention in this study, thereby supporting H5, but with lower significance than the other variables, as its accuracy fell below 90%. UI is the tendency to be a technology pioneer or early adopter: an individual's desire to experiment with and explore AIED tools before they become mainstream. This finding is somewhat unexpected, as innovativeness is often associated with early adoption. However, in this context, it may reflect a tendency among students to wait for social validation before exploring new tools, indicating a gap between curiosity and actual usage. It may also point to barriers such as lack of time, confidence, or access to training that prevent even tech-curious students from adopting AIED tools independently. These results also show that students' desire to become pioneers in the use of AIED is still not very high. Several studies found no effect of user innovativeness on the adoption intention for some types of technologies (Ciftci et al., 2020; Liljander et al., 2006; Melián-González et al., 2019). However, Okumus et al. (2018) stated that innovativeness is a significant factor in intention to use.

The ANN model's overall predictive strength, with an average accuracy of 93.43%, reinforces the suitability of machine learning for educational behavioral studies. The consistency of high accuracy and low standard deviation across most variables suggests that the model captured the relevant non-linear relationships well. However, one limitation is the lower accuracy and greater variance in UI, which might be due to external factors not captured in the current model, such as institutional support, prior tech experience, or personality traits. Future studies could consider integrating these additional dimensions or testing hybrid models to further refine prediction reliability.

Theoretical Implication

Previous studies have predicted students' intention to use technology, including AIED. In this study, evaluating Indonesian higher education students' intention to use AIED tools for self-directed learning (SDL) through machine learning demonstrates strong predictive power, revealing the factors that contribute to this intention. The algorithm used in this study is the ANN, which is favored for pattern recognition because it simulates the biological functions of neurons in the brain (Harsh et al., 2016; Manning et al., 2014). Generally, ANN is applied to classifying large datasets, forecasting time series, and performing function estimation or regression (Gallo, 2015). In terms of prediction, ANN has been shown to outperform traditional statistical methods such as regression, owing to its ability to conduct multi-layered analyses of complex datasets, resulting in higher predictive and classification accuracy (Cakir & Sita, 2020). In a similar context, Niazkar and Niazkar (2020) employed ANN to estimate COVID-19 cases across multiple countries to support the development of health-related policies based on observed data patterns. In addition, Shanbehzadeh et al. (2022) used ANN to predict mortality risk in hospitalized COVID-19 patients across 1,710 hospitals in Iran, helping the relevant parties prepare bed availability and further treatment for patients. Thus, ANN played a crucial role in shaping and evaluating policies and strategies, including their effectiveness, during the COVID-19 pandemic.

Compared with the existing literature on the intention to use AI, this study provides a new and comprehensive approach to analyzing the factors behind AIED use, especially in self-paced learning. Moreover, the results show a high average ANN accuracy of 93.43%. It can be concluded that the accuracy of the ANN is reliable and that the model can serve as a framework for future research in the same discipline.

Practical Implication

Assessing the factors that influence the intention to use AIED tools for SDL among Indonesian college students is crucial for technology developers and educators. The findings indicate that subjective norms (SN), representing social influence, emerge as the most significant determinant of students' intention to use AIED tools. This underscores that students are strongly motivated by the desire to align with prevailing social trends and avoid being left behind by their peers. On the other hand, the factor with the lowest influence, user innovativeness (UI), highlights a limited inclination among students to embrace innovation or take the initiative as early adopters or pioneers of AIED tools. In the context of newly introduced AIED applications, students tend to defer adoption until their social environment has first demonstrated usage, rather than independently pioneering the technology.

Conclusion

Based on the application of artificial neural networks, it was determined that eight constructs significantly influenced university students' intention (IT) to use AIED tools for SDL purposes. The ANN model demonstrated an impressive average accuracy of 93.43% with a standard deviation of 0.0274 across 150 epochs and 80 hidden nodes, confirming the reliability of the analysis without overfitting. Among the analyzed constructs, subjective norms (SN) emerged as the most significant factor influencing the intention to adopt AIED tools, achieving a predictive accuracy of 98.12%. This underscores the role of social influence in shaping students' decisions, where peer pressure and recommendations from respected individuals significantly drive adoption behavior.

From the perspective of educational management, this study provides actionable insights. The significant role of SN and AT suggests the need for educational institutions to foster a supportive social environment and positive attitudes toward AIED tools in learning activities. Investments in facilitating conditions, such as infrastructure and financial support, are also important, as indicated by the strong influence of FC on students' intentions in this study.

This study contributes to the growing body of literature on AIED and serves as a framework for future research on technology adoption in education, particularly in developing countries like Indonesia, where there remains significant potential for scaling such innovations. Moreover, the machine learning approach effectively predicted college students' intentions and technology acceptance, as evidenced by the high accuracy demonstrated in this study's findings. Therefore, this research framework can serve as a valuable foundation for future studies exploring human behavior.

Recommendations

Based on the study’s findings, several targeted recommendations are proposed to increase the adoption of AIED tools for self-directed learning among Indonesian university students. First, educational institutions should harness the power of social influence by encouraging peer collaboration and instructor support, as Subjective Norm emerged as the most significant predictor of behavioral intention. Second, promoting positive attitudes and strengthening student self-confidence through structured training and orientation programs can also increase adoption, especially by highlighting the ease and value of AI tools. Third, ensuring adequate infrastructural support, such as internet access, technical guidance, and integration of AIED platforms into the learning ecosystem, remains critical, given the strong role of Facilitating Conditions and Perceived Usefulness. Fourth, increasing student familiarity and trust in AI technologies through basic AI literacy initiatives and practical exposure can further build acceptance. Fifth, and most importantly, regional disparities in access and readiness should be addressed through inclusive policy strategies that expand digital resources. These recommendations align with the behavioral, technical, and perceptual factors analyzed in this study and offer practical direction for educators, developers, and policymakers. This research yields a new, empirically validated model that not only explains students’ intentions to use AIED but also informs strategic actions to support equitable and sustainable AI integration in education.

Limitations

The findings of this study present promising results that can serve as a foundation for future research on the adoption of AIED tools for students' SDL in higher education. However, two aspects warrant further evaluation due to limitations observed in this study. First, the majority of respondents who participated in the digital survey were university students from Java Island, which may have influenced their level of technological facility and experience. Second, this study did not take respondents' demographic characteristics, such as gender, age, education level, and field of study, into account as factors in the machine learning model that may influence the use of AIED.

Ethics Statements

The submitted work is original and has not been published elsewhere in any form or language (partially or fully). The studies involving human participants were reviewed and approved by the Ethics Committee of Yogyakarta State University. All participants gave written informed consent to participate in the study. All relevant ethical guidelines and principles were carefully considered during the preparation of this article. The research, including data collection, analysis, and interpretation, was conducted according to strict ethical standards to minimize any potential impacts on humans and the environment. A comprehensive ethical evaluation was performed prior to the study, assessing all potential risks and benefits. Participation in the study was voluntary, and informed consent was obtained from all participants. The privacy and confidentiality of participants were maintained, and appropriate measures were taken to ensure anonymity.

Conflict of Interest

The authors declare no conflicts of interest.

Authorship Contribution Statement

Darmono: Conceptualization, design, reviewing, securing funding, supervision. Setiawan: Data acquisition, data analysis, statistical analysis, format editing, final approval. Ma’ruf: Technical support, material support, drafting manuscript.

Generative AI Statement

As the authors of this work, we used a machine learning (ML) tool, namely an Artificial Neural Network (ANN), for the purpose of data processing. After using this tool, we reviewed and verified the final version of our work. We, as the author(s), take full responsibility for the content of our published work.

References

Aghaei, S., Shahbazi, Y., Pirbabaei, M., & Beyti, H. (2023). A hybrid SEM-neural network method for modeling the academic satisfaction factors of architecture students. Computers and Education: Artificial Intelligence, 4, Article 100122. https://doi.org/10.1016/j.caeai.2023.100122

Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, Article 311. https://doi.org/10.1057/s41599-023-01787-8 

Ajitha, S., Geevarathna, & Huxley, S. (2024). Exploring public perception, awareness, and satisfaction with AI applications in Karnataka, India: The role of individual characteristics and media influence. International Journal of System Assurance Engineering and Management. Advance online publication. https://doi.org/10.1007/s13198-024-02594-3   

Ajzen, I. (1985). From intentions to actions: A theory of planned behavior. In J. Kuhl, & J. Beckmann (Eds.), Action control from cognition behavior (pp. 11-39). Springer. https://doi.org/10.1007/978-3-642-69746-3_2  

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179-211. https://doi.org/10.1016/0749-5978(91)90020-T  

Ajzen, I. (2020). The theory of planned behavior: Frequently asked questions. Human Behavior and Emerging Technologies, 2, 314-324. https://doi.org/10.1002/hbe2.195  

Akudjedu, T. N., Torre, S., Khine, R., Katsifarakis, D., Newman, D., & Malamateniou, C. (2023). Knowledge, perceptions, and expectations of Artificial Intelligence in radiography practice: A global radiography workforce survey. Journal of Medical Imaging and Radiation Sciences, 54(1), 104-116. https://doi.org/10.1016/j.jmir.2022.11.016  

Algerafi, M. A. M., Zhou, Y., Alfadda, H., & Wijaya, T. T. (2023). Understanding the factors influencing higher education students' intention to adopt artificial intelligence-based robots. IEEE Access: Practical Innovations, Open Solutions, 11, 99752-99764. https://doi.org/10.1109/ACCESS.2023.3314499

Alhashmi, S. F. S., Salloum, S. A., & Abdallah, S. (2020). Critical success factors for implementing artificial intelligence (AI) projects in Dubai Government United Arab Emirates (UAE) health sector: Applying the extended technology acceptance model (TAM). Advances in Intelligent Systems and Computing, 1058, 393-405. https://doi.org/10.1007/978-3-030-31129-2_36  

Aliferis, C., & Simon, G. (2024). Overfitting, underfitting, and general model overconfidence and under-performance: Pitfalls and best practices in machine learning and AI. In G. J. Simon, & C. Aliferis (Eds.), Artificial intelligence and machine learning in health care and medical sciences (pp. 477-524). Springer. https://doi.org/10.1007/978-3-031-39355-6_10  

Almaiah, M. A., Alfaisal, R., Salloum, S. A., Hajjej, F., Shishakly, R., Lutfi, A., Alrawad, M., Al Mulhem, A., Alkhdour, T., & Al-Maroof, R. S. (2022). Measuring institutions’ adoption of artificial intelligence applications in online learning environments: Integrating the innovation diffusion theory with technology adoption rate. Electronics, 11(20), Article 3291. https://doi.org/10.3390/electronics11203291  

Al Omari, O., Alshammari, M., Al Jabri, W., Al Yahyaei, A., Aljohani, K. A., Sanad, H. M., Al-Jubouri, M. B., Bashayreh, I., Fawaz, M., ALBashtawy, M., Alkhawaldeh, A., Qaddumi, J., Shalaby, S. A., Abdallah, H. M., AbuSharour, L., Al Qadire, M., & Aljezawi, M. (2024). Demographic factors, knowledge, attitude and perception and their association with nursing students’ intention to use artificial intelligence (AI): A multicentre survey across 10 Arab countries. BMC Medical Education, 24, Article 1456. https://doi.org/10.1186/s12909-024-06452-5 

Althunibat, A. (2015). Determining the factors influencing students' intention to use m-learning in Jordan higher education. Computers in Human Behavior, 52, 65-71. https://doi.org/10.1016/j.chb.2015.05.046  

An, X., Chai, C. S., Li, Y., Zhou, Y., Shen, X., Zheng, C., & Chen, M. (2023). Modeling English teachers’ behavioral intention to use artificial intelligence in middle schools. Education and Information Technologies, 28, 5187-5208. https://doi.org/10.1007/s10639-022-11286-z

Andrews, J. E., Ward, H., & Yoon, J. (2021). UTAUT as a model for understanding intention to adopt AI and related technologies among librarians. Journal of Academic Librarianship, 47(6), Article 102437. https://doi.org/10.1016/j.acalib.2021.102437  

Anwar, H., Khan, A. U., Ullah, B., Taha, A. T. B., Najeh, T., Badshah, M. U., Ghanim, A. A. J., & Irfan, M. (2024). Intercomparison of deep learning models in predicting streamflow patterns: Insight from CMIP6. Scientific Reports, 14, Article 17468. https://doi.org/10.1038/s41598-024-63989-7  

Assele, S. Y., Meulders, M., & Vandebroek, M. (2023). Sample size selection for discrete choice experiments using design features. Journal of Choice Modelling, 49, Article 100436. https://doi.org/10.1016/j.jocm.2023.100436  

Badan Pusat Statistik [Central Bureau of Statistics]. (2023). Jumlah perguruan tinggi, dosen, dan mahasiswa (negeri dan swasta) di bawah Kementerian Pendidikan, Kebudayaan, Riset, dan Teknologi menurut provinsi [Number of universities, lecturers, and students (public and private) under the Ministry of Education, Culture, Research, and Technology by province]. https://bit.ly/studentsanduniversitiesinIndonesia  

Bentler, P. M., & Chou, C.-P. (1987). Practical issues in structural modeling. Sociological Methods and Research, 16(1), 78-117. https://doi.org/10.1177/0049124187016001004

Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351-370. https://doi.org/10.2307/3250921  

Bicknell, K., Brust, C., & Settles, B. (2023). How Duolingo's AI learns what you need to learn: The language-learning app tries to emulate a great human tutor. IEEE Spectrum, 60(3), 28-33. https://doi.org/10.1109/MSPEC.2023.10061631

Bit, D., Biswas, S., & Nag, M. (2024). The impact of artificial intelligence in educational system. International Journal of Scientific Research in Science and Technology, 11(4), 419-427. https://doi.org/10.32628/IJSRST2411424

Brauner, P., Hick, A., Philipsen, R., & Ziefle, M. (2023). What does the public think about artificial intelligence? -A criticality map to understand bias in the public perception of AI. Frontiers in Computer Science, 5, Article 1113903. https://doi.org/10.3389/fcomp.2023.1113903

Brownhill, S. (2021). Asking more key questions of self-reflection. Reflective Practice, 23(2), 279-290. https://doi.org/10.1080/14623943.2021.2013192  

Buabbas, A. J., Miskin, B., Alnaqi, A. A., Ayed, A. K., Shehab, A. A., Syed-Abdul, S., & Uddin, M. (2023). Investigating students’ perceptions towards artificial intelligence in medical education. Healthcare, 11(9), Article 1298. https://doi.org/10.3390/healthcare11091298

Buchori, I., Sugiri, A., Maryono, M., Pramitasari, A., & Pamungkas, I. T. D. (2017). Theorizing spatial dynamics of metropolitan regions: A preliminary study in Java and Madura Islands, Indonesia. Sustainable Cities and Society, 35, 468-482. https://doi.org/10.1016/j.scs.2017.08.022

Bujang, M. A., & Adnan, T. H. (2016). Requirements for minimum sample size for sensitivity and specificity analysis. Journal of Clinical and Diagnostic Research, 10(10), YE01-YE06. https://doi.org/10.7860/jcdr/2016/18129.8744

Buyle, R., Van Compernolle, M., Vlassenroot, E., Vanlishout, Z., Mechant, P., & Mannens, E. (2018). Technology readiness and acceptance model as a predictor for the use intention of data standards in smart cities. Media and Communication, 6(4), 127-139. https://doi.org/10.17645/mac.v6i4.1679  

Cakir, S., & Sita, M. (2020). Evaluating the performance of ANN in predicting the concentrations of ambient air pollutants in Nicosia. Atmospheric Pollution Research, 11(12), 2327-2334. https://doi.org/10.1016/j.apr.2020.06.011  

Castagno, S., & Khalifa, M. (2020). Perceptions of Artificial Intelligence among healthcare staff: A qualitative survey study. Frontiers in Artificial Intelligence, 3, Article 578983. https://doi.org/10.3389/frai.2020.578983  

Chai, C. S., Wang, X., & Xu, C. (2020). An extended theory of planned behavior for the modelling of Chinese secondary school students' intention to learn artificial intelligence. Mathematics, 8(11), Article 2089. https://doi.org/10.3390/math8112089  

Chang, A. (2012). UTAUT and UTAUT 2: A review and agenda for future research. The Winners, 13(2), 106-114. https://doi.org/10.21512/tw.v13i2.656

Chatterjee, S., Rana, N. P., Dwivedi, Y. K., & Baabdullah, A. M. (2021). Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technological Forecasting and Social Change, 170, Article 120880. https://doi.org/10.1016/j.techfore.2021.120880  

Chau, P. Y. K. (1996). An empirical assessment of a modified technology acceptance model. Journal of Management Information Systems, 13(2), 185-204. https://doi.org/10.1080/07421222.1996.11518128  

Chawla, A., & Joshi, H. (2019). Consumer attitude and intention to adopt mobile wallet in India: An empirical study. International Journal of Bank Marketing, 37(7), 1590-1618. https://doi.org/10.1108/IJBM-09-2018-0256  

Chen, B., Zhu, X., & del Castillo, F. D. (2023). Integrating generative AI in knowledge building. Computers and Education: Artificial Intelligence, 5, Article 100184. https://doi.org/10.1016/j.caeai.2023.100184  

Chen, J. V., Tran, A., & Nguyen, T. (2019). Understanding the discontinuance behavior of mobile shoppers as a consequence of technostress: An application of the stress-coping theory. Computers in Human Behavior, 95, 83-93. https://doi.org/10.1016/j.chb.2019.01.022  

Chen, M.-F., & Lin, N.-P. (2018). Incorporation of health consciousness into the technology readiness and acceptance model to predict app download and usage intentions. Internet Research, 28(2), 351-373. https://doi.org/10.1108/IntR-03-2017-0099  

Chiu, T. K. F. (2023). The impact of generative AI (GenAI) on practices, policies and research direction in education: A case of ChatGPT and Midjourney. Interactive Learning Environments, 32(10), 6187-6203. https://doi.org/10.1080/10494820.2023.2253861  

Choe, J. Y., Kim, J. J., & Hwang, J. (2021). Innovative marketing strategies for the successful construction of drone food delivery services: Merging TAM with TPB. Journal of Travel & Tourism Marketing, 38(1), 16-30. https://doi.org/10.1080/10548408.2020.1862023  

Choi, S., Jang, Y., & Kim, H. (2022). Influence of pedagogical beliefs and perceived trust on teachers' acceptance of educational artificial intelligence tools. International Journal of Human-Computer Interaction, 39(4), 910-922. https://doi.org/10.1080/10447318.2022.2049145  

Chu, T.-H., & Chen, Y.-Y. (2016). With good we become good: Understanding e-learning adoption by theory of planned behavior and group influences. Computers and Education, 92-93, 37-52. https://doi.org/10.1016/j.compedu.2015.09.013  

Ciftci, O., Choi, E.-K., & Berezina, K. (2020). Customer intention to use facial recognition technology at quick-service restaurants. E-Review of Tourism Research, 17(5), 753-763. https://ertr-ojs-tamu.tdl.org/ertr/article/view/558  

Connor Desai, S., & Reimers, S. (2019). Comparing the use of open and closed questions for Web-based measures of the continued-influence effect. Behavior Research Methods, 51, 1426-1440. https://doi.org/10.3758/s13428-018-1066-z  

Costa, E. B., Fonseca, B., Santana, M. A., de Araújo, F. F., & Rego, J. (2017). Evaluating the effectiveness of educational data mining techniques for early prediction of students' academic failure in introductory programming courses. Computers in Human Behavior, 73, 247-256. https://doi.org/10.1016/j.chb.2017.01.047

Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results (Doctoral dissertation, Massachusetts Institute of Technology). DSpace@MIT. http://hdl.handle.net/1721.1/15192  

dos Santos, P. M., & Cirillo, M. Â. (2021). Construction of the average variance extracted index for construct validation in structural equation models with adaptive regressions. Communications in Statistics - Simulation and Computation, 52(4), 1639-1650. https://doi.org/10.1080/03610918.2021.1888122  

Duong, C. D., Nguyen, T. H., Ngo, T. V. N., Dao, V. T., Do, N. D., & Pham, T. V. (2024). Exploring higher education students' continuance usage intention of ChatGPT: Amalgamation of the information system success model and the stimulus-organism-response paradigm. International Journal of Information and Learning Technology, 41(5), 556-584. https://doi.org/10.1108/IJILT-01-2024-0006

Ebadi, S., & Raygan, A. (2023). Investigating the facilitating conditions, perceived ease of use and usefulness of mobile-assisted language learning. Smart Learning Environments, 10, Article 30. https://doi.org/10.1186/s40561-023-00250-0  

Eckle, K., & Schmidt-Hieber, J. (2019). A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Networks, 110, 232-242. https://doi.org/10.1016/j.neunet.2018.11.005  

Elfwing, S., Uchibe, E., & Doya, K. (2018). Sigmoid-weighted linear units for neural network function approximation in reinforcement learning. Neural Networks, 107, 3-11. https://doi.org/10.1016/j.neunet.2017.12.012  

El-Sefy, M., Yosri, A., El-Dakhakhni, W., Nagasaki, S., & Wiebe, L. (2021). Artificial neural network for predicting nuclear power plant dynamic behaviors. Nuclear Engineering and Technology, 53(10), 3275-3285. https://doi.org/10.1016/j.net.2021.05.003

Faham, E., & Asghari, H. (2019). Determinants of behavioral intention to use E-textbooks: A study in Iran's agricultural sector. Computers and Electronics in Agriculture, 165, Article 104935. https://doi.org/10.1016/j.compag.2019.104935  

Folkinshteyn, D., & Lennon, M. (2016). Braving Bitcoin: A technology acceptance model (TAM) analysis. Journal of Information Technology Case and Application Research, 18(4), 220-249. https://doi.org/10.1080/15228053.2016.1275242  

Foroughi, B., Senali, M. G., Iranmanesh, M., Khanfar, A., Ghobakhloo, M., Annamalai, N., & Naghmeh-Abbaspour, B. (2023). Determinants of intention to use ChatGPT for educational purposes: Findings from PLS-SEM and fsQCA. International Journal of Human-Computer Interaction, 40(17), 4501-4520. https://doi.org/10.1080/10447318.2023.2226495  

Fošner, A. (2024). University students' attitudes and perceptions towards AI tools: Implications for sustainable educational practices. Sustainability, 16(19), Article 8668. https://doi.org/10.3390/su16198668  

Franco-Gonçalo, P., Leite, P., Alves-Pimenta, S., Colaço, B., Gonçalves, L., Filipe, V., McEvoy, F., Ferreira, M., & Ginja, M. (2024). Automated assessment of pelvic longitudinal rotation using computer vision in canine hip dysplasia screening. Veterinary Sciences, 11(12), Article 630. https://doi.org/10.3390/vetsci11120630  

Gallo, C. (2015). Artificial neural networks: Tutorial. In Encyclopedia of information science and technology (3rd ed.). IGI Global. https://doi.org/10.4018/978-1-4666-5888-2.ch626  

Gavrilov, A. D., Jordache, A., Vasdani, M., & Deng, J. (2018). Preventing model overfitting and underfitting in convolutional neural networks. International Journal of Software Science and Computational Intelligence, 10(4), 19-28. https://doi.org/10.4018/IJSSCI.2018100102  

German, J. D., Ong, A. K. S., Perwira Redi, A. A. N., & Robas, K. P. E. (2022). Predicting factors affecting the intention to use a 3PL during the COVID-19 pandemic: A machine learning ensemble approach. Heliyon, 8(11), Article e11382. https://doi.org/10.1016/j.heliyon.2022.e11382  

Gholami, H., Mohamadifar, A., Sorooshian, A., & Jansen, J. D. (2020). Machine-learning algorithms for predicting land susceptibility to dust emissions: The case of the Jazmurian Basin, Iran. Atmospheric Pollution Research, 11(8), 1303-1315. https://doi.org/10.1016/j.apr.2020.05.009

Grønholdt, L., & Martensen, A. (2005). Analysing customer satisfaction data: A comparison of regression and artificial neural networks. International Journal of Market Research, 47(2), 121-130. https://doi.org/10.1177/147078530504700201

Gumasing, M. J. J., Ong, A. K. S., Sy, M. A. P. C., Prasetyo, Y. T., & Persada, S. F. (2023). A machine learning ensemble approach to predicting factors affecting the intention and usage behavior towards online groceries applications in the Philippines. Heliyon, 9(10), Article e20644. https://doi.org/10.1016/j.heliyon.2023.e20644  

Guo, L., Burke, M. G., & Griggs, W. M. (2025). A new framework to predict and visualize technology acceptance: A case study of shared autonomous vehicles. Technological Forecasting and Social Change, 212, Article 123960. https://doi.org/10.1016/j.techfore.2024.123960

Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European Business Review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203  

Harsh, K., Bharath, N., & Kuldeep, S. (2016). An introduction to artificial neural network. International Journal of Advance Research and Innovative Ideas in Education, 1(5), 27-30. https://bit.ly/43AQBEX

Hassan, H. G., Nassar, M., & Abdien, M. K. (2024). The influence of optimism and innovativeness on customers' perceptions of technological readiness in five-star hotels. Pharos International Journal of Tourism and Hospitality, 3(1), 70-80. https://doi.org/10.21608/pijth.2024.265141.1010  

Hegde, J., & Rokseth, B. (2020). Applications of machine learning methods for engineering risk assessment – A review. Safety Science, 122, Article 104492. https://doi.org/10.1016/j.ssci.2019.09.015  

Helmiatin, Hidayat, A., & Kahar, M. R. (2024). Investigating the adoption of AI in higher education: A study of public universities in Indonesia. Cogent Education, 11(1), Article 2380175. https://doi.org/10.1080/2331186X.2024.2380175

Holden, R. J., & Karsh, B.-T. (2010). The technology acceptance model: Its past and its future in health care. Journal of Biomedical Informatics, 43(1), 159-172. https://doi.org/10.1016/j.jbi.2009.07.002  

iFlyTek. (2024). 执红笔点鼠,高考评卷背后的技术变革 [From holding the 'red pen' to holding the 'mouse': The technological revolution behind the college entrance examination marking]. https://edu.iflytek.com/solution/examination  

Igbokwe, I. C. (2023). Application of artificial intelligence (AI) in educational management. International Journal of Scientific Research and Publications, 13(3), 300-306. http://dx.doi.org/10.29322/IJSRP.13.03.2023.p13536  

Irfan, T., Gunawan, T. S., & Wanayumini, W. (2023). Comparison of SGD, RMSprop, and Adam optimization in animal classification using CNNs. International Conference on Information Science and Technology Innovation (ICoSTEC), 2(1), 45-51.  

Izzaddin, A., Langousis, A., Totaro, V., Yaseen, M., & Iacobellis, V. (2024). A new diagram for performance evaluation of complex models. Stochastic Environmental Research and Risk Assessment, 38, 2261-2281. https://doi.org/10.1007/s00477-024-02678-3  

Jang, H.-J., & Noh, G.-Y. (2017). Extended technology acceptance model of VR head-mounted display in early stage of diffusion. Journal of Digital Convergence, 15(5), 353-361. https://doi.org/10.14400/JDC.2017.15.5.353  

Janjua, J. I., Zulfiqar, S., Khan, T. A., & Ramay, S. A. (2023). Activation function conundrums in the modern machine learning paradigm. In Proceedings of the 2023 International Conference on Computing Advancements (ICCA). IEEE. https://doi.org/10.1109/ICCA59364.2023.10401760

Janpla, S., & Piriyasurawong, P. (2020). The development of an intelligent multilevel item bank model for the national evaluation of undergraduates. Universal Journal of Educational Research, 8(9), 4163-4172. https://doi.org/10.13189/ujer.2020.080942  

Jiao, J., & Cao, X. (2024). Research on designers' behavioral intention toward Artificial Intelligence-Aided Design: Integrating the Theory of Planned Behavior and the Technology Acceptance Model. Frontiers in Psychology, 15, Article 1450717. https://doi.org/10.3389/fpsyg.2024.1450717  

Jo, H., & Baek, E.-M. (2023). Retracted Article: Customization, loneliness, and optimism: drivers of intelligent personal assistant continuance intention during COVID-19. Humanities and Social Sciences Communications, 10, Article 529. https://doi.org/10.1057/s41599-023-02021-1  

Kamalov, F., Calonge, D. S., & Gurrib, I. (2023). New era of artificial intelligence in education: Towards a sustainable multifaceted revolution. Sustainability, 15(16), Article 12451. https://doi.org/10.3390/su151612451  

Kampa, R. K. (2023). Combining technology readiness and acceptance model for investigating the acceptance of m-learning in higher education in India. Asian Association of Open Universities Journal, 18(2), 105-120. https://doi.org/10.1108/AAOUJ-10-2022-0149  

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., … Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences, 103, Article 102274. https://doi.org/10.1016/j.lindif.2023.102274  

Kays, K. M., Keith, T. L., & Broughal, M. T. (2013). Best practice in online survey research with sensitive topics. In N. Sappleton (Ed.), Advancing research methods with new technologies (pp. 157-168). IGI Global. https://www.igi-global.com/gateway/chapter/75944  

Kim, N. J., & Kim, M. K. (2022). Teacher's perceptions of using an artificial intelligence-based educational tool for scientific writing. Frontiers in Education, 7, Article 755914. https://doi.org/10.3389/feduc.2022.755914  

Kim, T., & Chiu, W. (2019). Consumer acceptance of sports wearable technology: The role of technology readiness. International Journal of Sports Marketing and Sponsorship, 20(1), 109-126. https://doi.org/10.1108/IJSMS-06-2017-0050  

Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28, 973-1018. https://doi.org/10.1007/s10639-022-11177-3  

Lederer, J. (2021). Activation functions in artificial neural networks: A systematic overview. arXiv. https://doi.org/10.48550/arXiv.2101.09957

Leh, J. (2022, July 13). AI in LMS: 10 must-see innovations for learning professionals. Talented Learning. https://talentedlearning.com/ai-in-lms-innovations-learning-professionals-must-see/  

Li, X., Jiang, M. Y.-C., Jong, M. S.-Y., Zhang, X., & Chai, C.-S. (2022). Understanding medical students' perceptions of and behavioral intentions toward learning artificial intelligence: A survey study. International Journal of Environmental Research and Public Health, 19(14), Article 8733. https://doi.org/10.3390/ijerph19148733  

Liljander, V., Gillberg, F., Gummerus, J., & van Riel, A. (2006). Technology readiness and the evaluation and adoption of self-service technologies. Journal of Retailing and Consumer Services, 13(3), 177-191. https://doi.org/10.1016/j.jretconser.2005.08.004  

Lin, M.-L., & Yu, T.-K. (2018). Patent applying or not applying: What factors motivating students' intention to engage in patent activities. Eurasia Journal of Mathematics, Science and Technology Education, 14(5), 1843-1858. https://doi.org/10.29333/ejmste/85420  

Liu, Y.-L. E., & Huang, Y.-M. (2024). Exploring the perceptions and continuance intention of AI-based text-to-image technology in supporting design ideation. International Journal of Human–Computer Interaction, 41(1), 694-706. https://doi.org/10.1080/10447318.2024.2311975

Loeng, S. (2020). Self-directed learning: A core concept in adult education. Education Research International, 2020(1), Article 3816132. https://doi.org/10.1155/2020/3816132  

Lugito, N. P. H., Cucunawangsih, C., Suryadinata, N., Kurniawan, A., Wijayanto, R., Sungono, V., Sabran, M. Z., Albert, N., Budianto, C. J., Rubismo, K. Y., Purushotama, N. B. S. A., & Zebua, A. (2024). Readiness, knowledge, and perception towards artificial intelligence of medical students at Faculty of Medicine, Pelita Harapan University, Indonesia: A cross-sectional study. BMC Medical Education, 24, Article 1044. https://doi.org/10.1186/s12909-024-06058-x  

Madar, N. K., Teeni-Harari, T., Icekson, T., & Sela, Y. (2019). Optimism and entrepreneurial intentions among students: The mediating role of emotional intelligence. Journal of Entrepreneurship Education, 22(4). https://bit.ly/3FfXCCW  

Mahendra, M. W., Nurkamilah, N., & Sari, C. P. (2023). Artificial-intelligence powered app as learning aid in improving learning autonomy: Students' perspective. Journal of English Education and Society, 8(1), 122-129. https://bit.ly/4kqXLlS  

Mailizar, M., Burg, D., & Maulina, S. (2021). Examining university students' behavioural intention to use e-learning during the COVID-19 pandemic: An extended TAM model. Education and Information Technologies, 26, 7057-7077. https://doi.org/10.1007/s10639-021-10557-5

Manning, T., Sleator, R. D., & Walsh, P. (2014). Biologically inspired intelligent decision making: A commentary on the use of artificial neural networks in bioinformatics. Bioengineered, 5(2), 80-95. https://doi.org/10.4161/bioe.26997

Masyitoh, S. L., Ma'ruf, K., & Setiawan, R. J. (2024). Local binary pattern and principal component analysis for low-light face recognition. IEEE 11th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI) (pp. 211-216). IEEE. https://doi.org/10.1109/EECSI63442.2024.10776119  

Maurya, R., Aggarwal, D., Gopalakrishnan, T., & Pandey, N. N. (2023). Enhancing deep neural network convergence and performance: A hybrid activation function approach by combining ReLU and ELU activation function. In Proceedings of the 2023 International Conference on Intelligent Control and Instrumentation (ICI) (pp. 1-5). IEEE. https://doi.org/10.1109/ICI60088.2023.10421353  

McNeeley, S. (2012). Sensitive issues in surveys: Reducing refusals while increasing reliability and quality of responses to sensitive survey items. In L. Gideon (Ed.), Handbook of survey methodology for the social sciences (pp. 377-396). Springer. https://doi.org/10.1007/978-1-4614-3876-2_22

Melián-González, S., Gutiérrez-Taño, D., & Bulchand-Gidumal, J. (2019). Predicting the intentions to use chatbots for travel and tourism. Current Issues in Tourism, 24(2), 192-210. https://doi.org/10.1080/13683500.2019.1706457  

Mengi, A., Singh, R. P., Mengi, N., Kalgotra, S., & Singh, A. (2024). A questionnaire study regarding knowledge, attitude and usage of artificial intelligence and machine learning by the orthodontic fraternity of Northern India. Journal of Oral Biology and Craniofacial Research, 14(5), 500-506. https://doi.org/10.1016/j.jobcr.2024.06.004  

Michel-Villarreal, R., Vilalta-Perdomo, E., Salinas-Navarro, D. E., Thierry-Aguilera, R., & Gerardou, F. S. (2023). Challenges and opportunities of generative AI for higher education as explained by ChatGPT. Education Sciences, 13(9), Article 856. https://doi.org/10.3390/educsci13090856  

Mohr, S., & Kühl, R. (2021). Acceptance of artificial intelligence in German agriculture: An application of the technology acceptance model and the theory of planned behavior. Precision Agriculture, 22, 1816-1844. https://doi.org/10.1007/s11119-021-09814-x  

Niazkar, H. R., & Niazkar, M. (2020). Application of artificial neural networks to predict the COVID-19 outbreak. Global Health Research and Policy, 5, Article 50. https://doi.org/10.1186/s41256-020-00175-y  

Nicolaou, A. I., & Masoner, M. M. (2013). Sample size requirements in structural equation models under standard conditions. International Journal of Accounting Information Systems, 14(4), 256-274. https://doi.org/10.1016/j.accinf.2013.11.001  

Nie, J., Zheng, C., Zeng, P., Zhou, B., Lei, L., & Wang, P. (2020). Using the theory of planned behavior and the role of social image to understand mobile English learning check-in behavior. Computers and Education, 156, Article 103942. https://doi.org/10.1016/j.compedu.2020.103942  

Njock, P. G. A., Shen, S.-L., Zhou, A., & Modoni, G. (2021). Artificial neural network optimized by differential evolution for predicting diameters of jet grouted columns. Journal of Rock Mechanics and Geotechnical Engineering, 13(6), 1500-1512. https://doi.org/10.1016/j.jrmge.2021.05.009  

Nurmaini, S. (2021). The artificial intelligence readiness for pandemic outbreak COVID-19: Case of limitations and challenges in Indonesia. Computer Engineering and Applications, 10(1), 9-21. https://bit.ly/3HgcEcw

Obaid, T., & Aldammagh, Z. (2021). Predicting mobile banking adoption: An integration of TAM and TPB with trust and perceived risk. SSRN Electronic Journal, 17, 35-46. https://doi.org/10.2139/ssrn.3761669 

Ocaña-Fernández, Y., Valenzuela-Fernández, L. A., & Garro-Aburto, L. L. (2019). Inteligencia artificial y sus implicaciones en la educación superior. Propósitos y Representaciones, 7(2), 536-568. http://dx.doi.org/10.20511/pyr2019.v7n2.274  

Okumus, F., Ali, A., Bilgihan, A., & Ozturk, A. B. (2018). Psychological factors influencing customers’ acceptance of smartphone diet apps when ordering food at restaurants. International Journal of Hospitality Management, 72, 67-77. https://doi.org/10.1016/j.ijhm.2018.01.001  

Oliveira, T., Thomas, M., & Espadanal, M. (2014). Assessing the determinants of cloud computing adoption: An analysis of the manufacturing and services sectors. Information and Management, 51(5), 497-510. https://doi.org/10.1016/j.im.2014.03.006  

Ozkan, S., & Kanat, I. E. (2011). E-government adoption model based on theory of planned behavior: Empirical validation. Government Information Quarterly, 28, 503-513. https://doi.org/10.1016/j.giq.2010.10.007  

Parasuraman, A. (2000). Technology readiness index (TRI): A multiple-item scale to measure readiness to embrace new technologies. Journal of Service Research, 2(4), 307-320. https://doi.org/10.1177/109467050024001  

Park, E. S., & Park, M. S. (2020). Factors of the technology acceptance model for construction IT. Applied Sciences, 10(22), Article 8299. https://doi.org/10.3390/app10228299

Polyportis, A. (2023). A longitudinal study on artificial intelligence adoption: Understanding the drivers of ChatGPT usage behavior change in higher education. Frontiers in Artificial Intelligence, 6, Article 1324398. https://doi.org/10.3389/frai.2023.1324398  

Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for activation functions. arXiv. https://doi.org/10.48550/arXiv.1710.05941

Rodriguez-Ascaso, A., Boticario, J. G., Finat, C., & Petrie, H. (2017). Setting accessibility preferences about learning objects within adaptive elearning systems: User experience and organizational aspects. Expert Systems, 34(4), Article e12187. https://doi.org/10.1111/exsy.12187  

Ruiz-Rojas, L. I., Acosta-Vargas, P., De-Moreta-Llovet, J., & Gonzalez-Rodriguez, M. (2023). Empowering education with generative artificial intelligence tools: Approach with an instructional design matrix. Sustainability, 15(15), Article 11524. https://doi.org/10.3390/su151511524

Sain, Z. H., Sain, S. H., & Serban, R. (2024). Implementing artificial intelligence in educational management systems: A comprehensive study of opportunities and challenges. Asian Journal of Managerial Science, 13(1), 23-31. https://doi.org/10.70112/ajms-2024.13.1.4235

Shanbehzadeh, M., Nopour, R., & Kazemi-Arpanahi, H. (2022). Design of an artificial neural network to predict mortality among COVID-19 patients. Informatics in Medicine Unlocked, 31, Article 100983. https://doi.org/10.1016/j.imu.2022.100983  

Shatravin, V., Shashev, D., & Shidlovskiy, S. (2022). Sigmoid activation implementation for neural networks hardware accelerators based on reconfigurable computing environments for low-power intelligent systems. Applied Sciences, 12(10), Article 5216. https://doi.org/10.3390/app12105216  

Shatravin, V., Shashev, D., & Shidlovskiy, S. (2023). Implementation of the SoftMax activation for reconfigurable neural network hardware accelerators. Applied Sciences, 13(23), Article 12784. https://doi.org/10.3390/app132312784  

Shuhaiber, A. (2016). How facilitating conditions impact students' intention to use virtual lectures? An empirical evidence. AICT 2016: The Twelfth Advanced International Conference on Telecommunications (pp. 68-75). https://bit.ly/shuhaiber2016ann   

Singh, S., & Sagar, R. (2021). A critical look at online survey or questionnaire-based research studies during COVID-19. Asian Journal of Psychiatry, 65, Article 102850. https://doi.org/10.1016/j.ajp.2021.102850  

Sohaib, O., Hussain, W., Asif, M., Ahmad, M., & Mazzara, M. (2020). A PLS-SEM neural network approach for understanding cryptocurrency adoption. IEEE Access, 8, 13138-13150. https://doi.org/10.1109/ACCESS.2019.2960083  

Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telematics and Informatics, 47, Article 101324. https://doi.org/10.1016/j.tele.2019.101324  

Song, J., Kim, J., & Cho, K. (2018). Understanding users' continuance intentions to use smart-connected sports products. Sport Management Review, 21(5), 477-490. https://doi.org/10.1016/j.smr.2017.10.004  

Song, Y., & Wang, S. (2024). A survey and research on the use of artificial intelligence by Chinese design-college students. Buildings, 14(9), Article 2957. https://doi.org/10.3390/buildings14092957  

Strzelecki, A. (2023). To use or not to use ChatGPT in higher education? A study of students' acceptance and use of technology. Interactive Learning Environments, 32(9), 5142-5155. https://doi.org/10.1080/10494820.2023.2209881  

Suh, W., & Ahn, S. (2022). Development and validation of a scale measuring student attitudes toward artificial intelligence. SAGE Open, 12(2), 1-12. https://doi.org/10.1177/21582440221100463  

Sun, Y., Lyu, Y., Lin, P.-H., & Lin, R. (2022). Comparison of cognitive differences of artworks between artist and artistic style transfer. Applied Sciences, 12(11), Article 5525. https://doi.org/10.3390/app12115525  

Taber, K. S. (2018). The use of Cronbach’s alpha when developing and reporting research instruments in science education. Research in Science Education, 48, 1273-1296. https://doi.org/10.1007/s11165-016-9602-2  

Tai, M. C.-T. (2020). The impact of artificial intelligence on human society and bioethics. Tzu Chi Medical Journal, 32(4), 339-343. https://doi.org/10.4103/tcmj.tcmj_71_20  

Valle, N. N., Kilat, R. V., Lim, J., General, E., Dela Cruz, J., Colina, S. J., Batican, I., & Valle, L. (2024). Modeling learners' behavioral intention toward using artificial intelligence in education. Social Sciences and Humanities Open, 10, Article 101167. https://doi.org/10.1016/j.ssaho.2024.101167

Velli, K., & Zafiropoulos, K. (2024). Factors that affect the acceptance of educational AI tools by Greek teachers: A structural equation modelling study. European Journal of Investigation in Health, Psychology and Education, 14(9), 2560-2579. https://doi.org/10.3390/ejihpe14090169

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the Technology Acceptance Model: Four longitudinal field studies. Management Science, 46(2), 186-204. https://doi.org/10.1287/mnsc.46.2.186.11926

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478. https://doi.org/10.2307/30036540  

Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157-178. https://doi.org/10.2307/41410412

Wang, C., Wang, H., Li, Y., Dai, J., Gu, X., & Yu, T. (2024). Factors influencing university students’ behavioral intention to use generative artificial intelligence: Integrating the theory of planned behavior and AI literacy. International Journal of Human–Computer Interaction. Advance online publication. https://doi.org/10.1080/10447318.2024.2383033  

Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial intelligence in education: A systematic literature review. Expert Systems with Applications, 252(Part A), Article 124167. https://doi.org/10.1016/j.eswa.2024.124167  

Weng, X., Xia, Q., Gu, M., Rajaram, K., & Chiu, T. K. F. (2024). Assessment and learning outcomes for generative AI in higher education: A scoping review on current research status and trends. Australasian Journal of Educational Technology, 40(6), 37-55. https://doi.org/10.14742/ajet.9540  

Wollny, S., Schneider, J., Di Mitri, D., Weidlich, J., Rittberger, M., & Drachsler, H. (2021). Are we there yet? A systematic literature review on chatbots in education. Frontiers in Artificial Intelligence, 4, Article 654924. https://doi.org/10.3389/frai.2021.654924

Xian, X. (2021). Psychological factors in consumer acceptance of artificial intelligence in leisure economy: A structural equation model. Journal of Internet Technology, 22(3), 697-705. https://jit.ndhu.edu.tw/article/view/2526

Yang, L. (2024). Theoretical analysis of Adam optimizer in the presence of gradient skewness. International Journal of Applied Science, 7(2), 27-42. https://doi.org/10.30560/ijas.v7n2p27  

Yen, H. R. (2005). An attribute-based model of quality satisfaction for internet self-service technology. Service Industries Journal, 25(5), 641-659. https://doi.org/10.1080/02642060500100833  

Yildirim, Y., Camci, F., & Aygar, E. (2023). Advancing self-directed learning through artificial intelligence. In M. J. Rodrigues & I. T. Rodrigues (Eds.), Advancing self-directed learning in higher education (pp. 146-157). IGI Global. https://doi.org/10.4018/978-1-6684-6772-5.ch009

Yousefzadeh, M., Hosseini, S. A., & Farnaghi, M. (2021). Spatiotemporally explicit earthquake prediction using deep neural network. Soil Dynamics and Earthquake Engineering, 144, Article 106663. https://doi.org/10.1016/j.soildyn.2021.106663  

Yuan, Z., Liu, J., Deng, X., Ding, T., & Wijaya, T. T. (2023). Facilitating conditions as the biggest factor influencing elementary school teachers' usage behavior of dynamic mathematics software in China. Mathematics, 11(6), Article 1536. https://doi.org/10.3390/math11061536  

Yusriadi, Y., Rusnaedi, R., Siregar, N. A., Megawati, S., & Sakkir, G. (2023). Implementation of artificial intelligence in Indonesia. International Journal of Data and Network Science, 7, 283-294. https://doi.org/10.5267/j.ijdns.2022.10.005    

Zabukovšek, S. S., Kalinic, Z., Bobek, S., & Tominc, P. (2018). SEM–ANN based research of factors’ impact on extended use of ERP systems. Central European Journal of Operations Research, 27, 703-735. https://doi.org/10.1007/s10100-018-0592-1

Zheng, Y., Zhang, Q., Chow, S. S. M., Peng, Y., Tan, S., Li, L., & Yin, S. (2023). Secure softmax/sigmoid for machine-learning computation. In ACSAC '23: Proceedings of the 39th Annual Computer Security Applications Conference (pp. 463-476). ACM. https://doi.org/10.1145/3627106.3627175  

Zhong, S., Zhang, K., Bagheri, M., Burken, J. G., Gu, A., Li, B., Ma, X., Marrone, B. L., Ren, Z. J., Schrier, J., Shi, W., Tan, H., Wang, T., Wang, X., Yu, X., Zhu, J.-J., & Zhang, H. (2021). Machine learning: New ideas and tools in environmental science and engineering. Environmental Science and Technology, 55(19), 12741-12754. https://doi.org/10.1021/acs.est.1c01339  

Zhou, J., & Zhang, H. (2024). Factors influencing university students' continuance intentions towards self-directed learning using artificial intelligence tools: Insights from structural equation modeling and fuzzy-set qualitative comparative analysis. Applied Sciences, 14(18), Article 8363. https://doi.org/10.3390/app14188363

...