This blog summarizes research trends based on published paper abstracts. Specific numbers or findings may contain inaccuracies. For scholarly rigor, always consult the original papers cited in each post.
The decade from 2015 to 2025 has transformed how educational assessment is designed, delivered, and experienced. Online quizzes, automated essay scoring, adaptive testing platforms, e-portfolios, peer assessment systems, and, most recently, AI-powered formative assessment have moved from experimental periphery to institutional mainstream. The COVID-19 pandemic accelerated this transition by years, converting emergency remote assessment into permanent digital infrastructure.
But the question that assessment researchers have been asking with increasing urgency is whether digital assessment measures what educators care about. The technology can efficiently test factual recall, procedural skill, and pattern recognition. Whether it can assess critical thinking, creative problem-solving, ethical reasoning, and the capacity to integrate knowledge across domains (the competencies that higher education claims to develop) remains contested.
The Systematic Landscape
Zainuddin, Wasis, and Ekohariadi (2026) provide a comprehensive systematic literature review of technology-based assessment in learning, covering work published between 2015 and mid-2025. Following an SLR protocol and analyzing 105 empirical and conceptual studies identified through the Scopus database, the review maps the evolution of digital assessment across the decade.
The review identifies several trajectory-level patterns. Early in the decade (2015-2018), digital assessment research focused primarily on feasibility: can assessments be delivered online with acceptable reliability? The evidence from this period was generally positive: online delivery did not systematically reduce assessment quality for most question types.
The middle period (2019-2022), catalyzed by the pandemic, shifted focus to scalability and integrity: how do you assess at scale while preventing cheating? This period produced extensive research on proctoring technologies, randomized question pools, time-limited assessments, and honor codes, with mixed results on effectiveness.
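Of these integrity measures, randomized question pools are the simplest to express in code: each student draws one item per topic pool, so every exam is different but topically equivalent. A minimal sketch, with all function and variable names illustrative rather than taken from any cited platform:

```python
import random

def build_exam(question_pools, seed):
    """Draw one item per topic pool. Seeding per student keeps the
    draw reproducible for later grading disputes (illustrative sketch)."""
    rng = random.Random(seed)
    return [rng.choice(pool) for pool in question_pools]

# Hypothetical pools of interchangeable items on the same topic.
pools = [
    ["Q1a", "Q1b", "Q1c"],
    ["Q2a", "Q2b"],
]
exam = build_exam(pools, seed="student-42")
```

The same seed always yields the same exam, so an instructor can reconstruct exactly what any student saw.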
The current period (2023-2025) centers on intelligence and adaptation: can AI make assessment personalized, formative, and capable of measuring complex competencies? This is where the gap between ambition and evidence is widest.
What Digital Assessment Measures Well
Tahir, Saputra, and Othman (2025) contribute a systematic review specifically focused on online assessment in higher education. The shift towards digital learning has accelerated the adoption of online assessment tools, and the review examines what the accumulated evidence says about their effectiveness.
The review confirms that digital assessment works well for certain purposes:
- Knowledge testing: Multiple-choice, short-answer, and fill-in-the-blank assessments translate effectively to digital formats with no loss of reliability.
- Immediate feedback: Digital platforms can provide instant automated feedback that is impossible in paper-based assessment, supporting formative learning cycles.
- Accessibility: Online assessment enables students with disabilities to use assistive technologies, students in remote locations to participate without travel, and institutions to offer flexible timing.
- Learning analytics integration: Digital assessment generates data that can be analyzed in real time to identify struggling students, problematic questions, and curriculum gaps.
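The learning-analytics use case in the last bullet reduces to simple aggregation over item-level responses: flag students scoring below a pass mark, and flag items that almost nobody answers correctly. A toy sketch; the thresholds and names are illustrative assumptions, not values from the cited reviews:

```python
def analyze_responses(responses, pass_mark=0.5, facility_floor=0.2):
    """responses: {student: {item: 1 if correct else 0}}.
    Returns (struggling_students, problematic_items)."""
    # Students whose overall proportion correct falls below the pass mark.
    struggling = [
        s for s, answers in responses.items()
        if sum(answers.values()) / len(answers) < pass_mark
    ]
    # Items with very low facility (proportion correct) across all students.
    by_item = {}
    for answers in responses.values():
        for item, correct in answers.items():
            by_item.setdefault(item, []).append(correct)
    problematic = [
        item for item, marks in by_item.items()
        if sum(marks) / len(marks) < facility_floor
    ]
    return struggling, problematic

responses = {
    "ana":  {"q1": 1, "q2": 0, "q3": 1},
    "ben":  {"q1": 1, "q2": 0, "q3": 0},
    "cara": {"q1": 1, "q2": 0, "q3": 1},
}
struggling, problematic = analyze_responses(responses)
# struggling is ["ben"]; problematic is ["q2"]
```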
What Digital Assessment Measures Poorly
The same reviews identify persistent limitations:
- Higher-order thinking: Automated assessment of analysis, synthesis, evaluation, and creation remains unreliable. AI scoring systems can evaluate the surface features of essays (coherence, vocabulary, structure) but struggle with the quality of argumentation, the depth of analysis, and the originality of insight.
- Collaborative competencies: Group projects, peer interaction, and collaborative problem-solving resist standardized digital assessment because the outcomes depend on social processes that individual-level metrics cannot capture.
- Practical skills: Laboratory work, clinical skills, fieldwork, and performance-based competencies require physical demonstration that digital environments can simulate but cannot fully replace.
- Authenticity: The disconnect between assessment tasks (which are typically structured, time-limited, and individual) and professional practice (which is typically messy, extended, and collaborative) is amplified rather than reduced by digital delivery.
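The "surface features" in the first bullet are exactly the kind of property code can count, which makes the limitation concrete: every metric in the sketch below is trivially computable, and none of them measures whether an essay's argument is any good. The metric names and thresholds are illustrative, not drawn from any real scoring engine:

```python
def surface_features(essay):
    """Compute the surface properties automated essay scorers typically
    rely on. All are computable; none measures argument quality."""
    words = essay.split()
    # Crude sentence split on terminal punctuation.
    sentences = [
        s for s in essay.replace("!", ".").replace("?", ".").split(".")
        if s.strip()
    ]
    return {
        "word_count": len(words),
        # Type-token ratio as a rough vocabulary-richness proxy.
        "vocab_richness": len({w.lower().strip(",.!?") for w in words})
                          / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

features = surface_features("Cats are great. Cats are truly great!")
```

A vacuous essay with varied vocabulary and tidy sentences scores well on every one of these dimensions, which is precisely the gap the reviews describe.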
Measuring Digital Literacy Itself
Suri, Festiyed, and Azhar (2025) address a meta-level challenge: how do you assess the digital literacy that students need to succeed in digital assessment environments? Their systematic review and bibliometric analysis examines digital literacy assessment instruments, competency dimensions, and challenges across educational levels.
Digital literacy includes technical proficiency, information evaluation, online collaboration, creativity, and ethical technology use. The review reveals that existing assessment instruments tend to overweight technical skills (can the student use the platform?) and underweight critical skills (can the student evaluate online information and navigate digital ethics?). This measurement bias is consequential: if institutions measure only what digital assessment instruments can easily test, they will systematically undervalue the competencies that digital citizens most need.
Sari, Wicaksana, and Rahman (2025) examine the emerging frontier: adaptive AI-driven formative assessment. In the era of digital transformation and AI dominance, the paper argues that cognitive and social-emotional skills have become vital competencies from early stages of education, and that adaptive instructional systems powered by AI offer a pathway to assess and develop these competencies in real time.
The AI-adaptive approach promises to solve a fundamental limitation of traditional assessment: fixed difficulty. In a conventional test, every student answers the same questions, which means the assessment is optimally informative only for students near the test's difficulty level. Adaptive testing adjusts difficulty in real time based on the student's responses, maintaining optimal information gain throughout the assessment.
However, adaptive assessment also introduces new concerns: algorithmic bias in difficulty adjustment (if the system underestimates a student's ability, it may present systematically easier items, creating a ceiling effect), opacity of scoring (students and teachers may not understand why the system assigned a particular score), and the assumption that learning can be meaningfully decomposed into discrete, hierarchically ordered skills that a branching algorithm can navigate.
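The adjustment loop itself can be sketched as a simple up/down staircase (a stand-in for the IRT-based item selection that real platforms use; levels and names here are illustrative). The sketch also makes the ceiling-effect worry from the paragraph above visible: once the level drops, every subsequent item is drawn from an easier pool.

```python
def adaptive_test(answer, items_by_level, start_level, n_items):
    """Up/down staircase: harder item after a correct answer, easier
    after a miss. `answer(item)` returns True/False. A toy stand-in
    for IRT-based selection; if early misses drag the level down, the
    student keeps seeing easy items (the ceiling-effect concern)."""
    lo, hi = min(items_by_level), max(items_by_level)
    level, administered = start_level, []
    for _ in range(n_items):
        item = items_by_level[level].pop(0)  # next unused item at this level
        administered.append((item, level))
        level = min(level + 1, hi) if answer(item) else max(level - 1, lo)
    return administered

# A student who answers everything correctly climbs to the top level:
items = {1: ["e1", "e2", "e3"], 2: ["m1", "m2", "m3"], 3: ["h1", "h2", "h3"]}
run = adaptive_test(lambda item: True, items, start_level=2, n_items=3)
# run is [("m1", 2), ("h1", 3), ("h2", 3)]
```

Real systems replace the fixed step with an ability estimate updated after every response, but the opacity concern is the same: the branching logic, not the student, decides which evidence is ever collected.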
Claims and Evidence
| Claim | Evidence | Verdict |
|---|---|---|
| Digital assessment maintains reliability compared to paper-based formats | Tahir et al. (2025): generally supported for structured question types | ✅ Supported |
| Digital assessment can measure higher-order thinking effectively | Zainuddin et al. (2026): persistent gap between ambition and capability | ❌ Refuted (with current tools) |
| AI-adaptive assessment improves formative feedback | Sari et al. (2025): conceptually promising; large-scale evidence limited | ⚠️ Uncertain |
| Digital literacy assessment adequately captures critical digital competencies | Suri et al. (2025): overweights technical skills, underweights evaluative and ethical dimensions | ❌ Refuted |
Open Questions
Can generative AI transform assessment from product evaluation to process evaluation? If AI can analyze not just the final answer but the reasoning process (drafts, revisions, search patterns, time allocation), could it assess learning more authentically than traditional output-based assessment?

How should institutions balance assessment security with assessment authenticity? Proctored, locked-down assessments are secure but artificial. Open-book, take-home assessments are authentic but vulnerable to AI assistance. Is there a middle path?

What happens when the assessment tool becomes the learning environment? As adaptive assessment platforms increasingly function as learning environments (adjusting content based on performance), the distinction between assessment and instruction blurs. Is this integration beneficial, or does it reduce assessment independence?

How do we ensure digital assessment equity across the global digital divide? Students with unreliable internet, older devices, or shared computing access are systematically disadvantaged by digital assessment. What design principles minimize this inequity?

Implications
A decade of digital assessment research converges on a practical conclusion: digital tools are well suited for assessing what is easy to assess (factual knowledge, procedural skill, structured problem-solving) and poorly suited for assessing what is hard to assess (critical thinking, creativity, ethical judgment, collaborative competence). This alignment between technological capability and assessment target is not coincidental; it reflects the fundamental limitation of computational assessment: computers are good at evaluating outputs that can be specified in advance, and poor at evaluating outputs whose value lies in their unpredictability.
The implication is not that digital assessment should be abandoned but that it should be used for what it does well, combined with human assessment for what it does not. The optimal assessment system for most educational contexts is hybrid: digital tools for efficiency, scalability, and data richness; human judgment for complexity, nuance, and the evaluation of genuinely creative work.
References (4)
[1] Zainuddin, A., Wasis, & Ekohariadi. (2026). A Systematic Review of Digital Assessment Trends in Education from 2015-2025. Multidisciplinary Reviews, 2026, 433.
[2] Tahir, M., Saputra, S., & Othman, S. (2025). Online Assessment in Higher Education: A Systematic Literature Review. Multidisciplinary Reviews, 2026, 024.
[3] Suri, N.A., Festiyed, & Azhar, M. (2025). Measuring What Matters: A Systematic Review and VOSviewer-Based Bibliometric Approach to Digital Literacy Assessment. Research in Learning Technology, 33, 3413.
[4] Sari, N.K., Wicaksana, M.F., & Rahman, M.A. (2025). Adaptive AI-Driven Formative Assessment in Early Childhood Education: A Systematic Review and Meta-Analysis on Cultivating Social-Emotional Learning and Early Moderatio. Child Education Journal, 7(3), 8412.