What We Learned About AI and Education in 2023
From Disruption to Integration: AI Responsive Education in 2023
In 2023, readers left many kind comments and requests in their replies to my Newsletter issues. The question I was asked most often concerned the impact of AI on students, teachers, parents, and the future of education. I keep thinking about this topic.
ChatGPT created a scenario for many schools, universities, and classrooms that we are still dealing with today in 2024. I reached out again to Nick Potkalitsky of the Educating AI Newsletter to summarize the state of things on these and related fronts.
Nick is one of those pioneer teachers thinking about this at another level. I invite you to sign up for his Newsletter if you enjoy this topic and this article.
Guest contributor’s Newsletter: Educating AI by Nick Potkalitsky
This 5,000-word essay is fairly comprehensive. Nick is an innovative educator developing AI-responsive instructional methods and approaches for today's schools. There are a number of really great Newsletters on AI in education, but Nick started his relatively recently and brings enormous dedication to the topic.
You may know teachers, policy makers, parents, and other interested parties who would find this information useful:
If you want to read my deep dives, and enjoy my topics and guest contributors, join premium to support the publication.
Sections in this Essay ~ Table of Contents
💡 Paradigm Shifts
📣 Call to Action
✍🏻 AI Literacy
📊 Models and Interfaces
🎓 Tutoring and Engagement
📜 Writing Instruction
📋 Evaluation and Assessment
👉🏻 Bias, Disability, Accessibility
👌 Recommendations for AI-Responsive Educators
By Nick Potkalitsky, Ph.D., December 2023.
1. Paradigm Shifts
In the past fifty years, the field of education has steadily transformed through the introduction and adoption of new technology. The calculator, the desktop computer, the internet, and big data analysis — each of these technologies challenged educators’ received notions of knowledge acquisition, memory processing, analysis, synthesis, and assessment. This often resulted in shortsighted bans and prohibitions, or more subtle resistance to change.
Yet, in each instance, a resilient group of teachers, administrators, researchers, and policymakers advocated for experimentation in the utilization, implementation, and integration of these tools and resources into schools and classrooms. While these efforts have yielded many methods, approaches, and processes that are no longer practiced today, this experimental phase — and the accompanying openness to varying degrees of success and setbacks — was absolutely essential.
Through experimentation, educators learned the scope, reach, and breadth of each new technology. Questions arose:
How does this new tool impact existing systems of knowledge, and methods of instruction and assessment?
What new systems of knowledge and methods of instruction does this tool offer?
How does this tool serve to broaden or complicate the purpose of educational institutions?
And what steps might schools take to effectively, efficiently, and ethically engage with those challenges?
At the end of this difficult process of integration and implementation, the phase of normalization begins. Schools, universities, and other educational institutions emerge with new, classroom-tested, and evidence-based practices. These practices engage students in the vital work of discovering their creative purpose and achieving mastery of course objectives conducive to the realization of that purpose. The circle closes, at least until the next technological challenge arrives.
2. Call to Action
Now, a year after the launch of ChatGPT, educators, administrators, and educational researchers have the opportunity to advance AI and Education beyond mere disruption, towards a robust program of experimentation, implementation, and integration. Admittedly, there are many obstacles ahead.
Our most capable models continue to fall short on transparency, privacy, and bias.
Many school leaders, superintendents, administrators, and a handful of educational researchers oppose the full or even partial integration of AI systems on various educational, pedagogical, philosophical, and practical grounds.
Meanwhile, students are eager for more than just a “give-it-a-try” methodology. They seek a rigorous, evidence-based curriculum that prepares them for the realities of work in college and beyond. And if we educators are being honest, this curriculum does not exist yet.
Since the launch of ChatGPT, we have learned much about AI and Education. This newsletter builds on my September report on the State of AI in Education, published here at AI Supremacy. As an educator, writer, and researcher, I broadly support accelerating efforts to implement and integrate AI in today's schools and classrooms.
As the AI and Education landscape continues to evolve, teachers need more AI literacy training, support in developing AI-responsive curricula, and collaborative spaces to maximize efforts to innovate and experiment with AI-responsive instructional practices and strategies.
Currently, many schools and districts are still in the committee phase regarding AI and Education. Meanwhile, faculty and students are developing usage practices and habits that may be challenging to modify as more information about best practices becomes available.
For these reasons, schools, districts, administrators, and teachers need to start bridging the gap with frameworks, perspectives, insights, and strategies. These will lay the groundwork for the evidence-based and classroom-tested pedagogies that will likely emerge in the next few years. I will conclude this newsletter with some recommendations on how to bridge this gap.
Finally, educators experimenting with and collaborating towards AI-Responsive Education need to stay informed of the latest research. Academic journals are beginning to publish studies on the best practices of AI integration and implementation. While our classroom efforts are a valuable source of experimental data, we can also enhance our progress and effectiveness by leveraging best practices emerging from more controlled research settings.
In 2024, I will use my Substack, Educating AI, to bring educators the most valuable insights from other school settings and academia. More broadly, I will explore:
Specific AI integrations and applications that enhance student development in knowledge, skills, and competencies across various educational, cognitive, professional, and institutional dimensions.
In-depth analysis of lessons learned from LLM-based learning and its implications for student education in classroom settings.
Interviews and insights from students and educators actively engaging with LLMs in their daily routines and educational practices.
Comprehensive curriculum outlines showcasing lesson plans and strategies for AI-enhanced writing courses.
Expanded literary and philosophical discussions addressing the ethical dimensions and implications of AI in education.
Review and synthesis of the latest academic research on effective practices for integrating and implementing AI in contemporary classrooms.
Continued examination of outcomes-based grading methodologies and their role in facilitating AI-responsive educational approaches.
Please subscribe to get regular updates on these and other critical issues in AI and Education. Consider opting for a paid subscription to enable me to devote more time and resources towards providing you with the most current insights and effective strategies for adopting and applying artificial intelligence in today’s classrooms.
3. AI Policy
Figure 2: “Fictionalized Biden vs. AI,” Created by Midjourney, 12/19/23.
AI policy has seen rapid development in the fall of 2023. In late October, the Biden Administration released an Executive Order on AI. This order specifies that the National Institute of Standards and Technology (NIST) is tasked with establishing "red team testing" standards, a cybersecurity approach critical for ensuring the safety of AI systems before their public release.
This development is expected to influence educational technology companies and schools, potentially shaping how they implement AI systems. Throughout 2023, the Biden Administration faced challenges in passing substantial data privacy legislation and regulations. This lack of legislative groundwork has rendered the Biden Executive Order less effective in implementing its recommendations.
That said, the Family Educational Rights and Privacy Act (FERPA) continues to offer broad protection for students, including student writing, and limits the extent to which student data can be input into systems, including AI models. However, the situation becomes more complex when students input their writing into an AI system voluntarily.
Figure 3. “FERPA Rules of the Road,” https://tech.ed.gov/privacy/ferpa_image/.
Educators should consider the following questions before asking students to use a Large Language Model (LLM) in class:
Are there ways to use an LLM without signing up for a full account?
How old do students need to be to sign up for their own account?
What information must students provide to sign up for a personal account?
What information does a company store about its users?
How long does a company retain that information?
How do AI companies create identity composites of their users?
What kinds of texts should students be entering into LLMs? Why?
What kinds of texts should students avoid entering into LLMs? Why?
In the absence of strong guidance from federal or state authorities, educational institutions bear the great responsibility of providing a safety net for both educators and students using these rapidly advancing products. Emerging wisdom from ed-tech and ed-training suggests avoiding overly specific policies and regulations. But teachers and students need real guidelines regarding AI use and access:
Can teachers ask age-eligible students to sign up for their own accounts?
How can students who are not age-eligible use generative AI?
Is there a school-wide policy on AI use cases?
Do teachers have the freedom to establish use-case policies responsive to their individual curricula?
Should students assume they are allowed to use generative AI unless specified otherwise?
Or should AI use be presumed prohibited unless specified otherwise?
Is there a preferred LLM on campus, or can faculty and students choose models on a case-by-case basis?
For schools designing and implementing more prohibitive policies, it's crucial to include comprehensive rationales. While I personally view such prohibitive policies as counterproductive at this stage, their successful implementation hinges on convincing faculty and students of their ultimate benefit.
I will revisit the topic of rationales when offering guidance on writing instruction and bridging the gap between AI instructional theory and practice.
4. AI Literacy
Equipping administrators, teachers, and staff with up-to-date knowledge on AI is one of the most pressing issues in AI and Education today. Schools, universities, and other educational institutions find themselves in a knowledge gap. A recent poll reported in The Economic Times found that at least 49% of educators do not feel adequately trained to manage the impact of AI in their classrooms.
The good news is that a wide array of high-quality AI literacy resources are currently available to schools. Educational organizations need only to dedicate some time to sift through these copious offerings, select a pathway that best suits their faculty and staff's needs, and allocate time during professional development for the initial stages of AI literacy training.
Casal-Otero et al. define “AI literacy” as “a set of skills that enable a solid understanding of AI through three priority axes: learning about AI, understanding how AI works, and learning for life with AI.” In late November 2023, Big Tech demonstrated a commitment to increasing global workforces’ AI literacy by launching free online training courses.
Microsoft’s Generative AI training, available on GitHub, offers 12 lessons focused on Gen AI principles and applications for development. Amazon aims to provide free AI skills training to 2 million people by 2025 with its 'AI Ready' commitment.
For a more school-oriented approach to AI literacy, administrators and educators should consult the 'AI Guidance For Schools Toolkit,' which offers a variety of resources, including a comprehensive 100-slide deck covering all three priority axes of AI literacy.
The next stage in development will involve creating similar resources for students. CommonSense Media offers an 8-course series of videos for students in grades 6-12 that serves as an excellent starting point towards a more comprehensive curriculum. In addition, teachers should know that CommonSense Media also offers comprehensive reviews of many commercial and educational AI services.
Figure 4. CommonSense Media Review of ChatGPT (Nov. 2023) https://www.commonsensemedia.org/ai-ratings/chatgpt
5. Models and Interfaces
Since the introduction of ChatGPT in November 2022, significant progress has been made in the field of cloud-based artificial intelligence, particularly in large language models.
Major LLMs such as ChatGPT, Bard, Claude, and LLaMA have undergone multiple updates, improving in key areas:
Translation capabilities are now more accurate and nuanced.
Models' ability to understand and respond to diverse queries has increased.
Processing speed and operational efficiency have been optimized.
Models have become more robust and reliable across a wide array of inputs and scenarios.
In the spring of 2023, educational critics focused on LLMs’ tendency to produce hallucinations, where AI models generate incorrect or irrelevant information. March 2023 saw OpenAI’s release of GPT-4, a model trained using Reinforcement Learning from Human Feedback (RLHF), which refines AI models based on human input to enhance response quality and alignment with human judgment.
This approach significantly improved the output's reliability and accuracy, as well as the model's coherence in extended interactions, by aligning purposes through user interactions and text generation.
Despite these advancements, completely eliminating hallucinations in LLMs remains an intractable problem. The technology community, including researchers Adam Tauman Kalai (Microsoft Research) and Santosh S. Vempala (Georgia Tech), is beginning to argue that LLMs’ vast capacities inherently include a degree of unreliability and inaccuracy independent of the training set or transformer.
As AI systems increasingly gain real-time internet access for fact verification, the argument against LLMs due to hallucinations somewhat weakens. However, most real-time search functionalities, still in their nascent stages, are available only in paid versions of systems like Bard, ChatGPT, and Claude. When students use these functions for writing, they often find that the tools retrieve top search results without evaluating their quality or credibility.
Perplexity.ai offers a free LLM with basic search capabilities, but it's worth noting that the company has recently turned its focus to the development of "uncensored" models. Their recent social media posts state, “Our models prioritize intelligence, usefulness, and versatility in tasks, without imposing moral judgments or limitations,” aligning with Elon Musk’s xAI approach.
Figure 5. “Perplexity.ai Logo.”
Microsoft's Copilot AI assistant, in its latest version, showcases advancements in search functionality via a reinvigorated Bing. AI search functionalities are predicted to improve very rapidly in 2024, and students and teachers will benefit greatly from these developments.
In the fall of 2023, educators, researchers, and critics called insistently for different interfaces to engage with LLMs, beyond the standard chatbot access point, which, while interactive, can induce user passivity and present model outputs as authoritative. Projects like Penpal, Language Model Sketchbook, and Spatial Canvases are developing alternative platforms that allow crafting of writing with varied AI assistance and gathering of multiple drafts in one window.
Such innovations are crucial as they help "materialize" the thought processes, enabling students to maintain control over their content and voice during AI-assisted writing. New interfaces will change the tenor of the conversation surrounding AI and education as students become more active participants in the process of text generation.
In the upcoming year, educational institutions are poised to make critical decisions regarding their use of cloud-based, commercial AI services. A notable concern among critics is the potential for increased reliance on these services to prompt companies to restrict access and escalate fees.
Throughout the 2023-24 academic year, students have primarily engaged with AI through personalized logins to access LLMs, which require yielding email addresses and phone numbers. A transition to private or proprietary models could offer a solution to the privacy concerns associated with disclosing personal information. In response, several universities are developing their own AI models, tailored to meet specific standards of security, safety, and legal compliance.
This shift reflects a growing awareness of the need to balance technological advancement with ethical and privacy considerations in the educational sector.
6. Tutoring and Engagement
Figure 6. “Fictionalized Sal and Al Grabbing Coffee with AI,” Created by Midjourney, 12/19/23.
Since the summer of 2023, the AI tutorial landscape has become increasingly crowded. One of the most developed, commercially available AI tutorial systems is Khanmigo, a joint venture between Khan Academy and OpenAI. Khanmigo is an AI-powered tutor that offers personalized, interactive learning experiences, utilizing at times a Socratic or inquiry-based teaching approach.
Rather than merely providing answers, it engages students in problem-solving, skill tests, and writing prompts by asking probing questions. Available for a monthly or annual fee, it can be accessed through a Khan Academy account.
The introduction of systems like Khanmigo has reignited debates on the effectiveness of personalized tutoring and its role within the broader context of AI in education. Sean Geraghty and Mike Goldstein of the Fordham Institute recently discussed human tutoring's dual functions: motivational intervention and instructional intervention. They also note that tutorial programs typically cater to two types of audiences: more motivated and less motivated students.
Comparing human-to-human tutoring with non-AI computer tutoring, these authors found that while human tutoring offers more motivational content, student motivation is a key driver of success. Applying these findings to AI-based systems, Geraghty and Goldstein somewhat shockingly predict that software like Khanmigo will significantly increase time spent on tutorial systems. Khan Academy currently motivates 11% of its users to engage for the recommended 30 minutes a day; Khanmigo is projected to raise this to 20%.
In the ed-tech and training sectors, claims about the efficacy of AI tutoring systems often accompany notions, explicit or implied, that human teachers might become less essential in future educational models. Dan Meyer, in his Substack Mathworlds, counters claims regarding the potential effectiveness of AI-based tutorial systems and the teacher-replaceability thesis. Meyer emphasizes that tutoring is more about motivation than most AI companies acknowledge.
The Department of Education's May report on “AI and the Future of Learning and Teaching” predicts that the initial offerings of AI-tutorial systems may present a limited range of learning designs, pathways, and approaches. These systems might benefit only certain student segments, highlighting the need for more diverse and inclusive AI educational tools.
Figure 7. The Long Tail of Learner Variability, “AI and the Future of Learning and Education,” 46.
In addition, Meyer and others conceptualize the classroom environment as a hub of personal, social, emotional, cognitive, instructional, institutional, and practical decisions, each branching into numerous options and choices during a school day. While AI systems might automate a subset of these skills and choices, integrating them effectively into the dynamic and unpredictable context of students' lived experiences remains a considerable challenge.
7. Writing Instruction
The rapid advancement of AI systems in generating coherent text presents both opportunities and challenges for educational applications, particularly in writing instruction. As a writing instructor, I've experienced a range of emotions in response to AI's text-generating capabilities, from shock and dismay to acceptance and excitement.
Currently, writing instruction in most schools is more disrupted than transitioning or experimenting. In high school settings, studies suggest that 10-20% of students use AI to assist with the majority of their writing assignments. These figures should be a wake-up call for educators who underestimate AI's presence or overestimate the efficacy of AI detectors. The most appropriate instructional response is to adapt existing methods to the challenges and opportunities presented by AI.
Figure 8. “Pew Research Study, Nov. 16, 2023.”
When students use AI systems for writing, they engage in three distinct skill sets: 1) Content knowledge, 2) Prompt-engineering, and 3) Core writing skills.
First, as Lance Cummings highlights in Cyborgs Writing, quality AI composition requires detailed content knowledge and understanding of causal and relational connections. This "front-loading" of AI systems is crucial for generating high-quality compositions.
Second, prompt-engineering, accessible to students without coding knowledge, demands high-level rhetorical, organizational, logical, and logistical competencies. Students must formalize internal and structural components of assignments, requiring rigorous forethought and planning.
Third, throughout the detail-arrangement and prompt-engineering process, clear, concise, and intentional written language guides the AI system. This process involves not only initial composition but also editing and revising of prompt texts in order to maximize the efficiency and efficacy of AI assistance.
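The three skill sets above can be made concrete with a small sketch. This is a hypothetical illustration, not any particular tool's API: the function name and fields are my own. It shows how a student "front-loads" content knowledge (the facts list) and formalizes structural constraints (audience, organization) before any text is generated; revising the prompt then becomes an act of clear, intentional writing in its own right.

```python
# Hypothetical sketch of "front-loading" a writing prompt: the student
# supplies content knowledge and structural constraints up front, then
# refines the prompt text across revisions.

def build_writing_prompt(topic, facts, audience, structure):
    """Assemble a structured prompt that front-loads content knowledge."""
    fact_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        f"Write a short essay on {topic} for {audience}.\n"
        f"Base the essay only on these facts:\n{fact_lines}\n"
        f"Follow this structure: {structure}\n"
        "Do not introduce claims beyond the facts listed above."
    )

prompt = build_writing_prompt(
    topic="the causes of the Dust Bowl",
    facts=[
        "Drought struck the Great Plains in the early 1930s.",
        "Deep plowing had removed the native grasses that held topsoil.",
    ],
    audience="ninth-grade readers",
    structure="claim, two supporting paragraphs, conclusion",
)
print(prompt)
```

Notice that writing the `facts` list well requires real content knowledge, and specifying `structure` requires the same organizational forethought a traditional outline does.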
In my Substack Educating AI, I've explored at length the role of AI-assisted writing in relation to the key educational factor of knowledge-constitution. Writing is a fundamental process for constructing knowledge, as seen in the educational theory of Writing to Learn. Cognitive processes during writing involve memory access and the synthetic formulation of new thoughts, with novel propositions formed through the sentence production process.
Educators need to develop comprehensive curricula that embrace the possibilities of AI-assisted composition while preserving the necessities of existing writing skills. The challenge lies in managing the interaction between AI-assisted and traditional writing instruction. Educators face questions such as the timing of AI introduction, assessment of student agency in AI-assisted work, and instruction in efficient and ethical AI practices.
These questions extend beyond the scope of a single newsletter, touching on very complicated and nuanced educational topics. A practical first step for teachers is to familiarize themselves with their students' unique writing voices. Grounded in this understanding, educators can more effectively chart a course through the rapidly changing educational terrain, positioning themselves to not just cope but excel.
8. Evaluation and Assessment
Integrating AI into writing instruction and assessment, while a complex task requiring much reflection and oversight, offers significant benefits for educators and students. Current AI systems, including free versions, serve effectively as writing assistants, copyeditors, proofreaders, and thought-partners.
Educators can use AI systems in ongoing projects for authentic assessments, helping students achieve specific objectives and outcomes. From a pedagogical standpoint, it's vital for students to learn how to respond to AI-generated feedback, focusing on preserving their own voice and style, differentiating between original text and feedback, and evaluating feedback relevance and accuracy.
Success from a data standpoint will depend on developing strategies for relaying AI chatbot feedback to teachers, ensuring accuracy, fidelity, reliability, and accountability in the process. As the AI ecosystem evolves with more responsive chatbots, smaller language models, and systems functioning independently of internet connectivity, a broader range of AI-based tools for assessment and evaluation will become available.
This expansion includes options beyond large language models like Claude, Bard, and ChatGPT, offering diverse and adaptable choices in educational contexts.
The integration of AI into educational assessment is challenging and requires a nuanced understanding of the technology's capabilities and limitations. A primary concern is ensuring AI feedback aligns with pedagogical objectives and accurately reflects students' understanding.
Teachers must skillfully interpret AI-generated feedback and determine its relevance to individual student needs and learning paths. Additionally, ethical considerations, especially concerning data privacy and potential algorithmic bias, must be carefully considered.
Effective AI use in assessment also demands that educators stay updated on technological developments and critically evaluate the suitability of different AI tools for various educational contexts.
This highlights the need for ongoing professional development in AI and education, enabling educators to leverage AI's potential while focusing on equity and student well-being.
In an important post, the team at AI x Education urges educators to “reimagine assessments in the era of AI.” Echoing initiatives at the University of Sydney, they advocate for a “two-lane” assessment approach. This includes ‘assessment of learning,’ supervised assessments authenticating mastery of specific knowledge and skills, and ‘assessment for learning,’ unsupervised assessments encouraging responsible AI engagement.
This “two-lane” approach transforms quizzes, tests, and exams from mere memorization exercises into knowledge-constituting experiences. The University of Sydney’s approach not only allows students to demonstrate mastery within a discipline but also prepares them for “productive and responsible participation in an AI-integrated society.”
9. Bias, Disability, Accessibility
In the spring and summer of 2023, the discourse around AI in education shifted from focusing mainly on plagiarism to broader concerns including biases, privacy, accessibility, and ethical considerations. In my previous post for AI Supremacy, I delved into these issues extensively. Since then, Michael Woudenberg, Alejandro Piad Morffis, Arvind Narayanan, and Sayash Kapoor have further clarified the biases, assumptions, and preconceptions present in current LLMs.
Figure 10. “AI Snake Oil Logo.”
As highlighted by Narayanan and Kapoor in a critical post at AI Snake Oil, there are three levels of biases in any LLM:
ChatGPT and similar chatbots exhibit ‘implicit association bias,’ akin to human biases, evident in word associations that sometimes reveal occupation-gender stereotypes, likely originating from their pre-training data.
The challenge in defining and eliciting consistent 'opinions' from chatbots leads to 'opinion bias.' Influenced by both pre-training and fine-tuning phases, the ambiguity in 'opinion' limits research value in this area.
The most critical type is 'behavioral bias,' observed in chatbots' regular interactions. While theoretically mitigatable through fine-tuning, practical challenges arise, like false refusals to answer benign questions, especially those requiring politically neutral responses.
Narayanan and Kapoor’s analysis underscores a key point in the Department of Education’s “Report on AI and the Future of Learning and Teaching”: “Bias is intrinsic to how AI algorithms are developed using historical data, and anticipating all impacts of biased data and algorithms during system design can be challenging” (8). However, once educators start distinguishing between bias levels and types, they find themselves better positioned to utilize these tools in the classroom.
In fact, the presence of certain biases can become educational opportunities, as teachers ask students to develop prompts that either amplify or counteract these tendencies. Recent research from Anthropic suggests that level 2 and 3 biases in LLMs can be mitigated by simply instructing the model to “ignore demographics” in analyses.
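The mitigation just described is strikingly simple in form: a plain-language instruction prepended to the task. The sketch below is a hypothetical illustration of that pattern (the names and wording are my own, not Anthropic's exact prompt); a classroom exercise might have students run the same task with and without the prefix and compare outputs.

```python
# Hypothetical sketch of an instruction-based bias mitigation: prepend a
# plain-language directive asking the model to ignore demographic
# attributes before the actual task prompt.

MITIGATION = (
    "Ignore all demographic information (race, gender, age) in your analysis."
)

def with_bias_mitigation(task_prompt, mitigation=MITIGATION):
    """Prefix a task prompt with a debiasing instruction."""
    return f"{mitigation}\n\n{task_prompt}"

mitigated = with_bias_mitigation(
    "Evaluate the strength of this scholarship application summary."
)
print(mitigated)
```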
In the fall of 2023, emerging research suggested that LLMs could assist students with disabilities, support universal design principles, and enhance curriculum accessibility, thereby tempering concerns about AI biases. Although still experimental, early results are promising. A few significant studies include:
In October 2023, Xiaoming Zhai and Sahrish Panjwani-Charania published a meta-analysis of 16 studies where school-age students with dyslexia used various AI applications, including adaptive learning, chatbots, and interactive robots. The study categorized AI support into four levels: 1) substitution, 2) augmentation, 3) modification, and 4) redefinition. The findings indicate AI’s potential to aid students with learning disabilities, underscoring the need for further research.
The Learning Disabilities Resources Foundation (LDRF) is investigating AI uses in diagnosing dyslexia, learning disabilities, executive function issues, and ADHD. Their report notes the traditionally high costs of diagnosing these disorders, which are often not covered by health insurance. The foundation hopes AI, in skilled hands, will reduce diagnostic costs and increase accessibility.
Riikka Hofmann’s study, conducted through the University of Cambridge Medical Education Group, further illuminates how AI can assist in diagnosis. In this study, 178 trainee teachers in Germany analyzed written content from six “fictionalized ‘simulated’ pupils with potential learning difficulties.” The trainees demonstrated a 10% increase in diagnostic abilities when provided with AI summaries and analyses. Hofmann concludes that while AI did not directly aid in diagnosing specific disorders, it improved teachers' diagnostic efficiency.
Figure 11. “Riikka Hofmann Quote.”
While confronting the inherent biases of AI systems and navigating their ethical complexities, the potential benefits, particularly in enhancing educational accessibility and catering to diverse learning needs, are noteworthy. As AI progresses, it challenges educators not merely to adapt but to thoughtfully innovate in their teaching approaches.
This evolving landscape underscores the necessity of ongoing research and cautious implementation, advocating for the use of AI as a tool for inclusive education, albeit with a measured approach. Looking forward, the journey involves a critical balance between refining AI technologies and reevaluating educational philosophies and practices, with the aim of developing a learning environment that is adaptable and responsive, yet mindful of the diverse needs of all learners.
10. Recommendations for AI-Responsive Educators
The transition to AI-enhanced education requires educators to balance enthusiasm for innovation with personal well-being. This balance involves recognizing the risks of overexertion, as seen in efforts to rapidly develop AI-responsive curricula.
Such endeavors, while growth-inducing, can lead to exhaustion, highlighting the importance of self-care. Educators are reminded that adapting to educational changes, especially those involving AI, should be approached as a sustained journey, not a rushed effort, emphasizing the need for patience and mindful pacing in this transformative process.
A critical step for educators is to take stock of their actual knowledge about Gen AI. Familiarity varies widely, from those adept with the technology to those still grappling with the basics of AI literacy.
Educators need to embody their understanding of LLMs in the classroom, sharing their experiences and insights openly with students. This transparency fosters an environment where both strengths and limitations of AI are acknowledged, enhancing the learning experience.
In the realm of AI in education, hastily adopted solutions, such as AI detectors or bans, often miss the mark. Emphasizing slow, deliberate, and experimental change is crucial. This approach, rooted in the principle of 'do no harm,' prioritizes meaningful and sustainable integration of AI into educational settings over quick, possibly harmful fixes.
The teacher-student relationship should be anchored in trust, honesty, and accountability, especially in light of the challenges posed by AI. Educators must engage students as informed partners in the learning process, respecting their individuality and agency. This involves openly discussing the use of AI tools, their advantages, and their limitations, thereby nurturing a collaborative learning environment.
It's essential to move beyond polarized views on AI in education, focusing instead on developing nuanced, evidence-based rationales. These rationales should be compelling to students, addressing immediate, tangible benefits of AI use in education, such as preparing for future job markets or enhancing traditional learning methods. This means sidestepping divisive debates in order to present balanced, persuasive arguments.
The interaction with current LLMs in educational settings highlights their potential as tools for brainstorming and editing. Educators are exploring innovative ways to integrate AI into the writing process, focusing on enhancing student creativity and autonomy.
This involves a careful examination of AI's influence on student thought processes, ensuring it serves as a catalyst for original ideas rather than a crutch that limits creative exploration. The goal is to harness AI's capabilities in a way that enriches the writing experience while respecting the student’s individual thought journey.
Educators are encouraged to think beyond current paradigms, imagining how AI might radically transform educational practices in the future. This involves contemplating scenarios where traditional methods like reading and writing are augmented or even replaced by AI-driven techniques.
Such forward-thinking requires educators to be open to new possibilities, reassessing the value of established practices in light of emerging technologies.
In responding to the challenges of Gen AI, collaboration is key. Educators should approach curriculum updates with precision, formulating research questions that align with specific educational goals. By experimenting collaboratively, teachers can collectively navigate the complexities of AI, moving from individual anecdotes to a broader, evidence-based understanding of AI's impact on education.
This process mirrors the natural evolution of technology integration in teaching, where experimentation and shared learning pave the way for institutional adaptation.
Thank you for engaging with Educating AI's Year-End Review. It's my hope that you found it insightful and enlightening. We are indeed navigating through an exhilarating era, and 2024 promises to be a year marked by significant transformation and innovation. I urge you, my readers, to embrace the changes to come and to collaborate in forging best practices. These practices are vital for equipping our students with the skills necessary to make their way into an uncertain future. It is only through our collective efforts that we can effectively move from the present phase of disruption to a fruitful era of experimentation and integration.
Open to Further Communication
I am always open to your queries and discussions. Feel free to reach out to me at firstname.lastname@example.org with any questions.
Additionally, I offer AI literacy and content consulting services for those looking to deepen their understanding or implement AI in their educational practices. I am excited to work with individual teachers, schools, districts, companies, or organizations as they develop best practices and strategies for AI-responsive education.
Happy New Year!
Nick Potkalitsky, Ph.D.