From the Cool Cat Teacher Blog by Vicki Davis
Subscribe to the 10 Minute Teacher Podcast anywhere you listen to podcasts.

A new MIT Media Lab study took two groups of writers — one started with AI, one started with their own brain. Then they swapped. The group that started with their own thinking before bringing in AI? They had a clear advantage. As teachers, we keep getting pushed into “love AI” or “ban AI” camps. The truth is in the middle, and it starts with the order of operations. Brain first. AI second.
Sponsor. This episode is sponsored by EF Explore America and their STEM Tours. Lead your students on a STEM tour to places on the cutting edge of innovation to show them how STEM thinking often shows up where you least expect it. Imagine your students coding robots with MassRobotics at MIT, exploring marine ecosystems in Florida's coral reefs, or even sitting down to talk with a former spy in Washington, D.C. If you want to inspire your students and give them a fresh perspective on the power of STEM, visit efexploreamerica.com/STEM.
Browse EF Explore America STEM Tours →
This week we are talking with Philip Seyfried — doctoral student at Teachers College, Columbia University, decade-long middle school ELA teacher, and co-author of AI-Enhanced Literacy: Practical Steps for Deepening Reading and Writing Instruction. We dig into the brain-first approach, why AI detectors don't work (and what does), how to monitor AI in the classroom without policing it, and how to build the kind of trust that lets students tell you the truth about how they're actually using these tools.
Listen to the Show

Watch on YouTube · Subscribe to the channel for new episodes every week.
Key Takeaways for Teachers from Philip Seyfried
- Brain first, AI second — the MIT Media Lab study reveals a clear order of operations. Researchers gave one group AI from the start of a writing task and one group only their own thinking. The group that started with their brain — and added AI second — had a clear advantage. The cognitive scaffolding built first lets AI accelerate the work instead of replacing it. (Note: this study is a preprint and not yet peer reviewed, so treat the findings as preliminary.)
- Yes-AND, not either-or. Decades of classroom practice still work — writers' notebooks, paper books, partner talk, collaborative spaces. Don't throw them out for AI. Phil's frame: keep what works AND add what's new.
- AI detectors don't work — and they're harming students and teachers. Real writers — including researcher danah boyd — get falsely flagged for using em dashes or words like “delve.” MIT itself has shown detectors are unreliable. The fix isn't a better detector — it's a better classroom process.
- Build trust so students can tell you the truth about AI use. “I don't want it to feel yucky,” Phil says. If a student says “Grammarly helped me with sentence structure” or even “I copied and pasted” — that's where teachable moments live. Stigmatize it and students go underground.
- Push AI to students, not just teachers. Vicki's classroom approach: have students feed their rubric AND their paper into AI to get a bulleted list of where they may not be meeting the standards — BEFORE the paper reaches her. Phil agrees the answer depends on age (high school students are ready to use these tools themselves; third graders need a different approach), but the principle holds: teach AI literacy by letting students “speak back to the algorithm.”
- The real gift of AI is more space to be human in the room. Phil shares a story of a teacher who tells her students “this sentence — right here — this is where I paused and reread it again because it's so beautiful.” That's the kind of feedback no AI can give. If AI takes the commas-and-capitalization work off our plates, we have more time for what matters most. And — both Vicki and Phil push back hard on anthropomorphizing AI. Phil shows pictures of data centers in every presentation now. The “cloud” is just servers. The model has training data, not feelings.
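The rubric-check workflow in the takeaways above is, at its heart, a reusable prompt. Here is a minimal sketch in Python of how a teacher might template it for students before they paste it into a chatbot. The function name, rubric text, and draft are illustrative placeholders of my own, not from any specific tool mentioned in the episode.

```python
def build_rubric_check_prompt(rubric: str, draft: str) -> str:
    """Assemble the kind of prompt a student could paste into a chatbot:
    rubric and draft go in, a bulleted gap list comes out -- with an
    explicit instruction NOT to rewrite the student's own words."""
    return (
        "Here is my assignment rubric:\n"
        f"{rubric}\n\n"
        "Here is my draft:\n"
        f"{draft}\n\n"
        "Compare my draft against the rubric and give me a bulleted "
        "list of places where I may not be meeting the standards. "
        "Do not rewrite any of my sentences."
    )

# Placeholder rubric and draft -- students would paste in their own.
prompt = build_rubric_check_prompt(
    rubric="1. Clear thesis  2. Three supporting examples  3. Works cited",
    draft="My essay draft goes here...",
)
print(prompt)
```

The key design choice is the final instruction: the AI flags gaps against the rubric but leaves the revision to the student, keeping the brain-first ownership Phil describes.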
Resources Mentioned in This Episode
- Book — AI-Enhanced Literacy: Practical Steps for Deepening Reading and Writing Instruction by Philip Seyfried & Mary Ehrenworth (ASCD). Phil and his co-author's book on bringing AI into reading and writing instruction without losing what works.
- Book — Co-Intelligence: Living and Working with AI by Ethan Mollick (Portfolio, 2024). Phil's recommendation for teachers just starting their AI journey: spend “three sleepless nights” with AI before bringing it into your classroom.
- Phil's website — ai-enhancedliteracy.org: companion site to the book with classroom examples and resources.
- MIT Media Lab — “Your Brain on ChatGPT” study: the brain-first/AI-second research Phil references on cognitive resource-building during writing tasks. (Note: 2025 preprint, n=54, not yet peer-reviewed; the lead researcher herself has cautioned against alarmist framing.)
- danah boyd's LinkedIn post on being falsely accused of using AI — “Academics, who among you is being accused…” The exact post Vicki references during the conversation.
- EF Explore America STEM Tours: this episode's sponsor. Code robots at MIT, study marine ecosystems in Florida's coral reefs, or meet a former spy in Washington, D.C. Visit efexploreamerica.com/STEM.
About Philip Seyfried

Philip Seyfried is a doctoral student in curriculum and teaching at Teachers College, Columbia University, focusing his research on the intersection of digital literacy and artificial intelligence in education. With more than a decade of experience as a middle school language arts and literature teacher, he now supports schools and edtech companies as a literacy and digital literacy consultant. Seyfried is the co-author of AI-Enhanced Literacy: Practical Steps for Deepening Reading and Writing Instruction.
Connect with Philip:
- Website: ai-enhancedliteracy.org
- LinkedIn: linkedin.com/in/philip-seyfried
- Book: AI-Enhanced Literacy at ASCD
Other Shows for K–12 Teachers Navigating AI
- Episode 932 — Jheri South: ADHD Misconceptions and Classroom Strategies — building classroom trust with neurodivergent learners; pairs well with Phil's trust-and-process frame.
- Episode 931 — Karim Meghji: Free AI Resources for Teachers (Hour of AI) — Code.org's CEO on getting your students AI-literate without expensive tools.
- Episode 929 — Malia Hollowell: Brain Friendly Reading Strategies — the cognitive-science companion to Phil's brain-first AI approach.
Phil appeared on Season 4 Episode 11 of Cool Cat Teacher Talk on Radio and TV — it will air soon on YouTube.
Listen and Subscribe
Loved this episode? Take 30 seconds to leave a rating or review wherever you listen. It helps more teachers find the show — and means the world to me. Thank you!
Episode Transcript
This transcript was generated using AI and has been reviewed by humans for accuracy. Minor errors or artifacts may remain.
Vicki Davis (00:05): Today's show is sponsored by EF Explore America and the STEM Tours. To show your students how STEM impacts the world up close and in action, go to efexploreamerica.com/STEM. And stay tuned at the end of the show to learn more.
Vicki Davis (00:25): Philip Seyfried is a doctoral student in curriculum and teaching at Teachers College, Columbia University. And he researches how digital literacy and artificial intelligence intersect in K-12 learning. Phil spent over a decade teaching middle school language arts, but now he works at a higher level with schools and edtech companies about literacy and digital literacy. He is the co-author of AI-Enhanced Literacy: Practical Steps for Deepening Reading and Writing Instruction from ASCD. So Phil, you talk about brain-first practices and learning theory as it relates to AI. How did you start this work and say, hey, we're going to put the brain first?
Philip Seyfried (01:10): One of the things — my co-author Mary and I started really working on this project a couple of years ago. We started to really see how AI is the future of education in a lot of ways — if for no other reason than because it's here, right? It's the kind of technology that really transforms the way that we think about what's possible in learning and education. We've always been really interested in digital literacies — kids read differently on a computer or on a tablet — and so we wanted to figure out what is happening with AI that's different than a book or working with a human. There's a lot that we didn't know at first. There was a lot of experimentation. And this is a technology that was just thrust upon all of us and opened up to the world one day. But what's great is there's been some wonderful research coming out of MIT's Media Lab. They had this great study where they took participants and gave them these writing tasks. Some of them had AI available to them right from the start. Some of them had the internet available. And then others had to use only their brain — they couldn't really use any technology other than their own thinking. And what was really interesting about that study is later on they switched the groups.
Vicki Davis (02:10): Mm-hmm.
Philip Seyfried (02:18): So what ended up happening is the group that started with their brain ended up getting AI in the back end. And then vice versa, the group that started with AI, they ended up having to use only their brain. And there was a clear advantage of those who started with their own thinking first and then moved to AI. And what was so interesting that I found in that study was that you could build up your cognitive resources — get your brain on fire with your thinking, getting your ideas organized together, getting your best thoughts out there. If you bring in AI after that, it sort of accelerates your thinking, your work, challenges the thoughts that you've already established.
Vicki Davis (02:35): Hmm.
Philip Seyfried (02:57): Versus if you do it the other way around — if you start with AI too soon, before you've done some thinking, maybe even a little bit of writing, maybe some talking to somebody through your ideas — what ends up happening is AI sort of fills you up with all the ideas that it's bringing to you. Your sense of ownership is not going to really be there. And when we think about what's important in classrooms, we're really trying to get students to have a full sense of ownership over their words and their work and their learning — and to be able to see how they can use these tools to accelerate themselves, especially when they are trying to learn something that they want to do with independence later on.
Vicki Davis (03:33): So many times it seems people try to push us into “I love AI” or “I hate AI.” But true application and true teaching is in the middle of “okay, this is a good use, this is not a good use.” It seems like you're saying, your brain — start with brainstorming. So what should that process before you bring AI in look like in a classroom?
Philip Seyfried (03:55): We are so good in classrooms already at things that have been working for decades. We have kids in our classrooms that are using writers' notebooks. They have paper books in their hands. They have pencils. They're set up in partnerships so they can turn and talk to someone. Our classroom spaces are very, very collaborative. And those are things that we know work, and there's years and decades of research behind those practices. And so really what we're saying is don't get rid of that.
Vicki Davis (04:00): Mm-hmm.
Philip Seyfried (04:21): Don't let the excitement of a new technology completely change what you know already works. What we want to do is add what's working from these new avenues and opportunities to the frameworks that already work really, really well. What we're trying to help teachers see is living in this “yes-AND” time period. It's not an either-or with AI. It's “yes, you can do the things that you know work. Yes, you could also dabble and try some new things out and see how that goes — and do that in a measured way. And we can learn from each other at this time.” If we get into the space where we sort of ban it and say “well, you can't use it ever” — I don't know what that's going to mean for these kids as they're growing up and now they're going to be in the workforce. And these are the tools that they're going to be expected to pick up and to use well. And I would so rather see kids learn how to use those tools really well right now in their K-12 education — especially in a safer place where they can make some mistakes, because they will. They are going to overly lean on some of these tools at times because they want to sound smart. But we can address that. If they leave our classrooms, it's almost too [late].
Vicki Davis (05:29): And I have had some students on my show before and they said, “Ms. Davis, the only people getting caught using AI are the ones who don't know how to use it.” MIT — that you just quoted — they found AI detectors don't work. AI detectors don't work. AI detectors don't work. And what happens? You get a letter that says “we're not going to tolerate artificial intelligence. We have the greatest AI detector and it's going to catch all of you hooligans.” And then the kids are just like, “okay, I wrote it myself.” danah boyd, a respected researcher, was just writing on LinkedIn and she was saying that she had written this paper and had been accused of using AI to write part of the paper because she likes em-dashes. I like em-dashes. I've always used dashes. You could look on my blog from 2005 when I started blogging — I have dashes. And now I wrote something for somebody and they said, “Hey, you used AI on this because you have dashes in it.” Okay. Well, I might do that. And I might use the word “delve.” But that doesn't mean I'm using AI. So human AI detectors are no good and other AI detectors aren't any good. So what do you tell the schools that are like, “we want to have academic integrity, but we need some help here”?
Philip Seyfried (06:35): One of the things I've been telling schools — especially since there is so much concern over this — there's the concern about what students are doing after school hours. Are they using AI in ways that are not so productive? I think one of the things we know we can do is we can really control what's happening during the school day. They might go off and use these tools perhaps at home, and that to some degree might concern us, but in other ways that's the way of the world at this point. They're also using video games and other digital devices too. But what really gets teachers is — is that showing up in their classrooms later on? One of the ways we can really address this is to actually give students AI tools in the classroom that we can do some monitoring in. So there's a number of edtech platforms that allow you to set up a chatbot. And then you are able to see behind the scenes how that kid is having a discussion with that AI. And on top of that, you can set instructions — “prompt the student with some questions to help them do some of their best writing, but don't do the writing for them” — and then the AI doesn't do the writing. And so if you were to combine that with making sure that students have class time to do their writing while they're there in front of you and getting your support as a teacher, and then doing some peer feedback work with another student in the classroom — there's going to be no question that students are really putting a lot of effort and energy and writing work into the drafts that you're seeing in front of you.
Vicki Davis (07:39): Mm-hmm.
Philip Seyfried (08:03): We don't want to necessarily then say “AI never comes into that process.” I think it's going to be the kind of decision we're going to have to sort of make case by case. And I do believe that it's so great to get away from technology, get away from screens, sit down with a pen and paper — even just a device that's not connected to the internet at that moment — and you're just doing your best writing and you're in that moment experiencing that. I think we need to make sure that we're also creating those moments too. That way we have the best of both worlds, and students do deserve to have those opportunities to sit with an idea that people don't look at just for a little bit before they get feedback. I think we have to really think about all the contours of what it means to be a writer, what it means to be a reader in a school space nowadays. And if we start to do that, what it does is it creates opportunities for us to have relations of trust in the classroom. What I most care about is, if I ask a student “hey, can you tell me a little bit about your writing process? What were the tools that you were using? How did you use those tools?” — I want to be able to have an open and honest conversation. I don't want it to feel yucky. If a kid says “well, Grammarly helped me at the end with some of my sentence structure,” right? Or “I was stuck at this one point, so I asked AI these three questions and it came back with this. I didn't like these three things that it said, but I did like this one. So then I tried it out here.” That's where real learning is happening. And if the kid says “I copied and pasted,” I also want them to feel safe enough to say that. So I can say “you know, as a writing process — let's think about some other things you can now do that you're ready for as a writer. AI might give you something that fills the blank page and some ideas. 
And now it's a great mentor text for you for the kind of writing you can do later on, on your own.” And I think there's that idea of we're always increasing toward independence. We're always trying to boost students' confidence in their own abilities — and to not stigmatize these technologies in a way that kids now use them in secret, leaving us unable to give them the actual support they need, if only we knew how they were using the tool.
Vicki Davis (10:09): Mm-hmm. There are a lot of teachers who say, “hey, I'm grading with AI.” And I'm like, why? I'm teaching my students to take that paper they've written. We've gone through the process together. Give them the prompts to feed in my rubric and feed in their paper, and get a bulletized list of “here are some suggestions for where you might not be meeting the standards in the rubric.” Not having AI rewrite it, but having it give them a bulleted list. And then by the time it gets to me, you've already done your AI stuff. Like — what's the point of me using AI? Why not push it to them?
Philip Seyfried (10:46): Yeah, that's a really great question. And people are trying to work that out right now. I find such a big variety in what people are doing in practice. It really depends on where your students are as learners — and particularly, how old are they? For high school students, absolutely teaching them how to use these tools themselves makes perfect sense. But if you're in a third grade classroom, that's not going to be the same sort of approach we're going to take.
Philip Seyfried (11:17): One thing is we shouldn't just accept any feedback readily. We should really think — what is our purpose as a writer? And the second thing that does is it's actually teaching AI literacy. You want students to learn to speak back to these algorithms and to catch when the algorithm is not really working for them — and what are some things we can do to make sure that it's aligned with my own purposes. And so what that does is when you get to that point, then the teacher's freed up to give some of the human feedback and response that only a human can do.
I heard a teacher the other day say that she likes to give the kind of feedback where she'll tell a student “this here, right here, this sentence — this is where I paused and I lingered and I reread that sentence again and again because it's so beautiful.”
Vicki Davis (12:02): Hmm.
Philip Seyfried (12:05): And I think about — what effect does that have on a young person, a writer, a student, a learner for the rest of their life, to have a teacher say “it was at this moment in your writing that I just had to take a deep breath because it was so beautiful what you were saying right here.” And that's still the kind of feedback that — even if a computer could give that level of feedback, the authenticity of that relationship is so important. We have to remember that we're humans in the room. And as teachers, so much of our value is that we're another person really cultivating other people to become beautiful adults in the future. And I worry that if we're just catching commas and periods and capitalization, maybe we're sort of missing the point of the opportunities that we have. So I hope AI gives us more space to just be really human in the room.
Vicki Davis (12:55): I love how you're speaking about AI. Way too many people anthropomorphize it. I like what you're saying about that — AI is a tool to help the humans in the room become more remarkable humans. But when people start saying “AI is this” or “it is that,” or they start saying “AI got angry at me” — all this anthropomorphism — that's where I, as a teacher and as a human being, start pushing back and saying, hey, this is a great tool, but it's a really sorry human being. It's not a human at all.
Philip Seyfried (13:30): Every presentation I give now, I always show a picture of data centers — like the inside of a data center, because what you'll see is all these servers, and it's really just wall-to-wall servers. We tend to think of AI as this invisible intelligence somewhere in the cloud. And really — what is the cloud? It's us actually accessing a computer that's offsite. That's all that really is.
Vicki Davis (13:54): Mm.
Philip Seyfried (13:55): We can so easily forget that this isn't actually a human, because it's using natural language. But we have to always keep in mind that there's some algorithm behind this — there's training data behind this. And I think if we don't get to those sort of critical literacies of AI, we would really miss an opportunity. We can't be blind to what's going on right now. Otherwise then the teens in the room and the young kids — they're going to figure it out on their own. And they're going to shape it no matter what we do. But we want to be ready to help them navigate this moment, because it is a different moment than in the past.
Vicki Davis (14:35): We can't expect them to be fully developed adults about how to use it wisely. So as we finish up, Phil — for the teachers who are listening to you, they're convinced. Where do you tell beginners to start their journey of finding the appropriate place for AI in their classroom?
Philip Seyfried (14:51): Actually, I wouldn't start with the classroom is what I would tell teachers — if they're just getting started with AI. Ethan Mollick, who's got this great book Co-Intelligence, says that we need three sleepless nights with AI. So I would say get onto an AI system of your choice, whether that's ChatGPT or Claude or Google Gemini. Pick one of the big systems that you know you'll use again — and just test it out. Ask it questions, see how it responds.
Vicki Davis (15:20): We've been talking to Philip Seyfried — the book is AI-Enhanced Literacy: Practical Steps for Deepening Reading and Writing Instruction from ASCD. It has been very insightful. Thanks so much, Phil.
Vicki Davis — Postroll (15:32): EF STEM Tours: If you're a STEM teacher like me, you want your students to see how STEM impacts the real world — not just read about it. On an EF Explore America STEM tour, they might code robots with MassRobotics at MIT, explore marine ecosystems in Florida's coral reefs, or even sit down with a former spy in Washington, D.C. to discover how STEM thinking shows up where you least expect it. Every itinerary is designed by experts to amplify what you teach through hands-on experiences that can't be replicated in the classroom. Visit efexploreamerica.com/STEM and see what an EF Explore America STEM tour can do for your students. Some of the greatest things I've ever done with my students have been tours. They make it all easy for you. So again, check out efexploreamerica.com/STEM.
Disclosure of Material Connection: This is a sponsored episode and blog post. EF Explore America has compensated me to share information about EF Explore America STEM Tours. However, all opinions expressed are my own. I have personally reviewed these resources and only recommend tools I believe offer genuine value to classroom teachers. My endorsement is limited to the educational products and services discussed in this episode. I am disclosing this in accordance with the Federal Trade Commission's 16 CFR, Part 255: “Guides Concerning the Use of Endorsements and Testimonials in Advertising.” The sponsor has no impact on the editorial content of this show.
The post Brain First, AI Second: Teaching Writing in the AI Era appeared first on Cool Cat Teacher Blog by Vicki Davis @coolcatteacher helping educators be excellent every day. Meow!
https://www.coolcatteacher.com/e934/




