Friday, July 19, 2024

Where AI Fails to Connect

From the Cool Cat Teacher Blog by Vicki Davis

Follow @coolcatteacher on Twitter

We connect as humans.

As I walked into the Roman Colosseum on Tuesday, I imagined the animals they put under the arena for 24 hours without food or water. And I felt the fear of a person in another part of the arena, getting ready to face such a beast.

Connection.

Across time and space, we humans are connected.

As I walked near the House of Vestals, which kept the flames of Rome alive, I considered what it would feel like to have such a role in keeping the flame alive and living in that home. To hope you lived long enough to “age out” of the work and the training so you could leave the house and perhaps marry and have your own children. To be left out.

Then I saw The Long Game on the plane, and the main character's wife was talking about her inability to have children and how she felt left out. She avoided playgrounds because she was disconnected. She wasn't a mom. She felt like everyone had something she didn't.

I remembered what it felt like in middle school watching the whole class plan a party and knowing I wouldn't be invited. I felt like others got to do something, and I was so unwanted. Like something was wrong with me.

Left out.

Unconnected.

As I write this to you, you also connect to these stories because you're human. Most of us know what it is like to be left out. And if we don't, we know what it is like to leave others out and see the hurt on their faces. We also might have children who are left out. Whether it is perceived or real, we cannot fix it for them. The hurt of being disconnected is universal.

And those feelings connect us.

The Imitation Game

These are the things that algorithms and bots crawling the web can learn to imitate but cannot feel. They do not know what it feels like to be left out, to be afraid, to be connected, or to be disconnected.

You are connected, and we are connected.

Besides sharing this earth and this current time in history, we also share a common humanity—the broken humanity of good, evil, joy, pain, suffering, and solitude. Having parents—or not having them—having loved ones—or not having them—all of these experiences connect us in humanity.

If you watch any of the reality competitions, you know we love those moments when talent is discovered, people show real emotion, and common connections are shown. Those are the videos that millions of people watch—the ones that connect.

And so, while humans connect, AI imitates humans. It isn't human. It cannot feel. It cannot relate. It can imitate us, however, and sometimes in very convincing ways. But it is artificial. It is not human.

So, let me explain something really important about the process of imitating humans.

A tool cannot imitate what it has not observed.

A Big Failure of AI When I Was Working on a Creative Task

I am working on revising my curriculum for computer applications and computer science to include ethical, appropriate uses of AI. (Honestly, what I'm seeing out there as an “AI curriculum” is probably AI-generated and worthless. It wasn't written by humans using AI, and I have no use for it. I have to do this myself so it can be done right with my students.)

So, I was building an advanced spreadsheet and using AI to support me. As I worked, I documented a list of the skills I need to teach my students about building spreadsheets with AI support. This includes many formulas that will now be accessible to them, as well as the methods of communicating with AI about spreadsheets to make it easy.

For this work, I built a spreadsheet that I call the “Brain Grade Sheet Game” to basically give me a “grade” on various neurologically beneficial habits using the research from the book A Winner's Brain. I started small and built up the spreadsheet as I went.

I used a custom GPT on ChatGPT called Sheets Expert, which was specific for Google Sheets support. During this process, I documented 20 techniques or formulas I need to teach my students to make them into spreadsheet wizards.

So, then I wrote a summary of some of those skills, wanted ChatGPT to determine what was missing, and typed this prompt into the chat I had used as AI support for creating the spreadsheet.

A portion of the chat where I asked AI to make a list of the skills used in the chat that I need to teach my students about spreadsheets. I'm not including the whole thing, as this blog post is not on that particular topic.

What ChatGPT Could Not Do: Include Creative Thinking

The response ChatGPT gave me was frustrating. Let me explain why.

One specific thing I've discovered is that when working with a spreadsheet and using AI support, you must give each formula a working name. Then, you can refer back to them, unleashing so much power.

For example, I wrote the formula for how much water a person needs to hydrate their brain, which I called the “WATER” formula. The formula I used to calculate a sleep score was called the “SLEEP” formula. (A name helps because the cell locations change constantly, but the purpose of the formula does not. I also do not think in numbers and letters but in words, so it helps to have a formula name that reflects its purpose.)
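For illustration, here is a rough sketch in Python of why named formulas help. These scoring rules are simplified stand-ins invented for this example, not my actual spreadsheet formulas; in Google Sheets itself, the same idea can be captured with named functions or named ranges.

```python
# Hypothetical sketch of the "named formula" idea, in Python terms.
# The actual WATER and SLEEP formulas are not reproduced here, so the
# scoring rules below are invented for illustration only.

def water(weight_lbs: float, ounces_drunk: float) -> float:
    """WATER formula: score hydration as a percentage of a common
    rule-of-thumb target (half your body weight, in ounces)."""
    target_oz = weight_lbs / 2
    return min(100.0, round(ounces_drunk / target_oz * 100, 1))

def sleep(hours: float) -> float:
    """SLEEP formula: score sleep against an 8-hour target."""
    return min(100.0, round(hours / 8 * 100, 1))

# Because each formula has a working name, a prompt can say
# "change the SLEEP formula's target" instead of "change the formula
# in cell D14" -- the name survives even when rows and columns move.
print(water(150, 60))   # prints 80.0
print(sleep(7))         # prints 87.5
```

The design point is the same in a spreadsheet: the name travels with the purpose of the calculation, while cell references do not.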

No matter what I did, frustratingly, I could not get ChatGPT to include that one aspect of creating spreadsheets using AI in the comprehensive list I was trying to get it to create of those skills. It also wouldn't include several other items in my personal list.

In fact, the list was downright awful and didn't include any of what I consider the more novel elements of using AI with spreadsheets.

My Observation: AI Is Biased Toward FAMILIAR Things

I want to share an anecdotal theory of something I'm observing about my own use of AI.

We already know that the experts call AI a “stochastic parrot.” However, I want to expand past that to make an observation here.

When I use AI, and it is exposed to truly creative things—novel things—things that everyone isn't saying in a certain way, AI fails. AI omits. AI ignores.

So, after I saw AI completely leave my own unique observations of working with AI and spreadsheets out of the list it made, I started looking at other things, and I'm seeing a pattern here.

Note that I gave it three of those things and directly asked it to write them into the list, and it repeatedly left them out.

Repeatedly. It omitted what I considered novel, new info even when I gave it that information by typing it into the chat.

I started noticing. Connecting. And I made these observations on other uses of AI:

  • When I feed my transcripts into AI and ask for summaries – it leaves out some of the more novel, unique ideas.
  • When I ask AI to summarize things for me, it often ignores the unique, novel aspects that resonate with me as a human.
  • When I use AI to interview me as I drive in the car using my own custom GPT, it often omits the unique, novel things I say, even the great one-liners I really like to use. Since I've been using AI, I have continually had to go back and find them in the transcript.
  • AI seems to recognize and include things it has seen before, but, for me, can often omit novel ideas and creative thought.

Could it be that AI omits or minimizes things it hasn't seen before in its large language datasets?

Let me ask what you think: Does AI have a bias toward what is familiar and known and against what is unique and truly creative? What have you observed?

We need to ask hard questions about what AI omits.

How This Changes How I'm Using AI

So, here is where I am with creativity and AI. When I'm doing truly creative tasks, I'm not bothering with using any AI tool.

Why?

  • INTELLECTUAL PROPERTY AND BIAS. First, I want to keep creative control over my own ideas and don't want a public AI tool training on them. This is partly an intellectual property issue and partly about the privacy to think and wrestle with ideas: in that process I will make errors and mistakes in logic, and I don't want those to bias the model.
  • BEING CREATIVE. Second, I am concerned it will inadvertently dampen my own creative thought as it filters what it gives me.
  • LACK OF TRUST. Third, when I'm engaging in truly creative tasks and research, I don't trust AI to tell me everything I need to know, because it seems biased toward the familiar at the expense of creative work.

I have some questions that I hope some of my brilliant AI researcher friends will study. (If they already have, feel free to leave links to those studies in the comments.)

  • When we use AI meeting summaries, will it leave out the truly unique, novel ideas for solving problems?
  • Can we trust AI to include what is most important? (This can be world-changing responsibility if it is a meeting of cancer researchers discussing possible cures or lawyers meeting to discuss the fate of a client.)
  • Is AI's seeming bias for what is familiar going to stymie human creativity?
  • Is AI's seeming bias for what is familiar going to further embed bias and harmful thoughts into an organization by reinforcing what is familiar?
  • What is the role of AI tools in creative thought?

We humans are connected. We are human. AI is not. AI imitates us. It cannot imitate what it has never observed.

Therefore, I believe that we as humans must, I stress, MUST, supervise AI in all its forms.

That supervision might mean reading over meeting minutes to see if AI got them right, reviewing the summary of a long email to ensure that important elements are not being left out, or simply testing AI in use cases like this one to see how it functions.
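One simple way to do that supervision can be sketched in code: keep your own list of must-include points and check that each one survived the AI's summary. The summary text and phrases below are invented examples, not output from any real tool.

```python
# A minimal sketch of supervising an AI summary: check that key points
# you already know are important actually survived summarization.
# All phrases here are invented for illustration.

def missing_points(summary: str, must_include: list[str]) -> list[str]:
    """Return the must-include phrases that do not appear in the
    summary (case-insensitive substring match -- deliberately simple)."""
    text = summary.lower()
    return [p for p in must_include if p.lower() not in text]

summary = "We covered conditional formatting and chart basics."
key_points = [
    "conditional formatting",
    "naming each formula",   # the novel idea that keeps getting omitted
    "chart basics",
]
print(missing_points(summary, key_points))  # prints ['naming each formula']
```

A substring check is crude (it misses paraphrases), but even this much forces a human to write down what matters before trusting the machine's condensed version.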

Blindly trusting AI is unwise. We must engage our brains. So, now I ask this:

  • Are we blindly trusting an untrustworthy tool that is still developing? That we know is full of bias?
  • Are we going to overload ourselves with meaningless words?

    (Sales reps are certainly using these tools; I have received tons of guilt-inducing, ludicrously written, overly verbose emails trying to sell me something. If I see that a salesperson is obviously using AI to communicate with me, I block them and do not respond. I'm not having that manipulation in my life.)
  • Why use 100 words when 10 will do?

    (And when I write a 100-word email that I generated from 10 words, and you then use AI to summarize my 100-word email back into 10 words, aren't we burning needless GPU cycles and WASTING ENERGY? You can't talk about being environmentally friendly if you support the needless, pointless use of AI, because we are literally running out of electricity to do it all.)

With that, I have reminded myself that I need to stop writing. If AI were writing this, it would know that the ideal length for a blog post is 500-750 words because most humans won't read more than the first paragraph.

And that, my friends, is a problem.

I believe that part of living a successful life will be learning to read, learning what to read, and learning to filter out the meaningless drivel written by non-human, emotion-manipulating bots so we can find real human writing and thinking.

AI is a great tool for some tasks. Not all tasks.

The task it is least suited for is actually being human. For that reason, there are tasks from which AI will stay disconnected in my work, like this post, where I used spell check and nothing else, so that we can better connect and share information.

Part of my criteria for letting someone into my inbox or onto my bookshelf is their uniquely human mode of writing. Their creativity.

I want to be more connected to humans. We should also be testing AI ourselves, connecting, and sharing what we're observing.

Because as we continue the human experience, we need to clearly know that sometimes to better connect with one another and to better connect with a brighter future, it will mean we need to…

disconnect

from AI.

I'm back from my summer sabbatical, friends. If you want to see the other things I've written about AI, you can check out this section of my blog.



from Cool Cat Teacher Blog
https://www.coolcatteacher.com/where-ai-fails-to-connect/
