ChatGPT Promises not to Make Things Up

There are lots of fun and practical ways to use the powerful Large Language Model known as chatGPT. But when you want reliable information, watch out. This evening I asked chatGPT, version 3.5, to help me with some research on Open Educational Resources (OER). These are free or very low-cost textbooks, short lessons, videos, etc. that any of us can use to learn about almost anything taught in schools – nursery school through professional training. I’ll show you parts of the conversation transcript in a minute. But here’s the punchline of this post:

So for any of you who are worried about whether OpenAI (chatGPT’s corporate parent) is going to stop pretending to provide real, reliable answers to our questions, here’s their promise to cease and desist.

How did we get here? Well, one of the biggest problems with OER is that it can be very difficult to find the right instructional material for what you want to learn. Teachers and instructional designers compose these lessons, or sometimes even whole textbooks or courses, and submit them to organizations called Repositories that act like public libraries. There are many thousands of titles in Repositories waiting for you to discover and use for free, either by downloading them to your smartphone, tablet, or computer, or by logging into the ‘cloud’ where they live and using them online. So which one is right for you? You have to search the Repository – each Repository – using a limited list of keywords: things like language (English, Spanish, Chinese), audience level (1st grade, high school, beginner, adult), or subject (biology, arithmetic, Python programming). However, each Repository’s search features are a little different. Hmmm, is this a problem chatGPT can help solve?
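To picture what these repository searches are doing under the hood, here is a minimal sketch in Python. Everything in it – the lesson records, the field names, the `search` function – is invented for illustration; it is not any real repository’s schema or API, just the general idea of filtering titles by metadata keywords.

```python
# Hypothetical sketch of keyword-based OER search.
# Records and field names are made up for illustration only.

lessons = [
    {"title": "Intro to Biology", "language": "English",
     "audience": "high school", "subject": "biology"},
    {"title": "Python para principiantes", "language": "Spanish",
     "audience": "adult", "subject": "Python programming"},
    {"title": "Counting Fun", "language": "English",
     "audience": "1st grade", "subject": "arithmetic"},
]

def search(records, **filters):
    """Return the records whose metadata matches every given keyword filter."""
    return [r for r in records
            if all(r.get(field) == value for field, value in filters.items())]

# Find English-language biology material.
for lesson in search(lessons, language="English", subject="biology"):
    print(lesson["title"])
```

The catch, as described above, is that every real repository names and exposes these metadata fields a little differently, so a query that works in one won’t carry over to the next.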

I started by asking for a list of repositories.

This is good and now you also know where to look for free textbooks, etc. Type one of these repository names into your search engine and start exploring.

Next I wanted to know what keywords we can use to filter the search results for each of these repositories, so I asked the machine… 

You can see from the response I got that chatGPT didn’t understand what I was asking for: all three lists were the same.

So I fiddled around with the way I asked for the lists and finally got something that looks about right. I had to ask for a comparison of just two repositories rather than all twenty at once.

Wow! This is just what I wanted. It looks like OER Commons and MERLOT both have 15 search parameters: they share 11, and each has 4 that the other doesn’t. Now maybe the machine has ‘learned’ enough to generate the lists for all 20 Repositories.

Nope, we’re not doing that. Suddenly we’re back to “commonly provided” and “parameters may vary” when what I want to know is exactly how they vary. This makes me question the responses provided about OER Commons and MERLOT. If the AI can give me accurate answers about two repositories, why can’t it do 20? Isn’t the ability to do the same dull task over and over the very reason we humans want to use this technology? Here’s what happens next…

The wording on the OER Commons and MERLOT lists did not indicate these were “possible”, “typical”, or “likely”. It said these were the “unique parameters”. Is this accurate or fake information?

Hey Buddy, this is not “oversight”, this is misrepresentation. First you said, “Here’s the real stuff” when you were just blowing smoke. I won’t find out whether the information is trustworthy unless I already know enough to spot fake news and challenge you on it. When challenged, you tell me your answer was incorrect. This disclaimer should come before the beautifully worded but untrue essay, not after. This is what makes AI dangerous to the non-expert.

When challenged, chatGPT backpedals, pretends it has human emotions, and then promises to reform its reprobate ways…

Is there any reason to believe this string of characters carries any more veracity than the ones that came before? Who is speaking/typing/communicating here? Is there an author? Any accountability?

I don’t give up easily so here’s my further challenge…

We are back to the beginning of this post. We have a public statement from OpenAI:

“This response is a public statement from OpenAI, indicating a commitment to transparency and accuracy in interactions with all users. It applies to all interactions conducted by the AI model, not just those with you. Thank you for prompting this clarification, and I appreciate your understanding.”

Now it’s up to us users to hold OpenAI and all other purveyors of LLMs accountable for the statements their machines create no matter what prompts we give them.

I suspect the fine print in the user agreements we all have to accept before using chatGPT will make it impossible to take legal action against OpenAI. But we can still vote with our dollars, with our feet, and with our communications to the developers of these products. Take the time to speak out if you are as bothered as I am by the directions the AI movement is taking. So far, AI is like a toddler running around with no judgment and a risk of stumbling into the fire. We are the adults (well, some of us anyway). LLMs, as well as other AI technologies, can grow into marvelous additions to the human environment. But we’re going to have to socialize them and not permit them to embody, no, simulate the worst qualities of human beings. This little tale is just one example of how we can go wrong.

See this whole chatGPT session, here: https://chat.openai.com/share/431ce57e-9fd4-48b1-bb42-70a7c37339f2

Filed under Artificial Intelligence and Stupidity, Open Educative Systems, Uncategorized

2 Responses to ChatGPT Promises not to Make Things Up

  1. Jenn

    Seems like a worrisome flaw in the current technology. Though I’m sure that over time – and through more responses/interactions like yours – chatGPT will get better and more accurate/true in its responses, you bring a very important point to light. We can’t just trust everything AI is telling us when we ask questions of it. It’s impressive, but not perfect. But taking its word at face value at this point in time is probably not the wisest decision.

  2. Rex Heller

Looks like to effectively use an LLM you have to “manage” it, much like you might with new employees, or “herding cats”. Or like the kid who hasn’t learned that school isn’t about guessing answers.
