Category Archives: Artificial Intelligence and Stupidity

What is “Artificial Intelligence”? How is it different from “Natural Intelligence” or “Human Intelligence”? What can it do for or to us humans? As we humans implement more automated functions for computing machinery what are the best ways of deploying them? Will carbon-based intelligence ultimately give way to silicon-based life? Much has been thought and written about these questions. Many of us are experiencing the consequences of poorly thought out AI applications. I’ll post my ruminations here with the hope that you will comment, argue, discuss and enlarge on what I’ve written.

How will life have changed by 2040?



Looking at the past from the future (photo by Liza Loop)

What stands out as most significant to you? Why? What is most likely to be gained and lost in the next 15 or so years? Here are my positive and negative scenarios…

I imagine positive and negative futures for the year 2040 without predicting whether or which are most likely to occur. Most significant, and a component in both scenarios, is an increase in humanity’s ability to produce the goods and services necessary for individual human survival accompanied by a decrease in both environmental pollution and erosion of stocks of natural capital. This boils down to the potential for what has been called “the age of abundance”. Let’s take a quick look at some positives and negatives while noting that an increase in our ability to do something does not imply that it is likely to happen.
In the positive take, by 2040 ordinary people will have far more choice in lifestyle and decreased risk of dying from disease (genetic, environmental, or contagious), exposure (to cold, heat, lack of food or water, and poisons), or civil violence (whether wide-scale war, personal attack, or small-group terrorism). Accidental death may be unchanged or increase because some people may choose to take more risks. Death by abortion or infanticide is likely to be less frequent as we become more skilled at preventing conception.
A survey of the living will reveal people enjoying a much broader range of lifestyles without the social stigma that was attached to many of them in the 2020s. For example, voluntary ‘homelessness’ or ‘nomadism’ will be considered a valid choice at any age. Similarly, many more people will choose ‘simple’ or ‘sparse’ paths in order to avoid the responsibility of caring for and storing possessions they don’t use every day, even when they reside in one geographic location.
With the decline of ‘owning stuff’ as the primary indicator of social status, there is a rise in acclaim for people who contribute by caring for others or by producing and donating artistic creations. The existence of Universal Basic Income and effective Universal Education permits social service workers, artists, adventurers, and scholars to eschew wealth accumulation and focus on their avocations. At the same time, those who so choose are free to exercise the historic values of control of goods and services in excess of their ability to consume them.
Lost in this scenario is the necessity for competition which many people in the 2020s still rely on as a primary motivator. Abundance is a condition where there are enough basic resources to eliminate zero-sum games and if-you-live-I-must-die conundrums. Under abundance, competition is only one of many lifestyle choices for humans.
Another “loss” I hope for by 2040 is the high value placed on large families. Rather than proud parents enjoying being surrounded by 10 of their own children, in 2040 a ‘family’ of 12 or 20 would include great grandparents and 3rd cousins as well as parents and children. This is an example of how a relatively small change in social attitudes can have profound effects on how humans impact the planet.
A negative view of life in 2040 incorporates the trends and fears being discussed now in 2023 and 2024. Little has changed in our social and economic institutions over the past two centuries. This has led to further concentration of wealth and growing dysfunction in global civil society. The power brokers of 15 years ago have co-opted the increase in productive capacity enabled by machine automation and AI without instituting compensating channels for redistribution of what has been produced. Stockpiles of consumer goods are targets to be ‘liberated’. The military-industrial complex survives not on genuine human need but on the demand generated by ongoing small wars that have not yet succeeded in destroying the worldwide productive infrastructure. Population growth has continued apace, resulting in an exponential rise in the number of humans living in extreme poverty, misery, and despair. The ubiquity of video communication raises aspirations among the world’s poor, who are continuously exposed to narratives of luxury they cannot attain, and spurs waves of physical migration.
Of particular interest to educators in this negative scenario is the lost opportunity to spread know-how among the less fortunate. High aspiration without the knowledge and skills to fulfill these wants decreases overall perception of well-being even under conditions of increasing availability of food, water, consumer goods, and health care. In this negative future, we have continued to train AIs and each other that the goal of educating humans is to enable them to be successful competitors in the employment market at the same time that we are decreasing the demand for human muscle and brain power. Unemployment is rampant while employers lament the lack of adequately trained workers.
This view is frighteningly likely given that AGI is still way beyond the 2040 horizon. While there is no reason to anticipate that an AGI would spontaneously develop the competitive, amoral, greedy personality exhibited by some humans, there is also no reason to assume that guideposts against such an outcome will be put in place by today’s researchers and developers.
Why do I envision these changes for 2040? It is because the environmental conditions under which humans evolved have changed while many of our socially reinforced values have lagged behind. Behaviors that were a ‘good fit’ for humans existing ‘in the wild’ no longer ensure our individual survival from birth to the time our children reach reproductive age. Like many other species, humans are able to produce many more offspring than they are able to nurture. By maintaining the belief that every child we are able to conceive is innately valuable and should have a right to life, we endanger ourselves and those with whom we share the planet. By relying on an economic theory founded on an assumption of scarcity, we inhibit our willingness to embrace abundance even in the face of the capacity to produce it. AI technology accelerates our productive capacity. However, if we continue to train both neural networks and semantic systems with rules, data, and beliefs that sustained us during eons past but ignore today’s realities, we cannot blame the AIs for the result.

For more from Liza, please visit and comment on:

New Economic Thinking – Analysis – Action
Learning Options * Open Portal

1 Comment

Filed under Artificial Intelligence and Stupidity, Future Gazing, Wealth Distribution

ChatGPT Promises not to Make Things Up

There are lots of fun and practical ways to use the powerful Large Language Model known as chatGPT. But when you want reliable information, watch out. This evening I asked chatGPT, version 3.5, to help me with some research on Open Educational Resources (OER). These are free or very low-cost textbooks, short lessons, videos, etc. that any of us can use to learn about almost anything that is taught in schools – nursery school through professional training. I’ll show you parts of the conversation transcript in a minute. But here’s the punchline of this post:

So for any of you who are worried about whether OpenAI (chatGPT’s corporate parent) is going to stop pretending to provide real, reliable answers to our questions, here’s their promise to cease and desist.

How did we get here? Well, one of the biggest problems with OER is that it can be very difficult to find the right instructional material for what you want to learn. Teachers and instructional designers compose these lessons, or sometimes even whole textbooks or courses, and submit them to organizations called Repositories that act like public libraries. There are many thousands of titles in Repositories waiting for you to discover and use them for free, either by downloading them to your smart phone, tablet, or computer, or by logging into the ‘cloud’ where they live and using them online. So which one is right for you? You have to search the Repository – each Repository – using a limited list of keywords, words like language (English, Spanish, Chinese), audience level (1st grade, high school, beginner, adult), or subject (biology, arithmetic, Python programming). However, each Repository’s search features are a little different. Hmmm, is this a problem chatGPT can help solve?
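To make the keyword-filtering idea concrete, here is a minimal sketch in Python. The records, field names, and the `search` helper are all invented for illustration – no real repository exposes exactly this interface – but the logic mirrors what the keyword filters described above do:

```python
# Toy OER records; the fields mimic the keyword filters mentioned above
# (language, audience level, subject). These are invented examples.
resources = [
    {"title": "Intro to Biology", "language": "English",
     "audience": "high school", "subject": "biology"},
    {"title": "Programación en Python", "language": "Spanish",
     "audience": "adult", "subject": "Python programming"},
    {"title": "Arithmetic Basics", "language": "English",
     "audience": "1st grade", "subject": "arithmetic"},
]

def search(repo, **filters):
    """Return only the records that match every supplied keyword filter."""
    return [r for r in repo
            if all(r.get(key) == value for key, value in filters.items())]

hits = search(resources, language="English", subject="biology")
print([r["title"] for r in hits])  # ['Intro to Biology']
```

The catch, of course, is that each real Repository supports a slightly different set of filter fields, which is exactly the problem I hoped chatGPT could untangle.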

I started by asking for a list of repositories.

This is good and now you also know where to look for free textbooks, etc. Type one of these repository names into your search engine and start exploring.

Next I wanted to know what keywords we can use to filter the search results for each of these repositories, so I asked the machine… 

You can see from the response I got that chatGPT didn’t understand what I was asking for. All three lists were the same.

 

So I fiddled around with the way I asked for the lists and finally got something that looks about right. I had to ask for a comparison of just two repositories rather than all twenty at once.

Wow! This is just what I wanted. It looks like OER Commons and MERLOT both have 15 search parameters: they share 11, and each has 4 the other lacks. Now maybe the machine has ‘learned’ enough to generate the lists for all 20 Repositories.
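As an aside, the arithmetic in that comparison is plain set algebra, and it is easy to check yourself. The parameter names below are stand-ins I made up – the real lists came from the chatGPT session – but the counts match the 15/11/4 breakdown:

```python
# Hypothetical search-parameter lists standing in for the ones chatGPT produced.
oer_commons = {"subject", "grade_level", "language", "material_type",
               "standard", "provider", "license", "media_format",
               "accessibility", "educational_use", "primary_user",
               "member_rating", "evaluation", "groups", "hubs"}
merlot = {"subject", "grade_level", "language", "material_type",
          "standard", "provider", "license", "media_format",
          "accessibility", "educational_use", "primary_user",
          "peer_review", "editor_choice", "cost", "mobile_ready"}

shared = oer_commons & merlot        # parameters both repositories offer
only_oer = oer_commons - merlot      # unique to OER Commons
only_merlot = merlot - oer_commons   # unique to MERLOT

print(len(oer_commons), len(merlot))                 # 15 15
print(len(shared), len(only_oer), len(only_merlot))  # 11 4 4
```

Three lines of set arithmetic – precisely the kind of dull, repeatable bookkeeping a machine ought to excel at across all 20 Repositories.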

Nope, we’re not doing that. Suddenly we’re back to “commonly provided” and “parameters may vary” when what I want to know is exactly how they vary. This makes me question the responses provided about OER Commons and MERLOT. If the AI can give me accurate answers about two repositories, why can’t it do 20? Isn’t the ability to do the same dull task over and over the very reason we humans want to use this technology? Here’s what happens next…

The wording on the OER Commons and MERLOT lists did not indicate these were “possible”, “typical”, or “likely”. It said these were the “unique parameters”. Is this accurate or fake information?

Hey Buddy, this is not “oversight”, this is misrepresentation. First you said, “Here’s the real stuff” when you were just blowing smoke. I won’t find out whether the information is trustworthy or not unless I already know enough to spot fake news and challenge you on it. When challenged, you tell me your answer was incorrect. This disclaimer should come before the beautifully worded but untrue essay, not after. This is what makes AI dangerous to the non-expert.

When challenged, chatGPT backpedals, pretends it has human emotions, and then promises to reform its reprobate ways…

Is there any reason to believe this string of characters carries any more veracity than the ones that have come before? Who is speaking/typing/communicating here? Is there any author? Any accountability? 

 

I don’t give up easily so here’s my further challenge…

We are back to the beginning of this post. We have a public statement from OpenAI:

“This response is a public statement from OpenAI, indicating a commitment to transparency and accuracy in interactions with all users. It applies to all interactions conducted by the AI model, not just those with you. Thank you for prompting this clarification, and I appreciate your understanding.”

Now it’s up to us users to hold OpenAI and all other purveyors of LLMs accountable for the statements their machines create no matter what prompts we give them.

I suspect the fine print in the user agreements we all have to commit to in advance of using chatGPT will make it impossible to take legal action against OpenAI. But we can still vote with our dollars, with our feet, and with our communications to the developers of these products. Take the time to speak out if you are as bothered as I am by the directions the AI movement is taking.  So far, AI is like a toddler running around with no judgement and a risk of stumbling into the fire. We are the adults (well, some of us anyway). LLMs as well as other AI technologies can grow into marvelous additions to the human environment.  But we’re going to have to socialize them and not permit them to embody, no, simulate the worst qualities of human beings. This little tale is just one example of how we can go wrong.

See this whole chatGPT session, here: https://chat.openai.com/share/431ce57e-9fd4-48b1-bb42-70a7c37339f2

2 Comments

Filed under Artificial Intelligence and Stupidity, Open Educative Systems, Uncategorized

Internet hopes and fears in 10 years

I just filled out a survey about what I think the best and worst consequences of digital technology are going to be for humans. I’m in a sort of cynical mood but perhaps you’ll find my responses interesting. If you find the questions stimulating, do feel free to reply with some of your own answers. I love comparing points of view.

BEST AND MOST BENEFICIAL changes

* Human-centered development of digital tools and systems – safely advancing human progress in these systems
Nature’s experiments are random, not intentional or goal directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do thereby saving us both mental and physical labor. This trend will continue resulting in more leisure time available for non-survival pursuits.

* Human connections, governance and institutions – improving social and political interactions
We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near real time we will also be able to store and retrieve to enjoy later – even after death. This could result in improved social and political interactions but not necessarily.

* Human rights – abetting good outcomes for citizens.
Increased communication will not advance human “rights” but it might make human “wrongs” more visible so that they can be diminished.

* Human knowledge – verifying, updating, safely archiving and elevating the best of it
Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe, or worthy of “elevation” is an age-old question and not significantly changed by digitization.

* Human health and well-being – helping people be safer, healthier, happier
Huge advances in medicine and the ability to manipulate genetics are in store. This will be beneficial to some segments of the population. Agricultural efficiency resulting in increased plant-based food production as well as artificial, meat-like protein will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.

* Other – you are welcome to write about an area that does not fit in the categories listed above
IMHO, the most beneficial outcomes of our “store and forward” technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as “middleman”. Learners will be able to hail teachers and learning resources just like they call a ride service today.

yellow robot looking to the right, standing in front of white building
MOST HARMFUL OR MENACING changes

The biggest threat to humanity posed by current digital advances is, paradoxically, the switch from an environment of scarcity to one of abundance. Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families, and driving away or killing strangers and nonconformists. Although our species has come a long way toward peaceful and harmonious self-actualization, vestiges of the old fearful behavior persist.

Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?

* Human-centered development of digital tools and systems – falling short of advocates’ goals
This is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?

* Human connections, governance and institutions – endangering social and political interactions
Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new marketplaces for the exchange of goods and services, which also create new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think “Christendom” in 15th-century Europe), word travels faster and crowds are larger than they were 6 centuries ago. So is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh and brought on by change toward “wickedness”. If change is dangerous then we are certainly in for it!

* Human rights – harming the rights of citizens
The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.

* Human knowledge – compromising or hindering progress.
The threat lies in increasing human dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might seem like “progress” but they can also lull us into believing that we don’t need to stay mentally and physically fit and agile.

* Human health and well-being – threatening individuals’ safety, health and happiness
In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. We don’t get too upset if thousands of lemmings jump off a cliff, but a large number of human deaths is a no-no, no matter how small a percentage of the total population it is. Humanity cannot continue to improve its “health and well-being” indefinitely if it remains planet-bound. Our choices are to put more effort into building extraterrestrial human habitat or to self-limit our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.

* Other – you are welcome to write about an area that does not fit in the categories listed above

1 Comment

Filed under Artificial Intelligence and Stupidity, Future Gazing, Uncategorized

Watch what you sign up for – Hidden contracts and life in the digital stone age

Recently I needed some new domain names for a website I’m planning. Easy, right? Find one of those registration companies, search for the name you want, fill in the blanks, including the one that says you’ve read the “Terms and Conditions”, and plug in your credit card information.

Did you actually read those Terms and Conditions? Here’s section 20 (who gets to the 20th paragraph of legalese?) of the agreement I just signed.

I know this screen shot is a little hard to read so I’ll repeat what I underlined in green:

…that we own all…information…generated from the domain name database. You further agree and acknowledge that 
we own…(c) the name, postal address, e-mail address, voice telephone number,…all contacts for the domain name registration…

I’m not a lawyer but a simple English interpretation of this set of phrases could be that I just transferred ownership of my name and address to that registrar. I wouldn’t want to have to prove that this isn’t what the agreement means. Could it be that the registrar now has the right to sell my name, address, e-mail, and phone number to the highest bidder?

Perhaps the final sentence in the section is meant to reassure me: “We do not have any ownership interest in your specific personal registration information outside of our rights in our domain name database.” But my stuff is in your domain name database and you just said you own it.

Maybe if I had the stamina to read and understand all the clauses in the Registration Agreement I’d not be worried. Maybe if I were a lawyer…maybe if Section 1 didn’t mention that I am also agreeing to a Supplemental Agreement that is linked to this page:

I conclude that we are in the “stone age of the digital age“. We have begun to invent digital tools that enable humans to accomplish much that was impossible without such implements. But we have just begun the invention process. The tools and the rules we are establishing for their use (e.g. Terms and Conditions agreements) are rough – coarse compared with what our descendants will have. For now, we are all suffering from the virtual cuts and bruises (and crimes) that result from the crudeness of today’s digital instruments.

I hope this registrar doesn’t sell my personal information. I hope my bank (the one that requires me to take responsibility for the security of my financial information and then urges me to use online banking and “go paperless”) doesn’t get hacked. I hope the camera and microphone on my “smart phone” are not constantly surveilling me even when I think I’ve turned them off. Life in any stone age is risky…

By the way, have you ever looked at the “indemnification clause” in one of those many agreements you sign, or noticed that most contracts require that you give up your right to a jury trial? Just for fun, read the back of your parking garage ticket and hope that the guy pulling up next to you isn’t carrying any stones.

 

 

 

 

 

Leave a Comment

Filed under Artificial Intelligence and Stupidity

More Musings on Online Privacy

Have you seen this recent viral article on the information social media companies collect about you?

Are you ready?

Trust

In one of the first replies to this article (no, I didn’t read them all) the writer asks “Do you really trust Google?” My answer is “yes, completely”. Google is a for-profit information company and I trust it to collect every scrap of information about me and the rest of the world it can. I trust it to profit from this information in any way it can. A company, like a robot or an AI, has no human morality, no conscience, no moral compass. These characteristics have to come from the humans who control it — or abdicate that control.

“Trust” does not stand alone as a concept. To be meaningful it has to be accompanied by answers to the questions, trust whom? to do what? under what circumstances? And it is we humans who must supply those answers, thoughtfully.

Thoughts Left Out

What this article doesn’t say explicitly is that turning on the security controls doesn’t stop the tracking. The next step to privacy is to be very specific about what actions to take. For example, if I turn wi-fi off on my computer is it really offline or just not reporting to me? It isn’t in the interest of Google to tell us these things. The company’s advertisers (other profit-making companies) want us to expose ourselves so that they can entice us to buy their products. They don’t want us to control their access to information about us and they don’t want us to control our impulse to buy. Why would they make it easy for us to protect that information? Why would we “trust” them to do so?

I’ve been teaching since 1975 that the only way to stop a robot (or computer) is to detach it from its power source. That means, if it’s, say, solar powered, get inside it and snip the wires between the charger, the batteries and the cpu. This will also work if you want to stop a computer program designed to collect information that will result in giving it the ability to present images to us that will trigger behavior we might later regret. Asking a company to act against its self-interest seems unlikely to succeed. No matter how many apologies or assurances Facebook publishes, its survival depends on keeping your information flowing in. In the end, I don’t think greater privacy controls will solve the problem. Rather, we need to accept responsibility for responding to those oh-so-effective triggers.

What is Privacy?

The other thesis the article misses is that our whole concept/expectation of privacy has changed in the last 3 centuries. It used to be that “privacy” was the cultural practice of averting one’s eyes rather than today’s assumption that we ought to be able to prevent direct access to information. If you were a servant in an upper-class household, a clerk in a bank, or perhaps a resident in a multifamily Native American longhouse, you saw a lot of things you never talked about. It’s only recently that a significant number of humans have lived in conditions where information we now expect to be “private” was not readily available across a broad swath of neighbors, relatives, and tradespeople. The more I confront this topic the more convinced I become that we will find relief from our distress in cultural adaptation, not technical fixes.

Maybe some of today’s youngsters have got a better idea. Just take naked pictures of yourself and post them.

Leave a Comment

Filed under Artificial Intelligence and Stupidity

Another Encounter with Artificial Stupidity

This morning I was researching “learning analytics” and my search led, after many, many clicks, to this web page:

[Screenshot of the web page, taken 2016-07-02 at 10:33 AM]

Note the purple chat window pop-up that appeared in the lower lefthand corner of my active window. Great, I thought, someone wants to know why I’m reading this page.

Here’s the ensuing dialog:


Nadia Dennis

10:26 am   Hi, sorry to disturb 🙂 Can I ask which school you represent?

Visitor

10:27 am   I have a nonprofit research organization called LO*OP Center, Inc. LO*OP stands for Learning Options*Open Portal.

Nadia Dennis

10:27 am   I’ll be glad to help, could I have your name please?

Visitor

10:27 am   Liza

Nadia Dennis

10:27 am   Nice to meet you Liza!  To know how I could properly address your concern, would just like to clarify if I am chatting with a student, a teacher or a parent?

Visitor

10:28 am   There are other designations. I’m an educational researcher. I also identify as a student, teacher and parent.

Nadia Dennis

10:29 am   Can we verify if you currently have Compass Learning in your school?

Visitor

10:29 am   If you want to have a meaningful conversation you’ll have to free yourself from your script. It’s not a school.

10:30 am   I’m not going to buy anything from Compass. Do you really want to chat with me?

Nadia Dennis

10:30 am   You may call us at 866.586.7387 or email successteam@compasslearning.com

Visitor

10:31 am   This chat is a fine example of ARTIFICIAL STUPIDITY. It does nothing to encourage me to contact Compass Learning again.

Read

Nadia Dennis

10:31 am   If there is nothing else, I’ll close this chat window. If you need anything else, feel free to reopen chat. Thanks for visiting.


Poor Nadia Dennis. She flunked the Turing Test. I could not distinguish her from a robot — a particularly unsophisticated robot at that.

Increasingly I am encountering Artificial not-so-Intelligent voices and typists when I telephone an organization or use “live” chat on the internet. Even when I can determine that the voice is that of a living human being, that person is often reduced to serving as a computer peripheral. By this I mean that the person is constrained to read responses from a preprogrammed script and has no personal skills with which to address my topic or problem.

I have to admit to becoming verbally abusive when I find myself in either situation. Since the AI has no feelings (no matter how often it claims to experience “gladness” or “sorrow”) my emotional venting has no consequence. However, no live operator deserves my expressions of wrath.

There are two significant personal consequences and two societal outcomes that I ask my  readers to consider and comment on.

  1. Personally, I am usually angry by the time I work through the artificial stupidity and finally contact a human being who may be able to help me. I invariably look back on the whole interaction with sadness and regret. My day is diminished.
  2. As an already somewhat isolated senior citizen I leave these interactions even more lonely for meaningful human contact. I am beginning to dread asking for help via phone or computer. I have little hope that this situation will improve as I grow more frail.
  3. From a societal perspective, it’s probably not a good idea for businesses to piss off their customers. If you do a web search using keywords ‘hate’ and ‘AT&T’ you’ll find plenty of evidence of growing dissatisfaction with the customer service provided by this large corporation. Such frustration is not unique to AT&T. It begins when consumers try to contact the company and must thread their way through a maze of automated options and recorded voices professing delight, sorrow, and desire to please. It often ends with a meaningless survey.
  4. Perhaps the most dire consequence of our increasing reliance on simulated human-to-human interaction is what it does to employees. First, it deskills a large number of them – the human-as-peripheral effect I mentioned earlier. Second, it decimates the job opportunities for semi-skilled workers. Companies claim that they must automate to remain competitive and/or profitable. A follow-up effect of shrinking employment is the separation of the worker from money – the means of obtaining the goods and services their former employers must sell to remain in business. Third, it spawns a generation of young people who feel helpless. They are taught in school that academic success is the path to economic prosperity. But only the best and brightest are able to compete with well-designed AIs and robots.

Don’t get me wrong. I’m not a luddite battling against all forms of automation techniques. I am strongly opposed to two things: Bad Design and attempts to pass off or disguise machines as human. Poor design exposes us to one form of ‘artificial stupidity’ that wastes our time and fails to solve our problems or provide us with usable information. Clouding the distinction between the human and the machine demeans both types of entity.

So where is the light at the end of this dark tunnel? A clean, carefully designed, clearly demarcated human-machine symbiosis. Humans need to be creative, non-routine, emotive, person-to-person. Machines should continue to be employed to augment human productivity and enhance human life and planetary sustainability. To reach these goals we humans must evolve new socioeconomic institutions that permit the wealth we are generating with our machines to be distributed broadly among the people of the planet. Education is one important key to such evolution. So now, having ranted at length, I’ll return to my search for tools to enhance human learning and teaching.

1 Comment

Filed under Artificial Intelligence and Stupidity