Category Archives: Artificial Intelligence and Stupidity
How will life have changed by 2040?
Looking at the past from the future (photo by Liza Loop)
What stands out as most significant to you? Why? What is most likely to be gained and lost in the next 15 or so years? Here are my positive and negative scenarios…
I imagine positive and negative futures for the year 2040 without predicting which, if either, is most likely to occur. Most significant, and a component in both scenarios, is an increase in humanity’s ability to produce the goods and services necessary for individual human survival, accompanied by a decrease in both environmental pollution and erosion of stocks of natural capital. This boils down to the potential for what has been called “the age of abundance”. Let’s take a quick look at some positives and negatives while noting that an increase in our ability to do something does not imply that it is likely to happen.
In the positive take, by 2040 ordinary people will have far more choice in lifestyle and decreased risk of dying from disease (genetic, environmental, or contagious), exposure (to cold, heat, lack of food or water, and poisons), or civil violence (whether as wide-scale war, personal attack, or small-group terrorism). Accidental death may be unchanged or increase because some people may choose to take more risks. Death by abortion or infanticide is likely to be less frequent as we become more skilled at preventing conception.
A survey of the living will reveal people enjoying a much broader range of lifestyles without the social stigma that was attached to many lifestyles in the 2020s. For example, voluntary ‘homelessness’ or ‘nomadism’ will be considered a valid choice at any age. Similarly, many more people will choose ‘simplicity’ or ‘sparse’ paths in order to avoid the responsibility of caring for and storing possessions they don’t use every day, even when they reside in one geographic location.
With the decline of ‘owning stuff’ as the primary indicator of social status, there is a rise in acclaim for people who contribute by caring for others or by producing and donating artistic creations. The existence of Universal Basic Income and effective Universal Education permits social service workers, artists, adventurers, and scholars to eschew wealth accumulation and focus on their avocations. At the same time, those who so choose are free to exercise the historic values of control of goods and services in excess of their ability to consume them.
Lost in this scenario is the necessity for competition which many people in the 2020s still rely on as a primary motivator. Abundance is a condition where there are enough basic resources to eliminate zero-sum games and if-you-live-I-must-die conundrums. Under abundance, competition is only one of many lifestyle choices for humans.
Another “loss” I hope for by 2040 is the high value placed on large families. Rather than proud parents enjoying being surrounded by 10 of their own children, in 2040 a ‘family’ of 12 or 20 would include great grandparents and 3rd cousins as well as parents and children. This is an example of how a relatively small change in social attitudes can have profound effects on how humans impact the planet.
A negative view of life in 2040 incorporates the trends and fears being discussed now in 2023 and 2024. Little has changed in our social and economic institutions over the past two centuries. This has led to further concentration of wealth and growing dysfunction in global civil society. The power brokers of 15 years ago have co-opted the increase in productive capacity enabled by machine automation and AI without instituting compensating channels for redistribution of what has been produced. Stockpiles of consumer goods are targets to be ‘liberated’. The military-industrial complex survives not on genuine human need but on the demand generated by ongoing small wars that have not yet succeeded in destroying the worldwide productive infrastructure. Population growth has continued apace, resulting in an exponential rise in the number of humans living in extreme poverty, misery, and despair. The ubiquity of video communication fuels rising aspirations among the world’s poor, and drives physical migration, as they are continuously exposed to narratives of luxury they cannot attain.
Of particular interest to educators in this negative scenario is the lost opportunity to spread know-how among the less fortunate. High aspiration without the knowledge and skills to fulfill these wants decreases overall perception of well-being even under conditions of increasing availability of food, water, consumer goods, and health care. In this negative future, we have continued to train AIs and each other that the goal of educating humans is to enable them to be successful competitors in the employment market at the same time that we are decreasing the demand for human muscle and brain power. Unemployment is rampant while employers lament the lack of adequately trained workers.
This view is frighteningly likely given that AGI is still way beyond the 2040 horizon. While there is no reason to anticipate that an AGI would spontaneously develop the competitive, amoral, greedy personality exhibited by some humans, there is also no reason to assume that guideposts against such an outcome will be put in place by today’s researchers and developers.
Why do I envision these changes for 2040? It is because the environmental conditions under which humans evolved have changed while many of our socially reinforced values have lagged behind. Behaviors that were a ‘good fit’ for humans existing ‘in the wild’ no longer ensure our individual survival from birth to the time our children reach reproductive age. Like many other species, humans are able to produce many more offspring than they are able to nurture. By maintaining the belief that every child we are able to conceive is innately valuable and should have a right to life, we endanger ourselves and those with whom we share the planet. By relying on an economic theory founded on an assumption of scarcity, we inhibit our willingness to embrace abundance even in the face of the capacity to produce it. AI technology accelerates our productive capacity. However, if we continue to train both neural networks and semantic systems with rules, data, and beliefs that sustained us during eons past but ignore today’s realities, we cannot blame the AIs for the result.
For more from Liza, please visit and comment on:
New Economic Thinking – Analysis – Action
Learning Options * Open Portal
ChatGPT Promises not to Make Things Up
There are lots of fun and practical ways to use the powerful Large Language Model known as chatGPT. But when you want reliable information, watch out. This evening I asked chatGPT, version 3.5, to help me with some research on Open Educational Resources (OER). These are free or very low cost textbooks, short lessons, videos, etc. that any of us can use to learn about almost anything that is taught in schools – nursery school through professional training. I’ll show you parts of the conversation transcript in a minute. But here’s the punchline of this post:
So for any of you who are worried about whether OpenAI (chatGPT’s corporate parent) is going to stop pretending to provide real, reliable answers to our questions, here’s their promise to cease and desist.
How did we get here? Well, one of the biggest problems with OER is that it can be very difficult to find the right instructional material for what you want to learn. Teachers and instructional designers compose these lessons, or sometimes even whole textbooks or courses, and submit them to organizations called Repositories that act like public libraries. There are many thousands of titles in Repositories waiting for you to discover and use them for free, either by downloading them to your smart phone, tablet, or computer, or by logging into the ‘cloud’ where they live and using them online. So which one is right for you? You have to search the Repository – each Repository – using a limited list of keywords, words like language (English, Spanish, Chinese), audience level (1st grade, high school, beginner, adult), or subject (biology, arithmetic, Python programming). However, each Repository’s search features are a little different. Hmmm, is this a problem chatGPT can help solve?
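To make the keyword-search problem concrete, here is a minimal sketch of the kind of metadata filtering a repository offers. The records and field names below are invented for illustration; they are not drawn from any actual repository’s catalog:

```python
# Hypothetical OER records with the kinds of metadata repositories expose.
resources = [
    {"title": "Intro to Python", "language": "English",
     "level": "beginner", "subject": "Python programming"},
    {"title": "Biología celular", "language": "Spanish",
     "level": "high school", "subject": "biology"},
    {"title": "Counting Fun", "language": "English",
     "level": "1st grade", "subject": "arithmetic"},
]

def search(records, **filters):
    """Return records whose metadata matches every supplied keyword filter."""
    return [r for r in records
            if all(r.get(field) == value for field, value in filters.items())]

hits = search(resources, language="English", level="beginner")
print([r["title"] for r in hits])  # → ['Intro to Python']
```

The catch the post describes is exactly this: each repository supports a slightly different set of `filters`, so the same query has to be rewritten for every site.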
I started by asking for a list of repositories.
This is good and now you also know where to look for free textbooks, etc. Type one of these repository names into your search engine and start exploring.
Next I wanted to know what keywords we can use to filter the search results for each of these repositories, so I asked the machine…
You can see from the response I got that chatGPT didn’t understand what I was asking for. All three lists were the same.
So I fiddled around with the way I asked for the lists and finally got something that looks about right. I had to ask for a comparison of just two repositories rather than all twenty at once.
Wow! This is just what I wanted. It looks like OER Commons and MERLOT both have 15 search parameters, they share 11 and each have 4 that they don’t share. Now maybe the machine has ‘learned’ enough to generate the lists for all 20 Repositories.
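That overlap arithmetic (15 parameters each, 11 shared, 4 unique apiece) is easy to double-check with ordinary set operations. The values below are numeric placeholders rather than the actual parameter names chatGPT produced:

```python
# Numeric placeholders standing in for the two repositories' search
# parameters (the real names appeared in the chatGPT transcript).
oer_commons = set(range(15))        # 15 search parameters
merlot = set(range(4, 19))          # 15 parameters, offset so that 11 overlap

shared = oer_commons & merlot           # parameters both repositories offer
unique_oer = oer_commons - merlot       # parameters only OER Commons offers
unique_merlot = merlot - oer_commons    # parameters only MERLOT offers

print(len(shared), len(unique_oer), len(unique_merlot))  # → 11 4 4
```

The counts are internally consistent: 11 shared plus 4 unique gives each repository its 15 parameters.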
Nope, we’re not doing that. Suddenly we’re back to “commonly provided” and “parameters may vary” when what I want to know is exactly how they vary. This makes me question the responses provided about OER Commons and MERLOT. If the AI can give me accurate answers about two repositories, why can’t it do 20? Isn’t the ability to do the same dull task over and over the very reason we humans want to use this technology? Here’s what happens next…
The wording on the OER Commons and MERLOT lists did not indicate these were “possible”, “typical”, or “likely”. It says these are the “unique parameters”. Is this accurate or fake information?
Hey Buddy, this is not “oversight”, this is misrepresentation. First you said, “Here’s the real stuff” when you were just blowing smoke. I won’t find out whether the information is trustworthy or not unless I already know enough to spot fake news and challenge you on it. When challenged you tell me your answer was incorrect. This disclaimer should come before the beautifully worded but untrue essay, not after. This is what makes AI dangerous to the non-expert.
When challenged, chatGPT backpedals, pretends it has human emotions, and then promises to reform its reprobate ways…
Is there any reason to believe this string of characters carries any more veracity than the ones that have come before? Who is speaking/typing/communicating here? Is there any author? Any accountability?
I don’t give up easily so here’s my further challenge…
We are back to the beginning of this post. We have a public statement from Open AI:
“This response is a public statement from OpenAI, indicating a commitment to transparency and accuracy in interactions with all users. It applies to all interactions conducted by the AI model, not just those with you. Thank you for prompting this clarification, and I appreciate your understanding.”
Now it’s up to us users to hold OpenAI and all other purveyors of LLMs accountable for the statements their machines create no matter what prompts we give them. I suspect the fine print in the user agreements we all have to commit to in advance of using chatGPT will make it impossible to take legal action against OpenAI. But we can still vote with our dollars, with our feet, and with our communications to the developers of these products. Take the time to speak out if you are as bothered as I am by the directions the AI movement is taking. So far, AI is like a toddler running around with no judgement and a risk of stumbling into the fire. We are the adults (well, some of us anyway). LLMs as well as other AI technologies can grow into marvelous additions to the human environment. But we’re going to have to socialize them and not permit them to embody, no, simulate the worst qualities of human beings. This little tale is just one example of how we can go wrong.
See this whole chatGPT session, here: https://chat.openai.com/share/431ce57e-9fd4-48b1-bb42-70a7c37339f2
Internet hopes and fears in 10 years
I just filled out a survey about what I think the best and worst consequences of digital technology are going to be for humans. I’m in a sort of cynical mood but perhaps you’ll find my responses interesting. If you find the questions stimulating, do feel free to reply with some of your own answers. I love comparing points of view.
BEST AND MOST BENEFICIAL changes
* Human-centered development of digital tools and systems – safely advancing human progress in these systems
Nature’s experiments are random, not intentional or goal directed. We humans operate in a similar way, exploring what is possible and then trimming away most of the more hideous outcomes. We will continue to develop devices that do the tasks humans used to do, thereby saving us both mental and physical labor. This trend will continue, resulting in more leisure time available for non-survival pursuits.
* Human connections, governance and institutions – improving social and political interactions
We will continue to enjoy expanded synchronous communication that will include an increasing variety of sensory data. Whatever we can transmit in near real time we will also be able to store and retrieve to enjoy later – even after death. This could result in improved social and political interactions but not necessarily.
* Human rights – abetting good outcomes for citizens.
Increased communication will not advance human “rights” but it might make human “wrongs” more visible so that they can be diminished.
* Human knowledge – verifying, updating, safely archiving and elevating the best of it
Advances in digital storage and retrieval will let us preserve and transmit larger quantities of human knowledge. Whether what is stored is verifiable, safe, or worthy of “elevation” is an age-old question and not significantly changed by digitization.
* Human health and well-being – helping people be safer, healthier, happier
Huge advances in medicine and the ability to manipulate genetics are in store. This will be beneficial to some segments of the population. Agricultural efficiency resulting in increased plant-based food production as well as artificial, meat-like protein will provide the possibility of eliminating human starvation. This could translate into improved well-being – or not.
* Other – you are welcome to write about an area that does not fit in the categories listed above
IMHO, the most beneficial outcomes of our “store and forward” technologies are to empower individuals to access the world’s knowledge and visual demonstrations of skill directly, without requiring an educational institution to act as “middleman”. Learners will be able to hail teachers and learning resources just like they call a ride service today.
MOST HARMFUL OR MENACING changes
The biggest threat to humanity posed by current digital advances is the possibility of switching from an environment of scarcity to one of abundance. Humans evolved, both physically and psychologically, as prey animals eking out a living from an inadequate supply of resources. Those who survived were both fearful and aggressive, protecting their genetic relatives, hoarding for their families, and driving away or killing strangers and nonconformists. Although our species has come a long way toward peaceful and harmonious self-actualization, vestiges of the old fearful behavior persist.
Consider what motivates the continuance of copyright laws when the marginal cost of providing access to a creative work approaches zero. Should the author continue to be paid beyond the cost of producing the work?
* Human-centered development of digital tools and systems – falling short of advocates’ goals
This is a repeat of the gun violence argument. Does the problem lie with the existence of the gun or the actions of the shooter?
* Human connections, governance and institutions – endangering social and political interactions
Any major technology change endangers the social and political status quo. The question is, can humans adapt to the new actions available to them? We are seeing new opportunities to build marketplaces for the exchange of goods and services. This is creating new opportunities to scam each other in some very old (snake oil) and very new (online ransomware) ways. We don’t yet know how to govern or regulate these new abilities. In addition, although the phenomenon of confirmation bias or echo chambers is not exactly new (think “Christendom” in 15th century Europe), word travels faster and crowds are larger than they were 6 centuries ago. So is digital technology any more threatening today than guns and roads were then? Every generation believes the end is nigh and brought on by change toward “wickedness”. If change is dangerous then we are certainly in for it!
* Human rights – harming the rights of citizens
The biggest threat here is that humans will not be able to overcome their fear and permit their fellows to enjoy the benefits of abundance brought about by automation and AI.
* Human knowledge – compromising or hindering progress.
The threat lies in increasing human dependence on machines – both mechanical and digital. We are at risk of forgetting how to take care of ourselves without them. Increasing leisure and abundance might seem like “progress” but they can also lull us into believing that we don’t need to stay mentally and physically fit and agile.
* Human health and well-being – threatening individuals’ safety, health and happiness
In today’s context of increasing ability to extend healthy life, the biggest threat is human overpopulation. We don’t get too upset if thousands of lemmings jump off a cliff, but a large number of human deaths is a no-no, no matter how small a percentage of the total population it is. Humanity cannot continue to improve its “health and well-being” indefinitely if it remains planet bound. Our choices are to put more effort into building extraterrestrial human habitat or self-limiting our numbers. In the absence of one of these alternatives, one group of humans is going to be deciding which members of other groups live or die. This is not a likely recipe for human happiness.
* Other – you are welcome to write about an area that does not fit in the categories listed above
Watch what you sign up for – Hidden contracts and life in the digital stone age
Recently I needed some new domain names for a website I’m planning. Easy, right? Find one of those registration companies, search for the name you want, fill in the blanks, including the one that says you’ve read the “Terms and Conditions”, and plug in your credit card information.
Did you actually read those Terms and Conditions? Here’s section 20 (who gets to the 20th paragraph of legalese?) of the agreement I just signed.
I know this screen shot is a little hard to read so I’ll repeat what I underlined in green:
…that we own all…information…generated from the domain name database. You further agree and acknowledge that
we own…(c) the name, postal address, e-mail address, voice telephone number,…all contacts for the domain name registration…
I’m not a lawyer but a simple English interpretation of this set of phrases could be that I just transferred ownership of my name and address to that registrar. I wouldn’t want to have to prove that this isn’t what the agreement means. Could it be that the registrar now has the right to sell my name, address, e-mail, and phone number to the highest bidder?
Perhaps the final sentence in the section is meant to reassure me: “We do not have any ownership interest in your specific personal registration information outside of our rights in our domain name database.” But my stuff is in your domain name database and you just said you own it.
Maybe if I had the stamina to read and understand all the clauses in the Registration Agreement I’d not be worried. Maybe if I were a lawyer…maybe if Section 1. didn’t mention that I am also agreeing to a Supplemental Agreement that is linked to this page:
I conclude that we are in the “stone age of the digital age”. We have begun to invent digital tools that enable humans to accomplish much that was impossible without such implements. But we have just begun the invention process. The tools and the rules we are establishing for their use (e.g. Terms and Conditions agreements) are rough – coarse compared with what our descendants will have. For now, we are all suffering from the virtual cuts and bruises (and crimes) that result from the crudeness of today’s digital instruments.
I hope this registrar doesn’t sell my personal information. I hope my bank (the one that requires me to take responsibility for the security of my financial information and then urges me to use online banking and “go paperless”) doesn’t get hacked. I hope the camera and microphone on my “smart phone” are not constantly surveilling me even when I think I’ve turned them off. Life in any stone age is risky…
By the way, have you ever looked at the “indemnification clause” in one of those many agreements you sign, or noticed that most contracts require that you give up your right to a jury trial? Just for fun, read the back of your parking garage ticket and hope that the guy pulling up next to you isn’t carrying any stones.
Filed under Artificial Intelligence and Stupidity
More Musings on Online Privacy
Have you seen this recent viral article on the information social media companies collect about you?
Trust
In one of the first replies to this article (no, I didn’t read them all) the writer asks “Do you really trust Google?” My answer is “yes, completely”. Google is a for-profit information company and I trust it to collect every scrap of information about me and the rest of the world it can. I trust it to profit from this information in any way it can. A company, like a robot or an AI, has no human morality, no conscience, no moral compass. These characteristics have to come from the humans who control it — or abdicate that control.
“Trust” does not stand alone as a concept. To be meaningful it has to be accompanied by answers to the questions, trust whom? to do what? under what circumstances? And it is we humans who must supply those answers, thoughtfully.
Thoughts Left Out
What this article doesn’t say explicitly is that turning on the security controls doesn’t stop the tracking. The next step to privacy is to be very specific about what actions to take. For example, if I turn wi-fi off on my computer is it really offline or just not reporting to me? It isn’t in the interest of Google to tell us these things. The company’s advertisers (other profit-making companies) want us to expose ourselves so that they can entice us to buy their products. They don’t want us to control their access to information about us and they don’t want us to control our impulse to buy. Why would they make it easy for us to protect that information? Why would we “trust” them to do so?
I’ve been teaching since 1975 that the only way to stop a robot (or computer) is to detach it from its power source. That means, if it’s, say, solar powered, get inside it and snip the wires between the charger, the batteries, and the CPU. This will also work if you want to stop a computer program designed to collect information and use it to present images that trigger behavior we might later regret. Asking a company to act against its self-interest seems unlikely to succeed. No matter how many apologies or assurances Facebook publishes, its survival depends on keeping your information flowing in. In the end, I don’t think greater privacy controls will solve the problem. Rather, we need to accept responsibility for responding to those oh-so-effective triggers.
What is Privacy?
The other thesis the article misses is that our whole concept/expectation of privacy has changed in the last 3 centuries. It used to be that “privacy” was the cultural practice of averting one’s eyes rather than today’s assumption that we ought to be able to prevent direct access to information. If you were a servant in an upper-class household, a clerk in a bank, or perhaps a resident in a multifamily Native American longhouse, you saw a lot of things you never talked about. It’s only recently that a significant number of humans have lived in conditions where information we now expect to be “private” was not readily available across a broad swath of neighbors, relatives, and tradespeople. The more I confront this topic the more convinced I become that we will find relief from our distress in cultural adaptation, not technical fixes.
Maybe some of today’s youngsters have got a better idea. Just take naked pictures of yourself and post them.
Filed under Artificial Intelligence and Stupidity
Who is responsible for my online security?
Today I was catching up on aging Facebook postings and happened to read this one from an old colleague:
Just saw the message in Chrome saying that in V70 some certificates will be distrusted and not load. I understand the security concerns.
But how am I supposed to build long lasting infrastructure when things can simply break because of events outside my control. My light switch (AKA the app on my wall devices) is supposed to work for a decade unattended. Is IoT just a joke?
Fortunately I don’t think I depend on those certificates but I am on notice that I better not build any persistent technology using the Internet.
Imagine building a bridge and discovering one day that your particular brand of steel bar has been recalled and suddenly all your bridges have been disabled.
(Woz and 13 others liked this.)
Now I know just a little about computer security but much more about the use of highly technical knowledge in social contexts. I’ve been interested in how the public (as portrayed on radio news) is responding to reports of personal profile data being harvested from Facebook and other online sources such as your “smart refrigerator”. Such privacy questions have been relevant to me since my phone was tapped during the Vietnam War and Capt’n Crunch was whistling into long distance phone lines. So I made the following reply to Bob which started a little dialog with another poster, Karl:
Liza Loop We humans have created a new information environment that we haven’t figured out how to survive in yet. All our instincts about privacy are now inadequate. So you’re right, certificates and IoT security cannot be trusted at the moment. For me it has been a 50-year moment. My solution is, if you don’t want the world to know about it, don’t put it on a device that connects to anything else. This is analogous to keeping your mouth shut. Most of the time I just don’t care who knows what about me. When I do care, even paper isn’t secure enough. Don’t write it, don’t tell a “friend”, and most of all, don’t store it on a computer even the itty-bitty one in your doorbell. Maybe we’ll have a better solution in another 50 years.
Karl Schulmeisters actually Liza it’s worse than that. As Dwork shows with her differential privacy work, if there is a statistical database about human beings that is correlatable to external information – you or your device need not even be in the database to have data exfiltrated about you
Liza Loop Ya, I know. Someone is always watching and we have almost no control over that. But those who are currently making a lot of noise about privacy violations might do well to attend to the information they set loose with their own actions. When I post here on Facebook I don’t blame Zuckerberg for the outcomes, intended or unintended, his or mine. My point is that this is a broad human culture issue unleashed by technical change, not something we can fix with a few government regulations.
Karl Schulmeisters healthy way to look at it
Liza Loop Healthy is ok but we still need to figure out what to teach our children about privacy. Any suggestions? Of course they will only adopt part of what we try to tell them but I’m always surprised at how much my attitudes influence those of my children and grandchildren. When humans live in periods of radical environmental change, parenting, schooling and other forms of cultural transmission can impact which genetic lines survive and which die out. I think we happen to be living in such a critical period that it’s worth asking questions about topics like privacy and doing our best to think systematically about the possible future consequences of our current decisions. Blaming others isn’t very effective. As Pogo said, “We have met the enemy and he is us”.
Another person, someone with a background in security and cryptography popped into the conversation and tried to help me out by suggesting that I
Either tell them you don’t use surveillance apps from companies owned by greedy sociopaths, or that you do.
Your choice.
Then Bob, the person who started this conversation, added:
Bob Frankston This whole FB as your credential is an issue in itself. I try to avoid using FB as my credential. But this is another deep topic.
This discussion illustrates the problem I’d like to address. The lay public, large numbers of people who don’t understand what the phrase “FB as your credential” means, are the carriers of culture, the people who get interviewed and express opinions to the broadcast media and who vote for or against the legislators who enact our laws. I did find a 2015 CBS News article that explains the process but how many people actually question what’s going on when they see this on their screen? And whose responsibility is it to understand what you are doing when you follow a suggestion on your screen?
In my opinion the online world, cyberspace if you will, is a whole new ecological niche, one in which we don’t have any multigenerational experience. We don’t know what rules and customs will protect us and which will lead to extinction. Trying to hold businesses like Facebook or national governments responsible is kind of like blaming the shore for having a rip tide that sweeps us out to sea and drowns us. Sure, the local government can put up a few signs to warn us of the danger. Eventually our tribe will learn where it’s safe to swim and where to stay out of the water on this island. But a lot of lives will be lost in the process. Or, in the information case, a lot of privacy will be violated. In the meantime, I take the stand I posted to Bob: if you want to keep it private, don’t put it on a computer that is EVER, IN ANY WAY, connected to the internet.
Internet Terms of Service, Rights or Wrongs?
I recently took my Honda Civic Hybrid to the local dealership to have the recalled airbag deployment mechanism replaced. The smiling customer service man handed me a paper to sign for the work that detailed what I was authorizing them to do and all estimated costs. Just below the signature line were the words “Signator agrees to Terms of Service on reverse”. I turned the paper over. Blank. Apologetically I handed the paper back. “I’m sorry. I can’t sign this without reading the Terms of Service and you haven’t given them to me.” To his credit, the gentleman was both surprised and equally apologetic. Says he, “No one’s ever pointed that out to me before or asked to see the Terms of Service. I’ll see if I can find a copy.”
I went ahead and signed anyway because I was in a hurry and there was no charge for the recall work. But what if that missing fine print contains a waiver of all liability for the quality of the dealer’s workmanship? Should the shop fail to install the new airbag properly resulting in my death or injury in an accident I (or my family) will not be able to hold the shop responsible in court. If the Terms contain a binding arbitration clause we won’t have the right to a trial. We’ve consented to the “rent-a-judge” system.
The situation is even worse when the Service in question is on the Internet. I recently visited the x.ai website to see whether I wanted to try out their artificially intelligent meeting scheduling algorithm that masquerades as a human personality addressed as “Amy” or “Andrew”, depending on your gender preference. Any use of the site, including their free trial, implies, by default, that you have agreed to their Terms of Service.
I read the Terms and most of it seems innocuous enough…except this:
These Terms of Service are effective as of the “Last Modified” date identified at the top of this page. We expressly reserve the right to change these Terms of service from time to time without notice to you. You acknowledge and agree that it is your responsibility to review this Site and these Terms of Service from time to time and to familiarize yourself with any modifications. Your continued use of this Site and related services after such modifications will constitute acknowledgement of the modified Terms of Service and agreement to abide and be bound by the modified Terms of Service. However, for any material modifications to the Terms of Service or in the event that such modifications materially alter your rights or obligations hereunder, such amended Terms of Service will automatically be effective upon the earlier of (i) your continued use of this Site and related services with actual knowledge of such modifications, or (ii) 30 days from publication of such modified Terms of Service on this service. Notwithstanding the foregoing, the resolution of any dispute that arises between you and us will be governed by the Terms of Service in effect at the time such dispute arose.
Now, I’m not a lawyer but I can read English. To me, the second sentence gives the company a totally blank check. The company can revise our contract whenever it wants without notifying me or in any way securing my consent. I don’t even have to understand what I’m agreeing to. What’s to prevent x.ai (by the way, who are they? Is it a company, a corporation, an LLC, an individual?) from adding a $500 fee to each meeting it schedules and sending me a huge bill after the end of the month?
Don’t get me wrong. I have no reason to suspect nefarious intentions of the people behind x.ai. I’m just using my own recent experience to highlight a blind spot most of us have when we go about our business — in the physical world and on the web. We are too busy (not to mention too lazy or too poorly educated) to pay attention to the legal agreements we enter into almost daily. We leave our cars in parking garages without reading the disclaimer on the back of the ticket, we transfer money using our smartphones, we check into the emergency room at the hospital, we download an app onto our computer, and almost never do we question the rights and responsibilities we are taking on or giving up.
Most of the time there is no problem. We complete our intended task and go on to the next. Occasionally there’s a glitch and we want our money back or need a different size or have to cancel the contract. Most of the time the other party accommodates us. But sometimes s/he doesn’t and a dispute arises. That’s when the Terms and Conditions that we didn’t bother to read come back to bite us in the behind.
I wish I had a solution to this problem. One approach may be for consumers to present their own Terms and Conditions agreements to purveyors of goods and services. My personal decision is not to use most of what is offered because I can't agree to the conditions. The situation makes me feel powerless and angry. I understand that companies are just acting to protect themselves. But they can afford lawyers and most of us can't. So I read the fine print, pass up opportunities, and explain my position ad nauseam to sales reps and customer service agents. I know they can't fix it, but maybe their supervisors will pass the message up the chain of command. Maybe if we all did this, and blogged, and complained, and stood in the lobby of the car dealership explaining that none of the other customers should sign this paper…no wonder people call me a dreamer.
Another Encounter with Artificial Stupidity
This morning I was researching “learning analytics” and my search led, after many, many clicks, to this web page:
Note the purple chat window pop-up that appeared in the lower left-hand corner of my active window. Great, I thought, someone wants to know why I'm reading this page.
Here’s the ensuing dialog:
Nadia Dennis
10:26 am Hi, sorry to disturb 🙂
Can I ask which school you
represent?
Visitor
10:27 am I have a nonprofit research organization called LO*OP Center, Inc. LO*OP stands for Learning Options*Open Portal.
Nadia Dennis
10:27 am I’ll be glad to help, could I have your name please?
Visitor
10:27 am Liza
Nadia Dennis
10:27 am Nice to meet you Liza! To know how I could properly address your concern, would just like to clarify if I am chatting with a student, a teacher or a parent?
Visitor
10:28 am There are other designations. I’m an educational researcher. I also identify as a student, teacher and parent.
Nadia Dennis
10:29 am Can we verify if you currently have Compass Learning in your school?
Visitor
10:29 am If you want to have a meaningful conversation you’ll have to free yourself from your script. It’s not a school.
10:30 am I’m not going to buy anything from Compass. Do you really want to chat with me?
Nadia Dennis
10:30 am You may call us at 866.586.7387 or email successteam@compasslearning.com
Visitor
10:31 am This chat is a fine example of ARTIFICIAL STUPIDITY. It does nothing to encourage me to contact Compass Learning again.
Nadia Dennis
10:31 am If there is nothing else, I’ll close this chat window. If you need anything else, feel free to reopen chat. Thanks for visiting.
Poor Nadia Dennis. She flunked the Turing Test. I could not distinguish her from a robot — a particularly unsophisticated robot at that.
Increasingly I am encountering Artificial not-so Intelligent voices and typists when I telephone an organization or use “live” chat on the internet. Even when I can determine that the voice is that of a living human being that person is often reduced to serving as a computer peripheral. By this I mean that the person is constrained to read responses from a preprogrammed script and has no personal skills with which to address my topic or problem.
I have to admit to becoming verbally abusive when I find myself in either situation. Since the AI has no feelings (no matter how often it claims to experience “gladness” or “sorrow”) my emotional venting has no consequence. However, no live operator deserves my expressions of wrath.
There are two significant personal consequences and two societal outcomes that I ask my readers to consider and comment on.
- Personally, I am usually angry by the time I work though the artificial stupidity and finally contact a human being who may be able to help me. I invariably look back on the whole interaction with sadness and regret. My day is diminished.
- As an already somewhat isolated senior citizen I leave these interactions even more lonely for meaningful human contact. I am beginning to dread asking for help via phone or computer. I have little hope that this situation will improve as I grow more frail.
- From a societal perspective, it’s probably not a good idea for businesses to piss off their customers. If you do a web search using keywords ‘hate’ and ‘AT&T’ you’ll find plenty of evidence supporting the growing dissatisfaction with the customer service provided by this large corporation. Such frustration is not unique to AT&T. It begins when consumers try to contact the company and must thread their way through a maze of automated options and recorded voices professing delight, sorrow and desire to please. It often ends with a meaningless survey.
- Perhaps the most dire consequence of our increasing reliance on simulated human-to-human interaction is what it does to employees. First, it deskills a large number of them, the human-as-peripheral effect I mentioned earlier. Second, it decimates the job opportunities for semi-skilled workers. Companies claim that they must automate to remain competitive and/or profitable. A follow-up effect of shrinking employment is the separation of the worker from money, the means of obtaining the goods and services their former employers must sell to remain in business. Third, it spawns a generation of young people who feel helpless. They are taught in school that academic success is the path to economic prosperity. But only the best and brightest are able to compete with well-designed AIs and robots.
Don’t get me wrong. I’m not a luddite battling against all forms of automation techniques. I am strongly opposed to two things: Bad Design and attempts to pass off or disguise machines as human. Poor design exposes us to one form of ‘artificial stupidity’ that wastes our time and fails to solve our problems or provide us with usable information. Clouding the distinction between the human and the machine demeans both types of entity.
So where is the light at the end of this dark tunnel? A clean, carefully designed, clearly demarcated human-machine symbiosis. Humans should do the creative, non-routine, emotive, person-to-person work. Machines should continue to be employed to augment human productivity and enhance human life and planetary sustainability. To reach these goals we humans must evolve new socioeconomic institutions that permit the wealth we are generating with our machines to be distributed broadly among the people of the planet. Education is one important key to such evolution. So now, having ranted at length, I'll return to my search for tools to enhance human learning and teaching.
Filed under Artificial Intelligence and Stupidity