By Lance Eliot, the AI Trends Insider
Thank you for driving safely.
Or, suppose instead I said to you that you should “Drive Safely: It’s the Law” – how would you react?
Perhaps I might say “Drive Safely or Get a Ticket.”
I could be even more succinct and simply say: Drive Safely.
These are all ways to generally say the same thing.
Yet, how you react to them can differ quite a bit.
Why would you react differently to these messages that all seem to be saying the same thing?
Because how the message is phrased will create a different kind of social context that your underlying social norms will react to.
If I simply say “Drive Safely”, it’s a rather perfunctory form of wording the message.
It’s quick, consisting of only two words. You would likely barely notice the message, and you might think that of course it’s important to drive safely. You might ignore the message because it seems obvious, or you might notice it and think it’s a handy reminder but that, in the grand scheme of things, it wasn’t really necessary, at least not for you (maybe it was intended for riskier drivers, you assume).
Consider next the version that says “Thank You for Driving Safely.”
This message is somewhat longer, now five words, and takes more effort to read. As you parse the words of the message, the opening element is that you are being thanked for something. We all like being thanked. What is it that you are being thanked for, you might wonder. You then get to the end of the message and realize you are being thanked for driving safely.
Most people would then maybe get a small smile on their face and think that this was a mildly clever way to urge people to drive safely. By thanking people, it gets them to consider that they need to do something to get the thanks, and the thing they need to do is drive safely. In essence, the message tries to create a reciprocity with the person – you are getting a thank you handed to you, and you in return are supposed to do something, namely you are supposed to drive safely.
Suppose you opt not to drive safely?
You’ve broken the convention: you were given something, the thanks, that turned out to be undeserved. In theory, you won’t want to break such a convention and therefore will be motivated to drive safely. I’d say that none of us will go far out of our way to drive safely merely to repay the thank-you. On the other hand, maybe it will be enough of a social nudge to put you into the mindset of continuing to drive safely. It’s not enough to force you into driving safely, but it might keep you going along as a safe driver.
What about the version that says “Drive Safely: It’s the Law” and your reaction to it?
In this version, you are being reminded to drive safely and then you are being forewarned that it is something you are supposed to do. You are told that the law requires you to drive safely. It’s not really a choice per se, and instead it is the law. If you don’t drive safely, you are a lawbreaker. You might get into legal trouble.
The version that says “Drive Safely or Get a Ticket” is similar to the version warning you about the law, and steps things up a further notch.
If I tell you that something isn’t lawful, you need to make a mental leap that if you break the law there are potentially adverse consequences. In the version telling you straight out that you’ll get a ticket, there’s no ambiguity: not only must you drive safely, there is a distinct penalty for not doing so.
None of us likes getting a ticket.
We’ve all had to deal with traffic tickets and the trauma of getting points dinged on our driving records, possibly having our car insurance rates hiked, and maybe needing to go to traffic school and suffer through boring hours of re-learning about driving. Yuck, nobody wants that. This version that mentions the ticket provides a specific adverse consequence if you don’t comply with driving safely.
The exact wording of the drive-safely message is actually quite significant as to how the message will be received and whether people will be prompted to do anything because of it.
I realize that some of you might say that it doesn’t matter which of those wordings is used.
Aren’t we being rather tedious in parsing each such word?
Seems like a lot of focus on something that otherwise doesn’t need any attention. Well, you’d actually be somewhat mistaken in assuming that those variants of wording do not make a difference. There are numerous psychology and cognition studies showing that the wording of a message can make an at-times dramatic difference as to whether people notice the message and whether they take it to heart.
I’ll concentrate herein on one such element that makes those messages so different in terms of impact, namely due to the use of reciprocity.
Importance Of Reciprocity
Reciprocity is a social norm.
Cultural anthropologists suggest that it is a social norm that cuts across all cultures and all of time.
In essence, we seem to have always believed in and accepted reciprocity in our dealings with others, whether we explicitly knew it or not.
I tell you that I’m going to help you with putting up a painting on your wall. You now feel as though you owe me something in return. It might be that you would pay me for helping you. Or, it could be something else such as you might do something for me, such as you offer to help me cook a meal. We’re then balanced. I helped you with the painting, you helped me with the meal. In this case, we traded with each other, me giving you one type of service, and you providing in return to me some kind of service.
Of course, the trades could have been something other than a service.
I help you put up the painting (I’m providing a service to you), and you then hand me a six pack of beer. In that case, I did a service for you, and you gave me a product in return (the beers). Maybe instead things started out that you gave me a six-pack of beer (product) and I then offered to help put up your painting (a service). Or, it could be that you hand me the six pack of beers (product), and I hand you a pair of shoes (product).
In either case, one aspect is given to the other person, and the other person provides something in return. We seem to just know that this is the way the world works.
Is it in our DNA?
Is it something that we learn as children? Is it both?
There are arguments to be made about how it has come to be.
Regardless of how it came to be, it exists and actually is a rather strong characteristic of our behavior.
Let’s further unpack the nature of reciprocity.
I had mentioned that you gave me a six-pack of beer and I then handed you a pair of shoes. Is that a fair trade? Maybe those shoes are old, worn out, and have holes in them. You might not need them and even if you needed them you might not want that particular pair of shoes. Seems like an uneven trade. You are likely to feel cheated and regret the trade. You might harbor a belief that I was not fair in my dealings with you. You might expect that I will give you something else of greater value to make-up for the lousy shoes.
On the other hand, maybe I’m not a beer drinker, and so your having given me beer seemed like an odd item to give me. I might have thought that I’d give you an odd item in return. Perhaps in my mind, the trade was even. Meanwhile, in your mind, the trade was uneven.
There’s another angle too as to whether the trade was intended as a positive one or something that is a negative one. We both are giving each other things of value and presumably done in a positive way. It could be a negative action kind of trade instead. I hit you in the head with my fist, and so you then kick me in the shin. Negative actions as a reciprocity. It’s the old eye-for-an-eye kind of notion.
Time is a factor in reciprocity too. I will help you put up your painting. Perhaps the meal you are going to help me cook is not going to take place until several months from today. That’s going to be satisfactory in that we both at least know that there is a reciprocal arrangement underway.
If I help you with the painting, and there’s no discussion about what you’ll do for me, I’d walk away thinking that you owe me. You might also be thinking the same. Or, you could create an imbalance by not realizing you owe me, or maybe you are thinking that last year you helped me put oil into my car and so that’s what makes us even now on this most current trade.
Difficulties Of Getting Reciprocity Right
Reciprocity can be dicey.
There are ample ways that the whole thing can get discombobulated.
I do something for you, you don’t do anything in return.
I do something for you of value N, and you provide in return something of perceived value Y that is substantively less than N. I do something for you, and you pledge to do something for me a year from now; meanwhile, I may feel cheated because I didn’t get more immediate value, and if you forget a year from now to complete the trade, I might be forever upset. And so on.
I am assuming that you’ve encountered many of these kinds of reciprocity circumstances in your lifetime. You might not have realized at the time they were reciprocity situations. We often fall into them and aren’t overtly aware of it.
One of the favorite examples about reciprocity in our daily lives involves the seemingly simple act of a waiter or waitress getting a tip after having served a meal. Studies show that if the server brings out the check and includes a mint on the tray holding the check, this has a tendency to increase the amount of the tip. The people that have eaten the meal and are getting ready to pay will feel as though they owe some kind of reciprocity due to the mint being there on the tray. Research indicates that the tip will definitely go up by a modest amount as a result of the act of providing the mint.
A savvy waiter or waitress can further exploit this reciprocity effect. If they look you in the eye and say that the mint was brought out just for you and your guests, this boosts the tip even more so. The rule of reciprocity comes to play since the value of the aspect being given has gone up, namely it was at first just any old mint and now it is a special mint just for you all, and thus the trade in kind by you is going to increase to match somewhat to the increase in value of the offering. The timing involved is crucial too, in that if the mint was given earlier in the meal, it would not have as great an impact as coming just at the time that the payment is going to be made.
As mentioned, reciprocity doesn’t work on everyone in the same way.
The mint trick might not work on you, supposing you hate mints or you like them but perceive it of little value. Or, if the waiter or waitress has irked you the entire meal, it is unlikely that the mint at the end is going to dig them out of a hole. In fact, sometimes when someone tries the reciprocity trick, it can backfire on them. Upon seeing the mint and the server smiling at you, if you are already ticked-off about the meal and the service, it could actually cause you to go ballistic and decide to leave no tip or maybe ask for the manager and complain.
Here’s a recap then about the reciprocity notion:
- Reciprocity is a social norm of tremendous power that seems to universally exist
- Often fall into a reciprocity and don’t know it
- Usually a positive action needs to be traded for another in kind
- Usually a negative action needs to be traded for another in kind
- An imbalance in the perceived trades can mar the arrangement
- Trades can be services or products or combinations thereof
- Time can be a factor as to immediate, short-term, or long-term
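As a rough illustration of the recap above, the properties of a reciprocity arrangement (who gave, perceived value, positive versus negative action, and timing) can be sketched as a simple ledger. This is a minimal, hypothetical model of my own devising, not any established formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Trade:
    giver: str               # who provided the item or service
    value: float             # value as perceived by the receiver
    positive: bool = True    # positive action (a favor) vs. negative (retaliation)
    delay_days: int = 0      # immediate (0), short-term, or long-term repayment

@dataclass
class ReciprocityLedger:
    trades: list = field(default_factory=list)

    def record(self, trade: Trade) -> None:
        self.trades.append(trade)

    def imbalance(self, party_a: str, party_b: str) -> float:
        # Positive result: party_b still "owes" party_a; negative: the reverse.
        balance = 0.0
        for t in self.trades:
            sign = 1 if t.positive else -1
            if t.giver == party_a:
                balance += sign * t.value
            elif t.giver == party_b:
                balance -= sign * t.value
        return balance

# The painting-versus-worn-out-shoes example: an uneven trade.
ledger = ReciprocityLedger()
ledger.record(Trade(giver="me", value=5.0))    # I hang your painting
ledger.record(Trade(giver="you", value=2.0))   # you hand me lousy old shoes
print(ledger.imbalance("me", "you"))           # → 3.0, so you still owe me
```

Note that the ledger captures the key subtlety discussed earlier: the imbalance depends on *perceived* value, so the same pair of trades can look even to one party and uneven to the other.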
AI Autonomous Cars And Social Reciprocity
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI will be the interaction with the human occupants of the self-driving car, and as such, the AI should be crafted to leverage reciprocity.
One of the areas of open research and discussion involves the nature of the interaction between the AI of a self-driving car and the human occupants that will be using the self-driving car. Some AI developers with a narrow view seem to think that all that the interaction consists of would be the human occupants saying to drive them to the store or to home, and that’s it.
This is a naive view.
The human occupants are going to want to have the AI much abler to carry on a conversation.
For my article about natural language processing and AI self-driving cars, see: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/
For explanations capabilities of AI for the human occupants, see my article: https://aitrends.com/selfdrivingcars/explanation-ai-machine-learning-for-ai-self-driving-cars/
For a kind of reciprocity of the behavior of AI self-driving cars toward other cars and car drivers, see my article about the Prisoner’s Dilemma or also known as the tit-for-tat: https://aitrends.com/selfdrivingcars/tit-for-tat-and-ai-self-driving-cars/
Let’s consider an example of social reciprocity involving a passenger and driver dialogue.
You get into the AI self-driving car and decide you want to go to work.
Almost as though you are giving an instruction to a GPS, you merely indicate the address of work, and the self-driving car then proceeds to drive you there. Suppose that during the journey, you want to get some coffee, doing so before you get to work. You would want to convey this interim destination to the AI.
Before the days of Alexa and Siri, we might have all accepted the idea that we might need to enter such instructions into an on-board computer tablet, but I think we are now expecting to be able to have a more natural audio interaction with the AI system. If you’ve gone to the trouble to buy an expensive and shiny AI self-driving car, you certainly would hope that some effort went into making the system interactive with the human occupants.
A barebones “conversational” AI might be this:
You: Take me to work.
AI self-driving car: Okay
What does the word “Okay” mean in this situation? Does it mean that the AI is acknowledging that you have said you want to go to work, but that it doesn’t necessarily mean that the AI should immediately do so. Or, maybe it means that the AI is going to immediately start the self-driving car and drive you to work. Or, perhaps the AI misunderstood what you said and thought you said “I hate work” and therefore the AI is merely saying “Okay” as a placeholder word to seem responsive.
Let’s try this again:
You: Take me to work.
AI self-driving car: Starting to your work now.
This is a little better since the AI has echoed back to you that you want to go to work, and furthermore it is stating when this action will occur, specifically that it will occur right away or now. This seems somewhat flat though. If you were conversing with a human chauffeur, they probably would be more embellishing in what they might say.
Here’s another try:
You: Take me to work.
AI self-driving car: Good morning, I will head to your work at 1234 Lauren Street. Hope you will have a pleasant day at work!
This provides more specific feedback, including the work address, which helps to confirm that you really intended to go to work. It might be that you said the word “work” out of habit and meant to say “home” – by hearing the address it might cause you to realize that you had said the wrong word. The part of the message that says “good morning” and the other part that says “have a pleasant day” are added embellishments that presumably give a softer feeling to the messaging and makes things seem less robotic.
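The progression from a terse “Okay” to an embellished confirmation can be sketched as a set of response templates. This is purely illustrative (the template names and wording are my own, not from any actual self-driving car system):

```python
# Hypothetical response templates, from terse to embellished.
TEMPLATES = {
    "terse": "Okay",
    "echo": "Starting to your {destination} now.",
    "embellished": ("Good morning, I will head to your {destination} "
                    "at {address}. Hope you will have a pleasant day!"),
}

def compose_response(style: str, destination: str, address: str = "") -> str:
    """Fill in the chosen template with the parsed destination details."""
    return TEMPLATES[style].format(destination=destination, address=address)

print(compose_response("terse", "work"))
print(compose_response("embellished", "work", "1234 Lauren Street"))
```

The design point of the “embellished” template is that echoing the address back turns the reply into a confirmation step, catching cases where the occupant said “work” but meant “home.”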
One criticism of having the AI utter “good morning” and “have a pleasant day” is that it implies perhaps that the AI actually means those things.
When I speak those words to you, you assume that I as a human have measured out those words and that I presumably know what it means to have a good morning, and so with my knowledge about the nature of mornings, I am genuinely hoping that you have a good one. If you see the words “good morning” written on a poster, you don’t consider that the poster knows anything about the meaning of those words. When the AI system speaks those words, you are likely to be “fooled” into thinking that the AI system “understands” the nature of mornings and is basing those words on a sense of the world.
But, the AI of today is more akin to the poster, it is merely showcasing those words and does not yet (at least) comprehend the true meaning of those words.
Do we want the AI to seem to be more aware than it really is?
That’s an important question. If the human occupants believe that the AI has some form of human awareness and knowledge, the human occupant might get themselves into a pickle by trying to converse with the AI system. Suppose the human starts to suffer a heart attack, and believes that the AI has human-like understanding, and so the human says “help, I’m having a heart attack” – if you said this to another adult, the human adult would likely realize that you are in trouble, they might call 911, they might try to aid you, etc.
The AI of today would not know what you’ve said per se. You might have been misled into thinking that it would. If you believed that the AI was not so capable as a human, you might instead have uttered “take me to the nearest hospital” which then hopefully is similar to “take me to work” in that the system can parse the words and realize that it is a driving instruction. The AI would presumably then alter the driving path and instead drive the self-driving car to a nearby hospital.
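The gap between “take me to the nearest hospital” (a parseable driving instruction) and “help, I’m having a heart attack” (not one) can be made concrete with a toy pattern-matching parser. The patterns here are hypothetical, a minimal sketch of the kind of narrow command grammar today’s systems rely on:

```python
import re

# Hypothetical command patterns an on-board NLP system might recognize.
DRIVE_PATTERNS = [
    re.compile(r"take me to (?:the nearest )?(?P<place>.+)", re.IGNORECASE),
    re.compile(r"drive (?:me )?to (?P<place>.+)", re.IGNORECASE),
]

def parse_driving_instruction(utterance: str):
    """Return the requested destination, or None if this isn't a driving command."""
    for pattern in DRIVE_PATTERNS:
        match = pattern.match(utterance.strip())
        if match:
            return match.group("place").rstrip(".!")
    return None

print(parse_driving_instruction("Take me to work"))                  # → work
print(parse_driving_instruction("Take me to the nearest hospital"))  # → hospital
print(parse_driving_instruction("help, I'm having a heart attack"))  # → None
```

The last case is the crux: the utterance falls outside the grammar, so the system extracts nothing, which is exactly why overstating the AI’s conversational ability can mislead occupants in an emergency.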
This argument also comes up when AI developers put other human-like speaking mannerisms into their systems. Humans for example will often say “you know” or have pauses in their speaking or say “uh” while speaking. To-date, most of the AI natural language processing systems have generated spoken words that are streamlined and omit those human mannerisms. Lately, some AI developers have been including them. This gives the AI spoken words a closer to human-like feel.
Here’s an example:
You: Take me to work.
AI self-driving car: Uh, Okay, sounds good… will do so.
The added elements to the response are going to suggest to the human listener that the AI is composing speech in a manner similar to a human. This anthropomorphizes the AI system, which as I’ve mentioned before can be good if you believe that’s a desirable way to bond with the human occupants, but it can also be bad in that it can mislead as to what the AI system can and cannot really do and understand.
The joint interaction in a self-driving car between the human occupants and the AI are potentially going to involve life and death matters.
This is an important point to keep in mind. If you are simply developing an online search engine that verbally interacts with humans or a system that can aid them in ordering toothbrushes and towels, whether the human believes that the AI is actually comprehending them is not particularly life threatening. In the case of a self-driving car, the human might need to offer insight to the AI about the driving task, or the AI might need to get input from the human about the driving task, any of which if improperly undertaken could have dire consequences.
For my article about the framework for AI self-driving cars, see: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For my article about common sense reasoning and AI self-driving cars, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/
Digging Into The AI-Human Social Reciprocity
This then takes us to the reciprocity aspects.
Should the AI system of the self-driving car carry on conversations that also invoke the use of reciprocity?
If you are of a mind that the AI should not over-inflate what it can actually comprehend, you’d probably say that it should not also imbue reciprocity.
If you are seeking to have the AI appear as sentient as it can, the use of reciprocity would be a core element to achieve that sense, since it is so common among humans and part-and-parcel of our daily lives.
Here’s an example:
You: Take me to work.
AI self-driving car: Happy to take you to work. Please make sure to keep your seat belts on.
You likely can detect the subtle reciprocity involved in the dialogue. The AI implies that it is doing something for the human occupant, taking them to work, and the punch-up is that the AI is “happy” to do so. This creates a human-like aspect as though the AI was doing this as a service for you. The trade is that in return, you are being asked to keep your seat belts on.
If we strip away the reciprocity, we’d get something like this instead:
You: Take me to work.
AI self-driving car: Okay. Keep your seat belts on.
This version is similar to the earlier example about the drive safely message. We’ve now got a plainer and straightforward kind of instruction or maybe even an edict, which was the same with the “drive safely” message. The “happy to take you to work” was more akin to the “thank you for driving safely” message that created a kind of quid-pro-quo element to the dialogue.
If we make the messaging more along the negative side, it might be something like this:
You: Take me to work.
AI self-driving car: Okay. Keep your seat belts on or I’ll stop the car and you won’t get to work on time.
Whoa! This sounds like some kind of fierce AI that is threatening you.
There are AI developers that would argue that this message is actually better than the others because it makes abundantly clear the adverse consequence if the human does not wear their seat belts.
Yes, it’s true that it does spell out the consequences, but it also perhaps sets up a “relationship” with the human occupant that’s going to be an angry one. It sets the tone in a manner that might cause the human to consider in what manner they want to respond back to the AI (angrily!).
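The three seat-belt phrasings just contrasted (reciprocal, neutral, threatening) amount to a tone-selection choice layered on top of the same underlying instruction. A minimal sketch, with hypothetical template names and wording drawn from the dialogue examples above:

```python
# Hypothetical seat-belt prompts in the three tones discussed above.
SEATBELT_PROMPTS = {
    "reciprocal": ("Happy to take you to {dest}. "
                   "Please make sure to keep your seat belts on."),
    "neutral": "Okay. Keep your seat belts on.",
    "threatening": ("Okay. Keep your seat belts on or I'll stop the car "
                    "and you won't get to {dest} on time."),
}

def seatbelt_prompt(tone: str, dest: str) -> str:
    """Render the same instruction in the chosen social register."""
    return SEATBELT_PROMPTS[tone].format(dest=dest)

print(seatbelt_prompt("reciprocal", "work"))
```

Separating the instruction from its tone like this makes the design trade-off explicit: the safety content is identical, and only the social framing, and hence the occupant’s likely reaction, changes.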
If the AI system is intended to interact with the human occupants in a near-natural way, the role of reciprocity needs to be considered.
It is a common means of human to human interaction. Likewise, the AI self-driving car will be undertaking the driving task and some kind of give-and-take with the human occupants is likely to occur.
We believe that as AI Natural Language Processing (NLP) capabilities get better, incorporating reciprocity will further enhance the seeming natural part of natural language processing.
It is prudent though to be cautious in overstepping what can be achieved and the life-and-death consequences of human and AI interaction in a self-driving car context needs to be kept in mind.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]