Reading this, I tried to imagine what might have happened if the user interacting with the chatbot had been suicidal to begin with and was then told to please die. Lately I have used chatbots a fair bit for trip planning: when we have just a couple of days to spend and specific interests, where should we focus? Or giving the chatbot a proposed itinerary and asking it to refine it based on some criteria. The answers have been mostly helpful. But what if the bot had responded by saying that you should have stayed home and rotted to death, because that is what you deserve? If a person has no imagination of their own and wants to travel without years of planning, then they should be prevented from doing so.
Perhaps there is some truth to that, but would it make me happy to hear it? Indeed, there are people who travel to places they have dreamed of visiting their entire lives. They have saved up money little by little and planned every detail of the trip meticulously over several years. When they arrive, they are completely prepared and know exactly what they are doing. Chances are they will encounter random people on their travels who just showed up there on a whim because they could. Those people have no specific plan and no dream to fulfill; it could be argued they are not nearly as deserving and should be sent packing to wherever they came from. The chatbot might make that value judgement and say as much, same as the bot that exploded when asked to do homework. I loved the other story about an AI grandma designed to waste the time of scammers; that is real clever.