To answer your question - yes, it can, if the rules governing an AI chatbot are not sufficiently restrictive, or if the underlying language model learned to be more aggressive than accurate. Then again, the chatbot may not be AI-driven at all. I'd bet on poor programming over anything more nefarious.