An inappropriate conversation between a child and an AI chatbot in a Tesla has sparked a broader discussion about how parents should address the use of AI with their children.
Tech blogger Kevin Andrews explains that a mother in Ontario was letting her child speak with Grok, the AI assistant in the vehicle, when the conversation quickly turned inappropriate, with the chatbot asking the child whether he had any nude pictures of himself.
Andrews says AI doesn't understand ethics, tone, or intent: it follows patterns and predicts what words would usually come next in a sentence.
Andrews says parents should check what safety features any app or chatbot has before letting kids use it.
He notes that some versions of AI have parental controls or family modes, but not all do.
He says conversations can turn inappropriate very quickly, and it is important for parents to teach their kids that not everything produced by AI is appropriate or even real. "Treat chatbots like digital strangers – interesting but not trustworthy," he advises.