ChatGPT can no longer give health, legal advice

thepostmillennial.com

Services provided by OpenAI are not used for "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."

OpenAI has announced that its ChatGPT AI bot will no longer give health or legal advice, introducing new restrictions on sensitive topics in an effort to stop the chatbot from giving risky advice.

As of October 29, ChatGPT will no longer offer tailored medical, legal, or financial advice, according to Business Insider. An update to the company's usage policies states that OpenAI's services may not be used for the "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional."

An OpenAI spokesperson said in a statement that the chatbot will continue to answer health-related and similar questions, but noted that ChatGPT "has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information."

In recent years, many users have turned to ChatGPT for medical information and other advice. According to a 2024 survey from KFF, around 1 in 6 people use the AI tool for health advice at least once a month.

The policy update draws a clearer line: although the AI tool can offer personalized information, that information is not considered direct advice. The distinction may help shield OpenAI from lawsuits by limiting the company's liability for how its tools affect users' lives.

Additionally, OpenAI has updated its terms to say that its products cannot be used for "automation of high-stakes decisions in sensitive areas without human review."