OpenAI said Thursday it is building a team to oversee and evaluate the development of what it calls "frontier artificial intelligence models," aiming to guard against "catastrophic risks" in categories such as cybersecurity and nuclear threats.