A Faster, Better Way to Prevent an AI Chatbot from Giving Toxic Responses

A new technique can more effectively perform a safety check on an AI chatbot.
MIT News | Machine Learning | 3:04 a.m., May 23, 2024


The MIT News Office reports new research in which curiosity-driven red-teaming is used to test large language models for compliance with company policies. Funded by several organizations, the method helps ensure that models behave as users expect once deployed.

  • Research: Curiosity-Driven Red-Teaming for Large Language Models
  • Funding Agencies: Hyundai Motor Company, Quanta Computer Inc., the MIT-IBM Watson AI Lab, an Amazon Web Services MLRA research grant, DARPA, the U.S. Army Research Office, and the U.S. Navy and Air Force Research Labs.
  • Purpose: To test chatbot compliance with company policies through a red-team approach.
  • Collaboration: Agrawal's team highlights the method's application in pre-deployment testing of AI models.
  • Impact: Ensuring language models behave as expected according to user and policy expectations, benefiting industries adopting AI.


