Microsoft claims this feature can fix AI hallucinations


2024-09-25 18:53:34 : Microsoft on Tuesday launched a new artificial intelligence (AI) feature that identifies and corrects instances where AI models generate incorrect information. The feature, called "Correction," is being integrated into the groundedness detection system of Azure AI Content Safety. Since it is only available through Azure, it is likely aimed at the tech giant's enterprise customers. The company is also working on other ways to reduce the occurrence of AI hallucinations. Notably, the feature can also explain why a piece of text was flagged as incorrect.

Microsoft’s “Correction” feature goes live

In a blog post, the Redmond-based tech giant detailed the new feature, which it says combats AI hallucinations, the cases in which an AI model answers a query with incorrect information without recognizing that the information is false.

The feature is available through Microsoft’s Azure service. Azure AI Content Safety includes a tool called groundedness detection, which determines whether a generated response is grounded in its source material. While the tool detects hallucinations in several different ways, the Correction capability works in a specific way.
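As a rough illustration, a minimal sketch of calling groundedness detection over REST might look like the following. The endpoint path, api-version, and field names are assumptions based on the preview API and may differ in practice; the environment variable names are placeholders.

```python
# Hypothetical sketch of calling Azure AI Content Safety groundedness detection.
# Endpoint path, api-version, and field names are assumptions based on the
# preview REST API and may differ in your deployment.
import os
import requests

ENDPOINT = os.environ["CONTENT_SAFETY_ENDPOINT"]  # e.g. https://<resource>.cognitiveservices.azure.com
API_KEY = os.environ["CONTENT_SAFETY_KEY"]

def detect_groundedness(text: str, sources: list[str]) -> dict:
    """Ask the service whether `text` is grounded in the supplied source documents."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview"
    body = {
        "domain": "Generic",          # or "Medical"
        "task": "Summarization",      # or "QnA"
        "text": text,                 # the model output to check
        "groundingSources": sources,  # the grounding documents
        "reasoning": False,
    }
    resp = requests.post(url, json=body, headers={"Ocp-Apim-Subscription-Key": API_KEY})
    resp.raise_for_status()
    return resp.json()                # e.g. ungroundedDetected, ungroundedPercentage, ungroundedDetails
```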

For Correction to work, users must connect their application to grounding documents, which are used in document summarization and Retrieval-Augmented Generation (RAG)-based question-and-answer scenarios. Once connected, users can enable the feature. Thereafter, whenever an ungrounded or incorrect sentence is generated, the feature triggers a correction request.
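Below is a hypothetical request body with the correction option turned on. The "correction" flag, the "llmResource" block, and the example query and sources are illustrative assumptions rather than confirmed API fields.

```python
# Hypothetical request body with the correction option enabled. The
# "correction" flag and "llmResource" block are assumptions modeled on the
# preview API; the exact names may differ.
correction_request = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How much does the subscription cost?"},  # assumed example query
    "text": "The subscription costs $12 per month.",           # candidate answer to check
    "groundingSources": ["The subscription costs $10 per month, billed annually."],
    "correction": True,            # triggers a rewrite when an ungrounded sentence is found
    "llmResource": {               # the Azure OpenAI deployment that performs the rewrite
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-openai-resource>.openai.azure.com",
        "azureOpenAIDeploymentName": "<your-gpt-deployment>",
    },
}
```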

Simply put, a grounding document can be understood as the reference material an AI system must stay faithful to when generating responses. It can be the source material for a single query or a larger database.

The feature then evaluates each claim against the grounding document and filters out information it finds to be erroneous. If the content is consistent with the grounding document, however, the feature may rewrite the sentence to ensure it cannot be misinterpreted.
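To make the filter-or-rewrite behavior concrete, here is a hypothetical application-side sketch. The response field names (ungroundedDetected, ungroundedDetails, correctionText) are assumptions for illustration and may not match the live API.

```python
# Hypothetical handling of a groundedness-detection response with correction
# enabled. Field names such as "ungroundedDetected" and "correctionText" are
# assumptions for illustration only.
def apply_correction(original_text: str, result: dict) -> str:
    """Return the model output with ungrounded sentences replaced or removed."""
    if not result.get("ungroundedDetected"):
        return original_text                            # everything was grounded; keep as-is
    corrected = original_text
    for detail in result.get("ungroundedDetails", []):
        bad_sentence = detail.get("text", "")
        replacement = detail.get("correctionText", "")  # empty string => filter the sentence out
        corrected = corrected.replace(bad_sentence, replacement).strip()
    return corrected
```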

Additionally, users can choose to enable reasoning when setting up the feature for the first time. With reasoning enabled, the feature adds an explanation of why it believes a piece of information is incorrect and needs to be corrected.
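A hypothetical request with reasoning enabled might look like this; the "reasoning" flag and the "reason" field in the response are assumptions based on the preview API.

```python
# Hypothetical request body with reasoning enabled, so the service explains why
# a sentence was flagged. The "reasoning" flag and the response "reason" field
# are assumptions based on the preview API.
reasoning_request = {
    "domain": "Generic",
    "task": "Summarization",
    "text": "The report was published in 2021.",
    "groundingSources": ["The report was published in March 2023."],
    "reasoning": True,             # ask for an explanation of each flagged sentence
    "llmResource": {               # reasoning also relies on an Azure OpenAI deployment
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-openai-resource>.openai.azure.com",
        "azureOpenAIDeploymentName": "<your-gpt-deployment>",
    },
}
# A flagged sentence would then carry an explanation, e.g.
# result["ungroundedDetails"][0]["reason"]  # why the claim conflicts with the source
```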

A company spokesperson told The Verge that the Correction feature uses both small language models (SLMs) and large language models (LLMs) to align the output with grounding documents. “It is important to note that groundedness detection does not solve the ‘accuracy’ issue but rather helps align generative AI output with grounding documents,” the publication quoted the spokesperson as saying.
