Researchers from UCLA, UCSD, Peking University, and the Beijing Institute for General Artificial Intelligence (BIGAI) have recently developed a new AI system that can explain its decision-making processes to users.

May 11, 2023

A team of researchers has created a new AI system that can explain its decision-making procedures to users who aren't computer scientists. The system may mark a step toward AI that is more trustworthy and understandable. The field of explainable AI (XAI) aims to build collaborative trust between robots and humans.

The goal of XAI is to strengthen collaboration and trust between humans and robots, and the DARPA XAI program was a major catalyst for the field. The team looked into how explainable systems affect user perceptions of, and trust in, AI during human-machine interactions. Most XAI research focuses on classification models, demonstrating to the user how an AI system reaches its decisions. This "passive machine, active user" paradigm requires users to constantly check in with the XAI system to grasp the reasoning behind its judgments.

The researchers developed a system that aims to achieve what is referred to as "value alignment": the robot or machine infers the reasons and values behind a human user's actions or conclusions. This can greatly improve communication and interactivity between humans and robots.

To train and evaluate their XAI system, the researchers created a game called "scout exploration," in which a human and a team of robots cooperate to complete missions. One of the game's most crucial elements is the requirement that humans and robots align their so-called "value functions."

According to the researchers, the robots in the game can sense their surroundings, simulating real-world applications in which robots are meant to operate autonomously and reduce the need for human intervention. Because the human user cannot interact with the environment directly, the user is given a specific value function, represented by the weighting of a few parameters.
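
As a rough sketch only (the parameter names, weights, and linear form below are illustrative assumptions, not taken from the paper), such a value function can be thought of as a weighted combination of a few mission parameters:

```python
import numpy as np

# Hypothetical mission parameters the user might weigh
# (names are illustrative; the paper's actual parameters may differ).
FEATURES = ["time_spent", "area_explored", "resources_collected", "risk_taken"]

# The user's true value function: one weight per parameter.
true_weights = np.array([-0.2, 0.5, 0.4, -0.3])

def mission_value(outcome: np.ndarray, weights: np.ndarray) -> float:
    """Score a mission outcome as a weighted sum of its parameters."""
    return float(weights @ outcome)
```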

In the scout exploration game, the robot team does not have access to the value function given to the human user, so it must infer it. Because this value function is difficult to explain and communicate directly, the robots and the human must deduce each other's values in order to finish the mission.

To complete tasks in "scout exploration," the robot team must essentially learn the human user's value function from the human's feedback alone. As human users become familiar with the robots' current value estimates, they can provide feedback that helps the robots improve and steers them toward the correct values.
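
A minimal sketch of this feedback loop, assuming the robots keep a running estimate of the user's weights and nudge it after each accept/reject signal (a deliberately simplified gradient-style update, not the paper's actual inference procedure):

```python
import numpy as np

def update_estimate(est_weights: np.ndarray,
                    outcome: np.ndarray,
                    feedback: int,
                    lr: float = 0.1) -> np.ndarray:
    """Move the estimated weights toward (or away from) an outcome.

    feedback is +1 when the user approves of a proposed plan and -1
    when they reject it; repeated feedback pulls the estimate toward
    the user's true value function.
    """
    return est_weights + lr * feedback * outcome
```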

In early evaluations, the system demonstrated remarkable results, achieving on-the-fly, interactive value alignment in the scout exploration game. The team found that the robots aligned with the human user's value function as early as 25% of the way into the game, while users could accurately perceive the robots' value functions by the halfway point.

The user's true values serve as the anchor for a bidirectional value alignment formed by the convergence of (i) the robots' values to the user's true values and (ii) the user's estimate of the robots' values to the robots' current values. The researchers' approach highlights the need to build intelligent machines that learn and understand our intentions and values through interaction, which is essential to avoiding the dystopian scenarios depicted in science fiction novels and on the big screen.
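
One way to picture this convergence (a toy metric for illustration, not the paper's own measure): track the distance from the robots' estimate to the user's true weights, and from the user's estimate to the robots' current weights, and watch both shrink as the game progresses:

```python
import numpy as np

def alignment_error(estimate: np.ndarray, target: np.ndarray) -> float:
    """Distance between two value functions, normalized so that only
    the relative weighting of the parameters matters."""
    e = estimate / np.linalg.norm(estimate)
    t = target / np.linalg.norm(target)
    return float(np.linalg.norm(e - t))

# Bidirectional alignment, anchored on the user's true values:
#   robot_error = alignment_error(robot_estimate, user_true_weights)
#   user_error  = alignment_error(user_estimate,  robot_current_weights)
# In the reported experiments, the robot-side error dropped first
# (about 25% into the game), the user-side error by the halfway point.
```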

XAI systems involve intelligent robots or assistants that actively interact with people, share their reasoning processes, and improve their performance in response to user feedback. This latest study makes a substantial contribution to the growing body of work on creating more intelligible AI.

Value alignment is the first step toward general human-robot collaboration. In this work, value alignment takes place within the framework of a single task; in the long run, however, the researchers anticipate a single home robot assisting us with many daily tasks, rather than our purchasing numerous robots that can each perform only one.

The research team also wants to apply their framework to other tasks and to physical robots in subsequent studies. Beyond values, they believe a promising route would be to align other components of the mental models held by humans and robots. With their new explainable AI paradigm, the researchers hope to improve human-machine collaboration on a variety of tasks. They also anticipate that this strategy will boost human confidence in AI-based systems such as virtual assistants and robots.

This article is written as a research summary by Marktechpost Staff based on the research paper 'In situ bidirectional human-robot value alignment'. All credit for this research goes to the researchers on this project. Check out the paper, project page, and reference article.

I am a consulting intern at MarktechPost. I am majoring in Mechanical Engineering at IIT Kanpur. My interests lie in the fields of machining and robotics. Besides, I have a keen interest in AI, ML, DL, and related areas. I am a tech enthusiast passionate about new technologies and their real-life uses.
