Learning XAI: Course Summary



Although Explainable AI (XAI) is a relatively new field within AI, its impact is proving critical to unraveling the current black-box state of LLMs and other ML algorithms.

Summary

Here is a summary of this excellent course about Explainable AI:

  • Explainable AI (XAI) is a technology that aims to make the decisions and outputs of artificial intelligence systems more understandable and trustworthy for humans.
  • XAI can enable better collaboration and communication between humans and machines, especially for complex and critical problems that require both speed and accuracy.
  • XAI can also enhance the user experience, provide more relevant and accurate recommendations, and allow the system to learn and improve faster and more accurately than non-explainable AI systems.
  • XAI is still a very new and developing technology, and it may take some time before it is widely adopted. Its trajectory can be put in perspective by comparing it with technologies that were adopted slowly, such as electricity and the telephone, and with technologies that never caught on, such as Iridium satellite phones.
  • XAI will be most beneficial for technology-heavy applications that are mission-critical, such as finding a cure for cancer, stopping global warming, or creating world peace.
  • XAI is also supported by DARPA, which is investing billions of dollars into XAI research and development with a large number of partners. DARPA's website describes its motivations, the current state of the art of explainable technologies, and its partner companies and universities.
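
To make the "understandable and trustworthy outputs" idea from the first bullet concrete, here is a minimal toy sketch in Python. It is not from the course, and all names and thresholds are hypothetical: the point is simply that an explainable system returns its reasoning alongside its decision, rather than a bare black-box label.

```python
# Toy sketch of explainable output: each decision carries the
# human-readable rule that produced it, not just a bare label.
# All names and thresholds are illustrative, not from the course.

def explainable_loan_decision(income, debt_ratio):
    """Return (decision, explanation) instead of an opaque label."""
    if debt_ratio > 0.5:
        return "deny", f"debt ratio {debt_ratio:.0%} exceeds the 50% limit"
    if income < 30_000:
        return "deny", f"income {income} is below the 30,000 threshold"
    return "approve", "debt ratio and income are within acceptable bounds"

decision, why = explainable_loan_decision(income=45_000, debt_ratio=0.62)
print(decision, "-", why)  # deny - debt ratio 62% exceeds the 50% limit
```

A black-box model would return only "deny"; the paired explanation is what lets a human challenge the assumption (here, the 50% limit) and collaborate with the system, as the course's citations describe.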

Citations

Here are a few citations from the course that I think capture this relatively new AI field quite well: 

"By working together, humans and machines could be more flexible, faster, scale better, and make better decisions than either alone. The benefit of explainable AI is that it will allow humans to better understand the rationale and have a deeper trust in the outputs and decisions made by computers, and thus allow for a tighter collaboration between humans and machines. XAI will help to increase transparency and increase collaboration of current black box AI systems. The problem is not that we are afraid that machines will intentionally deceive us or have any ill will toward us. It is that we cannot know if the decisions and recommendations made by the machines are based on the right data, the right set of reasoning, or the correct set of assumptions without deeper insight into the decision-making process. XAI will help humans to have more confidence in the outputs of machines. It will also help humans challenge the assumptions of machines to come up with better results. For relatively simple problems, this is not as important. However, humans and machines will need to collaborate extremely closely together to solve massively complex problems, such as trying to find a cure for cancer, stop global warming, ensure adequate supplies of energy, water, and natural resources, eliminate hunger and poverty, or create world peace."

"Take, for example, a road trip that you're planning. The maps application can become your expert transit advisor, suggesting spots that would be appealing to you, reminding you to buy that souvenir for your neighbor who's watching your dog for you, or the nearest, cleanest bathroom. You are able to understand the AI much better because of the explainability feature, thus, with XAI, your experience becomes much richer, you get the most relevant and accurate recommendations, and the system learns and improves much faster and more accurately than today's nonexplainable AI systems."

"Although there is currently a lot of excitement about XAI, it is important to note that it is still very early in its development as a technology. In this video, we'll discuss the current state of XAI and put it into perspective compared to how other technologies were adopted in the past. To give you some perspective, let's take a look at how long it took for some new technologies to get adopted. Electricity for example was discovered in 1873 and it took 46 years before 1/4 of the population began to use it. The telephone, invented three years after electricity, took 35 years for 1/4 penetration. Of course, there are a huge number of technologies that had high expectations but for a variety of reasons were never successfully adopted. A few examples are Iridium, the satellite phone, Betamax, LaserDisc, the first generation of electric vehicles, and token rings if any of those are familiar to you.

AI itself took a while. The Logic Theorist, which is thought to be the first artificial intelligence program was developed in 1956. The field of AI expanded quickly until about 1974 when the lack of computational power led to very slow growth for the next 10 years. AI had a small resurgence from government funding in the 1980s and continued to build momentum from private funding started in the 1990s. Today with a dramatic reduction in the cost of computing power, storage, and sensors, we are in the golden age of AI."

"XAI will be most important to technology-heavy applications that are mission critical. The weight of being a tech application is heavier than that of being mission-critical. In other words, even non-mission-critical applications will benefit from XAI if they are technology heavy."

"You can also learn a lot more from DARPA, which is investing billions of dollars into XAI R and D with a large number of partners. They have posted information on their website about their motivations, as well as the current state of the art of explainable technologies. You will also be able to see, on the DARPA site, who their partner companies and universities are."

Source:

Learning XAI: Explainable Artificial Intelligence
