New cybersecurity software challenges AI’s limits

A team of researchers at the University of Surrey has developed new software that they claim can assess the true level of knowledge possessed by artificial intelligence (AI) systems. The software is designed to verify the accuracy and depth of an AI system's understanding of a specific subject, which is crucial for reliable performance in industries such as healthcare and finance.

The team claims that the software can identify gaps in an AI system's knowledge and suggest areas for improvement. It could prove to be an important breakthrough in verification methods for AI-rich programs and decision-making, making AI safer to deploy.

The researchers have also defined a "program-epistemic" logic in which programs can specify their own level of knowledge, including reasoning about facts that only become true after they and other processes finish running. The work contributes new methods for automatically verifying epistemic properties of AI-centered programs, and for analyzing concrete programs (over an arbitrary first-order domain) against requirements richer than previously possible.
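To give a flavor of what such a logic can express, here is a schematic program-epistemic formula (an illustrative sketch; the paper's own syntax may differ):

    % "Agent a knows that, after program P terminates, phi holds."
    \[ K_{a}\,[P]\,\varphi \]
    % A concrete instance over a first-order domain: after running the
    % (hypothetical) program "query", the observer knows that the
    % secret integer x is positive.
    \[ K_{\mathit{obs}}\,[\mathit{query}]\,(x > 0) \]

Formulas of this shape combine a knowledge operator (K) from classical epistemic logic with a program modality ([P]), which is what lets a verifier ask what an agent will know once a concrete program has run.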

The software developed by the team can verify how much information an AI system has gained from an organization's digital database. It can also determine whether the AI system is capable of exploiting flaws in software code.
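As a rough illustration of how such a check can be mechanized, the sketch below models knowledge as "the fact holds in every secret state consistent with what the observer saw" and hands the question to an SMT solver. This is a minimal toy, not the Surrey team's tool: the one-line "program", the variable names, and the use of the z3-solver Python package are all assumptions made for the example.

    # Minimal sketch (NOT the Surrey tool): check whether an observer
    # who sees only a program's output "knows" a fact about its secret
    # input. Knowledge = the fact holds for every secret consistent
    # with the observation. Requires: pip install z3-solver
    from z3 import Int, Solver, And, unsat

    secret = Int("secret")   # hidden input, assumed to lie in [0, 9]
    out = Int("out")         # the only value the observer can see

    # Toy program under analysis, as a logical relation: out = secret mod 2
    program = out == secret % 2

    def observer_knows_positive(observed_out: int) -> bool:
        """True iff every secret consistent with the observation
        satisfies secret > 0, i.e. the observer knows it."""
        s = Solver()
        s.add(And(secret >= 0, secret <= 9))  # domain assumption
        s.add(program, out == observed_out)   # what was actually observed
        s.add(secret <= 0)                    # search for a counterexample
        return s.check() == unsat             # no counterexample => known

    print(observer_knows_positive(1))  # True: an odd output rules out secret = 0
    print(observer_knows_positive(0))  # False: secret could still be 0

If the solver finds no state that is consistent with the observation yet violates the fact, the observer is deemed to know it; a satisfiable query instead yields a concrete state showing the knowledge claim fails.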

The software can be used as part of a company's online security protocol to ensure that AI systems neither access sensitive data nor exploit flaws in software code. It is considered a significant step towards the safe and responsible deployment of generative AI models.

Dr Solofomampionona Fortunat Rajaona, the lead author of the paper, said that the ability to verify what an AI has learned would give organizations the confidence to safely unleash the power of AI in secure settings. He also noted that in many applications, such as self-driving cars on highways or hospital robots, AI systems interact with each other or with humans, and that working out what an intelligent AI system knows is a long-standing problem which the team took years to solve.

The paper also discusses the challenges of evaluating knowledge-centric properties in AI-based decision-making, noting that the logic of knowledge, or epistemic logic, has been well explored in computer science since Hintikka. The researchers created new methods for analyzing how computer programs reason about knowledge, allowing facts to be established not only after a program performs an action but also before it does so.
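One standard route to that before-and-after reasoning, sketched here in the spirit of weakest-precondition calculi (the paper's actual construction may differ), is to push a post-execution knowledge claim backwards through the program, yielding a condition on the initial state that an automatic solver can then check:

    % Illustrative only: "after P runs, agent a knows phi" is reduced
    % to a check on the state before P runs, via a weakest-precondition
    % style transformer wp.
    \[ \mathit{init} \models [P]\,K_{a}\varphi
       \quad\text{iff}\quad
       \mathit{init} \models \mathrm{wp}(P,\,K_{a}\varphi) \]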

Martha Lane Fox, the British tech pioneer, has called for a more rational discussion of AI's impact and warned against over-hyping it. While acknowledging that frameworks around AI are necessary, she advocates a more measured approach from companies developing AI technology.

Ms Lane Fox believes AI presents opportunities for society and businesses, but emphasizes that it should be adopted in an ethical and sustainable way. Separately, Elon Musk, the chief executive of Tesla and owner of Twitter, joined other tech leaders in signing an open letter urging AI labs to temporarily pause the development of powerful AI systems for at least six months. The letter expressed concern that AI technology with human-competitive intelligence could pose significant risks to society.

The letter proposes that AI labs develop safety protocols, overseen by an independent panel, before training AI systems more powerful than GPT-4. It also calls for new regulators, oversight, public funding for AI safety research, and liability for AI-caused harm.
