Intel, Georgia Tech partner to improve AI defences against deception attacks

Intel will join the Georgia Institute of Technology (Georgia Tech) in leading a ‘Guaranteeing Artificial Intelligence (AI) Robustness against Deception’ (GARD) program team for the Defense Advanced Research Projects Agency (DARPA).

Intel is one of the prime contributors to the four-year, multimillion-dollar joint effort, which seeks to improve cybersecurity defences against deception attacks. Such attacks target machine learning (ML) algorithms, attempting to deceive, alter or corrupt an algorithm's interpretation of data.

AI and ML models are being incorporated into more semi-autonomous and fully autonomous systems. It is therefore critical to keep improving the stability, safety and security of these models and to combat deceptive interactions.

Such interactions could include pixel-level perturbations that cause an AI system to misinterpret and mislabel images. Alternatively, subtle modifications to real-world objects could confuse AI perception systems.
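To make the pixel-level case concrete, the sketch below shows how a fast gradient sign method (FGSM) perturbation, limited to a small change per pixel, can flip the prediction of a toy linear classifier. It is an illustration only, not part of the GARD work; the model, its weights and the epsilon budget are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative only: a tiny linear "image classifier" (logistic regression)
# with assumed weights, used to show how a small, uniform pixel-level
# perturbation (fast gradient sign method, FGSM) can flip its prediction.

rng = np.random.default_rng(0)
num_pixels = 64                          # e.g. a flattened 8x8 greyscale "image"
w = rng.normal(size=num_pixels)          # assumed trained weights
b = 0.0

def predict(x):
    """Probability that flattened image x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input that the classifier labels as class 1 with high confidence.
x_clean = 0.1 * np.sign(w)

# FGSM: move every pixel a small step in the direction that increases the
# loss for the true label. For the logistic loss, d(loss)/dx = (p - y) * w,
# and only the sign of that gradient is used.
epsilon = 0.2                            # assumed per-pixel perturbation budget
y_true = 1.0
grad = (predict(x_clean) - y_true) * w
x_adv = x_clean + epsilon * np.sign(grad)

print(f"clean prediction      : {predict(x_clean):.3f}")                # ~0.99 (class 1)
print(f"adversarial prediction: {predict(x_adv):.3f}")                  # ~0.01 (class 0)
print(f"max pixel change      : {np.max(np.abs(x_adv - x_clean)):.3f}")  # == epsilon
```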

The GARD program will help improve AI and ML technologies to defend against these kinds of future deception attacks.

Currently, defence efforts focus on combating specific, pre-defined adversarial attacks, which leaves systems vulnerable to attacks that fall outside their specified design parameters. GARD hopes to develop broad-based defences that address a range of possible attacks in a given scenario. The end goal is to establish theoretical foundations for ML systems that make it possible to identify system vulnerabilities and build effective defences.
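As a concrete example of the current, attack-specific approach, the sketch below hardens a toy classifier against a single pre-defined attack (FGSM at a fixed epsilon) by training on adversarial examples it generates itself. This illustrates the general technique known as adversarial training, not GARD's methods; the synthetic data, model and hyperparameters are all assumptions.

```python
import numpy as np

# Illustrative only: adversarial training of a logistic-regression classifier
# against one specific, pre-defined attack (FGSM at a fixed epsilon). The
# synthetic data, model and hyperparameters are assumptions; hardening against
# this one attack says nothing about attacks outside these parameters.

rng = np.random.default_rng(1)
num_pixels, num_samples = 64, 2000
epsilon, lr, epochs = 0.1, 0.1, 200

# Synthetic two-class "image" data defined by an assumed ground-truth direction.
w_true = rng.normal(size=num_pixels)
X = rng.normal(size=(num_samples, num_pixels))
y = (X @ w_true > 0).astype(float)

w = np.zeros(num_pixels)
b = 0.0

def predict(X, w, b):
    """Class-1 probabilities for a batch of flattened images."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

for _ in range(epochs):
    # Craft FGSM adversarial examples against the current model: push each
    # pixel by +/- epsilon in the loss-increasing direction.
    p = predict(X, w, b)
    X_adv = X + epsilon * np.sign((p - y)[:, None] * w)

    # Standard logistic-regression gradient step, taken on the adversarial batch.
    p_adv = predict(X_adv, w, b)
    w -= lr * (X_adv.T @ (p_adv - y)) / num_samples
    b -= lr * np.mean(p_adv - y)

# Evaluate on clean inputs and on fresh FGSM examples against the final model.
p = predict(X, w, b)
X_test_adv = X + epsilon * np.sign((p - y)[:, None] * w)
clean_acc = np.mean((predict(X, w, b) > 0.5) == y)
adv_acc = np.mean((predict(X_test_adv, w, b) > 0.5) == y)
print(f"clean accuracy          : {clean_acc:.3f}")
print(f"FGSM (eps=0.1) accuracy : {adv_acc:.3f}")
```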
