IQT Labs releases audit report of SkyScan, an automated system that labels images of aircraft

Ryan Ashley, Senior Software Engineer / Andrea Brennen, Senior Vice President & Deputy Director / Mona Gogia, Senior Engineer / Eric Mair, Senior Engineer / Arizbeth Rojas, IQT Labs Intern

IQT Labs recently completed an AI assurance audit of SkyScan, marking our third assurance audit using the AI Ethics Framework for the Intelligence Community.

Developed at IQT Labs, SkyScan is an automated system that collects and labels images of aircraft. By simultaneously capturing ADS-B (Automatic Dependent Surveillance-Broadcast) signals via a software-controllable radio receiver and images of aircraft in flight, SkyScan can generate a labeled dataset that can be used to train computer vision models to identify various types of aircraft.
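To illustrate the core idea, here is a minimal sketch of pairing time-aligned ADS-B reports with camera frames to produce labels automatically. All names here (AdsbReport, Capture, label_captures) are hypothetical, and the logic is deliberately simplified; SkyScan's actual pipeline is more involved.

```python
from dataclasses import dataclass

@dataclass
class AdsbReport:
    icao: str           # 24-bit aircraft address from the ADS-B message
    aircraft_type: str  # type code looked up from an aircraft registry
    timestamp: float    # seconds since epoch

@dataclass
class Capture:
    image_path: str
    timestamp: float

def label_captures(captures, reports, tolerance=1.0):
    """Pair each image with the ADS-B report closest in time.

    Captures with no report within `tolerance` seconds are left
    unlabeled rather than guessed at.
    """
    labeled = []
    for cap in captures:
        best = min(reports,
                   key=lambda r: abs(r.timestamp - cap.timestamp),
                   default=None)
        if best and abs(best.timestamp - cap.timestamp) <= tolerance:
            labeled.append((cap.image_path, best.aircraft_type))
    return labeled

# Example: one frame matched to a report captured 0.4 s apart
reports = [AdsbReport("A1B2C3", "B738", 1000.0)]
captures = [Capture("frame_0001.jpg", 1000.4)]
print(label_captures(captures, reports))  # [('frame_0001.jpg', 'B738')]
```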

As in IQT Labs' prior audits (FakeFinder, a deepfake detection tool, and RoBERTa, a large language model), we assessed a variety of risks, vulnerabilities, and potential concerns posed by the SkyScan system, including:

  • the security of hardware in SkyScan's "Capture Kit";
  • the security of software components in an ML pipeline built from SkyScan data;
  • the ethics of auto-labeled data; and
  • how collection biases in an auto-labeled dataset might introduce biases in the inferences drawn by a classification model trained on that dataset (see the sketch after this list).
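To make the last point concrete, a first-pass bias check is simply to examine the class distribution of the auto-labeled data. The sketch below, with hypothetical labels and helper names, shows how a capture site near a commercial airport could yield a dataset dominated by a single airliner type:

```python
from collections import Counter

def label_distribution(labels):
    """Return each class's share of an auto-labeled dataset.

    A collector near a commercial airport will over-represent
    airliners, so a model trained on the raw counts may underperform
    on rarer types such as general-aviation aircraft.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Example: a heavily skewed auto-labeled dataset
labels = ["B738"] * 900 + ["A320"] * 80 + ["C172"] * 20
for cls, share in sorted(label_distribution(labels).items()):
    print(f"{cls}: {share:.1%}")
# A320: 8.0% / B738: 90.0% / C172: 2.0%
```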

One key finding from this work is that hardware complicates the auditing process in many ways, adding complexity and expanding the attack surface. To characterize and mitigate these risks, we divided them into three categories: technical, mechanical, and architectural. Then, to help us assess the severity of these risks, we designed and implemented 10 different attacks, ranging from GPS and ADS-B spoofing, to model evasion and data poisoning, to a data science twist on a man-in-the-middle (MITM) attack that we called "Model-in-the-Middle."
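As one illustration of the data poisoning category, the sketch below simulates a simple label-flipping attack on an auto-labeled dataset. The function name and parameters are hypothetical; the attacks we actually ran against SkyScan are described in the report.

```python
import random

def flip_labels(dataset, source, target, fraction, seed=0):
    """Simulate a label-flipping poisoning attack.

    Relabels `fraction` of the `source`-class examples as `target`,
    so a classifier trained on the poisoned data learns to confuse
    the two aircraft types.
    """
    rng = random.Random(seed)
    poisoned = []
    for image_path, label in dataset:
        if label == source and rng.random() < fraction:
            label = target
        poisoned.append((image_path, label))
    return poisoned

# Flip roughly 30% of "B738" labels to "A320" in a toy dataset
dataset = [(f"frame_{i:04d}.jpg", "B738") for i in range(10)]
print(flip_labels(dataset, "B738", "A320", fraction=0.3))
```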

For more information on IQT Labs' AI audits, check out Interrogating RoBERTa: Inside the challenge of learning to audit AI models and tools and AI Assurance: What happened when we audited a deepfake detection tool called FakeFinder, or get in touch with us by emailing labsinfo@iqt.org.