July 29, 2024
Welcome to issue #6 of The Comet Newsletter!
This week, we take a closer look at Facebook’s attempt to reverse-engineer deepfakes, and at why some researchers think we’ve crossed a red line on autonomous weaponized drones.
Additionally, you might find two new resources quite useful: a free NLP course from Hugging Face, and Deepnote’s interactive notebooks built for team-based collaboration.
And be sure to follow us on Twitter and LinkedIn — drop us a note if you have something we should cover in an upcoming issue!
Happy Reading,
Austin
Head of Community, Comet
INDUSTRY
Deepfakes, media content generated through machine learning, are a growing cause for concern among law enforcement agencies. Policymakers and analysts in the national security community foresee the integration of cutting-edge ML technologies into large-scale information warfare efforts like the one Russia carried out during the 2016 U.S. presidential election.
Facebook, arguably the tech giant with the most to lose from the proliferation of manipulated media, has partnered with researchers at Michigan State University to develop a method for detecting deepfakes that relies on taking an AI-generated image and reverse-engineering the system used to create it.
The system works by running a suspected image through a fingerprint estimation network (FEN) that tries to extract the “fingerprints” of the generative model that created the image. These fingerprints are unique patterns a generative model leaves on its output, and they can be used to identify the model a deepfake originated from.
Facebook and MSU say their system can estimate both the network architecture of the algorithm used to create a deepfake and its training loss functions, which evaluate how the algorithm models its training data.
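To make the two-stage idea concrete, here’s a minimal PyTorch-style sketch of a pipeline like the one described above. The class names, layer sizes, and the candidate sets of architectures and loss types are hypothetical placeholders for illustration, not the actual Facebook/MSU implementation.

```python
# Illustrative sketch only: a tiny fingerprint-estimation network followed
# by a "model parser" that guesses properties of the source generator.
# All names and label sets here are hypothetical stand-ins.
import torch
import torch.nn as nn

class FingerprintEstimationNetwork(nn.Module):
    """Estimates the residual 'fingerprint' a generative model leaves on an image."""
    def __init__(self):
        super().__init__()
        # A small conv stack standing in for the real FEN.
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, image):
        return self.net(image)  # fingerprint, same shape as the input image

class ModelParser(nn.Module):
    """Predicts properties of the source generator from a fingerprint."""
    def __init__(self, num_architectures=5, num_loss_types=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.arch_head = nn.Linear(16, num_architectures)  # which network architecture?
        self.loss_head = nn.Linear(16, num_loss_types)     # which training loss?

    def forward(self, fingerprint):
        feats = self.features(fingerprint)
        return self.arch_head(feats), self.loss_head(feats)

fen, parser = FingerprintEstimationNetwork(), ModelParser()
image = torch.randn(1, 3, 128, 128)              # stand-in for a suspected deepfake
fingerprint = fen(image)                         # step 1: extract the fingerprint
arch_logits, loss_logits = parser(fingerprint)   # step 2: reverse-engineer the generator
print(arch_logits.argmax(dim=1).item(), loss_logits.argmax(dim=1).item())
```

The two argmax calls at the end stand in for the system’s final guesses about the generator’s architecture and training loss.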
Since 2019, the number of deepfakes online has grown from 14,678 to 145,227, an uptick of roughly 900% year over year, according to Sentinel. Meanwhile, Forrester Research estimated in October 2019 that deepfake fraud scams would cost $250 million by the end of 2020.
The commodification and proliferation of deepfake detectors are considered a key pillar of the strategy for combating this technology, according to Georgetown’s Center for Security and Emerging Technology (CSET), which has published one of the most comprehensive assessments of the threats posed by deepfakes.
Read the full article from VentureBeat here.
WHAT WE’RE READING
Hugging Face’s new initiative for conquering the NLP world is a course on building NLP applications with its libraries. The course starts with a basic introduction to NLP, then moves on to advanced topics such as dataset management, custom training loops, speeding up training, and specialized architectures.
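For a quick taste of what the course builds toward, here’s a minimal example using the transformers library’s pipeline API, the kind of high-level interface the early chapters introduce. The printed score is illustrative and depends on the default checkpoint downloaded at run time.

```python
# Minimal sentiment-analysis example with Hugging Face's pipeline API.
# The first run downloads a default checkpoint; exact scores will vary.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("This newsletter issue was a great read!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.9998}]
```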
Check out the full course here.
PROJECTS
Our friends at Deepnote have been hard at work building a notebook experience that’s tailored for communication and collaboration. Last week, they announced that their Teams Plan will now be completely free for teams of up to 3 collaborators.
Data science is as much a scientific and creative process as an engineering one. It involves failing, learning, and going back to the drawing board. This iterative process is further complicated when you have multiple collaborators working on the same model. Deepnote’s product is engineered from the ground up with collaboration in mind. Their ultimate goal is to provide everyone involved in the data science process—from business stakeholders and product owners to data engineers and customers—a seat at the table.
For a rundown of the highlights of the new Teams Plan, read Deepnote’s full blog post here.
OPINION
The red line for autonomous targeting of humans has now been crossed.
According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya’s Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”
This is the first documented use of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.”
Researchers from the Future of Life Institute argue that, beyond the moral complications of handing life-and-death decisions over to algorithms, the proliferation of such technology will inevitably lead to the creation of new weapons of mass destruction. Autonomous weaponized drones share many of the characteristics of small arms, making them prime candidates for distribution on the international arms market.
Autonomous drones are seen as an inexpensive way for militaries without an advanced air force to compete on the battlefield. This has led to a surge in demand for these systems, with countries like China supplying some of their most advanced military aerial drones to the Middle East, according to former Secretary of Defense Mark Esper.
Azerbaijan’s decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to its arsenal of cheap kamikaze “suicide drones” supplied by the Turkish government. While these drones were used on materiel targets such as radar systems and vehicles, only a software update stands in the way of the autonomous targeting of humans.
The Future of Life researchers call for an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty, as a first step in preventing the widespread use of these systems.
Check out this video from the Future of Life Institute warning of the potential dangers of Slaughterbots.