Helios News - Deepfake detection becoming big business
Race to protect … Some analysts predict that the deepfake detection market could reach more than $5 billion by 2030, growing at a 45% compound annual growth rate. This surge is driven by the escalating risk of deepfake fraud, a growing threat in industries like finance, media, entertainment, and yes—politics and government, too.
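For scale, that projection implies today's market is still fairly small: a figure compounding at 45% a year and landing near $5 billion in 2030 works out to roughly half a billion dollars at the start of the period. A minimal back-of-the-envelope sketch of that arithmetic (the 2024 base year and six-year horizon are assumptions for illustration, not figures from the analysts):

```python
# Rough sanity check on the projection above: if the deepfake detection market
# reaches ~$5B in 2030 while compounding at a 45% CAGR, what base size does
# that imply today? (2024 base year and 6-year horizon are assumed.)

def implied_base(future_value: float, cagr: float, years: int) -> float:
    """Discount a future market size back by `years` of compound growth."""
    return future_value / ((1 + cagr) ** years)

if __name__ == "__main__":
    base_2024 = implied_base(future_value=5.0, cagr=0.45, years=6)  # in $ billions
    print(f"Implied 2024 market size: ~${base_2024:.2f}B")  # ~ $0.54B
```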
The risks of deepfakes to businesses are profound. Fraudsters can manipulate images, videos, and voices to bypass security measures, with financial institutions particularly vulnerable to fake identity verification. Deepfake fraud could result in serious financial losses or damage corporate reputations, making detection tools a necessity.
Let’s look at financial services in particular: Alloy’s 2024 State of Fraud Benchmark Report found that 75% of financial institutions are actively investing in identity risk solutions, defined here as “end-to-end platforms to manage identity, fraud, credit, and compliance risks throughout the customer lifecycle.”
[Chart from the Alloy report: “What types of technologies will you be looking to invest in the next 12 months?”]
Public tech companies are responding with new initiatives, like Microsoft’s (MSFT) Content Integrity Suite, which helps detect fake media, with a focus on elections and social media. For its part, Intel (INTC) has developed “FakeCatcher,” a real-time detection tool with 96% accuracy, per the company. Adobe (ADBE) leads the cross-industry Content Authenticity Initiative, which tracks the provenance of digital media.
Startups are joining the fight with more targeted solutions. Sentinel’s AI-powered platform helps detect deepfakes for governments, while Reality Defender offers real-time deepfake detection via an API. A professor at UC Berkeley’s School of Information launched GetReal Labs to help businesses address the threat of deepfakes.
And while we’re going deep … let’s cover the regulatory landscape. Just last week, a federal judge temporarily blocked California’s new AI law, AB 2839, which targets the spread of election deepfakes. The law was challenged after Elon Musk reposted a deepfake of Vice President Harris. The court ruled the law could violate First Amendment free speech protections, particularly for satire and parody. The ruling stalls enforcement, leaving the law’s future uncertain as the upcoming election approaches.
Turns out, this topic really isn’t divided across political lines. A recent poll shows 80% of voters are concerned about the use of deepfakes in the 2024 election, with broad bipartisan support for regulation. Interestingly, Independents say they are the MOST concerned.
Additionally, 83% of respondents believe AI-generated content should be labeled, and voters back legislation to prevent fraudulent or harmful AI-driven media. A Security Magazine survey found that 72% of people worry daily about being duped by a deepfake themselves.