
Arkose Labs Becomes First Bot Management Company to Roll Out Protections for Enterprise GPT Applications

Arkose Labs, the global leader in bot management and account security, today announced the launch of its pioneering protection measures for GPT applications, addressing the urgent need for proactive defenses against new attack vectors such as GPT prompt compromise and LLM platform abuse.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240515833035/en/

Arkose Labs protects enterprise GPT applications (Graphic: Business Wire)

Enterprises deploying GPT applications and providers pioneering LLM platforms are priority targets for bad actors, and the risks are substantial.

Before selecting Arkose Labs, one GPT platform was besieged by over 2 billion bot attacks. The attacks exhausted the platform’s processing capacity and cost tens of millions of dollars each month in compute resources. Genuine consumers struggled to access the service as bots dominated the platform, employing proxies, leveraging compromised account credentials, and redoubling their efforts to scrape the platform’s insights. Within days of deploying Arkose Bot Manager, however, the GPT platform realized a 99.22% reduction in LLM platform abuse.

Arkose Labs’ new capabilities thwart emerging threat vectors, including:

  1. GPT prompt compromise: an attack in which bots programmatically submit prompts and scrape the responses, with the intent to train their own models, resell similar services, or gain access to proprietary, confidential, and personal information (a simplified illustration follows this list).

  2. LLM platform abuse: an attack that creates unauthorized platform replicas and uses illegal reverse proxying to copy the platform’s insights. Those insights are used to create knock-off services that are increasingly used to generate phishing emails, create deepfake videos, and carry out other illicit acts. Bad actors also use the insights to circumvent geographic restrictions imposed by China and other countries.
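To make the first threat vector concrete, the sketch below shows, in Python, the kind of behavioral signal a bot-management layer might look for when detecting GPT prompt compromise: a single credential submitting prompts at machine-regular intervals from many rotating IP addresses. The event structure, thresholds, and function names are illustrative assumptions; the release does not describe Arkose Labs’ actual detection logic.

```python
# Hypothetical sketch of a bot-detection signal for GPT prompt compromise.
# NOT Arkose Labs' detection logic; a generic illustration of the concept.
from dataclasses import dataclass
from statistics import pstdev


@dataclass
class PromptEvent:
    credential_id: str
    source_ip: str
    timestamp: float  # seconds since epoch


def looks_automated(events: list[PromptEvent],
                    max_ips: int = 3,
                    min_interval_jitter: float = 0.25) -> bool:
    """Flag a credential whose prompt traffic comes from many IPs with
    near-constant inter-request timing -- two common bot tells."""
    if len(events) < 5:
        return False
    ips = {e.source_ip for e in events}
    times = sorted(e.timestamp for e in events)
    intervals = [b - a for a, b in zip(times, times[1:])]
    jitter = pstdev(intervals)  # humans vary their pacing; scripts are metronomic
    return len(ips) > max_ips and jitter < min_interval_jitter


# Example: 10 prompts, exactly 2 seconds apart, rotated across 5 proxy IPs
events = [PromptEvent("key-123", f"10.0.0.{i % 5}", 1000.0 + 2 * i) for i in range(10)]
print(looks_automated(events))  # True -> route the session to a challenge or block it
```

In practice, a session flagged this way would be challenged or blocked rather than served the model’s response, cutting off the scraping before proprietary insights leave the platform.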

“Generative AI intensifies cybercrime not only by enhancing traditional attacks, like scraping, but also by introducing new threats like GPT prompt compromise and LLM platform abuse,” said Arkose Labs Chief Product Officer Ashish Jain. “The new protective measures we’re releasing today are battle-tested and use AI to protect the AI that companies are deploying.”

“Our commitment is to stay ahead of cybercriminals, ensuring that our customers’ use of transformative AI technologies remains secure and productive,” added Vikas Shetty, vice president, product management, Arkose Labs. “Our proactive measures have proven effective, significantly reducing attack volumes and internal fraud costs while optimizing legitimate users’ experiences.”

Learn more in a new blog also released today.

About Arkose Labs

The world’s leading organizations, including two of the top three banks and the largest tech enterprises, trust Arkose Labs to fight online fraud and keep users safe in digital transactions. Our patented, AI-powered platform detects, traps, and neutralizes bots and bad actors before they can make an impact, without sacrificing the experience of genuine users, and tracks and shares real-time, global threat intelligence with our customers. No one else is more proven at scale, provides more proactive support for internal security teams, or outperforms Arkose Labs in sabotaging attackers’ ROI. Our verified customer reviews on G2 reflect the value we add in reducing the volume, internal cost, and impact of bot attacks and online fraud. Based in San Mateo, CA, Arkose Labs operates worldwide with offices in Asia, Australia, Central America, and South America.

Contacts
