We haven’t seen much week-to-week change in data volumes lately: infostealer logs unfortunately remain extremely prevalent, with no signs of abatement. The aggregate lists of credentials extracted from infostealer logs, often referred to as “ULP” lists, also remain in extremely high demand. “ULP” stands for “URL, Login, Password”, referring to the format of the files: each line contains the web address where the credential can be used, followed by the username or email, and the plaintext password. This convenient format makes threat actors’ jobs extremely easy, since each line of the file represents an all-in-one entry vector. Searching the file for a targeted website or app reduces account takeover to a trivial amount of work. Unsurprisingly, these lists are only growing in popularity.
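To give a sense of just how trivial this is, here is a minimal sketch, assuming the common colon-delimited variant of the format (e.g. `https://example.com/login:user@example.com:hunter2`); real-world lists vary in delimiter and quality, so the example is illustrative rather than a robust parser:

```python
# ulp_filter.py -- illustrative only; assumes a colon-delimited ULP file
# where each line is "<url>:<login>:<password>".
import sys

def search_ulp(path: str, target: str):
    """Yield (url, login, password) tuples whose URL contains `target`."""
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            # Split from the right so colons inside the URL (e.g. "https://")
            # don't break the login/password fields.
            parts = line.rstrip("\n").rsplit(":", 2)
            if len(parts) != 3:
                continue  # skip malformed lines
            url, login, password = parts
            if target in url:
                yield url, login, password

if __name__ == "__main__":
    # Usage: python ulp_filter.py <ulp_file> <target_domain>
    for entry in search_ulp(sys.argv[1], sys.argv[2]):
        print(entry)
```

A dozen lines of code against a multi-gigabyte ULP list yields every known working credential for a chosen target, which is exactly why these files command such demand.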
In other news, a high-profile breach of Cisco was alleged by a familiar threat actor, “IntelBroker”, who advertised the purported data for sale on a popular cybercrime forum. If genuine, this data could be seriously concerning, especially for Cisco and possibly for enterprise customers as well, given the reported contents of the breach as detailed in the forum post: “Github projects, Gitlab Projects, SonarQube projects, Source code, hard coded credentials, Certificates, Customer SRCs, Cisco Confidential Documents, Jira tickets, API tokens, AWS Private buckets, Cisco Technology SRCs, Docker Builds, Azure Storage buckets, Private & Public keys, SSL Certificates, Cisco Premium Products & More”. A small data sample indicated that at least some employee credentials were included, in the form of email addresses and BCrypt-hashed passwords. BCrypt is an industry-standard hashing algorithm and considered quite secure, although the passwords could still be easily cracked if the employees used common or previously-compromised passwords.
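To illustrate why BCrypt alone doesn’t save weak passwords, consider the sketch below using the Python `bcrypt` package. The hash and wordlist are invented for the demo, not drawn from any real breach: BCrypt makes each guess deliberately slow, but a short list of common or previously-leaked passwords still falls in seconds.

```python
# bcrypt_wordlist_check.py -- illustrative sketch using the `bcrypt` package
# (pip install bcrypt). The hash below is generated locally for the demo;
# it is NOT from any real breach data.
import bcrypt

# Simulate a leaked hash of a weak, previously-compromised password.
leaked_hash = bcrypt.hashpw(b"Summer2024!", bcrypt.gensalt(rounds=12))

# A tiny wordlist of common passwords; real attackers feed in millions of
# entries harvested from prior breaches.
wordlist = [b"password1", b"Welcome1", b"Summer2024!", b"Cisco123"]

for candidate in wordlist:
    # checkpw re-derives the hash using the salt and cost factor embedded
    # in leaked_hash, so each guess costs ~2^12 rounds -- slow per guess,
    # but a short list of likely candidates is still exhausted in seconds.
    if bcrypt.checkpw(candidate, leaked_hash):
        print(f"Cracked: {candidate.decode()}")
        break
```

The cost factor buys time against brute force, not against a targeted dictionary of passwords the victim was already known to use.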
Preliminary research on compromised credentials collected this year indicates that passwords exposed by infostealers are up to fifteen times more likely to be novel, that is, never before seen in compromised credential data. This is great news for hackers, because the ‘fresher’ a credential is, the more likely that 1) it is still valid, i.e. the password has not been changed since the compromise occurred, and 2) the account has not already been compromised by other threat actors, which means there may be more resources available, or more time for the threat actor to establish undetected persistence in the network and perpetrate further crimes such as ransomware deployment or data exfiltration.
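For defenders, checking whether a given password has been seen before is straightforward. A minimal sketch (one of several possible approaches, not the methodology behind the research above) queries the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity so the full password hash never leaves your machine:

```python
# pwned_check.py -- sketch of a novelty check against the HIBP
# Pwned Passwords range API (https://api.pwnedpasswords.com).
# Only the first 5 hex chars of the SHA-1 hash are ever sent.
import hashlib
import urllib.request

def seen_count(password: str) -> int:
    """Return how many times `password` appears in known breach corpora."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Response lines look like "<HASH_SUFFIX>:<COUNT>".
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # never seen before -- "novel" in the sense used above

print(seen_count("Summer2024!"))  # a non-zero count means previously exposed
```

A password that returns zero here is exactly the kind of fresh, infostealer-sourced credential the research describes, and correspondingly more dangerous.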
Given the current “AI” zeitgeist, it should be no surprise that threat actors are turning to large language models (LLMs) to improve their “productivity” as well. While security controls on the major models may have slowed the initial flood, threat actors have been working hard to ‘jailbreak’ those systems, i.e. to craft prompts that bypass the safety limitations, and privately hosted, open-source LLMs are already sufficiently powerful to generate full malware suites. Last week, a threat actor shared what they claim is the first infostealer generated by artificial intelligence. Describing it as “a really good quality stealer with advanced features like discord stealer, web browser stealing, device info stealing, payment method stealing…”, the threat actor shared both how to access the LLM used to generate the stealer and a download link for the software itself.
If indeed true, this would mark an expected but problematic milestone. LLM technology is developing extremely quickly, and if it increases threat actors’ ability to rapidly iterate on code for malware and vulnerability exploits, it creates a corresponding imperative for security researchers and developers to stay one (or more) steps ahead. The criminal use of LLMs and other generative AI techniques is something we will have to monitor closely and respond to very quickly as the threat landscape changes. For example, signature-based monitoring tools will require more frequent updates and will need to preemptively flag a larger number of anomalous behaviors, leading to more false positives and more human input needed to secure environments.
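As a toy illustration of why rapid code iteration strains signature-based detection, the sketch below matches payloads against a static byte pattern. The “signature” is hypothetical, not taken from any real AV engine; the point is that trivially regenerating the malware changes the bytes, and the rule no longer fires.

```python
# signature_demo.py -- toy example of static byte-signature matching.
# The signature below is invented for illustration; real engines use
# far richer rules (YARA, heuristics, behavioral telemetry).
SIGNATURE = b"StealerConfig_v1"  # byte pattern written against a known sample

def matches_signature(payload: bytes) -> bool:
    """Flag payloads containing the known byte pattern."""
    return SIGNATURE in payload

original = b"...StealerConfig_v1..."      # the sample the rule was written for
regenerated = b"...Ste4lerConfig_v2..."   # LLM-assisted rewrite, same behavior

print(matches_signature(original))     # True  -- signature fires
print(matches_signature(regenerated))  # False -- new bytes evade the rule
```

Every regenerated variant forces a new signature, which is precisely the update treadmill, and the accompanying false-positive burden, described above.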