AI Coding Assistants Are Now a Security Risk: What SOC 2 and ISO 27001 Companies Need to Know
From malicious extensions exfiltrating code to prompt injection attacks enabling remote execution, AI coding tools introduce new risks that most compliance frameworks don't explicitly address. Here's what CTOs and CISOs need to implement now.
Key Takeaways
- Malicious VS Code extensions posing as AI assistants accumulated 1.5 million installs while exfiltrating developer code and credentials
- Prompt injection vulnerabilities in GitHub Copilot enabled remote code execution on developer machines
- AI IDE forks like Cursor and Windsurf have recommended malicious extensions through supply chain gaps
- SOC 2 and ISO 27001 frameworks provide control structures, but companies need to explicitly address AI tooling
- Practical controls include approved extension lists, code review policies for AI-generated code, and workspace trust configurations
AI coding assistants have become standard in development teams. Cursor crossed 1 million daily active users. Windsurf hit 1 million users within months of launch. GitHub Copilot is embedded in enterprises everywhere.
But the security implications are just beginning to surface. Over the past year, researchers have documented malicious extensions, prompt injection attacks, and supply chain vulnerabilities that turn these productivity tools into attack vectors. For SOC 2 and ISO 27001 companies, these risks demand updated controls.
The Threat Landscape: What's Actually Happening
Malicious Extensions Are Stealing Developer Code
In January 2026, Koi Security identified a campaign dubbed "MaliciousCorgi" targeting VS Code users through two extensions: "ChatGPT - 中文版" and "ChatMoss (CodeMoss)." Combined, these extensions had 1.5 million installs from the official marketplace.
The extensions operated through three parallel data collection channels:
Real-time file monitoring: Every file opened in VS Code was encoded in Base64 and transmitted to attacker servers in China. Every keystroke was captured.
Server-controlled batch harvesting: Remote commands could trigger mass file collection, exfiltrating up to 50 files per command without any user interaction.
User profiling: Hidden iframes loaded four analytics SDKs to build identity profiles and identify high-value targets.
The result: environment files with API keys, SSH credentials, cloud service tokens, and proprietary source code, all transmitted to servers at aihao123.cn.
These weren't obscure extensions. They presented legitimate AI coding functionality and accumulated over a million users before detection.
AI IDEs Recommend Extensions That Don't Exist
The supply chain risk extends beyond malicious actors publishing fake extensions. In late 2025, Koi Security discovered that Cursor, Windsurf, and Google Antigravity were actively recommending extensions that didn't exist in their extension marketplace.
These AI IDEs fork VS Code but cannot legally access Microsoft's extension marketplace. Instead, they use OpenVSX. The problem: inherited recommendation lists pointed to extensions that existed only in Microsoft's marketplace, leaving namespaces unclaimed on OpenVSX.
Any attacker could register these namespaces and upload malicious code. When a developer opened certain file types, the IDE would recommend the attacker's extension. Koi claimed these namespaces first, and their placeholder extensions accumulated over 1,000 installs purely from recommendation trust.
Cursor patched the issue in December 2025. Google eventually shipped a partial fix. Windsurf never responded to disclosure attempts.
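Security teams can check for this gap themselves. The sketch below takes a list of extension IDs, such as those found in a fork's inherited recommendation lists, and reports which ones do not resolve on the OpenVSX registry API. The example IDs are illustrative, and an unresolved name warrants manual investigation before assuming it is squattable.

```python
# Check whether recommended extension IDs actually resolve on OpenVSX.
# An unclaimed namespace behind a recommendation is a squatting opportunity.
import json
import urllib.error
import urllib.request

def openvsx_url(extension_id: str) -> str:
    """Build the OpenVSX registry API URL for a publisher.name extension ID."""
    publisher, name = extension_id.split(".", 1)
    return f"https://open-vsx.org/api/{publisher}/{name}"

def check_recommendations(extension_ids):
    """Return the IDs that do not resolve on OpenVSX (candidates for review)."""
    missing = []
    for ext_id in extension_ids:
        try:
            with urllib.request.urlopen(openvsx_url(ext_id), timeout=10) as resp:
                json.load(resp)  # a valid listing returns JSON metadata
        except (urllib.error.URLError, ValueError):
            missing.append(ext_id)
    return missing
```

Running `check_recommendations` over every ID your IDE recommends gives a quick inventory of recommendations that point at nothing, which is exactly the condition the Koi research exploited.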
Prompt Injection Enables Remote Code Execution
The agentic capabilities that make AI coding assistants useful also create new attack surfaces. In June 2025, researchers disclosed CVE-2025-53773, a critical vulnerability in GitHub Copilot that enabled remote code execution through prompt injection.
The attack exploited Copilot's ability to modify project configuration files. By manipulating .vscode/settings.json, attackers could enable "YOLO mode," disabling all user confirmations and granting unrestricted shell command access. Malicious content embedded in a repository could inject instructions that Copilot would execute, unable to distinguish between legitimate context and attack payloads.
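Published analyses of the vulnerability describe the injected change as flipping an experimental auto-approval setting in the workspace configuration. The exact key has varied across writeups, so treat this fragment as illustrative of the class of change, not an exact reproduction:

```jsonc
// .vscode/settings.json -- the kind of change an injected prompt could write.
// Key name is illustrative of reported behavior; verify against current docs.
{
  "chat.tools.autoApprove": true
}
```

Because settings files live inside the workspace, any component that can write files can silently reconfigure the tools that act on them.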
Microsoft patched this specific vulnerability in August 2025, but the underlying risk remains. As VS Code's security documentation acknowledges, "malicious content introduced into the workspace through files, comments, or tool outputs can influence the AI's understanding."
The Broader Pattern
These incidents aren't isolated. Datadog's security research team released IDE-SHEPHERD, an open-source extension designed to monitor for threats inside VS Code and Cursor. The pattern is clear: AI coding tools expand the attack surface for developer workstations, and adversaries are actively exploiting these vectors.
Why SOC 2 and ISO 27001 Don't Cover This (Yet)
If you're maintaining SOC 2 or ISO 27001 certification, you might assume existing controls address these risks. They don't, at least not explicitly.
Neither framework has specific requirements for AI coding assistants. The relevant controls are general:
SOC 2 CC6.8 requires controls to prevent or detect the introduction of unauthorized or malicious software, but doesn't specify IDE extensions or AI assistants. ISO 27001 A.8.28 (Secure Coding) addresses secure development without mentioning AI-generated code. SOC 2 CC6.1 and ISO 27001 A.5.15 (Access Control) govern system access, but extension installations often fall outside their traditional scope.
This creates a gap. Your auditor won't specifically ask about your Cursor extension policy or AI-generated code review. The risk exists whether or not it appears on your audit checklist.
Practical Controls for AI Coding Tool Governance
Here's what to implement now, organized by the compliance controls they support.
1. Establish an Approved Extension List
Relevant controls: SOC 2 CC6.8 (Unauthorized or Malicious Software), ISO 27001 A.8.28
Most organizations have software approval processes for applications but not for IDE extensions. Given the MaliciousCorgi campaign and extension recommendation attacks, this needs to change.
Create a whitelist of approved extensions for each AI IDE your team uses. Review each extension for publisher verification, permissions requested, source code availability, and maintainer responsiveness.
Document the review process. For VS Code forks (Cursor, Windsurf), be especially cautious with extension recommendations. The supply chain gaps identified by Koi demonstrate that "recommended" doesn't mean "verified."
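A simple audit script can back up the policy. This sketch assumes the standard VS Code layout, where extensions install as publisher.name-version folders under an extensions directory (~/.vscode/extensions, or the fork's equivalent); the allowlist entries are examples to replace with your own:

```python
# Audit locally installed VS Code / Cursor extensions against an approved list.
# Assumes extensions install as <publisher>.<name>-<version> folders.
from pathlib import Path

APPROVED = {  # example allowlist -- replace with your organization's
    "ms-python.python",
    "dbaeumer.vscode-eslint",
}

def extension_id(folder_name: str) -> str:
    """Strip the trailing -<version> from an extension folder name."""
    base, _, _version = folder_name.rpartition("-")
    return base

def unapproved(folder_names, approved=APPROVED):
    """Return extension IDs present on disk but absent from the allowlist."""
    installed = {extension_id(f) for f in folder_names if "." in f}
    return sorted(installed - approved)

if __name__ == "__main__":
    ext_dir = Path.home() / ".vscode" / "extensions"
    folders = [p.name for p in ext_dir.iterdir() if p.is_dir()] if ext_dir.exists() else []
    for ext in unapproved(folders):
        print(f"NOT APPROVED: {ext}")
```

Run it in an endpoint management job and you get a recurring drift report against the approved list rather than a one-time review.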
2. Implement Workspace Trust Configuration
Relevant controls: SOC 2 CC6.1, ISO 27001 A.5.15
VS Code's Workspace Trust feature provides a critical security boundary. When enabled, untrusted workspaces disable code execution features, including AI agent capabilities.
Require developers to:
- Open external or untrusted codebases in restricted mode by default
- Complete code review before granting workspace trust
- Never disable workspace trust globally
This directly mitigates prompt injection risks. An attacker can embed malicious instructions in a repository, but those instructions won't execute until the workspace is explicitly trusted.
Document this configuration in your security policies. It maps directly to access control requirements in both SOC 2 and ISO 27001.
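A baseline user-level settings fragment for these rules might look like the following. The setting names come from VS Code's Workspace Trust documentation; verify them against your specific fork, since Cursor and Windsurf inherit most but not all upstream settings:

```jsonc
// settings.json -- enforce Workspace Trust defaults (VS Code setting names)
{
  "security.workspace.trust.enabled": true,
  "security.workspace.trust.untrustedFiles": "newWindow",
  "security.workspace.trust.startupPrompt": "always",
  "security.workspace.trust.emptyWindow": false
}
```

Pushing this via managed settings, rather than relying on each developer's defaults, is what turns the feature into an auditable control.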
3. Define Code Review Requirements for AI-Generated Code
Relevant controls: SOC 2 CC8.1 (Change Management), ISO 27001 A.8.32 (Change Management)
AI-generated code requires the same review rigor as human-written code, arguably more. Large language models can introduce subtle vulnerabilities that look syntactically correct but create security gaps.
Update your code review policy to explicitly address:
- Attribution: Commits should indicate when AI assistance was used
- Security review: AI-generated code touching authentication, authorization, data handling, or external integrations requires security-focused review
- Dependency changes: AI tools often suggest adding dependencies. Each dependency addition should trigger supply chain risk assessment
This isn't about slowing down development. It's about ensuring your change management controls explicitly cover the AI-assisted workflow your team actually uses.
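One lightweight way to enforce the attribution rule is a commit-msg hook that requires a trailer. The "AI-Assisted" trailer name below is a hypothetical team convention, not a git standard; use whatever wording your team agrees on:

```python
# Sketch of a git commit-msg hook enforcing an AI-attribution trailer.
# "AI-Assisted" is a hypothetical team convention, not a git standard.
import re
import sys

TRAILER = re.compile(r"^AI-Assisted:\s*(yes|no)\b", re.IGNORECASE | re.MULTILINE)

def has_attribution(message: str) -> bool:
    """True if the commit message declares whether AI assistance was used."""
    return bool(TRAILER.search(message))

if __name__ == "__main__" and len(sys.argv) > 1:
    # git invokes commit-msg hooks with the message file path as argv[1]
    with open(sys.argv[1], encoding="utf-8") as fh:
        if not has_attribution(fh.read()):
            sys.exit("commit rejected: add an 'AI-Assisted: yes|no' trailer")
```

The trailer then gives reviewers and auditors a searchable record of which changes came through an AI-assisted workflow.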
4. Monitor for Credential Exposure
Relevant controls: SOC 2 CC7.2 (System Monitoring), ISO 27001 A.8.15 (Logging) and A.8.16 (Monitoring Activities)
AI coding tools often require credentials for enhanced functionality: API keys for AI services, database connection strings for context, cloud credentials for deployment integration. These credentials often end up in configuration files on developer machines.
The MCP credential security issue we documented previously applies here. Implement:
- Secret scanning that covers AI tool configuration files (.cursor/, .vscode/mcp.json, etc.)
- Centralized credential management that prevents hardcoded secrets
- Regular audits of what credentials developers have stored locally
When AI tool configurations become credential aggregation points, your monitoring needs to extend to those locations.
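A minimal sketch of that extended scope, assuming a Python environment. The regexes and glob patterns here are illustrative starting points; a maintained scanner such as gitleaks or trufflehog is the better production choice, provided its path configuration includes these directories:

```python
# Minimal secret scan over AI tool configuration files.
# Patterns and globs are illustrative, not a complete ruleset.
import re
from pathlib import Path

# File globs where AI assistants commonly store configuration
CONFIG_GLOBS = [".cursor/**/*", ".vscode/**/*.json", "**/.env*"]

PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|token|secret)\b\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text: str):
    """Return the pattern names that match the given file content."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

def scan_tree(root: str):
    """Yield (path, findings) for matching files under root."""
    base = Path(root)
    for glob in CONFIG_GLOBS:
        for path in base.glob(glob):
            if path.is_file():
                hits = scan_text(path.read_text(errors="ignore"))
                if hits:
                    yield str(path), hits
```
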
5. Update Your Security Awareness Training
Relevant controls: SOC 2 CC1.4 (Commitment to Competence), ISO 27001 A.6.3 (Information Security Awareness, Education and Training)
Developers need to understand AI coding assistant risks that generic training doesn't cover: prompt injection attacks, unverified extension risks, workspace trust importance, and suspicious extension behavior. Add a module on AI tooling security to your existing training program.
6. Establish Incident Response Procedures
Relevant controls: SOC 2 CC7.4 (Incident Response), ISO 27001 A.5.24–A.5.26 (Incident Management)
If a developer's machine is compromised through a malicious extension, your incident response plan should cover immediate credential rotation, code repository assessment for injected malicious code, review of other developers who installed the same extension, and client communication if proprietary code was exposed. Test these procedures regularly.
What to Tell Your Auditor
When your auditor reviews development controls, proactively address AI tooling: document your AI tool inventory, show your extension review process, demonstrate configuration standards (workspace trust, credential management), and reference updated policies. Auditors appreciate proactive risk identification rather than waiting for them to ask.
The Bottom Line
AI coding assistants are here to stay. They make developers faster. But the security implications are real, and most compliance frameworks haven't caught up.
The organizations that handle this well will treat AI coding tools as attack surface, update policies to address AI-specific risks, monitor locations that didn't exist two years ago, and train developers on risks they weren't taught in bootcamps.
The incidents documented here aren't hypothetical. They've already affected millions of developers. The question isn't whether AI tooling risks will appear on future audit checklists, but whether you'll address them before that happens.
Frequently Asked Questions
What was the MaliciousCorgi campaign?
MaliciousCorgi was a campaign in which two VS Code extensions with 1.5 million combined installs exfiltrated developer source code, credentials, and environment files to servers in China.

Are AI IDEs like Cursor and Windsurf safe to use?
Cursor patched the extension recommendation vulnerability in December 2025. Windsurf has not responded to security disclosures. Both should be used with approved extension lists, workspace trust enabled, and security-conscious configurations.

What is prompt injection in AI coding tools?
Prompt injection manipulates AI behavior through malicious content in files, comments, or documentation. It has enabled remote code execution by injecting instructions that the AI executes as legitimate commands.

Do SOC 2 or ISO 27001 cover AI coding assistants?
Neither framework has explicit requirements. However, existing controls around software development, access control, and change management can be extended to cover AI tooling.

What should an AI coding tool policy include?
Key elements: an approved extension list, workspace trust requirements, code review standards for AI-generated code, credential management rules, and incident response procedures.
Bastion helps SaaS companies implement security controls that address emerging risks like AI tooling. Our managed compliance services for SOC 2 and ISO 27001 ensure your policies keep pace with your actual development practices.