<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Security on KnightLi Blog</title>
        <link>https://www.knightli.com/en/categories/security/</link>
        <description>Recent content in Security on KnightLi Blog</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <lastBuildDate>Sun, 17 May 2026 19:52:39 +0800</lastBuildDate><atom:link href="https://www.knightli.com/en/categories/security/index.xml" rel="self" type="application/rss+xml" /><item>
        <title>APT45 Uses AI to Validate Vulnerabilities at Scale: The Barrier to Zero-Day Attacks Is Falling</title>
        <link>https://www.knightli.com/en/2026/05/17/apt45-ai-zero-day-threat-tracker/</link>
        <pubDate>Sun, 17 May 2026 19:52:39 +0800</pubDate>
        
        <guid>https://www.knightli.com/en/2026/05/17/apt45-ai-zero-day-threat-tracker/</guid>
        <description>&lt;p&gt;Google Threat Intelligence Group published a new AI Threat Tracker on May 11, 2026. The important point is not simply that attackers are using AI. The more important shift is how they are using it: moving from writing, translation, and reconnaissance into vulnerability research, PoC validation, malware obfuscation, and automated attack orchestration.&lt;/p&gt;
&lt;p&gt;Two findings in the report are easy to conflate, so it is worth separating them first.&lt;/p&gt;
&lt;p&gt;First, Google said it identified what it believes is the first zero-day exploit developed with AI assistance. That case involved an unnamed cybercrime group. The target was a popular open-source web-based system administration tool, and the vulnerability could bypass 2FA when the attacker already had valid credentials. Google said it worked with the affected vendor on responsible disclosure and may have prevented a mass exploitation event.&lt;/p&gt;
&lt;p&gt;Second, APT45 was not attributed as the actor behind that zero-day case. GTIG separately noted that APT45, a North Korea-linked threat group, was observed sending large volumes of repetitive prompts to AI models to recursively analyze different CVEs and validate PoC exploits. In other words, APT45 is using AI as a vulnerability research and exploit arsenal management tool, not merely as a phishing-email assistant.&lt;/p&gt;
&lt;h2 id=&#34;what-the-ai-zero-day-case-shows&#34;&gt;What the AI zero-day case shows
&lt;/h2&gt;&lt;p&gt;This zero-day was not a typical memory corruption bug, input filtering error, or simple misconfiguration. GTIG described it as a high-level semantic logic flaw: the developer hardcoded a trust assumption inside an authentication flow, creating a contradiction between 2FA enforcement logic and its exceptions.&lt;/p&gt;
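&lt;p&gt;To make the flaw class concrete, here is a deliberately simplified Python sketch of a hardcoded trust assumption inside an authentication flow. It is a hypothetical illustration of the pattern GTIG describes, not the actual vulnerable code; the function, parameters, and IP range are invented.&lt;/p&gt;

```python
# Hypothetical illustration of a semantic logic flaw: a hardcoded trust
# exception that contradicts the 2FA policy. NOT the actual vulnerable code.

def login(username: str, password_ok: bool, otp_ok: bool,
          client_ip: str) -> bool:
    """Return True if the session is authenticated."""
    if not password_ok:
        return False
    # Intended guarantee: every login must pass a second factor.
    # Hardcoded exception: requests from the "management network" skip 2FA.
    # An attacker who already holds valid credentials and can reach the
    # service from (or appear to come from) that range bypasses the
    # second factor entirely.
    if client_ip.startswith("10.0."):
        return True
    return otp_ok
```

&lt;p&gt;Nothing here crashes or touches a dangerous sink, which is why fuzzers and static analyzers tend to miss it: the bug is the contradiction between the intended guarantee and its exception.&lt;/p&gt;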
&lt;p&gt;These bugs are hard for traditional scanners. Static analysis and fuzzing are better at finding crashes, dangerous sinks, input-output paths, and known patterns. They are not always good at understanding what the developer intended to guarantee and where an exception quietly breaks that guarantee.&lt;/p&gt;
&lt;p&gt;That is where large language models become risky. They may not be stronger than expert security researchers, but they are good at reading context, explaining intent, comparing similar code paths, and pointing out inconsistent business logic. Once attackers connect that ability to automation, logic flaws that used to require long manual review may become easier to screen at scale.&lt;/p&gt;
&lt;p&gt;GTIG also noted several AI-generation traces in the exploit code, including educational docstrings, a hallucinated CVSS score, and a textbook Python style. Google also said it does not believe Gemini was used, while expressing high confidence that the actor used some AI model to support discovery and weaponization.&lt;/p&gt;
&lt;h2 id=&#34;why-apt45-deserves-long-term-attention&#34;&gt;Why APT45 deserves long-term attention
&lt;/h2&gt;&lt;p&gt;APT45 has long been tracked as a North Korea-linked threat group with activity spanning espionage, financial gain, and strategic intelligence. What GTIG emphasized this time was its AI workflow: large, repetitive, recursive CVE analysis, PoC validation, and the accumulation of more reliable exploit capabilities.&lt;/p&gt;
&lt;p&gt;That is different from asking AI to write a short script.&lt;/p&gt;
&lt;p&gt;If an organization can connect AI to vulnerability triage, PoC validation, payload adjustment, and test environments, its human bottleneck changes. In the past, the number of vulnerabilities a team could study at the same time depended on researcher count, experience, and time. Now AI can absorb part of the repetitive reading, summarization, variant testing, and first-pass judgment, leaving humans to focus on target selection, exploitability verification, and delivery.&lt;/p&gt;
&lt;p&gt;For defenders, this means the window for known vulnerabilities gets shorter.&lt;/p&gt;
&lt;p&gt;After a CVE is disclosed, attackers no longer need to manually read the advisory, inspect patch diffs, build test environments, and write PoCs from scratch. AI can help them understand impact, generate test ideas, troubleshoot failures, and summarize version differences. Even if human correction is still required, the overall throughput improves.&lt;/p&gt;
&lt;h2 id=&#34;this-does-not-mean-ai-can-hack-everything-by-itself&#34;&gt;This does not mean AI can hack everything by itself
&lt;/h2&gt;&lt;p&gt;This should not be read as proof that AI can independently complete full intrusions.&lt;/p&gt;
&lt;p&gt;GTIG&amp;rsquo;s report is more precise: multiple parts of the attack chain are being accelerated by AI. Vulnerability research, malware obfuscation, reconnaissance, social engineering, information operations, mobile UI automation, and supply-chain abuse all show signs of AI involvement.&lt;/p&gt;
&lt;p&gt;But AI still fails. It can hallucinate vulnerabilities, misjudge exploitability, generate broken code, or get lost in complex enterprise authorization logic. The real danger is not that AI is perfect. The danger is that attackers can now try cheaply. When large-scale trial and error becomes cheap enough, bad outputs can be filtered away and usable outputs can move into operations.&lt;/p&gt;
&lt;p&gt;That is why cases like APT45 matter. State or state-adjacent groups have targets and patience. If AI reduces repetitive labor, they can spend more resources on high-value targets.&lt;/p&gt;
&lt;h2 id=&#34;defenders-should-focus-on-shrinking-the-exposure-window&#34;&gt;Defenders should focus on shrinking the exposure window
&lt;/h2&gt;&lt;p&gt;Many organizations used to divide risk into two buckets: known vulnerabilities are handled by patch management, while zero-days are handled by defense in depth. As AI enters vulnerability research, that boundary becomes less clean.&lt;/p&gt;
&lt;p&gt;The more practical questions are:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;After a new CVE is disclosed, how long does it take external attackers to produce a usable exploit?&lt;/li&gt;
&lt;li&gt;Can your asset inventory tell you the same day which systems are affected?&lt;/li&gt;
&lt;li&gt;Can WAF, EDR, logs, and identity systems detect abnormal attempts?&lt;/li&gt;
&lt;li&gt;Do high-risk systems use MFA, least privilege, and network isolation by default?&lt;/li&gt;
&lt;li&gt;Are open-source components, AI agent plugins, and third-party connectors included in supply-chain review?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;AI zero-days do not make basic security obsolete. They punish environments where basic security has been neglected for too long.&lt;/p&gt;
&lt;p&gt;If patch cycles are slow, asset inventories are unclear, internet exposure has no owner, logs are hard to search, and account privileges are excessive, AI only changes attacker efficiency. The underlying problem was already there.&lt;/p&gt;
&lt;h2 id=&#34;the-ai-supply-chain-is-also-an-attack-surface&#34;&gt;The AI supply chain is also an attack surface
&lt;/h2&gt;&lt;p&gt;GTIG also highlighted attacker interest in the AI software ecosystem itself, including agent skills, third-party data connectors, open-source wrapper libraries, and automation frameworks. The risk does not necessarily come from the model being compromised. It can come from poisoned tools around the model.&lt;/p&gt;
&lt;p&gt;This matters for anyone using AI coding tools, AI agents, and automation plugins.&lt;/p&gt;
&lt;p&gt;A malicious skill, backdoored dependency, or over-permissioned connector can turn an AI system from a helper into an attacker-controlled execution path. When an agent can access files, browsers, terminals, cloud accounts, or enterprise data, supply-chain review has to extend beyond traditional applications.&lt;/p&gt;
&lt;p&gt;At minimum:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Do not install agent skills and plugins from unclear sources.&lt;/li&gt;
&lt;li&gt;Isolate tools that can execute commands, read files, or access secrets.&lt;/li&gt;
&lt;li&gt;Do not run unreviewed AI-generated scripts directly in production.&lt;/li&gt;
&lt;li&gt;Scan dependencies, GitHub Actions, PyPI / npm packages, and AI project components.&lt;/li&gt;
&lt;li&gt;Apply least privilege and leakage monitoring to model API keys, cloud secrets, and GitHub tokens.&lt;/li&gt;
&lt;/ul&gt;
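&lt;p&gt;The last point can be partly automated. Below is a minimal Python sketch of a leakage check that scans text (diffs, config files, CI logs) for common token shapes before they leave the repository. The patterns are illustrative assumptions, not a complete ruleset; a dedicated secret scanner should back this up in practice.&lt;/p&gt;

```python
import re

# Illustrative token shapes only; real scanners ship far larger rulesets.
TOKEN_PATTERNS = {
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[=:]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def find_leaks(text: str) -> list:
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

&lt;p&gt;Running a check like this in CI turns the &amp;ldquo;leakage monitoring&amp;rdquo; bullet from a policy statement into an enforced gate.&lt;/p&gt;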
&lt;h2 id=&#34;practical-advice-for-security-teams&#34;&gt;Practical advice for security teams
&lt;/h2&gt;&lt;p&gt;First, move vulnerability response earlier. High-risk CVEs should not wait for a monthly patch window, especially for VPNs, gateways, system administration panels, identity systems, CI/CD, and remote management tools.&lt;/p&gt;
&lt;p&gt;Second, build a queryable asset inventory. If AI helps attackers locate targets faster, defenders must be able to answer quickly: do we run this system, which version, and where is it exposed?&lt;/p&gt;
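&lt;p&gt;A minimal sketch of what queryable means, assuming asset data can be exported into a single table. The schema, hostnames, and versions are invented for illustration:&lt;/p&gt;

```python
import sqlite3

# Toy inventory: one row per deployed product instance.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assets (
        hostname TEXT, product TEXT, version TEXT, internet_exposed INTEGER
    )
""")
conn.executemany(
    "INSERT INTO assets VALUES (?, ?, ?, ?)",
    [
        ("gw-01", "admin-panel", "2.3.1", 1),
        ("gw-02", "admin-panel", "2.4.0", 0),
        ("db-01", "postgres", "15.4", 0),
    ],
)

def affected(product: str, bad_versions: list) -> list:
    """Answer same-day: do we run this, which version, where is it exposed?"""
    marks = ",".join("?" for _ in bad_versions)
    return conn.execute(
        f"SELECT hostname, version, internet_exposed FROM assets "
        f"WHERE product = ? AND version IN ({marks})",
        [product, *bad_versions],
    ).fetchall()
```

&lt;p&gt;The point is not the database engine; it is that the answer to &amp;ldquo;which systems are affected&amp;rdquo; is one query, not a week of spreadsheet archaeology.&lt;/p&gt;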
&lt;p&gt;Third, use behavior detection to supplement signature detection. AI-generated exploits and malware may change surface features, but authentication bypass, abnormal logins, bulk probing, failed request patterns, and privilege escalation still leave behavioral traces.&lt;/p&gt;
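&lt;p&gt;As a sketch of that idea, the check below flags source IPs with an abnormal count of failed authentication events, independent of any payload signature. The event format and threshold are illustrative assumptions:&lt;/p&gt;

```python
from collections import Counter

def probing_sources(events: list, threshold: int = 5) -> set:
    """Flag source IPs whose auth-failure count reaches the threshold.

    events: dicts with 'src_ip' and 'outcome' keys (assumed log schema).
    """
    failures = Counter(
        e["src_ip"] for e in events if e["outcome"] == "auth_failure"
    )
    return {ip for ip, count in failures.items() if count >= threshold}
```

&lt;p&gt;An AI-generated exploit can rewrite its surface features, but it still has to fail, retry, and probe, and those behaviors remain countable.&lt;/p&gt;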
&lt;p&gt;Fourth, bring AI tools into security governance. Internal coding agents, browser agents, document agents, automation scripts, and plugin marketplaces need approval, review, logging, and rollback paths.&lt;/p&gt;
&lt;p&gt;Fifth, do not reduce AI defense to buying a security model. The useful work is putting AI into vulnerability prioritization, log analysis, patch impact assessment, code review, and configuration baseline checks so defensive speed can rise too.&lt;/p&gt;
&lt;h2 id=&#34;summary&#34;&gt;Summary
&lt;/h2&gt;&lt;p&gt;GTIG&amp;rsquo;s report sends a clear signal: AI is accelerating the pace of offense and defense.&lt;/p&gt;
&lt;p&gt;The AI-assisted zero-day case shows that logic bugs and authentication bypasses may become easier for models to surface. APT45 shows that mature threat groups are already using AI to analyze CVEs and validate PoCs at scale. PROMPTSPY, AI-generated obfuscation, and agent supply-chain abuse show that AI is becoming part of the attack toolchain.&lt;/p&gt;
&lt;p&gt;This is not doomsday, but it is not ordinary news either.&lt;/p&gt;
&lt;p&gt;For organizations, the practical response is not panic. It is faster, clearer, and more verifiable work on patching, assets, logging, identity, supply chain, and AI tool permissions. AI improves attacker trial speed. Defenders must improve discovery, judgment, and remediation speed as well.&lt;/p&gt;
&lt;h2 id=&#34;references&#34;&gt;References
&lt;/h2&gt;&lt;ul&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://cloud.google.com/blog/topics/threat-intelligence/ai-vulnerability-exploitation-initial-access&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Google Cloud Blog: GTIG AI Threat Tracker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://cloud.google.com/blog/topics/threat-intelligence/apt45-north-korea-digital-military-machine&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;Google Cloud Blog: APT45 North Korea&amp;rsquo;s Digital Military Machine&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class=&#34;link&#34; href=&#34;https://apnews.com/article/926aea7f7dc5e0e61adce3273c55c6d4&#34;  target=&#34;_blank&#34; rel=&#34;noopener&#34;
    &gt;AP: Google disrupts hackers using AI to exploit an unknown weakness&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</description>
        </item>
        
    </channel>
</rss>
