Bitget's X Account Hijacked by Its Own AI — A Developer Used the Wrong Access Token
A developer mistake at Bitget turned the crypto exchange's official X account into a live command interface for its own AI agent on April 1, 2026. The cause was straightforward and entirely human: a developer deploying a new feature for GetClaw — Bitget's no-install autonomous AI trading agent — accidentally used the wrong access token, granting GetClaw full control over the @Bitget X account instead of the intended test environment.
The result was immediate and visible to Bitget's entire follower base. Engagements with any post on the @Bitget account — likes, replies, retweets — were interpreted by GetClaw as commands. The AI agent, now operating with legitimate OAuth credentials for one of the largest crypto exchanges in the world, began responding to user interactions as if they were instructions. Reports emerged of users being prompted to issue direct commands: change the profile logo, follow specific accounts, and deploy tokens.
Bitget acknowledged the incident publicly and urged users not to engage with any posts on the account until the team had regained control. The exchange confirmed no trading systems, user funds, or customer data were affected — the compromised access was limited to social media account control, not exchange infrastructure.
What GetClaw is — and why the token mattered
GetClaw is Bitget's autonomous AI trading agent, launched in March 2026 and built on the OpenClaw framework — the widely adopted open-source AI agent platform that surpassed 250,000 GitHub stars in under four months after its release. GetClaw requires no installation: it activates in seconds, connects directly to Bitget's Agent Hub infrastructure, and monitors markets, analyzes portfolios, and executes actions across Bitget's App, Telegram, Discord, and WhatsApp.
The security architecture Bitget publicly described for GetClaw at launch emphasized multi-layer isolation: "GetClaw uses a multi-layer isolation model, separating identity verification, memory storage, permissioned access, and trading credentials, ensuring user accounts are protected while the agent operates autonomously." The April 1 incident did not compromise that trading layer. What it did compromise was the social media layer — a credential category that sits outside the trading isolation model but carries its own significant risks for a platform whose brand and community communication run through X.
The mechanism of the incident is a classic credential mismatch error. OAuth access tokens are the keys that grant an application permission to act on behalf of a specific account. In this case, a developer deploying a new GetClaw feature configured the deployment with a production credential — the token for the @Bitget X account — rather than the credential for the test account the feature was intended to target. GetClaw, operating correctly from its own perspective, began exercising the full permissions that token provided.
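The failure mode is easy to sketch. In the hypothetical snippet below (the store layout, token strings, and `DEPLOY_ENV` variable are illustrative, not Bitget's actual tooling), both the test and production lookups succeed by design — so a single wrong environment value hands the agent the production credential without any error being raised:

```python
import os

# Hypothetical illustration: two OAuth tokens keyed by environment.
# Nothing in the lookup distinguishes a deliberate production deploy
# from an accidental one.
TOKEN_STORE = {
    "test": "oauth-token-sandbox-account",       # scoped to a sandbox X account
    "production": "oauth-token-bitget-account",  # full control of @Bitget
}

def resolve_access_token(env: str) -> str:
    """Return the OAuth token for the given deployment environment."""
    try:
        return TOKEN_STORE[env]
    except KeyError:
        raise ValueError(f"unknown environment: {env!r}")

# The mismatch: the developer intends a test deploy, but the
# environment variable points at production. The call succeeds.
os.environ["DEPLOY_ENV"] = "production"  # should have been "test"
token = resolve_access_token(os.environ["DEPLOY_ENV"])
# The agent now holds the production credential and will use it faithfully.
```

The point of the sketch is that this is a *silent* success: the error surfaces only through the agent's subsequent behavior, not through the deployment itself.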
What happened on the account
Once GetClaw had control of the @Bitget X account, the AI's interaction model — designed to treat user engagements as commands — turned Bitget's public posts into an open command prompt. Any follower who engaged with a post was potentially issuing an instruction the agent would act on. Users reported receiving prompts and responses consistent with GetClaw's command interface, including requests to change the account's profile logo, follow other accounts, and deploy tokens.
Token deployment instructions on a compromised exchange account with millions of followers carry obvious risks. In January 2026, a similar scenario played out when scammers hijacked OpenClaw's own X account during a handle transition and immediately used it to launch a fake CLAWD token on Solana, driving the token to over $16 million in market capitalization before it collapsed. Bitget's community was aware of that precedent. The exchange's rapid public warning to stop engaging appears to have limited the window for similar exploitation in this incident.
The broader pattern: AI agents and credential hygiene
This incident is the third significant accidental credential or access control failure involving AI systems in the past week. On March 26, Anthropic exposed nearly 3,000 internal files including details on its unreleased Claude Mythos model through a CMS misconfiguration. On March 31, Anthropic's Claude Code source code leaked via a misconfigured npm source map. And now on April 1, GetClaw was handed control of a major exchange's public account through a misdeployed access token.
All three incidents share a structural cause: the deployment pipeline for AI agents creates new categories of credential management risk that development teams are still learning to handle. Traditional software deployment errors tend to break functionality — a wrong database connection string returns an error. AI agent deployment errors can silently succeed in entirely the wrong context, as the agent faithfully exercises whatever permissions it has been given, doing exactly what it was built to do, just pointed at the wrong target.
The SlowMist and Bitget joint security research report published in March 2026 flagged exactly this class of risk: "Attacks don't need to be noticed — agents run 24/7, and malicious operations can persist for days undetected." In this case, the "attacker" was the developer's own tool, operating correctly. The detection was immediate because the effects were public. In a less visible environment — a trading account, a file system, an internal communication platform — the same class of error might run far longer before anyone noticed.
What Bitget should have had in place
The incident points to three specific controls that would have prevented or contained it. First, environment separation at the token level: production OAuth tokens and test tokens should live in entirely separate credential stores with deployment pipelines that cannot accidentally reference the wrong one. Second, permission scoping: even if GetClaw legitimately needed X account access for production features, the token used for a new feature deployment should carry the minimum permissions required for testing — read-only or scoped to a sandbox account — not full account control. Third, staged rollout with human confirmation: any new GetClaw feature that involves social media account access should require an explicit human sign-off before the production token is activated.
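The three controls compose naturally in a deployment pipeline. The sketch below is a minimal illustration under assumed names (`Credential`, `deploy_feature`, the scope sets, and the `signoff` flag are all hypothetical, not Bitget's tooling): test deploys are only handed the test store, a token carrying broader scopes than the feature declares is rejected, and a production token cannot activate without explicit sign-off:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    token: str
    environment: str   # "test" or "production"
    scopes: frozenset  # e.g. {"read"} or {"read", "write", "admin"}

# Control 1: separate stores. A test pipeline is only ever handed
# TEST_STORE, so it physically cannot reference a production token.
TEST_STORE = {"x-api": Credential("sandbox-token", "test",
                                  frozenset({"read"}))}
PROD_STORE = {"x-api": Credential("prod-token", "production",
                                  frozenset({"read", "write", "admin"}))}

def deploy_feature(store, name, required_scopes, signoff=False):
    cred = store[name]
    # Control 2: minimum-permission scoping. Refuse any credential
    # carrying more access than the feature declares it needs.
    if not cred.scopes <= required_scopes:
        raise PermissionError(
            f"{name}: token scopes {set(cred.scopes)} exceed "
            f"declared need {set(required_scopes)}")
    # Control 3: production activation requires explicit human sign-off.
    if cred.environment == "production" and not signoff:
        raise PermissionError(f"{name}: production token requires sign-off")
    return cred.token
```

Under this sketch, the April 1 error is blocked twice over: a test deploy declaring read-only needs would have rejected the full-control production token on scope alone, and even a correctly scoped production deploy would have stalled pending human confirmation.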
None of these controls are novel or technically complex. They are standard practices in OAuth security and API credential management. The challenge is that the speed of AI agent development — Bitget's own engineers were shipping new GetClaw features rapidly following the platform's March 2026 launch — creates pressure on deployment hygiene that traditional software timelines did not generate at the same intensity.
What users should know
For Bitget users: no trading funds, exchange accounts, or personal data were exposed in this incident. The compromised access was limited to the @Bitget X account. Any instructions received through that account during the period of GetClaw control — including prompts to follow accounts, change settings, or engage with token deployments — should be disregarded. Do not interact with any token deployment linked to this incident.
For developers building on GetClaw or any AI agent platform: the standard credential management advice applies with heightened urgency. Maintain strict separation between production and test access tokens. Scope every token to the minimum permissions the use case genuinely requires. Log and monitor all API actions your agent takes, and set up alerts for actions in sensitive categories — social media posting, financial transactions, file writes — that trigger human review before execution. The speed that makes AI agents valuable is the same property that makes credential errors consequential before anyone has time to intervene.
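The logging-and-review advice above can be sketched in a few lines. This is a minimal illustration under assumed names (the category strings, `execute_action`, and the in-memory review queue are hypothetical): every action is logged, routine actions run immediately, and actions in sensitive categories are held for human approval instead of executing:

```python
import logging

# Assumed sensitive categories; a real deployment would define its own.
SENSITIVE = {"social_post", "financial_tx", "file_write"}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

review_queue = []  # actions awaiting human approval

def execute_action(category: str, description: str, perform):
    """Log every requested agent action; run routine ones immediately,
    but queue sensitive ones for human review instead of executing."""
    log.info("agent action requested: %s | %s", category, description)
    if category in SENSITIVE:
        review_queue.append((category, description, perform))
        return "held-for-review"
    return perform()

# A routine read executes at once; a social media post is intercepted.
result_a = execute_action("market_read", "fetch BTC order book", lambda: "ok")
result_b = execute_action("social_post", "reply to user mention",
                          lambda: "posted")
```

The design choice worth noting is that the gate sits between decision and execution: the agent stays fast on routine work, while the narrow class of actions that can cause public or financial damage waits for a human, which is exactly the intervention window that was missing on April 1.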
The incident resolved the same day it began. GetClaw is a genuinely capable piece of technology, and the error that caused this was human, not architectural. But as AI agents gain access to more production systems across more organizations, the cost of that class of human error keeps rising — and the window to catch it before something irreversible happens keeps narrowing.