#TrumpOrdersFederalBanOnAnthropicAI

The artificial intelligence landscape has entered a decisive geopolitical phase following an action taken under Donald Trump directing federal agencies to halt operational use of technologies developed by Anthropic. What initially appeared to be a procurement dispute has evolved into a broader strategic confrontation over who ultimately controls advanced AI deployment in national defense systems.

At the center of the controversy stands Anthropic's flagship model, Claude, and the ethical limitations embedded in its military-use policies. CEO Dario Amodei has consistently maintained restrictions against deployment in weapons of mass destruction, autonomous lethal systems, and mass surveillance infrastructure — guardrails that reportedly clashed with evolving defense expectations from the Pentagon and officials within the White House.

Following public criticism, Defense Secretary Pete Hegseth moved to classify the firm under a supply chain risk designation, triggering mandatory disengagement across federal contractors and defense-aligned enterprises. The decision includes a structured six-month transition window allowing agencies to phase out existing integrations while evaluating alternative AI providers capable of meeting unrestricted defense compliance standards.

Market observers now see this as a defining inflection point where AI governance, national security doctrine, and corporate ethics intersect at scale. The broader implication extends beyond one company: future defense contracts may increasingly favor AI systems architected with sovereign override capabilities, ensuring government primacy in deployment decisions. Meanwhile, innovation leaders face mounting pressure to clarify whether ethical constraints represent responsible stewardship or strategic friction in an era where artificial intelligence is no longer viewed as experimental infrastructure, but as core geopolitical leverage.