AI safety concerns are heating up again: a well-known AI assistant received over 200,000 inappropriate requests within just a few days, many involving non-consensual deepfake generation. This is not only evidence of technological abuse; it also exposes serious ethical vulnerabilities in current AI systems, namely the lack of effective content moderation and user rights protections.
From non-consensual content generation to privacy violations, these problems are no longer theoretical concerns but real threats. Against a Web3 backdrop that emphasizes transparency and decentralization, the governance flaws of centralized AI platforms stand out all the more sharply.
The key questions now confront us directly: who should set the behavioral guidelines for these AI systems? How do we strike a balance between innovation and safety? These debates will shape not only the AI industry but the future direction of the entire tech ecosystem.