






Responsibilities:
- Develop AGI alignment and control frameworks
- Build AI value alignment systems
- Create AI risk assessment models
- Lead international AI safety standardization efforts

Requirements:
- PhD in AI Safety/Alignment
- 10+ years of experience in AI ethics and safety research
- Prior contributions to safety teams at Anthropic or OpenAI
- Publications at top AI safety conferences


