Open-weight advanced AI models -- systems whose parameters are freely available for download and adaptation -- are reshaping the global AI landscape. As these models rapidly close the performance gap with their closed counterparts, they enable breakthrough research and broaden access to powerful tools. However, once released, their weights cannot be recalled, and their built-in safeguards can be bypassed through fine-tuning or jailbreaking, posing risks that current governance frameworks are ill-equipped to address. This report moves beyond the binary framing of ``open'' versus ``closed'' AI. We assess the current landscape of open-weight advanced AI, examining technical capabilities, risk profiles, and regulatory responses across the European Union, the United States, China, the United Kingdom, and international forums. We find significant disparities in safety practices across developers and jurisdictions, with no commonly adopted standards for determining when or how advanced models should be released openly. We propose a tiered, safety-anchored approach to model release, in which openness is determined by rigorous risk assessment and demonstrated safety rather than by ideology or commercial pressure. We outline actionable recommendations for developers, evaluators, standard-setters, and policymakers to enable responsible openness while investing in technical safeguards and societal preparedness.