Portfolio jobs

Discover opportunities with Lionheart Ventures and our portfolio companies.

Evaluations Manager, Responsible Scaling Team

Anthropic

San Francisco, CA, USA · New York, NY, USA · Seattle, WA, USA · Remote
Posted on May 15, 2024

About Anthropic

Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Anthropic's Responsible Scaling Policy

Last summer we published our first Responsible Scaling Policy (RSP), which focuses on addressing catastrophic safety failures and misuse. In adopting such a policy, our primary goal has been to help turn high-level safety concepts into practical policies for fast-moving technical organizations and demonstrate the viability of these measures as possible standards.

Our Responsible Scaling Policy has been a powerful rallying point, with many teams' work over the last six months connecting directly back to major RSP workstreams. The progress we have made has required significant effort from teams across Anthropic, and there is much more to be done. Our new Responsible Scaling Team will:

  • Help leadership align on a practical approach to scaling responsibly that will raise the safety waterline in industry, inform regulation, and mitigate catastrophic risks from models
  • Rally teams internally to operationalize and implement this technical roadmap and set of high-level commitments, making object-level decisions as needed
  • Iterate internally on different approaches to safety challenges, feeding these learnings back into the high-level policy, and sharing our learnings with industry and policymakers

As we continue to iterate on and improve the original policy, we are actively exploring ways to incorporate practices from existing risk management and operational safety domains. While none of these domains alone will be perfectly analogous, we expect to find valuable insights from nuclear security, biosecurity, systems safety, autonomous vehicles, aerospace, and cybersecurity. We intend to build an interdisciplinary team to help us integrate the most relevant and valuable practices from each.

Note: For this role, we are looking for candidates who can start within three months. We will consider all candidates who can meet the organization's hybrid policy, provided they have significant (60%+) overlap with Pacific Time.


About the Role

The Evaluations Manager will play a critical role in developing Anthropic's evaluation strategy, ensuring that our models are thoroughly assessed for potential risks before being released. As part of the Responsible Scaling team, you will collaborate with cross-functional stakeholders and be responsible for developing an end-to-end safety testing framework that pushes the boundaries of our current evaluation capabilities. This includes threat modeling to anticipate novel failure modes, designing scalable infrastructure for conducting evaluations, synthesizing test results into actionable recommendations for mitigating identified risks, and driving continuous improvement based on the latest research and real-world insights.

Responsibilities:

  • Working with the Frontier Red Team to design an enhanced threat modeling framework, incorporate the latest research and industry best practices, and facilitate workshops with internal and external subject matter experts to inform our evaluation approach.
  • Building out repeatable processes and templates for synthesizing and communicating safety evaluation results and recommendations to key decision-makers, and establishing regular reporting cadences and feedback loops to drive continuous iteration and improvement.
  • Partnering with engineering leads to scope and kick off the development of a scalable, modular testing infrastructure to support the next generation of our AI safety evaluations.

You may be a good fit if you have:

  • 4+ years of experience in technical domains with a proven track record of leading complex, cross-functional initiatives from ideation to delivery.
  • Strong technical background, with the ability to make sound technical judgment calls and collaborate effectively with teams working in the Evaluations and Threat Modeling domains.
  • Demonstrated ability to make sound judgments in ambiguous or high-stakes environments. Adept at gathering input, evaluating trade-offs, and articulating clear recommendations.
  • Exceptional project management skills, with experience defining milestones, managing dependencies, and driving projects to successful completion in the face of shifting requirements or tight deadlines.
  • Excellent communication and stakeholder management abilities. Able to build trust and alignment across diverse audiences, translate complex technical concepts for non-technical stakeholders, and synthesize inputs into compelling narratives.
  • Strong people leadership skills, with a track record of building and motivating high-performing teams. Adept at fostering collaboration, providing mentorship and guidance, and influencing without authority.
  • Passion for AI safety and commitment to proactively identifying and mitigating potential risks.

Strong candidates may also have:

  • Experience working in a startup or fast-paced, rapidly growing environment. Comfortable wearing many hats and adapting to evolving needs and priorities.
  • Experience supporting high-performing teams, with a particular focus on fostering a culture of innovation, collaboration, and continuous improvement.
  • Track record of effectively communicating with and presenting to senior leadership teams and external stakeholders. Able to craft compelling narratives and influence decision-making at the highest levels.

The expected salary range for this position is:

Annual Salary:
$320,000 - $485,000 USD

Logistics

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

US visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Compensation and Benefits*

Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.

Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.

US Benefits - The following benefits are for our US-based employees:

  • Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
  • Comprehensive health, dental, and vision insurance for you and all your dependents.
  • 401(k) plan with 4% matching.
  • 22 weeks of paid parental leave.
  • Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
  • Stipends for education, home office improvements, commuting, and wellness.
  • Fertility benefits via Carrot.
  • Daily lunches and snacks in our office.
  • Relocation support for those moving to the Bay Area.

UK Benefits - The following benefits are for our UK-based employees:

  • Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
  • Private health, dental, and vision insurance for you and your dependents.
  • Pension contribution (matching 4% of your salary).
  • 21 weeks of paid parental leave.
  • Unlimited PTO – most staff take between 4 and 6 weeks each year, sometimes more!
  • Health cash plan.
  • Life insurance and income protection.
  • Daily lunches and snacks in our office.

* This compensation and benefits information is based on Anthropic’s good faith estimate for this position as of the date of publication and may be modified in the future. Employees based outside of the UK or US will receive a different benefits package. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance in our interviews or in a work trial.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.