Member of Technical Staff, Agent Experience
We’re building the future of Personal Intelligence™: AI agents that feel authentic, build trust, and amplify human agency. We need someone who understands human-AI interaction at a deep level.
This role sits at the intersection of behavioral science, UX research, and AI systems design. You’ll define how our agents speak, behave, and earn trust, and you’ll build the frameworks that prove it.
What You’ll Do
- Define voice, tone, and behavioral guidelines for expert AI agents that feel alive and authentic
- Build evaluation frameworks that quantify trust, empathy, and agency, not just accuracy
- Run behavioral experiments that reveal how users perceive and bond with different agent personalities
- Develop trust and autonomy metrics that become company-wide standards
- Create stress-test protocols for edge cases (confusion, vulnerability, disagreement, failure)
- Collaborate with engineering to embed persona consistency into every interaction
What We’re Looking For
- A behavioral scientist disguised as a technologist. You understand both human psychology and product execution
- Experienced in conversation design, UX research, or cognitive psychology applied to AI or HCI
- You've designed or evaluated conversational systems, chatbots, or voice assistants at scale
- Fluent in experimental design, qualitative insight, and quantitative validation
- You can translate human truths into structured systems and measurable outcomes
- You’re allergic to fluff. You care about why people trust AI, not how flashy it looks
What Sets You Apart
- You’ve shipped products where tone and timing changed outcomes (therapy, coaching, finance, safety, etc.)
- You’ve published or led research on trust calibration, cognitive bias, or user perception in AI
- You can prototype and test experiments yourself — no academic overhead
- You can teach engineers how to measure authenticity and agency like they measure latency
- You think trust is a system, not a survey
Success Metrics
- Every agent feels distinct yet consistent with its expert voice
- Users describe agents as trustworthy, respectful, and authentic
- Evaluation frameworks catch behavioral drift before users do
- Research insights turn directly into measurable product wins
- You raise the standard for how AI interacts with humans, permanently
We’re not building another chatbot. We’re building the operating system for human-AI collaboration, where privacy, agency, and taste define intelligence. You’ll report directly to the CTO and shape how millions experience trust in AI.
If you've ever studied why people confide in machines and can turn that insight into shipped systems, we want you. Send us your background and one short example of how you've measured trust in an AI or product.
How to Apply
Interested in this role? Send us a note at [email protected] with your background and what excites you about building the future of Personal Intelligence™.

