ANALOGY PLUS
Growing companies hit an invisible ceiling when meaning and operations fragment. We help you redesign your operating system so AI and humans can co-create effectively.
What we do
We help founder-led companies (roughly $1–10M revenue) get operational clarity and an intelligent operating system. We start from shared meaning—a clear model of how your business actually works—then build AI-native systems on top so data, decisions, and teams align instead of fragmenting.
We also run a workshop and 1-on-1 sessions for those who want to learn the approach or go deeper. Workshop & sessions →
Projects
Work we've done with others and products we've built—operational clarity, shared meaning, and systems that scale.
CIMA — Post-disaster housing repair matching
We supported the design of the Convergence, Inventory, Matching, and Assignment (CIMA) platform for the NSF Civic Innovation Challenge. CIMA connects donated materials and volunteer labor with repair needs for displaced, vulnerable households after severe weather, reducing inequities in recovery times. Work included systems architecture, API design, and aligning supply with demand so recovery organizations can direct resources to the households that need them most.
ScitoSim — Human-relevant safety intelligence
We integrated existing apps and databases into the ScitoSim web platform, led a redesign, and implemented solutions for transcriptomics, data synthesis, and PBPK (physiologically based pharmacokinetic) modeling. The platform helps toxicologists and regulatory professionals understand what is known, what remains uncertain, and how to address gaps in human safety assessment.
Signals — Career transition and values alignment
Our own product: a career transition tool that helps people align their careers with their values. We designed and built the platform at getsignals.ooo to support clearer, more intentional career decisions.
RiskPulse — Personal AI and labor risk assessment
Our own product: a platform that gives people a personal "elevation certificate" for AI and labor risk—a picture of where they sit in risk space, not just a generalized map. We apply catastrophe-risk thinking (e.g., FEMA-style frameworks) to model exposure, vulnerability, and adaptation by role and task. Users get a score, a near-term window of safety, and prescriptive actions; the system bootstraps from usage data to improve the model over time.
Work with us
We help you get operational clarity and an intelligent operating system—shared meaning first, then AI-native systems that scale. Get in touch to discuss your goals, or explore our workshop and 1-on-1 sessions.