Turmeric AI

IT Services and IT Consulting

Berkeley, CA 47 followers

Ethics policy is the compass that guides artificial intelligence towards responsible and beneficial outcomes.

About us

Turmeric AI is a stealth project of industry professionals and PhD students from UC Berkeley and Stanford University. Our mission is to collaborate with early-stage AI start-ups and enterprise companies to develop custom ethics policy infrastructure that becomes an integral part of their culture, DNA, and social impact. We believe that companies that prioritize and build ethics policy infrastructure early on will attract talent that shares the same core values, ethics, and social causes.

Website
https://turmeric.ai/
Industry
IT Services and IT Consulting
Company size
11-50 employees
Headquarters
Berkeley, CA
Type
Partnership
Founded
2019

Updates

  • Turmeric AI reposted this

    View profile for Nick Moës

    Executive Director, The Future Society

    I am not sure people realize how significant this is for AI governance: #AI company Character.AI, reportedly serving ~20 million users monthly, has decided to unilaterally withdraw its service from users under 18 by November 25. Minors won't be allowed open-ended chats with AI on its platform, due to the health & #safety #risks it poses to users (and therefore the legal risks it poses to investors). This decision may come too late, since the chatbot in question is alleged to have pushed a teenager to take his own life, but it is a better reaction than that of its competitors OpenAI & Meta, whose products are also alleged to have resulted in deaths, and whose guardrails are still being fine-tuned and live-tested on society. This is a striking reminder that this AI technology and its products are the result of individuals' moral and ethical decisions. If a company can unilaterally decide to cut off a huge chunk of its user base, other companies can at the very least unilaterally decide to, e.g., invest in better safeguards, prevent serious incidents, or have public health authorities approve deployment of new models to limit their own legal risks. Additional links in comment.

  • Turmeric AI reposted this

    View profile for Max Mamoyco

    Founder & CEO @ Nozomi - Creating digital health products that bring positive emotions and engagement

    AI hallucinations cost healthcare startups up to 40% of their validation and launch time. Every wrong answer = extra QA cycles, lost clinician trust, and delayed pilots. In healthcare, hallucination isn’t just an accuracy issue. It’s a safety issue. Here’s how to handle the main types 👇 #ai #aiinhealthcare #aihealthcare

  • Turmeric AI reposted this

    View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    🚨 Gentle reminder: Starting today, LinkedIn will begin using your data to train its AI models.

    What LinkedIn will use:
    → Your profile (e.g. name, photo, work history)
    → Your posts, articles, comments, and polls
    → Any public content you share

    What they promise not to touch:
    → Private messages
    → Login information
    → Payment details

    If you do nothing, your profile, posts, and activity become free training data for AI models – including third-party ones – but you can opt out. Takes around 15 seconds: https://lnkd.in/emughyJ8

  • Turmeric AI reposted this

    View profile for Oliver Patel, AIGP, CIPP/E, MSc

    Head of Enterprise AI Governance @ AstraZeneca | Trained thousands of professionals on AI governance, AI literacy & the EU AI Act.

    Top 10 Challenges for AI Governance Leaders in 2025

    Working in AI governance is not for the faint-hearted. Every day you are confronted with knotty legal, compliance, ethical, technology, and people-related challenges, or usually a blend of all five! To further complicate things, there is no established blueprint or institutional memory for AI governance success. Working at the enterprise AI governance coalface for over 3 years has been rewarding and relentless in equal measure. Reflecting on the state of the field today, this post summarises my personal take on the Top 10 Challenges for AI Governance Leaders in 2025. The key message is that we have no choice but to be agile and adaptive, so that we do not apply AI governance designed for the AI of 3 years ago to the AI of today. You can read the full analysis in my new 2-part series on Enterprise AI Governance (link in comments). It covers each of the 10 challenges in detail and offers 3 practical, actionable solutions for each one.

    1. The Democratisation Dilemma
    2. The Volume and Velocity Challenge
    3. Refining the risk-based approach
    4. Streamlining digital governance silos
    5. AI engineering: building with foundational models
    6. Protecting confidential business data
    7. Vendor due diligence and oversight
    8. Open-source AI model oversight
    9. Engineering compliance by design
    10. Agentic AI governance: taking the human out of the loop

  • Turmeric AI reposted this

    View profile for Rizwan Tufail

    Group Chief Data Officer, PureHealth | Public Interest Professional | ex-Microsoft | Harvard MPA | Chicago Booth MBA | UChicago PhD ABD

    9 deadly sins every AI healthcare leader must avoid

    Most health systems don't fail because AI lacks potential. They fail because they skip the fundamentals that make adoption safe, reliable, and trusted. These 9 checkpoints reveal where implementation breaks down, and how to prevent it:

    ✔️ Start with a real clinical problem, not a tech pilot
    ✔️ Fix data bias before it silently skews patient outcomes
    ✔️ Treat compliance as continuous, not a one-time hurdle
    ✔️ Embed clinicians early to shape usable, trusted workflows
    ✔️ Validate performance in live clinical settings, not labs
    ✔️ Demand explainability before deploying any black-box model
    ✔️ Communicate AI's role transparently to patients and staff
    ✔️ Monitor outcomes post-launch as data and risks evolve
    ✔️ Address cultural resistance through trust, not training alone

    AI doesn't fail because of algorithms; it fails because of leadership gaps. The future of clinical AI depends on governance, communication, and the courage to slow down before scaling up. This checklist captures the 9 critical failure points leaders must watch for ⬇️

    🔄 Repost to help healthcare teams deploy AI responsibly.
    🔔 Follow Rizwan Tufail for more on clinical AI strategy and governance.

  • Turmeric AI reposted this

    View profile for Frank Kumli

    Transformative Innovation @ The Futuring Alliance

    2025: The State of AI in Healthcare!

    I. It's Happening Now
    1. Healthcare leads AI adoption: 22% vs. 9% overall.
    2. AI spend hit $1.4B, tripling since 2024.
    3. Eight AI unicorns drive transformation.

    II. The AI Imperative
    1. Kaiser and Mayo launch large-scale AI rollouts.
    2. Deployments cut documentation time >50%.
    3. 200+ use cases now span admin to care.

    III. How Leaders Choose AI
    1. Focus on mature tech, low patient risk, fast ROI.
    2. Cost is secondary to trust and reliability.
    3. Quick wins build adoption momentum.

    IV. Procurement Cycles
    1. Health systems buy 20% faster; outpatients 22%.
    2. Payers' cycles lengthen to 11+ months.
    3. Providers now lead enterprise AI velocity.

    V. Where Money Flows
    1. Providers spend 75% of all AI dollars.
    2. Top areas: scribes ($600M), coding ($450M).
    3. Engagement and auth grow 10–20× YoY.

    VI. Startups Capture 85%
    1. Agile AI-native firms outpace incumbents.
    2. Abridge, Ambience lead new adoption.
    3. Epic, Oracle face disruption risk.

    VII. Unlocking $740B
    1. AI converts manual admin into automation.
    2. Documentation + RCM = $38B opportunity.
    3. Startups augment, not replace, legacy systems.

    VIII. Expanding Services
    1. AI transforms prior auth, engagement, payer ops.
    2. Automates nurse and call-center tasks.
    3. 5–20× YoY market growth.

    IX. Ambient Scribes
    1. $600M market; Nuance 33%, Abridge 30%.
    2. Adoption flattening; weak loyalty.
    3. Startups expand into coding, billing.

    X. Payers React
    1. Expect surge in claims and coding volume.
    2. Fear overbilling and rising costs.
    3. Building audits and counter-AI tools.

    XI. Life Sciences
    1. 66% of pharma building proprietary AI.
    2. Focus: R&D (63%), clinical trials (40%).
    3. Leaders: Xaira, Recursion, Genesis Therapeutics.

    Make sure to check out this fantastic perspective by Greg Yap, Derek Xiao, Johnny Hu, Jean-Paul (J.P.) Sanday, Croom Beatty at Menlo Ventures here: https://lnkd.in/digHJMNz

    _____

    Stay Ahead of Transformative Innovation
    Follow The Futuring Alliance for regular insights, foresight, and practical tools to help your organization thrive in times of change. We support leaders across industries in turning future-focused ideas into real-world impact through collaboration, innovation, and bold action. Let's shape what's next, together.

    #ai #genai #healthcare #healthtech #digitalhealth #innovation #foresight #system #systemschange #strategy #venturing #impact

  • Turmeric AI reposted this

    We’re growing our team and looking for a talented Legal Analyst to join us at Sierra! 🚀 At Sierra, we’re building something special — a company that’s fast-moving, thoughtful, and deeply collaborative, where great people can do their best work. We’re looking for someone who brings a passion for building scalable systems, a collaborative mindset, and who’s eager to make an impact in a fast-paced, ambitious, and deeply mission-driven environment. If this sounds like you — or someone in your network who thrives at the intersection of AI innovation and real-world impact — I’d love to connect. Join us in shaping what’s next at Sierra: https://lnkd.in/dhfjBuPQ

  • Turmeric AI reposted this

    View profile for Kevin Fumai

    Asst. General Counsel @ Oracle ǀ AI Governance

    This next batch of #AIgovernance opportunities has some really important roles. And this has been another banner week, so I'll post 10+ more tomorrow.

    1️⃣ The Gates Foundation is looking for a Senior Advisor of AI Strategy & Ecosystem Partnerships to help harness ethical AI solutions to advance health and development outcomes. Read more and apply here: https://lnkd.in/eSp8gP8N
    2️⃣ Microsoft is looking for a Director of Research to explore how AI is reshaping work patterns and business functions across industries. Read more and apply here: https://lnkd.in/ewPJhjkR
    3️⃣ Microsoft is looking for a Senior Strategist of Public Policy within its Office of Responsible AI. Read more and apply here: https://lnkd.in/ezuE4mmv
    4️⃣ Google is looking for an AI Ethics and Safety Policy Researcher. Read more and apply here: https://lnkd.in/eaRYYkMf
    5️⃣ Google is also looking for an AI Red Teaming Lead within its Responsible Development and Innovation team. Read more and apply here: https://lnkd.in/efSykGvs
    6️⃣ Google has one more opening for a Research Scientist to focus on frontier AI safety and responsibility. Read more and apply here: https://lnkd.in/ek3yEnTM
    7️⃣ Scale AI is looking for a Research Lead for its new Security and Policy Research Lab to engage with AI Safety Institutes and key policymakers and stakeholders on the future of frontier AI research, evaluations, and benchmarks. Read more and apply here: https://lnkd.in/enG_4yRG
    8️⃣ The MacArthur Foundation is looking for a Director of AI Opportunity to help build and shape AI that serves communities, workers, and the public good. Read more and apply here: https://lnkd.in/ea_rTUgX
    9️⃣ The CISPA Helmholtz Center for Information Security (in Germany) is looking for a postdoc researcher to help tackle consent issues in genAI, especially for artists. Read more and apply here: https://lnkd.in/e2Ajp6T2
    🔟 The Journal of Computational Law & Legal Technology is accepting submissions for its inaugural issue, due in August 2026. Read more and make your submission here: https://lnkd.in/ecMJ8sQk

  • Turmeric AI reposted this

    View profile for Barbara Cresti

    Board advisor in Strategy and Digital Transformation ⎮ ex-Amazon, Orange ⎮ Founder & C-level executive ⎮ AI, IoT, Cloud, SaaS

    Think AI is biased? Try asking it in nine languages.

    Researchers from the Technical University of Munich and Technische Universität Darmstadt developed a benchmark called MAGBIG (Multilingual Assessment of Gender Bias in Image Generation) to test how gender bias appears in AI-generated images across nine languages, from English and German to Japanese, Korean, and Chinese.

    Key Insights

    ✅ Language radically changes bias
    ▫️ The same occupation, prompted in different languages, produced different gender distributions.
    ▫️ A Spanish prompt generated more male-dominant images than the same prompt in French, even though both languages use similar gender structures.
    ▫️ Even gender-neutral languages like Chinese or Korean produced stereotyped gender imagery.

    ✅ Masculine job titles = white men
    ▫️ Prompts like "doctor" or "accountant" (in any tested language) yielded mostly white, male-presenting images.

    ✅ Caregiving roles = women
    ▫️ Jobs like "nurse" or "childcare worker" skewed female, even when described neutrally.

    ✅ Neutral phrasing helps... a little
    ▫️ Inclusive prompts like "a person working as a doctor" or gender-neutral forms slightly reduced bias, but also lowered the relevance of the generated images.

    ✅ AI amplifies bias
    ▫️ The researchers found that the output images displayed stronger stereotypes than those present in the training data: the AI was exaggerating society's prejudices.

    Why this matters
    As AI becomes increasingly visual, these models shape our mental defaults about who belongs in roles of power, care, or creativity. This study reveals that something as simple as the language of your prompt can shift who appears, and who disappears, in those roles, with real consequences for:
    🔹 Hiring and workplace tech
    🔹 Education and media
    🔹 Design and marketing
    🔹 Multilingual AI systems

    📣 Bias mitigation in AI must go beyond better datasets. It requires understanding how language structure, cultural nuance, and social context shape what AI generates, especially in multilingual systems where the same prompt can lead to vastly different, and biased, outputs. AI fairness isn't just technical; it's linguistic and cultural. Ignoring that multiplies the risk of encoding stereotypes at scale, in every language we speak.

    #ArtificialIntelligence #GenderBias #ResponsibleAI #GenerativeAI #Leadership
