Why Ethical AI Must Act, Not Just Talk: A Conversation With ActionAI CEO Miriam Haart

As AI rapidly shifts from experimental tools to systems that execute real-world decisions, questions of bias, safety, and accountability are no longer theoretical but operational. In this CloudTweaks interview, we speak with Miriam Haart, CEO of ActionAI, Stanford-trained computer scientist, and star of Netflix's My Unorthodox Life. Haart sits at the intersection of technology, ethics, and culture, advocating for AI systems that are transparent, reliable, and accountable. We explore how ActionAI is redefining "AI that acts," why black-box models fall short in mission-critical environments, and how ethical design can ensure humans remain in control as AI becomes deeply embedded in enterprise decision-making.

The Entry Point: You've moved from Stanford's CS department to the center of the AI safety conversation. What is the core mission of ActionAI that you feel is currently missing from the broader tech ecosystem?

ActionAI's core mission is to execute safe, reliable AI. At Stanford, I studied Ethics in AI, and the two main pillars of safe, ethical AI are transparency and accountability. They are also the two pillars that are lacking in the broader tech ecosystem. The tech ecosystem has mostly gotten away with what is known as "Black Box AI" because there is still such a value add in non-mission-critical automations. Automations that connect your email to your Slack to your drive are extremely powerful. But what about an automation that does internal auditing? Or deep compliance analysis and reporting? That requires AI that can be counted on. That's what's missing in the broader tech ecosystem, and it's our mission here at ActionAI to solve it. To do so, we've built reliability from the ground up. We created Action, our platform for building reliable AI automations, which lets developers evaluate an orchestration on a node-by-node level against ground truth data. We have our ExEx (Explainable Exceptions) protocol, which reroutes automations to humans with an explanation when AI confidence drops. We've also implemented live monitoring, LLM optimizers, and even an agent that assists you in developing these mission-critical automations.

AI That Acts: CloudTweaks focuses heavily on digital transformation. How does "AI that acts" change the ROI for an enterprise compared to the "AI that just talks" models we have seen?

AI that acts completely transforms the role of AI at the enterprise level. It takes AI from analysis and recommendation to reliable, executed action. Most companies spend up to 25% of their revenue on operational costs: things like invoice processing, compliance reporting, vendor management, and internal auditing. Our platform allows enterprises to automate a majority of their operational needs with the level of reliability that mission-critical workflows demand. That alone drives significant ROI. But we take it a step further. Beyond cost reduction, our AI enables companies to expand their margins by executing on new strategies for growth and expansion. It's not just about doing what you already do more efficiently. It's about giving enterprises the capacity to pursue opportunities they didn't have the bandwidth for before. That's the real transformation. AI that talks gives you a report. AI that acts gives you results.

The "Unorthodox" Edge: You've lived in worlds defined by strict rules and "fundamentalist" structures. How does that personal history help you spot the "hidden biases" in AI training data that others might miss?

Growing up in a fundamentalist environment taught me to never just accept things at face value. I learned early on to question the status quo, to dig deeper, and to always ask why something is the way it is. That mindset has carried directly into how I approach AI and how we operate at ActionAI.

We don't like to make assumptions at this company. We're very intentional about putting in the time, the effort, and the research needed to make informed decisions. That applies to everything, but especially to how we handle AI training data. Most companies rely on data that already exists on the internet, which means they're inheriting whatever biases are already baked into that data. We take a different approach. We use a dedicated AI labeling team to create custom training data that is specialized and specific to the use cases we're solving for. We're not just accepting what's available. We're putting in the work to build data sets that are accurate, intentional, and reflective of the outcomes we want to see. That level of rigor comes from a philosophy of never assuming something is right just because it's the default.

The 88% Bias Crisis: With 88% of organizations facing negative impacts from AI bias at scale, what is the first step a CEO must take to move from "reactive" to "proactive" safety?

The first step is to have clear protocols in place for how you manage AI within your ecosystem. That means properly laying out where AI is being used, in what ways it's being used, and importantly, in what ways it should or should not be used. Most companies deploy AI and then react when something goes wrong. A proactive approach means defining those boundaries before anything breaks.

The second part of that protocol is continuous re-evaluation. It's not enough to set guidelines once and move on. You need to be consistently benchmarking your AI against ground truth data to see how it's performing inside your company. Not how it performed six months ago, but how it's performing right now. That's how you catch bias and accuracy issues before they scale into real business problems. At ActionAI, this philosophy of continuous evaluation is core to everything we build: we provide the tools to verify constantly.

The Authenticity Gap: On your podcast Faking It, you dive into what's real. In an era of deepfakes, how is ActionAI building "trust infrastructure" for the enterprise?

At ActionAI, we've built trust infrastructure across three key layers. The first is our ExEx protocol, Explainable Exceptions. When our AI's confidence drops below a certain threshold, it reroutes the task to a human with a clear explanation of why it couldn't complete the action confidently. You're never blindsided by a mistake. The second is our full evaluation lifecycle. Every automation is evaluated on a node-by-node level against ground truth data, not as a one-time check but as a continuous process of benchmarking and validating accuracy at every step. The third is live monitoring. Once an automation is deployed, we have real-time visibility into how it's performing. If accuracy dips or patterns shift, you know immediately. When you put all three together, trust is earned and continuously verified. The companies that win will be the ones that build on infrastructure where they never have to guess.

The SITT Test & Cognitive Freedom: In your TED Talk, you discuss thinking freely. As AI becomes more integrated into our decision-making, how can we ensure technology remains an assistant rather than a fundamentalist influence?

This is a great question, and the answer lies in how we interact with AI. Just like any other system, whether it's a social system or a social media platform, there are always forces that can influence how we think. It's very important that we develop the ability to analyze and reason on our own, independent of any external system.

That said, avoiding AI because it can influence our decision-making isn't necessarily the right approach either. AI can surface insights we may not have seen on our own and can genuinely lead to better decisions. The issue isn't that AI has influence. The issue is when that influence is invisible. When you don't understand how or why AI reached a certain conclusion, that's when it starts to function like a fundamentalist system, one that tells you what to think without showing you why.

The best way to prevent that is transparency and accountability. When you can see how AI is making its decisions, you stay in control of your own thinking. You can evaluate the reasoning, challenge it, and ultimately make a more informed choice. That's the philosophy we live by at ActionAI. We don't ask people to blindly trust AI. We make AI explainable so that humans always remain the final decision-makers.

Female Leadership in Deep Tech: As a founder and activist, what specific "diversity of thought" do you believe is necessary at the board level to prevent AI from repeating historical social errors?

Diversity of thought is a structural requirement if you want to build AI that doesn't repeat the mistakes society has already made. AI learns from data, and data reflects the world we've built, including all its biases. What I've learned from my own experience is that the people who've questioned the status quo in their own lives are often the best at spotting bias in technology. That perspective is incredibly valuable. Boards need people who bring ethical reasoning, not just business optimization. They need leaders from non-traditional backgrounds who understand what it means to be on the receiving end of biased systems. And they need people who are comfortable asking hard questions even when the pressure is to move fast. At ActionAI, we don't make assumptions. We put in the research and diverse perspectives needed to make decisions that hold up.

From VR to AI: You taught Virtual Reality at Stanford. Do you see AI and VR converging in a way that requires even stricter safety protocols than what we have today?

AI and VR have already converged in different ways. Many games use AI to create really incredible experiences, and as those experiences deepen, the potential for manipulation increases too. The stakes of getting it wrong go up significantly when someone is fully immersed. Having taught VR at Stanford and now working at the center of AI safety at ActionAI, it's clear to me that the principles are the same regardless of the environment. You need transparency in how AI makes decisions. You need evaluation against ground truth so you can verify accuracy. You need monitoring so you can catch problems in real time. And you need a protocol like ExEx that steps in when the AI isn't confident, rather than letting it push through and hope for the best. Whether it's a financial automation or a virtual simulation, the philosophy doesn't change. AI needs to be accountable.

The Middle Eastern Tech Bridge: Through your activism and the UAE partnership, how do you see AI acting as a bridge for cross-border collaboration in the coming years?

I see AI as a huge bridge for collaboration in this region. Both Israel and the Gulf are investing heavily in AI, which opens a new path toward collaboration in the region. The UAE is actively investing in AI initiatives and trying to transform its core operations at scale. Israel, with its startup mentality, is developing security-focused AI solutions at scale. I think it is natural for AI to become one of the strongest mechanisms for mutual collaboration and opportunity in the region, because both sides have so much to gain from working together.

The 2026 Roadmap: Looking ahead, what is the one major shift in AI "actionability" you expect to see by the end of this year?

The biggest shift I see in 2026 is the move from AI experimentation to AI execution in mission-critical operations. For the last couple of years, most enterprises have kept AI in safe, low-risk environments. Content generation, summarization, analysis. Those are valuable, but they're still in the category of AI that talks. AI that gives you information but doesn't act on it. This year, I expect a major wave of companies crossing into AI that acts. AI that executes on financial workflows, compliance processes, procurement, and operations. The companies that succeed won't be the ones who deploy the fastest. They'll be the ones who deploy with the right reliability and safety infrastructure in place. You can't run mission-critical workflows on hope and a chatbot. You need evaluation, monitoring, and explainability built in from the start. That's exactly what we've built at ActionAI, and I think 2026 is the year the market catches up to that reality.
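The ExEx (Explainable Exceptions) pattern Haart describes in this interview — execute automatically when the model is confident, otherwise hand off to a human along with an explanation — can be illustrated with a minimal sketch. This is not ActionAI's actual implementation; the `Result` type, the `route` function, and the 0.90 threshold are all hypothetical, chosen only to show the control flow.

```python
from dataclasses import dataclass

# Assumed cutoff; a real system would tune this per workflow.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Result:
    action: str        # what the automation wants to do
    confidence: float  # model's self-reported confidence, 0.0-1.0
    explanation: str   # why the model is (or is not) confident

def route(result: Result) -> str:
    """Execute when confident; otherwise escalate with the explanation attached."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return f"EXECUTED: {result.action}"
    # Below threshold: never push through -- reroute to a human with context.
    return (f"ESCALATED TO HUMAN: {result.action} "
            f"(confidence {result.confidence:.2f}): {result.explanation}")

print(route(Result("approve invoice #1042", 0.97, "matches PO and vendor history")))
print(route(Result("flag compliance gap", 0.61, "ambiguous clause in section 4")))
```

The design point is the second branch: the low-confidence path returns the explanation alongside the deferred action, so the human reviewer sees why the automation stopped rather than being blindsided by a silent failure.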
