Securing AI at scale: Databricks and Noma close the inference vulnerability gap

By Louis Columbus

CISOs know precisely where their AI nightmare unfolds fastest. It’s inference, the vulnerable stage where live models meet real-world data, leaving enterprises exposed to prompt injection, data leaks, and model jailbreaks.

Databricks Ventures and Noma Security are confronting these inference-stage threats head-on. Backed by a fresh Series A round led by Ballistic Ventures and Glilot Capital, with strong support from Databricks Ventures, the partnership aims to close the critical security gaps that have hindered enterprise AI deployments.

“The number one reason enterprises hesitate to fully deploy AI at scale is security,” said Niv Braun, CEO of Noma Security, in an exclusive interview with VentureBeat. “With Databricks, we’re embedding real-time threat analytics, advanced inference-layer protections, and proactive AI red teaming directly into enterprise workflows. Our joint approach finally enables organizations to accelerate their AI ambitions safely and confidently,” Braun said.

Securing AI inference demands real-time analytics and runtime defense, Gartner finds

Gartner’s recent analysis confirms that enterprise demand for advanced AI Trust, Risk, and Security Management (TRiSM) capabilities is surging. Gartner predicts that through 2026, over 80% of unauthorized AI incidents will result from internal misuse rather than external threats, reinforcing the urgency for integrated governance and real-time AI security.

Traditional cybersecurity prioritizes perimeter defenses, leaving AI inference vulnerabilities dangerously overlooked. Andrew Ferguson, Vice President at Databricks Ventures, highlighted this critical security gap in an exclusive interview with VentureBeat, emphasizing customer urgency around inference-layer security. “Our customers clearly indicated that securing AI inference in real time is crucial, and Noma uniquely delivers that capability,” Ferguson said. “Noma directly addresses the inference security gap with continuous monitoring and precise runtime controls.”

Braun expanded on this critical need. “We built our runtime protection specifically for increasingly complex AI interactions,” Braun explained. “Real-time threat analytics at the inference stage ensure enterprises maintain robust runtime defenses, minimizing unauthorized data exposure and adversarial model manipulation.”

Read the full article at VentureBeat.
