19 January 2026
Chips&Media and Visionary.ai Launch First AI-Based Image Processor

Chips&Media and Visionary.ai have announced a groundbreaking partnership to develop the world’s first fully AI-based image signal processing system. This innovative collaboration aims to replace traditional hardware-based image signal processors (ISPs) that have dominated digital imaging for decades. By leveraging advanced artificial intelligence, the companies intend to shift the entire image formation process into software that operates on neural processing units (NPUs).

The primary focus of this collaboration is to enhance video processing capabilities in real time, particularly in low-light conditions, which often present challenges for conventional cameras. As digital imaging expands from smartphones to autonomous vehicles and augmented reality devices, both companies recognize that the existing hardware architecture of ISPs is becoming increasingly inadequate.

Oren Debbi, co-founder and CEO of Visionary.ai, stated, “This is the first full end-to-end ISP pipeline that runs entirely on an NPU, without relying on a hardware ISP at all.” The new approach processes RAW sensor data directly on an NPU or GPU, enabling significant flexibility in tuning and optimization through over-the-air updates, without altering the physical silicon.
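Visionary.ai has not published its network design, but the general shape of such a pipeline can be sketched: a compact convolutional model that consumes packed Bayer RAW frames and emits RGB on whatever accelerator is present. Everything below, from the layer sizes to the Bayer packing, is an illustrative assumption rather than a description of the shipping system.

```python
# Minimal sketch of a neural-first RAW-to-RGB pipeline (illustrative only).
# The real Visionary.ai network is not public; layer sizes, names, and the
# Bayer packing below are assumptions for demonstration.
import torch
import torch.nn as nn

class TinyNeuralISP(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 4-channel packed Bayer (R, Gr, Gb, B) at half resolution.
        self.body = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 12, 3, padding=1),      # 12 = 3 RGB x (2x2 upsample)
        )
        self.up = nn.PixelShuffle(2)              # back to full resolution

    def forward(self, raw_packed):
        return torch.sigmoid(self.up(self.body(raw_packed)))

def pack_bayer(raw):
    """Pack an RGGB Bayer mosaic (H, W) into 4 half-resolution planes."""
    r  = raw[0::2, 0::2]
    gr = raw[0::2, 1::2]
    gb = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return torch.stack([r, gr, gb, b]).unsqueeze(0)  # (1, 4, H/2, W/2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyNeuralISP().to(device).eval()
raw = torch.rand(1080, 1920, device=device)          # stand-in sensor frame
with torch.no_grad():
    rgb = model(pack_bayer(raw))                     # (1, 3, 1080, 1920)
```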

A key aspect of the system is its ability to train a custom neural network for each image sensor. Visionary.ai has developed an automated training platform capable of producing a new model within a few hours from only a small number of video clips. Debbi emphasized that this rapid integration allows the company to scale across sensors and platforms without the lengthy tuning cycles associated with traditional ISPs.
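The automated training platform itself is proprietary, but per-sensor adaptation of this kind typically boils down to a short supervised fine-tuning loop over paired noisy RAW and clean reference frames from the new sensor. The function below is a hypothetical sketch of that idea and reuses the kind of model sketched above; none of it reflects Visionary.ai's actual tooling.

```python
# Illustrative per-sensor fine-tuning loop (hypothetical; the actual
# automated training platform is proprietary). Assumes paired training
# data: packed RAW frames and clean reference frames for the new sensor.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def fine_tune_for_sensor(model, raw_frames, reference_frames,
                         epochs=10, lr=1e-4, device="cpu"):
    """raw_frames: (N, 4, H, W) packed Bayer; reference_frames: (N, 3, 2H, 2W)."""
    loader = DataLoader(TensorDataset(raw_frames, reference_frames),
                        batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()        # L1 is a common choice for image restoration
    model.train().to(device)
    for _ in range(epochs):
        for raw, ref in loader:
            raw, ref = raw.to(device), ref.to(device)
            opt.zero_grad()
            loss = loss_fn(model(raw), ref)
            loss.backward()
            opt.step()
    return model
```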

While AI-enhanced ISPs are already present in many smartphones and cameras, the partners argue that those systems remain predominantly hardware-centric. Manufacturers typically integrate neural networks as isolated components that never touch the core RAW data, which is still handled by fixed-function hardware. Debbi explained, “The image formation pipeline is neural-first, not a classic ISP with a few AI add-ons.”
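The distinction Debbi is drawing can be made concrete with a deliberately simplistic NumPy caricature: in the conventional layout, fixed-function stages own the RAW data and any neural network only touches the rendered RGB afterwards. Every stage below is a crude stand-in, not a real ISP block.

```python
# A minimal, runnable caricature of the "hardware ISP plus AI add-on"
# layout. Only the final ai_enhance() step is where a typical bolted-on
# network operates, after the RAW data has already been rendered to RGB.
import numpy as np

def fixed_function_isp(raw, black=64, gains=(2.0, 1.0, 1.6), gamma=2.2):
    """raw: (H, W) RGGB mosaic with values in [0, 1023]."""
    x = np.clip(raw.astype(np.float32) - black, 0, None) / (1023 - black)
    # Nearest-neighbour 'demosaic': one sample per 2x2 quad per channel.
    r, g, b = x[0::2, 0::2], x[0::2, 1::2], x[1::2, 1::2]
    rgb = np.stack([r * gains[0], g * gains[1], b * gains[2]], axis=-1)
    return np.clip(rgb, 0, 1) ** (1 / gamma)          # tone curve

def ai_enhance(rgb):
    # Placeholder for the add-on network: it only sees processed RGB,
    # so information discarded by the fixed blocks cannot be recovered.
    return rgb

raw = np.random.randint(0, 1024, (1080, 1920)).astype(np.float32)
out = ai_enhance(fixed_function_isp(raw))             # (540, 960, 3)
```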

Conventional camera-control functions such as exposure and white balance can still rely on traditional methods, but Debbi anticipates that AI-based approaches to these controls will advance rapidly. The neural-first pipeline allows image quality to be optimized without being constrained by fixed hardware blocks or manual parameter tuning.
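For context, the traditional control loops in question are classic heuristics, for example gray-world white balance and mean-brightness exposure control. The sketch below shows those textbook versions; the article does not describe how Visionary.ai actually implements camera control.

```python
# Textbook-style camera control heuristics, shown only to illustrate what
# the classical side of a camera stack still does. Not Visionary.ai code.
import numpy as np

def gray_world_gains(rgb):
    """Per-channel gains that pull the average color toward neutral gray."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    return means.mean() / np.clip(means, 1e-6, None)

def next_exposure(current_exposure_us, rgb, target_luma=0.18):
    """Scale exposure time so mean luminance approaches the target."""
    luma = (rgb @ np.array([0.2126, 0.7152, 0.0722])).mean()
    return current_exposure_us * target_luma / max(luma, 1e-6)

frame = np.random.rand(540, 960, 3)
gains = gray_world_gains(frame)            # three per-channel gains
balanced = np.clip(frame * gains, 0, 1)
exposure = next_exposure(10_000, frame)    # microseconds for the next frame
```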

The potential benefits of this AI-driven approach are particularly pronounced in challenging lighting conditions. Traditional ISP pipelines often struggle with noise suppression, leading to the loss of fine detail and forcing the use of sharpening algorithms that can create artificial-looking images. Debbi noted, “You see the biggest difference in the hard cases where classic ISPs have to trade off detail, noise, and artifacts — very low light, high dynamic range, and mixed lighting.”

The AI-based system promises cleaner shadows, more stable color, and reduced temporal artifacts in video. Additionally, the neural pipeline adapts to scene dynamics, minimizing issues such as ghosting and shimmer without compromising natural detail during subject movement.

While the initial focus is on video, Debbi acknowledges that still photography can also benefit from a fully AI-based ISP. The underlying architecture is designed to process sequences of frames, so manufacturers can apply the same pipeline to still capture as well. He noted that current on-device neural imaging often runs after the ISP, where vital sensor information has already been discarded.

Visionary.ai’s expertise in efficient RAW-domain processing positions the company to either replace existing ISPs entirely or integrate seamlessly with current pipelines to perform specific functions, such as AI denoising. The software-defined AI ISP can also address platforms with limited or no ISP hardware, broadening the scope of camera capabilities.
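That integration flexibility amounts to choosing where the neural stage sits in the capture path. The sketch below illustrates the two modes described, either replacing the pipeline outright or running a RAW-domain denoiser in front of an existing ISP, with simplified placeholders standing in for the trained models and the hardware pipeline.

```python
# Sketch of the integration modes: slot a learned RAW denoiser in front of
# an existing ISP, or bypass the hardware ISP entirely. All functions are
# simplified stand-ins so the example runs without a trained model.
import numpy as np

def neural_raw_denoise(raw):
    """Placeholder RAW-domain denoiser (a 3x3 box filter stands in for a model)."""
    p = np.pad(raw, 1, mode="edge")
    return sum(p[i:i + raw.shape[0], j:j + raw.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def legacy_hw_isp(raw):
    """Stand-in for the platform's existing fixed-function pipeline."""
    return np.clip(raw / 1023.0, 0.0, 1.0)

def neural_isp(raw):
    """Stand-in for the full neural pipeline (see the earlier sketch)."""
    return np.clip(neural_raw_denoise(raw) / 1023.0, 0.0, 1.0)

def process_frame(raw, mode="denoise_only"):
    if mode == "full_neural":
        return neural_isp(raw)                         # replace the ISP outright
    if mode == "denoise_only":
        return legacy_hw_isp(neural_raw_denoise(raw))  # AI stage feeds existing ISP
    return legacy_hw_isp(raw)                          # untouched legacy path

raw = np.random.randint(0, 1024, (540, 960)).astype(np.float32)
out = process_frame(raw, mode="denoise_only")
```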

One consideration with AI-based imaging is power consumption. The system allows manufacturers to operate in different modes, balancing power use against image quality based on application needs. Debbi stated, “We’re able to run on a very small NPU and consume only slightly more than imaging with a traditional ISP, and that gap continues to shrink.”
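The actual modes, model sizes, and precisions have not been disclosed; a hypothetical mode table such as the one below illustrates how a manufacturer might trade power against quality purely in software.

```python
# Hypothetical mode table for the power/quality trade-off described above.
# The real modes, model variants, and precisions are placeholders.
from dataclasses import dataclass

@dataclass
class IspMode:
    model_variant: str      # which network weights to load
    precision: str          # NPU arithmetic precision
    process_every_n: int    # 1 = every frame fully processed

MODES = {
    "low_power":   IspMode(model_variant="tiny",  precision="int8", process_every_n=2),
    "balanced":    IspMode(model_variant="small", precision="int8", process_every_n=1),
    "max_quality": IspMode(model_variant="large", precision="fp16", process_every_n=1),
}

def select_mode(battery_saver: bool, low_light: bool) -> IspMode:
    if battery_saver:
        return MODES["low_power"]
    return MODES["max_quality"] if low_light else MODES["balanced"]
```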

The WAVE-N NPU from Chips&Media is designed for high-throughput vision workloads and serves as a reference implementation for the AI ISP. This technology demonstrates an end-to-end neural imaging pipeline that can operate in real time on video-focused AI hardware. Notably, the AI ISP is hardware-agnostic, enabling manufacturers to adapt the software pipeline to various NPUs or GPUs, accommodating their system-on-chip (SoC) architecture, power requirements, and budget constraints.
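One common way to achieve that kind of portability, offered here purely as an illustration of the general approach rather than a description of the WAVE-N integration, is to export the trained pipeline to an interchange format such as ONNX and let each platform's runtime select its own execution backend.

```python
# Illustration of a hardware-agnostic deployment path via ONNX export.
# The tiny stand-in network keeps the example self-contained; it does not
# represent the actual model or how Chips&Media integrates it.
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Conv2d(4, 12, 3, padding=1), nn.PixelShuffle(2)).eval()
dummy = torch.rand(1, 4, 540, 960)
torch.onnx.export(model, dummy, "neural_isp.onnx",
                  input_names=["raw"], output_names=["rgb"])

# At runtime, ONNX Runtime walks the provider list until it finds one
# available on the target platform.
session = ort.InferenceSession(
    "neural_isp.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
rgb = session.run(None, {"raw": dummy.numpy()})[0]    # (1, 3, 1080, 1920)
```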

As the demand for enhanced imaging continues to grow, both companies are aware that traditional fixed-function ISPs are unlikely to vanish immediately. Nonetheless, the shift toward programmable AI compute is evident. According to Debbi, “Software updates faster than silicon, adapts better to new sensors and use cases, and ultimately reduces cost and complexity.”

Chips&Media and Visionary.ai aim to debut the fully AI-based ISP at CES 2026, positioning their collaboration as a significant step toward reshaping the imaging industry. As AI compute capabilities evolve and deployment tools mature, software-defined imaging pipelines are expected to surpass classical ISPs across a growing range of categories, heralding a new era in digital imaging technology.