
Real-Time Generative Creative, the Missing Layer in Out-of-Home

  • Felipe Ramírez-Rodríguez
  • Dec 7, 2025
  • 13 min read


Table of Contents


  1. Introduction

  2. The Structural Gap Between DCO and RTGC

    • 2.1 Why DCO Never Became Real-Time Creative

    • 2.2 How Research in Music and Generative Systems Informs RTGC

  3. Why OOH Needed RTGC

    • 3.1 AI-Accelerated Creative Workflows (Pre-Delivery Systems)

    • 3.2 Early RTGC Examples (Impression-Time Systems)

    • 3.3 Industry Signals Inside OOH

    • 3.4 Authenticity, Memory, and RTGC

    • 3.5 Attention and Measurement Trends

    • 3.6 Programmatic Stack as the Natural Home for RTGC

    • 3.7 Adaptive Environments and Contextual Variation

    • 3.8 Alignment With Delivery and Measurement Layers

  4. The Conditions That Make RTGC Possible

    • 4.1 Pre-Existing OOH Infrastructure

    • 4.2 Hardware Evolution Enabling RTGC

    • 4.3 Three Technical Developments Unlocking RTGC

      • 4.3.1 Maturation of Generative Models

      • 4.3.2 LLM Reasoning Layers

      • 4.3.3 Lessons From Generative Music and Identity Modeling

  5. How RTGC Works

    • 5.1 Generative Model Layer

    • 5.2 LLM Reasoning Layer

    • 5.3 LoRA Identity Layer

    • 5.4 Format-Native Generation

    • 5.5 Full RTGC Pipeline Execution

  6. Why RTGC Matters: The Industry’s Blind Spot

  7. RTGC and the Shift in Power Across OOH and Digital Advertising

  8. Conclusion


1. Introduction


Out-of-Home has matured into a data-driven, programmatic medium. Screens refresh in seconds. CMS systems schedule globally. DSPs optimize pacing and delivery. Audience intelligence integrates mobile data, sensors, and computer vision. Despite this progress, creative workflows remain static. They still operate on the logic of broadcast, an old-school concept, rather than real-time adaptation.


Real-Time Generative Creative (RTGC) changes this structure. RTGC generates customized images, motion, headlines, and audio at the exact moment an audience is identified. This aligns the creative layer with the intelligence, speed, and responsiveness already present in OOH infrastructure.


The result is a medium that behaves like a responsive, personalized creative system. Creative adapts to context, audience behavior, and environmental conditions while preserving strict brand integrity. RTGC, within the frame of personalization, becomes the operational bridge between data and execution.


An expanded technical version of this article will be published on the Outdoor Media Intelligence Blog (OMI Blog) in January 2026. If you want deeper analysis and workflow detail, you will find it there.


2. The Structural Gap Between DCO and RTGC


2.1 Why DCO Never Became Real-Time Creative

To understand RTGC, the industry must begin with accuracy about what it calls dynamic today. For more than a decade, Dynamic Creative Optimization (DCO) has been positioned as the creative mechanism that modernized OOH. DCO allowed operators to update headlines, rotate product images, swap weather-based variations, trigger promotions by daypart, and automate location-based messaging. These shifts mattered because they reduced manual labor and increased scheduling flexibility.


But the creative logic behind DCO never evolved. DCO does not generate anything. It rearranges elements inside a predefined template. The framing is fixed. The visual hierarchy is fixed. The creative is conceived months before it ever appears on a screen. The industry often calls this "real-time creative," yet the adaptability is limited to swapping predefined components inside layouts that never change.


It is important to ground this in proper context. Dynamic creative became possible as soon as programmatic systems allowed networks to ingest external data signals. Weather cues, daypart triggers, location logic, and countdown clocks were bolted onto templates. Programmatic improved automation and scale, but it did not change the creative blueprint. DCO never advanced beyond conditional replacement. Creative remained static in structure, even as the environment around the screen became dynamic.
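To make the distinction concrete, here is a minimal sketch of the two creative logics. Everything in it is hypothetical and illustrative; neither function reflects a real vendor API. DCO looks up a prebuilt asset from rules, while RTGC composes a new instruction from live signals:

```python
# Illustrative only: the structural difference between DCO and RTGC.
# Neither function reflects a real vendor API.

DCO_TEMPLATES = {
    ("rain", "morning"): "umbrella_promo_v2.png",
    ("sun", "evening"): "iced_drink_promo_v1.png",
}

def dco_select(weather: str, daypart: str) -> str:
    # DCO: conditional replacement inside a fixed set of prebuilt assets.
    return DCO_TEMPLATES.get((weather, daypart), "default_creative.png")

def rtgc_compose(weather: str, daypart: str, dwell_s: float) -> str:
    # RTGC: no prebuilt asset exists; a structured instruction is composed
    # from live signals and handed to a generative model at impression time.
    return (
        f"Brand-consistent hero image for a {weather} {daypart}, "
        f"message density tuned for roughly {dwell_s:.0f}s of dwell"
    )
```

However the rule set grows, dco_select can only ever return something that already exists; rtgc_compose has no asset library at all.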


This distinction matters. OOH has real-time delivery, real-time data, and real-time measurement, all happening in the real world. Creative is the last component still operating under a fixed, prebuilt framework, which leaves it out of step with the real-time nature of audience exposure to OOH ads.


2.2 How Research in Music and Generative Systems Informs RTGC

Part of this perspective came from my research practice at Alchemusical, a small hub where we explore AI, music, and sound. The studio explored generative tools early, originally for music and audio personalization inspired by pioneers like Dr. Lee Bartel (University of Toronto) and researchers like Teppo Särkämö. This work drew on my interest in teams such as Suno, led by Dr. Mikey Shulman and Dr. Georg Kucsko, and LUCID, developed by Dr. Russo and his team at TMU.


Exploring these systems exposed the value of identity modeling, consistency layers, and context‑responsive outputs. In some cases the application is personalized AI‑generated music (Suno, Udio, AIVA); in others, it is wellness tools such as LUCID, Brain.fm, Mubert, and especially SoundMind, a newer platform combining audio and visual therapy. These ideas were not developed as advertising solutions, but they shaped how I later understood the potential for real‑time adaptation in public space. The intuition that creative could respond to conditions, not templates, came from that period of experimentation and informed my view of RTGC long before these conversations entered the OOH industry.


If real‑time generative outputs already function in wellness, performance, and therapeutic contexts, why is the same concept not applied at scale in advertising? The truth is that it is already used, but not inside OOH. This is the gap dynamic creative never addressed. DCO automated variation. RTGC transforms communication. Modern generative AI finally provides the technical foundation to make this adaptation occur in true real time.


3. Why OOH Needed RTGC


Major digital platforms already run early versions of generative adaptation. These systems are not true real-time engines, but they signal where the market is moving. To avoid confusion, there are two clear categories.


3.1 AI-Accelerated Creative Workflows (Pre-Delivery Systems)

These platforms generate or modify assets during the campaign cycle. Output is pre-rendered, not impression-level.


  • Omneky: Multimodal generation tied to performance signals, creating new visual and copy variants during active campaigns.

  • Movio and Synthesia: Adaptive video engines that adjust pacing, structure, and segments based on audience attributes before rendering.

  • Waymark: Localized AI-generated video with automated voice, script logic, and scene assembly.

  • Pencil and Kive: Iterative visual experiment tools that generate new creative concepts instead of rotating static variants.

  • Meta Advantage+ Creative: Automated asset recombination using LLM reasoning to produce variations that did not exist in the upload set.

  • Google Performance Max: LLM-driven image and copy synthesis with updates tied to live performance feedback, but still produced ahead of delivery.


These systems prove that AI-driven creative is active across digital platforms, but none generate content at impression time. They remain pre-delivery engines.


3.2 Early RTGC Examples (Impression-Time Systems)

These platforms synthesize creative at impression time. They are rare, but they validate the RTGC model. Many of these deployments originate in Asia, where AI adoption curves differ from Western markets.


  • Netflix Dynamic Artwork Engine: Generating personalized artwork for each viewer at impression time.

  • TikTok Dynamic AI Avatars in China: Producing real-time avatar videos triggered by user-level signals.

  • Alibaba ICE: Generating product visuals and short videos at impression time during peak sales cycles.

  • JD.com AI livestream ads: Blending real-time generative video into live commerce sessions based on viewer behavior.

  • ByteDance Volcano Engine pilots: Generating scenes and backgrounds per user at impression time.

  • Tencent Cloud Vision Ads: Synthesizing personalized visuals at impression time on page load.

  • Amazon in-store screens: Generating scenes in real time based on vision models and local context.


These examples show that real-time generative output is possible and already deployed by global leaders. The gap is not technology. The gap is adoption in Western advertising environments.


3.3 Industry Signals Inside OOH

Recent coverage shows AI entering the OOH stack with clear momentum. Digital Signage Today ("How AI and APIs Are Reshaping Out-of-Home Advertising," 2024) highlighted how AI and APIs are reshaping DOOH workflows, pushing the medium toward real-time, data-driven operations. True, until you try to personalize creative at scale and end up with a ComfyUI workflow that looks like the wiring diagram of a failed Colombian/Soviet satellite.


StackAdapt ("The Future of Out-of-Home Advertising," 2024) also pointed to AI-driven creative experimentation and dynamic messaging as central to OOH’s next wave of innovation.


3.4 Authenticity, Memory, and RTGC

Ryan Laul (Talon), in his November 24, 2025 article for The Drum (“OOH has a role to play in the AI discovery journey”), argued that OOH is a trusted real-world signal in an AI-driven discovery process. He emphasized that as search shifts from keywords to conversational AI, authenticity becomes the most valuable input. OOH provides that authenticity because it delivers a physical, time-stamped moment that AI cannot fabricate.


His argument reinforces the same direction: context matters, and creative must respond to it. RTGC strengthens this logic. Authenticity grows when the message itself reacts to the moment. A personalized creative output delivered at impression time turns the message into an experience, not a broadcast. Experiences stick because they embed into memory and influence later behavior. In an AI-driven discovery cycle, those memory-rich moments become the strongest prompts consumers bring into their next search or conversational query. RTGC makes OOH not only authentic but personally relevant, strengthening the exact behavioral signals marketers value, including attention, recall, and intent.


3.5 Attention and Measurement Trends

Together, these signals show a channel preparing for deeper AI use. RTGC becomes the natural next layer. If OOH is a trusted real-world touchpoint in an AI-dominated discovery cycle, then the creative itself must align with live context. RTGC makes creative a live output tied to the same signals that already drive targeting, analytics, and delivery.


A related industry shift strengthens this point. Billups launched an attention measurement framework last year (AdExchanger, Anthony Vargas, October 2, 2024) that scores OOH placements using real-world signals like dwell time, viewing angle, screen brightness, seasonality and mobile device flow. It reflects the move toward granular, signal-based decisioning. Attention becomes a measurable input, not a guess. RTGC aligns with this shift because creative rendered at impression time increases the probability of attention, which is the metric Billups and others are steering the category toward.
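As an illustration only, a signal-based attention layer of this kind can be approximated as a weighted score over the inputs listed above. The weights, feature names, and normalization below are my own assumptions for the sake of the sketch, not Billups’ actual model:

```python
# Hypothetical weights over pre-normalized (0..1) signals; not Billups' model.
WEIGHTS = {
    "dwell_time": 0.40,       # longer dwell, more opportunity to see
    "viewing_angle": -0.20,   # oblique angles reduce effective exposure
    "screen_brightness": 0.15,
    "seasonality": 0.10,
    "device_flow": 0.15,
}

def attention_score(signals: dict[str, float]) -> float:
    # Missing signals default to 0.0 rather than failing the placement.
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

print(attention_score({"dwell_time": 0.8, "viewing_angle": 0.2, "device_flow": 0.6}))
```

The point is not the arithmetic but the contract: attention becomes a numeric input that downstream decisioning, including creative generation, can react to.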


3.6 Programmatic Stack as the Natural Home for RTGC

The natural home for RTGC is not CMS vendors or legacy operators running fixed loops. The natural home sits inside the programmatic stack, the DSPs and exchanges built for data signals, impression-level logic and dynamic decisioning. Broadsign’s acquisition of Place Exchange shows this direction clearly. As Ari Buchalter explained in his AdExchanger interview on the acquisition (James Hercher, November 25, 2025), DOOH buying relies on RTB pipes but behaves differently from web and mobile. Formats vary, measurement is location-based and most transactions run on deal IDs rather than open auction. This environment is designed for decisioning at the moment of delivery, not static rotation. RTGC fits here by design because it treats creative as another decision layer in the programmatic workflow.
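To show where that decision layer could sit, here is a hedged sketch of a deal-ID-based bid response in which the creative is produced at decision time. The field names follow OpenRTB conventions (impid, price, adm, dealid), but generate_creative() and every value are assumptions for illustration, not any DSP’s real interface:

```python
# Abridged OpenRTB-style bid response for a DOOH deal; generate_creative()
# and all values are hypothetical.

def generate_creative(context: dict) -> str:
    # Stand-in for the RTGC pipeline described in Section 5.
    return f"https://cdn.example/assets/{context['venue']}_{context['daypart']}.png"

def build_bid(imp_id: str, deal_id: str, context: dict) -> dict:
    asset_url = generate_creative(context)  # creative decided at impression time
    return {
        "id": "resp-001",
        "seatbid": [{
            "bid": [{
                "impid": imp_id,
                "price": 12.50,     # CPM agreed under the deal
                "dealid": deal_id,  # most DOOH runs on deal IDs, not open auction
                "adm": asset_url,   # markup points at the just-generated asset
            }]
        }],
    }

print(build_bid("imp-7", "deal-airport-q1", {"venue": "airport", "daypart": "evening"}))
```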


3.7 Adaptive Environments and Contextual Variation

OOH environments shift constantly. Audiences move in waves shaped by weather, transit cycles, venue type, and density. Dwell behavior expands or contracts as environmental pressure changes. A static creative loop ignores these variations and leaves value on the table. Some formats still benefit from a broad one-to-many strategy, such as highway billboards. Location-based OOH is different. It has the conditions for granular, context-aware adaptation.


3.8 Alignment With Delivery and Measurement Layers

RTGC aligns the creative layer with the delivery and measurement layers already in place. Instead of forecasting relevance months ahead and producing fixed assets, RTGC interprets live signals and composes creative for the moment. It replaces assumption with on-site adaptation. OOH does not need new hardware for this. It needs a generative layer operating at the same speed as its programmatic stack. The networks that adopt this first will set the benchmark the rest follow.


4. The Conditions That Make RTGC Possible


4.1 Pre-Existing OOH Infrastructure

Real-time generative creative depends on a disciplined technical chain that functions like a production system. OOH had the foundation for this years before generative systems were viable. Networks across airports, malls, transit, and roadside screens already operated with enterprise-grade CMS platforms capable of distributing content globally in seconds. DSPs already evaluated pacing and targeting decisions continuously.


Sensor networks already validated presence, movement, dwell, and density. Computer vision (and I promised not to call it "facial detection") already estimated gaze behavior and group composition without identifying individuals. Mobility data, although highly imperfect, already modeled the trajectory of audiences and their visit patterns at scale.


4.2 Hardware Evolution Enabling RTGC

A parallel shift is happening on the hardware side, but it does not need a long detour. The point is simple. Several companies are upgrading roadside and large‑format displays in ways that make RTGC easier to deploy.

One example is KA‑Dynamic Color, led by CEO Arnon Kraemer. Their reflective, low‑power display lets static board owners swap creative on demand without LED cost or maintenance.


DigiTile behaves like a digital unit while keeping the operational profile of a static board. It turns static faces into action‑ready inventory at a fraction of the price. This matters because it lowers friction. This is not an endorsement, but an example of the category’s direction. RTGC needs dynamic surfaces. Anything that helps convert static boards into update‑ready units expands the number of screens capable of running impression‑time creative.


4.3 Three Technical Developments Unlocking RTGC

4.3.1 Maturation of Generative Models

Generative AI matured into production stability. Diffusion models such as SDXL and Flux generate images by gradually removing noise from patterns, guided by text instructions and brand signals. In short, they start with static noise and repeatedly refine it until a coherent image emerges. Modern diffusion systems no longer produce distorted shapes or unstable geometry; they generate clean, consistent visuals that follow brand structure and layout rules when paired with an identity layer.
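For readers who want to see that denoising loop in practice, here is a minimal sketch using the open-source diffusers library with SDXL. The prompt, step count, and guidance value are illustrative; a production RTGC system would wrap this call in the identity and validation layers described later:

```python
# Minimal SDXL sketch with Hugging Face diffusers; values are illustrative.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="brand-consistent transit poster, clean layout, evening light",
    num_inference_steps=30,  # each step removes a little more noise
    guidance_scale=7.0,      # how strongly the text instruction steers denoising
).images[0]
image.save("impression_asset.png")
```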


Video engines like Veo, Runway, Pika, and Hailuo—already widely used in advertising and 3D animation workflows—extend this logic with latent‑space motion modeling. This enables consistent temporal coherence instead of frame‑by‑frame hallucination (this is not only a coding issue; the creative output also suffers from it).


4.3.2 LLM Reasoning Layers

Large language models (LLMs) function as reasoning layers. They structure prompts, enforce tone, maintain semantic stability, and translate brand requirements into precise creative instructions. Their outputs guide diffusion models, aligning narrative intent with visual structure.


4.3.3 Lessons From Generative Music and Identity Modeling

A similar logic applies in generative music. At Alchemusical, we saw this firsthand. A large transformer reads text or musical cues, builds structural and stylistic signals, and feeds them into the audio pipeline while LoRA, when used, adds lightweight adjustments that teach new behaviors without retraining the full model. The diffusion layer then receives all conditioning inputs and denoises latent sound into a coherent track with the requested style, rhythm, and mood. Latent‑audio handling and latency are major issues in music production for anyone who has worked in recording or digital audio workflows, which is why these advances are so relevant to how we think about real‑time systems.
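A rough sketch of that conditioning chain looks like this. Every function here is a stand-in mirroring the flow just described; none of it is a real Suno or LUCID API:

```python
# Every function is a stand-in; this mirrors the flow, not a real API.

def transformer_conditioning(cue: str) -> dict:
    # A large transformer turns text or musical cues into structural signals.
    return {"tempo": 92, "mode": "minor", "arc": "build-release", "cue": cue}

def apply_lora(conditioning: dict, identity: str) -> dict:
    # LoRA adds lightweight, identity-specific behavior without retraining.
    return {**conditioning, "style": identity}

def diffuse_audio(conditioning: dict) -> bytes:
    # The diffusion layer denoises latent sound into a coherent track.
    return b"<latent audio resolved to waveform>"

track = diffuse_audio(apply_lora(transformer_conditioning("calm focus"), "warm-analog"))
```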


This clarity deepened during recent conversations at a music and neuroscience conference in Canada with one of the researchers from the LUCID team, Dr. Adiel Mallik. I did not have the chance to speak with Dr. Russo directly, even though LUCID is very much his creation, but his team’s published work informed much of the discussion. Examining the scientific direction behind Suno—founded by Drs. Mikey Shulman and Georg Kucsko, both PhD physicists—reinforced these ideas even further.


Learning from them how transformers shape emotional contour and how diffusion models resolve latent audio into clean musical structure highlighted the parallels with what we aim to achieve visually in RTGC. Their explanations validated the same principle we observed at Alchemusical: generative systems respond to conditioning, identity, and context in ways that mirror how adaptive visuals behave in public space, in OOH advertising.


A similar focus on practical, workflow-driven AI has also appeared in the work of Dino Burbidge, founder of Dinova in London and a long-time innovator across technology, media, and creative industries. I met Dino earlier this year at an IBO event, where he presented an educated and grounded perspective on AI for the OOH community. His approach emphasized practical experimentation over hype, showing how teams can explore AI tools in ways that complement existing workflows rather than disrupt them. That same mindset aligns with RTGC: real-time adaptation grows out of applied experimentation, operational awareness, and disciplined engineering, not abstract theory.


These technologies matured in parallel—diffusion models, transformer‑based reasoning, LoRA identity systems, and GPU execution graphs. Their convergence creates the conditions that could make full RTGC viable. GPU routing solved the latency issue, well known to anyone familiar with real-time music production. Execution graphs keep model weights preloaded, coordinate routing and validate outputs so generation stays within the strict time windows required for real-time OOH delivery.
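A simplified sketch of what that execution layer enforces follows: models stay resident, generation runs against a hard time budget, and anything late or invalid falls back to an approved asset. The threshold and function names are assumptions for illustration, not a real scheduler:

```python
# Assumed time budget; real windows depend on the network's playout loop.
import time

TIME_BUDGET_S = 1.5

def generate_within_budget(generate, validate, fallback):
    # Model weights are assumed already resident in GPU memory, so only
    # inference time counts against the budget.
    start = time.monotonic()
    asset = generate()
    if time.monotonic() - start > TIME_BUDGET_S or not validate(asset):
        # Missed the window or failed brand validation: serve the approved default.
        return fallback
    return asset

asset = generate_within_budget(lambda: "fresh_asset", lambda a: True, "approved_default")
```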


5. How RTGC Works


RTGC functions as a continuous production pipeline.


5.1 Generative Model Layer

The generative layer described in Section 4.3.1 executes here. Diffusion models such as SDXL and Flux render stills guided by text instructions and brand signals, while video engines like Veo, Runway, Pika, and Hailuo extend the same logic with latent‑space motion modeling for consistent temporal coherence. Paired with an identity layer, both produce clean, consistent visuals that follow brand structure and layout rules.


5.2 LLM Reasoning Layer

Large language models (LLMs) function as reasoning layers that structure prompts, enforce brand tone, maintain semantic stability across variations, and translate brand requirements into executable creative instructions.
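A minimal sketch of this layer’s contract: brand rules plus live context go in, a structured creative instruction comes out. Here llm_complete() is a placeholder for whichever LLM API a network actually uses:

```python
# llm_complete() is a placeholder for whichever LLM API a network uses.
import json

def llm_complete(system: str, user: str) -> str:
    # Stand-in; a real deployment would call a hosted or local model here.
    return json.dumps({"prompt": "evening commuters, warm palette",
                       "tone": "calm", "layout": "left-aligned headline"})

def build_instruction(brand_rules: dict, context: dict) -> dict:
    system = ("You write creative instructions for a diffusion model. "
              f"Never violate these brand rules: {json.dumps(brand_rules)}")
    user = (f"Live context: {json.dumps(context)}. "
            "Return JSON with keys 'prompt', 'tone', 'layout'.")
    return json.loads(llm_complete(system, user))

instruction = build_instruction({"tone": "warm, direct"}, {"dwell_s": 8, "weather": "rain"})
```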

5.3 LoRA Identity Layer

LoRA acts as an identity layer for the brand. It stores geometry, style, tone, color logic, and typography inside a compact, controllable structure, allowing near-infinite variation without compromising identity.
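In practice this is lightweight to wire up. Continuing the diffusers sketch from 4.3.1, attaching a brand LoRA is one call; the repository and file names below are hypothetical placeholders, though load_lora_weights() itself is real diffusers API:

```python
# Hypothetical repo and file names; load_lora_weights() is real diffusers API.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights(
    "brand-org/identity-loras",                  # assumed weights location
    weight_name="brand_identity_v3.safetensors"  # compact identity layer
)
image = pipe(prompt="storefront scene in the brand's visual language",
             num_inference_steps=30).images[0]
```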


5.4 Format-Native Generation

RTGC generates assets natively at each required format’s geometry. No resizing. No cropping. No template distortion. This solves OOH’s format fragmentation at the source.
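A sketch of the same idea in code, reusing the pipe object from the previous snippet: each screen geometry gets its own native render rather than a crop. The dimensions are examples of common DOOH aspect ratios, not a standard:

```python
# Example DOOH geometries; each render is native, never cropped or resized.
FORMATS = {
    "portrait_totem": (1080, 1920),
    "landscape_spectacular": (1920, 1080),
    "square_venue": (1080, 1080),
}

assets = {
    name: pipe(prompt="brand hero scene", width=w, height=h).images[0]
    for name, (w, h) in FORMATS.items()
}
```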


5.5 Full RTGC Pipeline Execution

A brand identity layer anchors the system. The audience vector provides context. A structured prompt is generated. The generative engines execute. The system validates the output. The DSP selects placement. The CMS delivers the asset instantly. The system regenerates creative continuously as conditions change. RTGC does not refresh content. It recreates it.
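As a closing sketch, that loop compresses into a few lines. Every function here is a trivial stand-in for the corresponding layer; this shows the shape of the pipeline, not an implementation:

```python
# Trivial stand-ins for each layer; the point is the loop's shape.
from dataclasses import dataclass

@dataclass
class Brand:
    rules: dict
    lora: str

def reason(rules: dict, ctx: dict) -> dict:                # 5.2 LLM reasoning layer
    return {"prompt": f"{ctx['mood']} scene", **rules}

def generate(instr: dict, brand: Brand, fmt: str) -> str:  # 5.1 + 5.3 + 5.4
    return f"asset[{brand.lora}|{fmt}|{instr['prompt']}]"

def validate(asset: str, rules: dict) -> bool:             # brand/QA gate
    return "scene" in asset

def fallback(brand: Brand, fmt: str) -> str:
    return f"approved_default[{fmt}]"

def rtgc_cycle(brand: Brand, ctx: dict, fmt: str) -> str:
    instr = reason(brand.rules, ctx)
    asset = generate(instr, brand, fmt)
    return asset if validate(asset, brand.rules) else fallback(brand, fmt)

print(rtgc_cycle(Brand({"tone": "warm"}, "brand_v3"),
                 {"mood": "evening commute"}, "portrait_totem"))
```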


6. Why RTGC Matters: The Industry’s Blind Spot


OOH invested deeply in modernization. It built programmatic pipelines, adopted advanced measurement, and embraced audience intelligence. These investments created the impression that OOH had already become real-time. But the medium never updated its creative model. A real-time pipe delivering static creative is still static.


This is the blind spot. The industry mistook dynamic delivery for dynamic communication. The screen updated in real time, but the message did not. Relevance degraded. Attention weakened. Attribution plateaued. CPMs stagnated. Creative remained fixed while everything around it moved.


RTGC fixes the structural flaw. It aligns creative with the same real-time intelligence that drives targeting and delivery. It elevates OOH to parity with mobile, CTV, and social, where creative and data operate together instead of in separate cycles. OOH finally receives a creative layer built for the environments it serves.


7. RTGC and the Shift in Power Across OOH and Digital Advertising


RTGC changes the power structure of Out-of-Home and aligns it with the evolution that reshaped all digital advertising. Digital media achieved dominance when it moved from one-to-many broadcast into one-to-one relevance. Platforms like Google, Meta, Amazon, and TikTok built their competitive advantage by matching the message to the individual and the moment.


OOH never crossed that threshold, even as every other major digital channel moved decisively toward individualized messaging. It digitized screens but kept the creative model anchored to broadcast logic.


RTGC changes this. It does not attempt one-to-one personalization because physical environments cannot identify individuals. Instead, RTGC introduces a mode of communication unique to public space: one-to-the-right-many. The message aligns with the situation, not the person. Creative adapts to density, dwell, movement, and environmental pressure. It responds not to identity, but to context.


This reframes competitive advantage. Networks that adopt RTGC no longer compete on footprint. They compete on intelligence. Their performance rises because relevance rises. Their CPMs increase because impressions represent genuine opportunities to see. Agencies shift budgets toward adaptive networks because they deliver more value per play.


Networks that remain static fall behind because context outpaces their creative. Screens become commodities. RTGC becomes the differentiator. RTGC does not make OOH more digital. It makes OOH more intelligent. It gives OOH the adaptive communication structure modern cities, audiences, and advertisers require. It elevates OOH from static media into a context-aware communication system.


8. Conclusion


OOH built real-time delivery, real-time data, and real-time measurement, but creative never evolved with it. RTGC completes that system. Screens become interfaces. Creative becomes adaptive. Messages align with context rather than assumptions. The next decade of OOH will belong to the networks that match the intelligence of their delivery systems with equally intelligent creative.

