Scenario Generation in Autonomous Driving: Where VX360 Steps In

Autonomous vehicles (AVs) have long been touted as the future of mobility, but reaching full autonomy remains a significant challenge. One of the most crucial aspects of developing an autonomous driving stack is scenario generation—the process of creating realistic, edge-case driving situations to train, test, and validate AV systems.

Here, we’ll explore how scenario generation works, the limitations of current methods, and how the NATIX VX360’s ability to capture 360° real-world footage provides a transformative solution to accelerate this process.

What is Scenario Generation?

Scenario generation is a crucial process in the testing, verification, and validation of advanced driver assistance systems (ADAS) and autonomous driving systems (ADS). It forms the backbone of simulation-based testing, ensuring that these systems perform reliably and safely under a wide range of traffic conditions and edge cases.

The term “scenario generation” encompasses a diverse set of methods for extracting scenarios from real-world driving data and replicating them in a simulation environment, with the goal of creating safer, more reliable autonomous vehicles (AVs). For example, a scenario could involve a vehicle performing a right turn at an intersection while a cyclist crosses against a red light, or a pedestrian jaywalking. These “near-miss” scenarios are critical for the autonomous driving stack to be trained on if we intend for AVs to roam the streets.
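
To make this concrete, an extracted scenario is typically represented as structured data describing the actors and their behavior over time. The following is a minimal Python sketch of such a representation; the class and field names are illustrative, not a NATIX or industry-standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """One traffic participant in a scenario (fields are illustrative)."""
    kind: str            # e.g. "ego_vehicle", "cyclist", "pedestrian"
    trajectory: list     # sequence of (t, x, y, heading) samples
    violations: list = field(default_factory=list)  # e.g. ["ran_red_light"]

@dataclass
class Scenario:
    """A single extracted driving scenario."""
    description: str     # human-readable summary
    actors: list[Actor]
    road_layout: str     # e.g. "4-way signalized intersection"
    outcome: str         # e.g. "normal", "near_miss", "collision"

# The right-turn example from the text, expressed as data:
near_miss = Scenario(
    description="Ego turns right while a cyclist crosses against a red light",
    actors=[
        Actor("ego_vehicle", trajectory=[(0.0, 0.0, 0.0, 90.0)]),
        Actor("cyclist", trajectory=[(0.0, 5.0, -2.0, 180.0)],
              violations=["ran_red_light"]),
    ],
    road_layout="4-way signalized intersection",
    outcome="near_miss",
)
```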

Capturing instances of erratic human behavior on the road is extremely important because of its unpredictability. However, existing approaches to both data extraction and simulation generation often differ in the inputs they use and the outputs they produce, leading to varying levels of abstraction and complexity in the resulting scenarios.

Why is Scenario Generation Important?

In 2022, the importance of scenario-based testing surged with the introduction of an EU regulation mandating that automated driving systems be validated against a minimum set of traffic scenarios relevant to their operational design domain. For specific functions, such as lane-keeping assistance, predefined test scenarios are even required.

To meet these standards, developers must identify, create, or generate a wide array of traffic scenarios for both development and testing. The diversity and complexity of scenarios are critical to testing systems in environments ranging from routine traffic conditions to rare “edge cases” (sudden weather changes, unexpected obstacles, unusual traffic behaviors, or complex urban traffic patterns). This has driven companies to seek solutions for generating these scenarios, resulting in several distinct methods.

Current Methods for Testing and Validation

Testing a new autonomous driving (AD) functionality (e.g. automatic right turn) requires companies to simulate or experience countless traffic scenarios to ensure safety and reliability. Currently, AD developers rely on two primary methods, each with its own limitations:

  1. Real-World Testing

This involves manufacturing a vehicle equipped with all necessary sensors (360° cameras, LiDAR, radar) and validation equipment, then testing the autonomous driving functionality on real roads. In the testing stage, the vehicle attempts to cover as wide a range of edge cases and situations as possible. However, this approach is extremely time-consuming and costly. To capture the amount of data needed, hundreds of these vehicles would need to roam the streets, which is nearly impossible, resulting in a slow and expensive process:

  • Time: It is estimated that validation through real-world testing alone would require a staggering 215 billion miles of driving, with crashes or near-miss scenarios, which are crucial for developing safer AD models, occurring only once in every 100 million miles of real-world driving. Due to the sheer amount of data needed, hundreds of test vehicles would have to be manufactured, which is impossible for any original equipment manufacturer (OEM).
  • Cost: Building a vehicle and equipping it with all the necessary sensors presents a costly barrier, as each vehicle and its equipment can cost up to $1M to manufacture. That is before taking into consideration that each jurisdiction and environment presents unique challenges, requiring significant investments in fleets, personnel, and logistics.
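
Taking the figures above at face value, a quick back-of-envelope calculation shows the scale of the problem. This is an illustrative sketch only; the annual mileage per test vehicle is an assumed value, not a figure from the text:

```python
# Back-of-envelope math using the figures quoted above.
miles_required = 215e9          # estimated validation miles
miles_per_near_miss = 100e6     # one crash/near-miss per ~100M miles
miles_per_car_per_year = 15e3   # assumed annual mileage per test vehicle

near_misses_expected = miles_required / miles_per_near_miss
fleet_years = miles_required / miles_per_car_per_year

print(f"Expected near-miss events: {near_misses_expected:,.0f}")    # 2,150
print(f"Vehicle-years of driving needed: {fleet_years:,.0f}")       # ~14.3M
print(f"Years with a 1,000-car fleet: {fleet_years / 1_000:,.0f}")  # ~14,333
```
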
  2. Simulation-Based Testing

Simulations aim to mimic real-world traffic behavior and recreate critical scenarios, such as a bike running a red light during a turn. This practice has several advantages, most notably the ability to test the autonomous driving stack on uncommon situations such as near-misses and crashes at a more scalable rate than real-world testing. There are two common practices for simulation generation, each with its own challenges:

  • Manually Generated Scenarios
    Experts manually build simulation scenarios based on theoretical traffic interactions. While this is an effective way to replicate real-world scenarios, creating a single complex scenario, such as a busy intersection with pedestrians and cyclists, can take weeks to months, and the more complex the scenario, the longer it takes. To train and test the autonomous driving stack effectively, AD companies require millions of such scenarios across all jurisdictions, making this approach extremely labor-intensive and slow.
  • Scenario Generation Using Real-World Data

Generating scenarios from real-world driving footage is the ideal method for testing, verifying, and validating the autonomous driving stack, but it faces several hurdles. In principle, it ensures a faithful representation of real-world driving scenarios, including all the elements drivers interact with across the complete traffic domain. In practice, it requires a large amount of diverse data from real traffic, which is hard to collect and must then be processed and translated into scenarios. This use case is where the NATIX VX360 can have the most impact in the near term.

  • Synthetic Scenario Generation via Generative AI
    A more recent practice has emerged in an attempt to minimize the time and investment required to generate scenarios: using Generative AI to create synthetic traffic simulations. Companies like NVIDIA and Uber recently invested $200M in a solution that uses Generative AI to construct millions of scenarios automatically. However, the effectiveness of these synthetic models also depends on the data used to construct the testing grounds, and they would benefit greatly from real-life footage to increase the quality and integrity of the generated scenarios. This use case can also benefit from VX360 data to train the GenAI models and build higher-quality, reality-based synthetic scenarios.

The Importance of Real-World Footage and a Complete 360° View

Now that we’ve covered the current methods for simulation-based training, testing, and validation of the autonomous driving stack, it’s important to note the limitations of these existing tools and why real-world footage is essential for such simulators to succeed. For example, researchers reported that in existing simulators, aside from walking on sidewalks and crosswalks, pedestrians do not interact with vehicles, and in some cases, pedestrians were not even modeled. Scenario generation, therefore, still requires significant work.

This is where real-world footage comes in. Whether scenarios are generated automatically from footage or synthesized by generative AI trained on that footage, real-world footage leaves no room for error, giving simulations the precise data needed to mirror reality and increasing the reliability and safety of the autonomous driving stack.

However, a 360° view is a must for a simulation based on real-life footage. Tracking an ego car’s perspective is easy enough with a front-facing camera, but to fully replicate a scenario based on reality, you must also take everything around the ego vehicle into account. From a bike overtaking on the right to a pedestrian crossing against a red light in the vehicle’s blind spot, even something as simple as an automatic right turn requires an all-encompassing view.
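
To illustrate why a single front camera falls short, the sketch below checks how much of the 360° around the ego vehicle a given camera set covers. The fields of view are assumed values for illustration, not Tesla specifications:

```python
def covered_degrees(cameras):
    """Count how many of the 360 one-degree sectors around the ego
    vehicle are seen by at least one camera.

    cameras: list of (center_heading_deg, fov_deg) tuples.
    """
    covered = set()
    for center, fov in cameras:
        for deg in range(360):
            # Angular distance from sector to camera center, wrapped to [-180, 180).
            diff = (deg - center + 180) % 360 - 180
            if abs(diff) <= fov / 2:
                covered.add(deg)
    return len(covered)

# Illustrative fields of view only (not manufacturer specs):
front_only = [(0, 120)]
all_around = [(0, 120), (90, 100), (180, 140), (270, 100)]

print(covered_degrees(front_only))  # ~120 of 360 degrees: large blind spots
print(covered_degrees(all_around))  # 360: full coverage around the vehicle
```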

NATIX VX360: A Game-Changer For 360° Camera Footage Collection

The importance of 360° real camera footage makes the VX360 a revolutionary device. The VX360 taps into Tesla's front, side, and rear cameras, creating a ready-made, cost-effective solution for collecting real-world 360° imagery. By collecting data from Tesla’s four cameras, we can extract information that is invaluable for scenario generation, such as:

  • All Surrounding Contexts: Vehicles, pedestrians, infrastructure, and environmental elements from every angle.
  • Dynamic Interactions: E.g., how multiple cars navigate a merging lane or how pedestrians respond to vehicle movements.
  • Rich Edge Cases: Rare and unpredictable events captured organically without needing millions of miles of driving.

Technically speaking, anyone can build a device that taps into Tesla’s cameras to extract driving footage, but it's easier said than done once you take into account the need to collect data at scale and the rightly strict privacy laws put in place to protect users. Another limiting factor is the ability to convert video footage into “classified scenarios,” that is, to extract and classify all the scenarios of interest from a driving log. The technology stack and expertise needed for such an operation are among the main blockers to using real-world driving footage for scenario building.

That’s what sets the VX360 apart from the rest of the pack. Leveraging the NATIX DePIN model, which has already proven its success by garnering over 200K drivers through the Drive& app, lays the groundwork for data collection at scale. This method enables global data collection 10x faster and cheaper than traditional approaches. Through a decentralized economy combining crypto and non-crypto rewards, drivers have an incentive to use the VX360, with an estimated 3-6 months for the return on investment. When it comes to scenario classification, NATIX can fill the gap through partnerships and collaborations with leading companies, reducing engineering costs with a dedicated system that automatically categorizes the relevant scenarios.

The NATIX VX360 also rests on three privacy-preserving pillars that ensure the collected data complies with existing laws:

  1. Privacy zones: Drivers can set a radius around their private areas, be it their home, office, or any other sensitive location. The VX360 does not collect footage inside privacy zones (a sketch of how such a check can work follows this list).
  2. No internal camera access: The VX360 only accesses Tesla’s front, side, and rear cameras. The cabin camera is never touched, so users can drive in comfort.
  3. AI anonymization: Before the data is shared with NATIX, AI anonymization algorithms are employed to blur personally identifiable information, such as faces and license plates.
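
As a sketch of how a privacy-zone check could work in principle, recording can be suppressed whenever the vehicle’s GPS position falls inside a user-defined radius. The function names and values below are illustrative, not the VX360’s actual implementation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_privacy_zone(position, zones):
    """True if the position lies inside any (lat, lon, radius_m) zone."""
    lat, lon = position
    return any(haversine_m(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in zones)

# Hypothetical 500 m zone around a driver's home:
zones = [(52.5200, 13.4050, 500.0)]
print(in_privacy_zone((52.5201, 13.4052), zones))  # True  -> skip recording
print(in_privacy_zone((52.5400, 13.4500), zones))  # False -> record normally
```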

Translating Real-World Driving Footage To Scenario Generation

We already mentioned that the data collected by the VX360 is used to support the autonomous driving stack, but how exactly does that happen? Footage collected by the VX360 is automatically uploaded and anonymized, and from there, it is shared with NATIX. From that point, AI can be used to fully automate the process.

The collected data can be used in two different ways: it can train GenAI models to produce better synthetic scenario generation, or scenarios can be generated automatically from the collected footage, segmented into distinct scenarios using advanced AI tools, eliminating most of the manual effort. AI algorithms can process VX360 footage automatically to (a code sketch follows this list):

  • Perform data quality checks.
  • Label the scenario's elements (e.g., vehicles, cyclists, or road signs), weather, and road conditions.
  • Classify and segment footage into relevant datasets (e.g. “Right Turn Scenarios”).
  • Generate a scenario database for potential customers.
  • Format scenarios for the customer for ease of use.
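
A minimal sketch of such a processing pipeline is shown below. The stage names and the keyword-based classifier stand in for what would in practice be learned models, and none of the identifiers are NATIX’s actual API:

```python
def quality_check(clip):
    """Reject clips that are too short or too dark (placeholder heuristic)."""
    return clip["duration_s"] >= 5 and clip["brightness"] > 0.2

def label_elements(clip):
    """Stand-in for detectors that tag actors, weather, and road conditions."""
    clip["labels"] = clip.get("detections", [])  # e.g. ["cyclist", "right_turn"]
    return clip

def classify(clip):
    """Route the clip into a named scenario dataset."""
    if "cyclist" in clip["labels"] and "right_turn" in clip["labels"]:
        return "Right Turn with Cyclist"
    return "Uncategorized"

def build_scenario_database(clips):
    """Run every clip through the pipeline and group by scenario type."""
    database = {}
    for clip in clips:
        if not quality_check(clip):
            continue
        clip = label_elements(clip)
        database.setdefault(classify(clip), []).append(clip)
    return database

clips = [{"duration_s": 12, "brightness": 0.7,
          "detections": ["cyclist", "right_turn", "red_light"]}]
print(build_scenario_database(clips).keys())  # dict_keys(['Right Turn with Cyclist'])
```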

Simulation platforms can use VX360 data to access edge-case scenarios, ensuring they reflect real-world physics and interactions rather than idealized assumptions. By integrating pre-segmented 360° footage into the scenario generation pipeline, NATIX is helping AV companies build safer, more efficient systems by training, testing, and validating models on diverse, real-world edge cases, and accelerating the timeline for achieving Level 5 autonomy. For AV developers and mapping innovators, VX360 represents a revolutionary leap—delivering the real-world complexity and detail needed to shape the autonomous future.

Closing Thoughts

The path to full autonomy is paved with complex scenarios and edge cases, and real-world video data is crucial to accelerate simulation generation, as it provides authentic interactions and dynamics. However, this method requires 360° camera footage to capture every angle of a scenario. With VX360, NATIX offers the perfect blend of comprehensiveness, cost-efficiency, and scalability—bringing us one step closer to a safer, autonomous future. 
