Invented by Fai Yeung, Michael Adkins Knott, Abhiram Krishnan, and Brian K. Chan; assigned to Rivian IP Holdings, LLC

Today, capturing and sharing moments while on the road has become an everyday part of owning a modern vehicle, especially electric vehicles (EVs). With cameras built into cars, drivers can record their journeys for memories, safety, and even fun video journals. But as more video is captured, there is a growing problem: how do you find the best parts, keep what matters, and maybe even add cool effects, when there is just too much footage to look through? A new patent application tackles this challenge by using artificial intelligence (AI) and smart data processing to automatically create special videos from EV journeys, saving the best moments and letting users add creative touches. In this article, we will explore the background of this invention, the science and earlier solutions it builds on, and explain the clever new ideas it introduces.

Background and Market Context

Imagine you are on a long road trip in your electric vehicle with friends or family. Your car has cameras facing forward, back, and to the sides, all recording every second of your adventure. Maybe you take a drive through the mountains, a busy city, or along a sunny beach. When you get home and want to make a short video to remember your trip or share with others, you are faced with hours and hours of footage. Watching it all, picking the best scenes, and cutting them together is a huge job. Even if your car could do that editing for you, its memory might run out and the best clips could be deleted before you ever see them.

This is not just a personal problem—it’s a growing issue as more people use cars with cameras for many reasons. Some want to relive family vacations. Others use cameras for safety, like recording accidents or strange events. Some car makers offer features where you can make a video journal of your trip or use special effects to add fun or helpful information, like showing your route or adding animated characters.

But the more cameras a car has, and the more people drive, the bigger the pile of video gets. Car storage is limited. Sorting through video is boring and slow. Deciding what to keep and what to erase is hard, especially if you do not want to lose important or exciting moments.

Car companies, technology creators, and drivers all want a better way to handle this. They want to:

  • Automatically find and keep the most interesting or important video moments
  • Make it easy for users to create short, fun, or meaningful video summaries
  • Add special effects or information to videos, like showing speed, route, or even virtual objects
  • Save space by deleting boring or repeated footage, but never lose the best parts

This patent application is aimed right at these problems. It is about a system that uses smart computer programs (AI) and the data from the car to figure out which video pieces are worth keeping, which should be deleted, and how to put the good ones together into a new, easy-to-share video. It even lets you add effects or information that match what happened in the video, like an animated animal running alongside your car in a forest scene or a chart showing your speed during a race.

The timing is perfect. As more EVs are sold and technology in cars gets better, more drivers will want these features. The invention helps car makers stand out, gives users a better experience, and saves time and storage space for everyone.

Scientific Rationale and Prior Art

To understand what makes this invention special, let’s look at what has been possible before and the science that makes it work.

Cars with cameras are not new. Many modern vehicles, especially electric cars, have several cameras that record what happens in front, behind, and on the sides. Some systems even let you record your drive and save it to a memory stick or upload it to the cloud. There are “dash cams” that record everything for safety or insurance. Some cars can even make short highlight videos if you press a button.

But these older systems have some big limits. Usually, they just record everything and save it. If the memory fills up, the oldest videos are deleted, even if they contain something special. If you want to make a video journal or a highlight reel, you have to watch all the footage yourself and use video editing software. This takes a lot of time and patience. Some solutions let you press a button to save a clip, but you need to know exactly when something interesting is happening.

People have tried to make this easier. Some apps and programs use simple rules to pick out video clips, like “save 30 seconds before and after a crash,” or “keep video when the car is moving fast.” Others let you mark favorite spots during your drive. Few systems try to use smart technology to look at the video and decide what is interesting, fun, or important.

There has also been work on adding effects or information to videos. For example, some apps let you overlay a map, speed, or other driving data. A few apps let you add simple graphics or stickers. But these usually need you to do the work yourself, picking where to put things and what to add. They do not automatically match the effect to what is happening in the video.

Artificial intelligence (AI) and machine learning (ML) have started to change this. AI can look at pictures and videos and figure out what is in them—like cars, people, animals, road signs, or even the type of place (such as city, mountains, or beach). AI can also learn from many examples what kinds of scenes people like, what makes a video exciting, or when something special happens.

Some recent research uses AI to find the best scenes in a long video or to summarize a trip. These systems often use computer vision to look at each frame, recognize objects or places, and score how interesting each part is. Some can even use data from the car—like speed, GPS location, or sensor readings—to help decide what to keep.

But even with these tools, there are still big challenges:

  • How do you pick the best pieces from many cameras at the same time?
  • How do you avoid making a video that jumps around too fast between different views?
  • How can you automatically add effects or information that match the scene and do not look out of place?
  • How can the system make sure it does not delete important video clips when memory is low?

Previous work often focused on one or two of these problems, but not all at once. There were no simple systems that could do everything: pick the best moments, use data from the car, avoid sudden scene changes, add matching effects, and manage storage all at the same time.

This patent application takes all these ideas and brings them together in a system designed for modern electric vehicles. It uses AI trained on lots of driving video to understand what is happening in each scene, uses car data to add context, picks the best video fragments from all the cameras, puts them together smoothly, and lets you add custom effects that actually match the moment. It even keeps track of storage space and makes sure the best parts are never lost.

Invention Description and Key Innovations

Now let’s dive into how this invention works and what makes it stand out from anything before.

At its core, this system is a set of computer programs (running on the car, a server, or the cloud) that work together to do several things:

  • Gather all the video from the car’s cameras during a drive
  • Break each video into small pieces (called fragments), each tied to a short time period and linked with car data like speed or location
  • Use AI to look at each fragment and decide what type of scene it shows (like city, mountain, beach, accident, or family moment)
  • Score each fragment for how interesting or important it is, using both the video content and the car’s data (for example, a sudden stop or high speed might raise the score)
  • Pick the best fragment from all the cameras for each moment in time, making sure not to switch views too quickly (to keep the video smooth and easy to watch)
  • Put the chosen fragments together into one new video (a composite video) that summarizes the whole trip or a special event
  • Let users or the AI add extra content to the video, like showing the route, adding virtual objects, or displaying car data, in a way that matches the scene and does not distract
  • Keep track of storage space, and only delete fragments that are not important, so you never lose the best moments even if memory is tight

Let’s walk through each part, using simple words and examples.

From Raw Video to Smart Fragments

As soon as you start your EV and begin a trip, the cameras start recording. Each camera (front, back, sides) makes its own video, all covering the same time period but from different views. These videos are chopped into small fragments—say, every three seconds. For each fragment, the system saves not just the pictures and sound, but also data from the car: how fast you were going, where you were, what mode the car was in (like off-road or city), and even sensor readings like G-force or sudden stops.
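To make this concrete, here is a minimal sketch of what one such fragment record might look like. The field names, the three-second window, and the telemetry values are illustrative assumptions for this article, not the exact data model described in the patent application.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    """Vehicle data captured alongside a video fragment (illustrative fields)."""
    speed_kph: float
    latitude: float
    longitude: float
    drive_mode: str          # e.g. "city", "off-road"
    max_g_force: float       # peak G-force during the window
    hard_brake: bool         # True if a sudden stop was detected

@dataclass
class Fragment:
    """One camera's footage for one short time window, plus its car data."""
    camera: str              # "front", "rear", "left", "right"
    start_s: float           # trip-relative start time, in seconds
    duration_s: float        # e.g. 3.0
    video_path: str          # where the clipped footage is stored
    telemetry: Telemetry
    scene_label: str = ""    # filled in later by the scene classifier
    score: float = 0.0       # filled in later by the scoring step
```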

AI Looks for the Best Scenes

Once a drive is done, or even while you are still driving, the system uses AI programs trained to understand different types of scenes. Each fragment from every camera is fed into the AI, which decides what is shown (for example, “mountain view,” “city traffic,” “beach,” or “accident”). It also looks at the car data to spot moments that might be important—like a sudden stop, a change in speed, or a sharp turn.

The system then gives each fragment a score. The score is higher if the scene is special (like a beautiful sunset or a family moment), or if the car data shows something interesting happened (like a near miss or exciting event). User preferences can also change the score—if you love mountain views, those get a higher score.
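A toy scoring rule along these lines might look like the sketch below, reusing the Fragment record from the earlier example. The scene labels, weights, and preference boosts are invented for illustration; the patent does not specify these numbers.

```python
# Illustrative base scores for scene types a classifier might emit.
SCENE_BASE_SCORE = {
    "mountain": 0.8, "beach": 0.8, "sunset": 0.9, "family": 0.9,
    "city_traffic": 0.3, "highway": 0.2, "parked": 0.05, "accident": 1.0,
}

def score_fragment(frag, user_prefs=None):
    """Combine scene content, car data, and user preferences into one score."""
    score = SCENE_BASE_SCORE.get(frag.scene_label, 0.4)

    # Car data can raise the score: sudden stops, high G-force, high speed.
    t = frag.telemetry
    if t.hard_brake:
        score += 0.3
    if t.max_g_force > 0.6:
        score += 0.2
    if t.speed_kph > 120:
        score += 0.1

    # User preferences nudge favorite scene types upward.
    if user_prefs and frag.scene_label in user_prefs.get("favorite_scenes", []):
        score += 0.2

    return min(score, 1.0)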

Smooth Video Without Jumpy Cuts

A big challenge in making videos from many cameras is that switching views too often makes the video hard to watch. To fix this, the system uses a clever “bias” when picking which fragment to use next. If the last chosen fragment came from the front camera, the system is more likely to stick with the front camera for the next fragment—unless something much better is happening in another view. The longer it stays with one camera, the less bias it gives, so it will eventually switch if there is a good reason. This keeps the video smooth and natural, not jumpy and confusing.
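One simple way to picture this decaying bias is the sketch below: the current camera gets a small bonus that shrinks the longer it has been on screen, so a clearly better view from another camera eventually wins the switch. The bonus size and decay rate are assumptions made for illustration.

```python
def pick_camera(candidates, last_camera, slots_on_last):
    """Pick the best fragment for one time slot, biased toward the current camera.

    candidates: dict mapping camera name -> scored Fragment for this slot.
    last_camera: camera chosen for the previous slot (None at the start).
    slots_on_last: how many consecutive slots we have stayed on last_camera.
    """
    best_cam, best_value = None, float("-inf")
    for cam, frag in candidates.items():
        value = frag.score
        if cam == last_camera:
            # Bonus for staying put, shrinking the longer we stay,
            # so a much better view in another camera can still win.
            value += max(0.0, 0.25 - 0.05 * slots_on_last)
        if value > best_value:
            best_cam, best_value = cam, value
    return best_cam
```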

Making the Final Video and Adding Effects

Once the best fragments are picked, the system puts them together into a new video—a highlight reel or video journal of your trip. You can set how long you want the video to be, or what scenes to focus on. The system can also automatically speed up boring parts (like long straight roads) and slow down exciting moments (like a cool jump or sudden turn).
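One way to express "speed up the boring parts, slow down the highlights" is an edit decision list that assigns each chosen fragment a playback rate based on its score and trims the list to the requested length. The thresholds and rates below are assumptions for illustration, again building on the Fragment sketch above.

```python
def playback_rate(score):
    """Map a fragment's interest score to a playback speed (illustrative thresholds)."""
    if score >= 0.8:
        return 0.5   # slow motion for the best moments
    if score <= 0.3:
        return 3.0   # fast-forward through dull stretches
    return 1.0       # normal speed otherwise

def build_edit_list(chosen_fragments, target_length_s):
    """Turn chosen fragments into (fragment, rate) pairs that fit the target length."""
    edit_list = [(f, playback_rate(f.score)) for f in chosen_fragments]
    total = sum(f.duration_s / rate for f, rate in edit_list)
    if total > target_length_s:
        # Keep the highest-scoring entries until the summary fits,
        # then restore chronological order for playback.
        edit_list.sort(key=lambda pair: pair[0].score, reverse=True)
        kept, running = [], 0.0
        for frag, rate in edit_list:
            if running + frag.duration_s / rate <= target_length_s:
                kept.append((frag, rate))
                running += frag.duration_s / rate
        edit_list = sorted(kept, key=lambda pair: pair[0].start_s)
    return edit_list
```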

One of the coolest features is how the system can add extra effects or information to the video. It uses more AI to look at the scene and decide what to add. For example, if you are driving through the woods, it might add an animated deer running alongside. If you are in a race, it might show your speed or a map of your route. The effects are chosen to match the actual scene, so they look natural and add to the story.

These effects are not just dropped anywhere—they are placed carefully in the video, using AI to find the right spot in the frame, so they look like they belong. The system can even create new effects on the fly using advanced AI models, based on what is happening in the video.
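As a rough illustration of scene-matched effects, the sketch below maps a fragment's scene label to an overlay and a placement hint, and passes along telemetry the overlay might display. The catalogue of effects and the anchor names are hypothetical; the patent describes the idea, not this particular table.

```python
# Illustrative catalogue of overlays matched to scene types.
SCENE_EFFECTS = {
    "forest":   {"overlay": "animated_deer", "anchor": "roadside"},
    "lake":     {"overlay": "flying_bird",   "anchor": "sky"},
    "mountain": {"overlay": "route_map",     "anchor": "corner"},
    "race":     {"overlay": "speed_gauge",   "anchor": "corner"},
}

def choose_effect(frag):
    """Pick an overlay that matches the fragment's scene, if one exists."""
    effect = SCENE_EFFECTS.get(frag.scene_label)
    if effect is None:
        return None
    return {
        "overlay": effect["overlay"],
        "anchor": effect["anchor"],
        "start_s": frag.start_s,
        "duration_s": frag.duration_s,
        # Telemetry can feed the overlay, e.g. the live speed for a gauge.
        "speed_kph": frag.telemetry.speed_kph,
    }
```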

Smart Storage: Never Lose the Best Moments

What if your car’s memory is nearly full? The system has a solution for that too. Every video fragment gets a “priority score” based on how important it is (using the AI scene analysis, car data, and user preferences). When space runs low, the system deletes the fragments with the lowest scores first—like long stretches of empty road or when the car is parked. Fragments with high scores—meaning they show something special—are protected and never deleted until you say so.

This way, you always have room for new footage, but you never lose the moments that matter. If there’s an accident or a funny family moment, the system saves those automatically.
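A priority-based cleanup of this kind could be sketched as follows: delete the lowest-scoring fragments first and never touch anything above a protection threshold. The threshold value and the fixed fragment size are simplifying assumptions for this example.

```python
def free_up_space(fragments, bytes_needed, fragment_size, protect_above=0.7):
    """Delete the lowest-priority fragments until enough space is freed.

    fragments: list of Fragment objects with priority scores already assigned.
    bytes_needed: how much storage must be reclaimed.
    fragment_size: approximate size of one stored fragment, in bytes.
    protect_above: fragments scoring above this are never auto-deleted.
    """
    freed = 0
    deleted = []
    # Lowest scores go first: parked-car footage, empty roads, and so on.
    for frag in sorted(fragments, key=lambda f: f.score):
        if freed >= bytes_needed:
            break
        if frag.score > protect_above:
            break  # everything left is protected; stop rather than lose highlights
        deleted.append(frag)
        freed += fragment_size
    return deleted
```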

How It All Comes Together: An Example

Suppose you take your EV on a weekend trip. You drive through the city, then into the mountains, stopping at a lake for a picnic. Your car’s cameras record everything. When you get home, you want to make a three-minute video to share.

Here’s what happens:

  • The system collects all the video and splits it into short fragments, each tagged with where, when, and what was happening in the car.
  • The AI looks at each piece, figures out the type of scene, and scores it. It notices the pretty mountain view, the moment when you swerved to avoid a deer, and the picnic by the lake.
  • For each moment, it picks the best camera view, making sure not to switch too often, and puts the fragments together in order.
  • The system checks your preferences (maybe you love nature scenes) and makes sure to include more of the mountain and lake shots.
  • It adds a little animation: a bird flying across the sky during the lake picnic, your route traced on a map, and your speed during the mountain drive.
  • If the car is running out of space, it deletes footage of the boring city streets, but keeps all the good parts safe.
  • You get a polished, shareable video with all the highlights, in just a few clicks.

Why This Is Innovative

This invention brings together several key ideas in a way that has not been done before:

  • Using AI trained on lots of real-world driving footage to understand what is happening in every scene, not just relying on simple rules
  • Combining video content with car data (speed, location, mode, sensor readings) to score and pick the best moments
  • Making sure the final video is smooth, not jumpy, by using a smart bias that avoids switching views too often
  • Allowing both automatic and user-driven customization, so you get the video you want without hours of manual editing
  • Adding effects and information that actually match what’s happening, placed in just the right spot
  • Smart storage management, so you never lose the moments that matter, even if the car’s memory is full

All these pieces work together in a system designed for the real needs of EV drivers, using today’s best AI technology. The result is a system that saves time, makes video journals easy and fun, and ensures that your favorite moments are never lost.

Conclusion

The world of electric vehicles is changing fast, and so is the way we capture and share our journeys. This patent application shows how smart computer programs and car data can work together to solve real problems—saving drivers time, making memories easier to keep and share, and protecting what matters most. By using AI to pick the best scenes, smoothly combine them, add creative effects, and manage storage, this invention offers a complete solution for anyone who wants more from their in-car cameras. Whether you’re an EV owner, a car company, or a tech fan, this technology points the way to a future where your car helps you remember, relive, and share every journey—without any hassle.

To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250218178.