Invented by Marom, Raz; Cohen, Dror; Zukerman, Jonatan
Today’s blog dives into a cutting-edge patent application that proposes a new way to handle large volumes of cybersecurity data. The invention focuses on how computer systems can filter and present security alerts by smartly selecting which data to show to security analysis engines, especially those powered by AI. Let’s break down what this technology means for the world of cybersecurity, how it compares to what’s already out there, and what makes it unique.
Background and Market Context
Almost every business, school, and government has some kind of computer network. These networks are the heart of daily operations, keeping data flowing and people connected. But as we all know, these networks are always at risk. Bad actors can slip through cracks using malware, ransomware, or other tricks to steal or lock up important information. With so much at stake, organizations spend a lot of money on tools and teams to watch for trouble and keep their systems safe.
Most security systems create tons of data. Every time someone logs in, opens a file, or changes a setting, it gets recorded in a log file. These logs are like the diary of the network, tracking every move. Firewalls, servers, laptops, and even printers write their own entries. Over time, these logs pile up, creating a mountain of information. Sorting through this mountain to spot the odd or dangerous events (“anomalies”) is a tough job. Many security teams use special software and even artificial intelligence to help, but the tools can get overwhelmed by the sheer amount of data.
The problem is getting worse as organizations grow and as more devices are connected. A single bank or hospital can generate millions of security events every day. Security teams don’t have the time or resources to look at every item. What’s needed is a way to quickly spot the important, unusual things and bring them to the attention of experts or automated systems, without getting bogged down by normal, harmless activities. That’s where this patent application steps in, with a smart way to filter and present only the most relevant information to the systems (like AI engines) that decide if action needs to be taken.
Scientific Rationale and Prior Art
Let’s talk about how people have tried to solve this problem before, and why those solutions don’t always hit the mark.
Traditionally, security teams used simple rules to spot problems. For example, if someone logs in from two countries within an hour, that’s suspicious. Or if a user suddenly downloads a huge number of files, that could be a sign of trouble. These rules are hard-coded and easy to understand, but they can miss clever attacks and often generate lots of false alarms.
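The two rules above can be sketched as simple hard-coded checks. This is a minimal illustration, not any real product's logic; the field names ("country", "timestamp", "files_downloaded") and the thresholds are assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical hard-coded detection rules of the kind described above.
# Field names and thresholds are illustrative assumptions.

def impossible_travel(login_a, login_b, window=timedelta(hours=1)):
    """Flag two logins from different countries within a short window."""
    return (login_a["country"] != login_b["country"]
            and abs(login_a["timestamp"] - login_b["timestamp"]) < window)

def bulk_download(event, threshold=500):
    """Flag a user suddenly downloading an unusually large number of files."""
    return event.get("files_downloaded", 0) > threshold

a = {"country": "US", "timestamp": datetime(2025, 1, 1, 9, 0)}
b = {"country": "BR", "timestamp": datetime(2025, 1, 1, 9, 30)}
print(impossible_travel(a, b))  # two countries, 30 minutes apart: prints True
```

Rules like these are easy to read and audit, but as the text notes, they are brittle: attacks that don't trip a threshold slip through, and ordinary travel or shared accounts trip false alarms.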
As networks grew, companies began using machine learning and artificial intelligence to help spot dangerous activity. These systems can look for patterns that humans might miss. They can “learn” what normal behavior looks like and flag anything odd. Some systems use models called “large language models” (LLMs, like the brains behind ChatGPT), which can process lots of text and help summarize or explain what’s happening in the logs.
The challenge with these AI systems is that they can only process so much information at once. Think of the AI as having a window – it can look through this window and see only a limited number of events at a time. If you try to cram too much data through the window, the AI gets confused or slows down. If you give it too little, it may miss the context it needs to make a smart judgment.
Some past solutions simply fed the AI all the data and hoped it could sort out what matters. Others tried to guess which parts were most important, but sometimes left out things the AI needed. In both cases, the results can be slow, expensive, or inaccurate.
There’s also a history of using “enrichment” – adding more details to log entries to make them easier to understand. For example, a login event might be enriched with information about which country it came from or whether the user has ever done that action before. This helps, but doesn’t solve the problem of information overload.
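As a toy illustration of that enrichment step, the sketch below adds a country and a "seen before" flag to a raw login event. The lookup tables stand in for real GeoIP and history services, and every field name here is an assumption for the example:

```python
# Minimal enrichment sketch; the dictionaries below are stand-ins for a
# GeoIP service and a per-user history store, and the event schema is assumed.

GEO_BY_IP = {"203.0.113.7": "FR"}          # stand-in for a GeoIP lookup
SEEN_BEFORE = {("alice", "login", "FR")}   # stand-in for a history store

def enrich(event):
    enriched = dict(event)
    enriched["country"] = GEO_BY_IP.get(event["ip"], "unknown")
    key = (event["user"], event["type"], enriched["country"])
    enriched["seen_before"] = key in SEEN_BEFORE
    return enriched

print(enrich({"user": "alice", "type": "login", "ip": "203.0.113.7"}))
# the event gains "country" and "seen_before" fields
```

Each enriched entry is easier to judge on its own, but as the text says, enrichment alone makes every log line longer without reducing how many lines there are.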
The new invention described in this patent builds on these ideas but goes a step further. Instead of sending all the data or just random samples to the analysis engine, it carefully curates which events to send based on what’s unusual and what’s normal, focusing on the most relevant information for each potential security incident. This approach is designed to work well with both older systems and the newest AI engines, saving time and computer power while improving the quality of the analysis.
Invention Description and Key Innovations
Now let’s get into the heart of the invention and see why it stands out from the crowd.
This patent application introduces a computer system that acts like a smart security helper. When something strange happens—say, a user logs in from a new country or a device tries to access a file it shouldn’t—the system flags this as an “anomalous” event. But instead of just sending this one event to the analysis engine, it adds more context by picking out similar events that are “normal.” This way, the AI or security analyst can see what’s different and what’s usual, making it much easier to decide if the anomaly is a real threat or just a harmless blip.
Here’s how it works, step by step:
First, the system receives a security event that looks odd. This could be anything from a strange login to a file being changed in an unexpected way. The system checks what’s unusual about the event—maybe it’s the country, the time of day, or the device used.
Next, the system searches its database of past security events to find another event of the same type (for example, a login) for the same user or device, but where everything looked normal. This “normal” event is picked to match the property that was strange in the first place. For example, if the anomaly is a login from a new country, the normal event might be a recent login from the user’s usual country.
Then, the system creates a special package of information. This package includes:
- The original, anomalous event
- A tag saying this event is unusual
- The normal, matching event
- A tag saying this event is normal
This package is sent to the analysis engine (which could be an AI like a large language model or another type of analysis tool). The analysis engine reviews the package and decides what to do—maybe it will generate a summary of what’s happening, send an alert to a human, restrict access, or take other action to keep the network safe.
But the system can do even more. It can add extra context to help the analysis engine make an even better decision. For example, it might randomly pick a few other normal events for the same user or device, or it might look for normal events from a group of similar users in the organization. By carefully choosing which events to include, the system gives the analysis engine just enough information to make a smart call, without overwhelming it.
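The steps above can be sketched roughly as follows. The field names, the `anomalous_property` key, and the tagging scheme are my assumptions for illustration; the patent application describes the idea, not this code:

```python
import random

# Sketch of the curation step: pair the anomalous event with a matching
# normal one, tag both, and optionally sample extra normal events for context.

def find_matching_normal(history, anomaly):
    """Most recent normal event of the same type, for the same user,
    that differs on the property flagged as anomalous (e.g. country)."""
    prop = anomaly["anomalous_property"]
    candidates = [e for e in history
                  if e["type"] == anomaly["type"]
                  and e["user"] == anomaly["user"]
                  and e[prop] != anomaly[prop]]
    return max(candidates, key=lambda e: e["timestamp"], default=None)

def build_package(history, anomaly, extra_samples=2):
    package = [
        {"event": anomaly, "tag": "anomalous"},
        {"event": find_matching_normal(history, anomaly), "tag": "normal"},
    ]
    # Optional extra context: a few more normal events for the same user.
    pool = [e for e in history if e["user"] == anomaly["user"]]
    for e in random.sample(pool, min(extra_samples, len(pool))):
        package.append({"event": e, "tag": "normal"})
    return package

history = [
    {"type": "login", "user": "alice", "country": "US", "timestamp": 1},
    {"type": "login", "user": "alice", "country": "US", "timestamp": 2},
]
anomaly = {"type": "login", "user": "alice", "country": "KP",
           "timestamp": 3, "anomalous_property": "country"}
pkg = build_package(history, anomaly, extra_samples=1)
print(pkg[0]["tag"], pkg[1]["event"]["timestamp"])  # the anomaly, then the latest normal login
```

In a real deployment the "history" would be a query against an event database and the sampling could also draw from a peer group of similar users, as the text describes; the structure of the package, though, is the key idea: tagged anomalous and normal events side by side.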
There are several clever features in this design:
1. Smart Filtering: Instead of dumping all the data into the analysis engine, the system filters and selects only the most relevant events. This keeps things fast and efficient, even in very large organizations with lots of events.
2. Flexible Context Building: The system can choose different types of context depending on what’s needed. It might focus on events from the same user, the same device, or the same group within the company. This flexibility means the system can adapt to many situations.
3. Tailored for AI Engines: The invention is designed with AI analysis in mind. It creates inputs that fit within the “window” of what a large language model can process, making sure the AI has enough context but isn’t overwhelmed.
4. Automated Security Actions: Based on the analysis engine’s output, the system can trigger automatic responses. These could include sending alerts to security teams, locking accounts, isolating devices, or other steps to protect the organization.
5. Enrichment and Group Analysis: Before sending events to the analysis engine, the system can enrich them with extra details or compare them to group behavior. For instance, if a user logs in from an unusual country, the system can check what countries are common for the whole team or department and include that context.
6. Efficient Use of Resources: By reducing the amount of data sent to analysis engines, the system saves both time and computing power. This is especially important for AI models, which can be expensive to run on large datasets.
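A crude way to respect that AI "window" (point 3) is to trim the curated package until its serialized form fits a prompt budget. Everything here is an assumption for the sketch: the package as a list of tagged events with the anomaly first and its matched normal event second, the 4-characters-per-token heuristic, and the drop-extras-from-the-end policy:

```python
import json

# Illustrative trimming of a context package to fit an LLM prompt budget.
# The token heuristic and the trimming policy are assumptions, not the
# application's actual mechanism.

def estimate_tokens(obj):
    return len(json.dumps(obj)) // 4  # rough heuristic, not a real tokenizer

def fit_to_budget(package, budget_tokens=256):
    trimmed = list(package)
    # Never drop the anomalous event or its matched normal event;
    # shed optional extra context from the end until we fit.
    while len(trimmed) > 2 and estimate_tokens(trimmed) > budget_tokens:
        trimmed.pop()
    return trimmed

pkg = ([{"tag": "anomalous", "event": {"user": "alice", "country": "KP"}},
        {"tag": "normal", "event": {"user": "alice", "country": "US"}}]
       + [{"tag": "normal", "event": {"note": "x" * 100}} for _ in range(5)])
print(len(fit_to_budget(pkg, budget_tokens=40)))  # extras dropped, core pair kept
```

Dropping the least essential context first is one plausible reading of "just enough information": the engine always sees the anomaly and its baseline, and loses only the optional extras when space runs out.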
These innovations work together to make sure security teams and AI tools see the most important information first, with just the right amount of context to make smart, quick decisions. The result is a system that can spot real threats faster and with fewer false alarms, while also making the best use of modern AI technologies.
Conclusion
This patent application sets a new bar for handling security data in large organizations. By combining smart filtering, context-aware event selection, and AI-friendly packaging, it tackles the core problems facing today’s cybersecurity teams: information overload, slow response times, and inefficient use of advanced analysis engines. As networks get bigger and threats become more complex, systems like this will be key to staying ahead of attackers and keeping our digital lives safe. If you manage or build cybersecurity solutions, paying attention to inventions like this one could give you the edge you need to protect what matters most.
To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250335584.