Invented by Young Seung Yoon, Yuetong Zhou, Parveen Kumar, Hengchao Wang, Yuan Wang, Lijie Wan, Jing Huang, Mingang Fu, Himanshu Gupta, Mohit Agarwal, and Francisco Javier Medina

Many companies today use network applications to connect users with offers, services, and even deliveries. The key to making these platforms work well is knowing what to offer and at what value. This article takes a close look at a newly published patent application for optimizing these feature values in real time, using trained models and live data. Let’s break down what this invention means, where it fits in the market, how it builds on past work, and what makes it special.

Background and Market Context

Every day, millions of people use network applications to order rides, get food delivered, shop online, and more. Behind the scenes, these platforms must decide what offers to show each user, how much to charge, and which features will get the best results. The choices they make impact not just sales, but how happy users and drivers feel about the service.

Take last-mile delivery as an example: the final leg of getting a product from a warehouse or store to a customer’s home. Companies like ride-share and food delivery apps need to set fair prices for drivers while also making sure customers find value. Too high a price, and customers may leave. Too low, and drivers may not accept the task.

The same is true for e-commerce, online marketplaces, and many other networked services. Each relies on feature values — like price, incentives, or discounts — to encourage specific actions. These values often need to change quickly based on supply, demand, and user behavior.

In the past, companies often set these feature values based on simple rules or guesses. They might start with a base price, then add “surge” amounts if supply gets tight. However, this approach can lead to values that are either too high or too low, missing the sweet spot that drives the best results. Over time, small errors add up, costing companies money or hurting user trust.

The market is now demanding smarter solutions. With more competition and higher customer expectations, companies need ways to fine-tune offers in real time — not just for one user, but across all users and situations. They want systems that learn from data and adjust automatically, keeping the network balanced and efficient.

This is where the new patent system comes in. It gives companies a way to set these feature values dynamically, using advanced models that learn from real-world results. By doing this, they can better match prices and offers to what users actually want, making the network work for everyone.

Scientific Rationale and Prior Art

To understand this new invention, let’s look at how things were done before, and why that left room for improvement.

Most old systems used a simple two-step method. First, they would set a base feature value — say, a normal price for a delivery. If that price was not bringing in enough drivers or customers, they would add an extra amount, sometimes called a “surge.” This surge was meant to encourage faster responses or to cover busy times.

While this worked to some extent, it had several problems. The base value was often a guess, based on old data or fixed rules. Surge amounts could be too small or too big, because they didn’t really learn from what was happening right now. This led to overpaying or missing out on customers. Adjustments were slow and not always fair.

Some platforms tried to use basic data analysis or simple machine learning to improve things. For example, they might look at how many drivers were available, how many orders were coming in, or how quickly offers were accepted. They could use this data to tweak their rules a bit, but these models were limited. They didn’t always capture complex patterns, and they didn’t adjust for every unique offer or time slot.

Over time, researchers and product teams explored using more advanced tools, like neural networks, logistic regression, or deep learning. These tools can find deeper links between supply, demand, and behavior. But even then, most systems only used the models to make small tweaks. They still relied on averages or set increments, not real optimization.

A key problem was the inability to set a clear “feature reduction goal.” In other words, if a company wanted to lower costs by a certain amount on average, they didn’t have a good way to spread that reduction across many different offers and times without making the network unstable. They also struggled to tell the difference between a needed adjustment (to meet a goal) and the actual effect that adjustment would have on user behavior.

Some prior art described systems for dynamic pricing, demand prediction, or supply management. For example, ride-hailing companies patented ways to match drivers and riders, using some smart rules. Other companies used machine learning to predict busy times or suggest prices. But these systems often lacked a way to set and meet specific feature goals, or to adjust feature values in a way that was both optimized and fair across many users and offers.

The new invention builds on these ideas but goes further. It provides a way to set a feature reduction goal — like lowering average base price by a set amount — and then uses a trained feature value model to decide how much to adjust each offer. The model can learn from data, test new ideas, and keep the system balanced. It even works across groups of offers, time slots, or other segments, making sure the average adjustment meets the goal without hurting overall performance.

This is a big step forward, because it allows for both precision and flexibility. The system can give each offer its own optimized value, based on current data, while still reaching the big-picture goals set by the business.

Invention Description and Key Innovations

Now, let’s dive into how this invention works, in simple terms, and what makes it special.

At its core, the system is made up of a computer (with memory and a processor) that follows a set of instructions. Here’s what it does:

It starts by creating a set of “weights” for each offer. These weights measure how important different features are, such as time, location, or user type. The system then looks at the company’s goal — for example, to lower the average price by a certain amount — and gets the current base value for each offer.

The real magic happens when the system uses a trained model to figure out how much to adjust each offer’s feature value. This model is not static; it has learned from past data and can use current information to make the best adjustment. Importantly, the adjustment it recommends is not always the same as the goal. Instead, it takes into account how the adjustment will affect user behavior, supply, and acceptance rates.

Once it knows how much to adjust, the system applies this change to the base value, creating an optimized value for each offer. It then sends these new offers — with their optimized values — out to users’ devices. This happens fast and can cover many offers at once.

One key feature is that, even though each offer may get a different adjustment, the average adjustment across all offers matches the company’s goal. For example, if the goal is to lower prices on average by $1, some offers might be lowered by more, some by less, but the average will hit the target. The system can group offers by time slots (like hours of the day) or other sets, applying special adjustments for each group as needed.
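As a rough illustration of that averaging constraint (a minimal sketch with invented numbers, not the patented implementation), the raw per-offer adjustments from a model can be shifted by a constant within each group, such as a time slot, so that each group’s mean reduction lands exactly on its goal:

```python
def meet_group_goals(offers, goals):
    """Shift raw model adjustments so each group's average reduction hits its goal.

    offers: {group: [(base_value, raw_adjustment), ...]}
    goals:  {group: target average reduction for that group}
    Returns {group: [optimized_value, ...]}.
    """
    optimized = {}
    for group, rows in offers.items():
        raws = [adj for _, adj in rows]
        # constant shift so mean(adjusted reductions) == goals[group]
        shift = goals[group] - sum(raws) / len(raws)
        optimized[group] = [base - (adj + shift) for base, adj in rows]
    return optimized

# Hypothetical offers grouped by time slot (base value, model's raw adjustment)
offers = {
    "slot_8am": [(10.00, 0.80), (12.00, 1.40), (9.00, 0.80)],
    "slot_5pm": [(15.00, 0.50), (14.00, 0.90)],
}
goals = {"slot_8am": 1.00, "slot_5pm": 0.60}
result = meet_group_goals(offers, goals)
```

Each offer keeps its own, data-driven adjustment; only a uniform shift is applied per group, so the relative differences the model learned are preserved while the business-level average is met exactly.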

The model itself uses advanced techniques. It may have two layers: the first gathers the needed parameters (like supply, demand, or user attributes), and the second decides the exact adjustment. Sometimes the adjustment is a simple multiplier (like 0.97 to lower a price by 3%). The weights used by the model can be created using logistic regression, a statistical method well suited to relating supply levels to acceptance rates.
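A toy version of that two-stage idea (the weights, features, and multiplier mapping below are all invented for illustration) could use a logistic link to relate offer features to a predicted acceptance rate, then map that prediction to a price multiplier:

```python
import math

def acceptance_probability(features, weights, bias):
    # stage 1: combine the input parameters (supply, demand, user attributes)
    z = bias + sum(w * x for w, x in zip(weights, features))
    # logistic link: weighted sum -> probability the offer is accepted
    return 1.0 / (1.0 + math.exp(-z))

def price_multiplier(features, weights, bias, max_cut=0.10):
    # stage 2: turn predicted acceptance into a multiplier on the base value;
    # higher predicted acceptance allows a deeper cut, capped at max_cut
    p = acceptance_probability(features, weights, bias)
    return 1.0 - max_cut * p

# Invented feature vector: [driver supply ratio, demand level]
m = price_multiplier([1.2, 0.5], weights=[0.8, -0.4], bias=0.1)
optimized_price = round(10.00 * m, 2)
```

With these made-up numbers the multiplier comes out a little above 0.9, so a $10.00 base value is cut by under a dollar; when predicted acceptance is low, the cut shrinks toward zero.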

When used in a last mile delivery setting, the system can optimize the “base trip value” — the main price offered for a delivery. But it can work with any feature in any network app, like discounts, incentives, or even non-monetary values.

The invention also describes how to train the model. It uses real data, tests different adjustments, and learns over time. It can take in feedback, such as user acceptance rates or direct comments, and use this to improve the model further.
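A single online-learning step in that spirit (a hypothetical sketch using a standard logistic-regression gradient update, not the patent’s actual training procedure) might look like:

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedback_update(weights, bias, features, accepted, lr=0.05):
    """One gradient step from a single observed accept/reject outcome."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    error = (1.0 if accepted else 0.0) - logistic(z)  # prediction error
    new_weights = [w + lr * error * x for w, x in zip(weights, features)]
    return new_weights, bias + lr * error

# A driver accepted an offer the model was unsure about, so the
# model should now predict a higher acceptance probability for it.
w, b = feedback_update([0.2, -0.1], 0.0, [1.0, 0.5], accepted=True)
```

Each accepted or rejected offer nudges the weights, so the model’s acceptance predictions gradually track real behavior.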

There are extra features too. For example, after the model suggests an adjustment, the system can round the value to the nearest allowed increment, or set a minimum to avoid going too low. It can test new settings using experiments (called switchback variants) to see if the changes work well before rolling them out everywhere.
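That post-processing step can be as simple as snapping and flooring the optimized value (the increment and minimum below are made-up values, not ones from the patent):

```python
def finalize_value(value, increment=0.05, minimum=2.00):
    # round to the nearest allowed increment, then enforce a price floor
    snapped = round(value / increment) * increment
    return round(max(snapped, minimum), 2)
```

For example, a model output of 7.237 would snap to 7.25, while an output of 1.12 would be lifted to the 2.00 floor.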

Overall, the invention offers a way to:

– Dynamically adjust feature values in real time, for each offer and user.
– Meet business goals (like lowering costs) without hurting supply or acceptance.
– Learn from data, improve over time, and test new ideas safely.
– Apply to many types of network apps, not just delivery.

This makes the system much smarter and more flexible than old methods. It helps companies stay competitive, keeps users happy, and makes the whole network run smoother.

Conclusion

The world of network platforms is getting more complex. Users expect fairness, speed, and good value, while companies need to balance costs and supply. The new system for dynamic feature value optimization brings machine learning and real-time data together to solve these challenges. By setting clear goals and letting the model decide the best adjustments, the system delivers value for both businesses and users. As more companies use smart, data-driven approaches like this, the future of network platforms will be more efficient, responsive, and user-friendly.

To read the full patent application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250363341.