Invented by Hojun Jo, Hyeonwu Kim, and Yoonjin Lee
Storage is the invisible backbone of every digital experience—whether you’re streaming a movie, saving a document, or running a business application. But as data grows and tasks get more complex, our storage devices need to be both fast and smart. Today, we’ll explore a new patent application that’s all about making storage devices, like SSDs, more flexible and responsive to your needs, thanks to a clever way of handling how they read data.
Background and Market Context
Not too long ago, hard disk drives (HDDs) ruled the world of digital storage. They were big, slow, and full of tiny moving parts. Then came solid state drives (SSDs), which changed everything by using flash memory—tiny chips that store data electronically, not mechanically. SSDs are much faster, tougher, and quieter than HDDs. This is why you now find them everywhere: in laptops, phones, cars, drones, and even data centers powering the cloud.
But the story doesn’t end there. As more people and businesses use digital devices, the demands on SSDs keep growing. Users want instant access to their files, while companies expect their servers to handle many tasks at once, without slowing down. Modern SSDs need to juggle lots of requests, coming from many different sources, each with different needs. Some requests need to be answered right away (like loading an app), while others focus on moving lots of data quickly (like backing up files).
To keep up, storage devices must be smarter about how they handle these requests. It’s not enough to be just “fast”; they need to be flexible—able to switch between different ways of reading data, depending on what the situation calls for. This is where the new patent comes in, offering a way for storage devices to pick the best read method on-the-fly, improving both speed and efficiency.
In big data centers, this flexibility is even more important. Here, servers might serve thousands of users at the same time, each with different expectations. Some might be watching a video (which needs smooth, steady data flow), while others are doing quick searches (which need fast responses). A storage device that can adjust its behavior in real time helps keep everyone happy, with no wasted time or power.
In short, the market is hungry for storage that’s not just fast, but also smart—able to adapt to changing needs without extra hardware or cost. This patent tackles exactly that challenge.
Scientific Rationale and Prior Art
To understand what’s new here, let’s look at how SSDs have traditionally handled reading data, and why that approach can fall short.
Inside every SSD, there’s a controller (think of it as the brain) and a collection of nonvolatile memory chips (the storage itself). When a request comes in—say, to load a photo—the controller sends a set of commands to the memory chips, telling them what data to pull and when. These commands include the main “read” instruction and some “status check” commands to make sure the operation is going as planned.
In the past, SSDs followed a fixed pattern for reading data. For example, they might always use a method that focuses on keeping response time (latency) as low as possible. This is great for some situations, like when you want a program to open instantly. But it’s not ideal when you need to move a lot of data quickly (bandwidth), such as during large file transfers or backups.
Older storage devices often couldn’t switch between these different priorities. They’d stick with one method, even if it wasn’t the best for the current job. This led to wasted time—sometimes the device would wait longer than needed before starting the next task, or use extra commands that slowed down the overall data flow.
Some earlier efforts tried to improve flexibility, like using smarter queues or letting users manually set certain preferences. But these solutions were either too basic, too manual, or didn’t go far enough in adapting to real-time changes in demand.
The main technical challenge is that different types of read requests have different needs. For example:
– Latency-sensitive requests (like opening an app): Need the fastest possible response, even if it means using more status checks.
– Bandwidth-sensitive requests (like streaming video or copying large files): Need to move as much data as possible in a short time, even if each individual response takes a bit longer.
No single fixed scheme works best for both. What’s missing in previous designs is the ability for the SSD to decide, on its own and in real time, which method to use for each request—and to switch instantly when conditions change.
This patent introduces a new system for making those decisions. It lets the storage device analyze each read request, decide what’s most important (speed or data flow), and then pick the best set of commands for the job. If the situation changes mid-stream—say, if the number of waiting requests suddenly jumps—the device can switch to a different method without missing a beat.
In short, the scientific rationale here is simple: Different jobs need different approaches, and the best storage device is one that can adjust on the fly, maximizing performance without extra hardware or complexity.
Invention Description and Key Innovations
Now let’s break down what the patent actually claims, in plain English, and why it matters for real-world users.
At its heart, this invention is about letting the storage device choose between two (or more) different ways of reading data, based on what’s most important for each request. Here’s how it works:
Every time a read request comes in—whether it’s from an external device (like your computer asking for a file) or generated inside the storage device itself (like during background maintenance)—the controller looks at the “attribute” of the request. This attribute might be based on several things, such as:
– The type of request (random or sequential, internal or external)
– How many requests are waiting in line (queue depth)
– The characteristics of the device making the request (is it a server, a user PC, or a specific domain?)
– The physical function or channel being used (helpful in big, multi-user environments)
Depending on these details, the controller decides whether to use the first read scheme (focused on low latency) or the second read scheme (focused on high bandwidth). Each scheme uses a different “command set”—a package of instructions sent to the memory chips.
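To make this concrete, here is a minimal sketch in Python of what such a decision could look like. The names (ReadRequest, select_scheme) and the threshold are illustrative assumptions, not taken from the patent, which leaves the exact policy up to the controller designer.

```python
from dataclasses import dataclass
from enum import Enum

class ReadScheme(Enum):
    LOW_LATENCY = 1     # "first read scheme": two status checks
    HIGH_BANDWIDTH = 2  # "second read scheme": one status check

@dataclass
class ReadRequest:
    is_sequential: bool   # sequential vs. random access
    is_internal: bool     # generated inside the SSD (e.g., maintenance) vs. by a host
    queue_depth: int      # how many requests are currently waiting
    source_function: int  # which host / physical function issued the request

def select_scheme(req: ReadRequest, queue_threshold: int = 8) -> ReadScheme:
    """Map a request's attributes to a read scheme (illustrative policy only)."""
    if req.is_internal or req.is_sequential or req.queue_depth >= queue_threshold:
        # Deep queues and bulk transfers favor overall data flow.
        return ReadScheme.HIGH_BANDWIDTH
    # Shallow queue, random host read: favor the fastest individual response.
    return ReadScheme.LOW_LATENCY
```

Because a decision like this is re-evaluated for every incoming request, the chosen scheme can change the moment the queue depth or the requester changes.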
The First Read Scheme (Low Latency):
- The controller sends the main read command to the memory.
- It waits a set amount of time (the “read time”) for the data to be fetched internally.
- Then it sends a first status check command to see if the operation is done.
- During the next interval, the memory device both outputs the data and prepares itself for the next command.
- After this, a second status check command is sent to ensure everything is ready for the next step.
This method uses two status check commands. It’s great for situations where you want the quickest possible response to each request. However, because of the extra checking, it can slow down overall data flow when you have lots of requests.
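As a rough sketch of that two-status-check sequence, assuming a hypothetical `nand` object with `send_read`, `send_status`, and `get_data` helpers (these names stand in for the real flash interface and are not from the patent):

```python
import time

def read_low_latency(nand, address, t_read_us=50, t_prep_us=10):
    """First read scheme: two status checks, quickest response per request."""
    nand.send_read(address)        # main read command
    time.sleep(t_read_us / 1e6)    # wait the internal fetch time (the "read time")
    nand.send_status()             # first status check: is the data ready?
    data = nand.get_data(address)  # data output; the memory also prepares for the next command
    time.sleep(t_prep_us / 1e6)
    nand.send_status()             # second status check: ready for the next step?
    return data
```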
The Second Read Scheme (High Bandwidth):
- The controller sends the main read command.
- It waits for both the data fetch and the preparation for the next command to finish (these are done together in a longer interval).
- Then it sends a single status check command to confirm that both steps are done.
- Finally, the data is output, and the next command can be sent almost immediately.
This method uses only one status check command, and the controller can move to the next request more quickly after the data is delivered. This boosts data flow, making it ideal for heavy workloads where moving lots of data is more important than the fastest individual response.
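The same kind of sketch for the single-status-check sequence, using the same hypothetical `nand` helpers:

```python
import time

def read_high_bandwidth(nand, address, t_read_us=50, t_prep_us=10):
    """Second read scheme: one status check, better sustained data flow."""
    nand.send_read(address)                    # main read command
    time.sleep((t_read_us + t_prep_us) / 1e6)  # fetch and next-command preparation finish in one longer wait
    nand.send_status()                         # single status check confirms both steps are done
    data = nand.get_data(address)              # data output; the next command can follow almost immediately
    return data
```

Per request, the second scheme waits a little longer before the data appears, but it spends one fewer command on each read, and that saving adds up across thousands of queued requests.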
The real magic is in how the device decides which approach to use, and how it can switch between them instantly as soon as conditions change. For example, if the number of waiting requests suddenly spikes, the controller might switch from the latency-focused method to the bandwidth-focused method. Or, if a special request comes in from a high-priority user, it can flip back to the low-latency method.
The patent also describes how this decision process works inside the controller:
– An analysis logic circuit looks at the request and figures out key details (type of request, queue depth, host device, etc.).
– A determination logic circuit turns this analysis into a simple decision: which scheme to use.
– A read scheme control logic circuit then puts together the right command set and sends it to the memory.
This setup can be expanded to handle more than two schemes or more complex decision rules, if needed.
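One way to picture that division of labor is as a small three-stage pipeline. The class names, the dictionary of attributes, and the decision rule below are purely illustrative assumptions, not the patent's actual circuit design:

```python
class AnalysisLogic:
    """Extracts the attributes the decision depends on."""
    def analyze(self, request, queue_depth):
        return {
            "sequential": request.get("sequential", False),
            "internal": request.get("internal", False),
            "queue_depth": queue_depth,
            "source": request.get("source"),  # e.g., host ID or physical function
        }

class DeterminationLogic:
    """Turns the analyzed attributes into a single scheme choice."""
    def determine(self, attrs, queue_threshold=8):
        heavy = attrs["internal"] or attrs["sequential"] or attrs["queue_depth"] >= queue_threshold
        return "HIGH_BANDWIDTH" if heavy else "LOW_LATENCY"

class ReadSchemeControl:
    """Assembles the command set that matches the chosen scheme."""
    def build_command_set(self, scheme, address):
        if scheme == "LOW_LATENCY":
            # first scheme: read, status check, data out, second status check
            return [("READ", address), ("STATUS",), ("DATA_OUT", address), ("STATUS",)]
        # second scheme: read, single status check, data out
        return [("READ", address), ("STATUS",), ("DATA_OUT", address)]

# Usage sketch: analyze -> determine -> build the command set
attrs = AnalysisLogic().analyze({"sequential": True}, queue_depth=32)
scheme = DeterminationLogic().determine(attrs)
commands = ReadSchemeControl().build_command_set(scheme, address=0x1A00)
```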
What’s also important is that this system doesn’t require any changes to the physical hardware of the memory chips. It’s all about smarter control and better use of existing resources. That means device makers can improve performance without adding cost or complexity.
The patent even covers multi-host and multi-function environments, like big data centers, where different users or servers might have different needs. The controller can identify which user or function is making each request, and pick the best read method accordingly.
Finally, the patent claims cover both the method (how the controller acts) and the device itself (the SSD or other storage product that uses this method).
Why This Matters in Practice
For everyday users, this means your devices can feel snappier and more responsive, especially when you’re running lots of programs at once or moving big files. For companies running servers or cloud applications, it means storage can adapt to handle changing workloads, keeping performance high and power use low. And for device makers, it’s a way to get more out of their existing hardware, just by making the controller software and logic smarter.
As more devices and users connect to the cloud, this kind of flexible, dynamic storage will only become more important. Whether you’re a gamer, a business owner, or a data center operator, you’ll benefit from storage that can adapt in real time, always choosing the best method for the job at hand.
Conclusion
In a world where fast, reliable storage is more important than ever, this patent offers a powerful new tool for making SSDs and other storage devices smarter and more flexible. By letting the controller choose the best way to read data, based on real-time needs, we get the best of both worlds: quick responses when we need them, and high data flow when it matters most. All of this happens without extra hardware or manual settings—just smarter, more adaptive control behind the scenes.
For users, this means better performance and smoother experiences. For businesses, it means storage that can keep up with demanding, ever-changing workloads. And for the industry as a whole, it points the way toward a new generation of storage devices that are not just fast, but truly intelligent.
To read the full application, visit https://ppubs.uspto.gov/pubwebapp/ and search for publication number 20250217038.