Message: Flash Memory Storage Questions

"Where were digital files stored when the power was turned off during the timeframe of the Flashback patents?"

Others were using flash, however...

During that time frame the challenge was to make flash more versatile, and one of the main issues was the quality of the flash itself. Back then it had to be perfect, with no bad cells, in order to be utilized. Norris/Daberko came up with a method to utilize flash with bad cells... and IMO, all of the development/format work grew from there into an OS.

The method they implemented was able to mark off bad cells within an erase block while managing data segments around them. That way they were able to salvage the balance of the erase block, while others were disabling the whole erase block if it had even one bad cell in it. This was very important then, because the erase blocks were very large for the devices available at that time. Typically, for a 1 megabyte (8 megabit) flash, there were only 16 erase blocks, each being 64K in size. Real estate was important; giving up a full block for one bad cell was not the way to go. Because of that problem, implementers wanted pristine flash with no bad cells.
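To picture what that means in practice, here is a minimal sketch of the general idea: salvage the good segments of an erase block and allocate around its bad cells. This is my own illustration, with made-up names and sizes, not the actual patented method:

```c
/* Sketch of salvaging an erase block that contains bad cells.
 * NOT the patented Norris/Daberko method, just the general idea:
 * mark bad regions inside a 64K erase block and allocate data
 * segments around them, instead of discarding the whole block.
 * All names, sizes, and structures here are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ERASE_BLOCK_SIZE   (64 * 1024)  /* 64K blocks, 16 per 1 MB part */
#define SEGMENT_SIZE       512          /* hypothetical allocation unit */
#define SEGMENTS_PER_BLOCK (ERASE_BLOCK_SIZE / SEGMENT_SIZE)

typedef struct {
    /* One bit per segment: 1 = segment overlaps a bad cell, skip it. */
    uint8_t bad_map[SEGMENTS_PER_BLOCK / 8];
    uint8_t used_map[SEGMENTS_PER_BLOCK / 8];
} erase_block_t;

static int  map_get(const uint8_t *m, int i) { return (m[i / 8] >> (i % 8)) & 1; }
static void map_set(uint8_t *m, int i)       { m[i / 8] |= (uint8_t)(1 << (i % 8)); }

/* Record a bad cell found at a byte offset within the block. */
void mark_bad(erase_block_t *blk, uint32_t offset)
{
    map_set(blk->bad_map, offset / SEGMENT_SIZE);
}

/* Allocate the next good, unused segment; return its index or -1.
 * The rest of the block stays usable even though one segment is bad. */
int alloc_segment(erase_block_t *blk)
{
    for (int i = 0; i < SEGMENTS_PER_BLOCK; i++) {
        if (!map_get(blk->bad_map, i) && !map_get(blk->used_map, i)) {
            map_set(blk->used_map, i);
            return i;
        }
    }
    return -1; /* block full */
}

int main(void)
{
    erase_block_t blk;
    memset(&blk, 0, sizeof blk);
    mark_bad(&blk, 0x2A00);  /* one bad cell costs one segment, not 64K */
    printf("first usable segment: %d\n", alloc_segment(&blk));
    printf("salvaged segments: %d of %d\n",
           SEGMENTS_PER_BLOCK - 1, SEGMENTS_PER_BLOCK);
    return 0;
}
```

The point of the sketch: one bad cell costs one 512-byte segment instead of a whole 64K erase block.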

That issue is still important and is part of the Markman hearing at hand relating to the '337 patent.

...............................

In that time frame, most all other developments used a simple format with contiguous data structures: a very strict physical data arrangement where data recall depended on all of the data being specifically arranged. If the ducks were not in line... you ended up with a failure. It was because of this that pristine flash was preferred as well.

Norris/Daberko implemented ideas that allowed non-contiguous data structures, getting away from the traditional contiguous methods and the problems of precise data segment placement. They can implement both contiguous arrangements (in a non-traditional way) and non-contiguous ones, but both follow the same invented methods.
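For the non-contiguous idea, a rough sketch: each data segment carries a small link to the next one, so the physical placement no longer matters. Again, this is my own simplified illustration, not the patent's actual format; every name and size here is made up:

```c
/* Sketch of a non-contiguous file layout: each segment carries a small
 * header linking it to the next segment, so a file's data can land
 * anywhere in flash instead of needing one contiguous run.
 * An illustration of the concept only, not the patented format. */
#include <stdint.h>
#include <stdio.h>

#define SEG_COUNT 16
#define SEG_DATA  12      /* tiny payload to keep the demo readable */
#define SEG_END   0xFFFF  /* erased-flash value doubles as "no next" */

typedef struct {
    uint16_t next;        /* physical index of the next segment */
    uint16_t file_id;     /* which file this segment belongs to */
    char     data[SEG_DATA];
} segment_t;

segment_t flash[SEG_COUNT];  /* stand-in for the flash array */

/* Read a file by walking the segment chain, whatever the physical order. */
void read_file(uint16_t first_seg)
{
    for (uint16_t i = first_seg; i != SEG_END; i = flash[i].next)
        printf("%.*s", SEG_DATA, flash[i].data);
    putchar('\n');
}

int main(void)
{
    /* File 1 scattered across segments 3 -> 9 -> 5 (non-contiguous). */
    flash[3] = (segment_t){ .next = 9,       .file_id = 1, .data = "Flash files " };
    flash[9] = (segment_t){ .next = 5,       .file_id = 1, .data = "need not be " };
    flash[5] = (segment_t){ .next = SEG_END, .file_id = 1, .data = "contiguous. " };
    read_file(3);
    return 0;
}
```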

Others trying to get beyond the issues of traditional contiguous data arrangements began to implement status quo computer methods, using virtual directive structures (i.e., a FAT) to manage the physical data.

Norris/Daberko were beyond this: their physical data management methods do not involve the virtual directive structures that the status quo had to deal with. That especially matters when updating data in flash erase blocks, because you cannot simply overwrite flash like you can a traditional HDD. They were beyond the problem of having to deal with erase block issues when updating or creating files.
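The flash-update trick, roughly: flash bits can be programmed from 1 to 0 without an erase, so an update can be written out-of-place and the old copy retired by programming a status byte, with no erase block touched and no FAT to rewrite. A hypothetical sketch of that general technique, not the claim language:

```c
/* Sketch of updating data in flash without a FAT-style table and
 * without erasing: the new version is written out-of-place, and the
 * old segment's status byte is programmed from ERASED (0xFF) toward
 * OBSOLETE (0x00). Flash lets you flip bits 1 -> 0 without an erase.
 * All names and values here are assumptions for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SEG_COUNT 8
#define SEG_DATA  16

enum { SEG_ERASED = 0xFF, SEG_VALID = 0xF0, SEG_OBSOLETE = 0x00 };

typedef struct {
    uint8_t status;       /* only ever programmed 1 -> 0, never erased */
    char    data[SEG_DATA];
} segment_t;

segment_t flash[SEG_COUNT];

int find_free(void)
{
    for (int i = 0; i < SEG_COUNT; i++)
        if (flash[i].status == SEG_ERASED) return i;
    return -1;
}

/* Update: write new data to a free segment, then retire the old one. */
int update_segment(int old_seg, const char *new_data)
{
    int s = find_free();
    if (s < 0) return -1;                 /* would need garbage collection */
    strncpy(flash[s].data, new_data, SEG_DATA);
    flash[s].status = SEG_VALID;          /* 0xFF -> 0xF0: program, no erase */
    flash[old_seg].status = SEG_OBSOLETE; /* 0xF0 -> 0x00: program, no erase */
    return s;
}

int main(void)
{
    memset(flash, 0xFF, sizeof flash);    /* simulate freshly erased flash */
    strncpy(flash[0].data, "version 1", SEG_DATA);
    flash[0].status = SEG_VALID;
    int s = update_segment(0, "version 2");
    printf("live copy now in segment %d: %s\n", s, flash[s].data);
    return 0;
}
```

The point: only the changed segment moves; nothing else gets erased or rewritten just to update a file.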

With that, they developed methods and an API that allow advanced editing of data, among other things.

It's all still important, including portability, which matters a great deal: all of these methods can be seamlessly tied to the attributes of any higher-level OS.

Now, all that aside, they have I/O methods to interface all of that ability... and it requires RAM! However, in a non-traditional way. And this is part of the current Markman ruling we are waiting on.
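My guess at the flavor of the "RAM, but non-traditional" piece, based on common flash file system practice and not on the actual patented I/O method: scan the segment headers in flash at startup and build a disposable lookup index in RAM, while the flash headers remain the only authoritative record. A minimal sketch, with all names hypothetical:

```c
/* Sketch of a RAM-resident index rebuilt by scanning flash at mount.
 * No directory structure lives on the media; the RAM copy is
 * disposable and regenerated anytime. This is my own guess at the
 * general technique, not e.Digital's actual patented I/O method. */
#include <stdint.h>
#include <stdio.h>

#define SEG_COUNT 16
#define MAX_FILES 8

typedef struct { uint8_t in_use; uint16_t file_id; uint16_t seq; } seg_hdr_t;

seg_hdr_t flash_hdrs[SEG_COUNT];  /* headers as stored in flash */
int16_t   ram_index[MAX_FILES];   /* RAM only: file_id -> first segment */

/* Rebuild the RAM index by scanning flash; nothing in flash changes. */
void mount(void)
{
    for (int f = 0; f < MAX_FILES; f++) ram_index[f] = -1;
    for (int i = 0; i < SEG_COUNT; i++)
        if (flash_hdrs[i].in_use && flash_hdrs[i].seq == 0)
            ram_index[flash_hdrs[i].file_id] = (int16_t)i;
}

int main(void)
{
    flash_hdrs[7] = (seg_hdr_t){ 1, 2, 0 };  /* file 2 starts at segment 7 */
    mount();
    printf("file 2 starts at segment %d\n", ram_index[2]);
    return 0;
}
```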

The defendants know exactly what e.Digital does, they just do not want it to be recognized by the judge.

"I believe that I know the answers to these questions, but if my answers are correct, I just can't undersand how this stock is .085 right now.'

With that, what's your perspective?

doni
