RESEARCH PRODUCT
<title>Managing compressed multimedia data in a memory hierarchy: fundamental issues and basic solutions</title>
Jari Veijalainen, Eetu Ojanen

subject: Hardware_MEMORYSTRUCTURES, Flat memory model, Theoretical computer science, Multimedia, Memory hierarchy, Computer science, Thrashing, Memory map, Memory management, Physical address, Virtual memory, Interleaved memory

description:
The purpose of the work is to discuss the fundamental issues and solutions in managing compressed and uncompressed multimedia data, especially voluminous continuous media types (video, audio) and text, in a memory hierarchy with four levels (main memory, magnetic disk, (optical or magnetic) on-line/near-line low-speed memory, and slow off-line memory, i.e. archive). We view the multimedia data in such a database to be generated, (compressed), and stored into the memory hierarchy (at the lowest non-archiving level), and subsequently retrieved, (decompressed), and presented. If unused, the data either travels down in the memory hierarchy or it is compressed and stored at the same level. We first discuss the general prerequisites of the memory hierarchy, like program locality and the decreasing storage cost and performance of each deeper level. To discuss the issues in greater depth, a schematic four-level memory hierarchy model is presented. Multimedia data poses, as compared to conventional data, three new requirements for a memory hierarchy. First, continuous multimedia data (e.g. audio and especially video) have real-time requirements for the retrieval time, not present in a conventional memory hierarchy supporting e.g. a virtual memory. Second, single multimedia objects are often very large, requiring hundreds of megabytes, even gigabytes of memory. From the memory hierarchy point of view the latter fact necessitates partial storage strategies at different levels. Third, the data is so voluminous that compression becomes an interesting alternative, because of considerable savings in storage capacity and I/O and network bandwidth. Based on the real-time requirements of continuous multimedia data one can set boundaries for the maximum retrieval time Trmax. Further, knowing the average retrieval speed S_i of the particular memory level i for an arbitrary object X, one can determine the deepest possible (i.e. slowest) level the data can be placed on.
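The level-selection rule described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-level speeds, the object size, and the deadline are all assumed example values.

```python
# Hypothetical sketch: pick the deepest (slowest, cheapest) memory level
# on which an uncompressed object X can still meet its retrieval deadline,
# i.e. the deepest i with Size(X)/S_i <= Trmax. Speeds (MB/s) are
# illustrative assumptions, not figures from the paper.

LEVEL_SPEED_MBPS = {   # average retrieval speed S_i per level i
    1: 2000.0,  # main memory
    2: 100.0,   # magnetic disk
    3: 10.0,    # on-line/near-line low-speed memory
    4: 0.5,     # off-line archive
}

def deepest_feasible_level(size_mb: float, tr_max_s: float) -> int:
    """Return the deepest level i with Size(X)/S_i <= Trmax (level 1 if none deeper fits)."""
    best = 1
    for level in sorted(LEVEL_SPEED_MBPS):           # shallow to deep
        if size_mb / LEVEL_SPEED_MBPS[level] <= tr_max_s:
            best = level                             # still meets the deadline
    return best

# e.g. a 600 MB video with a 10 s deadline fits on level 2 (600/100 = 6 s)
# but not on level 3 (600/10 = 60 s):
print(deepest_feasible_level(600, 10.0))  # -> 2
```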
The inequality Tr(O_j) = Size(C_k(O_j))/S_i + Size(C_k(O_j))/S_Dk < Trmax(O_j) induces an initial placement policy for all the multimedia data objects O_j in the memory hierarchy. Time Tr(O_j) can be understood as the retrieval distance of object O_j, consisting of the actual retrieval time from level i (first term) and the decompression time (second term). The latter depends on the decompression algorithm D_k. The inequality forms the basis for the storage capacity planning and performance characteristics of the levels. It also guides the design of replacement algorithms that move the data between memory levels. In a distributed system decompression can be done either by the client, an agent/proxy, or the server. This raises further optimization problems. The option of compressing data instead of flushing it downwards in the memory hierarchy requires new properties of the algorithms managing the memory hierarchy. LRU does not seem to function well in this context. It also requires that additional minimal-quality parameters are made known to the algorithms (in case lossy compression techniques are used). Because objects are large and could be partially stored, compressed or uncompressed, at different levels, a further question is whether parts of the object can be compressed and later decompressed and presented without the need to retrieve the entire object. How big are the parts, or granules, that can be decompressed in isolation from the other parts?
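The full placement inequality, including the decompression term, can be sketched as a search over (level, compression) pairs. All names, compression ratios, and speeds below are illustrative assumptions; the paper defines only the inequality itself, not any concrete algorithm or parameters.

```python
# Hypothetical sketch of the initial placement policy induced by
#   Tr(O_j) = Size(C_k(O_j))/S_i + Size(C_k(O_j))/S_Dk < Trmax(O_j):
# among candidate (level, compression-scheme) pairs, choose the deepest
# level whose retrieval distance still meets the deadline.

from dataclasses import dataclass

@dataclass
class Compression:
    name: str
    ratio: float              # compressed size = size / ratio
    decomp_speed_mbps: float  # S_Dk, decompression throughput

LEVEL_SPEED_MBPS = {1: 2000.0, 2: 100.0, 3: 10.0, 4: 0.5}  # S_i (assumed)

NONE = Compression("none", 1.0, float("inf"))  # no decompression cost
MPEG = Compression("mpeg", 30.0, 50.0)         # assumed ratio and speed

def retrieval_distance(size_mb: float, level: int, comp: Compression) -> float:
    """Tr(O_j): retrieval time from level i plus decompression time."""
    c_size = size_mb / comp.ratio
    return c_size / LEVEL_SPEED_MBPS[level] + c_size / comp.decomp_speed_mbps

def place(size_mb: float, tr_max_s: float, candidates=(NONE, MPEG)):
    """Deepest feasible (level, compression) pair with Tr < Trmax."""
    best = (1, NONE)
    for level in sorted(LEVEL_SPEED_MBPS):       # shallow to deep
        for comp in candidates:
            if retrieval_distance(size_mb, level, comp) < tr_max_s:
                best = (level, comp)
    return best[0], best[1].name

# A 600 MB object with a 10 s deadline: uncompressed it must stay at
# level 2, but compression shrinks it enough to live at level 3.
print(place(600, 10.0))  # -> (3, 'mpeg')
```

The sketch illustrates the point made above: compression effectively enlarges the set of feasible levels, so placement and replacement decisions must weigh compression state alongside recency, which is one reason plain LRU fits poorly here.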
year | journal | country | edition | language |
---|---|---|---|---|
1998-10-05 | SPIE Proceedings | | | |