ContactPerson: jliu3@cse.buffalo.edu
### Begin Citation ### Do not delete this line ###
%R 2004-13
%U /tmp/jliu.pdf
%A Liu, Jiangjiang
%T INFORMATION PATTERN AWARE DESIGN STRATEGIES FOR NANOMETER-SCALE ADDRESS BUSES
%D August 31, 2004
%I Department of Computer Science and Engineering, SUNY Buffalo
%K computer system design, memory system, address bus, power consumption, performance, cost, data compression
%Y Design
%X The growing disparity between processor and memory performance has forced designers to allocate an increasing fraction of die area to the communication (I/O buffers, pads, pins, on- and off-chip buses) and storage (registers, caches, main memory) components of the memory system, in order to enable low-latency, high-bandwidth access to large amounts of information (addresses, instructions, and data). Consequently, the memory system has become critical to system performance, power consumption, and cost. In this dissertation, we consider three types of redundancy in the information communicated and stored in the memory system, with the main focus on information communicated over nanometer-scale address buses: temporal redundancy, information redundancy, and energy redundancy. To take advantage of these redundancies, we analyze and design information pattern aware strategies that exploit patterns in the information communicated and stored in a multi-level memory hierarchy to improve performance, energy efficiency, and cost. Our main contributions are as follows. (1) A comprehensive limit study on the benefits of address, instruction, and data compression at all levels of the memory system, considering a wide variety of factors. (2) A technique called hardware-only compression (HOC), in which narrow bus widths are used for underutilized buses to reduce cost, novel encoding schemes are employed to reduce power consumption, and concatenation and other methods are applied to mitigate the performance penalty. (3) A detailed analysis of the performance, energy, and cost trade-offs possible with two cache-based dynamic address compression schemes. (4) A highly energy- and performance-efficient dynamic address compression methodology for nanometer-scale address buses; many of the principles underlying this methodology also apply to instruction and data bus compression. All analysis and design has been performed in the context of real-world benchmark suites such as SPEC CPU2000, using execution-driven simulators such as Shade and SimpleScalar. Our analysis shows that ample opportunities exist for applying compression throughout the memory system. Further, we show that our address compression methods can simultaneously provide significant improvements in energy efficiency, cost, and latency over an uncompressed bus.
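
The abstract refers to cache-based dynamic address compression without describing a specific scheme. The following Python sketch illustrates only the general idea (in the spirit of base-register caching): both ends of the bus keep identical small caches of recently seen high-order address bits, so a hit needs only an index plus the low-order bits on the wires. The cache size, field widths, direct-mapped organization, and hit/miss flag here are illustrative assumptions, not the schemes evaluated in the dissertation.

    # Minimal sketch of cache-based dynamic address compression.
    # All parameters below are assumptions chosen for illustration.
    LOW_BITS   = 16          # low-order bits always sent uncompressed
    INDEX_BITS = 3           # 8-entry cache of high-order address parts
    ADDR_BITS  = 32

    class AddressCompressor:
        """Small direct-mapped cache of high-order address bits.
        Sender and receiver keep identical copies, so on a hit only
        a flag, an index, and the low-order bits cross the bus."""
        def __init__(self):
            self.cache = [None] * (1 << INDEX_BITS)

        def compress(self, addr):
            high, low = addr >> LOW_BITS, addr & ((1 << LOW_BITS) - 1)
            idx = high & ((1 << INDEX_BITS) - 1)      # direct-mapped lookup
            if self.cache[idx] == high:
                # Hit: flag + index + low-order bits.
                return ("hit", idx, low), 1 + INDEX_BITS + LOW_BITS
            self.cache[idx] = high
            # Miss: flag + full address; both sides update their caches.
            return ("miss", addr), 1 + ADDR_BITS

        def decompress(self, packet):
            if packet[0] == "hit":
                _, idx, low = packet
                return (self.cache[idx] << LOW_BITS) | low
            _, addr = packet
            high = addr >> LOW_BITS
            self.cache[high & ((1 << INDEX_BITS) - 1)] = high
            return addr

    sender, receiver = AddressCompressor(), AddressCompressor()
    for addr in (0x00401000, 0x00401008, 0x00401010, 0x7FFF2000):
        packet, bits = sender.compress(addr)
        assert receiver.decompress(packet) == addr
        print(f"{addr:#010x} -> {bits} bits on the bus ({packet[0]})")

Because consecutive memory references often share their high-order bits (temporal redundancy in the address stream), most transfers in a trace like the one above hit in the cache and need far fewer wires, which is the property the dissertation's address bus compression schemes exploit.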