Computer Architecture and Organization 8th Edition
T10 Technical Committee. A Technical Committee of the National Committee on Information Technology Standards, responsible for lower-level interfaces. There are also links to related Web sites.
InfiniBand Trade Association. Includes technical information and vendor pointers.
Includes technical information and vendor pointers on FireWire.

Chapter 8 - Operating Systems
Includes an online newsletter and links to other sites.

Chapter 9 - Computer Arithmetic
Includes IEEE documents, related publications and papers, and a useful set of links related to computer arithmetic.
Elementary Computer Mathematics. A basic survey; includes Java-generated problems with solutions.
Computer Arithmetic Tragedies. Discussion of disasters caused by computer numerical errors.
Chapter 10 - Instruction Sets
Gavin's Guide to 80x86 Assembly. A good, concise overview of x86 assembly language.
The Art of Assembly Language Programming. An on-line mega-book on the subject; should be enough for any student of the subject.

Chapter 18 - Multicore
Multicore Association. Vendor organization promoting the development and use of multicore technology.
Berkeley Parallel Computer Lab. One of the best sites to start looking for references pertaining to anything in parallel software, multicore applications, etc.

Chapter 20 - Digital Logic

Cache operation overview
- The CPU requests the contents of a memory location
- Check the cache for this data
- If present, get it from the cache (fast)
- If not present, read the required block from main memory into the cache
- Then deliver from the cache to the CPU
- The cache includes tags to identify which block of main memory is in each cache slot
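As a rough illustration of this hit/miss flow, the sketch below models a byte read through a cache in C. It assumes a direct-mapped organization (introduced later in these notes) and a 16 MByte backing store; the names cache_read, cache_line_t, and main_memory are hypothetical, not from the original slides.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4                        /* bytes per block (4-byte blocks)        */
#define NUM_LINES  16384                    /* 64 KByte cache / 4 bytes = 16K lines   */
#define MEM_SIZE   (1u << 24)               /* assumed 16 MByte, 24-bit addresses     */

/* One cache slot: a valid flag, the tag identifying which main-memory block
 * currently occupies the slot, and the cached data itself. */
typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static uint8_t      main_memory[MEM_SIZE];  /* stand-in for the slow backing store */

/* Read one byte through the cache, following the flow above: check the tag,
 * serve from the cache on a hit, fill the line from main memory on a miss. */
uint8_t cache_read(uint32_t addr)
{
    uint32_t word  = addr % BLOCK_SIZE;     /* byte offset inside the block   */
    uint32_t block = addr / BLOCK_SIZE;     /* main-memory block number       */
    uint32_t line  = block % NUM_LINES;     /* direct-mapped slot to check    */
    uint32_t tag   = block / NUM_LINES;     /* identifies the block in a slot */

    if (cache[line].valid && cache[line].tag == tag)
        return cache[line].data[word];      /* hit: fast path */

    /* Miss: read the required block from main memory into the cache line,
     * then deliver the requested byte from the cache. */
    memcpy(cache[line].data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
    cache[line].valid = true;
    cache[line].tag   = tag;
    return cache[line].data[word];
}

int main(void)
{
    main_memory[0x16339C] = 0x42;
    printf("first read:  %02X\n", cache_read(0x16339C));   /* miss, then fill */
    printf("second read: %02X\n", cache_read(0x16339C));   /* hit             */
    return 0;
}
```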
Cache addressing
Where does the cache sit?
- Between the processor and the virtual memory management unit (MMU), or
- Between the MMU and main memory
Logical (virtual) cache stores data using virtual addresses
- The processor accesses the cache directly, rather than going through the MMU to a physical cache
- Cache access is faster, since it happens before MMU address translation
- Virtual addresses use the same address space for different applications, so the cache must be flushed on each context switch
Physical cache stores data using main memory physical addresses
Size does matter
- Cost: more cache is more expensive
- Speed: more cache is faster, up to a point; checking the cache for data takes time
- Speed and size trade off against each other
Mapping function
Which main memory block resides at a given cache line?
Example: a cache of 64 KBytes with cache blocks of 4 bytes, i.e. the cache holds 16K (2^14) lines of 4 bytes each.

Direct mapping
- Each block of main memory maps to only one cache line, i.e. if a block is in the cache, it can only be in one particular line

Victim cache
- Lowers the miss penalty
- Remembers what was discarded: it has already been fetched, so it can be used again with little penalty
- Fully associative, 4 to 16 cache lines
- Sits between the direct-mapped L1 cache and the next memory level

Associative mapping
- A main memory block can load into any line of the cache
- The memory address is interpreted as tag and word
- The tag uniquely identifies a block of memory
- Every line's tag is examined for a match, so cache searching gets expensive (see the address-decoding sketch below)
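To make the address interpretation concrete, here is a minimal C sketch that splits an address into the fields used by direct and associative mapping for the 64 KByte, 4-byte-block cache above. The 24-bit address length is an assumption inferred from the associative address structure quoted below (22-bit tag plus 2-bit word), and the example address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths for the 64 KByte cache with 4-byte blocks used in these notes. */
#define WORD_BITS 2                               /* 4 bytes per block -> 2-bit byte offset */
#define LINE_BITS 14                              /* 16K lines         -> 14-bit line number */
#define TAG_BITS  (24 - LINE_BITS - WORD_BITS)    /* 8-bit tag for direct mapping */

int main(void)
{
    uint32_t addr = 0x16339C;                     /* an arbitrary 24-bit address */

    /* Direct mapping: address = | tag (8) | line (14) | word (2) | */
    uint32_t word = addr & ((1u << WORD_BITS) - 1);
    uint32_t line = (addr >> WORD_BITS) & ((1u << LINE_BITS) - 1);
    uint32_t tag  = addr >> (WORD_BITS + LINE_BITS);
    printf("direct:      tag=%02X line=%04X word=%u\n",
           (unsigned)tag, (unsigned)line, (unsigned)word);

    /* Associative mapping: address = | tag (22) | word (2) |.
     * The block can sit in any line, so the whole block number becomes the tag. */
    uint32_t a_tag = addr >> WORD_BITS;
    printf("associative: tag=%06X word=%u\n", (unsigned)a_tag, (unsigned)word);
    return 0;
}
```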
Associative mapping address structure
- Tag: 22 bits, Word: 2 bits
- The 22-bit tag is stored with each 32-bit block of data
- Compare the tag field of the address with each tag entry in the cache to check for a hit
- The least significant 2 bits of the address identify which byte is required from the 32-bit data block

Set associative mapping
- The cache is divided into a number of sets
- Each set contains a number of lines
- A given block maps to any line in a given set, e.g. block B can be in any line of set i
- With 2 lines per set this is 2-way set associative mapping: a given block can be in one of 2 lines, in only one set
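Under the same assumptions, a set associative cache splits the address into tag, set, and word fields. The sketch below uses a hypothetical 2-way organization, so the 16K lines form 8K (2^13) sets and the tag shrinks to 9 bits.

```c
#include <stdint.h>
#include <stdio.h>

/* 2-way set associative view of the same 64 KByte / 4-byte-block cache.
 * The 2-way choice and the 24-bit address length are illustration assumptions. */
#define WORD_BITS 2
#define SET_BITS  13                              /* 16K lines / 2 ways = 8K sets */
#define TAG_BITS  (24 - SET_BITS - WORD_BITS)     /* 9-bit tag */

int main(void)
{
    uint32_t addr = 0x16339C;                     /* arbitrary 24-bit address */

    /* address = | tag (9) | set (13) | word (2) | */
    uint32_t word = addr & ((1u << WORD_BITS) - 1);
    uint32_t set  = (addr >> WORD_BITS) & ((1u << SET_BITS) - 1);
    uint32_t tag  = addr >> (WORD_BITS + SET_BITS);

    /* On an access, only the lines of this one set are searched; the tag is
     * compared against each of the 2 lines in the set to detect a hit. */
    printf("set-assoc: tag=%03X set=%04X word=%u\n",
           (unsigned)tag, (unsigned)set, (unsigned)word);
    return 0;
}
```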
Replacement algorithms
- Least recently used (LRU): attach a USE bit, set to 0 or 1 based on reference
- First in first out (FIFO): replace the block that has been in the cache longest
- Least frequently used (LFU): replace the block which has had the fewest hits; attach a counter to each location (see the sketch below)
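The bookkeeping these policies need can be shown in a few lines of C. This is a minimal sketch under assumed names and a 4-way set: hit_count and load_time are hypothetical fields, and only the victim-selection step is shown, not the full replacement machinery.

```c
#include <stdint.h>

#define WAYS 4   /* lines per set; the number is an assumption for illustration */

/* Hypothetical per-line bookkeeping for one set; field names are not from the
 * notes, they just mirror the counters described above. */
typedef struct {
    uint32_t tag;
    uint32_t hit_count;   /* LFU: incremented on every reference              */
    uint32_t load_time;   /* FIFO: set once, when the block is brought in     */
} line_meta_t;

/* LFU victim selection: replace the block which has had the fewest hits. */
int pick_victim_lfu(const line_meta_t set[WAYS])
{
    int victim = 0;
    for (int i = 1; i < WAYS; i++)
        if (set[i].hit_count < set[victim].hit_count)
            victim = i;
    return victim;
}

/* FIFO victim selection: replace the block that has been in the cache longest,
 * i.e. the one with the smallest load-time stamp. */
int pick_victim_fifo(const line_meta_t set[WAYS])
{
    int victim = 0;
    for (int i = 1; i < WAYS; i++)
        if (set[i].load_time < set[victim].load_time)
            victim = i;
    return victim;
}
```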
Cache coherence
- Bus watching with write through: write operations by other cache controllers are observed on the bus, and the corresponding cache contents are invalidated; this depends on all controllers using a write-through policy
- Hardware transparency: hardware updates memory, and the rest of the caches are updated via hardware as well
- Non-cacheable memory: the portion of memory that is shared by more than one processor is designated as non-cacheable
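As a rough illustration of the bus-watching approach, the fragment below shows the invalidation step a snooping controller might perform when it observes another controller's write on the bus. It reuses the direct-mapped layout from the earlier sketch; the type and function names are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 4
#define NUM_LINES  16384

/* Only the metadata matters for snooping: is the block present, and which block. */
typedef struct {
    bool     valid;
    uint32_t tag;
} snoop_line_t;

static snoop_line_t cache[NUM_LINES];

/* Bus-watching step for a write-through system: when another controller's write
 * to main memory is observed on the bus, invalidate our copy (if any) of the
 * block that was written, so the next access re-fetches the fresh data. */
void snoop_observe_write(uint32_t addr)
{
    uint32_t block = addr / BLOCK_SIZE;
    uint32_t line  = block % NUM_LINES;      /* direct-mapped lookup, as above */
    uint32_t tag   = block / NUM_LINES;

    if (cache[line].valid && cache[line].tag == tag)
        cache[line].valid = false;           /* stale copy: drop it */
}
```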