Computer Organization and Design: The Hardware/Software Interface (MIPS Edition)

  I bought a copy of *Computer Organization and Design: The Hardware/Software Interface (MIPS Edition)*. I am not formally trained in computer science, and although I have worked in the industry for eight years, I have no solid grounding in the fundamentals. That is a little embarrassing, and I regret the time I have let slip by. Enough of that; many words carry little meaning. Back to the point: read the book, and think!

  The English title is *Computer Organization and Design: The Hardware/Software Interface*, Fifth Edition, Asian Edition; in Chinese, 《计算机组成与设计——硬件/软件接口》, 5th edition, Asian edition. Authors: David A. Patterson and John L. Hennessy.

Honestly, I would love to translate this book from cover to cover, see the real technology behind computers, and stop worrying about the next round of layoffs. This is supposed to be a foundational book on computers, yet I hesitate to call it "basic," because my own professional grounding is so thin. Apart from typing and chatting, I have never seriously asked myself why anything works. I am middle-aged, time flies past, and I still know almost nothing.

  All right, let's read the preface and see what this book covers and what I can take from it.

  Preface

  The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.      Albert Einstein, What I Believe, 1930

  Look at that: the book opens by quoting the scientific giant Albert Einstein. Perhaps that is the celebrity effect, but famous sayings become famous precisely because they distill the life experience of great minds. *What I Believe* is an essay by Einstein. The line is not hard to render: "The most beautiful thing we can experience is the mysterious; it is the source of all true art and science." What a classic line, worthy of a master.

  About this book

  We believe learning in computer science and engineering should reflect the current state of the field, as well as introduce the principles that are shaping computing. We also feel that readers in every specialty of computing need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and, ultimately, the success of computer systems.

  Modern computer technology requires professionals of every computing specialty to understand both hardware and software. The interaction between hardware and software at a variety of levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas in computer organization and design are the same. Thus, our emphasis in this book is to show the relationship between hardware and software and to focus on the concepts that are the basis for current computers.

  The recent switch from uniprocessor to multicore microprocessors confirmed the soundness of this perspective, given since the first edition. While programmers could ignore the advice and rely on computer architects, compiler writers, and silicon engineers to make their programs run faster, or be more energy-efficient without change, that era is over. For programs to run faster, they must become parallel. While the goal of many researchers is to make it possible for programmers to be unaware of the underlying parallel nature of the hardware they are programming, it will take many years to realize this vision. Our view is that for at least the next decade, most programmers are going to have to understand the hardware/software interface if they want programs to run efficiently on parallel computers.

  The audience for this book includes those with little experience in assembly language or logic design who need to understand basic computer organization, as well as readers with backgrounds in assembly language and/or logic design who want to learn how to design a computer or understand how a system works and why it performs as it does.

  Having typed up the book's introduction, I am still unsure whether I should keep reading, how I should read it, or how to take notes. I plan to use a translation tool and a vocabulary notebook. I must not slide back into thinking that reading is useless; at middle age, a little study is a recharge. Translation is hard work, though fortunately there are online dictionaries now. The trouble is that machine translation often reads like a child stacking building blocks: the words are spliced together, but the thought does not come through. Words can be joined mechanically; ideas have to be grasped and conveyed. The dictionary I use is http://dictionary.cambridge.org/dictionary/english-chinese-simplified/glacial, and it seems good enough. Back to the point; here is my translation:

  About this book (my translation):

  We believe that learning in computer science and engineering should reflect the current state of the field and introduce the principles that are shaping computing. We also feel that readers in every computing specialty need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and, ultimately, the success of computer systems.

  Modern computer technology requires professionals in every computing specialty to understand both hardware and software. The interaction between hardware and software at many levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas of computer organization and design are the same. Thus, this book emphasizes the relationship between hardware and software and focuses on the concepts underlying current computers. The recent switch from uniprocessors to multicore microprocessors confirmed the soundness of this perspective, held since the first edition. There was a time when programmers could ignore that advice and rely on computer architects, compiler writers, and silicon engineers to make their programs faster or more energy-efficient without change; that era is over. For programs to run faster, they must become parallel. Many researchers aim to let programmers remain unaware of the parallel nature of the hardware they program, but realizing that vision will take many years. Our view is that for at least the next decade, most programmers will have to understand the hardware/software interface if they want their programs to run efficiently on parallel computers.

  The audience for this book includes those with little experience in assembly language or logic design who need to understand basic computer organization, as well as readers with backgrounds in assembly language and/or logic design who want to learn how to design a computer, or to understand how a system works and why it performs as it does.

  It took real effort to get through that passage. The book seems right for me, but the English is a hurdle, so I bought two more books on English grammar and translation. Reading half-asleep through a book I only half understand is not a rigorous way to study, but I have no better option; I am self-taught, at best an amateur computer enthusiast. The two English books are A College English Grammar (5th edition, lectures and tests), East China University of Science and Technology Press, edited by Xu Guanglian, and An English Usage Guide (3rd edition). I have invested a good deal of money in books and a great deal of time, all in the hope of becoming a professional.

  Afternoon of 2018-01-03, just woke up, feeling sharp. I hope I study hard. If another round of restructuring and redeployment comes, what else could I do? I am still young enough to learn something and strengthen my mind.

About the Other Book

Some readers may be familiar with Computer Architecture: A Quantitative Approach, popularly known as Hennessy and Patterson. (This book in turn is often called Patterson and Hennessy.) Our motivation in writing the earlier book was to describe the principles of computer architecture using solid engineering fundamentals and quantitative cost/performance tradeoffs. We used an approach that combined examples and measurements, based on commercial systems, to create realistic design experiences. Our goal was to demonstrate that computer architecture could be learned using quantitative methodologies instead of a descriptive approach. It was intended for the serious computing professional who wanted a detailed understanding of computers. 

  A majority of the readers for this book do not plan to become computer architects. The performance and energy efficiency of future software systems will be dramatically affected, however, by how well software designers understand the basic hardware techniques at work in a system. Thus, compiler writers, operating system designers, database programmers, and most other software engineers need a firm grounding in the principles presented in this book. Similarly, hardware designers must understand clearly the effects of their work on software applications.

  Thus, we knew that this book had to be much more than a subset of the material in Computer Architecture, and the material was extensively revised to match the different audience. We were so happy with the result that the subsequent editions of Computer Architecture were revised to remove most of the introductory material; hence, there is much less overlap today than with the first editions of both books.

  This is the first time I have seen a book's preface discuss another book. What was the author thinking: selling a few more copies, or is the content genuinely related? Never mind. The recommended-reading list at the back of this book does mention it: *Computer Architecture: A Quantitative Approach* (English edition, 5th edition).

About the other book (my translation):

Some readers may be familiar with *Computer Architecture: A Quantitative Approach*, popularly known as "Hennessy and Patterson" (while this book, in turn, is often called "Patterson and Hennessy"). Our motivation in writing that earlier book was to describe the principles of computer architecture using solid engineering fundamentals and quantitative cost/performance tradeoffs. We combined examples and measurements, based on commercial systems, to create realistic design experiences. Our goal was to demonstrate that computer architecture could be learned through quantitative methods rather than a purely descriptive approach. That book was intended for the serious computing professional who wants a detailed understanding of computers.

  Most readers of this book do not plan to become computer architects. Yet the performance and energy efficiency of future software systems will be dramatically affected by how well software designers understand the basic hardware techniques at work in a system. Thus compiler writers, operating system designers, database programmers, and most other software engineers need a firm grounding in the principles presented in this book. Likewise, hardware designers must clearly understand the effects of their work on software applications.

  Therefore we knew that this book had to be much more than a subset of the material in *Computer Architecture*, and the material was extensively revised to suit the different audience. We were so happy with the result that subsequent editions of *Computer Architecture* were revised to remove most of the introductory material; hence there is far less overlap today than between the first editions of the two books.

 

About the Asian Edition

  With the consent of the authors, we have developed this Asian Edition of Computer Organization and Design: The Hardware/Software Interface, to better reflect the local teaching practice of computer courses in Asian classrooms and the development of computer technology in this region. The major adjustments include:

  # An introduction to the "TH-2 High Performance Computing System" (as a demonstration of a cluster computing system) to replace Appendix A on digital logic, and a new section on "Networks-on-Chip" as Appendix F. Both reflect the latest progress in computer technology and can serve as good references for readers.

  # Abridgment of some sections of Chapter 2 to better suit the current curricula used in Asian classrooms.

  With these adjustments listed above, the Asian Edition is enhanced with local features while keeping the main structure and knowledge framework of the original version.

  Special thanks go to Prof. Zhiying Wang, Prof. Chung-Ping Chung, Associate Prof. Li Shen, and Dr. Sheng Ma for their contributions to the development of this Asian Edition.

 About the Asian Edition (my translation):

  With the authors' consent, we developed this Asian Edition of *Computer Organization and Design: The Hardware/Software Interface* to better reflect how computer courses are taught in Asian classrooms and the development of computer technology in this region. The major adjustments include:

  # An introduction to the "TH-2 high-performance computing system" (as a demonstration of a cluster computing system) replaces Appendix A on digital logic, and a new section on networks-on-chip is added as Appendix F. Both reflect the latest progress in computer technology and serve as good references for readers.

  # Some sections of Chapter 2 are abridged to better suit the curricula of Asian classrooms.

  Through the adjustments listed above, the Asian Edition keeps the main structure and knowledge framework of the original while adding local features.

  Special thanks go to Prof. Zhiying Wang, Prof. Chung-Ping Chung, Associate Prof. Li Shen, and Dr. Sheng Ma for their contributions to this Asian Edition.

Changes for the Fifth Edition

We had six major goals for the fifth edition of Computer Organization and Design: demonstrate the importance of understanding hardware with a running example; highlight major themes across the topics using margin icons that are introduced early; update examples to reflect the changeover from the PC era to the PostPC era; spread the material on I/O throughout the book rather than isolating it into a single chapter; update the technical content to reflect changes in the industry since the publication of the fourth edition in 2009; and put appendices and optional sections online instead of including a CD, to lower costs and to make this edition viable as an electronic book.

Changes for the Fifth Edition (my translation):

We had six major goals for the fifth edition of *Computer Organization and Design*: demonstrate the importance of understanding hardware with a running example; use margin icons, introduced early on, to highlight the major themes that run through the topics; update the examples to reflect the shift from the PC era to the PostPC era; spread the I/O material throughout the book rather than isolating it in a single chapter; update the technical content to reflect the changes in the industry since the fourth edition was published in 2009; and put the appendices and optional sections online instead of shipping a CD, to lower costs and make this edition viable as an electronic book.

 

 

Chapter or Appendix: Sections

1. Computer Abstractions and Technology: 1.1 to 1.11; ☯ 1.12 (History)
2. Instructions: Language of the Computer: 2.1 to 2.12; ☯ 2.13 (Compilers & Java); 2.14 to 2.18; ☯ 2.19 (History)
E. RISC Instruction-Set Architectures: E.1 to E.7
3. Arithmetic for Computers: 3.1 to 3.5; 3.6 to 3.8 (Subword Parallelism); 3.9 to 3.10 (Fallacies); ☯ 3.11 (History)
4. The Processor: 4.1 (Overview); 4.2 (Logic Conventions); 4.3 to 4.4 (Simple Implementation); 4.5 (Pipelining Overview); 4.6 (Pipelined Datapath); 4.7 to 4.9 (Hazards, Exceptions); 4.10 to 4.12 (Parallel, Real Stuff); ☯ 4.13 (Verilog Pipeline Control); 4.14 to 4.15 (Fallacies); ☯ 4.16 (History)
D. Mapping Control to Hardware: D.1 to D.6
5. Large and Fast: Exploiting Memory Hierarchy: 5.1 to 5.10; ☯ 5.11 (Redundant Arrays of Inexpensive Disks); ☯ 5.12 (Verilog Cache Controller); 5.13 to 5.16; ☯ 5.17 (History)
6. Parallel Processors from Client to Cloud: 6.1 to 6.8; ☯ 6.9 (Networks); 6.10 to 6.14; ☯ 6.15 (History)
A. Assemblers, Linkers, and the SPIM Simulator: A.1 to A.11
C. Graphics Processor Units: C.1 to C.10

(The printed table also marks each section with software-focus and hardware-focus reading icons; those columns did not survive this transcription.)

 

 

  Legend: Read carefully; Read if you have time; Reference; Review or read; Read for culture.

 

 

 Before discussing the goals in detail, let's look at the table on page vii. It shows the hardware and software paths through the material. Chapters 1, 4, 5, and 6 are found on both paths, no matter what the experience or the focus. Chapter 1 discusses the importance of energy and how it motivates the switch from single core to multicore microprocessors, and introduces the eight great ideas in computer architecture. Chapter 2 is likely to be review material for the hardware-oriented, but it is essential reading for the software-oriented, especially for those readers interested in learning more about compilers and object-oriented programming languages. Chapter 3 is for readers interested in constructing a datapath or in learning more about floating-point arithmetic. Some will skip parts of Chapter 3, either because they don't need them or because they offer a review. However, we introduce the running example of matrix multiply in this chapter, showing how subword parallelism offers a fourfold improvement, so don't skip Sections 3.6 to 3.8. Chapter 4 explains pipelined processors. Sections 4.1, 4.5, and 4.10 give overviews, and Section 4.12 gives the next performance boost for matrix multiply for those with a software focus. Those with a hardware focus, however, will find that this chapter presents core material; they may also, depending on their background, want to read Appendix C on logic design first. The last chapter, on multicores, multiprocessors, and clusters, is mostly new content and should be read by everyone. It was significantly reorganized in this edition to make the flow of ideas more natural and to include much more depth on GPUs, warehouse-scale computers, and the hardware-software interface of network interface cards that are key to clusters.

  The first of the six goals for this fifth edition was to demonstrate the importance of understanding modern hardware to get good performance and energy efficiency with a concrete example. As mentioned above, we start with subword parallelism in Chapter 3 to improve matrix multiply by a factor of 4. We double performance in Chapter 4 by unrolling the loop to demonstrate the value of instruction-level parallelism. Chapter 5 doubles performance again by optimizing for caches using blocking. Finally, Chapter 6 demonstrates a speedup of 14 from 16 processors by using thread-level parallelism. All four optimizations in total add just 24 lines of C code to our initial matrix multiply example.

  The second goal was to help readers separate the forest from the trees by identifying eight great ideas of computer architecture early and then pointing out all the places they occur throughout the rest of the book. We use (hopefully) easy-to-remember margin icons and highlight the corresponding word in the text to remind readers of these eight themes. There are nearly 100 citations in the book. No chapter has fewer than seven examples of great ideas, and no idea is cited fewer than five times. Performance via parallelism, pipelining, and prediction are the three most popular great ideas, followed closely by Moore's Law. The processor chapter (4) is the one with the most examples, which is not a surprise since it probably received the most attention from computer architects. The one great idea found in every chapter is performance via parallelism, which is a pleasant observation given the recent emphasis on parallelism in the field and in editions of this book.

  The third goal was to recognize the generational change in computing from the PC era to the PostPC era with the examples and material of this edition. Thus, Chapter 1 dives into the guts of a tablet computer rather than a PC, and Chapter 6 describes the computing infrastructure of the cloud. We also feature the ARM, which is the instruction set of choice in the personal mobile devices of the PostPC era, as well as the x86 instruction set that dominated the PC era and (so far) dominates cloud computing.

  The fourth goal was to spread the I/O material throughout the book rather than have it in its own chapter, much as we spread parallelism throughout all the chapters in the fourth edition. Hence, I/O material in this edition can be found in Sections 1.4, 4.9, 5.2, 5.5, 5.11, and 6.9. The thought is that readers (and instructors) are more likely to cover I/O if it's not segregated in its own chapter.

  This is a fast-moving field, and, as is always the case for our new editions, an important goal is to update the technical content. The running example is the ARM Cortex A8 and the Intel Core i7, reflecting our PostPC era. Other highlights include an overview of the new 64-bit instruction set of ARMv8, a tutorial on GPUs that explains their unique terminology, more depth on the warehouse-scale computers that make up the cloud, and a deep dive into 10 Gigabit Ethernet cards.

  To keep the main book short and compatible with electronic books, we placed the optional material as online appendices instead of on a companion CD as in prior editions.

  Finally, we updated all the exercises in this book.

  While some elements changed, we have preserved useful elements from prior editions. To make the book better as a reference, we still place definitions of new terms in the margins at their first occurrence. The book element called "Understanding Program Performance" helps readers understand the performance of their programs and how to improve it, just as the "Hardware/Software Interface" book element helps readers understand the tradeoffs at this interface. "The Big Picture" sections remain, so that the reader sees the forest despite all the trees. "Check Yourself" sections help readers confirm their comprehension of the material on the first time through, with answers provided at the end of each chapter. This edition still includes the green MIPS reference card, which was inspired by the "Green Card" of the IBM System/360. This card has been updated and should be a handy reference when writing MIPS assembly programs.

 

Instructor Support

We have collected a great deal of material to help instructors teach courses using this book. Solutions to exercises, figures from the book, lecture slides, and other materials are available to adopters from the publisher. Check the publisher's Web site for more information:

textbook.elsevier.com/9780124077263

 

Concluding Remarks

If you read the following acknowledgments section, you will see that we went to great lengths to correct mistakes. Since a book goes through many printings, we have the opportunity to make even more corrections. If you uncover any remaining, resilient bugs, please contact the publisher by electronic mail at cod5asiabugs@mkp.com or by low-tech mail using the address found on the copyright page.

  This edition is the second break in the long-standing collaboration between Hennessy and Patterson, which started in 1989. The demands of running one of the world's great universities meant that President Hennessy could no longer make the substantial commitment to create a new edition. The remaining author felt once again like a tightrope walker without a safety net. Hence, the people in the acknowledgments and Berkeley colleagues played an even larger role in shaping the contents of this book. Nevertheless, this time around there is only one author to blame for the new material in what you are about to read.

 

Acknowledgments for the Fifth Edition 

With every edition of this book, we are very fortunate to receive help from many readers, reviewers, and contributors. Each of these people has helped to make this book better.

  Chapter 6 was so extensively revised that we did a separate review for ideas and contents, and I made changes based on the feedback from every reviewer. I'd like to thank Christos Kozyrakis of Stanford University for suggesting using the network interface for clusters to demonstrate the hardware-software interface of I/O and for suggestions on organizing the rest of the chapter; Mario Flajslik of Stanford University for providing details, diagrams, and performance measurements of the NetFPGA NIC; and the following for suggestions on how to improve the chapter: David Kaeli of Northeastern University, Partha Ranganathan of HP Labs, David Wood of the University of Wisconsin, and my Berkeley colleagues Siamak Faridani, Shoaib Kamil, Yunsup Lee, Zhangxi Tan, and Andrew Waterman.

  Special thanks go to Rimas Avizienis of UC Berkeley, who developed the various versions of matrix multiply and supplied the performance numbers as well. As I worked with his father while I was a graduate student at UCLA, it was a nice symmetry to work with Rimas at UCB.

  I also wish to thank my longtime collaborator Randy Katz of UC Berkeley, who helped develop the concept of great ideas in computer architecture as part of the extensive revision of an undergraduate class that we did together.

  I'd like to thank David Kirk, John Nickolls, and their colleagues at NVIDIA (Michael Garland, John Montrym, Doug Voorhies, Lars Nyland, Erik Lindholm, Paulius Micikevicius, Massimiliano Fatica, Stuart Oberman, and Vasily Volkov) for writing the first in-depth appendix on GPUs. I'd like to express again my appreciation to Jim Larus, recently named Dean of the School of Computer and Communication Sciences at EPFL, for his willingness to contribute his expertise on assembly language programming, as well as for welcoming readers of this book with regard to using the simulator he developed and maintains.

  I am also very grateful to Jason Bakos of the University of South Carolina, who updated and created new exercises for this edition, working from originals prepared for the fourth edition by Perry Alexander (The University of Kansas); Javier Bruguera (Universidade de Santiago de Compostela); Matthew Farrens (University of California, Davis); David Kaeli (Northeastern University); Nicole Kaiyan (University of Adelaide); John Oliver (Cal Poly, San Luis Obispo); Milos Prvulovic (Georgia Tech); and Jichuan Chang, Jacob Leverich, Kevin Lim, and Partha Ranganathan (all from Hewlett-Packard).

  Additional thanks goes to Jason Bakos for developing the new slides.

  I am grateful to the many instructors who have answered publisher surveys, reviewed our proposals, and attended focus groups to analyze and respond to our plans for this edition. They include the following individuals: Focus Groups in 2012: Bruce Barton (Suffolk County Community College), Jeff Braun (Montana Tech), Ed Gehringer (North Carolina State), Michael Goldweber (Xavier University), Ed Harcourt (St. Lawrence University), Mark Hill (University of Wisconsin, Madison), Patrick Homer (University of Arizona), Norm Jouppi (HP Labs), Dave Zachary Kurmas (Grand Valley State University), Jae C. Oh (Syracuse University), Lu Peng (LSU), Milos Prvulovic (Georgia Tech), Partha Ranganathan (HP Labs), David Wood (University of Wisconsin), Craig Zilles (University of Illinois at Urbana-Champaign). Surveys and Reviews: Mahmoud Abou-Nasr (Wayne State University), Perry Alexander (The University of Kansas), Hakan Aydin (George Mason University), Hussein Badr (State University of New York at Stony Brook), Mac Baker (Virginia Military Institute), Ron Barnes (George Mason University), Douglas Blough (Georgia Institute of Technology), Kevin Bolding (Seattle Pacific University), Miodrag Bolic (University of Ottawa), John Bonomo (Westminster College), Jeff Braun (Montana Tech), Tom Briggs (Shippensburg University), Scott Burgess (Humboldt State University), Fazli Can (Bilkent University), Warren R. Carithers (Rochester Institute of Technology), Bruce Carlton (Mesa Community College), Nicholas Carter (University of Illinois at Urbana-Champaign), Anthony Cocchi (The City University of New York), Don Cooley (Utah State University), Robert D. Cupper (Allegheny College), Edward W. Davis (North Carolina State University), Nathaniel J. Davis (Air Force Institute of Technology), Molisa Derk (Oklahoma City University), Derek Eager (University of Saskatchewan), Ernest Ferguson (Northwest Missouri State University), Rhonda Kay Gaede (The University of Alabama), Etienne M. Gagnon (UQAM), Costa Gerousis (Christopher Newport University), Paul Gillard (Memorial University of Newfoundland), Michael Goldweber (Xavier University), Georgia Grant (College of San Mateo), Merrill Hall (The Master's College), Tyson Hall (Southern Adventist University), Ed Harcourt (St. Lawrence University), Justin E. Harlow (University of South Florida), Paul F. Hemler (Hampden-Sydney College), Steve J. Hodges (Cabrillo College), Kenneth Hopkinson (Cornell University), Dalton Hunkins (St. Bonaventure University), Baback Izadi (State University of New York, New Paltz), Reza Jafari, Robert W. Johnson (Colorado Technical University), Bharat Joshi (University of North Carolina, Charlotte), Nagarajan Kandasamy (Drexel University), Rajiv Kapadia, Ryan Kastner (University of California, Santa Barbara), E. J. Kim (Texas A&M University), Jihong Kim (Seoul National University), Jim Kirk (Union University), Geoffrey S. Knauth (Lycoming College), Manish M. Kochal (Wayne State), Suzan Koknar-Tezel (Saint Joseph's University), Angkul Kongmunvattana (Columbus State University), April Kontostathis (Ursinus College), Christos Kozyrakis (Stanford University), Danny Krizanc (Wesleyan University), Ashok Kumar, S. Kumar (The University of Texas), Zachary Kurmas (Grand Valley State University), Robert N. Lea (University of Houston), Baoxin Li (Arizona State University), Li Liao (University of Delaware), Gary Livingston (University of Massachusetts), Michael Lyle, Douglas W. Lynn (Oregon Institute of Technology), Yashwant K. Malaiya (Colorado State University), Bill Mark (University of Texas at Austin), Ananda Mondal (Claflin University), Alvin Moser Neebel (Loras College), John Nestor (Lafayette College), Jae C. Oh (Syracuse University), Joe Oldham (Centre College), Timour Paltashev, James Parkerson (University of Arkansas), Shauak Pawagi (SUNY at Stony Brook), Steve Pearce, Ted Pedersen (University of Minnesota), Lu Peng (Louisiana State University), Gregory D. Peterson (The University of Tennessee), Milos Prvulovic (Georgia Tech), Partha Ranganathan (HP Labs), Dejan Raskovic (University of Alaska, Fairbanks), Brad Richards (University of Puget Sound), Roman Rozanov, Louis Rubinfield (Villanova University), Md Abdus Salam (Southern University), Augustine Samba (Kent State University), Robert Schaefer (Daniel Webster College), Carolyn J. C. Schauble (Colorado State University), Keith Schubert (CSU San Bernardino), William L. Schultz, Kelly Shaw (University of Richmond), Shahram Shirani (McMaster University), Scott Sigman (Drury University), Bruce Smith, David Smith, Jeff W. Smith (University of Georgia, Athens), Mark Smotherman (Clemson University), Philip Snyder (Johns Hopkins University), Alex Sprintson (Texas A&M), Timothy D. Stanley (Brigham Young University), Dean Stevens (Morningside College), Nozar Tabrizi (Kettering University), Yuval Tamir (UCLA), Alexander Taubin (Boston University), Will Thacker (Winthrop University), Mithuna Thottethodi (UC San Diego), Rama Viswanathan (Beloit College), Ken Vollmar (Missouri State University), Guoping Wang (Indiana-Purdue University), Patricia Wenner (Bucknell University), Kent Wilken (University of California, Davis), David Wolfe (Gustavus Adolphus College), David Wood (University of Wisconsin, Madison), Ki Hwan Yum (University of Texas, San Antonio), Mohamed Zahran (City College of New York), Gerald D. Zarnett (Ryerson University), Nian Zhang (South Dakota School of Mines & Technology), Jiling Zhong (Troy University), Huiyang Zhou (The University of Central Florida), Weiyu Zhu (Illinois Wesleyan University).

  A special thanks also goes to Mark Smotherman for making multiple passes to find technical and writing glitches that significantly improved the quality of this edition.

  We wish to thank the extended Morgan Kaufmann family for agreeing to publish this book again under the able leadership of Todd Green and Nate McFadden: I certainly couldn't have completed the book without them. We also want to extend thanks to Lisa Jones, who managed the book production process, and Russell Purdy, who did the cover design. The new cover cleverly connects the PostPC era content of this edition to the cover of the first edition.

  The contributions of the nearly 150 people we mentioned here have helped make this fifth edition what I hope will be our best book yet. Enjoy!

David A. Patterson.

 

 

 

About the Author

David A. Patterson has been teaching computer architecture at the University of California, Berkeley, since joining the faculty in 1977, where he holds the Pardee Chair of Computer Science. His teaching has been honored by the Distinguished Teaching Award from the University of California, the Karlstrom Award from ACM, and the Mulligan Education Medal and Undergraduate Teaching Award from IEEE. Patterson received the IEEE Technical Achievement Award and the ACM Eckert-Mauchly Award for contributions to RISC, and he shared the IEEE Johnson Information Storage Award for contributions to RAID. He also shared the IEEE John von Neumann Medal and the C & C Prize with John Hennessy. Like his co-author, Patterson is a Fellow of the American Academy of Arts and Sciences, the Computer History Museum, ACM, and IEEE, and he was elected to the National Academy of Engineering, the National Academy of Sciences, and the Silicon Valley Engineering Hall of Fame. He served on the Information Technology Advisory Committee to the U.S. President, as chair of the CS division in the Berkeley EECS department, as chair of the Computing Research Association, and as President of ACM. This record led to Distinguished Service Awards from ACM and CRA.

At Berkeley, Patterson led the design and implementation of RISC I, likely the first VLSI reduced instruction set computer, and the foundation of the commercial SPARC architecture. He was a leader of the Redundant Arrays of Inexpensive Disks (RAID) project, which led to dependable storage systems from many companies. He was also involved in the Network of Workstations (NOW) project, which led to cluster technology used by Internet companies and later to cloud computing. These projects earned three dissertation awards from ACM. His current research projects are Algorithms, Machines, and People and Algorithms and Specializers for Provably Optimal Implementations with Resilience and Efficiency. The AMP Lab is developing scalable machine learning algorithms, warehouse-scale-computer-friendly programming models, and crowd-sourcing tools to gain valuable insights quickly from big data in the cloud. The ASPIRE Lab uses deep hardware and software co-tuning to achieve the highest possible performance and energy efficiency for mobile and rack computing systems.

John L. Hennessy is the tenth president of Stanford University, where he has been a member of the faculty since 1977 in the departments of electrical engineering and computer science. Hennessy is a Fellow of the IEEE and ACM, a member of the National Academy of Engineering and the National Academy of Sciences, and a Fellow of the American Academy of Arts and Sciences. Among his many awards are the 2001 Eckert-Mauchly Award for his contributions to RISC technology, the 2001 Seymour Cray Computer Engineering Award, and the 2000 John von Neumann Medal, which he shared with David Patterson. He has also received seven honorary doctorates.

In 1981, he started the MIPS project at Stanford with a handful of graduate students. After completing the project in 1984, he took a leave from the university to cofound MIPS Computer Systems (now MIPS Technologies), which developed one of the first commercial RISC microprocessors. As of 2006, over 2 billion MIPS microprocessors have been shipped in devices ranging from video games and palmtop computers to laser printers and network switches. Hennessy subsequently led the DASH (Directory Architecture for Shared Memory) project, which prototyped the first scalable cache coherent multiprocessor; many of the key ideas have been adopted in modern multiprocessors. In addition to his technical activities and university responsibilities, he has continued to work with numerous start-ups, both as an early-stage advisor and an investor.

 

posted @ 2017-12-28 09:51 ZQXTXK