Modern Control Systems

GLOBAL EDITION

FOURTEENTH EDITION


Richard C. Dorf

University of California, Davis

Robert H. Bishop

University of South Florida

Please contact https://support.pearson.com/getsupport/s/ with any queries on this content.

Cover Image: Nguyen Quang Ngoc Tonkin/Shutterstock

Pearson Education Limited

KAO Two

KAO Park

Hockham Way

Harlow

CM17 9SR

United Kingdom

and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsonglobaleditions.com

(C) Pearson Education Limited 2022

The rights of Richard C. Dorf and Robert H. Bishop to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Authorized adaptation from the United States edition, entitled Modern Control Systems, 14th Edition, ISBN 978-013-730725-8 by Richard C. Dorf and Robert H. Bishop published by Pearson Education (C) 2022.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a license permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6-10 Kirby Street, London EC1N 8TS. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsoned.com/permissions/.

Attributions of third-party content appear on the appropriate page within the text.

PEARSON, ALWAYS LEARNING is an exclusive trademark owned by Pearson Education, Inc. or its affiliates in the U.S. and/or other countries.

Unless otherwise indicated herein, any third-party trademarks that may appear in this work are the property of their respective owners and any references to third-party trademarks, logos or other trade dress are for demonstrative or descriptive purposes only. Such references are not intended to imply any sponsorship, endorsement, authorization, or promotion of Pearson's products by the owners of such marks, or any relationship between the owner and Pearson Education, Inc. or its affiliates, authors, licensees, or distributors.

This eBook may be available as a standalone product or integrated with other Pearson digital products like MyLab and Mastering. This eBook may or may not include all assets that were part of the print version. The publisher reserves the right to remove any material in this eBook at any time.

ISBN 10: 1-292-42237-8 (print)

ISBN 13: 978-1-292-42237-4 (print)

ISBN 13: 978-1-292-42235-0 (uPDF eBook)

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

eBook formatted by B2R Technologies Pvt. Ltd.

Dedicated to the memory of Professor Richard C. Dorf

4. Brief Contents

Preface 15

About the Authors 27

CHAPTER 1 Introduction to Control Systems 29

CHAPTER 2 Mathematical Models of Systems 79

CHAPTER 3 State Variable Models 184

CHAPTER 4 Feedback Control System Characteristics 256

CHAPTER 5 The Performance of Feedback Control Systems 321

CHAPTER 6 The Stability of Linear Feedback Systems 394

CHAPTER 7 The Root Locus Method 446

CHAPTER 8 Frequency Response Methods 545

CHAPTER 9 Stability in the Frequency Domain 622

CHAPTER 10 The Design of Feedback Control Systems 728

CHAPTER 11 The Design of State Variable Feedback Systems 812

CHAPTER 12 Robust Control Systems 882

CHAPTER 13 Digital Control Systems 945

References 997

Index 1014

5. Contents

6. Preface 15

About the Authors 27

7. CHAPTER 1 Introduction to Control Systems 29

1.1 Introduction 30

1.2 Brief History of Automatic Control 33

1.3 Examples of Control Systems 39

1.4 Engineering Design 46

1.5 Control System Design 47

1.6 Mechatronic Systems 50

1.7 Green Engineering 54

1.8 The Future Evolution of Control Systems 55

1.9 Design Examples 57

1.10 Sequential Design Example: Disk Drive Read System 62

1.11 Summary 63

Skills Check 63 • Exercises 66 • Problems 68 • Advanced Problems 73 • Design Problems 75 • Terms and Concepts 78

8. CHAPTER 2 Mathematical Models of Systems 79

2.1 Introduction 80

2.2 Differential Equations of Physical Systems 80

2.3 Linear Approximations of Physical Systems 85

2.4 The Laplace Transform 88

2.5 The Transfer Function of Linear Systems 95

2.6 Block Diagram Models 107

2.7 Signal-Flow Graph Models 112

2.8 Design Examples 119

2.9 The Simulation of Systems Using Control Design Software 136

2.10 Sequential Design Example: Disk Drive Read System 150

2.11 Summary 153

Skills Check 154 • Exercises 158 • Problems 164 • Advanced Problems 176 • Design Problems 178 • Computer Problems 180 • Terms and Concepts 182

9. CHAPTER 3 State Variable Models 184

3.1 Introduction 185

3.2 The State Variables of a Dynamic System 185

3.3 The State Differential Equation 188

3.4 Signal-Flow Graph and Block Diagram Models 194

3.5 Alternative Signal-Flow Graph and Block Diagram Models 205

3.6 The Transfer Function from the State Equation 209

3.7 The Time Response and the State Transition Matrix 210

3.8 Design Examples 214

3.9 Analysis of State Variable Models Using Control Design Software 228

3.10 Sequential Design Example: Disk Drive Read System 232

3.11 Summary 235

Skills Check 236 • Exercises 239 • Problems 242 • Advanced Problems 250 • Design Problems 252 • Computer Problems 253 • Terms and Concepts 254

10. CHAPTER 4 Feedback Control System Characteristics 256

4.1 Introduction 257

4.2 Error Signal Analysis 259

4.3 Sensitivity of Control Systems to Parameter Variations 261

4.4 Disturbance Signals in a Feedback Control System 264

4.5 Control of the Transient Response 269

4.6 Steady-State Error 272

4.7 The Cost of Feedback 274

4.8 Design Examples 275

4.9 Control System Characteristics Using Control Design Software 285

4.10 Sequential Design Example: Disk Drive Read System 291

4.11 Summary 295

Skills Check 296 • Exercises 300 • Problems 304 • Advanced Problems 310 • Design Problems 313 • Computer Problems 317 • Terms and Concepts 320

11. CHAPTER 5 The Performance of Feedback Control Systems 321

5.1 Introduction 322

5.2 Test Input Signals 322

5.3 Performance of Second-Order Systems 325

5.4 Effects of a Third Pole and a Zero on the Second-Order System Response 330

5.5 The \(s\)-Plane Root Location and the Transient Response 335

5.6 The Steady-State Error of Feedback Control Systems 337

5.7 Performance Indices 344

5.8 The Simplification of Linear Systems 349

5.9 Design Examples 352

5.10 System Performance Using Control Design Software 364

5.11 Sequential Design Example: Disk Drive Read System 370

5.12 Summary 372

Skills Check 373 • Exercises 376 • Problems 379 • Advanced Problems 385 • Design Problems 387 • Computer Problems 390 • Terms and Concepts 393

12. CHAPTER 6 The Stability of Linear Feedback Systems 394

6.1 The Concept of Stability 395

6.2 The Routh-Hurwitz Stability Criterion 399

6.3 The Relative Stability of Feedback Control Systems 407

6.4 The Stability of State Variable Systems 408

6.5 Design Examples 411

6.6 System Stability Using Control Design Software 419

6.7 Sequential Design Example: Disk Drive Read System 425

6.8 Summary 427

Skills Check 428 • Exercises 431 • Problems 433 • Advanced Problems 438 • Design Problems 441 • Computer Problems 443 • Terms and Concepts 445

13. CHAPTER 7 The Root Locus Method 446

7.1 Introduction 447

7.2 The Root Locus Concept 447

7.3 The Root Locus Procedure 452

7.4 Parameter Design by the Root Locus Method 466

7.5 Sensitivity and the Root Locus 472

7.6 PID Controllers 477

7.7 Negative Gain Root Locus 488

7.8 Design Examples 493

7.9 The Root Locus Using Control Design Software 502

7.10 Sequential Design Example: Disk Drive Read System 508

7.11 Summary 510

Skills Check 514 • Exercises 518 • Problems 522 • Advanced Problems 531 • Design Problems 535 • Computer Problems 541 • Terms and Concepts 543

14. CHAPTER 8 Frequency Response Methods 545

8.1 Introduction 546

8.2 Frequency Response Plots 548

8.3 Frequency Response Measurements 569

8.4 Performance Specifications in the Frequency Domain 571

8.5 Log-Magnitude and Phase Diagrams 574

8.6 Design Examples 575

8.7 Frequency Response Methods Using Control Design Software 584

8.8 Sequential Design Example: Disk Drive Read System 589

8.9 Summary 591

Skills Check 596 • Exercises 601 • Problems 604 • Advanced Problems 613 • Design Problems 615 • Computer Problems 618 • Terms and Concepts 620

15. CHAPTER 9 Stability in the Frequency Domain 622

9.1 Introduction 623

9.2 Mapping Contours in the \(s\)-Plane 624

9.3 The Nyquist Criterion 630

9.4 Relative Stability and the Nyquist Criterion 641

9.5 Time-Domain Performance Criteria in the Frequency Domain 648

9.6 System Bandwidth 655

9.7 The Stability of Control Systems with Time Delays 655

9.8 Design Examples 659

9.9 PID Controllers in the Frequency Domain 677

9.10 Stability in the Frequency Domain Using Control Design Software 678

9.11 Sequential Design Example: Disk Drive Read System 686

9.12 Summary 689

Skills Check 698 • Exercises 701 • Problems 707 • Advanced Problems 717 • Design Problems 720 • Computer Problems 725 • Terms and Concepts 727

16. CHAPTER 10 The Design of Feedback Control Systems 728

10.1 Introduction 729

10.2 Approaches to System Design 730

10.3 Cascade Compensators 731

10.4 Phase-Lead Design Using the Bode Plot 735

10.5 Phase-Lead Design Using the Root Locus 741

10.6 System Design Using Integration Compensators 747

10.7 Phase-Lag Design Using the Root Locus 750

10.8 Phase-Lag Design Using the Bode Plot 753

10.9 Design on the Bode Plot Using Analytical Methods 758

10.10 Systems with a Prefilter 759

10.11 Design for Deadbeat Response 762

10.12 Design Examples 764

10.13 System Design Using Control Design Software 774

10.14 Sequential Design Example: Disk Drive Read System 781

10.15 Summary 783

Skills Check 784 • Exercises 788 • Problems 792 • Advanced Problems 801 • Design Problems 804 • Computer Problems 808 • Terms and Concepts 811

17. CHAPTER 11 The Design of State Variable Feedback Systems 812

11.1 Introduction 813

11.2 Controllability and Observability 813

11.3 Full-State Feedback Control Design 819

11.4 Observer Design 825

11.5 Integrated Full-State Feedback and Observer 829

11.6 Reference Inputs 835

11.7 Optimal Control Systems 837

11.8 Internal Model Design 845

11.9 Design Examples 848

11.10 State Variable Design Using Control Design Software 855

11.11 Sequential Design Example: Disk Drive Read System 860

11.12 Summary 862

Skills Check 862 • Exercises 866 • Problems 868 • Advanced Problems 872 • Design Problems 875 • Computer Problems 878 • Terms and Concepts 881

18. CHAPTER 12 Robust Control Systems 882

12.1 Introduction 883

12.2 Robust Control Systems and System Sensitivity 884

12.3 Analysis of Robustness 888

12.4 Systems with Uncertain Parameters 890

12.5 The Design of Robust Control Systems 892

12.6 The Design of Robust PID-Controlled Systems 896

12.7 The Robust Internal Model Control System 900

12.8 Design Examples 903

12.9 The Pseudo-Quantitative Feedback System 914

12.10 Robust Control Systems Using Control Design Software 916

12.11 Sequential Design Example: Disk Drive Read System 919

12.12 Summary 921

Skills Check 923 • Exercises 927 • Problems 929 • Advanced Problems 933 • Design Problems 936 • Computer Problems 941 • Terms and Concepts 944

19. CHAPTER 13 Digital Control Systems 945

13.1 Introduction 946

13.2 Digital Computer Control System Applications 946

13.3 Sampled-Data Systems 948

13.4 The \(z\)-Transform 951

13.5 Closed-Loop Feedback Sampled-Data Systems 955

13.6 Performance of a Sampled-Data, Second-Order System 959

13.7 Closed-Loop Systems with Digital Computer Compensation 961

13.8 The Root Locus of Digital Control Systems 964

13.9 Implementation of Digital Controllers 968

13.10 Design Examples 968

13.11 Digital Control Systems Using Control Design Software 977

13.12 Sequential Design Example: Disk Drive Read System 982

13.13 Summary 984

Skills Check 984 • Exercises 988 • Problems 990 • Advanced Problems 992 • Design Problems 993 • Computer Problems 995 • Terms and Concepts 996

References 997

Index 1014

20. WEB RESOURCES

APPENDIX A MATLAB Basics

APPENDIX B MathScript RT Module Basics

APPENDIX C Symbols, Units, and Conversion Factors

APPENDIX D Laplace Transform Pairs

APPENDIX E An Introduction to Matrix Algebra

APPENDIX F Decibel Conversion

APPENDIX G Complex Numbers

APPENDIX H \(z\)-Transform Pairs

APPENDIX I Discrete-Time Evaluation of the Time Response

APPENDIX J Design Aids

21. Preface

22. MODERN CONTROL SYSTEMS-THE BOOK

Global issues such as climate change, clean water, sustainability, pandemics, waste management, emissions reduction, and minimizing raw material and energy use have led many engineers to re-think existing approaches to engineering design. One outcome of the evolving design strategy is to consider green engineering and human-centered design. The goal of these approaches to engineering is to design products that minimize pollution, reduce the risk to human health, and improve the living environment. Applying the principles of green engineering and human-centered design highlights the power of feedback control systems as an enabling technology.

To reduce greenhouse gases and minimize pollution, it is necessary to improve both the quality and quantity of our environmental monitoring systems. One example is to use wireless measurements on mobile sensing platforms to measure the external environment. Another example is to monitor the quality of the delivered power to measure leading and lagging power, voltage variations, and waveform harmonics. Many green engineering systems and components require careful monitoring of current and voltages. For example, current transformers are used in various capacities for measuring and monitoring current within the power grid network of interconnected systems used to deliver electricity. Sensors are key components of any feedback control system because the measurements provide the required information as to the state of the system so the control system can take the appropriate action.

The role of control systems will continue to expand as the global issues facing us require ever increasing levels of automation and precision. In the book, we present key examples from green engineering such as wind turbine control and modeling of a photovoltaic generator for feedback control to achieve maximum power delivery as the sunlight varies over time.

The wind and sun are important sources of renewable energy around the world. Wind energy conversion to electric power is achieved by wind energy turbines connected to electric generators. The intermittency characteristic of the wind makes smart grid development essential to bring the energy to the power grid when it is available and to provide energy from other sources when the wind dies down or is disrupted. A smart grid can be viewed as a system composed of hardware and software that routes power more reliably and efficiently to homes, businesses, schools, and other users of power in the presence of intermittency and other disturbances. The irregular character of wind direction and power also creates the need for control systems on the wind turbines themselves to deliver reliable, steady electric energy. The goal of these control devices is to reduce the effects of wind intermittency and of wind direction change. Energy storage systems are also critical technologies for green engineering. We seek energy storage systems that are renewable, such as fuel cells. Active control can be a key element of effective renewable energy storage systems as well.

Another exciting development for control systems is the evolution of the Internet of Things, a network of physical objects embedded with electronics, software, sensors, and connectivity. As envisioned, each of the millions of devices on the network will possess an embedded computer with connectivity to the Internet. The ability to control these connected devices will be of great interest to control engineers.

Indeed, control engineering is an exciting and challenging field. By its very nature, control engineering is a multidisciplinary subject, and it has taken its place as a core course in the engineering curriculum. It is reasonable to expect different approaches to mastering and practicing the art of control engineering. Since the subject has a strong mathematical foundation, we might approach it from a strictly theoretical point of view, emphasizing theorems and proofs. On the other hand, since the ultimate objective is to implement controllers in real systems, we might take an ad hoc approach relying only on intuition and hands-on experience when designing feedback control systems. Our approach is to present a control engineering methodology that, while based on mathematical fundamentals, stresses physical system modeling and practical control system designs with realistic system specifications.

We believe that the most important and productive approach to learning is for each of us to rediscover and re-create anew the answers and methods of the past. Thus, the ideal is to present the student with a series of problems and questions and point to some of the answers that have been obtained over the past decades. The traditional method, to confront the student not with the problem but with the finished solution, is to deprive the student of all excitement, to shut off the creative impulse, to reduce the adventure of humankind to a dusty heap of theorems. The issue, then, is to present some of the unanswered and important problems that we continue to confront, for it may be asserted that what we have truly learned and understood, we discovered ourselves.

The purpose of this book is to present the structure of feedback control theory and to provide a sequence of exciting discoveries as we proceed through the text and problems. If this book is able to assist the student in discovering feedback control system theory and practice, it will have succeeded.

23. WHAT'S NEW IN THIS EDITION

This latest edition of Modern Control Systems incorporates the following key updates:

□ Available as both an eText and print book.

□ Video solutions for select problems throughout the text.

□ Interactive figures added throughout the eText to enhance student learning.

□ In the eText, interactive Skills Check multiple-choice questions at the end of each chapter.

□ Over 20% new or updated problems. There are over 980 end-of-chapter exercises, problems, advanced problems, design problems, and computer problems.

□ Expanded use of color for clarity of presentation.

□ An updated companion website available at www.pearsonglobaleditions.com for students and faculty.

24. THE AUDIENCE

This text is designed for an introductory undergraduate course in control systems for engineering students. There is very little demarcation between the various engineering areas in control system practice; therefore, this text is written without any conscious bias toward one discipline. Thus, it is hoped that this book will be equally useful for all engineering disciplines and, perhaps, will assist in illustrating the utility of control engineering. The numerous problems and examples represent all fields, and the examples of the sociological, biological, ecological, and economic control systems are intended to provide the reader with an awareness of the general applicability of control theory to many facets of life. We believe that exposing students of one discipline to examples and problems from other disciplines will provide them with the ability to see beyond their own field of study. Many students pursue careers in engineering fields other than their own. We hope this introduction to control engineering will give students a broader understanding of control system design and analysis.

In its first thirteen editions, Modern Control Systems has been used in senior-level courses for engineering students at many colleges and universities globally. It also has been used in courses for engineering graduate students with no previous background in control engineering.

25. THE FOURTEENTH EDITION

With the fourteenth edition, we have created an interactive e-textbook to fully use rich, digital content for Modern Control Systems to enhance the learning experience. This version contains embedded videos, dynamic graphs, live Skills Check quizzes, and active links to additional resources. The electronic version provides a powerful interactive experience that would be difficult, if not impossible, to achieve in a print book.

A companion website is also available to students and faculty using the fourteenth edition. The website contains many resources, including the m-files in the book, Laplace and \(z\)-transform tables, written materials on matrix algebra and complex numbers, symbols, units, and conversion factors, and an introduction to MATLAB and to the LabVIEW MathScript RT Module. The MCS website is available at www.pearsonglobaleditions.com.

We continue the design emphasis that historically has characterized Modern Control Systems. Using the real-world engineering problems associated with designing a controller for a disk drive read system, we present the Sequential Design Example, which is considered sequentially in each chapter using the methods and concepts in that chapter. Disk drives are used in computers of all sizes and they represent an important application of control engineering. Various aspects of the design of controllers for the disk drive read system are considered in each chapter. For example, in Chapter 1 we identify the control goals, identify the variables to be controlled, write the control specifications, and establish the preliminary system configuration for the disk drive. Then, in Chapter 2, we obtain models of the process, sensors, and actuators. In the remaining chapters, we continue the design process, stressing the main points of the chapters.

In the same spirit as the Sequential Design Example, we present a design problem that we call the Continuous Design Problem to give students the opportunity to build upon a design problem from chapter to chapter. High-precision machinery places stringent demands on table slide systems. In the Continuous Design Problem, students apply the techniques and tools presented in each chapter to the development of a design solution that meets the specified requirements.

The computer-aided design and analysis component of the book continues to evolve and improve. Also, many of the solutions to various components of the Sequential Design Example utilize m-files with corresponding scripts included in the figures.

A Skills Check section is included at the end of each chapter. In each Skills Check section, we provide three sets of problems to test your knowledge of the chapter material. This includes True or False, Multiple Choice, and Word Match problems. To obtain direct feedback, you can check your answers with the answer key provided at the conclusion of the end-of-chapter problems.

The book is organized around the concepts of control system theory as they have been developed in the frequency and time domains. An attempt has been made to make the selection of topics, as well as the systems discussed in the examples and problems, modern in the best sense. Therefore, this book includes discussions on robust control systems and system sensitivity, state variable models, controllability and observability, computer control systems, internal model control, robust PID controllers, and computer-aided design and analysis, to name a few. However, the classical topics of control theory that have proved to be so very useful in practice have been retained and expanded.

Building Basic Principles: From Classical to Modern. Our goal is to present a clear exposition of the basic principles of frequency and time-domain design techniques. The classical methods of control engineering are thoroughly covered: Laplace transforms and transfer functions; root locus design; Routh-Hurwitz stability analysis; frequency response methods, including Bode, Nyquist, and Nichols; steady-state error for standard test signals; second-order system approximations; and phase and gain margin and bandwidth. In addition, coverage of the state variable method is significant. Fundamental notions of controllability and observability for state variable models are discussed. Full state feedback design with Ackermann's formula for pole placement is presented, along with a discussion on the limitations of state variable feedback. Observers are introduced as a means to provide state estimates when the complete state is not measured.

Upon this strong foundation of basic principles, the book provides many opportunities to explore topics beyond the traditional. In the latter chapters, we present introductions into more advanced topics of robust control and digital control, as well as an entire chapter devoted to the design of feedback control systems with a focus on practical industrial lead and lag compensator structures. Problem solving is emphasized throughout the chapters. Each chapter (but the first) introduces the student to the notion of computer-aided design and analysis.

Progressive Development of Problem-Solving Skills. Reading the chapters, attending lectures and taking notes, and working through the illustrated examples are all part of the learning process. But the real test comes at the end of the chapter with the problems. The book takes the issue of problem solving seriously. In each chapter, there are five problem types:

□ Exercises

□ Problems

□ Advanced Problems

□ Design Problems

□ Computer Problems

For example, the problem set for Chapter 2, Mathematical Models of Systems, includes 31 exercises, 51 problems, 9 advanced problems, 6 design problems, and 10 computer-based problems. The exercises permit the students to readily utilize the concepts and methods introduced in each chapter by solving relatively straightforward exercises before attempting the more complex problems. The problems require an extension of the concepts of the chapter to new situations. The advanced problems represent problems of increasing complexity. The design problems emphasize the design task; the computer-based problems give the student practice with problem solving using computers. In total, the book contains more than 980 problems. The abundance of problems of increasing complexity gives students confidence in their problem-solving ability as they work their way from the exercises to the design and computer-based problems. An instructor's manual, available to all adopters of the text for course use, contains complete solutions to all end-of-chapter problems.

A set of m-files, the Modern Control Systems Toolbox, has been developed by the authors to supplement the text. The m-files contain the scripts from each computer-based example in the text. You may retrieve the m-files from the companion website available at www.pearsonglobaleditions.com.

Design Emphasis without Compromising Basic Principles. The all-important topic of design of real-world, complex control systems is a major theme throughout the text. Emphasis on design for real-world applications addresses interest in design by ABET and industry.

The design process consists of seven main building blocks that we arrange into three groups:

  1. Establishment of goals and variables to be controlled, and definition of specifications (metrics) against which to measure performance

  2. System definition and modeling

  3. Control system design and integrated system simulation and analysis

In each chapter of this book, we highlight the connection between the design process and the main topics of that chapter. The objective is to demonstrate different aspects of the design process through illustrative examples.

Various aspects of the control system design process are illustrated in detail in many examples across all the chapters, including applications of control design in robotics, manufacturing, medicine, and transportation (ground, air, and space).

Each chapter includes a section to assist students in utilizing computer-aided design and analysis concepts and in reworking many of the design examples. Generally, m-file scripts are provided that can be used in the design and analysis of the feedback control systems. Each script is annotated with comment boxes that highlight important aspects of the script. The accompanying output of the script (generally a graph) also contains comment boxes pointing out significant elements. The scripts can also be utilized with modifications as the foundation for solving other related problems.

[Figure: the control system design process emphasized in each chapter. (1) Establishment of goals, variables to be controlled, and specifications: write the specifications. (2) System definition and modeling: obtain a model of the process, the actuator, and the sensor. (3) Control system design, simulation, and analysis: describe a controller and select key parameters to be adjusted, then optimize the parameters and analyze the performance. If the performance does not meet the specifications, iterate the configuration; if it does, finalize the design. Most examples emphasize just one or two topics, and marginal remarks relate the design topics to specific sections, figures, equations, and tables in the example.]

Learning Enhancement. Each chapter begins with a chapter preview describing the topics the student can expect to encounter. The chapters conclude with an end-of-chapter summary, skills check, as well as terms and concepts. These sections
reinforce the important concepts introduced in the chapter and serve as a reference for later use.

Color is used to add emphasis when needed and to make the graphs and figures easier to interpret. For example, consider the computer control of a robot to spray-paint an automobile. We might ask the student to investigate the closed-loop system stability for various values of the controller gain \(K\) and to determine the response to a unit step disturbance, \(T_{d}(s) = 1/s\), when the input \(R(s) = 0\). The associated figure assists the student with (a) visualizing the problem, and (b) taking the next step to develop the transfer function model and to complete the analyses.
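To make this concrete, here is a minimal m-file sketch of that analysis. The process model \(G(s) = 1/(s(s+2))\) and the gain values are placeholders chosen for illustration only, not the robot spray-painting model from the figure, and the script assumes the MATLAB Control System Toolbox functions tf, feedback, and step.

% Hypothetical process model G(s) = 1/(s(s+2)) -- an illustrative placeholder only.
G = tf(1, [1 2 0]);
for K = [1 5 10]                  % candidate proportional controller gains
    % With R(s) = 0, the transfer function from Td(s) to Y(s) is G(s)/(1 + K*G(s)).
    Tyd = feedback(G, K);
    step(Tyd);                    % response to a unit step disturbance Td(s) = 1/s
    hold on
end
hold off
legend('K = 1', 'K = 5', 'K = 10')

For this placeholder model, the steady-state effect of the unit step disturbance on the output is \(1/K\), so increasing the gain reduces the disturbance effect; observing this kind of trend is exactly what the student is asked to do.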

26. THE ORGANIZATION

Chapter 1 Introduction to Control Systems. Chapter 1 provides an introduction to the basic history of control theory and practice. The purpose of this chapter is to describe the general approach to designing and building a control system.


Chapter 2 Mathematical Models of Systems. Mathematical models of physical systems in input-output or transfer function form are developed in Chapter 2. A wide range of systems are considered.

Chapter 3 State Variable Models. Mathematical models of systems in state variable form are developed in Chapter 3. The transient response of control systems and the performance of these systems are examined.

Chapter 4 Feedback Control System Characteristics. The characteristics of feedback control systems are described in Chapter 4. The advantages of feedback are discussed, and the concept of the system error signal is introduced.

Chapter 5 The Performance of Feedback Control Systems. In Chapter 5, the performance of control systems is examined. The performance of a control system is correlated with the \(s\)-plane location of the poles and zeros of the transfer function of the system.


Chapter 6 The Stability of Linear Feedback Systems. The stability of feedback systems is investigated in Chapter 6. The relationship of system stability to the characteristic equation of the system transfer function is studied. The Routh-Hurwitz stability criterion is introduced.

Chapter 7 The Root Locus Method. Chapter 7 deals with the motion of the roots of the characteristic equation in the \(s\)-plane as one or two parameters are varied. The locus of roots in the \(s\)-plane is determined by a graphical method. We also introduce the popular PID controller and the Ziegler-Nichols PID tuning method.

Chapter 8 Frequency Response Methods. In Chapter 8, a steady-state sinusoidal input signal is utilized to examine the steady-state response of the system as the frequency of the sinusoid is varied. The development of the frequency response plot, called the Bode plot, is considered.

Chapter 9 Stability in the Frequency Domain. System stability utilizing frequency response methods is investigated in Chapter 9. Relative stability and the Nyquist criterion are discussed. Stability is considered using Nyquist plots, Bode plots, and Nichols charts.

Chapter 10 The Design of Feedback Control Systems. Several approaches to designing and compensating a control system are described and developed in Chapter 10. Various candidates for service as compensators are presented and it is shown how they help to achieve improved performance. The focus is on lead and lag compensators.

Chapter 11 The Design of State Variable Feedback Systems. The main topic of Chapter 11 is the design of control systems using state variable models. Full-state feedback design and observer design methods based on pole placement are discussed. Tests for controllability and observability are presented, and the concept of an internal model design is discussed.

Chapter 12 Robust Control Systems. Chapter 12 deals with the design of highly accurate control systems in the presence of significant uncertainty. Five methods for robust design are discussed, including root locus, frequency response, ITAE methods for robust PID controllers, internal models, and pseudo-quantitative feedback.

Chapter 13 Digital Control Systems. Methods for describing and analyzing the performance of computer control systems are described in Chapter 13. The stability and performance of sampled-data systems are discussed.

27. ACKNOWLEDGMENTS

We wish to express our sincere appreciation to the following individuals who have assisted us with the development of this fourteenth edition, as well as all previous editions: John Hung, Auburn University; Zak Kassas, University of California-Irvine; Hanz Richter, Cleveland State University; Abhishek Gupta, The Ohio State University; Darris White, Embry-Riddle Aeronautical University; John K. Schueller, University of Florida; Mahmoud A. Abdallah, Central State University (OH); John N. Chiasson, University of Pittsburgh; Samy El-Sawah, California State Polytechnic University, Pomona; Peter J. Gorder, Kansas State University; Duane Hanselman, University of Maine; Ashok Iyer, University of Nevada, Las Vegas; Leslie R. Koval, University of Missouri-Rolla; L. G. Kraft, University of New Hampshire; Thomas Kurfess, Georgia Institute of Technology; Julio C. Mandojana, Mankato State University; Luigi Mariani, University of Padova; Jure Medanic, University of Illinois at Urbana-Champaign; Eduardo A. Misawa, Oklahoma State University; Medhat M. Morcos, Kansas State University; Mark Nagurka, Marquette University; D. Subbaram Naidu, Idaho State University; Ron Perez, University of Wisconsin-Milwaukee; Carla Schwartz, The MathWorks, Inc.; Murat Tanyel, Dordt College; Hal Tharp, University of Arizona; John Valasek, Texas A&M University; Paul P. Wang, Duke University; and Ravi Warrier, GMI Engineering and Management Institute. Special thanks to Greg Mason, Seattle University, and Jonathan Sprinkle, University of Arizona, for developing the interactives and the video solutions.

28. ACKNOWLEDGMENTS FOR THE GLOBAL EDITION

Pearson would like to acknowledge and thank the following for the Global Edition:

29. CONTRIBUTORS

Benjamin Chong, University of Leeds

Murat Doğruel, Marmara University

Quang Ha, University of Technology Sydney

Ashish Rajeshwar Kulkarni, Delhi Technological University

Savita Nema, Maulana Azad National Institute of Technology Bhopal

Mark Ovinis, Universiti Teknologi PETRONAS

Bidyadhar Subudhi, National Institute of Technology Rourkela

30. REVIEWERS

Quang Ha, University of Technology Sydney

Shen Hin Lim, University of Waikato

Mark Ovinis, Universiti Teknologi PETRONAS

Fuwen Yang, Griffith University

31. OPEN LINES OF COMMUNICATION

The authors would like to establish a line of communication with the users of Modern Control Systems. We encourage all readers to send comments and suggestions for this and future editions. By doing this, we can keep you informed of any general-interest news regarding the textbook and pass along comments of other users.

Keep in touch!

Robert H. Bishop

robertbishop@usf.edu

32. About the Authors

Richard C. Dorf was Professor Emeritus of Electrical and Computer Engineering at the University of California, Davis. Known as an instructor who was highly concerned with the discipline of electrical engineering and its application to social and economic needs, Professor Dorf wrote and edited several successful engineering textbooks and handbooks, including the best-selling Engineering Handbook, second edition, and the third edition of the Electrical Engineering Handbook. Professor Dorf was also co-author of Technology Ventures, a leading textbook on technology entrepreneurship. Professor Dorf was a Fellow of the IEEE and a Fellow of the ASEE. Dr. Dorf held a patent for the PIDA controller.

Robert H. Bishop is the Dean of Engineering at the University of South Florida, President and CEO of the Institute of Applied Engineering, and a Professor in the Department of Electrical Engineering. Prior to coming to the University of South Florida, he was the Dean of Engineering at Marquette University and, before that, a Department Chair and Professor of Aerospace Engineering and Engineering Mechanics at The University of Texas at Austin, where he held the Joe J. King Professorship and was a Distinguished Teaching Professor. Professor Bishop started his engineering career as a member of the technical staff at the Charles Stark Draper Laboratory. He authors the well-known textbook for teaching graphical programming entitled Learning with LabVIEW and is also the editor-in-chief of the Mechatronics Handbook. Professor Bishop remains an active teacher and researcher and has authored/co-authored over one hundred and forty-five journal and conference papers. He is a Fellow of the AIAA, a Fellow of the American Astronautical Society (AAS), a Fellow of the American Association for the Advancement of Science (AAAS), and active in ASEE and in the Institute of Electrical and Electronics Engineers (IEEE).

33. CHAPTER 1 Introduction to Control Systems

1.1 Introduction 30

1.2 Brief History of Automatic Control 33

1.3 Examples of Control Systems 39

1.4 Engineering Design 46

1.5 Control System Design 47

1.6 Mechatronic Systems 50

1.7 Green Engineering 54

1.8 The Future Evolution of Control Systems 55

1.9 Design Examples 57

1.10 Sequential Design Example: Disk Drive Read System 62

1.11 Summary 63

36. PREVIEW

A control system consists of interconnected components to achieve a desired purpose. In this chapter, we discuss open- and closed-loop feedback control systems. We examine examples of control systems through the course of history. Early systems incorporated many of the basic ideas of feedback that are employed in modern control systems. A design process is presented that encompasses the establishment of goals and variables to be controlled, definition of specifications, system definition, modeling, and analysis. The iterative nature of design allows us to handle the design gap effectively while accomplishing necessary trade-offs in complexity, performance, and cost. Finally, we introduce the Sequential Design Example: Disk Drive Read System. This example will be considered sequentially in each chapter of this book. It represents a practical control system design problem while simultaneously serving as a useful learning tool.

37. DESIRED OUTCOMES

Upon completion of Chapter 1, students should be able to:

□ Give illustrative examples of control systems and describe their relationship to key contemporary issues.

□ Recount a brief history of control systems and their role in society.

□ Predict the future of controls in the context of their evolutionary pathways.

□ Recognize the elements of control system design and possess an appreciation of controls in the context of engineering design.

37.1. INTRODUCTION

Engineers create products that help people. Our quality of life is sustained and enhanced through engineering. To accomplish this, engineers strive to understand, model, and control the materials and forces of nature for the benefit of humankind. A key area of engineering that reaches across many technical areas is the multidisciplinary field of control system engineering. Control engineers are concerned with understanding and controlling segments of their environment, often called systems, which are interconnections of elements and devices for a desired purpose. The system might be something as clear-cut as an automobile cruise control system, or as extensive and complex as a direct brain-to-computer system to control a manipulator. Control engineering deals with the design (and implementation) of control systems using linear, time-invariant mathematical models representing actual physical nonlinear, time-varying systems with parameter uncertainties in the presence of external disturbances. As computer systems, especially embedded processors, have become less expensive and more computationally powerful while requiring less power and space, and as sensors and actuators have undergone the same evolution toward greater capability in smaller packages, the applications of control systems have grown in number and complexity. A sensor is a device that provides a measurement of a desired external signal. For example, resistance temperature detectors (RTDs) are sensors used to measure temperature. An actuator is a device employed by the control system to alter or adjust the environment. An electric motor drive used to rotate a robotic manipulator is an example of a device transforming electric energy to mechanical torque.

The face of control engineering is rapidly changing. The age of the Internet of Things (IoT) presents many intriguing challenges in control system applications in the environment (think about more efficient energy use in homes and businesses), manufacturing (think 3D printing), consumer products, energy, medical devices and healthcare, transportation (think about automated cars!), among many others [14]. A challenge for control engineers today is to be able to create simple, yet reliable and accurate mathematical models of many of our modern, complex, interrelated, and interconnected systems. Fortunately, many modern design tools are available, as well as open source software modules and Internet-based user groups (to share ideas and answer questions), to assist the modeler. The implementation of the control systems themselves is also becoming more automated, again assisted by many resources readily available on the Internet coupled with access to relatively inexpensive computers, sensors, and actuators. Control system engineering focuses on the modeling of a wide assortment of physical systems and using those models to design controllers that will cause the closed-loop systems to possess desired performance characteristics, such as stability, relative stability, steady-state tracking with prescribed maximum errors, transient tracking (percent overshoot, settling time, rise time, and time to peak), rejection of external disturbances, and robustness to modeling uncertainties. The extremely important step of the overall design and implementation process is designing the control systems, such as PID controllers, lead and lag controllers, state variable feedback controllers, and other popular controller structures. That is what this textbook is all about!

FIGURE 1.1 Process to be controlled.

FIGURE 1.2 Open-loop control system (without feedback).

Control system engineering is based on the foundations of feedback theory and linear system analysis, and it integrates the concepts of network theory and communication theory. It rests on a strong mathematical foundation, yet it is very practical and impacts our lives every day in almost all we do. Indeed, control engineering is not limited to any engineering discipline but is equally applicable to aerospace, agricultural, biomedical, chemical, civil, computer, industrial, electrical, environmental, mechanical, and nuclear engineering, and even computer science. Many aspects of control engineering can also be found in studies in systems engineering.

A control system is an interconnection of components forming a system configuration that will provide a desired system response. The basis for analysis of a system is the foundation provided by linear system theory, which assumes a cause-effect relationship for the components of a system. A component, or process, to be controlled can be represented graphically, as shown in Figure 1.1. The input-output relationship represents the cause-and-effect relationship of the process, which in turn represents a processing of the input signal to provide a desired output signal. An open-loop control system uses a controller and an actuator to obtain the desired response, as shown in Figure 1.2. An open-loop system is a system without feedback.

38. An open-loop control system utilizes an actuating device to control the process directly without using feedback.

In contrast to an open-loop control system, a closed-loop control system utilizes an additional measure of the actual output to compare the actual output with the desired output response. The measure of the output is called the feedback signal. A simple closed-loop feedback control system is shown in Figure 1.3. A feedback control system is a control system that tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control. With an accurate sensor, the measured output is a good approximation of the actual output of the system.

FIGURE 1.3 Closed-loop feedback control system (with feedback).

A feedback control system often uses a function of a prescribed relationship between the output and reference input to control the process. Often the difference between the output of the process under control and the reference input is amplified and used to control the process so that the difference is continually reduced. In general, the difference between the desired output and the actual output is equal to the error, which is then adjusted by the controller. The output of the controller causes the actuator to modulate the process in order to reduce the error. For example, if a ship is heading incorrectly to the right, the rudder is actuated to direct the ship to the left. The system shown in Figure 1.3 is a negative feedback control system, because the output is subtracted from the input and the difference is used as the input signal to the controller. The feedback concept is the foundation for control system analysis and design.

39. A closed-loop control system uses a measurement of the output and feedback of this signal to compare it with the desired output (reference or command).
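As a preview of the algebra developed in later chapters, consider a minimal sketch of this comparison for a unity-feedback loop with controller \(G_c(s)\) and process \(G(s)\); this notation is assumed here only for illustration, and error analysis for the general configuration of Figure 1.3 is treated formally in Chapter 4. The error and the closed-loop response are related by

\[
E(s) = R(s) - Y(s), \qquad Y(s) = G_c(s)G(s)E(s),
\]

so that

\[
\frac{Y(s)}{R(s)} = \frac{G_c(s)G(s)}{1 + G_c(s)G(s)} \qquad \text{and} \qquad \frac{E(s)}{R(s)} = \frac{1}{1 + G_c(s)G(s)}.
\]

Making the loop gain \(G_c(s)G(s)\) large over the frequencies of interest drives the error toward zero, which is the sense in which the difference between the desired and actual output is continually reduced.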

Closed-loop control has many advantages over open-loop control, including the ability to reject external disturbances and improve measurement noise attenuation. We incorporate disturbances and measurement noise in the block diagram as external inputs, as illustrated in Figure 1.4. External disturbances and measurement noise are inevitable in real-world applications and must be addressed in practical control system designs.

The feedback systems in Figures 1.3 and 1.4 are single-loop feedback systems. Many feedback control systems contain more than one feedback loop. A common multiloop feedback control system is illustrated in Figure 1.5 with an inner loop and an outer loop. In this scenario, the inner loop has a controller and a sensor and the outer loop has a controller and sensor. Other varieties of multiloop feedback systems are considered throughout the book as they represent more practical situations found in real-world applications. However, we use the single-loop feedback system for learning about the benefits of feedback control systems since the outcomes readily scale to multiloop systems.

Due to the increasing complexity of systems under active control and the interest in achieving optimum performance, the importance of control system engineering continues to grow. Furthermore, as the systems become more complex, the interrelationship of many controlled variables must be considered in the control scheme. A block diagram depicting a multivariable control system is shown in Figure 1.6.

FIGURE 1.4 Closed-loop feedback system with external disturbances and measurement noise.

FIGURE 1.5 Multiloop feedback system with an inner loop and an outer loop.

FIGURE 1.6 Multivariable control system.

A common example of an open-loop control system is a microwave oven set to operate for a fixed time. An example of a closed-loop control system is a person steering an automobile (assuming his or her eyes are open) by looking at the auto's location on the road and making the appropriate adjustments.

The introduction of feedback enables us to control a desired output and can improve accuracy, but it requires attention to the issues of stability and performance.

39.1. BRIEF HISTORY OF AUTOMATIC CONTROL

The use of feedback to control a system has a fascinating history. The first applications of feedback control appeared in the development of float regulator mechanisms in Greece in the period 300 to 1 B.C. [1, 2, 3]. The water clock of Ktesibios used a float regulator. An oil lamp devised by Philon in approximately 250 B.C. used a float regulator to maintain a constant level of fuel oil. Heron of Alexandria, who lived in the first century A.D., published a book entitled Pneumatica, which outlined several forms of water-level mechanisms using float regulators [1].

The first feedback system to be invented in modern Europe was the temperature regulator of Cornelis Drebbel (1572-1633) of Holland [1]. Denis Papin (1647-1712) invented the first pressure regulator for steam boilers in 1681. Papin's pressure regulator was a form of safety regulator similar to a pressure-cooker valve.

FIGURE 1.7 Watt's flyball governor.

The first automatic feedback controller used in an industrial process is generally agreed to be James Watt's flyball governor, developed in 1769 for controlling the speed of a steam engine [1,2]. The all-mechanical device, illustrated in Figure 1.7, measured the speed of the output shaft and utilized the movement of the flyball to control the steam valve and therefore the amount of steam entering the engine. As depicted in Figure 1.7, the governor shaft axis is connected via mechanical linkages and beveled gears to the output shaft of the steam engine. As the steam engine output shaft speed increases, the ball weights rise and move away from the shaft axis and through mechanical linkages the steam valve closes and the engine slows down.

The first historical feedback system is the water-level float regulator said to have been invented by I. Polzunov in 1765 [4]. The level regulator system is illustrated in Figure 1.8. The float detects the water level and controls the valve that covers the water inlet in the boiler.

The next century was characterized by the development of automatic control systems through intuition and invention. Efforts to increase the accuracy of the control system led to slower attenuation of the transient oscillations and even to unstable systems. It then became imperative to develop a theory of automatic control. In 1868, J. C. Maxwell formulated a mathematical theory related to control theory using a differential equation model of a governor [5]. Maxwell's study was concerned with the effect various system parameters had on the system performance. During the same period, I. A. Vyshnegradskii formulated a mathematical theory of regulators [6].

FIGURE 1.8 Water-level float regulator.

Prior to World War II, control theory and practice developed differently in the United States and western Europe than in Russia and eastern Europe. The main impetus for the use of feedback in the United States was the development of the telephone system and electronic feedback amplifiers by Bode, Nyquist, and Black at Bell Telephone Laboratories [7-10, 12].

Harold S. Black graduated from Worcester Polytechnic Institute in 1921 and joined Bell Laboratories of American Telegraph and Telephone (AT&T). At that time, the major task confronting Bell Laboratories was the improvement of the telephone system and the design of improved signal amplifiers. Black was assigned the task of linearizing, stabilizing, and improving the amplifiers that were used in tandem to carry conversations over distances of several thousand miles. After years of working on oscillator circuits, Black had the idea of negative feedback amplifiers as a way to avoid self-oscillations. His idea would enhance circuit stability over a wide range of frequency bands [8].

The frequency domain was used primarily to describe the operation of the feedback amplifiers in terms of bandwidth and other frequency variables. In contrast, the eminent mathematicians and applied mechanicians in the former Soviet Union inspired and dominated the field of control theory. The Russian theory tended to utilize a time-domain formulation using differential equations.

The control of an industrial process (manufacturing, production, and so on) by automatic rather than manual means is often called automation. Automation is prevalent in the chemical, electric power, paper, automobile, and steel industries, among others. The concept of automation is central to our industrial society. Automatic machines are used to increase the production of a plant. Industries are concerned with the productivity per worker of their plants. Productivity is defined as the ratio of physical output to physical input [26]. In this case, we are referring to labor productivity, which is real output per hour of work.

A large impetus to the theory and practice of automatic control occurred during World War II when it became necessary to design and construct automatic airplane piloting, gun-positioning systems, radar antenna control systems, and other military systems based on the feedback control approach. The complexity and expected performance of these military systems necessitated an extension of the available control techniques and fostered interest in control systems and the development of new insights and methods. Prior to 1940, for most cases, the design of control systems was an art involving a trial-and-error approach. During the 1940s, mathematical and analytical methods increased in number and utility, and control engineering became an engineering discipline in its own right [10-12].

Another example of the discovery of an engineering solution to a control system problem was the creation of a gun director by David B. Parkinson of Bell Telephone Laboratories. In the spring of 1940, Parkinson was intent on improving the automatic level recorder, an instrument that used strip-chart paper to plot the record of a voltage. A critical component was a small potentiometer used to control the pen of the recorder through an actuator. If a potentiometer could be used to control the pen on a level recorder, might it be capable of controlling other machines such as an antiaircraft gun? [13].

After considerable effort, an engineering model was delivered for testing to the U.S. Army on December 1, 1941. Production models were available by early 1943, and eventually 3000 gun controllers were delivered. Input to the controller was provided by radar, and the gun was aimed by taking the data of the airplane's present position and calculating the target's future position.

Frequency-domain techniques continued to dominate the field of control following World War II with the increased use of the Laplace transform and the complex frequency plane. During the 1950s, the emphasis in control engineering theory was on the development and use of the \(s\)-plane methods and, particularly, the root locus approach. Furthermore, during the 1980s, the use of digital computers for control components became routine. The technology of these new control elements to perform accurate and rapid calculations was formerly unavailable to control engineers. These computers are now employed especially for process control systems in which many variables are measured and controlled simultaneously by the computer.

With the advent of Sputnik and the space age, another new impetus was imparted to control engineering. It became necessary to design complex, highly accurate control systems for missiles and space probes. Furthermore, the necessity to minimize the weight of satellites and to control them very accurately has spawned the important field of optimal control. Due to these requirements, the time-domain methods developed by Liapunov, Minorsky, and others have been met with great interest. Theories of optimal control developed by L. S. Pontryagin in the former Soviet Union and R. Bellman in the United States, as well as studies of robust systems, have contributed to the interest in time-domain methods. Control engineering must consider both the time-domain and the frequency-domain approaches simultaneously in the analysis and design of control systems.

A notable advance with worldwide impact is the U.S. space-based radionavigation system known as the Global Positioning System or GPS [82-85]. In the distant past, various strategies and sensors were developed to keep explorers on the oceans from getting lost, including following coastlines, using compasses to point north, and sextants to measure the angles of stars, the moon, and the sun above the horizon. The early explorers were able to estimate latitude accurately, but not longitude. It was not until the 1700s with the development of the chronometer that, when used with the sextant, the longitude could be estimated. Radio-based navigation systems began to appear in the early twentieth century and were used in World War II. With the advent of Sputnik and the space age, it became known that radio signals from satellites could be used to navigate on the ground by observing the Doppler shift of the received radio signals. Research and development culminated in the 1990s with 24 navigation satellites (known as the GPS) that solved the fundamental problem that explorers faced for centuries by providing a dependable mechanism to pinpoint the current location. Freely available on a continuous worldwide basis, GPS provides very reliable location and time information anytime, day or night, anywhere in the world. Using GPS as a sensor to provide position (and velocity) information is a mainstay of active control systems for transportation systems in the air, on the ground, and on the oceans. The GPS assists relief and emergency workers to save lives, and helps us with our everyday activities including the control of power grids, banking, farming, surveying, and many other tasks.

Global navigation satellite services (such as GPS, GLONASS, and Galileo) providing position, navigation, and timing data coupled with evolving wireless mobile technology, highly capable mobile computing systems and devices, global geographic information systems, and semantic web are supporting the evolving field of ubiquitous positioning [100-103]. These systems can provide information on the location of people, vehicles, and other objects as a function of time across the globe. As personal ubiquitous computing [104] continues to push active control technology to the edge where the action is taking place, we will be faced with many opportunities to design and field autonomous systems based on the firm ground of system theoretic concepts covered in this introductory text on modern control systems.

The evolution of the Internet of Things (IoT) is having a transformational impact on the field of control engineering. The idea of the IoT, first proposed by Kevin Ashton in 1999, is the network of physical objects embedded with electronics, software, sensors, and connectivity - all elements of control engineering [14]. Each of the "things" on the network has an embedded computer with connectivity to the Internet. The ability to control connected devices is of great interest to control engineers, but there remains much work to be done, especially in establishing standards [24]. The International Data Corporation estimates that there will be 41.6 billion IoT devices generating 79.4 zettabytes (ZB) of data by the year 2025 [106]. One ZB is equal to one trillion GB! Figure 1.9 presents a technology roadmap that illustrates that in the near future control engineering is likely to play a role in creating active control applications for connected devices (adopted from [27]).

A selected history of control system development is summarized in Table 1.1.

FIGURE 1.9 Technology roadmap to the Internet of Things enhanced with artificial intelligence with applications to control engineering (Source: SRI Business Intelligence).

40. Table 1.1 Selected Historical Developments of Control Systems

1769 James Watt's steam engine and governor developed.

1868 J. C. Maxwell formulates a mathematical model for a governor control of a steam engine.

1913 Henry Ford's mechanized assembly machine introduced for automobile production.

1927 H. S. Black conceives of the negative feedback amplifier and H. W. Bode analyzes feedback amplifiers.

1932 H. Nyquist develops a method for analyzing the stability of systems.

1941 Creation of first antiaircraft gun with active control.

1952 Numerical control (NC) developed at Massachusetts Institute of Technology for control of machine-tool axes.

1954 George Devol develops "programmed article transfer," considered to be the first industrial robot design.

1957 Sputnik launches the space age leading, in time, to miniaturization of computers and advances in automatic control theory.

1960 First Unimate robot introduced, based on Devol's designs. Unimate installed in 1961 for tending die-casting machines.

1980 Robust control system design widely studied.

1983 Introduction of the personal computer (and control design software soon thereafter) brought the tools of design to the engineer's desktop.

1990 The government ARPANET (the first network to use the Internet Protocol) was decommissioned and private connections to the Internet by commercial companies rapidly spread.

1994 Feedback control widely used in automobiles. Reliable, robust systems demanded in manufacturing.

1995 The Global Positioning System (GPS) was operational providing positioning, navigation, and timing services worldwide.

1997 First ever autonomous rover vehicle, known as Sojourner, explores the Martian surface.

2007 The Orbital Express mission performed the first autonomous space rendezvous and docking.

2011 The NASA Robonaut R2 became the first US-built robot on the International Space Station designed to assist with crew extravehicular activities (EVAs).

2013 For the first time, a vehicle - known as BRAiVE and designed at the University of Parma, Italy-moved autonomously on a mixed traffic route open to public traffic without a passenger in the driver seat.

2014 Internet of Things (IoT) enabled by convergence of key systems including embedded systems, wireless sensor networks, control systems, and automation.

2016 SpaceX successfully lands the first rocket on an autonomous spaceport drone ship controlled by an autonomous robot.

2019 Alphabet's Wing begins making first commercial drone deliveries in the US.

40.1. EXAMPLES OF CONTROL SYSTEMS

Control engineering is concerned with the analysis and design of goal-oriented systems. Therefore the mechanization of goal-oriented policies has grown into a hierarchy of goal-oriented control systems. Modern control theory is concerned with systems that have self-organizing, adaptive, robust, learning, and optimum qualities.

41. EXAMPLE 1.1 Automated vehicles

Driving an automobile is a pleasant task when the auto responds rapidly to the driver's commands. The era of autonomous or self-driving vehicles is almost upon us [15, 19, 20]. The autonomous vehicle must be able to sense the changing environment, perform trajectory planning, prescribe the control inputs that include steering and turning, accelerating and braking, and many other functions typically handled by the driver, and actually implement the control strategy. Steering is one of the critical functions of autonomous vehicles. A simple block diagram of an automobile steering control system is shown in Figure 1.10(a). The desired course is compared with a measurement of the actual course in order to generate a measure of the error, as shown in Figure 1.10(b). This measurement is obtained by visual and tactile (body movement) feedback, as provided by the feel of the steering wheel by the hand (sensor). This feedback system is a familiar version of the steering control system in an ocean liner or the flight controls in a large airplane. A typical direction-of-travel response is shown in Figure 1.10(c).
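To make the error-driven structure of Figure 1.10(b) concrete, the following minimal sketch (in Python) applies a proportional correction to the heading error at each step. The gain, time step, and vehicle response are purely illustrative assumptions and do not come from the text; production steering controllers are far more sophisticated.

```python
# Minimal sketch of the error-driven steering loop of Figure 1.10(b).
# All numerical values (gain, time step, vehicle response) are illustrative assumptions.

def steering_demo(desired_heading=10.0, steps=50, dt=0.1, gain=0.8):
    """Apply a proportional correction to the heading error at each time step."""
    actual = 0.0                              # initial direction of travel (degrees)
    history = []
    for _ in range(steps):
        error = desired_heading - actual      # compare desired and measured course
        steering_command = gain * error       # controller: proportional adjustment
        actual += steering_command * dt       # crude model of the vehicle's response
        history.append(actual)
    return history

if __name__ == "__main__":
    response = steering_demo()
    print(f"heading after 5 s: {response[-1]:.2f} degrees")
```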

FIGURE 1.10 (a) Automobile steering control system. (b) The driver uses the difference between the actual and the desired direction of travel to generate a controlled adjustment of the steering wheel. (c) Typical direction-of-travel response.

42. EXAMPLE 1.2 Human-in-the-loop control

A basic, manually controlled closed-loop system for regulating the level of fluid in a tank is shown in Figure 1.11. The input is a reference level of fluid that the operator is instructed to maintain. (This reference is memorized by the operator.) The power amplifier is the operator, and the sensor is visual. The operator compares the actual level with the desired level and opens or closes the valve (actuator), adjusting the fluid flow out, to maintain the desired level.

FIGURE 1.11 A manual control system for regulating the level of fluid in a tank by adjusting the output valve. The operator views the level of fluid through a port in the side of the tank.
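The operator's actions in Figure 1.11 amount to an on-off control law: open the output valve when the level rises above the memorized reference and close it when the level falls below. A minimal sketch follows; the tank model, flow rates, and thresholds are hypothetical values chosen only to illustrate the loop.

```python
# Minimal sketch of the manual level-regulation loop of Figure 1.11.
# Tank dynamics, flow rates, and the time step are illustrative assumptions.

def regulate_level(desired=1.0, level=1.5, inflow=0.2, dt=0.1, steps=100):
    """Operator-style on-off adjustment of the output valve."""
    for _ in range(steps):
        # Sensor: the operator visually compares the actual level with the memorized reference.
        error = desired - level
        # Actuator: open the output valve when the tank is too full, close it when too low.
        outflow = 0.4 if error < 0 else 0.0
        level += (inflow - outflow) * dt      # simple integrating tank model
    return level

if __name__ == "__main__":
    print(f"level after regulation: {regulate_level():.2f}")
```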

43. EXAMPLE 1.3 Humanoid robots

The use of computers integrated with machines that perform tasks like a human worker has been foreseen by several authors. In his famous 1923 play, entitled R.U.R. [48], Karel Capek called artificial workers robots, deriving the word from the Czech noun robota, meaning "work."

A robot is a computer-controlled machine and involves technology closely associated with automation. Industrial robotics can be defined as a particular field of automation in which the automated machine (that is, the robot) is designed to substitute for human labor \(\lbrack 18,33\rbrack\). Thus robots possess certain humanlike characteristics. Today, the most common humanlike characteristic is a mechanical manipulator that is patterned somewhat after the human arm and wrist. Some devices even have anthropomorphic mechanisms, including what we might recognize as mechanical arms, wrists, and hands [28]. An example of an anthropomorphic robot is shown in Figure 1.12. We recognize that the automatic machine is well suited to some tasks, as noted in Table 1.2, and that other tasks are best carried out by humans [106].

44. EXAMPLE 1.4 Electric power industry

There has been considerable discussion recently concerning the gap between practice and theory in control engineering. However, it is natural that theory precedes the applications in many fields of control engineering. Nonetheless, it is interesting to note that in the electric power industry, the largest industry in the United States, the gap is relatively insignificant. The electric power industry is primarily interested in energy conversion, control, and distribution. It is critical that computer control be increasingly applied to the power industry in order to improve the efficient use of energy resources. Also, the control of power plants for minimum waste emission has become increasingly important. The modern, large-capacity plants, which exceed several hundred megawatts, require automatic control systems that account for the interrelationship of the process variables and optimum power production. It is common to have 90 or more manipulated variables under coordinated control. A simplified model showing several of the important control variables of a large boiler-generator system is shown in Figure 1.13. This is an example of the importance of measuring many variables, such as pressure and oxygen, to provide information to the computer for control calculations.

The electric power industry has used the modern aspects of control engineering for significant and interesting applications. It appears that in the process industry, the factor that maintains the applications gap is the lack of instrumentation to measure all the important process variables, including the quality and composition of the product. As these instruments become available, the applications of modern control theory to industrial systems should increase measurably.

FIGURE 1.12 The Honda ASIMO humanoid robot. ASIMO walks, climbs stairs, and turns corners. (David Coll Blanco/Alamy Stock Photo)

Table 1.2 Task Difficulty: Human Versus Automatic Machine

Tasks Difficult for a Machine:
Displaying real emotions
Acting based on ethical principles
Anticipating human actions and responses
Acquiring new skills on its own

Tasks Difficult for a Human:
Precise coordination with other robots
Operating in toxic environments
Highly repetitive activities
Deep underwater surveys
Outer planet space exploration
Working diligently with no breaks for long periods

FIGURE 1.13 Coordinated control system for a boiler-generator.

45. EXAMPLE 1.5 Biomedical engineering

There have been many applications of control system theory to biomedical experimentation, diagnosis, prosthetics, and biological control systems [22, 23, 48]. The control systems under consideration range from the cellular level to the central nervous system and include temperature regulation and neurological, respiratory, and cardiovascular control. Most physiological control systems are closed-loop systems. However, we find not one controller but rather control loops within control loops, forming a hierarchy of systems. The modeling of the structure of biological processes confronts the analyst with a high-order model and a complex structure. Prosthetic devices aid millions of people worldwide. Recent advances in feedback control technology will profoundly transform the lives of amputees and people living with paralysis. Figure 1.14 depicts a prosthetic hand and arm with the same dexterity as a human arm. Especially fascinating are advances in brain-controlled feedback of prosthetic limbs, enabling the power of the human brain to guide the movement [39]. Another fascinating advance is the restoration of the senses of touch and pain [22]: much progress has been made in connecting prosthetic limb sensors with haptic feedback directly back to the brain.

FIGURE 1.14 Recent advances in electronic prosthetics have resulted in the development of a prosthetic hand and arm that has the same dexterity as a human arm. (Kuznetsov Dmitriy/Shutterstock)

46. EXAMPLE 1.6 Social, economic, and political systems

It is interesting and valuable to attempt to model the feedback processes prevalent in the social, economic, and political spheres. This approach is undeveloped at present but appears to have a bright future. Society is composed of many feedback systems and regulatory bodies, which are controllers exerting the forces on society necessary to maintain a desired output. A simple lumped model of the national income feedback control system is shown in Figure 1.15. This type of model helps the analyst to understand the effects of government control and the dynamic effects of government spending. Of course, many other loops not shown also exist, since, theoretically, government spending cannot exceed the tax collected without generating a deficit, which is itself a control loop containing the Internal Revenue Service and the Congress. In a socialist country, the loop due to consumers is deemphasized and government control is emphasized. In that case, the measurement block must be accurate and must respond rapidly; both are very difficult characteristics to realize from a bureaucratic system. This type of political or social feedback model, while usually nonrigorous, does impart information and understanding.

FIGURE 1.15 A feedback control system model of the national income.
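One plausible, heavily simplified reading of such a lumped model can be sketched as a discrete-time feedback loop: national income is taken as the sum of consumer spending, business investment, and government spending, and the government adjusts its spending in proportion to the gap between a desired and the measured income. All coefficients below are hypothetical and serve only to illustrate the feedback idea, not to reproduce Figure 1.15.

```python
# Hypothetical discrete-time sketch of a national-income feedback loop.
# All coefficients are illustrative assumptions, not values from the text.

def national_income_loop(desired=100.0, periods=20, gain=0.5):
    consumer, investment, government = 50.0, 20.0, 20.0
    income = consumer + investment + government
    for _ in range(periods):
        error = desired - income          # measured gap from the desired national income
        government += gain * error        # government spending acts as the control input
        consumer = 0.6 * income           # crude consumption response to last period's income
        income = consumer + investment + government
    return income

if __name__ == "__main__":
    print(f"income after 20 periods: {national_income_loop():.1f}")
```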

47. EXAMPLE 1.7 Unmanned aerial vehicles

The ongoing research and development of unmanned aerial vehicles (UAVs) is full of potential for the application of control systems. These aircraft are also known as drones. An example of a drone is shown in Figure 1.16. Drones are unmanned but are usually controlled by ground operators. Typically they do not operate autonomously, and their inability to provide the level of safety required in a complex airspace keeps them from flying freely in the commercial airspace, although package delivery via drones has begun. One significant challenge is to develop control systems that will avoid in-air collisions. Ultimately, the goal is to employ drones autonomously in such applications as aerial photography to assist in disaster mitigation, survey work to assist in construction projects, crop monitoring, and continuous weather monitoring. An intriguing emerging area of applied research is the integration of artificial intelligence (AI) and drones [74]. Smart unmanned aircraft will require significant deployment of advanced control systems throughout the airframe.

FIGURE 1.16 A commercial drone (GuruXOX/Shutterstock).

48. EXAMPLE 1.8 Industrial control systems

Other familiar control systems have the same basic elements as the system shown in Figure 1.3. A refrigerator has a temperature setting or desired temperature, a thermostat to measure the actual temperature and the error, and a compressor motor for power amplification. Other examples in the home are the oven, furnace, and water heater. In industry, there are many examples, including speed controls; process temperature and pressure controls; and position, thickness, composition, and quality controls [17, 18].

Feedback control systems are used extensively in industrial applications. Thousands of industrial and laboratory robots are currently in use. Manipulators can pick up objects weighing hundreds of pounds and position them with an accuracy of one-tenth of an inch or better [28]. Automatic handling equipment for home, school, and industry is particularly useful for hazardous, repetitious, dull, or simple tasks. Machines that automatically load and unload, cut, weld, or cast are used by industry to obtain accuracy, safety, economy, and productivity [28, 41].

Another important industry, the metallurgical industry, has had considerable success in automatically controlling its processes. In fact, in many cases, the control theory is being fully implemented. For example, a hot-strip steel mill is controlled for temperature, strip width, thickness, and quality.

There has been considerable interest recently in applying the feedback control concepts to automatic warehousing and inventory control. Furthermore, automatic control of agricultural systems (farms) is receiving increased interest. Automatically controlled silos and tractors have been developed and tested. Automatic control of wind turbine generators, solar heating and cooling, and automobile engine performance are important modern examples [20,21].

48.1. ENGINEERING DESIGN

Engineering design is the central task of the engineer. It is a complex process in which both creativity and analysis play major roles.

49. Design is the process of conceiving or inventing the forms, parts, and details of a system to achieve a specified purpose.

Design activity can be thought of as planning for the emergence of a particular product or system. Design is an innovative act whereby the engineer creatively uses knowledge and materials to specify the shape, function, and material content of a system. The design steps are (1) to determine a need arising from the values of various groups, covering the spectrum from public policy makers to the consumer; (2) to specify in detail what the solution to that need must be and to embody these values; (3) to develop and evaluate various alternative solutions to meet these specifications; and (4) to decide which one is to be designed in detail and fabricated.

An important factor in realistic design is the limitation of time. Design takes place under imposed schedules, and we eventually settle for a design that may be less than ideal but considered "good enough." In many cases, time is the only competitive advantage.

A major challenge for the designer is writing the specifications for the technical product. Specifications are statements that explicitly state what the device or product is to be and do. The design of technical systems aims to provide appropriate design specifications and rests on four characteristics: complexity, trade-offs, design gaps, and risk.

Complexity of design results from the wide range of tools, issues, and knowledge to be used in the process. The large number of factors to be considered illustrates the complexity of the design specification activity, not only in assigning these factors their relative importance in a particular design, but also in giving them substance either in numerical or written form, or both.

The concept of trade-off involves the need to resolve conflicting design goals, all of which are desirable. The design process requires an efficient compromise between desirable but conflicting criteria.

In making a technical device, we generally find that the final product does not appear as originally visualized. For example, our mental image of the problem we are solving is never fully captured in the written description and, ultimately, in the specifications. Such design gaps are intrinsic in the progression from an abstract idea to its realization.

This inability to be absolutely sure about predictions of the performance of a technological object leads to major uncertainties about the actual effects of the designed devices and products. These uncertainties are embodied in the idea of unintended consequences or risk. The result is that designing a system is a risk-taking activity.

Complexity, trade-off, gaps, and risk are inherent in designing new systems and devices. Although they can be minimized by considering all the effects of a given design, they are always present in the design process.

Within engineering design, there is a fundamental difference between the two major types of thinking that must take place: engineering analysis and synthesis. In analysis, attention is focused on models of the physical systems, which are analyzed to provide insight and to indicate directions for improvement. Synthesis, on the other hand, is the process by which new physical configurations are created.

Design is a process that may proceed in many directions before the desired one is found. It is a deliberate process by which a designer creates something new in response to a recognized need while recognizing realistic constraints. The design process is inherently iterative-we must start somewhere! Successful engineers learn to simplify complex systems appropriately for design and analysis purposes. A gap between the complex physical system and the design model is inevitable. Design gaps are intrinsic in the progression from the initial concept to the final product. We know intuitively that it is easier to improve an initial concept incrementally than to try to create a final design at the start. In other words, engineering design is not a linear process. It is an iterative, nonlinear, creative process.

The main approach to the most effective engineering design is parameter analysis and optimization. Parameter analysis is based on (1) identification of the key parameters, (2) generation of the system configuration, and (3) evaluation of how well the configuration meets the needs. These three steps form an iterative loop. Once the key parameters are identified and the configuration synthesized, the designer can optimize the parameters. Typically, the designer strives to identify a limited set of parameters to be adjusted.
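A minimal sketch of this three-step loop, under purely illustrative assumptions, treats a single controller gain as the key parameter, uses a simple first-order closed-loop response as the stand-in configuration, and evaluates each candidate with an integrated-error criterion.

```python
# Illustrative sketch of the parameter-analysis loop:
# (1) identify a key parameter, (2) generate a configuration, (3) evaluate it.
# The process model, candidate gains, and criterion are hypothetical.

def evaluate(gain, steps=200, dt=0.05):
    """Integrated absolute tracking error for a first-order loop under proportional control."""
    y, total_error = 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - y                   # unit reference command
        y += (gain * error - y) * dt      # simple first-order closed-loop response
        total_error += abs(error) * dt
    return total_error

if __name__ == "__main__":
    candidates = [0.5, 1.0, 2.0, 5.0, 10.0]          # step (1): values of the key parameter
    scores = {k: evaluate(k) for k in candidates}    # steps (2)-(3): configure and evaluate
    best = min(scores, key=scores.get)               # keep the gain that best meets the need
    print(f"best gain: {best}, integrated error: {scores[best]:.3f}")
```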

49.1. CONTROL SYSTEM DESIGN

The design of control systems is a specific example of engineering design. The goal of control engineering design is to obtain the configuration, specifications, and identification of the key parameters of a proposed system to meet an actual need.

The control system design process is illustrated in Figure 1.17. The design process consists of seven main building blocks, which we arrange into three groups:

  1. Establishment of goals and variables to be controlled, and definition of specifications (metrics) against which to measure performance.

  2. System definition and modeling.

  3. Control system design and integrated system simulation and analysis.

In each chapter of this book, we will highlight the connection between the design process illustrated in Figure 1.17 and the main topics of that chapter. The objective is to demonstrate different aspects of the design process through illustrative examples. We have established the following connections between the chapters in this book and the design process block diagram:

  1. Establishment of goals, control variables, and specifications: Chapters 1, 3, 4, and 13.

  2. System definition and modeling: Chapters 2-4, and 11-13.

  3. Control system design, simulation, and analysis: Chapters 4-13.

The first step in the design process consists of establishing the system goals. For example, we may state that our goal is to control the velocity of a motor accurately. The second step is to identify the variables that we desire to control (for example, the velocity of the motor). The third step is to write the specifications in terms of the accuracy we must attain. This required accuracy of control will then lead to the identification of a sensor to measure the controlled variable. The performance specifications will describe how the closed-loop system should perform and will include (1) good regulation against disturbances, (2) desirable responses to commands, (3) realistic actuator signals, (4) low sensitivities, and (5) robustness.

FIGURE 1.17 The control system design process. The process blocks include: establish the control goals; identify the variables to be controlled; write the specifications; (1) establishment of goals, variables to be controlled, and specifications; (2) system definition and modeling; (3) control system design, simulation, and analysis. If the performance does not meet the specifications, the configuration is iterated.

As designers, we proceed to the first attempt to configure a system that will result in the desired control performance. This system configuration will normally consist of a sensor, the process under control, an actuator, and a controller, as shown in Figure 1.3. The next step consists of identifying a candidate for the actuator. This will, of course, depend on the process, but the actuation chosen must be capable of effectively adjusting the performance of the process. For example, if we wish to control the speed of a rotating flywheel, we will select a motor as the actuator. The sensor, in this case, must be capable of accurately measuring the speed. We then obtain a model for each of these elements.

Students studying controls are often given the models, frequently represented in transfer function or state variable form, with the understanding that they represent the underlying physical systems, but without further explanation. An obvious question is, where did the transfer function or state variable model come from? Within the context of a course in control systems, there is a need to address key questions surrounding modeling. To that end, in the early chapters, we will provide insight into key modeling concerns and answer fundamental questions: How is the transfer function obtained? What basic assumptions are implied in the model development? How general are the transfer functions? However, mathematical modeling of physical systems is a subject in and of itself. We cannot hope to cover the mathematical modeling in its entirety, but interested students are encouraged to seek outside references (see, for example, [76-80]).

The next step is the selection of a controller, which often consists of a summing amplifier that will compare the desired response and the actual response and then forward this error-measurement signal to an amplifier.

The final step in the design process is the adjustment of the parameters of the system to achieve the desired performance. If we can achieve the desired performance by adjusting the parameters, we will finalize the design and proceed to document the results. If not, we will need to establish an improved system configuration and perhaps select an enhanced actuator and sensor. Then we will repeat the design steps until we are able to meet the specifications, or until we decide the specifications are too demanding and should be relaxed.

The design process has been dramatically affected by the advent of powerful and inexpensive computers, and effective control design and analysis software. For example, the Boeing 777 was the world's first \(100\%\) digitally designed civilian aircraft. The benefits of this design approach to Boeing were a 50% saving in development costs, a 93% reduction in design change and rework rate, and a 50-80% reduction in problems compared with traditional manufacturing [56]. The follow-on project, known as the Boeing 787 Dreamliner, was developed without physical prototypes. In many applications, the availability of digital design tools, including the certification of the control system in realistic computer simulations, represents a significant cost reduction in terms of money and time.

Another notable innovation in design is the generative design process coupled with artificial intelligence [57]. Generative design is an iterative design process that typically utilizes a computer program to generate a (potentially large) number of designs based on a given set of constraints provided by the designer. The designer then fine-tunes the feasible solutions provided by the computer program by adjusting the constraint space to reduce the number of viable solutions. For example, generative design is revolutionizing aircraft design [58]. The application of the highly computer-intensive generative design process in feedback control theory remains an open question. However, the generative design process concept can also be applied in a more traditional (less computationally intensive) environment to enhance the design process in Figure 1.17. For example, once a single design has been found that meets the specifications, the process can be repeated by selecting different system configurations and controller structures. After a number of controllers are designed that meet the specifications, the designer can then begin to narrow the design by adjusting the constraints. There are facets of the generative design process that will be illuminated in this book as we discuss the control system design process.
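The flavor of the generative approach can be suggested with a small sketch: generate many candidate designs at random, keep those that satisfy the designer's constraints, and then tighten the constraints to narrow the field. The design variables, bounds, and cost measure below are hypothetical and are not drawn from any particular design tool.

```python
# Hedged sketch of a generative-design style search: generate candidates,
# filter them against designer-supplied constraints, then tighten the constraints.
# All variables, bounds, and the cost measure are illustrative assumptions.
import random

def generate_candidates(n, constraints):
    """Randomly generate (gain, bandwidth) pairs and keep those inside the constraints."""
    feasible = []
    for _ in range(n):
        gain = random.uniform(0.1, 20.0)
        bandwidth = random.uniform(0.1, 50.0)
        cost = 0.5 * gain + 0.2 * bandwidth          # stand-in for actuator effort or cost
        if cost <= constraints["max_cost"] and bandwidth >= constraints["min_bandwidth"]:
            feasible.append((gain, bandwidth, cost))
    return feasible

if __name__ == "__main__":
    random.seed(1)
    loose = {"max_cost": 12.0, "min_bandwidth": 5.0}
    tight = {"max_cost": 6.0, "min_bandwidth": 10.0}  # designer narrows the constraint space
    print(len(generate_candidates(1000, loose)), "designs meet the loose constraints")
    print(len(generate_candidates(1000, tight)), "designs remain after tightening")
```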

In summary, the controller design problem is as follows: Given a model of the system to be controlled (including its sensors and actuators) and a set of design goals, find a suitable controller, or determine that none exists. As with most of engineering design, the design of a feedback control system is an iterative and nonlinear process. A successful designer must consider the underlying physics of the plant under control, the control design strategy, the controller design architecture (that is, what type of controller will be employed), and effective controller tuning strategies. In addition, once the design is completed, the controller is often implemented in hardware, and hence issues of interfacing with hardware can appear. When taken together, these different phases of control system design make the task of designing and implementing a control system quite challenging [73].

49.2. MECHATRONIC SYSTEMS

A natural stage in the evolutionary process of modern engineering design is encompassed in the area known as mechatronics [64]. The term mechatronics was coined in Japan in the 1970s [65-67]. Mechatronics is the synergistic integration of mechanical, electrical, and computer systems and has evolved over the past 30 years, leading to a new breed of intelligent products. Feedback control is an integral aspect of modern mechatronic systems. One can understand the extent that mechatronics reaches into various disciplines by considering the components that make up mechatronics [68-71]. The key elements of mechatronics are (1) physical systems modeling, (2) sensors and actuators, (3) signals and systems, (4) computers and logic systems, and (5) software and data acquisition. Feedback control encompasses aspects of all five key elements of mechatronics, but is associated primarily with the element of signals and systems, as illustrated in Figure 1.18.

Advances in computer hardware and software technology coupled with the desire to increase the performance-to-cost ratio have revolutionized engineering design. New products are being developed at the intersection of traditional disciplines of engineering, computer science, and the natural sciences. Advancements in traditional disciplines are fueling the growth of mechatronics systems by providing "enabling technologies." A critical enabling technology was the microprocessor, which has had a profound effect on the design of consumer products. We should expect continued advancements in cost-effective microprocessors and microcontrollers, novel sensors and actuators enabled by advancements in applications of microelectromechanical systems (MEMS), advanced control methodologies and real-time programming methods, networking and wireless technologies, and mature computer-aided engineering (CAE) technologies for advanced system modeling, virtual prototyping, and testing. The continued rapid development in these areas will only accelerate the pace of smart (that is, actively controlled) products.

FIGURE 1.18 The key elements of mechatronics [64].

An exciting area of mechatronic system development in which control systems will play a significant role is the area of alternative energy production and consumption. Hybrid fuel automobiles and efficient wind power generation are two examples of systems that can benefit from mechatronic design methods. In fact, the mechatronic design philosophy can be effectively illustrated by the example of the evolution of the modern automobile [64]. Before the 1960s, the radio was the only significant electronic device in an automobile. Today, automobiles have many microcontrollers, a multitude of sensors, and thousands of lines of software code. A modern automobile can no longer be classified as a strictly mechanical machine; it has been transformed into a comprehensive mechatronic system.

50. EXAMPLE 1.9 Hybrid fuel vehicles

A hybrid fuel automobile, depicted in Figure 1.19, utilizes a conventional internal combustion engine in combination with a battery (or other energy storage device such as a fuel cell or flywheel) and an electric motor to provide a propulsion system capable of doubling the fuel economy over conventional automobiles. Although these hybrid vehicles will never be zero-emission vehicles (since they have internal combustion engines), they can reduce the level of harmful emissions by one-third to one-half, and with future improvements, these emissions may be reduced even further. As stated earlier, the modern automobile requires many advanced control systems to operate. The control systems must regulate the performance of the engine, including fuel-air mixtures, valve timing, transmissions, wheel traction control, antilock brakes, and electronically controlled suspensions, among many other functions. On the hybrid fuel vehicle, there are additional control functions that must be satisfied. Especially necessary is the control of power between the internal combustion engine and the electric motor, determining power storage needs and implementing the battery charging, and preparing the vehicle for low-emission start-ups. The overall effectiveness of the hybrid fuel vehicle depends on the combination of power units that are selected (e.g., battery versus fuel cell for power storage). Ultimately, however, the control strategy that integrates the various electrical and mechanical components into a viable transportation system strongly influences the acceptability of the hybrid fuel vehicle concept in the marketplace.

FIGURE 1.19 The hybrid fuel automobile can be viewed as a mechatronic system. (Marmaduke St. John/Alamy Stock Photo.)

The second example of a mechatronic system is the advanced wind power generation system.

51. EXAMPLE 1.10 Wind power

Many nations in the world today are faced with unstable energy supplies. Additionally, the negative effects of fossil fuel utilization on the quality of our air are well documented. Many nations have an imbalance in the supply and demand of energy, consuming more than they produce. To address this imbalance, many engineers are considering developing advanced systems to access other sources of energy, such as wind energy. In fact, wind energy is one of the fastest-growing forms of energy generation in the United States and in other locations around the world. A wind farm is illustrated in Figure 1.20.

By the end of 2019, the installed global wind energy capacity was over \(650.8\text{ }GW\). In the United States, there was enough energy derived from wind to power over 27.5 million homes, according to the American Wind Energy Association. For the past 40 years, researchers have concentrated on developing technologies that work well in high wind areas (defined to be areas with a wind speed of at least \(6.7\text{ }m/s\) at a height of \(10\text{ }m\)). Most of the easily accessible high wind sites in the United States are now utilized, and improved technology must be developed to make lower wind areas more cost effective. New developments are required in materials and aerodynamics so that longer turbine rotors can operate efficiently in the lower winds, and, in a related problem, the towers that support the turbine must be made taller without increasing the overall costs. In addition, advanced controls will be required to achieve the level of efficiency required in the wind generation drive train. Newer wind turbines can operate in wind speeds less than \(1\text{ }mph\).

FIGURE 1.20 Efficient wind power generation. (Photo courtesy of NASA)

52. EXAMPLE 1.11 Wearable computers

Many contemporary control systems are embedded control systems [81]. Embedded control systems employ on-board special-purpose digital computers as integral components of the feedback loop. Many new wearable products include embedded computers. This includes wristwatches, eyeglasses, sports wristbands, e-textiles, and computer garments. Figure 1.21 illustrates the popular computer eyeglasses. For example, the glasses devices might enable physicians to access and manage data and display the data when they need it during a patient examination. One might imagine future applications where the device would monitor and track the doctor's eye movements and use that information in a feedback loop to very precisely control a medical instrument during a procedure. The utilization of wearable computers in feedback control applications is in its infancy and the possibilities are enormous.

Advances in sensors, actuators, and communication devices are leading to a new class of embedded control systems that are networked using wireless technology, thereby enabling spatially-distributed control. Embedded control system designers must be able to understand and work with various network protocols, diverse operating systems and programming languages. While the theory of systems and controls serves as the foundation for the modern control system design, the design process is rapidly expanding into a multi-disciplinary enterprise encompassing multiple engineering areas, as well as information technology and computer science.

Advances in alternate energy products, such as the hybrid automobile and efficient wind power generators, provide vivid examples of mechatronics development. There are numerous other examples of intelligent systems poised to enter our everyday life, including autonomous rovers, smart home appliances (e.g., dishwashers, vacuum cleaners, and microwave ovens), wireless network-enabled devices, "human-friendly machines" [72] that perform robot-assisted surgery, and implantable sensors and actuators.

FIGURE 1.21 Wearable computers can assist a physician in providing better healthcare delivery. (Wavebreak Media Ltd/123RF.)

52.1. GREEN ENGINEERING

Global issues such as climate change, clean water, sustainability, waste management, emissions reduction, and minimizing raw material and energy use have caused many engineers to re-think existing approaches to engineering design in critical areas. One outcome of the evolving design strategy is to consider an approach that has come to be known as "green engineering." The goal of green engineering is to design products that will minimize pollution, reduce the risk to human health, and improve the environment. The basic principles of green engineering are [86]:

  1. Engineer processes and products holistically, use systems analysis, and integrate environmental impact assessment tools.

  2. Conserve and improve natural ecosystems while protecting human health and well-being.

  3. Use life-cycle thinking in all engineering activities.

  4. Ensure that all material and energy inputs and outputs are as inherently safe and benign as possible.

  5. Minimize depletion of natural resources.

  6. Strive to prevent waste.

  7. Develop and apply engineering solutions, while being cognizant of local geography, aspirations, and cultures.

  8. Create engineering solutions beyond current or dominant technologies; improve, innovate, and invent technologies to achieve sustainability.

  9. Actively engage communities and stakeholders in development of engineering solutions.

Putting the principles of green engineering into practice leads us to a deeper understanding of the power of feedback control systems as an enabling technology. For example, in Section 1.9, we present a discussion on smart grids. Smart grids aim to deliver electrical power more reliably and efficiently in an environmentally friendly fashion. This in turn will potentially enable the large-scale use of renewable energy sources, such as wind and solar, that are naturally intermittent. Sensing and feedback are key technology areas that enable the smart grids [87]. Green engineering applications can be classified into one of five categories [88]:

  1. Environmental Monitoring

  2. Energy Storage Systems

  3. Power Quality Monitoring

  4. Solar Energy

  5. Wind Energy

As the field of green engineering matures, it is almost certain that more applications will evolve, especially as we apply the eighth principle (listed above) of green engineering to create engineering solutions beyond current or dominant technologies and improve, innovate, and invent technologies. In the subsequent chapters, we present examples from each of these areas. There is a global effort underway to reduce greenhouse gases from all sources. To accomplish this, it is necessary to improve both the quality and quantity of our environmental monitoring systems. An example is the use of wireless measurements on a cabled, robotically controlled mobile sensing platform that moves along the forest understory to measure key environmental parameters in a rain forest.

Energy storage systems are critical technologies for green engineering. There are many types of energy storage systems. The energy storage system we are most familiar with is the battery. Batteries are used to power most of the electronic devices in use today; some batteries are rechargeable and some are single-use throwaways. To adhere to green engineering principles, we would favor energy storage systems that are renewable. A very important energy storage device for green engineering systems is the fuel cell.

The problems associated with power quality monitoring are varied and can include leading and lagging power, voltage variations, and waveform harmonics. Many of the green engineering systems and components require careful monitoring of current and voltages. An interesting example would be the modeling of current transformers that are used in various capacities for measuring and monitoring within the power grid network of interconnected systems used to deliver electricity.

Efficiently converting solar energy into electricity is an engineering challenge. Two technologies for generation of electricity using sunshine are solar photovoltaic and solar thermal. With photovoltaic systems the sunlight is converted directly to electricity, and with solar thermal the sun heats water to create steam that is used to power steam engines. Designing and deploying solar photovoltaic systems for solar power generation is one approach employing green engineering principles to utilize the sun's energy to power our homes, offices, and businesses.

Power derived from wind is an important source of renewable energy around the world. Wind energy conversion to electric power is achieved by wind energy turbines connected to electric generators. The intermittency characteristic of wind energy makes the smart grid development essential to bring the energy to the power grid when it is available and to provide energy from other sources when the wind dies down or is disrupted. The irregular character of wind direction and power also results in the need for reliable, steady electric energy by using control systems on the wind turbines themselves. The goal of these control devices is to reduce the effects of wind intermittency and the effect of wind direction change.

The role of control systems in green engineering will continue to expand as the global issues facing us require ever increasing levels of automation and precision.

52.2. THE FUTURE EVOLUTION OF CONTROL SYSTEMS

The continuing goal of control systems is to provide extensive flexibility and a high level of autonomy. Two system concepts are approaching this goal by different evolutionary pathways, as illustrated in Figure 1.22. Today's industrial robot is perceived as quite autonomous: once it is programmed, further intervention is not normally required. Because of sensory limitations, these robotic systems have limited flexibility in adapting to work environment changes; improving perception is the motivation of computer vision research. The control system is very adaptable, but it relies on human supervision. Advanced robotic systems are striving for task adaptability through enhanced sensory feedback. Research areas concentrating on artificial intelligence, sensor integration, computer vision, and off-line CAD/CAM programming will make systems more universal and economical. Control systems are moving toward autonomous operation as an enhancement to human control. Research in supervisory control, human-machine interface methods, and computer database management is intended to reduce operator burden and improve operator efficiency. Many research activities are common to robotics and control systems and are aimed at reducing implementation cost and expanding the realm of application. These include improved communication methods and advanced programming languages.

FIGURE 1.22 Evolution of control systems and autonomy.

The easing of human labor by technology, a process that began in prehistory, is entering a new stage. The acceleration in the pace of technological innovation inaugurated by the Industrial Revolution has until recently resulted mainly in the displacement of human muscle power from the tasks of production. The current revolution in computer technology is causing an equally momentous social change, the expansion of information gathering and information processing as computers extend the reach of the human brain [16].

Control systems are used to achieve (1) increased productivity and (2) improved performance of a device or system. Automation is used to improve productivity and obtain high-quality products. Automation is the automatic operation or control of a process, device, or system. We use automatic control of machines and processes to produce a product reliably and with high precision [28]. With the demand for flexible, custom production, a need for flexible automation and robotics is growing [17, 25].

The theory, practice, and application of automatic control is a large, exciting, and extremely useful engineering discipline. One can readily understand the motivation for a study of modern control systems.

52.3. DESIGN EXAMPLES

In this section we present illustrative design examples. This is a pattern that we will follow in all subsequent chapters. Each chapter will contain a number of interesting examples in a special section entitled Design Examples meant to highlight the main topics of the chapter. At least one example among those presented in the Design Examples section will be a more detailed problem and solution that demonstrates one or more of the steps in the design process shown in Figure 1.17. In the first example, we discuss the development of the smart grid as a concept to deliver electrical power more reliably and efficiently as part of a strategy to provide a more environmentally friendly energy delivery system. The smart grid will enable the large-scale use of renewable energy sources, such as wind and solar, that depend on naturally intermittent phenomena to generate power. Providing clean energy is an engineering challenge that must necessarily include active feedback control systems, sensors, and actuators. In the second example presented here, a rotating disk speed control illustrates the concept of open-loop and closed-loop feedback control. The third example is an insulin delivery control system in which we determine the design goals, the variables to control, and a preliminary closed-loop system configuration.

53. EXAMPLE 1.12 Smart grid control systems

A smart grid is as much a concept as it is a physical system. In essence, the concept is to deliver power more reliably and efficiently while remaining environmentally friendly, economical, and safe [89, 90]. A smart grid can be viewed as a system comprised of hardware and software that routes power more reliably and efficiently to homes, businesses, schools, and other users of power. One view of the smart grid is illustrated schematically in Figure 1.23. Smart grids can be national or local in scope. One can even consider home smart grids (or microgrids). In fact, smart grids encompass a wide and rich field of investigation. As we will find, control systems play a key role in smart grids at all levels.

One interesting aspect of the smart grid is real-time demand side management requiring a two-way flow of information between the user and the power generation system [91]. For example, smart meters are used to measure electricity use in the home and office. These sensors transmit data to utilities and allow the utility to transmit control signals back to a home or building. These smart meters can control and turn on or off home and office appliances and devices. Smart home-energy devices enable the homeowners to control their usage and respond to price changes at peak-use times.

The five key technologies required to implement a successful modern smart grid include (i) integrated communications, (ii) sensing and measurements, (iii) advanced components, (iv) advanced control methods, and (v) improved interfaces and decision support [87]. Two of the five key technologies fall under the general category of control systems, namely (ii) sensing and measurements and (iii) advanced control methods. It is evident that control systems will play a key role in realizing the modern smart grid. The potential impact of the smart grid on delivery of power is very high. Currently, the total U.S. grid includes 9,200 units generating over 1 million MW of capacity over 300,000 miles of transmission lines. A smart grid will use sensors, controllers, the Internet, and communication systems to improve the reliability and efficiency of the grid. It is estimated that deployment of smart grids could reduce emissions of \({CO}_{2}\) by 12 percent by 2030 [91].

FIGURE 1.23 Smart grids are distribution networks that measure and control usage.

Among the elements of the smart grid are the distribution networks that measure and control usage. In a smart grid, the power generation depends on the market situation (supply/demand and cost) and the power source available (wind, coal, nuclear, geothermal, biomass, etc.). In fact, smart grid customers with solar panels or wind turbines can sell their excess energy to the grid and get paid as microgenerators [92]. In the subsequent chapters, we discuss various control problems associated with pointing solar panels to the sun and with prescribing the pitch of the wind turbine blades to manage the rotor speed, thereby controlling the power output.

Transmission of power is called power flow and the improved control of power will increase its security and efficiency. Transmission lines have inductive, capacitive, and resistive effects that result in dynamic impacts or disturbances. The smart grid must anticipate and respond to system disturbances rapidly. This is referred to as self-healing. In other words, a smart grid should be capable of managing significant disturbances occurring on very short time scales. To accomplish this, the self-healing process is constructed around the idea of a feedback control system where self-assessments are used to detect and analyze disturbances so that corrective action can be applied to restore the grid. This requires sensing and measurements to provide information to the control systems. One of the benefits of using smart grids is that renewable energy sources that depend on intermittent natural phenomena (such as wind and sunshine) can potentially be utilized more efficiently by allowing for load shedding when the wind dies out or clouds block the sunshine.

Feedback control systems will play an increasingly important role in the development of smart grids as deployment proceeds. It may be interesting to recall the various topics discussed in this section in the context of control systems as each chapter in this textbook unfolds new methods of control system design and analysis.

54. EXAMPLE 1.13 Rotating disk speed control

Many modern devices employ a rotating disk held at a constant speed. For example, spinning-disk confocal microscopes enable live-cell imaging in biomedical applications. Our goal is to design a system for rotating disk speed control that will ensure that the actual speed of rotation is within a specified percentage of the desired speed [40, 43]. We will consider a system without feedback and a system with feedback.

To obtain disk rotation, we will select a DC motor as the actuator because it provides a speed proportional to the applied motor voltage. For the input voltage to the motor, we will select an amplifier that can provide the required power.

The open-loop system (without feedback) is shown in Figure 1.24(a). This system uses a battery source to provide a voltage that is proportional to the desired speed. This voltage is amplified and applied to the motor. The block diagram of the open-loop system identifying the controller, actuator, and process is shown in Figure 1.24(b).

To obtain a feedback system, we need to select a sensor. One useful sensor is a tachometer that provides an output voltage proportional to the speed of its shaft. Thus the closed-loop feedback system takes the form shown in Figure 1.25(a). The block diagram model of the feedback system is shown in Figure 1.25(b). The error voltage is generated by the difference between the input voltage and the tachometer voltage.

We expect the feedback system of Figure 1.25 to be superior to the open-loop system of Figure 1.24 because the feedback system will respond to errors and act to reduce them. With precision components, we could expect to reduce the error of the feedback system to one-hundredth of the error of the open-loop system.

FIGURE 1.24 (a) Open-loop (without feedback) control of the speed of a rotating disk. (b) Block diagram model.

FIGURE 1.25 (a) Closed-loop control of the speed of a rotating disk. (b) Block diagram model.
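To make the comparison concrete, the short sketch below computes the percent speed error of the two configurations when the motor gain drifts from its nominal value. The gains, the unity tachometer gain, and the desired speed are arbitrary values assumed for illustration only; the closed-loop expression anticipates the block diagram algebra developed in Chapter 2 and is a sketch of the idea, not a design taken from this example.

```python
# Illustrative comparison of open-loop vs. closed-loop disk speed control.
# All gains are assumed values chosen for demonstration; Kt = 1 models an
# ideal tachometer. In both configurations the command is scaled so that
# the output equals the desired speed when the motor gain is nominal.
Ka = 100.0          # amplifier gain (assumed)
Km_nominal = 1.0    # nominal motor gain, speed per volt (assumed)
Km_actual = 0.8     # motor gain after a 20 percent drift (assumed)
Kt = 1.0            # tachometer gain (assumed)
w_desired = 1000.0  # desired disk speed, rpm

# Open loop: the battery voltage is fixed using the nominal motor gain.
v_command = w_desired / (Ka * Km_nominal)
w_open = Ka * Km_actual * v_command

# Closed loop: speed = Ka*Km / (1 + Ka*Km*Kt) * reference.
reference = w_desired * (1.0 + Ka * Km_nominal * Kt) / (Ka * Km_nominal)
w_closed = Ka * Km_actual / (1.0 + Ka * Km_actual * Kt) * reference

print(f"open-loop speed error:   {100.0 * (w_desired - w_open) / w_desired:.2f} %")
print(f"closed-loop speed error: {100.0 * (w_desired - w_closed) / w_desired:.2f} %")
```

With these assumed numbers, a 20 percent drift in the motor gain produces a 20 percent open-loop speed error but only about a 0.25 percent closed-loop error, which is the kind of improvement the example describes.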

55. EXAMPLE 1.14 Insulin delivery control system

Control systems have been utilized in the biomedical field to create automatic drug-delivery systems implanted in patients [29-31]. Automatic systems can be used to regulate blood pressure, blood sugar level, and heart rate. A common application of control engineering is in the field of drug delivery, in which mathematical models of the dose-effect relationship of the drugs are used. A drug-delivery system implanted in the body uses a closed-loop system, since miniaturized glucose sensors are now available. The best solutions rely on individually programmable, pocket-sized insulin pumps that can deliver insulin.

The blood glucose and insulin concentrations for a healthy person are shown in Figure 1.26. The system must provide the insulin from a reservoir implanted within the diabetic person. Therefore, the control goal is:

56. Control Goal

Design a system to regulate the blood sugar concentration of a diabetic by controlled dispensing of insulin.

Referring to Figure 1.26, the next step in the design process is to define the variable to be controlled. Associated with the control goal we can define the variable to be controlled to be:

57. Variable to Be Controlled
Blood glucose concentration

FIGURE 1.26 The blood glucose and insulin levels for a healthy person.

FIGURE 1.27 (a) Open-loop (without feedback) control and (b) closed-loop control of blood glucose.

In subsequent chapters, we will have the tools to quantitatively describe the control design specifications using a variety of steady-state performance specifications and transient response specifications, both in the time domain and in the frequency domain. At this point, the control design specifications will be qualitative and imprecise. In that regard, for the problem at hand, we can state the design specification as:

58. Control Design Specifications

Provide a blood glucose level for the diabetic that closely approximates (tracks) the glucose level of a healthy person.

Given the design goals, variables to be controlled, and control design specifications, we can now propose a preliminary system configuration. A closed-loop system uses a fully implantable glucose sensor and miniature motor pump to regulate the insulin delivery rate as shown in Figure 1.27. The feedback control system uses a sensor to measure the actual glucose level and compare that level with the desired level, thus turning the motor pump on when it is required.
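To illustrate the closed-loop idea of Figure 1.27(b), here is a toy discrete-time sketch of an on-off (bang-bang) insulin pump driven by the measured glucose error. The first-order glucose model, the threshold, the doses, and the rates are invented purely for illustration and are not physiological values; quantitative design of such a controller is the subject of later chapters.

```python
# Toy sketch of closed-loop glucose regulation in the spirit of Figure 1.27(b).
# The process model, threshold, and rates below are invented for illustration
# only; they are not physiological values.
dt = 1.0                 # minutes per simulation step
glucose = 180.0          # measured blood glucose, mg/dL (assumed initial value)
desired = 100.0          # desired glucose level, mg/dL (assumed)

for step in range(300):
    error = glucose - desired          # sensor reading compared with desired level
    pump_on = error > 10.0             # simple on/off pump decision with a deadband
    insulin_rate = 0.8 if pump_on else 0.0
    # toy process: glucose drifts upward slowly and is pulled down by insulin
    glucose += dt * (0.3 - insulin_rate)

print(f"glucose after {300 * dt:.0f} minutes: {glucose:.1f} mg/dL")
```

Because the pump is simply switched on and off around a deadband, the simulated level settles near the threshold rather than exactly at the desired value, which hints at why later chapters develop more refined controllers and quantitative performance specifications.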


58.1. SEQUENTIAL DESIGN EXAMPLE: DISK DRIVE READ SYSTEM

We will use the design process of Figure 1.17 in each chapter to identify the steps that we are accomplishing. For example, in Chapter 1 we (1) identify the control goal, (2) identify the variables to control, (3) write the initial specifications for the variables, and (4) establish the preliminary system configuration.

Information can be readily and efficiently stored on magnetic disks. Hard disk drives (HDDs) are used in notebook computers and larger computers of all sizes, and are essentially all standardized as defined by ANSI standards. Even with the advent of advanced storage technologies, such as cloud storage, flash memory, and solid-state drives (SSDs), the HDD remains an important storage medium. The role of the HDD is changing from fast, primary storage to slow storage with enormous capacity [50]. Installations of SSD units are surpassing those of HDD units for the first time. SSD units are known to have much better performance than HDDs; however, the cost per gigabyte of SSDs is about six times that of HDDs, and that ratio is expected to remain until 2030. Among the many reasons to keep our interest in HDD units is that about \(90\%\) of the required capacity for cloud computing applications is anticipated to be provided by HDDs for the foreseeable future [51, 62].

In the past, disk drive designers have concentrated on increasing data density and data access times. Designers are now considering employing disk drives to perform tasks historically delegated to central processing units (CPUs), thereby leading to improvements in the computing environment [63]. Three areas of "intelligence" under investigation include off-line error recovery, disk drive failure warnings, and storing data across multiple disk drives.

Consider the basic diagram of a disk drive shown in Figure 1.28. The goal of the disk drive reader device is to position the reader head to read the data stored on a track on the disk. The variable to accurately control is the position of the reader head (mounted on a slider device). The disk rotates at a speed between 1800 and 10,000 rpm, and the head "flies" above the disk at a distance of less than \(100\text{ }nm\). The initial specification for the position accuracy is \(1\mu m\). Furthermore, we plan to be able to move the head from track a to track b within \(50\text{ }ms\), if possible. Thus, we establish an initial system configuration as shown in Figure 1.29. This proposed closed-loop system uses a motor to actuate (move) the arm to the desired location on the disk. We will consider the design of the disk drive further in Chapter 2.

FIGURE 1.28 (a) A disk drive (Ragnarock/Shutterstock). (b) Diagram of a disk drive.

FIGURE 1.29 Closed-loop control system for disk drive.

58.2. SUMMARY

In this chapter, we discussed open- and closed-loop feedback control systems. Examples of control systems through the course of history were presented to motivate and connect the subject to the past. In terms of contemporary issues, key areas of application were discussed, including humanoid robots, unmanned aerial vehicles, wind energy, hybrid automobiles, and embedded control. The central role of controls in mechatronics was discussed. Mechatronics is the synergistic integration of mechanical, electrical, and computer systems. Finally, the design process was presented in a structured form and included the following steps: the establishment of goals and variables to be controlled, definition of specifications, system definition, modeling, and analysis. The iterative nature of design allows us to handle the design gap effectively while accomplishing necessary trade-offs in complexity, performance, and cost.

59. SKILLS CHECK

In this section, we provide three sets of problems to test your knowledge: True or False, Multiple Choice, and Word Match. To obtain direct feedback, check your answers with the answer key provided at the conclusion of the end-of-chapter problems. In the following True or False and Multiple Choice problems, circle the correct answer.

  1. The flyball governor is generally agreed to be the first automatic feedback controller used in an industrial process.

True or False

  2. A closed-loop control system uses a measurement of the output and feedback of the signal to compare it with the desired input.

True or False

  3. Engineering synthesis and engineering analysis are the same.

True or False

  4. The block diagram in Figure 1.30 is an example of a closed-loop feedback system.

True or False

FIGURE 1.30 System with control device, actuator, and process.

  5. A multivariable system is a system with more than one input and/or more than one output.

True or False

  6. Early applications of feedback control include which of the following?
    a. Water clock of Ktesibios
    b. Watt's flyball governor
    c. Drebbel's temperature regulator
    d. All of the above

  7. Important modern applications of control systems include which of the following?
    a. Safe automobiles
    b. Autonomous robots
    c. Automated manufacturing
    d. All of the above

  8. Complete the following sentence:

Control of an industrial process by automatic rather than manual means is often called
    a. negative feedback
    b. automation
    c. a design gap
    d. a specification

  9. Complete the following sentence: __________ are intrinsic in the progression from an initial concept to the final product.
    a. Closed-loop feedback systems
    b. Flyball governors
    c. Design gaps
    d. Open-loop control systems

  10. Complete the following sentence:

Control engineers are concerned with understanding and controlling segments of their environments, often called
    a. systems
    b. design synthesis
    c. trade-offs
    d. risk

  11. Early pioneers in the development of systems and control theory include:
    a. H. Nyquist
    b. H. W. Bode
    c. H. S. Black
    d. All of the above

  12. Complete the following sentence:

An open-loop control system utilizes an actuating device to control a process
    a. without using feedback
    b. using feedback
    c. in engineering design
    d. in engineering synthesis

  13. A system with more than one input variable or more than one output variable is known by what name?
    a. Closed-loop feedback system
    b. Open-loop feedback system
    c. Multivariable control system
    d. Robust control system

  14. Control engineering is applicable to which fields of engineering?
    a. Mechanical and aerospace
    b. Electrical and biomedical
    c. Chemical and environmental
    d. All of the above

  15. Closed-loop control systems should have which of the following properties:
    a. Good regulation against disturbances
    b. Desirable responses to commands
    c. Low sensitivity to changes in the plant parameters
    d. All of the above

In the following Word Match problems, match the term with the definition by writing the correct letter in the space provided.

a. Optimization

b. Risk

c. Complexity of design

d. System

e. Design

f. Closed-loop feedback control system

g. Flyball governor

h. Specifications

i. Synthesis

j. Open-loop control system

k. Feedback signal

l. Robot

m. Multivariable control system
The output signal is fed back so that it subtracts from the input signal.

A system that uses a measurement of the output and compares it with the desired output.

A set of prescribed performance criteria.

A measure of the output of the system used for feedback to control the system.

A system with more than one input variable or more than one output variable.

The result of making a judgment about how much compromise must be made between conflicting criteria.

An interconnection of elements and devices for a desired purpose.

A reprogrammable, multifunctional manipulator used for a variety of tasks.

A gap between the complex physical system and the design model intrinsic to the progression from the initial concept to the final product.

The intricate pattern of interwoven parts and knowledge required.

The ratio of physical output to physical input of an industrial process.

The process of designing a technical system.

A system that utilizes a device to control the process without using feedback.
n. Design gap
o. Positive feedback
p. Negative feedback
q. Trade-off
r. Productivity
s. Engineering design
t. Process
u. Control system
v. Automation

Uncertainties embodied in the unintended consequences of a design.

The process of conceiving or inventing the forms, parts, and details of a system to achieve a specified purpose.

The device, plant, or system under control.

The output signal is fed back so that it adds to the input signal.

An interconnection of components forming a system configuration that will provide a desired response.

The control of a process by automatic means.

The adjustment of the parameters to achieve the most favorable or advantageous design.

The process by which new physical configurations are created.

A mechanical device for controlling the speed of a steam engine.

60. EXERCISES

Exercises are straightforward applications of the concepts of the chapter.

The following systems can be described by a block diagram showing the cause-effect relationship and the feedback (if present). Identify the function of each block and the desired input variable, output variable, and measured variable. Use Figure 1.3 as a model where appropriate.

E1.1 Describe typical sensors that can measure each of the following [93]:

a. Linear position
b. Velocity (or speed)
c. Nongravitational acceleration
d. Rotational position (or angle)
e. Rotational velocity
f. Temperature
g. Pressure
h. Liquid (or gas) flow rate
i. Torque
j. Force
k. Earth's magnetic field
l. Heart rate

FIGURE E1.3 Partial block diagram of an optical source.

E1.2 Describe typical actuators that can convert the following [93]:
a. Mechanical energy to fluidic energy
b. Mechanical energy to electrical energy
c. Electrical energy to mechanical energy
d. Kinetic energy to electrical energy
e. Electrical energy to heat

E1.3 A CD player laser beam focusing system has an array of photodiodes that is used to determine if the laser beam is in focus. The laser beam focus is controlled by an input current to a lens focusing motor. A microprocessor controls the input current to the motor by comparing the output from the array of photodiodes. Complete the block diagram representing this closed-loop control system shown in Figure E1.3, identifying the output, input, and measured variables, and the control device.

E1.4 A surgeon uses a control system, that is, a robotic surgical system, to perform surgery remotely. Sketch a block diagram to illustrate this feedback system.

E1.5 Fly-fishing is a sport that challenges the person to cast a small feathery fly using a light rod and line. The goal is to place the fly accurately and lightly on the distant surface of the stream [59]. Describe the fly-casting process and a model of this process.

E1.6 An autofocus camera will adjust the distance of the lens from the film by using a beam of infrared or ultrasound to determine the distance to the subject [42]. Sketch a block diagram of this control system, and briefly explain its operation.

E1.7 Because a sailboat cannot sail directly into the wind, and traveling straight downwind is usually slow, the shortest sailing distance is rarely a straight line. Thus sailboats tack upwind (the familiar zigzag course) and jibe downwind. A tactician's decision of when to tack and where to go can determine the outcome of a race.

Describe the process of tacking a sailboat as the wind shifts direction. Sketch a block diagram depicting this process.

E1.8 An autonomous self-driving vehicle can sense its environment and navigate without human input. Describe a simplified feedback control system for a guidance system that ensures the vehicle navigates its surroundings safely.

E1.9 Describe the block diagram of the control system of a skateboard with a human rider.

E1.10 Describe the process of human biofeedback used to regulate factors such as pain or body temperature. Biofeedback is a technique whereby a human can, with some success, consciously regulate pulse, reaction to pain, and body temperature.

E1.11 Future advanced commercial aircraft will be E-enabled. This will allow the aircraft to take advantage of continuing improvements in computer power and network growth. Aircraft can continuously communicate their location, speed, and critical health parameters to ground controllers, and gather and transmit local meteorological data. Sketch a block diagram showing how the meteorological data from multiple aircraft can be transmitted to the ground, combined using ground-based powerful networked computers to create an accurate weather situational awareness, and then transmitted back to the aircraft for optimal routing.

E1.12 Unmanned aerial vehicles (UAVs) are being developed to operate in the air autonomously for long periods of time. By autonomous, we mean that there is no interaction with human ground controllers. Sketch a block diagram of an autonomous UAV that is tasked for crop monitoring using aerial photography. The UAV must photograph and transmit the entire land area by flying a pre-specified trajectory as accurately as possible.

E1.13 Consider the inverted pendulum shown in Figure E1.13. Sketch the block diagram of a feedback control system. Identify the process, sensor, actuator, and controller. The objective is to keep the pendulum in the upright position, that is, to keep \(\theta = 0\), in the presence of disturbances.

FIGURE E1.13 Inverted pendulum control.

E1.14 Sketch a block diagram of a person playing a video game. Suppose that the input device is a joystick and the game is being played on a desktop computer.

E1.15 For people with diabetes, keeping track of and maintaining blood glucose at safe levels is very important. Continuous blood glucose monitors and readers are available that enable a measurement of blood glucose with a painless scan rather than a fingerprick, as illustrated in Figure E1.15. Sketch a block diagram with a continuous blood glucose monitor and a reader, and describe the possible control actions they might implement as they manage a high blood glucose reading.

FIGURE E1.15 A continuous blood glucose monitoring system

61. PROBLEMS

Problems require extending the concepts of this chapter to new situations.

The following systems may be described by a block diagram showing the cause-effect relationship and the feedback (if present). Each block should describe its function. Use Figure 1.3 as a model where appropriate.

P1.1 Automobiles have variable windshield wiper speed settings for different rain intensity. Sketch a block diagram of a wiper system where the driver sets the wiper speed. Identify the function of each element of the variable speed control of the wiper system.

P1.2 Control systems can use a human operator as part of a closed-loop control system. Sketch the block diagram of the valve control system shown in Figure P1.2.

P1.3 In a chemical process control system, it is valuable to control the chemical composition of the product. To do so, a measurement of the composition can be obtained by using an infrared stream analyzer, as shown in Figure P1.3. The valve on the additive stream may be controlled. Complete the control feedback loop, and sketch a block diagram describing the operation of the control loop.

FIGURE P1.2 Fluid-flow control.

P1.4 The accurate control of a nuclear reactor is important for power system generators. Assuming the number of neutrons present is proportional to the power level, an ionization chamber is used to measure the power level. The current \(i_{O}\) is proportional to the power level. The position of the graphite control rods moderates the power level. Complete the control system of the nuclear reactor shown in Figure P1.4 and sketch the block diagram describing the operation of the feedback control loop.

FIGURE P1.3 Chemical composition control.

FIGURE P1.4 Nuclear reactor control.

P1.5 A light-seeking control system, used to track the sun, is shown in Figure P1.5. The output shaft, driven by the motor through a worm reduction gear, has a bracket attached on which are mounted two photocells. Complete the closed-loop system so that the system follows the light source.

P1.6 Feedback systems do not always involve negative feedback. Economic inflation, which is evidenced by continually rising prices, is a positive feedback system. A positive feedback control system, as shown in Figure P1.6, adds the feedback signal to the input signal, and the resulting signal is used as the input to the process. A simple model of the price-wage inflationary spiral is shown in Figure P1.6. Add additional feedback loops, such as legislative control or control of the tax rate, to stabilize the system. It is assumed that an increase in workers' salaries, after some time delay, results in an increase in prices. Under what conditions could prices be stabilized by falsifying or delaying the availability of cost-of-living data? How would a national wage and price economic guideline program affect the feedback system?

P1.7 The story is told about the sergeant who stopped at the jewelry store every morning at nine o'clock and compared and reset his watch with the chronometer in the window. Finally, one day the sergeant went into the store and complimented the owner on the accuracy of the chronometer.

"Is it set according to time signals from Arlington?" asked the sergeant.

"No," said the owner, "I set it by the five o'clock cannon fired from the fort each afternoon. Tell me,

FIGURE P1.6 Positive feedback.

Sergeant, why do you stop every day and check your watch?" fort!"

The sergeant replied, "I'm the gunner at the

Is the feedback prevalent in this case positive or negative? The jeweler's chronometer loses two minutes each 24-hour period and the sergeant's watch loses three minutes during each eight hours. What is the net time error of the cannon at the fort after 12 days?

P1.8 In a public address system, when the microphone is placed too close to the loudspeaker, a positive feedback system is inadvertently created. The audio input from the microphone is amplified, which comes out through the loudspeaker. This audio output is received by the microphone again, which gets amplified further, and comes out through the loudspeaker again. This positive loop gain is known as audio feedback or the Larsen effect, and causes the system to overload, resulting in a high-pitched sound. Construct the corresponding feedback model, and identify each block of the model.

FIGURE P1.5 A photocell is mounted in each tube. The light reaching each cell is the same in both only when the light source is exactly in the middle as shown.

FIGURE P1.9 Heart-rate control.

P1.9 Models of physiological control systems are valuable aids to the medical profession. A model of the heart-rate control system is shown in Figure P1.9 [23, 48]. This model includes the processing of the nerve signals by the brain. The heart-rate control system is, in fact, a multivariable system, and the variables \(x, y, w, v, z\), and \(u\) are vector variables. In other words, the variable \(x\) represents many heart variables \(x_{1}, x_{2}, \ldots, x_{n}\). Examine the model of the heart-rate control system and add or delete blocks, if necessary. Determine a control system model of one of the following physiological control systems:

  1. Respiratory control system

  2. Adrenaline control system

  3. Human arm control system

  4. Eye control system

  5. Pancreas and the blood-sugar-level control system

  6. Circulatory system

P1.10 The role of air traffic control systems is increasing as airplane traffic increases at busy airports. Engineers are developing air traffic control systems and collision avoidance systems using the Global Positioning System (GPS) navigation satellites [34, 55]. GPS allows each aircraft to know its position in the airspace landing corridor very precisely. Sketch a block diagram depicting how an air traffic controller might use GPS for aircraft collision avoidance.

P1.11 Automatic control of water level using a float level was used in the Middle East for a water clock [1, 11]. The water clock (Figure P1.11) was used from sometime before Christ until the 17th century. Discuss the operation of the water clock, and establish how the float provides a feedback control that maintains the accuracy of the clock. Sketch a block diagram of the feedback system.

P1.12 An automatic turning gear for windmills was invented by Meikle in about 1750 [1, 11]. The fantail gear shown in Figure P1.12 automatically turns the windmill into the wind. The fantail windmill at a right angle to the mainsail is used to turn the turret. The gear ratio is of the order of 3000 to 1. Discuss the operation of the windmill, and establish the feedback operation that maintains the main sails into the wind.

FIGURE P1.11 Water clock. (From Newton, Gould, and Kaiser, Analytical Design of Linear Feedback Controls. Wiley, New York, 1957, with permission.)

P1.13 A common example of a two-input control system is an automobile power transmission system, with a gear shifter and an accelerator pedal. The objective is to obtain (1) a desired speed and (2) a desired torque. Sketch a block diagram of the closed-loop control system.

FIGURE P1.12 Automatic turning gear for windmills. (From Newton, Gould, and Kaiser, Analytical Design of Linear Feedback Controls. Wiley, New York, 1957, with permission.)

P1.14 Adam Smith (1723-1790) discussed the issue of free competition between the participants of an economy in his book Wealth of Nations. It may be said that Smith employed social feedback mechanisms to explain his theories [41]. Smith suggests that (1) the available workers as a whole compare the various possible employments and enter that one offering the greatest rewards, and (2) in any employment the rewards diminish as the number of competing workers rises. Let \(r =\) total of rewards averaged over all trades, \(c =\) total of rewards in a particular trade, and \(q =\) influx of workers into the specific trade. Sketch a feedback system to represent this system.

P1.15 Small computers are used as part of a start-stop system in automobiles to control emissions and obtain improved gas mileage. A computer-controlled start-stop system that automatically stops and restarts an engine to reduce the time the engine idles could improve gas mileage and reduce unwanted polluting emissions significantly. Sketch a block diagram for such a system for an automobile.

P1.16 All humans have experienced a fever associated with an illness. A fever is related to the changing of the control input in the body's thermostat. This thermostat, within the brain, normally regulates temperature near \(98^{\circ}F\) in spite of external temperatures ranging from \(0^{\circ}F\) to \(100^{\circ}F\) or more. For a fever, the input, or desired, temperature is increased. Even to many scientists, it often comes as a surprise to learn that fever does not indicate something wrong with body temperature control but rather well-contrived regulation at an elevated level of desired input. Sketch a block diagram of the temperature control system and explain how aspirin will lower a fever.

P1.17 Baseball players use feedback to judge a fly ball and to hit a pitch [35]. Describe a method used by a batter to judge the location of a pitch so that he can have the bat in the proper position to hit the ball.

FIGURE P1.18 Pressure regulator.

P1.18 A cutaway view of a commonly used pressure regulator is shown in Figure P1.18. The desired pressure is set by turning a calibrated screw. This compresses the spring and sets up a force that opposes the upward motion of the diaphragm. The bottom side of the diaphragm is exposed to the water pressure that is to be controlled. Thus the motion of the diaphragm is an indication of the pressure difference between the desired and the actual pressures. It acts like a comparator. The valve is connected to the diaphragm and moves according to the pressure difference until it reaches a position in which the difference is zero. Sketch a block diagram showing the control system with the output pressure as the regulated variable.

P1.19 Ichiro Masaki of General Motors has patented a system that automatically adjusts a car's speed to keep a safe distance from vehicles in front. Using a video camera, the system detects and stores a reference image of the car in front. It then compares this image with a stream of incoming live images as the two cars move down the highway and calculates the distance. Masaki suggests that the system could control steering as well as speed, allowing drivers to lock on to the car ahead and get a "computerized tow." Sketch a block diagram for the control system.

P1.20 A high-performance race car with an adjustable wing (airfoil) is shown in Figure P1.20. Develop a block diagram describing the ability of the airfoil to keep a constant road adhesion between the car's tires and the race track surface. Why is it important to maintain good road adhesion?

FIGURE P1.20 A high-performance race car with an adjustable wing.

P1.21 The potential of employing two or more helicopters for transporting payloads that are too heavy for a single helicopter is a well-addressed issue in the civil and military rotorcraft design arenas [37]. Overall requirements can be satisfied more efficiently with a smaller aircraft by using multilift for infrequent peak demands. Hence the principal motivation for using multilift can be attributed to the promise of obtaining increased productivity without having to manufacture larger and more expensive helicopters. A specific case of a multilift arrangement where two helicopters jointly transport payloads has been named twin lift. Figure P1.21 shows a typical "two-point pendant" twin lift configuration in the lateral/vertical plane.

Develop the block diagram describing the pilots' action, the position of each helicopter, and the position of the load.

P1.22 Engineers want to design a control system that will allow a building or other structure to react to the force of an earthquake much as a human would. The structure would yield to the force, but only so much, before developing strength to push back [47]. Develop a block diagram of a control system to reduce the effect of an earthquake force.

FIGURE P1.21 Two helicopters used to lift and move a large load.

P1.23 Engineers at the Science University of Tokyo are developing a robot with a humanlike face [52]. The robot can display facial expressions, so that it can work cooperatively with human workers. Sketch a block diagram for a facial expression control system of your own design.

P1.24 An innovation for an intermittent automobile windshield wiper is the concept of adjusting its wiping cycle according to the intensity of the rain [54]. Sketch a block diagram of the wiper control system.

P1.25 In the past 50 years, over 20,000 metric tons of hardware have been placed in Earth's orbit. During the same time span, over 15,000 metric tons of hardware returned to Earth. The objects remaining in Earth's orbit range in size from large operational spacecraft to tiny flecks of paint. There are over 500,000 objects in Earth's orbit \(1\text{ }cm\) or larger in size. About 20,000 of the space objects are currently tracked from ground stations on the Earth. Space traffic control [61] is becoming an important issue, especially for commercial satellite companies that plan to "fly" their satellites through orbit altitudes where other satellites are operating, and through areas where high concentrations of space debris may exist. Sketch a block diagram of a space traffic control system that commercial companies might use to keep their satellites safe from collisions while operating in space.

P1.26 NASA is developing a compact rover designed to transmit data from the surface of an asteroid back to Earth, as illustrated in Figure P1.26. The rover will use a camera to take panoramic shots of the asteroid surface. The rover can position itself so that the camera can be pointed straight down at the surface or straight up at the sky. Sketch a block diagram illustrating how the microrover can be positioned to point the camera in the desired direction. Assume that the pointing commands are relayed from the Earth to the microrover and that the position of the camera is measured and relayed back to Earth.

FIGURE P1.26 Microrover designed to explore an asteroid. (Photo courtesy of NASA.)

P1.27 A direct methanol fuel cell is an electrochemical device that converts a methanol-water solution to electricity [75]. Like rechargeable batteries, fuel cells directly convert chemicals to energy, and the two are often compared. However, one significant difference between rechargeable batteries and direct methanol fuel cells is that, by adding more methanol-water solution, the fuel cells recharge instantly. Sketch a block diagram of the direct methanol fuel cell recharging system that uses feedback to continuously monitor and recharge the fuel cell.

62. ADVANCED PROBLEMS

Advanced problems represent problems of increasing complexity.

AP1.1 The development of robotic microsurgery devices will have major implications on delicate eye and brain surgical procedures. One such device is shown in Figure AP1.1. Haptic (force and tactile) feedback can greatly help a surgeon by mimicking the physical interaction that takes place between the microsurgery robotic manipulator and human tissue. Sketch a block diagram for a haptic and tactile subsystem with a microsurgical device in the loop being operated by a surgeon. Assume that the force of the end-effector on the microsurgical device can be measured and is available for feedback.

AP1.2 Advanced wind energy systems are being installed in many locations throughout the world as a way for nations to deal with rising fuel prices and energy shortages, and to reduce the negative effects of fossil fuel utilization on the quality of the air. The modern windmill can be viewed as a mechatronic system. Think about how an advanced wind energy system would be designed as a mechatronic system. List the various components of the wind energy system and associate each component with one of the five elements of a mechatronic system: physical system modeling, signals and systems, computers and logic systems, software and data acquisition, and sensors and actuators.

FIGURE AP1.1 Microsurgery robotic manipulator. (Photo courtesy of NASA.)

AP1.3 Many modern luxury automobiles have an advanced driver-assistance system (ADAS) option. The collision avoidance feature of an ADAS uses radars to detect nearby obstacles and notify drivers of potential collisions. Figure AP1.3 illustrates the collision avoidance feature of an ADAS. Sketch a block diagram of this ADAS feedback control system. In your own words, describe the control problem and the challenges facing the designers of the control system.

FIGURE AP1.3 A collision avoidance feature of an ADAS system.

AP1.4 Adaptive optics has applications to a wide variety of key control problems, including imaging of the human retina and large-scale, ground-based astronomical observations [98]. In both cases, the approach is to use a wavefront sensor to measure distortions in the incoming light and to actively control and compensate for the errors induced by the distortions. Consider the case of an extremely large ground-based optical telescope, possibly an optical telescope up to 100 meters in diameter. The telescope components include deformable mirrors actuated by micro-electro-mechanical (MEMS) devices and sensors to measure the distortion of the incoming light as it passes through the turbulent and uncertain atmosphere of Earth.

There is at least one major technological barrier to constructing a 100-m optical telescope. The numerical computations associated with the control and compensation of the extremely large optical telescope can be on the order of \(10^{10}\) calculations each \(1.5\text{ }ms\). If we assume that the computational capability is available, then one can consider the design of a feedback control system that uses the available computational power. We can consider many control issues associated with the large-scale optical telescope. Some of the controls problems that might be considered include controlling the pointing of the main dish, controlling the individual deformable mirrors, and attenuating the deformation of the dish due to changes in outside temperature.

Describe a closed-loop feedback control system to control one of the deformable mirrors to compensate for the distortions in the incoming light. Figure AP1.4 shows a diagram of the telescope with a single deformable mirror. Suppose that the mirror has an associated MEMS actuator that can be used to vary the orientation. Also, assume that the wavefront sensor and associated algorithms provide the desired configuration of the deformable mirror to the feedback control system.

FIGURE AP1.4 Extremely large optical telescope with deformable mirrors for atmosphere compensation.

AP1.5 The Burj Dubai is the tallest building in the world [94]. The building, shown in Figure AP1.5, stands at over \(800\text{ }m\) with more than 160 stories. There are 57 elevators servicing this tallest free-standing structure in the world. Traveling at up to \(10\text{ }m/s\), the elevators have the world's longest travel distance from lowest to highest stop. Describe a closed-loop feedback control system that guides an elevator of a high-rise building to a desired floor while maintaining a reasonable transit time [95]. Remember that high accelerations will make the passengers uncomfortable.

FIGURE AP1.5 The world's tallest building in Dubai. (Photo courtesy of Obstando Images/Alamy.)

AP1.6 The robotic vacuum cleaner depicted in Figure AP1.6 is an example of a mechatronic system that aids humans in maintaining their homes. A dirt detection control system would enable the robotic vacuum cleaner to vacuum the same area more than once if the dirt level is unsatisfactory, since a single pass may not be enough to adequately remove a high level of dirt. If the robotic vacuum cleaner detects more dirt than usual, it should vacuum the same area until the sensors detect less dirt in that area. Describe a closed-loop feedback control system to detect an acceptable level of dirt, so that the robotic vacuum cleaner will vacuum the same area again.

FIGURE AP1.6 A robotic vacuum cleaner communicates with the base station as it maneuvers around the room. (Photo courtesy of Hugh Threlfall/Alamy.)

AP1.7 SpaceX has developed a very important system to allow for recovery of the first stage of their Falcon rocket at sea, as depicted in Figure AP1.7. The landing ship is an autonomous drone ship. Sketch a block diagram describing a control system that would control the pitch and roll of the landing ship on the sea.

FIGURE AP1.7 SpaceX return landing on a sea-based drone ship.

63. DESIGN PROBLEMS

Design problems emphasize the design task. Continuous design problems (CDP) build upon a design problem from chapter to chapter.

CDP1.1 Increasingly stringent requirements of modern, high-precision machinery are placing increasing demands on slide systems [53]. The typical goal is to accurately control the desired path of the table shown in Figure CDP1.1. Sketch a block diagram model of a feedback system to achieve the desired goal. The table can move in the \(x\) direction as shown.

FIGURE CDP1.1 Machine tool with table.

DP1.1 Background noise affects the audio output quality of a headphone. Noise-cancelling headphones use active noise control to reduce this unwanted ambient noise. Sketch a block diagram of an "active noise control" feedback system that will reduce the effect of unwanted noise. Indicate the device within each block.

DP1.2 Aircraft are fitted with autopilot control that, at the press of a button, automatically controls the flight path of an aircraft, without manual control by a pilot. In this way, the pilot can focus on monitoring the flight path, weather, and onboard systems. Design a feedback control in block diagram form for an autopilot system.

DP1.3 Describe a feedback control system in which a user utilizes a smart phone to remotely monitor and control a washing machine as illustrated in Figure DP1.3. The control system should be able to start and stop the wash cycle, control the amount of detergent and the water temperature, and provide notifications on the status of the cycle.

DP1.4 As part of the automation of a dairy farm, the automation of cow milking is under study [36]. Design a milking machine that can milk cows four or five times a day at the cow's demand. Sketch a block diagram and indicate the devices in each block.

DP1.5 A large, braced robot arm for welding large structures is shown in Figure DP1.5. Sketch the block diagram of a closed-loop feedback control system for accurately controlling the location of the weld tip.

FIGURE DP1.3 Using a smart phone to remotely monitor and control a washing machine. (Photo courtesy of Mikkel William/E+/Getty Images.)

DP1.6 Vehicle traction control, which includes antiskid braking and antispin acceleration, can enhance vehicle performance and handling. The objective of this control is to maximize tire traction by preventing locked brakes as well as tire spinning during acceleration. Wheel slip, the difference between the vehicle speed and the wheel speed, is chosen as the controlled variable because of its strong influence on the tractive force between the tire and the road [19]. The adhesion coefficient between the wheel and the road reaches a maximum at a low slip. Develop a block diagram model of one wheel of a traction control system.

FIGURE DP1.5 Robot welder.

DP1.7 The Hubble space telescope was repaired and modified in space on several occasions [44, 46, 49]. One challenging problem with controlling the Hubble is damping the jitter that vibrates the spacecraft each time it passes into or out of the Earth's shadow. The worst vibration has a period of about 20 seconds, or a frequency of 0.05 hertz. Design a feedback system that will reduce the vibrations of the Hubble space telescope.

DP1.8 A challenging application of control design is the use of nanorobots in medicine. Nanorobots will require onboard computing capability, and very tiny sensors and actuators. Fortunately, advances in biomolecular computing, bio-sensors, and actuators promise to enable medical nanorobots to emerge within the next decade [99]. Many interesting medical applications will benefit from nanorobotics. For example, one use might be to use the robotic devices to precisely deliver anti-HIV drugs or to combat cancer by targeted delivery of chemotherapy, as illustrated in Figure DP1.8.

At the present time, we cannot construct practical nanorobots, but we can consider the control design process that would enable the eventual development and installation of these tiny devices in the medical field. Consider the problem of designing a nanorobot to deliver a cancer drug to a specific location within the human body. The target site might be the location of a tumor, for example. Suggest one or more control goals that might guide the design process. Recommend the variables that should be controlled and provide a list of reasonable specifications for those variables.

FIGURE DP1.8 An artist illustration of a nanorobot interacting with human blood cells.

DP1.9 Consider the human transportation vehicle (HTV) depicted in Figure DP1.9. The self-balancing HTV is actively controlled to allow safe and easy transportation of a single person [97]. Describe a closed-loop feedback control system to assist the rider of the HTV in balancing and maneuvering the vehicle.

FIGURE DP1.9 Personal transportation vehicle. (Photo courtesy of Sergiy Kuzmin/Shutterstock.)

DP1.10 In addition to maintaining automobile speed, many vehicles can also maintain a prescribed distance to an automobile in front, as illustrated in Figure DP1.10. Design a feedback control system that can maintain cruise speed at a prescribed distance to the vehicle in front. What happens if the leading vehicle slows down below the desired cruise speed?

FIGURE DP1.10 Maintaining cruise speed at a prescribed distance.

64. ANSWERS TO SKILLS CHECK

True or False: (1) True; (2) True; (3) False; (4) False; (5) True

Multiple Choice: (6) d; (7) d; (8) b; (9) c; (10) a; (11) d; (12) a; (13) c; (14) d; (15) d
Word Match (in order, top to bottom): p, f, h, k, m, q, d, l, n, c, r, s, j, b, e, t, o, u, v, a, i, g

65. TERMS AND CONCEPTS

Actuator A device employed by the control system to alter or adjust the environment.

Analysis The process of examining a system in order to gain a better understanding, provide insight, and find directions for improvement.

Automation The control of a process by automatic means.

Closed-loop feedback control system A system that uses a measurement of the output and compares it with the desired output to control the process.

Complexity of design The intricate pattern of interwoven parts and knowledge required.

Control system An interconnection of components forming a system configuration that will provide a desired response.

Control system engineering An engineering discipline that focuses on the modeling of a wide assortment of physical systems and using those models to design controllers that will cause the closed-loop systems to possess desired performance characteristics.

Design The process of conceiving or inventing the forms, parts, and details of a system to achieve a specified purpose.

Design gap A gap between the complex physical system and the design model intrinsic to the progression from the initial concept to the final product.

Disturbance An unwanted input signal that affects the output signal.

Embedded control Feedback control system that employs on-board special-purpose digital computers as integral components of the feedback loop.

Engineering design The process of designing a technical system.

Feedback signal A measure of the output of the system used for feedback to control the system.

Flyball governor A mechanical device for controlling the speed of a steam engine.

Hybrid fuel automobile An automobile that uses a conventional internal combustion engine in combination with an energy storage device to provide a propulsion system.

Internet of Things (IoT) Network of physical objects embedded with electronics, software, sensors, and connectivity.

Measurement noise An unwanted input signal that affects the measured output signal.
Mechatronics The synergistic integration of mechanical, electrical, and computer systems.

Multiloop feedback control system A feedback control system with more than one feedback control loop.

Multivariable control system A system with more than one input variable or more than one output variable.

Negative feedback An output signal fed back so that it subtracts from the input signal.

Open-loop control system A system that uses a device to control the process without using feedback. Thus the output has no effect upon the signal to the process.

Optimization The adjustment of the parameters to achieve the most favorable or advantageous design.

Plant See Process.

Positive feedback An output signal fed back so that it adds to the input signal.

Process The device, plant, or system under control.

Productivity The ratio of physical output to physical input of an industrial process.

Risk Uncertainties embodied in the unintended consequences of a design.

Robot Programmable computers integrated with a manipulator. A reprogrammable, multifunctional manipulator used for a variety of tasks.

Sensor A device that provides a measurement of a desired external signal.

Specifications Statements that explicitly state what the device or product is to be and to do. A set of prescribed performance criteria.

Synthesis The process by which new physical configurations are created. The combining of separate elements or devices to form a coherent whole.

System An interconnection of elements and devices for a desired purpose.

Trade-off The result of making a judgment about how to compromise between conflicting criteria.

Ubiquitous computing A concept in which computing is made available everywhere at any time and can occur on any device.

Ubiquitous positioning A concept in which positioning systems identify the location and position of people, vehicles and objects in time at any location indoors and outdoors.

66. CHAPTER 2

67. Mathematical Models of Systems

2.1 Introduction 80

2.2 Differential Equations of Physical Systems 80

2.3 Linear Approximations of Physical Systems 85

2.4 The Laplace Transform 88

2.5 The Transfer Function of Linear Systems 95

2.6 Block Diagram Models 107

2.7 Signal-Flow Graph Models 112

2.8 Design Examples 119

2.9 The Simulation of Systems Using Control Design Software 136

2.10 Sequential Design Example: Disk Drive Read System 150

2.11 Summary 153

68. PREVIEW

Mathematical models of physical systems are key elements in the design and analysis of control systems. The dynamic behavior is generally described by ordinary differential equations. We will consider a wide range of systems. Since most physical systems are nonlinear, we will discuss linearization approximations which allow us to use Laplace transform methods. We will then proceed to obtain the input-output relationship in the form of transfer functions. The transfer functions can be organized into block diagrams or signal-flow graphs to graphically depict the interconnections. Block diagrams and signal-flow graphs are very convenient and natural tools for designing and analyzing complicated control systems. We conclude the chapter by developing transfer function models for the various components of the Sequential Design Example: Disk Drive Read System.

69. DESIRED OUTCOMES

Upon completion of Chapter 2, students should be able to:

\(\square\) Recognize that differential equations can describe the dynamic behavior of physical systems.

\(\square\) Utilize linearization approximations through Taylor series.

\(\square\) Understand the application of Laplace transforms and their role in obtaining transfer functions.

\(\square\) Interpret block diagrams and signal-flow graphs and explain their role in analyzing control systems.

\(\square\) Describe the important role of modeling in the control system design process.

69.1. INTRODUCTION

To understand and control complex systems, one must obtain quantitative mathematical models of these systems. It is necessary therefore to analyze the relationships between the system variables and to obtain a mathematical model. Because the systems under consideration are dynamic in nature, the descriptive equations are usually differential equations. Furthermore, if these equations can be linearized, then the Laplace transform can be used to simplify the method of solution. In practice, the complexity of systems and our ignorance of all the relevant factors necessitate the introduction of assumptions concerning the system operation. Therefore we will often find it useful to consider the physical system, express any necessary assumptions, and linearize the system. Then, by using the physical laws describing the linear equivalent system, we can obtain a set of time-invariant, ordinary linear differential equations. Finally, using mathematical tools, such as the Laplace transform, we obtain a solution describing the operation of the system. In summary, the approach to dynamic system modeling can be listed as follows:

  1. Define the system and its components.

  2. Formulate the mathematical model and fundamental necessary assumptions based on basic principles.

  3. Obtain the differential equations representing the mathematical model.

  4. Solve the equations for the desired output variables.

  5. Examine the solutions and the assumptions.

  6. If necessary, reanalyze or redesign the system.

69.2. DIFFERENTIAL EQUATIONS OF PHYSICAL SYSTEMS

The differential equations describing the dynamic performance of a physical system are obtained by utilizing the physical laws of the process [1-4]. Consider the torsional spring-mass system in Figure 2.1 with applied torque \(T_{a}(t)\). Assume the torsional spring element is massless. Suppose we want to measure the torque \(T_{s}(t)\) transmitted to the mass \(m\). Since the spring is massless, the sum of the torques acting on the spring itself must be zero, or

\[T_{a}(t) - T_{s}(t) = 0 \]

which implies that \(T_{s}(t) = T_{a}(t)\). We see immediately that the external torque \(T_{a}(t)\) applied at the end of the spring is transmitted through the torsional spring. Because of this, we refer to the torque as a through-variable. In a similar manner, the angular rate difference associated with the torsional spring element is

\[\omega(t) = \omega_{s}(t) - \omega_{a}(t) \]

FIGURE 2.1 (a) Torsional spring-mass system. (b) Spring element.
Thus, the angular rate difference is measured across the torsional spring element and is referred to as an across-variable. These same types of arguments can be made for most common physical variables (such as force, current, volume, flow rate, etc.). A more complete discussion on through- and across-variables can be found in \(\lbrack 26,27\rbrack\). A summary of the through- and across-variables of dynamic systems is given in Table 2.1 [5]. Information concerning the International System (SI) of units associated with the various variables discussed in this section can be found online, as well in many handy references, such as the MCS website. \(\ ^{\dagger}\) For example, variables that measure temperature are degrees Kelvin in SI units, and variables that measure length are meters. A summary of the describing equations for lumped, linear, dynamic elements is given in Table 2.2 [5]. The equations in

Table 2.1 Summary of Through- and Across-Variables for Physical Systems

| System | Variable Through Element | Integrated Through-Variable | Variable Across Element | Integrated Across-Variable |
|---|---|---|---|---|
| Electrical | Current, \(i\) | Charge, \(q\) | Voltage difference, \(v_{21}\) | Flux linkage, \(\lambda_{21}\) |
| Mechanical translational | Force, \(F\) | Translational momentum, \(P\) | Velocity difference, \(v_{21}\) | Displacement difference, \(y_{21}\) |
| Mechanical rotational | Torque, \(T\) | Angular momentum, \(h\) | Angular velocity difference, \(\omega_{21}\) | Angular displacement difference, \(\theta_{21}\) |
| Fluid | Fluid volumetric rate of flow, \(Q\) | Volume, \(V\) | Pressure difference, \(P_{21}\) | Pressure momentum, \(\gamma_{21}\) |
| Thermal | Heat flow rate, \(q\) | Heat energy, \(H\) | Temperature difference, \(\mathcal{T}_{21}\) | |

\(\ ^{\dagger}\) The companion website is available at www.pearsonglobaleditions.com.

Table 2.2 Summary of Governing Differential Equations for Ideal Elements

| Type of Element | Physical Element | Governing Equation | Energy \(E\) or Power \(\mathcal{P}\) |
|---|---|---|---|
| Inductive storage | Electrical inductance | \(v_{21} = L\dfrac{di}{dt}\) | \(E = \dfrac{1}{2}Li^{2}\) |
| Inductive storage | Translational spring | \(v_{21} = \dfrac{1}{k}\dfrac{dF}{dt}\) | \(E = \dfrac{1}{2}\dfrac{F^{2}}{k}\) |
| Inductive storage | Rotational spring | \(\omega_{21} = \dfrac{1}{k}\dfrac{dT}{dt}\) | \(E = \dfrac{1}{2}\dfrac{T^{2}}{k}\) |
| Inductive storage | Fluid inertia | \(P_{21} = I\dfrac{dQ}{dt}\) | \(E = \dfrac{1}{2}IQ^{2}\) |
| Capacitive storage | Electrical capacitance | \(i = C\dfrac{dv_{21}}{dt}\) | \(E = \dfrac{1}{2}Cv_{21}^{2}\) |
| Capacitive storage | Translational mass | \(F = M\dfrac{dv_{2}}{dt}\) | \(E = \dfrac{1}{2}Mv_{2}^{2}\) |
| Capacitive storage | Rotational mass | \(T = J\dfrac{d\omega_{2}}{dt}\) | \(E = \dfrac{1}{2}J\omega_{2}^{2}\) |
| Capacitive storage | Fluid capacitance | \(Q = C_{f}\dfrac{dP_{21}}{dt}\) | \(E = \dfrac{1}{2}C_{f}P_{21}^{2}\) |
| Capacitive storage | Thermal capacitance | \(q = C_{t}\dfrac{d\mathcal{T}_{2}}{dt}\) | \(E = C_{t}\mathcal{T}_{2}\) |
| Energy dissipators | Electrical resistance | \(i = \dfrac{1}{R}v_{21}\) | \(\mathcal{P} = \dfrac{1}{R}v_{21}^{2}\) |
| Energy dissipators | Translational damper | \(F = bv_{21}\) | \(\mathcal{P} = bv_{21}^{2}\) |
| Energy dissipators | Rotational damper | \(T = b\omega_{21}\) | \(\mathcal{P} = b\omega_{21}^{2}\) |
| Energy dissipators | Fluid resistance | \(Q = \dfrac{1}{R_{f}}P_{21}\) | \(\mathcal{P} = \dfrac{1}{R_{f}}P_{21}^{2}\) |
| Energy dissipators | Thermal resistance | \(q = \dfrac{1}{R_{t}}\mathcal{T}_{21}\) | \(\mathcal{P} = \dfrac{1}{R_{t}}\mathcal{T}_{21}\) |

Nomenclature

- Through-variable: \(F =\) force, \(T =\) torque, \(i =\) current, \(Q =\) fluid volumetric flow rate, \(q =\) heat flow rate.

- Across-variable: \(v =\) translational velocity, \(\omega =\) angular velocity, \(v =\) voltage, \(P =\) pressure, \(\mathcal{T} =\) temperature.

- Inductive storage: \(L =\) inductance, \(1/k =\) reciprocal translational or rotational stiffness, \(I =\) fluid inertance.

- Capacitive storage: \(C =\) capacitance, \(M =\) mass, \(J =\) moment of inertia, \(C_{f} =\) fluid capacitance, \(C_{t} =\) thermal capacitance.

- Energy dissipators: \(R =\) resistance, \(b =\) viscous friction, \(R_{f} =\) fluid resistance, \(R_{t} =\) thermal resistance.

The symbol \(v\) is used for both voltage in electrical circuits and velocity in translational mechanical systems and is distinguished within the context of each differential equation. For mechanical systems, one uses Newton's laws; for electrical systems, Kirchhoff's voltage laws. For example, the simple spring-mass-damper mechanical system shown in Figure 2.2(a) is described by Newton's second law of motion. The free-body diagram of the mass \(M\) is shown in Figure 2.2(b). In this spring-mass-damper example, we model the wall friction as a viscous damper, that is, the friction force is linearly proportional to the velocity of the mass. In reality the friction force may behave in a more complicated fashion. For example, the wall friction may behave as a Coulomb damper. Coulomb friction, also known as dry friction, is a nonlinear function of the mass velocity and possesses a discontinuity around zero velocity. For a well-lubricated, sliding surface, the viscous friction is appropriate and will be used here and in subsequent spring-mass-damper examples. Summing the forces acting on \(M\) and utilizing Newton's second law yields

\[M\frac{d^{2}y(t)}{dt^{2}} + b\frac{dy(t)}{dt} + ky(t) = r(t) \]

where \(k\) is the spring constant of the ideal spring and \(b\) is the friction constant. Equation (2.1) is a second-order linear constant-coefficient (time-invariant) differential equation.

FIGURE 2.2 (a) Spring-mass-damper system. (b) Free-body diagram.

FIGURE 2.3 \(RLC\) circuit.

Alternatively, one may describe the electrical \(RLC\) circuit of Figure 2.3 by utilizing Kirchhoff's current law. Then we obtain the following integrodifferential equation:

\[\frac{v(t)}{R} + C\frac{dv(t)}{dt} + \frac{1}{L}\int_{0}^{t}\mspace{2mu} v(t)dt = r(t). \]

The solution of the differential equation describing the process may be obtained by classical methods such as the use of integrating factors and the method of undetermined coefficients [1]. For example, when the mass is initially displaced a distance \(y(0) = y_{0}\) and released, the dynamic response of the system can be represented by an equation of the form

\[y(t) = K_{1}e^{- \alpha_{1}t}sin\left( \beta_{1}t + \theta_{1} \right). \]

A similar solution is obtained for the voltage of the \(RLC\) circuit when the circuit is subjected to a constant current \(r(t) = I\). Then the voltage is

\[v(t) = K_{2}e^{- \alpha_{2}t}cos\left( \beta_{2}t + \theta_{2} \right). \]

A voltage curve typical of an \(RLC\) circuit is shown in Figure 2.4.

To reveal further the close similarity between the differential equations for the mechanical and electrical systems, we shall rewrite Equation (2.1) in terms of velocity:

\[v(t) = \frac{dy(t)}{dt}\text{.}\text{~} \]

Then we have

\[M\frac{dv(t)}{dt} + bv(t) + k\int_{0}^{t}\mspace{2mu} v(t)dt = r(t) \]

FIGURE 2.4 Typical voltage response for an \(RLC\) circuit.

One immediately notes the equivalence of Equations (2.5) and (2.2), where velocity \(v(t)\) and voltage \(v(t)\) are equivalent variables, usually called analogous variables, and the systems are analogous systems. Therefore the solution for velocity is similar to Equation (2.4), and the response for an underdamped system is shown in Figure 2.4. The concept of analogous systems is a very useful and powerful technique for system modeling. The voltage-velocity analogy, often called the force-current analogy, is a natural one because it relates the analogous through- and across-variables of the electrical and mechanical systems. Another analogy that relates the velocity and current variables is often used and is called the force-voltage analogy [21, 23].

Analogous systems with similar solutions exist for electrical, mechanical, thermal, and fluid systems. The existence of analogous systems and solutions provides the analyst with the ability to extend the solution of one system to all analogous systems with the same describing differential equations. Therefore what one learns about the analysis and design of electrical systems is immediately extended to an understanding of fluid, thermal, and mechanical systems.
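The analogy can be checked numerically. The following sketch is not part of the text; the parameter values and the use of SciPy are assumptions for illustration. It builds the velocity/force and voltage/current transfer functions implied by Equations (2.5) and (2.2) under the force-current analogy \(M \leftrightarrow C\), \(b \leftrightarrow 1/R\), \(k \leftrightarrow 1/L\) and confirms that their step responses coincide.

```python
# A minimal sketch of analogous systems; parameter values are illustrative only.
import numpy as np
from scipy import signal

M, b, k = 1.0, 0.5, 2.0            # mechanical parameters (assumed values)
C, R, L = M, 1.0 / b, 1.0 / k      # analogous electrical parameters

mech = signal.TransferFunction([1.0, 0.0], [M, b, k])              # velocity / force
elec = signal.TransferFunction([1.0, 0.0], [C, 1.0 / R, 1.0 / L])  # voltage / current

t = np.linspace(0.0, 20.0, 500)
_, v_mech = signal.step(mech, T=t)
_, v_elec = signal.step(elec, T=t)
print(np.allclose(v_mech, v_elec))  # True: the analogous responses are identical
```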

2.3 LINEAR APPROXIMATIONS OF PHYSICAL SYSTEMS

A great majority of physical systems are linear within some range of the variables. In general, systems ultimately become nonlinear as the variables are increased without limit. For example, the spring-mass-damper system of Figure 2.2 is linear and described by Equation (2.1) as long as the mass is subjected to small deflections \(y(t)\). However, if \(y(t)\) were continually increased, eventually the spring would be overextended and break. Therefore the question of linearity and the range of applicability must be considered for each system.

A system is defined as linear in terms of the system excitation and response. In the case of the electrical network, the excitation is the input current \(r(t)\) and the response is the voltage \(v(t)\). In general, a necessary condition for a linear system can be determined in terms of an excitation \(x(t)\) and a response \(y(t)\). When the system at rest is subjected to an excitation \(x_{1}(t)\), it provides a response \(y_{1}(t)\). Furthermore, when the system is subjected to an excitation \(x_{2}(t)\), it provides a corresponding response \(y_{2}(t)\). For a linear system, it is necessary that the excitation \(x_{1}(t) + x_{2}(t)\) result in a response \(y_{1}(t) + y_{2}(t)\). This is the principle of superposition.

Furthermore, the magnitude scale factor must be preserved in a linear system. Again, consider a system with an input \(x(t)\) that results in an output \(y(t)\). Then the response of a linear system to a constant multiple \(\beta\) of an input \(x\) must be equal to the response to the input multiplied by the same constant so that the output is equal to \(\beta y(t)\). This is the property of homogeneity.

A linear system satisfies the properties of superposition and homogeneity.

A system characterized by the relation \(y(t) = x^{2}(t)\) is not linear, because the superposition property is not satisfied. A system represented by the relation \(y(t) = mx(t) + b\) is not linear, because it does not satisfy the homogeneity property. However, this second system may be considered linear about an operating point \(x_{0},y_{0}\) for small changes \(\Delta x\) and \(\Delta y\). When \(x(t) = x_{0} + \Delta x(t)\) and \(y(t) = y_{0} + \Delta y(t)\), we have

\[y(t) = mx(t) + b \]

or

\[y_{0} + \Delta y(t) = mx_{0} + m\Delta x(t) + b. \]

Therefore, \(\Delta y(t) = m\Delta x(t)\), which satisfies the necessary conditions.

The linearity of many mechanical and electrical elements can be assumed over a reasonably large range of the variables [7]. This is not usually the case for thermal and fluid elements, which are more frequently nonlinear in character. Fortunately, however, one can often linearize nonlinear elements assuming small-signal conditions. This is the normal approach used to obtain a linear equivalent circuit for electronic circuits and transistors. Consider a general element with an excitation (through-) variable \(x(t)\) and a response (across-) variable \(y(t)\). Several examples of dynamic system variables are given in Table 2.1. The relationship of the two variables is written as

\[y(t) = g(x(t)), \]

where \(g(x(t))\) indicates \(y(t)\) is a function of \(x(t)\). The normal operating point is designated by \(x_{0}\). Because the curve (function) is continuous over the range of interest, a Taylor series expansion about the operating point may be utilized [7]. Then we have

\[y(t) = g(x(t)) = g\left( x_{0} \right) + \left. \ \frac{dg}{dx} \right|_{x(t) = x_{0}}\frac{\left( x(t) - x_{0} \right)}{1!} + \left. \ \frac{d^{2}g}{dx^{2}} \right|_{x(t) = x_{0}}\frac{\left( x(t) - x_{0} \right)^{2}}{2!} + \cdots. \]

The slope at the operating point,

\[m = \left. \ \frac{dg}{dx} \right|_{x(t) = x_{0}} \]

is a good approximation to the curve over a small range of \(x(t) - x_{0}\), the deviation from the operating point. Then, as a reasonable approximation, Equation (2.7) becomes

\[y(t) = g\left( x_{0} \right) + \left. \ \frac{dg}{dx} \right|_{x(t) = x_{0}}\left( x(t) - x_{0} \right) = y_{0} + m\left( x(t) - x_{0} \right). \]

Finally, Equation (2.8) can be rewritten as the linear equation

\[y(t) - y_{0} = m\left( x(t) - x_{0} \right) \]

or

\[\Delta y(t) = m\Delta x(t) \]

Consider the case of a mass, \(M\), sitting on a nonlinear spring, as shown in Figure 2.5(a). The normal operating point is the equilibrium position that occurs when the spring force balances the gravitational force \(Mg\), where \(g\) is the gravitational constant. Thus, we obtain \(f_{0} = Mg\), as shown. For the nonlinear spring with \(f(t) = y^{2}(t)\), the equilibrium position is \(y_{0} = (Mg)^{1/2}\). The linear model for small deviation is

\[\Delta f(t) = m\Delta y(t), \]

FIGURE 2.5 (a) A mass sitting on a nonlinear spring. (b) The spring force versus \(y(t)\).

where

\[m = \left. \ \frac{df}{dy} \right|_{y(t) = y_{0}}, \]

as shown in Figure 2.5(b). Thus, \(m = 2y_{0}\). A linear approximation is as accurate as the assumption of small signals is applicable to the specific problem.
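For readers who wish to reproduce this linearization symbolically, the short sketch below (a computational aid, not part of the text) uses SymPy to evaluate the slope \(m\) at the operating point for the nonlinear spring \(f = y^{2}\).

```python
# A small sketch (assumed symbols) of the nonlinear-spring linearization with SymPy.
import sympy as sp

y, M, g = sp.symbols('y M g', positive=True)
f = y**2                        # nonlinear spring force
y0 = sp.sqrt(M * g)             # equilibrium where the spring force balances M*g
m = sp.diff(f, y).subs(y, y0)   # slope of f at the operating point
print(sp.simplify(m))           # 2*sqrt(M*g), i.e., m = 2*y0
```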

If the dependent variable \(y(t)\) depends upon several excitation variables, \(x_{1}(t),x_{2}(t),\ldots,x_{n}(t)\), then the functional relationship is written as

\[y(t) = g\left( x_{1}(t),x_{2}(t),\ldots,x_{n}(t) \right). \]

The Taylor series expansion about the operating point \(x_{1_{0}},x_{2_{0}},\ldots,x_{n_{0}}\) is useful for a linear approximation to the nonlinear function. When the higher-order terms are neglected, the linear approximation is written as

\[\begin{matrix} y(t) = & g\left( x_{1_{0}},x_{2_{0}},\ldots,x_{n_{0}} \right) + \left. \ \frac{\partial g}{\partial x_{1}} \right|_{x(t) = x_{0}}\left( x_{1}(t) - x_{1_{0}} \right) + \left. \ \frac{\partial g}{\partial x_{2}} \right|_{x(t) = x_{0}}\left( x_{2}(t) - x_{2_{0}} \right) \\ & \ + \cdots + \left. \ \frac{\partial g}{\partial x_{n}} \right|_{x(t) = x_{0}}\left( x_{n}(t) - x_{n_{0}} \right), \end{matrix}\]

where \(x_{0}\) is the operating point. Example 2.1 will clearly illustrate the utility of this method.

EXAMPLE 2.1 Pendulum oscillator model

Consider the pendulum oscillator shown in Figure 2.6(a). The torque on the mass is

\[T(t) = MgLsin\theta(t) \]

where \(g\) is the gravity constant. The equilibrium condition for the mass is \(\theta_{0} = 0^{\circ}\). The nonlinear relation between \(T(t)\) and \(\theta(t)\) is shown graphically in Figure 2.6(b). The first derivative evaluated at equilibrium provides the linear approximation, which is

\[T(t) - T_{0} \cong MgL\left. \ \frac{\partial \sin\theta}{\partial\theta} \right|_{\theta(t) = \theta_{0}}\left( \theta(t) - \theta_{0} \right), \]

FIGURE 2.6 Pendulum oscillator.

where \(T_{0} = 0\). Then, we have

\[T(t) = MgL\theta(t). \]

This approximation is reasonably accurate for \(- \pi/4 \leq \theta \leq \pi/4\). For example, the response of the linear model for the swing through \(\pm 30^{\circ}\) is within \(5\%\) of the actual nonlinear pendulum response.
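The quoted accuracy can be verified with a one-line numerical check. The sketch below is not from the text; it simply compares the linear torque \(MgL\theta\) with the nonlinear torque \(MgL\sin\theta\) at a \(30^{\circ}\) swing, where the common factor \(MgL\) cancels.

```python
# A quick numerical check of the linear pendulum model against the nonlinear torque.
import numpy as np

theta = np.radians(30.0)                           # 30 degree swing
error = (theta - np.sin(theta)) / np.sin(theta)    # relative error of MgL*theta vs MgL*sin(theta)
print(f"relative error at 30 degrees: {error:.1%}")  # about 4.7%, within the stated 5%
```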

2.4 THE LAPLACE TRANSFORM

The ability to obtain linear time-invariant approximations of physical systems allows the analyst to consider the use of the Laplace transformation. The Laplace transform method substitutes relatively easily solved algebraic equations for the more difficult differential equations \(\lbrack 1,3\rbrack\). The time-response solution is obtained by the following operations:

  1. Obtain the linearized differential equations.

  2. Obtain the Laplace transformation of the differential equations.

  3. Solve the resulting algebraic equation for the transform of the variable of interest.

The Laplace transform exists for linear differential equations for which the transformation integral converges. Therefore, for \(f(t)\) to be transformable, it is sufficient that

\[\int_{0^{-}}^{\infty}\mspace{2mu} |f(t)| e^{- \sigma_{1}t}dt < \infty, \]

for some real, positive \(\sigma_{1}\) [1]. The \(0^{-}\)indicates that the integral should include any discontinuity, such as a delta function at \(t = 0\). If the magnitude of \(f(t)\) is \(|f(t)| < Me^{\alpha t}\) for all positive \(t\), the integral will converge for \(\sigma_{1} > \alpha\). The region of convergence is therefore given by \(\infty > \sigma_{1} > \alpha\), and \(\sigma_{1}\) is known as the abscissa of absolute convergence. Signals that are physically realizable always have a Laplace transform. The Laplace transformation for a function of time, \(f(t)\), is

\[F(s) = \int_{0^{-}}^{\infty}\mspace{2mu} f(t)e^{- st}dt = \mathcal{L}\{ f(t)\}. \]

The inverse Laplace transform is written as

\[f(t) = \frac{1}{2\pi j}\int_{\sigma - j\infty}^{\sigma + j\infty}\mspace{2mu} F(s)e^{+ st}ds. \]

The transformation integrals have been employed to derive tables of Laplace transforms that are used for the great majority of problems. A table of important Laplace transform pairs is given in Table 2.3. A more complete list of Laplace transform pairs can be found in many references, including at the MCS website.
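Several of the pairs in Table 2.3 can also be generated with a computer algebra system. The sketch below uses SymPy as one possible tool; it is a computational aid, not part of the text.

```python
# A brief sketch reproducing a few Laplace transform pairs with SymPy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))   # 1/(a + s)
print(sp.laplace_transform(sp.sin(w*t), t, s, noconds=True))    # omega/(omega**2 + s**2)
print(sp.inverse_laplace_transform(1/(s + a), s, t))            # exp(-a*t)*Heaviside(t)
```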

Table 2.3 Important Laplace Transform Pairs

| \(f(t)\) | \(F(s)\) |
|---|---|
| Step function, \(u(t)\) | \(\dfrac{1}{s}\) |
| \(e^{-at}\) | \(\dfrac{1}{s + a}\) |
| \(\sin\omega t\) | \(\dfrac{\omega}{s^{2} + \omega^{2}}\) |
| \(\cos\omega t\) | \(\dfrac{s}{s^{2} + \omega^{2}}\) |
| \(t^{n}\) | \(\dfrac{n!}{s^{n + 1}}\) |
| \(f^{(k)}(t) = \dfrac{d^{k}f(t)}{dt^{k}}\) | \(s^{k}F(s) - s^{k - 1}f\left( 0^{-} \right) - s^{k - 2}f^{'}\left( 0^{-} \right) - \cdots - f^{(k - 1)}\left( 0^{-} \right)\) |
| \(\displaystyle\int_{- \infty}^{t}\mspace{2mu} f(t)dt\) | \(\dfrac{F(s)}{s} + \dfrac{1}{s}\displaystyle\int_{- \infty}^{0}\mspace{2mu} f(t)dt\) |
| Impulse function \(\delta(t)\) | \(1\) |
| \(e^{-at}\sin\omega t\) | \(\dfrac{\omega}{(s + a)^{2} + \omega^{2}}\) |
| \(e^{-at}\cos\omega t\) | \(\dfrac{s + a}{(s + a)^{2} + \omega^{2}}\) |
| \(\dfrac{1}{\omega}\left\lbrack (\alpha - a)^{2} + \omega^{2} \right\rbrack^{1/2}e^{-at}\sin(\omega t + \phi),\ \phi = \tan^{-1}\dfrac{\omega}{\alpha - a}\) | \(\dfrac{s + \alpha}{(s + a)^{2} + \omega^{2}}\) |
| \(\dfrac{\omega_{n}}{\sqrt{1 - \zeta^{2}}}e^{-\zeta\omega_{n}t}\sin\omega_{n}\sqrt{1 - \zeta^{2}}\,t,\ \zeta < 1\) | \(\dfrac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}}\) |
| \(\dfrac{1}{a^{2} + \omega^{2}} + \dfrac{1}{\omega\sqrt{a^{2} + \omega^{2}}}e^{-at}\sin(\omega t - \phi),\ \phi = \tan^{-1}\dfrac{\omega}{-a}\) | \(\dfrac{1}{s\left\lbrack (s + a)^{2} + \omega^{2} \right\rbrack}\) |
| \(1 - \dfrac{1}{\sqrt{1 - \zeta^{2}}}e^{-\zeta\omega_{n}t}\sin\left( \omega_{n}\sqrt{1 - \zeta^{2}}\,t + \phi \right),\ \phi = \cos^{-1}\zeta,\ \zeta < 1\) | \(\dfrac{\omega_{n}^{2}}{s\left( s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2} \right)}\) |
| \(\dfrac{\alpha}{a^{2} + \omega^{2}} + \dfrac{1}{\omega}\left\lbrack \dfrac{(\alpha - a)^{2} + \omega^{2}}{a^{2} + \omega^{2}} \right\rbrack^{1/2}e^{-at}\sin(\omega t + \phi),\ \phi = \tan^{-1}\dfrac{\omega}{\alpha - a} - \tan^{-1}\dfrac{\omega}{-a}\) | \(\dfrac{s + \alpha}{s\left\lbrack (s + a)^{2} + \omega^{2} \right\rbrack}\) |

Alternatively, the Laplace variable \(s\) can be considered to be the differential operator so that

\[s \equiv \frac{d}{dt} \]

Then we also have the integral operator

\[\frac{1}{s} \equiv \int_{0^{-}}^{t}\mspace{2mu} dt. \]

The inverse Laplace transformation is usually obtained by using the Heaviside partial fraction expansion. This approach is particularly useful for systems analysis and design because the effect of each characteristic root or eigenvalue can be clearly observed.

To illustrate the usefulness of the Laplace transformation and the steps involved in the system analysis, reconsider the spring-mass-damper system described by Equation (2.1), which is

\[M\frac{d^{2}y(t)}{dt^{2}} + b\frac{dy(t)}{dt} + ky(t) = r(t). \]

We wish to obtain the response, \(y(t)\), as a function of time. The Laplace transform of Equation (2.18) is

\[M\left( s^{2}Y(s) - sy\left( 0^{-} \right) - \frac{dy}{dt}\left( 0^{-} \right) \right) + b\left( sY(s) - y\left( 0^{-} \right) \right) + kY(s) = R(s). \]

When

\[r(t) = 0,\ \text{~}\text{and}\text{~}\ y\left( 0^{-} \right) = y_{0},\ \text{~}\text{and}\text{~}\left. \ \ \frac{dy}{dt} \right|_{t = 0^{-}} = 0 \]

we have

\[Ms^{2}Y(s) - Msy_{0} + bsY(s) - by_{0} + kY(s) = 0. \]

Solving for \(Y(s)\), we obtain

\[Y(s) = \frac{(Ms + b)y_{0}}{Ms^{2} + bs + k} = \frac{p(s)}{q(s)}. \]

The denominator polynomial \(q(s)\), when set equal to zero, is called the characteristic equation because the roots of this equation determine the character of the time response. The roots of this characteristic equation are also called the poles of the system. The roots of the numerator polynomial \(p(s)\) are called the zeros of the system; for example, \(s = - b/M\) is a zero of Equation (2.21). Poles and zeros are critical frequencies. At the poles, the function \(Y(s)\) becomes infinite, whereas at the zeros, the function becomes zero. The complex frequency \(s\)-plane plot of the poles and zeros graphically portrays the character of the natural transient response of the system.

For a specific case, consider the system when \(k/M = 2\) and \(b/M = 3\). Then Equation (2.21) becomes

\[Y(s) = \frac{(s + 3)y_{0}}{(s + 1)(s + 2)}. \]

FIGURE 2.7

An s-plane pole and zero plot.

The poles and zeros of \(Y(s)\) are shown on the \(s\)-plane in Figure 2.7.

Expanding Equation (2.22) in a partial fraction expansion, we obtain

\[Y(s) = \frac{k_{1}}{s + 1} + \frac{k_{2}}{s + 2}, \]

where \(k_{1}\) and \(k_{2}\) are the coefficients of the expansion. The coefficients \(k_{i}\) are called residues and are evaluated by multiplying through by the denominator factor of Equation (2.22) corresponding to \(k_{i}\) and setting \(s\) equal to the root. Evaluating \(k_{1}\) when \(y_{0} = 1\), we have

\[\begin{matrix} k_{1} & \ = \left. \ \frac{\left( s - s_{1} \right)p(s)}{q(s)} \right|_{s = s_{1}} \\ & \ = \left. \ \frac{(s + 1)(s + 3)}{(s + 1)(s + 2)} \right|_{s_{1} = - 1} = 2 \end{matrix}\]

and \(k_{2} = - 1\). Alternatively, the residues of \(Y(s)\) at the respective poles may be evaluated graphically on the \(s\)-plane plot, since Equation (2.24) may be written as

\[\begin{matrix} k_{1} & \ = \left. \ \frac{s + 3}{s + 2} \right|_{s = s_{1} = - 1} \\ & \ = \left. \ \frac{s_{1} + 3}{s_{1} + 2} \right|_{s_{1} = - 1} = 2. \end{matrix}\]

The graphical representation of Equation (2.25) is shown in Figure 2.8. The graphical method of evaluating the residues is particularly valuable when the order of the characteristic equation is high and several poles are complex conjugate pairs.

FIGURE 2.8 Graphical evaluation of the residues.

The inverse Laplace transform of Equation (2.22) is then

\[y(t) = \mathcal{L}^{- 1}\left\{ \frac{2}{s + 1} \right\} + \mathcal{L}^{- 1}\left\{ \frac{- 1}{s + 2} \right\}. \]

Using Table 2.3, we find that

\[y(t) = 2e^{- t} - 1e^{- 2t} \]
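The residues found above can be cross-checked numerically. The sketch below is a computational aid, not part of the text; it applies scipy.signal.residue to \(Y(s)\) of Equation (2.22) with \(y_{0} = 1\).

```python
# A minimal check of the partial fraction expansion of Y(s) = (s + 3)/((s + 1)(s + 2)).
from scipy import signal

num = [1, 3]      # p(s) = s + 3
den = [1, 3, 2]   # q(s) = (s + 1)(s + 2)
r, p, k = signal.residue(num, den)
print(r, p)       # residues 2 and -1 at the poles s = -1 and s = -2
```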

Finally, it is usually desired to determine the steady-state or final value of the response of \(y(t)\). For example, the final or steady-state rest position of the springmass-damper system may be calculated. The final value theorem states that

\[\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \lim_{s \rightarrow 0}\mspace{2mu} sY(s) \]

where a simple pole of \(Y(s)\) at the origin is permitted, but poles on the imaginary axis and in the right half-plane and repeated poles at the origin are excluded. Therefore, for the specific case of the spring-mass-damper, we find that

\[\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \lim_{s \rightarrow 0}\mspace{2mu} sY(s) = 0. \]

Hence the final position for the mass is the normal equilibrium position \(y = 0\).

Reconsider the spring-mass-damper system. The equation for \(Y(s)\) may be written as

\[Y(s) = \frac{(s + b/M)y_{0}}{s^{2} + (b/M)s + k/M} = \frac{\left( s + 2\zeta\omega_{n} \right)y_{0}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}}, \]

where \(\zeta\) is the dimensionless damping ratio, and \(\omega_{n}\) is the natural frequency of the system. The roots of the characteristic equation are

\[s_{1},s_{2} = - \zeta\omega_{n} \pm \omega_{n}\sqrt{\zeta^{2} - 1} \]

where, in this case, \(\omega_{n} = \sqrt{k/M}\) and \(\zeta = b/(2\sqrt{kM})\). When \(\zeta > 1\), the roots are real and the system is overdamped; when \(\zeta < 1\), the roots are complex and the system is underdamped. When \(\zeta = 1\), the roots are repeated and real, and the condition is called critical damping.

When \(\zeta < 1\), the response is underdamped, and

\[s_{1,2} = - \zeta\omega_{n} \pm j\omega_{n}\sqrt{1 - \zeta^{2}}. \]

The \(s\)-plane plot of the poles and zeros of \(Y(s)\) is shown in Figure 2.9, where \(\theta = \cos^{- 1}\zeta\). As \(\zeta\) varies with \(\omega_{n}\) constant, the complex conjugate roots follow a circular locus, as shown in Figure 2.10. The transient response is increasingly oscillatory as the roots approach the imaginary axis when \(\zeta\) approaches zero.

FIGURE 2.9 An s-plane plot of the poles and zeros of \(Y(s)\).

FIGURE 2.10 The locus of roots as \(\zeta\) varies with \(\omega_{n}\) constant.

The inverse Laplace transform can be evaluated using the graphical residue evaluation. The partial fraction expansion of Equation (2.30) is

\[Y(s) = \frac{k_{1}}{s - s_{1}} + \frac{k_{2}}{s - s_{2}}. \]

Since \(s_{2}\) is the complex conjugate of \(s_{1}\), the residue \(k_{2}\) is the complex conjugate of \(k_{1}\) so that we obtain

\[Y(s) = \frac{k_{1}}{s - s_{1}} + \frac{{\widehat{k}}_{1}}{s - {\widehat{s}}_{1}} \]

where the hat indicates the conjugate relation. The residue \(k_{1}\) is evaluated from Figure 2.11 as

\[k_{1} = \frac{y_{0}\left( s_{1} + 2\zeta\omega_{n} \right)}{s_{1} - {\widehat{s}}_{1}} = \frac{y_{0}M_{1}e^{j\theta}}{M_{2}e^{j\pi/2}}, \]

where \(M_{1}\) is the magnitude of \(s_{1} + 2\zeta\omega_{n}\), and \(M_{2}\) is the magnitude of \(s_{1} - {\widehat{s}}_{1}\). A review of complex numbers can be found in many online references, as well as on the MCS website. In this case, we obtain

\[k_{1} = \frac{y_{0}\left( \omega_{n}e^{j\theta} \right)}{2\omega_{n}\sqrt{1 - \zeta^{2}}e^{j\pi/2}} = \frac{y_{0}}{2\sqrt{1 - \zeta^{2}}e^{j(\pi/2 - \theta)}}, \]

FIGURE 2.11

Evaluation of the residue \(k_{1}\).

FIGURE 2.12

Response of the spring-mass-damper system.

where \(\theta = \cos^{- 1}\zeta\). Therefore,

\[k_{2} = \frac{y_{0}}{2\sqrt{1 - \zeta^{2}}}e^{j(\pi/2 - \theta)}. \]

Finally, letting \(\beta = \sqrt{1 - \zeta^{2}}\), we find that

\[\begin{matrix} y(t) & \ = k_{1}e^{S_{1}t} + k_{2}e^{S_{2}t} \\ & \ = \frac{y_{0}}{2\sqrt{1 - \zeta^{2}}}\left( e^{j(\theta - \pi/2)}e^{- \zeta\omega_{n}t}e^{j\omega_{n}\beta t} + e^{j(\pi/2 - \theta)}e^{- \zeta\omega_{n}t}e^{- j\omega_{n}\beta t} \right) \\ & \ = \frac{y_{0}}{\sqrt{1 - \zeta^{2}}}e^{- \zeta\omega_{n}t}sin\left( \omega_{n}\sqrt{1 - \zeta^{2}}t + \theta \right). \end{matrix}\]

The solution, Equation (2.37), can also be obtained using item 11 of Table 2.3. The transient responses of the overdamped \((\zeta > 1)\) and underdamped \((\zeta < 1)\) cases are shown in Figure 2.12. The transient response that occurs when \(\zeta < 1\) exhibits an oscillation in which the amplitude decreases with time, and it is called a damped oscillation.

The relationship between the \(s\)-plane location of the poles and zeros and the form of the transient response can be interpreted from the \(s\)-plane pole-zero plots. For example, as seen in Equation (2.37), adjusting the value of \(\zeta\omega_{n}\) varies the \(e^{- \zeta\omega_{n}t}\) envelope, hence the response \(y(t)\) shown in Figure 2.12. The larger the value of \(\zeta\omega_{n}\), the faster the damping of the response, \(y(t)\). In Figure 2.9, we see that the location of the complex pole \(s_{1}\) is given by \(s_{1} = - \zeta\omega_{n} + j\omega_{n}\sqrt{1 - \zeta^{2}}\). So, making \(\zeta\omega_{n}\) larger moves the pole farther to the left in the \(s\)-plane. Thus, the connection between the location of the pole in the \(s\)-plane and the step response is apparent: moving the pole \(s_{1}\) farther into the left half-plane leads to a faster damping of the transient step response. Of course, most control systems will have more than one complex pair of poles, so the transient response will be the result of the contributions of all the poles. In fact, the magnitude of the response of each pole, represented by the residue, can be visualized by examining the graphical residues on the \(s\)-plane. We will discuss the connection between the pole and zero locations and the transient and steady-state response more in subsequent chapters. We will find that the Laplace transformation and the \(s\)-plane approach are very useful techniques for system analysis and design where emphasis is placed on the transient and steady-state performance. In fact, because the study of control systems is concerned primarily with the transient and steady-state performance of dynamic systems, we have real cause to appreciate the value of the Laplace transform techniques.
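The effect of \(\zeta\omega_{n}\) on the decay of the envelope can be seen directly by evaluating Equation (2.37). The short sketch below tabulates the response for several damping ratios; the values of \(\omega_{n}\) and \(y_{0}\) are assumed for illustration and are not from the text.

```python
# A short sketch of the unforced response in Equation (2.37) for several damping ratios.
import numpy as np

wn, y0 = 2.0, 1.0                      # assumed natural frequency and initial displacement
t = np.linspace(0.0, 10.0, 6)
for zeta in (0.1, 0.4, 0.7):
    beta = np.sqrt(1.0 - zeta**2)
    y = (y0 / beta) * np.exp(-zeta * wn * t) * np.sin(wn * beta * t + np.arccos(zeta))
    print(zeta, np.round(y, 3))        # larger zeta*wn gives faster decay of the envelope
```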

2.5 THE TRANSFER FUNCTION OF LINEAR SYSTEMS

The transfer function of a linear system is defined as the ratio of the Laplace transform of the output variable to the Laplace transform of the input variable, with all initial conditions assumed to be zero. The transfer function of a system (or element) represents the relationship describing the dynamics of the system under consideration.

A transfer function may be defined only for a linear, stationary (constant parameter) system. A nonstationary system, often called a time-varying system, has one or more time-varying parameters, and the Laplace transformation may not be utilized. Furthermore, a transfer function is an input-output description of the behavior of a system. Thus, the transfer function description does not include any information concerning the internal structure of the system and its behavior.

The transfer function of the spring-mass-damper system is obtained from the original Equation (2.19), rewritten with zero initial conditions as follows:

\[Ms^{2}Y(s) + bsY(s) + kY(s) = R(s). \]

Then the transfer function is the ratio of the output to the input, or

\[G(s) = \frac{Y(s)}{R(s)} = \frac{1}{Ms^{2} + bs + k}. \]

The transfer function of the \(RC\) network shown in Figure 2.13 is obtained by writing the Kirchhoff voltage equation, yielding

\[V_{1}(s) = \left( R + \frac{1}{Cs} \right)I(s), \]

expressed in terms of transform variables. We shall frequently refer to variables and their transforms interchangeably. The transform variable will be distinguishable by the use of an uppercase letter or the argument \((s)\).

The output voltage is

\[V_{2}(s) = I(s)\left( \frac{1}{Cs} \right) \]

FIGURE 2.13 An \(RC\) network.

Therefore, solving Equation (2.40) for \(I(s)\) and substituting in Equation (2.41), we have

\[V_{2}(s) = \frac{(1/Cs)V_{1}(s)}{R + 1/Cs}. \]

Then the transfer function is obtained as the ratio \(V_{2}(s)/V_{1}(s)\),

\[G(s) = \frac{V_{2}(s)}{V_{1}(s)} = \frac{1}{RCs + 1} = \frac{1}{\tau s + 1} = \frac{1/\tau}{s + 1/\tau}, \]

where \(\tau = RC\), the time constant of the network. The single pole of \(G(s)\) is \(s = - 1/\tau\). Equation (2.42) could be immediately obtained if one observes that the circuit is a voltage divider, where

\[\frac{V_{2}(s)}{V_{1}(s)} = \frac{Z_{2}(s)}{Z_{1}(s) + Z_{2}(s)} \]

and \(Z_{1}(s) = R\), \(Z_{2}(s) = 1/(Cs)\).
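As a quick illustration, the sketch below builds \(G(s) = 1/(\tau s + 1)\) for the \(RC\) network with SciPy and confirms the single pole at \(s = -1/\tau\). The component values are assumed for illustration and are not from the text.

```python
# A small sketch of the first-order RC network transfer function G(s) = 1/(tau*s + 1).
from scipy import signal

R, C = 10e3, 100e-6                    # assumed values: 10 kOhm, 100 uF  ->  tau = 1 s
tau = R * C
G = signal.TransferFunction([1.0], [tau, 1.0])
t, y = signal.step(G)
print(G.poles, y[-1])                  # single pole at s = -1/tau; step response settles near 1
```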

A multiloop electrical circuit or an analogous multiple-mass mechanical system results in a set of simultaneous equations in the Laplace variable. It is usually more convenient to solve the simultaneous equations by using matrices and determinants \(\lbrack 1,3,15\rbrack\). An introduction to matrices and determinants can be found in many references online, as well as on the MCS website.

Let us consider the long-term behavior of a system and determine the response to certain inputs that remain after the transients fade away. Consider the dynamic system represented by the differential equation

\[\frac{d^{n}y(t)}{dt^{n}} + q_{n - 1}\frac{d^{n - 1}y(t)}{dt^{n - 1}} + \cdots + q_{0}y(t) = p_{n - 1}\frac{d^{n - 1}r(t)}{dt^{n - 1}} + p_{n - 2}\frac{d^{n - 2}r(t)}{dt^{n - 2}} + \cdots + p_{0}r(t), \]

where \(y(t)\) is the response, and \(r(t)\) is the input or forcing function. If the initial conditions are all zero, then the transfer function is the coefficient of \(R(s)\) in

\[Y(s) = G(s)R(s) = \frac{p(s)}{q(s)}R(s) = \frac{p_{n - 1}s^{n - 1} + p_{n - 2}s^{n - 2} + \cdots + p_{0}}{s^{n} + q_{n - 1}s^{n - 1} + \cdots + q_{0}}R(s). \]

The output response consists of a natural response (determined by the initial conditions) plus a forced response determined by the input. We now have

\[Y(s) = \frac{m(s)}{q(s)} + \frac{p(s)}{q(s)}R(s) \]

where \(q(s) = 0\) is the characteristic equation. If the input has the rational form

\[R(s) = \frac{n(s)}{d(s)} \]

then

\[Y(s) = \frac{m(s)}{q(s)} + \frac{p(s)}{q(s)}\frac{n(s)}{d(s)} = Y_{1}(s) + Y_{2}(s) + Y_{3}(s), \]

where \(Y_{1}(s)\) is the partial fraction expansion of the natural response, \(Y_{2}(s)\) is the partial fraction expansion of the terms involving factors of \(q(s)\), and \(Y_{3}(s)\) is the partial fraction expansion of terms involving factors of \(d(s)\).

Taking the inverse Laplace transform yields

\[y(t) = y_{1}(t) + y_{2}(t) + y_{3}(t). \]

The transient response consists of \(y_{1}(t) + y_{2}(t)\), and the steady-state response is \(y_{3}(t)\).

EXAMPLE 2.2 Solution of a differential equation

Consider a system represented by the differential equation

\[\frac{d^{2}y(t)}{dt^{2}} + 4\frac{dy(t)}{dt} + 3y(t) = 2r(t) \]

where the initial conditions are \(y(0) = 1,\frac{dy}{dt}(0) = 0\), and \(r(t) = 1,t \geq 0\).
The Laplace transform yields

\[\left\lbrack s^{2}Y(s) - sy(0) \right\rbrack + 4\lbrack sY(s) - y(0)\rbrack + 3Y(s) = 2R(s). \]

Since \(R(s) = 1/s\) and \(y(0) = 1\), we obtain

\[Y(s) = \frac{s + 4}{s^{2} + 4s + 3} + \frac{2}{s\left( s^{2} + 4s + 3 \right)}, \]

where \(q(s) = s^{2} + 4s + 3 = (s + 1)(s + 3) = 0\) is the characteristic equation, and \(d(s) = s\). Then the partial fraction expansion yields

\[Y(s) = \left\lbrack \frac{3/2}{s + 1} + \frac{- 1/2}{s + 3} \right\rbrack + \left\lbrack \frac{- 1}{s + 1} + \frac{1/3}{s + 3} \right\rbrack + \frac{2/3}{s} = Y_{1}(s) + Y_{2}(s) + Y_{3}(s). \]

Hence, the response is

\[y(t) = \left\lbrack \frac{3}{2}e^{- t} - \frac{1}{2}e^{- 3t} \right\rbrack + \left\lbrack - 1e^{- t} + \frac{1}{3}e^{- 3t} \right\rbrack + \frac{2}{3}, \]

and the steady-state response is

\[\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \frac{2}{3} \]
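The result of Example 2.2 can be confirmed with SymPy's ODE solver; the sketch below is a computational aid, not part of the text. It reproduces the response (with the exponential terms combined) and the steady-state value \(2/3\).

```python
# A cross-check of Example 2.2: y'' + 4 y' + 3 y = 2, with y(0) = 1 and y'(0) = 0.
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) + 4*y(t).diff(t) + 3*y(t), 2)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 0})
print(sp.simplify(sol.rhs))           # exp(-t)/2 - exp(-3*t)/6 + 2/3
print(sp.limit(sol.rhs, t, sp.oo))    # 2/3, the steady-state value
```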

EXAMPLE 2.3 Transfer function of an op-amp circuit

The operational amplifier (op-amp) belongs to an important class of analog integrated circuits commonly used as building blocks in the implementation of control systems and in many other important applications. Op-amps are active elements (that is, they have external power sources) with a high gain when operating in their linear regions. A model of an ideal op-amp is shown in Figure 2.14.

FIGURE 2.14 The ideal op-amp.

The operating conditions for the ideal op-amp are (1) \(i_{1} = 0\) and \(i_{2} = 0\), thus implying that the input impedance is infinite, and (2) \(v_{2} - v_{1} = 0\) (or \(v_{1} = v_{2}\) ). The input-output relationship for an ideal op-amp is

\[v_{0} = K\left( v_{2} - v_{1} \right) = - K\left( v_{1} - v_{2} \right), \]

where the gain \(K\) approaches infinity. In our analysis, we will assume that the linear op-amps are operating with high gain and under idealized conditions.

Consider the inverting amplifier shown in Figure 2.15. Under ideal conditions, we have \(i_{1} = 0\), so that writing the node equation at \(v_{1}\) yields

\[\frac{v_{1} - v_{\text{in}\text{~}}}{R_{1}} + \frac{v_{1} - v_{0}}{R_{2}} = 0. \]

Since \(v_{2} = v_{1}\) (under ideal conditions) and \(v_{2} = 0\) (see Figure 2.15 and compare it with Figure 2.14), it follows that \(v_{1} = 0\). Therefore,

\[- \frac{v_{\text{in}\text{~}}}{R_{1}} - \frac{v_{0}}{R_{2}} = 0, \]

and rearranging terms, we obtain

\[\frac{v_{0}}{v_{\text{in}\text{~}}} = - \frac{R_{2}}{R_{1}}. \]

We see that when \(R_{2} = R_{1}\), the ideal op-amp circuit inverts the sign of the input, that is, \(v_{0} = - v_{\text{in}\text{~}}\) when \(R_{2} = R_{1}\).
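The algebra of the ideal inverting amplifier is simple enough to hand to a symbolic solver. The sketch below (SymPy used as an aid, not part of the text) solves the node equation with \(v_{1} = 0\) and recovers the gain \(-R_{2}/R_{1}\).

```python
# A brief symbolic rerun of the node equation for the ideal inverting amplifier.
import sympy as sp

v_in, v0, R1, R2 = sp.symbols('v_in v0 R1 R2')
v1 = 0                                            # ideal op-amp forces v1 = v2 = 0
node = sp.Eq((v1 - v_in)/R1 + (v1 - v0)/R2, 0)    # node equation at v1
print(sp.solve(node, v0)[0] / v_in)               # -R2/R1
```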

EXAMPLE 2.4 Transfer function of a system

Consider the mechanical system shown in Figure 2.16 and its electrical circuit analog shown in Figure 2.17. The electrical circuit analog is a force-current analog as outlined in Table 2.1. The velocities \(v_{1}(t)\) and \(v_{2}(t)\) of the mechanical system are

FIGURE 2.15 An inverting amplifier operating with ideal conditions.

FIGURE 2.16 Two-mass mechanical system.

FIGURE 2.17 Two-node electric circuit analog: \(C_{1} = M_{1}\), \(C_{2} = M_{2}\), \(L = 1/k\), \(R_{1} = 1/b_{1}\), \(R_{2} = 1/b_{2}\).

directly analogous to the node voltages \(v_{1}(t)\) and \(v_{2}(t)\) of the electrical circuit. The simultaneous equations, assuming that the initial conditions are zero, are

\[M_{1}sV_{1}(s) + \left( b_{1} + b_{2} \right)V_{1}(s) - b_{1}V_{2}(s) = R(s), \]

and

\[M_{2}sV_{2}(s) + b_{1}\left( V_{2}(s) - V_{1}(s) \right) + k\frac{V_{2}(s)}{s} = 0. \]

These equations are obtained using the force equations for the mechanical system of Figure 2.16. Rearranging Equations (2.47) and (2.48), we obtain

\[\begin{matrix} \left( M_{1}s + \left( b_{1} + b_{2} \right) \right)V_{1}(s) + \left( - b_{1} \right)V_{2}(s) = R(s), \\ \left( - b_{1} \right)V_{1}(s) + \left( M_{2}s + b_{1} + \frac{k}{s} \right)V_{2}(s) = 0, \end{matrix}\]

or, in matrix form,

\[\begin{bmatrix} M_{1}s + b_{1} + b_{2} & - b_{1} \\ - b_{1} & M_{2}s + b_{1} + \frac{k}{s} \end{bmatrix}\begin{bmatrix} V_{1}(s) \\ V_{2}(s) \end{bmatrix} = \begin{bmatrix} R(s) \\ 0 \end{bmatrix}.\]

Assuming that the velocity of \(M_{1}\) is the output variable, we solve for \(V_{1}(s)\) by matrix inversion or Cramer's rule to obtain \(\lbrack 1,3\rbrack\)

\[V_{1}(s) = \frac{\left( M_{2}s + b_{1} + k/s \right)R(s)}{\left( M_{1}s + b_{1} + b_{2} \right)\left( M_{2}s + b_{1} + k/s \right) - b_{1}^{2}}. \]

Then the transfer function of the mechanical (or electrical) system is

\[\begin{matrix} G(s) = & \frac{V_{1}(s)}{R(s)} = \frac{\left( M_{2}s + b_{1} + k/s \right)}{\left( M_{1}s + b_{1} + b_{2} \right)\left( M_{2}s + b_{1} + k/s \right) - b_{1}^{2}} \\ & \ = \frac{\left( M_{2}s^{2} + b_{1}s + k \right)}{\left( M_{1}s + b_{1} + b_{2} \right)\left( M_{2}s^{2} + b_{1}s + k \right) - b_{1}^{2}s}. \end{matrix}\]

If the transfer function in terms of the position \(x_{1}(t)\) is desired, then we have

\[\frac{X_{1}(s)}{R(s)} = \frac{V_{1}(s)}{sR(s)} = \frac{G(s)}{s}. \]
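The matrix solution of Example 2.4 can be repeated symbolically. In the sketch below (SymPy as an aid, not part of the text) the two simultaneous equations are solved for \(V_{1}(s)\), and the result is checked against the transfer function \(G(s)\) obtained above.

```python
# A symbolic check of the two-mass system of Example 2.4.
import sympy as sp

s, M1, M2, b1, b2, k, R = sp.symbols('s M1 M2 b1 b2 k R')
A = sp.Matrix([[M1*s + b1 + b2, -b1],
               [-b1, M2*s + b1 + k/s]])
V = A.solve(sp.Matrix([R, 0]))                 # solves A * [V1, V2]^T = [R, 0]^T
G = sp.cancel(V[0] / R)                        # V1(s)/R(s)
check = (M2*s + b1 + k/s) / ((M1*s + b1 + b2)*(M2*s + b1 + k/s) - b1**2)
print(sp.cancel(G - check))                    # 0, confirming the expression for G(s)
```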

As an example, let us obtain the transfer function of an important electrical control component, the DC motor [8]. A DC motor is used to move loads and is called an actuator.

An actuator is a device that provides the motive power to the process.

EXAMPLE 2.5 Transfer function of the DC motor

The DC motor is a power actuator device that delivers energy to a load, as shown in Figure 2.18(a); a sketch of a DC motor is shown in Figure 2.18(b). The DC motor converts direct current (DC) electrical energy into rotational mechanical energy. A major fraction of the torque generated in the rotor (armature) of the motor is available to drive an external load. Because of features such as high torque, speed controllability over a wide range, portability, well-behaved speed-torque characteristics, and adaptability to various types of control methods, DC motors are widely used in numerous control applications, including robotic manipulators, tape transport mechanisms, disk drives, machine tools, and servovalve actuators.

The transfer function of the DC motor will be developed for a linear approximation to an actual motor, and second-order effects, such as hysteresis and the voltage drop across the brushes, will be neglected. The input voltage may be applied to the field or armature terminals. The air-gap flux \(\phi(t)\) of the motor is proportional to the field current, provided the field is unsaturated, so that

\[\phi(t) = K_{f}i_{f}(t). \]

The torque developed by the motor is assumed to be related linearly to \(\phi(t)\) and the armature current as follows:

\[T_{m}(t) = K_{1}\phi(t)i_{a}(t) = K_{1}K_{f}i_{f}(t)i_{a}(t) \]

FIGURE 2.18 A DC motor: (a) electrical diagram, and (b) sketch.

It is clear from Equation (2.54) that, to have a linear system, one current must be maintained constant while the other current becomes the input current. First, we shall consider the field current controlled motor, which provides a substantial power amplification. Then we have, in Laplace transform notation,

\[T_{m}(s) = \left( K_{1}K_{f}I_{a} \right)I_{f}(s) = K_{m}I_{f}(s), \]

where \(i_{a} = I_{a}\) is a constant armature current, and \(K_{m}\) is defined as the motor constant. The field current is related to the field voltage as

\[V_{f}(s) = \left( R_{f} + L_{f}s \right)I_{f}(s). \]

The motor torque \(T_{m}(s)\) is equal to the torque delivered to the load. This relation may be expressed as

\[T_{m}(s) = T_{L}(s) + T_{d}(s) \]

where \(T_{L}(s)\) is the load torque and \(T_{d}(s)\) is the disturbance torque, which is often negligible. However, the disturbance torque often must be considered in systems subjected to external forces such as antenna wind-gust forces. The load torque for rotating inertia, as shown in Figure 2.18, is written as

\[T_{L}(s) = Js^{2}\theta(s) + bs\theta(s). \]

Rearranging Equations (2.55)-(2.57), we have

\[\begin{matrix} T_{L}(s) & \ = T_{m}(s) - T_{d}(s), \\ T_{m}(s) & \ = K_{m}I_{f}(s), \\ I_{f}(s) & \ = \frac{V_{f}(s)}{R_{f} + L_{f}s}. \end{matrix}\]

FIGURE 2.19 Block diagram model of field-controlled DC motor.

Therefore, the transfer function of the motor-load combination, with \(T_{d}(s) = 0\), is

\[\frac{\theta(s)}{V_{f}(s)} = \frac{K_{m}}{s(Js + b)\left( L_{f}s + R_{f} \right)} = \frac{K_{m}/\left( JL_{f} \right)}{s(s + b/J)\left( s + R_{f}/L_{f} \right)}. \]

The block diagram model of the field-controlled DC motor is shown in Figure 2.19. Alternatively, the transfer function may be written in terms of the time constants of the motor as

\[\frac{\theta(s)}{V_{f}(s)} = G(s) = \frac{K_{m}/\left( bR_{f} \right)}{s\left( \tau_{f}s + 1 \right)\left( \tau_{L}s + 1 \right)}, \]

where \(\tau_{f} = L_{f}/R_{f}\) and \(\tau_{L} = J/b\). Typically, one finds that \(\tau_{L} > \tau_{f}\) and often the field time constant may be neglected.
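To get a feel for the pole locations, the sketch below builds the field-controlled transfer function of the equation above with SciPy and lists its poles at \(0\), \(-b/J\), and \(-R_{f}/L_{f}\). All motor parameter values are assumed for illustration and are not from the text.

```python
# An illustrative construction of theta(s)/Vf(s) = Km / (s (J s + b)(Lf s + Rf)).
import numpy as np
from scipy import signal

Km, J, b, Lf, Rf = 10.0, 0.1, 0.5, 0.2, 2.0   # assumed parameter values
den = np.polymul([J, b, 0.0], [Lf, Rf])       # s(Js + b) multiplied by (Lf s + Rf)
G = signal.TransferFunction([Km], den)
print(G.poles)                                 # poles at 0, -b/J = -5, -Rf/Lf = -10
```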

The armature-controlled DC motor uses the armature current \(i_{a}\) as the control variable. The stator field can be established by a field coil and current or a permanent magnet. When a constant field current is established in a field coil, the motor torque is

\[T_{m}(s) = \left( K_{1}K_{f}I_{f} \right)I_{a}(s) = K_{m}I_{a}(s). \]

When a permanent magnet is used, we have

\[T_{m}(s) = K_{m}I_{a}(s), \]

where \(K_{m}\) is a function of the permeability of the magnetic material.

The armature current is related to the input voltage applied to the armature by

\[V_{a}(s) = \left( R_{a} + L_{a}s \right)I_{a}(s) + V_{b}(s), \]

where \(V_{b}(s)\) is the back electromotive-force voltage proportional to the motor speed. Therefore, we have

\[V_{b}(s) = K_{b}\omega(s), \]

where \(\omega(s) = s\theta(s)\) is the transform of the angular speed and the armature current is

\[I_{a}(s) = \frac{V_{a}(s) - K_{b}\omega(s)}{R_{a} + L_{a}s}. \]

Equations (2.58) and (2.59) represent the load torque, so that

\[T_{L}(s) = Js^{2}\theta(s) + bs\theta(s) = T_{m}(s) - T_{d}(s). \]

FIGURE 2.20

Armature-controlled DC motor.

The relations for the armature-controlled DC motor are shown schematically in Figure 2.20. Using Equations (2.64), (2.67), and (2.68) or the block diagram, and letting \(T_{d}(s) = 0\), we solve to obtain the transfer function

\[\begin{matrix} G(s) = \frac{\theta(s)}{V_{a}(s)} & \ = \frac{K_{m}}{s\left\lbrack \left( R_{a} + L_{a}s \right)(Js + b) + K_{b}K_{m} \right\rbrack} \\ & \ = \frac{K_{m}}{s\left( s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2} \right)}. \end{matrix}\]

However, for many DC motors, the time constant of the armature, \(\tau_{a} = L_{a}/R_{a}\), is negligible; therefore,

\[G(s) = \frac{\theta(s)}{V_{a}(s)} = \frac{K_{m}}{s\left\lbrack R_{a}(Js + b) + K_{b}K_{m} \right\rbrack} = \frac{K_{m}/\left( R_{a}b + K_{b}K_{m} \right)}{s\left( \tau_{1}s + 1 \right)}, \]

where the equivalent time constant \(\tau_{1} = R_{a}J/\left( R_{a}b + K_{b}K_{m} \right)\).

Note that \(K_{m}\) is equal to \(K_{b}\). This equality may be shown by considering the steady-state motor operation and the power balance when the rotor resistance is neglected. The power input to the rotor is \(K_{b}\omega(t)i_{a}(t)\), and the power delivered to the shaft is \(T(t)\omega(t)\). In the steady-state condition, the power input is equal to the power delivered to the shaft so that \(K_{b}\omega(t)i_{a}(t) = T(t)\omega(t)\); since \(T(t) = K_{m}i_{a}(t)\) (Equation 2.64), we find that \(K_{b} = K_{m}\).

The transfer function concept and approach is very important because it provides the analyst and designer with a useful mathematical model of the system elements. We shall find the transfer function to be a continually valuable aid in the attempt to model dynamic systems. The approach is particularly useful because the \(s\)-plane poles and zeros of the transfer function represent the transient response of the system. The transfer functions of several dynamic elements are given in Table 2.4.

In many situations in engineering, the transmission of rotary motion from one shaft to another is a fundamental requirement. For example, the output power of an automobile engine is transferred to the driving wheels by means of the gearbox and differential. The gearbox allows the driver to select different gear ratios depending on the traffic situation, whereas the differential has a fixed ratio. The speed of the engine in this case is not constant, since it is under the control of the driver. Another example is a set of gears that transfer the power at the shaft of an electric motor to the shaft of a rotating antenna. Examples of mechanical converters are gears, chain drives, and belt drives. A commonly used electric converter is the electric transformer. An example of a device that converts rotational motion to linear motion is the rack-and-pinion gear shown in Table 2.4, item 17.

Table 2.4 Transfer Functions of Dynamic Elements and Networks

| Element or System | \(G(s)\) |
|---|---|
| 1. Integrating circuit, filter | \(\dfrac{V_{2}(s)}{V_{1}(s)} = - \dfrac{1}{RCs}\) |
| 2. Differentiating circuit | \(\dfrac{V_{2}(s)}{V_{1}(s)} = - RCs\) |
| 3. Differentiating circuit | \(\dfrac{V_{2}(s)}{V_{1}(s)} = - \dfrac{R_{2}\left( R_{1}Cs + 1 \right)}{R_{1}}\) |
| 4. Integrating filter | \(\dfrac{V_{2}(s)}{V_{1}(s)} = - \dfrac{\left( R_{1}C_{1}s + 1 \right)\left( R_{2}C_{2}s + 1 \right)}{R_{1}C_{2}s}\) |
| 5. DC motor, field-controlled, rotational actuator | \(\dfrac{\theta(s)}{V_{f}(s)} = \dfrac{K_{m}}{s(Js + b)\left( L_{f}s + R_{f} \right)}\) |
| 6. DC motor, armature-controlled, rotational actuator | \(\dfrac{\theta(s)}{V_{a}(s)} = \dfrac{K_{m}}{s\left\lbrack \left( R_{a} + L_{a}s \right)(Js + b) + K_{b}K_{m} \right\rbrack}\) |
| 7. AC motor, two-phase control field, rotational actuator | \(\dfrac{\theta(s)}{V_{c}(s)} = \dfrac{K_{m}}{s(\tau s + 1)}\), \(\tau = J/(b - m)\), \(m =\) slope of linearized torque-speed curve (normally negative) |
| 8. Rotary amplifier (amplidyne) | \(\dfrac{V_{o}(s)}{V_{c}(s)} = \dfrac{K/\left( R_{c}R_{q} \right)}{\left( s\tau_{c} + 1 \right)\left( s\tau_{q} + 1 \right)}\), \(\tau_{c} = L_{c}/R_{c}\), \(\tau_{q} = L_{q}/R_{q}\); for the unloaded case, \(i_{d} \approx 0\), \(\tau_{c} \approx \tau_{q}\), \(0.05\text{ s} < \tau_{c} < 0.5\text{ s}\) |
| 9. Hydraulic actuator \(\lbrack 9,10\rbrack\) | \(\dfrac{Y(s)}{X(s)} = \dfrac{K}{s(Ms + B)}\), \(K = \dfrac{Ak_{x}}{k_{p}}\), \(B = b + \dfrac{A^{2}}{k_{p}}\), \(k_{x} = \left. \dfrac{\partial g}{\partial x} \right|_{x_{0},P_{0}}\), \(k_{p} = \left. \dfrac{\partial g}{\partial P} \right|_{x_{0},P_{0}}\), \(g = g(x,P) =\) flow, \(A =\) area of piston, \(M =\) load mass, \(b =\) load friction |
| 10. Gear train, rotational transformer | Gear ratio \(n = N_{1}/N_{2}\), \(N_{2}\theta_{L}(t) = N_{1}\theta_{m}(t)\), \(\theta_{L}(t) = n\theta_{m}(t)\), \(\omega_{L}(t) = n\omega_{m}(t)\) |
| 11. Potentiometer, voltage control | \(\dfrac{V_{2}(s)}{V_{1}(s)} = \dfrac{R_{2}}{R} = \dfrac{R_{2}}{R_{1} + R_{2}}\), \(\dfrac{R_{2}}{R} = \dfrac{\theta}{\theta_{\max}}\) |
| 12. Potentiometer, error detector bridge | \(V_{2}(s) = k_{s}\left( \theta_{1}(s) - \theta_{2}(s) \right)\), \(V_{2}(s) = k_{s}\theta_{\text{error}}(s)\), \(k_{s} = V_{\text{Battery}}/\theta_{\max}\) |
| 13. Tachometer, velocity sensor | \(V_{2}(s) = K_{t}\omega(s) = K_{t}s\theta(s)\), \(K_{t} =\) constant |
| 14. DC amplifier | \(\dfrac{V_{2}(s)}{V_{1}(s)} = \dfrac{k_{a}}{s\tau + 1}\), \(R_{O} =\) output resistance, \(C_{O} =\) output capacitance, \(\tau = R_{O}C_{O}\), and \(\tau\) is often negligible |
| 15. Accelerometer, acceleration sensor | \(x_{o}(t) = y(t) - x_{\text{in}}(t)\), \(\dfrac{X_{o}(s)}{X_{\text{in}}(s)} = \dfrac{- s^{2}}{s^{2} + (b/M)s + k/M}\); for low-frequency oscillations, where \(\omega < \omega_{n}\), \(\dfrac{X_{o}(j\omega)}{X_{\text{in}}(j\omega)} \simeq \dfrac{\omega^{2}}{k/M}\) |
| 16. Thermal heating system | \(\dfrac{\mathcal{T}(s)}{q(s)} = \dfrac{1}{C_{t}s + \left( QS + 1/R_{t} \right)}\), where \(\mathcal{T} = \mathcal{T}_{o} - \mathcal{T}_{e} =\) temperature difference due to thermal process, \(C_{t} =\) thermal capacitance, \(Q =\) fluid flow rate \(=\) constant, \(S =\) specific heat of water, \(R_{t} =\) thermal resistance of insulation, \(q(s) =\) transform of rate of heat flow of heating element |
| 17. Rack and pinion | \(x(t) = r\theta(t)\); converts radial motion to linear motion |

2.6 BLOCK DIAGRAM MODELS

The dynamic systems that comprise feedback control systems are typically represented mathematically by a set of simultaneous differential equations. As we have noted in the previous sections, the Laplace transformation reduces the problem to the solution of a set of linear algebraic equations. Since control systems are concerned with the control of specific variables, the controlled variables must relate to the controlling variables. This relationship is typically represented by the transfer function of the subsystem relating the input and output variables. Therefore, one can correctly assume that the transfer function is an important relation for control engineering.

FIGURE 2.21 Block diagram of a DC motor.

FIGURE 2.22 General block representation of two-input, two-output system.

The importance of this cause-and-effect relationship is evidenced by the facility to represent the relationship of system variables graphically using block diagrams. Block diagrams consist of unidirectional, operational blocks that represent the transfer function of the systems of interest. A block diagram of a field-controlled DC motor and load is shown in Figure 2.21. The relationship between the displacement \(\theta(s)\) and the input voltage \(V_{f}(s)\) is represented in the block diagram.

To represent a system with several variables under control, an interconnection of blocks is utilized. For example, the system shown in Figure 2.22 has two input variables and two output variables [6]. Using transfer function relations, we can write the simultaneous equations for the output variables as

\[Y_{1}(s) = G_{11}(s)R_{1}(s) + G_{12}(s)R_{2}(s), \]

and

\[Y_{2}(s) = G_{21}(s)R_{1}(s) + G_{22}(s)R_{2}(s), \]

where \(G_{ij}(s)\) is the transfer function relating the \(i\) th output variable to the \(j\) th input variable. The block diagram representing this set of equations is shown in Figure 2.23. In general, for \(J\) inputs and \(I\) outputs, we write the simultaneous equation in matrix form as

\[\begin{bmatrix} Y_{1}(s) \\ Y_{2}(s) \\ \vdots \\ Y_{I}(s) \end{bmatrix} = \begin{bmatrix} G_{11}(s) & \ldots & G_{1J}(s) \\ G_{21}(s) & \ldots & G_{2J}(s) \\ \vdots & & \vdots \\ G_{I1}(s) & \ldots & G_{IJ}(s) \end{bmatrix}\begin{bmatrix} R_{1}(s) \\ R_{2}(s) \\ \vdots \\ R_{J}(s) \end{bmatrix}\]

or

\[\mathbf{Y}(s) = \mathbf{G}(s)\mathbf{R}(s). \]

FIGURE 2.23

Block diagram of a two-input, two-output interconnected system.

Here the \(\mathbf{Y}(s)\) and \(\mathbf{R}(s)\) matrices are column matrices containing the \(I\) output and the \(J\) input variables, respectively, and \(\mathbf{G}(s)\) is an \(I\) by \(J\) transfer function matrix. The matrix representation of the interrelationship of many variables is particularly valuable for complex multi-variable control systems. Background information on matrix algebra can be found on-line and in many references, for example in [21].

The block diagram representation of a given system often can be reduced to a simplified block diagram with fewer blocks than the original diagram. Since the transfer functions represent linear systems, the multiplication is commutative. Thus, in Table 2.5, item 1, we have

\[X_{3}(s) = G_{2}(s)X_{2}(s) = G_{2}(s)G_{1}(s)X_{1}(s). \]

Table 2.5 Block Diagram Transformations

1. Combining blocks in cascade
2. Moving a summing point behind a block
3. Moving a pickoff point ahead of a block
4. Moving a pickoff point behind a block
5. Moving a summing point ahead of a block
6. Eliminating a feedback loop

FIGURE 2.24

Negative feedback control system.

When two blocks are connected in cascade, as in Table 2.5, item 1, we assume that

\[X_{3}(s) = G_{2}(s)G_{1}(s)X_{1}(s) \]

holds true. This assumes that when the first block is connected to the second block, the effect of loading of the first block is negligible. Loading and interaction between interconnected components or systems may occur. If the loading of interconnected devices does occur, the engineer must account for this change in the transfer function and use the corrected transfer function in subsequent calculations.

Block diagram transformations and reduction techniques are derived by considering the algebra of the diagram variables. For example, consider the block diagram shown in Figure 2.24. This negative feedback control system is described by the equation for the actuating signal, which is

\[E_{a}(s) = R(s) - B(s) = R(s) - H(s)Y(s). \]

Because the output is related to the actuating signal by \(G(s)\), we have

\[Y(s) = G(s)U(s) = G(s)G_{a}(s)Z(s) = G(s)G_{a}(s)G_{c}(s)E_{a}(s); \]

thus,

\[Y(s) = G(s)G_{a}(s)G_{c}(s)\lbrack R(s) - H(s)Y(s)\rbrack. \]

Combining the \(Y(s)\) terms, we obtain

\[Y(s)\left\lbrack 1 + G(s)G_{a}(s)G_{c}(s)H(s) \right\rbrack = G(s)G_{a}(s)G_{c}(s)R(s). \]

Therefore, the closed-loop transfer function relating the output \(Y(s)\) to the input \(R(s)\) is

\[\frac{Y(s)}{R(s)} = \frac{G(s)G_{a}(s)G_{c}(s)}{1 + G(s)G_{a}(s)G_{c}(s)H(s)}. \]
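The same loop-closing algebra can be delegated to a symbolic solver. The short sketch below (SymPy as an aid, not part of the text) treats the block transfer functions as symbols and recovers the closed-loop relation above.

```python
# A symbolic sketch of closing the negative feedback loop of Figure 2.24.
import sympy as sp

G, Ga, Gc, H, R, Y = sp.symbols('G G_a G_c H R Y')
closed = sp.solve(sp.Eq(Y, G*Ga*Gc*(R - H*Y)), Y)[0]   # Y = G*Ga*Gc*(R - H*Y)
print(sp.simplify(closed / R))                          # G*G_a*G_c/(G*G_a*G_c*H + 1)
```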

The reduction of the block diagram shown in Figure 2.24 to a single block representation is one example of several useful techniques. These diagram transformations are given in Table 2.5. All the transformations in Table 2.5 can be derived by algebraic manipulation of the equations representing the blocks. System analysis by the method of block diagram reduction affords a better understanding of the contribution of each component element than possible by the manipulation of equations. The utility of the block diagram transformations will be illustrated by an example using block diagram reduction.

FIGURE 2.25 Multiple-loop feedback control system.

EXAMPLE 2.6 Block diagram reduction

The block diagram of a multiple-loop feedback control system is shown in Figure 2.25. It is interesting to note that the feedback signal \(H_{1}(s)Y(s)\) is a positive feedback signal, and the loop \(G_{3}(s)G_{4}(s)H_{1}(s)\) is a positive feedback loop. The block diagram reduction procedure is based on the use of Table 2.5, transformation 6, which eliminates feedback loops. Therefore the other transformations are used to transform the diagram to a form ready for eliminating feedback loops. First, to eliminate the loop \(G_{3}(s)G_{4}(s)H_{1}(s)\), we move \(H_{2}(s)\) behind block \(G_{4}(s)\) by using transformation 4, and obtain Figure 2.26(a). Eliminating the loop \(G_{3}(s)G_{4}(s)H_{1}(s)\) by using transformation 6, we obtain Figure 2.26(b). Then, eliminating the inner loop containing \(H_{2}(s)/G_{4}(s)\), we obtain Figure 2.26(c). Finally, by reducing the loop containing \(H_{3}(s)\), we obtain the closed-loop system transfer function as shown in Figure 2.26(d). It is worthwhile to examine the form of the numerator and denominator of this closed-loop transfer function. We note that the numerator is composed of the cascade transfer function of the feedforward elements connecting the input \(R(s)\) and the output \(Y(s)\). The denominator is composed of 1 minus the sum of each loop transfer function. The loop \(G_{3}(s)G_{4}(s)H_{1}(s)\) has a plus sign in the sum to be subtracted because it is a positive feedback loop, whereas the loops \(G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)H_{3}(s)\) and \(G_{2}(s)G_{3}(s)H_{2}(s)\) are negative feedback loops. To illustrate this point, the denominator can be rewritten as

\(q(s) = 1 - \left( + G_{3}(s)G_{4}(s)H_{1}(s) - G_{2}(s)G_{3}(s)H_{2}(s) - G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)H_{3}(s) \right)\).

This form of the numerator and denominator is quite close to the general form for multiple-loop feedback systems, as we shall find in the following section.

The block diagram representation of feedback control systems is a valuable and widely used approach. The block diagram provides the analyst with a graphical representation of the system interrelationships. Furthermore, the designer can readily visualize the possibilities for adding blocks to the existing system block diagram to alter and improve the system performance. The transition from the block diagram method to a method utilizing a line path representation instead of a block representation is readily accomplished and is presented in the following section.

(a)

(b)

(c)

(d)

FIGURE 2.26 Block diagram reduction of the system of Figure 2.25.

91.1. SIGNAL-FLOW GRAPH MODELS

Block diagrams are adequate for the representation of the interrelationships of controlled and input variables. An alternative method for determining the relationship between system variables has been developed by Mason and is based on a representation of the system by line segments \(\lbrack 4,25\rbrack\). The advantage of the line path method, called the signal-flow graph method, is the availability of a flow graph gain formula, which provides the relation between system variables without requiring any reduction procedure or manipulation of the flow graph.

The transition from a block diagram representation to a directed line segment representation is easy to accomplish by reconsidering the systems of the previous section. A signal-flow graph is a diagram consisting of nodes that are connected by several directed branches and is a graphical representation of a set of linear relations. Signal-flow graphs are particularly useful for feedback control systems because feedback theory is primarily concerned with the flow and processing of signals in systems. The basic element of a signal-flow graph is a unidirectional path segment called a branch, which relates the dependency of an input and an output variable in a manner equivalent to a block of a block diagram. Therefore, the branch relating the output \(\theta(s)\) of a DC motor to the field voltage \(V_{f}(s)\) is similar to the block diagram of Figure 2.21 and is shown in Figure 2.27.

FIGURE 2.27

Signal-flow graph of the DC motor.

FIGURE 2.28 Signal-flow graph of a two-input, two-output interconnected system.

The input and output points or junctions are called nodes. Similarly, the signal-flow graph representing Equations (2.71) and (2.72), as well as Figure 2.23, is shown in Figure 2.28. The relation between each variable is written next to the directional arrow. All branches leaving a node will pass the nodal signal to the output node of each branch (unidirectionally). The summation of all signals entering a node is equal to the node variable. A path is a branch or a continuous sequence of branches that can be traversed from one signal (node) to another signal (node). A loop is a closed path that originates and terminates on the same node, with no node being met twice along the path. Two loops are said to be nontouching if they do not have a common node. Two touching loops share one or more common nodes. Therefore, considering Figure 2.28 again, we obtain

\[Y_{1}(s) = G_{11}(s)R_{1}(s) + G_{12}(s)R_{2}(s), \]

and

\[Y_{2}(s) = G_{21}(s)R_{1}(s) + G_{22}(s)R_{2}(s). \]

The flow graph is a graphical method of writing a system of algebraic equations that indicates the interdependencies of the variables. As another example, consider the following set of simultaneous algebraic equations:

\[\begin{matrix} & a_{11}x_{1} + a_{12}x_{2} + r_{1} = x_{1} \\ & a_{21}x_{1} + a_{22}x_{2} + r_{2} = x_{2}. \end{matrix}\]

The two input variables are \(r_{1}\) and \(r_{2}\), and the output variables are \(x_{1}\) and \(x_{2}\). A signal-flow graph representing Equations (2.83) and (2.84) is shown in Figure 2.29. Equations (2.83) and (2.84) may be rewritten as

\[x_{1}\left( 1 - a_{11} \right) + x_{2}\left( - a_{12} \right) = r_{1}, \]

and

\[x_{1}\left( - a_{21} \right) + x_{2}\left( 1 - a_{22} \right) = r_{2}. \]

The simultaneous solution of Equations (2.85) and (2.86) using Cramer's rule results in the solutions

\[x_{1} = \frac{\left( 1 - a_{22} \right)r_{1} + a_{12}r_{2}}{\left( 1 - a_{11} \right)\left( 1 - a_{22} \right) - a_{12}a_{21}} = \frac{1 - a_{22}}{\Delta}r_{1} + \frac{a_{12}}{\Delta}r_{2}, \]

FIGURE 2.29

Signal-flow graph of two algebraic equations.

and

\[x_{2} = \frac{\left( 1 - a_{11} \right)r_{2} + a_{21}r_{1}}{\left( 1 - a_{11} \right)\left( 1 - a_{22} \right) - a_{12}a_{21}} = \frac{1 - a_{11}}{\Delta}r_{2} + \frac{a_{21}}{\Delta}r_{1}. \]

The denominator of the solution is the determinant \(\Delta\) of the set of equations and is rewritten as

\[\Delta = \left( 1 - a_{11} \right)\left( 1 - a_{22} \right) - a_{12}a_{21} = 1 - a_{11} - a_{22} + a_{11}a_{22} - a_{12}a_{21}. \]

In this case, the denominator is equal to 1 minus each self-loop \(a_{11},a_{22}\), and \(a_{12}a_{21}\), plus the product of the two nontouching loops \(a_{11}\) and \(a_{22}\). The loops \(a_{22}\) and \(a_{21}a_{12}\) are touching, as are \(a_{11}\) and \(a_{21}a_{12}\).

The numerator for \(x_{1}\) with the input \(r_{1}\) is 1 times \(1 - a_{22}\), which is the value of \(\Delta\) excluding terms that touch the path 1 from \(r_{1}\) to \(x_{1}\). Therefore the numerator from \(r_{2}\) to \(x_{1}\) is simply \(a_{12}\) because the path through \(a_{12}\) touches all the loops. The numerator for \(x_{2}\) is symmetrical to that of \(x_{1}\).

In general, the linear dependence \(T_{ij}(s)\) between the independent variable \(x_{i}\) (often called the input variable) and a dependent variable \(x_{j}\) is given by Mason's signal-flow gain formula \(\lbrack 11,12\rbrack\),

\[T_{ij}(s) = \frac{\sum_{k}^{}\mspace{2mu}\mspace{2mu} P_{ijk}(s)\Delta_{ijk}(s)}{\Delta(s)}, \]

where

\[\begin{matrix} P_{ijk}(s) & \ = \text{~}\text{gain of the}\text{~}k\text{~}\text{th path from variable}\text{~}x_{i}\text{~}\text{to variable}\text{~}x_{j}, \\ \Delta(s) & \ = \text{~}\text{determinant of the graph,}\text{~} \\ \Delta_{ijk}(s) & \ = \text{~}\text{cofactor of the path}\text{~}P_{ijk}(s), \end{matrix}\]

and the summation is taken over all possible \(k\) paths from \(x_{i}\) to \(x_{j}\). The path gain or transmittance \(P_{ijk}(s)\) is defined as the product of the gains of the branches of the path, traversed in the direction of the arrows with no node encountered more than once. The cofactor \(\Delta_{ijk}(s)\) is the determinant with the loops touching the \(k\) th path removed. The determinant \(\Delta(s)\) is

\[\Delta(s) = 1 - \sum_{n = 1}^{N} L_{n}(s) + \sum_{\substack{n,m \\ \text{nontouching}}} L_{n}(s)L_{m}(s) - \sum_{\substack{n,m,p \\ \text{nontouching}}} L_{n}(s)L_{m}(s)L_{p}(s) + \cdots, \]

where \(L_{q}(s)\) equals the value of the \(q\)th loop transmittance. Therefore, the rule for evaluating \(\Delta(s)\) in terms of loops \(L_{1}(s),L_{2}(s),L_{3}(s),\ldots,L_{N}(s)\) is

\[\begin{matrix} \Delta(s) = 1 & - (\text{sum of all different loop gains}) \\ & + (\text{sum of the gain products of all combinations of two nontouching loops}) \\ & - (\text{sum of the gain products of all combinations of three nontouching loops}) \\ & + \cdots. \end{matrix}\]

The gain formula is often used to relate the output variable \(Y(s)\) to the input variable \(R(s)\) and is given in somewhat simplified form as

\[T(s) = \frac{\Sigma_{k}P_{k}(s)\Delta_{k}(s)}{\Delta(s)}, \]

where \(T(s) = Y(s)/R(s)\).

Several examples will illustrate the utility and ease of this method. Although the gain Equation (2.90) appears to be formidable, one must remember that it represents a summation process, not a complicated solution process.

92. EXAMPLE 2.7 Transfer function of an interacting system

A two-path signal-flow graph is shown in Figure 2.30(a) and the corresponding block diagram is shown in Figure 2.30(b). An example of a control system with multiple signal paths is a multilegged robot. The paths connecting the input \(R(s)\) and output \(Y(s)\) are

\[P_{1}(s) = G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)\ \text{~}\text{and}\text{~}\ P_{2}(s) = G_{5}(s)G_{6}(s)G_{7}(s)G_{8}(s). \]

FIGURE 2.30

Two-path interacting system. (a) Signal-flow graph. (b) Block diagram.

(a)

(b)

There are four self-loops:

\[\begin{matrix} & L_{1}(s) = G_{2}(s)H_{2}(s),\ L_{2}(s) = H_{3}(s)G_{3}(s), \\ & L_{3}(s) = G_{6}(s)H_{6}(s),\ \text{~}\text{and}\text{~}\ L_{4}(s) = G_{7}(s)H_{7}(s)\text{.}\text{~} \end{matrix}\]

Loops \(L_{1}\) and \(L_{2}\) do not touch \(L_{3}\) and \(L_{4}\). Therefore, the determinant is

\[\begin{matrix} \Delta(s) = & 1 - \left( L_{1}(s) + L_{2}(s) + L_{3}(s) + L_{4}(s) \right) + \\ & \left( L_{1}(s)L_{3}(s) + L_{1}(s)L_{4}(s) + L_{2}(s)L_{3}(s) + L_{2}(s)L_{4}(s) \right). \end{matrix}\]

The cofactor of the determinant along path 1 is evaluated by removing the loops that touch path 1 from \(\Delta(s)\). Hence, we have

\[L_{1}(s) = L_{2}(s) = 0\ \text{~}\text{and}\text{~}\ \Delta_{1}(s) = 1 - \left( L_{3}(s) + L_{4}(s) \right). \]

Similarly, the cofactor for path 2 is

\[\Delta_{2}(s) = 1 - \left( L_{1}(s) + L_{2}(s) \right). \]

Therefore, the transfer function of the system is

\[\begin{matrix} \frac{Y(s)}{R(s)} = & T(s) = \frac{P_{1}(s)\Delta_{1}(s) + P_{2}(s)\Delta_{2}(s)}{\Delta(s)} \\ = & \frac{G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)\left( 1 - L_{3}(s) - L_{4}(s) \right)}{\Delta(s)} \\ & \ + \frac{G_{5}(s)G_{6}(s)G_{7}(s)G_{8}(s)\left( 1 - L_{1}(s) - L_{2}(s) \right)}{\Delta(s)} \end{matrix}\]

where \(\Delta(s)\) is given in Equation (2.93).

A similar analysis can be accomplished using block diagram reduction techniques. The block diagram shown in Figure 2.30(b) has four inner feedback loops within the overall block diagram. The block diagram reduction is simplified by first reducing the four inner feedback loops and then placing the resulting systems in series. Along the top path, the transfer function is

\[\begin{matrix} Y_{1}(s) & \ = G_{1}(s)\left\lbrack \frac{G_{2}(s)}{1 - G_{2}(s)H_{2}(s)} \right\rbrack\left\lbrack \frac{G_{3}(s)}{1 - G_{3}(s)H_{3}(s)} \right\rbrack G_{4}(s)R(s) \\ & \ = \left\lbrack \frac{G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)}{\left( 1 - G_{2}(s)H_{2}(s) \right)\left( 1 - G_{3}(s)H_{3}(s) \right)} \right\rbrack R(s). \end{matrix}\]

Similarly across the bottom path, the transfer function is

\[\begin{matrix} Y_{2}(s) & \ = G_{5}(s)\left\lbrack \frac{G_{6}(s)}{1 - G_{6}(s)H_{6}(s)} \right\rbrack\left\lbrack \frac{G_{7}(s)}{1 - G_{7}(s)H_{7}(s)} \right\rbrack G_{8}(s)R(s) \\ & \ = \left\lbrack \frac{G_{5}(s)G_{6}(s)G_{7}(s)G_{8}(s)}{\left( 1 - G_{6}(s)H_{6}(s) \right)\left( 1 - G_{7}(s)H_{7}(s) \right)} \right\rbrack R(s). \end{matrix}\]

The total transfer function is then given by

\[\begin{matrix} Y(s) = & Y_{1}(s) + Y_{2}(s) = \left\lbrack \frac{G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)}{\left( 1 - G_{2}(s)H_{2}(s) \right)\left( 1 - G_{3}(s)H_{3}(s) \right)} \right.\ \\ & \left. \ + \frac{G_{5}(s)G_{6}(s)G_{7}(s)G_{8}(s)}{\left( 1 - G_{6}(s)H_{6}(s) \right)\left( 1 - G_{7}(s)H_{7}(s) \right)} \right\rbrack R(s). \end{matrix}\]

93. EXAMPLE 2.8 Armature-controlled motor

The block diagram of the armature-controlled DC motor is shown in Figure 2.20. This diagram was obtained from Equations (2.64)-(2.68). The signal-flow diagram is shown in Figure 2.31. Using Mason's signal-flow gain formula, let us obtain the transfer function for \(\theta(s)/V_{a}(s)\) with \(T_{d}(s) = 0\). The forward path is \(P_{1}(s)\), which touches the one loop, \(L_{1}(s)\), where

\[P_{1}(s) = \frac{1}{s}G_{1}(s)G_{2}(s)\text{~}\text{and}\text{~}L_{1}(s) = - K_{b}G_{1}(s)G_{2}(s). \]

Therefore, the transfer function is

\[T(s) = \frac{P_{1}(s)}{1 - L_{1}(s)} = \frac{(1/s)G_{1}(s)G_{2}(s)}{1 + K_{b}G_{1}(s)G_{2}(s)} = \frac{K_{m}}{s\left\lbrack \left( R_{a} + L_{a}s \right)(Js + b) + K_{b}K_{m} \right\rbrack}. \]
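For readers who prefer to check such a reduction numerically, the following m-file sketch rebuilds \(\theta(s)/V_{a}(s)\) from the individual blocks using the tf, series, and feedback functions introduced later in this chapter. The numerical values are illustrative only (they happen to match the large-motor parameters listed later in Table 2.7), and the structure follows the transfer function just derived, with \(G_{1}(s) = K_{m}/(R_{a} + L_{a}s)\) and \(G_{2}(s) = 1/(Js + b)\).

```matlab
% Sketch only: illustrative parameter values, not part of Example 2.8.
Km = 10; J = 2; Ra = 1; b = 0.5; La = 1; Kb = 0.1;
G1 = tf(Km, [La Ra]);                 % G1(s) = Km/(Ra + La*s)
G2 = tf(1, [J b]);                    % G2(s) = 1/(Js + b)
inner = feedback(series(G1, G2), Kb); % close the back-emf loop L1(s) = -Kb*G1(s)*G2(s)
T = series(inner, tf(1, [1 0]))       % cascade 1/s to obtain theta(s)/Va(s)
```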

The signal-flow graph gain formula provides a reasonably straightforward approach for the evaluation of complicated systems. To compare the method with block diagram reduction, let us reconsider the complex system of Example 2.6.

94. EXAMPLE 2.9 Transfer function of a multiple-loop system

A multiple-loop feedback system is shown in Figure 2.25 in block diagram form. There is no need to redraw the diagram in signal-flow graph form, and we shall proceed using Mason's signal-flow gain formula. There is one forward path \(P_{1}(s) = G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)\). The feedback loops are

\[\begin{matrix} L_{1}(s) & \ = - G_{2}(s)G_{3}(s)H_{2}(s),\ L_{2}(s) = G_{3}(s)G_{4}(s)H_{1}(s), \\ \text{~}\text{and}\text{~}\ & L_{3}(s) = - G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)H_{3}(s). \end{matrix}\]

FIGURE 2.31

The signal-flow graph of the armature-controlled DC motor.

All the loops have common nodes and therefore are all touching. Furthermore, the path \(P_{1}(s)\) touches all the loops, so \(\Delta_{1}(s) = 1\). Thus, the closed-loop transfer function is

\[T(s) = \frac{Y(s)}{R(s)} = \frac{P_{1}(s)\Delta_{1}(s)}{1 - L_{1}(s) - L_{2}(s) - L_{3}(s)} = \frac{G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)}{\Delta(s)} \]

where

\(\Delta(s) = 1 + G_{2}(s)G_{3}(s)H_{2}(s) - G_{3}(s)G_{4}(s)H_{1}(s) + G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)H_{3}(s)\).

95. EXAMPLE 2.10 Transfer function of a complex system

Consider the system with several feedback loops and feedforward paths shown in Figure 2.32. The forward paths are

\(P_{1}(s) = G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)G_{5}(s)G_{6}(s),\ P_{2}(s) = G_{1}(s)G_{2}(s)G_{7}(s)G_{6}(s)\), and \(P_{3}(s) = G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)G_{8}(s)\).

The feedback loops are

\[\begin{matrix} & L_{1}(s) = - G_{2}(s)G_{3}(s)G_{4}(s)G_{5}(s)H_{3}(s),\ L_{2}(s) = - G_{5}(s)G_{6}(s)H_{1}(s), \\ & L_{3}(s) = - G_{8}(s)H_{1}(s),\ L_{4}(s) = - G_{7}(s)H_{2}(s)G_{2}(s), \\ & L_{5}(s) = - G_{4}(s)H_{4}(s),\ L_{6}(s) = - G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)G_{5}(s)G_{6}(s)H_{3}(s), \\ & L_{7}(s) = - G_{1}(s)G_{2}(s)G_{7}(s)G_{6}(s)H_{3}(s),\text{~}\text{and}\text{~} \\ & L_{8}(s) = - G_{1}(s)G_{2}(s)G_{3}(s)G_{4}(s)G_{8}(s)H_{3}(s). \end{matrix}\]

Loop \(L_{5}\) does not touch loop \(L_{4}\) or loop \(L_{7}\), and loop \(L_{3}\) does not touch loop \(L_{4}\); but all other loops touch. Therefore, the determinant is

\[\begin{matrix} \Delta(s) = & 1 - \left( L_{1}(s) + L_{2}(s) + L_{3}(s) + L_{4}(s) + L_{5}(s) + L_{6}(s) + L_{7}(s) + L_{8}(s) \right) \\ & \ + \left( L_{5}(s)L_{7}(s) + L_{5}(s)L_{4}(s) + L_{3}(s)L_{4}(s) \right). \end{matrix}\]

The cofactors are

\[\Delta_{1}(s) = \Delta_{3}(s) = 1\ \text{~}\text{and}\text{~}\ \Delta_{2}(s) = 1 - L_{5}(s) = 1 + G_{4}(s)H_{4}(s). \]

FIGURE 2.32

Signal-flow graph of a multiple-loop system.

Finally, the transfer function is

\[T(s) = \frac{Y(s)}{R(s)} = \frac{P_{1}(s) + P_{2}(s)\Delta_{2}(s) + P_{3}(s)}{\Delta(s)}. \]

95.1. DESIGN EXAMPLES

In this section, we present four illustrative design examples. The first example describes modeling of a photovoltaic generator in a manner amenable to feedback control to achieve maximum power delivery as the sunlight varies over time. Using feedback control to improve the efficiency of producing electricity using solar energy in areas of abundant sunlight is a valuable contribution to green engineering. In the second example, we present a detailed look at modeling of the fluid level in a reservoir. The modeling is presented in a very detailed manner to emphasize the effort required to obtain a linear model in the form of a transfer function. The remaining two examples include an electric traction motor model development and the design of a low-pass filter.

96. EXAMPLE 2.11 Photovoltaic generators

Photovoltaic cells were developed at Bell Laboratories in 1954. Solar cells are one example of photovoltaic cells and convert solar light to electricity. Other types of photovoltaic cells can detect radiation and measure light intensity. The use of solar cells to produce energy supports the principles of green engineering by minimizing pollution. Solar panels minimize the depletion of natural resources and are effective in areas where sunlight is abundant. Photovoltaic generators are systems that provide electricity using an assortment of photovoltaic modules comprised of interconnected solar cells. Photovoltaic generators can be used to recharge batteries, they can be directly connected to an electrical grid, or they can drive electric motors without a battery [34-42].

The power output of a solar cell varies with available solar light, temperature, and external loads. To increase the overall efficiency of the photovoltaic generator, feedback control strategies can be employed to seek to maximize the power output. This is known as maximum power point tracking (MPPT) [34-36]. There are certain values of current and voltage associated with the solar cells corresponding to the maximum power output. The MPPT uses closed-loop feedback control to seek the optimal point to allow the power converter circuit to extract the maximum power from the photovoltaic generator system. We will discuss the control design in later chapters, but here we focus on the modeling of the system.

The solar cell can be modeled as an equivalent circuit shown in Figure 2.33 composed of a current generator, \(I_{PH}\), a light sensitive diode, a series resistance, \(R_{S}\), and a shunt resistance, \(R_{P}\) [34, 36-38].

FIGURE 2.33

Equivalent circuit of the photovoltaic generator.

FIGURE 2.34

Voltage versus current and power versus current for an example photovoltaic generator at a specific insolation level.

The output voltage, \(V_{PV}\), is given by

\[V_{PV} = \frac{N}{\lambda}ln\left( \frac{I_{PH} - I_{PV} + MI_{0}}{MI_{0}} \right) - \frac{N}{M}R_{S}I_{PV}, \]

where the photovoltaic generator is comprised of \(M\) parallel strings with \(N\) series cells per string, \(I_{0}\) is the reverse saturation current of the diode, \(I_{PH}\) represents the insolation level, and \(\lambda\) is a known constant that depends on the cell material [34-36]. The insolation level is a measure of the amount of incident solar radiation on the solar cells.

Suppose that we have a single silicon solar panel \((M = 1)\) with 10 series cells \((N = 10)\) and the parameters given by \(1/\lambda = 0.05\text{ }V,R_{S} = 0.025\Omega,I_{PH} = 3\text{ }A\), and \(I_{0} = 0.001\text{ }A\). The voltage versus current relationship in Equation (2.99) and the power versus voltage are shown in Figure 2.34 for one particular insolation level where \(I_{PH} = 3\text{ }A\). In Figure 2.34, we see that when \(dP/dI_{PV} = 0\) we are at the maximum power level with an associated \(V_{PV} = V_{mp}\) and \(I_{PV} = I_{mp}\), the values of voltage and current at the maximum power, respectively. As the sunlight varies, the insolation level, \(I_{PH}\), varies resulting in different power curves.
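A brief m-file sketch of this calculation is given below. It simply sweeps the cell current, evaluates Equation (2.99) with the parameter values listed above, and locates the maximum power point numerically; the script itself is an illustration and is not part of the text.

```matlab
% Sketch only: sweep Ipv, evaluate Equation (2.99), and locate the maximum power point.
M = 1; N = 10; lam_inv = 0.05; Rs = 0.025; Iph = 3; I0 = 0.001;  % example parameters
Ipv = linspace(0, Iph, 1000);                                     % candidate currents
Vpv = N*lam_inv*log((Iph - Ipv + M*I0)/(M*I0)) - (N/M)*Rs*Ipv;    % Equation (2.99)
P   = Vpv.*Ipv;                                                   % delivered power
[Pmax, k] = max(P);
Vmp = Vpv(k), Imp = Ipv(k), Pmax                                  % maximum power point
```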

The goal of the power point tracking is to seek the voltage and current condition that maximizes the power output as conditions vary. This is accomplished by varying the reference voltage as a function of the insolation level. The reference voltage is the voltage at the maximum power point as shown in Figure 2.35. The feedback control system should track the reference voltage in a rapid and accurate fashion.

Figure 2.36 illustrates a simplified block diagram of the controlled system. The main components are a power circuit (e.g., a phase control IC and a thyristor bridge), photovoltaic generator, and current transducer. The plant including the

FIGURE 2.35 Maximum power point for varying values of \(I_{PH}\) specifies \(V_{\text{ref}\text{~}}\).

FIGURE 2.36 Block diagram of feedback control system for maximum power transfer.

power circuit, photovoltaic generator, and current transducer is modeled as a second-order transfer function given by

\[G(s) = \frac{K}{s(s + p)}, \]

where \(K\) and \(p\) depend on the photovoltaic generator and associated electronics [35]. The controller, \(G_{c}(s)\), in Figure 2.36 is designed such that as the insolation level varies (that is, as \(I_{PH}\) varies), the voltage output will approach the reference input voltage, \(V_{\text{ref}}(s)\), which has been set to the voltage associated with the maximum power point, resulting in maximum power transfer. If, for example, the controller is the proportional plus integral controller

\[G_{c}(s) = K_{P} + \frac{K_{I}}{s}, \]

the closed-loop transfer function is

\[T(s) = \frac{K\left( K_{P}s + K_{I} \right)}{s^{3} + ps^{2} + KK_{P}s + KK_{I}}. \]

We can select the controller gains in Equation (2.101) to place the poles of \(T(s)\) in the desired locations to meet the desired performance specifications.
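As a hedged illustration of this last step, the m-file sketch below forms \(T(s)\) with the feedback function for one assumed set of values of \(K\), \(p\), \(K_{P}\), and \(K_{I}\) (these numbers are not from the text) and then lists the resulting pole locations.

```matlab
% Sketch only: K, p, KP, KI are assumed values chosen for illustration.
K = 300; p = 10; KP = 2; KI = 5;
G  = tf(K, [1 p 0]);              % G(s) = K/(s(s + p))
Gc = tf([KP KI], [1 0]);          % Gc(s) = KP + KI/s
T  = feedback(series(Gc, G), 1)   % closed-loop transfer function of Figure 2.36
pole(T)                           % pole locations determined by the chosen gains
```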

97. EXAMPLE 2.12 Fluid flow modeling

A fluid flow system is shown in Figure 2.37. The reservoir (or tank) contains water that evacuates through an output port. Water is fed to the reservoir through a pipe controlled by an input valve. The variables of interest are the fluid velocity \(V(\text{ }m/s)\), fluid height in the reservoir \(H(\text{ }m)\), and pressure \(p\left( \text{ }N/m^{2} \right)\). The pressure is defined as the force per unit area exerted by the fluid on a surface immersed in (and at rest with respect to) the fluid. Fluid pressure acts normal to the surface. For further reading on fluid flow modeling, see [28-30].

The elements of the control system design process emphasized in this example are shown in Figure 2.38. The strategy is to establish the system configuration and then obtain the appropriate mathematical models describing the fluid flow reservoir from an input-output perspective.

The general equations of motion and energy describing fluid flow are quite complicated. The governing equations are coupled nonlinear partial differential equations. We must make some selective assumptions that reduce the complexity of the mathematical model. Although the control engineer is not required to be a fluid dynamicist, and a deep understanding of fluid dynamics is not necessarily acquired during the control system design process, it makes good engineering sense to gain at least a rudimentary understanding of the important simplifying assumptions. For a more complete discussion of fluid motion, see [31-33].

To obtain a realistic, yet tractable, mathematical model for the fluid flow reservoir, we first make several key assumptions. We assume that the water in the tank is incompressible and that the flow is inviscid, irrotational and steady. An incompressible fluid has a constant density \(\rho\left( kg/m^{3} \right)\). In fact, all fluids are compressible to some extent. The compressibility factor, \(k\), is a measure of the compressibility of

FIGURE 2.37

The fluid flow reservoir configuration.

FIGURE 2.38 Elements of the control system design process emphasized in the fluid flow reservoir example.

a fluid. A smaller value of \(k\) indicates less compressibility. Air (which is a compressible fluid) has a compressibility factor of \(k_{\text{air}\text{~}} = 0.98{\text{ }m}^{2}/N\), while water has a compressibility factor of \(k_{H_{2}O} = 4.9 \times 10^{- 10}{\text{ }m}^{2}/N = 50 \times 10^{- 6}{\text{ }atm}^{- 1}\). In other words, a given volume of water decreases by 50 one-millionths of the original volume for each atmosphere (atm) increase in pressure. Thus the assumption that the water is incompressible is valid for our application.

Consider a fluid in motion. Suppose that initially the flow velocities are different for adjacent layers of fluid. Then an exchange of molecules between the two layers tends to equalize the velocities in the layers. This is internal friction, and the exchange of momentum is known as viscosity. Solids are more viscous than fluids, and fluids are more viscous than gases. A measure of viscosity is the coefficient of viscosity \(\mu\left( Ns/m^{2} \right)\). A larger coefficient of viscosity implies higher viscosity. The coefficient of viscosity (under standard conditions, \(20^{\circ}C\) ) for air is \(\mu_{\text{air}} = 0.178 \times 10^{- 4}\text{ }N\text{ }s/m^{2}\), and for water we have \(\mu_{H_{2}O} = 1.054 \times 10^{- 3}\text{ }N\text{ }s/m^{2}\).

Therefore water is about 60 times more viscous than air. Viscosity depends primarily on temperature, not pressure. For comparison, water at \(0^{\circ}C\) is about 2 times more viscous than water at \(20^{\circ}C\). With fluids of low viscosity, such as air and water, the effects of friction are important only in the boundary layer, a thin layer adjacent to the wall of the reservoir and output pipe. We can neglect viscosity in our model development. We say our fluid is inviscid.

If each fluid element at each point in the flow has no net angular velocity about that point, the flow is termed irrotational. Imagine a small paddle wheel immersed in the fluid (say in the output port). If the paddle wheel translates without rotating, the flow is irrotational. We will assume the water in the tank is irrotational. For an inviscid fluid, an initially irrotational flow remains irrotational.

The water flow in the tank and output port can be either steady or unsteady. The flow is steady if the velocity at each point is constant in time. This does not necessarily imply that the velocity is the same at every point, but rather that at any given point the velocity does not change with time. Steady-state conditions can be achieved at low fluid speeds. We will assume steady flow conditions. If the output port area is too large, then the flow through the reservoir may not be slow enough to establish the steady-state condition that we are assuming exists and our model will not accurately predict the fluid flow motion.

To obtain a mathematical model of the flow within the reservoir, we employ basic principles of science and engineering, such as the principle of conservation of mass. The mass of water in the tank at any given time is

\[m(t) = \rho A_{1}H(t) \]

where \(A_{1}\) is the area of the tank, \(\rho\) is the water density, and \(H(t)\) is the height of the water in the reservoir. The constants for the reservoir system are given in Table 2.6.

In the following formulas, a subscript 1 denotes quantities at the input, and a subscript 2 refers to quantities at the output. Taking the time derivative of \(m(t)\) in Equation (2.102) yields

\[\overset{˙}{m}(t) = \rho A_{1}\overset{˙}{H}(t), \]

where we have used the fact that our fluid is incompressible (that is, \(\overset{˙}{\rho} = 0\) ) and that the area of the tank, \(A_{1}\), does not change with time. The change in mass in the reservoir is equal to the mass that enters the tank minus the mass that leaves the tank, or

\[\overset{˙}{m}(t) = \rho A_{1}\overset{˙}{H}(t) = Q_{1}(t) - \rho A_{2}v_{2}(t), \]

where \(Q_{1}(t)\) is the input mass flow rate, \(v_{2}(t)\) is the exit velocity, and \(A_{2}\) is the output port area. The exit velocity, \(v_{2}(t)\), is a function of the water height. From Bernoulli's equation [39] we have

\[\frac{1}{2}\rho v_{1}^{2}(t) + P_{1} + \rho gH(t) = \frac{1}{2}\rho v_{2}^{2}(t) + P_{2}, \]

Table 2.6 Water Tank Physical Constants

| \(\rho\ \left( kg/m^{3} \right)\) | \(g\ \left( m/s^{2} \right)\) | \(A_{1}\ \left( m^{2} \right)\) | \(A_{2}\ \left( m^{2} \right)\) | \(H^{*}\ (m)\) | \(Q^{*}\ (kg/s)\) |
|---|---|---|---|---|---|
| 1000 | 9.8 | \(\pi/4\) | \(\pi/400\) | 1 | 34.77 |

where \(v_{1}\) is the water velocity at the mouth of the reservoir, and \(P_{1}\) and \(P_{2}\) are the atmospheric pressures at the input and output, respectively. But \(P_{1}\) and \(P_{2}\) are equal, and \(A_{2}\) is sufficiently small \(\left( A_{2} = A_{1}/100 \right)\), so the water flows out slowly and the velocity \(v_{1}(t)\) is negligible. Thus Bernoulli's equation reduces to

\[v_{2}(t) = \sqrt{2gH(t)}. \]

Substituting Equation (2.104) into Equation (2.103) and solving for \(\overset{˙}{H}(t)\) yields

\[\overset{˙}{H}(t) = - \left\lbrack \frac{A_{2}}{A_{1}}\sqrt{2g} \right\rbrack\sqrt{H(t)} + \frac{1}{\rho A_{1}}Q_{1}(t). \]

Using Equation (2.104), we obtain the exit mass flow rate

\[Q_{2}(t) = \rho A_{2}v_{2}(t) = \left( \rho\sqrt{2g}A_{2} \right)\sqrt{H(t)}. \]

To keep the equations manageable, define

\[k_{1}: = - \frac{A_{2}\sqrt{2g}}{A_{1}},\ k_{2}: = \frac{1}{\rho A_{1}},\ \text{~}\text{and}\text{~}\ k_{3}: = \rho\sqrt{2g}A_{2}. \]

Then, it follows that

\[\begin{matrix} \overset{˙}{H}(t) & \ = k_{1}\sqrt{H(t)} + k_{2}Q_{1}(t), \\ Q_{2}(t) & \ = k_{3}\sqrt{H(t)}. \end{matrix}\]

Equation (2.107) represents our model of the water tank system, where the input is \(Q_{1}(t)\) and the output is \(Q_{2}(t)\). Equation (2.107) is a nonlinear, first-order, ordinary differential equation model. The model in Equation (2.107) has the functional form

\[\begin{matrix} \overset{˙}{H}(t) & \ = f\left( H(t),Q_{1}(t) \right), \\ Q_{2}(t) & \ = h\left( H(t),Q_{1}(t) \right), \end{matrix}\]

where

\[f\left( H(t),Q_{1}(t) \right) = k_{1}\sqrt{H(t)} + k_{2}Q_{1}(t)\text{~}\text{and}\text{~}h\left( H(t),Q_{1}(t) \right) = k_{3}\sqrt{H(t)}. \]

A set of linearized equations describing the height of the water in the reservoir is obtained using Taylor series expansions about an equilibrium flow condition. When the tank system is in equilibrium, we have \(\overset{˙}{H}(t) = 0\). We can define \(Q^{*}\) and \(H^{*}\) as the equilibrium input mass flow rate and water level, respectively. The relationship between \(Q^{*}\) and \(H^{*}\) is given by

\[Q^{*} = - \frac{k_{1}}{k_{2}}\sqrt{H^{*}} = \rho\sqrt{2g}A_{2}\sqrt{H^{*}}. \]

This condition occurs when just enough water enters the tank in \(A_{1}\) to make up for the amount leaving through \(A_{2}\). We can write the water level and input mass flow rate as

\[\begin{matrix} H(t) = H^{*} + \Delta H(t), \\ Q_{1}(t) = Q^{*} + \Delta Q_{1}(t), \end{matrix}\]

where \(\Delta H(t)\) and \(\Delta Q_{1}(t)\) are small deviations from the equilibrium (steady-state) values. The Taylor series expansion about the equilibrium conditions is given by

\[\begin{matrix} \overset{˙}{H}(t) = f\left( H(t),Q_{1}(t) \right) = f\left( H^{*},Q^{*} \right) + \left. \ \frac{\partial f}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}}\left( H(t) - H^{*} \right) \\ + \left. \ \frac{\partial f}{\partial Q_{1}} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}}\left( Q_{1}(t) - Q^{*} \right) + \ldots, \end{matrix}\]

where

\[\left. \ \frac{\partial f}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = \left. \ \frac{\partial\left( k_{1}\sqrt{H} + k_{2}Q_{1} \right)}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = \frac{1}{2}\frac{k_{1}}{\sqrt{H^{*}}},\]

and

\[\left. \ \frac{\partial f}{\partial Q_{1}} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = \left. \ \frac{\partial\left( k_{1}\sqrt{H} + k_{2}Q_{1} \right)}{\partial Q_{1}} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = k_{2}.\]

Using Equation (2.108), we have

\[\sqrt{H^{*}} = \frac{Q^{*}}{\rho\sqrt{2g}A_{2}}, \]

so that

\[\left. \ \frac{\partial f}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = - \frac{A_{2}^{2}}{A_{1}}\frac{g\rho}{Q^{*}}.\]

It follows from Equation (2.109) that

\[\overset{˙}{H}(t) = \Delta\overset{˙}{H}(t), \]

since \(H^{*}\) is constant. Also, the term \(f\left( H^{*},Q^{*} \right)\) is identically zero, by definition of the equilibrium condition. Neglecting the higher order terms in the Taylor series expansion yields

\[\Delta\overset{˙}{H}(t) = - \frac{A_{2}^{2}}{A_{1}}\frac{g\rho}{Q^{*}}\Delta H(t) + \frac{1}{\rho A_{1}}\Delta Q_{1}(t). \]

Equation (2.111) is a linear model describing the deviation in water level \(\Delta H(t)\) from the steady state due to a deviation from the nominal input mass flow rate \(\Delta Q_{1}(t)\).

Similarly, for the output variable \(Q_{2}(t)\) we have

\[\begin{matrix} Q_{2}(t) & \ = Q_{2}^{*} + \Delta Q_{2}(t) = h\left( H(t),Q_{1}(t) \right) \\ \approx & h\left( H^{*},Q^{*} \right) + \left. \ \frac{\partial h}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}}\Delta H(t) + \left. \ \frac{\partial h}{\partial Q_{1}} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}}\Delta Q_{1}(t), \end{matrix}\]

where \(\Delta Q_{2}(t)\) is a small deviation in the output mass flow rate and

\[\left. \ \frac{\partial h}{\partial H} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = \frac{g\rho^{2}A_{2}^{2}}{Q^{*}},\]

and

\[\left. \ \frac{\partial h}{\partial Q_{1}} \right|_{\begin{matrix} H = H^{*} \\ Q1 = Q^{*} \end{matrix}} = 0.\]

Therefore, the linearized equation for the output variable \(Q_{2}(t)\) is

\[\Delta Q_{2}(t) = \frac{g\rho^{2}A_{2}^{2}}{Q^{*}}\Delta H(t). \]

For control system design and analysis, it is convenient to obtain the input-output relationship in the form of a transfer function. The tool to accomplish this is the Laplace transform. Taking the time-derivative of Equation (2.113) and substituting into Equation (2.111) yields the input-output relationship

\[\Delta{\overset{˙}{Q}}_{2}(t) + \frac{A_{2}^{2}}{A_{1}}\frac{g\rho}{Q^{*}}\Delta Q_{2}(t) = \frac{A_{2}^{2}g\rho}{A_{1}Q^{*}}\Delta Q_{1}(t). \]

If we define

\[\Omega: = \frac{A_{2}^{2}}{A_{1}}\frac{g\rho}{Q^{*}} \]

then we have

\[\Delta{\overset{˙}{Q}}_{2}(t) + \Omega\Delta Q_{2}(t) = \Omega\Delta Q_{1}(t) \]

Taking the Laplace transform (with zero initial conditions) yields the transfer function

\[\Delta Q_{2}(s)/\Delta Q_{1}(s) = \frac{\Omega}{s + \Omega}. \]

Equation (2.116) describes the relationship between the change in the output mass flow rate \(\Delta Q_{2}(s)\) due to a change in the input mass flow rate \(\Delta Q_{1}(s)\). We can also obtain a transfer function relationship between the change in the input mass flow rate and the change in the water level in the tank, \(\Delta H(s)\). Taking the Laplace transform (with zero initial conditions) of Eq. (2.111) yields

\[\Delta H(s)/\Delta Q_{1}(s) = \frac{k_{2}}{s + \Omega}. \]

Given the linear time-invariant model of the water tank system in Equation (2.115), we can obtain solutions for step and sinusoidal inputs. Remember that our input \(\Delta Q_{1}(s)\) is actually a change in the input mass flow rate from the steady-state value \(Q^{*}\).

Consider the step input

\[\Delta Q_{1}(s) = q_{o}/s \]

where \(q_{o}\) is the magnitude of the step input, and the initial condition is \(\Delta Q_{2}(0) = 0\). Then we can use the transfer function form given in Eq. (2.116) to obtain

\[\Delta Q_{2}(s) = \frac{q_{o}\Omega}{s(s + \Omega)}. \]

The partial fraction expansion yields

\[\Delta Q_{2}(s) = \frac{- q_{o}}{s + \Omega} + \frac{q_{o}}{s}. \]

Taking the inverse Laplace transform yields

\[\Delta Q_{2}(t) = - q_{o}e^{- \Omega t} + q_{o}. \]

Note that \(\Omega > 0\) (see Equation (2.114)), so the term \(e^{- \Omega t}\) approaches zero as \(t\) approaches \(\infty\). Therefore, the steady-state output due to the step input of magnitude \(q_{o}\) is

\[\Delta Q_{2_{ss}} = q_{o}. \]

We see that in the steady state, the deviation of the output mass flow rate from the equilibrium value is equal to the deviation of the input mass flow rate from the equilibrium value. By examining the variable \(\Omega\) in Equation (2.114), we find that the larger the output port opening \(A_{2}\), the faster the system reaches steady state. In other words, as \(\Omega\) gets larger, the exponential term \(e^{- \Omega t}\) vanishes more quickly, and steady state is reached faster.

Similarly for the water level we have

\[\Delta H(s) = \frac{- q_{o}k_{2}}{\Omega}\left( \frac{1}{s + \Omega} - \frac{1}{s} \right). \]

Taking the inverse Laplace transform yields

\[\Delta H(t) = \frac{- q_{o}k_{2}}{\Omega}\left( e^{- \Omega t} - 1 \right). \]

The steady-state change in water level due to the step input of magnitude \(q_{o}\) is

\[\Delta H_{ss} = \frac{q_{o}k_{2}}{\Omega}. \]

Consider the sinusoidal input

\[\Delta Q_{1}(t) = q_{o}sin\omega t \]

which has Laplace transform

\[\Delta Q_{1}(s) = \frac{q_{o}\omega}{s^{2} + \omega^{2}}. \]

Suppose the system has zero initial conditions, that is, \(\Delta Q_{2}(0) = 0\). Then from Equation (2.116) we have

\[\Delta Q_{2}(s) = \frac{q_{o}\omega\Omega}{(s + \Omega)\left( s^{2} + \omega^{2} \right)}. \]

Expanding in a partial fraction expansion and taking the inverse Laplace transform yields

\[\Delta Q_{2}(t) = q_{o}\Omega\omega\left( \frac{e^{- \Omega t}}{\Omega^{2} + \omega^{2}} + \frac{sin(\omega t - \phi)}{\omega\left( \Omega^{2} + \omega^{2} \right)^{1/2}} \right), \]

where \(\phi = \tan^{- 1}(\omega/\Omega)\). So, as \(t \rightarrow \infty\), we have

\[\Delta Q_{2}(t) \rightarrow \frac{q_{o}\Omega}{\sqrt{\Omega^{2} + \omega^{2}}}sin(\omega t - \phi). \]

The maximum change in output flow rate is

\[\left| \Delta Q_{2}(t) \right|_{\max} = \frac{q_{o}\Omega}{\sqrt{\Omega^{2} + \omega^{2}}}. \]

The above analytic analysis of the linear system model to step and sinusoidal inputs is a valuable way to gain insight into the system response to test signals. Analytic analysis is limited, however, in the sense that a more complete representation can be obtained with carefully constructed numerical investigations using computer simulations of both the linear and nonlinear mathematical models. A computer simulation uses a model and the actual conditions of the system being modeled, as well as actual input commands to which the system will be subjected.

Various levels of simulation fidelity (that is, accuracy) are available to the control engineer. In the early stages of the design process, highly interactive design software packages are effective. At this stage, computer speed is not as important as the time it takes to obtain an initial valid solution and to iterate and fine tune that solution. Good graphics output capability is crucial. The analysis simulations are generally low fidelity in the sense that many of the simplifications (such as linearization) made in the design process are retained in the simulation.

As the design matures usually it is necessary to conduct numerical experiments in a more realistic simulation environment. At this point in the design process, the computer processing speed becomes more important, since long simulation times necessarily reduce the number of computer experiments that can be obtained and correspondingly raise costs. Usually these high-fidelity simulations are programmed in FORTRAN, C, C++, MATLAB, LabVIEW or similar languages.

Assuming that a model and the simulation are reliably accurate, computer simulation has the following advantages [13]:

  1. System performance can be observed under all conceivable conditions.

  2. Results of field-system performance can be extrapolated with a simulation model for prediction purposes.

  3. Decisions concerning future systems presently in a conceptual stage can be examined.

  4. Trials of systems under test can be accomplished in a much-reduced period of time.

  5. Simulation results can be obtained at lower cost than real experimentation.

  6. Study of hypothetical situations can be achieved even when the hypothetical situation would be unrealizable at present.

  7. Computer modeling and simulation is often the only feasible or safe technique to analyze and evaluate a system.

The nonlinear model describing the water level flow rate is as follows (using the constants given in Table 2.6):

\[\begin{matrix} \overset{˙}{H}(t) & \ = - 0.0443\sqrt{H(t)} + 1.2732 \times 10^{- 3}Q_{1}(t), \\ Q_{2}(t) & \ = 34.77\sqrt{H(t)}. \end{matrix}\]

FIGURE 2.39

The tank water level time history obtained by integrating the nonlinear equations of motion in Equation (2.119) with \(H(0) = 0.5\text{ }m\) and \(Q_{1}(t) = Q^{*} = 34.77\text{ }kg/s\).

With \(H(0) = 0.5\text{ }m\) and \(Q_{1}(t) = 34.77\text{ }kg/s\), we can numerically integrate the nonlinear model given by Equation (2.119) to obtain the time history of \(H(t)\) and \(Q_{2}(t)\). The response of the system is shown in Figure 2.39. As expected from Equation (2.108), the system steady-state water level is \(H^{*} = 1\text{ }m\) when \(Q^{*} = 34.77\text{ }kg/s\).

It takes about 250 seconds to reach steady-state. Suppose that the system is at steady state and we want to evaluate the response to a step change in the input mass flow rate. Consider

\[\Delta Q_{1}(t) = 1\text{ }kg/s. \]

Then we can use the transfer function model to obtain the unit step response. The step response is shown in Figure 2.40 for both the linear and nonlinear models. Using the linear model, we find that the steady-state change in water level is \(\Delta H = 5.75\text{ }cm\). Using the nonlinear model, we find that the steady-state change in water level is \(\Delta H = 5.84\text{ }cm\). So we see a small difference in the results obtained from the linear model and the more accurate nonlinear model.
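The steady-state comparison can also be repeated directly from the formulas above. The short m-file sketch below (an illustration, not the script used to generate Figure 2.40) computes the linear prediction \(\Delta H_{ss} = q_{o}k_{2}/\Omega\) and the new equilibrium level of the nonlinear model for a \(1\text{ }kg/s\) step in the input mass flow rate.

```matlab
% Sketch only: compare the linear and nonlinear steady-state water-level change.
rho = 1000; g = 9.8; A1 = pi/4; A2 = pi/400; Qstar = 34.77; q0 = 1;   % Table 2.6 values
k2 = 1/(rho*A1); Omega = (A2^2/A1)*g*rho/Qstar;
dH_linear = q0*k2/Omega                          % linear prediction, about 0.0575 m
Hstar = (Qstar/(rho*sqrt(2*g)*A2))^2;            % equilibrium level from Equation (2.108)
Hnew  = ((Qstar + q0)/(rho*sqrt(2*g)*A2))^2;     % nonlinear equilibrium with Q1 = Q* + q0
dH_nonlinear = Hnew - Hstar                      % slightly larger than the linear prediction
```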

As the final step, we consider the system response to a sinusoidal change in the input flow rate. Let

\[\Delta Q_{1}(s) = \frac{q_{o}\omega}{s^{2} + \omega^{2}}, \]

where \(\omega = 0.05rad/s\) and \(q_{o} = 1\). The total water input flow rate is

\[Q_{1}(t) = Q^{*} + \Delta Q_{1}(t), \]

where \(Q^{*} = 34.77\text{ }kg/s\). The output flow rate is shown in Figure 2.41.

FIGURE 2.40

The response showing the linear versus nonlinear response to a step input.

FIGURE 2.41

The output flow rate response to a sinusoidal variation in the input flow.

The response of the water level is shown in Figure 2.42. The water level is sinusoidal, with an average value of \(H_{av} = H^{*} = 1\text{ }m\). As shown in Equation (2.118), the output flow rate is sinusoidal in the steady state, with

\[\left| \Delta Q_{2}(t) \right|_{\max} = \frac{q_{o}\Omega}{\sqrt{\Omega^{2} + \omega^{2}}} = 0.4\text{ }kg/s. \]

FIGURE 2.42 The water level response to a sinusoidal variation in the input flow.

Thus in the steady state (see Figure 2.41) we expect that the output flow rate will oscillate at a frequency of \(\omega = 0.05rad/s\), with a maximum value of

\[Q_{2_{\max}} = Q^{*} + \left| \Delta Q_{2}(t) \right|_{\max} = 35.18\text{ }kg/s. \]

98. EXAMPLE 2.13 Electric traction motor control

The electric motor drive is shown in block diagram form in Figure 2.43(a), incorporating the necessary control. The goal of the design is to obtain a system model and the closed-loop transfer function of the system, \(\omega(s)/\omega_{d}(s)\), select appropriate resistors \(R_{1},R_{2},R_{3}\), and \(R_{4}\), and then predict the system response.

The first step is to describe the transfer function of each block. We propose the use of a tachometer to generate a voltage proportional to velocity and to connect that voltage, \(v_{t}\), to one input of a difference amplifier, as shown in Figure 2.43(b). The power amplifier is nonlinear and can be approximately represented by \(v_{2}(t) = 2e^{3v_{1}(t)} = g\left( v_{1} \right)\), an exponential function with a normal operating point, \(v_{10} = 1.5\text{ }V\). We then obtain a linear model

\[\Delta v_{2}(t) = \left. \ \frac{dg\left( v_{1} \right)}{dv_{1}} \right|_{v_{10}}\Delta v_{1}(t) = 6e^{3v_{10}}\Delta v_{1}(t) = 540\Delta v_{1}(t). \]

Taking the Laplace transform, yields

\[\Delta V_{2}(s) = 540\Delta V_{1}(s). \]

(a)

(b)

(c)

FIGURE 2.43

Speed control of an electric traction motor.

(d)

Also, for the differential amplifier, we have

\[v_{1} = \frac{1 + R_{2}/R_{1}}{1 + R_{3}/R_{4}}v_{\text{in}\text{~}} - \frac{R_{2}}{R_{1}}v_{t}. \]

We wish to obtain an input control that sets \(\omega_{d}(t) = v_{\text{in}\text{~}}\), where the units of \(\omega_{d}\) are \(rad/s\) and the units of \(v_{\text{in}\text{~}}\) are volts. Then, when \(v_{\text{in}\text{~}} = 10\text{ }V\), the steady-state speed is \(\omega = 10rad/s\). We note that \(v_{t} = K_{t}\omega_{d}\) in steady state, and we expect, in balance, the steady-state output to be

\[v_{1} = \frac{1 + R_{2}/R_{1}}{1 + R_{3}/R_{4}}v_{\text{in}\text{~}} - \frac{R_{2}}{R_{1}}K_{t}v_{\text{in}\text{~}}. \]

Table 2.7 Parameters of a Large DC Motor

\[K_{m} = 10, \quad J = 2, \quad R_{a} = 1, \quad b = 0.5, \quad L_{a} = 1, \quad K_{b} = 0.1.\]

When the system is in balance, \(v_{1} = 0\), and when \(K_{t} = 0.1\), we have

\[\frac{1 + R_{2}/R_{1}}{1 + R_{3}/R_{4}} = \frac{R_{2}}{R_{1}}K_{t}. \]

This relation can be achieved when

\[R_{2}/R_{1} = 10\text{~}\text{and}\text{~}R_{3}/R_{4} = 10. \]

The parameters of the motor and load are given in Table 2.7. The overall system is shown in Figure 2.43(b). Reducing the block diagram in Figure 2.43(c) or the signal-flow graph in Figure 2.43(d) yields the transfer function

\[\begin{matrix} \frac{\omega(s)}{\omega_{d}(s)} & \ = \frac{540G_{1}(s)G_{2}(s)}{1 + 0.1G_{1}(s)G_{2}(s) + 540G_{1}(s)G_{2}(s)} = \frac{540G_{1}(s)G_{2}(s)}{1 + 540.1G_{1}(s)G_{2}(s)} \\ & \ = \frac{5400}{(s + 1)(2s + 0.5) + 5401} = \frac{5400}{2s^{2} + 2.5s + 5401.5} \\ & \ = \frac{2700}{s^{2} + 1.25s + 2700.75}. \end{matrix}\]

Since the characteristic equation is second order, we note that \(\omega_{n} = 52\) and \(\zeta = 0.012\), and we expect the response of the system to be highly oscillatory (underdamped).
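A compact way to verify this result is to rebuild the loop with control design software. The m-file sketch below (an illustration paralleling Figure 2.43, not reproduced from the text) closes the back-emf loop, closes the outer unity loop through the 540 amplifier gain, and reports the natural frequency and damping ratio.

```matlab
% Sketch only: rebuild the traction-motor loop of Example 2.13 and check wn and zeta.
Km = 10; J = 2; Ra = 1; b = 0.5; La = 1; Kb = 0.1;   % Table 2.7 values
G1 = tf(Km, [La Ra]);  G2 = tf(1, [J b]);
inner = feedback(series(G1, G2), Kb);   % inner back-emf loop
T = feedback(540*inner, 1)              % outer loop; equals 540*G1*G2/(1 + 540.1*G1*G2)
damp(T)                                 % expect wn near 52 rad/s and zeta near 0.012
```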

100. EXAMPLE 2.14 Design of a low-pass filter

Our goal is to design a first-order low-pass filter that passes signals at a frequency below \(106.1\text{ }Hz\) and attenuates signals with a frequency above \(106.1\text{ }Hz\). In addition, the DC gain should be \(1/2\).

A ladder network with one energy storage element, as shown in Figure 2.44(a), will act as a first-order low-pass network. Note that the DC gain will be equal to \(1/2\) (open-circuit the capacitor). The current and voltage equations are

\[\begin{matrix} & I_{1} = \left( V_{1} - V_{2} \right)G, \\ & I_{2} = \left( V_{2} - V_{3} \right)G, \\ & V_{2} = \left( I_{1} - I_{2} \right)R, \\ & V_{3} = I_{2}Z, \end{matrix}\]

where \(G = 1/R\) and \(Z(s) = 1/Cs\). The signal-flow graph constructed for the four equations is shown in Figure 2.44(b), and the corresponding block diagram is shown in Figure 2.44(c). The three loops are \(L_{1}(s) = - GR = - 1\), \(L_{2}(s) = - GR = - 1\), and \(L_{3}(s) = - GZ(s)\).

(a)

(b)

FIGURE 2.44

(a) Ladder network, (b) its signal-flow graph, and (c) its block diagram.

(c)

All loops touch the forward path. Loops \(L_{1}(s)\) and \(L_{3}(s)\) are nontouching. Therefore, the transfer function is

\[\begin{matrix} T(s) = \frac{V_{3}(s)}{V_{1}(s)} & \ = \frac{P_{1}(s)}{1 - \left( L_{1}(s) + L_{2}(s) + L_{3}(s) \right) + L_{1}(s)L_{3}(s)} = \frac{GZ(s)}{3 + 2GZ(s)} \\ & \ = \frac{1}{3RCs + 2} = \frac{1/(3RC)}{s + 2/(3RC)}. \end{matrix}\]

If one prefers to utilize block diagram reduction techniques, one can start at the output with

\[V_{3}(s) = Z(s)I_{2}(s). \]

But the block diagram shows that

\[I_{2}(s) = G\left( V_{2}(s) - V_{3}(s) \right). \]

Therefore,

\[V_{3}(s) = Z(s)GV_{2}(s) - Z(s)GV_{3}(s) \]

so

\[V_{2}(s) = \frac{1 + Z(s)G}{Z(s)G}V_{3}(s). \]

We will use this relationship between \(V_{3}(s)\) and \(V_{2}(s)\) in the subsequent development. Continuing with the block diagram reduction, we have

\[V_{3}(s) = - Z(s)GV_{3}(s) + Z(s)GR\left( I_{1}(s) - I_{2}(s) \right), \]

but from the block diagram, we see that

\[I_{1}(s) = G\left( V_{1}(s) - V_{2}(s) \right),\ I_{2}(s) = \frac{V_{3}(s)}{Z(s)}. \]

Therefore,

\[V_{3}(s) = - Z(s)GV_{3}(s) + Z(s)G^{2}R\left( V_{1}(s) - V_{2}(s) \right) - GRV_{3}(s). \]

Substituting for \(V_{2}(s)\) yields

\[V_{3}(s) = \frac{(GR)(GZ(s))}{1 + 2GR + GZ(s) + (GR)(GZ(s))}V_{1}(s). \]

But we know that \(GR = 1\); hence, we obtain

\[V_{3}(s) = \frac{GZ(s)}{3 + 2GZ(s)}V_{1}(s) = \frac{1/(3RC)}{s + 2/(3RC)}V_{1}(s). \]

Note that the DC gain is \(1/2\) as expected. The pole is desired at \(p = 2\pi(106.1) = 666.7 = 2000/3\). Therefore, we require \(RC = 0.001\). Select \(R = 1k\Omega\) and \(C = 1\mu F\). Hence, we achieve the filter

\[T(s) = \frac{333.3}{s + 666.7}. \]
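A quick numerical check of the final design can be carried out with a few commands; this sketch (not part of the text) confirms the DC gain of \(1/2\) and the pole at \(-666.7\).

```matlab
% Sketch only: verify the DC gain and pole location of the designed filter.
R = 1000; C = 1e-6;                    % R = 1 kOhm, C = 1 uF, so RC = 0.001
T = tf(1/(3*R*C), [1 2/(3*R*C)])       % T(s) = (1/(3RC))/(s + 2/(3RC))
dcgain(T)                              % returns 0.5
pole(T)                                % returns -666.7 (rad/s)
```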

100.1. THE SIMULATION OF SYSTEMS USING CONTROL DESIGN SOFTWARE

Application of the many classical and modern control system design and analysis tools is based on mathematical models. Most popular control design software packages can be used with systems given in the form of transfer function descriptions. In this book, we will focus on m-file scripts containing commands and functions to analyze and design control systems. Various commercial control system packages are available for student use. The m-files described here are compatible with the MATLAB\(^{\dagger}\) Control System Toolbox and the LabVIEW MathScript RT Module.\(^{\ddagger}\)

\(\ ^{\dagger}\) See Appendix A for an introduction to MATLAB.

\(\ ^{\ddagger}\) See Appendix B for an introduction to LabVIEW MathScript RT Module.

We begin this section by analyzing a typical spring-mass-damper mathematical model of a mechanical system. Using an m-file script, we will develop an interactive analysis capability to analyze the effects of natural frequency and damping on the unforced response of the mass displacement. This analysis will use the fact that we have an analytic solution that describes the unforced time response of the mass displacement.

Later, we will discuss transfer functions and block diagrams. In particular, we are interested in manipulating polynomials, computing poles and zeros of transfer functions, computing closed-loop transfer functions, computing block diagram reductions, and computing the response of a system to a unit step input. The section concludes with the electric traction motor control design of Example 2.13.

The functions covered in this section are roots, poly, conv, polyval, tf, pzmap, pole, zero, series, parallel, feedback, minreal, and step.

A spring-mass-damper mechanical system is shown in Figure 2.2. The motion of the mass, denoted by \(y(t)\), is described by the differential equation

\[M\ddot{y}(t) + b\dot{y}(t) + ky(t) = r(t). \]

The unforced dynamic response \(y(t)\) of the spring-mass-damper mechanical system is

\[y(t) = \frac{y(0)}{\sqrt{1 - \zeta^{2}}}e^{- \zeta\omega_{n}t}sin\left( \omega_{n}\sqrt{1 - \zeta^{2}}t + \theta \right) \]

where \(\omega_{n} = \sqrt{k/M},\zeta = b/(2\sqrt{kM})\), and \(\theta = \cos^{- 1}\zeta\). The initial displacement is \(y(0)\). The transient system response is underdamped when \(\zeta < 1\), overdamped when \(\zeta > 1\), and critically damped when \(\zeta = 1\). We can visualize the unforced time response of the mass displacement following an initial displacement of \(y(0)\). Consider the underdamped case:

\[y(0) = 0.15\text{ }m,\ \omega_{n} = \sqrt{2}\frac{rad}{s},\ \zeta = \frac{1}{2\sqrt{2}}\ \left( \frac{k}{M} = 2,\frac{b}{M} = 1 \right). \]

The commands to generate the plot of the unforced response are shown in Figure 2.45. In the setup, the variables \(y(0),\omega_{n},t\), and \(\zeta\) are input at the command level. Then the script unforced.m is executed to generate the desired plots. This creates an interactive analysis capability to analyze the effects of natural frequency and damping on the unforced response of the mass displacement. One can investigate the effects of the natural frequency and the damping on the time response by simply entering new values of \(\omega_{n}\) and \(\zeta\) at the command prompt and running the script unforced.m again. The time-response plot is shown in Figure 2.46. Notice that the script automatically labels the plot with the values of the damping coefficient and natural frequency. This avoids confusion when making many interactive simulations. Using scripts is an important aspect of developing an effective interactive design and analysis capability.
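The script unforced.m itself appears in Figure 2.45 and is not reproduced here; the sketch below indicates one plausible form it could take, assuming the variables \(y(0)\), \(\omega_{n}\), \(\zeta\), and \(t\) are entered at the command level as described above.

```matlab
% Command-level setup (example values for the underdamped case above):
y0 = 0.15; wn = sqrt(2); zeta = 1/(2*sqrt(2)); t = 0:0.1:10;
% unforced.m (a plausible sketch of the script, not the book's exact listing)
y = (y0/sqrt(1 - zeta^2))*exp(-zeta*wn*t).*sin(wn*sqrt(1 - zeta^2)*t + acos(zeta));
plot(t, y), grid on
xlabel('Time (s)'), ylabel('y(t) (m)')
title(['Unforced response:  \zeta = ', num2str(zeta), ',  \omega_n = ', num2str(wn)])
```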

For the spring-mass-damper problem, the unforced solution to the differential equation was readily available. In general, when simulating closed-loop feedback control systems subject to a variety of inputs and initial conditions, it is difficult to obtain the solution analytically. In these cases, we can compute the solutions numerically and display the solution graphically.

FIGURE 2.45

Script to analyze the spring-mass-damper.

FIGURE 2.46

Spring-mass-damper unforced response.

Most systems considered in this book can be described by transfer functions. Since the transfer function is a ratio of polynomials, we begin by investigating how to manipulate polynomials, remembering that working with transfer functions means that both a numerator polynomial and a denominator polynomial must be specified.

FIGURE 2.47

Entering the polynomial \(p(s) = s^{3} + 3s^{2} + 4\) and calculating its roots.

Polynomials are represented by row vectors containing the polynomial coefficients in order of descending degree. For example, the polynomial

\[p(s) = s^{3} + 3s^{2} + 4 \]

is entered as shown in Figure 2.47. Notice that even though the coefficient of the \(s\) term is zero, it is included in the input definition of \(p(s)\).

If \(\mathbf{p}\) is a row vector containing the coefficients of \(p(s)\) in descending degree, then \(roots(\mathbf{p})\) is a column vector containing the roots of the polynomial. Conversely, if \(\mathbf{r}\) is a column vector containing the roots of the polynomial, then \(poly(\mathbf{r})\) is a row vector with the polynomial coefficients in descending degree. We can compute the roots of the polynomial \(p(s) = s^{3} + 3s^{2} + 4\) with the roots function as shown in Figure 2.47. In this figure, we show how to reassemble the polynomial with the poly function.
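The corresponding commands take the following form (a sketch consistent with Figure 2.47, not a verbatim reproduction of it):

```matlab
% Enter p(s) = s^3 + 3s^2 + 0s + 4 and compute its roots (see Figure 2.47).
p = [1 3 0 4];     % the zero coefficient of the s term must be included
r = roots(p)       % column vector containing the three roots of p(s)
q = poly(r)        % reassembles the polynomial coefficients from the roots
```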

Multiplication of polynomials is accomplished with the conv function. Suppose we want to expand the polynomial

\[n(s) = \left( 3s^{2} + 2s + 1 \right)(s + 4). \]

The associated commands using the conv function are shown in Figure 2.48. Thus, the expanded polynomial is

\[n(s) = 3s^{3} + 14s^{2} + 9s + 4 \]

FIGURE 2.48

Using conv and polyval to multiply and evaluate the polynomials \(\left( 3s^{2} + 2s + 1 \right)(s + 4)\).

FIGURE 2.49

(a) The tf function.

(b) Using the tf function to create transfer function objects and adding them using the "+" operator.

(a)

(b)

The function polyval is used to evaluate the value of a polynomial at the given value of the variable. The polynomial \(n(s)\) has the value \(n( - 5) = - 66\), as shown in Figure 2.48.
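A sketch of the commands consistent with Figure 2.48 is:

```matlab
% Multiply (3s^2 + 2s + 1)(s + 4) with conv and evaluate the product at s = -5.
p1 = [3 2 1]; p2 = [1 4];
n = conv(p1, p2)        % returns [3 14 9 4], i.e., n(s) = 3s^3 + 14s^2 + 9s + 4
value = polyval(n, -5)  % returns -66
```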

Linear, time-invariant system models can be treated as objects, allowing one to manipulate the system models as single entities. In the case of transfer functions, one creates the system models using the tf function; for state variable models one employs the ss function. The use of tf is illustrated in Figure 2.49(a). For example, consider the two system models

\[G_{1}(s) = \frac{10}{s^{2} + 2s + 5}\text{~}\text{and}\text{~}G_{2}(s) = \frac{1}{s + 1}. \]

The systems \(G_{1}(s)\) and \(G_{2}(s)\) can be added using the "+" operator yielding

\[G(s) = G_{1}(s) + G_{2}(s) = \frac{s^{2} + 12s + 15}{s^{3} + 3s^{2} + 7s + 5}. \]

The corresponding commands are shown in Figure 2.49(b) where sys1 represents \(G_{1}(s)\) and sys2 represents \(G_{2}(s)\). Computing the poles and zeros associated with a transfer function is accomplished by operating on the system model object with the pole and zero functions, respectively, as illustrated in Figure 2.50.
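A sketch of the commands consistent with Figures 2.49(b) and 2.50(b) is:

```matlab
% Create G1(s) and G2(s) as transfer function objects, add them, and list poles and zeros.
sys1 = tf([10], [1 2 5]);       % G1(s) = 10/(s^2 + 2s + 5)
sys2 = tf([1], [1 1]);          % G2(s) = 1/(s + 1)
sys = sys1 + sys2               % G(s) = (s^2 + 12s + 15)/(s^3 + 3s^2 + 7s + 5)
p = pole(sys), z = zero(sys)    % pole and zero locations of G(s)
```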

In the next example, we obtain a plot of the pole-zero locations in the complex plane. This will be accomplished using the pzmap function, shown in Figure 2.51. On the pole-zero map, zeros are denoted by an "o" and poles are denoted by an "\(\times\)". If the pzmap function is invoked without left-hand arguments, the plot is generated automatically.

FIGURE 2.50 (a) The pole and zero functions. (b) Using the pole and zero functions to compute the pole and zero locations of a linear system.

FIGURE 2.51 The pzmap function.

101. EXAMPLE 2.15 Transfer functions

Consider the transfer functions

\[G(s) = \frac{6s^{2} + 1}{s^{3} + 3s^{2} + 3s + 1}\ \text{~}\text{and}\text{~}\ H(s) = \frac{(s + 1)(s + 2)}{(s + 2i)(s - 2i)(s + 3)}. \]

Using an m-file script, we can compute the poles and zeros of \(G(s)\), the characteristic equation of \(H(s)\), and divide \(G(s)\) by \(H(s)\). We can also obtain a plot of the pole-zero map of \(G(s)/H(s)\) in the complex plane.
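One possible m-file for these steps is sketched below; Figures 2.52 and 2.53 show the book's plot and commands, and the variable names here are our own.

numg = [6 0 1]; deng = [1 3 3 1]; sysg = tf(numg, deng);   % G(s)
numh = conv([1 1], [1 2]);      % (s+1)(s+2)
denh = conv([1 0 4], [1 3]);    % (s^2+4)(s+3) = (s+2i)(s-2i)(s+3)
sysh = tf(numh, denh);          % H(s)
zero(sysg), pole(sysg)          % zeros and poles of G(s)
roots(denh)                     % roots of the characteristic equation of H(s)
sys = sysg/sysh;                % G(s)/H(s)
pzmap(sys)                      % pole-zero map in the complex plane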

The pole-zero map of the transfer function \(G(s)/H(s)\) is shown in Figure 2.52, and the associated commands are shown in Figure 2.53. The pole-zero map shows clearly the five zero locations, but it appears that there are only two poles.

FIGURE 2.52 Pole-zero map for \(G(s)/H(s)\).

>> numg=[6 0 1]; deng=[1 3 3 1]; sysg=tf(numg,deng);
>> z=zero(sysg)

This cannot be the case, since we know that for physical systems the number of poles must be greater than or equal to the number of zeros. Using the roots function, we can ascertain that there are in fact four poles at \(s = - 1\). Hence, multiple poles or multiple zeros at the same location cannot be discerned on the pole-zero map.

FIGURE 2.54 Open-loop control system (without feedback).

Suppose we have developed mathematical models in the form of transfer functions for a process, represented by \(G(s)\), and a controller, represented by \(G_{c}(s)\), and possibly many other system components such as sensors and actuators. Our objective is to interconnect these components to form a control system.

A simple open-loop control system can be obtained by interconnecting a process and a controller in series as illustrated in Figure 2.54. We can compute the transfer function from \(R(s)\) to \(Y(s)\), as follows.

102. EXAMPLE 2.16 Series connection

Let the process represented by the transfer function \(G(s)\) be

\[G(s) = \frac{1}{500s^{2}}, \]

and let the controller represented by the transfer function \(G_{c}(s)\) be

\[G_{c}(s) = \frac{s + 1}{s + 2} \]

We can use the series function to cascade two transfer functions \(G_{1}(s)\) and \(G_{2}(s)\), as shown in Figure 2.55.

The transfer function \(G_{c}(s)G(s)\) is computed using the series function as shown in Figure 2.56. The resulting transfer function is

\[G_{c}(s)G(s) = \frac{s + 1}{500s^{3} + 1000s^{2}} = \text{~}\text{sys,}\text{~} \]

where sys is the transfer function name in the m-file script.

FIGURE 2.55 (a) Block diagram. (b) The series function.

FIGURE 2.56 Application of the series function.

FIGURE 2.57 (a) Block diagram. (b) The parallel function.

FIGURE 2.58 A basic control system with unity feedback.

Block diagrams quite often have transfer functions in parallel. In such cases, the function parallel can be quite useful. The parallel function is described in Figure 2.57.
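A minimal sketch of a parallel connection, reusing the two systems of Figure 2.49 (this particular pairing is our illustration, not the book's figure):

>> sys1 = tf([10], [1 2 5]);    % G1(s)
>> sys2 = tf([1], [1 1]);       % G2(s)
>> sys = parallel(sys1, sys2)   % equivalent to sys1 + sys2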

We can introduce a feedback signal into the control system by closing the loop with unity feedback, as shown in Figure 2.58. The signal \(E_{a}(s)\) is an error signal; the signal \(R(s)\) is a reference input. In this control system, the controller is in the forward path, and the closed-loop transfer function is

\[T(s) = \frac{G_{c}(s)G(s)}{1 \mp G_{c}(s)G(s)}. \]

FIGURE 2.59 (a) Block diagram. (b) The feedback function with unity feedback.

FIGURE 2.60 (a) Block diagram. (b) The feedback function.

We can utilize the feedback function to aid in the block diagram reduction process to compute closed-loop transfer functions for single- and multiple-loop control systems.

It is often the case that the closed-loop control system has unity feedback, as illustrated in Figure 2.58. We can use the feedback function to compute the closed-loop transfer function by setting \(H(s) = 1\). The use of the feedback function for unity feedback is depicted in Figure 2.59.

The feedback function is shown in Figure 2.60 with the associated system configuration, which includes \(H(s)\) in the feedback path. If the input "sign" is omitted, then negative feedback is assumed.

FIGURE 2.61 (a) Block diagram. (b) Application of the feedback function.

FIGURE 2.62 A basic control system with the controller in the feedback loop.

103. EXAMPLE 2.17 The feedback function with unity feedback

Let the process, \(G(s)\), and the controller, \(G_{c}(s)\), be as in Figure 2.61(a). To apply the feedback function, we first use the series function to compute \(G_{c}(s)G(s)\), followed by the feedback function to close the loop. The command sequence is shown in Figure 2.61(b). The closed-loop transfer function, as shown in Figure 2.61(b), is

\[T(s) = \frac{G_{c}(s)G(s)}{1 + G_{c}(s)G(s)} = \frac{s + 1}{500s^{3} + 1000s^{2} + s + 1} = \text{~}\text{sys.}\text{~} \]
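The command sequence of Figure 2.61(b) is along the following lines (a sketch; the variable names are assumed, not copied from the figure):

>> numg = [1]; deng = [500 0 0]; sysg = tf(numg, deng);   % G(s) = 1/(500s^2)
>> numc = [1 1]; denc = [1 2]; sysc = tf(numc, denc);     % Gc(s) = (s+1)/(s+2)
>> sys1 = series(sysc, sysg);                             % Gc(s)G(s)
>> sys = feedback(sys1, [1])                              % close the unity feedback loop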

Another basic feedback control configuration is shown in Figure 2.62. In this case, the controller is located in the feedback path. The closed-loop transfer function is

\[T(s) = \frac{G(s)}{1 \mp G(s)H(s)}. \]

104. EXAMPLE 2.18 The feedback function

Let the process, \(G(s)\), and the controller, \(H(s)\), be as in Figure 2.63(a). To compute the closed-loop transfer function with the controller in the feedback loop, we use the feedback function. The command sequence is shown in Figure 2.63(b).

FIGURE 2.63 Application of the feedback function: (a) block diagram, (b) m-file script:

>> numg=[1]; deng=[500 0 0]; sys1=tf(numg,deng);
>> numh=[1 1]; denh=[1 2]; sys2=tf(numh,denh);
>> sys=feedback(sys1,sys2);
>> sys

Transfer function:

\[\frac{s + 2}{500s^{3} + 1000s^{2} + s + 1} \longleftarrow \frac{Y(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)} \]

The closed-loop transfer function is

\[T(s) = \frac{s + 2}{500s^{3} + 1000s^{2} + s + 1} = \text{~}\text{sys.}\text{~} \]

The functions series, parallel, and feedback can be used as aids in block diagram manipulations for multiple-loop block diagrams.

105. EXAMPLE 2.19 Multiloop reduction

A multiloop feedback system is shown in Figure 2.26. Our objective is to compute the closed-loop transfer function, \(T(s)\), with

\[\begin{matrix} & G_{1}(s) = \frac{1}{s + 10},\ G_{2}(s) = \frac{1}{s + 1}, \\ & G_{3}(s) = \frac{s^{2} + 1}{s^{2} + 4s + 4},\ G_{4}(s) = \frac{s + 1}{s + 6}, \\ & H_{1}(s) = \frac{s + 1}{s + 2},\ H_{2}(s) = 2,\ \text{~}\text{and}\text{~}\ H_{3}(s) = 1. \end{matrix}\]

For this example, a five-step procedure is followed:

$\square\ $ Step 1. Input the system transfer functions.

$\square\ $ Step 2. Move \(H_{2}(s)\) behind \(G_{4}(s)\).

$\square\ $ Step 3. Eliminate the \(G_{3}(s)G_{4}(s)H_{1}(s)\) loop.

$\square\ $ Step 4. Eliminate the loop containing \(H_{2}(s)\).

$\square\ $ Step 5. Eliminate the remaining loop and calculate \(T(s)\).

FIGURE 2.64 Multiple-loop block reduction.

The five steps are utilized in Figure 2.64, and the corresponding block diagram reduction is shown in Figure 2.27. The result of executing the commands is

\[\text{~}\text{sys}\text{~} = \frac{s^{5} + 4s^{4} + 6s^{3} + 6s^{2} + 5s + 2}{12s^{6} + 205s^{5} + 1066s^{4} + 2517s^{3} + 3128s^{2} + 2196s + 712}. \]

We must be careful in calling this the closed-loop transfer function. The transfer function is defined as the input-output relationship after pole-zero cancellations. If we compute the poles and zeros of \(T(s)\), we find that the numerator and denominator polynomials have \((s + 1)\) as a common factor. This must be canceled before we can claim we have the closed-loop transfer function. To assist us in the pole-zero cancellation, we will use the minreal function. The minreal function, shown in Figure 2.65, removes common pole-zero factors of a transfer function. The final step in the block reduction process is to cancel out the common factors, as shown in Figure 2.66. After the application of the minreal function, we find that the order of the denominator polynomial has been reduced from six to five, implying one pole-zero cancellation.
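As a minimal illustration of this final step (here sys denotes the sixth-order closed-loop system obtained in Figure 2.64; the call itself is the standard use of minreal):

>> sysmr = minreal(sys)   % cancels the common (s+1) factor; the denominator order drops from six to five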

FIGURE 2.65 The minreal function.

FIGURE 2.66 Application of the minreal function.

FIGURE 2.67 Electric traction motor block reduction:

>> num1=[10]; den1=[1 1]; sys1=tf(num1,den1);
>> num2=[1]; den2=[2 0.5]; sys2=tf(num2,den2);
>> num3=[540]; den3=[1]; sys3=tf(num3,den3);
>> num4=[0.1]; den4=[1]; sys4=tf(num4,den4);
>> sys5=series(sys1,sys2);
>> sys6=feedback(sys5,sys4);   % eliminate inner loop
>> sys7=series(sys3,sys6);
>> sys=feedback(sys7,[1])      % compute closed-loop transfer function

106. EXAMPLE 2.20 Electric traction motor control

Finally, let us reconsider the electric traction motor system from Example 2.13. The block diagram is shown in Figure 2.43(c). The objective is to compute the closed-loop transfer function and investigate the response of \(\omega(s)\) to a commanded \(\omega_{d}(s)\). The first step, as shown in Figure 2.67, is to compute the closed-loop transfer function \(\omega(s)/\omega_{d}(s) = T(s)\). The closed-loop characteristic equation is second order with \(\omega_{n} = 52\) and \(\zeta = 0.012\). Since the damping is low, we expect the response to be highly oscillatory. We can investigate the response \(\omega(t)\) to a reference input, \(\omega_{d}(t)\), by utilizing the step function. The step function, shown in Figure 2.68, calculates the unit step response of a linear system. The step function is very important, since control system performance specifications are often given in terms of the unit step response.

If the only objective is to plot the output, \(y(t)\), we can use the step function without left-hand arguments and obtain the plot automatically with axis labels. If we need \(y(t)\) for any purpose other than plotting, we must use the step function with left-hand arguments, followed by the plot function to plot \(y(t)\). We define \(t\) as a row vector containing the times at which we wish to evaluate the output variable \(y(t)\). We can also select \(t = t_{\text{final}}\), which results in a step response from \(t = 0\) to \(t = t_{\text{final}}\), with the number of intermediate points selected automatically.

FIGURE 2.68 The step function.

% This script computes the step
% response of the traction motor
% wheel velocity
%
num=[5400]; den=[2 2.5 5402]; sys=tf(num,den);
t=[0:0.005:3];
[y,t]=step(sys,t);
plot(t,y), grid
xlabel('Time (s)')
ylabel('Wheel velocity')

FIGURE 2.69 (a) Traction motor wheel velocity step response. (b) m-file script.

The step response of the electric traction motor is shown in Figure 2.69. As expected, the wheel velocity response, given by \(y(t)\), is highly oscillatory. Note that the output is \(y(t) \equiv \omega(t)\).

106.1. SEQUENTIAL DESIGN EXAMPLE: DISK DRIVE READ SYSTEM

Our goal for the disk drive system is to position the reader head accurately at the desired track and to move from one track to another. We need to identify the plant, the sensor, and the controller. The disk drive reader uses a permanent magnet DC motor to rotate the reader arm. The DC motor is called a voice coil motor. The read head is mounted on a slider device, which is connected to the arm as shown in Figure 2.70. A flexure (spring metal) is used to enable the head to float above the disk at a gap of less than \(100\text{ }nm\). The thin-film head reads the magnetic flux and provides a signal to an amplifier. The error signal of Figure 2.71(a) is provided by reading the error from a prerecorded index track. Assuming an accurate read head, the sensor has a transfer function \(H(s) = 1\), as shown in Figure 2.71(b). The model of the permanent magnet DC motor and a linear amplifier is shown in Figure 2.71(b). As a good approximation, we use the model of the armature-controlled DC motor as shown earlier in Figure 2.20 with \(K_{b} = 0\). The model shown in Figure 2.71(b) assumes that the flexure is entirely rigid and does not significantly flex. In future control designs, we should consider the model when the flexure cannot be assumed to be completely rigid.

FIGURE 2.70 Head mount for reader, showing flexure.

FIGURE 2.71 Block diagram model of disk drive read system.

Table 2.8 Typical Parameters for Disk Drive Reader

Parameter Symbol Typical Value
Inertia of arm and read head $$J$$ $$1\text{ }N\text{ }m{\text{ }s}^{2}/rad$$
Friction $$b$$ $$20\text{ }N\text{ }m\text{ }s/rad$$
Amplifier $$K_{a}$$ $$10 - 1000$$
Armature resistance $$R$$ $$1\Omega$$
Motor constant $$K_{m}$$ $$5\text{ }N\text{ }m/A$$
Armature inductance $$L$$ $$1mH$$

Typical parameters for the disk drive system are given in Table 2.8. Thus, we have

\[G(s) = \frac{K_{m}}{s(Js + b)(Ls + R)} = \frac{5000}{s(s + 20)(s + 1000)}. \]

We can also write

\[G(s) = \frac{K_{m}/(bR)}{s\left( \tau_{L}s + 1 \right)(\tau s + 1)}, \]

where \(\tau_{L} = J/b = 50\text{ }ms\) and \(\tau = L/R = 1\text{ }ms\). Since \(\tau \ll \tau_{L}\), we often neglect \(\tau\). Then, we would have

\[G(s) \approx \frac{K_{m}/(bR)}{s\left( \tau_{L}s + 1 \right)} = \frac{0.25}{s(0.05s + 1)} = \frac{5}{s(s + 20)} \]
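The exact and approximate models can be entered and compared with commands of the following form (a sketch using the Table 2.8 values; the comparison itself is our addition):

>> G = tf([5000], conv([1 20 0], [1 1000]));   % 5000/(s(s+20)(s+1000))
>> Ga = tf([5], [1 20 0]);                     % approximate model 5/(s(s+20))
>> pole(G), pole(Ga)                           % shows the neglected fast pole at s = -1000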

The block diagram of the closed-loop system is shown in Figure 2.72. Using the block diagram transformation of Table 2.5, we have

\[\frac{Y(s)}{R(s)} = \frac{K_{a}G(s)}{1 + K_{a}G(s)}. \]

Using the approximate second-order model for \(G(s)\), we obtain

\[\frac{Y(s)}{R(s)} = \frac{5K_{a}}{s^{2} + 20s + 5K_{a}}. \]

FIGURE 2.72 Block diagram of closed-loop system.

FIGURE 2.73 The response of the system shown in Figure 2.72 for \(R(s) = \frac{0.1}{s}\).

When \(K_{a} = 40\), we have

\[Y(s) = \frac{200}{s^{2} + 20s + 200}R(s) \]

We obtain the step response for \(R(s) = \frac{0.1}{s}\) rad, as shown in Figure 2.73.
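A sketch of the corresponding commands follows (our reconstruction of the Figure 2.73 response using the approximate model; the time vector is illustrative):

>> Ka = 40;
>> Ga = tf([5], [1 20 0]);         % approximate G(s) = 5/(s(s+20))
>> sys = feedback(Ka*Ga, [1]);     % Y(s)/R(s) = 5Ka/(s^2 + 20s + 5Ka)
>> t = [0:0.01:1];
>> y = step(0.1*sys, t);           % response to R(s) = 0.1/s
>> plot(t, y), grid, xlabel('Time (s)'), ylabel('y(t) (rad)')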

106.2. SUMMARY

In this chapter, we have been concerned with quantitative mathematical models of control components and systems. The differential equations describing the dynamic performance of physical systems were utilized to construct a mathematical model. The physical systems under consideration can include a wide range of mechanical, electrical, biomedical, environmental, aerospace, industrial, and chemical engineering systems. A linear approximation using a Taylor series expansion about the operating point was utilized to obtain a small-signal linear approximation for nonlinear control components. Then, with the approximation of a linear system, one may utilize the Laplace transformation and its related input-output relationship given by the transfer function. The transfer function approach to linear systems allows the analyst to determine the response of the system to various input signals in terms of the location of the poles and zeros of the transfer function. Using transfer function notations, block diagram models of systems of interconnected components were developed, and the block relationships were obtained. Additionally, an alternative use of transfer function models in signal-flow graph form was investigated. Mason's signal-flow gain formula was presented and found to be useful for obtaining the relationship between system variables in a complex feedback system. The advantage of the signal-flow graph method was the availability of Mason's signal-flow gain formula, which provides the relationship between system variables without requiring any reduction or manipulation of the flow graph. Thus, we have obtained a useful mathematical model for feedback control systems by developing the concept of a transfer function of a linear system and the relationship among system variables using block diagram and signal-flow graph models. We considered the utility of the computer simulation of linear and nonlinear systems to determine the response of a system for several conditions of the system parameters and the environment. Finally, we continued the development of the Disk Drive Read System by obtaining a model in transfer function form of the motor and arm.

107. SKILLS CHECK

In this section, we provide three sets of problems to test your knowledge: True or False, Multiple Choice, and Word Match. To obtain direct feedback, check your answers with the answer key provided at the conclusion of the end-of-chapter problems. Use the block diagram in Figure 2.74 as specified in the various problem statements.

FIGURE 2.74 Block diagram for the Skills Check.

In the following True or False and Multiple Choice problems, circle the correct answer.

  1. Very few physical systems are linear within some range of the variables.

True or False

  2. The \(s\)-plane plot of the poles and zeros graphically portrays the character of the natural response of a system.

True or False

  3. The roots of the characteristic equation are the zeros of the closed-loop system.

True or False

  4. A linear system satisfies the properties of superposition and homogeneity. True or False

  5. The transfer function is the ratio of the Laplace transform of the output variable to the Laplace transform of the input variable, with all initial conditions equal to zero.

True or False

  6. Consider the system in Figure 2.74 where

\[G_{c}(s) = 10,\ H(s) = 1,\ \text{~}\text{and}\text{~}\ G(s) = \frac{s + 50}{s^{2} + 60s + 500}. \]

If the input \(R(s)\) is a unit step input, \(T_{d}(s) = 0\), and \(N(s) = 0\), the final value of the output \(y(t)\) is:
a. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 100\)
b. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 1\)
c. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 50\)
d. None of the above

  7. Consider the system in Figure 2.74 with

\[G_{c}(s) = 20,\ H(s) = 1,\ \text{~}\text{and}\text{~}\ G(s) = \frac{s + 4}{s^{2} - 12s - 65}. \]

When all initial conditions are zero, the input \(R(s)\) is an impulse, the disturbance \(T_{d}(s) = 0\), and the noise \(N(s) = 0\), the output \(y(t)\) is
a. \(y(t) = 10e^{- 5t} + 10e^{- 3t}\)
b. \(y(t) = e^{- 8t} + 10e^{- t}\)
c. \(y(t) = 10e^{- 3t} - 10e^{- 5t}\)
d. \(y(t) = 20e^{- 8t} + 5e^{- 15t}\)

  8. Consider a system represented by the block diagram in Figure 2.75.

FIGURE 2.75 Block diagram with an internal loop.

The closed-loop transfer function \(T(s) = Y(s)/R(s)\) is
a. \(T(s) = \frac{50}{s^{2} + 55s + 50}\)
b. \(T(s) = \frac{10}{s^{2} + 55s + 10}\)
c. \(T(s) = \frac{10}{s^{2} + 50s + 55}\)
d. None of the above

Consider the block diagram in Figure 2.74 for Problems 9 through 11 where

\[\begin{matrix} G_{c}(s) = 4,\ H(s) = 1,\text{~}\text{and}\text{~}G(s) = \frac{5}{s^{2} + 10s + 5}. \\ T_{d}(s) = 0,\text{~}\text{and}\text{~}N(s) = 0. \end{matrix}\]

  9. The closed-loop transfer function \(T(s) = Y(s)/R(s)\) is:
    a. \(T(s) = \frac{50}{s^{2} + 5s + 50}\)
    b. \(T(s) = \frac{20}{s^{2} + 10s + 25}\)
    c. \(T(s) = \frac{50}{s^{2} + 5s + 56}\)
    d. \(T(s) = \frac{20}{s^{2} + 10s - 15}\)

  10. The closed-loop unit step response is:
    a. \(y(t) = \frac{20}{25} + \frac{20}{25}e^{- 5t} - t^{2}e^{- 5t}\)
    b. \(y(t) = 1 + 20te^{- 5t}\)
    c. \(y(t) = \frac{20}{25} - \frac{20}{25}e^{- 5t} - 4te^{- 5t}\)
    d. \(y(t) = 1 - 2e^{- 5t} - 4te^{- 5t}\)

  11. The final value of the unit step response \(y(t)\) is:
    a. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 0.8\)
    b. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 1.0\)
    c. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 2.0\)
    d. \(y_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} y(t) = 1.25\)

  12. Consider the differential equation

\[\overset{¨}{y}(t) + 2\overset{˙}{y}(t) + y(t) = u(t) \]

where \(y(0) = \overset{˙}{y}(0) = 0\). The poles of this system are:
a. \(s_{1} = - 1,s_{2} = - 1\)
b. \(s_{1} = 1j,s_{2} = - 1j\)
c. \(s_{1} = - 1,s_{2} = - 2\)
d. None of the above

  13. A cart of mass \(m = 1000\text{ }kg\) is attached to a truck using a spring of stiffness \(k = 20,000\text{ }N/m\) and a damper of constant \(b = 200Ns/m\), as shown in Figure 2.76. The truck moves at a constant acceleration of \(a = 0.7\text{ }m/s^{2}\).

FIGURE 2.76 Truck pulling a cart of mass \(m\).

The transfer function between the speed of the truck and the speed of the cart is:
a. \(T(s) = \frac{50}{5s^{2} + s + 100}\)
b. \(T(s) = \frac{20 + s}{s^{2} + 10s + 25}\)
c. \(T(s) = \frac{100 + s}{5s^{2} + s + 100}\)
d. None of the above

  14. Consider the closed-loop system in Figure 2.74 with

\[\begin{matrix} G_{c}(s) = 15,\ H(s) = 1,\text{~}\text{and}\text{~}G(s) = \frac{1000}{s^{3} + 50s^{2} + 4500s + 1000}. \\ T_{d}(s) = 0,\text{~}\text{and}\text{~}N(s) = 0. \end{matrix}\]

Compute the closed-loop transfer function and the closed-loop poles.
a. \(T(s) = \frac{15000}{s^{3} + 50s^{2} + 4500s + 16000},s_{1} = - 3.70,s_{2,3} = - 23.15 \pm 61.59j\)
b. \(T(s) = \frac{15000}{50s^{2} + 4500s + 16000},s_{1} = - 3.70,s_{2} = - 86.29\)
c. \(T(s) = \frac{1}{s^{3} + 50s^{2} + 4500s + 16000},s_{1} = - 3.70,s_{2,3} = - 23.2 \pm 63.2j\)
d. \(T(s) = \frac{15000}{s^{3} + 50s^{2} + 4500s + 16000},s_{1} = - 3.70,s_{2} = - 23.2,s_{3} = - 63.2\)

  15. Consider the feedback system in Figure 2.74 with

\[G_{c}(s) = \frac{K(s + 0.3)}{s},\ H(s) = 2s,\ \text{~}\text{and}\text{~}\ G(s) = \frac{1}{(s - 2)\left( s^{2} + 10s + 45 \right)}. \]

Assuming \(R(s) = 0\) and \(N(s) = 0\), the closed-loop transfer function from the disturbance \(T_{d}(s)\) to the output \(Y(s)\) is:
a. \(\frac{Y(s)}{T_{d}(s)} = \frac{1}{s^{3} + 8s^{2} + (2K + 25)s + (0.6K - 90)}\)
b. \(\frac{Y(s)}{T_{d}(s)} = \frac{100}{s^{3} + 8s^{2} + (2K + 25)s + (0.6K - 90)}\)
c. \(\frac{Y(s)}{T_{d}(s)} = \frac{1}{8s^{2} + (2K + 25)s + (0.6K - 90)}\)
d. \(\frac{Y(s)}{T_{d}(s)} = \frac{K(s + 0.3)}{s^{4} + 8s^{3} + (2K + 25)s^{2} + (0.6K - 90)s}\)

In the following Word Match problems, match the term with the definition by writing the correct letter in the space provided.
a. Actuator
b. Block diagrams
c. Characteristic equation
d. Critical damping

e. Damped oscillation

f. Damping ratio

g. DC motor
An oscillation in which the amplitude decreases with time.

A system that satisfies the properties of superposition and homogeneity.

The case where damping is on the boundary between underdamped and overdamped.

A transformation of a function \(f(t)\) from the time domain into the complex frequency domain yielding \(F(s)\).

The device that provides the motive power to the process.

A measure of damping. A dimensionless number for the second-order characteristic equation.

The relation formed by equating to zero the denominator of a transfer function.

h. Laplace transform

i. Linear approximation

j. Linear system

k. Mason loop rule

l. Mathematical models

m. Signal-flow graph

n. Simulation

o. Transfer function
Unidirectional, operational blocks that represent the transfer functions of the elements of the system.

A rule that enables the user to obtain a transfer function by tracing paths and loops within a system.

An electric actuator that uses an input voltage as a control variable.

The ratio of the Laplace transform of the output variable to the Laplace transform of the input variable.

Descriptions of the behavior of a system using mathematics.

A model of a system that is used to investigate the behavior of a system by utilizing actual input signals.

A diagram that consists of nodes connected by several directed branches and that is a graphical representation of a set of linear relations.

An approximate model that results in a linear relationship between the output and the input of the device.

108. EXERCISES

Exercises are straightforward applications of the concepts of the chapter.

E2.1 A unity, negative feedback system has a nonlinear function \(y = f(e) = e^{2}\), as shown in Figure E2.1. For an input \(r\) in the range of 0 to 4 , calculate and plot the open-loop and closed-loop output versus input and show that the feedback system results in a more linear relationship.

FIGURE E2.1 Open and closed loop.

E2.2 A thermistor has a response to temperature represented by

\[R = R_{o}e^{- 0.3T}, \]

where \(R_{O} = 5,000\Omega,R =\) resistance, and \(T =\) temperature in degrees Celsius. Find the linear model for the thermistor operating at \(T = 20^{\circ}C\) and for a small range of variation of temperature.

Answer: \(\Delta R = - 3.7\Delta T\)

E2.3 The force versus displacement for a spring is shown in Figure E2.3 for the spring-mass-damper system of Figure 2.1. Graphically find the spring constant for the equilibrium point of \(y = 1.0\text{ }cm\) and a range of operation of \(\pm 2.0\text{ }cm\).

FIGURE E2.3 Spring behavior.

E2.4 A laser printer uses a laser beam to print copy rapidly for a computer. The laser is positioned by a control input \(r(t)\), so that we have

\[Y(s) = \frac{4(s + 50)}{s^{2} + 30s + 200}R(s). \]

The input \(r(t)\) represents the desired position of the laser beam.

(a) If \(r(t)\) is a unit step input, find the output \(y(t)\).

(b) What is the final value of \(y(t)\) ?

Answer: (a) \(y(t) = 1 + 0.6e^{- 20t} - 1.6e^{- 10t}\), (b) \(y_{ss} = 1\)

E2.5 A summing amplifier uses an op-amp as shown in Figure E2.5. Assume an ideal op-amp model, and determine \(v_{o}\).

Answer: \(v_{0} = - \frac{R_{2}}{R_{1}}\left( v_{1} + v_{2} + v_{3} \right)\)

FIGURE E2.5 A summing amplifier using an op-amp.

E2.6 A nonlinear device is represented by the function

\[y = f(x) = Ae^{x}, \]

where the operating point for the input \(x\) is \(x_{o} = 0\), where \(A\) is a constant. Determine a linear approximation valid near the operating point.

Answer: \(y = A + Ax = A(1 + x)\).

E2.7 A lamp's intensity stays constant when monitored by an optotransistor-controlled feedback loop. When the voltage drops, the lamp's output also drops, and optotransistor \(Q_{1}\) draws less current. As a result, a power transistor conducts more heavily and charges a capacitor more rapidly [24]. The capacitor voltage controls the lamp voltage directly. A block diagram of the system is shown in Figure E2.7. Find the closedloop transfer function, \(I(s)/R(s)\) where \(I(s)\) is the lamp intensity, and \(R(s)\) is the command or desired level of light.

E2.8 A control engineer, N. Minorsky, designed an innovative ship steering system in the 1930s for the U.S. Navy. The system is represented by the block diagram shown in Figure E2.8, where \(Y(s)\) is the ship's course, \(R(s)\) is the desired course, and \(A(s)\) is the rudder angle [16]. Find the transfer function \(Y(s)/R(s)\).

FIGURE E2.7 Lamp controller.

Answer: \(\frac{Y(s)}{R(s)} =\)

\[\frac{KG_{1}(s)G_{2}(s)/s}{1 + G_{1}(s)H_{3}(s) + G_{1}(s)G_{2}(s)\left\lbrack H_{1}(s) + H_{2}(s) \right\rbrack + KG_{1}(s)G_{2}(s)/s} \]

E2.9 A four-wheel antilock automobile braking system uses electronic feedback to control automatically the brake force on each wheel [15]. A block diagram model of a brake control system is shown in Figure E2.9, where \(F_{f}(s)\) and \(F_{R}(s)\) are the braking force of the front and rear wheels, respectively, and \(R(s)\) is the desired automobile response on an icy road. Find \(F_{f}(s)/R(s)\).

FIGURE E2.8 Ship steering system.

FIGURE E2.9 Brake control system.

FIGURE E2.10 Shock absorber.

E2.10 One of the beneficial applications of an automotive control system is the active control of the suspension system. One feedback control system uses a shock absorber consisting of a cylinder filled with a compressible fluid that provides both spring and damping forces [17]. The cylinder has a plunger activated by a gear motor, a displacement-measuring sensor, and a piston. Spring force is generated by piston displacement, which compresses the fluid. During piston displacement, the pressure imbalance across the piston is used to control damping. The plunger varies the internal volume of the cylinder. This system is shown in Figure E2.10. Develop a block diagram model.

FIGURE E2.11 Spring characteristic.

E2.11 A spring exhibits a force-versus-displacement characteristic as shown in Figure E2.11. For small deviations from the operating point \(x_{o}\), find the spring constant when \(x_{o}\) is (a) -1.1 , (b) 0 , and (c) 2.8.

E2.12 Off-road vehicles experience many disturbance inputs as they traverse over rough roads. An active suspension system can be controlled by a sensor that looks "ahead" at the road conditions. An example of a simple suspension system that can accommodate the bumps is shown in Figure E2.12. Find the appropriate gain \(K_{1}\) so that the vehicle does not bounce when the desired deflection is \(R(s) = 0\) and the disturbance is \(T_{d}(s)\).

Answer: \(K_{1}K_{2} = 1\)

FIGURE E2.12 Active suspension system.

E2.13 Consider the feedback system in Figure E2.13. Compute the transfer functions \(Y(s)/T_{d}(s)\) and \(Y(s)/N(s)\).

E2.14 Find the transfer function

\[\frac{Y_{1}(s)}{R_{2}(s)} \]

for the multivariable system in Figure E2.14.

E2.15 Obtain the differential equations of the circuit in Figure E2.15 in terms of \(i_{1}(t)\) and \(i_{2}(t)\).

E2.16 The position control system for a spacecraft platform is governed by the following equations:

\[\begin{matrix} \frac{d^{2}p(t)}{dt^{2}} + 2\frac{dp(t)}{dt} + 4p(t) = \theta(t), \\ v_{1}(t) = r(t) - p(t), \\ \frac{d\theta(t)}{dt} = 0.5v_{2}(t), \\ v_{2}(t) = 8v_{1}(t). \end{matrix}\]

The variables involved are as follows:

\[\begin{matrix} r(t) & \ = \text{~}\text{desired platform position}\text{~} \\ p(t) & \ = \text{~}\text{actual platform position}\text{~} \\ v_{1}(t) & \ = \text{~}\text{amplifier input voltage}\text{~} \\ v_{2}(t) & \ = \text{~}\text{amplifier output voltage}\text{~} \\ \theta(t) & \ = \text{~}\text{motor shaft position}\text{~} \end{matrix}\]

Sketch a signal-flow diagram or a block diagram of the system, identifying the component parts, and determine the system transfer function \(P(s)/R(s)\).

FIGURE E2.13 Feedback system with measurement noise, \(N(s)\), and plant disturbances, \(T_{d}(s)\).

FIGURE E2.14 Multivariable system.

FIGURE E2.15 Electric circuit.

E2.17 A logarithmic amplifier has a diode whose voltage is represented by the relation

\[V = C\ln I, \]

where \(C\) is a constant and \(I\) is the current across the diode. Determine a linear model for the diode when \(I_{o} = 1\).

E2.18 The output \(y\) and input \(x\) of a device are related by

\[y = x + 1.9x^{3}. \]

(a) Find the values of the output for steady-state operation at the two operating points \(x_{o} = 1.2\) and \(x_{o} = 2.5\).

(b) Obtain a linearized model for both operating points and compare them.

E2.19 The transfer function of a system is

\[\frac{Y(s)}{R(s)} = \frac{15(s + 1)}{s^{2} + 9s + 14}. \]

Determine \(y(t)\) when \(r(t)\) is a unit step input.

Answer: \(y(t) = 1.07 + 1.5e^{- 2t} - 2.57e^{- 7t},t \geq 0\)

E2.20 Determine the transfer function \(V_{0}(s)/V(s)\) of the operational amplifier circuit shown in Figure E2.20. Assume an ideal operational amplifier. Determine the transfer function when \(R_{1} = R_{2} = 170k\Omega,C_{1} =\) \(15\mu F\), and \(C_{2} = 25\mu F\).

FIGURE E2.20 Op-amp circuit.
E2.21 A high-precision positioning slide is shown in Figure E2.21. Determine the transfer function \(X_{p}(s)/X_{\text{in}\text{~}}(s)\) when the drive shaft friction is \(b_{d} = 0.7\), the drive shaft spring constant is \(k_{d} = 2,m_{c} = 1\), and the sliding friction is \(b_{s} = 0.8\).

FIGURE E2.21 Precision slide.

E2.22 The rotational velocity \(\omega\) of the satellite shown in Figure E2.22 is adjusted by changing the length of the beam \(L\). The transfer function between \(\omega(s)\) and the incremental change in beam length \(\Delta L(s)\) is

\[\frac{\omega(s)}{\Delta L(s)} = \frac{8(s + 3)}{(s + 2)(s + 3)^{2}}. \]

The beam length change is \(\Delta L(s) = 2/s\). Determine the response of the rotation \(\omega(t)\).

Answer: \(\omega(t) = 2.67 - 8e^{- 2t} + 5.33e^{- 3t}\)

FIGURE E2.22 Satellite with adjustable rotational velocity. E2.23 Determine the closed-loop transfer function \(T(s) = Y(s)/R(s)\) for the system of Figure E2.23.

FIGURE E2.23 Control system with three feedback loops.

E2.24 An amplifier may have a region of deadband as shown in Figure E2.24. Use an approximation that uses a cubic equation \(y = ax^{3}\) in the approximately linear region. Select \(a\) and determine a linear approximation for the amplifier when the operating point is \(x = 0.6\)

FIGURE E2.24 An amplifier with a deadband region.

E2.25 The block diagram of a system is shown in Figure E2.25. Determine the transfer function \(T(s) = Y(s)/R(s)\).

FIGURE E2.25 Multiloop feedback system.
E2.26 Determine the transfer function \(X_{2}(s)/F(s)\) for the system shown in Figure E2.26. Both masses slide on a frictionless surface and \(k = 1\text{ }N/m\).

\[\text{~}\text{Answer:}\text{~}\frac{X_{2}(s)}{F(s)} = \frac{1}{s^{2}\left( s^{2} + 2 \right)} \]

FIGURE E2.26 Two connected masses on a frictionless surface.

E2.27 Find the transfer function \(Y(s)/T_{d}(s)\) for the system shown in Figure E2.27.

Answer: \(\frac{Y(s)}{T_{d}(s)} = \frac{G_{2}(s)}{1 + G_{1}(s)G_{2}(s)H(s)}\)

FIGURE E2.27 System with disturbance.

E2.28 Determine the transfer function \(V_{o}(s)/V(s)\) for the op-amp circuit shown in Figure E2.28 [1]. Let \(R_{1} = 167k\Omega,R_{2} = 240k\Omega,R_{3} = 1k\Omega,R_{4} = 100k\Omega\), and \(C = 1\mu F\). Assume an ideal op-amp.

E2.29 A system is shown in Figure E2.29(a).

(a) Determine \(G(s)\) and \(H(s)\) of the block diagram shown in Figure E2.29(b) that are equivalent to those of the block diagram of Figure E2.29(a).

(b) Determine \(Y(s)/R(s)\) for Figure E2.29(b).

FIGURE E2.28 Op-amp circuit.

FIGURE E2.29 Block diagram equivalence.

E2.30 A system is shown in Figure E2.30.

(a) Find the closed-loop transfer function \(Y(s)/R(s)\)

when \(G(s) = \frac{10}{s^{2} + 2s + 10}\).

(b) Determine \(Y(s)\) when the input \(R(s)\) is a unit step.

(c) Compute \(y(t)\).

FIGURE E2.30 Unity feedback control system.

E2.31 Determine the partial fraction expansion for \(V(s)\), and compute the inverse Laplace transform. The transfer function \(V(s)\) is given by

\[V(s) = \frac{400}{s^{2} + 8s + 400}. \]

109. PROBLEMS

Problems require an extension of the concepts of the chapter to new situations.

P2.1 An electric circuit is shown in Figure P2.1. Obtain a set of simultaneous integrodifferential equations representing the network.

P2.2 A dynamic vibration absorber is shown in Figure \(P2.2\). This system is representative of many situations involving the vibration of machines containing unbalanced components. The parameters \(M_{2}\) and \(k_{12}\) may be chosen so that the main mass \(M_{1}\) does not vibrate in the steady state when \(F(t) = asin\left( \omega_{0}t \right)\). Obtain the differential equations describing the system.

FIGURE P2.1 Electric circuit.

FIGURE P2.2 Vibration absorber.

P2.3 A coupled spring-mass system is shown in Figure P2.3. The masses and springs are assumed to be equal. Obtain the differential equations describing the system.

FIGURE P2.3 Two-mass system.

P2.4 A nonlinear amplifier can be described by the following characteristic:

\[v_{0}(t) = \left\{ \begin{matrix} 2v_{\text{in}}^{2}, & v_{\text{in}} \geq 0 \\ - 2v_{\text{in}}^{2}, & v_{\text{in}} < 0 \end{matrix} \right.\ \]

The amplifier will be operated over a range of \(\pm 0.5\text{ }V\) around the operating point for \(v_{\text{in}\text{~}}\). Describe the amplifier by a linear approximation (a) when the operating point is \(v_{\text{in}\text{~}} = 0\) and (b) when the operating point is \(v_{\text{in}\text{~}} = 1V\). Obtain a sketch of the nonlinear function and the approximation for each case.

P2.5 Fluid flowing through an orifice can be represented by the nonlinear equation

\[Q = K\left( P_{1} - P_{2} \right)^{1/2}, \]

where the variables are shown in Figure P2.5 and \(K\) is a constant [2]. (a) Determine a linear approximation for the fluid-flow equation. (b) What happens to the approximation obtained in part (a) if the operating point is \(P_{1} - P_{2} = 0\)?

FIGURE P2.5 Flow through an orifice.

P2.6 Using the Laplace transformation, obtain the current \(I_{2}(s)\) of Problem P2.1. Assume that all the initial currents are zero, the initial voltage across capacitor \(C_{1}\) is 5 volts, and the initial voltage across \(C_{2}\) is 10 volts.

P2.7 Obtain the transfer function of the integrating amplifier circuit shown in Figure P2.7, which is an implementation of a first-order low pass filter.

FIGURE P2.7 An integrating amplifier circuit.

P2.8 A bridged-T network is often used in AC control systems as a filter network [8]. The circuit of one bridged-T network is shown in Figure P2.8. Show that the transfer function of the network is

\[\frac{V_{o}(s)}{V_{in}(s)} = \frac{1 + 2R_{1}Cs + R_{1}R_{2}C^{2}s^{2}}{1 + \left( 2R_{1} + R_{2} \right)Cs + R_{1}R_{2}C^{2}s^{2}}. \]

Sketch the pole-zero diagram when \(R_{1} = 0.5,R_{2} = 1\), and \(C = 0.5\).

FIGURE P2.8 Bridged-T network.

P2.9 Determine the transfer function \(X_{1}(s)/F(s)\) for the coupled spring-mass system of Problem P2.3. Sketch the \(s\)-plane pole-zero diagram for low damping when \(M = 1,b/k = 1\), and

\[\zeta = \frac{1}{2}\frac{b}{\sqrt{kM}} = 0.1. \]

P2.10 Determine the transfer function \(Y_{1}(s)/F(s)\) for the vibration absorber system of Problem P2.2. Determine the necessary parameters \(M_{2}\) and \(k_{12}\) so that the mass \(M_{1}\) does not vibrate in the steady state when \(F(t) = a\sin\left( \omega_{0}t \right)\).

FIGURE P2.11 Amplidyne and armature-controlled motor.

P2.11 For electromechanical systems that require large power amplification, rotary amplifiers are often used \(\lbrack 8,19\rbrack\). An amplidyne is a power amplifying rotary amplifier. An amplidyne and a servomotor are shown in Figure P2.11. Obtain the transfer function \(\theta(s)/V_{c}(s)\), and draw the block diagram of the system. Assume \(v_{d} = k_{2}i_{q}\) and \(v_{q} = k_{1}i_{c}\).

P2.12 For the open-loop control system described by the block diagram shown in Figure P2.12, determine the value of \(K\) such that \(y(t) \rightarrow 1\) as \(t \rightarrow \infty\) when \(r(t)\) is a unit step input. Assume zero initial conditions.

FIGURE P2.12 Open-loop control system.
P2.13 An electromechanical open-loop control system is shown in Figure P2.13. The generator, driven at a constant speed, provides the field voltage for the motor. The motor has an inertia \(J_{m}\) and bearing friction \(b_{m}\). Obtain the transfer function \(\theta_{L}(s)/V_{f}(s)\) and draw a block diagram of the system. The generator voltage \(v_{g}\) can be assumed to be proportional to the field current \(i_{f}\).

P2.14 A rotating load is connected to a field-controlled DC electric motor through a gear system. The motor is assumed to be linear. A test results in the output load reaching a speed of \(1rad/s\) within \(0.5\text{ }s\) when a constant \(80\text{ }V\) is applied to the motor terminals. The output steady-state speed is \(2.4rad/s\). Determine the transfer function \(\theta(s)/V_{f}(s)\) of the motor, in \(rad/V\). The inductance of the field may be assumed to be negligible. Also, note that the application of \(80\text{ }V\) to the motor terminals is a step input of \(80\text{ }V\) in magnitude.

P2.15 Consider the spring-mass system depicted in Figure \(P2.15\). Determine a differential equation to describe the motion of the mass \(m\). Obtain the system response \(x(t)\) subjected to an impulse input with zero initial conditions.

FIGURE P2.13 Motor and generator.

FIGURE P2.15 Suspended spring-mass-damper system.

P2.16 A mechanical system is shown in Figure P2.16, which is subjected to a known displacement \(x_{3}(t)\) with respect to the reference. (a) Determine the two independent equations of motion. (b) Obtain the equations of motion in terms of the Laplace transform, assuming that the initial conditions are zero. (c) Sketch a signal-flow graph representing the system of equations. (d) Obtain the relationship \(T_{13}(s)\) between \(X_{1}(s)\) and \(X_{3}(s)\) by using Mason's signal-flow gain formula. Compare the work necessary to obtain \(T_{13}(s)\) by matrix methods to that using Mason's signal-flow gain formula.

FIGURE P2.16 Mechanical system.

P2.17 Obtain a signal-flow graph to represent the following set of algebraic equations where \(x_{1}\) and \(x_{2}\) are to be considered the dependent variables and 6 and 11 are the inputs:

\[x_{1} + 3x_{2} = 9,\ 3x_{1} + 6x_{2} = 22. \]

Determine the value of each dependent variable by using the gain formula. After solving for \(x_{1}\) by Mason's signal-flow gain formula, verify the solution by using Cramer's rule.

P2.18 An \(LC\) ladder network is shown in Figure P2.18. One may write the equations describing the network as follows:

\[\begin{matrix} I_{1} = \left( V_{1} - V_{a} \right)Y_{1}, & V_{a} = \left( I_{1} - I_{a} \right)Z_{2}, \\ I_{a} = \left( V_{a} - V_{2} \right)Y_{3}, & V_{2} = I_{a}Z_{4}. \end{matrix}\]

Construct a flow graph from the equations and determine the transfer function \(V_{2}(s)/V_{1}(s)\).

FIGURE P2.18 LC ladder network.

P2.19 The source follower amplifier provides lower output impedance and essentially unity gain. The circuit diagram is shown in Figure P2.19(a), and the small-signal model is shown in Figure P2.19(b). This circuit uses an FET and provides a gain of approximately unity. Assume that \(R_{2} \gg R_{1}\) for biasing purposes and that \(R_{g} \gg R_{2}\). (a) Solve for the amplifier gain. (b) Solve for the gain when \(g_{m} = 1000\mu\Omega\) and \(R_{s} = 25k\Omega\) where \(R_{s} = R_{1} + R_{2}\). (c) Sketch a block diagram that represents the circuit equations.

P2.20 A hydraulic servomechanism with mechanical feedback is shown in Figure P2.20 [18]. The power piston has an area equal to \(A\). When the valve is moved a small amount \(\Delta z\), the oil will flow through to the cylinder at a rate \(p \cdot \Delta z\), where \(p\) is the port coefficient. The input oil pressure is assumed to be constant. From the geometry, we find that \(\Delta z = k\frac{l_{1} - l_{2}}{l_{1}}(x - y) - \frac{l_{2}}{l_{1}}y\). (a) Determine the closed-loop signal-flow graph or block diagram for this mechanical system. (b) Obtain the closed-loop transfer function \(Y(s)/X(s)\).

P2.21 Figure P2.21 shows two pendulums suspended from frictionless pivots and connected at their midpoints by a spring [1]. Assume that each pendulum can be represented by a mass \(M\) at the end of a massless bar of length \(L\). Also assume that the displacement is small and linear approximations can be used for \(sin\theta\) and \(cos\theta\). The spring located in the middle of the bars is unstretched when \(\theta_{1} = \theta_{2}\). The input force is represented by \(f(t)\), which influences the left-hand bar


FIGURE P2.19 The source follower or common drain amplifier using an FET.

FIGURE P2.20 Hydraulic servomechanism.

FIGURE P2.21 The bars are each of length \(L\) and the spring is located at \(L/2\).

only. (a) Obtain the equations of motion, and sketch a block diagram for them. (b) Determine the transfer function \(T(s) = \theta_{1}(s)/F(s)\). (c) Sketch the location of the poles and zeros of \(T(s)\) on the \(s\)-plane.

P2.22 A particular form of an operational amplifier is when the feedback loop is short-circuited. This amplifier is known as a voltage follower (buffer amplifier) as shown in Figure P2.22. Show that \(T = V_{o}(s)/V_{\text{in}\text{~}}(s) = 1\). Assume an ideal op-amp. Discuss a practical use of this amplifier.

FIGURE P2.22 A buffer amplifier.

P2.23 The small-signal circuit equivalent to a commonemitter transistor amplifier is shown in Figure P2.23. The transistor amplifier includes a feedback resistor \(R_{f}\). Determine the input-output ratio \(V_{ce}(s)/V_{\text{in}\text{~}}(s)\).

FIGURE P2.23 CE amplifier.

P2.24 A two-transistor series voltage feedback amplifier is shown in Figure P2.24(a). This AC equivalent circuit neglects the bias resistors and the shunt capacitors.

FIGURE P2.24 Feedback amplifier.

FIGURE P2.25 H. S. Black's amplifier.
A block diagram representing the circuit is shown in Figure P2.24(b). This block diagram neglects the effect of \(h_{re}\), which is usually an accurate approximation, and assumes that \(R_{2} + R_{L} \gg R_{1}\). (a) Determine the voltage gain \(V_{o}(s)/V_{\text{in}\text{~}}(s)\). (b) Determine the current gain \(i_{c2}/i_{b1}\). (c) Determine the input impedance \(V_{\text{in}\text{~}}(s)/I_{b1}(s)\).

P2.25 H. S. Black is noted for developing a negative feedback amplifier in 1927. Often overlooked is the fact that three years earlier he had invented a circuit design technique known as feedforward correction [19]. Recent experiments have shown that this technique offers the potential for yielding excellent amplifier stabilization. Black's amplifier is shown in Figure P2.25(a) in the form recorded in 1924. The block diagram is shown in Figure P2.25(b). Determine the transfer function between the output \(Y(s)\) and the input \(R(s)\) and between the output and the disturbance \(T_{d}(s)\). \(G(s)\) is used to denote the amplifier represented by \(\mu\) in Figure P2.25(a).

P2.26 A robot includes significant flexibility in the arm members with a heavy load in the gripper [6, 20]. A two-mass model of the robot is shown in Figure P2.26. Find the transfer function \(Y(s)/F(s)\).


FIGURE P2.26 The spring-mass-damper model of a robot arm.

P2.27 Magnetic levitation trains provide a high-speed, very low friction alternative to steel wheels on steel rails. The train floats on an air gap as shown in Figure P2.27 [25]. The levitation force \(F_{L}\) is controlled by the coil current \(i\) in the levitation coils and may be approximated by

\[F_{L} = k\frac{i^{2}}{z^{2}}, \]

where \(z\) is the air gap. This force is opposed by the downward force \(F = mg\). Determine the linearized relationship between the air gap \(z\) and the controlling current near the equilibrium condition.

FIGURE P2.27 Cutaway view of train.

P2.28 A multiple-loop model of an urban ecological system might include the following variables: number of people in the city \((P)\), modernization \((M)\), migration into the city \((C)\), sanitation facilities \((S)\), number of diseases \((D)\), bacteria/area \((B)\), and amount of garbage/area \((G)\), where the symbol for the variable is given in parentheses. The following causal loops are hypothesized:

  1. \(P \rightarrow G \rightarrow B \rightarrow D \rightarrow P\)

  2. \(P \rightarrow M \rightarrow C \rightarrow P\)

  3. \(P \rightarrow M \rightarrow S \rightarrow D \rightarrow P\)

  4. \(P \rightarrow M \rightarrow S \rightarrow B \rightarrow D \rightarrow P\)

Sketch a signal-flow graph for these causal relationships, using appropriate gain symbols. Indicate whether you believe each gain transmission is positive or negative. For example, the causal link \(S\) to \(B\) is negative because improved sanitation facilities lead to reduced bacteria/area. Which of the four loops are positive feedback loops and which are negative feedback loops?

FIGURE P2.29 Tilting beam and ball.
P2.29 We desire to balance a rolling ball on a tilting beam as shown in Figure P2.29. We will assume the motor input current \(i\) controls the torque with negligible friction. Assume the beam may be balanced near the horizontal \((\phi = 0)\); therefore, we have a small deviation of \(\phi(t)\). Find the transfer function \(X(s)/I(s)\), and draw a block diagram illustrating the transfer function showing \(\phi(s),X(s)\), and \(I(s)\).

P2.30 The measurement or sensor element in a feedback system is important to the accuracy of the system [6]. The dynamic response of the sensor is important. Many sensor elements possess a transfer function

\[H(s) = \frac{k}{\tau s + 1}. \]

Suppose that a position-sensing photo detector has \(\tau = 10\mu s\). Obtain the step response of the system. Show that the step response is independent of \(k\). Compute the time to reach \(98\%\) of the final value.

P2.31 An interacting control system with two inputs and two outputs is shown in Figure P2.31. Solve for \(Y_{1}(s)/R_{1}(s)\) and \(Y_{2}(s)/R_{1}(s)\) when \(R_{2} = 0\).

FIGURE P2.31 Interacting system.

P2.32 A system consists of two electric motors that are coupled by a continuous flexible belt. The belt also passes over a swinging arm that is instrumented to allow measurement of the belt speed and tension. The basic control problem is to regulate the belt speed and tension by varying the motor torques.

An example of a practical system similar to that shown occurs in textile fiber manufacturing processes when yarn is wound from one spool to another at high speed. Between the two spools, the yarn is processed in a way that may require the yarn speed and tension to be controlled within defined limits. A model of the system is shown in Figure P2.32. Find \(Y_{2}(s)/R_{1}(s)\). Determine a relationship for the system that will make \(Y_{2}\) independent of \(R_{1}\).

FIGURE P2.32 A model of the coupled motor drives.

FIGURE P2.33 Idle speed control system.
P2.33 Find the transfer function for \(Y(s)/R(s)\) for the idle-speed control system for a fuel-injected engine as shown in Figure P2.33.

P2.34 The suspension system for one wheel of an oldfashioned pickup truck is illustrated in Figure P2.34. The mass of the vehicle is \(m_{1}\) and the mass of the wheel is \(m_{2}\). The suspension spring has a spring constant \(k_{1}\) and the tire has a spring constant \(k_{2}\). The damping constant of the shock absorber is \(b\). Obtain the transfer function \(Y_{1}(s)/X(s)\), which represents the vehicle response to bumps in the road.

P2.35 A feedback control system has the structure shown in Figure P2.35. Determine the closed-loop transfer function \(Y(s)/R(s)\) (a) by block diagram manipulation and (b) by using a signal-flow graph and Mason's signal-flow gain formula. (c) Select the gains \(K_{1}\) and \(K_{2}\) so that the closed-loop response to a step input is critically damped with two equal roots at \(s = - 10\). (d) Plot the critically damped response for a unit step input. What is the time required for the step response to reach \(90\%\) of its final value?

FIGURE P2.34 Pickup truck suspension.

FIGURE P2.35 Multiloop feedback system.

P2.36 A system is represented by Figure P2.36. (a) Determine the partial fraction expansion and \(y(t)\) for a ramp input, \(r(t) = t\), and \(t \geq 0\). (b) Obtain a plot of \(y(t)\) for part (a), and find \(y(t)\) for \(t = 1.0\text{ }s\). (c) Determine the impulse response of the system \(y(t)\) for \(t \geq 0\). (d) Obtain a plot of \(y(t)\) for part (c), and find \(y(t)\) for \(t = 1.0\text{ }s\).

FIGURE P2.36 A third-order system.

P2.37 A two-mass system is shown in Figure P2.37 with an input force \(u(t)\). When \(m_{1} = m_{2} = 1\) and \(K_{1} = K_{2} = 1\), (a) find the set of differential equations describing the system, and (b) compute the transfer function from \(U(s)\) to \(Y(s)\).

P2.38 A winding oscillator consists of two steel spheres on each end of a long slender rod, as shown in Figure P2.38. The rod is hung on a thin wire that can be twisted many revolutions without breaking. The device will be wound up 4000 degrees. How long will it take until the motion decays to a swing of only 10 degrees? Assume that the thin wire has a rotational spring constant of \(2 \times 10^{- 4}\text{ }N\text{ }m/rad\) and that the viscous friction coefficient for the sphere in air is \(2 \times 10^{- 4}\text{ }N\text{ }m\text{ }s/rad\). The sphere has a mass of \(1\text{ }kg\).

FIGURE P2.37 Two-mass system.

FIGURE P2.38 Winding oscillator.

P2.39 For the circuit of Figure P2.39, determine the transform of the output voltage \(V_{0}(s)\). Assume that the circuit is in steady state when \(t < 0\). Assume that the switch moves instantaneously from contact 1 to contact 2 at \(t = 0\).

P2.40 A damping device is used to reduce the undesired vibrations of machines. A viscous fluid, such as a heavy oil, is placed between the wheels, as shown in Figure P2.40. When vibration becomes excessive, the relative motion of the two wheels creates damping. When the device is rotating without vibration, there is no relative motion and no damping occurs. Find \(\theta_{1}(s)\) and \(\theta_{2}(s)\). Assume that the shaft has a spring constant \(K\) and that \(b\) is the damping constant of the fluid. The load torque is \(T\).

FIGURE P2.39 Model of an electronic circuit.

FIGURE P2.40 Cutaway view of damping device.

P2.41 The lateral control of a rocket with a gimbaled engine is shown in Figure P2.41. The lateral deviation from the desired trajectory is \(h\) and the forward rocket speed is \(V\). The control torque of the engine is \(T_{c}(s)\) and the disturbance torque is \(T_{d}(s)\). Derive the describing equations of a linear model of the system, and draw the block diagram with the appropriate transfer functions.

FIGURE P2.41 Rocket with gimbaled engine.

P2.42 In many applications, such as reading product codes in supermarkets and in printing and manufacturing, an optical scanner is utilized to read codes, as shown in Figure P2.42. As the mirror rotates, a friction force is developed that is proportional to its angular speed. The friction constant is equal to \(0.06\text{ }N\text{ }s/rad\), and the moment of inertia is equal to \(0.1\text{ }kg{\text{ }m}^{2}\). The output variable is the velocity \(\omega(t)\). (a) Obtain the differential equation for the motor. (b) Find the response of the system when the input motor torque is a unit step and the initial velocity at \(t = 0\) is equal to 0.7 .

FIGURE P2.42 Optical scanner.

P2.43 An ideal set of gears is shown in Table 2.4, item 10. Neglect the inertia and friction of the gears and assume that the work done by one gear is equal to that of the other. Derive the relationships given in item 10 of Table 2.4. Also, determine the relationship between the torques \(T_{m}\) and \(T_{L}\).

P2.44 An ideal set of gears is connected to a solid cylinder load as shown in Figure P2.44. The inertia of the motor shaft and gear \(G_{2}\) is \(J_{m}\). Determine (a) the inertia of the load \(J_{L}\) and (b) the torque \(T\) at the motor shaft. Assume the friction at the load is \(b_{L}\) and the friction at the motor shaft is \(b_{m}\). Also assume the density of the load disk is \(\rho\) and the gear ratio is \(n\). Hint: The torque at the motor shaft is given by \(T = T_{1} + T_{m}\).

FIGURE P2.44 Motor, gears, and load.

P2.45 To exploit the strength advantage of robot manipulators and the intellectual advantage of humans, a class of manipulators called extenders has been examined [22]. The extender is defined as an active manipulator worn by a human to augment the human's strength. The human provides an input \(U(s)\), as shown in Figure \(P2.45\). The endpoint of the extender is \(P(s)\). Determine the output \(P(s)\) for both \(U(s)\) and \(F(s)\) in the form

\[P(s) = T_{1}(s)U(s) + T_{2}(s)F(s). \]

FIGURE P2.45 Model of extender.

P2.46 A load added to a truck results in a force \(F(s)\) on the support spring, and the tire flexes as shown in Figure P2.46(a). The model for the tire movement is shown in Figure P2.46(b). Determine the transfer function \(X_{1}(s)/F(s)\).
P2.47 The water level \(h(t)\) in a tank is controlled by an open-loop system, as shown in Figure P2.47. A DC motor controlled by an armature current \(i_{a}\) turns a shaft, opening a valve. The inductance of the DC motor is negligible, that is, \(L_{a} = 0\). Also, the rotational friction of the motor shaft and valve is negligible, that is, \(b = 0\). The height of the water in the tank is

\[h(t) = \int_{}^{}\ \lbrack 1.6\theta(t) - h(t)\rbrack dt, \]

the motor constant is \(K_{m} = 10\), and the inertia of the motor shaft and valve is \(J = 6 \times 10^{- 3}\text{ }kg{\text{ }m}^{2}\). Determine (a) the differential equation for \(h(t)\) and \(v(t)\) and (b) the transfer function \(H(s)/V(s)\).

P2.48 The circuit shown in Figure P2.48 is called a lead-lag filter.

(a) Find the transfer function \(V_{2}(s)/V_{1}(s)\). Assume an ideal op-amp.

(b) Determine \(V_{2}(s)/V_{1}(s)\) when \(R_{1} = 250\text{ }k\Omega\), \(R_{2} = 250\text{ }k\Omega\), \(C_{1} = 2\text{ }\mu F\), and \(C_{2} = 0.3\text{ }\mu F\).

(c) Determine the partial fraction expansion for \(V_{2}(s)/V_{1}(s)\).

P2.49 A closed-loop control system is shown in Figure P2.49.

(a) Determine the transfer function

\[T(s) = Y(s)/R(s). \]

(b) Determine the poles and zeros of \(T(s)\).

(c) Use a unit step input, \(R(s) = 1/s\), and obtain the partial fraction expansion for \(Y(s)\) and the value of the residues.

FIGURE P2.46 Truck support model.

FIGURE P2.47 Open-loop control system for the water level of a tank.

FIGURE P2.48 Lead-lag filter.

FIGURE P2.49 Unity feedback control system.

(d) Plot \(y(t)\) and discuss the effect of the real and complex poles of \(T(s)\). Do the complex poles or the real poles dominate the response?

P2.50 A closed-loop control system is shown in Figure P2.50.

(a) Determine the transfer function \(T(s) = Y(s)/R(s)\).

(b) Determine the poles and zeros of \(T(s)\).

(c) Use a unit step input, \(R(s) = 1/s\), and obtain the partial fraction expansion for \(Y(s)\) and the value of the residues.

(d) Plot \(y(t)\) and discuss the effect of the real and complex poles of \(T(s)\). Do the complex poles or the real poles dominate the response?

(e) Predict the final value of \(y(t)\) for the unit step input.

FIGURE P2.50 Third-order feedback system.

P2.51 Consider the two-mass system in Figure P2.51. Find the set of differential equations describing the system.

FIGURE P2.51 Two-mass system with two springs and one damper.

ADVANCED PROBLEMS

AP2.1 A first-order RL circuit consisting of a resistor and an inductor in series driven by a voltage source is one of the simplest analog infinite impulse response electronic filters. For an input voltage of \(5\text{ }V\), the current at \(t = 1\text{ }s\) is \(2\text{ }A\), and the steady state current is \(5\text{ }A\) when \(t \rightarrow \infty\). Determine the transfer function \(I(s)/V(s)\).

AP2.2 A system has a block diagram as shown in Figure AP2.2. Determine the transfer function

\[T(s) = \frac{Y_{2}(s)}{R_{1}(s)}. \]

It is desired to decouple \(Y_{2}(s)\) from \(R_{1}(s)\) by obtaining \(T(s) = 0\). Select \(G_{5}(s)\) in terms of the other \(G_{i}(s)\) to achieve decoupling.

FIGURE AP2.2 Interacting control system.

AP2.3 Consider the feedback control system in Figure AP2.3. Define the tracking error as

\[E(s) = R(s) - Y(s). \]

(a) Determine a suitable \(H(s)\) such that the tracking error is zero for any input \(R(s)\) in the absence of a disturbance input (that is, when \(T_{d}(s) = 0\) ). (b) Using \(H(s)\) determined in part (a), determine the response \(Y(s)\) for a disturbance \(T_{d}(s)\) when the input \(R(s) = 0\). (c) Is it possible to obtain \(Y(s) = 0\) for an arbitrary disturbance \(T_{d}(s)\) when \(G_{d}(s) \neq 0\) ? Explain your answer.

AP2.4 Consider a DC amplifier given by

\[\frac{V_{2}(s)}{V_{1}(s)} = \frac{k_{a}}{R_{o}C_{o}s + 1}, \]

where \(V_{2}(s)\) is the output voltage and \(V_{1}(s)\) is the input voltage. The system parameters are \(R_{o}\) and \(C_{o}\), the output resistance and capacitance, respectively. The DC amplifier is illustrated in Table 2.4. (a) Determine the response of the system to a unit step \(V_{1}(s) = 1/s\). (b) As \(t \rightarrow \infty\), what value does the step response determined in part (a) approach? This is known as the steady-state response. (c) Describe how you would select the system parameters \(R_{o}\) and \(C_{o}\) to increase the speed of response of the system to a step input.

FIGURE AP2.3 Feedback system with a disturbance input.

AP2.5 For the three-cart system (Figure AP2.5), obtain the equations of motion. The system has three inputs \(u_{1}(t),u_{2}(t)\), and \(u_{3}(t)\) and three outputs \(x_{1}(t),x_{2}(t)\), and \(x_{3}(t)\). Obtain three second-order ordinary differential equations with constant coefficients. If possible, write the equations of motion in matrix form.

FIGURE AP2.5 Three-cart system with three inputs and three outputs.

AP2.6 Consider the hanging crane structure in Figure AP2.6. Write the equations of motion describing the motion of the cart and the payload. The mass of the cart is \(M\), the mass of the payload is \(m\), the massless rigid connector has length \(L\), and the friction is modeled as \(F_{b}(t) = - b\overset{˙}{x}(t)\) where \(x(t)\) is the distance traveled by the cart.

FIGURE AP2.6 (a) Hanging crane supporting the Space Shuttle Atlantis (photo courtesy of NASA/Jack Pfaller) and (b) schematic representation of the hanging crane structure.

AP2.7 Consider the unity feedback system described by the block diagram in Figure AP2.7. Compute analytically the response of the system to an impulse disturbance. Determine a relationship between the gain \(K\) and the minimum time it takes the impulse disturbance response of the system to reach \(y(t) < 0.5\). Assume that \(K > 0\). For what value of \(K\) does the disturbance response first reach \(y(t) = 0.5\) at \(t = 0.01\)?

FIGURE AP2.7 Unity feedback control system with controller \(G_{c}(s) = K\).

AP2.8 Consider the cable reel control system given in Figure AP2.8. Find the value of \(K_{t}\) and \(K_{a}\) such that the percent overshoot is P.O. \(\leq 15\%\) and a zero steady state error to a unit step is achieved. Compute the closed-loop response \(y(t)\) analytically and confirm that the steady-state response and P.O. meet the specifications.
AP2.9 Consider the inverting operational amplifier in Figure AP2.9. Find the transfer function \(V_{o}(s)/V_{i}(s)\). Show that the transfer function can be expressed as

\[G(s) = \frac{V_{o}(s)}{V_{i}(s)} = K_{P} + K_{D}s, \]

where the gains \(K_{P}\) and \(K_{D}\) are functions of \(C,R_{1}\), and \(R_{2}\). This circuit is a proportional-derivative (PD) controller.
FIGURE AP2.8 Cable reel control system.

FIGURE AP2.9 An inverting operational amplifier circuit representing a PD controller.

DESIGN PROBLEMS

CDP2.1 We want to accurately position a table for a machine as shown in Figure CDP2.1. A traction-drive motor with a capstan roller possesses several desirable characteristics compared to the more popular ball screw. The traction drive exhibits low friction and no backlash. However, it is susceptible to disturbances. Develop a model of the traction drive shown in Figure CDP2.1(a) for the parameters given in Table CDP2.1. The drive uses a DC armature-controlled motor with a capstan roller attached to the shaft. The drive bar moves the linear slide-table. The slide uses an air bearing, so its friction is negligible. We are considering the open-loop model, Figure CDP2.1(b), and its transfer function in this problem. Feedback will be introduced later.


FIGURE CDP2.1 (a) Traction drive, capstan roller, and linear slide. (b) The block diagram model.

Table CDP2.1 Typical Parameters for the Armature-Controlled DC Motor and the Capstan and Slide

| Symbol | Parameter | Value |
| --- | --- | --- |
| \(M_{s}\) | Mass of slide | \(5.693\text{ }kg\) |
| \(M_{b}\) | Mass of drive bar | \(6.96\text{ }kg\) |
| \(J_{m}\) | Inertia of roller, shaft, motor, and tachometer | \(10.91 \cdot 10^{- 3}\text{ }kg{\text{ }m}^{2}\) |
| \(r\) | Roller radius | \(31.75 \cdot 10^{- 3}\text{ }m\) |
| \(b_{m}\) | Motor damping | \(0.268\text{ }N\text{ }m\text{ }s/rad\) |
| \(K_{m}\) | Torque constant | \(0.8379\text{ }N\text{ }m/amp\) |
| \(K_{b}\) | Back emf constant | \(0.838\text{ }V\text{ }s/rad\) |
| \(R_{m}\) | Motor resistance | \(1.36\ \Omega\) |
| \(L_{m}\) | Motor inductance | \(3.6\text{ }mH\) |

DP2.1 A control system is shown in Figure DP2.1. With

\[G_{1}(s) = \frac{10}{s + 10}\]

and

\[G_{2}(s) = \frac{1}{s},\]

determine the gains \(K_{1}\) and \(K_{2}\) such that the final value of \(y(t)\) as \(t \rightarrow \infty\) is \(y \rightarrow 1\) and the closed-loop poles are located at \(s_{1} = - 20\) and \(s_{2} = - 0.5\).

DP2.2 The television beam circuit of a television is represented by the model in Figure DP2.2. Select the unknown conductance \(G\) so that the voltage \(v\) is \(24\text{ }V\). Each conductance is given in siemens ( \(S\) ).

DP2.3 An input \(r(t) = t,t \geq 0\), is applied to a black box with a transfer function \(G(s)\). The resulting output response, when the initial conditions are zero, is

\[y(t) = \frac{1}{4}e^{- t} - \frac{1}{100}e^{- 5t} - \frac{6}{25} + \frac{1}{5}t,t \geq 0. \]

Determine \(G(s)\) for this system.

DP2.4 An operational amplifier circuit that can serve as an active low-pass filter circuit is shown in Figure DP2.4. Determine the transfer function of the circuit, assuming an ideal op-amp. Find \(v_{o}(t)\) when the input is \(v_{1}(t) = \delta(t)\), \(t \geq 0\).

FIGURE DP2.1 Selection of transfer functions.

FIGURE DP2.2 Television beam circuit.

FIGURE DP2.4 Operational amplifier circuit.

FIGURE DP2.5 (a) Typical clock (photo courtesy of Science and Society/SuperStock) and (b) schematic representation of the pendulum.

DP2.5 Consider the clock shown in Figure DP2.5. The pendulum rod of length \(L\) supports a pendulum disk. Assume that the pendulum rod is a massless rigid thin rod and the pendulum disk has mass \(m\). Design the length of the pendulum, \(L\), so that the period of motion is 2 seconds. Note that with a period of 2 seconds, each "tick" and each "tock" of the clock represents 1 second, as desired. Assume small angles, \(\varphi(t)\), in the analysis so that \(\sin\varphi(t) \approx \varphi(t)\). Can you explain why most grandfather clocks are about \(1.5\text{ }m\) or taller?

COMPUTER PROBLEMS

CP2.1 Consider the two polynomials

\[p(s) = s^{2} + 7s + 10 \]

and

\[q(s) = s + 2. \]

Compute the following
(a) \(p(s)q(s)\)
(b) poles and zeros of \(G(s) = \frac{q(s)}{p(s)}\)
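A short m-file sketch for this problem, assuming MATLAB with the Control System Toolbox, could use conv for the polynomial product and pole and zero for part (b):

```matlab
% CP2.1: polynomial product and poles/zeros of G(s) = q(s)/p(s)
p = [1 7 10];            % p(s) = s^2 + 7s + 10
q = [1 2];               % q(s) = s + 2
pq = conv(p, q)          % (a) product p(s)q(s)
G = tf(q, p);            % G(s) = q(s)/p(s)
polesG = pole(G)         % (b) poles: roots of p(s)
zerosG = zero(G)         % (b) zeros: roots of q(s)
```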

CP2.2 Consider the feedback system depicted in Figure CP2.2.

(a) Compute the closed-loop transfer function using the series and feedback functions.

(b) Obtain the closed-loop system unit step response with the step function, and verify that the final value of the output is 0.571.

FIGURE CP2.2 A negative feedback control system.
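A possible m-file sketch of the workflow with series, feedback, and step follows; the blocks G1 and G2 below are placeholders, since the actual transfer functions appear only in Figure CP2.2:

```matlab
% CP2.2 sketch: series/feedback reduction and step response.
% G1 and G2 are placeholder blocks illustrating the workflow; replace
% them with the transfer functions given in Figure CP2.2.
G1 = tf(1, [1 1]);            % placeholder forward block
G2 = tf(4, [1 3]);            % placeholder forward block
Gf = series(G1, G2);          % (a) combine the forward path
T  = feedback(Gf, 1);         % (a) close the unity negative feedback loop
step(T); grid on              % (b) unit step response
yss = dcgain(T)               % the problem states the final value is 0.571
```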

CP2.3 Consider the differential equation

\[\overset{¨}{y} + 4\overset{˙}{y}(t) + 3y = u, \]

where \(y(0) = \overset{˙}{y}(0) = 0\) and \(u(t)\) is a unit step. Determine the solution \(y(t)\) analytically, and verify by co-plotting the analytic solution and the step response obtained with the step function.
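Solving with zero initial conditions by partial fraction expansion gives \(y(t) = \tfrac{1}{3} - \tfrac{1}{2}e^{-t} + \tfrac{1}{6}e^{-3t}\); a brief m-file sketch (assuming the Control System Toolbox) co-plots this against the output of the step function:

```matlab
% CP2.3 sketch: analytic step response of y'' + 4y' + 3y = u co-plotted
% with the step response of the transfer function 1/(s^2 + 4s + 3).
sys = tf(1, [1 4 3]);
t = 0:0.01:8;
[y, t] = step(sys, t);                        % numerical step response
ya = 1/3 - (1/2)*exp(-t) + (1/6)*exp(-3*t);   % analytic solution
plot(t, y, t, ya, '--'); grid on
xlabel('Time (s)'); ylabel('y(t)');
legend('step function', 'analytic')
```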

CP2.4 Consider the mechanical system depicted in Figure CP2.4. The input is given by \(f(t)\), and the output is \(y(t)\). Determine the transfer function from \(f(t)\) to \(y(t)\) and, using an \(m\)-file, plot the system response

FIGURE CP2.4 A mechanical spring-mass-damper system.

to a unit step input. Let \(m = 10,k = 1\), and \(b = 0.5\). Show that the peak amplitude of the output is about 1.8.
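Assuming the usual spring-mass-damper relation \(m\ddot{y}(t) + b\dot{y}(t) + ky(t) = f(t)\) for Figure CP2.4, so that \(Y(s)/F(s) = 1/(ms^{2} + bs + k)\), a minimal m-file sketch is:

```matlab
% CP2.4 sketch: step response of Y(s)/F(s) = 1/(m s^2 + b s + k),
% assuming the standard spring-mass-damper model for Figure CP2.4.
m = 10; k = 1; b = 0.5;
sys = tf(1, [m b k]);
[y, t] = step(sys);
plot(t, y); grid on
xlabel('Time (s)'); ylabel('y(t)');
ypeak = max(y)                 % should be approximately 1.8
```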

CP2.5 A satellite single-axis attitude control system can be represented by the block diagram in Figure CP2.5. The variables \(k\), \(a\), and \(b\) are controller parameters, and \(J\) is the spacecraft moment of inertia. Suppose the nominal moment of inertia is \(J = 10.8\text{E}8\ \left( \text{slug ft}^{2} \right)\), and the controller parameters are \(k = 10.8\text{E}8\), \(a = 1\), and \(b = 8\).

(a) Develop an m-file script to compute the closedloop transfer function \(T(s) = \theta(s)/\theta_{d}(s)\).

(b) Compute and plot the step response to a \(10^{\circ}\) step input.

(c) The exact moment of inertia is generally unknown and may change slowly with time. Compare the step response performance of the spacecraft when \(J\) is reduced by \(20\%\) and \(50\%\). Use the controller parameters \(k = 10.8E8,a = 1\), and \(b = 8\) and a \(10^{\circ}\) step input. Discuss your results.
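One possible m-file sketch follows; it assumes a controller \(k(s + a)/(s + b)\) in series with spacecraft dynamics \(1/(Js^{2})\) inside a unity negative feedback loop (the exact interconnection is given in Figure CP2.5):

```matlab
% CP2.5 sketch: closed-loop attitude step response as J is reduced.
% Assumed loop structure: controller k(s+a)/(s+b), plant 1/(J s^2),
% unity negative feedback from theta to theta_d.
k = 10.8e8; a = 1; b = 8;
Jnominal = 10.8e8;
Jvals = Jnominal*[1 0.8 0.5];          % nominal, -20%, -50%
t = 0:0.1:100;
hold on
for J = Jvals
    Gc = tf(k*[1 a], [1 b]);           % controller
    Gp = tf(1, [J 0 0]);               % spacecraft 1/(J s^2)
    T  = feedback(Gc*Gp, 1);           % closed loop theta/theta_d
    y  = step(10*T, t);                % 10-degree step input
    plot(t, y)
end
grid on; xlabel('Time (s)'); ylabel('\theta (deg)');
legend('nominal J', 'J reduced 20%', 'J reduced 50%')
```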

CP2.6 Consider the block diagram in Figure CP2.6.

(a) Use an m-file to reduce the block diagram in Figure CP2.6, and compute the closed-loop transfer function.

FIGURE CP2.5 A spacecraft single-axis attitude control block diagram.

FIGURE CP2.6 A multiple-loop feedback control system block diagram.

(b) Generate a pole-zero map of the closed-loop transfer function in graphical form using the pzmap function.

(c) Determine explicitly the poles and zeros of the closed-loop transfer function using the pole and zero functions and correlate the results with the pole-zero map in part (b).
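A sketch of the reduction workflow is shown below; the individual blocks are placeholders standing in for those of Figure CP2.6:

```matlab
% CP2.6 sketch of the workflow; G1...G4 are placeholders, to be replaced
% by the blocks shown in Figure CP2.6.
G1 = tf(1, [1 1]); G2 = tf(1, [1 2]);
G3 = tf([1 1], [1 3]); G4 = tf(1, [1 0]);
inner = feedback(G2*G3, G4);      % (a) reduce an inner loop first
T = feedback(G1*inner, 1);        % then close the outer unity loop
pzmap(T)                          % (b) pole-zero map
p = pole(T)                       % (c) poles of the closed-loop system
z = zero(T)                       % (c) zeros of the closed-loop system
```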

CP2.7 For the simple pendulum shown in Figure CP2.7, the nonlinear equation of motion is given by

\[\overset{¨}{\theta}(t) + \frac{g}{L}\sin\theta(t) = 0, \]

where \(L = 0.5\text{ }m,m = 1\text{ }kg\), and \(g = 9.8\text{ }m/s^{2}\). When the nonlinear equation is linearized about the equilibrium point \(\theta_{0} = 0\), we obtain the linear time-invariant model,

\[\overset{¨}{\theta}(t) + \frac{g}{L}\theta(t) = 0. \]

Create an m-file to plot both the nonlinear and the linear response of the simple pendulum when the initial angle of the pendulum is \(\theta(0) = 30^{\circ}\) and explain any differences.

FIGURE CP2.7 Simple pendulum.
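A minimal m-file sketch that integrates both the nonlinear and the linearized models with ode45:

```matlab
% CP2.7 sketch: nonlinear versus linearized pendulum response for
% theta(0) = 30 degrees, L = 0.5 m, g = 9.8 m/s^2.
L = 0.5; g = 9.8;
theta0 = 30*pi/180;
tspan = [0 10];
fNL = @(t, x) [x(2); -(g/L)*sin(x(1))];   % nonlinear model
fL  = @(t, x) [x(2); -(g/L)*x(1)];        % linearized model
[tn, xn] = ode45(fNL, tspan, [theta0; 0]);
[tl, xl] = ode45(fL,  tspan, [theta0; 0]);
plot(tn, xn(:,1)*180/pi, tl, xl(:,1)*180/pi, '--'); grid on
xlabel('Time (s)'); ylabel('\theta (deg)');
legend('nonlinear', 'linear')
```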
CP2.8 A system has a transfer function

\[\frac{X(s)}{R(s)} = \frac{(20/z)(s + z)}{s^{2} + 3s + 20}. \]

Plot the response of the system when \(R(s)\) is a unit step for the parameter \(z = 5\), \(10\), and \(15\).
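A short m-file sketch that loops over the three values of \(z\):

```matlab
% CP2.8 sketch: step responses of X(s)/R(s) = (20/z)(s + z)/(s^2 + 3s + 20)
% for z = 5, 10, and 15.
t = 0:0.01:5;
hold on
for z = [5 10 15]
    sys = tf((20/z)*[1 z], [1 3 20]);
    y = step(sys, t);
    plot(t, y)
end
grid on; xlabel('Time (s)'); ylabel('x(t)');
legend('z = 5', 'z = 10', 'z = 15')
```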

CP2.9 Consider the feedback control system in Figure CP2.9, where

\[G(s) = \frac{s + 1}{s + 2}\ \text{~}\text{and}\text{~}\ H(s) = \frac{1}{s + 1}. \]

(a) Using an m-file, determine the closed-loop transfer function.

(b) Obtain the pole-zero map using the pzmap function. Where are the closed-loop system poles and zeros?

(c) Are there any pole-zero cancellations? If so, use the minreal function to cancel common poles and zeros in the closed-loop transfer function.

(d) Why is it important to cancel common poles and zeros in the transfer function?

FIGURE CP2.9 Control system with nonunity feedback.
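A possible m-file sketch using feedback, pzmap, and minreal for this loop:

```matlab
% CP2.9 sketch: closed-loop transfer function, pole-zero map, and
% pole-zero cancellation with minreal.
G = tf([1 1], [1 2]);
H = tf(1, [1 1]);
T = feedback(G, H)        % (a) closed-loop transfer function
pzmap(T)                  % (b) pole-zero map
Tmin = minreal(T)         % (c) cancel the common pole-zero pair;
                          %     the result reduces to (s + 1)/(s + 3)
```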

CP2.10 Consider the block diagram in Figure CP2.10. Create an \(m\)-file to complete the following tasks:

(a) Compute the step response of the closed-loop system (that is, \(R(s) = 1/s\) and \(T_{d}(s) = 0\) ) and plot the steady-state value of the output \(Y(s)\) as a function of the controller gain \(0 < K \leq 10\).

(b) Compute the disturbance step response of the closed-loop system (that is, \(R(s) = 0\) and \(T_{d}(s) = 1/s\)) and co-plot the steady-state value of the output \(Y(s)\) as a function of the controller gain \(0 < K \leq 10\) on the same plot as in part (a).

(c) Determine the value of \(K\) such that the steady-state value of the output is equal for both the input response and the disturbance response.
FIGURE CP2.10 Block diagram of a unity feedback system with a reference input \(R(s)\) and a disturbance input \(T_{d}(s)\).
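A sketch of the workflow follows; the plant transfer function below is a placeholder (the actual block is given in Figure CP2.10), with the gain \(K\) acting as the controller and the disturbance assumed to enter at the plant input:

```matlab
% CP2.10 sketch; Gp is a placeholder plant standing in for the block
% in Figure CP2.10, with controller gain K and the disturbance entering
% at the plant input.
Gp = tf(1, [1 2 1]);                    % placeholder plant
K = 0.1:0.1:10;
yR = zeros(size(K)); yD = zeros(size(K));
for i = 1:length(K)
    TR = feedback(K(i)*Gp, 1);          % Y(s)/R(s)
    TD = feedback(Gp, K(i));            % Y(s)/Td(s)
    yR(i) = dcgain(TR);                 % (a) steady-state value for R = 1/s
    yD(i) = dcgain(TD);                 % (b) steady-state value for Td = 1/s
end
plot(K, yR, K, yD, '--'); grid on
xlabel('K'); ylabel('steady-state output');
legend('step input response', 'disturbance response')
```

The value of \(K\) for part (c) can then be read off where the two curves intersect.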

ANSWERS TO SKILLS CHECK

True or False: (1) False; (2) True; (3) False; (4) True; (5) True

Multiple Choice: (6) d; (7) a; (8) b; (9) b; (10) c; (11) a; (12) a; (13) c; (14) a; (15) a
Word Match (in order, top to bottom): e, j, d, h, a, f, c, b, k, g, o, l, n, m, i

TERMS AND CONCEPTS

Across-Variable A variable determined by measuring the difference of the values at the two ends of an element.

Actuator The device that causes the process to provide the output. The device that provides the motive power to the process.

Analogous variables Variables associated with electrical, mechanical, thermal, and fluid systems possessing similar solutions providing the analyst with the ability to extend the solution of one system to all analogous systems with the same describing differential equations.

Assumptions Statements that reflect situations and conditions that are taken for granted and without proof. In control systems, assumptions are often employed to simplify the physical dynamical models of systems under consideration to make the control design problem more tractable.

Block diagrams Unidirectional, operational blocks that represent the transfer functions of the elements of the system.

Branch A unidirectional path segment in a signal-flow graph that relates the dependency of an input and an output variable.

Characteristic equation The relation formed by equating to zero the denominator of a transfer function.

Closed-loop transfer function A ratio of the output signal to the input signal for an interconnection of systems when all the feedback or feedforward loops have been closed or otherwise accounted for. Generally obtained by block diagram or signal-flow graph reduction.

Coulomb damper A type of mechanical damper where the model of the friction force is a nonlinear function of the mass velocity and possesses a discontinuity around zero velocity. Also known as dry friction.

Critical damping The case where damping is on the boundary between underdamped and overdamped.

Damped oscillation An oscillation in which the amplitude decreases with time.

Damping ratio A measure of damping. A dimensionless number for the second-order characteristic equation.

DC motor An electric actuator that uses an input voltage as a control variable.

Differential equation An equation including differentials of a function.

Error signal The difference between the desired output \(R(s)\) and the actual output \(Y(s)\); therefore \(E(s) = R(s) - Y(s)\).

Final value The value that the output achieves after all the transient constituents of the response have faded. Also referred to as the steady-state value.

Final value theorem The theorem that states that \(\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \lim_{s \rightarrow 0}\mspace{2mu} sY(s)\), where \(Y(s)\) is the Laplace transform of \(y(t)\).

Homogeneity The property of a linear system in which the system response, \(y(t)\), to an input \(u(t)\) leads to the response \(\beta y(t)\) when the input is \(\beta u(t)\).

Inverse Laplace transform A transformation of a function \(F(s)\) from the complex frequency domain into the time domain yielding \(f(t)\).

Laplace transform A transformation of a function \(f(t)\) from the time domain into the complex frequency domain yielding \(F(s)\).

Linear approximation An approximate model that results in a linear relationship between the output and the input of the device.

Linear system A system that satisfies the properties of superposition and homogeneity.

Linearized Made linear or placed in a linear form. Taylor series approximations are commonly employed to obtain linear models of physical systems.

Loop A closed path that originates and terminates on the same node of a signal-flow graph with no node being met twice along the path.

Mason loop rule A rule that enables the user to obtain a transfer function by tracing paths and loops within a system.

Mathematical models Descriptions of the behavior of a system using mathematics.

Natural frequency The frequency of natural oscillation that would occur for two complex poles if the damping were equal to zero.

Necessary condition A condition or statement that must be satisfied to achieve a desired effect or result. For example, for a linear system it is necessary that the input \(u_{1}(t) + u_{2}(t)\) results in the response \(y_{1}(t) + y_{2}(t)\), where the input \(u_{1}(t)\) results in the response \(y_{1}(t)\) and the input \(u_{2}(t)\) results in the response \(y_{2}(t)\).

Node The input and output points or junctions in a signalflow graph.

Nontouching Two loops in a signal-flow graph that do not have a common node.

Overdamped The case where the damping ratio is \(\zeta > 1\).

Path A branch or a continuous sequence of branches that can be traversed from one signal (node) to another signal (node) in a signal-flow graph.

Poles The roots of the denominator polynomial (i.e., the roots of the characteristic equation) of the transfer function.

Positive feedback loop Feedback loop wherein the output signal is fed back so that it adds to the input signal.
Principle of superposition The law that states that if two inputs are scaled and summed and routed through a linear, time-invariant system, then the output will be identical to the sum of outputs due to the individual scaled inputs when routed through the same system.

Reference input The input to a control system often representing the desired output, denoted by \(R(s)\).

Residues The constants \(k_{i}\) associated with the partial fraction expansion of the output \(Y(s)\), when the output is written in a residue-pole format.

Signal-flow graph A diagram that consists of nodes connected by several directed branches and that is a graphical representation of a set of linear relations.

Simulation A model of a system that is used to investigate the behavior of a system by utilizing actual input signals.

Steady state The value that the output achieves after all the transient constituents of the response have faded. Also referred to as the final value.

\(s\)-plane The complex plane where, given the complex number \(s = \sigma + j\omega\), the \(x\)-axis (or horizontal axis) is the \(\sigma\)-axis, and the \(y\)-axis (or vertical axis) is the \(j\omega\)-axis.

Taylor series A power series defined by \(g(x) =\) \(\sum_{m = 0}^{\infty}\mspace{2mu}\frac{g^{(m)}\left( x_{0} \right)}{m!}\left( x - x_{0} \right)^{m}\). For \(m < \infty\), the series is an approximation which is used to linearize functions and system models.

Through-variable A variable that has the same value at both ends of an element.

Time constant The time interval necessary for a system to change from one state to another by a specified percentage. For a first order system, the time constant is the time it takes the output to manifest a \(63.2\%\) change due to a step input.

Transfer function The ratio of the Laplace transform of the output variable to the Laplace transform of the input variable.

Underdamped The case where the damping ratio is \(\zeta < 1\).

Unity feedback A feedback control system wherein the gain of the feedback loop is one.

Viscous damper A type of mechanical damper where the model of the friction force is linearly proportional to the velocity of the mass.

Zeros The roots of the numerator polynomial of the transfer function.

CHAPTER 3

State Variable Models

3.1 Introduction 185

3.2 The State Variables of a Dynamic System 185

3.3 The State Differential Equation 188

3.4 Signal-Flow Graph and Block Diagram Models 194

3.5 Alternative Signal-Flow Graph and Block Diagram Models 205

3.6 The Transfer Function from the State Equation 209

3.7 The Time Response and the State Transition Matrix 210

3.8 Design Examples 214

3.9 Analysis of State Variable Models Using Control Design Software 228

3.10 Sequential Design Example: Disk Drive Read System 232

3.11 Summary 235

PREVIEW

In this chapter, we consider system modeling using time-domain methods. We consider physical systems described by an \(n\) th-order ordinary differential equation. Utilizing a (nonunique) set of variables, known as state variables, we can obtain a set of first-order differential equations. We group these first-order equations using a compact matrix notation in a model known as the state variable model. The relationship between signal-flow graph models and state variable models will be investigated. Several interesting physical systems, including a space station and a printer belt drive, are presented and analyzed. The chapter concludes with the development of a state variable model for the Sequential Design Example: Disk Drive Read System.

DESIRED OUTCOMES

Upon completion of Chapter 3, students should be able to:

$\square\ $ Define state variables, state differential equations, and output equations.

\(\square\) Recognize that state variable models can describe the dynamic behavior of physical systems and can be represented by block diagrams and signal flow graphs.

\(\square\) Obtain the transfer function model from a state variable model, and vice versa.

$\square\ $ Identify solution methods for state variable models and describe the role of the state transition matrix in obtaining the time responses.

\(\square\) Explain the important role of state variable modeling in control system design.

3.1 INTRODUCTION

In the preceding chapter, we developed and studied several useful approaches to the analysis and design of feedback systems. The Laplace transform was used to transform the differential equations representing the system to an algebraic equation expressed in terms of the complex variable \(s\). Using this algebraic equation, we were able to obtain a transfer function representation of the input-output relationship.

In this chapter, we represent system models utilizing a set of ordinary differential equations in a convenient matrix-vector form. The time domain is the mathematical domain that incorporates the description of the system, including the inputs, outputs, and response, in terms of time, \(t\). Linear time-invariant single-input, single-output models, can be represented via state variable models. Powerful mathematical concepts from linear algebra and matrix-vector analysis, as well as effective computational tools, can be utilized in the design and analysis of control systems in the time domain. Also, these time domain design and analysis methods are readily extended to nonlinear, time-varying, and multiple input-output systems. As we shall see, mathematical models of linear time-invariant physical systems can be represented in either the frequency domain or the time domain. The time domain design techniques are another tool in the designer's toolbox.

A time-varying control system is a system in which one or more of the parameters of the system may vary as a function of time.

For example, the mass of an airplane varies as a function of time as the fuel is expended during flight. A multivariable system is a system with several input and output signals.

The time-domain representation of control systems is an essential basis for modern control theory and system optimization. In later chapters, we will have an opportunity to design optimum control systems by utilizing time-domain methods. In this chapter, we develop the time-domain representation of control systems and illustrate several methods for the solution of the system time response.

3.2 THE STATE VARIABLES OF A DYNAMIC SYSTEM

The time-domain analysis and design of control systems uses the concept of the state of a system \(\lbrack 1 - 3,5\rbrack\).

The state of a system is a set of variables whose values, together with the input signals and the equations describing the dynamics, will provide the future state and output of the system.

FIGURE 3.1 Dynamic system.

For a dynamic system, the state of a system is described in terms of a set of state variables \(\mathbf{x}(t) = \left( x_{1}(t),x_{2}(t),\ldots,x_{n}(t) \right)\). The state variables are those variables that determine the future behavior of a system when the present state of the system and the inputs are known. Consider the system shown in Figure 3.1, where \(y(t)\) is the output signal and \(u(t)\) is the input signal. A set of state variables \(\mathbf{x}(t) = \left( x_{1}(t),x_{2}(t),\ldots,x_{n}(t) \right)\) for the system shown in the figure is a set such that knowledge of the initial values of the state variables \(\mathbf{x}\left( t_{0} \right) = \left( x_{1}\left( t_{0} \right),x_{2}\left( t_{0} \right),\ldots,x_{n}\left( t_{0} \right) \right)\) at the initial time \(t_{0}\), and of the input signal \(u(t)\) for \(t \geq t_{0}\), suffices to determine the future values of the outputs and state variables [2].

The concept of a set of state variables that represent a dynamic system can be illustrated in terms of the spring-mass-damper system shown in Figure 3.2. The number of state variables chosen to represent this system should be as small as possible in order to avoid redundant state variables. A set of state variables sufficient to describe this system includes the position and the velocity of the mass. Therefore, we will define a set of state variables as \(x(t) = \left( x_{1}(t),x_{2}(t) \right)\), where

\[x_{1}(t) = y(t)\text{~}\text{and}\text{~}\ x_{2}(t) = \frac{dy(t)}{dt}. \]

The differential equation describes the behavior of the system and can be written as

\[M\frac{d^{2}y(t)}{dt^{2}} + b\frac{dy(t)}{dt} + ky(t) = u(t). \]

To write Equation (3.1) in terms of the state variables, we substitute the state variables as already defined and obtain

\[M\frac{dx_{2}(t)}{dt} + bx_{2}(t) + kx_{1}(t) = u(t). \]

Therefore, we can write the equations that describe the behavior of the springmass-damper system as the set of two first-order differential equations

\[\frac{dx_{1}(t)}{dt} = x_{2}(t) \]

and

\[\frac{dx_{2}(t)}{dt} = \frac{- b}{M}x_{2}(t) - \frac{k}{M}x_{1}(t) + \frac{1}{M}u(t) \]

This set of differential equations describes the behavior of the state of the system in terms of the rate of change of each state variable.
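As a quick illustration (with arbitrary parameter values, not taken from the text), these two first-order equations can be entered directly as a state variable model and simulated:

```matlab
% Spring-mass-damper state variable model with x1 = y and x2 = dy/dt.
% The parameter values below are illustrative only.
M = 1; b = 0.5; k = 2;
A = [0, 1; -k/M, -b/M];
B = [0; 1/M];
C = [1 0];                 % output y(t) = x1(t)
D = 0;
sys = ss(A, B, C, D);
step(sys); grid on         % position response to a unit step force
```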

FIGURE 3.3 An \(RLC\) circuit.

As another example of the state variable characterization of a system, consider the \(RLC\) circuit shown in Figure 3.3. The state of this system can be described by a set of state variables \(\mathbf{x}(t) = \left( x_{1}(t),x_{2}(t) \right)\), where \(x_{1}(t)\) is the capacitor voltage \(v_{c}(t)\) and \(x_{2}(t)\) is the inductor current \(i_{L}(t)\). This choice of state variables is intuitively satisfactory because the stored energy of the network can be described in terms of these variables as

\[\mathcal{E} = \frac{1}{2}Li_{L}^{2}(t) + \frac{1}{2}Cv_{c}^{2}(t) \]

Therefore \(x_{1}\left( t_{0} \right)\) and \(x_{2}\left( t_{0} \right)\) provide the total initial energy of the network and the state of the system at \(t = t_{0}\). For a passive \(RLC\) network, the number of state variables required is equal to the number of independent energy-storage elements. Utilizing Kirchhoff's current law at the junction, we obtain a first-order differential equation by describing the rate of change of capacitor voltage as

\[i_{c}(t) = C\frac{dv_{c}(t)}{dt} = + u(t) - i_{L}(t) \]

Kirchhoff's voltage law for the right-hand loop provides the equation describing the rate of change of inductor current as

\[L\frac{di_{L}(t)}{dt} = - Ri_{L}(t) + v_{c}(t) \]

The output of this system is represented by the linear algebraic equation

\[v_{o}(t) = Ri_{L}(t). \]

We can rewrite Equations (3.6) and (3.7) as a set of two first-order differential equations in terms of the state variables \(x_{1}(t)\) and \(x_{2}(t)\) as

\[\frac{dx_{1}(t)}{dt} = - \frac{1}{C}x_{2}(t) + \frac{1}{C}u(t) \]

and

\[\frac{dx_{2}(t)}{dt} = + \frac{1}{L}x_{1}(t) - \frac{R}{L}x_{2}(t) \]

The output signal is then

\[y_{1}(t) = v_{o}(t) = Rx_{2}(t). \]

Utilizing Equations (3.8) and (3.9) and the initial conditions of the network represented by \(\mathbf{x}\left( t_{0} \right) = \left( x_{1}\left( t_{0} \right),x_{2}\left( t_{0} \right) \right)\), we can determine the future behavior. The state variables that describe a system are not a unique set, and several alternative sets of state variables can be chosen. For example, for a second-order system, such as the spring-mass-damper or \(RLC\) circuit, the state variables may be any two independent linear combinations of \(x_{1}(t)\) and \(x_{2}(t)\). For the \(RLC\) circuit, we might choose the set of state variables as the two voltages, \(v_{c}(t)\) and \(v_{L}(t)\), where \(v_{L}(t)\) is the voltage drop across the inductor. Then the new state variables, \(x_{1}^{*}(t)\) and \(x_{2}^{*}(t)\), are related to the old state variables, \(x_{1}(t)\) and \(x_{2}(t)\), as

\[x_{1}^{*}(t) = v_{c}(t) = x_{1}(t), \]

and

\[x_{2}^{*}(t) = v_{L}(t) = v_{c}(t) - Ri_{L}(t) = x_{1}(t) - Rx_{2}(t). \]

Equation (3.12) represents the relation between the inductor voltage and the former state variables \(v_{c}(t)\) and \(i_{L}(t)\). In a typical system, there are several choices of a set of state variables that specify the energy stored in a system and therefore adequately describe the dynamics of the system. It is usual to choose a set of state variables that can be readily measured.

An alternative approach to developing a model of a device is the use of the bond graph. Bond graphs can be used for electrical, mechanical, hydraulic, and thermal devices or systems as well as for combinations of various types of elements. Bond graphs produce a set of equations in the state variable form [7].

The state variables of a system characterize the dynamic behavior of a system. The engineer's interest is primarily in physical systems, where the variables typically are voltages, currents, velocities, positions, pressures, temperatures, and similar physical variables. However, the concept of system state is also useful in analyzing biological, social, and economic systems. For these systems, the concept of state is extended beyond the concept of the current configuration of a physical system to the broader viewpoint of variables that will be capable of describing the future behavior of the system.

3.3 THE STATE DIFFERENTIAL EQUATION

The response of a system is described by the set of first-order differential equations written in terms of the state variables \(\left( x_{1}(t),x_{2}(t),\ldots,x_{n}(t) \right)\) and the inputs \(\left( u_{1}(t),u_{2}(t),\ldots,u_{m}(t) \right)\). A set of linear first-order differential equations can be written in general form as

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = a_{11}x_{1}(t) + a_{12}x_{2}(t) + \cdots + a_{1n}x_{n}(t) + b_{11}u_{1}(t) + \cdots + b_{1m}u_{m}(t), \\ & {\overset{˙}{x}}_{2}(t) = a_{21}x_{1}(t) + a_{22}x_{2}(t) + \cdots + a_{2n}x_{n}(t) + b_{21}u_{1}(t) + \cdots + b_{2m}u_{m}(t), \\ & \ \vdots \\ & {\overset{˙}{x}}_{n}(t) = a_{n1}x_{1}(t) + a_{n2}x_{2}(t) + \cdots + a_{nn}x_{n}(t) + b_{n1}u_{1}(t) + \cdots + b_{nm}u_{m}(t), \end{matrix}\]

where \(\overset{˙}{x}(t) = dx(t)/dt\). Thus, this set of simultaneous differential equations can be written in matrix form as follows \(\lbrack 2,5\rbrack\) :

\[\frac{d}{dt}\begin{pmatrix} x_{1}(t) \\ x_{2}(t) \\ \vdots \\ x_{n}(t) \end{pmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}\begin{pmatrix} x_{1}(t) \\ x_{2}(t) \\ \vdots \\ x_{n}(t) \end{pmatrix} + \begin{bmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & & \vdots \\ b_{n1} & \cdots & b_{nm} \end{bmatrix}\begin{pmatrix} u_{1}(t) \\ \vdots \\ u_{m}(t) \end{pmatrix}.\]

The column matrix consisting of the state variables is called the state vector and is written as

\[\mathbf{x}(t) = \begin{pmatrix} x_{1}(t) \\ x_{2}(t) \\ \vdots \\ x_{n}(t) \end{pmatrix}\]

where the boldface indicates a vector. The vector of input signals is defined as \(\mathbf{u}(t)\). Then the system can be represented by the compact notation of the state differential equation as

\[\overset{˙}{\mathbf{x}}(t) = \mathbf{Ax}(t) + \mathbf{Bu}(t) \]

Equation (3.16) is also commonly called the state equation.

The matrix \(\mathbf{A}\) is an \(n \times n\) square matrix, and \(\mathbf{B}\) is an \(n \times m\) matrix. \(\ ^{\dagger}\) The state differential equation relates the rate of change of the state of the system to the state of the system and the input signals. In general, the outputs of a linear system can be related to the state variables and the input signals by the output equation

\[\mathbf{y}(t) = \mathbf{Cx}(t) + \mathbf{Du}(t) \]

where \(\mathbf{y}(t)\) is the set of output signals expressed in column vector form. The state-space representation (or state-variable representation) comprises the state differential equation and the output equation.

We use Equations (3.8) and (3.9) to obtain the state variable differential equation for the \(RLC\) circuit of Figure 3.3 as

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} 0 & \frac{- 1}{C} \\ \frac{1}{L} & \frac{- R}{L} \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} \frac{1}{C} \\ 0 \end{bmatrix}u(t)\]

†Boldfaced lowercase letters denote vector quantities and boldfaced uppercase letters denote matrices. For an introduction to matrices and elementary matrix operations, refer to the MCS website and references [1] and [2].

and the output as

\[y(t) = \begin{bmatrix} 0 & R \end{bmatrix}\mathbf{x}(t).\]

When \(R = 3,L = 1\), and \(C = 1/2\), we have

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} 0 & - 2 \\ 1 & - 3 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 2 \\ 0 \end{bmatrix}u(t)\]

and

\[y(t) = \begin{bmatrix} 0 & 3 \end{bmatrix}\mathbf{x}(t).\]
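As a cross-check, this numerical state variable model can be entered with ss and converted to a transfer function, which should agree with the transfer function derived for this circuit in Section 3.4 with these parameter values:

```matlab
% RLC circuit state variable model with R = 3, L = 1, C = 1/2.
A = [0, -2; 1, -3];
B = [2; 0];
C = [0 3];
D = 0;
sys = ss(A, B, C, D);
G = tf(sys)                % returns 6/(s^2 + 3s + 2), i.e.
                           % (R/LC)/(s^2 + (R/L)s + 1/(LC)) for these values
```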

The solution of the state differential equation can be obtained in a manner similar to the method for solving a first-order differential equation. Consider the first-order differential equation

\[\overset{˙}{x}(t) = ax(t) + bu(t), \]

where \(x(t)\) and \(u(t)\) are scalar functions of time. We expect an exponential solution of the form \(e^{at}\). Taking the Laplace transform of Equation (3.20), we have

\[sX(s) - x(0) = aX(s) + bU(s) \]

therefore,

\[X(s) = \frac{x(0)}{s - a} + \frac{b}{s - a}U(s) \]

The inverse Laplace transform of Equation (3.21) is

\[x(t) = e^{at}x(0) + \int_{0}^{t}\mspace{2mu} e^{+ a(t - \tau)}bu(\tau)d\tau. \]

We expect the solution of the general state differential equation to be similar to Equation (3.22) and to be of exponential form. The matrix exponential function is defined as

\[e^{\mathbf{A}t} = exp(\mathbf{A}t) = \mathbf{I} + \mathbf{A}t + \frac{\mathbf{A}^{2}t^{2}}{2!} + \cdots + \frac{\mathbf{A}^{k}t^{k}}{k!} + \cdots \]

which converges for all finite \(t\) and any \(\mathbf{A}\lbrack 2\rbrack\). Then the solution of the state differential equation is found to be

\[\mathbf{x}(t) = exp(\mathbf{A}t)\mathbf{x}(0) + \int_{0}^{t}\mspace{2mu} exp\lbrack\mathbf{A}(t - \tau)\rbrack\mathbf{Bu}(\tau)d\tau. \]

Equation (3.24) may be verified by taking the Laplace transform of Equation (3.16) and rearranging to obtain

\[\mathbf{X}(s) = \lbrack s\mathbf{I} - \mathbf{A}\rbrack^{- 1}\mathbf{x}(0) + \lbrack s\mathbf{I} - \mathbf{A}\rbrack^{- 1}\mathbf{BU}(s), \]

where we note that \(\lbrack s\mathbf{I} - \mathbf{A}\rbrack^{- 1} = \mathbf{\Phi}(s)\) is the Laplace transform of \(\mathbf{\Phi}(t) = exp(\mathbf{A}t)\). Taking the inverse Laplace transform of Equation (3.25) and noting that the second term on the right-hand side involves the product \(\mathbf{\Phi}(s)\mathbf{BU}(s)\), we obtain Equation (3.24). The matrix exponential function describes the unforced response of the system and is called the fundamental or state transition matrix \(\mathbf{\Phi}(t)\). Thus, Equation (3.24) can be written as

\[\mathbf{x}(t) = \mathbf{\Phi}(t)\mathbf{x}(0) + \int_{0}^{t}\mspace{2mu}\mathbf{\Phi}(t - \tau)\mathbf{Bu}(\tau)d\tau \]

The solution to the unforced system (that is, when \(\mathbf{u}(t) = 0\) ) is

\[\begin{pmatrix} x_{1}(t) \\ x_{2}(t) \\ \vdots \\ x_{n}(t) \end{pmatrix} = \begin{bmatrix} \phi_{11}(t) & \cdots & \phi_{1n}(t) \\ \phi_{21}(t) & \cdots & \phi_{2n}(t) \\ \vdots & & \vdots \\ \phi_{n1}(t) & \cdots & \phi_{nn}(t) \end{bmatrix}\begin{pmatrix} x_{1}(0) \\ x_{2}(0) \\ \vdots \\ x_{n}(0) \end{pmatrix}.\]

We note that to determine the state transition matrix, all initial conditions are set to 0 except for one state variable, and the output of each state variable is evaluated. That is, the term \(\phi_{ij}(t)\) is the response of the \(i\) th state variable due to an initial condition on the \(j\) th state variable when there are zero initial conditions on all the other variables. We shall use this relationship between the initial conditions and the state variables to evaluate the coefficients of the transition matrix in a later section. However, first we shall develop several suitable signal-flow state models of systems and investigate the stability of the systems by utilizing these flow graphs.
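For a numerical system matrix, the state transition matrix at any time can be evaluated directly with the matrix exponential; a brief sketch using the numerical \(RLC\) system matrix from the preceding section (\(R = 3\), \(L = 1\), \(C = 1/2\)) and an illustrative initial state:

```matlab
% State transition matrix Phi(t) = expm(A*t) evaluated at sample times,
% using the numerical RLC system matrix from the text (R = 3, L = 1, C = 1/2).
A = [0, -2; 1, -3];
x0 = [1; 0];                        % illustrative initial state
for t = [0 0.5 1.0]
    Phi = expm(A*t);                % state transition matrix at time t
    x = Phi*x0;                     % unforced response x(t) = Phi(t)*x0
    fprintf('t = %.1f: x1 = %.4f, x2 = %.4f\n', t, x(1), x(2));
end
```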

EXAMPLE 3.1 Two rolling carts

Consider the system shown in Figure 3.4. The variables of interest are noted on the figure and defined as: \(M_{1},M_{2} =\) mass of carts, \(p(t),q(t) =\) position of carts, \(u(t) =\) external force acting on system, \(k_{1},k_{2} =\) spring constants, and \(b_{1},b_{2} =\) damping coefficients. The free-body diagram of mass \(M_{1}\) is shown in Figure 3.5(b), where \(\overset{˙}{p}(t),\overset{˙}{q}(t) =\) velocity of \(M_{1}\) and \(M_{2}\), respectively. We assume that the carts have negligible rolling friction. We consider any existing rolling friction to be lumped into the damping coefficients, \(b_{1}\) and \(b_{2}\).

FIGURE 3.4 Two rolling carts attached with springs and dampers.

Now, given the free-body diagram with forces and directions appropriately applied, we use Newton's second law (the sum of the forces equals the mass of the object multiplied by its acceleration) to obtain the equations of motion, one equation for each mass. For mass \(M_{1}\) we have

\[M_{1}\overset{¨}{p}(t) + b_{1}\overset{˙}{p}(t) + k_{1}p(t) = u(t) + k_{1}q(t) + b_{1}\overset{˙}{q}(t) \]

where

\[\overset{¨}{p}(t),\overset{¨}{q}(t) = \text{~}\text{acceleration of}\text{~}M_{1}\text{~}\text{and}\text{~}M_{2}\text{, respectively.}\text{~} \]

Similarly, for mass \(M_{2}\) in Figure 3.5(a), we have

\[M_{2}\overset{¨}{q}(t) + \left( k_{1} + k_{2} \right)q(t) + \left( b_{1} + b_{2} \right)\overset{˙}{q}(t) = k_{1}p(t) + b_{1}\overset{˙}{p}(t). \]

We now have a model given by the two second-order ordinary differential equations in Equations (3.28) and (3.29). We can start developing a state-space model by defining

\[\begin{matrix} & x_{1}(t) = p(t), \\ & x_{2}(t) = q(t). \end{matrix}\]

We could have alternatively defined \(x_{1}(t) = q(t)\) and \(x_{2}(t) = p(t)\). The state-space model is not unique. Denoting the derivatives of \(x_{1}(t)\) and \(x_{2}(t)\) as \(x_{3}(t)\) and \(x_{4}(t)\), respectively, it follows that

\[\begin{matrix} & x_{3}(t) = {\overset{˙}{x}}_{1}(t) = \overset{˙}{p}(t), \\ & x_{4}(t) = {\overset{˙}{x}}_{2}(t) = \overset{˙}{q}(t). \end{matrix}\]

Taking the derivative of \(x_{3}(t)\) and \(x_{4}(t)\) yields, respectively,

\[\begin{matrix} {\overset{˙}{x}}_{3}(t) = \overset{¨}{p}(t) = - \frac{b_{1}}{M_{1}}\overset{˙}{p}(t) - \frac{k_{1}}{M_{1}}p(t) + \frac{1}{M_{1}}u(t) + \frac{k_{1}}{M_{1}}q(t) + \frac{b_{1}}{M_{1}}\overset{˙}{q}(t), \\ {\overset{˙}{x}}_{4}(t) = \overset{¨}{q}(t) = - \frac{k_{1} + k_{2}}{M_{2}}q(t) - \frac{b_{1} + b_{2}}{M_{2}}\overset{˙}{q}(t) + \frac{k_{1}}{M_{2}}p(t) + \frac{b_{1}}{M_{2}}\overset{˙}{p}(t), \end{matrix}\]

where we use the relationship for \(\overset{¨}{p}(t)\) given in Equation (3.28) and the relationship for \(\overset{¨}{q}(t)\) given in Equation (3.29). But \(\overset{˙}{p}(t) = x_{3}(t)\) and \(\overset{˙}{q}(t) = x_{4}(t)\), so Equation (3.32) can be written as

\[{\overset{˙}{x}}_{3}(t) = - \frac{k_{1}}{M_{1}}x_{1}(t) + \frac{k_{1}}{M_{1}}x_{2}(t) - \frac{b_{1}}{M_{1}}x_{3}(t) + \frac{b_{1}}{M_{1}}x_{4}(t) + \frac{1}{M_{1}}u(t) \]

and Equation (3.33) as

\[{\overset{˙}{x}}_{4}(t) = \frac{k_{1}}{M_{2}}x_{1}(t) - \frac{k_{1} + k_{2}}{M_{2}}x_{2}(t) + \frac{b_{1}}{M_{2}}x_{3}(t) - \frac{b_{1} + b_{2}}{M_{2}}x_{4}(t). \]

In matrix form, Equations (3.30), (3.31), (3.34), and (3.35) can be written as

\[\overset{˙}{\mathbf{x}}(t) = \mathbf{Ax}(t) + \mathbf{B}u(t) \]

where

\[\begin{matrix} \mathbf{x}(t) = \begin{pmatrix} x_{1}(t) \\ x_{2}(t) \\ x_{3}(t) \\ x_{4}(t) \end{pmatrix} = \begin{pmatrix} p(t) \\ q(t) \\ \overset{˙}{p}(t) \\ \overset{˙}{q}(t) \end{pmatrix}, \\ \mathbf{A} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ - \frac{k_{1}}{M_{1}} & \frac{k_{1}}{M_{1}} & - \frac{b_{1}}{M_{1}} & \frac{b_{1}}{M_{1}} \\ \frac{k_{1}}{M_{2}} & - \frac{k_{1} + k_{2}}{M_{2}} & \frac{b_{1}}{M_{2}} & - \frac{b_{1} + b_{2}}{M_{2}} \end{bmatrix},\text{~}\text{and}\text{~}\mathbf{B} = \begin{bmatrix} 0 \\ 0 \\ \frac{1}{M_{1}} \\ 0 \end{bmatrix}, \end{matrix}\]

and \(u(t)\) is the external force acting on the system. If we choose \(p(t)\) as the output, then

\[y(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}\mathbf{x}(t) = \mathbf{Cx}(t)\]

Suppose that the two rolling carts have the following parameter values: \(k_{1} = 150\text{ }N/m\), \(k_{2} = 700\text{ }N/m\), \(b_{1} = 15\text{ }N\text{ }s/m\), \(b_{2} = 30\text{ }N\text{ }s/m\), \(M_{1} = 5\text{ }kg\), and \(M_{2} = 20\text{ }kg\). The response of the two rolling cart system is shown in Figure 3.6 when the initial conditions are \(p(0) = 10\text{ }cm\), \(q(0) = 0\), and \(\overset{˙}{p}(0) = \overset{˙}{q}(0) = 0\), and there is no input driving force, that is, \(u(t) = 0\).
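A short script along these lines generates the initial condition response of Figure 3.6, using the given parameter values and the initial function from the Control System Toolbox:

```matlab
% Two rolling carts (Example 3.1): initial condition response with
% p(0) = 0.10 m, q(0) = 0, zero initial velocities, and u(t) = 0.
k1 = 150; k2 = 700; b1 = 15; b2 = 30; M1 = 5; M2 = 20;
A = [ 0,           0,          1,       0;
      0,           0,          0,       1;
     -k1/M1,       k1/M1,     -b1/M1,   b1/M1;
      k1/M2, -(k1+k2)/M2,      b1/M2,  -(b1+b2)/M2];
B = [0; 0; 1/M1; 0];
C = [1 0 0 0];                      % output p(t)
sys = ss(A, B, C, 0);
x0 = [0.10; 0; 0; 0];               % initial state (m and m/s)
initial(sys, x0); grid on           % unforced (initial condition) response
```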

FIGURE 3.5 Free-body diagrams of the two rolling carts. (a) Cart 2; (b) Cart 1.

FIGURE 3.6 Initial condition response of the two-cart system.

3.4 SIGNAL-FLOW GRAPH AND BLOCK DIAGRAM MODELS

The state of a system describes the dynamic behavior where the dynamics of the system are represented by a set of first-order differential equations. Alternatively, the dynamics of the system can be represented by a state differential equation as in Equation (3.16). In either case, it is useful to develop a graphical model of the system and use this model to relate the state variable concept to the familiar transfer function representation. The graphical model can be represented via signal-flow graphs or block diagrams.

As we have learned in previous chapters, a system can be meaningfully described by an input-output relationship, the transfer function \(G(s)\). For example, if we are interested in the relation between the output voltage and the input voltage of the network of Figure 3.3, we can obtain the transfer function

\[G(s) = \frac{V_{0}(s)}{U(s)}. \]

The transfer function for the \(RLC\) network of Figure 3.3 is of the form

\[G(s) = \frac{V_{0}(s)}{U(s)} = \frac{\alpha}{s^{2} + \beta s + \gamma}, \]

where \(\alpha,\beta\), and \(\gamma\) are functions of the circuit parameters \(R,L\), and \(C\), respectively. The values of \(\alpha,\beta\), and \(\gamma\) can be determined from the differential equations that describe the circuit. For the \(RLC\) circuit (see Equations 3.8 and 3.9), we have

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = - \frac{1}{C}x_{2}(t) + \frac{1}{C}u(t), \\ & {\overset{˙}{x}}_{2}(t) = \frac{1}{L}x_{1}(t) - \frac{R}{L}x_{2}(t), \end{matrix}\]

and

\[v_{o}(t) = Rx_{2}(t). \]

The flow graph representing these simultaneous equations is shown in Figure 3.7(a), where \(1/s\) indicates an integration. The corresponding block diagram model is shown in Figure 3.7(b). The transfer function is found to be

\[\frac{V_{o}(s)}{U(s)} = \frac{R/\left( LCs^{2} \right)}{1 + R/(Ls) + 1/\left( LCs^{2} \right)} = \frac{R/(LC)}{s^{2} + (R/L)s + 1/(LC)}. \]

Many electric circuits, electromechanical systems, and other control systems are not as simple as the \(RLC\) circuit of Figure 3.3, and it is often a difficult task to determine a set of first-order differential equations describing the system. Therefore, it is often simpler to derive the transfer function of the system and then derive the state model from the transfer function.

FIGURE 3.7 \(RLC\) network. (a) Signal-flow graph. (b) Block diagram.

The signal-flow graph state model and the block diagram model can be readily derived from the transfer function of a system. However, as we noted in Section 3.3, there is more than one alternative set of state variables, and therefore there is more than one possible form for the signal-flow graph and block diagram models. There are several key canonical forms of the state-variable representation, such as the phase variable canonical form, that we will investigate in this chapter. In general, we can represent a transfer function as

\[G(s) = \frac{Y(s)}{U(s)} = \frac{b_{m}s^{m} + b_{m - 1}s^{m - 1} + \cdots + b_{1}s + b_{0}}{s^{n} + a_{n - 1}s^{n - 1} + \cdots + a_{1}s + a_{0}} \]

where \(n \geq m\), and all the \(a\) and \(b\) coefficients are real numbers. If we multiply the numerator and denominator by \(s^{- n}\), we obtain

\[G(s) = \frac{b_{m}s^{- (n - m)} + b_{m - 1}s^{- (n - m + 1)} + \cdots + b_{1}s^{- (n - 1)} + b_{0}s^{- n}}{1 + a_{n - 1}s^{- 1} + \cdots + a_{1}s^{- (n - 1)} + a_{0}s^{- n}}. \]

Our familiarity with Mason's signal-flow gain formula allows us to recognize the familiar feedback factors in the denominator and the forward-path factors in the numerator. Mason's signal-flow gain formula was discussed in Section 2.7 and is written as

\[G(s) = \frac{Y(s)}{U(s)} = \frac{\sum_{k}^{}\mspace{2mu}\mspace{2mu} P_{k}(s)\Delta_{k}(s)}{\Delta(s)}. \]

When all the feedback loops are touching and all the forward paths touch the feedback loops, Equation (3.43) reduces to

\[G(s) = \frac{\sum_{k}^{}\mspace{2mu}\mspace{2mu} P_{k}(s)}{1 - \sum_{q = 1}^{N}\mspace{2mu}\mspace{2mu} L_{q}(s)} = \frac{\text{~}\text{Sum of the forward-path factors}\text{~}}{1 - \text{~}\text{sum of the feedback loop factors}\text{~}}. \]

There are several flow graphs that could represent the transfer function. Two flow graph configurations based on Mason's signal-flow gain formula are of particular interest, and we will consider these in greater detail. In the next section, we will consider two additional configurations: the physical state variable model and the diagonal (or Jordan canonical) form model.

To illustrate the derivation of the signal-flow graph state model, let us initially consider the fourth-order transfer function

\[\begin{matrix} G(s) & \ = \frac{Y(s)}{U(s)} = \frac{b_{0}}{s^{4} + a_{3}s^{3} + a_{2}s^{2} + a_{1}s + a_{0}} \\ & \ = \frac{b_{0}s^{- 4}}{1 + a_{3}s^{- 1} + a_{2}s^{- 2} + a_{1}s^{- 3} + a_{0}s^{- 4}}. \end{matrix}\]

First we note that the system is fourth order, and hence we identify four state variables \(\left( x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t) \right)\). Recalling Mason's signal-flow gain formula, we note that the denominator can be considered to be 1 minus the sum of the loop gains. Furthermore, the numerator of the transfer function is equal to the forward-path factor of the flow graph. The flow graph must include a minimum number of integrators equal to the order of the system. Therefore, we use four integrators to represent this system. The necessary flow graph nodes and the four integrators are shown in Figure 3.8.

FIGURE 3.8 Flow graph nodes and integrators for fourth-order system.

FIGURE 3.9 Model for \(G(s)\) of Equation (3.45). (a) Signal-flow graph. (b) Block diagram.

Considering the simplest series interconnection of integrators, we can represent the transfer function by the flow graph of Figure 3.9. Examining this figure, we note that all the loops are touching and that the transfer function of this flow graph is indeed Equation (3.45). The reader can readily verify this by noting that the forward-path factor of the flow graph is \(b_{0}/s^{4}\) and the denominator is equal to 1 minus the sum of the loop gains.

We can also consider the block diagram model of Equation (3.45). Rearranging the terms in Equation (3.45) and taking the inverse Laplace transform yields the differential equation model

\[\begin{matrix} \frac{d^{4}\left( y(t)/b_{0} \right)}{dt^{4}} & \ + a_{3}\frac{d^{3}\left( y(t)/b_{0} \right)}{dt^{3}} + a_{2}\frac{d^{2}\left( y(t)/b_{0} \right)}{dt^{2}} + a_{1}\frac{d\left( y(t)/b_{0} \right)}{dt} \\ & \ + a_{0}\left( y(t)/b_{0} \right) = u(t). \end{matrix}\]

Define the four state variables as follows:

\[\begin{matrix} & x_{1}(t) = y(t)/b_{0} \\ & x_{2}(t) = {\overset{˙}{x}}_{1}(t) = \overset{˙}{y}(t)/b_{0} \\ & x_{3}(t) = {\overset{˙}{x}}_{2}(t) = \overset{¨}{y}(t)/b_{0} \\ & x_{4}(t) = {\overset{˙}{x}}_{3}(t) = \dddot{y}(t)/b_{0}. \end{matrix}\]

Then it follows that the fourth-order differential equation can be written equivalently as four first-order differential equations, namely,

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = x_{2}(t), \\ & {\overset{˙}{x}}_{2}(t) = x_{3}(t), \\ & {\overset{˙}{x}}_{3}(t) = x_{4}(t), \end{matrix}\]

and

\[{\overset{˙}{x}}_{4}(t) = - a_{0}x_{1}(t) - a_{1}x_{2}(t) - a_{2}x_{3}(t) - a_{3}x_{4}(t) + u(t); \]

and the corresponding output equation is

\[y(t) = b_{0}x_{1}(t). \]

The block diagram model can be readily obtained from the four first-order differential equations as illustrated in Figure 3.9(b).

Now consider the fourth-order transfer function when the numerator is a polynomial in \(s\), so that we have

\[\begin{matrix} G(s) & \ = \frac{b_{3}s^{3} + b_{2}s^{2} + b_{1}s + b_{0}}{s^{4} + a_{3}s^{3} + a_{2}s^{2} + a_{1}s + a_{0}} \\ & \ = \frac{b_{3}s^{- 1} + b_{2}s^{- 2} + b_{1}s^{- 3} + b_{0}s^{- 4}}{1 + a_{3}s^{- 1} + a_{2}s^{- 2} + a_{1}s^{- 3} + a_{0}s^{- 4}}. \end{matrix}\]

The numerator terms represent forward-path factors in Mason's signal-flow gain formula. The forward paths will touch all the loops, and a suitable signal-flow graph realization of Equation (3.46) is shown in Figure 3.10(a). The forward-path factors are \(b_{3}/s,b_{2}/s^{2},b_{1}/s^{3}\), and \(b_{0}/s^{4}\) as required to provide the numerator of the transfer function. Recall that Mason's signal-flow gain formula indicates that the numerator of the transfer function is simply the sum of the forward-path factors. This general form of a signal-flow graph can represent the general transfer function of Equation (3.46) by utilizing \(n\) feedback loops involving the \(a_{n}\) coefficients and \(m\) forward-path factors involving the \(b_{m}\) coefficients. The general form of the flow graph state model and the block diagram model shown in Figure 3.10 is called the phase variable canonical form.

The state variables are identified in Figure 3.10 as the output of each energy storage element, that is, the output of each integrator. To obtain the set of first-order differential equations representing the state model of Equation (3.46), we will introduce a new set of flow graph nodes immediately preceding each integrator of Figure 3.10(a) [5,6]. The nodes are placed before each integrator, and therefore they represent the derivative of the output of each integrator. The signal-flow graph, including the added nodes, is shown in Figure 3.11. Using the flow graph of this figure, we are able to obtain the following set of first-order differential equations describing the state of the model:

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = x_{2}(t),\ {\overset{˙}{x}}_{2}(t) = x_{3}(t),\ {\overset{˙}{x}}_{3}(t) = x_{4}(t), \\ & {\overset{˙}{x}}_{4}(t) = - a_{0}x_{1}(t) - a_{1}x_{2}(t) - a_{2}x_{3}(t) - a_{3}x_{4}(t) + u(t). \end{matrix}\]

In this equation, \(x_{1}(t),x_{2}(t),\ldots x_{n}(t)\) are the \(n\) phase variables.

FIGURE 3.10 Model for \(G(s)\) of Equation (3.46) in the phase variable format. (a) Signal-flow graph. (b) Block diagram.

FIGURE 3.11 Flow graph of Figure 3.10 with nodes inserted.

The block diagram model can also be constructed directly from Equation (3.46). Define the intermediate variable \(Z(s)\) and rewrite Equation (3.46) as

\[G(s) = \frac{Y(s)}{U(s)} = \frac{b_{3}s^{3} + b_{2}s^{2} + b_{1}s + b_{0}}{s^{4} + a_{3}s^{3} + a_{2}s^{2} + a_{1}s + a_{0}}\frac{Z(s)}{Z(s)}. \]

Notice that, by multiplying by \(Z(s)/Z(s)\), we do not change the transfer function, \(G(s)\). Equating the numerator and denominator polynomials yields

\[Y(s) = \left\lbrack b_{3}s^{3} + b_{2}s^{2} + b_{1}s + b_{0} \right\rbrack Z(s) \]

and

\[U(s) = \left\lbrack s^{4} + a_{3}s^{3} + a_{2}s^{2} + a_{1}s + a_{0} \right\rbrack Z(s). \]

Taking the inverse Laplace transform of both equations yields the differential equations

\[y(t) = b_{3}\frac{d^{3}z(t)}{dt^{3}} + b_{2}\frac{d^{2}z(t)}{dt^{2}} + b_{1}\frac{dz(t)}{dt} + b_{0}z(t) \]

and

\[u(t) = \frac{d^{4}z(t)}{dt^{4}} + a_{3}\frac{d^{3}z(t)}{dt^{3}} + a_{2}\frac{d^{2}z(t)}{dt^{2}} + a_{1}\frac{dz(t)}{dt} + a_{0}z(t). \]

Define the four state variables as follows:

\[\begin{matrix} & x_{1}(t) = z(t) \\ & x_{2}(t) = {\overset{˙}{x}}_{1}(t) = \overset{˙}{z}(t) \\ & x_{3}(t) = {\overset{˙}{x}}_{2}(t) = \overset{¨}{z}(t) \\ & x_{4}(t) = {\overset{˙}{x}}_{3}(t) = \dddot{z}(t). \end{matrix}\]

Then the differential equation can be written equivalently as

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = x_{2}(t), \\ & {\overset{˙}{x}}_{2}(t) = x_{3}(t), \\ & {\overset{˙}{x}}_{3}(t) = x_{4}(t), \end{matrix}\]

and

\[{\overset{˙}{x}}_{4}(t) = - a_{0}x_{1}(t) - a_{1}x_{2}(t) - a_{2}x_{3}(t) - a_{3}x_{4}(t) + u(t), \]

and the corresponding output equation is

\[y(t) = b_{0}x_{1}(t) + b_{1}x_{2}(t) + b_{2}x_{3}(t) + b_{3}x_{4}(t). \]

The block diagram model can be readily obtained from the four first-order differential equations and the output equation as illustrated in Figure 3.10(b). In matrix form, we can represent the system in Equation (3.46) as

\[\overset{˙}{\mathbf{x}}(t) = \mathbf{Ax}(t) + \mathbf{B}u(t), \]

or

\[\frac{d}{dt}\begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{pmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ - a_{0} & - a_{1} & - a_{2} & - a_{3} \end{bmatrix}\begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{pmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}u(t).\]

The output is

\[y(t) = \mathbf{Cx}(t) = \begin{bmatrix} b_{0} & b_{1} & b_{2} & b_{3} \end{bmatrix}\begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{pmatrix}.\]
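As a quick numerical check of this construction (not part of the original text), the following Python sketch builds the phase variable canonical matrices for an assumed set of coefficients and converts them back to a transfer function with scipy.signal.ss2tf; the coefficient values \(a_i\), \(b_i\) used here are illustrative only.

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative coefficients (assumed values, not taken from the text):
# G(s) = (b3 s^3 + b2 s^2 + b1 s + b0) / (s^4 + a3 s^3 + a2 s^2 + a1 s + a0)
a0, a1, a2, a3 = 6.0, 11.0, 6.0, 1.0
b0, b1, b2, b3 = 1.0, 2.0, 3.0, 4.0

# Phase variable canonical form: companion A with -a_i along the bottom row
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [-a0, -a1, -a2, -a3]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])
C = np.array([[b0, b1, b2, b3]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num)  # expect [[0, b3, b2, b1, b0]] (highest power of s first)
print(den)  # expect [1, a3, a2, a1, a0]
```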

The graphical structures of Figure 3.10 are not unique representations of Equation (3.46); another equally useful structure can be obtained. A flow graph that represents Equation (3.46) equally well is shown in Figure 3.12(a). In this case, the forward-path factors are obtained by feeding forward the signal \(U(s)\). We will call this model the input feedforward canonical form.

In this form, the output signal \(y(t)\) is equal to the first state variable \(x_{1}(t)\). This flow graph structure has the forward-path factors \(b_{0}/s^{4},b_{1}/s^{3},b_{2}/s^{2},b_{3}/s\), and all the forward paths touch the feedback loops. Therefore, the resulting transfer function is indeed equal to Equation (3.46).

Associated with the input feedforward format, we have the set of first-order differential equations

\[\begin{matrix} & {\overset{˙}{x}}_{1}(t) = - a_{3}x_{1}(t) + x_{2}(t) + b_{3}u(t), \\ & {\overset{˙}{x}}_{2}(t) = - a_{2}x_{1}(t) + x_{3}(t) + b_{2}u(t), \\ & {\overset{˙}{x}}_{3}(t) = - a_{1}x_{1}(t) + x_{4}(t) + b_{1}u(t), \\ & {\overset{˙}{x}}_{4}(t) = - a_{0}x_{1}(t) + b_{0}u(t). \end{matrix}\]

Thus, in matrix form, we have

\[\frac{d\mathbf{x}(t)}{dt} = \begin{bmatrix} - a_{3} & 1 & 0 & 0 \\ - a_{2} & 0 & 1 & 0 \\ - a_{1} & 0 & 0 & 1 \\ - a_{0} & 0 & 0 & 0 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} b_{3} \\ b_{2} \\ b_{1} \\ b_{0} \end{bmatrix}u(t)\]

and

\[y(t) = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}\mathbf{x}(t) + \lbrack 0\rbrack u(t).\]
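A matching sketch (again illustrative, with the same assumed coefficient values) builds the input feedforward matrices shown above; scipy.signal.ss2tf confirms that this realization has the same transfer function as the phase variable form.

```python
import numpy as np
from scipy.signal import ss2tf

# Same illustrative coefficients as in the previous sketch (assumed values)
a0, a1, a2, a3 = 6.0, 11.0, 6.0, 1.0
b0, b1, b2, b3 = 1.0, 2.0, 3.0, 4.0

# Input feedforward canonical form: -a_i in the first column, b_i in B
A = np.array([[-a3, 1, 0, 0],
              [-a2, 0, 1, 0],
              [-a1, 0, 0, 1],
              [-a0, 0, 0, 0]])
B = np.array([[b3], [b2], [b1], [b0]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num, den)  # same coefficients as the phase variable realization
```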

Although the input feedforward canonical form of Figure 3.12 represents the same transfer function as the phase variable canonical form of Figure 3.10, the state variables of each graph are not equal. Furthermore, we recognize that the initial conditions of the system can be represented by the initial conditions of the integrators, \(x_{1}(0),x_{2}(0),\ldots,x_{n}(0)\). Let us consider a control system and determine the state differential equation by utilizing the two forms of flow graph state models.

FIGURE 3.12 (a) Alternative flow graph state model for Equation (3.46). This model is called the input feedforward canonical form. (b) Block diagram of the input feedforward canonical form.

122. EXAMPLE 3.2 Two state variable models

Consider a closed-loop transfer function

\[T(s) = \frac{Y(s)}{U(s)} = \frac{2s^{2} + 8s + 6}{s^{3} + 8s^{2} + 16s + 6}. \]

Multiplying the numerator and denominator by \(s^{- 3}\), we have

\[T(s) = \frac{Y(s)}{U(s)} = \frac{2s^{- 1} + 8s^{- 2} + 6s^{- 3}}{1 + 8s^{- 1} + 16s^{- 2} + 6s^{- 3}}. \]

FIGURE 3.13 (a) Phase variable flow graph state model for \(T(s)\). (b) Block diagram for the phase variable canonical form.

The first model is the phase variable state model using the feedforward of the state variables to provide the output signal. The signal-flow graph and block diagram are shown in Figures 3.13(a) and (b), respectively. The state differential equation is

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ - 6 & - 16 & - 8 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u(t),\]

and the output is

\[y(t) = \begin{bmatrix} 6 & 8 & 2 \end{bmatrix}\mathbf{x}(t).\]

The second model uses the feedforward of the input variable, as shown in Figure 3.14. The vector differential equation for the input feedforward model is

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} - 8 & 1 & 0 \\ - 16 & 0 & 1 \\ - 6 & 0 & 0 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 2 \\ 8 \\ 6 \end{bmatrix}u(t)\]

and the output is

\[y(t) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\mathbf{x}(t).\]
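As an illustrative cross-check of Example 3.2 (assuming Python with SciPy is available; this is not part of the original text), converting both realizations back to transfer function form returns the same coefficients, namely those of \(T(s)\).

```python
import numpy as np
from scipy.signal import ss2tf

# Phase variable form (Figure 3.13)
A1 = np.array([[0, 1, 0], [0, 0, 1], [-6, -16, -8]])
B1 = np.array([[0.0], [0.0], [1.0]])
C1 = np.array([[6.0, 8.0, 2.0]])

# Input feedforward form (Figure 3.14)
A2 = np.array([[-8, 1, 0], [-16, 0, 1], [-6, 0, 0]])
B2 = np.array([[2.0], [8.0], [6.0]])
C2 = np.array([[1.0, 0.0, 0.0]])

D = np.array([[0.0]])
for A, B, C in ((A1, B1, C1), (A2, B2, C2)):
    num, den = ss2tf(A, B, C, D)
    print(np.round(num, 6), np.round(den, 6))
# Both print the coefficients of (2s^2 + 8s + 6) / (s^3 + 8s^2 + 16s + 6).
```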

FIGURE 3.14 (a) Alternative flow graph state model for \(T(s)\) using the input feedforward canonical form. (b) Block diagram model.

We note that it was not necessary to factor the numerator or denominator polynomial to obtain the state differential equations for the phase variable model or the input feedforward model; avoiding that factoring spares considerable tedious effort. Both models require three integrators because the system is third order. However, it is important to emphasize that the state variables of the state model of Figure 3.13 are not identical to the state variables of the state model of Figure 3.14. Of course, one set of state variables is related to the other by an appropriate linear transformation of variables. A linear matrix transformation is represented by \(\mathbf{z} = \mathbf{Mx}\), which transforms the \(\mathbf{x}\)-vector into the \(\mathbf{z}\)-vector by means of the \(\mathbf{M}\) matrix (a numerical sketch of one way to compute such an \(\mathbf{M}\) is given at the end of this discussion). Finally, we note that the transfer function of Equation (3.41) represents a single-output linear constant coefficient system; thus, the transfer function can represent an \(n\)th-order differential equation

\[\frac{d^{n}y(t)}{dt^{n}} + a_{n - 1}\frac{d^{n - 1}y(t)}{dt^{n - 1}} + \cdots + a_{0}y(t) = \frac{d^{m}u(t)}{dt^{m}} + b_{m - 1}\frac{d^{m - 1}u(t)}{dt^{m - 1}} + \cdots + b_{0}u(t).\]

Accordingly, we can obtain the \(n\) first-order equations for the \(n\)th-order differential equation by utilizing the phase variable model or the input feedforward model of this section.
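Returning to Example 3.2, the sketch below shows one way to compute a transformation \(\mathbf{M}\) with \(\mathbf{z} = \mathbf{Mx}\) relating the two sets of state variables, assuming both realizations are controllable (true here, since \(T(s)\) has no pole-zero cancellation). It equates the controllability matrices of the two realizations; this is an illustrative numerical approach, not a procedure from the text.

```python
import numpy as np

# Example 3.2 realizations: x is the phase variable state (Figure 3.13),
# z is the input feedforward state (Figure 3.14); we seek M with z = M x.
Ax = np.array([[0, 1, 0], [0, 0, 1], [-6, -16, -8]])
Bx = np.array([[0.0], [0.0], [1.0]])
Az = np.array([[-8, 1, 0], [-16, 0, 1], [-6, 0, 0]])
Bz = np.array([[2.0], [8.0], [6.0]])

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B] (local helper, not a library call)."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# If z = M x, then ctrb(Az, Bz) = M @ ctrb(Ax, Bx), so:
M = ctrb(Az, Bz) @ np.linalg.inv(ctrb(Ax, Bx))

# Verify the similarity relations Az = M Ax M^{-1} and Bz = M Bx
print(np.allclose(Az, M @ Ax @ np.linalg.inv(M)))  # True
print(np.allclose(Bz, M @ Bx))                     # True
```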

122.1. ALTERNATIVE SIGNAL-FLOW GRAPH AND BLOCK DIAGRAM MODELS

Often the control system designer studies an actual control system block diagram that represents physical devices and variables. An example of a model of a DC motor with shaft velocity as the output is shown in Figure 3.15 [9]. We wish to select the physical variables as the state variables. Thus, we select: \(x_{1}(t) = y(t)\), the velocity output; \(x_{2}(t) = i(t)\), the field current; and the third state variable, \(x_{3}(t)\), is selected to be \(x_{3}(t) = \frac{1}{4}r(t) - \frac{1}{20}u(t)\), where \(u(t)\) is the field voltage. We may draw the models for these physical variables, as shown in Figure 3.16. Note that the state variables \(x_{1}(t),x_{2}(t)\), and \(x_{3}(t)\) are identified on the models. We will denote this format as the physical state variable model. This model is particularly useful when we can measure the physical state variables. Note that the model of each block is separately determined. For example, note that the transfer function for the controller is

\[\frac{U(s)}{R(s)} = G_{c}(s) = \frac{5(s + 1)}{s + 5} = \frac{5 + 5s^{- 1}}{1 + 5s^{- 1}}, \]

and the flow graph between \(R(s)\) and \(U(s)\) represents \(G_{c}(s)\).

FIGURE 3.15 A block diagram model of an open-loop DC motor control with velocity as the output.

FIGURE 3.16 (a) The physical state variable signal-flow graph for the block diagram of Figure 3.15. (b) Physical state block diagram.

The state variable differential equation is directly obtained from Figure 3.16 as

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} - 3 & 6 & 0 \\ 0 & - 2 & - 20 \\ 0 & 0 & - 5 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ 5 \\ 1 \end{bmatrix}r(t)\]

and

\[y(t) = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\mathbf{x}(t).\]
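As a numerical cross-check (an illustrative Python sketch, not part of the text), converting this physical state variable model back to a transfer function recovers the cascade of the three blocks of Figure 3.15, \(30(s + 1)/\lbrack(s + 5)(s + 2)(s + 3)\rbrack\), which is written out in the next paragraph.

```python
import numpy as np
from scipy.signal import ss2tf

# Physical state variable model of the DC motor system (Figure 3.16)
A = np.array([[-3, 6, 0], [0, -2, -20], [0, 0, -5]])
B = np.array([[0.0], [5.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(np.round(num, 6))  # coefficients of 30s + 30, i.e., 30(s + 1)
print(np.round(den, 6))  # coefficients of s^3 + 10s^2 + 31s + 30 = (s+5)(s+2)(s+3)
```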

A second form of the model that we need to consider uses the decoupled response modes. The overall input-output transfer function of the block diagram system shown in Figure 3.15 is

\[\frac{Y(s)}{R(s)} = T(s) = \frac{30(s + 1)}{(s + 5)(s + 2)(s + 3)} = \frac{q(s)}{\left( s - s_{1} \right)\left( s - s_{2} \right)\left( s - s_{3} \right)}, \]

and the transient response has three modes dictated by \(s_{1},s_{2}\), and \(s_{3}\). These modes are indicated by the partial fraction expansion as

\[\frac{Y(s)}{R(s)} = T(s) = \frac{k_{1}}{s + 5} + \frac{k_{2}}{s + 2} + \frac{k_{3}}{s + 3}, \]

where we find that \(k_{1} = - 20,k_{2} = - 10\), and \(k_{3} = 30\). The decoupled state variable model representing Equation (3.61) is shown in Figure 3.17. The state variable matrix differential equation is

\[\overset{˙}{\mathbf{x}}(t) = \begin{bmatrix} - 5 & 0 & 0 \\ 0 & - 2 & 0 \\ 0 & 0 & - 3 \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}r(t)\]

and

\[y(t) = \begin{bmatrix} - 20 & - 10 & 30 \end{bmatrix}\mathbf{x}(t).\]
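The residues \(k_{1} = -20\), \(k_{2} = -10\), and \(k_{3} = 30\) can be confirmed with a symbolic partial fraction expansion; a brief SymPy sketch (illustrative only):

```python
import sympy as sp

s = sp.symbols('s')
T = 30 * (s + 1) / ((s + 5) * (s + 2) * (s + 3))

# Partial fraction expansion; the residues are the k_i of Equation (3.61)
print(sp.apart(T, s))
# -> 30/(s + 3) - 10/(s + 2) - 20/(s + 5)  (term order may differ)
```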

Note that we chose \(x_{1}(t)\) as the state variable associated with \(s_{1} = -5\), \(x_{2}(t)\) associated with \(s_{2} = -2\), and \(x_{3}(t)\) associated with \(s_{3} = -3\), as indicated in Figure 3.17. This choice of state variables is arbitrary; for example, \(x_{1}(t)\) could be chosen as associated with the factor \(s + 2\).

The decoupled form of the state differential matrix equation displays the distinct model poles \(s_{1}, s_{2},\ldots,s_{n}\), and this format is often called the diagonal canonical form. A system can always be written in diagonal form if it possesses distinct poles; otherwise, it can only be written in a block diagonal form, known as the Jordan canonical form [24].

FIGURE 3.17 (a) The decoupled state variable flow graph model for the system shown in block diagram form in Figure 3.15. (b) The decoupled state variable block diagram model.
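To see the diagonal canonical form emerge numerically, the following illustrative Python sketch diagonalizes the physical state model of Figure 3.16 with an eigendecomposition. Because the three poles are distinct, the transformed system matrix is diagonal, and the product of the corresponding entries of the transformed input and output matrices reproduces the residues \(k_{1}\), \(k_{2}\), and \(k_{3}\).

```python
import numpy as np

# Physical state variable model from Figure 3.16
A = np.array([[-3.0, 6.0, 0.0],
              [0.0, -2.0, -20.0],
              [0.0, 0.0, -5.0]])
B = np.array([[0.0], [5.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Eigendecomposition A V = V diag(w). With x = V z the transformed model is
# dz/dt = diag(w) z + (V^-1 B) r,  y = (C V) z, which is diagonal because the
# eigenvalues -3, -2, -5 are distinct.
w, V = np.linalg.eig(A)
Bz = np.linalg.inv(V) @ B   # input matrix in diagonal coordinates
Cz = C @ V                  # output matrix in diagonal coordinates

# In a diagonal realization the residue at pole w_i is Cz_i * Bz_i; these
# reproduce k1 = -20 (at -5), k2 = -10 (at -2), and k3 = 30 (at -3).
# (The ordering of the poles follows np.linalg.eig and is not guaranteed.)
for pole, c, b in zip(w, Cz.ravel(), Bz.ravel()):
    print(f"pole {float(np.real(pole)):+.1f}: residue {float(np.real(c * b)):+.1f}")
```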

123. EXAMPLE 3.3 Inverted pendulum control

The problem of balancing a broomstick on a person's hand is illustrated in Figure 3.18. The only equilibrium condition is \(\theta(t) = 0\) and \(d\theta(t)/dt = 0\). The problem of balancing a broomstick on one's hand is not unlike the problem of controlling the attitude of a missile during the initial stages of launch. This problem is the classic and intriguing problem of the inverted pendulum mounted on a cart, as shown in Figure 3.19. The cart must be moved so that mass \(m\) is always in an upright position. The state variables must be expressed in terms of the angular rotation \(\theta(t)\) and the position of the cart \(y(t)\). The differential equations describing the motion of the system can be obtained by writing the sum of the forces in the horizontal direction and the sum of the moments about the pivot point \(\lbrack 2,3,10,23\rbrack\). We will assume that \(M \gg m\) and that the angle of rotation \(\theta(t)\) is small, so that the equations are linear. The sum of the forces in the horizontal direction is

\[M\overset{¨}{y}(t) + ml\overset{¨}{\theta}(t) - u(t) = 0, \]

where \(u(t)\) is the force on the cart, and \(l\) is the distance from the mass \(m\) to the pivot point. The sum of the torques about the pivot point is

\[ml\overset{¨}{y}(t) + ml^{2}\overset{¨}{\theta}(t) - mlg\theta(t) = 0. \]

The state variables for the two second-order equations are chosen as \(\left( x_{1}(t), x_{2}(t), x_{3}(t), x_{4}(t) \right) = \left( y(t), \overset{˙}{y}(t), \theta(t), \overset{˙}{\theta}(t) \right)\). Then Equations (3.63) and (3.64) are written in terms of the state variables as

\[M{\overset{˙}{x}}_{2}(t) + ml{\overset{˙}{x}}_{4}(t) - u(t) = 0 \]

and
