Feedback Control of Dynamic Systems

GLOBAL EDITION

1. Feedback Control of Dynamic Systems

EIGHTH EDITION

Franklin · Powell · Emami-Naeini

Table of Laplace Transforms

| Number | $F(s)$ | $f(t),\ t \geq 0$ |
| --- | --- | --- |
| 1 | $1$ | $\delta(t)$ |
| 2 | $\dfrac{1}{s}$ | $1(t)$ |
| 3 | $\dfrac{1}{s^{2}}$ | $t$ |
| 4 | $\dfrac{2!}{s^{3}}$ | $t^{2}$ |
| 5 | $\dfrac{3!}{s^{4}}$ | $t^{3}$ |
| 6 | $\dfrac{m!}{s^{m+1}}$ | $t^{m}$ |
| 7 | $\dfrac{1}{s+a}$ | $e^{-at}$ |
| 8 | $\dfrac{1}{(s+a)^{2}}$ | $te^{-at}$ |
| 9 | $\dfrac{1}{(s+a)^{3}}$ | $\dfrac{1}{2!}t^{2}e^{-at}$ |
| 10 | $\dfrac{1}{(s+a)^{m}}$ | $\dfrac{1}{(m-1)!}t^{m-1}e^{-at}$ |
| 11 | $\dfrac{a}{s(s+a)}$ | $1-e^{-at}$ |
| 12 | $\dfrac{a}{s^{2}(s+a)}$ | $\dfrac{1}{a}\left(at-1+e^{-at}\right)$ |
| 13 | $\dfrac{b-a}{(s+a)(s+b)}$ | $e^{-at}-e^{-bt}$ |
| 14 | $\dfrac{s}{(s+a)^{2}}$ | $(1-at)e^{-at}$ |
| 15 | $\dfrac{a^{2}}{s(s+a)^{2}}$ | $1-e^{-at}(1+at)$ |
| 16 | $\dfrac{(b-a)s}{(s+a)(s+b)}$ | $be^{-bt}-ae^{-at}$ |
| 17 | $\dfrac{a}{s^{2}+a^{2}}$ | $\sin at$ |
| 18 | $\dfrac{s}{s^{2}+a^{2}}$ | $\cos at$ |
| 19 | $\dfrac{s+a}{(s+a)^{2}+b^{2}}$ | $e^{-at}\cos bt$ |
| 20 | $\dfrac{b}{(s+a)^{2}+b^{2}}$ | $e^{-at}\sin bt$ |
| 21 | $\dfrac{a^{2}+b^{2}}{s\left[(s+a)^{2}+b^{2}\right]}$ | $1-e^{-at}\left(\cos bt+\dfrac{a}{b}\sin bt\right)$ |
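Individual entries of this table are easy to spot-check symbolically. A minimal sketch in Matlab, assuming the Symbolic Math Toolbox is available (any computer algebra system would serve equally well):

```matlab
% Spot-check two entries of the table with the Symbolic Math Toolbox.
syms s t a b positive
ilaplace(1/(s + a)^2)            % entry 8:  returns t*exp(-a*t)
ilaplace(b/((s + a)^2 + b^2))    % entry 20: returns exp(-a*t)*sin(b*t)
```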

2. Chronological History of Feedback Control

[Timeline figure, spanning roughly 1860 to the 2010s, with milestones including: stability analysis of the governor; Routh stability; feedback amplifier; Nyquist stability; frequency-response tools; root locus; sampled data systems; optimal filtering; dynamic programming; maximum principle; numerical optimization; inertial navigation; LQG design; aircraft stability augmentation; Apollo digital autopilot; microprocessor; autopilot; internal model control; computer-aided control design; high precision disk drive control; GPS; aircraft auto-landing; feedback control of automotive engines; farm tractor auto-steering via GPS; automotive stability augmentation systems; drones; driverless cars.]


3. Feedback Control of Dynamic Systems

Eighth Edition

Global Edition

4. Gene F. Franklin

Stanford University

J. David Powell

Stanford University

Abbas Emami-Naeini

SC Solutions, Inc.

Director, Portfolio Management: Engineering, Computer Science & Global Editions: Julian Partridge

Specialist, Higher Ed Portfolio Management: Norrin Dias

Portfolio Management Assistant: Emily Egan

Acquisitions Editor, Global Edition: Moasenla Jamir

Managing Content Producer: Scott Disanno

Content Producer: Carole Snyder

Senior Project Editor, Global Edition: K.K. Neelakantan

Web Developer: Steve Wright

Manager, Media Production, Global Edition: Vikram Kumar

Rights and Permissions Manager: Ben Ferrini

Manufacturing Buyer, Higher Ed, Lake Side Communications Inc (LSC): Maura Zaldivar-Garcia

Senior Manufacturing Controller, Global Edition: Kay Holman

Inventory Manager: Ann Lam

Product Marketing Manager: Yvonne Vannatta

Field Marketing Manager: Demetrius Hall

Marketing Assistant: Jon Bryant

Cover Designer: Lumina Datamatics, Inc.

Cover Art: Dima Zel/Shutterstock

Full-Service Project Manager: George Jacob and Philip Alexander, Integra Software Services Pvt. Ltd.

Credits and acknowledgments borrowed from other sources and reproduced, with permission, in this textbook appear on the appropriate page within the text.

Matlab® and Simulink® are registered trademarks of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA.

Pearson Education Limited

KAO Two

KAO Park

Harlow

CM17 9NA

United Kingdom

and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsonglobaleditions.com

© Pearson Education Limited, 2020

The rights of Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Authorized adaptation from the United States edition, entitled Feedback Control of Dynamic Systems, 8th Edition, ISBN 978-0-13-468571-7 by Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini, published by Pearson Education © 2019.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a license permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6-10 Kirby Street, London EC1N 8TS.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners. For information regarding permissions, request forms, and the appropriate contacts within the Pearson Education Global Rights and Permissions department, please visit www.pearsoned.com/permissions.

This eBook is a standalone product and may or may not include all assets that were part of the print version. It also does not provide access to other Pearson digital products like MyLab and Mastering. The publisher reserves the right to remove any material in this eBook at any time.

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

ISBN 10: 1-292-27452-2

ISBN 13: 978-1-292-27452-2

eBook ISBN 13: 978-1-292-27454-6

Typeset by Integra

To Valerie, Daisy, Annika, Davenport, Malahat, Sheila, Nima, and to the memory of Gene


5. Contents

Preface 15

1 An Overview and Brief History of Feedback Control 23

A Perspective on Feedback Control 23

Chapter Overview 24

1.1 A Simple Feedback System 25

1.2 A First Analysis of Feedback 28

1.3 Feedback System Fundamentals 32

1.4 A Brief History 33

1.5 An Overview of the Book 40

Summary 41

Review Questions 42

Problems 42

2 Dynamic Models 46

A Perspective on Dynamic Models 46

Chapter Overview 47

2.1 Dynamics of Mechanical Systems 47

2.1.1 Translational Motion 47

2.1.2 Rotational Motion 54

2.1.3 Combined Rotation and Translation 65

2.1.4 Complex Mechanical Systems (W)** 68

2.1.5 Distributed Parameter Systems 68

2.1.6 Summary: Developing Equations of Motion for Rigid Bodies 70

2.2 Models of Electric Circuits 71

2.3 Models of Electromechanical Systems 76

2.3.1 Loudspeakers 76

2.3.2 Motors 78

Δ 2.3.3 Gears 82

Δ 2.4 Heat and Fluid-Flow Models 83

2.4.1 Heat Flow 84

2.4.2 Incompressible Fluid Flow 88

2.5 Historical Perspective 95

Summary 98

Review Questions 98

Problems 99

**Sections marked with (W) indicate that additional material is located on the web at www.pearsonglobaleditions.com.
3.1 Review of Laplace Transforms 112
3.1.1 Response by Convolution 113
3.1.2 Transfer Functions and Frequency Response 118
3.1.3 The \(\mathcal{L}_{-}\)Laplace Transform 128
3.1.4 Properties of Laplace Transforms 130
3.1.5 Inverse Laplace Transform by Partial-Fraction Expansion ..... 132
3.1.6 The Final Value Theorem ..... 134
3.1.7 Using Laplace Transforms to Solve Differential Equations ..... 136
3.1.8 Poles and Zeros ..... 138
3.1.9 Linear System Analysis Using Matlab ..... 139
3.2 System Modeling Diagrams ..... 145
3.2.1 The Block Diagram ..... 145
3.2.2 Block-Diagram Reduction Using Matlab ..... 149
3.2.3 Mason's Rule and the Signal Flow Graph (W) ..... 150
3.3 Effect of Pole Locations ..... 150
3.4 Time-Domain Specifications ..... 159
3.4.1 Rise Time ..... 159
3.4.2 Overshoot and Peak Time ..... 160
3.4.3 Settling Time ..... 161
3.5 Effects of Zeros and Additional Poles ..... 164
3.6 Stability ..... 174
3.6.1 Bounded Input-Bounded Output Stability ..... 174
3.6.2 Stability of LTI Systems ..... 176
3.6.3 Routh's Stability Criterion ..... 177
Δ 3.7 Obtaining Models from Experimental Data: System Identification (W) ..... 184
Δ 3.8 Amplitude and Time Scaling (W) ..... 184
3.9 Historical Perspective ..... 184
Summary ..... 185
Review Questions ..... 187
Problems ..... 187
4 A First Analysis of Feedback ..... 208
A Perspective on the Analysis of Feedback ..... 208
Chapter Overview ..... 209
4.1 The Basic Equations of Control ..... 210
4.1.1 Stability ..... 211
4.1.2 Tracking ..... 212
4.1.3 Regulation ..... 213
4.1.4 Sensitivity ..... 214
4.2 Control of Steady-State Error to Polynomial Inputs: System Type 216
4.2.1 System Type for Tracking ..... 217
4.2.2 System Type for Regulation and Disturbance Rejection ..... 222
4.3 The Three-Term Controller: PID Control ..... 224
4.3.1 Proportional Control (P) ..... 224
4.3.2 Integral Control (I) ..... 226
4.3.3 Derivative Control (D) ..... 229
4.3.4 Proportional Plus Integral Control (PI) ..... 229
4.3.5 PID Control ..... 233
4.3.6 Ziegler-Nichols Tuning of the PID Controller ..... 238
4.4 Feedforward Control by Plant Model Inversion ..... 244
Δ 4.5 Introduction to Digital Control (W) ..... 246
Δ 4.6 Sensitivity of Time Response to Parameter Change (W) ..... 247
4.7 Historical Perspective ..... 247
Summary ..... 249
Review Questions ..... 250
Problems ..... 251
5 The Root-Locus Design Method 270
A Perspective on the Root-Locus Design Method ..... 270
Chapter Overview ..... 271
5.1 Root Locus of a Basic Feedback System ..... 271
5.2 Guidelines for Determining a Root Locus ..... 276
5.2.1 Rules for Determining a Positive (180°) Root Locus ..... 278
5.2.2 Summary of the Rules for Determining a Root Locus ..... 284
5.2.3 Selecting the Parameter Value ..... 285
5.3 Selected Illustrative Root Loci ..... 288
5.4 Design Using Dynamic Compensation ..... 301
5.4.1 Design Using Lead Compensation ..... 302
5.4.2 Design Using Lag Compensation ..... 307
5.4.3 Design Using Notch Compensation ..... 310
Δ 5.4.4 Analog and Digital Implementations (W) ..... 312
5.5 Design Examples Using the Root Locus ..... 312
5.6 Extensions of the Root-Locus Method ..... 323
5.6.1 Rules for Plotting a Negative (0°) Root Locus ..... 323
Δ 5.6.2 Successive Loop Closure ..... 326
Δ 5.6.3 Time Delay (W) ..... 331
5.7 Historical Perspective ..... 331
Summary ..... 333
Review Questions ..... 335
Problems ..... 335
6 The Frequency-Response Design Method 353
A Perspective on the Frequency-Response Design Method ..... 353
Chapter Overview ..... 354
6.1 Frequency Response ..... 354
6.1.1 Bode Plot Techniques ..... 362
6.1.2 Steady-State Errors ..... 374
6.2 Neutral Stability ..... 376
6.3 The Nyquist Stability Criterion ..... 379
6.3.1 The Argument Principle ..... 379
6.3.2 Application of The Argument Principle to Control Design ..... 380
6.4 Stability Margins ..... 393
6.5 Bode's Gain-Phase Relationship ..... 402
6.6 Closed-Loop Frequency Response ..... 407
6.7 Compensation ..... 408
6.7.1 PD Compensation ..... 409
6.7.2 Lead Compensation (W) ..... 410
6.7.3 PI Compensation ..... 420
6.7.4 Lag Compensation ..... 420
6.7.5 PID Compensation ..... 426
6.7.6 Design Considerations ..... 433
Δ 6.7.7 Specifications in Terms of the Sensitivity Function ..... 435
Δ 6.7.8 Limitations on Design in Terms of the Sensitivity Function ..... 440
Δ 6.8 Time Delay ..... 443
6.8.1 Time Delay via the Nyquist Diagram (W) ..... 445
Δ 6.9 Alternative Presentation of Data ..... 445
6.9.1 Nichols Chart ..... 445
6.9.2 The Inverse Nyquist Diagram (W) ..... 450
6.10 Historical Perspective ..... 450
Summary ..... 451
Review Questions ..... 453
Problems ..... 454
7 State-Space Design ..... 479
A Perspective on State-Space Design ..... 479
Chapter Overview ..... 480
7.1 Advantages of State-Space ..... 480
7.2 System Description in State-Space ..... 482
7.3 Block Diagrams and State-Space ..... 488
7.4 Analysis of the State Equations ..... 491
7.4.1 Block Diagrams and Canonical Forms ..... 491
7.4.2 Dynamic Response from the State Equations ..... 503
7.5 Control-Law Design for Full-State Feedback ..... 508
7.5.1 Finding the Control Law ..... 509
7.5.2 Introducing the Reference Input with Full-State Feedback ..... 518
7.6 Selection of Pole Locations for Good Design ..... 522
7.6.1 Dominant Second-Order Poles ..... 522
7.6.2 Symmetric Root Locus (SRL) ..... 524
7.6.3 Comments on the Methods ..... 533
7.7 Estimator Design ..... 534
7.7.1 Full-Order Estimators ..... 534
7.7.2 Reduced-Order Estimators ..... 540
7.7.3 Estimator Pole Selection ..... 544
7.8 Compensator Design: Combined Control Law and Estimator (W) ..... 547
7.9 Introduction of the Reference Input with the Estimator (W) ..... 559
7.9.1 General Structure for the Reference Input ..... 561
7.9.2 Selecting the Gain ..... 570
7.10 Integral Control and Robust Tracking ..... 571
7.10.1 Integral Control ..... 571
Δ 7.10.2 Robust Tracking Control: The Error-Space Approach ..... 573
Δ 7.10.3 Model-Following Design ..... 585
Δ 7.10.4 The Extended Estimator ..... 589
Δ 7.11 Loop Transfer Recovery ..... 592
Δ 7.12 Direct Design with Rational Transfer Functions ..... 598
Δ 7.13 Design for Systems with Pure Time Delay ..... 602
7.14 Solution of State Equations (W) ..... 605
7.15 Historical Perspective ..... 607
Summary ..... 608
Review Questions ..... 611
Problems ..... 612
8 Digital Control ..... 636
A Perspective on Digital Control ..... 636
Chapter Overview ..... 636
8.1 Digitization ..... 637
8.2 Dynamic Analysis of Discrete Systems ..... 640
8.2.1 \(z\)-Transform ..... 640
8.2.2 \(z\)-Transform Inversion ..... 641
8.2.3 Relationship Between \(s\) and \(z\) ..... 643
8.2.4 Final Value Theorem ..... 645
8.3 Design Using Discrete Equivalents ..... 647
8.3.1 Tustin's Method ..... 647
8.3.2 Zero-Order Hold (ZOH) Method ..... 651
8.3.3 Matched Pole-Zero (MPZ) Method ..... 653
8.3.4 Modified Matched Pole-Zero (MMPZ) Method ..... 657
8.3.5 Comparison of Digital Approximation Methods ..... 658
8.3.6 Applicability Limits of the Discrete Equivalent Design Method ..... 659
8.4 Hardware Characteristics ..... 659
8.4.1 Analog-to-Digital (A/D) Converters ..... 660
8.4.2 Digital-to-Analog Converters ..... 660
8.4.3 Anti-Alias Prefilters ..... 661
8.4.4 The Computer ..... 662
8.5 Sample-Rate Selection ..... 663
8.5.1 Tracking Effectiveness ..... 664
8.5.2 Disturbance Rejection ..... 665
8.5.3 Effect of Anti-Alias Prefilter ..... 665
8.5.4 Asynchronous Sampling ..... 666
Δ 8.6 Discrete Design ..... 666
8.6.1 Analysis Tools ..... 667
8.6.2 Feedback Properties ..... 668
8.6.3 Discrete Design Example ..... 670
8.6.4 Discrete Analysis of Designs ..... 672
8.7 Discrete State-Space Design Methods (W) ..... 674
8.8 Historical Perspective ..... 674
Summary ..... 675
Review Questions ..... 677
Problems ..... 677
9 Nonlinear Systems ..... 683
A Perspective on Nonlinear Systems ..... 683
Chapter Overview ..... 684
9.1 Introduction and Motivation: Why Study Nonlinear Systems? ..... 685
9.2 Analysis by Linearization ..... 687
9.2.1 Linearization by Small-Signal Analysis ..... 687
9.2.2 Linearization by Nonlinear Feedback ..... 692
9.2.3 Linearization by Inverse Nonlinearity ..... 693
9.3 Equivalent Gain Analysis Using the Root Locus ..... 694
9.3.1 Integrator Antiwindup ..... 701
9.4 Equivalent Gain Analysis Using Frequency Response: Describing Functions 706
9.4.1 Stability Analysis Using Describing Functions ..... 712
Δ 9.5 Analysis and Design Based on Stability ..... 716
9.5.1 The Phase Plane ..... 717
9.5.2 Lyapunov Stability Analysis ..... 723
9.5.3 The Circle Criterion ..... 731
9.6 Historical Perspective ..... 737
Summary ..... 738
Review Questions ..... 739
Problems ..... 739
10 Control System Design: Principles and Case Studies ..... 751
A Perspective on Design Principles ..... 751
Chapter Overview ..... 751
10.1 An Outline of Control Systems Design ..... 753
10.2 Design of a Satellite's Attitude Control ..... 759
10.3 Lateral and Longitudinal Control of a Boeing 747 ..... 777
10.3.1 Yaw Damper ..... 782
10.3.2 Altitude-Hold Autopilot ..... 789
10.4 Control of the Fuel-Air Ratio in an Automotive Engine 795
10.5 Control of a Quadrotor Drone ..... 803
10.6 Control of RTP Systems in Semiconductor Wafer Manufacturing ..... 819
10.7 Chemotaxis, or How E. Coli Swims Away from Trouble ..... 833
10.8 Historical Perspective ..... 843
Summary ..... 845
Review Questions ..... 847
Problems ..... 847
Appendix A Laplace Transforms ..... 865
A.1 The \(\mathcal{L}_{-}\)Laplace Transform ..... 865
A.1.1 Properties of Laplace Transforms ..... 866
A.1.2 Inverse Laplace Transform by Partial-Fraction Expansion ..... 874
A.1.3 The Initial Value Theorem ..... 877
A.1.4 Final Value Theorem ..... 878
Appendix B Solutions to the Review Questions ..... 880
Appendix C Matlab Commands ..... 897
Bibliography ..... 903
Index ..... 912
List of Appendices on the web at www. pearsonglobaleditions.com
Appendix WA: A Review of Complex Variables
Appendix WB: Summary of Matrix Theory
Appendix WC: Controllability and Observability
Appendix WD: Ackermann's Formula for Pole Placement
Appendix W2.1.4: Complex Mechanical Systems
Appendix W3.2.3: Mason's Rule and the Signal-Flow Graph
Appendix W3.6.3.1: Routh Special Cases
Appendix W3.7: System Identification
Appendix W3.8: Amplitude and Time Scaling
Appendix W4.1.4.1: The Filtered Case
Appendix W4.2.2.1: Truxal's Formula for the Error Constants
Appendix W4.5: Introduction to Digital Control
Appendix W4.6: Sensitivity of Time Response to Parameter Change
Appendix W5.4.4: Analog and Digital Implementations
Appendix W5.6.3: Root Locus with Time Delay
Appendix W6.7.2: Digital Implementation of Example 6.15
Appendix W6.8.1: Time Delay via the Nyquist Diagram
Appendix W6.9.2: The Inverse Nyquist Diagram
Appendix W7.8: Digital Implementation of Example 7.31
Appendix W7.9: Digital Implementation of Example 7.33
Appendix W7.14: Solution of State Equations
Appendix W8.7: Discrete State-Space Design Methods

6. Preface

In this Eighth Edition we again present a text in support of a first course in control and have retained the best features of our earlier editions. For this edition, we have responded to a survey of users by adding some new material (for example, drone dynamics and control) and deleting other little-used material from the book. We have also updated the text throughout so that it uses the improved features of MATLAB®. Drones have been discussed extensively in the controls literature as well as in the popular press. They are being used in mining, construction, aerial photography, search and rescue, the movie industry, package delivery, mapping, surveying, farming, animal research, hurricane hunting, and defense. They have great potential for many tasks and could speed up and lessen the cost of these activities. Since feedback control is a necessary component of all drones, we develop their equations of motion in Chapter 2 and follow that with control design examples in Chapters 5, 6, 7, and 10. The figure below symbolizes the widespread interest in this exciting new field.

Source: Edward Koren/The New Yorker © Condé Nast

The basic structure of the book is unchanged and we continue to combine analysis with design using the three approaches of the root locus, frequency response, and state-variable equations. The text continues to include many carefully worked out examples to illustrate the material. As before, we provide a set of review questions at the end of each chapter with answers in the back of the book to assist the students in verifying that they have learned the material.

In the three central chapters on design methods we continue to expect the students to learn how to perform the very basic calculations by hand and make a rough sketch of a root locus or Bode plot as a sanity check on the computer results and as an aid to design. However, we introduce the use of Matlab early on in recognition of the universal use of software tools in control analysis and design. As before, we have prepared a collection of all the Matlab files (both "m" files and Simulink® "slx" files) used to produce the figures in the book. These are available along with the advanced material described above at our website www.pearsonglobaleditions.com.

7. New to this Edition

We feel that this Eighth Edition presents the material with good pedagogical support, provides strong motivation for the study of control, and represents a solid foundation for meeting the educational challenges. We introduce the study of feedback control, both as a specialty in itself and as support for many other fields.

A more detailed list of the changes is:

  • Deleted the disk drive and tape drive examples from Chapters 2, 7, and 10

  • Added drone examples and/or problems in Chapters 2, 5, 6, 7, and 10

  • Added a thermal system control example to Chapters 2 and 4

  • Added a section on anti-windup for integral control in Chapter 9

  • Added Cramer's Rule to Chapter 2 and Appendix WB

  • Updated Matlab commands throughout the book and in Appendix C

  • Updated the section on PID tuning in Chapter 4

  • Updated the engine control and chemotaxis case studies in Chapter 10

  • Over 60 of the problems in this edition are either new or revised from the 7th edition

8. Addressing the Educational Challenges

Some of the educational challenges facing students of feedback control are long-standing; others have emerged in recent years. Some of the challenges remain for students across their entire engineering education; others are unique to this relatively sophisticated course. Whether they
are old or new, general or particular, the educational challenges we perceived were critical to the evolution of this text. Here, we will state several educational challenges and describe our approaches to each of them.

  • CHALLENGE Students must master design as well as analysis techniques.

Design is central to all of engineering and especially so to control systems. Students find that design issues, with their corresponding opportunities to tackle practical applications, are particularly motivating. But students also find design problems difficult because design problem statements are usually poorly posed and lack unique solutions. Because of both its inherent importance and its motivational effect on students, design is emphasized throughout this text so confidence in solving design problems is developed from the start.

The emphasis on design begins in Chapter 4 following the development of modeling and dynamic response. The basic idea of feedback is introduced first, showing its influence on disturbance rejection, tracking accuracy, and robustness to parameter changes. The design orientation continues with uniform treatments of the root locus, frequency response, and state variable feedback techniques. All the treatments are aimed at providing the knowledge necessary to find a good feedback control design with no more complex mathematical development than is essential to clear understanding.

Throughout the text, examples are used to compare and contrast the design techniques afforded by the different design methods and, in the capstone case studies of Chapter 10, complex real-world design problems are attacked using all the methods in a unified way.

  • CHALLENGE New ideas continue to be introduced into control.

Control is an active field of research and hence there is a steady influx of new concepts, ideas, and techniques. In time, some of these elements develop to the point where they join the list of things every control engineer must know. This text is devoted to supporting students equally in their need to grasp both traditional and more modern topics.

In each of our editions, we have tried to give equal importance to root locus, frequency response, and state-variable methods for design. In this edition, we continue to emphasize solid mastery of the underlying techniques, coupled with computer-based methods for detailed calculation. We also provide an early introduction to data sampling and discrete controllers in recognition of the major role played by digital controllers in our field. While this material can be skipped to save time without harm to the flow of the text, we feel that it is very important for students to understand that computer control is widely used and that the most basic techniques of computer control are easily mastered.

  • CHALLENGE Students need to manage a great deal of information.

The vast array of systems to which feedback control is applied and the growing variety of techniques available for the solution of control problems means that today's student of feedback control must learn many new ideas. How do students keep their perspective as they plow through lengthy and complex textual passages? How do they identify highlights and draw appropriate conclusions? How do they review for exams? Helping students with these tasks was a criterion for the Fourth, Fifth, Sixth, and Seventh Editions and continues to be addressed in this Eighth Edition. We outline these features below.

9. FEATURE

  1. Chapter openers offer perspective and overview. They place the specific chapter topic in the context of the discipline as a whole, and they briefly overview the chapter sections.

  2. Margin notes help students scan for chapter highlights. They point to important definitions, equations, and concepts.

  3. Shaded highlights identify key concepts within the running text. They also function to summarize important design procedures.

  4. Bulleted chapter summaries help with student review and prioritization. These summaries briefly reiterate the key concepts and conclusions of the chapter.

  5. Synopsis of design aids. Relationships used in design and throughout the book are collected inside the back cover for easy reference.

  6. The color blue is used (1) to highlight useful pedagogical features, (2) to highlight components under particular scrutiny within block diagrams, (3) to distinguish curves on graphs, and (4) to lend a more realistic look to figures of physical systems.

  7. Review questions at the end of each chapter, with solutions in the back, guide the student in self-study.

  8. Historical perspectives at the end of each chapter provide some background and color on how or why the material in that particular chapter evolved.

  • CHALLENGE Students of feedback control come from a wide range of disciplines.

Feedback control is an interdisciplinary field in that control is applied to systems in every conceivable area of engineering. Consequently, some schools have separate introductory courses for control within the standard disciplines and some, such as Stanford, have a single set of courses taken by students from many disciplines. However, to restrict the examples to one field is to miss much of the range and power of feedback, while to cover the whole range of applications is overwhelming. In this book, we develop the interdisciplinary nature of the field and
provide review material for several of the most common technologies so that students from many disciplines will be comfortable with the presentation. For Electrical Engineering students who typically have a good background in transform analysis, we include in Chapter 2 an introduction to writing equations of motion for mechanical mechanisms. For mechanical engineers, we include in Chapter 3 a review of the Laplace transform and dynamic response as needed in control. In addition, we introduce other technologies briefly and, from time to time, we present the equations of motion of a physical system without derivation but with enough physical description to be understood from a response point of view. Examples of some of the physical systems represented in the text include a quadrotor drone, a satellite tracking system, the fuel-air ratio in an automobile engine, and an airplane automatic pilot system.

10. Outline of the Book

The contents of the printed book are organized into ten chapters and three appendices. Optional sections of advanced or enrichment material marked with a triangle \((\Delta)\) are included at the end of some chapters. Examples and problems based on this material are also marked with a triangle \((\Delta)\). There are also four full appendices on the website plus numerous appendices that supplement the material in most of the chapters. The appendices in the printed book include Laplace transform tables, answers to the end-of-chapter review questions, and a list of Matlab commands. The appendices on the website include a review of complex variables, a review of matrix theory, some important results related to state-space design, and optional material supporting or extending several of the chapters.

In Chapter 1, the essential ideas of feedback and some of the key design issues are introduced. This chapter also contains a brief history of control, from the ancient beginnings of process control to flight control and electronic feedback amplifiers. It is hoped that this brief history will give a context for the field, introduce some of the key people who contributed to its development, and provide motivation to the student for the studies to come.

Chapter 2 is a short presentation of dynamic modeling and includes mechanical, electrical, electromechanical, fluid, and thermodynamic devices. This material can be omitted, used as the basis of review homework to smooth out the usual nonuniform preparation of students, or covered in-depth depending on the needs of the students.

Chapter 3 covers dynamic response as used in control. Again, much of this material may have been covered previously, especially by electrical engineering students. For many students, the correlation between pole locations and transient response and the effects of extra zeros and poles on dynamic response represent new material. Stability of dynamic
systems is also introduced in this chapter. This material needs to be covered carefully.

Chapter 4 presents the basic equations and transfer functions of feedback along with the definitions of the sensitivity function. With these tools, open-loop and closed-loop control are compared with respect to disturbance rejection, tracking accuracy, and sensitivity to model errors. Classification of systems according to their ability to track polynomial reference signals or to reject polynomial disturbances is described with the concept of system type. Finally, the classical proportional, integral, and derivative (PID) control structure is introduced and the influence of the controller parameters on a system's characteristic equation is explored along with PID tuning methods.

Following the overview of feedback in Chapter 4, the core of the book presents the design methods based on root locus, frequency response, and state-variable feedback in Chapters 5, 6, and 7, respectively.

Chapter 8 develops the tools needed to design feedback control for implementation in a digital computer. However, for a complete treatment of feedback control using digital computers, the reader is referred to the companion text, Digital Control of Dynamic Systems, by Franklin, Powell, and Workman; Ellis-Kagle Press, 1998.

In Chapter 9, the nonlinear material includes techniques for the linearization of equations of motion, analysis of zero memory nonlinearity as a variable gain, frequency response as a describing function, the phase plane, Lyapunov stability theory, and the circle stability criterion.

In Chapter 10, the three primary approaches are integrated in several case studies, and a framework for design is described that includes a touch of the real-world context of practical control design.

11. Course Configurations

The material in this text can be covered flexibly. Most first-course students in controls will have some dynamics and Laplace transforms. Therefore, Chapter 2 and most of Chapter 3 would be a review for those students. In a ten-week quarter, it is possible to review Chapter 3, and cover all of Chapters 1, 4, 5, and 6. Most optional sections should be omitted. In the second quarter, Chapters 7 and 9 can be covered comfortably, including the optional sections. Alternatively, some optional sections could be omitted and selected portions of Chapter 8 included. A semester course should comfortably accommodate Chapters 1-7, including the review materials of Chapters 2 and 3, if needed. If time remains after this core coverage, some introduction of digital control from Chapter 8, selected nonlinear issues from Chapter 9, and some of the case studies from Chapter 10 may be added.

The entire book can also be used for a three-quarter sequence of courses consisting of modeling and dynamic response (Chapters 2
and 3), classical control (Chapters 4-6), and modern control (Chapters 7-10).

Two basic 10-week courses are offered at Stanford and are taken by seniors and first-year graduate students who have not had a course in control, mostly in the departments of Aeronautics and Astronautics, Mechanical Engineering, and Electrical Engineering. The first course reviews Chapters 2 and 3 and covers Chapters 4-6. The more advanced course is intended for graduate students and reviews Chapters 4-6 and covers Chapters 7-10. This sequence complements a graduate course in linear systems and is the prerequisite to courses in digital control, nonlinear control, optimal control, flight control, and smart product design. Some of the subsequent courses include extensive laboratory experiments. Prerequisites for the course sequence include dynamics or circuit analysis and Laplace transforms.

12. Prerequisites to This Feedback Control Course

This book is for a first course at the senior level for all engineering majors. For the core topics in Chapters 4-7, prerequisite understanding of modeling and dynamic response is necessary. Many students will come into the course with sufficient background in those concepts from previous courses in physics, circuits, and dynamic response. For those needing review, Chapters 2 and 3 should fill in the gaps.

An elementary understanding of matrix algebra is necessary to understand the state-space material. While all students will have much of this in prerequisite math courses, a review of the basic relations is given in online Appendix WB and a brief treatment of particular material needed in control is given at the start of Chapter 7. The emphasis is on the relations between linear dynamic systems and linear algebra.

13. Supplements

The website www.pearsonglobaleditions.com includes the dot-m and dot-slx files used to generate all the Matlab figures in the book, and these may be copied and distributed to the students as desired. The website also contains some more advanced material and appendices which are outlined in the Table of Contents. A Solutions Manual with complete solutions to all homework problems is available to instructors only.

14. Acknowledgments

Finally, we wish to acknowledge our great debt to all those who have contributed to the development of feedback control into the exciting field it is today and specifically to the considerable help and education we have received from our students and our colleagues. In particular, we have benefited in this effort by many discussions with the following
who taught introductory control at Stanford: A. E. Bryson, Jr., R. H. Cannon, Jr., D. B. DeBra, S. Rock, S. Boyd, C. Tomlin, P. Enge, A. Okamura, and C. Gerdes. Other colleagues who have helped us include D. Fraser, N. C. Emami, B. Silver, M. Dorfman, K. Rudie, L. Pao, F. Khorrami, K. Lorell, M. Tischler, D. de Roover, R. Patrick, M. Berrios, J. K. Lee, J. L. Ebert, I. Kroo, K. Leung, and M. Schwager. Special thanks go to the many students who have provided almost all the solutions to the problems in the book.

We especially want to express our great appreciation for the contributions to the book by Gene Franklin. Gene was a mentor, teacher, advisor, and good friend to us both. We had many meetings as we collaborated on earlier editions of the book over the last 28 years of his life, and every single one of those meetings was friendly and enjoyable as we meshed our views on how to present the material. We learned control along with humor from Gene in grad school classes, and we benefited from his mentoring: in one case as a new assistant professor, and in the other as a Ph.D. advisee. Collectively, we collaborated on research, created new courses and laboratories, and wrote two textbooks over a period of 40 years. Gene always had a smile with a twinkle in his eye, and was a pleasure to work with; he was a true gentleman.

J.D.P.

A.E.-N.

Stanford, California

15. Acknowledgments for the Global Edition

Pearson would like to thank and acknowledge Benjamin Chong, University of Leeds, Mehmet Canevi, Istanbul Technical University, and Turan Söylemez, Istanbul Technical University, for contributing to the Global Edition, and Murat Dogruel, Marmara University, Ivo Grondman, Quang Ha, University of Technology Sydney, Philippe Mullhaupt, Ecole Polytechnique Fédérale de Lausanne, and Rahul Sharma, The University of Queensland for reviewing the Global Edition. We would also like to thank Benjamin Chong, Li Li, University of Technology Sydney, Rahul Sharma, Turan Söylemez, and Mark Vanpaemel, Universiteit Antwerpen, for their valuable feedback on the Global Edition.

16. An Overview and Brief History of Feedback Control

17. A Perspective on Feedback Control

Feedback control of dynamic systems is a very old concept with many characteristics that have evolved over time. The central idea is that a dynamic system's output can be measured and fed back to a controller of some kind, then used to affect the system. There are several variations on this theme.

A system that involves a person controlling a machine, as in driving an automobile, is called manual control. A system that involves machines only, as when room temperature can be set by a thermostat, is called automatic control. Systems designed to hold an output steady against unknown disturbances are called regulators, while systems designed to track a reference signal are called tracking or servo systems. Control systems are also classified according to the information used to compute the controlling action. If the controller does not use a measure of the system output being controlled in computing the control action to take, the system is called open-loop control. If the controlled output signal is measured and fed back for use in the control computation, the system is called closed-loop or feedback control. There are many other important properties of control systems in addition to these most basic characteristics. For example, we will mainly consider feedback of current measurements
as opposed to predictions of the future; however, a very familiar example illustrates the limitation imposed by that assumption. When driving a car, the use of simple feedback corresponds to driving in a thick fog where one can only see the road immediately in front of the car and is unable to see the future required position! Looking at the road ahead is a form of predictive control, and this information, which has obvious advantages, would always be used where it is available. In most automatic control situations studied in this book, observation of the future track or disturbance is not possible. In any case, the control designer should study the process to see if any information could anticipate either a track to be followed or a disturbance to be rejected. If such a possibility is feasible, the control designer should use it to feed forward an early warning to the control system. An example of this is in the control of steam pressure in the boiler of an electric power generation plant. The electricity demand cycle over a day is well known; therefore, when it is known that there will soon be an increased need for electrical power, that information can be fed forward to the boiler controller in anticipation of a soon-to-be-demanded increase in steam flow.

The applications of feedback control have never been more exciting than they are today. Feedback control is an essential element in aircraft of all types: most manned aircraft, and all unmanned aircraft from large military aircraft to small drones. The FAA has predicted that the number of drones registered in the U.S. will reach 7 million by 2020! Automatic landing and collision avoidance systems in airliners are now being used routinely, and the use of satellite navigation in future designs promises a revolution in our ability to navigate aircraft in an ever more crowded airspace. The use of feedback control in driverless cars is an essential element of their success. They are now under extensive development, and predictions have been made that driverless cars will ultimately reduce the number of cars on the road by a very large percentage. The use of feedback control in surgical robotic systems is also emerging. Control is essential to the operation of systems from cell phones to jumbo jets and from washing machines to oil refineries as large as a small city. The list goes on and on. In fact, many engineers refer to control as a hidden technology because of its essential importance to so many devices and systems while being mainly out of sight. The future will no doubt see engineers create even more imaginative applications of feedback control.

18. Chapter Overview

In this chapter, we begin our exploration of feedback control using a simple familiar example: a household furnace controlled by a thermostat. The generic components of a control system are identified within the context of this example. In another example in Section 1.2-an automobile cruise control-we will develop the
elementary static equations and assign numerical values to elements of the system model in order to compare the performance of open-loop control to that of feedback control when dynamics are ignored. Section 1.3 then introduces the key elements in control system design. In order to provide a context for our studies, and to give you a glimpse of how the field has evolved, Section 1.4 provides a brief history of control theory and design. In addition, later chapters have brief sections of additional historical notes on the topics covered there. Finally, Section 1.5 provides a brief overview of the contents and organization of the entire book.

18.1. A Simple Feedback System

In feedback systems, the variable being controlled - such as temperature or speed - is measured by a sensor and the measured information is fed back to the controller to influence the controlled variable. The principle is readily illustrated by a very common system, the household furnace controlled by a thermostat. The components of this system and their interconnections are shown in Fig. 1.1. Such an illustration identifies the major parts of the system and shows the directions of information flow from one component to another.

We can easily analyze the operation of this system qualitatively from the graph. Suppose both the temperature in the room where the thermostat is located and the outside temperature are significantly below the reference temperature (also called the setpoint) when power is applied. The thermostat will be on and the control logic will open the furnace gas valve and light the fire box. This will cause heat \(Q_{in}\) to be supplied to the house at a rate that will be significantly larger than the heat loss \(Q_{out}\). As a result, the room temperature will rise until it exceeds the thermostat reference setting by a small amount. At this time, the furnace will be turned off and the room temperature will start to fall toward the outside value. When it falls a small amount below the setpoint,\(^{1}\) the thermostat will come on again and the cycle will repeat. Typical plots of room temperature along with the furnace cycles of on and off are shown in Fig. 1.1. The outside temperature remains at \(50^{\circ}F\) and the thermostat is initially set at \(55^{\circ}F\). At 6 a.m., the thermostat is stepped to \(65^{\circ}F\); the furnace brings the room temperature to that level and cycles it around that value thereafter. Notice the house is well insulated, so the fall of temperature with the furnace off is significantly slower than the rise with the furnace on. From this example, we can identify the generic components of the elementary feedback control system, as shown in Fig. 1.2.
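The on-off cycling just described is easy to reproduce numerically. The following is a minimal simulation sketch of a crude house-and-furnace model under a thermostat with a small hysteresis band; the heat-loss and furnace rates and the \(\pm 0.5^{\circ}F\) band are illustrative assumptions, not parameters taken from the figure:

```matlab
% Minimal on-off (bang-bang) thermostat simulation; thermal constants
% and the 0.5 F hysteresis band are assumed for illustration only.
dt = 0.01;  t = 0:dt:12;                  % time in hours
Tout  = 50;                               % outside temperature (F)
Tset  = 55*(t < 6) + 65*(t >= 6);         % setpoint stepped at 6 a.m.
kloss = 0.2;  kfurn = 8;                  % assumed loss/furnace rates
T = zeros(size(t));  T(1) = 55;  on = false;
for i = 1:numel(t)-1
    if T(i) < Tset(i) - 0.5, on = true;  end   % light the furnace
    if T(i) > Tset(i) + 0.5, on = false; end   % shut it off
    T(i+1) = T(i) + dt*(kfurn*on - kloss*(T(i) - Tout));
end
plot(t, T, t, Tset, '--'); xlabel('time (hr)'); ylabel('temperature (F)');
```

Because this model house cools more slowly than the furnace heats it, the resulting sawtooth is asymmetric, just as in the plot of Fig. 1.1(b).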

The central component of this feedback system is the process whose output is to be controlled. In our example the process would be the house whose output is the room temperature and the disturbance to

Figure 1.1

Feedback control: (a) component block diagram of a room temperature control system; (b) plot of room temperature and furnace action

the process is the flow of heat from the house, \(Q_{\text{out}\text{~}}\), due to conduction through the walls and roof to the lower outside temperature. (The outward flow of heat also depends on other factors such as wind, open doors, and so on.) The design of the process can obviously have a major impact on the effectiveness of the controls. The temperature of a well-insulated house with thermopane windows is clearly easier to control than otherwise. Similarly, the design of aircraft with control in mind makes a world of difference to the final performance. In every case, the earlier the concepts of control are introduced into the process design, the better. The actuator is the device that can influence the controlled variable of the process. In our case, the actuator is a gas furnace. Actually, the furnace usually has a pilot light or striking mechanism, a gas valve, and a blower fan, which turns on or off depending on the air temperature in the furnace. These details illustrate the fact that many feedback systems contain components that themselves

Figure 1.2

Component block diagram of an elementary feedback control

form other feedback systems.\(^{2}\) The central issue with the actuator is its ability to move the process output with adequate speed and range. The furnace must produce more heat than the house loses on the worst day, and must distribute it quickly if the house temperature is to be kept in a narrow range. Power, speed, and reliability are usually more important than accuracy. Generally, the process and the actuator are intimately connected, and the control design centers on finding a suitable input or control signal to send to the actuator. The combination of process and actuator is called the plant, and the component that actually computes the desired control signal is the controller.

Because of the flexibility of electrical signal processing, the controller typically works on electrical signals, although the use of pneumatic controllers based on compressed air has a long and important place in process control. With the development of digital technology, cost-effectiveness and flexibility have led to the use of digital signal processors as the controller in an increasing number of cases.

The component labeled thermostat in Fig. 1.1 measures the room temperature and is called the sensor in Fig. 1.2, a device whose output inevitably contains sensor noise. Sensor selection and placement are very important in control design, for it is sometimes not possible for the true controlled variable and the sensed variable to be the same. For example, although we may really wish to control the house temperature as a whole, the thermostat is in one particular room, which may or may not be at the same temperature as the rest of the house. For instance, if the thermostat is set to \(68^{\circ}F\) but is placed in the living room near a roaring fireplace, a person working in the study could still feel uncomfortably cold.\(^{3,4}\) As we will see, in addition to placement, important properties of a sensor are the accuracy of the measurements as well as low noise, reliability, and linearity. The sensor will typically convert the physical variable into an electrical signal for use by the controller.

Our general system also includes an input filter whose role is to convert the reference signal to electrical form for later manipulation by the controller. In some cases, the input filter can modify the reference command input in ways that improve the system response. Finally, the controller computes the difference between the reference signal and the sensor output to obtain a measure of the system error.

The thermostat on the wall includes the sensor, input filter, and the controller. A few decades ago, the user simply set the thermostat manually to achieve the desired room temperature at the thermostat location. Over the last few decades, the addition of a small computer in the thermostat has enabled storing the desired temperature over the day and week, and more recently, thermostats have gained the ability to learn what the desired temperature should be and to base that value, in part, on whether anybody will be home soon! A thermostat system that includes a motion detector can determine whether anybody is home and learns from the patterns observed what the desired temperature profile should be. The process of learning the desired setpoint is an example of artificial intelligence (AI) or machine learning, which is gaining acceptance in many fields as the power and affordability of computers improve.
The combination of feedback control, AI, sensor fusion, and logic to tie it all together will become an essential feature in many future devices such as drones, driverless cars, and many others.

This text will present methods for analyzing feedback control systems and will describe the most important design techniques engineers can use in applying feedback to solve control problems. We will also study the specific advantages of feedback that compensate for the additional complexity it demands.

18.2. A First Analysis of Feedback

The value of feedback can be readily demonstrated by quantitative analysis of a simplified model of a familiar system, the cruise control of an automobile (see Fig. 1.3). To study this situation analytically, we

Figure 1.3

Component block diagram of automobile cruise control

Figure 1.4

Block diagram of the cruise control plant

need a mathematical model of our system in the form of a set of quantitative relationships among the variables. For this example, we ignore the dynamic response of the car and consider only the steady behavior. (Dynamics will, of course, play a major role in later chapters.) Furthermore, we assume that for the range of speeds to be used by the system, we can approximate the relations as linear. After measuring the speed of the vehicle on a level road at 65 mph, we find that a \(1^{\circ}\) change in the throttle angle (our control variable, \(u\)) causes a 10 mph change in speed (the output variable, \(y\)), hence the value 10 in the box between \(u\) and \(y\) in Fig. 1.4, which is a block diagram of the plant. Generally, the block diagram shows the mathematical relationships of a system in graphical form. From observations while driving up and down hills, it is found that when the grade changes by \(1\%\), we measure a speed change of 5 mph, hence the value 0.5 in the upper box in Fig. 1.4, which reflects that a \(1\%\) grade change has half the effect of a \(1^{\circ}\) change in the throttle angle. The speedometer is found to be accurate to a fraction of 1 mph and will be considered exact. In the block diagram, the connecting lines carry signals and a block is like an ideal amplifier which multiplies the signal at its input by the value marked in the block to give the output signal. To sum two or more signals, we show lines for the signals coming into a summer, a circle with the summation sign \(\Sigma\) inside. An algebraic sign (plus or minus) beside each arrowhead indicates whether the input

Figure 1.5

Open-loop cruise control

adds to or subtracts from the total output of the summer. For this analysis, we wish to compare the effects of a \(1\%\) grade on the output speed when the reference speed is set for 65 with and without feedback to the controller.

In the first case, shown in Fig. 1.5, the controller does not use the speedometer reading but sets \(u = r/10\), where \(r\) is the reference speed, which is 65 mph. This is an example of an open-loop control system. The term open-loop refers to the fact that there is no closed path or loop around which the signals go in the block diagram; that is, the control variable \(u\) is independent of the output variable, \(y\). In our simple example, the open-loop output speed, \(y_{ol}\), is given by the equations

\[\begin{matrix} y_{ol} & \ = 10(u - 0.5w) \\ & \ = 10\left( \frac{r}{10} - 0.5w \right) \\ & \ = r - 5w. \end{matrix}\]

The error in output speed is

\[\begin{matrix} e_{ol} & \ = r - y_{ol} \\ & \ = 5w, \end{matrix}\]

and the percent error is

\[\%\ \text{error} = 500\frac{w}{r}.\]

If \(r = 65\) and the road is level, then \(w = 0\) and the speed will be 65 with no error. However, if \(w = 1\), corresponding to a \(1\%\) grade, then the speed will be 60 and we have a 5-mph error, which is a \(7.69\%\) error in the speed. For a grade of \(2\%\), the speed error would be 10 mph, which is an error of \(15.38\%\), and so on. The example shows that there would be no error when \(w = 0\), but this result depends on the controller gain being the exact inverse of the plant gain of 10. In practice, the plant gain is subject to change and if it does, errors are introduced by this means also. If there is an error in the plant gain in open-loop control, the percent speed error would be the same as the percent plant-gain error.
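These open-loop numbers are easy to reproduce. A minimal sketch in Matlab, using the gains of Fig. 1.4 and the open-loop control law \(u = r/10\):

```matlab
% Open-loop cruise control (Fig. 1.5): plant y = 10*(u - 0.5*w), u = r/10.
r = 65;                          % reference speed (mph)
for w = [0 1 2]                  % road grade (%)
    u    = r/10;                 % open-loop control law
    y_ol = 10*(u - 0.5*w);       % steady-state speed (mph)
    fprintf('w = %d%%: speed = %g mph, error = %.2f%%\n', ...
            w, y_ol, 100*(r - y_ol)/r);
end
```

The printed errors of 0%, 7.69%, and 15.38% match the values computed above.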

The block diagram of a feedback scheme is shown in Fig. 1.6, where the controller gain has been set to 10. In this simple example, we have assumed that we have an ideal sensor providing a measurement of \(y_{cl}\). In this case, the equations are

Figure 1.6

Closed-loop cruise control

\[\begin{matrix} y_{cl} & \ = 10u - 5w \\ u & \ = 10\left( r - y_{cl} \right) \end{matrix}\]

Combining them yields

\[\begin{matrix} y_{cl} & \ = 100r - 100y_{cl} - 5w, \\ 101y_{cl} & \ = 100r - 5w, \\ y_{cl} & \ = \frac{100}{101}r - \frac{5}{101}w, \\ e_{cl} & \ = \frac{r}{101} + \frac{5w}{101}. \end{matrix}\]

Thus, the feedback has reduced the sensitivity of the speed error to the grade by a factor of 101 when compared with the open-loop system. Note, however, that there is now a small speed error on level ground because even when \(w = 0\),

\[y_{cl} = \frac{100}{101}r = 0.99r\ \text{mph}.\]

This error will be small as long as the loop gain (product of plant and controller gains) is large.\(^{5}\) If we again consider a reference speed of 65 mph and compare speeds with a \(1\%\) grade, the percent error in the output speed is

\[\begin{matrix} \%\ \text{error} & \ = 100\frac{\frac{65 \times 100}{101} - \left( \frac{65 \times 100}{101} - \frac{5}{101} \right)}{\frac{65 \times 100}{101}} \\ & \ = 100\frac{5 \times 101}{101 \times 65 \times 100} \\ & \ = 0.0769\%. \end{matrix}\]
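The closed-loop expressions derived above can be checked the same way; a minimal sketch:

```matlab
% Closed-loop cruise control (Fig. 1.6): y = (100*r - 5*w)/101.
r = 65;  w = 1;
y_cl   = (100*r - 5*w)/101;      % speed on a 1% grade (mph)
y_flat = 100*r/101;              % speed on level ground (mph)
fprintf('error due to grade: %.4f%%\n', 100*(y_flat - y_cl)/y_flat);
fprintf('steady error on level ground: %.2f mph\n', r - y_flat);
```

This reproduces the 0.0769% grade-induced error and also shows the small residual error of about 0.64 mph on level ground that the finite loop gain leaves behind.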

The design trade-off

The reduction of the speed sensitivity to grade disturbances and plant gain in our example is due to the loop gain of 100 in the feedback case. Unfortunately, there are limits to how high this gain can be made; when dynamics are introduced, the feedback can make the response worse than before, or even cause the system to become unstable. The dilemma is illustrated by another familiar situation where it is easy to change a feedback gain. If one tries to raise the gain of a public-address amplifier too much, the sound system will squeal in a most unpleasant way. This is a situation where the gain in the feedback loop (from the speakers to the microphone through the amplifier and back to the speakers) is too much. The issue of how to get the gain as large as possible to reduce the errors without making the system become unstable is called the design trade-off and is what much of feedback control design is all about.

18.3. Feedback System Fundamentals

To achieve good control, there are four typical goals:

  • Stability. The system must be stable at all times. This is an absolute requirement.

  • Tracking. The system output must track the command reference signal as closely as possible.

  • Disturbance rejection. The system output must be as insensitive as possible to disturbance inputs.

  • Robustness. The aforementioned goals must be met even if the model used in the design is not completely accurate or if the dynamics of the physical system change over time.

The requirement of stability is basic, and instability may have two causes. In the first place, the system being controlled may be unstable. This is illustrated by the Segway vehicle, which will simply fall over if the control is turned off. A second cause of instability may be the addition of feedback itself! Such an instability is called a "vicious circle," where the feedback signal that is circled back makes the situation worse rather than better. Stability will be discussed in much more detail in Chapters 3 and 4.

There are many examples of the requirement of having the system's output track a command signal. For example, driving a car so the vehicle stays in its lane is command tracking. Today, this is done by the driver; however, there are schemes now under development where the car's "autodriver" will carry out this task using feedback control while the driver does other things, for example, surfing the Internet. Similarly, flying an airplane on the approach to landing requires that a glide path be accurately tracked by the pilot or an autopilot. It is routine for today's aircraft autopilots to carry this out including the flare to the actual touchdown. The autopilot accepts inputs from the Instrument Landing System (ILS) that provides an electronic signal showing the
desired landing trajectory, then commands the aircraft control surfaces so it follows the desired trajectory as closely as possible.

Disturbance rejection is one of the very oldest applications of feedback control. In this case, the "command" is simply a constant setpoint to which the output is to be held as the environment changes. A very common example of this is the room thermostat whose job it is to hold the room temperature close to the setpoint as outside temperature and wind change, and as doors and windows are opened and closed.

Finally, to design a controller for a dynamic system, it is necessary to have a mathematical model of the dynamic response of the system being controlled in all but the simplest cases. Unfortunately, almost all physical systems are very complex and often nonlinear. As a result, the design will usually be based on a simplified model and must be robust enough that the system meets its performance requirements when applied to the real device. Furthermore, as time and the environment change, even the best of models will be in error because the system dynamics have changed. Again, the design must not be too sensitive to these inevitable changes and it must work well enough regardless.

The tools available to control engineers to design and build feedback control systems have evolved over time. The development of digital computers has been especially important both as computation aids and as embedded control devices. As computation devices, computers have permitted identification of increasingly complex models and the application of very sophisticated control design methods. Also, as embedded devices, digital controllers have permitted the implementation of very complex control laws. Control engineers must not only be skilled in using these design tools, but also need to understand the concepts behind these tools to be able to make the best use of them. Also important is that the control engineer understands both the capabilities and the limitations of the controller devices available.

18.4. A Brief History

Interesting histories of early work on feedback control have been written by Mayr (1970) and Åström (2014), who trace the control of mechanisms to antiquity. Two of the earliest examples are the control of flow rate to regulate a water clock and the control of liquid level in a wine vessel, which is thereby kept full regardless of how many cups are dipped from it. The control of fluid flow rate is reduced to the control of fluid level, since a small orifice will produce constant flow if the pressure is constant, which is the case if the level of the liquid above the orifice is constant.

Liquid-level control

The mechanism of the liquid-level control invented in antiquity and still used today (for example, in the water tank of the ordinary flush toilet) is the float valve. As the liquid level falls, so does the float, allowing the flow into the tank to increase; as the level rises, the flow is reduced and if necessary cut off. Figure 1.7 shows how a float valve operates. Notice here the sensor and actuator are not separate


Figure 1.7

Early historical control of liquid level and flow

Drebbel's incubator

Figure 1.8

Drebbel's incubator for hatching chicken eggs

devices but are contained in the carefully shaped float-and-supply-tube combination.

A more recent invention described by Mayr (1970) is a system, designed by Cornelis Drebbel in about 1620, to control the temperature of a furnace used to heat an incubator \(\ ^{6}\) (see Fig. 1.8). The furnace consists of a box to contain the fire, with a flue at the top fitted with a damper. Inside the fire box is the double-walled incubator box, the hollow walls of which are filled with water to transfer the heat evenly to the incubator. The temperature sensor is a glass vessel filled with alcohol and mercury and placed in the water jacket around the incubator box. As the fire heats the box and water, the alcohol expands and the riser floats up, lowering the damper on the flue. If the box is too cold, the alcohol contracts, the damper is opened, and the fire burns hotter.

Fly-ball governor

Figure 1.9

Photograph of an early Watt steam engine

Source: Chronicle/Alamy Stock Photo
The desired temperature is set by the length of the riser, which sets the opening of the damper for a given expansion of the alcohol.

A famous problem in the chronicles of control systems was the search for a means to control the rotation speed of a shaft. Much early work (Fuller, 1976) seems to have been motivated by the desire to automatically control the speed of the grinding stone in a wind-driven flour mill. Of the various methods attempted, the one with the most promise used a conical pendulum, or fly-ball governor, to measure the speed of the mill. The sails of the driving windmill were rolled up or let out with ropes and pulleys, much like a window shade, to maintain fixed speed. However, it was the adaptation of these principles to the steam engine in the laboratories of James Watt around 1788 that made the fly-ball governor famous. An early version is shown in Fig. 1.9, while Figs. 1.10 and 1.11 show a close-up of a fly-ball governor and a sketch of its components.

The action of the fly-ball governor (also called a centrifugal governor) is simple to describe. Suppose the engine is operating in equilibrium. Two weighted balls spinning around a central shaft can be seen to describe a cone of a given angle with the shaft. When a load is suddenly applied to the engine, its speed will slow, and the balls of the governor will drop to a smaller cone. Thus the ball angle is used to sense the output speed. This action, through the levers, will open the main valve to the steam chest (which is the actuator) and admit more steam to the engine, restoring most of the lost speed. To hold the steam valve at a new position, it is necessary for the fly balls to rotate at a different angle, implying that the speed under load is not exactly the same as before. We saw this effect earlier with cruise control, where feedback control gave a very small error. To recover the exact same speed in the system, it would require resetting the desired speed setting by changing the length of the rod from the lever to the valve. Subsequent inventors

Figure 1.10

Close-up of the fly-ball governor

Source: Washington Imaging/Alamy Stock Photo
Figure 1.11

Operating parts of a fly-ball governor
Beginnings of control theory

introduced mechanisms that integrated the speed error to provide automatic reset. In Chapter 4, we will analyze these systems to show that such integration can result in feedback systems with zero steady-state error to constant disturbances.

Because Watt was a practical man, like the millwrights before him, he did not engage in theoretical analysis of the governor. Fuller (1976) has traced the early development of control theory to a period of studies from Christiaan Huygens in 1673 to James Clerk Maxwell in 1868. Fuller gives particular credit to the contributions of G. B. Airy,

Stability analysis

Frequency response

professor of mathematics and astronomy at Cambridge University from 1826 to 1835 and Astronomer Royal at Greenwich Observatory from 1835 to 1881. Airy was concerned with speed control; if his telescopes could be rotated counter to the rotation of the earth, a fixed star could be observed for extended periods. Using the centrifugal-pendulum governor, he discovered that it was capable of unstable motion: "and the machine (if I may so express myself) became perfectly wild" (Airy, 1840; quoted in Fuller, 1976). According to Fuller, Airy was the first worker to discuss instability in a feedback control system and the first to analyze such a system using differential equations. These attributes signal the beginnings of the study of feedback control dynamics.

The first systematic study of the stability of feedback control was apparently given in the paper "On Governors" by Maxwell (1868). \(\ ^{7}\) In this paper, Maxwell developed the differential equations of the governor, linearized them about equilibrium, and stated that stability depends on the roots of a certain (characteristic) equation having negative real parts. Maxwell attempted to derive conditions on the coefficients of a polynomial that would hold if all the roots had negative real parts. He was successful only for second- and third-order cases. Determining criteria for stability was the problem for the Adams Prize of 1877, which was won by E. J. Routh. \(\ ^{8}\) His criterion, developed in his essay, remains of sufficient interest that control engineers are still learning how to apply his simple technique. Analysis of the characteristic equation remained the foundation of control theory until the invention of the electronic feedback amplifier by H. S. Black in 1927 at Bell Telephone Laboratories.

Shortly after publication of Routh's work, the Russian mathematician Lyapunov (1892) began studying the question of stability of motion. His studies were based on the nonlinear differential equations of motion, and also included results for linear equations that are equivalent to Routh's criterion. His work was fundamental to what is now called the state-variable approach to control theory, but was not introduced into the control literature until about 1958.

The development of the feedback amplifier is briefly described in an interesting article based on a talk by Bode (1960) reproduced in Bellman and Kalaba (1964). With the introduction of electronic amplifiers, long-distance telephoning became possible in the decades following World War I. However, as distances increased, so did the loss of electrical energy; in spite of using larger-diameter wires, increasing numbers of amplifiers were needed to replace the lost energy. Unfortunately, large numbers of amplifiers resulted in much distortion, since the small nonlinearities of the vacuum tubes then used in electronic amplifiers were multiplied many times. To solve the problem of reducing distortion, Black proposed the feedback amplifier. As mentioned earlier in connection with the automobile cruise control, the more we wish to reduce errors (or distortion), the more feedback we need to apply. The loop gain from actuator to plant to sensor to actuator must be made very large. With high gain the feedback loop begins to squeal and is unstable. Here was Maxwell's and Routh's stability problem again, except that in this technology the dynamics were so complex (with differential equations of order 50 being common) that Routh's criterion was not very helpful. So the communications engineers at Bell Telephone Laboratories, familiar with the concept of frequency response and the mathematics of complex variables, turned to complex analysis. In 1932, H. Nyquist published a paper describing how to determine stability from a graphical plot of the loop frequency response. From this theory developed an extensive methodology of feedback-amplifier design described by Bode (1945) and still extensively used in the design of feedback controls. Nyquist and Bode plots will be discussed in more detail in Chapter 6.

Simultaneous with the development of the feedback amplifier, feedback control of industrial processes was becoming standard. This field, characterized by processes that are not only highly complex but also nonlinear and subject to relatively long time delays between actuator and sensor, developed the concept of proportional-integral-derivative (PID) control. The PID controller was first described by Callender et al. (1936). This technology was based on extensive experimental work and simple linearized approximations to the system dynamics. It led to standard experiments suitable to application in the field and eventually to satisfactory "tuning" of the coefficients of the PID controller. (PID controllers will be covered in Chapter 4.) Also under development at this time were devices for guiding and controlling aircraft; especially important was the development of sensors for measuring aircraft altitude and speed. An interesting account of this branch of control theory is given in McRuer (1973).

An enormous impulse was given to the field of feedback control during World War II. In the United States, engineers and mathematicians at the MIT Radiation Laboratory combined their knowledge to bring together not only Bode's feedback amplifier theory and the PID control of processes, but also the theory of stochastic processes developed by Wiener (1930). The result was the development of a comprehensive set of techniques for the design of servomechanisms, as control mechanisms came to be called. Much of this work was collected and published in the records of the Radiation Laboratory by James et al. (1947).

Another approach to control systems design was introduced in 1948 by W. R. Evans, who was working in the field of guidance and control of aircraft. Many of his problems involved unstable or neutrally stable dynamics, which made the frequency methods difficult, so he
suggested returning to the study of the characteristic equation that had been the basis of the work of Maxwell and Routh nearly 70 years earlier. However, Evans developed techniques and rules allowing one to follow graphically the paths of the roots of the characteristic equation as a parameter was changed. His method, the root locus, is suitable for design as well as for stability analysis and remains an important technique today. The root-locus method developed by Evans will be covered in Chapter 5.

During the 1950s, several authors, including R. Bellman and R. E. Kalman in the United States and L. S. Pontryagin in the U.S.S.R., began again to consider the ordinary differential equation (ODE) as a model for control systems. Much of this work was stimulated by the new field of control of artificial earth satellites, in which the ODE is a natural form for writing the model. Supporting this endeavor were digital computers, which could be used to carry out calculations unthinkable 10 years before. (Now, of course, these calculations can be done by any engineering student with a laptop computer.) The work of Lyapunov was translated into the language of control at about this time, and the study of optimal controls, begun by Wiener and Phillips during World War II, was extended to optimizing trajectories of nonlinear systems based on the calculus of variations. Much of this work was presented at the first conference of the newly formed International Federation of Automatic Control held in Moscow in \(1960.\ ^{9}\) This work did not use the frequency response or the characteristic equation but worked directly with the ODE in "normal" or "state" form and typically called for extensive use of computers. Even though the foundations of the study of ODEs were laid in the late 19th century, this approach is now often called modern control to distinguish it from classical control, which uses Laplace transforms and complex variable methods of Bode and others. In the period from the 1970s continuing through the present, we find a growing body of work that seeks to use the best features of each technique.

Thus, we come to the current state of affairs where the principles of control are applied in a wide range of disciplines, including every branch of engineering. The well-prepared control engineer needs to understand the basic mathematical theory that underlies the field and must be able to select the best design technique suited to the problem at hand. With the ubiquitous use of computers, it is especially important that the engineer is able to use his or her knowledge to guide and verify calculations done on the computer. \(\ ^{10}\)

1.5 An Overview of the Book

The central purpose of this book is to introduce the most important techniques for single-input-single-output control systems design. Chapter 2 will review the techniques necessary to obtain physical models of the dynamic systems that we wish to control. These include model making for mechanical, electric, electromechanical, and a few other physical systems, including a simple model for a quadrotor drone, which will be used in subsequent chapters. Chapter 2 will also briefly describe the linearization of nonlinear models, although this will be discussed more thoroughly in Chapter 9.

In Chapter 3 and Appendix A, we will discuss the analysis of dynamic response using Laplace transforms along with the relationship between time response and the poles and zeros of a transfer function. The chapter also includes a discussion of the critical issue of system stability, including the Routh test.

In Chapter 4, we will cover the basic equations and features of feedback. An analysis of the effects of feedback on disturbance rejection, tracking accuracy, sensitivity to parameter changes, and dynamic response will be given. The idea of elementary PID control is discussed.

In Chapters 5, 6, and 7, we introduce the techniques for realizing the control objectives first identified in Chapter 4 in more complex dynamic systems. These include the root locus, frequency response, and state variable techniques. These are alternative means to the same end and have different advantages and disadvantages as guides to design of controls. The methods are fundamentally complementary, and each needs to be understood to achieve the most effective control systems design.

In Chapter 8, we will develop the ideas of implementing controllers in a digital computer. The chapter addresses how one "digitizes" the control equations developed in Chapters 4 through 7, how the sampling introduces a delay that tends to destabilize the system, and how the sample rate needs to be a certain multiple of the system frequencies for good performance. Just as the Laplace transform does for nonsampled signals, the analysis of sampled systems requires another analysis tool, the \(z\)-transform, and that tool is described and its use is illustrated.

Most real systems are nonlinear to some extent. However, the analyses and design methods in most of the book up to here are for linear systems. In Chapter 9, we will explain why the study of linear systems is pertinent, why it is useful for design even though most systems are nonlinear, and how designs for linear systems can be modified to handle many common nonlinearities in the systems being controlled. The chapter will cover saturation, describing functions, adaptive control and the anti-windup controller, and contains a brief introduction to Lyapunov stability theory.

Application of all the techniques to problems of substantial complexity will be discussed in Chapter 10. The design methods discussed in Chapters 4-7 are all brought to bear simultaneously on specific case
studies that are representative of real-world problems. These cases are somewhat simplified versions of control systems that are in use today in satellites on orbit, in most commercial aircraft, in all automobiles sold in the Western world today, in semiconductor manufacturing throughout the world, and in the drones being used in many fields.

Control designers today make extensive use of computer-aided control systems design software that is commercially available. Furthermore, most instructional programs in control systems design make software tools available to the students. The most widely used software for the purpose are Matlab \(\ ^{®}\) and Simulink \(\ ^{®}\) from The MathWorks. Matlab routines have been included throughout the text to help illustrate this method of solution and many problems require computer aids for solution. Many of the figures in the book were created using Matlab and the files for their creation are available free of charge on the web at www.pearsonglobaleditions.com. Students and instructors are invited to use these files as it is believed that they should be helpful in learning how to use computer methods to solve control problems.

Needless to say, many topics are not treated in the book. We do not extend the methods to multivariable controls, which are systems with more than one input and/or output, except as part of the case study of the rapid thermal process in Chapter 10. Nor is optimal control treated in more than a very introductory manner in Chapter 7.

Also beyond the scope of this text is a detailed treatment of the experimental testing and modeling of real hardware, which is the ultimate test of whether any design really works. The book concentrates on analysis and design of linear controllers for linear plant models, not because we think that is the final test of a design, but because that is the best way to grasp the basic ideas of feedback and is usually the first step in arriving at a satisfactory design. We believe that mastery of the material here will provide a foundation of understanding on which to build knowledge of the actual physical behavior of control systems, a foundation strong enough to allow one to build a personal design method in the tradition of all those who worked to give us the knowledge we present here.

SUMMARY

  • Control is the process of making a system variable adhere to a particular value, called the reference value. A system designed to follow a changing reference is called tracking control or a servo. A system designed to maintain an output fixed regardless of the disturbances present is called a regulating control or a regulator.

  • Two kinds of control were defined and illustrated based on the information used in control and named by the resulting structure. In open-loop control, the system does not measure the output and there is no correction of the actuating signal to make that output conform to the reference signal. In closed-loop control, the system
    includes a sensor to measure the output and uses feedback of the sensed value to influence the control variable.

  • A simple feedback system consists of the process (or plant) whose output is to be controlled, the actuator whose output causes the process output to change, a reference command signal, sensors that measure these signals, and the controller that implements the logic by which the control signal that commands the actuator is calculated.

  • Block diagrams are helpful for visualizing system structure and the flow of information in control systems. The most common block diagrams represent the mathematical relationships among the signals in a control system.

  • A well-designed feedback control system will be stable, track a desired input or setpoint, reject disturbances, and be insensitive (or robust) to changes in the math model used for design.

  • The theory and design techniques of control have come to be divided into two categories: classical control methods use Laplace transforms (or \(z\)-transform) and were the dominant methods for control design until modern control methods based on ODEs in state form were introduced into the field starting in the 1960s. Many connections have been discovered between the two categories and well-prepared engineers must be familiar with both techniques.

REVIEW QUESTIONS

1.1 What are the main components of a feedback control system?

1.2 What is the purpose of the sensor?

1.3 Give three important properties of a good sensor.

1.4 What is the purpose of the actuator?

1.5 Give three important properties of a good actuator.

1.6 What is the purpose of the controller? Give the input(s) and output(s) of the controller.

1.7 What physical variable is measured by a tachometer?

1.8 Describe three different techniques for measuring temperature.

1.9 Why do most sensors have an electrical output, regardless of the physical nature of the variable being measured?

PROBLEMS

1.1 Draw a component block diagram for each of the following feedback control systems:

(a) The manual steering system of an automobile

(b) Drebbel's incubator
(c) The water level controlled by a float and valve

(d) Watt's steam engine with fly-ball governor

In each case, indicate the location of the elements listed below and give the units associated with each signal:

  • The process

  • The process desired output signal

  • The sensor

  • The actuator

  • The actuator output signal

  • The controller

  • The controller output signal

  • The reference signal

  • The error signal

Notice that in a number of cases the same physical device may perform more than one of these functions.

1.2 Identify the physical principles and describe the operation of the thermostat in your home or office.

1.3 A machine for making paper is diagrammed in Fig. 1.12. There are two main parameters under feedback control: the density of fibers as controlled by the consistency of the thick stock that flows from the headbox onto the wire, and the moisture content of the final product that comes out of the dryers. Stock from the machine chest is diluted by white water returning from under the wire as controlled by a control valve (CV). A meter supplies a reading of the consistency. At the "dry end" of the machine, there is a moisture sensor. Draw a block diagram and identify the nine components listed in Problem 1.1 part (d) for the following:

(a) Control of consistency

(b) Control of moisture

1.4 Many variables in the human body are under feedback control. For each of the following controlled variables, draw a block diagram showing the process being controlled, the sensor that measures the variable, the actuator that causes it to increase and/or decrease, the information path that completes the feedback path, and the disturbances that upset the variable. You may need to consult an encyclopedia or textbook on human physiology for information on this problem.

Figure 1.12

A papermaking machine
(a) Blood pressure

(b) Blood sugar concentration

(c) Heart rate

(d) Eye-pointing angle

(e) Eye-pupil diameter

1.5 Draw a block diagram of the components for an elevator-position control. Indicate how you would measure the position of the elevator car. Consider a combined coarse and fine measurement system. What accuracies do you suggest for each sensor? Your system should be able to correct for the fact that in elevators for tall buildings there is significant cable stretch as a function of cab load.

1.6 Feedback control requires being able to sense the variable being controlled. Because electrical signals can be transmitted, amplified, and processed easily, often we want to have a sensor whose output is a voltage or current proportional to the variable being measured. Describe a sensor that would give an electrical output proportional to the following:

(a) Temperature

(b) Pressure

(c) Liquid level

(d) Flow of liquid along a pipe (or blood along an artery)

(e) Linear position

(f) Rotational position

(g) Linear velocity

(h) Rotational speed

(i) Translational acceleration

(j) Torque

1.7 Each of the variables listed in Problem 1.6 can be brought under feedback control. Describe an actuator that could accept an electrical input and be used to control the variables listed. Give the units of the actuator output signal.

1.8 Feedback in Biology

(a) Negative Feedback in Biology: When a person is under long-term stress (say, a couple of weeks before an exam!), the hypothalamus (in the brain) secretes a hormone called Corticotropin Releasing Factor (CRF), which binds to a receptor in the pituitary gland, stimulating it to produce Adrenocorticotropic hormone (ACTH), which in turn stimulates the adrenal cortex (the outer part of the adrenal glands) to release the stress hormone Glucocorticoid (GC). GC in turn shuts down (turns off the stress response for) both CRF and ACTH production by negative feedback via the bloodstream, until GC returns to its normal level. Draw a block diagram of this closed-loop system.

(b) Positive Feedback in Biology: This happens in some unique circumstances. Consider the birth process of a baby. Pressure from the head of the baby going through the birth canal causes contractions via secretion of a hormone called oxytocin, which causes more pressure, which in turn intensifies contractions. Once the baby is born, the system goes back to normal (negative feedback). Draw a block diagram of this closed-loop system.

2. Dynamic Models

A Perspective on Dynamic Models

The overall goal of feedback control is to use feedback to cause the output variable of a dynamic process to follow a desired reference variable accurately, regardless of the reference variable's path and regardless of any external disturbances or any changes in the dynamics of the process. This complex design goal is met by a number of simple, distinct steps. The first of these is to develop a mathematical description (called a dynamic model or mathematical model) of the process to be controlled. The term model, as it is used and understood by control engineers, means a set of differential equations that describe the dynamic behavior of the process. A model can be obtained using principles of the underlying physics or by testing a prototype of the device, measuring its response to inputs, and using the data to construct an analytical model. We will focus only on using physics in this chapter. There are entire books written on experimentally determining models, sometimes called system identification, and these techniques will be described very briefly in Chapter 3. A careful control system designer will typically rely on at least some experiments to verify the accuracy of the model when it is derived from physical principles.

In many cases, the modeling of complex processes is difficult and expensive, especially when the important steps of building and testing prototypes are included. However, in this introductory text, we will focus on the most basic principles of modeling for the most common physical systems. More comprehensive sources and specialized texts will be referenced throughout where appropriate for those wishing more detail.

In later chapters, we will explore a variety of analysis methods for dealing with the dynamic equations and their solution for purposes of designing feedback control systems.

Chapter Overview

The fundamental step in building a dynamic model is writing the dynamic equations for the system. Through discussion and a variety of examples, Section 2.1 demonstrates how to write the dynamic equations for a variety of mechanical systems. In addition, the section demonstrates the use of Matlab to find the time response of a simple system to a step input. Furthermore, the ideas of transfer functions and block diagrams are introduced, along with the idea that problems can also be solved via Simulink.

Electric circuits and electromechanical systems will be modeled in Sections 2.2 and 2.3, respectively.

For those wanting modeling examples for more diverse dynamic systems, Section 2.4, which is optional, will extend the discussion to heat- and fluid-flow systems.

The chapter then concludes with Section 2.5, a discussion of the history behind the discoveries that led to the knowledge that we take for granted today.

The differential equations developed in modeling are often nonlinear. Because nonlinear systems are significantly more challenging to solve than linear ones, and because linear models are usually adequate for purposes of control design, the emphasis in the early chapters is primarily on linear systems. However, we do show how to linearize simple nonlinearities in this chapter and show how to use Simulink to numerically solve for the motion of a nonlinear system. A much more extensive discussion of linearization and analysis of nonlinear systems is contained in Chapter 9.

In order to focus on the important first step of developing mathematical models, we will defer explanation of the computational methods used to solve the dynamic equations developed in this chapter until Chapter 3.

Newton's law for translational motion

2.1 Dynamics of Mechanical Systems

2.1.1 Translational Motion

The cornerstone for obtaining a mathematical model, or the dynamic equations, \(\ ^{1}\) for any mechanical system is Newton's law,

\[\mathbf{F} = m\mathbf{a}, \tag{2.1}\]

Use of free-body diagram in applying Newton's law

EXAMPLE 2.1

Figure 2.1

Cruise control model

where

\(\mathbf{F} =\) the vector sum of all forces applied to each body in a system, newtons \((N)\),

\(\mathbf{a} =\) the vector acceleration of each body with respect to an inertial reference frame (that is, one that is neither accelerating nor rotating with respect to the stars); often called inertial acceleration, \(m/\sec^{2}\),

\(m =\) mass of the body, \(kg\).

Note that here in Eq. (2.1), as throughout the text, we use the convention of boldfacing the type to indicate that the quantity is a matrix or vector, possibly a vector function.

A force of \(1\text{ }N\) will impart an acceleration of \(1\text{ }m/\sec^{2}\) to a mass of \(1\text{ }kg\). The "weight" of an object is \(mg\), where \(g\) is the acceleration of gravity \(\left( = 9.81\text{ }m/\sec^{2} \right)\), which is the quantity measured on scales. Scales are typically calibrated in kilograms, which is used as a direct measure of mass assuming the standard value for \(g\).

Application of this law typically involves defining convenient coordinates to account for the body's motion (position, velocity, and acceleration), determining the forces on the body using a free-body diagram, then writing the equations of motion from Eq. (2.1). The procedure is simplest when the coordinates chosen express the position with respect to an inertial reference frame because, in this case, the accelerations needed for Newton's law are simply the second derivatives of the position coordinates.

A Simple System; Cruise Control Model

  1. Write the equations of motion for the speed and forward motion of the car shown in Fig. 2.1, assuming the engine imparts a force \(u\) as shown. Take the Laplace transform of the resulting differential equation and find the transfer function between the input \(u\) and the output \(v\).

 

  2. Use Matlab to find the response of the velocity of the car for the case in which the input jumps from being \(u = 0\) at time \(t = 0\) to a constant \(u = 500\text{ }N\) thereafter. Assume the car mass \(m\) is \(1000\text{ }kg\) and the viscous drag coefficient is \(b = 50\text{ }N \cdot sec/m\).

Solution

  1. Equations of motion: For simplicity, we assume the rotational inertia of the wheels is negligible, and that there is friction retarding the motion of the car that is proportional to the car's speed with a proportionality constant, \(b.^{2}\) The car can then be approximated for modeling purposes using the free-body diagram seen in Fig. 2.2, which defines coordinates, shows all forces acting on the body (heavy lines), and indicates the acceleration (dashed line). The coordinate of the car's position, \(x\), is the distance from the reference line shown and is chosen so positive is to the right. Note in this case, the inertial acceleration is simply the second derivative of \(x\) (that is, \(\mathbf{a} = \overset{¨}{x}\) ) because the car position is measured with respect to an inertial reference frame. The equation of motion is found using Eq. (2.1). The friction force acts opposite to the direction of motion; therefore it is drawn opposite to the direction of positive motion and entered as a negative force in Eq. (2.1). The result is

\[u - b\overset{˙}{x} = m\overset{¨}{x}, \tag{2.2}\]

or

\[\overset{¨}{x} + \frac{b}{m}\overset{˙}{x} = \frac{u}{m}. \tag{2.3}\]

For the case of the automotive cruise control where the variable of interest is the speed, \(v( = \overset{˙}{x})\), the equation of motion becomes

\[\overset{˙}{v} + \frac{b}{m}v = \frac{u}{m}. \tag{2.4}\]

The solution of such an equation will be covered in detail in Chapter 3; however, the essence is that you assume a solution of

Figure 2.2

Free-body diagram for cruise control

\(\ ^{2}\) If the speed is \(v\), the aerodynamic portion of the friction force is actually proportional to \(v^{2}\). We have assumed it to be linear here for simplicity.

Transfer function

Step response with Matlab

the form \(v = V_{o}e^{st}\) given an input of the form \(u = U_{o}e^{st}\). Then, since \(\overset{˙}{v} = sV_{o}e^{st}\), the differential equation can be written as \(\ ^{3}\)

\[\left( s + \frac{b}{m} \right)V_{o}e^{st} = \frac{1}{m}U_{o}e^{st}. \tag{2.5}\]

The \(e^{st}\) term cancels out, and we find that

\[\frac{V_{o}}{U_{o}} = \frac{\frac{1}{m}}{s + \frac{b}{m}}. \tag{2.6}\]

For reasons that will become clear in Chapter 3, this is often written using capital letters to signify that it is the "transform" of the solution, or

\[\frac{V(s)}{U(s)} = \frac{\frac{1}{m}}{s + \frac{b}{m}}. \tag{2.7}\]

This expression of the differential equation (2.4) is called the transfer function and will be used extensively in later chapters. Note that, in essence, we have substituted \(s\) for \(d/dt\) in Eq. (2.4). This transfer function serves as a math model that relates the car's velocity to the forces propelling the car, that is, inputs from the accelerator pedal. Transfer functions of a system will be used in later chapters to design feedback controllers such as the cruise control devices found in many modern cars.

  2. Time response: The dynamics of a system can be prescribed to Matlab in terms of its transfer function, as can be seen in the Matlab statements below that implement Eq. (2.7). The step function in Matlab calculates the time response of a linear system to a unit step input. Because the system is linear, the output for this case can be multiplied by the magnitude of the input step to derive a step response of any amplitude. Equivalently, sys can be multiplied by the magnitude of the input step.

The statements

s = tf('s');                   % sets up the mode to define the transfer function
sys = (1/1000)/(s + 50/1000);  % Eq. (2.7) with the numbers filled in
step(500*sys);                 % plots the step response for u = 500

calculate and plot the time response of velocity for an input step with a 500-N magnitude. The step response is shown in Fig. 2.3. Note the steady-state velocity can be checked by setting \(\overset{˙}{v} = 0\) in Eq. (2.4), which gives \(v = u/b = 500/50 = 10\text{ }m/sec\).

Newton's law also can be applied to systems with more than one mass. In this case, it is particularly important to draw the free-body

Figure 2.3

Response of the car velocity to a step in \(u\)

EXAMPLE 2.2

Figure 2.4

Automobile suspension

diagram of each mass, showing the applied external forces as well as the equal and opposite internal forces that act from each mass on the other.

A Two-Mass System: Suspension Model

Figure 2.4 shows an automobile suspension system. Write the equations of motion for the automobile and wheel motion assuming one-dimensional vertical motion of one quarter of the car mass above one wheel. A system consisting of one of the four-wheel suspensions is usually referred to as a quarter-car model. The system can be approximated by the simplified system shown in Fig. 2.5, where two spring constants and a damping coefficient are defined. Assume the model is for a car with a mass of \(1580\text{ }kg\), including the four wheels, which have a mass of \(20\text{ }kg\) each. By placing a known weight (an author) directly over a wheel and measuring the car's deflection, we find that \(k_{s} = 130,000\text{ }N/m\). Measuring the wheel's deflection for the same applied weight, we find that \(k_{w} \simeq 1,000,000\text{ }N/m\). By using the step response data in Fig. 3.19(b) and qualitatively observing that the car's response to a step change matches the damping coefficient curve for \(\zeta = 0.7\) in the figure, we conclude that \(b = 9800\text{ }N \cdot sec/m\).


Figure 2.5

The quarter-car model

Solution. The system can be approximated by the simplified system shown in Fig. 2.5. The coordinates of the two masses, \(x\) and \(y\), with the reference directions as shown, are the displacements of the masses from their equilibrium conditions. The equilibrium positions are offset from the springs' unstretched positions because of the force of gravity. The shock absorber is represented in the schematic diagram by a dashpot symbol with friction constant \(b\). The magnitude of the force from the shock absorber is assumed to be proportional to the rate of change of the relative displacement of the two masses; that is, the force \(= b(\overset{˙}{y} - \overset{˙}{x})\). The force of gravity could be included in the free-body diagram; however, its effect is to produce a constant offset of \(x\) and \(y\). By defining \(x\) and \(y\) to be the distance from the equilibrium position, the need to include the gravity forces is eliminated.

The force from the car suspension acts on both masses in proportion to their relative displacement with spring constant \(k_{s}\). Figure 2.6 shows the free-body diagram of each mass. Note the forces from the spring on the two masses are equal in magnitude but act in opposite directions, which is also the case for the damper. A positive displacement \(y\) of mass \(m_{2}\) will result in a force from the spring on \(m_{2}\) in the direction shown and a force from the spring on \(m_{1}\) in the direction shown. However, a positive displacement \(x\) of mass \(m_{1}\) will result in a force from the spring \(k_{s}\) on \(m_{1}\) in the opposite direction to that drawn in Fig. 2.6, as indicated by the minus \(x\) term for the spring force.

The lower spring \(k_{w}\) represents the tire compressibility, for which there is insufficient damping (velocity-dependent force) to warrant including a dashpot in the model. The force from this spring is proportional to the distance the tire is compressed and the nominal equilibrium force would be that required to support \(m_{1}\) and \(m_{2}\) against gravity. By defining \(x\) to be the distance from equilibrium, a force will result if either the road surface has a bump ( \(r\) changes from its equilibrium value of zero) or the wheel bounces ( \(x\) changes). The motion of the simplified car over a bumpy road will result in a value of \(r(t)\) that is not constant.

As previously noted, there is a constant force of gravity acting on each mass; however, this force has been omitted, as have been the equal and opposite forces from the springs. Gravitational forces can always be omitted from vertical-spring mass systems (1) if the position coordinates are defined from the equilibrium position that results when gravity is acting, and (2) if the spring forces used in the analysis are actually the perturbation in spring forces from those forces acting at equilibrium.

Applying Eq. (2.1) to each mass, and noting that some forces on each mass are in the negative (down) direction, yields the system of equations

\[\begin{matrix} b(\overset{˙}{y} - \overset{˙}{x}) + k_{s}(y - x) - k_{w}(x - r) & \ = m_{1}\overset{¨}{x} \\ - k_{s}(y - x) - b(\overset{˙}{y} - \overset{˙}{x}) & \ = m_{2}\overset{¨}{y}. \end{matrix}\]

Some rearranging results in

\[\overset{¨}{x} + \frac{b}{m_{1}}(\overset{˙}{x} - \overset{˙}{y}) + \frac{k_{s}}{m_{1}}(x - y) + \frac{k_{w}}{m_{1}}x = \frac{k_{w}}{m_{1}}r, \tag{2.8}\]
\[\overset{¨}{y} + \frac{b}{m_{2}}(\overset{˙}{y} - \overset{˙}{x}) + \frac{k_{s}}{m_{2}}(y - x) = 0. \tag{2.9}\]

The most common source of error in writing equations for systems such as these is a sign error. The method for keeping the signs straight in the preceding development entailed mentally picturing the displacement of the masses and drawing the resulting force in the direction that the displacement would produce. Once you have obtained the equations for a system, a check on the signs for systems that are obviously stable from physical reasoning can be quickly carried out. As we will see when we study stability in Section 3.6 of Chapter 3, a stable system always has the same signs on similar variables. For this system, Eq. (2.8) shows that the signs on the \(\overset{¨}{x},\overset{˙}{x}\), and \(x\) terms are all positive, as they must be for stability. Likewise, the signs on the \(\overset{¨}{y},\overset{˙}{y}\), and \(y\) terms are all positive in Eq. (2.9).

The transfer function is obtained in a similar manner as before for zero initial conditions. Substituting \(s\) for \(d/dt\) in the differential equations yields

\[\begin{matrix} s^{2}X(s) + s\frac{b}{m_{1}}(X(s) - Y(s)) + \frac{k_{s}}{m_{1}}(X(s) - Y(s)) + \frac{k_{w}}{m_{1}}X(s) = \frac{k_{w}}{m_{1}}R(s), \\ s^{2}Y(s) + s\frac{b}{m_{2}}(Y(s) - X(s)) + \frac{k_{s}}{m_{2}}(Y(s) - X(s)) = 0, \end{matrix}\]

which can also be written in matrix form as

\[\begin{bmatrix} s^{2} + s\frac{b}{m_{1}} + \frac{k_{s}}{m_{1}} + \frac{k_{w}}{m_{1}} & - s\frac{b}{m_{1}} - \frac{k_{s}}{m_{1}} \\ - s\frac{b}{m_{2}} - \frac{k_{s}}{m_{2}} & s^{2} + s\frac{b}{m_{2}} + \frac{k_{s}}{m_{2}} \end{bmatrix}\begin{bmatrix} X(s) \\ Y(s) \end{bmatrix} = \begin{bmatrix} \frac{k_{w}}{m_{1}} \\ 0 \end{bmatrix}R(s),\]

for which Cramer's Rule (see Appendix WB) can be used to find the transfer function

\[\frac{Y(s)}{R(s)} = \frac{\frac{k_{w}b}{m_{1}m_{2}}\left( s + \frac{k_{s}}{b} \right)}{s^{4} + \left( \frac{b}{m_{1}} + \frac{b}{m_{2}} \right)s^{3} + \left( \frac{k_{s}}{m_{1}} + \frac{k_{s}}{m_{2}} + \frac{k_{w}}{m_{1}} \right)s^{2} + \left( \frac{k_{w}b}{m_{1}m_{2}} \right)s + \frac{k_{w}k_{s}}{m_{1}m_{2}}} \]
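For completeness, here is a sketch of the intermediate Cramer's Rule step, which the text leaves implicit. With \(A(s)\) denoting the \(2 \times 2\) matrix above, \(Y(s)\) is the ratio of two determinants:

\[Y(s) = \frac{\det\begin{bmatrix} s^{2} + s\frac{b}{m_{1}} + \frac{k_{s}}{m_{1}} + \frac{k_{w}}{m_{1}} & \frac{k_{w}}{m_{1}} \\ - s\frac{b}{m_{2}} - \frac{k_{s}}{m_{2}} & 0 \end{bmatrix}}{\det A(s)}R(s) = \frac{\frac{k_{w}}{m_{1}} \cdot \frac{b}{m_{2}}\left( s + \frac{k_{s}}{b} \right)}{\det A(s)}R(s).\]

The numerator is the \(\frac{k_{w}b}{m_{1}m_{2}}\left( s + \frac{k_{s}}{b} \right)\) factor above, and expanding \(\det A(s)\) gives the fourth-order denominator.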

To determine numerical values, we subtract the mass of the four wheels from the total car mass of \(1580\text{ }kg\) and divide it by 4 to find that \(m_{2} = 375\text{ }kg\). The wheel mass was measured directly to be \(m_{1} = 20\text{ }kg\). Therefore, the transfer function with the numerical values is

\[\frac{Y(s)}{R(s)} = \frac{1.31e06(s + 13.3)}{s^{4} + (516.1)s^{3} + (5.685e04)s^{2} + (1.307e06)s + 1.733e07} \]

We will see in Chapter 3 (and later chapters) how this sort of transfer function will allow us to find the response of the car body to inputs resulting from the car motion over a bumpy road.
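This numeric transfer function can also be formed directly in Matlab from the physical parameters, as in Example 2.1. The following sketch is an added illustration; it reproduces the coefficients above and plots the body response to a unit step in road height.

m1 = 20; m2 = 375;           % wheel mass and quarter-car body mass, kg
ks = 130000; kw = 1000000;   % suspension and tire spring constants, N/m
b = 9800;                    % shock absorber coefficient, N*sec/m
s = tf('s');
sysQC = (kw*b/(m1*m2))*(s + ks/b)/(s^4 + (b/m1 + b/m2)*s^3 ...
    + (ks/m1 + ks/m2 + kw/m1)*s^2 + (kw*b/(m1*m2))*s + kw*ks/(m1*m2));
step(sysQC);                 % response of the body y to a unit step in r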

2.1.2 Rotational Motion

Application of Newton's law to one-dimensional rotational systems requires that Eq. (2.1) be modified to

\[M = I\alpha, \tag{2.10}\]

where

\(M =\) the sum of all external moments about the center of mass of a body, \(N \cdot m\),

\(I =\) the body's mass moment of inertia about its center of mass, \(kg \cdot m^{2}\),

\(\alpha =\) the angular acceleration of the body, \(rad/\sec^{2}\).

EXAMPLE 2.3

Rotational Motion: Satellite Attitude Control Model

Satellites, as shown in Fig. 2.7, usually require attitude control so antennas, sensors, and solar panels are properly oriented. Antennas are usually pointed toward a particular location on earth, while solar panels need to be oriented toward the sun for maximum power generation. To

Figure 2.7

Communications satellite

Source: Courtesy Thaicom PLC and Space Systems/Loral

gain insight into the full three-axis attitude control system, it is helpful to consider one axis at a time. Write the equations of motion for one axis of this system then show how they would be depicted in a block diagram. In addition, determine the transfer function of this system and construct the system as if it were to be evaluated via Matlab's Simulink.

Solution. Figure 2.8 depicts this case, where motion is allowed only about the axis perpendicular to the page. The angle \(\theta\) that describes the satellite orientation must be measured with respect to an inertial reference - that is, a reference that has no angular acceleration. The control force comes from reaction jets that produce a moment of \(F_{c}d\) about the mass center. There may also be small disturbance moments \(M_{D}\) on the satellite, which arise primarily from solar pressure acting on any asymmetry in the solar panels. Applying Eq. (2.10) yields the equation of motion

\[F_{c}d + M_{D} = I\overset{¨}{\theta}. \tag{2.11}\]

Double-integrator plant
The output of this system, \(\theta\), results from integrating the sum of the input torques twice; hence, this type of system is often referred to as

Figure 2.8

Satellite control schematic

Figure 2.9

Block diagrams representing Eq. (2.11) in the upper half and Eq. (2.12) in the lower half

Figure 2.10

Simulink block diagram of the double-integrator plant

\(1/s^{2}\) plant

the double-integrator plant. The transfer function can be obtained as described for Eq. (2.7) and is

\[\frac{\Theta(s)}{U(s)} = \frac{1}{I}\frac{1}{s^{2}}, \tag{2.12}\]

where \(U = F_{c}d + M_{D}\). In this form, the system is often referred to as the \(1/s^{2}\) plant.

Figure 2.9 shows a block diagram representing Eq. (2.11) in the upper half, and a block diagram representing Eq. (2.12) in the lower half. This simple system can be analyzed using the linear analysis techniques that will be described in later chapters, or via Matlab as we saw in Example 2.1. It can also be numerically evaluated for an arbitrary input time history using Simulink, which is a sister software package to Matlab for interactive, nonlinear simulation and has a graphical user interface with drag and drop properties. Figure 2.10 shows a block diagram of the system as depicted by Simulink.
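For readers without Simulink, the same double-integrator behavior can be examined with the transfer-function tools of Example 2.1. In this added sketch, the inertia value is purely illustrative.

I = 5000;            % assumed moment of inertia, kg*m^2 (illustrative value only)
s = tf('s');
sysSat = (1/I)/s^2;  % the 1/s^2 plant of Eq. (2.12)
step(sysSat);        % theta grows without bound under a constant unit torque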

EXAMPLE 2.4

Figure 2.11

Model of the flexible satellite
In many cases a system, such as the satellite shown in Fig. 2.7, has some flexibility in the structure. Depending on the nature of the flexibility, it can cause challenges in the design of a control system. Particular difficulty arises when there is flexibility between the sensor and actuator locations. Therefore, it is often important to include this flexibility in the model even when the system seems to be quite rigid.

34. Flexibility: Flexible Satellite Attitude Control

Figure 2.11(a) shows the situation where there is some flexibility between the satellite attitude sensor \(\left( \theta_{2} \right)\) and the body of the satellite \(\left( \theta_{1} \right)\) where the actuators are placed. Find the equations of motion and the transfer function relating the motion of the instrument package to a control torque applied to the body of the satellite. For comparison, also determine the transfer function between the control torque and the attitude of the body of the satellite, as if the sensors were located there. Retain the flexible model of the overall satellite for this second case, however.

Solution. The dynamic model for this situation is shown schematically in Fig. 2.11(b). This model is dynamically similar to the resonant system shown in Fig. 2.5, and results in equations of motion that are similar in form to Eqs. (2.8) and (2.9). The moments on each body are shown in the free-body diagrams in Fig. 2.12. The discussion of the moments on each body is essentially the same as the discussion for Example 2.2,

Figure 2.12

Free-body diagrams of the flexible satellite

Non-collocated sensor and actuator

Collocated sensor and actuator

except the springs and damper in that case produced forces, instead of moments, that act on each inertia, as in this case. When the moments are summed, equated to the accelerations according to Eq. (2.10), and rearranged, the result is

\[\begin{matrix} & I_{1}{\overset{¨}{\theta}}_{1} + b\left( {\overset{˙}{\theta}}_{1} - {\overset{˙}{\theta}}_{2} \right) + k\left( \theta_{1} - \theta_{2} \right) = T_{c} \\ & I_{2}{\overset{¨}{\theta}}_{2} + b\left( {\overset{˙}{\theta}}_{2} - {\overset{˙}{\theta}}_{1} \right) + k\left( \theta_{2} - \theta_{1} \right) = 0. \end{matrix}\]

Ignoring the damping \(b\) for simplicity, and substituting \(s\) for \(d/dt\) in the differential equations as we did for Example 2.2 yields

\[\begin{matrix} \left( I_{1}s^{2} + k \right)\Theta_{1}(s) - k\Theta_{2}(s) & \ = T_{c} \\ - k\Theta_{1}(s) + \left( I_{2}s^{2} + k \right)\Theta_{2}(s) & \ = 0 \end{matrix}\]

Using Cramer's Rule as we did for Example 2.2, we find the transfer function between the control torque, \(T_{c}\), and the sensor angle, \(\theta_{2}\), to be

\[\frac{\Theta_{2}(s)}{T_{c}(s)} = \frac{k}{I_{1}I_{2}s^{2}\left( s^{2} + \frac{k}{I_{1}} + \frac{k}{I_{2}} \right)}. \tag{2.13}\]

For the second case, where we assume the attitude sensor is on the main body of the satellite, we want the transfer function between the control torque, \(T_{c}\), and the satellite body angle, \(\theta_{1}\). Using Cramer's Rule again, we find that

\[\frac{\Theta_{1}(s)}{T_{c}(s)} = \frac{I_{2}s^{2} + k}{I_{1}I_{2}s^{2}\left( s^{2} + \frac{k}{I_{1}} + \frac{k}{I_{2}} \right)}. \tag{2.14}\]

These two cases are typical of many situations in which the sensor and actuator may or may not be placed in the same location in a flexible body. We refer to the situation between sensor and actuator in Eq. (2.13) as the "noncollocated" case, whereas Eq. (2.14) describes the "collocated" case. You will see in Chapter 5 that it is far more difficult to control a system when there is flexibility between the sensor and actuator (noncollocated case) than when the sensor and actuator are rigidly attached to one another (the collocated case).
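The contrast between the two cases can be seen by forming both transfer functions in Matlab. The inertias and stiffness in this added sketch are assumed values chosen only for illustration, not data from the example.

I1 = 1000; I2 = 100; k = 400;   % assumed inertias (kg*m^2) and stiffness (N*m/rad)
s = tf('s');
den = I1*I2*s^2*(s^2 + k/I1 + k/I2);
sysNC = k/den;                  % noncollocated case, Eq. (2.13)
sysC = (I2*s^2 + k)/den;        % collocated case, Eq. (2.14)
zero(sysC)                      % collocation adds zeros at +/- j*sqrt(k/I2)

Note that the collocated transfer function has a pair of imaginary-axis zeros from its \(I_{2}s^{2} + k\) numerator, which the noncollocated case lacks.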

EXAMPLE 2.5

Rotational Motion: Quadrotor Drone

Figure 2.13 shows a small drone with four rotors. Find the equations of motion between an appropriate command to the individual motors and the three degrees of freedom; that is, pitch, roll, and yaw as defined by Fig. 2.14. The \(x\) and \(y\) axes are in the horizontal plane, while the \(z\)-axis is straight down. For this example, we only wish to describe the situation for very small motion about the initially level position of the coordinate system shown in Fig. 2.14. Note rotors 1 and 3 are rotating clockwise (CW) and rotors 2 and 4 are rotating counterclockwise (CCW); therefore, rotors 1 and 3 have an angular momentum in the

Figure 2.13

Quadcopter with a camera

Source: narongpon chaibot/Shutterstock

Figure 2.14

Orientation of the four rotors and definition of the attitude angles

\(+ z\) direction, while rotors 2 and 4 have an angular momentum in the \(- z\) direction. Also, define what the "appropriate" commands would be in order to only produce the desired angular motion of pitch, roll, or yaw, without disturbing the other axes. The equations of motion for larger motions are complex and involve coupling between the axes as well as nonlinear terms due to angular motion, inertia asymmetry, and aerodynamics. These terms will be discussed in Chapter 10.

Solution. First, we need to establish what the commands should be to the motors attached to each of the four rotor blades in order to produce the desired motion without producing any undesired motion in another axis. Let's define the torque to each rotor as \(T_{1},T_{2},T_{3},T_{4}\). In steady hovering flight, there will be a torque applied to each rotor that maintains a steady rotor speed and thus a constant lift. The rotor speed stays constant because the torque from the motor just balances the aerodynamic drag on the rotor. If we were to add a perturbation that increased the torque magnitude applied to a rotor, the angular speed
would increase until it reached a new equilibrium with the drag, and the rotor would produce an increased amount of lift. Likewise, for a negative perturbation in the torque magnitude on a rotor, the speed of the rotor and the lift would decrease. Note rotors 1 and 3 are rotating in a positive (CW) direction, hence there will be positive torques \((T_{1}\) and \(T_{3})\) applied to those rotors, and negative torques \((T_{2}\) and \(T_{4})\) applied to rotors 2 and 4 to maintain their negative (CCW) rotation. Another important aspect of this arrangement results from Newton's Third Law, that is: For every action, there is an equal and opposite reaction. This law tells us there are equal and opposite torques applied on the motors. Thus there are negative torques being applied to the 1 and 3 motors, while there are positive torques being applied to the 2 and 4 motors. In steady hovering flight, the torques being applied to the motors are all of equal magnitude and the two positive torques cancel out the two negative torques; hence the body of the quadrotor has no net torque applied about the \(z\)-axis and there is no yaw motion produced. (This is not the case for a single-rotor helicopter, where there is a large reaction torque applied to the engine, and that torque must be balanced by the tail rotor mounted perpendicular to the large lift rotor on top.)

To produce a control action to increase pitch, \(\theta\), without producing a torque about the other two axes, it makes sense to apply a small increase to the torque on rotor 1 with an equally small decrease to the torque on rotor 3. Thus, there is no net increase in the overall lift on the drone, and there is no change in the balance of the torques on the rotors nor their reaction torques on the drone itself. However, the positive change in lift from rotor 1 coupled with the negative change in lift from rotor 3 will produce a positive torque about the \(y\)-axis, which will act to increase \(\theta\). Therefore, we produce the control torque for positive \(\theta\) motion, \(T_{\theta}\), by setting \(\delta T_{1} = + T_{\theta}\) and \(\delta T_{3} = - T_{\theta}\). Following Example 2.3, the transfer function for pitch is

\[\frac{\Theta(s)}{T_{\theta}(s)} = \frac{1}{I_{y}}\frac{1}{s^{2}} \]

Similarly, for roll control, we produce a positive roll torque, \(T_{\phi}\), by setting \(\delta T_{4} = - T_{\phi}\), thus increasing the negative rotation rate for rotor 4 and increasing its resulting lift. Furthermore, we set \(\delta T_{2} = + T_{\phi}\), which reduces the lift from rotor 2, thus keeping the overall lift constant and contributing to the desired roll torque. The resulting transfer function for roll is

\[\frac{\Phi(s)}{T_{\phi}(s)} = \frac{1}{I_{x}}\frac{1}{s^{2}} \]

Positive yaw control is accomplished by increasing the torque magnitude on rotors 2 and 4, while decreasing the torque magnitude on rotors 1 and 3 by an equal amount. This will increase the lift from rotors 2 and 4 while decreasing the lift on rotors 1 and 3, thus producing no net change in the lift and no torque that would influence \(\theta\) or \(\phi\). But,
the reaction torques will be in the positive direction for all four motors! This comes about because rotors 1 and 3 are rotating in a CW (positive) direction, so a decrease in the torque applied to their rotors is a negative perturbation, thus resulting in positive reaction torques on the motors. Rotors 2 and 4 are rotating in a CCW (negative) direction, so an increase in the torque magnitude applied to their rotors is also a negative perturbation, thus adding to the positive reaction torque applied to the motors. Therefore, the control torque for positive \(\psi\) motion, \(T_{\psi}\), is produced by setting \(\delta T_{1} = \delta T_{2} = \delta T_{3} = \delta T_{4} = - T_{\psi}\). The resulting transfer function is

\[\frac{\Psi(s)}{T_{\psi}(s)} = \frac{1}{I_{z}}\frac{1}{s^{2}} \]

These three equations assume there is small motion from the horizontal orientation; thus any damping from aerodynamic forces is assumed negligible and the equations remain linear.

This example shows why quadrotors have become so popular for small drones; the configuration is simple and well balanced, and it requires no complex mechanism to balance the torques. All the control can be accomplished by simply controlling the torque to the four rotors. Furthermore, with the definitions developed above for the motor commands and repeated here,

\[\begin{matrix} \text{For pitch:} & \delta T_{1} = + T_{\theta},\quad \delta T_{3} = - T_{\theta}, \\ \text{For roll:} & \delta T_{2} = + T_{\phi},\quad \delta T_{4} = - T_{\phi}, \\ \text{For yaw:} & \delta T_{1} = \delta T_{2} = \delta T_{3} = \delta T_{4} = - T_{\psi}, \end{matrix}\]

the dynamics for each attitude degree of freedom are uncoupled from the motion in the other axes.
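To make this bookkeeping concrete, the three attitude commands can be superimposed into the four rotor-torque perturbations with a single matrix multiply. The following Matlab fragment is a minimal sketch; the command values and the unit mixing gains are illustrative assumptions rather than quantities from the example.

% Rotor mixing from the pitch, roll, and yaw definitions above
Tth = 0.01; Tph = 0; Tps = 0;   % commanded T_theta, T_phi, T_psi (illustrative)
Mix = [ 1   0  -1;              % dT1 = +T_theta - T_psi
        0   1  -1;              % dT2 = +T_phi   - T_psi
       -1   0  -1;              % dT3 = -T_theta - T_psi
        0  -1  -1];             % dT4 = -T_phi   - T_psi
dT = Mix*[Tth; Tph; Tps]        % torque perturbations for rotors 1-4

Because each column of the mixing matrix produces a pure pitch, roll, or yaw effect, any one command perturbs only its own axis, which is the uncoupling claimed above.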

In the special case in which a point in a rotating body is fixed with respect to an inertial reference frame, as is the case with a pendulum, Eq. (2.10) can be applied such that \(M\) is the sum of all moments about the fixed point, and \(I\) is the moment of inertia about the fixed point.

EXAMPLE 2.6

  1. Write the equations of motion for the simple pendulum shown in Fig. 2.15, where all the mass is concentrated at the end point and there is a torque, \(T_{c}\), applied at the pivot.

  2. Use Matlab to determine the time history of \(\theta\) to a step input in \(T_{c}\) of \(1\text{ }N \cdot m\). Assume \(l = 1\text{ }m,m = 1\text{ }kg\), and \(g = 9.81\text{ }m/\sec^{2}\).

Solution

  1. Equations of motion: The moment of inertia about the pivot point is \(I = ml^{2}\). The sum of moments about the pivot point contains a

Figure 2.15

Pendulum

term from gravity as well as the applied torque \(T_{c}\). The equation of motion, obtained from Eq. (2.10), is

\[T_{c} - mglsin\theta = I\overset{¨}{\theta} \]

which is usually written in the form

\[\overset{¨}{\theta} + \frac{g}{l}sin\theta = \frac{T_{c}}{ml^{2}} \]

This equation is nonlinear due to the \(sin\theta\) term. A general discussion of nonlinear equations will be contained in Chapter 9; however, we can proceed with a linearization of this case by assuming the motion is small enough that \(sin\theta \cong \theta\). Then, Eq. (2.22) becomes the linear equation

\[\overset{¨}{\theta} + \frac{g}{l}\theta = \frac{T_{c}}{ml^{2}} \]

With no applied torque, the natural motion is that of a harmonic oscillator with a natural frequency of \(\ ^{4}\)

\[\omega_{n} = \sqrt{\frac{g}{l}} \]

The transfer function can be obtained as described for Eq. (2.7), yielding

\[\frac{\Theta(s)}{T_{c}(s)} = \frac{\frac{1}{ml^{2}}}{s^{2} + \frac{g}{l}} \]

  2. Time history: The dynamics of a system can be specified in Matlab in terms of its transfer function, and the step response can be obtained via the step function. The Matlab statements

s = tf('s');             % define the Laplace variable
sysTF = 1/(s^2 + 9.81);  % pendulum transfer function with m = 1 kg, l = 1 m
step(sysTF);             % compute and plot the unit step response

will produce the desired time history shown in Fig. 2.16.

As we saw in this example, the resulting equations of motion are often nonlinear. Such equations are much more difficult to solve than linear ones, and the kinds of possible motions resulting from a nonlinear model are much more difficult to categorize than those resulting from a linear model. It is therefore useful to linearize models in order to gain access to linear analysis methods. It may be that the linear models and linear analysis are used only for the design of the control system (whose function may be to maintain the system in the linear region). Once a control system is synthesized and shown to have desirable performance based on linear analysis, it is then prudent to carry out further analysis or an accurate numerical simulation of the system with the significant nonlinearities in order to validate that performance. Simulink is an expedient way to carry out these simulations and can handle most nonlinearities.

Simulink

Figure 2.16

Response of the pendulum to a step input of \(1\text{ }N \cdot m\) in the applied torque

Figure 2.17

The Simulink block diagram representing the linear equation (2.26)

Figure 2.18

The Simulink block diagram representing the nonlinear equation (2.27)
EXAMPLE 2.7

Use of this simulation tool is carried out by constructing a block diagram \(\ ^{5}\) that represents the equations of motion. The linear equation of motion for the pendulum with the parameters as specified in Example 2.6 can be seen from Eq. (2.23) to be

\[\overset{¨}{\theta} = - 9.81\,\theta + 1, \]

and this is represented in Simulink by the block diagram in Fig. 2.17. Note the circle on the left side of the figure, with the + and - signs indicating addition and subtraction, implements Eq. (2.26).

The result of running this numerical simulation will be essentially identical to the linear solution shown in Fig. 2.16 because the solution is for relatively small angles where \(sin\theta \cong \theta\). However, using Simulink to solve for the response enables us to simulate the nonlinear equation so we could analyze the system for larger motions. In this case, Eq. (2.26) becomes

\[\overset{¨}{\theta} = - 9.81\,sin\theta + 1, \]

and the Simulink block diagram shown in Fig. 2.18 implements this nonlinear equation.
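For readers who prefer a script to a block diagram, the same nonlinear simulation can be run with one of Matlab's ODE solvers. The sketch below integrates Eq. (2.27) written in first-order form; the zero initial conditions and the 10-sec horizon are illustrative choices.

% Nonlinear pendulum, Eq. (2.27): thetaddot = -9.81*sin(theta) + 1
% State vector z = [theta; thetadot]
f = @(t, z) [z(2); -9.81*sin(z(1)) + 1];
[t, z] = ode45(f, [0 10], [0; 0]);   % simulate 10 sec starting from rest
plot(t, z(:,1)), xlabel('Time (sec)'), ylabel('\theta (rad)')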

Simulink is capable of simulating all commonly encountered nonlinearities, including deadzones, on-off functions, stiction, hysteresis, aerodynamic drag (a function of \(v^{2}\) ), and trigonometric functions. All real systems have one or more of these characteristics in varying degrees. These nonlinearities will be expanded upon in detail in Chapter 9.

Use of Simulink for Nonlinear Motion: Pendulum

Use Simulink to determine the time history of \(\theta\) for the pendulum in Example 2.6. Compare it against the linear solution for \(T_{c}\) values of \(1\text{ }N \cdot m\) and \(4\text{ }N \cdot m\).

Figure 2.19

Block diagram of the pendulum for both the linear and nonlinear models

EXAMPLE 2.8

Solution. Time history: The Simulink block diagrams for the two cases discussed above are combined, and both outputs in Figs. 2.17 and 2.18 are sent via a "multiplexer block (Mux)" to the "scope" so they can be plotted on the same graph. Figure 2.19 shows the combined block diagram, where the gain, \(K\), represents the values of \(T_{c}\). The outputs of this system for \(T_{c}\) values of \(1\text{ }N \cdot m\) and \(4\text{ }N \cdot m\) are shown in Fig. 2.20. Note for \(T_{c} = 1\text{ }N \cdot m\), the outputs at the top of the figure remain at \(12^{\circ}\) or less, and the linear approximation is extremely close to the nonlinear output. For \(T_{c} = 4\text{ }N \cdot m\), the output angle grows to nearly \(50^{\circ}\), and a substantial difference in the response magnitude and frequency is apparent due to \(\theta\) being a poor approximation to \(sin\theta\) at these magnitudes. In fact, since \(sin\theta\) is smaller than \(\theta\) at the higher angles, signifying a reduced gravitational restoring force, we see an increased amplitude and a lower frequency.

Chapter 9 will be devoted to the analysis of nonlinear systems and greatly expands on these ideas.

2.1.3 Combined Rotation and Translation

In some cases, mechanical systems contain both translational and rotational portions. The procedure is the same as that described in Sections 2.1.1 and 2.1.2: sketch the free-body diagrams, define coordinates and positive directions, determine all forces and moments acting, and apply Eqs. (2.1) and/or (2.10). An exact derivation of the equations for these systems can become quite involved; therefore, the complete analysis for the following example is contained in Appendix W2.1.4 located at www.pearsonglobaleditions.com, and only the linearized equations of motion and their transfer functions are given here.

Rotational and Translational Motion: Hanging Crane

Write the equations of motion for the hanging crane shown schematically in Fig. 2.21. Linearize the equations about \(\theta = 0\), which would typically be valid for the hanging crane. Also, linearize the equations for

Figure 2.20

Response of the pendulum from the Simulink numerical simulation for the linear and nonlinear models:

(a) for \(T_{c} = 1\text{ }N \cdot m\);

(b) \(T_{c} = 4\text{ }N \cdot m\)

Figure 2.21

Schematic of the crane with hanging load

\(\theta = \pi\), which represents the situation for the inverted pendulum shown in Fig. 2.22. The trolley has mass \(m_{t}\) and the hanging crane (or pendulum) has mass \(m_{p}\) and inertia about its mass center of \(I\). The distance from the pivot to the mass center of the pendulum is \(l\); therefore, the moment of inertia of the pendulum about the pivot point is \(\left( I + m_{p}l^{2} \right)\).

Figure 2.22

Inverted pendulum

Solution. Free-body diagrams need to be drawn for the trolley and the pendulum and the reaction forces considered where the two attach to one another. We carry out this process in Appendix W2.1.4. After Newton's laws are applied for the translational motion of the trolley and the rotational motion of the pendulum, it will be found that the reaction forces between the two bodies can be eliminated, and the only unknowns will be \(\theta\) and \(x\). The results are two coupled second-order nonlinear differential equations in \(\theta\) and \(x\) with the input being the force applied to the trolley, \(u\). They can be linearized in a manner similar to that done for the simple pendulum by assuming small angles. For small motions about \(\theta = 0\), we let \(cos\theta \cong 1,sin\theta \cong \theta\), and \({\overset{˙}{\theta}}^{2} \cong 0\); thus the equations are approximated by

\[\begin{matrix} \left( I + m_{p}l^{2} \right)\overset{¨}{\theta} + m_{p}gl\theta & \ = - m_{p}l\overset{¨}{x} \\ \left( m_{t} + m_{p} \right)\overset{¨}{x} + b\overset{˙}{x} + m_{p}l\overset{¨}{\theta} & \ = u. \end{matrix}\]

Note the first equation is very similar to the simple pendulum, Eq. (2.21), where the applied torque arises from the trolley accelerations. Likewise, the second equation representing the trolley motion, \(x\), is very similar to the car translation in Eq. (2.3), where the forcing term arises from the angular acceleration of the pendulum. Eliminating \(x\) in these two coupled equations leads to the desired transfer function. Neglecting the friction term, \(b\), simplifies the algebra and leads to an approximate transfer function from the control input \(u\) to hanging crane angle \(\theta\) :

\[\frac{\Theta(s)}{U(s)} = \frac{- m_{p}l}{\left( \left( I + m_{p}l^{2} \right)\left( m_{t} + m_{p} \right) - m_{p}^{2}l^{2} \right)s^{2} + m_{p}gl\left( m_{t} + m_{p} \right)} \]

Inverted pendulum equations

For the inverted pendulum in Fig. 2.22, where \(\theta \cong \pi\), assume \(\theta = \pi + \theta^{'}\), where \(\theta^{'}\) represents motion from the vertical upward direction. In this case, \(sin\theta \cong - \theta^{'},cos\theta \cong - 1\), and the nonlinear equations become \(\ ^{6}\)

\[\begin{matrix} \left( I + m_{p}l^{2} \right){\overset{¨}{\theta}}^{'} - m_{p}gl\theta^{'} & \ = m_{p}l\overset{¨}{x} \\ \left( m_{t} + m_{p} \right)\overset{¨}{x} + b\overset{˙}{x} - m_{p}l{\overset{¨}{\theta}}^{'} & \ = u. \end{matrix}\]

As noted in Example 2.2, a stable system will always have the same signs on each variable, which is the case for the stable hanging crane modeled by Eqs. (2.28). However, the signs on \(\theta\) and \(\overset{¨}{\theta}\) in the first equation in Eq. (2.30) are opposite, thus indicating instability, which is the characteristic of the inverted pendulum.

The transfer function, again without friction, is

\[\frac{\Theta^{'}(s)}{U(s)} = \frac{m_{p}l}{\left( \left( I + m_{p}l^{2} \right)\left( m_{t} + m_{p} \right) - m_{p}^{2}l^{2} \right)s^{2} - m_{p}gl\left( m_{t} + m_{p} \right)} \]

Evaluation of this transfer function for an infinitesimal step in \(u\) will result in a diverging value of \(\theta^{'}\), thus requiring feedback to remain upright, a subject for Chapter 5.
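The stability contrast between the hanging and inverted cases can be confirmed numerically. In the following Matlab sketch, all parameter values are illustrative assumptions; the hanging-crane denominator yields a purely imaginary pole pair, while the inverted-pendulum denominator yields a pole in the right half-plane.

mp = 0.5; mt = 2; l = 0.75; I = 0.1; g = 9.81;   % illustrative values (SI)
d2 = (I + mp*l^2)*(mt + mp) - mp^2*l^2;          % common s^2 coefficient
Gcrane = tf(-mp*l, [d2 0  mp*g*l*(mt + mp)]);    % hanging crane
Ginv   = tf( mp*l, [d2 0 -mp*g*l*(mt + mp)]);    % inverted pendulum
pole(Gcrane)   % imaginary pair: undamped oscillation
pole(Ginv)     % one positive real pole: unstable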

In Chapter 5, you will learn how to stabilize systems using feedback and will see that even unstable systems like an inverted pendulum can be stabilized provided there is a sensor that measures the output quantity and a control input. For the case of the inverted pendulum perched on a trolley, it would be required to measure the pendulum angle, \(\theta^{'}\), and provide a control input, \(u\), that accelerated the trolley in such a way that the pendulum remained pointing straight up. In years past, this system existed primarily in university control system laboratories as an educational tool. However, more recently, there is a practical device in production and being sold that employs essentially this same dynamic system: the Segway. It uses a gyroscope so the angle of the device is known with respect to vertical, and electric motors provide a torque on the wheels so it balances the device and provides the desired forward or backward motion. It is shown in Fig. 2.23.

2.1.4 Complex Mechanical Systems

This section contains the derivation of the equations of motion for mechanical systems. In particular, it contains the full derivation of the equations of motion for the hanging crane in Example 2.8 and the inverted pendulum on a cart. See Appendix W2.1.4 at www.pearsonglobaleditions.com.

2.1.5 Distributed Parameter Systems

All the preceding examples contained one or more rigid bodies, although some were connected to others by springs. Actual structures (for example, satellite solar panels, airplane wings, or robot arms) usually bend, as shown by the flexible beam in Fig. 2.24(a). The equation describing its motion is a fourth-order partial differential equation that arises because the mass elements are continuously distributed along the beam with a small amount of flexibility between

Figure 2.23

The Segway, which is similar to the inverted pendulum and is kept upright by a feedback control system

Source: Photo courtesy of David Powell

elements. This type of system is called a distributed parameter system. The dynamic analysis methods presented in this section are not sufficient to analyze this case; however, more advanced texts (Thomson and Dahleh, 1998) show the result is

\[EI\frac{\partial^{4}w}{\partial x^{4}} + \rho\frac{\partial^{2}w}{\partial t^{2}} = 0 \]

where

\[\begin{matrix} E & \ = \text{~}\text{Young's modulus,}\text{~} \\ I & \ = \text{~}\text{beam area moment of inertia,}\text{~} \\ \rho & \ = \text{~}\text{beam density,}\text{~} \\ w & \ = \text{~}\text{beam deflection at length}\text{~}x\text{~}\text{along the beam.}\text{~} \end{matrix}\]

The exact solution to Eq. (2.32) is too cumbersome to use in designing control systems, but it is often important to account for the gross effects of bending in control systems design.

The continuous beam in Fig. 2.24(b) has an infinite number of vibration-mode shapes, all with different frequencies. Typically, the lowest-frequency modes have the largest amplitude and are the most

Figure 2.24

(a) Flexible robot arm used for research at Stanford University; (b) model for continuous flexible beam; (c) simplified model for the first bending mode;

(d) model for the first and second bending modes

Source: Photo courtesy of E. Schmitz

A flexible structure can be approximated by a lumped parameter model

important to approximate well. The simplified model in Fig. 2.24(c) can be made to duplicate the essential behavior of the first bending mode shape and frequency, and would usually be adequate for controller design. If frequencies higher than the first bending mode are anticipated in the control system operation, it may be necessary to model the beam as shown in Fig. 2.24(d), which can be made to approximate the first two bending modes and frequencies. Likewise, higher-order models can be used if such accuracy and complexity are deemed necessary (Schmitz, 1985; Thomson and Dahleh, 1998). When a continuously bending object is approximated as two or more rigid bodies connected by springs, the resulting model is sometimes referred to as a lumped parameter model.

2.1.6 Summary: Developing Equations of Motion for Rigid Bodies

The physics necessary to write the equations of motion of a rigid body is entirely given by Newton's laws of motion. The method is as follows:

  1. Assign variables such as \(x\) and \(\theta\) that are both necessary and sufficient to describe an arbitrary position of the object.

  2. Draw a free-body diagram of each component. Indicate all forces acting on each body and their reference directions. Also indicate the accelerations of the center of mass with respect to an inertial reference for each body.

  3. Apply Newton's law in translation [Eq. (2.1)] and/or rotation [Eq. (2.10)] form.

  4. Combine the equations to eliminate internal forces.

  5. The number of independent equations should equal the number of unknowns.

2.2 Models of Electric Circuits

Electric circuits are frequently used in control systems largely because of the ease of manipulation and processing of electric signals. Although controllers are increasingly implemented with digital logic, many functions are still performed with analog circuits. Analog circuits are faster than digital and, for very simple controllers, an analog circuit would be less expensive than a digital implementation. Furthermore, the power amplifier for electromechanical control and the anti-alias prefilters for digital control must be analog circuits.

Electric circuits consist of interconnections of sources of electric voltage and current, and other electronic elements such as resistors, capacitors, and transistors. An important building block for circuits is an operational amplifier (or op-amp), \(\ ^{7}\) which is also an example of a complex feedback system. Some of the most important methods of feedback system design were developed by the designers of high-gain, wide-bandwidth feedback amplifiers, mainly at the Bell Telephone Laboratories between 1925 and 1940. Electric and electronic components also play a central role in electromechanical energy conversion devices such as electric motors, generators, and electrical sensors. In this brief survey, we cannot derive the physics of electricity or give a comprehensive review of all the important analysis techniques. We will define the variables, describe the relations imposed on them by typical elements and circuits, and describe a few of the most effective methods available for solving the resulting equations.

Symbols for some linear circuit elements and their current-voltage relations are given in Fig. 2.25. Passive circuits consist of interconnections of resistors, capacitors, and inductors. With electronics, we increase the set of electrical elements by adding active devices, including diodes, transistors, and amplifiers.

Kirchhoff's laws

Figure 2.25

Elements of electric circuits
The basic equations of electric circuits, called Kirchhoff's laws, are as follows:

  1. Kirchhoff's current law (KCL). The algebraic sum of currents leaving a junction or node equals the algebraic sum of currents entering that node.

  2. Kirchhoff's voltage law (KVL). The algebraic sum of all voltages taken around a closed path in a circuit is zero.

With complex circuits of many elements, it is essential to write the equations in a careful, well-organized way. Of the numerous methods for doing this, we choose for description and illustration the popular and powerful scheme known as node analysis. One node is selected as a reference and we assume the voltages of all other nodes to be unknowns. The choice of reference is arbitrary in theory, but in actual electronic circuits the common, or ground, terminal is the obvious and standard choice. Next, we write equations for the selected unknowns using the current law (KCL) at each node. We express these currents in terms of the selected unknowns by using the element equations in Fig. 2.25. If the circuit contains voltage sources, we must substitute a voltage law (KVL) for such sources. Example 2.9 illustrates how node analysis works.

Figure 2.26

Bridged tee circuit

EXAMPLE 2.10

Equations for a Circuit with a Current Source

Determine the differential equations for the circuit shown in Fig. 2.27. Choose the capacitor voltages and the inductor current as the unknowns.

Solution. We select node 3 as the reference and the voltages \(v_{1}\) and \(v_{2}\), and the current through the inductor, \(i_{L}\), as unknowns. We start with the KCL relationships:

At node 1, the KCL is

\[i(t) = i_{L} + i_{1} \]

and at node 2, the KCL is

\[i_{L} + i_{1} = i_{2} + i_{3} \]



Figure 2.27

Circuit for Example 2.10

Furthermore, from Fig. 2.27, we see that

\[\begin{matrix} i_{3} & \ = \frac{v_{2}}{R_{2}}, \\ i_{1} & \ = C_{1}\frac{dv_{1}}{dt}, \\ i_{2} & \ = C_{2}\frac{dv_{2}}{dt}, \\ v_{R} & \ = i_{L}R, \\ L\frac{di_{L}}{dt} & \ = v_{1} - v_{R}. \end{matrix}\]

These reduce to three differential equations in the three unknowns,

\[\begin{matrix} L\frac{di_{L}}{dt} & \ = v_{1} - i_{L}R \\ C_{1}\frac{dv_{1}}{dt} & \ = i(t) - i_{L} \\ C_{2}\frac{dv_{2}}{dt} & \ = i(t) - \frac{v_{2}}{R_{2}} \end{matrix}\]
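These three first-order equations translate directly into a state-space model with state \(\left( i_{L},v_{1},v_{2} \right)\) and input \(i(t)\), which Matlab can then simulate. The component values in the sketch below are illustrative assumptions.

R = 1; R2 = 2; L = 0.5; C1 = 1e-3; C2 = 2e-3;   % illustrative values
A = [-R/L    1/L     0;
     -1/C1   0       0;
      0      0   -1/(R2*C2)];
B = [0; 1/C1; 1/C2];
C = [0 0 1];                 % take v2 as the output
sys = ss(A, B, C, 0);
step(sys)                    % v2 response to a unit step in i(t)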

Kirchhoff's laws can also be applied to circuits that contain an operational amplifier. The simplified circuit of the op-amp is shown in Fig. 2.28(a) and the schematic symbol is drawn in Fig. 2.28(b). If the positive terminal is not shown, it is assumed to be connected to ground, \(v_{+} = 0\), and the reduced symbol of Fig. 2.28(c) is used. For use in control circuits, it is usually assumed that the op-amp is ideal with the values \(R_{1} = \infty,R_{0} = 0\), and \(A = \infty\). The equations of the ideal op-amp are extremely simple, being

\[\begin{matrix} i_{+} & \ = i_{-} = 0, \\ v_{+} - v_{-} & \ = 0. \end{matrix}\]

The gain of the amplifier is assumed to be so high that the output voltage becomes \(v_{\text{out}} =\) whatever it takes to satisfy these equations. Of course, a real amplifier only approximates these equations, but unless they are specifically described, we will assume all op-amps are ideal. More realistic models are the subject of several problems given at the end of the chapter.

Figure 2.28

(a) Op-amp simplified circuit; (b) op-amp schematic symbol;

(c) reduced symbol for \(v_{+} = 0\)

EXAMPLE 2.11

The op-amp summer

Figure 2.29

The op-amp summer


Op-Amp Summer

Find the equations and transfer functions of the circuit shown in Fig. 2.29.

Solution. Equation (2.47) requires that \(v_{-} = 0\), and thus the currents are \(i_{1} = v_{1}/R_{1},i_{2} = v_{2}/R_{2}\), and \(i_{\text{out}\text{~}} = v_{\text{out}\text{~}}/R_{f}\). To satisfy Eq. (2.46), \(i_{1} + i_{2} + i_{\text{out}\text{~}} = 0\), from which it follows that \(v_{1}/R_{1} + v_{2}/R_{2} + v_{\text{out}\text{~}}/R_{f} = 0\), and we have

\[v_{\text{out}\text{~}} = - \left\lbrack \frac{R_{f}}{R_{1}}v_{1} + \frac{R_{f}}{R_{2}}v_{2} \right\rbrack \]

From this equation, we see the circuit output is a weighted sum of the input voltages with a sign change. The circuit is called a summer.

A second important example for control is given by the op-amp integrator.

Figure 2.30

The op-amp integrator

EXAMPLE 2.12

Op-amp as integrator

Law of motors

EXAMPLE 2.13

2.3.1 Loudspeakers

Modeling a Loudspeaker

A typical geometry of a loudspeaker for producing sound is sketched in Fig. 2.31. The permanent magnet establishes a radial field in the cylindrical gap between the poles of the magnet. The force on the conductor

wound on the bobbin causes the voice coil to move, producing sound. The effects of the air can be modeled as if the cone had equivalent mass \(M\) and viscous friction coefficient \(b\). Assume the magnet establishes a uniform field \(B\) of 0.4 tesla and the bobbin has 18 turns at a 1.9-cm diameter. Write the equations of motion of the device.

Figure 2.31

Geometry of a loudspeaker: (a) overall configuration; (b) the electromagnet and voice coil

Solution. The current is at right angles to the field, and the force of interest is at right angles to the plane of \(i\) and \(B\), so Eq. (2.53), the law of motors \(F = Bli\), applies. In this case the field strength is \(B = 0.4\) tesla and the conductor length is

\[l = 18 \times 2\pi\frac{0.95}{100} = 1.074\text{ }m \]

Thus, the force is

\[F = 0.4 \times 1.074 \times i = 0.43i\text{ }N. \]

The mechanical equation follows from Newton's laws, and for a mass \(M\) and friction coefficient \(b\), the equation is

\[M\overset{¨}{x} + b\overset{˙}{x} = 0.43i. \]

This second-order differential equation describes the motion of the loudspeaker cone as a function of the input current \(i\) driving the system. Substituting \(s\) for \(d/dt\) in Eq. (2.54) as before, the transfer function is easily found to be

\[\frac{X(s)}{I(s)} = \frac{\frac{0.43}{M}}{s\left( s + \frac{b}{M} \right)} \]

The second important electromechanical relationship is the effect of mechanical motion on electric voltage. If a conductor of length \(l\text{ }m\) is moving in a magnetic field of \(B\) teslas at a velocity of \(v\text{ }m/sec\) at mutually right angles, an electric voltage is established across the conductor with magnitude

\[e = Blv\text{ }V \]

Figure 2.32

A loudspeaker showing the electric circuit

This expression is called the law of generators.

EXAMPLE 2.14

Loudspeaker with Circuit

For the loudspeaker in Fig. 2.31 and the circuit driving it in Fig. 2.32, find the differential equations relating the input voltage \(v_{a}\) to the output cone displacement \(x\). Assume the effective circuit resistance is \(R\) and the inductance is \(L\).

Solution. The loudspeaker motion satisfies Eq. (2.54), and the motion results in a voltage across the coil as given by Eq. (2.56), with the velocity \(\overset{˙}{x}\). The resulting voltage is

\[e_{\text{coil}\text{~}} = Bl\overset{˙}{x} = 0.43\overset{˙}{x} \]

This induced voltage effect needs to be added to the analysis of the circuit. The equation of motion for the electric circuit is

\[L\frac{di}{dt} + Ri = v_{a} - 0.43\overset{˙}{x} \]

These two coupled equations, (2.54) and (2.58), constitute the dynamic model for the loudspeaker.

Again, substituting \(s\) for \(d/dt\) in these equations and replacing all the parameters with the given numerical values, the transfer function between the applied voltage and the loudspeaker displacement is found to be

\[\frac{X(s)}{V_{a}(s)} = \frac{0.43}{s\left\lbrack (Ms + b)(Ls + R) + (0.43)^{2} \right\rbrack} \]
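With numbers attached, this transfer function can be examined directly in Matlab. Only the value \(Bl = 0.43\) comes from the example; the remaining parameters below are illustrative assumptions.

M = 0.02; b = 0.5; R = 8; L = 1e-3; Bl = 0.43;   % illustrative except Bl
s = tf('s');
G = Bl/(s*((M*s + b)*(L*s + R) + Bl^2));         % X(s)/Va(s) from above
step(G)                                          % cone motion for a 1-V step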

2.3.2 Motors

A common actuator based on the laws of motors and generators and used in control systems is the direct current (DC) motor, used to provide rotary motion. A sketch of the basic components of a DC motor is given in Fig. 2.33. In addition to housing and bearings, the nonturning part (stator) has magnets, which establish a field across the rotor. The magnets may be electromagnets or, for small motors, permanent magnets. The brushes contact the rotating commutator, which causes the current

Figure 2.33

Sketch of a DC motor

Back emf

always to be in the proper conductor windings so as to produce maximum torque. If the direction of the current is reversed, the direction of the torque is reversed.

The motor equations give the torque \(T\) on the rotor in terms of the armature current \(i_{a}\) and express the back emf voltage in terms of the shaft's rotational velocity \({\overset{˙}{\theta}}_{m}\ ^{8}\)

Thus,

\[\begin{matrix} T & \ = K_{t}i_{a} \\ e & \ = K_{e}{\overset{˙}{\theta}}_{m} \end{matrix}\]

In consistent units, the torque constant \(K_{t}\) equals the electric constant \(K_{e}\), but in some cases, the torque constant will be given in other units, such as ounce-inches per ampere, and the electric constant may be expressed in units of volts per 1000 rpm. In such cases, the engineer must make the necessary translations to be certain the equations are correct.

EXAMPLE 2.15

Modeling a DC Motor

Find the equations for a DC motor with the equivalent electric circuit shown in Fig. 2.34(a). Assume the rotor has inertia \(J_{m}\) and viscous friction coefficient \(b\).

Solution. The free-body diagram for the rotor, shown in Fig. 2.34(b), defines the positive direction and shows the two applied torques, \(T\) and \(b{\overset{˙}{\theta}}_{m}\). Application of Newton's laws yields

\[J_{m}{\overset{¨}{\theta}}_{m} + b{\overset{˙}{\theta}}_{m} = K_{t}i_{a} \]

Analysis of the electric circuit, including the back emf voltage, shows the electrical equation to be

\[L_{a}\frac{di_{a}}{dt} + R_{a}i_{a} = v_{a} - K_{e}{\overset{˙}{\theta}}_{m} \]

Figure 2.34

DC motor: (a) electric circuit of the armature;

(b) free-body diagram of the rotor


With \(s\) substituted for \(d/dt\) in Eqs. (2.62) and (2.63), the transfer function for the motor is readily found to be

\[\frac{\Theta_{m}(s)}{V_{a}(s)} = \frac{K_{t}}{s\left\lbrack \left( J_{m}s + b \right)\left( L_{a}s + R_{a} \right) + K_{t}K_{e} \right\rbrack} \]

In many cases the relative effect of the inductance is negligible compared with the mechanical motion and can be neglected in Eq. (2.63). If so, we can combine Eqs. (2.62) and (2.63) into one equation to get

\[J_{m}{\overset{¨}{\theta}}_{m} + \left( b + \frac{K_{t}K_{e}}{R_{a}} \right){\overset{˙}{\theta}}_{m} = \frac{K_{t}}{R_{a}}v_{a} \]

From Eq. (2.65) it is clear that in this case the effect of the back emf is indistinguishable from the friction, and the transfer function is

\[\begin{matrix} \frac{\Theta_{m}(s)}{V_{a}(s)} & \ = \frac{\frac{K_{t}}{R_{a}}}{J_{m}s^{2} + \left( b + \frac{K_{t}K_{e}}{R_{a}} \right)s} \\ & \ = \frac{K}{s(\tau s + 1)} \end{matrix}\]

where

\[\begin{matrix} K & \ = \frac{K_{t}}{bR_{a} + K_{t}K_{e}} \\ \tau & \ = \frac{R_{a}J_{m}}{bR_{a} + K_{t}K_{e}} \end{matrix}\]

In many cases, a transfer function between the motor input and the output speed \(\left( \omega = {\overset{˙}{\theta}}_{m} \right)\) is required. In such cases, the transfer function would be

\[\frac{\Omega(s)}{V_{a}(s)} = s\frac{\Theta_{m}(s)}{V_{a}(s)} = \frac{K}{\tau s + 1} \]
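A short Matlab sketch makes the comparison between the full motor model and the first-order speed approximation concrete; the motor constants below are illustrative assumptions.

Kt = 0.05; Ke = 0.05; Ra = 1.2; La = 2e-3; Jm = 1e-4; b = 1e-5;  % illustrative
s = tf('s');
Gfull = Kt/(s*((Jm*s + b)*(La*s + Ra) + Kt*Ke));   % full model, position/voltage
K   = Kt/(b*Ra + Kt*Ke);
tau = Ra*Jm/(b*Ra + Kt*Ke);
Gspeed = K/(tau*s + 1);                            % first-order speed model
step(s*Gfull, Gspeed)    % compare speed responses with and without La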

AC motor actuators

Another device used for electromechanical energy conversion is the alternating current (AC) induction motor invented by N. Tesla. Elementary analysis of the AC motor is more complex than that of the DC motor. A typical experimental set of curves of torque versus speed for
fixed frequency and varying amplitude of applied (sinusoidal) voltage is given in Fig. 2.35. Although the data in the figure are for a constant engine speed, they can be used to extract the motor constants that will provide a dynamic model for the motor. For analysis of a control problem involving an \(AC\) motor such as that described by Fig. 2.35, we make a linear approximation to the curves for speed near zero and at a midrange voltage to obtain the expression

\[T = K_{1}v_{a} - K_{2}{\overset{˙}{\theta}}_{m} \]

The constant \(K_{1}\) represents the ratio of a change in torque to a change in voltage at zero speed, and is proportional to the distance between the curves at zero speed. The constant \(K_{2}\) represents the ratio of a change in torque to a change in speed at zero speed and a midrange voltage; therefore, it is the slope of a curve at zero speed as shown by the line at \(V_{2}\). For the electrical portion, values for the armature resistance \(R_{a}\) and inductance \(L_{a}\) are also determined by experiment. Once we have values for \(K_{1},K_{2},R_{a}\), and \(L_{a}\), the analysis proceeds as the analysis in Example 2.15 for the DC motor. For the case in which the inductor can be neglected, we can substitute \(K_{1}\) and \(K_{2}\) into Eq. (2.65) in place of \(K_{t}/R_{a}\) and \(K_{t}K_{e}/R_{a}\), respectively.

In addition to the \(DC\) and \(AC\) motors mentioned here, control systems use brushless DC motors (Reliance Motion Control Corp., 1980) and stepping motors (Kuo, 1980). Models for these machines, developed in the works just cited, do not differ in principle from the motors considered in this section. In general, the analysis, supported by experiment, develops the torque as a function of voltage and speed similar to the AC motor torque-speed curves given in Fig. 2.35. From such curves, one can obtain a linearized formula such as Eq. (2.71) to use in


Figure 2.35

Torque-speed curves for a servo motor: (a) low-rotor-resistance machine; (b) high-rotor-resistance machine, showing four values of armature voltage, \(v_{a}\)
the mechanical part of the system and an equivalent circuit consisting of a resistance and an inductance to use in the electrical part.

\(\bigtriangleup\) 2.3.3 Gears

The motors used for control purposes are often used in conjunction with gears, as shown in Fig. 2.36, in order to multiply the torque. The force transmitted by the teeth of one gear is equal and opposite to the force applied to the other gear, as shown in Fig. 2.36(a); therefore, since torque \(=\) force \(\times\) distance, the torques applied to and from each shaft by the teeth obey

\[\frac{T_{1}}{r_{1}} = \frac{T_{2}}{r_{2}} = f,\text{~}\text{force applied by teeth}\text{~} \]

and thus, we see that the torque multiplication is proportional to the radius of the gears, \(r\), or equivalently, the number of teeth, \(N\), in each gear,

\[\frac{T_{2}}{T_{1}} = \frac{r_{2}}{r_{1}} = \frac{N_{2}}{N_{1}} = n \]

where we have defined the quantity \(n\) to be the gear ratio.

Similarly, the velocity of the contact tooth of one gear is the same as the velocity of the tooth on the opposite gear, and since velocity \(=\) \(\omega r\), where \(\omega\) is the angular velocity,

\[\omega_{1}r_{1} = \omega_{2}r_{2} = v \]

Thus,

\[\frac{\omega_{1}}{\omega_{2}} = \frac{r_{2}}{r_{1}} = \frac{N_{2}}{N_{1}} = n \]

Furthermore, the angles will change in proportion to the angular velocities, so

\[\frac{\theta_{1}}{\theta_{2}} = \frac{\omega_{1}}{\omega_{2}} = \frac{N_{2}}{N_{1}} = n \]

Note these are all geometric relationships in the sense that we have not considered any inertias or accelerations of the gear train. These relationships simply change the scale factor on the torque and speed from a motor. There is also another effect that must be considered: the

Figure 2.36

(a) Geometry definitions and forces on teeth; (b) definitions for the dynamic analysis

effective rotational inertia and damping of the system when considering the dynamics. Suppose the servo motor whose output torque is \(T_{m}\) is attached to gear 1. Also suppose the servo's gear 1 is meshed with gear 2, and the angle \(\theta_{2}\) describes its position (body 2). Furthermore, the inertia of gear 1 and all that is attached to it (body 1) is \(J_{1}\), while the inertia of the second gear and all the attached load (body 2) is \(J_{2}\), similarly for the friction \(b_{1}\) and \(b_{2}\). We wish to determine the transfer function between the applied torque, \(T_{m}\), and the output \(\theta_{2}\), that is, \(\Theta_{2}(s)/T_{m}(s)\). The equation of motion for body 1 is

\[J_{1}{\overset{¨}{\theta}}_{1} + b_{1}{\overset{˙}{\theta}}_{1} = T_{m} - T_{1} \]

where \(T_{1}\) is the reaction torque from gear 2 acting back on gear 1. For body 2, the equation of motion is

\[J_{2}{\overset{¨}{\theta}}_{2} + b_{2}{\overset{˙}{\theta}}_{2} = T_{2}\text{,}\text{~} \]

where \(T_{2}\) is the torque applied on gear 2 by gear 1 . Note that these are not independent systems because the motion is tied together by the gears. Substituting \(\theta_{2}\) for \(\theta_{1}\) in Eq. (2.76) using the relationship from Eq. (2.75), replacing \(T_{2}\) with \(T_{1}\) in Eq. (2.77) using the relationship in Eq. (2.73), and eliminating \(T_{1}\) between the two equations results in

\[\left( J_{2} + J_{1}n^{2} \right){\overset{¨}{\theta}}_{2} + \left( b_{2} + b_{1}n^{2} \right){\overset{˙}{\theta}}_{2} = nT_{m}. \]

So the transfer function is

\[\frac{\Theta_{2}(s)}{T_{m}(s)} = \frac{n}{J_{eq}s^{2} + b_{eq}s} \]

where

\[J_{eq} = J_{2} + J_{1}n^{2},\text{~}\text{and}\text{~}b_{eq} = b_{2} + b_{1}n^{2} \]

These quantities are referred to as the "equivalent" inertias and damping coefficients. \(\ ^{9}\) If the transfer function had been desired between the applied torque, \(T_{m}\), and \(\theta_{1}\), a similar analysis would be required to arrive at the equivalent inertias and damping, which would be different from those above.
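A quick numerical check of these equivalent quantities, with all values below chosen only for illustration, is

J1 = 2e-5; J2 = 4e-4; b1 = 1e-6; b2 = 2e-5; n = 10;   % illustrative values
Jeq = J2 + J1*n^2       % equivalent inertia seen at theta_2
beq = b2 + b1*n^2       % equivalent damping seen at theta_2
G = tf(n, [Jeq beq 0])  % Theta_2(s)/Tm(s) from above

Note with \(n = 10\), the motor-side inertia \(J_{1}\) is multiplied by 100, so even a small rotor inertia can dominate the inertia of the load.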

\(\Delta\) 2.4 Heat and Fluid-Flow Models

Thermodynamics, heat transfer, and fluid dynamics are each the subject of complete textbooks. For purposes of generating dynamic models for use in control systems, the most important aspect of the physics is to represent the dynamic interaction between the variables. Experiments are usually required to determine the actual values of the parameters, and thus to complete the dynamic model for purposes of control systems design.

\(\ ^{9}\) The equivalent inertia is sometimes referred to as "reflected impedance"; however, this term is more typically applied to electronic circuits.

2.4.1 Heat Flow

Some control systems involve regulation of temperature for portions of the system. The dynamic models of temperature control systems involve the flow and storage of heat energy. Heat energy flows through substances at a rate proportional to the temperature difference across the substance; that is,

\[q = \frac{1}{R}\left( T_{1} - T_{2} \right), \]

where

\(q =\) heat-energy flow, joules per second (J/sec),

\(R =\) thermal resistance, \(\ ^{\circ}C \cdot sec/J\),

\(T =\) temperature, \(\ ^{\circ}C\).

The net heat-energy flow into a substance affects the temperature of the substance according to the relation

\[\overset{˙}{T} = \frac{1}{C}q \]

where \(C\) is the thermal capacity. Typically, there are several paths for heat to flow into or out of a substance, and \(q\) in Eq. (2.82) is the sum of heat flows obeying Eq. (2.81).

EXAMPLE 2.16

Heat Flow from a Room

A room with all but two sides insulated \((1/R = 0)\) is shown in Fig. 2.37. Find the differential equations that determine the temperature in the room.

Solution. Application of Eqs. (2.81) and (2.82) yields

\[{\overset{˙}{T}}_{I} = \frac{1}{C_{I}}\left( \frac{1}{R_{1}} + \frac{1}{R_{2}} \right)\left( T_{O} - T_{I} \right) \]

where

\[\begin{matrix} C_{I} & \ = \text{~}\text{thermal capacity of air within the room,}\text{~} \\ T_{O} & \ = \text{~}\text{temperature outside,}\text{~} \\ T_{I} & \ = \text{~}\text{temperature inside,}\text{~} \\ R_{2} & \ = \text{~}\text{thermal resistance of the room ceiling,}\text{~} \\ R_{1} & \ = \text{~}\text{thermal resistance of the room wall.}\text{~} \end{matrix}\]

Figure 2.37

Dynamic model for room temperature

Specific heat

Thermal conductivity

EXAMPLE 2.17

Figure 2.38

A Thermal Control System
Normally the material properties are given in tables as follows:

  1. The specific heat at constant volume \(c_{v}\), which is converted to heat capacity by

\[C = mc_{v}, \]

where \(m\) is the mass of the substance;

  2. The thermal conductivity \(\ ^{10}k\), which is related to thermal resistance \(R\) by

\[\frac{1}{R} = \frac{kA}{l} \]

where \(A\) is the cross-sectional area and \(l\) is the length of the heatflow path.

A Thermal Control System

The system consists of two thermal masses in contact with one another, where heat is being applied to the mass on the left, as shown in Fig. 2.38. There is also heat transferred directly to the second mass in contact with it, and heat is lost to the environment from both masses. Find the relevant dynamic equations and the transfer function between the heat input, \(u\), and the temperature of the mass on the right.

Solution. Applying Eqs. (2.81) and (2.82) yields

\[\begin{matrix} & C_{1}{\overset{˙}{T}}_{1} = u - H_{1}T_{1} - H_{x}\left( T_{1} - T_{2} \right) \\ & C_{2}{\overset{˙}{T}}_{2} = H_{x}\left( T_{1} - T_{2} \right) - H_{2}T_{2} \end{matrix}\]

where

\[\begin{matrix} & C_{1} = \text{~}\text{thermal capacity of mass}\text{~}1, \\ & C_{2} = \text{~}\text{thermal capacity of mass}\text{~}2, \\ & T_{o} = \text{~}\text{temperature outside the masses,}\text{~} \\ & T_{1} = T_{1}^{*} - T_{o}\text{~}\text{temperature difference of mass}\text{~}1, \\ & T_{2} = T_{2}^{*} - T_{o}\text{~}\text{temperature difference of mass}\text{~}2 \\ & H_{1} = 1/R_{1} = \text{~}\text{thermal resistance from mass}\text{~}1, \\ & H_{2} = 1/R_{2} = \text{~}\text{thermal resistance from mass}\text{~}2 \\ & H_{x} = 1/R_{x} = \text{~}\text{thermal resistance from mass}\text{~}1\text{~}\text{to mass}\text{~}2. \end{matrix}\]

Figure 2.39

Heat exchanger

Using Cramer's Rule with Eqs. (2.84) and (2.85) yields the desired transfer function

\[\frac{T_{2}(s)}{U(s)} = \frac{H_{x}}{\left( C_{1}s + H_{x} + H_{1} \right)\left( C_{2}s + H_{x} + H_{2} \right) - H_{x}^{2}} \]
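The same result can be obtained numerically by writing Eqs. (2.84) and (2.85) in state-space form and converting; the thermal parameters in the sketch below are illustrative assumptions.

C1 = 100; C2 = 150; H1 = 1; H2 = 2; Hx = 5;    % illustrative values
A = [-(H1 + Hx)/C1      Hx/C1;
         Hx/C2     -(H2 + Hx)/C2];
B = [1/C1; 0];                % heat input u enters mass 1
C = [0 1];                    % output is T2
G = tf(ss(A, B, C, 0))        % matches the transfer function above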

In addition to flow due to transfer, as expressed by Eq. (2.81), heat can also flow when a warmer mass flows into a cooler mass, or vice versa. In this case,

\[q = wc_{v}\left( T_{1} - T_{2} \right) \]

where \(w\) is the mass flow rate of the fluid at \(T_{1}\) flowing into the reservoir at \(T_{2}\). For a more complete discussion of dynamic models for temperature control systems, see Cannon (1967) or textbooks on heat transfer.

EXAMPLE 2.18

Equations for Modeling a Heat Exchanger

A heat exchanger is shown in Fig. 2.39. Steam enters the chamber through the controllable valve at the top, and cooler steam leaves at the bottom. There is a constant flow of water through the pipe that winds through the middle of the chamber so it picks up heat from the steam. Find the differential equations that describe the dynamics of the measured water outflow temperature as a function of the area \(A_{s}\) of the steam-inlet control valve when open. The sensor that measures the water outflow temperature, being downstream from the exit temperature in the pipe, lags the temperature by \(t_{d}\) sec.

Solution. The temperature of the water in the pipe will vary continuously along the pipe as the heat flows from the steam to the water. The temperature of the steam will also reduce in the chamber as it passes
over the maze of pipes. An accurate thermal model of this process is therefore quite involved because the actual heat transfer from the steam to the water will be proportional to the local temperatures of each fluid. For many control applications, it is not necessary to have great accuracy because the feedback will correct for a considerable amount of error in the model. Therefore, it makes sense to combine the spatially varying temperatures into single temperatures \(T_{s}\) and \(T_{w}\) for the outflow steam and water temperatures, respectively. We then assume the heat transfer from steam to water is proportional to the difference in these temperatures, as given by Eq. (2.81). There is also a flow of heat into the chamber from the inlet steam that depends on the steam flow rate and its temperature according to Eq. (2.87),

\[q_{in} = w_{s}c_{vs}\left( T_{si} - T_{s} \right) \]

where

\[\begin{matrix} w_{s} & \ = K_{s}A_{s},\text{ mass flow rate of the steam,} \\ A_{s} & \ = \text{ area of the steam inlet valve,} \\ K_{s} & \ = \text{ flow coefficient of the inlet valve,} \\ c_{vs} & \ = \text{ specific heat of the steam,} \\ T_{si} & \ = \text{ temperature of the inflow steam,} \\ T_{s} & \ = \text{ temperature of the outflow steam.} \end{matrix}\]

The net heat flow into the chamber is the difference between the heat from the hot incoming steam and the heat flowing out to the water. This net flow determines the rate of temperature change of the steam according to Eq. (2.82),

\[C_{S}{\overset{˙}{T}}_{s} = A_{s}K_{s}c_{vs}\left( T_{si} - T_{s} \right) - \frac{1}{R}\left( T_{s} - T_{w} \right) \]

where

\[\begin{matrix} C_{S} = & m_{s}c_{vs}\text{ is the thermal capacity of the steam in the chamber} \\ & \text{with mass }m_{s}, \\ R = & \text{the thermal resistance of the heat flow averaged over the} \\ & \text{entire exchanger.} \end{matrix}\]

Likewise, the differential equation describing the water temperature is

\[C_{w}{\overset{˙}{T}}_{w} = w_{w}c_{vw}\left( T_{wi} - T_{w} \right) + \frac{1}{R}\left( T_{s} - T_{w} \right) \]

where

\[\begin{matrix} w_{w} & \ = \text{ mass flow rate of the water,} \\ c_{vw} & \ = \text{ specific heat of the water,} \\ T_{wi} & \ = \text{ temperature of the incoming water,} \\ T_{w} & \ = \text{ temperature of the outflowing water.} \end{matrix}\]

To complete the dynamics, the time delay between the measurement and the exit flow is described by the relation

\[T_{m}(t) = T_{w}\left( t - t_{d} \right) \]

where \(T_{m}\) is the measured downstream temperature of the water and \(t_{d}\) is the time delay. There may also be a delay in the measurement of the steam temperature \(T_{s}\), which would be modeled in the same manner.

Equation (2.88) is nonlinear because the quantity \(T_{s}\) is multiplied by the control input \(A_{s}\). The equation can be linearized about \(T_{so}\) (a specific value of \(T_{s}\)) so \(T_{si} - T_{s}\) is assumed constant for purposes of approximating the nonlinear term, which we will define as \(\Delta T_{s}\). In order to eliminate the \(T_{wi}\) term in Eq. (2.89), it is convenient to measure all temperatures in terms of deviation in degrees from \(T_{wi}\). The resulting equations are then

\[\begin{matrix} C_{S}{\overset{˙}{T}}_{s} & \ = - \frac{1}{R}T_{s} + \frac{1}{R}T_{w} + K_{s}c_{vs}\Delta T_{s}A_{s} \\ C_{w}{\overset{˙}{T}}_{w} & \ = - \left( \frac{1}{R} + w_{w}c_{vw} \right)T_{w} + \frac{1}{R}T_{s} \\ T_{m} & \ = T_{w}\left( t - t_{d} \right) \end{matrix}\]

Although the time delay is not a nonlinearity, we will see in Chapter 3 that operationally, \(T_{m} = e^{- t_{d}s}T_{w}\). Therefore, the transfer function of the heat exchanger has the form

\[\frac{T_{m}(s)}{A_{s}(s)} = \frac{Ke^{- t_{d}s}}{\left( \tau_{1}s + 1 \right)\left( \tau_{2}s + 1 \right)} \]
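Matlab can represent this delay exactly when the transfer function is built from the Laplace variable; the gain, time constants, and delay below are illustrative assumptions.

K = 1; tau1 = 30; tau2 = 10; td = 5;             % illustrative values (sec)
s = tf('s');
G = K*exp(-td*s)/((tau1*s + 1)*(tau2*s + 1));    % Tm(s)/As(s)
step(G)            % the response is delayed by td sec before rising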

Hydraulic actuator

The continuity relation

2.4.2 Incompressible Fluid Flow

Fluid flows are common in many control system components. One example is the hydraulic actuator, which is used extensively in control systems because it can supply a large force with low inertia and low weight. Hydraulic actuators are often used to move the aerodynamic control surfaces of airplanes; to gimbal rocket nozzles; to move the linkages in earth-moving equipment, farm tractor implements, and snow-grooming machines; and to move robot arms.

The physical relations governing fluid flow are continuity, force equilibrium, and resistance. The continuity relation is simply a statement of the conservation of matter; that is,

\[\overset{˙}{m} = w_{\text{in}\text{~}} - w_{\text{out}\text{~}}, \]

where

\(m =\) fluid mass within a prescribed portion of the system,

\(w_{\text{in}\text{~}} =\) mass flow rate into the prescribed portion of the system,

\(w_{\text{out}\text{~}} =\) mass flow rate out of the prescribed portion of the system.

Figure 2.40

Water tank example
EXAMPLE 2.19

Equations for Describing Water Tank Height

Determine the differential equation describing the height of the water in the tank in Fig. 2.40.

Solution. Application of Eq. (2.91) yields

\[\overset{˙}{h} = \frac{1}{A\rho}\left( w_{\text{in}\text{~}} - w_{\text{out}\text{~}} \right) \]

where

\[\begin{matrix} A & \ = \text{~}\text{area of the tank}\text{~} \\ \rho & \ = \text{~}\text{density of water,}\text{~} \\ h & \ = m/A\rho = \text{~}\text{height of water,}\text{~} \\ m & \ = \text{~}\text{mass of water in the tank.}\text{~} \end{matrix}\]

Force equilibrium must apply exactly as described by Eq. (2.1) for mechanical systems. In fluid-flow systems, forces sometimes result from fluid pressure acting on a piston. In this case, the force from the fluid is

\[f = pA, \]

where

\[\begin{matrix} f & \ = \text{~}\text{force}\text{~} \\ p & \ = \text{~}\text{pressure in the fluid}\text{~} \\ A & \ = \text{~}\text{area on which the fluid acts.}\text{~} \end{matrix}\]

EXAMPLE 2.20

Modeling a Hydraulic Piston

Determine the differential equation describing the motion of the piston actuator shown in Fig. 2.41, given that there is a force \(F_{D}\) acting on it and a pressure \(p\) in the chamber.

Solution. Equations (2.1) and (2.93) apply directly, where the forces include the fluid pressure as well as the applied force. The result is

\[M\overset{¨}{x} = Ap - F_{D} \]

Figure 2.41

Hydraulic piston actuator

where

\[\begin{matrix} A & \ = \text{~}\text{area of the piston}\text{~} \\ p & \ = \text{~}\text{pressure in the chamber}\text{~} \\ M & \ = \text{~}\text{mass of the piston}\text{~} \\ x & \ = \text{~}\text{position of the piston.}\text{~} \end{matrix}\]

In many cases of fluid-flow problems, the flow is resisted either by a constriction in the path or by friction. The general form of the effect of resistance is given by

\[w = \frac{1}{R}\left( p_{1} - p_{2} \right)^{1/\alpha}, \]

where

\[w = \text{~}\text{mass flow rate}\text{~} \]

\(p_{1},p_{2} =\) pressures at ends of the path through which flow is occurring,

\(R,\alpha =\) constants whose values depend on the type of restriction.

Or, as is more commonly used in hydraulics,

\[Q = \frac{1}{\rho R}\left( p_{1} - p_{2} \right)^{1/\alpha} \]

where

\[\begin{matrix} & Q = \text{~}\text{volume flow rate, where}\text{~}Q = w/\rho \\ & \rho = \text{~}\text{fluid density.}\text{~} \end{matrix}\]

The constant \(\alpha\) takes on values between 1 and 2. The most common value is approximately 2 for high flow rates (those having a Reynolds number \(Re > 10^{5}\)) through pipes or through short constrictions or nozzles. For very slow flows through long pipes or porous plugs wherein the flow remains laminar \((Re \lesssim 1000)\), \(\alpha = 1\). Flow rates between these extremes can yield intermediate values of \(\alpha\). The Reynolds number indicates the relative importance of inertial forces and viscous forces in the flow. It is proportional to a material's velocity and density and to

EXAMPLE 2.21

the size of the restriction, and it is inversely proportional to the viscosity. When Re is small, the viscous forces predominate and the flow is laminar. When Re is large, the inertial forces predominate and the flow is turbulent.

Note a value of \(\alpha = 2\) indicates that the flow is proportional to the square root of the pressure difference and therefore will produce a nonlinear differential equation. For the initial stages of control systems analysis and design, it is typically very useful to linearize these equations so the design techniques described in this book can be applied. Linearization involves selecting an operating point and expanding the nonlinear term to be a small perturbation from that point.

Linearization of Water Tank Height and Outflow

Find the nonlinear differential equation describing the height of the water in the tank in Fig. 2.40. Assume there is a relatively short restriction at the outlet and that \(\alpha = 2\). Also linearize your equation about the operating point \(h_{o}\).

Solution. Applying Eq. (2.94) yields the flow out of the tank as a function of the height of the water in the tank:

\[w_{\text{out}\text{~}} = \frac{1}{R}\left( p_{1} - p_{a} \right)^{1/2} \]

Here,

\[\begin{matrix} & p_{1} = \rho gh + p_{a},\text{~}\text{the hydrostatic pressure,}\text{~} \\ & p_{a} = \text{~}\text{ambient pressure outside the restriction.}\text{~} \end{matrix}\]

Substituting Eq. (2.96) into Eq. (2.92) yields the nonlinear differential equation for the height:

\[\overset{˙}{h} = \frac{1}{A\rho}\left( w_{in} - \frac{1}{R}\sqrt{p_{1} - p_{a}} \right) \]

Linearization involves selecting the operating point \(p_{o} = \rho gh_{o} + p_{a}\) and substituting \(p_{1} = p_{o} + \Delta p\) into Eq. (2.96). Then, we expand the nonlinear term according to the relation

\[(1 + \varepsilon)^{\beta} \cong 1 + \beta\varepsilon \]

where \(\varepsilon \ll 1\). Equation (2.96) can thus be written as

\[\begin{matrix} w_{\text{out}\text{~}} & \ = \frac{\sqrt{p_{o} - p_{a}}}{R}\left( 1 + \frac{\Delta p}{p_{o} - p_{a}} \right)^{1/2} \\ & \ \cong \frac{\sqrt{p_{o} - p_{a}}}{R}\left( 1 + \frac{1}{2}\frac{\Delta p}{p_{o} - p_{a}} \right). \end{matrix}\]

The linearizing approximation made in Eq. (2.99) is valid as long as \(\Delta p \ll p_{o} - p_{a}\); that is, as long as the deviations of the system pressure from the chosen operating point are relatively small.

Combining Eqs. (2.92) and (2.99) yields the following linearized equation of motion for the water tank level:

\[\Delta\overset{˙}{h} = \frac{1}{A\rho}\left\lbrack w_{in} - \frac{\sqrt{p_{o} - p_{a}}}{R}\left( 1 + \frac{1}{2}\frac{\Delta p}{p_{o} - p_{a}} \right) \right\rbrack. \]

Because \(\Delta p = \rho g\Delta h\), this equation reduces to

\[\Delta\overset{˙}{h} = - \frac{g}{2AR\sqrt{p_{o} - p_{a}}}\Delta h + \frac{w_{in}}{A\rho} - \frac{\sqrt{p_{o} - p_{a}}}{\rho AR} \]

which is a linear differential equation for \(\Delta h\). The operating point is not an equilibrium point because some control input is required to maintain it. In other words, when the system is at the operating point \((\Delta h = 0)\) with no input \(\left( w_{\text{in}} = 0 \right)\), it will move from that point because \(\Delta\overset{˙}{h} \neq 0\). So, if no water is flowing into the tank, the tank will drain, thus moving it from the reference point. To define an operating point that is also an equilibrium point, we need to require that there be a nominal flow rate,

\[\frac{w_{in_{o}}}{A\rho} = \frac{\sqrt{p_{o} - p_{a}}}{\rho AR} \]

and define the linearized input flow to be a perturbation from that value.
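Both the nominal inflow and the linearized pole are easy to evaluate numerically; the tank parameters in the sketch below are illustrative assumptions.

A = 0.5; R = 300; rho = 1000; g = 9.81; ho = 1;  % illustrative values (SI)
pa = 1.013e5;                  % ambient pressure, Pa
po = rho*g*ho + pa;            % operating-point pressure
win0 = sqrt(po - pa)/R         % nominal inflow that makes ho an equilibrium
a = g/(2*A*R*sqrt(po - pa))    % linearized pole is at s = -a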

Hydraulic actuators

EXAMPLE 2.22

Hydraulic actuators obey the same fundamental relationships we saw in the water tank: continuity [Eq. (2.91)], force balance [Eq. (2.93)], and flow resistance [Eq. (2.94)]. Although the development here assumes the fluid to be perfectly incompressible, in fact, hydraulic fluid has some compressibility due primarily to entrained air. This feature causes hydraulic actuators to have some resonance because the compressibility of the fluid acts like a stiff spring. This resonance limits their speed of response.

59. Modeling a Hydraulic Actuator

  1. Find the nonlinear differential equations relating the movement \(\theta\) of the control surface to the input displacement \(x\) of the valve for the hydraulic actuator shown in Fig. 2.42.

  2. Find the linear approximation to the equations of motion when \(\overset{˙}{y} =\) constant, with and without an applied load; that is, when \(F \neq 0\) and when \(F = 0\). Assume \(\theta\) motion is small.

60. Solution

  1. Equations of motion: When the valve is at \(x = 0\), both passages are closed and no motion results. When \(x > 0\), as shown in Fig. 2.42, the oil flows clockwise and the piston is forced to the left. When \(x < 0\), the fluid flows counterclockwise: the oil supply at high pressure \(p_{s}\) enters the left side of the large piston chamber, forcing the piston to the right, and the oil flows out of the valve chamber through the rightmost channel instead of the left.

Figure 2.42

Hydraulic actuator with valve

We assume the flow through the orifice formed by the valve is proportional to \(x\); that is,

\[Q_{1} = \frac{1}{\rho R_{1}}\left( p_{s} - p_{1} \right)^{1/2}x \]

Similarly,

\[Q_{2} = \frac{1}{\rho R_{2}}\left( p_{2} - p_{e} \right)^{1/2}x \]

The continuity relation yields

\[A\overset{˙}{y} = Q_{1} = Q_{2}, \]

where

\[A = \text{~}\text{piston area}\text{~}. \]

The force balance on the piston yields

\[A\left( p_{1} - p_{2} \right) - F = m\overset{¨}{y}, \]

where

\[\begin{matrix} m = & \text{~}\text{mass of the piston and the attached rod}\text{~} \\ F = & \text{~}\text{force applied by the piston rod to the control surface}\text{~} \\ & \text{~}\text{attachment point.}\text{~} \end{matrix}\]

Furthermore, the moment balance of the control surface using Eq. (2.10) yields

\[I\overset{¨}{\theta} = Flcos\theta - F_{a}d\text{,}\text{~} \]

where

\[\begin{matrix} I = & \text{~}\text{moment of inertia of the control surface and attachment}\text{~} \\ & \text{~}\text{about the hinge,}\text{~} \\ F_{a} = & \text{~}\text{applied aerodynamic load.}\text{~} \end{matrix}\]

To solve this set of five equations, we require the following additional kinematic relationship between \(\theta\) and \(y\) :

\[y = lsin\theta. \]

The actuator is usually constructed so the valve exposes the two passages equally; therefore, \(R_{1} = R_{2}\), and we can infer from Eqs. (2.101) to (2.103) that

\[p_{s} - p_{1} = p_{2} - p_{e} \]

These relations complete the nonlinear differential equations of motion; they are formidable and difficult to solve.

  2. Linearization and simplification: For the case in which \(\overset{˙}{y} =\) a constant \((\overset{¨}{y} = 0)\) and there is no applied load \((F = 0)\), Eqs. (2.104) and (2.107) indicate that

\[p_{1} = p_{2} = \frac{p_{s} + p_{e}}{2} \]

Therefore, using Eq. (2.103) and letting \(sin\theta = \theta\) (since \(\theta\) is assumed to be small), we get

\[\overset{˙}{\theta} = \frac{\sqrt{p_{s} - p_{e}}}{\sqrt{2}A\rho Rl}x \]

This represents a single integration between the input \(x\) and the output \(\theta\), where the proportionality constant is a function only of the supply pressure and the fixed parameters of the actuator. For the case \(\overset{˙}{y} =\) constant but \(F \neq 0\), Eqs. (2.104) and (2.107) indicate that

\[p_{1} = \frac{p_{s} + p_{e} + F/A}{2} \]

and

\[\overset{˙}{\theta} = \frac{\sqrt{p_{s} - p_{e} - F/A}}{\sqrt{2}A\rho Rl}x. \]

This result is also a single integration between the input \(x\) and the output \(\theta\), but the proportionality constant now depends on the applied load \(F\).

As long as the commanded values of \(x\) produce \(\theta\) motion that has a sufficiently small value of \(\overset{¨}{\theta}\), the approximation given by Eq. (2.109) or (2.110) is valid and no other linearized dynamic relationships are necessary. However, as soon as the commanded values of \(x\) produce accelerations in which the inertial forces ( \(m\overset{¨}{y}\) and the reaction to \(I\overset{¨}{\theta}\) ) are a significant fraction of \(p_{s} - p_{e}\), the approximations are no longer valid. We must then incorporate these forces into the equations, thus obtaining a dynamic relationship between \(x\) and \(\theta\) that is much more involved than the pure integration implied by Eq. (2.109) or (2.110). Typically, for initial control system designs, hydraulic actuators are assumed to obey the simple relationship of Eq. (2.109) or (2.110). When hydraulic
actuators are used in feedback control systems, resonances have been encountered that are not explained by using the approximation that the device is a simple integrator as in Eq. (2.109) or (2.110). The source of the resonance is the neglected accelerations discussed above along with the additional feature that the oil is slightly compressible due to small quantities of entrained air. This phenomenon is called the "oil-mass resonance."
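For initial design studies, the linearized actuator of Eq. (2.109) can be entered directly as a pure integrator. The Matlab sketch below does this; the numerical values of \(p_{s}\), \(p_{e}\), \(A\), \(\rho\), \(R\), and \(l\) are hypothetical and serve only to produce a concrete gain.

% Hypothetical actuator parameters, for illustration only
ps = 3e6; pe = 1e5;       % supply and exhaust pressures, Pa
A = 2e-3; rho = 850;      % piston area, m^2; oil density, kg/m^3
R = 4e6; l = 0.3;         % valve flow resistance; control-surface arm length, m
K = sqrt(ps - pe)/(sqrt(2)*A*rho*R*l); % gain of the linearized model, Eq. (2.109)
sysAct = tf(K, [1 0]);    % theta(s)/x(s) = K/s, a pure integrator
step(sysAct)              % a step in x produces a ramp in theta

The ramp response makes the integrating behavior explicit: a constant valve opening produces a constant rate of change of \(\theta\), not a constant \(\theta\).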

60.1. Historical Perspective

Newton's second law of motion (Eq. 2.1) was first published in his Philosophiæ Naturalis Principia Mathematica in 1686 along with his two other famous laws of motion. The first: A body will continue with the same uniform motion unless acted on by an external unbalanced force. The third: To every action, there is an equal and opposite reaction. Isaac Newton also published his law of gravitation in this same publication, which stated that every mass particle attracts all other particles by a force proportional to the product of their two masses and inversely proportional to the square of the distance between them. His basis for developing these laws was the work of several other early scientists, combined with his own development of the calculus in order to reconcile all the observations. It is amazing that these laws still stand today as the basis for almost all dynamic analysis, with the exception of Einstein's additions in the early 1900s for relativistic effects. It is also amazing that Newton's development of calculus formed the foundation of the mathematics that enables dynamic modeling. In addition to being brilliant, he was also very eccentric. As Brennan writes in Heisenberg Probably Slept Here, "He was seen about campus in his disheveled clothes, his wig askew, wearing run-down shoes and a soiled neckpiece. He seemed to care about nothing but his work. He was so absorbed in his studies that he forgot to eat." Another interesting aspect of Newton is that he initially developed the calculus and the now famous laws of physics about 20 years prior to publishing them! The incentive to publish arose from a bet between three men having lunch at a pub in 1684: Edmond Halley, Christopher Wren, and Robert Hooke. They all had the opinion that Kepler's elliptical characterization of planetary motion could be explained by the inverse square law, but nobody had ever proved it, so they "placed a bet as to who could first prove the conjecture." Because of Newton's fame as a mathematician, Halley went to him for help; Newton responded that he had already done it many years ago and would forward the papers to him. He not only did that shortly afterward, but followed it up with the Principia, with all the details, two years later.

The basis for Newton's work started with the astronomer Nicolaus Copernicus more than a hundred years before the Principia was published. He was the first to speculate that the planets revolved around the sun, rather than everything in the skies revolving around the earth. But Copernicus' heretical notion was largely ignored at the time, except by the church, which banned his publication. However, two scientists did take note of his work: Galileo Galilei in Italy and Johannes Kepler in Austria. Kepler relied on a large collection of astronomical data taken by a Danish astronomer, Tycho Brahe, and concluded that the planetary orbits were ellipses rather than the circles that Copernicus had postulated. Galileo was an expert telescope builder and was able to clearly establish that the earth was not the center of all motion, partly because he was able to see moons revolving around other planets. He also did experiments with rolling balls down inclined planes that strongly suggested that \(F = ma\) (alas, it's a myth that he did his experiments by dropping objects off the Leaning Tower of Pisa). Galileo published his work in 1632, which raised the ire of the church, which confined him to house arrest until he died. \(\ ^{12}\) It was not until 1992 that the church recognized the important contributions of Galileo! These men laid the groundwork for Newton to put it all together with his laws of motion and the inverse square gravitational law. With these two physical principles, all the observations fit together in a theoretical framework that today forms the basis for the modeling of dynamic systems.

The sequence of discoveries that ultimately led to the laws of dynamics that we take for granted today were especially remarkable when we stop to think that they were all carried out without a computer, a calculator, or even a slide rule. On top of that, Newton had to invent calculus in order to reconcile the data.

After publishing the Principia, Newton went on to be elected to Parliament and was given high honors, including being the first man of science to be knighted by the Queen. He also got into fights with other scientists fairly regularly and used his powerful positions to get what he wanted. In one instance, he wanted data from the Royal Observatory that was not forthcoming fast enough. So he created a new board with authority over the Observatory and had the Astronomer Royal expelled from the Royal Society. Newton also had other less scientific interests. Many years after his death, John Maynard Keynes found that Newton had been spending as much of his time on metaphysical occult, alchemy, and biblical works as he had been on physics.

More than a hundred years after Newton's Principia, Michael Faraday performed a multitude of experiments and postulated the notion of electromagnetic lines of force in free space. He also discovered induction (Faraday's Law), which led to the electric motor and the laws of electrolysis. Faraday was born into a poor family, had virtually no schooling, and became an apprentice to a bookbinder at age 14. There he read many of the books being bound and became fascinated by science articles. Enthralled by these, he maneuvered to get a job as a bottle washer for a famous scientist, eventually learned enough to become a competitor to him, and ultimately became a professor at the Royal Institution in London. But lacking a formal education, he had no mathematical skills and lacked the ability to create a theoretical framework for his discoveries. Faraday became a famous scientist in spite of his humble origins. After he had achieved fame for his discoveries and was made a Fellow of the Royal Society, the prime minister asked him what good his inventions could be. \(\ ^{13}\) Faraday's answer was, "Why Prime Minister, someday you can tax it." But in those days, scientists were almost exclusively men born into privilege; so Faraday had been treated like a second-class citizen by some of the other scientists. As a result, he rejected knighthood as well as burial at Westminster Abbey. Faraday's observations, along with those of Coulomb and Ampere, led James Clerk Maxwell to integrate all their knowledge on magnetism and electricity into Maxwell's equations. Against the beliefs of most prominent scientists of the day (Faraday being an exception), Maxwell invented the concepts of fields and waves that explained magnetic and electrostatic forces and was the key to creating the unifying theory. Although Newton had discovered the spectrum of light, Maxwell was the first to realize that light is one form of the same electromagnetic waves, and its behavior is explained as well by Maxwell's equations. In fact, the only constants in his equations are \(\mu\) and \(\varepsilon\). The constant speed of light is \(c = 1/\sqrt{\mu\varepsilon}\).

Maxwell was a Scottish mathematician and theoretical physicist. His work has been called the second great unification in physics, the first being that due to Newton. Maxwell was born into the privileged class, was given the benefits of an excellent education, and excelled at it. In fact, he was an extremely gifted theoretical and experimental scientist as well as a very generous and kind man with many friends and little vanity. In addition to unifying the observations of electromagnetics into a theory that still governs our engineering analyses today, he was the first to present an explanation of how light travels, the primary colors, the kinetic theory of gases, the stability of Saturn's rings, and the stability of feedback control systems! His discovery of the three primary colors (red, green, and blue) forms the basis of our color television to this day. His theory showing that the speed of light is a constant was difficult to reconcile with Newton's laws, and it led Albert Einstein to create the special theory of relativity in the early 1900s. This led Einstein to say, "One scientific epoch ended and another began with James Clerk Maxwell."\(\ ^{14}\)

\(\ ^{12}\) Galileo's life, accomplishments, and house arrest are very well described in Dava Sobel's book, Galileo's Daughter.

\(\ ^{13}\) \(E = MC^{2}\), A Biography of the World's Most Famous Equation, by David Bodanis, Walker and Co., New York, 2000.

\(\ ^{14}\) The Man Who Changed Everything: The Life of James Clerk Maxwell, Basil Mahon, Wiley, Chichester, UK, 2003.

61. SUMMARY

Mathematical modeling of the system to be controlled is the first step in analyzing and designing the required system controls. In this chapter we developed analytical models for representative systems. Important equations for each category of system are summarized in Table 2.1. It is also possible to obtain a mathematical model using experimental data exclusively. This approach will be discussed briefly in Chapter 3 and more extensively in Chapter 12 of Franklin, Powell, and Workman (1998).

62. TABLE 2.1

Key Equations for Dynamic Models

[TABLE]

63. REVIEW QUESTIONS

2.1 What is a "free-body diagram"?

2.2 What are the two forms of Newton's law?

2.3 For a structural process to be controlled, such as a robot arm, what is the meaning of "collocated control"? "Noncollocated control"?

2.4 State Kirchhoff's current law.

2.5 State Kirchhoff's voltage law.

2.6 When, why, and by whom was the device named an "operational amplifier"?

2.8 Why is it important to have a small value for the armature resistance \(R_{a}\) of an electric motor?

2.9 What are the definition and units of the electric constant of a motor?

2.10 What are the definition and units of the torque constant of an electric motor?

2.11 Why do we approximate a physical model of the plant (which is always nonlinear) with a linear model?

\(\bigtriangleup \ 2.12\) Give the relationships for the following:

(a) Heat flow across a substance

(b) Heat storage in a substance

\(\bigtriangleup \ 2.13\) Name and give the equations for the three relationships governing fluid flow.

64. PROBLEMS

Figure 2.43

Mechanical systems

65. Problems for Section 2.1: Dynamics of Mechanical Systems

2.1 Write the differential equations for the mechanical systems shown in Fig. 2.43. For Fig. 2.43(a) and (b), state whether you think the system will eventually decay so it has no motion at all, given that there are nonzero initial conditions for both masses and there is no input; give a reason for your answer. Also, for part (c), answer the question for \(F = 0\).

Figure 2.44

Mechanical system for Problem 2.2

2.2 Write the differential equation for the mechanical system shown in Fig. 2.44. State whether you think the system will eventually decay so it has no motion at all, given that there are nonzero initial conditions for both masses, and give a reason for your answer.

2.3 Write the equations of motion for the double-pendulum system shown in Fig. 2.45. Assume the displacement angles of the pendulums are small enough to ensure the spring is always horizontal. The pendulum rods are taken to be massless, of length \(l\), and the springs are attached three-fourths of the way down.

2.4 Write the equations of motion of a pendulum consisting of a thin, \(2\text{ }kg\) stick of length \(l\) suspended from a pivot. How long should the rod be in order for the period to be exactly \(1\text{ }sec\)? (The inertia \(I\) of a thin stick about an end point is \(\frac{1}{3}ml^{2}\). Assume \(\theta\) is small enough that \(sin\theta \cong \theta\).) Why do you think grandfather clocks are typically about \(6\text{ }ft\) high?

2.5 For the car suspension discussed in Example 2.2, plot the position of the car and the wheel after the car hits a "unit bump"(that is, \(r\) is a unit step) using Matlab. Assume \(m_{1} = 10\text{ }kg,m_{2} = 350\text{ }kg,K_{w} = 500,000\text{ }N/m\), and \(K_{s} = 10,000\text{ }N/m\). Find the value of \(b\) that you would prefer if you were a passenger in the car.

2.6 For the quadcopter shown in Figs. 2.13 and 2.14:

(a) Determine the appropriate commands to rotor #s 1, 2, 3, & 4 so a pure vertical force will be applied to the quadcopter, that is, a force that will have no effect on pitch, roll, or yaw.

(b) Determine the transfer function between \(F_{h}\), and altitude, \(h\). That is, find \(h(s)/F_{h}(s)\).
Figure 2.45

Double pendulum

Figure 2.46

Schematic of a system with flexibility
2.7 Automobile manufacturers are contemplating building active suspension systems. The simplest change is to make shock absorbers with a changeable damping, \(b\left( u_{1} \right)\). It is also possible to make a device to be placed in parallel with the springs that has the ability to supply an equal force, \(u_{2}\), in opposite directions on the wheel axle and the car body.

(a) Modify the equations of motion in Example 2.2 to include such control inputs.

(b) Is the resulting system linear?

(c) Is it possible to use the force \(u_{2}\) to completely replace the springs and shock absorber? Is this a good idea?

2.8 In many mechanical positioning systems, there is flexibility between one part of the system and another. An example is shown in Fig. 2.7 where there is flexibility of the solar panels. Figure 2.46 depicts such a situation, where a force \(u\) is applied to the mass \(M\) and another mass \(m\) is connected to it. The coupling between the objects is often modeled by a spring constant \(k\) with a damping coefficient \(b\), although the actual situation is usually much more complicated than this.

(a) Write the equations of motion governing this system.

(b) Find the transfer function between the control input \(u\) and the output \(y\).

2.9 Modify the equation of motion for the cruise control in Example 2.1, Eq. (2.4), so it has a control law; that is, let

\[u = K\left( v_{r} - v \right), \]

where

\[\begin{matrix} & v_{r} = \text{~}\text{reference speed}\text{~} \\ & K = \text{~}\text{constant}\text{~} \end{matrix}\]

This is a "proportional"control law in which the difference between \(v_{r}\) and the actual speed is used as a signal to speed the engine up or slow it down. Revise the equations of motion with \(v_{r}\) as the input and \(v\) as the output and find the transfer function. Assume \(m = 1500\text{ }kg\) and \(b = 70\) \(N \cdot sec/m\), and find the response for a unit step in \(v_{r}\) using Matlab. Using trial and error, find a value of \(K\) that you think would result in a control system in which the actual speed converges as quickly as possible to the reference speed with no objectionable behavior.

Figure 2.47

Robot for delivery of hospital supplies

Source: Bill Clark/Daily Progress/AP Images

Figure 2.48

Model for robot motion
2.10 Determine the dynamic equations for lateral motion of the robot in Fig. 2.47. Assume it has three wheels with a single, steerable wheel in the front where the controller has direct control of the rate of change of the steering angle, \(U_{\text{steer}\text{~}}\), with geometry as shown in Fig. 2.48. Assume the robot is going in approximately a straight line and its angular deviation from that straight line is very small. Also assume the robot is traveling at a constant speed, \(V_{o}\). The dynamic equations relating the lateral velocity of the center of the robot as a result of commands in \(U_{\text{steer}\text{~}}\) are desired.

2.11 Determine the pitch, yaw, and roll control equations for the hexacopter shown in Fig. 2.49 that are similar to those for the quadcopter given in Eqs. (2.18) to (2.20).

Assume rotor \(\# 1\) is in the direction of flight, and the remaining rotors are numbered \(CW\) from that rotor. In other words, rotors \(\# 1\) and #4 will determine the pitch motion. Rotor #s 2, 3, 5, & 6 will determine roll motion. Pitch, roll and yaw motions are defined by the coordinate system shown in Fig. 2.14 in Example 2.5. In addition to developing the equations for the 3 degrees of freedom in terms of how the six rotor motors should be commanded (similar to those for the quadrotor in Eqs. (2.18)-(2.20)), it will also be necessary to decide which rotors are

Figure 2.49

Hexacopter

turning \(CW\) and which ones are turning \(CCW\). The direction of rotation for the rotors needs to be selected so there is no net torque about the vertical axis; that is, the hexacopter will have no tendency for yaw rotation in steady-state. Furthermore, a control action to affect pitch should have no effect on yaw or roll. Likewise, a control action for roll should have no effect on pitch or yaw, and a control action for yaw should have no effect on pitch or roll. In other words, the control actions should produce no cross-coupling between pitch, roll, and yaw, just as was the case for the quadcopter in Example 2.5.

2.12 In most cases, quadcopters have a camera mounted that does not swivel in the \(x - y\) plane and its direction of view is oriented at \(45^{\circ}\) to the arms supporting the rotors. Therefore, these drones typically fly in a direction that is aligned with the camera rather than along an axis containing two of the rotors. To simplify the flight dynamics, the \(x\)-direction of the coordinate system is aligned with the camera direction. Based on the coordinate definitions for the axes in Fig. 2.14, assume the \(x\)-axis lies half way between rotors # 1 and 2 and determine the rotor commands for the four rotors that would accomplish independent motion for pitch, roll, and yaw.

66. Problems for Section 2.2: Models of Electric Circuits

2.13 A first step toward a realistic model of an op-amp is given by the following equations and is shown in Fig. 2.50:

\[\begin{matrix} V_{out} & \ = \frac{10^{7}}{s + 1}\left\lbrack v_{+} - v_{-} \right\rbrack \\ i_{+} & \ = i_{-} = 0 \end{matrix}\]

Find the transfer function of the simple amplification circuit shown using this model.

Figure 2.50

Circuit for Problem 2.13

Figure 2.51

Circuit for Problem 2.14

2.14 Show the op-amp connection shown in Fig. 2.51 results in \(V_{\text{out}\text{~}} = V_{\text{in}\text{~}}\) if the op-amp is ideal. Give the transfer function if the op-amp has the nonideal transfer function of Problem 2.13.

2.15 A common connection for a motor power amplifier is shown in Fig. 2.52. The idea is to have the motor current follow the input voltage, and the connection is called a current amplifier. Assume the sense resistor \(r_{S}\) is very small compared with the feedback resistor \(R\), and find the transfer function from \(V_{in}\) to \(I_{a}\). Also show the transfer function when \(R_{f} = \infty\).

Figure 2.52

Op-amp circuit for Problem 2.15

Figure 2.53

Op-amp circuit for Problem 2.16

2.16 An op-amp connection with feedback to both the negative and the positive terminals is shown in Fig. 2.53. If the op-amp has the nonideal transfer function given in Problem 2.13, give the maximum value possible for the positive feedback ratio, \(P = \frac{r}{r + R}\), in terms of the negative feedback ratio, \(N = \frac{R_{in}}{R_{in} + R_{f}}\), for the circuit to remain stable.

Figure 2.54

(a) Passive lead;

(b) active lead;

(c) active lag; and

(d) passive notch circuits
2.17 Write the dynamic equations and find the transfer functions for the circuits shown in Fig. 2.54.

(a) Passive lead circuit

(b) Active lead circuit

(c) Active lag circuit

(d) Passive notch circuit

(a)

(b)

(c)

(d)

2.18 The very flexible circuit shown in Fig. 2.55 is called a biquad because its transfer function can be made to be the ratio of two second-order or quadratic polynomials. By selecting different values for \(R_{a},R_{b},R_{c}\), and \(R_{d}\), the circuit can realize a low-pass, band-pass, high-pass, or band-reject (notch) filter.

(a) Show that if \(R_{a} = R\) and \(R_{b} = R_{c} = R_{d} = \infty\), the transfer function from \(V_{\text{in}\text{~}}\) to \(V_{\text{out}\text{~}}\) can be written as the low-pass filter

\[\frac{V_{\text{out}\text{~}}}{V_{\text{in}\text{~}}} = \frac{A}{\frac{s^{2}}{\omega_{n}^{2}} + 2\zeta\frac{s}{\omega_{n}} + 1} \]

Figure 2.55

Op-amp biquad

where

\[\begin{matrix} A & \ = \frac{R}{R_{1}} \\ \omega_{n} & \ = \frac{1}{RC} \\ \zeta & \ = \frac{R}{2R_{2}} \end{matrix}\]

(b) Using the Matlab command step, compute and plot on the same graph the step responses for the biquad of Fig. 2.55 for \(A = 2\), \(\omega_{n} = 3\), and \(\zeta = 0.1\), \(0.5\), and \(1.0\).

2.19 Find the equations and transfer function for the biquad circuit of Fig. 2.55 if \(R_{a} = R,R_{d} = R_{1}\), and \(R_{b} = R_{c} = \infty\).

67. Problems for Section 2.3: Models of Electromechanical Systems

2.20 The torque constant of a motor is the ratio of torque to current and is often given in ounce-inches per ampere. (Ounce-inches have dimension force \(\times\) distance, where an ounce is \(1/16\) of a pound.) The electric constant of a motor is the ratio of back emf to speed and is often given in volts per \(1000rpm\). In consistent units, the two constants are the same for a given motor.

(a) Show that the units ounce-inches per ampere are proportional to volts per \(1000rpm\) by reducing both to MKS (SI) units.

(b) A certain motor has a back emf of \(30\text{ }V\) at \(1000rpm\). What is its torque constant in ounce-inches per ampere?

(c) What is the torque constant of the motor of part (b) in newton-meters per ampere?

Figure 2.56

Simplified model for capacitor microphone
2.21 The electromechanical system shown in Fig. 2.56 represents a simplified model of a capacitor microphone. The system consists in part of a parallel plate capacitor connected into an electric circuit. Capacitor plate \(a\) is rigidly fastened to the microphone frame. Sound waves pass through the mouthpiece and exert a force \(f_{e}(t)\) on plate \(b\), which has mass \(M\) and is connected to the frame by a set of springs and dampers. The capacitance \(C\) is a function of the distance \(x\) between the plates, as follows:

\[C(x) = \frac{\varepsilon A}{x} \]

where

\[\begin{matrix} & \varepsilon = \text{~}\text{dielectric constant of the material between the plates,}\text{~} \\ & A = \text{~}\text{surface area of the plates.}\text{~} \end{matrix}\]

The charge \(q\) and the voltage \(e\) across the plates are related by

\[q = C(x)e. \]

The electric field in turn produces the following force \(f_{e}\) on the movable plate that opposes its motion:

\[f_{e} = \frac{q^{2}}{2\varepsilon A} \]

(a) Write differential equations that describe the operation of this system. (It is acceptable to leave in nonlinear form.)

(b) Can one get a linear model?

(c) What is the output of the system?

2.22 A very typical problem of electromechanical position control is an electric motor driving a load that has one dominant vibration mode. The problem arises in computer-disk-head control, reel-to-reel tape drives, and many other applications. A schematic diagram is sketched in Fig. 2.57. The motor has an electrical constant \(K_{e}\), a torque constant \(K_{t}\), an armature inductance \(L_{a}\), and a resistance \(R_{a}\). The rotor has an inertia \(J_{1}\) and a viscous friction \(B\). The load has an inertia \(J_{2}\). The two inertias are connected by a shaft with a spring constant \(k\) and an equivalent viscous damping \(b\). Write the equations of motion.

Figure 2.57

Motor with a flexible load

\(\bigtriangleup \ 2.23\) For the robot in Fig. 2.47, assume you have command of the torque on a servo motor that is connected to the drive wheels with gears that have a 2:1 ratio, so the torque on the wheels is increased by a factor of 2 over that delivered by the servo. Determine the dynamic equations relating the speed of the robot with respect to the torque command of the servo. Your equations will require certain quantities, for example, mass of vehicle, inertia, and radius of the wheels. Assume you have access to whatever you need.

\(\bigtriangleup \ 2.24\) Using Fig. 2.36, derive the transfer function between the applied torque, \(T_{m}\), and the output, \(\theta_{2}\), for the case when there is a spring attached to the output load. That is, there is a torque applied to the output load, \(T_{S}\), where \(T_{S} = - K_{S}\theta_{2}\).

Figure 2.58

(a) Precision table kept level by actuators; (b) side view of one actuator

(a)

(b)

68. Problems for Section 2.4: Heat and Fluid-Flow Models

2.25 A precision table-leveling scheme shown in Fig. 2.58 relies on thermal expansion of actuators under two corners to level the table by raising or lowering their respective corners. The parameters are as follows:

\[\begin{matrix} T_{act} & \ = \text{~}\text{actuator temperature,}\text{~} \\ T_{amb} & \ = \text{~}\text{ambient air temperature,}\text{~} \\ R_{f} & \ = \text{~}\text{heat-flow coefficient between the actuator and the air,}\text{~} \\ C & \ = \text{~}\text{thermal capacity of the actuator,}\text{~} \\ R & \ = \text{~}\text{resistance of the heater.}\text{~} \end{matrix}\]

Assume (1) the actuator acts as a pure electric resistance, (2) the heat flow into the actuator is proportional to the electric power input, and (3) the motion \(d\) is proportional to the difference between \(T_{act}\) and \(T_{amb}\) due to thermal expansion. Find the differential equations relating the height of the actuator \(d\) versus the applied voltage \(v_{i}\).

Figure 2.59

Building air-conditioning: (a) high-rise building; (b) floor plan of the fourth floor

Figure 2.60

Two-tank fluid-flow system for Problem 2.27

2.26 An air conditioner supplies cold air at the same temperature to each room on the fourth floor of the high-rise building shown in Fig. 2.59(a). The floor plan is shown in Fig. 2.59(b). The cold airflow produces an equal amount of heat flow \(q\) out of each room. Write a set of differential equations governing the temperature in each room, where


\[\begin{matrix} & T_{O} = \text{~}\text{temperature outside the building,}\text{~} \\ & R_{O} = \text{~}\text{resistance to heat flow through the outer walls,}\text{~} \\ & R_{i} = \text{~}\text{resistance to heat flow through the inner walls.}\text{~} \end{matrix}\]

Assume (1) all rooms are perfect squares, (2) there is no heat flow through the floors or ceilings, and (3) the temperature in each room is uniform throughout the room. Take advantage of symmetry to reduce the number of differential equations to three.

2.27 For the two-tank fluid-flow system shown in Fig. 2.60, find the differential equations relating the flow into the first tank to the flow out of the second tank.

Figure 2.61

Two-tank fluid-flow system for Problem 2.28
2.28 A laboratory experiment in the flow of water through two tanks is sketched in Fig. 2.61. Assume Eq. (2.96) describes flow through the equal-sized holes at points \(A,B\), or \(C\).

(a) With holes at \(B\) and \(C\), but none at \(A\), write the equations of motion for this system in terms of \(h_{1}\) and \(h_{2}\). Assume when \(h_{2} = 15\text{ }cm\), the outflow is \(200\text{ }g/min\).

(b) At \(h_{1} = 30\text{ }cm\) and \(h_{2} = 10\text{ }cm\), compute a linearized model and the transfer function from pump flow (in cubic-centimeters per minute) to \(h_{2}\).

(c) Repeat parts (a) and (b) assuming hole B is closed and hole A is open. Assume \(h_{3} = 20\text{ }cm,h_{1} > 20\text{ }cm\), and \(h_{2} < 20\text{ }cm\).

2.29 The equations for heating a house are given by Eqs. (2.81) and (2.82), and in a particular case can be written with time in hours as

\[C\frac{dT_{h}}{dt} = Ku - \frac{T_{h} - T_{o}}{R} \]

where

(a) \(C\) is the thermal capacity of the house, \(BTU/\ ^{\circ}F\),

(b) \(T_{h}\) is the temperature in the house, \(\ ^{\circ}F\),

(c) \(T_{o}\) is the temperature outside the house, \(\ ^{\circ}F\),

(d) \(K\) is the heat rating of the furnace, \(= 90,000BTU/h\),

(e) \(R\) is the thermal resistance, \(\ ^{\circ}F\) per BTU/h,

(f) \(u\) is the furnace switch, \(= 1\) if the furnace is on and \(= 0\) if the furnace is off.

It is measured that, with the outside temperature at \(32^{\circ}F\) and the house at \(60^{\circ}F\), the furnace raises the temperature \(2^{\circ}F\) in six minutes \((0.1\text{ }h)\). With the furnace off, the house temperature falls \(2^{\circ}F\) in \(40\text{ }min\). What are the values of \(C\) and \(R\) for the house?

69. Dynamic Response

70. A Perspective on System Response

We discussed in Chapter 2 how to obtain the dynamic model of a system. In designing a control system, it is important to see how well a trial design matches the desired performance. We do this by solving the equations of the system model.

There are two ways to approach solving the dynamic equations. For a quick, approximate analysis, we use linear analysis techniques. The resulting approximations of system response provide insight into why the solution has certain features and how the system might be changed to modify the response in a desired direction. In contrast, a precise picture of the system response typically calls for numerical simulation of nonlinear equations of motion using computer aids. This chapter focuses on linear analysis and computer tools that can be used to solve for the time response of linear systems.

There are three domains within which to study dynamic response: the Laplace transform (s-plane), the frequency response, and the state space (analysis using the state-variable description). The
well-prepared control engineer needs to be fluent in all of them, so they will be treated in depth in Chapters 5, 6, and 7, respectively. The purpose of this chapter is to discuss some of the fundamental mathematical tools needed before studying analysis in the s-plane, frequency response, and state space.

71. Chapter Overview

The Laplace transform, reviewed in Section 3.1 (and Appendix A), is the mathematical tool for transforming differential equations into an easier-to-manipulate algebraic form. In addition to the mathematical tools at our disposal, there are graphical tools that can help us to visualize the model of a system and evaluate the pertinent mathematical relationships between elements of the system. One approach is the block diagram, which was introduced in Chapter 1. Blockdiagram manipulation will be discussed in Section 3.2 and allows the manipulation of transfer functions.

Once the transfer function has been determined, we can identify its poles and zeros, which tell us a great deal about the system characteristics, including its frequency response, introduced in Section 3.1. Sections 3.3 to 3.5 will focus on poles and zeros and some of the ways of manipulating them to steer system characteristics in a desired way. Once feedback is introduced, the possibility arises that the system may become unstable. To study this effect, in Section 3.6 we consider the definition of stability and Routh's test, which can determine stability by examining the coefficients of the system's characteristic equation. Finally, Section 3.7 will provide a historical perspective for the material in this chapter. An alternative graphical representation of a system is the signal-flow graph, which allows the determination of complicated transfer functions; it is discussed in Appendix W3.2.3 online at www.pearsonglobaleditions.com.

71.1. Review of Laplace Transforms

Two attributes of linear time-invariant systems (LTIs) form the basis for almost all analytical techniques applied to these systems:

  1. A linear system response obeys the principle of superposition.

  2. The response of an LTI system can be expressed as the convolution of the input with the unit impulse response of the system.

The concepts of superposition, convolution, and impulse response will be defined shortly.

From the second property (as we will show), it follows immediately that the response of an LTI system to an exponential input is also exponential. This result is the principal reason for the usefulness of Fourier and Laplace transforms in the study of LTI systems.

72.0.1. Response by Convolution

Superposition principle

72. EXAMPLE 3.1

The principle of superposition states that if the system has an input that can be expressed as a sum of signals, then the response of the system can be expressed as the sum of the individual responses to the respective signals. We can express superposition mathematically. Consider the system to have input \(u\) and output \(y\). Suppose further that, with the system at rest, we apply the input \(u_{1}(t)\) and observe the output \(y_{1}(t)\). After restoring the system to rest, we apply a second input \(u_{2}(t)\) and again observe the output, which we call \(y_{2}(t)\). Then, we form the composite input \(u(t) = \alpha_{1}u_{1}(t) + \alpha_{2}u_{2}(t)\). Finally, if superposition applies, then the response will be \(y(t) = \alpha_{1}y_{1}(t) + \alpha_{2}y_{2}(t)\). Superposition will apply if and only if the system is linear.

73. Superposition

Show that superposition holds for the system modeled by the first-order linear differential equation

\[\overset{˙}{y} + ky = u. \]

Solution. We let \(u = \alpha_{1}u_{1} + \alpha_{2}u_{2}\) and assume \(y = \alpha_{1}y_{1} + \alpha_{2}y_{2}\). Then \(\overset{˙}{y} = \alpha_{1}{\overset{˙}{y}}_{1} + \alpha_{2}{\overset{˙}{y}}_{2}\). If we substitute these expressions into the system equation, we get

\[\alpha_{1}{\overset{˙}{y}}_{1} + \alpha_{2}{\overset{˙}{y}}_{2} + k\left( \alpha_{1}y_{1} + \alpha_{2}y_{2} \right) = \alpha_{1}u_{1} + \alpha_{2}u_{2}. \]

From this, it follows that

\[\alpha_{1}\left( {\overset{˙}{y}}_{1} + ky_{1} - u_{1} \right) + \alpha_{2}\left( {\overset{˙}{y}}_{2} + ky_{2} - u_{2} \right) = 0. \]

If \(y_{1}\) is the solution with input \(u_{1}\) and \(y_{2}\) is the solution with input \(u_{2}\), then Eq. (3.1) is satisfied, the response is the sum of the individual responses, and superposition holds.
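Superposition is also easy to check numerically. The Matlab sketch below simulates the same first-order system with lsim for two test inputs and confirms that the response to a weighted sum of the inputs equals the weighted sum of the individual responses; the inputs and weights are arbitrary choices, not taken from the text.

k = 1; sys = tf(1, [1 k]);        % the system ydot + k*y = u
t = linspace(0, 10, 1000)';
u1 = sin(t); u2 = exp(-0.5*t);    % two arbitrary test inputs
a1 = 2; a2 = -3;                  % arbitrary weights
y1 = lsim(sys, u1, t);
y2 = lsim(sys, u2, t);
y12 = lsim(sys, a1*u1 + a2*u2, t);
max(abs(y12 - (a1*y1 + a2*y2)))   % near zero, to within simulation accuracy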

Notice the superposition result of Eq. (3.1) would also hold if \(k\) were a function of time. If \(k\) were constant, we call the system time invariant. In that case, it follows that if the input is delayed or shifted in time, then the output is unchanged except also being shifted by exactly the same amount. Mathematically, this is expressed by saying that, if \(y_{1}(t)\) is the output caused by \(u_{1}(t)\) then \(y_{1}(t - \tau)\) will be the response to \(u_{1}(t - \tau)\).

Time Invariance

Consider

\[{\overset{˙}{y}}_{1}(t) + k(t)y_{1}(t) = u_{1}(t) \]

and

\[{\overset{˙}{y}}_{2}(t) + k(t)y_{2}(t) = u_{1}(t - \tau) \]

where \(\tau\) is a constant shift. Assume that \(y_{2}(t) = y_{1}(t - \tau)\); then

\[\frac{dy_{1}(t - \tau)}{dt} + k(t)y_{1}(t - \tau) = u_{1}(t - \tau) \]

Let us make the change of variable \(t - \tau = \eta\), then

\[\frac{dy_{1}(\eta)}{d\eta} + k(\eta + \tau)y_{1}(\eta) = u_{1}(\eta) \]

Eq. (3.3) can satisfy Eq. (3.2) only if \(\tau = 0\), or if \(k(\eta + \tau) = k =\) constant, in which case

\[\frac{dy_{1}(\eta)}{d\eta} + ky_{1}(\eta) = u(\eta) \]

which is Eq. (3.1). Therefore, we conclude that if the system is time invariant, \(y(t - \tau)\) will be the response to \(u(t - \tau)\); that is, if the input is delayed by \(\tau\) sec, then the output is also delayed by \(\tau\) sec.

We are able to solve for the response of a linear system to a general signal simply by decomposing the given signal into a sum of the elementary components and, by superposition, concluding that the response to the general signal is the sum of the responses to the elementary signals. In order for this process to work, the elementary signals need to be sufficiently "rich" that any reasonable signal can be expressed as a sum of them, and their responses have to be easy to find. The most common candidates for elementary signals for use in linear systems are the impulse and the exponential.

Suppose the input signal to an LTI system is a short pulse as \(u_{1}(t) =\) \(p(t)\), and the corresponding output signal is \(y_{1}(t) = h(t)\), as shown in Fig. 3.1(a). Now if the input is scaled to \(u_{1}(t) = u(0)p(t)\), then by the scaling property of superposition, the output response will be \(y_{1}(t) =\) \(u(0)h(t)\). We showed that an LTI system obeys time invariance. If we delay the short pulse signal in time by \(\tau\), then the input is of the form \(u_{2}(t) = p(t - \tau)\) and the output response will also be delayed by the same amount \(y_{2}(t) = h(t - \tau)\) as shown in Fig. 3.1(b). Now, by superposition, the response to the two short pulses will be the sum of their individual outputs as shown in Fig. 3.1(c). If we have four pulses as the input, then the output will be the sum of the four individual responses as shown in Fig. 3.1(d). Any arbitrary input signal \(u(t)\) may be approximated by a series of pulses as shown in Fig. 3.2. We define a short pulse \(p_{\Delta}(t)\) as a rectangular pulse having unit area such that

\[p_{\Delta}(t) = \left\{ \begin{matrix} \frac{1}{\Delta}, & 0 \leq t \leq \Delta \\ 0, & \text{~}\text{elsewhere}\text{~} \end{matrix} \right.\ \]

as shown in Fig. 3.1(a). Suppose the response of the system to \(p_{\Delta}(t)\) is defined as \(h_{\Delta}(t)\). The response at time \(n\Delta\) to the pulse \(\Delta u(k\Delta)p_{\Delta}(t - k\Delta)\) applied at time \(k\Delta\) is

\[\Delta u(k\Delta)h_{\Delta}(n\Delta - k\Delta) \]

By superposition, the total response to the series of the short pulses at time \(t\) is given by

\[y(t) = \sum_{k = 0}^{\infty}\mspace{2mu}\Delta u(k\Delta)h_{\Delta}(t - k\Delta) \]

Figure 3.1

Illustration of convolution as the response of a system to a series of short pulse (impulse) input signals

Figure 3.2

Illustration of the representation of a general input signal as the sum of short pulses

Impulse response

Definition of impulse

Sifting property of impulse
If we take the limit as \(\Delta \rightarrow 0\), the basic pulse gets more and more narrow and taller while holding a constant area. We then have the concept of an impulse signal, \(\delta(t)\), and that will allow us to treat continuous signals. In that case, we have

\[\begin{matrix} \lim_{\Delta \rightarrow 0}\mspace{2mu} p_{\Delta}(t) = \delta(t) \\ \lim_{\Delta \rightarrow 0}\mspace{2mu} h_{\Delta}(t) = h(t) = \text{~}\text{the impulse response.}\text{~} \end{matrix}\]

Moreover, in the limit as \(\Delta \rightarrow 0\), the summation in Eq. (3.5) is replaced by the integral

\[y(t) = \int_{0}^{\infty}\mspace{2mu} u(\tau)h(t - \tau)d\tau \]

which is the convolution integral.
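The convolution integral can be approximated on a computer by exactly the pulse sum of Eq. (3.5). As a sketch, the Matlab lines below do this for the first-order system whose impulse response \(h(t) = e^{- kt}\) is derived in Example 3.3 below; the input and parameter values are arbitrary choices.

k = 1; dt = 0.01; t = (0:dt:10)';
u = sin(t);            % an arbitrary input starting at t = 0
h = exp(-k*t);         % impulse response of ydot + k*y = u (see Example 3.3)
y = conv(u, h)*dt;     % discrete approximation of the convolution integral
y = y(1:length(t));    % keep the response over the input interval
plot(t, y)             % compare with lsim(tf(1,[1 k]), u, t)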

The idea for the impulse comes from dynamics. Suppose we wish to study the motion of a baseball hit by a bat. The details of the collision between the bat and ball can be very complex as the ball deforms and the bat bends; however, for purposes of computing the path of the ball, we can summarize the effect of the collision as the net velocity change of the ball over a very short time period. We assume the ball is subjected to an impulse, a very intense force for a very short time. The physicist Paul Dirac suggested that such forces could be represented by the mathematical concept of an impulse \(\delta(t)\), which has the property that

\[\begin{matrix} \delta(t) & \ = 0,\ t \neq 0 \\ \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu}\delta(t)dt & \ = 1 \end{matrix}\]

If \(f(t)\) is continuous at \(t = \tau\), then it has the "sifting property."

\[\int_{- \infty}^{\infty}\mspace{2mu} f(\tau)\delta(t - \tau)d\tau = f(t). \]

In other words, the impulse is so short and so intense that no value of \(f\) matters except over the short range where the \(\delta\) occurs. Since integration is a limit of a summation process, Eq. (3.11) can be viewed as representing the function \(f\) as a sum of impulses. If we replace \(f\) by \(u\), then Eq. (3.11) represents an input \(u(t)\) as a sum of impulses of intensity \(u(t - \tau)\). To find the response to an arbitrary input, the principle of superposition tells us that we need only find the response to a unit impulse.

If the system is not only linear but also time invariant (LTI), then the impulse response is given by \(h(t - \tau)\) because the response at \(t\) to an input applied at \(\tau\) depends only on the difference between the time the impulse is applied and the time we are observing the response, that is, the elapsed time. Time-invariant systems are called shift invariant for this reason. For time-invariant systems, the output for a general input is given by the integral

\[y(t) = \int_{- \infty}^{\infty}\mspace{2mu} u(\tau)h(t - \tau)d\tau \]

or by changing of variables as \(\tau_{1} = t - \tau\),

The convolution integral

\[y(t) = \int_{\infty}^{- \infty}\mspace{2mu} u\left( t - \tau_{1} \right)h\left( \tau_{1} \right)\left( - d\tau_{1} \right) = \int_{- \infty}^{\infty}\mspace{2mu} h(\tau)u(t - \tau)d\tau \]

This is the convolution integral.

EXAMPLE 3.3

74. Convolution

We can illustrate convolution with a simple system. Determine the impulse response for the system described by the differential equation

\[\overset{˙}{y} + ky = u = \delta(t) \]

with an initial condition of \(y\left( 0^{-} \right) = 0\) before the impulse.

Solution. Because \(\delta(t)\) has an effect only near \(t = 0\), we can integrate this equation from just before zero to just after zero with the result that

\[\int_{0^{-}}^{0^{+}}\mspace{2mu}\overset{˙}{y}dt + k\int_{0^{-}}^{0^{+}}\mspace{2mu} ydt = \int_{0^{-}}^{0^{+}}\mspace{2mu}\delta(t)dt \]

The integral of \(\overset{˙}{y}\) is simply \(y\), the integral of \(y\) over so small a range is zero, and the integral of the impulse over the same range is unity. Therefore,

\[y\left( 0^{+} \right) - y\left( 0^{-} \right) = 1. \]

Because the system was at rest before application of the impulse, \(y\left( 0^{-} \right) = 0\). Thus the effect of the impulse is that \(y\left( 0^{+} \right) = 1\). For positive time, we have the differential equation

\[\overset{˙}{y} + ky = 0,\ y\left( 0^{+} \right) = 1. \]

If we assume a solution \(y = Ae^{st}\), then \(\overset{˙}{y} = Ase^{st}\). The preceding equation then becomes

\[\begin{matrix} Ase^{st} + kAe^{st} & \ = 0, \\ s + k & \ = 0, \\ s & \ = - k. \end{matrix}\]

Because \(y\left( 0^{+} \right) = 1\), it is necessary that \(A = 1\). Thus the solution for the impulse response is \(y(t) = h(t) = e^{- kt}\) for \(t > 0\). To take care of the fact that \(h(t) = 0\) for negative time, we define the unit-step function
Unit step

\[1(t) = \left\{ \begin{matrix} 0, & t < 0 \\ 1, & t \geq 0 \end{matrix} \right.\ \]

With this definition, the impulse response of the first-order system becomes

\[h(t) = e^{- kt}1(t) \]

The response of this system to a general input is given by the convolution of this impulse response with the input

\[\begin{matrix} y(t) & \ = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)u(t - \tau)d\tau \\ & \ = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} e^{- k\tau}1(\tau)u(t - \tau)d\tau \\ & \ = \int_{0}^{\infty}\mspace{2mu}\mspace{2mu} e^{- k\tau}u(t - \tau)d\tau \end{matrix}\]

For time-invariant systems, the output for a general input is given by the integral

\[y(t) = \int_{- \infty}^{\infty}\mspace{2mu} u(\tau)h(t - \tau)d\tau \]

Notice the limits on the integral are at infinity. Thus, either or both \(h\) and \(u\) may be nonzero for negative time. If \(h\) has values for negative time, it means that the system response starts before the input is applied! Systems which do this are called non-causal because they do not obey the usual law of cause and effect. \(\ ^{1}\) Of course, all physical systems are causal. Furthermore, in most cases of interest we take \(t = 0\) as the time when the input starts. In this case, with causal systems, the integral may be written as

\[y(t) = \int_{0}^{t}\mspace{2mu} u(\tau)h(t - \tau)d\tau \]

74.0.1. Transfer Functions and Frequency Response

A simple version of the transfer function concept was developed in Chapter 2. A more rigorous treatment of this concept using the convolution integral follows. The evaluation of the convolution integral Eq. (3.14) can be difficult and an indirect approach has been developed using the Laplace transform \(\ ^{2}\) defined as

\[Y(s) = \int_{- \infty}^{\infty}\mspace{2mu} y(t)e^{- st}dt \]

Applying this transform to the convolution,

\[Y(s) = \int_{- \infty}^{\infty}\mspace{2mu}\left\lbrack \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)u(t - \tau)d\tau \right\rbrack e^{- st}dt \]

Next, we exchange the order of integration such that we integrate with respect to \(t\) first

\[Y(s) = \int_{- \infty}^{\infty}\mspace{2mu}\left\lbrack \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} u(t - \tau)e^{- st}dt \right\rbrack h(\tau)d\tau \]

Changing variables of the inner integral by defining \(t - \tau = \eta\), we get

\[\begin{matrix} & Y(s) = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu}\left\lbrack \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} u(\eta)e^{- s(\eta + \tau)}d\eta \right\rbrack h(\tau)d\tau, \\ & Y(s) = \left\lbrack \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} u(\eta)e^{- s\eta}d\eta \right\rbrack\int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)e^{- s\tau}d\tau, \\ & Y(s) = U(s)H(s). \end{matrix}\]

In this solution, \(U(s)\) is the Laplace transform of the input time function and \(H(s)\), the Laplace transform of the impulse response, is defined as the transfer function. By this operation, the complicated convolution integral is replaced by a simple multiplication of the transforms. What remains is to interpret the transforms and the transfer function. In the first instance, the integrals of the transforms usually do not converge for all values of the variable \(s\), and they are only defined for a finite region in the \(s\)-plane.

An immediate consequence of convolution is that an input of the form \(e^{st}\) results in an output \(H(s)e^{st}\). Note that both the input and output are exponential time functions, and that the output differs from the input only in the amplitude \(H(s)\). \(H(s)\) is defined as the transfer function of the system. The specific constant \(s\) may be complex, expressed as \(s = \sigma_{1} + j\omega\). Thus, both the input and the output may be complex. If we let \(u(t) = e^{st}\) in Eq. (3.13), then

\[\begin{matrix} & y(t) = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)u(t - \tau)d\tau, \\ & y(t) = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)e^{s(t - \tau)}d\tau, \\ & y(t) = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)e^{st}e^{- s\tau}d\tau, \\ & y(t) = \int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu} h(\tau)e^{- s\tau}d\tau e^{st}, \\ & y(t) = H(s)e^{st}, \end{matrix}\]

where

\[H(s) = \int_{- \infty}^{\infty}\mspace{2mu} h(\tau)e^{- s\tau}d\tau.\ ^{3} \]

Laplace defined this integral and it is called the Laplace transform. Notice the limits on the integral are \(- \infty\) to \(+ \infty\), implying that \(h(t)\) may have values at any time. Equation (3.18) needs to be interpreted carefully. \(\ ^{4}\) Notice this input is exponential for all time \(( - \infty < t < \infty)\) and Eq. (3.18) represents the response for all time and hence there are no initial conditions, and Eq. (3.17) gives the steady-state behavior of the system. Therefore, if the input is an exponential for all time, and if we know the transfer function \(H(s)\), the output is readily computed by multiplication and the need for convolution goes away! The important conclusion is that if the input is an exponential time function so is the output and the scaling term is the transfer function. For any real, causal system, \(h(t) = 0\) for \(t < 0\), and the limits on the integral can be set from 0 to \(\infty\)

\[H(s) = \int_{0}^{\infty}\mspace{2mu} h(\tau)e^{- s\tau}d\tau \]

For a causal system, Eq. (3.13) simplifies to

\[y(t) = \int_{0}^{\infty}\mspace{2mu} h(\tau)u(t - \tau)d\tau \]

EXAMPLE 3.4

Transfer function

75. Transfer Function

Compute the transfer function for the system of Example 3.1, and find the output \(y\) for all time \(( - \infty < t < \infty)\) when the input \(u = e^{st}\) for all time and \(s\) is a given complex number.

Solution. The system equation from Example 3.3 is

\[\overset{˙}{y}(t) + ky(t) = u(t) = e^{st}. \]

We assume we can express \(y(t)\) as \(H(s)e^{st}\). With this form, we have \(\overset{˙}{y} =\) \(sH(s)e^{st}\), and Eq. (3.20) reduces to

\[sH(s)e^{st} + kH(s)e^{st} = e^{st}. \]

Solving for the transfer function \(H(s)\), we get

\[H(s) = \frac{1}{s + k} \]

Substituting this back into Eq. (3.17) yields the output for all time

\[y(t) = \frac{e^{st}}{s + k} \]

The integral in Eq. (3.18) does not need to be computed to find the transfer function of a system. Instead, one can assume a solution of the form of Eq. (3.17), substitute that into the differential equation of the system, then solve for the transfer function \(H(s)\).

The transfer function can be formally defined as follows: The function \(H(s)\), which is the transfer gain from \(U(s)\) to \(Y(s)\), input to output, is called the transfer function of the system. It is the ratio of the Laplace transform of the output of the system to the Laplace transform of the input. We can derive the transfer function explicitly. If we take the Laplace transform of both sides of Eq. (3.19), we have \(Y(s) = H(s)U(s)\) and

\[\frac{Y(s)}{U(s)} = H(s) \]

with the key assumption that all of the initial conditions on the system are zero.

The transfer function \(H(s)\) is the ratio of the Laplace transform of the output of the system to its input assuming all zero initial conditions.

If the input \(u(t)\) is the unit impulse \(\delta(t)\), then \(y(t)\) is the unit impulse response. The Laplace transform of \(u(t)\) is 1 and the transform of \(y(t)\) is \(H(s)\) because

\[Y(s) = H(s)\text{.}\text{~} \]

In words, this is to say

The transfer function \(H(s)\) is the Laplace transform of the unit impulse response \(h(t)\).

Thus, one way to characterize an LTI system is by applying a unit impulse and measuring the resulting response, which is a description (the inverse Laplace transform) of the transfer function.

For example, given the ordinary differential equation describing a third-order system with the output \(y(t)\) and input \(u(t)\)

\[\dddot{y} + a_{1}\overset{¨}{y} + a_{2}\overset{˙}{y} + a_{3}y = b_{1}\overset{¨}{u} + b_{2}\overset{˙}{u} + b_{3}u, \]

we take the Laplace transform of both sides of the equation, assuming zero initial conditions \(\left( y\left( 0^{-} \right) = \overset{˙}{y}\left( 0^{-} \right) = \overset{¨}{y}\left( 0^{-} \right) = u\left( 0^{-} \right) = \overset{˙}{u}\left( 0^{-} \right) = 0 \right)\), to obtain

\[\begin{matrix} s^{3}Y(s) + a_{1}s^{2}Y(s) + a_{2}sY(s) + a_{3}Y(s) = b_{1}s^{2}U(s) + b_{2}sU(s) + b_{3}U(s) \\ \left( s^{3} + a_{1}s^{2} + a_{2}s + a_{3} \right)Y(s) = \left( b_{1}s^{2} + b_{2}s + b_{3} \right)U(s) \end{matrix}\]

which leads to the transfer function \(H(s)\),

\[H(s) = \frac{Y(s)}{U(s)} = \frac{b_{1}s^{2} + b_{2}s + b_{3}}{s^{3} + a_{1}s^{2} + a_{2}s + a_{3}} = \frac{b(s)}{a(s)} \]

This idea can then be easily extended to a system of any order \(n\).
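In Matlab, such a transfer function is entered by listing the numerator and denominator coefficients in descending powers of \(s\). As a sketch, with arbitrary illustrative coefficients:

% Arbitrary coefficients, for illustration only
a1 = 3; a2 = 2; a3 = 5;
b1 = 1; b2 = 4; b3 = 6;
H = tf([b1 b2 b3], [1 a1 a2 a3])  % H(s) = (b1 s^2 + b2 s + b3)/(s^3 + a1 s^2 + a2 s + a3)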

76. EXAMPLE 3.5

Frequency response

Figure 3.3

\(RC\) circuit diagram

77. Transfer Function for an RC Circuit

Compute the transfer function for the RC circuit driven by a voltage source as shown in Fig. 3.3.

Solution. The system equation from Kirchhoff's voltage law is

\[\begin{matrix} Ri(t) + y(t) & \ = u(t) \\ i(t) & \ = C\frac{dy(t)}{dt}, \end{matrix}\]

or

\[RC\overset{˙}{y} + y = u(t). \]

If the input voltage is a unit impulse signal

\[RC\overset{˙}{y} + y = \delta(t), \]

and we take the Laplace transform of both sides of the above equation (see Appendix A)

\[RC\left( sY(s) - y\left( 0^{-} \right) \right) + Y(s) = U(s) = 1 \]

Then assuming zero initial condition \(\left( y\left( 0^{-} \right) = 0 \right)\) we find

\[H(s) = \frac{Y(s)}{U(s)} = Y(s) = \frac{1}{RCs + 1} \]

The output, that is, the inverse Laplace transform of \(Y(s)\), is the impulse response

\[y(t) = h(t) = \frac{1}{RC}e^{- \frac{t}{RC}}1(t). \]

Therefore, the transfer function for this system is

\[H(s) = \mathcal{L}\{ h(t)\} = \frac{1}{RCs + 1} \]

Frequency response

A very common way to use the exponential response of LTIs is in finding the frequency response, or response to a sinusoid. First, we express the sinusoid as a sum of two exponential expressions (Euler's relation):

\[Acos(\omega t) = \frac{A}{2}\left( e^{j\omega t} + e^{- j\omega t} \right). \]

If we let \(s = j\omega\) in the basic response formula Eq. (3.17), then the response to \(u(t) = e^{j\omega t}\) is \(y(t) = H(j\omega)e^{j\omega t}\); similarly, the response to \(u(t) = e^{- j\omega t}\) is \(H( - j\omega)e^{- j\omega t}\). By superposition, the response to the sum of these two exponentials, which make up the cosine signal, is the sum of the responses:

\[y(t) = \frac{A}{2}\left\lbrack H(j\omega)e^{j\omega t} + H( - j\omega)e^{- j\omega t} \right\rbrack \]

The transfer function \(H(j\omega)\) is a complex number that can be represented in polar form or in magnitude-and-phase form as \(H(j\omega) = M(\omega)e^{j\varphi(\omega)}\), or simply \(H = Me^{j\varphi}\). With this substitution, Eq. (3.27) becomes

\[\begin{matrix} y(t) & \ = \frac{A}{2}M\left( e^{j(\omega t + \varphi)} + e^{- j(\omega t + \varphi)} \right) \\ & \ = AMcos(\omega t + \varphi) \end{matrix}\]

where

\[M = |H(j\omega)|,\varphi = \angle H(j\omega). \]

This means if a system represented by the transfer function \(H(s)\) has a sinusoidal input with magnitude \(A\), the output will be sinusoidal at the same frequency with magnitude \(AM\) and will be shifted in phase by the angle \(\varphi\).

78. EXAMPLE 3.6: Frequency Response

For the system in Example 3.1, find the response to the sinusoidal input \(u = Acos(\omega t)\). That is, find the frequency response and plot the response for \(k = 1\).

Solution. In Example 3.4, we found the transfer function of the system in Example 3.1. To find the frequency response, we let \(s = j\omega\) so

\[H(s) = \frac{1}{s + k} \Longrightarrow H(j\omega) = \frac{1}{j\omega + k} \]

From this, we get

\[M = \left| \frac{1}{j\omega + k} \right| = \frac{1}{\sqrt{\omega^{2} + k^{2}}}\ \text{~}\text{and}\text{~}\ \varphi = - \tan^{- 1}\left( \frac{\omega}{k} \right) \]

According to Eq. (3.28), the response of this system to a sinusoid will be

\[y(t) = AMcos(\omega t + \varphi). \]

\(M\) is usually referred to as the amplitude ratio and \(\varphi\) is referred to as the phase, and they are both functions of the input frequency, \(\omega\). The Matlab program that follows is used to compute the amplitude ratio and phase for \(k = 1\), as shown in Fig. 3.4. The logspace command is used to set the frequency range (on a logarithmic scale) and the bode command is used to compute the frequency response in Matlab. Presenting frequency response in this manner (that is, on a log-log scale) was originated by H. W. Bode; thus, these plots are referred to as "Bode plots."\(\ ^{5}\) (See Chapter 6, Section 6.1.)

Figure 3.4 Frequency-response plot for \(k = 1\) (Source: Reprinted with permission of The MathWorks, Inc.)

k = 1; % gain value
s = tf('s'); % define Laplace variable
sysH = 1/(s + k); % form transfer function
w = logspace(-2,2); % set frequency range
[mag, phase] = bode(sysH, w); % compute frequency response
loglog(w, squeeze(mag)); % log-log plot of magnitude
semilogx(w, squeeze(phase)); % semi-log plot of phase

The key property of Laplace transforms

We can generalize the frequency response by studying the Laplace transform of a signal \(f(t)\) as a generalization of Eq. (3.18),

\[F(s) = \int_{- \infty}^{\infty}\mspace{2mu} f(t)e^{- st}dt \]

If we apply this definition to both \(u(t)\) and \(y(t)\) and use the convolution integral Eq. (3.13), we find that

\[Y(s) = H(s)U(s), \]

where \(Y(s)\) and \(U(s)\) are the Laplace transforms of \(y(t)\) and \(u(t)\), respectively.


Laplace transforms such as Eq. (3.30) can be used to study the complete response characteristics of feedback systems, including the transient response - that is, the time response to an initial condition or suddenly applied signal. This is in contrast to the use of Fourier transforms, which only take into account the steady-state response. A standard problem in control is to find the response \(y(t)\) of a system given the input \(u(t)\) and a model of the system. With Eq. (3.30), we have a means for computing the response of LTI systems to quite general inputs. Given any input into a system, we compute the transform of the input and the transfer function for the system. The transform of the output is then given by Eq. (3.31) as the product of these two. If we wanted the time function of the output, we would need to "invert" \(Y(s)\) to get what is called the inverse transform; this step is typically not carried out explicitly. Nevertheless, understanding the process necessary for deriving \(y(t)\) from \(Y(s)\) is important because it leads to insight into the behavior of linear systems. Hence, given a general linear system with transfer function \(H(s)\) and an input signal \(u(t)\), the procedure for determining \(y(t)\) using the Laplace transform is given by the following steps:

STEP 1. Determine the transfer function: \(H(s) = \mathcal{L}\) {impulse response of the system}. Compute \(H(s)\) by the following steps:

(a) Take the Laplace transform of the equations of motion. A table of transform properties is frequently useful in this process.

(b) Solve the resulting algebraic equations. Often this step is greatly helped by drawing the corresponding block diagram and solving the equations by graphical manipulation of the blocks or using Matlab.

STEP 2. Determine the Laplace transform of the input signal: \(U(s) = \mathcal{L}\{ u(t)\}\).

STEP 3. Compute the Laplace transform of the output: \(Y(s) = H(s)U(s)\).

STEP 4. Break up \(Y(s)\) by partial-fraction expansion.

STEP 5. Find the output of the system by computing the inverse Laplace transform of \(Y(s)\) in Step 4, \(y(t) = \mathcal{L}^{- 1}\{ Y(s)\}\) [that is, invert \(Y(s)\) to get \(y(t)\) ]:

(a) Look up the components of \(y(t)\) in a table of transform-time function pairs.

(b) Combine the components to give the total solution in the desired form.

As mentioned above, Steps 4 and 5 are almost never carried out in practice, and a modified solution for a qualitative rather than a quantitative solution is often adequate and almost always used for control design purposes. The process begins with the first three steps as before. However, rather than inverting \(Y(s)\), one can use prior knowledge and intuition about the effects of pole and zero locations in \(Y(s)\) on the response \(y(t)\) to estimate key features of \(y(t)\). That is, we get information about \(y(t)\) from the pole-zero constellation of \(Y(s)\) without actually inverting it, as discussed in the rest of this chapter. We can also obtain equivalent information from the Bode plot if that is available (see Chapter 6).

While it is possible to determine the transient response properties of the system using Eq. (3.30), it is generally more useful to use a simpler version of the Laplace transform based on the input beginning at time zero.

79. EXAMPLE 3.7: Frequency Response (Example 3.6 continued)

To continue with the system in Example 3.6, determine the response to an input that begins at \(t = 0\) as \(u(t) = sin(10t)1(t)\). Notice from Laplace transform tables (see Appendix A, Table A.2) that we have

\[\mathcal{L}\{ u(t)\} = \mathcal{L}\{ sin(10t)\} = \frac{10}{s^{2} + 100} \]

where \(\mathcal{L}\) denotes the Laplace transform, and the output of the system using partial fraction expansion (see Section 3.1.5) is given by

\[\begin{matrix} Y(s) & \ = H(s)U(s) \\ & \ = \frac{1}{s + 1}\frac{10}{s^{2} + 100}, \\ & \ = \frac{\alpha_{1}}{s + 1} + \frac{\alpha_{0}}{s + j10} + \frac{\alpha_{0}^{*}}{s - j10}, \\ & \ = \frac{\frac{10}{101}}{s + 1} + \frac{\frac{j}{2(1 - j10)}}{s + j10} + \frac{\frac{- j}{2(1 + j10)}}{s - j10}. \end{matrix}\]

The inverse Laplace transform of the output is given by (see Appendix A)

\[\begin{matrix} y(t) & \ = \frac{10}{101}e^{- t} + \frac{1}{\sqrt{101}}sin(10t + \varphi) \\ & \ = y_{1}(t) + y_{2}(t), \end{matrix}\]

where

\[\varphi = \tan^{- 1}( - 10) = - {84.2}^{\circ} \]

The component \(y_{1}(t)\) is called the transient response as it decays to zero as time goes on, and the component \(y_{2}(t)\) is called the steady state and equals the response given by Eq. (3.29). Fig. 3.5(a) is a plot of the time history of the output showing the different components \(\left( y_{1},y_{2} \right)\) and the composite \((y)\) output response. The output frequency is 10 rad/sec, and the steady-state phase difference measured from Fig. 3.5(b) is approximately \(10 \cdot \delta t = 1.47\ rad = {84.2}^{\circ}\).\(\ ^{6}\) Figure 3.5(b) shows the output lags the input by \({84.2}^{\circ}\). It also shows that the steady-state amplitude of the output is the amplitude ratio \(\frac{1}{\sqrt{101}} = 0.0995\) (that is, the amplitude of the input signal times the magnitude of the transfer function evaluated at \(\omega = 10\ rad/sec\)).

Figure 3.5 (a) Complete transient response; (b) phase lag between output and input
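The residues in the partial-fraction expansion of \(Y(s)\) above can also be cross-checked numerically with Matlab's residue function; this is a quick sketch, not part of the original solution:

num = 10; % numerator of Y(s)
den = conv([1 1],[1 0 100]); % denominator (s + 1)(s^2 + 100)
[r,p,k] = residue(num,den) % residues at the poles p = ±j10 and p = -1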

This example illustrates that the response of an LTI system to a sinusoid of frequency \(\omega\) is a sinusoid with the same frequency and with an amplitude ratio equal to the magnitude of the transfer function evaluated at the input frequency. Furthermore, the phase difference between input and output signals is given by the phase of the transfer function evaluated at the input frequency. The magnitude ratio and phase difference can be computed from the transfer function as just discussed; they can also be measured experimentally quite easily in the laboratory by driving the system with a known sinusoidal input and measuring the steady-state amplitude and phase of the system's output. The input frequency is set to sufficiently many values so curves such as the one in Fig. 3.4 are obtained.

\(\ ^{6}\) The phase difference may also be determined by a Lissajous pattern.

Definition of Laplace transform

79.0.1. The \(\mathcal{L}_{-}\)Laplace Transform

In this book, it is useful to define a one-sided (or unilateral) Laplace transform, which uses \(0^{-}\)(that is, a value just before \(t = 0\) ) as the lower limit of integration in Eq. (3.30). The \(\mathcal{L}_{-}\)Laplace transform of \(f(t)\), denoted by \(\mathcal{L}_{-}\{ f(t)\} = F(s)\), is a function of the complex variable \(s =\) \(\sigma_{1} + j\omega\), where

\[F(s) \triangleq \int_{0^{-}}^{\infty}\mspace{2mu} f(t)e^{- st}dt \]

The decaying exponential term in the integrand in effect provides a built-in convergence factor if \(\sigma_{1} > 0\). This means that even if \(f(t)\) does not vanish as \(t \rightarrow \infty\), the integrand will vanish for sufficiently large values of \(\sigma\) if \(f\) does not grow at a faster-than-exponential rate. The fact that the lower limit of integration is at \(0^{-}\)allows the use of an impulse function at \(t = 0\), as illustrated in Example 3.3; however, this distinction between \(t = 0^{-}\)and \(t = 0\) does not usually come up in practice. We will therefore, for the most part, drop the minus superscript on \(t = 0\); however, we will return to using the notation \(t = 0^{-}\)when an impulse at \(t = 0\) is involved and the distinction is of practical value.

If Eq. (3.32) is a one-sided transform, then by extension, Eq. (3.30) is a two-sided Laplace transform. \(\ ^{7}\) We will use the \(\mathcal{L}\) symbol from here on to mean \(\mathcal{L}_{-}\).

On the basis of the formal definition in Eq. (3.32), we can ascertain the properties of Laplace transforms and compute the transforms of common time functions. The analysis of linear systems by means of Laplace transforms usually involves using tables of common properties and time functions, so we have provided this information in Appendix A. The tables of time functions and their Laplace transforms, together with the table of properties, permit us to find transforms of complex signals from simpler ones. For a thorough study of Laplace transforms and extensive tables, see Churchill (1972) and Campbell and Foster (1948). For more study of the two-sided transform, see Van der Pol and Bremmer (1955). These authors show that the time function can be obtained from the Laplace transform by the inverse relation

\[f(t) = \frac{1}{2\pi j}\int_{\sigma_{c} - j\infty}^{\sigma_{c} + j\infty}\mspace{2mu} F(s)e^{st}ds \]

where \(\sigma_{c}\) is a selected value to the right of all the singularities of \(F(s)\) in the \(s\)-plane. In practice, this relation is seldom used. Instead, complex Laplace transforms are broken down into simpler ones that are listed in the tables along with their corresponding time responses.

Let us compute a few Laplace transforms of some typical time functions.

EXAMPLE 3.8: Step and Ramp Transforms

Find the Laplace transform of the step \(a1(t)\) and ramp \(bt1(t)\) functions.

Solution. For a step of size \(a,f(t) = a1(t)\), and from Eq. (3.32), we have

\[F(s) = \int_{0}^{\infty}\mspace{2mu} ae^{- st}dt = \left. \ \frac{- ae^{- st}}{s} \right|_{0}^{\infty} = 0 - \frac{- a}{s} = \frac{a}{s},\ Re(s) > 0. \]

For the ramp signal \(f(t) = bt1(t)\), again from Eq. (3.32), we have

\[F(s) = \int_{0}^{\infty}\mspace{2mu} bte^{- st}dt = \left\lbrack - \frac{bte^{- st}}{s} - \frac{be^{- st}}{s^{2}} \right\rbrack_{0}^{\infty} = \frac{b}{s^{2}},\ Re(s) > 0 \]

where we employed the technique of integration by parts,

\[\int_{}^{}\ udv = uv - \int_{}^{}\ vdu \]

with \(u = bt\) and \(dv = e^{- st}dt\). We can then extend the domain of validity of \(F(s)\) to the entire \(s\)-plane except at the pole location, namely the origin (see Appendix A).

A more subtle example is that of the impulse function.

80. EXAMPLE 3.9: Impulse Function Transform

Find the Laplace transform of the unit-impulse function.

Solution. From Eq. (3.32), we get

\[F(s) = \int_{0^{-}}^{\infty}\mspace{2mu}\delta(t)e^{- st}dt = \int_{0^{-}}^{0^{+}}\mspace{2mu}\delta(t)dt = 1 \]

EXAMPLE 3.10: Sinusoid Transform

Find the Laplace transform of the sinusoid function.

Solution. Again, we use Eq. (3.32) to get

\[\mathcal{L}\{ sin\omega t\} = \int_{0}^{\infty}\mspace{2mu}(sin\omega t)e^{- st}dt \]

If we substitute the relation from Eq. (WA.34) in Appendix WA (available online at www.pearsonglobaleditions.com),

\[sin\omega t = \frac{e^{j\omega t} - e^{- j\omega t}}{2j} \]

into Eq. (3.35), we find that

\[\begin{matrix} \mathcal{L}\{ sin\omega t\} & \ = \int_{0}^{\infty}\mspace{2mu}\mspace{2mu}\left( \frac{e^{j\omega t} - e^{- j\omega t}}{2j} \right)e^{- st}dt \\ & \ = \frac{1}{2j}\int_{0}^{\infty}\mspace{2mu}\mspace{2mu}\left( e^{(j\omega - s)t} - e^{- (j\omega + s)t} \right)dt \\ & \ = \left. \ \frac{1}{2j}\left\lbrack \frac{1}{j\omega - s}e^{(j\omega - s)t} - \frac{1}{j\omega + s}e^{- (j\omega + s)t} \right\rbrack \right|_{0}^{\infty} \\ & \ = \frac{\omega}{s^{2} + \omega^{2}},\ Re(s) > 0 \end{matrix}\]

We can then extend the domain of validity of the computed Laplace transform to the entire \(s\)-plane except at the pole locations \(s = \pm j\omega\) (see Appendix A).
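This transform can also be confirmed symbolically in Matlab; the following sketch assumes the Symbolic Math Toolbox is available:

syms t s
syms w positive
F = laplace(sin(w*t), t, s) % returns w/(s^2 + w^2)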

Table A.2 in Appendix A lists Laplace transforms for elementary time functions. Each entry in the table follows from direct application of the transform definition of Eq. (3.32), as demonstrated by Examples 3.8 through 3.10.

81.0.1. Properties of Laplace Transforms

In this section, we will address each of the significant properties of the Laplace transform listed in Table A.1. For the proofs of these properties and related examples as well as the Initial Value Theorem, the reader is referred to Appendix A.

82. Superposition

One of the most important properties of the Laplace transform is that it is linear, which means that the principle of superposition applies:

\[\mathcal{L}\left\{ \alpha f_{1}(t) + \beta f_{2}(t) \right\} = \alpha F_{1}(s) + \beta F_{2}(s) \]

The amplitude scaling property is a special case of this; that is,

\[\mathcal{L}\{\alpha f(t)\} = \alpha F(s) \]

83. Time Delay

Suppose a function \(f(t)\) is delayed by \(\lambda > 0\) units of time, \(f_{1}(t) = f(t -\) \(\lambda)\). Its Laplace transform is

\[F_{1}(s) = \int_{0}^{\infty}\mspace{2mu} f(t - \lambda)e^{- st}dt = e^{- s\lambda}F(s) \]

From this result, we see that a time delay of \(\lambda\) corresponds to multiplication of the transform by \(e^{- s\lambda}\).

84. Time Scaling

It is sometimes useful to time-scale equations of motion. For example, in the control system of a disk drive, it is meaningful to measure time in milliseconds (see also Chapter 10). If the time \(t\) is scaled by a factor \(a\), \(f_{1}(t) = f(at)\), then the Laplace transform of the time-scaled signal is

\[F_{1}(s) = \int_{0}^{\infty}\mspace{2mu} f(at)e^{- st}dt = \frac{1}{|a|}F\left( \frac{s}{a} \right). \]

85. Shift in Frequency

Multiplication (modulation) of \(f(t)\) by an exponential expression in the time domain, \(f_{1}(t) = e^{- at}f(t)\), corresponds to a shift in the frequency domain:

\[F_{1}(s) = \int_{0}^{\infty}\mspace{2mu} e^{- at}f(t)e^{- st}dt = F(s + a) \]

86. Differentiation

The transform of the derivative of a signal is related to its Laplace transform and its initial condition as follows:

\[\mathcal{L}\left\{ \frac{df}{dt} \right\} = \int_{0^{-}}^{\infty}\mspace{2mu}\left( \frac{df}{dt} \right)e^{- st}dt = - f\left( 0^{-} \right) + sF(s). \]

Another application of Eq. (3.41) leads to

\[\mathcal{L}\{\overset{¨}{f}\} = s^{2}F(s) - sf\left( 0^{-} \right) - \overset{˙}{f}\left( 0^{-} \right) \]

Repeated application of Eq. (3.41) leads to

\[\mathcal{L}\left\{ f^{m}(t) \right\} = s^{m}F(s) - s^{m - 1}f\left( 0^{-} \right) - s^{m - 2}\overset{˙}{f}\left( 0^{-} \right) - \cdots - f^{(m - 1)}\left( 0^{-} \right), \]

where \(f^{m}(t)\) denotes the \(m\) th derivative of \(f(t)\) with respect to time.

87. Integration

The Laplace transform of the integral of a time function \(f(t);f_{1}(t) =\) \(\int_{0}^{t}\mspace{2mu} f(\xi)d\xi\), is given by,

\[F_{1}(s) = \mathcal{L}\left\{ \int_{0}^{t}\mspace{2mu}\mspace{2mu} f(\xi)d\xi \right\} = \frac{1}{s}F(s) \]

which means that we simply multiply the function's Laplace transform by \(\frac{1}{s}\).

88. Convolution

We have seen previously that the response of a system is determined by convolving the input with the impulse response of the system, or by forming the product of the transfer function and the Laplace transform of the input. The discussion that follows extends this concept to various time functions.

Convolution in the time domain corresponds to multiplication in the frequency domain. Assume \(\mathcal{L}\left\{ f_{1}(t) \right\} = F_{1}(s)\) and \(\mathcal{L}\left\{ f_{2}(t) \right\} = F_{2}(s)\). Then,

\[\mathcal{L}\left\{ f_{1}(t)*f_{2}(t) \right\} = \int_{0}^{\infty}\mspace{2mu} f_{1}(t)*f_{2}(t)e^{- st}dt = F_{1}(s)F_{2}(s) \]

where \(*\) is the convolution operator. This implies that

\[\mathcal{L}^{- 1}\left\{ F_{1}(s)F_{2}(s) \right\} = f_{1}(t)*f_{2}(t) \]

A similar, or dual, result is discussed next.

89. Time Product

Multiplication in the time domain corresponds to convolution in the frequency domain:

\[\mathcal{L}\left\{ f_{1}(t)f_{2}(t) \right\} = \frac{1}{2\pi j}F_{1}(s)*F_{2}(s) \]

90. Multiplication by Time

Multiplication by time \(f_{1}(t) = tf(t)\) corresponds to differentiation in the frequency domain:

\[F_{1}(s) = \mathcal{L}\{ tf(t)\} = - \frac{d}{ds}F(s) \]
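Several of these properties are easy to spot-check symbolically. The following sketch, assuming the Symbolic Math Toolbox, verifies the differentiation and multiplication-by-time properties for the illustrative signal \(f(t) = e^{- 2t}\):

syms t s
f = exp(-2*t); % illustrative test signal
F = laplace(f, t, s); % F(s) = 1/(s + 2)
laplace(diff(f, t), t, s) % returns s*F(s) - f(0) = -2/(s + 2)
laplace(t*f, t, s) % returns -d/ds F(s) = 1/(s + 2)^2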

90.0.1. Inverse Laplace Transform by Partial-Fraction Expansion

The easiest way to find \(f(t)\) from its Laplace transform \(F(s)\), if \(F(s)\) is rational, is to expand \(F(s)\) as a sum of simpler terms that can be found in the tables. The basic tool for performing this operation is called partial-fraction expansion. Consider the general form for the rational function \(F(s)\) consisting of the ratio of two polynomials:

\[F(s) = \frac{b_{1}s^{m} + b_{2}s^{m - 1} + \cdots + b_{m + 1}}{s^{n} + a_{1}s^{n - 1} + \cdots + a_{n}} \]

By factoring the polynomials, this same function could also be expressed in terms of the product of factors as

\[F(s) = K\frac{\prod_{i = 1}^{m}\left( s - z_{i} \right)}{\prod_{i = 1}^{n}\left( s - p_{i} \right)} \]

We will discuss the simple case of distinct poles here. For a transform \(F(s)\) representing the response of any physical system, \(m \leq n\). When \(s = z_{i},s\) is referred to as a zero of the function, and when \(s = p_{i},s\) is referred to as a pole of the function. Assuming for now the poles \(\left\{ p_{i} \right\}\) are real or complex but distinct, we rewrite \(F(s)\) as the partial fraction

\[F(s) = \frac{C_{1}}{s - p_{1}} + \frac{C_{2}}{s - p_{2}} + \cdots + \frac{C_{n}}{s - p_{n}} \]

The cover-up method of determining coefficients

Next, we determine the set of constants \(\left\{ C_{i} \right\}\). We multiply both sides of Eq. (3.51) by the factor \(s - p_{1}\) to get

\[\left( s - p_{1} \right)F(s) = C_{1} + \frac{s - p_{1}}{s - p_{2}}C_{2} + \cdots + \frac{\left( s - p_{1} \right)}{s - p_{n}}C_{n} \]

If we let \(s = p_{1}\) on both sides of Eq. (3.52), then all the \(C_{i}\) terms will equal zero except for the first one. For this term,

\[C_{1} = \left. \ \left( s - p_{1} \right)F(s) \right|_{s = p_{1}}\text{.}\text{~} \]

The other coefficients can be expressed in a similar form:

\[C_{i} = \left. \ \left( s - p_{i} \right)F(s) \right|_{s = p_{i}}. \]

This process is called the cover-up method because, in the factored form of \(F(s)\) [Eq. (3.50)], we can cover up the individual denominator terms, evaluate the rest of the expression with \(s = p_{i}\), and determine the coefficients \(C_{i}\). Once this has been completed, the time function becomes

\[f(t) = \sum_{i = 1}^{n}\mspace{2mu} C_{i}e^{p_{i}t}1(t) \]

because, as entry 7 in Table A.2 shows, if

\[F(s) = \frac{1}{s - p_{i}} \]

then

\[f(t) = e^{p_{i}t}1(t). \]

For the cases of quadratic factors or repeated roots in the denominator, see Appendix A.

91. EXAMPLE 3.11: Partial-Fraction Expansion: Distinct Real Roots

Suppose you have computed \(Y(s)\) and found that

\[Y(s) = \frac{(s + 4)(s + 3)}{s(s + 9)(s + 2)} \]

Find \(y(t)\).

Solution. We may write \(Y(s)\) in terms of its partial-fraction expansion:

\[Y(s) = \frac{C_{1}}{s} + \frac{C_{2}}{s + 9} + \frac{C_{3}}{s + 2} \]

Using the cover-up method, we get

\[C_{1} = \left. \ \frac{(s + 4)(s + 3)}{(s + 9)(s + 2)} \right|_{s = 0} = \frac{2}{3}\text{.}\text{~} \]

In a similar fashion,

\[C_{2} = \left. \ \frac{(s + 4)(s + 3)}{s(s + 2)} \right|_{s = - 9} = \frac{10}{21} \]

and

\[C_{3} = \left. \ \frac{(s + 4)(s + 3)}{s(s + 9)} \right|_{s = - 2} = - \frac{1}{7} \]

We can check the correctness of the result by adding the components again to verify that the original function has been recovered. With the partial-fraction expansion completed, the solution can be looked up in the tables at once to be

\[y(t) = \frac{2}{3}1(t) + \frac{10}{21}e^{- 9t}1(t) - \frac{1}{7}e^{- 2t}1(t) \]

The partial fraction expansion may be computed using the residue function in Matlab:

num = conv([1 4],[1 3]); % form numerator polynomial
den = conv([1 9 0],[1 2]); % form denominator polynomial
[r,p,k] = residue(num,den); % compute the residues

which yields the result

r = [0.4762 -0.1429 0.6667]'; p = [-9 -2 0]'; k = [];

and agrees with the hand calculations. Note the conv function in Matlab is used to multiply two polynomials. (The arguments of the function are the polynomial coefficients.)

91.0.1. The Final Value Theorem

An especially useful property of the Laplace transform in control, known as the Final Value Theorem, allows us to compute the constant steady-state value of a time function given its Laplace transform. The theorem follows from the development of partial-fraction expansion. Suppose we have a transform \(Y(s)\) of a signal \(y(t)\) and wish to know the final value of \(y(t)\) from \(Y(s)\). There are three possibilities for the limit. It can be constant, undefined, or unbounded. If \(Y(s)\) has any poles (that is, denominator roots, as described in Section 3.1.5) in the right half of the \(s\)-plane - that is, if the real part of any \(p_{i} > 0\) - then \(y(t)\) will grow and the limit will be unbounded. If \(Y(s)\) has a pair of poles on the imaginary axis of the \(s\)-plane (that is, \(p_{i} = \pm j\omega\)), then \(y(t)\) will contain a sinusoid that persists forever, and the final value will not be defined. Only one case can provide a nonzero constant final value: If all poles of \(Y(s)\) are in the left half of the \(s\)-plane, except for one at \(s = 0\), then all terms of \(y(t)\) will decay to zero except the term corresponding to the pole at \(s = 0\), and that term corresponds to a constant in time. Thus, the final value is given by the coefficient associated with the pole at \(s = 0\). Therefore, the Final Value Theorem is as follows:

The Final Value Theorem

If all poles of \(sY(s)\) are in the left half of the s-plane, then

\[\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \lim_{s \rightarrow 0}\mspace{2mu} sY(s) \]

This relationship is proved in Appendix A.

94. EXAMPLE 3.12: Final Value Theorem

Find the final value of the system corresponding to

\[Y(s) = \frac{7(s + 5)}{2s\left( s^{2} + 2s + 3 \right)} \]

Solution. Applying the Final Value Theorem, we obtain

\[y(\infty) = \left. \ sY(s) \right|_{s = 0} = \frac{7 \cdot 5}{2 \cdot 3} = 5.83 \]

Thus, after the transients have decayed to zero, \(y(t)\) will settle to a constant value of 5.83.

Use the Final Value Theorem on stable systems only

Care must be taken to apply the Final Value Theorem only to stable systems (see Section 3.6). While one could use Eq. (3.54) on any \(Y(s)\), doing so could result in erroneous results, as shown in the next example.

EXAMPLE 3.13: Incorrect Use of the Final Value Theorem

Find the final value of the signal corresponding to

\[Y(s) = \frac{4}{s\left( s^{2} - 5s + 6 \right)} \]

Solution. If we blindly apply Eq. (3.54), we obtain

\[y(\infty) = \left. \ sY(s) \right|_{s = 0} = \frac{2}{3} \]

However,

\[y(t) = \frac{2}{3} - 2e^{2t} + \frac{4}{3}e^{3t} \]

which grows without bound as \(t \rightarrow \infty\), so the final value is not finite. This is due to the presence of the unstable poles at \(s = 2\) and \(s = 3\).

Computing DC gain by the Final Value Theorem

The theorem can also be used to find the DC gain of a system. The DC gain is the ratio of the output of a system to its input (presumed constant) after all transients have decayed. To find the DC gain, we assume there is a unit-step input \(\lbrack U(s) = 1/s\rbrack\) and we use the Final Value Theorem to compute the steady-state value of the output. Therefore, for a system transfer function \(G(s)\),

\[DC\text{~}\text{gain}\text{~} = \lim_{s \rightarrow 0}\mspace{2mu} sG(s)\frac{1}{s} = \lim_{s \rightarrow 0}\mspace{2mu} G(s) \]

95. EXAMPLE 3.14: DC Gain

A system whose transfer function is

\[G(s) = \frac{4(s + A)}{s^{2} + 7s + 5} \]

has a DC gain of 2.5. Find the value of \(A\).

Solution. Applying Eq. (3.55) we get

\[DC\text{~}\text{gain}\text{~} = \left. \ G(s) \right|_{s = 0} = \frac{4A}{5} = 2.5\text{.}\text{~} \]

Therefore, \(A = 3.125\).
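The result is easy to confirm with Matlab's dcgain function, which evaluates \(G(0)\) directly; a quick sketch:

s = tf('s'); % define Laplace variable
A = 3.125; % value found above
sysG = 4*(s + A)/(s^2 + 7*s + 5); % form transfer function
dcgain(sysG) % returns 2.5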

95.0.1. Using Laplace Transforms to Solve Differential Equations

Laplace transforms can be used to solve differential equations using the properties described in Appendix A. First, we find the Laplace transform of the differential equation using the differentiation properties in Eqs. (A.12) and (A.13) in Appendix A. Then we find the Laplace transform of the output; using partial-fraction expansion and Table A.2, this can be converted to a time response function. We will illustrate this with three examples.

96. EXAMPLE 3.15: Homogeneous Differential Equation

Find the solution to the differential equation

\[\overset{¨}{y}(t) + y(t) = 0,\ \text{~}\text{where}\text{~}\ y(0) = \alpha,\overset{˙}{y}(0) = \beta\text{.}\text{~} \]

Solution. Using Eq. (3.42), the Laplace transform of the differential equation is

\[\begin{matrix} s^{2}Y(s) - \alpha s - \beta + Y(s) & \ = 0 \\ \left( s^{2} + 1 \right)Y(s) & \ = \alpha s + \beta \\ Y(s) & \ = \frac{\alpha s}{s^{2} + 1} + \frac{\beta}{s^{2} + 1} \end{matrix}\]

After looking up in the transform tables (see Appendix A, Table A.2) the two terms on the right side of the preceding equation, we get

\[y(t) = \lbrack\alpha cost + \beta sint\rbrack 1(t), \]

where \(1(t)\) denotes a unit-step function. We can verify this solution is correct by substituting it back into the differential equation.
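The same verification can be carried out symbolically with Matlab's dsolve function; this is a sketch, assuming the Symbolic Math Toolbox:

syms y(t) alpha beta
Dy = diff(y, t);
dsolve(diff(y, t, 2) + y == 0, y(0) == alpha, Dy(0) == beta) % returns alpha*cos(t) + beta*sin(t)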

Another example will illustrate the solution when the equations are not homogeneous - that is, when the system is forced.

97. EXAMPLE 3.16: Forced Differential Equation

Find the solution to the differential equation \(\overset{¨}{y}(t) + 5\overset{˙}{y}(t) + 4y(t) = 3\), where \(y(0) = \alpha,\overset{˙}{y}(0) = \beta\).

Solution. Taking the Laplace transform of both sides using Eqs. (3.41) and (3.42), we get

\[s^{2}Y(s) - s\alpha - \beta + 5\lbrack sY(s) - \alpha\rbrack + 4Y(s) = \frac{3}{s} \]

Solving for \(Y(s)\) yields

\[Y(s) = \frac{s(s\alpha + \beta + 5\alpha) + 3}{s(s + 1)(s + 4)}. \]

The partial-fraction expansion using the cover-up method is

\[Y(s) = \frac{\frac{3}{4}}{s} - \frac{\frac{3 - \beta - 4\alpha}{3}}{s + 1} + \frac{\frac{3 - 4\alpha - 4\beta}{12}}{s + 4}\text{.}\text{~} \]

Therefore, the time function is given by

\[y(t) = \left( \frac{3}{4} + \frac{- 3 + \beta + 4\alpha}{3}e^{- t} + \frac{3 - 4\alpha - 4\beta}{12}e^{- 4t} \right)1(t). \]

By differentiating this solution twice and substituting the result in the original differential equation, we can verify this solution satisfies the differential equation.

The solution is especially simple if the initial conditions are all zero.

EXAMPLE 3.17: Forced Equation Solution with Zero Initial Conditions

Find the solution to \(\overset{˙}{y}(t) + 6y(t) = u(t),y(0) = 0,u(t) = 1 - 0.3e^{- 3t}\),

  1. using partial-fraction expansion,

  2. using Matlab.

98. Solution

  1. Taking the Laplace transform of both sides, we get

\[sY(s) + 6Y(s) = \frac{1}{s} - \frac{0.3}{s + 3} \]

Solving for \(Y(s)\) yields

\[Y(s) = \frac{0.7s + 3}{s(s + 3)(s + 6)} \]

The partial-fraction expansion using the cover-up method is

\[Y(s) = \frac{\frac{1}{6}}{s} - \frac{\frac{1}{10}}{s + 3} - \frac{\frac{2}{30}}{s + 6}. \]

Poles indicate response character.
Therefore, the time function is given by

\[y(t) = \left( \frac{1}{6} - \frac{1}{10}e^{- 3t} - \frac{2}{30}e^{- 6t} \right)1(t) \]

  2. The partial-fraction expansion may also be computed using the Matlab residue function,

num = [0.7 3]; % form numerator
den = poly([0; -3; -6]); % form denominator polynomial from its roots
[r,p,k] = residue(num,den); % compute the residues

which results in the desired answer

r = [-0.0667 -0.1000 0.1667]'; p = [-6 -3 0]'; k = [];

and agrees with the hand calculations.

The primary value of using the Laplace transform method of solving differential equations is that it provides information concerning the qualitative characteristic behavior of the response. Once we know the values of the poles of \(Y(s)\), we know what kind of characteristic terms will appear in the response. In Example 3.16, the pole at \(s = - 1\) produced a decaying \(y = Ce^{- t}\) term in the response. The pole at \(s = - 4\) produced a \(y = Ce^{- 4t}\) term in the response, which decays faster. If there had been a pole at \(s = + 1\), there would have been a growing \(y = Ce^{+ t}\) term in the response. Using the pole locations to understand in essence how the system will respond is a powerful tool, and will be developed further in Section 3.3. Control systems designers often manipulate design parameters so that the poles have values that would give acceptable responses, and they skip the steps associated with converting those poles to actual time responses until the final stages of the design. They use trial-and-error design methods (as will be described in Chapter 5) that graphically present how changes in design parameters affect the pole locations. Once a design has been obtained, with pole locations predicted to give acceptable responses, the control designer determines a time response to verify that the design is satisfactory. This is typically done by computer, which solves the differential equations directly by using numerical computer methods.

98.0.1. Poles and Zeros

A rational transfer function can be described either as a ratio of two polynomials in \(s\),

\[H(s) = \frac{b_{1}s^{m} + b_{2}s^{m - 1} + \cdots + b_{m + 1}}{s^{n} + a_{1}s^{n - 1} + \cdots + a_{n}} = \frac{N(s)}{D(s)} \]

or as a ratio in factored zero pole form

\[H(s) = K\frac{\prod_{i = 1}^{m}\mspace{2mu}\mspace{2mu}\left( s - z_{i} \right)}{\prod_{i = 1}^{n}\mspace{2mu}\mspace{2mu}\left( s - p_{i} \right)} \]

\(K\) is called the transfer function gain. The roots of the numerator \(z_{1},z_{2},\ldots,z_{m}\) are called the finite zeros of the system. The zeros are locations in the \(s\)-plane where the transfer function is zero. If \(s = z_{i}\), then

\[\left. \ H(s) \right|_{s = z_{i}} = 0. \]

The zeros also correspond to the signal transmission-blocking properties of the system and are also called the transmission zeros of the system. The system has the inherent capability to block frequencies coinciding with its zero locations. If we excite the system with the nonzero input, \(u = u_{0}e^{s_{0}t}\), where \(s_{0}\) is not a pole of the system, then the output is identically zero,\(\ ^{8}\) \(y \equiv 0\), for frequencies where \(s_{0} = z_{i}\). The zeros also have a significant effect on the transient properties of the system (see Section 3.5).

The roots of the denominator, \(p_{1},p_{2},\ldots,p_{n}\), are called the poles \(\ ^{9}\) of the system. The poles are locations in the \(s\)-plane where the magnitude of the transfer function becomes infinite. If \(s = p_{i}\), then

\[|H(s)|_{s = p_{i}} = \infty \]

The poles of the system determine its stability properties, as we shall see in Section 3.6. The poles of the system also determine the natural or unforced behavior of the system, referred to as the modes of the system. The zeros and poles may be complex quantities, and we may display their locations in a complex plane, which we refer to as the s-plane. The locations of the poles and zeros lie at the heart of feedback control design and have significant practical implications for control system design. The system is said to have \(n - m\) zeros at infinity if \(m < n\) because the transfer function approaches zero as \(s\) approaches infinity. If the zeros at infinity are also counted, the system will have the same number of poles and zeros. No physical system can have \(n < m\); otherwise, it would have an infinite response at \(\omega = \infty\). If \(z_{i} = p_{j}\), then there are cancellations in the transfer function, which may lead to undesirable system properties as will be discussed in Chapter 7.

98.0.2. Linear System Analysis Using Matlab

The first step in analyzing a system is to write down (or generate) the set of time-domain differential equations representing the dynamic behavior of the physical system. These equations are generated from the physical laws governing the system behavior; for example, rigid body dynamics, thermo-fluid mechanics, and electromechanics, as described in Chapter 2. The next step in system analysis is to determine and designate inputs and outputs of the system and then to compute the transfer function characterizing the input-output behavior of the dynamic system. Earlier in this chapter, we discussed that a linear dynamic system may also be represented by the Laplace transform of its differential equation - that is, its transfer function. The transfer function may be expressed as a ratio of two polynomials as in Eq. (3.56) or in factored zero-pole form as in Eq. (3.57). By analyzing the transfer function, we can determine the dynamic properties of the system, in both a qualitative and a quantitative manner. One way of extracting useful system information is simply to determine the pole-zero locations and deduce the essential characteristics of the dynamic properties of the system. Another way is to determine the time-domain properties of the system by determining the response of the system to typical excitation signals such as impulses, steps, ramps, and sinusoids. Yet another way is to determine the time response analytically by computing the inverse Laplace transform using partial-fraction expansions and Tables A.1 and A.2 in Appendix A. Of course, it is also possible to determine the system response to an arbitrary input.

We will now illustrate this type of analysis by carrying out the preceding calculations for some of the physical systems addressed in the examples in Chapter 2 in order of increasing degree of difficulty. We will go back and forth between the different representations of the system, transfer function, pole-zero, and so on, using Matlab as our computational engine. Matlab typically accepts the specification of a system in several forms, including transfer function and zero-pole, and refers to these two descriptions as tf and zp, respectively. Furthermore, it can transform the system description from any one form to another.

Find the transfer function between the input \(u\) and the position of the car \(x\) in the cruise control system in Example 2.1 of Chapter 2.

Solution. From Example 2.1 of Chapter 2, we find the transfer function of the system is

\[H(s) = \frac{0.001}{s^{2} + 0.05s} = \frac{0.001}{s(s + 0.05)} \]

In Matlab, the transfer function is specified as follows:

s = tf('s'); % define Laplace variable
sysH = 0.001/(s^2 + 0.05*s); % form transfer function

The pole-zero description is computed using the following Matlab commands:

p = pole(sysH); % compute poles
[z,k] = zero(sysH); % compute zeros and transfer function gain

and would result in the transfer function in factored form, where \(z = \lbrack\rbrack\), \(p = \begin{bmatrix} 0 & - 0.05 \end{bmatrix}^{'}\), and \(k = 0.001\).

In Example 2.15 of Chapter 2, assume that \(J_{m} = 0.02\text{ }kg \cdot m^{2},b =\) \(0.005\text{ }N \cdot m \cdot sec,K_{t} = K_{e} = 0.5,R_{a} = 2.5\Omega\), and \(L_{a} = 0.1H\). Find the transfer function between the input \(v_{a}\) and

  1. the output \(\theta_{m}\),

  2. the output \(\omega = {\overset{˙}{\theta}}_{m}\).

99. Solution.

  1. Substituting the preceding parameters into Example 2.15 of Chapter 2, we find that the transfer function of the system is

\[H(s) = \frac{250}{s^{3} + 25.25s^{2} + 131.25s} = \frac{250}{s\left( s^{2} + 25.25s + 131.25 \right)} \]

In Matlab, the transfer function is specified as

s = tf('s'); % define Laplace variable
sysH = 250/(s^3 + 25.25*s^2 + 131.25*s); % form transfer function

Again, the pole-zero description is computed using the Matlab commands

p = pole(sysH); % compute poles
[z,k] = zero(sysH); % compute zeros and transfer function gain

which results in

\[z = \lbrack\rbrack,\ p = \begin{bmatrix} 0 & - 17.9298 & - 7.3202 \end{bmatrix}^{'},\ k = 250\]

and yields the transfer function in factored form:

\[H(s) = \frac{250}{s(s + 17.9298)(s + 7.3202)} \]

  1. If we consider the velocity \({\overset{˙}{\theta}}_{m}\) as the output, the transfer function is

\[G(s) = \frac{250s}{s^{3} + 25.25s^{2} + 131.25s} = \frac{250}{s^{2} + 25.25s + 131.25}. \]

This is as expected, because \({\overset{˙}{\theta}}_{m}\) is simply the derivative of \(\theta_{m}\); thus \(\mathcal{L}\left\{ {\overset{˙}{\theta}}_{m} \right\} = s\mathcal{L}\left\{ \theta_{m} \right\}\). For a unit-step command in \(v_{a}\), we can compute the step response in Matlab as follows (recall Example 2.1 of Chapter 2):

s = tf('s'); % define Laplace variable
sysG = 250/(s^2 + 25.25*s + 131.25); % form transfer function
t = 0:0.01:4; % form time vector
y = step(sysG,t); % compute step response
plot(t,y); % plot step response

The system yields a steady-state constant angular velocity, as shown in Fig. 3.6. Since the DC gain of the system is \(250/131.25 \approx 1.9\), not unity, the steady-state velocity is not unity.

Figure 3.6

Transient response for DC motor

When a dynamic system is represented by a single differential equation of any order, finding the polynomial form of the transfer function from that differential equation is usually easy. Therefore, you will find it best in these cases to specify a system directly in terms of its transfer function.

100. EXAMPLE 3.20: Transformations Using Matlab

Find the transfer function of the system whose differential equation is

\[2\overset{¨}{y} + 5\overset{˙}{y} + 6y = 7u + 3\overset{˙}{u}. \]

Solution. Using the differentiation rules given by Eqs. (3.41) and (3.42), and considering zero initial conditions, we see by inspection that

\[G(s) = \frac{Y(s)}{U(s)} = \frac{3s + 7}{2s^{2} + 5s + 6} \]

The Matlab statements are as follows:

s = tf('s'); % define Laplace variable
sysG = (3*s + 7)/(2*s^2 + 5*s + 6); % form transfer function

If the transfer function poles and zeros and the transfer function gain are desired, they can be obtained by the following Matlab statements:

% compute poles, zeros, and transfer function gain
p = pole(sysG); % compute poles
[z,k] = zero(sysG); % compute zeros and transfer function gain

which would result in \(z = - 2.333\), \(p = \begin{bmatrix} - 1.25 + j1.199 & - 1.25 - j1.199 \end{bmatrix}^{'}\), and \(k = 1.5\). This means that the transfer function could also be written as

\[G(s) = \frac{Y(s)}{U(s)} = \frac{1.5(s + 2.333)}{(s + 1.25 + j1.199)(s + 1.25 - j1.199)} \]

  1. Find the transfer function between the input \(F_{c}\) and the satellite attitude \(\theta\) in Example 2.3, and

  2. Determine the response of the system to a 25-N pulse of 0.1-sec duration, starting at \(t = 5\) sec. Let \(d = 1\text{ }m\) and \(I = 5000\text{ }kg \cdot m^{2}\).

101. Solution

  1. From Example 2.3, \(\frac{d}{I} = \frac{1}{5000} = 0.0002\left\lbrack \frac{m}{kg \cdot m^{2}} \right\rbrack\) and this means that the transfer function of the system is

\[H(s) = \frac{0.0002}{s^{2}} \]

which can also be determined by inspection for this particular case. We may display the coefficients of the numerator polynomial as the row vector num and the denominator as the row vector den. The results for this example are

\[\text{~}\text{numG}\text{~} = \begin{bmatrix} 0 & 0 & 0.0002 \end{bmatrix}\text{~}\text{and}\text{~}denG = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}\]


  2. The following Matlab statements compute the response of the system to a 25-N, 0.1-sec duration thrust pulse input.
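A minimal sketch of such statements follows; the 0.01-sec sample spacing, the 10-sec duration, and the names t and sysG are assumptions consistent with the double-pulse listing given below, which reuses them:

t = 0:0.01:10; % time vector (1001 samples)
s = tf('s'); % define Laplace variable
sysG = 0.0002/s^2; % satellite transfer function
u1 = [zeros(1,500) 25*ones(1,10) zeros(1,491)]; % 25-N, 0.1-sec pulse starting at t = 5 sec
y1 = lsim(sysG,u1,t); % linear simulation
plot(t,u1); % plot input signal
ff = 180/pi; % conversion factor from radians to degrees
y1 = ff*y1; % output in degrees
plot(t,y1); % plot output response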

The system is excited with a short pulse (an impulse input) that has the effect of imparting a nonzero angle \(\theta_{0}\) at time \(t = 5sec\) on the system. Because the system is undamped, in the absence of any control it drifts with constant angular velocity with a value imparted by the impulse at \(t = 5sec\). The time response of the input is shown in Fig. 3.7(a), along with the drift in angle \(\theta\) in Fig. 3.7(b).

Figure 3.7 Transient response for satellite: (a) thrust input; (b) satellite attitude

We now excite the system with the same positive-magnitude thrust pulse at time \(t = 5sec\), but follow that with a negative pulse with the same magnitude and duration at time \(t = 6.1sec\). (See Fig. 3.8(a) for the input thrust.) Then the attitude response of the system is as shown in Fig. 3.8(b). This is actually how the satellite attitude angle is controlled in practice. The additional relevant Matlab statements are as follows:

% double pulse input
u2 = [zeros(1,500) 25*ones(1,10) zeros(1,100) -25*ones(1,10) zeros(1,381)];
y2 = lsim(sysG,u2,t); % linear simulation
plot(t,u2); % plot input signal
ff = 180/pi; % conversion factor from radians to degrees
y2 = ff*y2; % output in degrees
plot(t,y2); % plot output response

Figure 3.8 Transient response for satellite (double pulse): (a) thrust input; (b) satellite attitude

102.1. System Modeling Diagrams

102.1.1. The Block Diagram

To obtain the transfer function, we need to find the Laplace transform of the equations of motion and solve the resulting algebraic equations for the relationship between the input and the output. In many control systems, the system equations can be written so their components do not interact except by having the input of one part be the output of another part. In these cases, it is easy to draw a block diagram that represents the mathematical relationships in a manner similar to that used for the component block diagram in Fig. 1.2. The transfer function of each component is placed in a box, and the input-output relationships between components are indicated by lines and arrows. We can then solve the equations by graphical simplification, which is often easier and more informative than algebraic manipulation, even though the methods are in every way equivalent. Drawings of three elementary block diagrams are shown in Fig. 3.9. It is convenient to think of each block as representing an electronic amplifier with the transfer function printed inside. The interconnections of blocks include summing points, where any number of signals may be added together. These are represented by a circle with the symbol \(\Sigma\) inside. In Fig. 3.9(a), the block with transfer function \(G_{1}(s)\) is in series with the block with transfer function \(G_{2}(s)\), and the overall transfer function is given by the product \(G_{2}G_{1}\). In Fig. 3.9(b) two systems are in parallel with their outputs added, and the overall transfer function is given by the sum \(G_{1} + G_{2}\). These diagrams derive simply from the equations that describe them.

Negative feedback

Figure 3.9(c) shows a more complicated case. Here the two blocks are connected in a feedback arrangement so each feeds into the other. When the feedback \(Y_{2}(s)\) is subtracted, as shown in the figure, we call it negative feedback. As you will see, negative feedback is usually required for system stability. For now, we will simply solve the equations then relate them back to the diagram. The equations are

\[\begin{matrix} & U_{1}(s) = R(s) - Y_{2}(s), \\ & Y_{2}(s) = G_{2}(s)G_{1}(s)U_{1}(s), \\ & Y_{1}(s) = G_{1}(s)U_{1}(s), \end{matrix}\]

and their solution is

\[Y_{1}(s) = \frac{G_{1}(s)}{1 + G_{1}(s)G_{2}(s)}R(s) \]

We can express the solution by the following rule:

The gain of a single-loop negative feedback system is given by the forward gain divided by the sum of 1 plus the loop gain.

Positive feedback

When the feedback is added instead of subtracted, we call it positive feedback. In this case, the gain is given by the forward gain divided by the sum of 1 minus the loop gain.

Unity feedback system

The three elementary cases given in Fig. 3.9 can be used in combination to solve, by repeated reduction, any transfer function defined by a block diagram. However, the manipulations can be tedious and subject to error when the topology of the diagram is complicated. Fig. 3.10 shows examples of block-diagram algebra that complement those shown in Fig. 3.9. Figures 3.10(a) and (b) show how the interconnections of a block diagram can be manipulated without affecting the mathematical relationships. Figure 3.10(c) shows how the manipulations can be used to convert a general system (on the left) to a system without a component in the feedback path, usually referred to as a unity feedback system.

Figure 3.9 Three examples of elementary block diagrams: (a) series, \(\frac{Y_{2}(s)}{U_{1}(s)} = G_{2}G_{1}\); (b) parallel, \(\frac{Y(s)}{U(s)} = G_{2} + G_{1}\); (c) feedback, \(\frac{Y(s)}{R(s)} = \frac{G_{1}}{1 + G_{2}G_{1}}\)

Figure 3.10 Examples of block-diagram algebra: (a) moving a pickoff point; (b) moving a summer; (c) conversion to unity feedback

In all cases, the basic principle is to simplify the topology while maintaining exactly the same relationships among the remaining variables of the block diagram. In relation to the algebra of the underlying linear equations, block-diagram reduction is a pictorial way to solve equations by eliminating variables.

EXAMPLE 3.22: Transfer Function from a Simple Block Diagram

Find the transfer function of the system shown in Fig. 3.11(a).

Solution. First we simplify the block diagram by reducing the parallel combination of the controller path. This results in the diagram of Fig. 3.11(b), and we use the feedback rule to obtain the closed-loop transfer function:

Figure 3.11 Block diagram of a second-order system

\[T(s) = \frac{Y(s)}{R(s)} = \frac{\frac{2s + 4}{s^{2}}}{1 + \frac{2s + 4}{s^{2}}} = \frac{2s + 4}{s^{2} + 2s + 4} \]

103. EXAMPLE 3.23: Transfer Function from the Block Diagram

Find the transfer function of the system shown in Fig. 3.12(a).

Solution. First, we simplify the block diagram. Using the principles of Eq. (3.58), we replace the feedback loop involving \(G_{1}\) and \(G_{3}\) by its equivalent transfer function, noting that it is a positive feedback loop. The result is Fig. 3.12(b). The next step is to move the pickoff point preceding \(G_{2}\) to its output [see Fig. 3.12(a)], as shown in Fig. 3.12(c). The negative feedback loop on the left is in series with the subsystem on the right, which is composed of the two parallel blocks \(G_{5}\) and \(G_{6}/G_{2}\). The overall transfer function can be written using all three rules for reduction given by Fig. 3.9:

\[\begin{matrix} T(s) & \ = \frac{Y(s)}{R(s)} = \frac{\frac{G_{1}G_{2}}{1 - G_{1}G_{3}}}{1 + \frac{G_{1}G_{2}G_{4}}{1 - G_{1}G_{3}}}\left( G_{5} + \frac{G_{6}}{G_{2}} \right), \\ & \ = \frac{G_{1}G_{2}G_{5} + G_{1}G_{6}}{1 - G_{1}G_{3} + G_{1}G_{2}G_{4}}. \end{matrix}\]
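The algebra of this reduction can be spot-checked symbolically; the following sketch assumes the Symbolic Math Toolbox:

syms G1 G2 G3 G4 G5 G6
inner = G1*G2/(1 - G1*G3); % positive-feedback loop around G1
T = inner/(1 + inner*G4)*(G5 + G6/G2); % negative feedback loop, then parallel blocks
simplify(T) % equivalent to (G1*G2*G5 + G1*G6)/(1 - G1*G3 + G1*G2*G4)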

As we have seen, a system of algebraic equations may be represented by a block diagram that represents individual transfer functions by blocks, and has interconnections that correspond to the system equations. A block diagram is a convenient tool to visualize the system as a collection of interrelated subsystems that emphasize the relationships among the system variables.

Figure 3.12 Example for block-diagram simplification

103.0.1. Block-Diagram Reduction Using Matlab

If the individual transfer functions are available for components in a control system, it is possible to use Matlab commands to compute the transfer functions of interconnected systems. The three commands series, parallel, and feedback can be used for this purpose. They compute the transfer functions of two component block transfer functions in series, parallel, and feedback configurations, respectively. The next simple example illustrates their use.

104. EXAMPLE 3.24: Transfer Function of a Simple System Using Matlab

Repeat the computation of the transfer function for the block diagram in Fig. 3.11(a) using Matlab.

Solution. We label the transfer functions of the separate blocks shown in Fig. 3.11(a) as illustrated in Fig. 3.13. Then we combine the two parallel blocks \(G_{1}\) and \(G_{2}\), and reduce the rest of the diagram, with the following Matlab statements:

Figure 3.13 Example for block-diagram simplification

s = tf('s'); % define Laplace variable
sysG1 = 2; % form G1
sysG2 = 4/s; % form G2
sysG3 = parallel(sysG1,sysG2); % parallel combination of G1 and G2
sysG4 = 1/s; % form G4
sysG5 = series(sysG3,sysG4); % series combination of G3 and G4
sysG6 = 1; % form G6
sysCL = feedback(sysG5,sysG6,-1); % feedback combination of G5 and G6

The Matlab result is sysCL of the form

\[\frac{Y(s)}{R(s)} = \frac{2s + 4}{s^{2} + 2s + 4} \]

and this is the same result as the one obtained by block-diagram reduction.

104.0.1. Mason's Rule and the Signal Flow Graph

An alternative to block-diagram reduction is Mason's rule, a useful technique for determining the transfer functions of complicated interconnected systems. See Appendix W3.2.3 online at www.pearsonglobaleditions.com.

104.1. Effect of Pole Locations

Once the transfer function has been determined by any of the available methods, we can start to analyze the response of the system it represents. When the system equations are simultaneous ordinary differential equations (ODEs), the transfer function that results will be a ratio of polynomials; that is,

\[H(s) = b(s)/a(s) \]

Poles

Zeros

If we assume \(b\) and \(a\) have no common factors (as is usually the case), then values of \(s\) such that \(a(s) = 0\) will represent points where \(H(s)\) is infinite. As discussed in Section 3.1.5, these \(s\)-values are called poles of \(H(s)\). Values of \(s\) such that \(b(s) = 0\) are points where \(H(s) = 0\), and the corresponding \(s\)-locations are called zeros. The effect of zeros on the transient response will be discussed in Section 3.5. These poles and zeros completely describe \(H(s)\) except for a constant multiplier.

The impulse response is the natural response.

Because the impulse response is given by the time function corresponding to the transfer function, we call the impulse response the natural response of the system. We can use the poles and zeros to compute the corresponding time response and thus identify time histories with pole locations in the \(s\)-plane. For example, the poles identify the classes of signals contained in the impulse response, as may be seen by a partial-fraction expansion of \(H(s)\).

First-order system impulse response

For a first-order pole,

\[H(s) = \frac{1}{s + \sigma}\text{.}\text{~} \]

Table A.2, Appendix A entry 7 indicates that the impulse response will be an exponential function; that is,

\[h(t) = e^{- \sigma t}1(t) \]

Stability

Time constant \(\tau\)

When \(\sigma > 0\), the pole is located at \(s < 0\), the exponential expression decays, and we say the impulse response is stable. If \(\sigma < 0\), the pole is to the right of the origin. Because the exponential expression here grows with time, the impulse response is referred to as unstable (see Section 3.6). Fig. 3.14(a) shows a typical stable response and defines the time constant

\[\tau = 1/\sigma \]

as the time when the response is \(1/e\) times the initial value. Hence, it is a measure of the rate of decay. The straight line is tangent to the exponential curve at \(t = 0\) and terminates at \(t = \tau\). This characteristic of an exponential expression is useful in sketching a time plot or checking computer results.

Figure 3.14 (b) shows the impulse and step responses for a firstorder system computed using Matlab. It also shows the percentage rise in the step response for integral multiples of the time constant, \(\tau\), which is a metric for the speed of response of the system. In particular, we observe that after one time constant ( \(\tau\) seconds), the system reaches \(63\%\) of its steady-state value, and after about 5 time constants \((5\tau\) seconds), the system is at steady-state.
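A short sketch along the lines of Fig. 3.14(b), taking \(\sigma = 1\) as an illustrative value:

sigma = 1; tau = 1/sigma; % time constant
sysH = tf(1,[1 sigma]); % first-order system H(s) = 1/(s + sigma)
t = 0:0.01:5*tau; % simulate out to five time constants
subplot(2,1,1); impulse(sysH,t); % decays to 1/e of its initial value at t = tau
subplot(2,1,2); step(sysH,t); % reaches about 63% of its final value at t = tau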

105. EXAMPLE 3.25: Response versus Pole Locations, Real Roots

Compare the time response with the pole locations for the system with a transfer function between input and output given by

\[H(s) = \frac{2s + 1}{s^{2} + 3s + 2}. \]

Solution. The numerator is

\[b(s) = 2\left( s + \frac{1}{2} \right) \]

and the denominator is

\[a(s) = s^{2} + 3s + 2 = (s + 1)(s + 2)\text{.}\text{~} \]

Figure 3.14

First-order system response: (a) impulse response; (b) impulse response and step response using Matlab

Figure 3.15

Sketch of \(s\)-plane showing poles as crosses and zeros as circles

The poles of \(H(s)\) are therefore at \(s = - 1\) and \(s = - 2\) and the one (finite) zero is at \(s = - \frac{1}{2}\). A complete description of this transfer function is shown by the plot of the locations of the poles and the zeros in the s-plane (see Fig. 3.15) using the Matlab pzmap(num,den) function with

num = [2 1];    % numerator polynomial 2s + 1
den = [1 3 2];  % denominator polynomial s^2 + 3s + 2

A partial-fraction expansion of \(H(s)\) results in

\[H(s) = - \frac{1}{s + 1} + \frac{3}{s + 2}. \]

"Fast poles" and "slow poles" refer to the relative rate of signal decay.

From Table A.2, in Appendix A, we can look up the inverse of each term in \(H(s)\), which will give us the time function \(h(t)\) that would result if the system input were an impulse. In this case,

\[h(t) = \left\{ \begin{matrix} - e^{- t} + 3e^{- 2t} & t \geq 0 \\ 0 & t < 0 \end{matrix} \right.\ \]

We see that the shapes of the component parts of \(h(t)\), which are \(e^{- t}\) and \(e^{- 2t}\), are determined by the poles at \(s = - 1\) and \(-2\). This is true of more complicated cases as well: In general, the shapes of the components of the natural response are determined by the locations of the poles of the transfer function.

A sketch of these pole locations and corresponding natural responses is given in Fig. 3.16, along with other pole locations including complex ones, which will be discussed shortly.

The role of the numerator in the process of partial-fraction expansion is to influence the size of the coefficient that multiplies each component. Because \(e^{- 2t}\) decays faster than \(e^{- t}\), the signal corresponding to the pole at \(-2\) decays faster than the signal corresponding to the pole at \(-1\). For brevity, we simply say that the pole at \(-2\) is faster than the pole at \(-1\). In general, poles farther to the left in the \(s\)-plane are associated with natural signals that decay faster than those associated with poles closer to the imaginary axis. If the poles had been located with positive values of \(s\) (in the right half of the \(s\)-plane), the response would have been a growing exponential function and thus unstable. Figure 3.17 shows that the fast \(3e^{- 2t}\) term dominates the early part of the time history, and that the \(- e^{- t}\) term is the primary contributor later on.

Figure 3.16

Time functions associated with points in the \(s\)-plane (LHP, left half-plane; RHP, right half-plane)

Figure 3.17

Impulse response of Example 3.25 [Eq. (3.60)]

Impulse response using Matlab


The purpose of this example is to illustrate the relationship between the poles and the character of the response, which can be done exactly only by finding the inverse Laplace transform and examining each term as before. However, if we simply wanted to plot the impulse response for this example, the expedient way would be to use the following Matlab sequence:

s = tf('s');                      % define Laplace variable
sysH = (2*s + 1)/(s^2 + 3*s + 2); % define system from its numerator and denominator
impulse(sysH);                    % compute impulse response

The result is shown in Fig. 3.17.

Complex poles can be defined in terms of their real and imaginary parts, traditionally referred to as

\[s = - \sigma \pm j\omega_{d} \]

This means a pole has a negative real part if \(\sigma\) is positive. Since complex poles always come in complex conjugate pairs, the denominator corresponding to a complex pair will be

\[a(s) = \left( s + \sigma - j\omega_{d} \right)\left( s + \sigma + j\omega_{d} \right) = (s + \sigma)^{2} + \omega_{d}^{2} \]

Damping ratio; damped and undamped natural frequency

When finding the transfer function from second-order differential equations, we typically write the result in the polynomial form

\[H(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}} \]

By multiplying out the form given by Eq. (3.62) and comparing it with the coefficients of the denominator of \(H(s)\) in Eq. (3.63), we find the correspondence between the parameters to be

\[\sigma = \zeta\omega_{n}\ \text{~}\text{and}\text{~}\ \omega_{d} = \omega_{n}\sqrt{1 - \zeta^{2}} \]

where the parameter \(\zeta\) is the damping ratio\(\ ^{10}\) and \(\omega_{n}\) is the undamped natural frequency. The poles of this transfer function are located at a radius \(\omega_{n}\) in the \(s\)-plane and at an angle \(\theta = \sin^{- 1}\zeta\), as shown in Fig. 3.18. Therefore, the damping ratio reflects the level of damping as a fraction of the critical damping value where the poles become real. In rectangular coordinates, the poles are at \(s = - \sigma \pm j\omega_{d}\). When \(\zeta = 0\), we have no damping, \(\theta = 0\), and the damped natural frequency \(\omega_{d} = \omega_{n}\), the undamped natural frequency.

Figure 3.18

\(s\)-plane plot for a pair of complex poles

Standard second-order system impulse response

For purposes of finding the time response from Table A.2 in Appendix A corresponding to a complex transfer function, it is easiest to manipulate \(H(s)\) so the complex poles fit the form of Eq. (3.62), because then the time response can be found directly from the table. Equation (3.63) can be rewritten as

\[H(s) = \frac{\omega_{n}^{2}}{\left( s + \zeta\omega_{n} \right)^{2} + \omega_{n}^{2}\left( 1 - \zeta^{2} \right)} \]

Therefore, from entry number 20 in Table A.2 and the definitions in Eq. (3.64), we see that the impulse response is

\[h(t) = \frac{\omega_{n}}{\sqrt{1 - \zeta^{2}}}e^{- \sigma t}\left( sin\omega_{d}t \right)1(t) \]

Fig. 3.19(a) plots \(h(t)\) for several values of \(\zeta\) such that time has been normalized to the undamped natural frequency \(\omega_{n}\). Note the actual frequency \(\omega_{d}\) decreases slightly as the damping ratio increases. Note also for very low damping the response is oscillatory, while for large damping \((\zeta\) near 1\()\) the response shows no oscillation. A few of these responses are sketched in Fig. 3.16 to show qualitatively how changing pole locations in the \(s\)-plane affect impulse responses. You will find it useful as a control designer to commit the image of Fig. 3.16 to memory so you can understand instantly how changes in pole locations influence the time response.
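A short Matlab sketch along these lines (with assumed damping values, and \(\omega_{n} = 1\) so the time axis is already normalized) generates a family of impulse responses in the spirit of Fig. 3.19(a):

zeta = [0.1 0.3 0.5 0.7 0.9];     % assumed example damping ratios
t = 0:0.05:12;                    % normalized time, since omega_n = 1 rad/sec
hold on;
for k = 1:length(zeta)
    sysH = tf(1,[1 2*zeta(k) 1]); % standard second-order system, omega_n = 1
    impulse(sysH,t);              % overlay each impulse response
end
hold off;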

Three pole locations are shown in Fig. 3.20 for comparison with the corresponding impulse responses in Fig. 3.19(a). The negative real part of the pole, \(\sigma\), determines the decay rate of an exponential envelope that multiplies the sinusoid, as shown in Fig. 3.21.

Figure 3.19

Responses of second-order systems versus \(\zeta\): (a) impulse responses; (b) step responses

Stability depends on whether the natural response grows or decays.

Note if \(\sigma < 0\) (and the pole is in the RHP), then the natural response will grow with time, so, as defined earlier, the system is said to be unstable. If \(\sigma = 0\), the natural response neither grows nor decays, so stability is open to debate. If \(\sigma > 0\), the natural response decays, so the system is stable.

It is also interesting to examine the step response of \(H(s)\), that is, the response of the system \(H(s)\) to a unit-step input \(u = 1(t)\), where \(U(s) = 1/s\). The step-response transform is given by \(Y(s) = H(s)U(s)\), which is found in Appendix A, Table A.2, entry 21. Figure 3.19(b), which plots \(y(t)\) for several values of \(\zeta\), shows the basic transient response characteristics from the impulse response carry over quite well to the step response; the difference between the two responses is that the step response's final value is the commanded unit step.

Figure 3.20

Pole locations corresponding to three values of \(\zeta\)

Figure 3.21

Second-order system response with an exponential envelope bound

106. EXAMPLE 3.26: Oscillatory Time Response

Discuss the correlation between the poles of

\[H(s) = \frac{2s + 1}{s^{2} + 2s + 5} \]

and the impulse response of the system, then find the exact impulse response.

Solution. From the form of \(H(s)\) given by Eq. (3.63), we see that

\[\omega_{n}^{2} = 5 \Rightarrow \omega_{n} = \sqrt{5} = 2.24rad/sec \]

and

\[2\zeta\omega_{n} = 2 \Rightarrow \zeta = \frac{1}{\sqrt{5}} = 0.447 \]

Impulse response by Matlab

Figure 3.22

System response for Example 3.26

This indicates we should expect a damped oscillation at a frequency of around \(2\ rad/sec\), with the oscillatory motion decaying after only a few cycles since \(\zeta \cong 0.45\). In order to obtain the exact response, we manipulate \(H(s)\) until the denominator is in the form of Eq. (3.62):

\[H(s) = \frac{2s + 1}{s^{2} + 2s + 5} = \frac{2s + 1}{(s + 1)^{2} + 2^{2}} \]

From this equation, we see the poles of the transfer function are complex, with real part \(-1\) and imaginary parts \(\pm 2j\). Table A.2 in Appendix A has two entries, numbers 19 and 20, that match the denominator. The right side of the preceding equation needs to be broken into two parts so they match the numerators of the entries in the table:

\[H(s) = \frac{2s + 1}{(s + 1)^{2} + 2^{2}} = 2\frac{s + 1}{(s + 1)^{2} + 2^{2}} - \frac{1}{2}\frac{2}{(s + 1)^{2} + 2^{2}}. \]

Thus, the impulse response is

\[h(t) = \left( 2e^{- t}cos2t - \frac{1}{2}e^{- t}sin2t \right)1(t) \]

Fig. 3.22 is a plot of the response and shows how the envelope attenuates the sinusoid, the domination of the \(2cos2t\) term, and the small phase shift caused by the \(- \frac{1}{2}sin2t\) term.

As in the previous example, the expedient way of determining the impulse response would be to use the following Matlab sequence:

s = tf('s');                      % define Laplace variable
sysH = (2*s + 1)/(s^2 + 2*s + 5); % define system by its numerator and denominator
t = 0:0.1:6;                      % form time vector
y = impulse(sysH,t);              % compute impulse response
plot(t,y);                        % plot impulse response

as shown in Fig. 3.22.

Definitions of rise time, settling time, overshoot, and peak time

Figure 3.23

Definition of rise time \(t_{r}\), settling time \(t_{s}\), and overshoot \(M_{p}\)

106.1. Time-Domain Specifications

Performance specifications for a control system design often involve certain requirements associated with the time response of the system. The requirements for a step response are expressed in terms of the standard quantities illustrated in Fig. 3.23:

  1. The rise time \(t_{r}\) is the time it takes the system to reach the vicinity of its new set point.

  2. The settling time \(t_{s}\) is the time it takes the system transients to decay.

  3. The overshoot \(M_{p}\) is the maximum amount the system overshoots its final value divided by its final value (and is often expressed as a percentage).

  4. The peak time \(t_{p}\) is the time it takes the system to reach the maximum overshoot point.

106.1.1. Rise Time

For a second-order system, the time responses shown in Fig. 3.19(b) yield information about the specifications that is too complex to be remembered unless converted to a simpler form. By examining these curves in light of the definitions given in Fig. 3.23, we can relate the curves to the pole-location parameters \(\zeta\) and \(\omega_{n}\). For example, all the curves rise in roughly the same time. If we consider the curve for \(\zeta = 0.5\) to be an average, the rise time \(\ ^{11}\) from \(y = 0.1\) to \(y = 0.9\) is approximately \(\omega_{n}t_{r} = 1.8\). Thus, we can say

\[t_{r} \cong \frac{1.8}{\omega_{n}} \]
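As a rough numerical check of this approximation, the 10%-to-90% rise time can be measured directly from a computed step response. The following is a sketch with assumed values \(\omega_{n} = 1\ rad/sec\) and \(\zeta = 0.5\):

wn = 1; zeta = 0.5;                 % assumed example values
sysH = tf(wn^2,[1 2*zeta*wn wn^2]); % standard second-order system, no zeros
t = 0:0.001:20;
y = step(sysH,t);
t10 = t(find(y >= 0.1,1));          % time to reach 10% of the final value
t90 = t(find(y >= 0.9,1));          % time to reach 90% of the final value
tr = t90 - t10                      % about 1.6 sec, near the 1.8/wn estimate

The measured value falls somewhat below \(1.8/\omega_{n}\), a reminder that Eq. (3.68) is a rough average over damping ratios rather than an exact formula.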

Although this relationship could be embellished by including the effect of the damping ratio, it is important to keep in mind how Eq. (3.68) is typically used. It is accurate only for a second-order system with no zeros; for all other systems, it is a rough approximation to the relationship between \(t_{r}\) and \(\omega_{n}\). Most systems being analyzed for control systems design are more complicated than the pure second-order system, so designers use Eq. (3.68) with the knowledge that it is only a rough approximation.

\(\ ^{11}\) Rise time \(t_{r}\).

Standard second-order system step response

Peak time \(t_{p}\)

106.1.2. Overshoot and Peak Time

For the overshoot \(M_{p}\), we can be more analytical. This value occurs when the derivative is zero, which can be found from calculus. The time history of the curves in Fig. 3.19(b), found from the inverse Laplace transform of \(H(s)/s\), is

\[y(t) = 1 - e^{- \sigma t}\left( cos\omega_{d}t + \frac{\sigma}{\omega_{d}}sin\omega_{d}t \right) \]

where \(\omega_{d} = \omega_{n}\sqrt{1 - \zeta^{2}}\) and \(\sigma = \zeta\omega_{n}\). We may rewrite the preceding equation using the trigonometric identity

\[Asin(\alpha) + Bcos(\alpha) = Ccos(\alpha - \beta) \]

or

\[\begin{matrix} & C = \sqrt{A^{2} + B^{2}} = \frac{1}{\sqrt{1 - \zeta^{2}}} \\ & \beta = \tan^{- 1}\left( \frac{A}{B} \right) = \tan^{- 1}\left( \frac{\zeta}{\sqrt{1 - \zeta^{2}}} \right) = \sin^{- 1}(\zeta), \end{matrix}\]

with \(A = \frac{\sigma}{\omega_{d}},B = 1\), and \(\alpha = \omega_{d}t\), in a more compact form as

\[y(t) = 1 - \frac{e^{- \sigma t}}{\sqrt{1 - \zeta^{2}}}cos\left( \omega_{d}t - \beta \right) \]

When \(y(t)\) reaches its maximum value, its derivative will be zero:

\[\begin{matrix} \overset{˙}{y}(t) & \ = \sigma e^{- \sigma t}\left( cos\omega_{d}t + \frac{\sigma}{\omega_{d}}sin\omega_{d}t \right) - e^{- \sigma t}\left( - \omega_{d}sin\omega_{d}t + \sigma cos\omega_{d}t \right) = 0, \\ & \ = e^{- \sigma t}\left( \frac{\sigma^{2}}{\omega_{d}} + \omega_{d} \right)sin\omega_{d}t = 0 \end{matrix}\]

This occurs when \(sin\omega_{d}t = 0\), so

\[\omega_{d}t_{p} = \pi \]

and thus,

\[t_{p} = \frac{\pi}{\omega_{d}} \]

Substituting Eq. (3.71) into the expression for \(y(t)\), we compute

\[\begin{matrix} y\left( t_{p} \right) \triangleq 1 + M_{p} & \ = 1 - e^{- \sigma\pi/\omega_{d}}\left( cos\pi + \frac{\sigma}{\omega_{d}}sin\pi \right) \\ & \ = 1 + e^{- \sigma\pi/\omega_{d}} \end{matrix}\]

Thus, we have the formula

\[M_{p} = e^{- \pi\zeta/\sqrt{1 - \zeta^{2}}},\ 0 \leq \zeta < 1 \]

Figure 3.24

Overshoot versus damping ratio for the second-order system

which is plotted in Fig. 3.24. Two frequently used values from this curve are \(M_{p} = 0.16\) for \(\zeta = 0.5\) and \(M_{p} = 0.05\) for \(\zeta = 0.7\), that is, \(16\%\) and \(5\%\) overshoot, respectively.
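Equation (3.72) is easy to evaluate directly; a minimal Matlab sketch that reproduces the character of the curve in Fig. 3.24 is:

zeta = 0.01:0.01:0.99;                 % damping ratios in (0, 1)
Mp = exp(-pi*zeta./sqrt(1 - zeta.^2)); % Eq. (3.72)
plot(zeta,100*Mp), grid on
xlabel('\zeta'), ylabel('M_p (%)')
% spot checks: zeta = 0.5 gives about 16%, zeta = 0.7 gives about 5%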

Settling time \(t_{s}\)

106.1.3. Settling Time

The final parameter of interest from the transient response is the settling time \(t_{s}\). This is the time required for the transient to decay to a small value so that \(y(t)\) is almost in the steady state. Various measures of smallness are possible. For illustration, we will use \(1\%\) as a reasonable measure; in other cases, \(2\%\) or \(5\%\) are used. As an analytic computation, we notice that the deviation of \(y\) from 1 is the product of the decaying exponential \(e^{- \sigma t}\) and the circular functions sine and cosine. The duration of this error is essentially decided by the transient exponential, so we can define the settling time as that value of \(t_{s}\) when the decaying exponential reaches \(1\%\) :

\[e^{- \zeta\omega_{n}t_{s}} = 0.01 \]

Therefore,

\[\zeta\omega_{n}t_{s} = 4.6 \]

or

\[t_{s} = \frac{4.6}{\zeta\omega_{n}} = \frac{4.6}{\sigma}, \]

where \(\sigma\) is the negative real part of the pole, as may be seen in Fig. 3.18.

Figure 3.25

Graphs of regions in the s-plane delineated by certain transient requirements: (a) rise time; (b) overshoot; (c) settling time; (d) composite of all three requirements

Equations (3.68), (3.72), and (3.73) characterize the transient response of a system having no finite zeros and two complex poles and with undamped natural frequency \(\omega_{n}\), damping ratio \(\zeta\), and negative real part \(\sigma\). In analysis and design, they are used to estimate rise time, overshoot, and settling time, respectively, for just about any system. In design synthesis, we wish to specify \(t_{r}\), \(M_{p}\), and \(t_{s}\) and to ask where the poles need to be so that the actual responses are less than or equal to these specifications. For specified values of \(t_{r}\), \(M_{p}\), and \(t_{s}\), the synthesis form of the equation is then

\[\begin{matrix} \omega_{n} & \ \geq \frac{1.8}{t_{r}} \\ \zeta & \ \geq \zeta\left( M_{p} \right)\ \text{~}\text{(from Fig. 3.24)}\text{~} \\ \sigma & \ \geq \frac{4.6}{t_{s}} \end{matrix}\]

These equations, which can be graphed in the s-plane as shown in Fig. 3.25(a-c), will be used in later chapters to guide the selection of pole and zero locations to meet control system specifications for dynamic response.

It is important to keep in mind that Eqs. (3.74)-(3.76) are qualitative guides and not precise design formulas. They are meant to provide only a starting point for the design iteration. After the control design is complete, the time response should always be checked by an exact calculation, usually by numerical simulation, to verify whether the time specifications have actually been met. If not, another iteration of the design is required.

First-order system step response

For a first-order system,

\[H(s) = \frac{\sigma}{s + \sigma} \]

and the transform of the step response is

\[Y(s) = \frac{\sigma}{s(s + \sigma)} \]

We see from entry 11 in Table A.2 (see Appendix A) that \(Y(s)\) corresponds to

\[y(t) = \left( 1 - e^{- \sigma t} \right)1(t). \]

Comparison with the development for Eq. (3.73) shows the value of \(t_{s}\) for a first-order system is the same:

\[t_{s} = \frac{4.6}{\sigma} \]

No overshoot is possible, so \(M_{p} = 0\). The rise time from \(y = 0.1\) to \(y = 0.9\) can be seen from Fig. 3.14 to be

\[t_{r} = \frac{ln0.9 - ln0.1}{\sigma} = \frac{2.2}{\sigma} \]

Time constant \(\tau\)

However, it is more typical to describe a first-order system in terms of its time constant, which was defined in Fig. 3.14 to be \(\tau = 1/\sigma\).

Figure 3.26

Time domain specifications region in \(s\)-plane for Example 3.27

107. EXAMPLE 3.27: Transformation of the Specifications to the s-Plane

Find the allowable regions in the \(s\)-plane for the poles of the transfer function of the system if the system response requirement is \(t_{r} \leq 0.6\) sec, \(M_{p} \leq 10\%\), and \(t_{s} \leq 3sec\).

Solution. Without knowing whether or not the system is second order with no zeros, it is impossible to find the allowable region accurately. Regardless of the system, we can obtain a first approximation using the relationships for a second-order system. Equation (3.74) indicates that

\[\omega_{n} \geq \frac{1.8}{t_{r}} = 3.0rad/sec \]

Eq. (3.75) and Fig. 3.24 indicate that

\[\zeta \geq 0.6 \]

and Eq. (3.76) indicates that

\[\sigma \geq \frac{4.6}{3} = 1.5\ sec^{-1} \]

The allowable region is anywhere to the left of the solid line in Fig. 3.26. Note any pole meeting the \(\zeta\) and \(\omega_{n}\) restrictions will automatically meet the \(\sigma\) restriction.
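For reference, a minimal Matlab sketch that draws the three boundaries of Fig. 3.26 (the numbers are the bounds computed above; the plotting commands themselves are illustrative, not part of the example):

wn = 3; zeta = 0.6; sigma = 4.6/3;              % bounds from the example
r = 0:0.1:6;
plot(-zeta*r, r*sqrt(1 - zeta^2),'k'); hold on  % zeta = 0.6 ray (upper)
plot(-zeta*r,-r*sqrt(1 - zeta^2),'k');          % zeta = 0.6 ray (lower)
th = pi/2:0.01:3*pi/2;
plot(wn*cos(th),wn*sin(th),'k');                % omega_n = 3 circle (LHP half)
plot([-sigma -sigma],[-6 6],'k--');             % sigma = 1.53 vertical boundary
grid on, xlabel('Re(s)'), ylabel('Im(s)'); hold off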

Effect of zeros

The effect of zeros near poles

108.1. Effects of Zeros and Additional Poles

Relationships such as those shown in Fig. 3.25 are correct for the simple second-order system; for more complicated systems, they can be used only as guidelines. If a certain design has an inadequate rise time (is too slow), we must raise the natural frequency; if the transient has too much overshoot, then the damping needs to be increased; if the transient persists too long, the poles need to be moved to the left in the \(s\)-plane.

Thus far only the poles of \(H(s)\) have entered into the discussion. There may also be zeros of \(H(s).\ ^{12}\) At the level of transient analysis, the zeros exert their influence by modifying the coefficients of the exponential terms whose shape is decided by the poles, as seen in Example 3.25. To illustrate this further, consider the following two transfer functions, which have the same poles but different zeros:

\[\begin{matrix} H_{1}(s) & \ = \frac{2}{(s + 1)(s + 2)} \\ & \ = \frac{2}{s + 1} - \frac{2}{s + 2}, \\ H_{2}(s) & \ = \frac{2(s + 1.1)}{1.1(s + 1)(s + 2)} \\ & \ = \frac{2}{1.1}\left( \frac{0.1}{s + 1} + \frac{0.9}{s + 2} \right) \\ & \ = \frac{0.18}{s + 1} + \frac{1.64}{s + 2}. \end{matrix}\]

They are normalized to have the same DC gain (that is, gain at \(s = 0\) ). Notice the coefficient of the \((s + 1)\) term has been modified from 2 in \(H_{1}(s)\) to 0.18 in \(H_{2}(s)\). This dramatic reduction is brought about by the zero at \(s = - 1.1\) in \(H_{2}(s)\), which almost cancels the pole at \(s = - 1\). If we put the zero exactly at \(s = - 1\), this term will vanish completely. In general, a zero near a pole reduces the amount of that term in the total response. From the equation for the coefficients in a partial-fraction expansion, Eq. (3.51),

\[C_{1} = \left. \ \left( s - p_{1} \right)F(s) \right|_{s = p_{1}}, \]

we can see that if \(F(s)\) has a zero near the pole at \(s = p_{1}\), the value of \(F(s)\) will be small because the value of \(s\) is near the zero. Therefore, the coefficient \(C_{1}\), which reflects how much of that term appears in the response, will be small.
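This near cancellation can be checked numerically; a small sketch using Matlab's residue function compares the coefficient of the \(1/(s + 1)\) term in \(H_{1}(s)\) and \(H_{2}(s)\):

den = conv([1 1],[1 2]);      % (s + 1)(s + 2)
[r1,p1] = residue(2,den)      % H1: residues [-2; 2] at poles [-2; -1]
num2 = (2/1.1)*[1 1.1];       % numerator of H2: 2(s + 1.1)/1.1
[r2,p2] = residue(num2,den)   % H2: residues [1.64; 0.18] at poles [-2; -1]
% the coefficient of the 1/(s + 1) term drops from 2 to 0.18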

In order to take into account how zeros affect the transient response when designing a control system, we consider transfer functions with two complex poles and one zero. To expedite the plotting for a wide range of cases, we write the transform in a form with normalized time and zero locations:

\[H(s) = \frac{\left( s/\alpha\zeta\omega_{n} \right) + 1}{\left( s/\omega_{n} \right)^{2} + 2\zeta\left( s/\omega_{n} \right) + 1} \]

The zero is located at \(s = - \alpha\zeta\omega_{n} = - \alpha\sigma\). If \(\alpha\) is large, the zero will be far removed from the poles, and the zero will have little effect on the response. If \(\alpha \cong 1\), the value of the zero will be close to that of the real part of the poles, and can be expected to have a substantial influence on the response. The step-response curves for \(\zeta = 0.5\) and \(\zeta = 0.707\) for several values of \(\alpha\) are plotted in Figs. 3.27 and 3.28. We see that the major effect of the zero is to increase the overshoot \(M_{p}\) and reduce rise time, \(t_{r}\), whereas it has very little influence on the settling time. A plot of \(M_{p}\) versus \(\alpha\) is given in Fig. 3.29. The plot shows the zero has very little effect on \(M_{p}\) if \(\alpha > 3\), but as \(\alpha\) decreases below 3, it has an increasing effect, especially when \(\alpha = 1\) or less.

Figure 3.27 can be explained in terms of Laplace-transform analysis. First, we replace \(s/\omega_{n}\) with \(s\) :

\[H(s) = \frac{s/\alpha\zeta + 1}{s^{2} + 2\zeta s + 1} \]

This has the effect of normalizing frequency in the transfer function and normalizing time in the corresponding step responses; thus \(\tau = \omega_{n}t\). We then rewrite the transfer function as the sum of two terms:

\[H(s) = \frac{1}{s^{2} + 2\zeta s + 1} + \frac{1}{\alpha\zeta}\frac{s}{s^{2} + 2\zeta s + 1} \]

Figure 3.27

Plots of the step response of a second-order system with a zero \((\zeta = 0.5)\)

Figure 3.28

Plots of the step response of a second-order system with a zero \((\zeta = 0.707)\)

Figure 3.29

Plot of overshoot \(M_{p}\) as a function of normalized zero location \(\alpha\). At \(\alpha = 1\), the real part of the zero equals the real part of the poles
RHP or nonminimum-phase zero

The first term, which we shall call \(H_{0}(s)\), is the original term (having no finite zero), and the second term \(H_{d}(s)\), which is introduced by the zero, is a product of a constant \((1/\alpha\zeta)\) times \(s\) times the original term. The Laplace transform of \(df/dt\) is \(sF(s)\), so \(H_{d}(s)\) corresponds to a product of a constant times the derivative of the original term, that is,

\[y(t) = y_{0}(t) + y_{d}(t) = y_{0}(t) + \frac{1}{\alpha\zeta}{\overset{˙}{y}}_{0}(t) \]

The step responses of \(H_{0}(s)\) denoted by \(y_{0}(t)\) and \(H_{d}(s)\) denoted by \(y_{d}(t)\) are plotted in Fig. 3.30. Looking at these curves, we can see why the zero increased the overshoot: The derivative has a large hump in the early part of the curve, and adding this to the \(H_{0}(s)\) response lifts up the total response of \(H(s)\) to produce the overshoot. This analysis is also very informative for the case when \(\alpha < 0\) and the zero is in the RHP where \(s > 0\). (This is typically called an RHP zero and is sometimes referred

Figure 3.30

Second-order step responses \(y(t)\) of the transfer functions \(H(s)\), \(H_{0}(s)\), and \(H_{d}(s)\)

Figure 3.31

Step responses \(y(t)\) of a second-order system with a zero in the RHP: a nonminimum-phase system

to as a nonminimum-phase zero, a topic to be discussed in more detail in Section 6.1.1.) In this case, the derivative term is subtracted rather than added. A typical case is sketched in Fig. 3.31.
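A minimal sketch of this decomposition (with assumed values \(\zeta = 0.5\) and \(\alpha = 1\)) builds the total step response from \(y_{0}(t)\) and the scaled derivative term:

zeta = 0.5; alpha = 1;                       % assumed example values
sysH0 = tf(1,[1 2*zeta 1]);                  % H0(s): no finite zero
sysHd = tf([1/(alpha*zeta) 0],[1 2*zeta 1]); % Hd(s) = (1/(alpha*zeta)) s H0(s)
t = 0:0.05:10;
y0 = step(sysH0,t);
yd = step(sysHd,t);                          % the scaled derivative of y0
plot(t,y0,t,yd,t,y0 + yd)                    % compare with Fig. 3.30
legend('y_0','y_d','y')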

EXAMPLE 3.28: Effect of the Proximity of the Zero to the Pole Locations on the Transient Response

Consider the second-order system with a finite zero and unity DC gain,

\[H(s) = \frac{24}{z}\frac{(s + z)}{(s + 4)(s + 6)} \]

Determine the effect of the zero location \((s = - z)\) on the unit-step response when \(z = \{ 1,2,3,4,5,6\}\).

Solution. The step response is the inverse Laplace transform of

\[H_{1}(s) = H(s)\frac{1}{s} = \frac{24}{z}\frac{(s + z)}{s(s + 4)(s + 6)} = \frac{24}{z}\frac{s}{s(s + 4)(s + 6)} + \frac{24}{s(s + 4)(s + 6)}, \]

and is the sum of the two parts,

\[y(t) = y_{1}(t) + y_{2}(t), \]

where

\[\begin{matrix} & y_{1}(t) = \frac{12}{z}e^{- 4t} - \frac{12}{z}e^{- 6t}, \\ & y_{2}(t) = z\int_{0}^{t}\mspace{2mu}\mspace{2mu} y_{1}(\tau)d\tau = - 3e^{- 4t} + 2e^{- 6t} + 1 \end{matrix}\]

and

\[y(t) = 1 + \left( \frac{12}{z} - 3 \right)e^{- 4t} + \left( 2 - \frac{12}{z} \right)e^{- 6t} \]

If \(z = 4\) or \(z = 6\), one of the modes of the system is absent from the output, and the response is first order due to the pole-zero cancellation. The step responses of the system are shown in Fig. 3.32 (\(z = 4\), dashed; \(z = 6\), dot-dashed). The effect of the zero is most pronounced in terms of the additional overshoot for \(z = 1\) (the zero location closest to the origin). The system also has overshoot for \(z = 2, 3\). For \(z = 4\) or \(z = 6\), the responses are first order as expected. It is interesting that for \(z = 5\), where the zero is located between the two poles, there is no overshoot. This is generally the case because the zero effectively compensates for the effect of the second pole, rendering the response essentially first order.

Figure 3.32

Effect of zero on transient response
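The six responses of Fig. 3.32 can be generated with a few Matlab statements; the following sketch uses the transfer function given above:

s = tf('s');
t = 0:0.01:3;
hold on;
for z = 1:6
    sysH = (24/z)*(s + z)/((s + 4)*(s + 6)); % unity DC gain for every z
    step(sysH,t);                            % overlay the six step responses
end
hold off;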

109. EXAMPLE 3.29: Effect of the Proximity of the Complex Zeros to the Lightly Damped Poles

Figure 3.33

Locations of complex zeros

Consider the third-order feedback system with a pair of lightly damped poles and a pair of complex zeros with the transfer function,

\[H(s) = \frac{(s + \alpha)^{2} + \beta^{2}}{(s + 1)\left\lbrack (s + 0.1)^{2} + 1 \right\rbrack} \]

Determine the effect of the complex zero locations \((s = - \alpha \pm j\beta)\) on the unit-step response of the system for the three different zero locations \((\alpha,\beta) = (0.1,1.0),(\alpha,\beta) = (0.25,1.0)\), and \((\alpha,\beta) = (0.5,1.0)\), as shown in Fig. 3.33.

Solution. We plot the three unit-step responses using Matlab, as shown in Fig. 3.34. The effect of the lightly damped modes is clearly seen as oscillations in the step responses for the cases where \((\alpha,\beta) = (0.25,1.0)\) or \((\alpha,\beta) = (0.5,1.0)\), that is, when the complex zeros are not close to the locations of the lightly damped poles as shown in Fig. 3.33. On the other hand, if the complex zeros cancel the lightly damped poles exactly, as is the case for \((\alpha,\beta) = (0.1,1.0)\), the oscillations are completely eliminated in the step response. In practice, the locations of the lightly damped poles are not known precisely, and exact cancellation is not really possible. However, placing the complex zeros near the locations of the lightly damped poles may provide sufficient improvement in step response performance. We will come back to this technique later in Chapters 5, 7, and 10 in the context of dynamic compensator design.

Figure 3.34

Effect of complex zeros on transient response
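A sketch of the Matlab statements behind Fig. 3.34, with the parameters as given in the example, is:

s = tf('s');
ab = [0.1 1.0; 0.25 1.0; 0.5 1.0];  % the three (alpha, beta) zero locations
t = 0:0.05:20;
hold on;
for k = 1:3
    a = ab(k,1); b = ab(k,2);
    sysH = ((s + a)^2 + b^2)/((s + 1)*((s + 0.1)^2 + 1));
    step(sysH,t);                   % overlay the three step responses
end
hold off;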

110. EXAMPLE 3.30: Aircraft Response Using Matlab

The transfer function between the elevator and altitude of the Boeing 747 aircraft described in Section 10.3.2 can be approximated as

\[\frac{h(s)}{\delta_{e}(s)} = \frac{30(s - 6)}{s\left( s^{2} + 4s + 13 \right)} \]

  1. Use Matlab to plot the altitude time history for a \(1^{\circ}\) impulsive elevator input. Explain the response, noting the physical reasons for the nonminimum-phase nature of the response.

  2. Examine the accuracy of the approximations for \(t_{r},t_{s}\), and \(M_{p}\) [see Eqs. (3.68) and (3.73) and Fig. 3.24].

111. Solution

  1. The Matlab statements to create the impulse response for this case are as follows:

u = -1;                                   % u = delta_e (1 degree up-elevator impulse)
s = tf('s');                              % define Laplace variable
sysG = u*30*(s - 6)/(s^3 + 4*s^2 + 13*s); % define system by its transfer function
y = impulse(sysG);                        % compute impulse response; y = h
plot(y);                                  % plot impulse response

Response of a nonminimum-phase system

Figure 3.35

Response of an airplane's altitude to an impulsive elevator input

The result is the plot shown in Fig. 3.35. Notice how the altitude drops initially and then rises to a new final value. The final value is predicted by the Final Value Theorem:

\[h(\infty) = \left. \ s\frac{30(s - 6)( - 1)}{s\left( s^{2} + 4s + 13 \right)} \right|_{s = 0} = \frac{30( - 6)( - 1)}{13} = + 13.8 \]

The fact that the response has a finite final value for an impulsive input is due to the \(s\)-term in the denominator. This represents a pure integration, and the integral of an impulse function is a finite value. If the input had been a step, the altitude would have continued to increase with time; in other words, the integral of a step function is a ramp function.

The initial drop is predicted by the RHP zero in the transfer function. The negative elevator deflection is defined to be upward by convention (see Fig. 10.30). The upward deflection of the elevators drives the tail down, which rotates the craft nose up and produces the climb. The deflection at the initial instant causes a downward force before the craft has rotated; therefore, the initial altitude response is down. After rotation, the increased lift resulting from the increased angle of attack of the wings causes the airplane to climb.

  2. The rise time from Eq. (3.68) is

\[t_{r} = \frac{1.8}{\omega_{n}} = \frac{1.8}{\sqrt{13}} = 0.5sec \]

We find the damping ratio \(\zeta\) from the relation

\[2\zeta\omega_{n} = 4 \Rightarrow \zeta = \frac{2}{\sqrt{13}} = 0.55 \]

From Fig. 3.24 we find the overshoot \(M_{p}\) to be 0.14. Because \(2\zeta\omega_{n} = 2\sigma = 4\), Eq. (3.73) shows that

\[t_{s} = \frac{4.6}{\sigma} = \frac{4.6}{2} = 2.3sec \]

Detailed examination of the time history \(h(t)\) from the Matlab output shows that \(t_{r} \cong 0.43sec\), \(M_{p} \cong 0.14\), and \(t_{s} \cong 2.6sec\), which are reasonably close to the estimates. The only significant effect of the nonminimum-phase zero was to cause the initial response to go in the "wrong direction" and make the response somewhat sluggish.

Effect of extra pole

Figure 3.36

Step responses for several third-order systems with \(\zeta = 0.5\)

Figure 3.37

Step responses for several third-order systems with \(\zeta = 0.707\)

In addition to studying the effects of zeros, it is useful to consider the effects of an extra pole on the standard second-order step response. In this case, we take the transfer function to be

\[H(s) = \frac{1}{\left( s/\alpha\zeta\omega_{n} + 1 \right)\left\lbrack \left( s/\omega_{n} \right)^{2} + 2\zeta\left( s/\omega_{n} \right) + 1 \right\rbrack} \]

Plots of the step response for this case are shown in Fig. 3.36 for \(\zeta = 0.5\), and in Fig. 3.37 for \(\zeta = 0.707\), for several values of \(\alpha\). In this case, the major effect is to increase the rise time. A plot of the rise time versus \(\alpha\) is shown in Fig. 3.38 for several values of \(\zeta\).

Figure 3.38

Normalized rise time for several locations of an additional pole
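A minimal sketch of this sweep, assuming the normalized form of the transfer function above with \(\omega_{n} = 1\) and \(\zeta = 0.5\), is:

zeta = 0.5;                               % assumed damping ratio
t = 0:0.05:15;
hold on;
for alpha = [1 2 5 100]                   % alpha = 100 approximates no extra pole
    den = conv([1/(alpha*zeta) 1],[1 2*zeta 1]); % (s/(alpha*zeta)+1)(s^2+2*zeta*s+1)
    step(tf(1,den),t);                    % overlay the step responses
end
hold off;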

From this discussion, we can draw several conclusions about the dynamic response of a simple system as revealed by its pole-zero patterns:

112. Effects of Pole-Zero Patterns on Dynamic Response

  1. For a second-order system with no finite zeros, the transient response parameters are approximated as follows:

Rise time: \(\ t_{r} \cong \frac{1.8}{\omega_{n}}\),

Overshoot: \[M_{p} \cong \left\{ \begin{matrix} 5\%, & \zeta = 0.7, \\ 16\%, & \zeta = 0.5, \\ 35\%, & \zeta = 0.3, \end{matrix} \right.\ \quad \text{(see Fig. 3.24)}, \]

Settling time: \(\ t_{s} \cong \frac{4.6}{\sigma}\).

  2. A zero in the left half-plane (LHP) will increase the overshoot if the zero is within a factor of 4 of the real part of the complex poles. A plot is given in Fig. 3.29.

  3. A zero in the RHP will depress the overshoot (and may cause the step response to start out in the wrong direction).

  4. An additional pole in the LHP will increase the rise time significantly if the extra pole is within a factor of 4 of the real part of the complex poles. A plot is given in Fig. 3.38.

112.1. Stability

For nonlinear and time-varying systems, the study of stability is a complex and often difficult subject. In this section, we will consider only LTI systems for which we have the following condition for stability:

An LTI system is said to be stable if all the roots of the transfer function denominator polynomial have negative real parts (that is, they are all in the left half \(s\)-plane), and is unstable otherwise.

Stable system

Unstable system

A system is stable if its initial conditions decay to zero and is unstable if they diverge. As just stated, an LTI (constant parameter) system is stable if all the poles of the system are strictly inside the left half \(s\)-plane [that is, all the poles have negative real parts \((s = - \sigma + j\omega,\sigma > 0)\)]. If any pole of the system is in the right half \(s\)-plane (that is, has a positive real part, \(s = - \sigma + j\omega,\sigma < 0\)), then the system is unstable, as shown in Fig. 3.16. With a simple pole at the origin \((\sigma = 0)\), the response to an initial condition will persist at a constant value; with a simple complex pair of poles on the \(j\omega\) axis, oscillatory motion will persist. Therefore, a system is stable if its transient response decays and unstable if it does not. Figure 3.16 shows the time response of a system due to its pole locations.

In later chapters, we will address more advanced notions of stability, such as Nyquist's frequency-response stability test (see Chapter 6) and Lyapunov stability (see Chapter 9).

112.1.1. Bounded Input-Bounded Output Stability

A system is said to have bounded input-bounded output (BIBO) stability if every bounded input results in a bounded output (regardless of what goes on inside the system). A test for this property is readily available when the system response is given by convolution. If the system has input \(u(t)\), output \(y(t)\), and impulse response \(h(t)\), then

\[y(t) = \int_{- \infty}^{\infty}\mspace{2mu} h(\tau)u(t - \tau)d\tau \]

If \(u(t)\) is bounded, then there is a constant \(M\) such that \(|u| \leq M < \infty\), and the output is bounded by

\[\begin{matrix} |y| & \ = \left| \int_{}^{}\ hud\tau \right| \\ & \ \leq \int_{}^{}\ |h||u|d\tau \\ & \ \leq M\int_{- \infty}^{\infty}\mspace{2mu}\mspace{2mu}|h(\tau)|d\tau. \end{matrix}\]

Thus, the output will be bounded if \(\int_{- \infty}^{\infty}\mspace{2mu}|h|d\tau\) is bounded.

On the other hand, suppose the integral is not bounded and the bounded input is \(u(t - \tau) = + 1\) if \(h(\tau) > 0\) and \(u(t - \tau) = - 1\) if \(h(\tau) < 0\). In this case,

\[y(t) = \int_{- \infty}^{\infty}\mspace{2mu}|h(\tau)|d\tau \]

and the output is not bounded. We conclude that

Mathematical definition of BIBO stability

The system with impulse response \(h(t)\) is BIBO-stable if and only if the integral

\[\int_{- \infty}^{\infty}\mspace{2mu}|h(\tau)|d\tau < \infty \]

Figure 3.39

Capacitor driven by current source

114. EXAMPLE 3.31: BIBO Stability for a Capacitor

As an example, determine whether the capacitor driven by a current source, sketched in Fig. 3.39, is BIBO-stable. The capacitor voltage is the output and the current is the input.

Solution. The impulse response of this setup is \(h(t) = 1(t)\), the unit step. Now for this response,

\[\int_{- \infty}^{\infty}\mspace{2mu}|h(\tau)|d\tau = \int_{0}^{\infty}\mspace{2mu} d\tau \]

is not bounded. The capacitor is not BIBO-stable. Notice the transfer function of the system is \(1/s\) and has a pole on the imaginary axis. Physically, we can see that constant input current will cause the voltage to grow, and thus the system response is neither bounded nor stable. In general, if an LTI system has any pole \(\ ^{13}\) on the imaginary axis or in the RHP, the response will not be BIBO-stable; if every pole is inside the LHP, then the response will be BIBO-stable. Thus for these systems, pole locations of the transfer function can be used to check for stability.
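A numerical illustration of this test (a sketch over a finite horizon, which can only suggest the trend of the infinite integral) compares a stable first-order system with the capacitor's integrator model:

t = 0:0.01:100;
h_stable = impulse(tf(1,[1 1]),t); % h(t) = e^{-t}
h_integr = impulse(tf(1,[1 0]),t); % h(t) = 1(t): the capacitor of Fig. 3.39
trapz(t,abs(h_stable))             % approaches 1: bounded, so BIBO-stable
trapz(t,abs(h_integr))             % grows with the horizon: not BIBO-stable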

An alternative to computing the integral of the impulse response or even to locating the roots of the characteristic equation is given by Routh's stability criterion, which we will discuss in Section 3.6.3.

\(\ ^{13}\) Determination of BIBO stability by pole location.

114.0.1. Stability of LTI Systems

Consider the LTI system whose transfer function denominator polynomial leads to the characteristic equation

\[s^{n} + a_{1}s^{n - 1} + a_{2}s^{n - 2} + \cdots + a_{n} = 0. \]

Assume the roots \(\left\{ p_{i} \right\}\) of the characteristic equation are real or complex, but are distinct. Note Eq. (3.86) shows up as the denominator in the transfer function for the system as follows before any cancellation of poles by zeros is made:

\[\begin{matrix} T(s) & \ = \frac{Y(s)}{R(s)} = \frac{b_{0}s^{m} + b_{1}s^{m - 1} + \cdots + b_{m}}{s^{n} + a_{1}s^{n - 1} + \cdots + a_{n}} \\ & \ = \frac{K\prod_{i = 1}^{m}\mspace{2mu}\mspace{2mu}\left( s - z_{i} \right)}{\prod_{i = 1}^{n}\mspace{2mu}\mspace{2mu}\left( s - p_{i} \right)},\ m \leq n. \end{matrix}\]

The solution to the differential equation whose characteristic equation is given by Eq. (3.86) may be written using partial-fraction expansion as

\[y(t) = \sum_{i = 1}^{n}\mspace{2mu} K_{i}e^{p_{i}t} \]

where \(\left\{ p_{i} \right\}\) are the roots of Eq. (3.86) and \(\left\{ K_{i} \right\}\) depend on the initial conditions and zero locations. If a zero were to cancel a pole in the RHP for the transfer function, the corresponding \(K_{i}\) would equal zero in the output, but the unstable transient would appear in some internal variable.

The system is stable if and only if (necessary and sufficient condition) every term in Eq. (3.88) goes to zero as \(t \rightarrow \infty\) :

\[e^{p_{i}t} \rightarrow 0\text{~}\text{for all}\text{~}p_{i} \]

This will happen if all the poles of the system are strictly in the LHP, where

\[Re\left\{ p_{i} \right\} < 0. \]

If any poles are repeated, the response must be changed from that of Eq. (3.88) by including a polynomial in \(t\) in place of \(K_{i}\), but the conclusion is the same. This is called internal stability. Therefore, the stability of a system can be determined by computing the location of the roots of the characteristic equation and determining whether they are all in the LHP. If the system has any poles in the RHP, it is unstable. Hence the \(j\omega\) axis is the stability boundary between asymptotically stable and unstable response. If the system has nonrepeated \(j\omega\) axis poles, then it is said to be neutrally stable. For example, a pole at the origin (an integrator) results in a nondecaying transient. A pair of complex \(j\omega\) axis poles results in an oscillating response (with constant amplitude). If the system has repeated poles on the \(j\omega\) axis, then it is unstable [as it results in \(te^{\pm j\omega_{i}t}\) terms in Eq. (3.88)]. For example, a pair of poles at the origin

(double integrator) results in an unbounded response.

Internal stability occurs when all poles are strictly in the LHP.

The \(j\omega\) axis is the stability boundary.

Matlab software makes the computation of the poles, and therefore determination of the stability of the system, relatively easy.
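For instance, a minimal sketch with an assumed characteristic polynomial is:

denT = [1 2 3 4];   % assumed example: s^3 + 2s^2 + 3s + 4
p = roots(denT)     % compute the poles
all(real(p) < 0)    % returns 1 (true): all poles in the LHP, so stable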

An alternative to locating the roots of the characteristic equation is given by Routh's stability criterion, which we will discuss next.

114.0.2. Routh's Stability Criterion

There are several methods of obtaining information about the locations of the roots of a polynomial without actually solving for the roots. These methods were developed in the 19th century and were especially useful before the availability of Matlab software. They are still useful for determining the ranges of coefficients of polynomials for stability, especially when the coefficients are in symbolic (nonnumerical) form. Consider the characteristic equation of an \(n\)th-order system\(\ ^{14}\):

\[a(s) = s^{n} + a_{1}s^{n - 1} + a_{2}s^{n - 2} + \cdots + a_{n - 1}s + a_{n} \]

It is possible to make certain statements about the stability of the system without actually solving for the roots of the polynomial. This is a classical problem, and several methods exist for the solution.

A necessary condition for stability of the system is that all of the roots of Eq. (3.90) have negative real parts, which in turn requires that all the \(\left\{ a_{i} \right\}\) be positive. \(\ ^{15}\)

A necessary condition for Routh stability

A necessary (but not sufficient) condition for stability is that all the coefficients of the characteristic polynomial be positive.

If any of the coefficients are missing (are zero) or are negative, then the system will have poles located outside the LHP. This condition can be checked by inspection. Once the elementary necessary conditions have been satisfied, we need a more powerful test. Equivalent tests were independently proposed by Routh in 1874 and Hurwitz in 1895; we will discuss the former. Routh's formulation requires the computation of a triangular array that is a function of the \(\left\{ a_{i} \right\}\). He showed that a necessary and sufficient condition for stability is that all of the elements in the first column of this array be positive.

A necessary and sufficient condition for stability

A system is stable if and only if all the elements in the first column of the Routh array are positive.

To determine the Routh array, we first arrange the coefficients of the characteristic polynomial in two rows, beginning with the first and second coefficients, then followed by the even-numbered and odd-numbered coefficients:

\[\begin{matrix} s^{n}: & 1 & a_{2} & a_{4} & \cdots \\ s^{n - 1}: & a_{1} & a_{3} & a_{5} & \cdots \end{matrix}\]

We then add subsequent rows to complete the Routh array:

Row $$n$$ $$s^{n}:$$ 1 $$a_{2}$$ $$a_{4}$$ $$\cdots$$
Row $$n - 1$$ $$s^{n - 1}:$$ $$a_{1}$$ $$a_{3}$$ $$a_{5}$$ $$\cdots$$
Row $$n - 2$$ $$s^{n - 2}:$$ $$b_{1}$$ $$b_{2}$$ $$b_{3}$$ $$\cdots$$
Row $$n - 3$$ $$s^{n - 3}:$$ $$c_{1}$$ $$c_{2}$$ $$c_{3}$$ $$\cdots$$
$$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$ $$\vdots$$
Row 2 $$s^{2}:$$ $$*$$ $$*$$
Row 1 $$s^{1}:$$ $$*$$
Row 0 $$s^{0}:$$ $$*$$

We compute the elements from the \((n - 2)^{\text{th}\text{~}}\) and \((n - 3)^{\text{th}\text{~}}\) rows as follows:

\[\begin{matrix} & b_{1} = - \frac{det\begin{bmatrix} 1 & a_{2} \\ a_{1} & a_{3} \end{bmatrix}}{a_{1}} = \frac{a_{1}a_{2} - a_{3}}{a_{1}}, \\ & b_{2} = - \frac{det\begin{bmatrix} 1 & a_{4} \\ a_{1} & a_{5} \end{bmatrix}}{a_{1}} = \frac{a_{1}a_{4} - a_{5}}{a_{1}}, \\ & b_{3} = - \frac{det\begin{bmatrix} 1 & a_{6} \\ a_{1} & a_{7} \end{bmatrix}}{a_{1}} = \frac{a_{1}a_{6} - a_{7}}{a_{1}}, \\ & c_{1} = - \frac{det\begin{bmatrix} a_{1} & a_{3} \\ b_{1} & b_{2} \end{bmatrix}}{b_{1}} = \frac{b_{1}a_{3} - a_{1}b_{2}}{b_{1}}, \\ & c_{2} = - \frac{det\begin{bmatrix} a_{1} & a_{5} \\ b_{1} & b_{3} \end{bmatrix}}{b_{1}} = \frac{b_{1}a_{5} - a_{1}b_{3}}{b_{1}}, \\ & c_{3} = - \frac{det\begin{bmatrix} a_{1} & a_{7} \\ b_{1} & b_{4} \end{bmatrix}}{b_{1}} = \frac{b_{1}a_{7} - a_{1}b_{4}}{b_{1}}. \end{matrix}\]

Note the elements of the \((n - 2)^{\text{th}}\) row and the rows beneath it are formed from the two previous rows using determinants, with the two elements in the first column and other elements from successive columns. Normally, there are \(n + 1\) elements in the first column when the array terminates. If these are all positive, then all the roots of the characteristic polynomial are in the LHP. However, if the elements of the first column are not all positive, then the number of roots in the RHP equals the number of sign changes in the column. A pattern of \(+, -, +\) is counted as two sign changes: one change from \(+\) to \(-\), and another from \(-\) to \(+\). For a simple proof of the Routh test, the reader is referred to Ho et al. (1998).

115. EXAMPLE 3.32: Routh's Test

The polynomial

\[a(s) = s^{6} + s^{5} + s^{4} + 2s^{3} + 5s^{2} + 2s + 3 \]

satisfies the necessary condition for stability since all the \(\left\{ a_{i} \right\}\) are positive and nonzero. Determine whether any of the roots of the polynomial are in the RHP.

Solution. The Routh array for this polynomial is

\[\begin{matrix} s^{6}: & 1 & 1 & 5 & 3 \\ s^{5}: & 1 & 2 & 2 & 0 \\ s^{4}: & - 1 = \frac{1 \cdot 1 - 1 \cdot 2}{1} & 3 = \frac{1 \cdot 5 - 1 \cdot 2}{1} & 3 = \frac{1 \cdot 3 - 1 \cdot 0}{1} & \\ s^{3}: & 5 = \frac{- 1 \cdot 2 - 1 \cdot 3}{- 1} & 5 = \frac{- 1 \cdot 2 - 1 \cdot 3}{- 1} & 0 & \\ s^{2}: & 4 = \frac{5 \cdot 3 + 1 \cdot 5}{5} & 3 = \frac{5 \cdot 3 + 1 \cdot 0}{5} & & \\ s^{1}: & \frac{5}{4} = \frac{4 \cdot 5 - 5 \cdot 3}{4} & 0 & & \\ s^{0}: & 3 = \frac{(5/4) \cdot 3 - 4 \cdot 0}{5/4} & & & \end{matrix}\]

We conclude that the polynomial has RHP roots, since the elements of the first column are not all positive. In fact, there are two poles in the RHP because there are two sign changes.

Note, in computing the Routh array, we can simplify the rest of the calculations by multiplying or dividing a row by a positive constant. Also note the last two rows each have one nonzero element.
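The conclusion is easy to cross-check numerically; the following illustrative sketch solves for the roots of \(a(s)\) directly:

a = [1 1 1 2 5 2 3]; % coefficients of a(s) above
p = roots(a);        % solve for the roots directly
sum(real(p) > 0)     % returns 2: two RHP roots, as the Routh array predicts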

Routh's method is also useful in determining the range of parameters for which a feedback system remains stable.

Figure 3.40

A feedback system for testing stability

116. EXAMPLE 3.33: Stability versus Parameter Range

Consider the system shown in Fig. 3.40. The stability properties of the system are a function of the proportional feedback gain \(K\). Determine the range of \(K\) over which the system is stable.

Solution. The characteristic equation for the system is given by

\[1 + K\frac{s + 1}{s(s - 1)(s + 6)} = 0, \]

or

\[s^{3} + 5s^{2} + (K - 6)s + K = 0 \]

The corresponding Routh array is

\[\begin{matrix} s^{3}: & 1 & K - 6 \\ s^{2}: & 5 & K \\ s: & (4K - 30)/5 & \\ s^{0}: & K. & \end{matrix}\]

For the system to be stable, it is necessary that

\[\frac{4K - 30}{5} > 0\text{~}\text{and}\text{~}K > 0\text{,}\text{~} \]

or

\[K > 7.5\text{~}\text{and}\text{~}K > 0\text{.}\text{~} \]

Thus, Routh's method provides an analytical answer to the stability question. Although any gain satisfying this inequality stabilizes the system, the dynamic response could be quite different depending on the specific value of \(K\). Given a specific value of the gain, we may compute the closed-loop poles by finding the roots of the characteristic polynomial. The characteristic polynomial has the coefficients represented by the row vector (in descending powers of \(s\) )

Computing roots by Matlab

denT = [1 5 K-6 K];  % form the characteristic polynomial coefficient vector

and we may compute the roots using the Matlab function roots(denT).

For \(K = 7.5\) the roots are at -5 and \(\pm 1.22j\), and the system is neutrally stable. Note that Routh's method predicts the presence of poles on the \(j\omega\) axis for \(K = 7.5\). If we set \(K = 13\), the closed-loop poles are at -4.06 and \(- 0.47 \pm 1.7j\), and for \(K = 25\), they are at -1.90 and \(- 1.54 \pm 3.27j\). In both these cases, the system is stable as predicted by Routh's method. Fig. 3.41 shows the transient responses for the three gain values. To obtain these transient responses, we compute the closed-loop transfer function

\[T(s) = \frac{Y(s)}{R(s)} = \frac{K(s + 1)}{s^{3} + 5s^{2} + (K - 6)s + K}, \]

and the Matlab statements

s = tf('s');                                    % define the Laplace variable
sysT = K*(s + 1)/(s^3 + 5*s^2 + (K - 6)*s + K); % define transfer function
step(sysT);                                     % compute the step response

produce a plot of the (unit) step response.

Figure 3.41

Transient responses for the system in Fig. 3.40

Figure 3.42

System with proportional-integral (PI) control

118. EXAMPLE 3.34: Stability versus Two Parameter Ranges

Find the range of the controller gains \(\left( K,K_{I} \right)\) so the PI (proportional-integral; see Chapter 4) feedback system in Fig. 3.42 is stable.

Solution. The characteristic equation of the closed-loop system is

\[1 + \left( K + \frac{K_{I}}{s} \right)\frac{1}{(s + 1)(s + 2)} = 0 \]

which we may rewrite as

\[s^{3} + 3s^{2} + (2 + K)s + K_{I} = 0. \]

The corresponding Routh array is

\[\begin{matrix} s^{3}: & 1 & 2 + K \\ s^{2}: & 3 & K_{I} \\ s: & \left( 6 + 3K - K_{I} \right)/3 & \\ s^{0}: & K_{I}. & \end{matrix}\]

For internal stability we must have

\[K_{I} > 0\ \text{~}\text{and}\text{~}\ K > \frac{1}{3}K_{I} - 2 \]

The allowable region can be plotted in Matlab using the following commands

fh = @(KI,K) 6 + 3*K - KI;
ezplot(fh)
hold on;
f = @(KI,K) KI;
ezplot(f);

and is the shaded area in the \(\left( K_{I},K \right)\) plane shown in Fig. 3.43, which represents an analytical solution to the stability question. This example illustrates the real value of Routh's approach and why it is superior to the numerical approaches. It would have been more difficult to arrive at these bounds on the gains using numerical search techniques. The closed-loop transfer function is

\[T(s) = \frac{Y(s)}{R(s)} = \frac{Ks + K_{I}}{s^{3} + 3s^{2} + (2 + K)s + K_{I}}. \]

Matlab roots

Figure 3.43

Allowable region for stability

As in Example 3.33, we may compute the closed-loop poles for different values of the dynamic compensator gains by using the Matlab function roots on the denominator polynomial:

denT = [1 3 2+K KI];  % form denominator

Figure 3.44

Transient response for the system in Fig. 3.42

Similarly, we may find the zero by finding the roots of the numerator polynomial

numT = [K KI];  % form numerator

The closed-loop zero of the system is at \(- K_{I}/K\). Fig. 3.44 shows the transient response for three sets of feedback gains. For \(K = 1\) and \(K_{I} = 0\), the closed-loop poles are at 0 and \(- 1.5 \pm 0.86j\), and there is a zero at the origin. For \(K = K_{I} = 1\), the poles and zeros are all at -1 . For \(K = 10\) and \(K_{I} = 5\), the closed-loop poles are at -0.46 and \(- 1.26 \pm 3.3j\) and the zero is at -0.5 . The step responses were obtained using the following Matlab function:

sysT = tf(numT,denT); % define system by its numerator and denominator
step(sysT);           % compute step response

There is a large steady-state error in this case when \(K_{I} = 0\). (See Chapter 4.)
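A sketch consolidating the three cases of Fig. 3.44, with the gains as given above, is:

gains = [1 0; 1 1; 10 5];           % rows are [K KI]
hold on;
for i = 1:3
    K = gains(i,1); KI = gains(i,2);
    sysT = tf([K KI],[1 3 2+K KI]); % closed-loop transfer function
    step(sysT);                     % overlay the three step responses
end
hold off;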

If the first term in one of the rows is zero or if an entire row is zero, then the standard Routh array cannot be formed, so we have to use one of the special techniques described next.

119. Special Cases

If only the first element in one of the rows of the Routh array is zero, or an entire row of the Routh array is zero, special modifications to the Routh array computations are necessary. For details, see Appendix W3.6.3 available online at www.pearsonglobaleditions.com.

The Routh-Hurwitz result assumes the characteristic polynomial coefficients are known precisely. It is well known that the roots of a polynomial can be very sensitive to even slight perturbations in the polynomial coefficients. If the range of variation of each one of the polynomial coefficients is known, then a remarkable result called the Kharitonov Theorem (1978) allows one to test just four so-called Kharitonov polynomials, using the Routh test, to see if the polynomial coefficient variations result in instability.

120. \(\Delta\ 3.7\) Obtaining Models from Experimental Data: System Identification

There are several reasons for using experimental data to obtain a model of the dynamic system to be controlled. The available information and related techniques in this area are under the banner of system identification. See Appendix W3.7 available online at www.pearsonglobaleditions.com.

121. \(\Delta\ 3.8\) Amplitude and Time Scaling

In some cases in practice, due to extreme variations in magnitudes of real data, amplitude scaling is necessary. See Appendix W3.8 available online at www.pearsonglobaleditions.com.

121.1. Historical Perspective

Oliver Heaviside (1850-1925) was an eccentric English electrical engineer, mathematician, and physicist. He was self-taught and left school at the age of 16 to become a telegraph operator. He worked mostly outside the scientific community that was hostile to him. He reformulated Maxwell's equations into the form that is used today. He also laid down the foundations of telecommunication and hypothesized the existence of the ionosphere. He developed the symbolic procedure known as Heaviside's operational calculus for solving differential equations. The Heaviside calculus was widely popular among electrical engineers in the 1920s and 1930s. This was later shown to be equivalent to the more rigorous Laplace transform named after the French mathematician Pierre-Simon Laplace (1749-1827) who had earlier worked on operational calculus.

Laplace was also an astronomer and a mathematician who is sometimes referred to as "the Newton of France." He studied the origin and dynamical stability of the solar system, completing Newton's work in his five-volume Mécanique céleste (Celestial Mechanics). Laplace invented the general concept of potential, as in a gravitational or electric field, described by Laplace's equation. Laplace had a brief political career as Napoleon's Interior Minister. During a famous exchange with Napoleon, who asked Laplace why he had not mentioned God in Mécanique céleste, Laplace is said to have replied, "Sir, there was no need for that hypothesis." He was an opportunist and changed sides as the political winds shifted. Laplace's operational property transforms a differential equation into an algebraic operation that is much easier to manipulate in engineering applications. It is also applicable to solutions of partial differential equations, the original problem that Laplace was concerned with while developing the transform. Laplace formulated Laplace's equation with applications to electromagnetic theory, fluid dynamics, and astronomy. Laplace also made fundamental contributions to probability theory.

Laplace and Fourier transforms are intimately related (see Appendix A). The Fourier series and the Fourier transform, developed in that order, provide methods for representing signals in terms of exponential functions. Fourier series are used to represent a periodic signal with discrete spectra in terms of a series. Fourier transforms are used to represent a non-periodic signal with continuous spectra in terms of an integral. The Fourier transform is named after the French mathematician Jean Baptiste Joseph Fourier (1768-1830), who used Fourier series to solve the heat conduction equation. Laplace and Fourier were contemporaries and knew each other very well. In fact, Laplace was one of Fourier's teachers. Fourier accompanied Napoleon on his Egyptian expedition in 1798 as a science advisor, and is also credited with the discovery of the greenhouse effect.

Transform methods provide a unifying framework for solving many engineering problems. Linear transforms such as the Laplace transform and the Fourier transform are useful for studying linear systems. While Fourier transforms are useful for studying steady-state behavior, Laplace transforms are used for studying the transient and closed-loop behavior of dynamic systems. The book by Gardner and Barnes in 1942 was influential in popularizing the Laplace transform in the United States.

122. SUMMARY

  • The Laplace transform is the primary tool used to determine the behavior of linear systems. The Laplace transform of a time function \(f(t)\) is given by

\[\mathcal{L}\lbrack f(t)\rbrack = F(s) = \int_{0^{-}}^{\infty}\mspace{2mu} f(t)e^{- st}dt \]

  • This relationship leads to the key property of Laplace transforms, namely,

\[\mathcal{L}\lbrack\overset{˙}{f}(t)\rbrack = sF(s) - f\left( 0^{-} \right). \]

  • This property allows us to find the transfer function of a linear ODE. Given the transfer function \(G(s)\) of a system and the input \(u(t)\), with transform \(U(s)\), the system output transform is \(Y(s) = G(s)U(s)\).

  • Normally, inverse transforms are found by referring to tables, such as Table A.2 in Appendix A, or by computer. Properties of Laplace transforms and their inverses are summarized in Table A.1 in Appendix A.

  • The Final Value Theorem is useful in finding steady-state errors for stable systems: If all the poles of \(sY(s)\) are in the LHP, then

\[\lim_{t \rightarrow \infty}\mspace{2mu} y(t) = \lim_{s \rightarrow 0}\mspace{2mu} sY(s) \]

  • Block diagrams are a convenient way to show the relationships between the components of a system. They can usually be simplified using the relations in Fig. 3.10 and Eq. (3.58); that is, the transfer function of the elementary feedback block diagram is equivalent to

\[Y_{1}(s) = \frac{G_{1}(s)}{1 + G_{1}(s)G_{2}(s)}R_{1}(s) \]

  • The locations of poles in the \(s\)-plane determine the character of the response, as shown in Fig. 3.16.

  • The location of a pole in the \(s\)-plane is defined by the parameters shown in Fig. 3.18. These parameters are related to the time-domain quantities of rise time \(t_{r}\), settling time \(t_{s}\), and overshoot \(M_{p}\), which are defined in Fig. 3.23. The correspondences between them, for a second-order system with no zeros (checked numerically in the sketch following this summary), are given by

\[\begin{matrix} t_{r} & \ \cong \frac{1.8}{\omega_{n}}, \\ M_{p} & \ = e^{- \pi\zeta/\sqrt{1 - \zeta^{2}}}, \\ t_{s} & \ = \frac{4.6}{\zeta\omega_{n}}. \end{matrix}\]

  • When a zero in the LHP is present, the overshoot increases. This effect is summarized in Figs. 3.27, 3.28 and 3.29.

  • When a real RHP zero is present, the step response starts off in the "wrong direction," and the response is more sluggish. This effect is summarized in Fig. 3.31, and is called nonminimum-phase behavior.

  • When an additional stable pole is present, the system response is more sluggish. This effect is summarized in Figs. 3.36, 3.37, and 3.38.

  • For a stable system, all the closed-loop poles must be in the LHP.

  • A system is stable if and only if all the elements in the first column of the Routh array are positive. To determine the Routh array, refer to the formulas in Section 3.6.3.
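As a concrete check on these summary relations, here is a minimal Matlab sketch (assuming the Control System Toolbox; the values \(\omega_{n} = 2\) and \(\zeta = 0.5\) and the example polynomial are illustrative assumptions, not from the text). It compares the three time-domain estimates against a simulated step response, and performs the root test that Routh's criterion allows one to carry out by hand:

```matlab
% Compare the second-order estimates with a simulated step response.
% Control System Toolbox assumed; wn and zeta are illustrative values.
wn = 2; zeta = 0.5;                        % natural frequency, damping ratio
G  = tf(wn^2, [1 2*zeta*wn wn^2]);         % second-order system, no zeros

tr_est = 1.8/wn;                           % rise-time estimate
Mp_est = exp(-pi*zeta/sqrt(1 - zeta^2));   % overshoot estimate
ts_est = 4.6/(zeta*wn);                    % 1% settling-time estimate

info = stepinfo(G, 'SettlingTimeThreshold', 0.01);   % use the 1% criterion
fprintf('tr: est %.2f, sim %.2f sec\n', tr_est, info.RiseTime);
fprintf('Mp: est %.1f%%, sim %.1f%%\n', 100*Mp_est, info.Overshoot);
fprintf('ts: est %.2f, sim %.2f sec\n', ts_est, info.SettlingTime);

% Routh's criterion decides stability without computing roots; with a
% computer at hand, the equivalent root test is immediate:
p = [1 5 10 10 5 1];             % example polynomial, (s + 1)^5
all(real(roots(p)) < 0)          % true only if every root is in the LHP
```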

123. REVIEW QUESTIONS

3.1 What is the definition of a "transfer function"?

3.2 What are the properties of systems whose responses can be described by transfer functions?

3.3 What is the Laplace transform of \(f(t - \lambda)1(t - \lambda)\) if the transform of \(f(t)\) is \(F(s)\) ?

3.4 State the Final Value Theorem (FVT).

3.5 What is the most common use of the FVT in control?

3.6 Given a second-order transfer function with damping ratio \(\zeta\) and natural frequency \(\omega_{n}\), what is the estimate of the step response rise time? What is the estimate of the percent overshoot in the step response? What is the estimate of the settling time?

3.7 What is the major effect of a zero in the left half-plane on the second-order step response?

3.8 What is the most noticeable effect of a zero in the right half-plane on the step response of the second-order system?

3.9 What is the main effect of an extra real pole on the second-order step response?

3.10 Why is stability an important consideration in control system design?

3.11 What is the main use of Routh's criterion?

3.12 Under what conditions might it be important to know how to estimate a transfer function from experimental data?

124. PROBLEMS

125. Problems for Section 3.1: Review of Laplace Transforms

3.1 Show that, in a partial-fraction expansion, complex conjugate poles have coefficients that are also complex conjugates. (The result of this relationship is that whenever complex conjugate pairs of poles are present, only one of the coefficients needs to be computed.)

3.2 Find the Laplace transform of the following time functions:

(a) \(f(t) = 0.5 + 2.5t\)

(b) \(f(t) = 1.5 + 9t + 0.3t^{2} + \delta(t)\), where \(\delta(t)\) is the unit impulse function

(c) \(f(t) = 5.5e^{- t} + 3e^{- 2t} + 2.5t^{2}e^{- 3t}\)

(d) \(f(t) = (2t + 1)^{2}\)

(e) \(f(t) = cosh0.2t\)

3.3 Find the Laplace transform of the following time functions:

(a) \(f(t) = 8sin0.75t\)

(b) \(f(t) = cos1.5t + 4sin1.5t + 1.7e^{- 0.5t}cos1.5t\)

(c) \(f(t) = 0.4t^{3} + 1.8e^{t}sin2.2t\)

3.4 Find the Laplace transform of the following time functions:

(a) \(f(t) = tcost\)

(b) \(f(t) = tsin0.9t\)
(c) \(f(t) = te^{- 0.1t} + 2tcos0.6t\)

(d) \(f(t) = 3t^{2} - 2tsin1.3t + 5tcos7.2t\)

(e) \(f(t) = t^{2}cost + t^{2}sin0.9t\)

3.5 Find the Laplace transform of the following time functions \((*\) denotes convolution):

(a) \(f(t) = sin5tcost\)

(b) \(f(t) = 2 + 3\sin^{2}4t + 5\cos^{2}t\)

(c) \(f(t) = (sint)/t\)

(d) \(f(t) = sint*cos2t\)

(e) \(f(t) = \int_{0}^{t}\mspace{2mu} sin(t - \tau)sin\tau d\tau\)

3.6 Given the Laplace transform of \(f(t)\) is \(F(s)\), find the Laplace transform of the following:

(a) \(g(t) = f(t)cost\)

(b) \(g(t) = \int_{0}^{t}\mspace{2mu}\int_{0}^{t_{1}}\mspace{2mu} f(\tau)d\tau dt_{1}\)

3.7 Find the time function corresponding to each of the following Laplace transforms using partial-fraction expansions:

(a) \(F(s) = \frac{5}{s(s + 7)}\)

(b) \(F(s) = \frac{6}{s(s + 1)(s + 2)}\)

(c) \(F(s) = \frac{8s + 2}{s^{2} + s + 20}\)

(d) \(F(s) = \frac{5s + 3}{(s + 1)\left( s^{2} + 2s + 30 \right)}\)

(e) \(F(s) = \frac{s + 4}{s^{2} + 2}\)

(f) \(F(s) = \frac{s + 1}{s\left( s^{2} + 4 \right)}\)

(g) \(F(s) = \frac{s + 9}{s^{2}(s + 1)}\)

(h) \(F(s) = \frac{5}{s^{5}}\)

(i) \(F(s) = \frac{18}{s^{4} + 9}\)

(j) \(F(s) = \frac{e^{- 3s}}{s^{3}}\)

3.8 Find the time function corresponding to each of the following Laplace transforms:

(a) \(F(s) = \frac{1}{s(s + 2)^{2}}\)

(b) \(F(s) = \frac{s^{2} + s + 1}{s^{3} - 1}\)

(c) \(F(s) = \frac{2\left( s^{2} + s + 1 \right)}{s(s + 1)^{2}}\)

(d) \(F(s) = \frac{s^{3} + 2s + 4}{s^{4} - 16}\)

(e) \(F(s) = \frac{2(s + 2)(s + 5)^{2}}{(s + 1)\left( s^{2} + 4 \right)^{2}}\)

(f) \(F(s) = \frac{s^{2} - 1}{\left( s^{2} + 1 \right)^{2}}\)

(g) \(F(s) = \tan^{- 1}\left( \frac{1}{s} \right)\)
3.9 Solve the following ordinary differential equations using Laplace transforms:

(a) \(\overset{¨}{y}(t) + \overset{˙}{y}(t) + 3y(t) = 0;y(0) = 1,\overset{˙}{y}(0) = 2\)

(b) \(\overset{¨}{y}(t) - 2\overset{˙}{y}(t) + 4y(t) = 0;y(0) = 1,\overset{˙}{y}(0) = 2\)

(c) \(\overset{¨}{y}(t) + \overset{˙}{y}(t) = sint;y(0) = 1,\overset{˙}{y}(0) = 2\)

(d) \(\overset{¨}{y}(t) + 3y(t) = sint;y(0) = 1,\overset{˙}{y}(0) = 2\)

(e) \(\overset{¨}{y}(t) + 2\overset{˙}{y}(t) = e^{t};y(0) = 1,\overset{˙}{y}(0) = 2\)

(f) \(\overset{¨}{y}(t) + y(t) = t;y(0) = 1,\overset{˙}{y}(0) = - 1\)

3.10 Using the convolution integral, find the step response of the system whose impulse response is given below and shown in Fig. 3.45:

\[h(t) = \left\{ \begin{matrix} te^{- t} & t \geq 0 \\ 0 & t < 0 \end{matrix} \right.\ \]

Figure 3.45

Impulse response for Problem 3.10

Figure 3.46

Impulse response for Problem 3.11

3.11 Using the convolution integral, find the step response of the system whose impulse response is given below and shown in Fig. 3.46:

\[h(t) = \left\{ \begin{matrix} t/3,\ 0 \leq t \leq 3 \\ 0,\ t < 0\text{~}\text{and}\text{~}t > 3. \end{matrix} \right.\ \]

3.12 Consider the standard second-order system

\[G(s) = \frac{\omega_{n}^{2}}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}} \]

(a) Write the Laplace transform of the signal in Fig. 3.47.

(b) What is the transform of the output if this signal is applied to \(G(s)\) ?

(c) Find the output of the system for the input shown in Fig. 3.47.

Figure 3.47

Plot of input signal for Problem 3.12

3.13 A rotating load is connected to a field-controlled DC motor with negligible field inductance. A test results in the output load reaching a speed of \(1rad/sec\) within \(1/2sec\) when a constant input of \(100\text{ }V\) is applied to the motor terminals. The output steady-state speed from the same test is found to be \(2rad/sec\). Determine the transfer function \(\frac{\Theta(s)}{V_{f}(s)}\) of the motor.

3.14 For the system in Fig. 2.57, compute the transfer function from the motor voltage to position \(\theta_{2}\).

3.15 Compute the transfer function for the two-tank system in Fig. 2.61 with holes at \(A\) and \(C\).

3.16 For a second-order system with transfer function

\[G(s) = \frac{5}{s^{2} + s + 4} \]

Determine the following:

(a) The DC gain and whether the system is stable.

(b) The final value of the output if the input is a step of 2 units, that is, \(R(s) = \frac{2}{s}\).

3.17 Consider the continuous rolling mill depicted in Fig. 3.48. Suppose the motion of the adjustable roller has a damping coefficient \(b\), and the force exerted by the rolled material on the adjustable roller is proportional to the material's change in thickness: \(F_{S} = c(T - x)\). Suppose further the DC motor has a torque constant \(K_{t}\) and a back emf constant \(K_{e}\), and the rack-and-pinion has effective radius of \(R\).

(a) What are the inputs to this system? The output?

(b) Without neglecting the effects of gravity on the adjustable roller, draw a block diagram of the system that explicitly shows the following quantities: \(V_{S}(s),I_{0}(s),F(s)\) (the force the motor exerts on the adjustable roller), and \(X(s)\).

(c) Simplify your block diagram as much as possible while still identifying each output and input separately.

Figure 3.48

Continuous rolling mill

126. Problems for Section 3.2: System Modeling Diagrams

3.18 Consider the block diagram shown in Fig. 3.49. Note \(a_{i}\) and \(b_{i}\) are constants. Compute the transfer function for this system. This special structure is called the "control canonical form", and will be discussed further in Chapter 7.

Figure 3.49

Block diagram for

Problem 3.18

3.19 Find the transfer functions for the block diagrams in Fig. 3.50.

Figure 3.50

Block diagrams for

Problem 3.19

(a)

(b)

(c)

3.20 Find the transfer functions for the block diagrams in Fig. 3.51, using the ideas of block-diagram simplification. The special structure in Fig. 3.51(b) is called the "observer canonical form", and will be discussed in Chapter 7.

3.21 Use block-diagram algebra to determine the transfer function between \(R(s)\) and \(Y(s)\) in Fig. 3.52.

\(\bigtriangleup\) 3.22 Find the transfer functions for the block diagrams in Fig. 3.51, using Mason's rule.

\(\bigtriangleup\) 3.23 Use Mason's rule to determine the transfer function between \(R(s)\) and \(Y(s)\) in Fig. 3.52.

127. Problems for Section 3.3: Effect of Pole Locations

3.24 For the electric circuit shown in Fig. 3.53, find the following:

(a) The time-domain equation relating \(i(t)\) and \(v_{1}(t)\);

(b) The time-domain equation relating \(i(t)\) and \(v_{2}(t)\);

(c) Assuming all initial conditions are zero, the transfer function \(\frac{v_{2}(s)}{v_{1}(s)}\) and the damping ratio \(\zeta\) and undamped natural frequency \(\omega_{n}\) of the system;

(d) The range of \(C\) values that will result in \(v_{2}(t)\) having an overshoot of no more than \(20\%\), assuming \(v_{1}(t)\) is a unit step, \(L = 1.5mH\), and \(R = 5\Omega\).

(a)

(b)

(c)

(d)

Figure 3.51

Block diagrams for Problem 3.20

Figure 3.52

Block diagram for Problem 3.21

Figure 3.53

Circuit for Problem 3.24

Figure 3.54

Unity feedback system

for Problem 3.25

Figure 3.55

Unity feedback system

for Problem 3.26

3.25 For the unity feedback system shown in Fig. 3.54, specify the gain \(K\) of the proportional controller so that the output \(y(t)\) has an overshoot of no more than \(12\%\) in response to a unit step.

3.26 For the unity feedback system shown in Fig. 3.55, specify the gain and pole location of the compensator so that the overall closed-loop response to a unit-step input has an overshoot of no more than \(18\%\) and a \(1\%\) settling time of no more than \(0.01sec\). Verify your design using Matlab.

128. Problems for Section 3.4: Time-Domain Specification

3.27 Suppose you desire the peak time of a given second-order system to be less than \(t_{p}^{'}\). Draw the region in the \(s\)-plane that corresponds to values of the poles that meet the specification \(t_{p} < t_{p}^{'}\).

3.28 A certain servomechanism system has dynamics dominated by a pair of complex poles and no finite zeros. The time-domain specifications on the rise time \(\left( t_{r} \right)\), percent overshoot \(\left( M_{p} \right)\), and settling time \(\left( t_{s} \right)\) are given by:

\[\begin{matrix} t_{r} & \ \leq 0.6sec \\ M_{p} & \ \leq 17\% \\ t_{S} & \ \leq 9.2sec \end{matrix}\]

(a) Sketch the region in the \(s\)-plane where the poles could be placed so that the system will meet all three specifications.

(b) Indicate on your sketch the specific locations (denoted by \(x\) ) that will have the smallest rise time and also meet the settling time specification exactly.

3.29 A feedback system has the following response specifications:

Figure 3.56

Unity feedback system for Problem 3.30

Figure 3.57

Desired closed-loop pole locations for Problem 3.30

  • Percent overshoot \(M_{p} \leq 16\%\)

  • Settling time \(t_{S} \leq 6.9sec\)

  • Rise time \(t_{r} \leq 1.8sec\)

(a) Sketch the region of acceptable closed-loop poles in the s-plane for the system, assuming the transfer function can be approximated as simple second order.

(b) What is the expected overshoot if the rise time and settling time specifications are met exactly?

3.30 Suppose you are to design a unity feedback controller for a first-order plant depicted in Fig. 3.56. (As you will learn in Chapter 4, the configuration shown is referred to as a proportional-integral controller.) You are to design the controller so that the closed-loop poles lie within the shaded regions shown in Fig. 3.57.

(a) What values of \(\omega_{n}\) and \(\zeta\) correspond to the shaded regions in Fig. 3.57? (A simple estimate from the figure is sufficient.)

(b) Let \(K_{\alpha} = \alpha = 2\). Find values for \(K\) and \(K_{I}\) so the poles of the closed-loop system lie within the shaded regions.

(c) Prove that no matter what the values of \(K_{\alpha}\) and \(\alpha\) are, the controller provides enough flexibility to place the poles anywhere in the complex (left-half) plane.

3.31 The open-loop transfer function of a unity feedback system is

\[G(s) = \frac{K}{s(s + 2)} \]

The desired system response to a step input is specified as peak time \(t_{p} =\) \(1sec\) and overshoot \(M_{p} = 5\%\).

Figure 3.58

(a) Mechanical system for Problem 3.32; (b) step response for Problem 3.32

(a) Determine whether both specifications can be met simultaneously by selecting the right value of \(K\).

(b) Sketch the associated region in the s-plane where both specifications are met, and indicate what root locations are possible for some likely values of \(K\).

(c) Relax the specifications in part (a) by the same factor and pick a suitable value for \(K\), and use Matlab to verify that the new specifications are satisfied.

3.32 A simple mechanical system is shown in Fig. 3.58(a). The parameters are \(k =\) spring constant, \(b =\) viscous friction constant, \(m =\) mass. A step of \(2\text{ }N\) force is applied as \(F = 2 \times 1(t)\) and the resulting step response is shown in Fig. 3.58(b). What are the values of the system parameters \(k,b\), and \(m\) ?

(a)

(b)

3.33 A mechanical system is shown in Fig. 3.59. The mass \(M = 18\text{ }kg\) and the control force, \(u\), is proportional to the reference input, \(u = Ar\).

(a) Derive the transfer function from \(R\) to \(Y\).

Figure 3.59

Simple mechanical

system for Problem 3.33

(b) Determine the values of the parameters \(k,b,A\) such that the system has a rise time of \(t_{r} = 0.7\text{ }s\) and overshoot of \(M_{p} = 14\%\), and zero steady-state error to a step in \(r\).

3.34 The equations of motion for the DC motor shown in Fig. 2.33 were given in Eqs (2.65) as

\[J_{m}{\overset{¨}{\theta}}_{m} + \left( b + \frac{K_{t}K_{e}}{R_{a}} \right){\overset{˙}{\theta}}_{m} = \frac{K_{t}}{R_{a}}v_{a} \]

Assume that

\[\begin{matrix} J_{m} & \ = 0.05\text{ }kg \cdot m^{2}, \\ b & \ = 0.009\text{ }N \cdot m \cdot sec, \\ K_{e} & \ = 0.07\text{ }V \cdot sec, \\ K_{t} & \ = 0.07\text{ }N \cdot m/A, \\ R_{a} & \ = 12\Omega. \end{matrix}\]

(a) Find the transfer function between the applied voltage \(v_{a}\) and the motor speed \({\overset{˙}{\theta}}_{m}\).

(b) What is the steady-state speed of the motor after a voltage \(v_{a} = 15\text{ }V\) has been applied?

(c) Find the transfer function between the applied voltage \(v_{a}\) and the shaft angle \(\theta_{m}\).

(d) Suppose feedback is added to the system in part (c) so it becomes a position servo device such that the applied voltage is given by

\[v_{a} = K\left( \theta_{r} - \theta_{m} \right) \]

where \(K\) is the feedback gain. Find the transfer function between \(\theta_{r}\) and \(\theta_{m}\).

(e) What is the maximum value of \(K\) that can be used if an overshoot \(M < 16\%\) is desired?

(f) What values of \(K\) will provide a rise time of less than \(5.2sec\) ? (Ignore the \(M_{p}\) constraint.)

(g) Use Matlab to plot the step response of the position servo system for values of the gain \(K = 0.6,1\), and 2. Find the overshoot and rise time for each of the three step responses by examining your plots. Are the plots consistent with your calculations in parts (e) and (f)?

3.35 You wish to control the elevation of the satellite-tracking antenna shown in Fig. 3.60 and Fig. 3.61. The antenna and drive parts have a moment of inertia \(J\) and a damping \(B\); these arise to some extent from bearing

Figure 3.60

Satellite-tracking antenna

Source:

fstockfoto/Shutterstock

Figure 3.61

Schematic of antenna

for Problem 3.35

and aerodynamic friction, but mostly from the back emf of the DC drive motor. The equations of motion are

\[J\overset{¨}{\theta} + B\overset{˙}{\theta} = T_{c}, \]

where \(T_{c}\) is the torque from the drive motor. Assume

\[J = 600,000\text{ }kg \cdot m^{2},\ B = 20,000\text{ }N \cdot m \cdot sec. \]

(a) Find the transfer function between the applied torque \(T_{c}\) and the antenna angle \(\theta\).

(b) Suppose the applied torque is computed so \(\theta\) tracks a reference command \(\theta_{r}\) according to the feedback law

\[T_{c} = K\left( \theta_{r} - \theta \right) \]

where \(K\) is the feedback gain. Find the transfer function between \(\theta_{r}\) and \(\theta\).

(c) What is the maximum value of \(K\) that can be used if you wish to have an overshoot \(M_{p} < 10\%\) ?

(d) What values of \(K\) will provide a rise time of less than \(80sec\) ? (Ignore the \(M_{p}\) constraint.)
(e) Use Matlab to plot the step response of the antenna system for \(K =\) 200, 400, 1000, and 2000. Find the overshoot and rise time of the four step responses by examining your plots. Do the plots confirm your calculations in parts (c) and (d)?

3.36 Show that the second-order system

\[\overset{¨}{y} + 2\zeta\omega_{n}\overset{˙}{y} + \omega_{n}^{2}y = 0,\ y(0) = y_{o},\ \overset{˙}{y}(0) = 0 \]

has the initial condition response

\[y(t) = y_{o}\frac{e^{- \sigma t}}{\sqrt{1 - \zeta^{2}}}sin\left( \omega_{d}t + \cos^{- 1}\zeta \right) \]

Prove that, for the underdamped case \((\zeta < 1)\), the response oscillations decay at a predictable rate (see Fig. 3.62) called the logarithmic decrement, \(\delta\).

\[\begin{matrix} \delta & \ = ln\frac{y_{o}}{y_{1}} = lne^{\sigma\tau_{d}} = \sigma\tau_{d} = \frac{2\pi\zeta}{\sqrt{1 - \zeta^{2}}} \\ & \ \cong \frac{\Delta y_{1}}{y_{1}} \cong \frac{\Delta y_{i}}{y_{i}}, \end{matrix}\]

where

\[\tau_{d} = \frac{2\pi}{\omega_{d}} = \frac{2\pi}{\omega_{n}\sqrt{1 - \zeta^{2}}} \]

is the damped natural period of vibration. The damping coefficient in terms of the logarithmic decrement is then

\[\zeta = \frac{\delta}{\sqrt{4\pi^{2} + \delta^{2}}} \]

129. Problems for Section 3.5: Effects of Zeros and Additional Poles

3.37 In aircraft control systems, an ideal pitch response \(\left( q_{o} \right)\) versus a pitch command \(\left( q_{c} \right)\) is described by the transfer function

\[\frac{Q_{o}(s)}{Q_{c}(s)} = \frac{\tau\omega_{n}^{2}\left( s + \frac{1}{\tau} \right)}{s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2}} \]

Figure 3.62

Definition of logarithmic decrement

The actual aircraft response is more complicated than this ideal transfer function; nevertheless, the ideal model is used as a guide for autopilot design. Assume \(t_{r}\) is the desired rise time and

\[\begin{matrix} \omega_{n} & \ = \frac{1.789}{t_{r}} \\ \frac{1}{\tau} & \ = \frac{2}{t_{r}} \\ \zeta & \ = 0.89. \end{matrix}\]

Show that this ideal response possesses fast transient response with minimal overshoot by plotting the step response for \(t_{r} = 1.0\) and \(1.5sec\).

3.38 Approximate each of the following transfer functions with a second-order transfer function.

\[\begin{matrix} & G_{1}(s) = \frac{(0.6s + 1)(0.35s + 1)}{(0.38s + 1)(0.55s + 1)\left( s^{2} + 1.1s + 1 \right)}, \\ & G_{2}(s) = \frac{(0.6s + 1)(0.35s + 1)}{(0.38s + 1)(0.55s + 1)\left( s^{2} + 0.2s + 1 \right)}, \\ & G_{3}(s) = \frac{(0.6s + 1)( - 0.35s + 1)}{(0.08s + 1)(0.55s + 1)\left( s^{2} + 1.1s + 1 \right)}, \\ & G_{4}(s) = \frac{(0.6s + 1)(0.35s + 1)}{(0.08s + 1)(0.55s + 1)\left( s^{2} + 1.1s + 1 \right)}, \\ & G_{5}(s) = \frac{(0.01s + 1)(0.35s + 1)}{(0.38s + 1)(0.55s + 1)\left( s^{2} + 1.1s + 1 \right)}. \end{matrix}\]

3.39 A system has the closed-loop transfer function

\[\frac{Y(s)}{R(s)} = T(s) = \frac{1500(s + 30)}{(s + 1.5)(s + 16)(s + 31)\left( s^{2} + 10s + 50 \right)} \]

where \(R\) is a step of size 5.

(a) Give an expression for the form of the output time history as a sum of terms showing the shape of each component of the response.

(b) Give an estimate of the settling time of this step response.

3.40 Consider the system shown in Fig. 3.63, where

\[G(s) = \frac{1}{s(s + 3)}\ \text{~}\text{and}\text{~}\ D_{c}(s) = \frac{K(s + z)}{s + p} \]

Figure 3.63

Unity feedback system for Problem 3.40
Find \(K\), \(z\), and \(p\) so the closed-loop system has a \(10\%\) overshoot to a step input and a settling time of \(1.5sec\) (\(1\%\) criterion).

3.41 Sketch the step response of a system with the transfer function

\[G(s) = \frac{s/2 + 1}{(s/40 + 1)\left\lbrack (s/4)^{2} + s/4 + 1 \right\rbrack}. \]

Justify your answer on the basis of the locations of the poles and zeros. (Do not find the inverse Laplace transform.) Then compare your answer with the step response computed using Matlab.

3.42 A closed-loop transfer function is given:

\[H(s) = \frac{\left\lbrack \left( \frac{s}{10} \right)^{2} + 0.1\left( \frac{s}{10} \right) + 1 \right\rbrack\left\lbrack \frac{s}{2} + 1 \right\rbrack\left\lbrack \frac{s}{0.1} + 1 \right\rbrack}{\left\lbrack \left( \frac{s}{4} \right)^{2} + \left( \frac{s}{4} \right) + 1 \right\rbrack\left\lbrack \left( \frac{s}{10} \right)^{2} + 0.09\left( \frac{s}{10} \right) + 1 \right\rbrack\left\lbrack \frac{s}{0.02} + 1 \right\rbrack}. \]

Estimate the percent overshoot, \(M_{p}\), and the transient settling time, \(t_{s}\), for this system.

3.43 A transfer function, \(G(s)\), is given:

\[G(s) = \frac{\left\lbrack \left( \frac{s}{100} \right)^{2} + 0.01\left( \frac{s}{100} \right) + 1 \right\rbrack}{\left\lbrack \left( \frac{s}{10} \right)^{2} + \left( \frac{s}{10} \right) + 1 \right\rbrack\left\lbrack \frac{s}{5} + 1 \right\rbrack\left\lbrack \left( \frac{s}{100} \right)^{2} + 0.1\left( \frac{s}{100} \right) + 1 \right\rbrack} \]

If a step input is applied to this plant, what do you estimate the rise-time, settling time, and overshoot to be? Give a brief statement of your reasons in each case.

3.44 Three closed-loop transfer functions are given below.

\[\begin{matrix} & \frac{Y(s)}{R(s)} = T_{1}(s) = \frac{2.7}{s^{2} + 1.64s + 2.7}, \\ & \frac{Y(s)}{R(s)} = T_{2}(s) = \frac{2(s + 1.5)}{1.11\left( s^{2} + 1.64s + 2.7 \right)} \\ & \frac{Y(s)}{R(s)} = T_{3}(s) = \frac{4.1}{(s + 1.5)\left( s^{2} + 1.64s + 2.7 \right)} \end{matrix}\]

In each case, provide estimates of the rise-time, settling time, and percent overshoot to a unit step input in \(r\).

3.45 Six transfer functions with unity DC gain are given below.

(a) Which transfer function(s) will meet an overshoot specification of \(M_{p} \leq 17\%\) ?

(b) Which transfer function(s) will meet a rise time specification of \(t_{r} \leq\) \(0.3sec\) ?

(c) Which transfer function(s) will meet a settling time specification of \(t_{s} \leq 1.3sec\) ?

\[\begin{matrix} & G_{1}(s) = \frac{53.5}{\left( s^{2} + 7.31s + 53.5 \right)}, \\ & G_{2}(s) = \frac{313}{(s + 5.85)\left( s^{2} + 7.31s + 53.5 \right)}, \\ & G_{3}(s) = \frac{313}{0.5319(s + 11)\left( s^{2} + 7.31s + 53.5 \right)}, \\ & G_{4}(s) = \frac{5.9(s + 9.1)}{\left( s^{2} + 7.31s + 53.5 \right)}, \\ & G_{5}(s) = \frac{9.8\left( s^{2} + 8s + 60 \right)}{(s + 11)\left( s^{2} + 7.31s + 53.5 \right)} \\ & G_{6}(s) = \frac{1.78\left( s^{2} + 8s + 60 \right)}{(s + 2)\left( s^{2} + 7.31s + 53.5 \right)} \end{matrix}\]

3.46 Consider the following two nonminimum-phase systems:

\[\begin{matrix} & G_{1}(s) = - \frac{2(s - 1)}{(s + 1)(s + 2)}, \\ & G_{2}(s) = \frac{3(s - 1)(s - 2)}{(s + 1)(s + 2)(s + 3)}. \end{matrix}\]

(a) Sketch the unit-step responses for \(G_{1}(s)\) and \(G_{2}(s)\), paying close attention to the transient part of the response.

(b) Explain the difference in the behavior of the two responses as it relates to the zero locations.

(c) Consider a stable, strictly proper system (that is, \(m\) zeros and \(n\) poles, where \(m < n)\). Let \(y(t)\) denote the step response of the system. The step response is said to have an undershoot if it initially starts off in the "wrong" direction. Prove that a stable, strictly proper system has an undershoot if and only if its transfer function has an odd number of real RHP zeros.

3.47 Find the relationships for the impulse response and the step response corresponding to Eq. (3.65) for the cases where

(a) the roots are repeated.

(b) the roots are both real. Express your answers in terms of hyperbolic functions (sinh, cosh) to best show the properties of the system response.

(c) the value of the damping coefficient, \(\zeta\), is negative.

3.48 Consider the following second-order system with an extra pole:

\[H(s) = \frac{\omega_{n}^{2}p}{(s + p)\left( s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2} \right)} \]

Show the unit-step response is

\[y(t) = 1 + Ae^{- pt} + Be^{- \sigma t}sin\left( \omega_{d}t - \theta \right) \]

where

\[\begin{matrix} A & \ = \frac{- \omega_{n}^{2}}{\omega_{n}^{2} - 2\zeta\omega_{n}p + p^{2}} \\ B & \ = \frac{p}{\sqrt{\left( p^{2} - 2\zeta\omega_{n}p + \omega_{n}^{2} \right)\left( 1 - \zeta^{2} \right)}} \\ \theta & \ = \tan^{- 1}\left( \frac{\sqrt{1 - \zeta^{2}}}{- \zeta} \right) + \tan^{- 1}\left( \frac{\omega_{n}\sqrt{1 - \zeta^{2}}}{p - \zeta\omega_{n}} \right). \end{matrix}\]

(a) Which term dominates \(y(t)\) as \(p\) gets large?

(b) Give approximate values for \(A\) and \(B\) for small values of \(p\).

(c) Which term dominates as \(p\) gets small? (Small with respect to what?)

(d) Using the preceding explicit expression for \(y(t)\) or the step command in Matlab, and assuming \(\omega_{n} = 1\) and \(\zeta = 0.7\), plot the step response of the preceding system for several values of \(p\) ranging from very small to very large. At what point does the extra pole cease to have much effect on the system response?

3.49 Consider the second-order unity DC gain system with an extra zero:

\[H(s) = \frac{\omega_{n}^{2}(s + z)}{z\left( s^{2} + 2\zeta\omega_{n}s + \omega_{n}^{2} \right)} \]

(a) Show that the unit-step response for the system is given by

\[y(t) = 1 - \frac{\sqrt{1 + \frac{\omega_{n}^{2}}{z^{2}} - \frac{2\zeta\omega_{n}}{z}}}{\sqrt{1 - \zeta^{2}}}e^{- \sigma t}cos\left( \omega_{d}t + \beta_{1} \right) \]

where

\[\beta_{1} = \tan^{- 1}\left( \frac{- \zeta + \frac{\omega_{n}}{z}}{\sqrt{1 - \zeta^{2}}} \right) \]

(b) Derive an expression for the step response overshoot, \(M_{p}\), of this system.

(c) For a given value of overshoot, \(M_{p}\), how do we solve for \(\zeta\) and \(\omega_{n}\) ?

3.50 The block diagram of an autopilot designed to maintain the pitch attitude \(\theta\) of an aircraft is shown in Fig. 3.64. The transfer function relating the elevator angle \(\delta_{e}\) and the pitch attitude \(\theta\) is

\[\frac{\Theta(s)}{\delta_{e}(s)} = G(s) = \frac{50(s + 1)(s + 2)}{\left( s^{2} + 5s + 40 \right)\left( s^{2} + 0.03s + 0.06 \right)} \]

where \(\theta\) is the pitch attitude in degrees and \(\delta_{e}\) is the elevator angle in degrees. The autopilot controller uses the pitch attitude error \(e\) to adjust the elevator according to the transfer function

\[\frac{\delta_{e}(s)}{E(s)} = D_{c}(s) = \frac{K(s + 3)}{s + 10} \]

Figure 3.64

Block diagram of autopilot for

Problem 3.50

Using Matlab, find a value of \(K\) that will provide an overshoot of less than \(10\%\) and a rise time faster than \(0.5sec\) for a unit-step change in \(\theta_{r}\). After examining the step response of the system for various values of \(K\), comment on the difficulty associated with meeting rise time and overshoot specifications for complicated systems.

Figure 3.65

Time to double

130. Problems for Section 3.6: Stability

3.51 A measure of the degree of instability in an unstable aircraft response is the amount of time it takes for the amplitude of the time response to double (see Fig. 3.65), given some nonzero initial condition.

(a) For a first-order system, show that the time to double is

\[\tau_{2} = \frac{ln2}{p} \]

where \(p\) is the pole location in the RHP.

(b) For a second-order system (with two complex poles in the RHP), show that

\[\tau_{2} = \frac{ln2}{- \zeta\omega_{n}} \]

3.52 Suppose that unity feedback is to be applied around the listed open-loop systems. Use Routh's stability criterion to determine whether the resulting closed-loop systems will be stable.

(a) \(K(s)G(s) = \frac{5(s + 5)}{(s + 1)\left( s^{3} + 2s + 5 \right)}\)

(b) \(K(s)G(s) = \frac{0.2\left( s^{2} + 0.95s + 0.11 \right)}{s\left( s^{2} + 0.36s + 0.72 \right)}\)

(c) \(K(s)G(s) = \frac{\left( s^{3} + 15.5s^{2} + 12.2s + 100 \right)}{(s + 1)^{2}\left( 47.7s^{3} + 23.4s^{2} + 20.3s + 1 \right)}\)

3.53 Use Routh's stability criterion to determine how many roots with positive real parts the following equations have:

(a) \(s^{4} + 5.2s^{3} + 18.9s^{2} + 43.2s + 45.4\)

Figure 3.66

Magnetic levitation system for Problem 3.56

(b) \(s^{5} + 0.102s^{4} + 1.123s^{3} + 0.686s^{2} + 0.154s + 2\)

(c) \(s^{4} + 152s^{3} + 12s^{2} - 1932s - 4921\)

(d) \(99s^{3} - s^{2} - 6s - 7\)

(e) \(s^{4} + 8s^{2} + 36\)

3.54 Find the range of \(K\) for which all the roots of the following polynomial are in the LHP:

\[s^{5} + 5s^{4} + 10s^{3} + 10s^{2} + 5s + K = 0. \]

Use Matlab to verify your answer by plotting the roots of the polynomial in the \(s\)-plane for various values of \(K\).

3.55 The transfer function of a typical tape-drive system is given by

\[KG(s) = \frac{K(s + 6)}{s\left\lbrack (s + 0.7)(s + 1.2)\left( s^{2} + 0.8s + 6 \right) \right\rbrack} \]

where time is measured in milliseconds. Using Routh's stability criterion, determine the range of \(K\) for which this system is stable when the characteristic equation is \(1 + KG(s) = 0\).

3.56 Consider the closed-loop magnetic levitation system shown in Fig. 3.66. Determine the conditions on the system parameters \(\left( a,K,z,p,K_{\circ} \right)\) to guarantee closed-loop system stability.

3.57 Consider the system shown in Fig. 3.67.

(a) Compute the closed-loop characteristic equation.

(b) For what values of \((T,A)\) is the system stable? Hint: An approximate answer may be found using

\[e^{- Ts} \cong 1 - Ts \]

or

\[e^{- Ts} \cong \frac{1 - \frac{T}{2}s}{1 + \frac{T}{2}s} \]

for the pure delay. As an alternative, you could use the computer and Matlab (Simulink) to simulate the system or to find the roots of the system's characteristic equation for various values of \(T\) and \(A\).

Figure 3.67

Control system for

Problem 3.57

3.58 Modify the Routh criterion so that it applies to the case in which all the poles are to be to the left of \(-\alpha\) when \(\alpha > 0\). Apply the modified test to the polynomial

\[s^{3} + (0.5 + K)s^{2} + (1 + 2K)s + 3K = 0 \]

finding those values of \(K\) for which all poles have a real part less than \(-0.7\).

3.59 Suppose the characteristic polynomial of a given closed-loop system is computed to be

\(s^{4} + \left( 11 + K_{2} \right)s^{3} + \left( 121 + K_{1} \right)s^{2} + \left( K_{1} + K_{1}K_{2} + 110K_{2} + 210 \right)s + 11K_{1} + 100 = 0\).

Find constraints on the two gains \(K_{1}\) and \(K_{2}\) that guarantee a stable closed-loop system, and plot the allowable region(s) in the \(\left( K_{1},K_{2} \right)\) plane. You may wish to use the computer to help solve this problem.

3.60 Overhead electric power lines sometimes experience a low-frequency, high-amplitude vertical oscillation, or gallop, during winter storms when the line conductors become covered with ice. In the presence of wind, this ice can assume aerodynamic lift and drag forces that result in a gallop up to several meters in amplitude. Large-amplitude gallop can cause clashing conductors and structural damage to the line support structures due to the large dynamic loads. These effects in turn can lead to power outages. Assume the line conductor is a rigid rod, constrained to vertical motion only, and suspended by springs and dampers as shown in Fig. 3.68. A simple model of this conductor galloping is

\[m\overset{¨}{y} + \frac{D(\alpha)\overset{˙}{y} - L(\alpha)v}{\left( {\overset{˙}{y}}^{2} + v^{2} \right)^{1/2}} + T\left( \frac{n\pi}{\mathcal{l}} \right)y = 0 \]

where

\[\begin{matrix} m & \ = \text{~}\text{mass of conductor,}\text{~} \\ y & \ = \text{~}\text{conductor's vertical displacement,}\text{~} \\ D & \ = \text{~}\text{aerodynamic drag force,}\text{~} \\ L & \ = \text{~}\text{aerodynamic lift force,}\text{~} \\ v & \ = \text{~}\text{wind velocity,}\text{~} \\ \alpha & \ = \text{~}\text{aerodynamic angle of attack}\text{~} = - \tan^{- 1}(\overset{˙}{y}/v), \end{matrix}\]

Figure 3.68

Electric power-line conductor


\[\begin{matrix} T & \ = \text{~}\text{conductor tension}\text{~} \\ n & \ = \text{~}\text{number of harmonic frequencies,}\text{~} \\ \mathcal{l} & \ = \text{~}\text{length of conductor.}\text{~} \end{matrix}\]

Assume \(L(0) = 0\) and \(D(0) = D_{0}\) (a constant), and linearize the equation around the value \(y = \overset{˙}{y} = 0\). Use Routh's stability criterion to show that galloping can occur whenever

\[\frac{\partial L}{\partial\alpha} + D_{0} < 0 \]

131. A First Analysis of Feedback

132. A Perspective on the Analysis of Feedback

In the next three chapters, we will introduce three techniques for the design of controllers. Before doing so, it is useful to develop the assumptions to be used and to derive the equations that are common to each of the design approaches we will describe. As a general observation, the dynamics of systems to which control is applied are nonlinear and very complex. However, in this initial analysis, we assume that both the plant to be controlled and the controller can be represented as dynamic systems that are linear and time invariant (LTI). We also assume, for the most part, that they have only single inputs and single outputs, and may thus be represented by simple scalar transfer functions. As we mentioned in Chapter 1, our basic concerns for control are stability, tracking, regulation, and sensitivity. The goal of the analysis in this chapter is to revisit each of these requirements in a linear dynamic setting, to develop equations that will expose the constraints placed on the controller, and to suggest elementary objectives for the controllers.

The two fundamental structures for realizing controls are the open-loop structure, as shown in Fig. 4.1, and the closed-loop structure, also known as feedback control, as shown in Fig. 4.2. The definition of open-loop control is that there is no closed signal path whereby the output influences the control effort. In the structure shown in Fig. 4.1, the controller transfer function modifies the
reference input signal before it is applied to the plant. This controller might cancel the unwanted dynamics of the plant and replace them with the more desirable dynamics of the controller. In other cases, open-loop control actions are taken on the plant as the environment changes, actions that are calibrated to give a good response but are not dependent on measuring the actual response. An example of this would be an aircraft autopilot whose parameters are changed with altitude or speed but not by feedback of the craft's motion. Feedback control, on the other hand, uses a sensor to measure the output and by feedback indirectly modifies the dynamics of the system. Although it is possible that feedback may cause an otherwise stable system to become unstable (a vicious circle), feedback gives the designer more flexibility and a preferable response to each of our objectives when compared to open-loop control.

133. Chapter Overview

The chapter begins with consideration of the basic equations of a simple open-loop structure and of an elementary feedback structure. In Section 4.1, the equations for the two structures are presented in general form and compared in turn with respect to stability, tracking, regulation, and sensitivity. In Section 4.2, the steady-state errors in response to polynomial inputs will be analyzed in more detail. As part of the language of steady-state performance, control systems are assigned a type number according to the maximum degree of the input polynomial for which the steady-state error is a finite constant. For each type, an appropriate error constant is defined, which allows the designer to easily compute the size of this error.

Although Maxwell and Routh developed a mathematical basis for assuring stability of a feedback system, design of controllers from the earliest days was largely trial and error based on experience. From this tradition, there emerged an almost universal controller, the proportional-integral-derivative (PID) structure considered in Section 4.3. This device has three elements: a Proportional term to close the feedback loop, an Integral term to assure zero error to constant reference and disturbance inputs, and a Derivative term to improve (or realize!) stability and good dynamic response. In this section, these terms will be considered and their respective effects illustrated. As part of the evolution of the PID controller design, a major step was the development of a simple procedure for selecting the three parameters, a process called "tuning the controller." Ziegler and Nichols developed and published a set of experiments to be run, characteristics to be measured, and tuning values to be recommended as a result. These procedures are discussed in this section. The concept of feedforward control by plant model inversion will be discussed in Section 4.4. In the optional Section 4.5, a brief introduction to
the increasingly common digital implementation of controllers will be given. Sensitivity of time response to parameter changes will be discussed in Section 4.6. Finally, Section 4.7 will provide the historical perspective for the material in this chapter.

133.1. The Basic Equations of Control

We begin by collecting a set of equations and transfer functions that will be used throughout the rest of the text. For the open-loop system of Fig. 4.1, if we take the disturbance to be at the input of the plant, the output is given by

\[Y_{ol} = GD_{ol}R + GW \]

and the error, the difference between reference input and system output, is given by

\[\begin{matrix} E_{ol} & \ = R - Y_{ol}, \\ & \ = R - \left\lbrack GD_{ol}R + GW \right\rbrack, \\ & \ = \left\lbrack 1 - GD_{ol} \right\rbrack R - GW. \end{matrix}\]

The open-loop transfer function in this case is \(T_{ol}(s) = G(s)D_{ol}(s)\).

For feedback control, Fig. 4.2 gives the basic unity feedback structure of interest. There are three external inputs: the reference, \(R\), which the output is expected to track; the plant disturbance, \(W\), which the control is expected to counteract so it does not disturb the output; and the sensor noise, \(V\), which the controller is supposed to ignore.

For the feedback block diagram of Fig. 4.2, the equations for the output and the control are given by the superposition of the responses to the three inputs individually, as follows:

\[\begin{matrix} Y_{cl} & \ = \frac{GD_{cl}}{1 + GD_{cl}}R + \frac{G}{1 + GD_{cl}}W - \frac{GD_{cl}}{1 + GD_{cl}}V, \\ U & \ = \frac{D_{cl}}{1 + GD_{cl}}R - \frac{GD_{cl}}{1 + GD_{cl}}W - \frac{D_{cl}}{1 + GD_{cl}}V. \end{matrix}\]

Figure 4.1

Open-loop system showing reference, \(R\), control, \(U\), disturbance, \(W\), and output \(Y\)

Figure 4.2

Closed-loop system showing the reference, \(R\), control, \(U\), disturbance, \(W\), output, \(Y\), and sensor noise, \(V\)

Perhaps more important than these is the equation for the error, \(E_{cl} = R - Y_{cl}\).

\[\begin{matrix} E_{cl} & \ = R - \left\lbrack \frac{GD_{cl}}{1 + GD_{cl}}R + \frac{G}{1 + GD_{cl}}W - \frac{GD_{cl}}{1 + GD_{cl}}V \right\rbrack \\ & \ = \frac{1}{1 + GD_{cl}}R - \frac{G}{1 + GD_{cl}}W + \frac{GD_{cl}}{1 + GD_{cl}}V \end{matrix}\]

We can rewrite Eqs. (4.5), (4.6) and (4.8) in a nice compact form:

\[\begin{matrix} Y_{cl} & \ = \mathcal{T}R + G\mathcal{S}W - \mathcal{T}V \\ U & \ = D_{cl}\mathcal{S}R - \mathcal{T}W - D_{cl}\mathcal{S}V \\ E_{cl} & \ = \mathcal{S}R - G\mathcal{S}W + \mathcal{T}V \end{matrix}\]

where we define the two transfer functions

\[\mathcal{S} = \frac{1}{1 + GD_{cl}} \]

and

\[\mathcal{T} = \frac{GD_{cl}}{1 + GD_{cl}} \]

In this case, the closed-loop transfer function is \(T_{cl} = \mathcal{T} = \frac{GD_{cl}}{1 + GD_{cl}}\). The significance of these two transfer functions will become apparent later in this section.
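As a numerical illustration of the compact form above, the following Matlab sketch (Control System Toolbox assumed) forms \(\mathcal{S}\) and \(\mathcal{T}\) for an illustrative plant and controller; \(G(s) = 1/(s(s+1))\) and \(D_{cl}(s) = 5\) are assumptions chosen only for this example:

```matlab
% Build S and T for an assumed plant/controller pair and inspect the
% closed-loop responses Y = T*R + G*S*W - T*V.
G   = tf(1, [1 1 0]);         % assumed plant, G(s) = 1/(s(s+1))
Dcl = tf(5, 1);               % assumed proportional controller, Dcl(s) = 5

L = G*Dcl;                    % loop transfer function G*Dcl
S = feedback(1, L);           % sensitivity,   S = 1/(1 + G*Dcl)
T = feedback(L, 1);           % complementary, T = G*Dcl/(1 + G*Dcl)

step(T, G*S);                 % tracking response vs. disturbance response
dcgain(T)                     % = 1: constant references are tracked
dcgain(G*S)                   % = 0.2: a constant w still leaves an offset
```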

With these equations, we will explore the four basic objectives of stability, tracking, regulation, and sensitivity for both the open-loop and the closed-loop cases.

133.1.1. Stability

As we discussed in Chapter 3, the requirement for stability is simply stated: All poles of the transfer function must be in the left half-plane (LHP). In the open-loop case described by Eq. (4.1), these are the poles of \(GD_{ol}\). To see the restrictions this requirement places on the controller, we define the polynomials \(a(s),b(s),c(s)\), and \(d(s)\) so \(G(s) = \frac{b(s)}{a(s)}\) and \(D_{ol}(s) = \frac{c(s)}{d(s)}\). Therefore \(GD_{ol} = \frac{bc}{ad}\). With these definitions, the stability requirement is that neither \(a(s)\) nor \(d(s)\) may have roots in the right half-plane (RHP). A naive engineer might believe that if the plant is unstable, with \(a(s)\) having a root in the RHP, the system might be made stable by canceling this pole with a zero of \(c(s)\). However, the unstable pole remains and the slightest noise or disturbance will cause the output to grow until stopped by saturation or system failure. Likewise, if the plant shows poor response because of a zero of \(b(s)\) in the RHP, an attempt to fix this by cancellation using a root of \(d(s)\) will similarly result in an unstable system. We conclude that an open-loop structure cannot be used to make an unstable plant stable, and therefore cannot be used if the plant is already unstable.

For the feedback system, from Eq. (4.8), the system poles are the roots of \(1 + GD_{cl} = 0\). Again using the polynomials defined above, the system characteristic equation is

\[\begin{matrix} 1 + GD_{cl} & \ = 0, \\ 1 + \frac{b(s)c(s)}{a(s)d(s)} & \ = 0, \\ a(s)d(s) + b(s)c(s) & \ = 0. \end{matrix}\]

From this equation, it is clear that the feedback case grants considerably more freedom to the controller design than does the open-loop case. However, one must still avoid unstable cancellations. For example, if the plant is unstable and therefore \(a(s)\) has a root in the RHP, we might try to cancel this pole by putting a zero of \(c(s)\) at the same place. Unfortunately, Eq. (4.16) shows that, as a result, the unstable pole remains a pole of the system and this method will not work. On the other hand, unlike the open-loop case, having a root of \(a(s)\) in the RHP does NOT prevent the design of a feedback controller that will make the system stable. For example, in Chapter 2, we derived the transfer function for the inverted pendulum, which, for simple values, might be \(G(s) = \frac{1}{s^{2} - 1}\), for which we have \(b(s) = 1\) and \(a(s) = s^{2} - 1 = (s + 1)(s - 1)\). Suppose we try \(D_{cl}(s) = \frac{K(s + \gamma)}{s + \delta}\). The characteristic equation that results for the system is

\[(s + 1)(s - 1)(s + \delta) + K(s + \gamma) = 0 \]

This is the problem that Maxwell faced in his study of governors: Under what conditions on the parameters will all the roots of this equation be in the LHP? The problem was solved by Routh. In our case, a simple solution is to take \(\gamma = 1\), so the common stable factor \((s + 1)\) cancels. Note the cancellation is fine in this case, because \((s + 1)\) corresponds to a stable pole. The resulting second-order equation can be easily solved to place the remaining two poles at any point desired.
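The cancellation argument is easy to check numerically. In the Matlab sketch below, \(K = 5\) and \(\delta = 3\) are an illustrative choice (not from the text) that places the two remaining poles at \(-1 \pm j\):

```matlab
% Verify that Dcl(s) = K(s + 1)/(s + delta) stabilizes G(s) = 1/(s^2 - 1).
% With gamma = 1, the characteristic equation factors as
% (s + 1)[(s - 1)(s + delta) + K]; K = 5, delta = 3 chosen for illustration.
K = 5; delta = 3;
a = conv([1 1], [1 -1]);            % a(s) = s^2 - 1
charpoly = conv(a, [1 delta]);      % a(s)(s + delta)
charpoly = charpoly + [0 0 K K];    % + K(s + 1), padded to degree 3
roots(charpoly)                     % expect -1 and -1 +/- j, all in the LHP
```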

Exercise. If we wish to force the characteristic equation to be \(s^{2} +\) \(2\zeta\omega_{n}s + \omega_{n}^{2} = 0\), solve for \(K\) and \(\delta\) in terms of \(\zeta\) and \(\omega_{n}\).

133.1.2. Tracking

The tracking problem is to cause the output to follow the reference input as closely as possible. In the open-loop case, if the plant is stable and has neither poles nor zeros in the RHP, then in principle, the controller can be selected to cancel the transfer function of the plant and substitute whatever desired transfer function the engineer wishes. This apparent freedom, however, comes with three caveats. First, in order to physically build it, the controller transfer function must be proper, meaning that it cannot be given more zeros than it has poles. Second, the engineer must not get greedy and request an unrealistically fast design. This entire analysis has been based on the assumption that the plant is linear, and a demand for a fast response will require large inputs to the plant, inputs that will be sure to saturate the system if the demand is too great. Again, it is the responsibility of the engineer to know the limits of the plant and to set the desired overall transfer function to a reasonable value with this knowledge. Third and finally, although one can, in principle, stably cancel any pole in the LHP, the next section on sensitivity faces up to the fact that the plant transfer function is subject to change, and if one tries to cancel a pole that is barely inside the LHP, there is a good chance of disaster as that pole moves a bit and exposes the system response to unacceptable transients.

Exercise. For a plant having the transfer function \(G(s) = \frac{1}{s^{2} + 3s + 9}\), it is proposed to use, in a unity feedback system, a controller having the transfer function \(D_{cl}(s) = \frac{c_{2}s^{2} + c_{1}s + c_{0}}{s\left( s + d_{1} \right)}\). Solve for the parameters of this controller so the closed loop will have the characteristic equation \((s + 6)(s + 3)\left( s^{2} + 3s + 9 \right) = 0.\ ^{1}\)

{Answer: \(c_{2} = 18,c_{1} = 54,c_{0} = 162,d_{1} = 9\) }.
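The answer is easy to confirm by forming the closed-loop characteristic polynomial \(a(s)d(s) + b(s)c(s)\) directly; a minimal Matlab sketch:

```matlab
% Check c2 = 18, c1 = 54, c0 = 162, d1 = 9 against the target polynomial.
a = [1 3 9];                        % plant denominator, s^2 + 3s + 9
b = 1;                              % plant numerator
c = [18 54 162];                    % controller numerator
d = [1 9 0];                        % controller denominator, s(s + 9)
achieved = conv(a, d) + [0 0 conv(b, c)];       % pad b*c to degree 4
target   = conv(conv([1 6], [1 3]), [1 3 9]);   % (s+6)(s+3)(s^2+3s+9)
isequal(achieved, target)           % returns true
```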

Exercise. Show that if the reference input to the system of the above exercise is a step of amplitude \(A\), the steady-state error will be zero.

133.1.3. Regulation

The problem of regulation is to keep the error small when the reference is at most a constant setpoint and disturbances are present. A quick look at the open-loop block diagram reveals that the controller has no influence at all on the system response to either of the disturbances, \(w\), or \(v\), so this structure is useless for regulation. We turn to the feedback case. From Eq. (4.8), we find a conflict between \(w\) and \(v\) in the search for a good controller. For example, the term giving the contribution of the plant disturbance to the system error is \(\frac{G}{1 + GD_{cl}}W\). To select \(D_{cl}\) to make this term small, we should make \(D_{cl}\) as large as possible and infinite if that is feasible. On the other hand, the error term for the sensor noise is \(\frac{GD_{cl}}{1 + GD_{cl}}V\). In this case, unfortunately, if we select \(D_{cl}\) to be large, the transfer function tends to unity and the sensor noise is not reduced at all! What are we to do? The resolution of the dilemma is to observe that each of these terms is a function of frequency so one of them can be large for some frequencies and small for others. With this in mind, we also note the frequency content of most plant disturbances occurs at very low frequencies and, in fact, the most common case is a bias, which is all at zero frequency! On the other hand, a good sensor will have no bias and can be constructed to have very little noise over the entire range of low frequencies of interest. Thus, using this information, we design the controller transfer function to be large at the low frequencies, where it will reduce the effect of \(w\), and we make it small at the higher frequencies, where it will reduce the effects of the high frequency sensor noise. The control engineer must determine in each case the best place on the frequency scale to make the crossover from amplification to attenuation.

Exercise. Show that if \(w\) is a constant bias and if \(D_{cl}\) has a pole at \(s = 0\), then the error due to this bias will be zero. However, show that if \(G\) has a pole at zero, the error due to this bias will not be zero.
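A numerical illustration of this exercise (not the requested proof) can be built from the error term \(-\frac{G}{1 + GD_{cl}}W\); the plants and gains in the sketch below are assumptions chosen only to contrast the two cases:

```matlab
% A constant bias w is rejected when the integrator is in Dcl, but not
% when the integrator sits in G instead. Control System Toolbox assumed.
G1 = tf(1, [1 1]);    D1 = tf([1 1], [1 0]);   % integrator in the controller
G2 = tf(1, [1 1 0]);  D2 = tf(2, 1);           % integrator in the plant

Ew1 = -feedback(G1, D1);   % error due to w: -G/(1 + G*Dcl)
Ew2 = -feedback(G2, D2);
dcgain(Ew1)                % = 0: the bias is fully rejected
dcgain(Ew2)                % = -0.5: the plant integrator does not help
```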

133.1.4. Sensitivity

Suppose a plant is designed with gain \(G\) at a particular frequency, but in operation it changes to be \(G + \delta G\). This represents a fractional or percent change of gain of \(\delta G/G\). For the purposes of this analysis, we set the frequency at zero and take the open-loop controller gain to be fixed at \(D_{ol}(0)\). In the open-loop case, the nominal overall gain is thus \(T_{ol} =\) \(GD_{ol}\), and with the perturbed plant gain, the overall gain would be

\[T_{ol} + \delta T_{ol} = D_{ol}(G + \delta G) = D_{ol}G + D_{ol}\delta G = T_{ol} + D_{ol}\delta G \]

Therefore, the gain change is \(\delta T_{ol} = D_{ol}\delta G\). The sensitivity, \(\mathcal{S}_{G}^{T}\), of a transfer function, \(T_{ol}\), to a plant gain, \(G\), is defined to be the ratio of the fractional change in \(T_{ol}\), namely \(\frac{\delta T_{ol}}{T_{ol}}\), to the fractional change in \(G\). In equation form,

\[\begin{matrix} \mathcal{S}_{G}^{T} & \ = \frac{\frac{\delta T_{ol}}{T_{ol}}}{\frac{\delta G}{G}} \\ & \ = \frac{G}{T_{ol}}\frac{\delta T_{ol}}{\delta G} \end{matrix}\]

Substituting the values, we find that

\[\frac{\delta T_{ol}}{T_{ol}} = \frac{D_{ol}\delta G}{D_{ol}G} = \frac{\delta G}{G} \]

This means that a \(10\%\) error in \(G\) would yield a \(10\%\) error in \(T_{ol}\). In the open-loop case, therefore, we have computed that \(\mathcal{S}_{G}^{T_{ol}} = 1\).

From Eq. (4.5), the same change in \(G\) in the feedback case yields the new steady-state feedback gain as

\[T_{cl} + \delta T_{cl} = \frac{(G + \delta G)D_{cl}}{1 + (G + \delta G)D_{cl}} \]

where \(T_{cl}\) is the closed-loop gain. We can compute the sensitivity of this closed-loop gain directly using differential calculus. The closed-loop steady-state gain is

\[T_{cl} = \frac{GD_{cl}}{1 + GD_{cl}} \]

The first-order variation is proportional to the derivative and is given by

\[\delta T_{cl} = \frac{dT_{cl}}{dG}\delta G \]

The general expression for sensitivity from Eq. (4.18) is given by

\[\begin{matrix} & \mathcal{S}_{G}^{T_{cl}} \triangleq \text{~}\text{sensitivity of}\text{~}T_{cl}\text{~}\text{with respect to}\text{~}G, \\ & \mathcal{S}_{G}^{T_{cl}} \triangleq \frac{G}{T_{cl}}\frac{dT_{cl}}{dG} \end{matrix}\]

so

\[\begin{matrix} \mathcal{S}_{G}^{T_{cl}} & \ = \frac{G}{GD_{cl}/\left( 1 + GD_{cl} \right)}\frac{\left( 1 + GD_{cl} \right)D_{cl} - D_{cl}\left( GD_{cl} \right)}{\left( 1 + GD_{cl} \right)^{2}} \\ & \ = \frac{1}{1 + GD_{cl}} \end{matrix}\]

This result exhibits a major advantage of feedback\(^{2}\):

In feedback control, the error in the overall transfer function gain is less sensitive to variations in the plant gain by a factor of \(\mathcal{S} = \frac{1}{1 + GD_{cl}}\) compared to errors in open-loop control gain.

If the gain is such that \(1 + GD_{cl} = 100\), a \(10\%\) change in plant gain \(G\) will cause only a \(0.1\%\) change in the steady-state gain. The open-loop controller is 100 times more sensitive to gain changes than the closed-loop system with a loop gain of 100. The example of the unity feedback case is so common that we will refer to the result of Eq. (4.22) simply as the sensitivity, \(\mathcal{S}\), without subscripts or superscripts. Hence, we define the sensitivity function for a feedback system as

\[\mathcal{S} \triangleq \frac{1}{1 + GD_{cl}} \]

Its usefulness will be demonstrated for dynamic feedback controller design in Chapter 6. The complementary sensitivity function (a fancy alternative name for the closed-loop transfer function!) is defined as

\[\mathcal{T} \triangleq \frac{GD_{cl}}{1 + GD_{cl}} = 1 - \mathcal{S} \]

These two transfer functions are very important for feedback control design, and they illustrate the fundamental relationship of feedback systems (that also will be explored further in Chapter 6)

\[\mathcal{S} + \mathcal{T} = 1 \]
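The advertised factor-of-\(\mathcal{S}\) improvement is easy to verify with static gains; in the sketch below, \(G = 11\) and \(D_{cl} = 9\) are assumed simply so \(1 + GD_{cl} = 100\):

```matlab
% With 1 + G*Dcl = 100, a 10% plant-gain error should move the
% closed-loop gain by only about 0.1% (first-order estimate S = 0.01).
G = 11; Dcl = 9;                 % static gains chosen so 1 + G*Dcl = 100
T  = G*Dcl/(1 + G*Dcl);          % nominal closed-loop gain
Gp = 1.1*G;                      % plant gain with a 10% error
Tp = Gp*Dcl/(1 + Gp*Dcl);        % perturbed closed-loop gain
100*(Tp - T)/T                   % about 0.09%, versus 10% in open loop
```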

The results in this section so far have been computed under the assumption of the steady-state error in the presence of constant inputs, either reference or disturbance. Very similar results can be obtained for the steady-state behavior in the presence of a sinusoidal reference or disturbance signal. This is important because there are times when such signals naturally occur as, for example, with a disturbance of \(60\text{ }Hz\) due to power-line interference in an electronic system. The concept is also important because more complex signals can be described as containing sinusoidal components over a band of frequencies and analyzed using superposition of one frequency at a time. For example, it is well known that human hearing is restricted to signals in the frequency range of about 60 to \(15,000\text{ }Hz\). A feedback amplifier and loudspeaker system designed for high-fidelity sound must accurately track any sinusoidal (pure tone) signal in this range. If we take the controller in the feedback system shown in Fig. 4.2 to have the transfer function \(D_{cl}(s)\), and we take the process to have the transfer function \(G(s)\), then the steady-state open-loop gain for a sinusoidal signal of frequency \(\omega_{o}\) will be \(\left| G\left( j\omega_{o} \right)D_{cl}\left( j\omega_{o} \right) \right|\), and the error of the feedback system will be

\[\left| E\left( j\omega_{o} \right) \right| = \left| R\left( j\omega_{o} \right) \right|\left| \frac{1}{1 + G\left( j\omega_{o} \right)D_{cl}\left( j\omega_{o} \right)} \right| \]

Thus, to reduce errors to \(1\%\) of the input at the frequency \(\omega_{o}\), we must make \(\left| 1 + GD_{cl} \right| \geq 100\) or, effectively, \(\left| G\left( j\omega_{o} \right)D_{cl}\left( j\omega_{o} \right) \right| \gtrsim 100\), and a good audio amplifier must have this loop gain over the range \(2\pi \cdot 60 \leq \omega \leq 2\pi \cdot 15,000\). We will revisit this concept in Chapter 6 as part of the design based on frequency-response techniques.
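The band requirement can be checked by sampling \(|\mathcal{S}(j\omega)| = |1/(1 + GD_{cl})|\) over the audio range; in the Matlab sketch below, the plant and controller are placeholders invented for illustration, not a real amplifier model:

```matlab
% Sample the error/reference magnitude over 60 Hz to 15 kHz and check
% that it stays below 0.01 (i.e., |1 + G*Dcl| >= 100 across the band).
s   = tf('s');
G   = 1e7/(s + 10);              % placeholder plant
Dcl = 1;                         % placeholder controller
w   = 2*pi*logspace(log10(60), log10(15000), 200);   % band, in rad/sec
S   = feedback(1, G*Dcl);        % S = 1/(1 + G*Dcl)
Smag = abs(squeeze(freqresp(S, w)));
max(Smag)                        % about 0.0094 here, just inside the spec
```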

4.1.4.1 The Filtered Case

For the case where there is a nonunity prefilter \(F(s)\) acting on the reference input, \(R(s)\), and nonunity sensor dynamics \(H(s)\), the equations for the system output and the various sensitivity functions need to be re-derived. The details are available in Appendix W4.1.4.1 online at www.pearsonglobaleditions.com.

4.2 Control of Steady-State Error to Polynomial Inputs: System Type

In studying the regulator problem, the reference input is taken to be a constant. It is also the case that the most common plant disturbance is a constant bias. Even in the general tracking problem, the reference input is often constant for long periods of time or may be adequately approximated as if it were a polynomial in time, usually one of low degree. For example, when an antenna is tracking the elevation angle to a satellite, the time history as the satellite approaches overhead is an \(S\)-shaped curve as sketched in Fig. 4.3. This signal may be approximated

Figure 4.3

Signal for satellite tracking

by a linear function of time (called a ramp function or velocity input) for a significant time relative to the speed of response of the servomechanism. As another example, the position control of an elevator has a ramp function reference input, which will direct the elevator to move with constant speed until it comes near the next floor. In rare cases, the input can even be approximated over a substantial period as having a constant acceleration. Consideration of these cases leads us to consider steady-state errors in stable systems with polynomial inputs.

As part of the study of steady-state errors to polynomial inputs, a terminology has been developed to express the results. We classify systems by "type" according to the degree of the polynomial that they can reasonably track. For example, a system that can track a polynomial of degree 1 with a constant error is called Type 1. Also, to quantify the tracking error, several "error constants" are defined. In all of the following analysis, it is assumed that the systems are stable, since otherwise a steady-state analysis makes no sense.

4.2.1 System Type for Tracking

In the unity feedback case shown in Fig. 4.2, the system error is given by Eq. (4.8). If we consider tracking the reference input alone and set \(W = V = 0\), then the equation for the error is simply

\[E = \frac{1}{1 + GD_{cl}}R = \mathcal{S}R,\ \text{~}\text{where}\text{~}\ \mathcal{S} = \frac{1}{1 + GD_{cl}} \]

To consider polynomial inputs, we let \(r(t) = \frac{t^{k}}{k!}1(t)\), for which the transform is \(R = \frac{1}{s^{k + 1}}\). We take a mechanical system as the basis for a generic reference nomenclature: step inputs, for which \(k = 0\), are called "position" inputs; ramp inputs, for which \(k = 1\), are called "velocity" inputs; and inputs with \(k = 2\) are called "acceleration" inputs, regardless of the units of the actual signals. Application of the Final Value Theorem to the error formula gives the result

\[\begin{matrix} \lim_{t \rightarrow \infty}\mspace{2mu} e(t) & \ = e_{ss} = \lim_{s \rightarrow 0}\mspace{2mu} sE(s), \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{1}{1 + GD_{cl}}R(s), \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{1}{1 + GD_{cl}}\frac{1}{s^{k + 1}}. \end{matrix}\]

We consider first a system for which \(GD_{cl}\) has no pole at the origin, that is, no integrator, and a unit-step input for which \(R(s) = 1/s\). Thus, \(r(t)\) is a polynomial of degree 0. In this case, Eq. (4.30) reduces to

\[\begin{matrix} e_{ss} & \ = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{1}{1 + GD_{cl}}\frac{1}{s}, \\ \frac{e_{ss}}{r_{ss}} & \ = \frac{e_{ss}}{1} = e_{ss} = \frac{1}{1 + GD_{cl}(0)}, \end{matrix}\]

where \(r_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} r(t) = 1\). We define this system to be Type 0, and we define the constant, \(GD_{cl}(0) \triangleq K_{p}\), as the "position error constant." Notice the above equation yields the relative error, and if the input were a polynomial of degree 1 or higher, the resulting error would grow without bound. A polynomial of degree 0 is the highest degree a system of Type 0 can track at all. If \(GD_{cl}(s)\) has one pole at the origin, we could continue this line of argument and consider first-degree polynomial inputs, but it is quite straightforward to evaluate Eq. (4.30) in a general setting. For this case, it is necessary to describe the behavior of the controller and plant as \(s\) approaches 0. For this purpose, we collect all the terms except the pole(s) at the origin into a function \(GD_{clo}(s)\), which is finite at \(s = 0\), so that we can define the constant \(GD_{clo}(0) = K_{n}\) and write the loop transfer function as

\[GD_{cl}(s) = \frac{GD_{clo}(s)}{s^{n}} \]

For example, if \(GD_{cl}\) has no integrator, then \(n = 0\). If the system has one integrator, then \(n = 1\), and so forth. Substituting this expression into Eq. (4.30),

\[\begin{matrix} e_{ss} & \ = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{1}{1 + \frac{GD_{clo}(s)}{s^{n}}}\frac{1}{s^{k + 1}}, \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu}\frac{s^{n}}{s^{n} + K_{n}}\frac{1}{s^{k}} \end{matrix}\]

From this equation, we can see at once that if \(n > k\), then \(e_{ss} = 0\), and if \(n < k\), then \(e_{ss} \rightarrow \infty\). If \(n = k = 0\), then \(e_{ss} = \frac{1}{1 + K_{0}}\), and if \(n = k \neq 0\), then \(e_{ss} = 1/K_{n}\). As we discussed above, if \(n = k = 0\), the input is a zero-degree polynomial, otherwise known as a step or position input; the constant \(K_{0}\) is called the "position constant" written as \(K_{p}\), and the system is classified as "Type 0." If \(n = k = 1\), the input is a first-degree polynomial, otherwise known as a ramp or velocity input, and the constant \(K_{1}\) is called the "velocity constant" written as \(K_{v}\). This system is classified "Type 1" (read "type one"). In a similar way, systems of Type 2 and higher types may be defined. A clear picture of the situation is given by the plot in Fig. 4.4 for a system of Type 1 having a ramp reference input. The error between input and output of size \(\frac{1}{K_{v}}\) is clearly marked.

Using Eq. (4.33), these results can be summarized by the following equations for the error constants:

\[\begin{matrix} K_{p} = \lim_{s \rightarrow 0}\mspace{2mu} GD_{cl}(s), & n = 0, \\ K_{v} = \lim_{s \rightarrow 0}\mspace{2mu} sGD_{cl}(s), & n = 1, \\ K_{a} = \lim_{s \rightarrow 0}\mspace{2mu} s^{2}GD_{cl}(s), & n = 2. \end{matrix}\]
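These limits can be evaluated numerically; the sketch below does so for an assumed Type 1 loop transfer function, using minreal to cancel the factor of \(s\) before taking the DC gain:

```matlab
% Minimal sketch: error constants for an assumed Type 1 loop (one pole at s = 0).
s    = tf('s');
GDcl = 10*(s + 2)/(s*(s + 5));         % assumed loop transfer function
Kp = dcgain(GDcl)                      % infinite for a Type 1 system
Kv = dcgain(minreal(s*GDcl))           % Eq. (4.37): Kv = 10*2/5 = 4
ess_ramp = 1/Kv                        % steady-state error to a unit ramp
```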

The type information can also be usefully gathered in a table of error values as a function of the degree of the input polynomial and the type of the system, as shown in Table 4.1.

Figure 4.4

Relationship between ramp response and \(K_{v}\)

TABLE 4.1

Errors as a Function of System Type

Type Step (position) Ramp (velocity) Parabola (acceleration)
Type 0 $$\frac{1}{1 + K_{p}}$$ $$\infty$$ $$\infty$$
Type 1 0 $$\frac{1}{K_{v}}$$ $$\infty$$
Type 2 0 0 $$\frac{1}{K_{a}}$$

EXAMPLE 4.1: System Type for Speed Control

Determine the system type and the relevant error constant for speed control with proportional feedback given by \(D_{cl}(s) = k_{P}\). The plant transfer function is \(G = \frac{A}{\tau s + 1}\).

Solution. In this case, \(GD_{cl} = \frac{k_{P}A}{\tau s + 1}\) and, applying Eq. (4.36), we see \(n = 0\), as there is no pole at \(s = 0\). Thus, the system is Type 0, and the error constant is a position constant given by \(K_{p} = k_{P}A\).

EXAMPLE 4.2: System Type Using Integral Control

Determine the system type and the relevant error constant for the speed-control example with proportional plus integral control, where the controller is given by \(D_{cl} = k_{P} + k_{I}/s\). The plant transfer function is \(G = \frac{A}{\tau s + 1}\).

Solution. In this case, the loop transfer function is \(GD_{cl}(s) = \frac{A\left( k_{P}s + k_{I} \right)}{s(\tau s + 1)}\) and, as a unity feedback system with a single pole at \(s = 0\), the system is immediately seen to be Type 1. The velocity constant is given by Eq. (4.37) to be \(K_{v} = \lim_{s \rightarrow 0}\mspace{2mu} sGD_{cl}(s) = Ak_{I}\).
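A simulation check of this result, with assumed numerical values, is sketched below; the ramp-tracking error settles to \(1/K_{v} = 1/(Ak_{I})\):

```matlab
% Minimal sketch: verifying Kv = A*kI by simulating the ramp response.
% The numerical values are assumed for illustration.
s = tf('s');
A = 1; tau = 1; kP = 2; kI = 4;        % assumed values
G   = A/(tau*s + 1);
Dcl = kP + kI/s;
T   = feedback(G*Dcl, 1);
t = (0:0.01:20)';  r = t;              % unit-ramp reference
y = lsim(T, r, t);
r(end) - y(end)                        % approaches 1/Kv = 1/(A*kI) = 0.25
```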

The definition of system type helps us to identify quickly the ability of a system to track polynomials. In the unity feedback structure, if the process parameters change without removing the pole at the origin in a Type 1 system, the velocity constant will change but the system will still have zero steady-state error in response to a constant input and will still be Type 1 . Similar statements can be made for systems of Type 2 or higher. Thus, we can say that system type is a robust property with respect to parameter changes in the unity feedback structure. Robustness is a major reason for preferring unity feedback over other kinds of control structure.

Another form of the formula for the error constants can be developed directly in terms of the closed-loop transfer function \(\mathcal{T}(s)\). From Fig. 4.5, the transfer function including a sensor transfer function is

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{GD_{c}}{1 + GD_{c}H} \]

and the system error is

\[E(s) = R(s) - Y(s) = R(s) - \mathcal{T}(s)R(s). \]

The reference-to-error transfer function is thus

\[\frac{E(s)}{R(s)} = 1 - \mathcal{T}(s) \]

and the system error transform is

\[E(s) = \lbrack 1 - \mathcal{T}(s)\rbrack R(s) \]

We assume the conditions of the Final Value Theorem are satisfied, namely that all poles of \(sE(s)\) are in the LHP. In that case, the steady-state error is given by applying the Final Value Theorem to get

\[e_{ss} = \lim_{t \rightarrow \infty}\mspace{2mu} e(t) = \lim_{s \rightarrow 0}\mspace{2mu} sE(s) = \lim_{s \rightarrow 0}\mspace{2mu} s\lbrack 1 - \mathcal{T}(s)\rbrack R(s) \]

If the reference input is a polynomial of degree \(k\), the error transform becomes

\[E(s) = \frac{1}{s^{k + 1}}\lbrack 1 - \mathcal{T}(s)\rbrack \]

and the steady-state error is given again by the Final Value Theorem:

\[e_{ss} = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{1 - \mathcal{T}(s)}{s^{k + 1}} = \lim_{s \rightarrow 0}\mspace{2mu}\frac{1 - \mathcal{T}(s)}{s^{k}} \]

As before, the result of evaluating the limit in Eq. (4.45) can be zero, a nonzero constant, or infinite, and if the result is a nonzero constant, the system is referred to as Type \(k\). Notice a system of Type 1 or higher has a closed-loop DC gain of 1.0, which means \(\mathcal{T}(0) = 1\) in these cases.
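A minimal sketch of this computation for an assumed closed-loop transfer function with \(\mathcal{T}(0) = 1\):

```matlab
% Minimal sketch: evaluating Eq. (4.45) for an assumed closed loop.
s = tf('s');
T = (2*s + 4)/(s^2 + 3*s + 4);     % assumed closed loop with T(0) = 1
ess = dcgain(minreal((1 - T)/s))   % k = 1: finite, so the system is Type 1
Kv  = 1/ess                        % the corresponding velocity constant
```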

EXAMPLE 4.3: System Type for a Servo with Tachometer Feedback

Consider an electric motor position control problem including a nonunity feedback system caused by having a tachometer fixed to the motor

Figure 4.5

Closed-loop system with sensor dynamics. \(R\) = reference, \(U\) = control, \(Y\) = output, \(V\) = sensor noise

shaft and its voltage (which is proportional to shaft speed) is fed back as part of the control as shown in Fig. 4.5. The parameters are

\[\begin{matrix} G(s) & \ = \frac{1}{s(\tau s + 1)} \\ D_{c}(s) & \ = k_{P} \\ H(s) & \ = 1 + k_{t}s. \end{matrix}\]

Determine the system type and relevant error constant with respect to reference inputs.

Solution. The system error is

\[\begin{matrix} E(s) & \ = R(s) - Y(s), \\ & \ = R(s) - \mathcal{T}(s)R(s), \\ & \ = R(s) - \frac{D_{c}(s)G(s)}{1 + H(s)D_{c}(s)G(s)}R(s), \\ & \ = \frac{1 + (H(s) - 1)D_{c}(s)G(s)}{1 + H(s)D_{c}(s)G(s)}R(s). \end{matrix}\]

The steady-state system error from Eq. (4.45) is

\[e_{ss} = \lim_{s \rightarrow 0}\mspace{2mu} sR(s)\lbrack 1 - \mathcal{T}(s)\rbrack \]

For a polynomial reference input, \(R(s) = 1/s^{k + 1}\) and hence

\[\begin{matrix} e_{ss} & \ = \lim_{s \rightarrow 0}\mspace{2mu}\frac{\lbrack 1 - \mathcal{T}(s)\rbrack}{s^{k}} = \lim_{s \rightarrow 0}\mspace{2mu}\frac{1}{s^{k}}\frac{s(\tau s + 1) + \left( 1 + k_{t}s - 1 \right)k_{P}}{s(\tau s + 1) + \left( 1 + k_{t}s \right)k_{P}} \\ & \ = 0,\ k = 0 \\ & \ = \frac{1 + k_{t}k_{P}}{k_{P}},\ k = 1 \end{matrix}\]

therefore, the system is Type 1, and the velocity constant is \(K_{v} = \frac{k_{P}}{1 + k_{t}k_{P}}\). Notice if \(k_{t} > 0\), perhaps to improve stability or dynamic response, the velocity constant is smaller than the unity feedback value of \(k_{P}\). The conclusion is that if tachometer feedback is used to improve dynamic response, the steady-state error is usually increased; that is, there is a trade-off between improving stability and reducing steady-state error.
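The following sketch verifies this velocity constant numerically for assumed parameter values:

```matlab
% Minimal sketch: numerical check of Kv = kP/(1 + kt*kP), assumed values.
s = tf('s');
tau = 0.1; kP = 5; kt = 0.2;           % assumed values
G  = 1/(s*(tau*s + 1));
Dc = kP;
H  = 1 + kt*s;
T  = feedback(Dc*G, H);                % Y/R with tachometer feedback
ess = dcgain(minreal((1 - T)/s))       % = (1 + kt*kP)/kP = 0.4
Kv  = 1/ess                            % smaller than the unity feedback value kP = 5
```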

4.2.2 System Type for Regulation and Disturbance Rejection

A system can also be classified with respect to its ability to reject polynomial disturbance inputs in a way analogous to the classification scheme based on reference inputs. The transfer function from the disturbance input \(W(s)\) to the error \(E(s)\) is

\[\frac{E(s)}{W(s)} = - \frac{Y(s)}{W(s)} = T_{w}(s) \]

because, if the reference is equal to zero, the output is the error. In a similar way as for reference inputs, the system is Type 0 if a step disturbance input results in a nonzero constant steady-state error, and is Type 1 if a ramp disturbance input results in a steady-state value of the error that is a nonzero constant, and so on. In general, following the same approach used in developing Eq. (4.35), we assume a constant \(n\) and a function \(T_{o,w}(s)\) can be defined with the properties that \(T_{o,w}(0) = 1/K_{n,w}\) and the disturbance-to-error transfer function can be written as

\[T_{w}(s) = s^{n}T_{o,w}(s) \]

Then, the steady-state error to a disturbance input, which is a polynomial of degree \(k\), is

\[\begin{matrix} e_{ss} & \ = \lim_{s \rightarrow 0}\mspace{2mu}\left\lbrack sT_{w}(s)\frac{1}{s^{k + 1}} \right\rbrack, \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu}\left\lbrack T_{o,w}(s)\frac{s^{n}}{s^{k}} \right\rbrack. \end{matrix}\]

From Eq. (4.48), if \(n > k\), then the error is zero, and if \(n < k\), then the error is unbounded. If \(n = k\), then the system is Type \(k\) and the error is given by \(1/K_{n,w}\).

EXAMPLE 4.4: System Type for a DC Motor with Unity Feedback

Consider the simplified model of a DC motor in unity feedback, as shown in Fig. 4.6, where the disturbance torque is labeled \(W(s)\). This case was considered in Example 2.11.

(a) Use the controller

\[D_{c}(s) = k_{P} \]

and determine the system type and steady-state error properties with respect to disturbance inputs.

(b) Let the controller transfer function be given by

\[D_{c}(s) = k_{P} + \frac{k_{I}}{s} \]

and determine the system type and the steady-state error properties for disturbance inputs.

Figure 4.6

DC motor with unity feedback

Solution. (a) The closed-loop transfer function from \(W\) to \(E\) (where \(R = 0)\) is

\[\begin{matrix} T_{w}(s) & \ = - \frac{B}{s(\tau s + 1) + Ak_{P}}, \\ & \ = s^{0}T_{o,w} \\ n & \ = 0 \\ K_{o,w} & \ = - \frac{Ak_{P}}{B}. \end{matrix}\]

Applying Eq. (4.48), we see that the system is Type 0 and the steady-state error to a unit-step torque input is \(e_{ss} = - B/Ak_{P}\). From the earlier section, this system is seen to be Type 1 for reference inputs; this illustrates that system type can be different for different inputs to the same system.

(b) For this controller, the disturbance error transfer function is

\[\begin{matrix} T_{w}(s) & \ = - \frac{Bs}{s^{2}(\tau s + 1) + \left( k_{P}s + k_{I} \right)A}, \\ n & \ = 1 \\ K_{n,w} & \ = - \frac{Ak_{I}}{B} \end{matrix}\]

therefore, the system is Type 1, and the error to a unit-ramp disturbance input will be

\[e_{ss} = - \frac{B}{Ak_{I}} \]

4.2.2.1 Truxal's Formula

Truxal (1955) derived a formula for the velocity constant of a Type 1 system in terms of the closed-loop poles and zeros. See Appendix W4.2.2.1 online at www.pearsonglobaleditions.com.

4.3 The Three-Term Controller: PID Control

In later chapters, we will study three general analytic and graphical design techniques based on the root locus, the frequency response, and the state-space formulation of the equations. Here, we describe a control method with an older pedigree that was developed through long experience and by trial and error. Starting with simple proportional feedback, engineers discovered early on that integral control action is a means of eliminating bias offset. Then, finding poor dynamic response in many cases, an "anticipatory" term based on the derivative was added. The result is called the three-term or PID controller, and it has the transfer function

\[D_{c}(s) = k_{P} + \frac{k_{I}}{s} + k_{D}s \]

where \(k_{P}\) is the proportional gain, \(k_{I}\) the integral gain, and \(k_{D}\) the derivative gain. We will discuss them in turn.

4.3.1 Proportional Control (P)

When the feedback control signal is linearly proportional to the system error

\[u(t) = k_{P}e(t) \]

we call the result proportional feedback. Hence, the control signal responds to the system error instantaneously. This was the case for the speed controller in Section 4.1, for which the controller transfer function is

\[\frac{U(s)}{E(s)} = D_{cl}(s) = k_{P} \]

The controller is purely algebraic with no dynamics, and \(k_{P}\) is called the proportional gain. We can view the proportional controller as an amplifier with a "knob" that can be adjusted up or down. If the plant is second order, as, for example, for a motor with non-negligible inductance,\(^{3}\) then the plant transfer function can be written as

\[G(s) = \frac{A}{s^{2} + a_{1}s + a_{2}}\text{.}\text{~} \]

In this case, the characteristic equation for the closed-loop system with proportional control is

\[1 + k_{P}G(s) = 0 \]

that results in

\[s^{2} + a_{1}s + a_{2} + k_{P}A = 0. \]

The designer can control the constant term, \(\left( a_{2} + k_{P}A \right)\), in this equation by selecting \(k_{P}\), which determines the natural frequency; however, the designer cannot control the damping coefficient \(a_{1}\), since it is independent of \(k_{P}\). The system is

Type 0, and if \(k_{P}\) is made large to get adequately small steady-state error, the damping may be much too low for satisfactory transient response with proportional control alone. To illustrate these features of proportional control, assume we have the plant \(G(s)\) under proportional control as shown in Fig. 4.2, with \(a_{1} = 1.4\), \(a_{2} = 1\), and \(A = 1\). The proportional controller is indicated by Eq. (4.57). Figure 4.7 shows the closed-loop response of

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{k_{P}G(s)}{1 + k_{P}G(s)} \]

for a unit-step command input, \(r = 1(t)\), with \(k_{P} = 1.5\) and \(k_{P} = 6\). The output, \(y\), of the system exhibits a steady-state tracking error that decreases as the proportional feedback gain is increased. Furthermore, the response also clearly exhibits a decrease in damping and an increase in the speed of response as the gain is increased. The Final Value Theorem confirms that the steady-state error decreases as the gain, \(k_{P}\), is increased, and also that the control value, \(u(t)\), reaches a steady nonzero value.
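A minimal sketch reproducing these responses (the plant values \(a_{1} = 1.4\), \(a_{2} = 1\), \(A = 1\) are those given above):

```matlab
% Minimal sketch reproducing the responses of Fig. 4.7.
s = tf('s');
G = 1/(s^2 + 1.4*s + 1);
hold on
for kP = [1.5 6]
    T = feedback(kP*G, 1);   % closed loop: higher kP gives a smaller
    step(T)                  % steady-state offset but less damping
end
hold off
```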

The output and the control signal due to a disturbance are given by

\[\frac{Y(s)}{W(s)} = \frac{G(s)}{1 + k_{P}G(s)},\ \frac{U(s)}{W(s)} = - \frac{k_{P}G(s)}{1 + k_{P}G(s)} \]

By comparing the closed-loop transfer functions for the disturbance response and the command response, it can be seen that a step disturbance, \(w\), will also yield a steady-state tracking error and control value, in a manner similar to the reference input response shown in Fig. 4.7.

Figure 4.7

Illustration of the steady-state tracking error and the effect of the different proportional feedback gain values on the system damping

The error due to the disturbance will also decrease as the gain, \(k_{P}\), is increased and the damping will degrade.

For systems beyond second order, the situation is more complicated than that illustrated above. The damping of some of the poles might increase while that of others decreases as the gain is increased. Also, a higher gain will increase the speed of response, but typically at the cost of a larger transient overshoot and less overall damping. For systems of large order, increasing the proportional gain will typically lead to instability for a high enough gain. Any Type 0 system with proportional control will have a nonzero steady-state offset in response to a constant reference input, and will not be capable of completely rejecting a constant disturbance input. One way to improve the steady-state accuracy of control without using extremely high proportional gain is to introduce integral control, which we will discuss next.

4.3.2 Integral Control (I)

When a feedback control signal is linearly proportional to the integral of the system error, we call the result integral feedback. The goal of integral control is to minimize the steady-state tracking error and the steady-state output response to disturbances. This control law is of the form

\[u(t) = k_{I}\int_{t_{0}}^{t}\mspace{2mu} e(\tau)d\tau \]

and \(k_{I}\) is called the integral gain. This means the control signal at each instant of time is a summation of all past values of the tracking error; therefore, the control action is based on the "history" of the system error. Figure 4.8 illustrates that the control signal at any instant of time is proportional to the area under the system error curve (shown here for time \(t_{1}\)). The controller becomes

\[\frac{U(s)}{E(s)} = D_{cl}(s) = \frac{k_{I}}{s} \]

which is dynamic, and we see it has infinite gain at DC (that is, for \(s = 0\)). Hence, we would certainly expect superior performance in the steady state from such a controller. That is indeed the case, as illustrated shortly. This feedback has the primary virtue that it can provide a finite value of control with zero system error. This comes about because \(u(t)\) is a function of all past values of \(e(t)\) rather than just the current value, as in the proportional case. This feature means that constant disturbances can be canceled with zero error, because \(e(t)\) no longer has to be nonzero to produce a control signal that will counteract the constant disturbance.

Again, assume we have the plant \(G(s)\) under integral control as shown in Fig. 4.2, and \(G(s)\) is for the same motor that we used in Section 4.3.1. This simple system can be stabilized by integral control alone. From Fig. 4.2 and using the controller in Eq. (4.63), we see the tracking error, the control signal, and the output due to a reference input are given by

Figure 4.8

Integral control is based on the history of the system error

\[\begin{matrix} \frac{E(s)}{R(s)} = \frac{1}{1 + \frac{k_{I}}{s}G(s)} = \frac{s}{s + k_{I}G(s)},\ \frac{U(s)}{R(s)} = \frac{\frac{k_{I}}{s}}{1 + \frac{k_{I}}{s}G(s)} = \frac{k_{I}}{s + k_{I}G(s)} \\ \frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{\frac{k_{I}}{s}G(s)}{1 + \frac{k_{I}}{s}G(s)} = \frac{k_{I}G(s)}{s + k_{I}G(s)} \end{matrix}\]

Now assume a unit-step reference input \(r(t) = 1(t)\) with \(R(s) = 1/s\). From Eqs. (4.64) and (4.65) and using the Final Value Theorem (noting \(G(0) = 1\)), we have

\[\begin{matrix} y(\infty) = \frac{k_{I}G(0)}{0 + k_{I}G(0)} = 1,\ e(\infty) = \frac{0}{0 + k_{I}G(0)} = 0, \\ u(\infty) = \frac{k_{I}}{0 + k_{I}G(0)} = G(0)^{- 1} = 1. \end{matrix}\]

Note the steady-state tracking error will be zero no matter what the value of \(k_{I}\) is, whereas there was always a tracking error with the proportional controller no matter what the value of \(k_{P}\) was. The integral gain \(k_{I}\) can be selected purely to provide an acceptable dynamic response; however, it will typically cause instability if raised sufficiently high. Note also the steady-state control is a constant equal to the inverse DC gain of the plant, which makes good sense intuitively.

The output and the control signal due to a disturbance input are given by

\[\frac{Y(s)}{W(s)} = \frac{sG(s)}{s + k_{I}G(s)},\ \frac{U(s)}{W(s)} = - \frac{k_{I}G(s)}{s + k_{I}G(s)} \]

Robustness property of integral control

Figure 4.9

Illustration of constant disturbance rejection property of integral control: (a) system output; (b) control effort
Now assume a unit-step disturbance input \(w(t) = 1(t)\) with \(W(s) = 1/s\). From Eq. (4.68) and using the Final Value Theorem we have

\[y(\infty) = \frac{0 \cdot G(0)}{0 + k_{I}G(0)} = 0,\ u(\infty) = - \frac{k_{I}G(0)}{0 + k_{I}G(0)} = - 1. \]

These two equations show a zero steady-state error in the output and a final value of the control signal that cancels the disturbance exactly. Figure 4.9 illustrates the responses for \(k_{I} = 0.5\). The conclusion is that in this case, integral feedback results in zero steady-state output error in both tracking and disturbance rejection. Furthermore, plant parameter changes can be tolerated; that is, the results above are independent of the plant parameter values. Also, regardless of the value of the integral gain, \(k_{I}\), the asymptotic tracking and disturbance rejection properties are preserved, provided that the closed-loop system remains stable. These properties of integral control are referred to as robust. The addition of integral control to the \(G(s)\) above caused the closed-loop system to become Type 1, and those features will occur for any Type 1 system. However, as already discussed in Section 4.2.2, Type 1 systems do have a constant tracking error to a ramp reference input, as does this example of integral control.
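The disturbance responses of Fig. 4.9 can be reproduced with the following minimal sketch, using the same plant and \(k_{I} = 0.5\):

```matlab
% Minimal sketch reproducing the disturbance rejection of Fig. 4.9.
s  = tf('s');
G  = 1/(s^2 + 1.4*s + 1);        % note G(0) = 1
kI = 0.5;
Tyw = feedback(G, kI/s);         % Y/W: output response to the disturbance
Tuw = -feedback((kI/s)*G, 1);    % U/W: control response to the disturbance
step(Tyw, Tuw)
[dcgain(Tyw) dcgain(Tuw)]        % [0 -1]: zero error, control cancels w
```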

Given these remarkable properties of integral control, it is certainly worth the additional cost in implementation complexity. Whenever an actuator is used that can saturate (which is almost always the case), extra care is required in implementing integral control. The controller must be augmented with an anti-windup feature to deal with the actuator saturation (see Chapter 9).

4.3.3 Derivative Control (D)

The final term in the classical controller is derivative feedback, also called rate feedback. The goals of derivative feedback are to improve closed-loop system stability, to speed up the transient response, and to reduce overshoot. Therefore, whenever increased stability is desired, the use of derivative feedback is called for. In derivative feedback, the control law is

\[u(t) = k_{D}\overset{˙}{e}(t) \]

where \(k_{D}\) is the derivative gain and the control signal is proportional to the rate of change (or derivative) of the system error for which the \(D_{cl}(s)\) in Fig. 4.2 becomes

\[\frac{U(s)}{E(s)} = D_{cl}(s) = k_{D}s \]

Derivative control is almost never used by itself; it is usually augmented by proportional control. The key reason is that the derivative does not supply information on the desired end state. In addition, if \(e(t)\) were to remain constant, the output of a derivative controller would be zero and a proportional or integral control would be needed to provide a control signal at this time. A key feature of derivative control is that derivative control "knows" the slope of the error signal, so it takes control action based on the trend in the error signal. Hence, it is said to have an "anticipatory" behavior. One disadvantage of derivative control is that it tends to amplify noise, a subject that will be discussed in more depth in Chapter 6.

An important effect of the derivative term is that it gives a sharp response to suddenly changing signals. Because of this, the derivative term is sometimes introduced into the feedback path as shown in Fig. 4.10(a) in order to eliminate an excessive response to a step in the reference input. This could be either a part of the standard controller, or could describe a velocity sensor such as a tachometer on the shaft of a motor. The closed-loop characteristic equation is the same as if the term were in the forward path as given by Eq. (4.55) and drawn in Fig. 4.10(b). It is important to notice the zeros from the reference to the output are different in the two cases. With the derivative in the feedback path, the reference is not differentiated, which is how the undesirable response to sudden changes is avoided.

4.3.4 Proportional Plus Integral Control (PI)

Adding an integral term to the proportional controller to achieve the lower steady-state errors results in the proportional plus integral (PI) control equation in the time domain:

\[u(t) = k_{P}e(t) + k_{I}\int_{t_{0}}^{t}\mspace{2mu} e(\tau)d\tau \]

Figure 4.10

Block diagram of the PID controller: (a) with the D-term in the feedback path; (b) with the D-term in the forward path

(a)

(b)

for which the \(D_{cl}(s)\) in Fig. 4.2 becomes

\[\frac{U(s)}{E(s)} = D_{cl}(s) = k_{P} + \frac{k_{I}}{s} \]

Most controllers implemented in practice, if they have an integral term, will also have a proportional term. This combination generally allows for a faster response than pure integral control alone. Introduction of the integral term raises the type to Type 1, and the system can therefore completely reject constant bias disturbances. If the system is second order or higher, PID control is required if we wish to assign the closed-loop dynamics arbitrarily.

EXAMPLE 4.5: PI Control of a Thermal System\(^{4}\)

Consider the thermal system consisting of the lumped second-order model of two thermal masses connected by conduction, as shown in Figure 2.38 of Chapter 2. The transfer function from the heater to the sensed output was derived to be of the form

\[G(s) = \frac{K_{o}}{\left( \tau_{1}s + 1 \right)\left( \tau_{2}s + 1 \right)} \]

where

\[\tau_{1} = \frac{C_{1}}{H_{x} + H_{1}},\quad \tau_{2} = \frac{C_{2}}{H_{x} + H_{1}},\quad K_{o} = \frac{H_{x}}{\left( H_{x} + H_{1} \right)\left( H_{x} + H_{2} \right)}, \]

and we select realistic values for the system parameters so \(\tau_{1} = 1\), \(\tau_{2} = 10\), and \(K_{o} = 1000\). The goal is to design a PI controller to track the reference input temperature signal, \(r(t)\), which is a ramp with a slope of \(30^{\circ}C/sec\) that levels off at a steady-state value of \(300^{\circ}C\), with a total duration of 30 seconds, as shown in Figure 4.11. It is desired that the system exhibit negligible overshoot. Robustness with respect to perturbations in the system parameters \(K_{o}\), \(\tau_{1}\), and \(\tau_{2}\) is also desired, since the exact values of these parameters are usually not known. Explore the use of open-loop control, P control, and PI control to achieve the goal of tracking the reference signal accurately.

Solution. We will now discuss each controller design case separately.

Open-loop Control: One idea that comes to mind is to excite the system with an input step of size 0.3, since the DC gain of the system is 1000. The response of the open-loop system, shown in Figure 4.11, is slow, with a settling time of \(t_{s} = 47.1\ sec\), and has zero steady-state error. The system could be controlled with an open-loop controller, but such a system is highly sensitive to errors in the plant gain. In this case, a \(5\%\) error in plant gain would result in a steady-state error of \(5\%\) in the output, which would typically be unacceptable.

P Control: A proportional gain of \(k_{P} = 0.03\), corresponding to a closed-loop damping ratio of \(\zeta = 0.3\), results in a constant DC offset (bias) of \(10^{\circ}C\), as shown in Figure 4.12. Although the response is significantly faster than with open-loop control, this level of offset is unacceptable in applications such as Rapid Thermal Processing (see Chapter 10). Fig. 4.12 shows the response for the nominal case as well as for \(\pm 10\%\) changes in the gain value. The fact that the three responses are indistinguishable shows that gain changes have little effect in the feedback case and the system is robust. The associated control effort signals are shown in Figure 4.13.

Figure 4.11

Open-loop step response

Figure 4.12

Closed-loop response for the \(P\) controller

Figure 4.13

Closed-loop control signals for the \(P\) controller

Note, however, that the effect of the gain changes is noticeable in the associated control signals in Fig. 4.13.

PI Control: Let us use the same proportional gain as before, \(k_{P} = 0.03\), and choose an integral gain that is an order of magnitude lower, \(k_{I} = 0.003\), to obtain the PI controller

\[D_{c}(s) = 0.03 + \frac{0.003}{s}\text{.}\text{~} \]

The response of the closed-loop PI-controlled system is shown in Figure 4.14, and the bias is eliminated as expected. The response settles at \(t_{s} = 13.44\ sec\), but there is some overshoot. The system is also robust with respect to a \(\pm 10\%\) change in the gain, \(K_{o}\), as shown in Figure 4.14. The associated control effort signals are shown in Figure 4.15. Note in this

Figure 4.14

Closed-loop response for the PI controller

Figure 4.15

Closed-loop control signals for the PI controller

case, the controller has a zero at \(-0.1\) that cancels the stable open-loop pole of the plant at \(-0.1\), effectively rendering the closed-loop system second-order.
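A minimal sketch of this PI design, simulating the ramp-and-hold reference with lsim (the reference profile is built from the slope and final value given above):

```matlab
% Minimal sketch of the PI design in this example.
s  = tf('s');
G  = 1000/((s + 1)*(10*s + 1));   % tau1 = 1, tau2 = 10, Ko = 1000
Dc = 0.03 + 0.003/s;              % the PI controller above
T  = feedback(Dc*G, 1);
t  = (0:0.01:30)';
r  = min(30*t, 300);              % ramp at 30 C/sec, holding at 300 C
lsim(T, r, t)                     % compare with Figure 4.14
```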

4.3.5 PID Control

Putting all three terms together results in the proportional plus integral plus derivative (PID) control equation in the time domain:

\[u(t) = k_{P}e(t) + k_{I}\int_{t_{0}}^{t}\mspace{2mu} e(\tau)d\tau + k_{D}\overset{˙}{e}(t) \]

for which the \(D_{cl}(s)\) in Fig. 4.2 becomes

\[\frac{U(s)}{E(s)} = D_{cl}(s) = k_{P} + \frac{k_{I}}{s} + k_{D}s \]

To illustrate the effect of PID control, consider speed control but with the second-order plant as in Eq. (4.58). In that case, the characteristic equation from \(1 + GD_{cl} = 0\) becomes

\[\begin{matrix} s^{2} + a_{1}s + a_{2} + A\left( k_{P} + \frac{k_{I}}{s} + k_{D}s \right) & \ = 0, \\ s^{3} + a_{1}s^{2} + a_{2}s + A\left( k_{P}s + k_{I} + k_{D}s^{2} \right) & \ = 0. \end{matrix}\]

Collecting like powers of \(s\) terms results in

\[s^{3} + \left( a_{1} + Ak_{D} \right)s^{2} + \left( a_{2} + Ak_{P} \right)s + Ak_{I} = 0. \]

The point here is that this equation, whose three roots determine the nature of the dynamic response of the system, has three free parameters in \(k_{P}\), \(k_{I}\), and \(k_{D}\), and that by selection of these parameters, the roots can be uniquely and, in theory, arbitrarily determined. Without the derivative term, there would be only two free parameters; with three roots, the choice of roots of the characteristic equation would then be restricted. To illustrate the effect more concretely, a numerical example is useful.
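The following sketch illustrates the point: it solves for \(k_{P}\), \(k_{I}\), and \(k_{D}\) by matching coefficients against a desired characteristic polynomial (the desired root locations are assumed for illustration):

```matlab
% Minimal sketch: placing the three closed-loop roots by coefficient matching.
a1 = 1.4; a2 = 1; A = 1;                 % the plant values used in this section
p  = poly([-2, -1+1i, -1-1i]);           % desired s^3 + 4s^2 + 6s + 4 (assumed roots)
kD = (p(2) - a1)/A;                      % matches the s^2 coefficient
kP = (p(3) - a2)/A;                      % matches the s^1 coefficient
kI =  p(4)/A;                            % matches the s^0 coefficient
roots([1, a1 + A*kD, a2 + A*kP, A*kI])   % recovers the desired roots
```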

EXAMPLE 4.6: PID Control of Motor Speed

Consider the DC motor speed control with parameters\(^{5}\)

\[J_{m} = 1.13 \times 10^{- 2}\text{ }N \cdot m \cdot \sec^{2}/rad,\quad b = 0.028\text{ }N \cdot m \cdot sec/rad,\quad L_{a} = 10^{- 1}\text{ }H,\]

\[K_{t} = 0.067\text{ }N \cdot m/amp,\quad K_{e} = 0.067\text{ }V \cdot sec/rad. \]

These parameters were defined in Example 2.15 in Chapter 2. Use the controller parameters

\[k_{P} = 3,\ k_{I} = 15\ \sec^{- 1},\ k_{D} = 0.3\ sec \]

and discuss the responses of this system to steps in a disturbance torque and steps in the reference input using the three different controllers: P, PI, and PID. Let the unused controller parameters be zero.

Solution. Figure 4.16(a) illustrates the effects of P, PI, and PID feedback on the step disturbance response of the system. Note adding

(a)

(b)

Figure 4.16

Responses of P, PI, and PID control to: (a) step disturbance input; (b) step reference input


the integral term increases the oscillatory behavior but eliminates the steady-state error, and adding the derivative term reduces the oscillation while maintaining zero steady-state error. Figure 4.16(b) illustrates the effects of P, PI, and PID feedback on the step reference response, with similar results. The step responses can be computed by forming the numerator and denominator coefficient vectors (in descending powers of \(s\)) and using the step function in Matlab, as sketched below.
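A minimal version of that computation is sketched below, using the standard armature-controlled motor speed model; note the armature resistance \(R_{a}\) does not appear in the parameter list above, so the value used here is an assumption for illustration only:

```matlab
% Minimal sketch of the step-response computation mentioned above.
s  = tf('s');
Jm = 1.13e-2; b = 0.028; La = 0.1; Ra = 0.45;   % Ra: assumed value
Kt = 0.067;   Ke = 0.067;
G  = Kt/((Jm*s + b)*(La*s + Ra) + Kt*Ke);       % motor speed Omega(s)/Va(s)
Dc = 3 + 15/s + 0.3*s;                          % the PID gains of this example
step(feedback(Dc*G, 1))                         % compare with Fig. 4.16(b)
```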

EXAMPLE 4.7: PI Control for a DC-DC Voltage Converter

Consider the control of a DC-DC converter using the unity feedback structure as shown in Fig. 4.5 where

\[G(s) = V_{\text{in}\text{~}}\frac{1}{LCs^{2} + \frac{L}{R_{c}}s + 1} \]

\(H(s) = 1\), and \(V(s) = 0\). Assume the inductor \(L\) and capacitor \(C\) constitute the output filter for the converter, while the converter input voltage is \(V_{in}\) and the load is \(R_{c}\).

(a) Use the proportional controller

\[D_{c}(s) = k_{P}, \]

and determine the system type and steady-state error properties with respect to disturbance inputs.

(b) Let the control be PI as given by

\[D_{c}(s) = k_{P} + \frac{k_{I}}{s} \]

and determine the system type and steady-state error properties with respect for disturbance inputs.

Solution. (a) The closed-loop transfer function from \(W\) to \(E\) (where \(R = 0\)) is

\[\begin{matrix} T_{w}(s) & \ = - \frac{V_{in}}{\left( LCs^{2} + \frac{L}{R_{c}}s + 1 \right) + V_{in}k_{P}} \\ & \ = s^{0}T_{o,w} \\ n & \ = 0 \\ K_{o,w} & \ = - \frac{\left( 1 + V_{in}k_{P} \right)}{V_{in}}. \end{matrix}\]

Applying Eq. (4.48), we see the system is Type 0 and the steady-state error to a unit-step disturbance input is

\[e_{ss} = - \frac{V_{in}}{1 + V_{in}k_{P}} \]

(b) If the controller is PI, the disturbance error transfer function is

\[\begin{matrix} T_{w}(s) & \ = - \frac{V_{in}s}{s\left( LCs^{2} + \frac{L}{R_{c}}s + 1 \right) + V_{in}\left( k_{P}s + k_{I} \right)} \\ n & \ = 1 \\ K_{1,w} & \ = - k_{I} \end{matrix}\]

and therefore the system is Type 1. The error to a unit-ramp disturbance input in this case will be

\[e_{ss} = - \frac{1}{k_{I}} \]

which is independent of \(V_{in}\).

EXAMPLE 4.8: Cone Displacement Control for a Loudspeaker

Consider the closed-loop control system for regulating the output cone displacement of the loudspeaker discussed in Example 2.14. With a PD controller, the block diagram for the system is shown in Fig. 4.17(a), and with a PID controller, it is re-drawn as Fig. 4.17(b). \(N\) is the noise that affects \(V_{a}\), the voltage applied to the loudspeaker, while \(B\), \(b\), \(L\), \(l\), \(M\), and \(R\) were defined in Examples 2.13 and 2.14 in Chapter 2. Assume the control results in a stable system, and determine the system types and error responses to disturbances of the control system for

(a) System Fig. 4.17(a);

(b) System Fig. 4.17(b).

Figure 4.17

Cone displacement control for a loudspeaker: (a) PD control; (b) PID control\(^{6}\)

(a)

(b)

Solution. (a) We see from inspection of Fig. 4.17(a) that, with one pole at the origin in the plant, the system is Type 1 with respect to reference inputs. The transfer function from disturbance to error is

\[\begin{matrix} T_{w}(s) & \ = - \frac{Bl}{s\left\lbrack (Ms + b)(Ls + R) + (Bl)^{2} \right\rbrack + \left( k_{D}s + k_{P} \right)Bl}, \\ & \ = T_{o,w}, \end{matrix}\]

for which \(n = 0\) and \(K_{o,w} = -k_{P}\). The system is Type 0, and the error to a unit disturbance step is \(- 1/k_{P}\).

(b) With PID control, the forward gain has two poles at the origin, so this system is Type 2 for reference inputs, but the disturbance transfer function is

\[\begin{matrix} T_{w}(s) & \ = - \frac{Bls}{s^{2}\left\lbrack (Ms + b)(Ls + R) + (Bl)^{2} \right\rbrack + \left( k_{D}s^{2} + k_{P}s + k_{I} \right)Bl}, \\ n & \ = 1, \\ T_{o,w}(s) & \ = - \frac{Bl}{s^{2}\left\lbrack (Ms + b)(Ls + R) + (Bl)^{2} \right\rbrack + \left( k_{D}s^{2} + k_{P}s + k_{I} \right)Bl}, \end{matrix}\]

from which the system is Type 1 and the error constant is \(K_{1,w} = -k_{I}\); the error to a disturbance ramp of unit slope will be \(- 1/k_{I}\).

Transfer function for a high-order system with a characteristic process reaction curve

Figure 4.18

Process reaction curve

4.3.6 Ziegler-Nichols Tuning of the PID Controller

When the PID controller was being developed, selecting values for the several terms (known as "tuning" the controller) was often a hit-and-miss affair. To bring order to the situation and make life easier for plant operators, control engineers looked for ways to make the tuning more systematic. Callender et al. (1936) proposed a design for PID controllers by specifying satisfactory values for the terms based on estimates of the plant parameters that an operating engineer could make from experiments on the process itself. This approach was extended by Ziegler and Nichols (1942, 1943), who recognized that the step responses of a large number of process control systems exhibit a process reaction curve such as that shown in Fig. 4.18, which can be generated from experimental step response data. The \(S\)-shape of the curve is characteristic of many systems and can be approximated by the step response of a plant with transfer function

\[\frac{Y(s)}{U(s)} = \frac{Ae^{- st_{d}}}{\tau s + 1} \]

which is a first-order system with a time delay or "transportation lag" of \(t_{d}\) sec. The constants in Eq. (4.93) can be determined from the unit-step response of the process. If a tangent is drawn at the inflection point of the reaction curve, then the slope of the line is \(R = A/\tau\), the intersection of the tangent line with the time axis identifies the time delay \(L = t_{d}\) and the final value gives the value of \(A\).

Tuning by decay ratio of 0.25

Figure 4.19

Quarter decay ratio
Ziegler and Nichols gave two methods for tuning the PID controller for such a model. In the first method, the choice of controller parameters is designed to result in a closed-loop step response transient with a decay ratio of approximately 0.25. This means the transient decays to a quarter of its value after one period of oscillation, as shown in Fig. 4.19. A quarter decay corresponds to \(\zeta = 0.21\) and, while low for many applications, was seen as a reasonable compromise between quick response and adequate stability margins for the process controls being considered. The authors simulated the equations for the system on an analog computer and adjusted the controller parameters until the transients showed the decay of \(25\%\) in one period. The regulator parameters suggested by Ziegler and Nichols for the controller terms defined by

\[D_{c}(s) = k_{P}\left( 1 + \frac{1}{T_{I}s} + T_{D}s \right) \]

are given in Table 4.2.

TABLE 4.2

Ziegler-Nichols Tuning for the Regulator \(D_{c}(s) = k_{P}\left( 1 + 1/T_{I}s + T_{D}s \right)\), for a Decay Ratio of 0.25

Type of Controller Optimum Gain
P $$k_{P} = 1/RL$$
PI $$k_{P} = 0.9/RL,\quad T_{I} = L/0.3$$
PID $$k_{P} = 1.2/RL,\quad T_{I} = 2L,\quad T_{D} = 0.5L$$

Tuning by evaluation at the limit of stability (ultimate sensitivity method)

Figure 4.20

Determination of ultimate gain and period

Figure 4.21

Neutrally stable system
In the ultimate sensitivity method, the criteria for adjusting the parameters are based on evaluating the amplitude and frequency of the oscillations of the system at the limit of stability, rather than on taking a step response. To use the method, the proportional gain is increased until the system becomes marginally stable and continuous oscillations just begin with amplitude limited by the saturation of the actuator. The corresponding gain is defined as \(K_{u}\) (called the ultimate gain) and the period of oscillation is \(P_{u}\) (called the ultimate period). These are determined as shown in Figs. 4.20 and 4.21. \(P_{u}\) should be measured when the amplitude of oscillation is as small as possible. Then, the tuning parameters are selected as shown in Table 4.3.

Experience has shown that the controller settings according to Ziegler-Nichols rules provide acceptable closed-loop response for many

TABLE 4.3

Ziegler-Nichols Tuning for the Regulator \(D_{c}(s) = k_{P}\left( 1 + 1/T_{I}s + T_{D}s \right)\), Based on the Ultimate Sensitivity Method

Type of Controller Optimum Gain
P $$k_{P} = 0.5K_{u}$$
PI $$k_{P} = 0.45K_{u},\quad T_{I} = P_{u}/1.2$$
PID $$k_{P} = 0.6K_{u},\quad T_{I} = 0.5P_{u},\quad T_{D} = 0.125P_{u}$$

Figure 4.22

Matlab's pidTuner GUI (Source: Franklin, Gene F., Feedback Control of Dynamic Systems, 8E, 2019, Pearson Education, Inc., New York, NY.)

systems. As seen from the ensuing examples, the step-response method generally suggests higher gains than the ultimate sensitivity method does. The process operator will often perform final tuning of the controller iteratively on the actual process to yield satisfactory control.

Many variations on the Ziegler-Nichols tuning rules, along with automatic tuning techniques, have been developed for industrial applications by various authors.\(^{7}\)

PID tuning can also be done using Matlab's PID Tuner App. The pidTuner App is an interface that lets the user see how the time response changes as the gains of the PID controller are varied. Matlab's algorithm for PID tuning meets the three-fold objectives of stability, performance, and robustness by tuning the PID gains to achieve a good balance between performance and robustness. Figure 4.22 shows the GUI for the Matlab pidTuner App. PID tuning can also be done using optimization techniques (Hast et al., 2013).
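For a script-based workflow, pidtune is the command-line counterpart of the App; the sketch below applies it to the thermal plant of Example 4.5 (the choice of plant and controller type here is illustrative):

```matlab
% Minimal sketch: programmatic PID tuning with pidtune.
s = tf('s');
G = 1000/((s + 1)*(10*s + 1));
C = pidtune(G, 'PI')             % returns the tuned kP and kI
step(feedback(C*G, 1))           % closed-loop step response with the tuned PI
```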

EXAMPLE 4.9: Tuning of a Heat Exchanger: Quarter Decay Ratio

Consider the heat exchanger discussed in Chapter 2. The process reaction curve of this system is shown in Fig. 4.23. Determine proportional and PI regulator gains for the system using the Ziegler-Nichols

Figure 4.23

A measured process reaction curve

rules to achieve a quarter decay ratio. Plot the corresponding step responses.

Solution. From the process reaction curve, we measure the maximum slope to be \(R \cong \frac{1}{90}\) and the time delay to be \(L \cong 13\ sec\). According to the Ziegler-Nichols rules of Table 4.2, the gains are

\[\begin{matrix} \text{~}\text{Proportional}\text{~}:k_{P} & \ = \frac{1}{RL} = \frac{90}{13} = 6.92 \\ \text{~}\text{PI}\text{~}:k_{P} & \ = \frac{0.9}{RL} = 6.22\text{~}\text{and}\text{~}T_{I} = \frac{L}{0.3} = \frac{13}{0.3} = 43.3 \end{matrix}\]

(a)

(b)

Figure 4.24

Closed-loop step responses

Figure 4.25

Ultimate period of heat exchanger
Figure 4.24(a) shows the step responses of the closed-loop system with these two regulators. Note the proportional regulator results in a steady-state offset, while the PI regulator tracks the step exactly in the steady state. Both regulators are rather oscillatory and have considerable overshoot. If we arbitrarily reduce the gain \(k_{P}\) by a factor of 2 in each case, the overshoot and oscillatory behaviors are substantially reduced, as shown in Fig. 4.24(b).

EXAMPLE 4.10: Tuning of a Heat Exchanger: Oscillatory Behavior

Proportional feedback was applied to the heat exchanger in the previous example until the system showed nondecaying oscillations in response to a short pulse (impulse) input, as shown in Fig. 4.25. The ultimate gain is measured to be \(K_{u} = 15.3\), and the period was measured at \(P_{u} = 42\ sec\). Determine the proportional and PI regulators according to the Ziegler-Nichols rules based on the ultimate sensitivity method. Plot the corresponding step responses.

Solution. The regulators from Table 4.3 are

\[\text{Proportional}:\ k_{P} = 0.5K_{u} = 7.65, \]

\[\text{PI}:\ k_{P} = 0.45K_{u} = 6.885,\ \text{and}\ T_{I} = \frac{1}{1.2}P_{u} = 35\ sec. \]

The step responses of the closed-loop system are shown in Fig. 4.26(a). Note the responses are similar to those in Example 4.9. If we reduce \(k_{P}\) by \(50\%\), then the overshoot is substantially reduced, as shown in Fig. 4.26(b). This shows that the tuning rules provide a good starting point, but considerable fine tuning may still be needed.

(a)

(b)

Figure 4.26

Closed-loop step responses

4.4 Feedforward Control by Plant Model Inversion

Section 4.3 showed that proportional control typically yields a steady-state error in the output due to disturbances or input commands. Integral control was introduced in order to reduce those errors to zero for steady disturbances or constant reference commands; however, integral control typically decreases the damping or stability of a system.

Feedforward

One way to partly resolve this conflict is to provide some feedforward of the control that will eliminate the steady-state errors due to command inputs. This is possible because the command inputs are known and can be determined directly by the controller; thus, we should be able to compute the value of the control input that will produce the desired outputs being commanded. Disturbances are not always measurable, but they can also be used for feedforward control whenever they are measured. The solution is simply to determine the inverse of the DC gain of the plant transfer function model and incorporate that into the controller, as shown in Fig. 4.27. If this is done, the feedforward will provide the control effort required for the desired command input, and the feedback takes care of the differences between the real plant and the plant model, plus the effects of any disturbances.

EXAMPLE 4.11: Feedforward Control for DC Motor Speed

Consider the same DC motor speed-control system (Eq. 4.58) of Section 4.3 with the two different values of proportional controller gain, \(k_{P} = 1.5\) and \(k_{P} = 6\). (a) Use feedforward control to eliminate the steady-state tracking error for a step reference input. (b) Also use feedforward

Figure 4.27

Feedforward control structure for: (a) tracking; (b) disturbance rejection

control to eliminate the effect of a constant output disturbance signal on the output of the system.

Solution. (a) In this case, the plant inverse DC gain is \(G^{- 1}(0) = 1\). We implement the closed-loop system as shown in Fig. 4.27(a) with \(G(s)\) given by Eq. (4.58) and \(D_{c}(s) = k_{P}\). The closed-loop transfer function is

\[\begin{matrix} Y(s) & \ = G(s)\left\lbrack k_{P}E(s) + R(s) \right\rbrack, \\ E(s) & \ = R(s) - Y(s) \\ \frac{Y(s)}{R(s)} & \ = \mathcal{T}(s) = \frac{\left( 1 + k_{P} \right)G(s)}{1 + k_{P}G(s)}. \end{matrix}\]

Note the closed-loop DC gain is unity \((\mathcal{T}(0) = 1)\). Figure 4.28 illustrates the effect of feedforward control in eliminating the steady-state tracking error due to a step reference input for the two values of \(k_{P}\). Addition of the feedforward control results in zero steady-state tracking error.

(b) Similarly, we implement the closed-loop system as shown in Fig. 4.27(b) with \(G(s)\) given by Eq. (4.58) and \(D_{c}(s) = k_{P}\). The closed-loop transfer function is

Figure 4.28

Tracking performance with addition of feedforward

\[\begin{matrix} Y(s) & \ = W(s) + G(s)\left\lbrack k_{P}E(s) - W(s) \right\rbrack \\ E(s) & \ = R(s) - Y(s),\text{~}\text{with}\text{~}R(s) = 0 \\ \frac{Y(s)}{W(s)} & \ = \mathcal{T}_{w}(s) = \frac{1 - G(s)}{1 + k_{P}G(s)} \end{matrix}\]

Note the closed-loop DC gain is zero \(\left( \mathcal{T}_{w}(0) = 0 \right)\). Figure 4.29 illustrates the effect of feedforward control in eliminating the steady-state error for a constant output disturbance, again for the two values of \(k_{P}\). We observe that by using the inverse of the DC gain, this feedforward only controls the steady-state effect of the reference and disturbance inputs. More complex feedforward control can be used by inverting \(G(s)\) over an entire frequency range.
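A minimal sketch of both structures of Fig. 4.27, using the plant of Eq. (4.58) (for which \(G(0)^{-1} = 1\)):

```matlab
% Minimal sketch of the two feedforward structures of Fig. 4.27.
s = tf('s');
G = 1/(s^2 + 1.4*s + 1);              % G(0) = 1, so the feedforward gain is 1
kP = 6;                               % repeat with kP = 1.5 to compare
T  = minreal((1 + kP)*G/(1 + kP*G));  % Y/R with reference feedforward
Tw = minreal((1 - G)/(1 + kP*G));     % Y/W with disturbance feedforward
[dcgain(T) dcgain(Tw)]                % [1 0]: no offset, disturbance rejected
step(T, Tw)
```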

\(\Delta\) 4.5 Introduction to Digital Control

So far, we have assumed the systems and controllers are all continuous-time systems obeying differential equations. That implies the controllers would be implemented using analog circuits such as those discussed in Section 2.2. In fact, most control systems today are implemented in digital computers, which are not able to implement the continuous controllers exactly. Instead, they approximate the continuous control with algebraic equations called difference equations. A very short description of how one would convert a continuous \(D_{c}(s)\) to

Figure 4.29

Constant disturbance rejection performance with addition of feedforward

difference equations that can be coded directly into a computer is contained in Appendix W4.5 online at www.pearsonglobaleditions.com. For more details, see Chapter 8 in this text, or see Digital Control of Dynamic Systems, by Franklin, Powell, and Workman, 3rd ed., 1998, for a complete discussion of the topic.
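As a small preview, the sketch below discretizes an illustrative continuous PI controller with c2d (the controller and sample period are assumed values):

```matlab
% Minimal sketch: computing a discrete equivalent with c2d (see Chapter 8).
s  = tf('s');
Dc = 3 + 15/s;               % an illustrative continuous PI controller
Ts = 0.01;                   % assumed sample period, sec
Dd = c2d(Dc, Ts, 'tustin')   % discrete equivalent via the trapezoidal rule
```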

\(\Delta\) 4.6 Sensitivity of Time Response to Parameter Change

Since many control specifications are in terms of the step response, the sensitivity of the time response to parameter changes is sometimes very useful to explore. To learn more, see Appendix W4.6 online at www.pearsonglobaleditions.com.

4.7 Historical Perspective

The field of control is characterized by two paths: theory and practice. Control theory is basically the application of mathematics to solve control problems, whereas control practice, as used here, is the practical application of feedback in devices where it is found to be useful. Historically, practical applications have come first with control being introduced by trial and error. Although the applicable mathematics is often known, the theory describing how the control works and pointing the way to improvements has typically been applied later. For example, James Watt's company began manufacturing steam engines using
the fly-ball governor in 1788, but it was not until 1840 that G. B. Airy described instability in a similar device, and not until 1868 that J. C. Maxwell published "On Governors" with a theoretical description of the problem. Then it was not until 1877, almost 100 years after the steam engine control was introduced, that E. J. Routh published a solution giving the requirements for stability. This situation has been called the "Gap between Theory and Practice" and continues to this day as a source of creative tension that stimulates both theory and practice.

Regulation is central to the process industries, from making beer to making gasoline. In these industries, there are a host of variables that need to be kept constant. Typical examples are temperature, pressure, volume, flow rates, composition, and chemical properties such as pH level. However, before one can regulate by feedback, one must be able to measure the variable of interest. Before there was control, there were sensors. In 1851, George Taylor and David Kendall founded the company that later became the Taylor Instrument Company in Rochester, NY, to make thermometers and barometers for weather forecasting. In 1855, they were making thermometers for several industries, including the brewing industry, where they were used for manual control. Other early entries into the instrument field were the Bristol Company, founded in Naugatuck, CT, in 1889 by William Bristol, and the Foxboro Company, founded in Foxboro, MA, in 1908 by William's father and two of his brothers. For example, one of Bristol's instruments was used by Henry Ford to measure (and presumably control) steam pressure while he worked at the Detroit Edison Company. The Bristol Company pioneered in telemetry that permitted instruments to be placed at a distance from the process, so a plant manager could monitor several variables at once. As the instruments became more sophisticated, and devices such as motor-driven valves became available, they were used in feedback control, often using simple on-off methods, as described in Chapter 1 for the home furnace. An important fact was that the several instrument companies agreed upon standards for the variables used, so a plant could mix and match instruments and controllers from different suppliers. In 1920, Foxboro introduced a controller based on compressed air that included reset or integral action. Eventually, each of these companies introduced instruments and controllers that could implement full PID action. A major step was taken for tuning PID controllers in 1942 when Ziegler and Nichols, working for Taylor Instruments, published their method for tuning based on experimental data.

The poster child for the tracking problem was that of the antiaircraft gun, whether on land or at sea. The idea was to use radar to track the target and to have a controller that would predict the path of the aircraft and aim the gun to a position such that the projectile would hit the target when it got there. The Radiation Laboratory was set up at MIT during World War II to develop such radars, one of which was the SCR-584. Interestingly, one of the major contributors to the control methods developed for this project was none other than Nick Nichols
who had earlier worked on tuning PID controllers. When the record of the Rad Lab was written, Nichols was selected to be one of the editors of volume 25 on control.

H. S. Black joined Bell Laboratories in 1921 and was assigned to find a design for an electronic amplifier suitable for use as a repeater on the long lines of the telephone company. The basic problem was that the gain of the vacuum tube components he had available drifted over time, and he needed a design that, over the audio frequency range, maintained a specific gain with great precision in the face of these drifts. Over the next few years he tried many approaches, including a feedforward technique designed to cancel the tube distortion. While this worked in the laboratory, it was much too sensitive to be practical in the field. Finally, in August of 1927,\(^{8}\) while on the ferry boat from Staten Island to Manhattan, he realized that negative feedback might work, and he wrote the equations on the only paper available, a page of the New York Times. He applied for a patent in 1928, but it was not issued until December 1937.\(^{9}\) The theory of sensitivity and many other theories of feedback were worked out by H. W. Bode.

146. SUMMARY

  • The most important measure of the performance of a control system is the system error to all inputs.

  • Compared to open-loop control, feedback can be used to stabilize an otherwise unstable system, to reduce errors to plant disturbances, to improve the tracking of reference inputs, and to reduce the system's transfer function sensitivity to parameter variations.

  • Sensor noise introduces a conflict between efforts to reduce the error caused by plant disturbances and efforts to reduce the errors caused by the sensor noise.

  • Classifying a system as Type \(k\) indicates the ability of the system to achieve zero steady-state error to input polynomials of degree less than \(k\). A stable unity feedback system is Type \(k\) with respect to reference inputs if the loop gain \(G(s)D_{c}(s)\) has \(k\) poles at the origin, in which case we can write

\[G(s)D_{c}(s) = \frac{A\left( s + z_{1} \right)\left( s + z_{2} \right)\cdots}{s^{k}\left( s + p_{1} \right)\left( s + p_{2} \right)\cdots} \]

and the error constant is given by

\[K_{k} = \lim_{s \rightarrow 0} s^{k}G(s)D_{c}(s) = \frac{Az_{1}z_{2}\cdots}{p_{1}p_{2}\cdots} \]

  • A table of steady-state errors for unity feedback systems of Types 0, 1, and 2 to reference inputs is given in Table 4.1.

  • Systems can be classified as to type for rejecting disturbances by computing the system error to polynomial disturbance inputs. The system is Type \(k\) to disturbances if the error is zero to all disturbance polynomials of degree less than \(k\), but nonzero for a polynomial of degree \(k\).
  • Increasing the proportional feedback gain reduces steady-state errors, but high gain almost always destabilizes the system. Integral control provides robust reduction in steady-state errors, but may also make the system less stable. Derivative control increases damping and improves stability. These three kinds of control are combined to form the classical three-term PID controller.
  • The standard PID controller is described by the equations

\[\begin{matrix} & U(s) = \left( k_{P} + \frac{k_{I}}{s} + k_{D}s \right)E(s)\ \text{~}\text{or}\text{~} \\ & U(s) = k_{P}\left( 1 + \frac{1}{T_{I}s} + T_{D}s \right)E(s) = D_{c}(s)E(s) \end{matrix}\]

This latter form is ubiquitous in the process-control industry and describes the basic controller in many control systems.

  • Useful guidelines for tuning PID controllers were presented in Tables 4.2 and 4.3.

  • Matlab can compute a discrete equivalent with the command c2d.
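As a minimal sketch of this command (the controller gains and sample period below are assumptions chosen only for illustration), a continuous PI controller can be discretized in one line:

kP = 2; kI = 5;               % hypothetical PI gains, for illustration only
Dc = tf([kP kI], [1 0]);      % Dc(s) = (kP*s + kI)/s
Ts = 0.05;                    % sample period, sec (assumed)
Dd = c2d(Dc, Ts, 'tustin')    % discrete equivalent via the trapezoid rule

The third argument selects the discretization method; omitting it gives the default zero-order-hold equivalent.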

147. REVIEW QUESTIONS

4.1 Give three advantages of feedback in control.

4.2 Give two disadvantages of feedback in control.

4.3 A temperature control system is found to have zero error to a constant tracking input and an error of \({0.5}^{\circ}C\) to a tracking input that is linear in time, rising at the rate of \(40^{\circ}C/sec\). What is the system type of this control system, and what is the relevant error constant (\(K_{p}\), \(K_{v}\), or \(K_{a}\))?

4.4 What are the units of \(K_{p},K_{v}\), and \(K_{a}\) ?

4.5 What is the definition of system type with respect to reference inputs?

4.6 What is the definition of system type with respect to disturbance inputs?

4.7 Why does system type depend on where the external signal enters the system?

4.8 What is the main objective of introducing integral control?

4.9 What is the major objective of adding derivative control?

4.10 Why might a designer wish to put the derivative term in the feedback rather than in the error path?

4.11 What is the advantage of having a "tuning rule" for PID controllers?

4.12 Give two reasons to use a digital controller rather than an analog controller.

4.13 Give two disadvantages to using a digital controller.

148. PROBLEMS

149. Problems for Section 4.1: The Basic Equations of Control

4.1 If \(S\) is the sensitivity of the unity feedback system to changes in the plant transfer function and \(T\) is the transfer function from reference to output, show that \(S + T = 1\).

4.2 We define the sensitivity of a transfer function \(G\) to one of its parameters \(K\) as the ratio of percent change in \(G\) to percent change in \(K\).

\[S_{K}^{G} = \frac{dG/G}{dK/K} = \frac{dlnG}{dlnK} = \frac{K}{G}\frac{dG}{dK} \]

The purpose of this problem is to examine the effect of feedback on sensitivity. In particular, we would like to compare the topologies shown in Fig. 4.30 for connecting three amplifier stages with a gain of \(-K\) into a single amplifier with a gain of \(-10\).

Figure 4.30

Three-amplifier topologies for Problem 4.2: (a), (b), (c)

(a) For each topology in Fig. 4.30, compute \(\beta_{i}\) so that if \(K = 10\), then \(Y = -10R\).

(b) For each topology, compute \(S_{K}^{G}\) when \(G = \frac{Y}{R}\). (Use the respective \(\beta_{i}\) values found in part (a).) Which case is the least sensitive?

(c) Compute the sensitivities of the systems in Fig. 4.30(b,c) to \(\beta_{2}\) and \(\beta_{3}\). Using your results, comment on the relative need for precision in sensors and actuators.

4.3 Compare the two structures shown in Fig. 4.31 with respect to sensitivity to changes in the overall gain due to changes in the amplifier gain. Use the relation

\[S = \frac{dlnF}{dlnK} = \frac{K}{F}\frac{dF}{dK} \]

as the measure. Select \(H_{1}\) and \(H_{2}\) so the nominal system outputs satisfy \(F_{1} = F_{2}\), and assume \(KH_{1} > 0\).

Figure 4.31

Block diagrams for Problem 4.3: (a), (b)

4.4 A unity feedback control system has the open-loop transfer function

\[G(s) = \frac{A}{s(s + a)} \]

(a) Compute the sensitivity of the closed-loop transfer function to changes in the parameter \(A\).

(b) Compute the sensitivity of the closed-loop transfer function to changes in the parameter \(a\).

(c) If the unity gain in the feedback changes to a value of \(\beta \neq 1\), compute the sensitivity of the closed-loop transfer function with respect to \(\beta\).

4.5 Compute the equation for the system error for the feedback system shown in Fig. 4.5.

150. Problems for Section 4.2: Control of Steady-State Error

4.6 Consider the DC motor control system with rate (tachometer) feedback shown in Fig. 4.32(a).

Figure 4.32

Control system for Problem 4.6: (a), (b)

Figure 4.33

Closed-loop system for Problem 4.7

Figure 4.34

Control system for Problem 4.10

(a) Find values for \(K^{'}\) and \(k_{t}^{'}\) so the system of Fig. 4.32(b) has the same transfer function as the system of Fig. 4.32(a).

(b) Determine the system type with respect to tracking \(\theta_{r}\) and compute the system \(K_{v}\) in terms of parameters \(K^{'}\) and \(k_{t}^{'}\).

(c) Does the addition of tachometer feedback with positive \(k_{t}\) increase or decrease \(K_{v}\) ?

4.7 A block diagram of a control system is shown in Fig. 4.33.

(a) If \(r\) is a step function and the system is closed-loop stable, what is the steady-state tracking error?

(b) What is the system type?

(c) What is the steady-state error to a ramp reference input with velocity 2.5 if \(K_{2} = 2\) and \(K_{1}\) is adjusted so that the system step response has approximately a rise time of \(0.65\text{ }s\) and a settling time of \(0.23\text{ }s\)?

4.8 A standard feedback control block diagram is shown in Fig. 4.5 with

\[G(s) = \frac{1.5}{s};\quad D_{c}(s) = \frac{(s + 9)}{(s + 3)};\quad H(s) = \frac{70}{(s + 70)};\quad V(s) = 0 \]

(a) Let \(W = 0\) and compute the transfer function from \(R\) to \(Y\).

(b) Let \(R = 0\) and compute the transfer function from \(W\) to \(Y\).

(c) What is the tracking error if \(R\) is a unit-step input and \(W = 0\)?

(d) What is the tracking error if \(R\) is a unit-ramp input and \(W = 0\) ?

(e) What is the system type with respect to the reference inputs and the corresponding error coefficient?

4.9 A generic negative feedback system with non-unity transfer function in the feedback path is shown in Fig. 4.5.

(a) Suppose,

\[G(s) = \frac{1}{s(s + 1)^{2}};\quad D_{c}(s) = 0.42;\quad H(s) = P\frac{(0.58s + 1)}{(0.35s + 1)};\quad V(s) = 0, \]

showing a lead compensation in the feedback path. What is the requirement on \(P\) such that the system will remain a Type 1 system with respect to the reference input?

Figure 4.35

Control system for Problem 4.11

(b) For part (a), find the steady-state tracking error for this system to a unit ramp reference input if \(P = 1\).

(c) For part (b), what is the value of the velocity error coefficient, \(K_{v}\) ?

4.10 Consider the system shown in Fig. 4.34 where

\[D_{c}(s) = K\frac{\left( s^{2} + \alpha s + 1 \right)}{\left( s^{2} + \omega_{o}^{2} \right)} \]

(a) Prove that if the system is stable, it is capable of tracking a sinusoidal reference input \(r = sin\omega_{o}t\) with a zero steady-state error. (Hint: Look at the transfer function from \(R\) to \(E\) and consider the gain at \(\omega_{o}\).)

(b) Use Routh's criterion to find the range of \(K\) such that the closed-loop system remains stable if \(\omega_{o} = 1\) and \(\alpha = 0.3\).

4.11 Consider the system shown in Fig. 4.35, which represents control of the angle of a pendulum that has no damping.

(a) What condition must \(D_{c}(s)\) satisfy so the system can track a ramp reference input with constant steady-state error?

(b) For a transfer function \(D_{c}(s)\) that stabilizes the system and satisfies the condition in part (a), find the class of disturbances \(w(t)\) that the system can reject with zero steady-state error.

4.12 A unity feedback system has the overall transfer function

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{6}{(7s + 2)(s + 3)} \]

Give the system type and the corresponding error constant for tracking polynomial reference inputs in terms of \(\zeta\) and \(\omega_{n}\).

4.13 Consider the second-order system

\[G(s) = \frac{2}{s^{2} + 4\zeta s + 2} \]

We would like to add a transfer function of the form \(D_{c}(s) = K(s + a)/\) \((s + b)\) in cascade with \(G(s)\) in a unity-feedback structure.

(a) Ignoring stability for the moment, what are the constraints on \(K,a\), and \(b\) so that the system is Type 1 ?

(b) What are the constraints on \(a\) and \(b\) so that the system is both Type 1 and remains stable for every positive value of \(K\)?

(c) What constraint on \(K\) ensures the steady-state tracking error is less than 0.1 unit when the reference input to the feedback system is a unit step?

4.14 Consider the system shown in Fig. 4.36(a).

Figure 4.36

Control system for Problem 4.14: (a), (b)

(a) What is the system type? Compute the steady-state tracking error due to a ramp input \(r(t) = r_{o}t1(t)\).

(b) For the modified system with a feedforward path shown in Fig. 4.36(b), give the value of \(H_{f}\) so the system is Type 2 for reference inputs, and compute the \(K_{a}\) in this case.

(c) Is the resulting Type 2 property of this system robust with respect to changes in \(H_{f}\), that is, will the system remain Type 2 if \(H_{f}\) changes slightly?

4.15 A controller for a DC servo motor with transfer function \(G(s) =\) \(\frac{5}{s(s + 10)}\) has been designed with a unity feedback structure and has the transfer function \(D_{c}(s) = 6\frac{(s + 7)(s + 9)}{s(s + 12)}\).

(a) Find the system type for reference tracking and the corresponding error constant for this system.

(b) If a disturbance torque \(w\) adds to the control so that the input to the process is \(u + w\), what is the system type and corresponding error constant with respect to disturbance rejection?
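One way to check an error-constant calculation of this kind is with Matlab's Symbolic Math Toolbox; a minimal sketch for the transfer functions of this problem, where the exponent on \(s\) must be chosen to match the number of loop poles at the origin:

syms s
G  = 5/(s*(s + 10));                  % plant from this problem
Dc = 6*(s + 7)*(s + 9)/(s*(s + 12));  % controller from this problem
Ka = limit(s^2*Dc*G, s, 0)            % error constant, assuming a Type 2 loop

This is exactly the limit in the error-constant formula of the chapter summary, evaluated symbolically.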

4.16 A compensated motor position control system is shown in Fig. 4.37. Assume the sensor dynamics are \(H(s) = 1\).

(a) Can the system track a step reference input \(r\) with zero steady-state error? If yes, give the value of the velocity constant.

(b) Can the system reject a step disturbance \(w\) with zero steady-state error? If yes, give the value of the velocity constant.

(c) Compute the sensitivity of the closed-loop transfer function to changes in the plant pole at -3 .

(d) In some instances there are dynamics in the sensor. Repeat parts (a) to (c) for \(H(s) = \frac{25}{s + 25}\) and compare the corresponding velocity constants.

Figure 4.37

Control system for Problem 4.16

Figure 4.38

Single input-single output unity feedback system with disturbance inputs

Figure 4.39

System using integral control

4.17 The general unity feedback system shown in Fig. 4.38 has disturbance inputs \(w_{1},w_{2}\), and \(w_{3}\) and is asymptotically stable. Also,

\[G_{1}(s) = \frac{K_{1}\prod_{i = 1}^{m_{1}}\left( s + z_{1i} \right)}{s^{l_{1}}\prod_{i = 1}^{n_{1}}\left( s + p_{1i} \right)},\quad G_{2}(s) = \frac{K_{2}\prod_{i = 1}^{m_{2}}\left( s + z_{2i} \right)}{s^{l_{2}}\prod_{i = 1}^{n_{2}}\left( s + p_{2i} \right)} \]

Show that the system is of Type 0, Type \(l_{1}\), and Type \(\left( l_{1} + l_{2} \right)\) with respect to disturbance inputs \(w_{1}\), \(w_{2}\), and \(w_{3}\), respectively.

4.18 One possible representation of an automobile speed-control system with integral control is shown in Fig. 4.39.

(a) With a zero reference velocity input \(v_{c} = 0\), find the transfer function relating the output speed \(v\) to the wind disturbance \(w\).

(b) What is the steady-state response of \(v\) if \(w\) is a unit-ramp function?

(c) What type is this system in relation to reference inputs? What is the value of the corresponding error constant?

(d) What is the type and corresponding error constant of this system in relation to tracking the disturbance \(w\) ?

4.19 For the feedback system shown in Fig. 4.40, find the value of \(\alpha\) that will make the system Type 1 for \(K = 3\). Give the corresponding velocity constant. Show that the system is not robust by using this value of \(\alpha\) and computing the tracking error \(e = r - y\) to a step reference for \(K = 4\) and \(K = 6\).

Figure 4.40

Control system for Problem 4.19

4.20 Suppose you are given the system depicted in Fig. 4.41(a), where the plant parameter \(a\) is subject to variations.

Figure 4.41

Control system for Problem 4.20: (a), (b)

(a) Find \(G(s)\) so that the system shown in Fig. 4.41(b) has the same transfer function from \(r\) to \(y\) as the system in Fig. 4.41(a).

(b) Assume that \(a = 1\) is the nominal value of the plant parameter. What is the system type and the error constant in this case?

(c) Now assume that \(a = 1 + \delta a\), where \(\delta a\) is some perturbation to the plant parameter. What is the system type and the error constant for the perturbed system?

4.21 Two feedback systems are shown in Fig. 4.42.

(a) Determine values for \(K_{1},K_{2}\), and \(K_{3}\) so that both systems:

(i) Exhibit zero steady-state error to step inputs (that is, both are Type 1), and

(ii) have a static velocity error constant \(K_{v} = 10\) when \(K_{0} = 7.5\).

(b) Suppose \(K_{0}\) undergoes a small perturbation: \(K_{0} \rightarrow K_{0} + \delta K_{0}\). What effect does this have on the system type in each case? Which system has a type that is robust to this perturbation? Which system do you think would be preferred?

Figure 4.42

Two feedback systems for Problem 4.21: (a), (b)

Figure 4.43

Control system for Problem 4.22

4.22 You are given the system shown in Fig. 4.43, where the feedback gain \(\beta\) is subject to variations. You are to design a controller for this system so that the output \(y(t)\) accurately tracks the reference input \(r(t)\).

(a) Let \(\beta = 1\). You are given the following three options for the controller \(D_{ci}(s)\) :

\[D_{c1}(s) = k_{P},\ D_{c2}(s) = \frac{k_{P}s + k_{I}}{s},\ D_{c3}(s) = \frac{k_{P}s^{2} + k_{I}s + k_{2}}{s^{2}} \]

Choose the controller (including particular values for the controller constants) that will result in a Type 1 system with a steady-state error to a unit reference ramp of less than \(\frac{1}{15}\).

(b) Next, suppose there is some attenuation in the feedback path that is modeled by \(\beta = 0.85\). Find the steady-state error due to a ramp input for your choice of \(D_{ci}(s)\) in part (a).

(c) If \(\beta = 0.85\), what is the system type for part (b)? What is the value of the appropriate error constant?

4.23 Consider the system shown in Fig. 4.44.

(a) Find the transfer function from the reference input to the tracking error.

(b) For this system to respond to inputs of the form \(r(t) = t^{n}1(t)\) (where \(n < q\) ) with zero steady-state error, what constraint is placed on the open-loop poles \(p_{1},p_{2},\cdots,p_{q}\) ?

Figure 4.44

Control system for Problem 4.23

4.24 Consider the system shown in Fig. 4.45.

(a) Compute the transfer function from \(R(s)\) to \(E(s)\) and determine the steady-state error \(\left( e_{SS} \right)\) for a unit-step reference input signal, and a unit-ramp reference input signal.

(b) Determine the locations of the closed-loop poles of the system.

(c) Select the system parameters \(\left( k,k_{P},k_{I} \right)\) such that the closed-loop system has damping coefficient \(\zeta = 0.707\) and \(\omega_{n} = 1\). What percent overshoot would you expect in \(y(t)\) for unit-step reference input?

(d) Find the tracking error signal as a function of time, \(e(t)\), if the reference input to the system, \(r(t)\), is a unit-ramp.

(e) How can we select the PI controller parameters \(\left( k_{P},k_{I} \right)\) to ensure that the amplitude of the transient tracking error, \(|e(t)|\), from part (d) is small?

(f) What is the transient behavior of the tracking error, \(e(t)\), for a unitramp reference input if the magnitude of the integral gain, \(k_{I}\), is very large? Does the unit-ramp response have an overshoot in that case?

Figure 4.45

Control system diagram for Problem 4.24

4.25 A linear ODE model of the DC motor with negligible armature inductance \(\left( L_{a} = 0 \right)\) and with a disturbance torque \(w\) was given earlier in the chapter; it is restated here, in slightly different form, as

\[\frac{JR_{a}}{K_{t}}{\overset{¨}{\theta}}_{m} + K_{e}{\overset{˙}{\theta}}_{m} = v_{a} + \frac{R_{a}}{K_{t}}w \]

where \(\theta_{m}\) is measured in radians. Dividing through by the coefficient of \({\overset{¨}{\theta}}_{m}\), we obtain

\[{\overset{¨}{\theta}}_{m} + a_{1}{\overset{˙}{\theta}}_{m} = b_{0}v_{a} + c_{0}w \]

where

\[a_{1} = \frac{K_{e}K_{t}}{JR_{a}},\ b_{0} = \frac{K_{t}}{JR_{a}},\ c_{0} = \frac{1}{J} \]

With rotating potentiometers, it is possible to measure the positioning error between \(\theta_{m}\) and the reference angle \(\theta_{r}\), that is, \(e = \theta_{r} - \theta_{m}\). With a tachometer, we can measure the motor speed \({\overset{˙}{\theta}}_{m}\). Consider using feedback of the error \(e\) and the motor speed \({\overset{˙}{\theta}}_{m}\) in the form

\[v_{a} = K\left( e - T_{D}{\overset{˙}{\theta}}_{m} \right) \]

where \(K\) and \(T_{D}\) are controller gains to be determined.

(a) Draw a block diagram of the resulting feedback system showing both \(\theta_{m}\) and \({\overset{˙}{\theta}}_{m}\) as variables in the diagram representing the motor.

(b) Suppose the numbers work out so that \(a_{1} = 80,b_{0} = 320\), and \(c_{0} = 11\). If there is no load torque \((w = 0)\), what speed (in rpm) results from \(v_{a} = 120\text{ }V\) ?
(c) Using the parameter values given in part (b), and the results of Chapter 3, find \(K\) and \(T_{D}\) so that a step change in \(\theta_{r}\) with zero load torque results in a transient that has an approximately \(14\%\) overshoot and that settles to within \(4\%\) of steady state in less than \(0.03\) sec.

(d) Derive an expression for the steady-state error to a reference angle input, and compute its value for your design in part (c) assuming \(\theta_{r} = 1rad\).

(e) Derive an expression for the steady-state error to a constant disturbance torque when \(\theta_{r} = 0\) and compute its value for your design in part (c) assuming \(w = 1.2\).

4.26 We wish to design an automatic speed control for an automobile. Assume that (1) the car has a mass \(m\) of \(1100\text{ }kg\), (2) the accelerator is the control \(U\) and supplies a force on the automobile of \(12\text{ }N\) per degree of accelerator motion, and (3) air drag provides a friction force proportional to velocity of \(11\text{ }N \cdot sec/m\).

(a) Assume the velocity changes are given by

\[V(s) = \frac{1}{s + 0.01}U(s) + \frac{0.07}{s + 0.01}W(s) \]

where \(V\) is given in meters per second, \(U\) is in degrees, and \(W\) is the percent grade of the road. Design a proportional control law \(U =\) \(- k_{P}\left( V - V_{d} \right)\) that will maintain a velocity error of less than \(1\text{ }m/sec\) in the presence of a constant \(1.5\%\) grade.

(b) Discuss what advantage (if any) integral control would have for this problem.

(c) Assuming that pure integral control (that is, no proportional term) is advantageous, select the feedback gain so that the roots have critical damping \((\zeta = 1)\).

4.27 Consider the automobile speed control system depicted in Fig. 4.46.

Figure 4.46

Automobile speed-control system

(a) Find the transfer functions from \(W(s)\) and from \(R(s)\) to \(Y(s)\).

(b) Assume the desired speed is a constant reference \(r_{o}\), so \(R(s) = \frac{r_{o}}{s}\). Assume the road is level, so \(w(t) = 0\). Compute values of the gains \(k_{P}\), \(H_{r}\), and \(H_{y}\) to guarantee that

\[\lim_{t \rightarrow \infty} y(t) = r_{o}. \]

Figure 4.47

Unity feedback system

Figure 4.48

Feedback system for Problem 4.30

Include both the open-loop (assuming \(H_{y} = 0\) ) and feedback cases \(\left( H_{y} \neq 0 \right)\) in your discussion.

(c) Repeat part (b) assuming a constant grade disturbance \(W(s) = \frac{w_{o}}{s}\) is present in addition to the reference input. In particular, find the variation in speed due to the grade change for both the feedforward and feedback cases. Use your results to explain (1) why feedback control is necessary and (2) how the gain \(k_{P}\) should be chosen to reduce steady-state error.

(d) Assume \(w(t) = 0\) and the gain \(A\) undergoes the perturbation \(A + \delta A\). Determine the error in speed due to the gain change for both the feedforward and feedback cases. How should the gains be chosen in this case to reduce the effects of \(\delta A\) ?

4.28 Prove that the step response of a stable Type 2 closed-loop system must always have a non-zero overshoot.

4.29 Consider the feedback control system shown in Figure 4.47.

(a) Assume \(D_{c}(s) = K\). What values of \(K\) would make the closed-loop system stable? Explain all your reasoning.

(b) Now consider the controller of the form \(D_{c}(s) = \frac{1}{s^{n}}\) with \(n\) being a non-negative integer. For what values of \(n\) is the closed-loop system stable? Explain all your reasoning.

4.30 A feedback control system is shown in Fig. 4.48.

(a) Determine the system type with respect to the reference input.

(b) Compute the steady-state tracking errors, \(e\), for unit step and ramp inputs.

(c) Determine the system type with respect to the disturbance input, \(w\).

(d) Compute the steady-state errors, \(e\), for unit step and ramp disturbance inputs.

4.31 Consider the closed-loop system shown in Fig. 4.49.

(a) What is the condition on the gain, \(K\), for the closed-loop system to be stable?

(b) What is the system type with respect to the reference input, \(r\) ?

Figure 4.49

Feedback system for Problem 4.31

(c) What is the system type with respect to the disturbance input, \(w\) ?

(d) Prove that the system can track a sinusoidal input, \(r = sin(0.2t)\), with zero steady-state error.

4.32 A servomechanism system is shown in Fig. 4.50.

(a) Determine the conditions on the PID gain parameters to guarantee closed-loop stability.

(b) What is the system type with respect to the reference input?

(c) What is the system type with respect to the disturbance inputs \(w_{1}\) and \(w_{2}\) ?

Figure 4.50

Feedback system for Problem 4.32

Figure 4.51

Multivariable control system for Problem 4.33

4.33 Consider the multivariable system shown in Fig. 4.51. Assume the system is stable. Find the transfer functions from each disturbance input to each output, and determine the steady-state values of \(y_{1}\) and \(y_{2}\) for constant disturbances. We define a multivariable system to be Type \(k\) with respect to polynomial inputs at \(w_{i}\) if the steady-state value of every output is zero for any combination of inputs of degree less than \(k\) and at least one output is a non-zero constant for an input of degree \(k\). What is the system type with respect to disturbance rejection at \(w_{1}\)? At \(w_{2}\)?

151. Problems for Section 4.3: The Three-Term Controller: PID Control

4.34 For the system shown in Figure 4.47,

(a) Design a proportional controller to stabilize the system.

(b) Design a PD controller to stabilize the system.

(c) Design a PI controller to stabilize the system.

(d) What is the velocity error coefficient \(K_{v}\) for the system in part (c)?

4.35 Consider the feedback control system with the plant transfer function \(G(s) = \frac{1}{(s + 0.1)(s + 0.5)}\).

(a) Design a proportional controller so the closed-loop system has damping of \(\zeta = 0.707\). Under what conditions on \(k_{P}\) is the closed-loop system stable?

(b) Design a PI controller so that the closed-loop system has no overshoot. Under what conditions on \(\left( k_{P},k_{I} \right)\) is the closed-loop system stable?

(c) Design a PID controller such that the settling time is less than \(1.7\) sec.

4.36 Consider the liquid level control system with the plant transfer function \(G(s) = \frac{14}{s^{2} + 9s + 14}\).

(a) Design a proportional controller so that the damping ratio is \(\zeta = 0.6\).

(b) Design a PI controller so that the rise time is less than \(1\) sec.

(c) Design a PD controller so that the rise time is less than \(0.7\) sec.

(d) Design a PID controller so that the settling time is less than \(1.8\) sec.

4.37 Consider the process control system with the plant transfer function \(G(s) = \frac{10}{(8s + 1)(7s + 1)}\).

(a) Design a PI controller such that the rise time is less than \(2.5\) sec.

(b) Design a PID controller so that the system has no overshoot and the settling time is \(5\) sec.

(c) Design a controller such that the peak time is less than \(4.5\) sec.

4.38 Consider the multiple-integrator plant feedback control system shown in Fig. 4.52, where \(\mathcal{l}\) is an integer.

(a) Assume \(\mathcal{l} = 1\) (voltage-controlled oscillator used in the phase-locked loop of telecommunication systems). Let \(D_{c}(s) = \frac{K(s + 5)}{s}\). Prove that it is possible to stabilize the system with this dynamic controller. Use the Routh test to determine the range of the gain \(K\) for closed-loop stability.

(b) Assume \(\mathcal{l} = 2\) (drone or satellite). Let \(D_{c}(s) = \frac{K(s + 5)^{2}}{s}\). Prove that it is possible to stabilize the system with this dynamic controller. Again use the Routh test to determine the range of the gain \(K\) for closed-loop stability.

(c) Assume \(\mathcal{l} = 3\) (hospital delivery robot or the Apollo Lunar Module). Let \(D_{c}(s) = \frac{K(s + 5)^{3}}{s}\). Prove that it is possible to stabilize the system with this dynamic controller. Again use the Routh test to determine the range of the gain \(K\) for closed-loop stability.

(d) Assume \(\mathcal{l} \geq 4\). What form of controller will be required to stabilize the system?

Figure 4.52

Multiple-integrator plant system

Figure 4.53

Feedback system for Problem 4.39

Figure 4.54

Feedback system for Problem 4.40

4.39 The transfer functions for a generator speed control system are shown in Fig. 4.53. The speed sensor is fast enough that its dynamics can be neglected and the diagram shows the equivalent unity feedback system.

(a) Assuming the reference is zero, what is the steady-state error due to a step disturbance torque of \(1\ N \cdot m\)? What must the amplifier gain \(K\) be in order to make the magnitude of the steady-state error less than \(0.008\ rad/sec\), that is, \(\left| e_{ss} \right| \leq 0.008\ rad/sec\)?

(b) Plot the roots of the closed-loop system in the complex plane, and accurately sketch the time response of the output for a step reference input using the gain \(K\) determined in part (a).

(c) Plot the region in the complex plane of acceptable closed-loop poles corresponding to the specifications of a \(1\%\) settling time of \(t_{s} \leq 0.23\) sec and an overshoot \(M_{p} \leq 2\%\).

(d) A PD controller is added in the feedback loop while using the gain \(K\) determined in part (a). Select the values for \(k_{p}\) and \(k_{d}\) for the PD controller which will meet the specifications in part (c).

(e) How would the disturbance-induced steady-state error change with the new control scheme in part (d)? How could the steady-state error to a disturbance torque be eliminated entirely?

4.40 Consider the system shown in Fig. 4.54 with PI control.

(a) Determine the transfer function from \(R\) to \(Y\).

(b) Determine the transfer function from \(W\) to \(Y\).

(c) Under what conditions on \(\left( k_{P},k_{I} \right)\) is the closed-loop system stable?

(d) What is the system type with respect to the reference input and with respect to disturbance rejection?

4.41 Consider the second-order plant with transfer function

\[G(s) = \frac{1}{(s + 4.5)(s + 5.7)} \]

and in a unity feedback structure.

(a) Determine the system type and error constant with respect to tracking polynomial reference inputs of the system for P \(\left\lbrack D_{c} = k_{P} \right\rbrack\), PD \(\left\lbrack D_{c} = k_{P} + k_{D}s \right\rbrack\), and PID \(\left\lbrack D_{c} = k_{P} + \frac{k_{I}}{s} + k_{D}s \right\rbrack\) controllers. Let \(k_{P} = 75\), \(k_{I} = 38\), and \(k_{D} = 0.1\).

(b) Determine the system type and error constant of the system with respect to disturbance inputs for each of the three regulators in part (a) assuming the disturbance \(w(t)\) is introduced at the input to the plant.

(c) Is this system better at tracking references or rejecting disturbances? Explain your responses briefly.

With PID, verify your results for parts (a) and (b) using Matlab by plotting unit step and ramp responses for both tracking and disturbance rejection.
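A minimal Matlab sketch of this check (the closed-loop transfer functions follow from the unity-feedback structure stated in the problem, with the disturbance entering at the plant input):

G  = tf(1, conv([1 4.5], [1 5.7]));   % plant 1/((s+4.5)(s+5.7))
kP = 75; kI = 38; kD = 0.1;           % gains given in part (a)
Dc = tf([kD kP kI], [1 0]);           % PID: (kD*s^2 + kP*s + kI)/s
T  = feedback(Dc*G, 1);               % reference-to-output transfer function
Tw = feedback(G, Dc);                 % disturbance-to-output transfer function
step(T), figure, step(Tw)             % step responses; for ramps, see Problem 4.43(e)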

4.42 The DC motor speed control shown in Fig. 4.55 is described by the differential equation

\[\overset{˙}{y} + 60y = 600v_{a} - 1500w \]

where \(y\) is the motor speed, \(v_{a}\) is the armature voltage, and \(w\) is the load torque. Assume the armature voltage is computed using the PI control law

\[v_{a} = - \left( k_{P}e + k_{I}\int_{0}^{t}\mspace{2mu}\mspace{2mu} edt \right) \]

where \(e = r - y\).

(a) Compute the transfer function from \(W\) to \(Y\) as a function of \(k_{P}\) and \(k_{I}\).

(b) Compute values for \(k_{P}\) and \(k_{I}\) so the characteristic equation of the closed-loop system will have roots at \(- 60 \pm 60j\).

4.43 For the system in Fig. 4.55, compute the following steady-state errors:

(a) to a unit-step reference input;

(b) to a unit-ramp reference input;

Figure 4.55

DC Motor speed-control block diagram for Problems 4.42 and 4.43

(c) to a unit-step disturbance input;

(d) for a unit-ramp disturbance input.

(e) Verify your answers to (a) and (d) using Matlab. Note that a ramp response can be generated as the step response of a system modified by an added integrator at the reference input.
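A minimal sketch of this ramp trick (the closed-loop transfer function here is a placeholder, not the system of Fig. 4.55):

T     = tf(1, [1 2 1]);   % placeholder closed-loop transfer function (assumed)
integ = tf(1, [1 0]);     % integrator added at the reference input
step(T*integ)             % the step response of T(s)/s is the ramp response of T(s)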

4.44 Consider the satellite-attitude control problem shown in Fig. 4.56 where the normalized parameters are

\[\begin{matrix} J & \ = 10\ \text{spacecraft inertia,}\ N \cdot m \cdot \sec^{2}/rad, \\ \theta_{r} & \ = \text{reference satellite attitude,}\ rad, \\ \theta & \ = \text{actual satellite attitude,}\ rad, \\ H_{y} & \ = 1\ \text{sensor scale factor,}\ V/rad, \\ H_{r} & \ = 1\ \text{reference sensor scale factor,}\ V/rad, \\ w & \ = \text{disturbance torque,}\ N \cdot m. \end{matrix}\]

(a) Use proportional control, \(P\), with \(D_{c}(s) = k_{P}\), and give the range of values for \(k_{P}\) for which the system will be stable.

(b) Use PD control, let \(D_{c}(s) = \left( k_{P} + k_{D}s \right)\), and determine the system type and error constant with respect to reference inputs.

(c) Use PD control, let \(D_{c}(s) = \left( k_{P} + k_{D}s \right)\), and determine the system type and error constant with respect to disturbance inputs.

(d) Use PI control, let \(D_{c}(s) = \left( k_{P} + \frac{k_{I}}{s} \right)\), and determine the system type and error constant with respect to reference inputs.

(e) Use PI control, let \(D_{c}(s) = \left( k_{P} + \frac{k_{I}}{s} \right)\), and determine the system type and error constant with respect to disturbance inputs.

(f) Use PID control, let \(D_{c}(s) = \left( k_{P} + \frac{k_{I}}{s} + k_{D}s \right)\), and determine the system type and error constant with respect to reference inputs.

(g) Use PID control, let \(D_{c}(s) = \left( k_{P} + \frac{k_{I}}{s} + k_{D}s \right)\), and determine the system type and error constant with respect to disturbance inputs.

Figure 4.56

Satellite attitude control

4.45 Automatic ship steering is particularly useful in heavy seas when it is important to maintain the ship along an accurate path. Such a control system for a large tanker is shown in Fig. 4.57, with the plant transfer function relating heading changes to rudder deflection in radians.

(a) Write the differential equation that relates the heading angle to rudder angle for the ship without feedback.

Figure 4.57

Ship-steering control system for Problem 4.45

(b) This control system uses simple proportional feedback with a gain of unity. Is the closed-loop system stable as shown? (Hint: Use Routh's criterion.)

(c) Is it possible to stabilize this system by changing the proportional gain from unity to a lower value?

(d) Use Matlab to design a dynamic controller of the form \(D_{c}(s) =\) \(K\left( \frac{s + a}{s + b} \right)^{2}\) so the closed-loop system is stable and in response to a step heading command it has zero steady-state error and less than \(10\%\) overshoot. Are these reasonable values for a large tanker?

4.46 The unit-step response of a paper machine is shown in Fig. 4.58(a) where the input into the system is stock flow onto the wire and the output is basis weight (thickness). The time delay and slope of the transient response may be determined from the figure.

Figure 4.58

Paper-machine response data for Problem 4.46: (a), (b)

(a) Find the proportional-, PI-, and PID-controller parameters using the Ziegler-Nichols transient-response method.

(b) Using proportional feedback control, control designers have obtained a closed-loop system with the unit impulse response shown in Fig. 4.58(b). When the gain \(K_{u} = 8.556\), the system is on the verge of instability. Determine the proportional-, PI-, and PID-controller parameters according to the Ziegler-Nichols ultimate sensitivity method.
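The ultimate-sensitivity rules themselves are simple enough to capture in a few lines of Matlab; in this sketch the ultimate period Pu is a placeholder that must be read from Fig. 4.58(b):

Ku = 8.556;   % ultimate gain given in the problem
Pu = 1.0;     % ultimate period in sec: a placeholder value only
Kp_P   = 0.5*Ku;                                    % proportional control
Kp_PI  = 0.45*Ku;  Ti_PI  = Pu/1.2;                 % PI control
Kp_PID = 0.6*Ku;   Ti_PID = Pu/2;   Td_PID = Pu/8;  % PID control

These constants are the standard Ziegler-Nichols ultimate-sensitivity values.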

4.47 A paper machine has the transfer function

\[G(s) = \frac{e^{- 2s}}{3s + 1}, \]

where the input is stock flow onto the wire and the output is basis weight or thickness.

Figure 4.59

Unit impulse response for the paper machine in Problem 4.47

Figure 4.60

Block diagram for Problem 4.48

(a) Find the PID-controller parameters using the Ziegler-Nichols tuning rules.

(b) The system becomes marginally stable for a proportional gain of \(K_{u} = 3.044\), as shown by the unit impulse response in Fig. 4.59. Find the optimal PID-controller parameters according to the Ziegler-Nichols tuning rules.

152. $\bigtriangleup$ Problems for Section 4.4: Feedforward Control by Plant Model Inversion

4.48 Consider the DC motor speed-control system shown in Fig. 4.60 with proportional control.

(a) Add feedforward control to eliminate the steady-state tracking error for a step reference input.

(b) Add feedforward control to eliminate the effect of a constant output disturbance signal, \(w\), on the output of the system.

153. $\bigtriangleup$ Problems for Section 4.5: Introduction to Digital Control

4.49 Compute the discrete equivalents for the following possible controllers using the trapezoid rule (Tustin's method) discussed in Appendix W4.5, available online at www.pearsonglobaleditions.com, and in Section 8.3.1. Let \(T_{s} = 0.05\) sec in each case.

(a) \(D_{c1}(s) = (s + 2)/2\),

(b) \(D_{c2}(s) = 2\frac{s + 2}{s + 4}\),
(c) \(D_{c3}(s) = 5\frac{s + 2}{s + 10}\),

(d) \(D_{c4}(s) = 5\frac{(s + 2)(s + 0.1)}{(s + 10)(s + 0.01)}\).
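A minimal Matlab sketch for one of these cases (part (b) is used here; the others follow the same pattern), which also yields the coefficients needed for the difference equations of Problem 4.50:

Dc2 = tf(2*[1 2], [1 4]);         % Dc2(s) = 2(s+2)/(s+4)
Dd2 = c2d(Dc2, 0.05, 'tustin');   % Tustin (trapezoid-rule) equivalent, Ts = 0.05 sec
[num, den] = tfdata(Dd2, 'v')     % difference-equation coefficients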

4.50 Give the difference equations corresponding to the discrete controllers found in Problem 4.49, respectively.

(a) Part 1.

(b) Part 2.

(c) Part 3.

(d) Part 4.

154. The Root-Locus Design Method

155. A Perspective on the Root-Locus Design Method

In Chapter 3, we related the features of a step response, such as rise time, overshoot, and settling time, to pole locations in the \(s\)-plane of the transform of a second-order system characterized by the natural frequency \(\omega_{n}\), the damping ratio \(\zeta\), and the real part \(\sigma\). This relationship is shown graphically in Fig. 3.16. We also examined the changes in these transient-response features when a pole or a zero is added to the transfer function. In Chapter 4, we saw how feedback can improve steady-state errors and can also influence dynamic response by changing the system's pole locations. In this chapter, we present a specific technique that shows how changes in one of a system's parameters will modify the roots of the characteristic equation, which are the closed-loop poles, and thus change the system's dynamic response. The method was developed by W. R. Evans, who gave rules for plotting the paths of the roots, a plot he called the Root Locus. With the development of Matlab and similar software, the rules are no longer needed for detailed plotting, but we feel it is essential for a control designer to understand how proposed dynamic controllers will influence a locus as a guide in the design process. We also feel it is important to understand the basics of how loci are generated in order to perform sanity checks on the computer results. For these reasons, the study of the Evans rules is important.

The root locus is most commonly used to study the effect of loop gain variations; however, the method is general and can be used to plot the roots of any polynomial with respect to any one real parameter that enters the equation linearly. For example, the root-locus method can be used to plot the roots of a characteristic equation as the gain of a velocity sensor feedback changes, or the parameter can be a physical parameter, such as motor inertia or armature inductance.

156. Chapter Overview

We open Section 5.1 by illustrating the root locus for some simple feedback systems for which the equations can be solved directly. In Section 5.2, we will show how to put an equation into the proper form for developing the rules of root-locus behavior. In Section 5.3, this approach will be applied to determine the locus for a number of typical control problems, which illustrate the factors that influence the final shape. Matlab is used for detailed plotting of specific loci. When adjustment of the selected parameter alone cannot produce a satisfactory design, designs using other parameters can be studied, or dynamic elements such as lead, lag, or notch compensations can be introduced, as described in Section 5.4. In Section 5.5, the uses of the root locus for design will be demonstrated in two examples, including a comprehensive design for the attitude control of a small airplane. In Section 5.6, the root-locus method will be extended to guide the design of systems with a negative parameter, systems with more than one variable parameter, and systems with simple time delay. Finally, Section 5.7 will give historical notes on the origin of root-locus design.

156.1. Root Locus of a Basic Feedback System

We begin with the basic feedback system shown in Fig. 5.1. For this system, the closed-loop transfer function is

\[\frac{Y(s)}{R(s)} = \mathcal{T}(s) = \frac{D_{c}(s)G(s)}{1 + D_{c}(s)G(s)H(s)} \]

and the characteristic equation, whose roots are the poles of this transfer function, is

\[1 + D_{c}(s)G(s)H(s) = 0 \]

To put the equation in a form suitable for study of the roots as a parameter changes, we first put the equation in polynomial form and select the parameter of interest, which we will call \(K\). We assume we can define component polynomials \(a(s)\) and \(b(s)\) so the characteristic polynomial is in the form \(a(s) + Kb(s)\). We then define the transfer function \(L(s) = \frac{b(s)}{a(s)}\), so the characteristic equation can be written as\(^{1}\)

\[1 + KL(s) = 0,\quad\text{where}\quad L(s) = \frac{b(s)}{a(s)} \]

Figure 5.1

Basic closed-loop block diagram

Evans's method

If, as is often the case, the parameter is the gain of the controller, then \(L(s)\) is simply proportional to \(D_{c}(s)G(s)H(s)\). Evans suggested we plot the locus of all possible roots of Eq. (5.3) as \(K\) varies from zero to infinity, and then use the resulting plot to aid us in selecting the best value of \(K\). Furthermore, by studying the effects of additional poles and zeros on this graph, we can determine the consequences of additional dynamics added to \(D_{c}(s)\) as compensation in the loop. We thus have a tool not only for selecting the specific parameter value, but for designing the dynamic compensation as well. The graph of all possible roots of Eq. (5.3) relative to parameter \(K\) is called the root locus, and the set of rules to construct this graph is called the root-locus method of Evans. We begin our discussion of the method with the mechanics of constructing a root locus, using the equation in the form of Eq. (5.3) and \(K\) as the variable parameter.

To set the notation for our study, we assume here the transfer function \(L(s)\) is a rational function whose numerator is a monic \(\ ^{2}\) polynomial \(b(s)\) of degree \(m\) and whose denominator is a monic polynomial \(a(s)\) of degree \(n\) such that \(\ ^{3}n \geq m\). Therefore, \(m =\) the number of zeros, while \(n =\) the number of poles. We can factor these polynomials as

\[\begin{matrix} b(s) & \ = s^{m} + b_{1}s^{m - 1} + \cdots + b_{m} \\ & \ = \left( s - z_{1} \right)\left( s - z_{2} \right)\cdots\left( s - z_{m} \right) \\ & \ = \prod_{i = 1}^{m}\left( s - z_{i} \right), \\ a(s) & \ = s^{n} + a_{1}s^{n - 1} + \cdots + a_{n}, \\ & \ = \prod_{i = 1}^{n}\left( s - p_{i} \right). \end{matrix}\]

The roots of \(b(s) = 0\) are the zeros of \(L(s)\) and are labeled \(z_{i}\), and the roots of \(a(s) = 0\) are the poles of \(L(s)\) and are labeled \(p_{i}\). The roots of the characteristic equation itself are \(r_{i}\) from the factored form \((n > m)\),

\[a(s) + Kb(s) = \left( s - r_{1} \right)\left( s - r_{2} \right)\cdots\left( s - r_{n} \right) \]

We may now state the root-locus problem expressed in Eq. (5.3) in several equivalent but useful ways. Each of the following equations has the same roots:

\[\begin{matrix} 1 + KL(s) & \ = 0, \\ 1 + K\frac{b(s)}{a(s)} & \ = 0, \\ a(s) + Kb(s) & \ = 0, \\ L(s) & \ = - \frac{1}{K}. \end{matrix}\]

Equations (5.6)-(5.9) are sometimes referred to as the root-locus form or Evans form of a characteristic equation. The root locus is the set of values of \(s\) for which Eqs. (5.6)-(5.9) hold for some positive real value \(\ ^{4}\) of \(K\). Because the solutions to Eqs. (5.6)-(5.9) are the roots of the closed-loop system characteristic equation and are thus closedloop poles of the system, the root-locus method can be thought of as a method for inferring dynamic properties of the closed-loop system as the parameter \(K\) changes.

157. Root Locus of a Motor Position Control

In Chapter 2, we saw that a normalized transfer function of a DC motor voltage-to-position can be

\[\frac{\Theta_{m}(s)}{V_{a}(s)} = \frac{Y(s)}{U(s)} = G(s) = \frac{A}{s(s + c)} \]

Solve for the root locus of closed-loop poles of the system created by feeding back the output \(\Theta_{m}\) as shown in Fig. 5.1 with respect to the parameter \(A\) if \(D_{c}(s) = H(s) = 1\) and also \(c = 1\).

Figure 5.2

Root locus for

\[L(s) = \frac{1}{s(s + 1)} \]

Solution. In terms of our notation, the values are

\[\begin{matrix} L(s) & \ = \frac{1}{s(s + 1)},\ b(s) = 1,\ m = 0,\ z_{i} = \{\text{empty}\}, \\ K & \ = A,\ a(s) = s^{2} + s,\ n = 2,\ p_{i} = 0, - 1. \end{matrix}\]

From Eq. (5.8), the root locus is a graph of the roots of the quadratic equation

\[a(s) + Kb(s) = s^{2} + s + K = 0. \]

Using the quadratic formula, we can immediately express the roots of Eq. (5.11) as

\[r_{1},r_{2} = - \frac{1}{2} \pm \frac{\sqrt{1 - 4K}}{2} \]

A plot of the corresponding root locus is shown in Fig. 5.2. For \(0 \leq K \leq 1/4\), the roots are real between \(-1\) and 0. At \(K = 1/4\) there are two roots at \(-1/2\), and for \(K > 1/4\) the roots become complex, with real parts constant at \(-1/2\) and imaginary parts that increase essentially in proportion to the square root of \(K\). The dashed lines in Fig. 5.2 correspond to roots with a damping ratio \(\zeta = 0.5\). The poles of \(L(s)\) at \(s = 0\) and \(s = -1\) are marked by the symbol \(\times\), and the points where the locus crosses the lines where the damping ratio equals 0.5 are marked with dots \(( \bullet )\). We can compute \(K\) at the point where the locus crosses \(\zeta = 0.5\) because we know that if \(\zeta = 0.5\), then \(\theta = 30^{\circ}\) and the magnitude of the imaginary part of the root is \(\sqrt{3}\) times the magnitude of the real part. Since the size of the real part is \(\frac{1}{2}\), from Eq. (5.12) we have

\[\frac{\sqrt{4K - 1}}{2} = \frac{\sqrt{3}}{2} \]

and, therefore, \(K = 1\).
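This value is easy to check numerically; a minimal sketch (assuming the Control System Toolbox is available):

K = 1;
roots([1 1 K])        % closed-loop roots of s^2 + s + K
damp(tf(1, [1 1 K]))  % reports damping ratio 0.5 and natural frequency 1 rad/sec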

We can observe several features of this simple locus by looking at Eqs. (5.11) and (5.12) and Fig. 5.2. First, there are two roots and, thus, two loci, which we call branches of the root locus. At \(K = 0\) these branches begin at the poles of \(L(s)\) (which are at 0 and \(-1\)), as they should, since for \(K = 0\) the system is open-loop and the characteristic equation is \(a(s) = 0\). As \(K\) is increased, the roots move toward each other, coming together at \(s = -\frac{1}{2}\), and at that point they break away from the real axis; breakaway points are where roots move away from the real axis. After the breakaway point, the roots move off to infinity with equal real parts, so the sum of the two roots is always \(-1\). From the viewpoint of design, we see that by altering the value of the parameter \(K\), we can cause the closed-loop poles to be at any point along the locus in Fig. 5.2. If some points along this locus correspond to a satisfactory transient response, then we can complete the design by choosing the corresponding value of \(K\); otherwise, we are forced to consider a more complex controller. As we pointed out earlier, the root-locus technique is not limited to focusing on the system gain (\(K = A\) in Example 5.1); the same ideas are applicable for finding the locus with respect to any parameter that enters linearly in the characteristic equation.

Root Locus with Respect to Plant Open-Loop Pole and Zero

Consider the characteristic equation as in Example 5.1, again with \(D_{c}(s) = H(s) = 1\) except now, instead of a constant \(A\), let there be a zero in the form of \(A = s + 2c\). Select \(c\) as the parameter of interest in the equation

\[1 + G(s) = 1 + \frac{s + 2c}{s(s + c)} \]

Find the root locus of the characteristic equation with respect to \(c\).

Solution. The corresponding closed-loop characteristic equation in polynomial form is

\[s^{2} + s + c(s + 2) = 0 \]

Equation (5.6) applies directly if we rearrange Eq. (5.14) with the following definitions:

\[\begin{matrix} & L(s) = \frac{s + 2}{s(s + 1)},\ b(s) = s + 2,\ m = 1,\ z_{i} = - 2, \\ & K = c,\ a(s) = s(s + 1),\ n = 2,\ p_{i} = 0, - 1. \end{matrix}\]

Thus, the root-locus form of the characteristic equation is

\[1 + \frac{c(s + 2)}{s(s + 1)} = 0. \]

The solutions to Eq. (5.14) are easily computed using the quadratic formula as

\[r_{1},r_{2} = - \frac{c + 1}{2} \pm \frac{\sqrt{c^{2} - 6c + 1}}{2} \]

The locus of solutions is shown in Fig. 5.3, with the poles [roots of \(a(s)\)] again indicated by \(\times\)'s and the zero [root of \(b(s)\)] by the circle \((◯)\). Note that when \(c^{2} - 6c + 1 < 0\), the roots become complex. This happens when \(0.172 < c < 5.828\). When \(c < 0.172\), the roots are on the real axis between \(s = 0\) and \(s = -1\). There are two roots at \(s = -0.586\) when \(c = 0.172\), and another two roots at \(s = -3.41\) when \(c = 5.828\); such a point, where two or more roots merge on the real axis, is called a break-in point. When \(c > 5.828\), the two locus segments

Figure 5.3

Root locus versus parameter \(c\) for \(1 + G(s) = 1 + \frac{s + 2c}{s(s + c)} = 0\)

Break-in point

move in opposite directions along the real axis; one moves off toward infinity and the other toward the location of the zero at \(s = -2\).

Of course, computing the root locus for a quadratic equation is easy to do, since we can solve the characteristic equation for the roots, as was done in Eqs. (5.12) and (5.16), and directly plot these as a function of the parameter \(K\) or \(c\). To be useful, the method must be suitable for higher-order systems for which explicit solutions are difficult to obtain; therefore, rules for the construction of a general root locus were developed by Evans. With the availability of Matlab, these rules are no longer necessary to plot a specific locus, because the command rlocus(sys) will do that. However, in control design we are interested not only in a specific locus but also in how to modify the dynamics in such a way as to propose a system that will meet the dynamic response specifications for good control performance. For this purpose, it is very useful to be able to roughly sketch a locus so as to be able to evaluate the consequences of possible compensation alternatives. It is also important to be able to quickly evaluate the correctness of a computer-generated locus to verify that what is plotted by Matlab is in fact what was meant to be plotted. It is easy to get a constant wrong or to leave out a term, and GIGO\(^{5}\) is the well-known first rule of computation.
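For the two examples above, the corresponding Matlab commands are one-liners; a minimal sketch, with each system entered in Evans form:

rlocus(tf(1, [1 1 0]))       % Example 5.1: L(s) = 1/(s(s+1)), parameter K = A
rlocus(tf([1 2], [1 1 0]))   % Example 5.2: L(s) = (s+2)/(s(s+1)), parameter K = c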

157.1. Guidelines for Determining a Root Locus

We begin with a formal definition of a root locus. From the form of Eq. (5.6), we define the root locus this way:

Definition I. The root locus is the set of values of \(s\) for which \(1 + KL(s) = 0\) is satisfied as the real parameter \(K\) varies from 0 to \(+ \infty\). Typically, \(1 + KL(s) = 0\) is the characteristic equation of the system, and in this case the roots on the locus are the closed-loop poles of that system.

The basic root-locus rule; the phase of \(L(s) = 180^{\circ}\)

Figure 5.4

Measuring the phase of Eq. (5.18)
Now suppose we look at Eq. (5.9). If \(K\) is to be real and positive, \(L(s)\) must be real and negative. In other words, if we arrange \(L(s)\) in polar form as magnitude and phase, then the phase of \(L(s)\) must be \(180^{\circ}\) in order to satisfy Eq. (5.9). We can thus define the root locus in terms of this phase condition as follows.

Definition II. The root locus of \(L(s)\) is the set of points in the \(s\)-plane where the phase of \(L(s)\) is \(180^{\circ}\). To test whether a point in the \(s\)-plane is on the locus, we define the angle to the test point from a zero as \(\psi_{i}\) and the angle to the test point from a pole as \(\phi_{i}\); then Definition II is expressed as those points in the \(s\)-plane where, for an integer \(l\),

\[\sum_{}^{}\ \psi_{i} - \sum_{}^{}\ \phi_{i} = 180^{\circ} + 360^{\circ}(l - 1) \]

The immense merit of Definition II is that, while it is very difficult to solve a high-order polynomial by hand, computing the phase of a transfer function is relatively easy. The usual case is when \(K\) is real and positive, and we call this case the positive or \(\mathbf{180}^{\circ}\) locus. When \(K\) is real and negative, \(L(s)\) must be real and positive with a phase of \(0^{\circ}\), and this case is called the negative or \(0^{\circ}\) locus.

From Definition II we can, in principle, determine a positive root locus for a complex transfer function by measuring the phase and marking those places where we find \(180^{\circ}\). This direct approach can be illustrated by considering the example

\[L(s) = \frac{s + 1}{s(s + 5)\left\lbrack (s + 2)^{2} + 4 \right\rbrack} \]

In Fig. 5.4, the poles of this \(L(s)\) are marked \(\times\) and the zero is marked \(◯\). Suppose we select the test point \(s_{0} = - 1 + 2j\). We would like to test whether or not \(s_{0}\) lies on the root locus for some value of \(K\). For

this point to be on the locus, we must have \(\angle L\left( s_{0} \right) = 180^{\circ} + 360^{\circ}(l - 1)\) for some integer \(l\), or equivalently, from Eq. (5.18),

\[\angle\left( s_{0} + 1 \right) - \angle s_{0} - \angle\left( s_{0} + 5 \right) - \angle\left\lbrack \left( s_{0} + 2 \right)^{2} + 4 \right\rbrack = 180^{\circ} + 360^{\circ}(l - 1). \]

The angle from the zero term \(s_{0} + 1\) can be computed \(\ ^{6}\) by drawing a line from the location of the zero at -1 to the test point \(s_{0}\). In this case the line is vertical and has a phase angle marked \(\psi_{1} = 90^{\circ}\) in Fig. 5.4. In a similar fashion, the vector from the pole at \(s = 0\) to the test point \(s_{0}\) is shown with angle \(\phi_{1}\), and the angles of the two vectors from the complex poles at \(- 2 \pm 2j\) to \(s_{0}\) are shown with angles \(\phi_{2}\) and \(\phi_{3}\). The phase of the vector \(s_{0} + 5\) is shown with angle \(\phi_{4}\). From Eq. (5.19), we find the total phase of \(L(s)\) at \(s = s_{0}\) to be the sum of the phases of the numerator term corresponding to the zero minus the phases of the denominator terms corresponding to the poles:

\[\begin{matrix} \angle L & \ = \psi_{1} - \phi_{1} - \phi_{2} - \phi_{3} - \phi_{4} \\ & \ = 90^{\circ} - {116.6}^{\circ} - 0^{\circ} - 76^{\circ} - {26.6}^{\circ} \\ & \ = - {129.2}^{\circ}. \end{matrix}\]

Since the phase of \(L(s)\) is not \(180^{\circ}\), we conclude that \(s_{0}\) is not on the root locus, so we must select another point and try again. Although measuring phase is not particularly hard, measuring phase at every point in the \(s\)-plane is hardly practical. Therefore, to make the method practical, we need some general guidelines for determining where the root locus is. Evans developed a set of rules for this purpose, which we will illustrate by applying them to the root locus for

\[L(s) = \frac{1}{s\left\lbrack (s + 4)^{2} + 16 \right\rbrack} \]

We begin by considering the positive locus, which is by far the most common case. \(\ ^{7}\) The first three rules are relatively simple to remember and are essential for any reasonable sketch. The last two are less useful but are used occasionally. As usual, we assume Matlab or its equivalent is always available to make an accurate plot of a promising locus.
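As an aside, the phase computation above is easy to check numerically; a minimal sketch using plain complex arithmetic:

s0 = -1 + 2j;                                  % the test point of Fig. 5.4
L  = (s0 + 1)/(s0*(s0 + 5)*((s0 + 2)^2 + 4));  % the example L(s) evaluated at s0
rad2deg(angle(L))                              % about -129.2 deg, not 180 deg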

157.1.1. Rules for Determining a Positive \(\left( 180^{\circ} \right)\) Root Locus

RULE 1. The \(n\) branches of the locus start at the poles of \(L(s)\), and \(m\) of these branches end on the zeros of \(L(s)\). From the equation \(a(s) + Kb(s) = 0\), if \(K = 0\), the equation reduces to \(a(s) = 0\), whose roots are the poles. When \(K\) approaches infinity, \(s\) must be such that either \(b(s) = 0\) or \(s \rightarrow \infty\). Since there are \(m\) zeros where \(b(s) = 0\), \(m\) branches can end in these places. The case for \(s \rightarrow \infty\) is considered in Rule 3.

Figure 5.5

Rule 2. The real-axis parts of the locus are to the left of an odd number of poles and zeros

RULE 2. The loci are on the real axis to the left of an odd number of poles and zeros.

If we take a test point on the real axis, such as \(s_{0}\) in Fig. 5.5, we find that the angles \(\phi_{1}\) and \(\phi_{2}\) of the two complex poles cancel each other, as would the angles from complex conjugate zeros. Angles from real poles or zeros are \(0^{\circ}\) if the test point is to the right and \(180^{\circ}\) if the test point is to the left of a given pole or zero. Therefore, for the total angle to add to \(180^{\circ} + 360^{\circ}(l - 1)\), the test point must be to the left of an odd number of real-axis poles plus zeros as shown in Fig. 5.5.

RULE 3. For large \(s\) and \(K,n - m\) branches of the loci are asymptotic to lines at angles \(\phi_{l}\) radiating out from the point \(s = \alpha\) on the real axis, where

\[\begin{matrix} \phi_{l} & \ = \frac{180^{\circ} + 360^{\circ}(l - 1)}{n - m},\ l = 1,2,\ldots,n - m \\ \alpha & \ = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m} \end{matrix}\]

As \(K \rightarrow \infty\), the equation

\[L(s) = - \frac{1}{K} \]

can be satisfied only if \(L(s) = 0\). This can occur in two apparently different ways. In the first instance, as discussed in Rule 1, \(m\) roots will be found to approach the zeros of \(L(s)\). The second manner in which \(L(s)\) may go to zero is if \(s \rightarrow \infty\) since, by assumption, \(n\) is larger than \(m\). The asymptotes describe how these \(n - m\) roots approach \(s \rightarrow \infty\). For large \(s\), the equation

\[1 + K\frac{s^{m} + b_{1}s^{m - 1} + \cdots + b_{m}}{s^{n} + a_{1}s^{n - 1} + \cdots + a_{n}} = 0 \]

can be approximated \(\ ^{8}\) by

\[1 + K\frac{1}{(s - \alpha)^{n - m}} = 0 \]

This is the equation for a system in which there are \(n - m\) poles, all clustered at \(s = \alpha\). Another way to visualize this same result is to consider the picture we would see if we could observe the locations of poles and zeros from a vantage point of very large \(s\) : They would appear to cluster near the \(s\)-plane origin. Thus, \(m\) zeros would cancel the effects of \(m\) of the poles, and the other \(n - m\) poles would appear to be in the same place. We say the locus of Eq. (5.23) is asymptotic to the locus of Eq. (5.24) for large values of \(K\) and \(s\). We need to compute \(\alpha\) to find the locus for the resulting asymptotic system. To find the locus, we choose our search point \(s_{0}\) such that \(s_{0} = Re^{j\phi}\) for some large fixed value of \(R\) and variable \(\phi\). Since all poles of this simple system are in the same place, the angle of its transfer function is \(180^{\circ}\) if all \(n - m\) angles, each equal to \(\phi_{l}\), sum to \(180^{\circ}\). Therefore, \(\phi_{l}\) is given by

\[(n - m)\phi_{l} = 180^{\circ} + 360^{\circ}(l - 1), \]

The angles of the asymptotes

for some integer \(l\). Thus, the asymptotic root locus consists of radial lines at the \(n - m\) distinct angles given by

\[\phi_{l} = \frac{180^{\circ} + 360^{\circ}(l - 1)}{n - m},\ l = 1,2,\ldots,n - m \]

For the system described by Eq. (5.20), \(n - m = 3\) and \(\phi_{1,2,3} = 60^{\circ}\), \(180^{\circ}\), and \(300^{\circ}\) or \(\pm 60^{\circ},180^{\circ}\).

The lines of the asymptotic locus come from \(s_{0} = \alpha\) on the real axis. To determine \(\alpha\), we make use of a simple property of polynomials. Suppose we consider the monic polynomial \(a(s)\) with coefficients \(a_{i}\) and roots \(p_{i}\), as in Eq. (5.4), and we equate the polynomial form with the factored form

\[s^{n} + a_{1}s^{n - 1} + a_{2}s^{n - 2} + \cdots + a_{n} = \left( s - p_{1} \right)\left( s - p_{2} \right)\cdots\left( s - p_{n} \right) \]

If we multiply out the factors on the right side of this equation, we see that the coefficient of \(s^{n - 1}\) is \(- p_{1} - p_{2} - \cdots - p_{n}\). On the left side of the equation, we see that this term is \(a_{1}\). Thus \(a_{1} = - \sum p_{i}\); in other words, the coefficient of the second highest term in a monic polynomial is the negative sum of its roots - in this case, the poles of \(L(s)\). Applying this result to the polynomial \(b(s)\), we find the negative sum of the zeros to be \(b_{1}\). These results can be written as

\[\begin{matrix} & \ - b_{1} = \sum_{}^{}\ z_{i} \\ & \ - a_{1} = \sum_{}^{}\ p_{i} \end{matrix}\]
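A one-line numeric illustration of this property (our own check, using Matlab's poly to expand a factored polynomial) may be helpful:

% The coefficient of s^(n-1) in a monic polynomial is minus the sum of
% its roots; poly() expands the factored form into coefficients.
p = [-1; -2+3j; -2-3j]; % an arbitrary set of roots
a = poly(p); % a = [1 5 17 13]
disp([a(2), -real(sum(p))]) % both entries equal 5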

Finally, we apply this result to the closed-loop characteristic polynomial obtained from Eq. (5.23):

The center of the asymptotes

Figure 5.6

The asymptotes are \(n - m\) radial lines from \(\alpha\) at equal angles

\[\begin{matrix} s^{n} & \ + a_{1}s^{n - 1} + \cdots + a_{n} + K\left( s^{m} + b_{1}s^{m - 1} + \cdots + b_{m} \right) \\ & \ = \left( s - r_{1} \right)\left( s - r_{2} \right)\cdots\left( s - r_{n} \right) = 0 \end{matrix}\]

Note the sum of the roots is the negative of the coefficient of \(s^{n - 1}\) and is independent of \(K\) if \(m < n - 1\). Therefore, if \(L(s)\) has at least two more poles than zeros, we have \(a_{1} = - \sum r_{i}\). We have thus shown that the center point of the roots does not change with \(K\) if \(m < n - 1\), and that the open-loop and closed-loop sum is the same and is equal to \(- a_{1}\), which can be expressed as

\[- \sum_{}^{}\ r_{i} = - \sum_{}^{}\ p_{i} \]

For large values of \(K\), we have seen that \(m\) of the roots \(r_{i}\) approach the zeros \(z_{i}\) and \(n - m\) of the roots approach the branches of the asymptotic system \(\frac{1}{(s - \alpha)^{n - m}}\) whose poles add up to \((n - m)\alpha\). Combining these results, we conclude that the sum of all the roots equals the sum of those roots that go to infinity plus the sum of those roots that go to the zeros of \(L(s)\) :

\[- \sum_{}^{}\ r_{i} = - (n - m)\alpha - \sum_{}^{}\ z_{i} = - \sum_{}^{}\ p_{i} \]

Solving for \(\alpha\), we get

\[\alpha = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m} \]

Notice in the sums \(\sum p_{i}\) and \(\sum z_{i}\), the imaginary parts always add to zero, since complex poles and zeros always occur in complex conjugate pairs. Thus, Eq. (5.29) requires information about the real parts only. For Eq. (5.20),

\[\begin{matrix} \alpha & \ = \frac{- 4 - 4 + 0}{3 - 0} \\ & \ = - \frac{8}{3} = - 2.67. \end{matrix}\]

The asymptotes at \(\pm 60^{\circ}\) are shown dashed in Fig. 5.6. Notice they cross the imaginary axis at \(\pm (2.67)j\sqrt{3} = \pm 4.62j\). The asymptote at \(180^{\circ}\) was already found on the real axis by Rule 2.
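The Rule 3 computations are easily scripted. A minimal sketch (our own, using the pole-zero data of Eq. (5.20)) reproduces the center and angles found above:

% Asymptote center and angles (Rule 3) for L(s) = 1/(s[(s+4)^2 + 16]).
p = [0; -4+4j; -4-4j]; % poles
z = []; % zeros
n = numel(p); m = numel(z);
alpha = (sum(real(p)) - sum(real(z)))/(n - m) % -2.67, as above
l = 1:(n - m);
phi = (180 + 360*(l - 1))/(n - m) % 60, 180, 300 degrees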

Rule for departure angles

Rule for arrival angles

Figure 5.7

The departure and arrival angles are found by looking near a pole or zero
RULE 4. The angle of departure of a branch of the locus from a single pole is given by

\[\phi_{dep} = \sum_{}^{}\ \psi_{i} - \sum_{i \neq dep}^{}\mspace{2mu}\phi_{i} - 180^{\circ} \]

where \(\sum\phi_{i}\) is the sum of the angles to the remaining poles and \(\sum\psi_{i}\) is the sum of the angles to all the zeros. The angle of departure for repeated poles with multiplicity \(q\) is given by

\[q\phi_{l,dep} = \sum_{}^{}\ \psi_{i} - \sum_{i \neq l,dep}^{}\mspace{2mu}\phi_{i} - 180^{\circ} - 360^{\circ}(l - 1) \]

where \(l\) is an integer and takes on the values \(1,2,\ldots,q\). Note if there are \(q\) repeated poles, there will be \(q\) branches of the locus departing from the poles.

Likewise, the angle(s) of arrival of a branch at a zero with multiplicity \(q\) is given by

\[q\psi_{l,arr} = \sum_{}^{}\ \phi_{i} - \sum_{i \neq l,arr}^{}\mspace{2mu}\psi_{i} + 180^{\circ} + 360^{\circ}(l - 1) \]

where \(\sum\phi_{i}\) is the sum of the angles to all the poles, \(\sum\psi_{i}\) is the sum of the angles to the remaining zeros, and again, \(l\) takes on the values \(1,2,\ldots,q\) so there will be \(q\) branches of the locus arriving at the zeros.

The rules above all arise from the basic root locus phase condition in Eq. (5.17) as we will now demonstrate. To compute the angle by which a branch of the locus departs from one of the poles, we take a test point \(s_{0}\) very near the pole in question, define the angle from that pole to the test point as \(\phi_{1}\), and transpose all other terms of Eq. (5.17) to the right-hand side. We can illustrate the process by taking the test point \(s_{0}\) to be near the pole at \(- 4 + 4j\) of our example and computing the angle of \(L\left( s_{0} \right)\). The situation is sketched in Fig. 5.7, and the angle from \(- 4 + 4j\) to the test point we define as \(\phi_{1}\). We select the test point close enough to the pole that the angles \(\phi_{2}\) and \(\phi_{3}\) to the test point can be considered the same as those angles to the pole. Thus, \(\phi_{2} = 90^{\circ},\phi_{3} = 135^{\circ}\), and \(\phi_{1}\) can be calculated from the angle condition as whatever it takes to make the total be \(180^{\circ}\). The calculation is

\[\begin{matrix} \phi_{1} & \ = - 90^{\circ} - 135^{\circ} - 180^{\circ} \\ & \ = - 405^{\circ} \\ & \ = - 45^{\circ}. \end{matrix}\]

By the complex conjugate symmetry of the plots, the angle of departure of the locus near the pole at \(- 4 - 4j\) will be \(+ 45^{\circ}\).
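This departure angle can be sanity-checked numerically from Definition II of the locus: step a very small distance from the pole in the predicted direction and verify that the phase of \(L(s)\) is \(180^{\circ}\) (mod \(360^{\circ}\)). A sketch of our own:

% Check the -45 degree departure from the pole at -4+4j for Eq. (5.20).
L = @(s) 1./(s.*((s + 4).^2 + 16));
s0 = -4 + 4j + 1e-4*exp(-1j*45*pi/180); % small step at -45 degrees
mod(angle(L(s0))*180/pi, 360) % approximately 180, so s0 is on the locus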

If there had been zeros in \(L(s)\), the angles from those zeros to the test point would have been added to the right side of Eq. (5.33). For the general case, we can see from Eq. (5.33) that the angle of departure from a single pole is that given by Eq. (5.30). For a multiple pole of order \(q\), we must count the angle from the pole \(q\) times. This alters Eq. (5.30) to Eq. (5.31) where \(l\) takes on \(q\) values because there are \(q\) branches of the locus that depart from such a multiple pole.

The process of calculating a departure angle for small values of \(K\), as shown in Fig. 5.7, is also valid for computing the angle by which a root locus arrives at a zero of \(L(s)\) for large values of \(K\). The general formula that results is that given by Eq. (5.32).

This rule is particularly useful if a system has poles near the imaginary axis, because it will show if the locus branch from the pole starts off toward the stable left half-plane (LHP) or heads toward the unstable right half-plane (RHP).

RULE 5. The locus can have multiple roots at points on the locus and the branches will approach a point of \(q\) roots at angles separated by

\[\frac{180^{\circ} + 360^{\circ}(l - 1)}{q} \]

and will depart at angles with the same separation. As with any polynomial, it is possible for a characteristic polynomial of a degree greater than 1 to have multiple roots. For example, in the second-order locus of Fig. 5.2, there are two roots at \(s = - 1/2\) when \(K = 1/4\). Here the horizontal branches of the locus come together and the vertical branches break away from the real axis, becoming complex for \(K > 1/4\). The locus arrives at \(0^{\circ}\) and \(180^{\circ}\) and departs at \(+ 90^{\circ}\) and \(- 90^{\circ}\).

In order to compute the angles of arrival and departure from a point of multiple roots, it is useful to use a trick we call the continuation locus. We can imagine plotting a root locus for an initial range of \(K\), perhaps for \(0 \leq K \leq K_{1}\). If we let \(K = K_{1} + K_{2}\), we can then plot a new locus with parameter \(K_{2}\), a locus which is the continuation of the original locus, and whose starting poles are the roots of the original system at \(K = K_{1}\). To see how this works, we return to the second-order root locus of Eq. (5.11) and let \(K_{1}\) be the value corresponding to the breakaway point \(K_{1} = 1/4\). If we let \(K = 1/4 + K_{2}\), we have the locus equation \(s^{2} + s + 1/4 + K_{2} = 0\), or

\[\left( s + \frac{1}{2} \right)^{2} + K_{2} = 0 \]

The steps for plotting this locus are the same as for any other, except that now the initial departure of the locus of Eq. (5.37) corresponds to

Figure 5.8

Root locus for \(L(s) = \frac{1}{s\left( s^{2} + 8s + 32 \right)}\)

the breakaway point of the original locus of Eq. (5.11), i.e., \(s = - 1/2\) on Fig. 5.2. Applying the rule for departure angles [Eq. (5.31)] from the double pole at \(s = - 1/2\), we find that

\[\begin{matrix} 2\phi_{dep} & \ = - 180^{\circ} - 360^{\circ}(l - 1) \\ \phi_{dep} & \ = - 90^{\circ} - 180^{\circ}(l - 1) \\ \phi_{dep} & \ = \pm 90^{\circ}\text{~}\text{(departure angles at breakaway)}\text{~} \end{matrix}\]

In this case, the arrival angles at \(s = - 1/2\) are, from the original root locus, along the real axis and are clearly \(0^{\circ}\) and \(180^{\circ}\).
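Real-axis breakaway points such as this one can also be located analytically: on the locus, \(K(s) = -1/L(s)\), and multiple roots occur where \(dK/ds = 0\). A brief sketch (our own, not part of the text's development) for the locus of Eq. (5.11):

% Breakaway point of s^2 + s + K = 0: K(s) = -(s^2 + s) is stationary at
% the breakaway; polyder differentiates the coefficient vector.
Kcoef = -[1 1 0]; % coefficients of K(s) = -(s^2 + s)
sb = roots(polyder(Kcoef)) % sb = -0.5, matching Fig. 5.2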

The complete locus for our third-order example is drawn in Fig. 5.8. It combines all the results found so far: the real-axis segment, the center of the asymptotes and their angles, and the angles of departure from the poles. It is usually sufficient to draw the locus by using only Rules 1 to 3, which should be memorized. Rule 4 is sometimes useful to understand how locus segments will depart, especially if there is a pole near the \(j\omega\) axis. Rule 5 is sometimes useful to help interpret plots that come from the computer and, as we will see in the next section, to explain qualitative changes in some loci as a pole or zero is moved. The actual locus in Fig. 5.8 was drawn using the Matlab commands

s = tf('s');
sysL = 1/(s*((s + 4)^2 + 16));
rlocus(sysL)

We will next summarize the rules for drawing a root locus.

Summary of the Rules for Determining a Root Locus

RULE 1. The \(n\) branches of the locus start at the poles of \(L(s)\) and \(m\) branches end on the zeros of \(L(s)\).

RULE 2. The loci are on the real axis to the left of an odd number of poles and zeros.

RULE 3. For large \(s\) and \(K,n - m\) branches of the loci are asymptotic to lines at angles \(\phi_{l}\) radiating out from the center point \(s = \alpha\) on the real axis, where

\[\begin{matrix} \phi_{l} & \ = \frac{180^{\circ} + 360^{\circ}(l - 1)}{n - m},\ l = 1,2,\ldots,n - m \\ \alpha & \ = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m} \end{matrix}\]

RULE 4. The angle(s) of departure of a branch of the locus from a pole of multiplicity \(q\) is given by

\[q\phi_{l,dep} = \sum_{}^{}\ \psi_{i} - \sum_{}^{}\ \phi_{i} - 180^{\circ} - 360^{\circ}(l - 1) \]

where \(l = 1,2,\ldots,q\) and the angle(s) of arrival of a branch at a zero of multiplicity \(q\) is given by

\[q\psi_{l,arr} = \sum_{}^{}\ \phi_{i} - \sum_{}^{}\ \psi_{i} + 180^{\circ} + 360^{\circ}(l - 1). \]

RULE 5. The locus can have multiple roots at points on the locus of multiplicity \(q\). The branches will approach a point of \(q\) roots at angles separated by

\[\frac{180^{\circ} + 360^{\circ}(l - 1)}{q} \]

and will depart at angles with the same separation, forming an array of \(2q\) rays equally spaced. If the point is on the real axis, then the orientation of this array is given by the real-axis rule. If the point is in the complex plane, then the angle of departure rule must be applied.

Selecting the Parameter Value

The positive root locus is a plot of all possible locations for roots to the equation \(1 + KL(s) = 0\) for some real positive value of \(K\). The purpose of design is to select a particular value of \(K\) that will meet the specifications for static and dynamic response. We now turn to the issue of selecting \(K\) from a particular locus so the roots are at specific places. Although we shall show how the gain selection can be made by hand calculations from a plot of the locus, this is almost never done by hand because the determination can be accomplished easily by Matlab. It is useful, however, to be able to perform a rough sanity check by hand on the computer-based results.

Using Definition II of the locus, we developed rules to sketch a root locus from the phase of \(L(s)\) alone. If the equation is actually to have a root at a particular place when the phase of \(L(s)\) is \(180^{\circ}\), then a magnitude condition must also be satisfied. This condition is given by Eq. (5.9), rearranged as

\[K = - \frac{1}{L(s)} \]

Figure 5.9

Root locus for

\(L(s) = \frac{1}{s\left\lbrack (s + 4)^{2} + 16 \right\rbrack}\) showing calculations of gain \(K\)

Graphical calculation of the desired gain

For values of \(s\) on the root locus, the phase of \(L(s)\) is \(180^{\circ}\), so we can write the magnitude condition as

\[K = \frac{1}{|L|} \]

Equation (5.47) has both an algebraic and a graphical interpretation. To see the latter, consider the locus of \(1 + KL(s)\), where

\[L(s) = \frac{1}{s\left\lbrack (s + 4)^{2} + 16 \right\rbrack} \]

For this transfer function, the locus is plotted in Fig. 5.9. In Fig. 5.9, the lines corresponding to a damping ratio of \(\zeta = 0.5\) are sketched and the points where the locus crosses these lines are marked with dots \(( \bullet )\). Suppose we wish to set the gain so the roots are located at the dots. This corresponds to selecting the gain so that two of the closed-loop system poles have a damping ratio of \(\zeta = 0.5\). (We will find the third pole shortly.) What is the value of \(K\) when a root is at the dot? From Eq. (5.47), the value of \(K\) is given by 1 over the magnitude of \(L\left( s_{0} \right)\), where \(s_{0}\) is the coordinate of the dot. On the figure we have plotted three vectors marked \(s_{0} - s_{1},s_{0} - s_{2}\), and \(s_{0} - s_{3}\), which are the vectors from the poles of \(L(s)\) to the point \(s_{0}\). (Since \(s_{1} = 0\), the first vector equals \(s_{0}\).) Algebraically, we have

\[L\left( s_{0} \right) = \frac{1}{s_{0}\left( s_{0} - s_{2} \right)\left( s_{0} - s_{3} \right)} \]

Using Eq. (5.47), this becomes

\[K = \frac{1}{\left| L\left( s_{0} \right) \right|} = \left| s_{0} \right|\left| s_{0} - s_{2} \right|\left| s_{0} - s_{3} \right|. \]

The graphical interpretation of Eq. (5.50) shows that its three magnitudes are the lengths of the corresponding vectors drawn on Fig. 5.9 (see Appendix WD online at www.pearsonglobaleditions.com). Hence, we can compute the gain to place the roots at the dot \((s = s_{0})\) by
measuring the lengths of these vectors and multiplying the lengths together, provided that the scales of the imaginary and real axes are identical. Using the scale of the figure, we estimate that

\[\begin{matrix} \left| s_{0} \right| & \ \cong 4.0 \\ \left| s_{0} - s_{2} \right| & \ \cong 2.1 \\ \left| s_{0} - s_{3} \right| & \ \cong 7.7 \end{matrix}\]

Thus, the gain is estimated to be

\[K = 4.0(2.1)(7.7) \cong 65 \]

We conclude that if \(K\) is set to the value 65, then a root of \(1 + KL\) will be at \(s_{0}\), which has the desired damping ratio of 0.5. Another root is at the conjugate of \(s_{0}\). Where is the third root? The third branch of the locus lies along the negative real axis. If performing the calculations by hand, we would need to take a test point, compute a trial gain, and repeat this process until we have found the point where \(K = 65\). However, if performing a check on Matlab's determination, it is sufficient to merely use the procedure above to verify the gain at the root location indicated by the computer.
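The same magnitude-condition arithmetic can be scripted; in this sketch (our own), the coordinate \(s_{0}\) is an assumed reading from the plot for \(\zeta = 0.5\):

% Gain from the magnitude condition, K = 1/|L(s0)|, at the zeta = 0.5 dot.
L = @(s) 1./(s.*((s + 4).^2 + 16));
s0 = -2 + 3.46j; % assumed dot coordinate read from Fig. 5.9
K = 1/abs(L(s0)) % about 64, consistent with the hand estimate of 65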

To use Matlab, plot the locus using the command rlocus(sysL), for example; then the command \(\lbrack K,p \rbrack =\) rlocfind(sysL) will produce a crosshair on the plot and, when it is positioned at the desired location of the root and selected with a mouse click, the value of the gain \(K\) is returned, as well as the roots corresponding to that \(K\) in the variable \(p\). The use of sisotool makes this even easier, and will be discussed in more detail in Example 5.7.

Finally, with the gain selected, it is possible to compute the error constant of the control system. A process with the transfer function given by Eq. (5.48) has one integrator and, in a unity feedback configuration, will be a Type 1 control system. In this case, the steady-state error in tracking a ramp input is given by the velocity constant:

\[\begin{matrix} K_{v} & \ = \lim_{s \rightarrow 0}\mspace{2mu} sKL(s) \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu} s\frac{K}{s\left\lbrack (s + 4)^{2} + 16 \right\rbrack} \\ & \ = \frac{K}{32}. \end{matrix}\]

With the gain set for complex roots at a damping \(\zeta = 0.5\), the root-locus gain is \(K = 65\), so from Eq. (5.53) we get \(K_{v} = 65/32 \cong 2\sec^{- 1}\). If the closed-loop dynamic response, as determined by the root locations, is satisfactory and the steady-state accuracy, as measured by \(K_{v}\), is good enough, then the design can be completed by gain selection alone. However, if no value of \(K\) satisfies all of the constraints, as is typically the case, then additional modifications are necessary to meet the system specifications.
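This velocity constant is also easy to check numerically; the sketch below (our own) uses minreal to cancel the common factor of \(s\) before evaluating the DC gain:

% Kv = lim s->0 of s*K*L(s) for this Type 1 system, with K = 65.
s = tf('s');
sysKL = 65/(s*((s + 4)^2 + 16));
Kv = dcgain(minreal(s*sysKL)) % 65/32, about 2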

Selected Illustrative Root Loci

A number of important control problems are characterized by a process with the simple "double integrator" transfer function

\[G(s) = \frac{1}{s^{2}} \]

In Chapter 2, Example 2.3 showed that the attitude control of a satellite is described by this equation. Also, Example 2.5 showed that the basic attitude motions of a drone obey this transfer function. Furthermore, it will be shown in Example 5.16 that the translational motion of a drone obeys the same dynamics. The result is a plant described by Eq. (5.54). If we form a unity feedback system with this plant, and a proportional controller, the root locus with respect to controller gain is

\[1 + k_{p}\frac{1}{s^{2}} = 0 \]

If we apply the rules to this (trivial) case, the results are as follows:

RULE 1. The locus has two branches that start at \(s = 0\).

RULE 2. There are no parts of the locus on the real axis.

RULE 3. The two asymptotes intersect at \(s = 0\) and are at the angles of \(\pm 90^{\circ}\).

RULE 4. The loci depart from \(s = 0\) at the angles of \(\pm 90^{\circ}\).

Conclusion: The locus consists of the imaginary axis, and the transient would be oscillatory for any value of \(k_{p}\). A more useful design results with the use of proportional plus derivative control.

The characteristic equation with PD control is

\[1 + \left\lbrack k_{p} + k_{D}s \right\rbrack\frac{1}{s^{2}} = 0 \]

To put the equation in root-locus form, we define \(K = k_{D}\), and for the moment arbitrarily select the gain ratio \(\ ^{9}\) as \(k_{p}/k_{D} = 1\), which results in the root-locus form

\[1 + K\frac{s + 1}{s^{2}} = 0 \]

Solution. Again we compute the results of the rules:

RULE 1. There are two branches that start at \(s = 0\), one of which terminates on the zero at \(s = - 1\) and the other of which approaches infinity.

Figure 5.10

Root locus for \(L(s) = G(s) = \frac{(s + 1)}{s^{2}}\)

RULE 2. The real axis to the left of \(s = - 1\) is on the locus.

RULE 3. Since \(n - m = 1\), there is one asymptote along the negative real axis.

RULE 4. The angles of departure from the double pole at \(s = 0\) are \(\pm 90^{\circ}\).

RULE 5. From Rules \(1 - 4\), it should be clear that the locus will curl around the zero, rejoin the real axis to the left of the zero, and terminate as indicated by Rule 1. It turns out that the locus segments rejoin the real axis at \(s = - 2\), which creates a point of multiple roots. Evaluation of the angle of arrival at this point will show that the segments arrive at \(\pm 90^{\circ}\).

We conclude that two branches of the locus leave the origin going north and south, and that they curve around \(\ ^{10}\) without passing into the RHP and break into the real axis at \(s = - 2\), from which point one branch goes west toward infinity and the other goes east to rendezvous with the zero at \(s = - 1\). The locus is plotted in Fig. 5.10 with the commands

s = tf('s');
sysS = (s + 1)/s^2;
rlocus(sysS)

Comparing this case with that for the simple \(1/s^{2}\), we see that

The addition of the zero has pulled the locus into the LHP, a point of general importance in constructing a compensation.

\(\ ^{10}\) You can prove that the path is a circle by assuming that \(s + 1 = e^{j\theta}\) and showing that the equation has a solution for a range of positive \(K\) and real \(\theta\) under this assumption. (See Problem 5.18.)

In the previous case, we considered pure PD control. However, as we have mentioned earlier, the physical operation of differentiation is not practical and in practice PD control is approximated by

\[D_{c}(s) = k_{p} + \frac{k_{D}s}{s/p + 1} \]

which can be put in root-locus form by defining \(K = k_{p} + pk_{D}\) and \(z = pk_{p}/K\) so that \(\ ^{11}\)

\[D_{c}(s) = K\frac{s + z}{s + p} \]

For reasons we will see when we consider design by frequency response, this controller transfer function is called a "lead compensator" provided \(z < p\) or, referring to the frequent implementation by electrical components, a "lead network." The characteristic equation for the \(1/s^{2}\) plant with this controller is

\[\begin{matrix} 1 + D_{c}(s)G(s) & \ = 1 + KL(s) = 0 \\ 1 + K\frac{s + z}{s^{2}(s + p)} & \ = 0 \end{matrix}\]

To evaluate the effect of the added pole, we will again set \(z = 1\) and consider three different values for \(p\). We begin with a somewhat large value, \(p = 12\), and consider the root locus for

\[1 + K\frac{s + 1}{s^{2}(s + 12)} \]

Solution. Again, we apply the rules for plotting a root locus:

RULE 1. There are now three branches to the locus, two starting at \(s = 0\) and one starting at \(s = - 12\).

RULE 2. The real axis segment \(- 12 \leq s \leq - 1\) is part of the locus.

RULE 3. There are \(n - m = 3 - 1 = 2\) asymptotes centered at \(\alpha =\) \(\frac{- 12 - ( - 1)}{2} = - 11/2\) and at the angles \(\pm 90^{\circ}\).

RULE 4. The angles of departure of the branches at \(s = 0\) are again \(\pm 90^{\circ}\). The angle of departure from the pole at \(s = - 12\) is at \(0^{\circ}\).

There are several possibilities for how the locus segments behave while still adhering to the guidance above. Matlab is the expedient way to discover the paths. The Matlab commands

Figure 5.11

Root locus for

\[L(s) = \frac{(s + 1)}{s^{2}(s + 12)} \]

s = tf('s');
sysL = (s + 1)/(s^2*(s + 12));
rlocus(sysL)

show that two branches of the locus break vertically from the poles at \(s = 0\), curve around to the left without passing into the RHP, and break in at \(s = - 2.3\), where one branch goes right to meet the zero at \(s = - 1\) and the other goes left, where it is met by the root that left the pole at \(s = - 12\). These two form a multiple root at \(s = - 5.2\) and break away there and approach the vertical asymptotes located at \(s = - 5.5\). The locus is plotted in Fig. 5.11.

Considering this locus, we see that the effect of the added pole has been to distort the simple circle of the PD control but, for points near the origin, the locus is quite similar to the earlier case. The situation changes when the pole is brought closer in.

Root Locus of the Satellite Control with Lead Having a Relatively Small Value for the Pole

Now consider \(p = 4\) and draw the root locus for

\[1 + K\frac{s + 1}{s^{2}(s + 4)} = 0. \]

Solution. Again, by the rules, we have the following:

RULE 1. There are again three branches to the locus, two starting from \(s = 0\) and one from \(s = - 4\).

RULE 2. The segment of the real axis \(- 4 \leq s \leq - 1\) is part of the locus.

RULE 3. There are two asymptotes centered at \(\alpha = - 3/2\) and at the angles \(\pm 90^{\circ}\).

RULE 4. The branches again depart from the poles at \(s = 0\) at \(\pm 90^{\circ}\).

RULE 5. The Matlab commands

Figure 5.12

Root locus for

\[L(s) = \frac{(s + 1)}{s^{2}(s + 4)} \]

s = tf('s');
sysL = (s + 1)/(s^2*(s + 4));
rlocus(sysL)

show that two branches of this locus break away vertically from the poles at \(s = 0\), curve slightly to the left and join the asymptotes going north and south. The locus segment from the root at \(s = - 4\) goes east and terminates at the zero. In this case, the locus differs from the case when \(p = 12\) in that there are no break-in or breakaway points on the real axis as part of the locus. The Matlab plot is given in Fig. 5.12.

In these two cases we have similar systems, but in one case, \(p = 12\), there were both break-in and breakaway points on the real axis, whereas for \(p = 4\), these features have disappeared. A logical question might be to ask at what point they went away. As a matter of fact, it happens at \(p = 9\), and we'll look at that locus next.

The Root Locus for the Satellite with a Transition Value for the Pole

Plot the root locus for

\[1 + K\frac{s + 1}{s^{2}(s + 9)} = 0 \]

Solution.

RULE 1. The locus has three branches, starting from \(s = 0\) and \(s = - 9\).

RULE 2. The real axis segment \(- 9 \leq s \leq - 1\) is part of the locus.

RULE 3. The two asymptotes are centered at \(\alpha = - 8/2 = - 4\).

RULE 4. The departures are, as before, at \(\pm 90^{\circ}\) from \(s = 0\).

RULE 5. The Matlab commands

Figure 5.13

Root locus for

\[L(s) = \frac{(s + 1)}{s^{2}(s + 9)} \]

s = tf('s');
sysL = (s + 1)/(s^2*(s + 9));
rlocus(sysL)

produces the locus in Fig. 5.13. It shows the two branches of this locus break away vertically from the poles at \(s = 0\) and curl around and join the real axis again at \(s = - 3\) with an angle of arrival of \(\pm 60^{\circ}\), while the branch from the pole at \(s = - 9\) heads east and joins the other two branches at \(s = - 3\) with an angle of arrival of \(0^{\circ}\). These three locus segments continue on by splitting out of \(s = - 3\) at the departure angles of \(0^{\circ}\) and \(\pm 120^{\circ}\), with one heading into the zero and the other two heading away to the northwest to join the asymptotes. Using Rule 5 would confirm these angles of arrival and departure. \(\ ^{12}\)

Note this special locus shape only occurs when the ratio of the pole value to the zero value is exactly \(9:1\) for this form of \(L(s)\). It is the transition locus between the two types depicted by Examples 5.4 and 5.5. This transition is discussed in more detail below, and will be demonstrated via Matlab in Example 5.7.

From Figs. 5.11 through 5.13, it is evident that when the third pole is near the zero (\(p\) near 1), there is only a modest distortion of the locus that would result for \(D_{c}(s)G(s) \cong \frac{K}{s^{2}}\), which consists of two straight-line locus branches departing at \(\pm 90^{\circ}\) from the two poles at \(s = 0\). Then, as we increase \(p\), the locus changes until, at \(p = 9\), the locus breaks in at \(-3\) in a triple multiple root. As the pole \(p\) is moved to the left beyond \(-9\), the locus exhibits distinct break-in and breakaway points, approaching, as \(p\) gets very large, the circular locus of one zero and two poles. Figure 5.13, when \(p = 9\), is thus a transition locus between the two second-order extremes, which occur at \(p = 1\) (when the zero is canceled) and \(p \rightarrow \infty\) (where the extra pole has no effect).
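One way to visualize this transition (a sketch of our own, not a command sequence from the text) is to overlay the three loci on a single plot:

% Overlay the loci of Examples 5.4 through 5.6 to see the transition at p = 9.
s = tf('s');
hold on
for p = [4 9 12]
    rlocus((s + 1)/(s^2*(s + p)))
end
hold off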

EXAMPLE 5.7

Figure 5.14

sisotool graphical user interface

Source: Reprinted with permission of The MathWorks, Inc.

An Exercise to Repeat the Prior Examples Using sisotool

Repeat Examples 5.3 through 5.6 using Matlab's sisotool feature.

Solution. sisotool is an interactive design tool in Matlab that provides a graphical user interface (GUI) for performing analysis and design. sisotool provides an easy way to design feedback controllers because it allows rapid iterations and quickly shows their effect on the resulting root-locus and the other aspects of the control performance. To illustrate the use of the tool, the Matlab commands

s = tf('s');
sysL = (s + 1)/s^2;
sisotool('rlocus', sysL)

will initiate the GUI and produce the root locus shown in Fig. 5.10, which is similar to Examples 5.4 through 5.6, but without the pole on the negative real axis that was moved around for illustration purposes in the three prior examples. By clicking on "Compensator Editor" in the "Control and Estimation Tools Manager" window, right clicking on the "Dynamics" dialog window and selecting "add pole/zero," you can add a pole at the location \(s = - 12\). This will produce the locus that is shown in Figs. 5.11 and 5.14. Now put your mouse on the pole at \(s = - 12\),

hold down the mouse button, and slide it from \(s = - 12\) to \(s = - 4\) slowly, so you can examine the locus shapes at all intermediate points. Be especially careful (and slow) as you pass through \(s = - 9\) because the locus shape changes very quickly with the pole in this region. Note you can also put your mouse on one of the closed-loop poles (squares) and slide that along the locus. It will show you the location of the other roots that correspond to that value of the gain, \(K\), and the frequency and damping of the closed-loop roots will be shown for when the roots are complex pairs. More detail can be found in the sisotool Tutorial in Appendix WR online at www.pearsonglobaleditions.com.

A useful conclusion drawn from this example is the following:

An additional pole moving in from the far left tends to push the locus branches to the right as it approaches a given locus.

The double integrator is the simplest model of the examples, assuming a rigid body with no friction. A more realistic case would include the effects of flexibility in the satellite attitude control, where at least the solar panels would be flexible. Another possibility is that the sensor is not rigidly attached to the base of the satellite that contains the thrusters, as discussed in Example 2.4 in Chapter 2. So, we see there are two possibilities, depending on whether the sensor is on the same rigid body as the actuator, which is called the collocated case, \(\ ^{13}\) or is on another body, which is called the noncollocated case. \(\ ^{14}\) We begin with consideration of the collocated case similar to that given by Eq. (2.14). As we saw in Chapter 2, the transfer function in the collocated case has not only a pair of complex poles but also a pair of nearby complex zeros located at a lower natural frequency than the poles. The numbers in the examples that follow are chosen more to illustrate the root-locus properties than to represent particular physical models.

Root Locus of the Satellite Control with a Collocated Flexibility

Plot the root locus of the characteristic equation \(1 + G(s)D_{c}(s) = 0\), where

\[G(s) = \frac{(s + 0.1)^{2} + 6^{2}}{s^{2}\left\lbrack (s + 0.1)^{2} + {6.6}^{2} \right\rbrack} \]

Figure 5.15

Figure for computing a departure angle for \(L(s) =\) \(\frac{s + 1}{s + 12}\frac{(s + 0.1)^{2} + 6^{2}}{s^{2}\left\lbrack (s + 0.1)^{2} + {6.6}^{2} \right\rbrack}\)

is in a unity feedback structure with the controller transfer function

\[D_{c}(s) = K\frac{s + 1}{s + 12}\text{.}\text{~} \]

Solution. In this case,

\[L(s) = \frac{s + 1}{s + 12}\frac{(s + 0.1)^{2} + 6^{2}}{s^{2}\left\lbrack (s + 0.1)^{2} + {6.6}^{2} \right\rbrack} \]

has both poles and zeros near the imaginary axis and we should expect to find the departure angles of particular importance.

Solution

RULE 1. There are five branches to the locus, three of which approach finite zeros and two of which approach the asymptotes.

RULE 2. The real-axis segment \(- 12 \leq s \leq - 1\) is part of the locus.

RULE 3. The center of the two asymptotes is at

\[\alpha = \frac{- 12 - 0.1 - 0.1 - ( - 0.1 - 0.1 - 1)}{5 - 3} = - \frac{11}{2} \]

The angle of the asymptotes is \(\pm 90^{\circ}\).

RULE 4. We compute the departure angle from the pole at \(s = - 0.1 +\) j6.6. The angle at this pole we will define to be \(\phi_{1}\). The other angles are marked on Fig. 5.15. The root-locus condition is

\[\begin{matrix} \phi_{1} = & \psi_{1} + \psi_{2} + \psi_{3} - \left( \phi_{2} + \phi_{3} + \phi_{4} + \phi_{5} \right) - 180^{\circ}, \\ \phi_{1} = & 90^{\circ} + 90^{\circ} + \tan^{- 1}(6.6) - \left\lbrack 90^{\circ} + 90^{\circ} + 90^{\circ} \right.\ \\ & \left. \ + \tan^{- 1}\left( \frac{6.6}{12} \right) \right\rbrack - 180^{\circ}, \\ \phi_{1} = & {81.4}^{\circ} - 90^{\circ} - {28.8}^{\circ} - 180^{\circ}, \\ = & \ - {217.4}^{\circ} = {142.6}^{\circ}, \end{matrix}\]

so the root leaves this pole up and to the left, into the stable region of the plane. An interesting exercise would be to compute the arrival angle at the zero located at \(s = - 0.1 + j6\).
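As before, the hand calculation can be verified numerically (our own sketch) by evaluating the phase of \(L(s)\) a small step from the pole in the computed direction:

% Phase of L(s) just off the pole at -0.1+6.6j, stepping at 142.6 degrees.
L = @(s) ((s + 1)./(s + 12)).*(((s + 0.1).^2 + 6^2)./(s.^2.*((s + 0.1).^2 + 6.6^2)));
s0 = -0.1 + 6.6j + 1e-4*exp(1j*142.6*pi/180);
mod(angle(L(s0))*180/pi, 360) % near 180, confirming the departure direction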

Figure 5.16

Root locus for \(L(s) =\) \(\frac{s + 1}{s + 12}\frac{(s + 0.1)^{2} + 6^{2}}{s^{2}\left\lbrack (s + 0.1)^{2} + {6.6}^{2} \right\rbrack}\)

Using Matlab, the locus is plotted in Fig. 5.16. Note all the attributes that were determined using the simple rules were exhibited by the plot, thus verifying in part that the data were entered correctly.

The previous example showed

In the collocated case, the presence of a single flexible mode introduces a lightly damped root to the characteristic equation but does not cause the system to be unstable.

The departure angle calculation showed the root departs from the pole introduced by the flexible mode toward the LHP. Next, let's consider the noncollocated case, which was also discussed in Example 2.4 and resulted in Eq. (2.13). Using that as a guide, we assume here the plant transfer function is

\[G(s) = \frac{1}{s^{2}\left\lbrack (s + 0.1)^{2} + {6.6}^{2} \right\rbrack} \]

and is compensated again by the lead

\[D_{c}(s) = K\frac{s + 1}{s + 12}\text{.}\text{~} \]

As these equations show, the noncollocated transfer function has the complex poles but does not have the associated complex zeros that appeared in the previous example and in the collocated case of Eq. (2.14) in Chapter 2. This will have a substantial effect, as illustrated by Example 5.9.

Root Locus for the Noncollocated Case

Apply the rules and draw the root locus for

\[KL(s) = D_{c}G = \frac{K(s + 3)}{s + 18}\frac{1}{(s + 1)^{2}\left\lbrack (s + 0.5)^{2} + 9^{2} \right\rbrack} \]

paying special attention to the departure angles from the complex poles.

Solution

RULE 1. There are five branches to the root locus, of which one approaches the zero and four approach the asymptotes.

RULE 2. The real-axis segment defined by \(- 18 \leq s \leq - 3\) is part of the locus.

RULE 3. The center of the asymptotes is located at

\[\alpha = \frac{- 18 - (1)(2) - (0.5)(2) - ( - 3)}{5 - 1} = \frac{- 18}{4} \]

and the angles for the four asymptotic branches are at \(\pm 45^{\circ}, \pm 135^{\circ}\).

RULE 4. We again compute the departure angle from the pole at \(s = - 0.5 + j9\). We will define the angle at this pole to be \(\phi_{1}\). The other angles are marked on Fig. 5.17. The root locus condition is

\[\begin{matrix} \phi_{1} = & \psi_{1} - \left( \phi_{2} + \phi_{3} + \phi_{4} + \phi_{5} \right) - 180^{\circ}, \\ \phi_{1} = & \tan^{- 1}\left( \frac{9}{2.5} \right) - \left\lbrack 2 \times \tan^{- 1}\left( \frac{9}{0.5} \right) + \right.\ \\ & \left. \ 90^{\circ} + \tan^{- 1}\left( \frac{9}{17.5} \right) \right\rbrack - 180^{\circ}, \\ \phi_{1} = & {74.48}^{\circ} - {173.64}^{\circ} - 90^{\circ} - {27.22}^{\circ} - 180^{\circ}, \\ \phi_{1} = & \ - {36.38}^{\circ}. \end{matrix}\]

In this case, the root leaves the pole down and to the right, toward the unstable region. We would expect the system to become unstable as gain is increased.

RULE 5. The locus is plotted in Fig. 5.18 with the commands

s = tf('s');
sysG = 1/((s + 1)^2*((s + 0.5)^2 + 9^2));
sysD = (s + 3)/(s + 18);
sysL = sysG*sysD;
rlocus(sysL)

and is seen to agree with the calculations above. By using sisotool, we see that the locus from the complex poles enters into the RHP almost

Figure 5.17

Figure to compute departure angle for \(L(s) =\) \(\frac{s + 3}{s + 18}\frac{1}{(s + 1)^{2}\left\lbrack (s + 0.5)^{2} + 9^{2} \right\rbrack}\)

Figure 5.18

Root locus for \(L(s) =\) \(\frac{s + 3}{s + 18}\frac{1}{(s + 1)^{2}\left\lbrack (s + 0.5)^{2} + 9^{2} \right\rbrack}\)

immediately as the gain is increased. Furthermore, by selecting those roots so that they are just to the left of the imaginary axis, it can be seen that the dominant slow roots down near the origin have extremely low damping. Therefore, this system will have a very lightly damped response with very oscillatory flexible modes. It would not be considered acceptable with the lead compensator as chosen for this example.
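To quantify how lightly damped these roots are, the closed-loop poles can be tabulated at a trial gain; the gain value below is an arbitrary assumption for illustration (our own sketch):

% Closed-loop pole frequencies and damping ratios at an assumed trial gain.
s = tf('s');
sysL = (s + 3)/(s + 18)*1/((s + 1)^2*((s + 0.5)^2 + 9^2));
K = 50; % arbitrary trial gain (an assumption)
damp(feedback(K*sysL, 1)) % inspect zeta of the flexible-mode roots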

A Locus with Complex Multiple Roots

We have seen loci with break-in and breakaway points on the real axis. Of course, an equation of fourth or higher order can have multiple roots that are complex. Although such a feature of a root locus is a rare event, it is an interesting curiosity that is illustrated by the next example.

Root Locus Having Complex Multiple Roots

Sketch the root locus \(1 + KL(s) = 0\), where

\[L(s) = \frac{1}{(s + 4)(s + 1)\left\lbrack (s + 2.5)^{2} + 16 \right\rbrack} \]

Solution

RULE 1. There are four branches of the roots, all of which approach the four asymptotes.

RULE 2. The real-axis segment \(- 4 \leq s \leq - 1\) is on the locus.

RULE 3. The center of the asymptotes is at

\[\alpha = \frac{- 4 - 1 - (2.5)(2)}{4} = - \frac{10}{4} = - 2.5 \]

and the angles are \(\phi_{l} = \pm 45^{\circ}, \pm 135^{\circ}\).

RULE 4. The departure angle \(\phi_{dep}\) from the pole at \(s = - 2.5 + 4j\), based on Fig. 5.19, is

Figure 5.19

Figure to compute departure angle for \(L(s) =\) \(\frac{1}{(s + 4)(s + 1)\left\lbrack (s + 2.5)^{2} + 16 \right\rbrack}\)

Figure 5.20

Root locus for \(L(s) =\) \(\frac{1}{(s + 4)(s + 1)\left\lbrack (s + 2.5)^{2} + 16 \right\rbrack}\)

\[\begin{matrix} \phi_{dep} & \ = \phi_{3} = - \phi_{1} - \phi_{2} - \phi_{4} + 180^{\circ} \\ & \ = - \left( 180^{\circ} - \tan^{- 1}\left( \frac{4}{1.5} \right) \right) - \tan^{- 1}\left( \frac{4}{1.5} \right) - 90^{\circ} + 180^{\circ} \\ & \ = - 90^{\circ}. \end{matrix}\]

We can observe at once that, along the line \(s = - 2.5 + j\omega,\phi_{2}\) and \(\phi_{1}\) are angles of an isosceles triangle and always add to \(180^{\circ}\). Hence, the entire line from one complex pole to the other is on the locus in this special case.

RULE 5. Using Matlab, we see there are multiple roots at \(s = - 2.5 \pm 2.62j\), and branches of the locus (Fig. 5.20) come together at \(- 2.5 \pm 2.62j\). Using Rule 5, we can verify that the locus segments break away at \(0^{\circ}\) and \(180^{\circ}\), as shown by Matlab. The commands are given below:

s = tf('s');
L = 1/((s + 4)*(s + 1)*((s + 2.5)^2 + 16));
rlocus(L)

Lead and lag compensations

Notch compensation
The locus in this example is a transition between two types of loci: one where the complex poles are to the left of the example case and approach the asymptotes at \(\pm 135^{\circ}\), and another where the complex poles are to the right of their positions in the example and approach the asymptotes at \(\pm 45^{\circ}\).

Design Using Dynamic Compensation

Consideration of control design begins with the design of the process itself. The importance of early consideration of potential control problems in the design of the process and selection of the actuator and sensor cannot be overemphasized. It is not uncommon for a first study of the control to suggest that the process itself can be changed by, for example, adding damping or stiffness to a structure to make a flexibility easier to control. Once these factors have been taken into account, the design of the controller begins. If the process dynamics are of such a nature that a satisfactory design cannot be obtained by adjustment of the proportional gain alone, then some modification or compensation of the dynamics is indicated. While the variety of possible compensation schemes is great, three categories have been found to be particularly simple and effective. These are lead, lag, and notch compensations. \(\ ^{15}\) Lead compensation approximates the function of PD control and acts mainly to speed up a response by lowering rise time and decreasing the transient overshoot. Lag compensation approximates the function of PI control, and is usually used to improve the steady-state accuracy of the system. Notch compensation will be used to achieve stability for systems with lightly damped flexible modes, as we saw with the satellite attitude control having noncollocated actuator and sensor. In this section, we will examine techniques to select the parameters of these three schemes. Lead, lag, and notch compensations have historically been implemented using analog electronics and hence were often referred to as networks. Today, however, most new control system designs use digital computer technology, in which the compensation is implemented in the software. In this case, one needs to compute discrete equivalents to the analog transfer functions, as will be described in Chapter 8 and in Franklin et al. (1998).

Compensation with a transfer function of the form

\[D_{c}(s) = K\frac{s + z}{s + p} \]

is called lead compensation if \(z < p\) and lag compensation if \(z > p\). Compensation is typically placed in series with the plant, as shown in Fig. 5.21. It can also be placed in the feedback path, and in that location has the same effect on the overall system poles, but results in different

\(\ ^{15}\) The names of these compensation schemes derive from their frequency (sinusoidal) responses, wherein the output leads the input in one case (a positive phase shift) and lags the input in another (a negative phase shift). The frequency response of the third looks as if a notch had been cut in an otherwise flat frequency response. (See Chapter 6.)

Figure 5.21

Feedback system with compensation

Figure 5.22

Root loci for \(1 + D_{c}(s)G(s) = 0\), \(G(s) = \frac{1}{s(s + 1)}\), with compensation \(D_{c}(s) = K\) (solid lines) and with \(D_{c}(s) = K(s + 2)\) (dashed lines)

transient responses from reference inputs. The characteristic equation of the system in Fig. 5.21 is

\[\begin{matrix} 1 + D_{c}(s)G(s) & \ = 0 \\ 1 + KL(s) & \ = 0 \end{matrix}\]

where \(K\) and \(L(s)\) are selected to put the equation in root-locus form as before.

Design Using Lead Compensation

To explain the basic stabilizing effect of lead compensation on a system, we first consider proportional control for which \(D_{c}(s) = K\). If we apply this compensation to a second-order position control system with normalized transfer function

\[G(s) = \frac{1}{s(s + 1)} \]

the root locus with respect to \(K\) is shown as the solid-line portion of the locus in Fig. 5.22. Also shown in Fig. 5.22 is the locus produced by proportional plus derivative control, where \(D_{c}(s) = K(s + 2)\). The modified locus is the circle sketched with dashed lines. As we saw in the previous examples, the effect of the zero is to move the locus to the left, toward the more stable part of the \(s\)-plane. Now, if our speedof-response specification calls for \(\omega_{n} \cong 2rad/sec\), then proportional control alone \(\left( D_{c} = K \right)\) can produce only a very low value of damping ratio \(\zeta\) when the roots are put at the required value of \(\omega_{n}\). Hence, at

Selection of the zero and pole of a lead

Figure 5.23

Root loci for three cases with \(G(s) = \frac{1}{s(s + 1)}\) :

(a) \(D_{c}(s) = \frac{(s + 2)}{(s + 20)}\);

(b) \(D_{c}(s) = \frac{(s + 2)}{(s + 10)}\);

(c) \(D_{c}(s) = s + 2\) (solid lines)

the required gain, the transient overshoot will be substantial. However, by adding the zero of PD control, we can move the locus to a position having closed-loop roots at \(\omega_{n} = 2rad/sec\) and damping ratio \(\zeta \geq 0.5\). We have "compensated" the given dynamics by using \(D_{c}(s) = K(s + 2)\).

As we observed earlier, pure derivative control is not normally practical because of the amplification of sensor noise implied by the differentiation and must be approximated. If the pole of the lead compensation is placed well outside the range of the design \(\omega_{n}\), then we would not expect it to upset the dynamic response of the design in a serious way. For example, consider the lead compensation

\[D_{c}(s) = K\frac{s + 2}{s + p} \]

The root loci for two cases with \(p = 10\) and \(p = 20\) are shown in Fig. 5.23, along with the locus for PD control. The important fact about these loci is that for small gains, before the real root departing from \(- p\) approaches -2 , the loci with lead compensation are almost identical to the locus for which \(D_{c}(s) = K(s + 2)\). Note the effect of the pole is to lower the damping, but for the early part of the locus, the effect of the pole is not great if \(p > 10\).

Selecting exact values of \(z\) and \(p\) in Eq. (5.70) for particular cases is often done by trial and error, which can be minimized with experience. In general, the zero is placed in the neighborhood of the closed-loop \(\omega_{n}\), as determined by rise-time or settling-time requirements, and the pole is located at a distance 5 to 25 times the value of the zero location. But there are trade-offs to consider. The choice of the exact pole location is a compromise between the conflicting effects of noise suppression, for which one wants a small value for \(p\), and compensation effectiveness for which one wants a large \(p\). In general, if the pole is too close to the zero, then, as seen in Fig. 5.23, the root locus does not move as much from its uncompensated shape, and the zero is not as successful in doing its job.
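The trade-off is easy to explore by regenerating the loci of Fig. 5.23 for candidate pole locations (a sketch of our own, paralleling the figure):

% Compare pure PD control with lead compensation for p = 10 and p = 20.
s = tf('s');
sysG = 1/(s*(s + 1));
hold on
rlocus((s + 2)*sysG) % PD control, Dc(s) = K(s + 2)
for p = [10 20]
    rlocus((s + 2)/(s + p)*sysG) % lead with pole at -p
end
hold off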


On the other hand, for reasons that are perhaps easier to understand from the frequency response, when the pole is too far to the left, the magnification of sensor noise appearing at the output of \(D_{c}(s)\) is too great and the motor or other actuator of the process can be overheated by noise energy in the control signal, \(u(t)\). With a large value of \(p\), the lead compensation approaches pure PD control. A simple example will illustrate the approach.

EXAMPLE 5.11

Design Using Lead Compensation

Find a compensation for \(G(s) = 1/\lbrack s(s + 1)\rbrack\) that will provide overshoot of no more than \(20\%\) and rise time of no more than \(0.3sec\).

Solution. From Chapter 3, we estimate that a damping ratio of \(\zeta \geq 0.5\) and a natural frequency of \(\omega_{n} \cong \frac{1.8}{0.3} \cong 6rad/sec\) should satisfy the requirements. To provide some margin, we will shoot for \(\zeta \geq 0.5\) and \(\omega_{n} \geq 7rad/sec\). Considering the root loci plotted in Fig. 5.23, we will first try

\[D_{c}(s) = K\frac{s + 2}{s + 10} \]

Figure 5.24 shows that \(K = 70\) will yield \(\zeta = 0.56\) and \(\omega_{n} = 7.7rad/sec\), which satisfies the goals based on the initial estimates. The third pole will be at \(s = - 2.4\) with \(K = 70\). Because this third pole is so near the lead zero at -2 , the overshoot should not be increased very much from the second-order case. However, Fig. 5.25 shows that the step response of the system exceeds the overshoot specification by a small amount. Typically, lead compensation in the forward path will increase the step-response overshoot because the zero of the compensation has a differentiating effect, as was discussed in Chapter 3. The rise-time specification has been met because the time for the amplitude to go from 0.1 to 0.9 is less than \(0.3sec\).

Figure 5.24

Root locus for lead design

Figure 5.25

Step response for Example 5.11

We want to adjust the compensator to achieve better damping in order to reduce the overshoot in the transient response. Generally, it is best to increase \(p\) in order to increase damping, providing the \(p/z\) ratio stays below approximately 25 . Clearly, there is not much increase in damping required for this example. So a logical choice would be to increase \(p\) by a modest amount, say, from 10 to 13 . This means the lead compensator becomes

\[D_{c}(s) = K\frac{(s + 2)}{(s + 13)}\text{.}\text{~} \]

The root locus with this change can be created using the Matlab statements:

s = tf('s');
sysG = 1/(s*(s + 1));
sysD = (s + 2)/(s + 13);
rlocus(sysG*sysD)
grid on

It is shown in Fig. 5.26. It shows that complex roots are possible at a natural frequency greater than \(8rad/sec\) with a damping greater than 0.64. Placing your cursor on the locus at the point marked in the figure shows that \(K = 91\) at that location and it will produce a damping \(\zeta = 0.67\) and \(\omega_{n} = 8.63rad/sec\). These values appear to be better than the first iteration so that the overshoot and time response should be satisfied. In fact, the additional Matlab statements:

sysD = 91*(s + 2)/(s + 13);
sysCL = feedback(sysG*sysD, 1);
step(sysCL)

produce the time response shown in Fig. 5.27, which shows that the time domain specifications are met. That is, \(t_{r} < 0.3sec\) and \(M_{p} < 20\%\).
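The same check can be made without reading values off the plot; stepinfo (a standard Control System Toolbox function) tabulates the step-response metrics:

% Numeric check of the Example 5.11 specifications.
s = tf('s');
sysG = 1/(s*(s + 1));
sysD = 91*(s + 2)/(s + 13);
sysCL = feedback(sysG*sysD, 1);
S = stepinfo(sysCL);
S.RiseTime % 10-90% rise time, should be under 0.3 sec
S.Overshoot % percent overshoot, should be under 20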

Figure 5.26

Root Locus for

\(D_{c}(s) = K\frac{(s + 2)}{(s + 13)}\) with dotted lines for constant \(\zeta\) and \(\omega_{n}\)

Figure 5.27

Time response for \(D_{c}(s) = 91\frac{(s + 2)}{(s + 13)}\)

As stated earlier, the name lead compensation is a reflection of the fact that to sinusoidal signals, these transfer functions impart phase lead. For example, the phase of Eq. (5.70) at \(s = j\omega\) is given by


\[\phi = \tan^{- 1}\left( \frac{\omega}{z} \right) - \tan^{- 1}\left( \frac{\omega}{p} \right) \]

If \(z < p\), then \(\phi\) is positive, which by definition indicates phase lead. The details of design using the phase angle of the lead compensation will be treated in Chapter 6.

Design Procedure for Lead Compensation

1. Determine where the closed-loop roots need to be in the \(s\)-plane in order to meet the desired specifications on the speed of response and damping (or overshoot).

(a) pick the limits for \(\omega_{n}\) and \(\zeta\), or

(b) pick the limits for \(\sigma\) and \(\omega_{d}\).

2. Create the root locus vs. \(K\) with no compensation.

3. If more damping is required, select a value of \(z\) in Eq. (5.70) to be approximately \(1/4\) to 1 times the value of the desired \(\omega_{n}\) or \(\omega_{d}\), and pick \(p\) to be approximately \(10z\).

4. Examine the resulting root locus, and adjust as necessary to meet the required specifications as determined in step 1.

(a) decrease \(p\) if less damping is needed,

(b) increase \(p\) if more damping is needed, and/or decrease \(z\),

(c) it is desirable to keep the value of \(p/z\) as low as possible \((p/z \lesssim 25)\) in order to minimize the amplification of sensor noise by the compensation.

5. When the values of \(z\) and \(p\) are selected so that the resulting locus passes through an acceptable region of the \(s\)-plane, determine the value of \(K\) to select the closed-loop root locations.

6. Verify that all time domain specifications are met by examining the time response to a unit step input, and adjust the desired \(s\)-plane specifications if needed and go back to step 2.

7. Determine if the resulting value of \(K\) meets the steady-state error requirements, if any. If a value of \(K\) cannot be found that meets the requirement, then add integral control or a lag compensator.

Design Using Lag Compensation

Once satisfactory dynamic response has been obtained, perhaps by using one or more lead compensations, we may discover that the low-frequency gain (the value of the relevant steady-state error constant, such as \(K_{v}\)) is still too low. As we saw in Chapter 4, the system type, which determines the degree of the polynomial the system is capable of following, is determined by the order of the pole of the transfer function \(D_{c}(s)G(s)\) at \(s = 0\). If the system is Type 1, the velocity-error constant, which determines the magnitude of the error to a ramp input, is given by \(\lim_{s \rightarrow 0}\mspace{2mu} sD_{c}(s)G(s)\). In order to increase this constant, it is necessary to do so in a way that does not upset the already satisfactory dynamic

An example of lag compensation

response. Thus, we want an expression for \(D_{c}(s)\) that will yield a significant gain at \(s = 0\) to raise \(K_{v}\) (or some other steady-state error constant) but is nearly unity (no effect) at the higher frequency \(\omega_{n}\), where dynamic response is determined. The result is

\[D_{c}(s) = \frac{s + z}{s + p},\ z > p \]

where the values of \(z\) and \(p\) are very small compared with \(\omega_{n}\), yet \(D_{c}(0) = z/p = 3\) to 10 (the value depending on the extent to which the steady-state gain requires boosting). Because \(z > p\), the phase \(\phi\) given by Eq. (5.71) is negative, corresponding to phase lag. Hence, a device with this transfer function is called lag compensation.

The effects of lag compensation on dynamic response can be studied by looking at the corresponding root locus. Again, we take \(G(s) = \frac{1}{s(s + 1)}\) and include the lead compensation \(KD_{c1}(s) = \frac{K(s + 2)}{(s + 13)}\) that produced the locus in Fig. 5.26. With the gain \(K = 91\) from the previous tuned example, we find that the velocity constant is

\[\begin{matrix} K_{v} & \ = \lim_{s \rightarrow 0}\mspace{2mu} sKD_{c1}G \\ & \ = \lim_{s \rightarrow 0}\mspace{2mu} s(91)\frac{s + 2}{s + 13}\frac{1}{s(s + 1)} \\ & \ = \frac{91 \times 2}{13} = 14 \end{matrix}\]

Suppose we require that \(K_{v} = 70\sec^{- 1}\) in order to reduce the velocity error by a factor of 5 . To obtain this, we require a lag compensation with \(z/p = 5\) in order to increase the velocity constant by a factor of 5. This can be accomplished with a pole at \(p = - 0.01\) and a zero at \(z = - 0.05\), which keeps the values of both \(z\) and \(p\) very small so \(D_{c2}(s)\) would have little effect on the portions of the locus representing the dominant dynamics around \(\omega_{n} = 7rad/sec\). The result is a lag compensation with the transfer function of

\[D_{c2}(s) = \frac{(s + 0.05)}{(s + 0.01)} \]

The root locus with both lead and lag compensation is plotted in Fig. 5.28, and we see that, on the large scale shown on the left, the locus is not noticeably different from that in Fig. 5.26. This was the result of selecting very small values for the lag compensator pole and zero. With \(K = 91\), the dominant roots are at \(- 5.8 \pm j6.5\). The effect of the lag compensation can be seen by expanding the region of the locus around the origin, as shown on the right side of Fig. 5.28. Here we can see the circular locus that is a result of the very small lag pole and zero. A closed-loop root remains very near the lag-compensation zero at \(- 0.05 + 0j\); therefore, the transient response corresponding to this root will be a very slowly decaying term, which will have a small magnitude because the zero will almost cancel the pole in the transfer function.

Figure 5.28

Root locus with both lead and lag compensation

Still, the decay is so slow that this term may seriously influence the settling time. Furthermore, the zero will not be present in the step response to a disturbance torque, and the slow transient will be much more evident there. Because of this effect, it is important to place the lag pole-zero combination at as high a frequency as possible without causing major shifts in the dominant root locations.

Design Procedure for Lag Compensation

  1. Determine the amount of gain amplification to be contributed by the lag compensation at low frequencies in order to achieve the desired \(K_{p}\) or \(K_{v}\) or \(K_{a}\) as determined by Eqs. (4.36-4.38).

  2. Select the value of \(z\) in Eq. (5.72) so it is approximately a factor of 100 to 200 smaller than the system dominant natural frequency.

  3. Select the value of \(p\) in Eq. (5.72) so that \(z/p\) is equal to the desired gain amplification determined in step 1.

  4. Examine the resulting root locus to verify that the frequency and damping of the dominant closed-loop roots are still satisfactory. If not, adjust the lead compensation as needed.

  5. Verify that all time domain specifications are met by examining the time response to a unit step input. If the slow root introduced by the lag compensation is too slow, increase the values of \(z\) and \(p\) somewhat while keeping \(z/p\) constant, and go back to step 4. However, do so with the understanding that the closer the values of the lag compensator's \(z\) and \(p\) come to the dominant roots of the closed-loop system, the more they will affect those dominant root characteristics. A numerical check of step 1 for the preceding example is sketched below.
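As a numerical check of step 1 for the example above (a sketch, assuming the lead and lag values already chosen), the velocity constant of the combined design can be computed directly; minreal cancels the common factor of \(s\) before the DC gain is evaluated:

s = tf('s');
sysG = 1/(s*(s+1));
sysLead = 91*(s+2)/(s+13);       % lead compensation with K = 91
sysLag = (s+0.05)/(s+0.01);      % lag compensation, z/p = 5
Kv = dcgain(minreal(s*sysLead*sysLag*sysG))   % returns Kv = 70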

171.0.2. Design Using Notch Compensation\(^{16}\)

Suppose the design has been completed with lead and lag compensation given by

\[KD_{c}(s) = 91\left\lbrack \frac{s + 2}{s + 13} \right\rbrack\left\lbrack \frac{s + 0.05}{s + 0.01} \right\rbrack \]

but is found to have a substantial oscillation at about \(50\ rad/sec\) when tested, because there was an unsuspected flexibility of the noncollocated type at a natural frequency of \(\omega_{n} = 50\ rad/sec\). On reexamination, the plant transfer function, including the effect of the flexibility, is estimated to be

\[G(s) = \frac{2500}{s(s + 1)\left( s^{2} + s + 2500 \right)} \]

A mechanical engineer claims that some of the "control energy" has spilled over into the lightly damped flexible mode and caused it to be excited. In other words, as we saw from the similar system whose root locus is shown in Fig. 5.18, the very lightly damped roots at \(50\ rad/sec\) have been made even less damped, or perhaps unstable, by the feedback. The best way to fix this situation is to modify the structure so there is a mechanical increase in damping. Unfortunately, this is often not possible because the problem is found too late in the design cycle. If it isn't possible, how else can this oscillation be corrected? There are at least two possibilities. An additional lag compensation might lower the loop gain far enough that there is greatly reduced spillover and the oscillation is eliminated. Reducing the gain at the high frequency is called gain stabilization.

Gain stabilization

Phase stabilization

If the response time resulting from gain stabilization is too long, a second alternative is to add a zero near the resonance so as to shift the departure angles from the resonant poles and cause the closed-loop root to move into the LHP, thus causing the associated transient to die out. This approach is called phase stabilization, and its action is similar to that of flexibility in the collocated motion control discussed earlier. Gain and phase stabilization will be explained more precisely by their effect on the frequency response in Chapter 6, where these methods of stabilization will be discussed further. For phase stabilization, the result is called a notch compensation, and an example has the transfer function

\[D_{notch}(s) = \frac{s^{2} + 2\zeta\omega_{o}s + \omega_{o}^{2}}{\left( s + \omega_{o} \right)^{2}} \]

A necessary design decision is whether to place the notch frequency above or below that of the natural resonance of the flexibility in order to get the necessary phase. A check of the angle of departure shows that with the plant as compensated by Eq. (5.73) and the notch as given, it is necessary to place the frequency of the notch above that of the resonance to get the departure angle to point toward the LHP. Thus the compensation is added with the transfer function

Figure 5.29

Root locus with lead, lag, and notch

Figure 5.30

Step response with lead and lag, with and without the notch filter

\[D_{\text{notch}\text{~}}(s) = \frac{s^{2} + 0.8s + 3600}{(s + 60)^{2}}\text{.}\text{~} \]

The gain of the notch at \(s = 0\) has been kept at 1 so as not to change the \(K_{v}\). The new root locus is shown in Fig. 5.29, and the step response is shown in Fig. 5.30 for the system with and without the notch compensation included. Note from the step responses that the notch damps the oscillations well but degrades the overshoot somewhat. The rise time specification was not affected. To rectify the increased overshoot and strictly meet all the specifications, further iteration should be carried out in order to provide more damping of the fast roots in the vicinity of \(\omega_{n} = 7\ rad/sec\).
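The comparison in Fig. 5.30 can be reproduced with a few Matlab statements; this is a sketch that assumes a unity-feedback closure of the loop described above:

s = tf('s');
sysG = 2500/(s*(s+1)*(s^2+s+2500));           % plant including the flexible mode
sysKDc = 91*(s+2)/(s+13)*(s+0.05)/(s+0.01);   % lead plus lag, Eq. (5.73)
sysNotch = (s^2+0.8*s+3600)/((s+60)^2);       % notch with unity DC gain
step(feedback(sysKDc*sysG,1), feedback(sysKDc*sysNotch*sysG,1))
legend('without notch','with notch')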

Figure 5.31

Possible circuit of a lead compensation

When considering notch or phase stabilization, it is important to understand that its success depends on maintaining the correct phase at the frequency of the resonance. If that frequency is subject to significant change, which is common in many cases, then the notch needs to be removed far enough from the nominal frequency in order to work for all cases. The result may be interference of the notch with the rest of the dynamics and poor performance. As a general rule, gain stabilization is substantially more robust to plant changes than is phase stabilization.

172. \(\Delta\) 5.4.4 Analog and Digital Implementations

Compensation can be physically realized in various ways. Most compensation can be implemented using analog electronics similar to that described in Section 2.2. However, it is very common today to implement compensation using digital devices.

As an example of an analog realization, a circuit diagram for lead compensation using an operational amplifier is shown in Fig. 5.31. The transfer function of the circuit in Fig. 5.31 is readily found by the methods from Chapter 2 to be

\[D_{\text{lead}\text{~}}(s) = - a\frac{s + z}{s + p} \]

where

\[\begin{matrix} a & \ = \frac{p}{z},\ \text{~}\text{if}\text{~}\ R_{f} = R_{1} + R_{2}, \\ z & \ = \frac{1}{R_{1}C}, \\ p & \ = \frac{R_{1} + R_{2}}{R_{2}} \cdot \frac{1}{R_{1}C}. \end{matrix}\]
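For instance, with hypothetical component values (chosen here for illustration, not taken from the text), these relations give a lead network with a pole-zero ratio of 11:

R1 = 100e3; R2 = 10e3; C = 1e-6;   % hypothetical component values
Rf = R1 + R2;                      % the choice that makes a = p/z
z = 1/(R1*C)                       % zero at 10 rad/sec
p = (R1+R2)/R2*(1/(R1*C))          % pole at 110 rad/sec
a = p/z                            % gain factor a = 11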

A short section describing the implementation of a lead compensation using a digital device and a comparison of the results with an analog implementation is contained in online Appendix W5.4.4. (See www.pearsonglobaleditions.com)

172.1. Design Examples Using the Root Locus

173. Control of a Quadrotor Drone Pitch Axis

For the quadrotor shown in Fig. 2.13, the transfer function between a pitch control input, \(T_{lon}\), and the pitch angle, \(\theta\), is

Figure 5.32

Block diagram for the quadrotor design Example 5.12

\[\frac{\theta(s)}{T_{lon}(s)} = G_{1}(s) = \frac{1}{s^{2}(s + 2)} \]

This is similar to the transfer function obtained in Eq. (2.15) in Chapter 2; however, an extra term has been added to account for the lag associated with the rotor coming up to the newly commanded thrust and speed. The lag term selected, \((s + 2)\), is for a fairly large quadrotor of perhaps 2 meters in diameter. The more detailed drone example in Chapter 10 (see Example 10.5) will include this term along with some of the aerodynamic terms. However, for purposes of understanding the essential control features, this simplified example should suffice. The block diagram of the control system is shown in Fig. 5.32. It shows the quadrotor dynamics given by \(\theta(s)/T_{lon}(s)\) and the compensator, \(D_{c}(s)\), to be designed via the root locus method. The desired specifications for this system are:

\[\begin{matrix} \omega_{n} & \geq 1\ rad/sec, \\ \zeta & \geq 0.44. \end{matrix}\]

Using lead compensation, find a set of parameters for \(D_{c}(s)\) that meet the required specifications.

Solution. Knowing the desired \(\omega_{n}\) and \(\zeta\) values is the first step in the Lead Compensation Design Procedure. The second step in the process is to determine a root locus for the uncompensated system. The ensuing Matlab commands will generate such a locus:
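s = tf('s');
sysG1 = 1/(s^2*(s+2));   % quadrotor pitch dynamics given above
rlocus(sysG1)
grid on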

Note the use of the grid command places the \(\omega_{n}\) and \(\zeta\) values on the root locus plot as an aid in determining whether the specifications are met. The result is shown in Fig. 5.33.

The uncompensated system exhibits increasing instability as the gain, \(K\), is increased; therefore, it is likely that significantly more lead will be required compared to Example 5.11, where the uncompensated

Figure 5.33

Uncompensated system, i.e., with \(D_{c}(s) = K\).

system was always stable, as was shown in Fig. 5.22. For the third step we select \(z = 1\) and \(p = 10\) in Eq. (5.70) so

\[D_{c}(s) = K\frac{s + 1}{s + 10} \]

This compensation is implemented into the quadrotor control system by the Matlab commands

s = tf('s');
sysG1 = 1/(s^2*(s+2));
sysD = (s+1)/(s+10);
rlocus(sysG1*sysD)
axis([-3 1 -2 2])
grid on

which produce the root locus in Fig. 5.34. It shows that no value of \(K\) will produce the level of damping required, that is, \(\zeta \geq 0.44\).

Clearly, significantly more damping from the compensator is required, so we move on to step \(\mathbf{4}\) in the procedure. For our next attempt, let's choose a value of \(z = 0.5\) instead of 1. However, that alone will still not make it possible to meet both specifications. Therefore, let's also increase \(p\) to 15 and examine whether that will create a locus with \(\zeta \geq 0.44\). The compensation is now

\[D_{c}(s) = K\frac{s + 0.5}{s + 15} \]

Figure 5.34

Compensated system with \(D_{c}(s) = K\frac{s + 1}{s + 10}\)

A root locus of the system with this compensator is found from the ensuing Matlab statements

s = tf('s');
sysG1 = 1/(s^2*(s+2));
sysD = (s+0.5)/(s+15);
rlocus(sysG1*sysD)
axis([-3 1 -2 2])
grid on

which produces the locus shown in Fig. 5.35.

Comparing the locus with the lines of constant damping shows that it comes very close to the \(\zeta = 0.5\) line, and thus will most likely satisfy the requirement that \(\zeta \geq 0.44\). Also note the point on the locus closest to the \(\zeta = 0.5\) line is approximately at \(\omega_{n} = 1\ rad/sec\). Thus, step \(\mathbf{5}\) consists of verifying this result, which can be carried out by placing your cursor on the Matlab-generated root locus at the point of best damping. Doing so shows that

\[\begin{matrix} K & \ = 30, \\ \omega_{n} & \ = 1.03,\text{~}\text{and}\text{~} \\ \zeta & \ = 0.446, \end{matrix}\]

which satisfies step \(\mathbf{5}\) in the design procedure and yields the value of \(K\) in the lead compensation. Therefore, we now have the complete set of parameters, and the final design is

\[D_{c}(s) = 30\frac{s + 0.5}{s + 15} \]
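As a quick sketch of the verification (assuming a unity-feedback closure with \(D_{c}(s)\) in the forward path, and sysG1 as defined above), the closed-loop roots can be checked with damp:

sysDc = 30*(s+0.5)/(s+15);      % final lead design
damp(feedback(sysDc*sysG1,1))   % lists wn and zeta of each closed-loop root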

Figure 5.35

Root locus of the quadrotor with \(D_{c}(s) = K\frac{s + 0.5}{s + 15}\)

174. Control of a Small Airplane

For the Piper Dakota shown in Fig. 5.36, the transfer function between the elevator input and the pitch attitude is

\[G(s) = \frac{\theta(s)}{\delta_{e}(s)} = \frac{160(s + 2.5)(s + 0.7)}{\left( s^{2} + 5s + 40 \right)\left( s^{2} + 0.03s + 0.06 \right)}, \]

where

\[\begin{matrix} \theta & \ = \text{~}\text{pitch attitude, degrees (see Fig. 10.30)}\text{~} \\ \delta_{e} & \ = \text{~}\text{elevator angle, degrees.}\text{~} \end{matrix}\]

(For a more detailed discussion of longitudinal aircraft motion, refer to Section 10.3.) Note in the preceding quadrotor example that, because no steady-state requirements were given, steps \(\mathbf{6}\) and \(\mathbf{7}\) of the design procedure did not apply. Had the damping or frequency specifications not been met, it would have been necessary to return to step \(\mathbf{2}\) and revise the desired \(\omega_{n}\) and \(\zeta\) so as to improve the situation. If steady-state error requirements had not been met, it is sometimes possible to increase \(K\) and still meet the other specifications; however, in this case any increase in \(K\) from the selected value would have reduced the damping, so a lag compensator or integral control would have been required had a higher value of \(K\) been necessary.

Figure 5.36

Autopilot design in the Piper Dakota, showing elevator and trim tab Source: Photos courtesy of Denise Freeman


  1. Design an autopilot so the response to a step elevator input has a rise time of \(1\ sec\) or less and an overshoot of less than \(10\%\).

  2. When there is a constant disturbing moment acting on the aircraft so the pilot must supply a constant force on the controls for steady flight, it is said to be out of trim. The transfer function between the disturbing moment and the attitude is the same as that due to the elevator; that is,

\[\frac{\theta(s)}{M_{d}(s)} = \frac{160(s + 2.5)(s + 0.7)}{\left( s^{2} + 5s + 40 \right)\left( s^{2} + 0.03s + 0.06 \right)}, \]

where \(M_{d}\) is the moment acting on the aircraft. There is a separate aerodynamic surface for trimming, \(\delta_{t}\), that can be actuated and will change the moment on the aircraft. It is shown in the closeup of the tail in Fig. 5.36(b), and its influence is depicted in the block diagram shown in Fig. 5.37(a). For both manual and autopilot flight, it is desirable to adjust the trim so there is no steady-state control effort required from the elevator (that is, so \(\delta_{e} = 0\) ). In manual flight, this means no force is required by the pilot to keep the aircraft at a constant altitude, whereas in autopilot control it means reducing the amount of electrical power required and saving


Figure 5.37

Block diagrams for autopilot design: (a) open loop; (b) feedback scheme excluding trim

wear and tear on the servomotor that drives the elevator. Design an autopilot that will command the trim \(\delta_{t}\) so as to drive the steady-state value of \(\delta_{e}\) to zero for an arbitrary constant moment \(M_{d}\), as well as meet the specifications in part (a).

175. Solution

  1. To satisfy the requirement that the rise time \(t_{r} \leq 1\ sec\), Eq. (3.68) indicates that, for the ideal second-order case, \(\omega_{n}\) must be greater than \(1.8\ rad/sec\). To provide an overshoot of less than \(10\%\), Fig. 3.24 indicates that \(\zeta\) should be greater than 0.6, again for the ideal second-order case. In the design process, we can examine a root locus for a candidate feedback compensation and then look at the resulting time response when the roots appear to satisfy the design guidelines. However, since this is a fourth-order system, the design guidelines might not be sufficient, or they might be overly restrictive.

To initiate the design process, it is often instructive to look at the system characteristics with proportional feedback, that is, where \(D_{c}(s) = 1\) in Fig. 5.37(b). The statements in Matlab to create a root locus with respect to \(K\) and a time response for the proportional feedback case with \(K = 0.3\) are as follows:

s = tf('s');
sysG = (160*(s+2.5)*(s+0.7))/((s^2+5*s+40)*(s^2+0.03*s+0.06));
rlocus(sysG)
K = 0.3;
sysL = K*sysG;
sysT = feedback(sysL,1);
step(sysT)

The resulting root locus and time response are shown with dashed lines in Figs. 5.38 and 5.39. Notice from Fig. 5.38 that the two faster roots will always have a damping ratio \(\zeta\) that is less than 0.4; therefore, proportional feedback will not be acceptable. Also, the slower roots have some effect on the time response shown

Figure 5.38

Root loci for autopilot design

Figure 5.39

Time-response plots for autopilot design

in Fig. 5.39 (dashed curve) with \(K = 0.3\) in that they cause a long-term settling. However, the dominating characteristic of the response that determines whether or not the compensation meets the specifications is the behavior in the first few seconds, which is dictated by the fast roots. The low damping of the fast roots causes the time response to be oscillatory, which leads to excess overshoot and a longer settling time than desired.

Lead compensation via Matlab
We saw in Section 5.4.1 that lead compensation causes the locus to shift to the left, a change needed here to increase the damping. Some trial and error will be required to arrive at a suitable pole and zero location. Values of \(z = 3\) and \(p = 20\) in Eq. (5.70) have a substantial effect in moving the fast branches of the locus to the left; thus

\[D_{c}(s) = \frac{s + 3}{s + 20} \]

Trial and error is also required to arrive at a value of \(K\) that meets the specifications. The statements in Matlab to add this compensation are as follows:

sysD = (s+3)/(s+20);
sysDG = sysD*sysG;
rlocus(sysDG)
K = 1.5;
sysKDG = K*sysDG;
sysT = feedback(sysKDG,1);
step(sysT)

The root locus for this case and the corresponding time response are also shown in Figs. 5.38 and 5.39 by the solid lines. Note the damping of the fast roots corresponding to \(K = 1.5\) is \(\zeta = 0.52\), which is slightly lower than we would like; also, the natural frequency is \(\omega_{n} = 15\ rad/sec\), much faster than we need. However, these values are close enough to meeting the guidelines to suggest a look at the time response. In fact, the time response shows that \(t_{r} \cong 0.9\ sec\) and \(M_{p} \cong 8\%\), both within the specifications, although by a very slim margin.

In summary, the primary design path consisted of adjusting the compensation to influence the fast roots, examining their effect on the time response, and continuing the design iteration until the time specifications were satisfied.

  2. The purpose of the trim is to provide a moment that will eliminate a steady-state nonzero value of the elevator. Therefore, if we integrate the elevator command \(\delta_{e}\) and feed this integral to the trim device, the trim should eventually provide the moment required to hold an arbitrary altitude, thus eliminating the need for a steady-state \(\delta_{e}\). This idea is shown in Fig. 5.40(a). If the gain on the integral term \(K_{I}\) is small enough, the destabilizing effect of adding the integral should be small, and the system should behave approximately as before, since that feedback loop has been left intact. The block diagram in Fig. 5.40(a) can be reduced to that in Fig. 5.40(b) for analysis purposes by defining the compensation to include the PI form

\[D_{I}(s) = KD_{c}(s)\left( 1 + \frac{K_{I}}{s} \right) \]


Figure 5.40

Block diagram showing the trim-command loop

However, it is important to keep in mind that, physically, there will be two outputs from the compensation: \(\delta_{e}\) (used by the elevator servomotor) and \(\delta_{t}\) (used by the trim servomotor).

The characteristic equation of the system with the integral term is

\[1 + KD_{c}G + \frac{K_{I}}{s}KD_{c}G = 0 \]

To aid in the design process, it is desirable to find the locus of roots with respect to \(K_{I}\), but the characteristic equation is not in any of the root-locus forms given by Eqs. (5.6)-(5.9). Therefore, dividing by \(1 + KD_{c}G\) yields

\[1 + \frac{\left( K_{I}/s \right)KD_{c}G}{1 + KD_{c}G} = 0 \]

To put this system in root locus form, we define

\[L(s) = \frac{1}{s}\frac{KD_{c}G}{1 + KD_{c}G} \]

so \(K_{I}\) becomes the root-locus parameter. In Matlab, with \(KD_{c}G/\left( 1 + KD_{c}G \right)\) already computed as sysT, we construct the integrator as sysIn = 1/s and the loop gain of the system with respect to \(K_{I}\) as sysL = sysIn*sysT; the root locus with respect to \(K_{I}\) is then found with sisotool('rlocus',sysL).
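Collecting those statements (a sketch, assuming sysT from part (a) is still in the workspace):

sysIn = 1/s;           % integrator
sysL = sysIn*sysT;     % loop transfer function seen by K_I
rlocus(sysL)           % locus versus K_I, as in Fig. 5.41
KI = 0.15;
r = rlocus(sysL,KI)    % closed-loop roots for the selected K_I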

It can be seen from the locus in Fig. 5.41 that the damping of the fast roots decreases as \(K_{I}\) increases, as is typically the case when integral control is added. This shows the necessity for keeping the value of \(K_{I}\) as low as possible. After some trial and error, we select \(K_{I} = 0.15\). This value has little effect on the roots (note they are virtually on top of the previous roots obtained without the integral term) and little effect on the short-term behavior of the step response, as shown in Fig. 5.42(a), so the specifications are still met. \(K_{I} = 0.15\) does cause the longer-term attitude behavior to approach the commanded value with no error, as we

Figure 5.41

Root locus versus \(K_{I}\) : assumes an added integral term and lead compensation with a gain \(K = 1.5\); roots for \(K_{I} = 0.15\) marked with \(\bullet\)

Figure 5.42

Step response for the case with an integral term and \(5^{\circ}\) command

would expect with integral control. It also causes \(\delta_{e}\) to approach zero (Fig. 5.42(b) shows it settling in approximately \(30\ sec\)), which is good because this is the reason for choosing integral control in the first place. The time for the integral to reach the correct value is predicted by the new, slow real root that is added by the integral term at \(s = -0.14\). The time constant associated with this root is \(\tau = 1/0.14 \cong 7\ sec\). The settling time to \(1\%\) for a root with \(\sigma = 0.14\) is shown by Eq. (3.73) to be \(t_{s} = 33\ sec\), which agrees with the behavior in Fig. 5.42(b).

175.1. Extensions of the Root-Locus Method

As we have seen in this chapter, the root-locus technique is a graphical scheme to show locations of possible roots of an algebraic equation as a single real parameter varies. The method can be extended to consider negative values of the parameter, a sequential consideration of more than one parameter, and systems with time delay. In this section, we examine these possibilities. Another interesting extension to nonlinear systems will be discussed in Chapter 9.

175.1.1. Rules for Plotting a Negative \(\left( 0^{\circ} \right)\) Root Locus

We now consider modifying the root-locus procedure to permit analysis of negative values of the parameter. In a number of important cases, the transfer function of the plant has a zero in the RHP and is said to be nonminimum phase. The result is often a locus of the form \(1 + A(z_{i} - s)G'(s) = 1 + (-A)(s - z_{i})G'(s) = 0\), and in the standard form the parameter \(K = -A\) must be negative. Another important issue calling for understanding of the negative locus arises in building a control system. In any physical implementation of a control system, there are inevitably a number of amplifiers and components whose gain sign must be selected. By Murphy's Law,\(^{17}\) when the loop is first closed, the sign will be wrong and the behavior will be unexpected unless the engineer understands how the response will go if a gain that should be positive is instead negative. So what are the rules for a negative locus (a root locus relative to a negative parameter)? First of all, Eqs. (5.6)-(5.9) must be satisfied for negative values of \(K\), which implies that \(L(s)\) is real and positive. In other words, for the negative locus, the phase condition is

Definition of a Negative Root Locus
The angle of \(L(s)\) is \(0^{\circ} + 360^{\circ}(l - 1)\) for \(s\) on the negative locus.

The steps for plotting a negative locus are essentially the same as for the positive locus, except that we search for places where the angle

\(\ ^{17}\) Anything that can go wrong, will go wrong.
of \(L(s)\) is \(0^{\circ} + 360^{\circ}(l - 1)\) instead of \(180^{\circ} + 360^{\circ}(l - 1)\). For this reason, a negative locus is also referred to as a \(0^{\circ}\) root locus. This time we find that the locus is to the left of an even number of real poles plus zeros (the number zero being even). Computation of the center of the asymptotes for large values of \(s\) is, as before, given by

\[\alpha = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m} \]

but we modify the angles to be

\[\phi_{l} = \frac{360^{\circ}(l - 1)}{n - m},\text{~}\text{where}\text{~}l = 1,2,3,\ldots,n - m \]

(shifted by \(\frac{180^{\circ}}{(n - m)}\) from the \(180^{\circ}\) locus). Following are the guidelines for plotting a \(0^{\circ}\) locus:

RULE 1. (As before) The \(n\) branches of the locus leave the poles and \(m\) branches approach the zeros and \(n - m\) branches approach the asymptotes.

RULE 2. The locus is on the real axis to the left of an even number of real poles plus zeros.

RULE 3. The asymptotes are described by

\[\begin{matrix} \alpha & \ = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m} = \frac{- a_{1} + b_{1}}{n - m} \\ \phi_{l} & \ = \frac{360^{\circ}(l - 1)}{n - m},\ l = 1,2,3,\ldots,n - m. \end{matrix}\]

Notice the angle condition here is measured from \(0^{\circ}\) rather than from \(180^{\circ}\), as it was in the positive locus.

RULE 4. Departure angles from poles and arrival angles to zeros are found by searching in the near neighborhood of the pole or zero where the phase of \(L(s)\) is \(0^{\circ}\), so that

\[\begin{matrix} q\phi_{dep} & \ = \sum_{}^{}\ \psi_{i} - \sum_{}^{}\ \phi_{i} - 360^{\circ}(l - 1) \\ q\psi_{arr} & \ = \sum_{}^{}\ \phi_{i} - \sum_{}^{}\ \psi_{i} + 360^{\circ}(l - 1) \end{matrix}\]

where \(q\) is the order of the pole or zero and \(l\) takes on \(q\) integer values such that the angles are between \(\pm 180^{\circ}\).

RULE 5. The locus can have multiple roots at points on the locus, and the branches will approach a point of \(q\) roots at angles separated by

\[\frac{180^{\circ} + 360^{\circ}(l - 1)}{q} \]

and will depart at angles with the same separation.

The result of extending the guidelines for constructing root loci to include negative parameters is that we can visualize the root locus as a set of continuous curves showing the location of possible solutions to
the equation \(1 + KL(s) = 0\) for all real values of \(K\), both positive and negative. One branch of the locus departs from every pole in one direction for positive values of \(K\), and another branch departs from the same pole in another direction for negative \(K\). Likewise, all zeros will have two branches arriving, one with positive and the other with negative values of \(K\). For the \(n - m\) excess poles, there will be \(2(n - m)\) branches of the locus asymptotically approaching infinity as \(K\) approaches positive and negative infinity, respectively. For a single pole or zero, the angles of departure or arrival for the two locus branches will be \(180^{\circ}\) apart. For a double pole or zero, the two positive branches will be \(180^{\circ}\) apart, and the two negative branches will be at \(90^{\circ}\) to the positive branches.

The negative locus is often required when studying a nonminimum phase transfer function. A well-known example is that of the control of liquid level in the boiler of a steam power plant. If the level is too low, the actuator valve adds (relatively) cold water to the boiling water in the vessel. As demonstrated in Fig. 3.31, the initial effect of the addition is to slow down the rate of boiling, which reduces the number and size of the bubbles and causes the level to fall momentarily, before the added volume and heat cause it to rise again to the new increased level. This initial underflow is typical of nonminimum phase systems. Another typical nonminimum phase transfer function is that of the altitude control of an airplane. To make the plane climb, the upward deflection of the elevators initially causes the plane to drop before it rotates and climbs. A Boeing 747 in this mode can be described by the scaled and normalized transfer function

\[G(s) = \frac{6 - s}{s\left( s^{2} + 4s + 13 \right)} \]

To put \(1 + KG(s)\) in root-locus form, we need to multiply by -1 to get

\[G(s) = - \frac{s - 6}{s\left( s^{2} + 4s + 13 \right)} \]

Negative Root Locus for an Airplane

Sketch the negative root locus for the equation

\[1 + \frac{K(s - 3)}{s\left( s^{2} + 5s + 19 \right)} = 0 \]

176. Solution

RULE 1. There are three branches and two asymptotes.

RULE 2. A real-axis segment is to the right of \(s = 3\) and a segment is to the left of \(s = 0\).

RULE 3. The angles of the asymptotes are \(\phi_{l} = \frac{360^{\circ}(l - 1)}{2} = 0^{\circ}, 180^{\circ}\).

RULE 4. The branch departs the pole at \(s = -2.5 + j3.5707\) at the angle found by setting the phase of \(L(s)\) to \(0^{\circ}\) near the pole: \(\phi_{dep} = \psi_{1} - \phi_{1} - \phi_{2} = 147^{\circ} - 125^{\circ} - 90^{\circ} = -68^{\circ}\).

Figure 5.43

Negative root locus corresponding to \(\frac{K(s - 3)}{s\left( s^{2} + 5s + 19 \right)}\)

Figure 5.44

Block diagram of a servomechanism structure, including tachometer feedback

The locus, plotted by Matlab in Fig. 5.43, is seen to be consistent with these values.
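Matlab's rlocus draws the \(180^{\circ}\) locus, so one common device for plotting a negative locus is to negate \(L(s)\): the roots of \(1 + KL(s) = 0\) for \(K < 0\) are the roots of \(1 + |K|(-L(s)) = 0\). A sketch for this example:

s = tf('s');
sysL = (s-3)/(s*(s^2+5*s+19));
rlocus(-sysL)   % 0-degree locus of L(s) = 180-degree locus of -L(s)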

178. \(\Delta\) 5.6.2 Successive Loop Closure

An important technique for practical control is to consider a structure with two loops: an inner loop around an actuator or part of the process dynamics, and an outer loop around the entire plant plus inner controller. The process is called successive loop closure. A controller is selected for the inner loop to be robust and give good response alone; then the outer loop can be designed to be simpler and more effective than if the entire control were done without the aid of the inner loop. The use of the root locus to study such a system with two parameters can be illustrated by a simple example.

EXAMPLE 5.15 Root Locus Using Two Parameters in Succession

A block diagram of a relatively common servomechanism structure is shown in Fig. 5.44. Here a speed-measuring device (a tachometer) is

available and the problem is to use the root locus to guide the selection of the tachometer gain \(K_{T}\) as well as the amplifier gain \(K_{A}\). The characteristic equation of the system in Fig. 5.44 is

\[1 + \frac{K_{A}}{s(s + 1)} + \frac{K_{T}}{s + 1} = 0 \]

which is not in the standard \(1 + KL(s)\) form. After clearing fractions, the characteristic equation becomes

\[s^{2} + s + K_{A} + K_{T}s = 0 \]

which is a function of two parameters, whereas the root locus technique can consider only one parameter at a time. In this case, we set the gain \(K_{A}\) to a nominal value of 4 and consider first the locus with respect to \(K_{T}\). With \(K_{A} = 4\), Eq. (5.85) can be put into root-locus form for a root-locus study with respect to \(K_{T}\) with \(L(s) = \frac{s}{s^{2} + s + 4}\), or

\[1 + K_{T}\frac{s}{s^{2} + s + 4} = 0. \]

For this root locus, the zero is at \(s = 0\) and the poles are at the roots of \(s^{2} + s + 4 = 0\), or \(s = - \frac{1}{2} \pm 1.94j\). A sketch of the locus using the rules as before is shown in Fig. 5.45.

From this locus, we can select \(K_{T}\) so the complex roots have a specific damping ratio or take any other value of \(K_{T}\) that would result in satisfactory roots for the characteristic equation. Consider \(K_{T} = 1\). Having selected a trial value of \(K_{T}\), we can now re-form the equation to consider the effects of changing from \(K_{A} = 4\) by taking the new parameter to be \(K_{1}\) so \(K_{A} = 4 + K_{1}\). The locus with respect to \(K_{1}\) is governed by Eq. (5.50), now with \(L(s) = \frac{1}{s^{2} + 2s + 4}\), so the locus is for the equation

\[1 + K_{1}\frac{1}{s^{2} + 2s + 4} = 0 \]

Note the poles of the new locus corresponding to Eq. (5.87) are the roots of the previous locus, which was drawn versus \(K_{T}\), and the roots were taken at \(K_{T} = 1\). The locus is sketched in Fig. 5.46, with the previous locus versus \(K_{T}\) left dashed. We could draw a locus with respect to

Figure 5.45

Root locus of closed-loop poles of the system in Fig. 5.44 versus \(K_{T}\)

Figure 5.46

Root locus versus

\(K_{1} = K_{A} - 4\) after choosing \(K_{T} = 1\)

\(K_{1}\) for a while, stop, resolve the equation, and continue the locus with respect to \(K_{T}\), in a sort of see-saw between the parameters \(K_{A}\) and \(K_{T}\), and thus use the root locus to study the effects of two parameters on the roots of a characteristic equation. Notice, of course, we can also plot the root locus for negative values of \(K_{1}\), and thus consider values of \(K_{A}\) less than 4.
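The see-saw between the two parameters can be scripted directly; a minimal sketch of the two loci of this example:

s = tf('s');
KA = 4;                           % nominal amplifier gain
rlocus(s/(s^2 + s + KA))          % locus versus K_T (Fig. 5.45)
KT = 1;                           % value selected from the first locus
rlocus(1/(s^2 + (1+KT)*s + KA))   % locus versus K_1 (Fig. 5.46)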

179. Control of a Quadrotor Drone \(x\)-Axis Position

For the quadrotor pitch angle control shown in Fig. 5.32, the transfer function between the pitch control input, \(T_{\theta}\), and the pitch angle, \(\theta\), is \(G_{1} = \frac{1}{s^{2}(s + 2)}\). We found a lead compensator, \(D_{c1}(s) = 30\frac{s + 0.5}{s + 15}\), that provided well-damped roots of the closed-loop system based on a measurement of the pitch angle and commands to the two pitch rotors, 1 and 3, shown in Fig. 2.14. We can use the pitch angle to control position along the \(x\)-axis, since a small nonzero pitch angle, \(\theta\), provides a component of thrust along the negative \(x\)-axis, \(-g_{o}\sin(\theta) \simeq -g_{o}\theta\). Integrated twice, this thrust component will produce a change in the \(x\)-position. Thus, we have the additional dynamics,

\[G_{2}(s) = \frac{x(s)}{\theta(s)} = - \frac{g_{o}}{s^{2}}. \]

The block diagram of the complete position control system for the \(x\)-axis control is shown in Fig. 5.47. It includes the inner, pitch attitude loop plus the outer loop that provides the position control that depends on a position measurement, typically obtained for drones using a GPS on board. Note that, due to the negative \(x\)-axis thrust produced by the positive \(\theta\), the sign on the outer feedback loop has been made positive for proper control action.

Figure 5.47

Inner and outer loop of the drone position control system

Design the outer loop compensation, \(D_{c2}(s)\), so that the natural frequency, \(\omega_{n}\), of the complex roots is \(\geq 0.4\ rad/sec\) and \(\zeta \geq 0.5\).

Solution. The inner loop's dynamics were obtained in Example 5.12 and those need to be included in the analysis of the outer loop. To determine the transfer function of that loop, we can use the Matlab feedback function as follows:

s = tf('s');
sysG1 = 1/(s^2*(s+2));
sysD1 = (s+0.5)/(s+15);
K = 30;
sysL1 = feedback(sysG1,sysD1*K)

The result, which can also be computed by hand, is that the transfer function of the inner pitch control loop is

\[L_{1}(s) = \frac{s + 15}{s^{4} + 17s^{3} + 30s^{2} + 30s + 15} \]

The first step in the design of the outer loop is to take a look at the root locus of the loop with the compensation, \(D_{c2}(s) = K_{2}\). The Matlab commands for that step are:

sysG2 = 32.2/s^2;
rlocus(sysL1*sysG2)

As you might expect with two poles at the origin, the locus departs north and south from the origin and, because of the poles of \(L_{1}(s)\) located as shown in Fig. 5.35, the locus quickly departs into the unstable RHP. Therefore, it is clear that some lead compensation \(\left\lbrack D_{c2}(s) \right\rbrack\) is required for this outer loop to be stable with acceptable characteristics. After some iteration with the pole and zero of \(D_{c2}(s)\),

Figure 5.48

Root locus of the \(x\)-axis control system showing the location of the closed-loop roots with \(D_{c2}(s) = 0.081\frac{s + 0.1}{s + 10}\)

it can be found that

\[D_{c2}(s) = K_{2}\frac{s + 0.1}{s + 10} \]

will provide a root locus that allows closed-loop roots with acceptable frequency and damping. In fact, selecting \(K_{2} = 0.081\) yields two sets of complex roots, one set with \(\omega_{n} = 0.4\) and \(\zeta \approx 0.7\) and another set with \(\omega_{n} = 0.9\) and \(\zeta \approx 0.6\). In addition, there are real roots at \(s \cong -0.2\) and \(-10\). Thus, the design is complete, and the closed-loop roots of the entire system meet the desired specifications. The root locus with \(D_{c2}(s)\), showing the location of the closed-loop roots for \(K_{2} = 0.081\), is shown in Fig. 5.48. Although the general rule is that the pole/zero ratio should be less than 25, in this case it can be violated because GPS sensor systems generally supply both position and velocity. Hence, pure derivative feedback is practical, and the pole in that case would essentially be at negative infinity.
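As a check of the final design (a sketch, assuming sysL1 and sysG2 as computed above; the minus sign of \(G_{2}\) is absorbed by the positive outer feedback, so ordinary negative feedback applies here):

sysD2 = (s+0.1)/(s+10);    % outer-loop lead compensation
K2 = 0.081;
sysCL = feedback(K2*sysD2*sysG2*sysL1,1);
damp(sysCL)                % verify wn and zeta of the closed-loop roots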

It is theoretically possible to compensate this type of system using only the outer-loop output, \(x\). However, in practice, when it is possible to use a sensor for an inner loop closure, this approach is universally used in order to obtain a better control design due to its improved robustness and reduced sensitivity to sensor noise. The relationship between lead compensation characteristics and sensitivity to sensor noise will be discussed in more depth in Chapter 6.

Time delays always reduce the stability of a system

181. \(\Delta\) 5.6.3 Time Delay

Time delays often arise in control systems, both from delays in the process itself and from delays in the processing of sensed signals. Chemical plants often have processes with a time delay representing the time material takes to be transported via pipes or other conveyer. In measuring the attitude of a spacecraft en route to Mars, there is a significant time delay for the sensed quantity to arrive back on Earth due to the speed of light. Time delay always reduces the stability of a system; therefore, it is important to be able to analyze its effect. Use of the Padé approximant adds a rational function that approximates the effect of a time delay so one can analyze its effect on the stability of a system. This method is described in Appendix W5.6.3 found online at www.pearsonglobaleditions.com. The effect of time delays will also be covered via frequency response design in Chapter 6. Using frequency response methods, it is possible to show the effect of a time delay exactly and easily. The destabilizing effect is clearly exposed by Fig. 6.80.
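The details are in the appendix, but as a minimal illustration (with a hypothetical delay and plant, not taken from the text), Matlab's pade function generates the rational approximant directly:

s = tf('s');
T = 1;                   % hypothetical 1-sec time delay
[num,den] = pade(T,2);   % 2nd-order Pade approximant of exp(-T*s)
sysPade = tf(num,den);
sysG = 1/(s*(s+1));      % hypothetical plant
rlocus(sysPade*sysG)     % shows the delay's destabilizing effect on the locus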

181.1. Historical Perspective

In Chapter 1, we gave an overview of the early development of feedback control analysis and design, including frequency-response and root-locus design. Root-locus design was introduced in 1948 by Walter R. Evans, who was working in the field of guidance and control of aircraft and missiles at the Autonetics Division of North American Aviation (now a part of The Boeing Co.). Many of his problems involved unstable or neutrally stable dynamics, which made the frequency methods difficult, so he suggested returning to the study of the characteristic equation that had been the basis of the work of Maxwell and Routh nearly 70 years earlier. However, rather than treat the algebraic problem, Evans posed it as a graphical problem in the complex \(s\)-plane. Evans was also interested in the character of the dynamic response of the aerospace vehicles being controlled; therefore, he wanted to solve for the closed-loop roots in order to understand the dynamic behavior. To facilitate this understanding, Evans developed techniques and rules allowing one to follow graphically the paths of the roots of the characteristic equation as a parameter was changed. His method is suitable for design as well as for stability analysis and remains an important technique today. Originally, it enabled the solutions to be carried out by hand since computers were not readily available to designers; however, root loci remain an important tool today for aiding the design process. As we learned in this chapter, Evans's method involves finding a locus of points where the angles to the other poles and zeros add up to a certain value. To aid in this determination, Evans invented the "Spirule." It could be used to measure the angles and to perform the addition or subtraction very quickly. A skilled controls engineer could evaluate whether the angle criterion was met for a fairly complex design problem in a few seconds. In
addition, a logarithmic spiral curve on a portion of the device allowed the designer to multiply distances from points on the locus to the poles and zeros, in order to determine the gain at a selected spot on the locus in a manner analogous to a slide rule.

Evans was clearly motivated to aid the engineer in their design and analysis of control systems. Computers were basically not available to designers in the 1940s and 50s. Large mainframe computers started being used, somewhat, for large-scale data processing by corporations in the 1950s, but there were no courses in engineering programs that taught the use of computers for analysis and design until about 1960. Engineering usage became commonplace through the 1960s, but the process involved submitting a job to a mainframe computer via a large deck of punched cards and waiting for the results for hours or overnight, a situation that was not conducive to any kind of design iteration. Mainframe computers in that era were just transitioning from vacuum tubes to transistors, random access memory would be in the neighborhood of \(32k(!)\), and the long-term data storage was by a magnetic tape drive. Random access drums and disks arrived during that decade, thus greatly speeding up the process of retrieving data. A big step forward in computing for engineers occurred when the batch processing based on punched cards was replaced by time sharing with many users at remote terminals during the late 1960s and early 1970s. Mechanical calculators were also available through the 1940s, 50s, and 60s that could add, subtract, multiply, and divide, and cost about \(\$1500\) in the early 1960s. The very high-end devices (about \(\$3000\)) could also do square roots (see Fig. 5.49). These machines were the basis for the complex computations done at Los Alamos and Langley Field during World War II. They were the size of a typewriter, had a large carriage that went back and forth during the calculations, and would occasionally ring a bell at the end of the carriage stroke (see Fig. 5.49). They were accurate to eight or more decimal places and were often used after the advent of computers to perform spot checks of the results, but a square root could take tens of seconds to complete, the machines were noisy, and the process was tedious. Enterprising engineers learned which particular calculations played certain tunes, and it was not unusual to hear favorites such as Jingle Bells.

Figure 5.49

The Friden mechanical calculator

Source: Photo courtesy of David Powell

The personal computer arrived in the late 1970s, although the ones at that time utilized an audio cassette tape for data storage and had very limited random access memory, usually less than 16k. But as these desktop machines matured over the ensuing decade, the age of the computer for engineering design came into its own. First came the floppy disk for long-term data storage, followed by the hard drive toward the mid- and late-1980s. Initially, the BASIC and APL languages were the primary methods of programming. Matlab was introduced by Cleve Moler in the 1970s. Two events took place in 1984: Apple introduced the point-and-click Macintosh, and PC-Matlab was introduced by The MathWorks, which was specifically founded to commercialize Matlab on personal computers. Initially, Matlab was primarily written for control system analysis, but it has branched out into many fields since its initial introduction. At that point in the evolution, the engineer could truly perform design iterations with little or no time between trials. Other similar programs were available for mainframe computers before that time, two being CTRL-C and MATRIXx; however, those programs did not adapt to the personal computer revolution and have faded from general use.

182. SUMMARY

  • A root locus is a graph of the values of \(s\) that are solutions to the equation

\[1 + KL(s) = 0 \]

with respect to a real parameter \(K\).

  1. When \(K > 0,s\) is on the locus if \(\angle L(s) = 180^{\circ}\), producing a \(180^{\circ}\) or positive \(K\) locus.

  2. When \(K < 0,s\) is on the locus if \(\angle L(s) = 0^{\circ}\), producing a \(0^{\circ}\) or negative \(K\) locus.

  • If \(KL(s)\) is the loop transfer function of a system with negative feedback, then the characteristic equation of the closed-loop system is

\[1 + KL(s) = 0 \]

and the root-locus method displays the effect of changing the gain \(K\) on the closed-loop system roots.

  • A specific locus for a system sysL in Matlab notation can be plotted by rlocus(sysL) and sisotool('rlocus', sysL).

  • A working knowledge of how to determine a root locus is useful for verifying computer results and for suggesting design alternatives.

  • The key features for aid in sketching or verifying a computer generated \(180^{\circ}\) locus are as follows:

  1. The locus is on the real axis to the left of an odd number of poles plus zeros.

  2. Of the \(n\) branches, \(m\) approach the zeros of \(L(s)\) and \(n - m\) branches approach asymptotes centered at \(\alpha\) and leaving at angles \(\phi_{l}\) :

\[\begin{matrix} n & \ = \text{~}\text{number of poles,}\text{~} \\ m & \ = \text{~}\text{number of zeros,}\text{~} \\ n - m & \ = \text{~}\text{number of asymptotes,}\text{~} \\ \alpha & \ = \frac{\sum_{}^{}\ p_{i} - \sum_{}^{}\ z_{i}}{n - m}, \\ \phi_{l} & \ = \frac{180^{\circ} + 360^{\circ}(l - 1)}{n - m},\ l = 1,2,\ldots,n - m. \end{matrix}\]

  3. Branches of the locus depart from the poles of order \(q\) and arrive at the zeros of order \(q\) with angles

\[\begin{matrix} \phi_{l,dep} & \ = \frac{1}{q}\left( \sum_{}^{}\ \psi_{i} - \sum_{i \neq dep}^{}\mspace{2mu}\mspace{2mu}\phi_{i} - 180^{\circ} - 360^{\circ}(l - 1) \right) \\ \psi_{l,arr} & \ = \frac{1}{q}\left( \sum_{}^{}\ \phi_{i} - \sum_{i \neq arr}^{}\mspace{2mu}\mspace{2mu}\psi_{i} + 180^{\circ} + 360^{\circ}(l - 1) \right) \end{matrix}\]

where

\[\begin{matrix} q & \ = \text{~}\text{order of the repeated pole or zero,}\text{~} \\ \psi_{i} & \ = \text{~}\text{angles from the zeros,}\text{~} \\ \phi_{i} & \ = \text{~}\text{angles from the poles.}\text{~} \\ l & \ = 1,2,\ldots,q \end{matrix}\]

  • The parameter \(K\) corresponding to a root at a particular point \(s_{0}\) on the locus can be found from

\[K = \frac{1}{\left| L\left( s_{0} \right) \right|} \]

where \(\left| L\left( s_{0} \right) \right|\) can be found graphically by measuring the distances from \(s_{0}\) to each of the poles and zeros.

  • For a locus drawn with rlocus(sysL), the parameter and corresponding roots can be found with \(\lbrack K,p\rbrack = rlocfind(sysL)\) or with sisotool.

  • Lead compensation, given by

\[D_{c}(s) = \frac{s + z}{s + p},\ z < p \]

approximates proportional-derivative (PD) control. For a fixed error coefficient, it generally moves the locus to the left and improves the system damping.

  • Lag compensation, given by

\[D_{c}(s) = \frac{s + z}{s + p},\ z > p \]

approximates proportional-integral (PI) control. It generally improves the steady-state error for fixed speed of response by increasing the low-frequency gain and typically degrades stability.

\(\Delta\)  • The root locus can be used to analyze successive loop closures by studying two (or more) parameters in succession.

183. REVIEW QUESTIONS

5.1 Give two definitions for the root locus.

5.2 Define the negative root locus.

5.3 Where are the sections of the (positive) root locus on the real axis?

5.4 What are the angles of departure from two coincident poles at \(s = - a\) on the real axis? There are no poles or zeros to the right of \(- a\).

5.5 What are the angles of departure from three coincident poles at \(s = - a\) on the real axis? There are no poles or zeros to the right of \(- a\).

5.6 What is the principal effect of a lead compensation on a root locus?

5.7 What is the principal effect of a lag compensation on a root locus in the vicinity of the dominant closed-loop roots?

5.8 What is the principal effect of a lag compensation on the steady-state error to a reference input?

5.9 Why is the angle of departure from a pole near the imaginary axis especially important?

5.10 Define a conditionally stable system.

5.11 Show, with a root-locus argument, that a system having three poles at the origin MUST be either unstable or, at best, conditionally stable.

184. PROBLEMS

185. Problems for Section 5.1: Root Locus of a Basic Feedback System

5.1 Set up the listed characteristic equations in the form suited to Evans's root-locus method. Give \(L(s),a(s)\), and \(b(s)\) and the parameter \(K\) in terms of the original parameters in each case. Be sure to select \(K\) so \(a(s)\) and \(b(s)\) are monic in each case, and the degree of \(b(s)\) is not greater than that of \(a(s)\).

(a) \(s + (1/\tau) = 0\) versus parameter \(\tau\)

(b) \(s^{2} + cs + c + 1 = 0\) versus parameter \(c\)

(c) \((s + c)^{3} + A(Ts + 1) = 0\)

(i) versus parameter \(A\),

(ii) versus parameter \(T\),

(iii) versus the parameter \(c\), if possible. Say why you can or cannot. Can a plot of the roots be drawn versus \(c\) for given constant values of \(A\) and \(T\) by any means at all?
(d) \(1 + \left\lbrack k_{p} + \frac{k_{I}}{s} + \frac{k_{D}s}{\tau s + 1} \right\rbrack G(s) = 0\). Assume \(G(s) = A\frac{c(s)}{d(s)}\), where \(c(s)\) and \(d(s)\) are monic polynomials with the degree of \(d(s)\) greater than that of \(c(s)\).

(i) versus \(k_{p}\)

(ii) versus \(k_{I}\)

(iii) versus \(k_{D}\)

(iv) versus \(\tau\)

Problems for Section 5.2: Guidelines for Sketching a Root Locus

5.2 Roughly sketch the root loci for the pole-zero maps as shown in Fig. 5.50 without the aid of a computer. Show your estimates of the center and angles of the asymptotes, a rough evaluation of arrival and departure angles for complex poles and zeros, and the loci for positive values of the parameter \(K\). Each pole-zero map is from a characteristic equation of the form

\[1 + K\frac{b(s)}{a(s)} = 0 \]

where the roots of the numerator \(b(s)\) are shown as small circles \(\circ\) and the roots of the denominator \(a(s)\) are shown as \(\times\) 's on the \(s\)-plane. Note in Fig. \(5.50(c)\), there are two poles at the origin.

Figure 5.50

Pole-zero maps (a)-(f)

5.3 For the characteristic equation

\[1 + \frac{K}{s^{2}(s + 1)(s + 5)} = 0 \]

(a) Draw the real-axis segments of the corresponding root locus.

(b) Sketch the asymptotes of the locus for \(K \rightarrow \infty\).
(c) Sketch the locus.

(d) Verify your sketch with a Matlab plot.

5.4 Real poles and zeros. Sketch the root locus with respect to \(K\) for the equation \(1 + KL(s) = 0\) and the listed choices for \(L(s)\). Be sure to give the asymptotes, and the arrival and departure angles at any complex zero or pole. After completing each hand sketch, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{2}{s(s + 1)(s + 5)(s + 10)}\)

(b) \(L(s) = \frac{(s + 2)}{s(s + 1)(s + 5)(s + 10)}\)

(c) \(L(s) = \frac{(s + 2)(s + 20)}{s(s + 1)(s + 5)(s + 10)}\)

(d) \(L(s) = \frac{(s + 2)(s + 6)}{s(s + 1)(s + 5)(s + 10)}\)

5.5 Complex poles and zeros. Sketch the root locus with respect to \(K\) for the equation \(1 + KL(s) = 0\) and the listed choices for \(L(s)\). Be sure to give the asymptotes and the arrival and departure angles at any complex zero or pole. After completing each hand sketch, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{1}{s^{2} + 3s + 10}\)

(b) \(L(s) = \frac{1}{s\left( s^{2} + 3s + 10 \right)}\)

(c) \(L(s) = \frac{\left( s^{2} + 2s + 8 \right)}{s\left( s^{2} + 2s + 10 \right)}\)

(d) \(L(s) = \frac{\left( s^{2} + 2s + 12 \right)}{s\left( s^{2} + 2s + 10 \right)}\)

(e) \(L(s) = \frac{s^{2} + 1}{s\left( s^{2} + 4 \right)}\)

(f) \(L(s) = \frac{s^{2} + 4}{s\left( s^{2} + 1 \right)}\)

5.6 Multiple poles at the origin. Sketch the root locus with respect to \(K\) for the equation \(1 + KL(s) = 0\) and the listed choices for \(L(s)\). Be sure to give the asymptotes and the arrival and departure angles at any complex zero or pole. After completing each hand sketch, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{1}{s^{2}(s + 10)}\)

(b) \(L(s) = \frac{1}{s^{3}(s + 10)}\)

(c) \(L(s) = \frac{1}{s^{4}(s + 10)}\)

(d) \(L(s) = \frac{(s + 3)}{s^{2}(s + 10)}\)

(e) \(L(s) = \frac{(s + 3)}{s^{3}(s + 4)}\)
(f) \(L(s) = \frac{(s + 1)^{2}}{s^{3}(s + 4)}\)

(g) \(L(s) = \frac{(s + 1)^{2}}{s^{3}(s + 10)}\)

5.7 Mixed real and complex poles. Sketch the root locus with respect to \(K\) for the equation \(1 + KL(s) = 0\) and the listed choices for \(L(s)\). Be sure to give the asymptotes and the arrival and departure angles at any complex zero or pole. After completing each hand sketch, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{(s + 3)}{s(s + 10)\left( s^{2} + 2s + 2 \right)}\)

(b) \(L(s) = \frac{(s + 3)}{s^{2}(s + 10)\left( s^{2} + 6s + 25 \right)}\)

(c) \(L(s) = \frac{(s + 3)^{2}}{s^{2}(s + 10)\left( s^{2} + 6s + 25 \right)}\)

(d) \(L(s) = \frac{(s + 3)\left( s^{2} + 4s + 68 \right)}{s^{2}(s + 10)\left( s^{2} + 4s + 85 \right)}\)

(e) \(L(s) = \frac{\left\lbrack (s + 1)^{2} + 1 \right\rbrack}{s^{2}(s + 2)(s + 3)}\)

5.8 RHP and zeros. Sketch the root locus with respect to \(K\) for the equation \(1 + KL(s) = 0\) and the listed choices for \(L(s)\). Be sure to give the asymptotes and the arrival and departure angles at any complex zero or pole. After completing each hand sketch, verify your results using Matlab. Turn in your hand sketches and the Matlab results on the same scales.

(a) \(L(s) = \frac{s + 2}{s + 10}\frac{1}{s^{2} - 1}\); the model for a case of magnetic levitation with lead compensation.

(b) \(L(s) = \frac{s + 2}{s(s + 10)}\frac{1}{\left( s^{2} - 1 \right)}\); the magnetic levitation system with integral control and lead compensation.

(c) \(L(s) = \frac{s - 1}{s^{2}}\)

(d) \(L(s) = \frac{s^{2} + 2s + 1}{s(s + 20)^{2}\left( s^{2} - 2s + 2 \right)}\). What is the largest value that can be obtained for the damping ratio of the stable complex roots on this locus?

(e) \(L(s) = \frac{(s + 2)}{s(s - 1)(s + 6)^{2}}\)

(f) \(L(s) = \frac{1}{(s - 1)\left\lbrack (s + 2)^{2} + 3 \right\rbrack}\)

5.9 Put the characteristic equation of the system shown in Fig. 5.51 in root-locus form with respect to the parameter \(\alpha\), and identify the corresponding \(L(s)\), \(a(s)\), and \(b(s)\). When \(\alpha = 0.5\), 1.0, and 1.5, find the

Figure 5.51

Control system for Problem 5.9

closed-loop pole locations, verify your results from the root locus with respect to the parameter \(\alpha\) and sketch the corresponding step responses. Use Matlab to check the accuracy of your approximate step responses.

5.10 Use the Matlab function sisotool to study the behavior of the root locus of \(1 + KL(s)\) for

\[L(s) = \frac{(s + a)}{s(s + 1)\left( s^{2} + 8s + 52 \right)} \]

as the parameter \(a\) is varied from 0 to 10, paying particular attention to the region between 2.5 and 3.5. Verify that a multiple root occurs at a complex value of \(s\) for some value of \(a\) in this range.

5.11 Use Routh's criterion to find the range of the gain \(K\) for which the systems in Fig. 5.52 are stable, and use the root locus to confirm your calculations.

(a)

(b)

Figure 5.52

Feedback systems for Problem 5.11

5.12 Sketch the root locus for the characteristic equation of the system for which

\[L(s) = \frac{(s + 2)}{s^{2}(s + 10)} \]

and determine the value of the root-locus gain for which the complex conjugate poles have the maximum damping ratio. What is the approximate value of that damping ratio?
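A minimal numerical sketch (one approach among several): scan the gain vector returned by rlocus and pick the gain whose complex poles are best damped.

```matlab
% Search the locus of L(s) = (s + 2)/(s^2 (s + 10)) for the gain that
% maximizes the damping ratio of the complex-conjugate pair.
s = tf('s');
L = (s + 2)/(s^2*(s + 10));
[r, k] = rlocus(L);          % pole locations r for the gain vector k
zeta = -real(r)./abs(r);     % damping ratio of every pole
zeta(imag(r) == 0) = NaN;    % discard the purely real poles
[zmax, idx] = max(min(zeta, [], 1, 'omitnan'));
fprintf('max damping ~ %.3f near K ~ %.3f\n', zmax, k(idx));
```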

5.13 For the system in Fig. 5.53,

(a) Find the locus of closed-loop roots with respect to \(K\).

(b) Is there a value of \(K\) that will cause all roots to have a damping ratio greater than 0.5?

(c) Find the values of \(K\) that yield closed-loop poles with the damping ratio \(\zeta = 0.707\).

(d) Use Matlab to plot the response of the resulting design to a reference step.
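Since Fig. 5.53 is not reproduced here, the following is only a generic pattern for part (d); the loop \(L(s)\) and gain \(K\) below are hypothetical stand-ins.

```matlab
% Generic step-response check for Problem 5.13(d).
s = tf('s');
L = 1/(s*(s + 2));           % HYPOTHETICAL loop from Fig. 5.53
K = 2;                       % HYPOTHETICAL gain from part (c)
T = feedback(K*L, 1);        % unity-feedback closed loop
step(T); grid on             % response to a reference step
```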

Figure 5.53

Feedback system for Problem 5.13

Figure 5.54

Feedback system for Problem 5.14

5.14 For the feedback system shown in Fig. 5.54, find the value of the gain \(K\) that results in dominant closed-loop poles with a damping ratio \(\zeta = 0.5\).

186. Problems for Section 5.3: Selected Illustrative Root Loci

5.15 A simplified model of the longitudinal motion of a certain helicopter near hover has the transfer function

\[G(s) = \frac{8.5\left( s^{2} - 0.7s + 4 \right)}{(s + 0.5)\left( s^{2} - 0.2s + 2 \right)} \]

and the characteristic equation \(1 + D_{c}(s)G(s) = 0\). Let \(D_{c}(s) = k_{p}\) at first.

(a) Compute the departure and arrival angles at the complex poles and zeros.

(b) Sketch the root locus for this system for parameter \(K = 8.5k_{p}\). Use axes \(- 1.4 \leq x \leq 0.6; - 3 \leq y \leq 3\).

(c) Verify your answer using Matlab. Use the command axis([-1.4 0.6 -3 3]) to get the right scales.

(d) Suggest a practical (at least as many poles as zeros) alternative compensation \(D_{c}(s)\) which will at least result in a stable system.
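For parts (b) and (c), a minimal Matlab sketch: because \(K = 8.5k_{p}\) and the factor 8.5 is already inside \(G(s)\), plotting rlocus(G) traces the same branches versus \(k_{p}\).

```matlab
% Root locus for the helicopter model of Problem 5.15.
s = tf('s');
G = 8.5*(s^2 - 0.7*s + 4)/((s + 0.5)*(s^2 - 0.2*s + 2));
rlocus(G)                    % locus of 1 + kp*G(s) = 0
axis([-1.4 0.6 -3 3])        % scales suggested in part (c)
```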

Figure 5.55

Control system for Problem 5.16

5.16 For the system given in Fig. 5.55,

(a) Plot the root locus of the characteristic equation as the parameter \(K_{1}\) is varied from 0 to \(\infty\) with \(\lambda = 2\). Find the corresponding \(L(s)\), \(a(s)\), and \(b(s)\).

(b) Repeat part (a) with \(\lambda = 4\). Is there anything special about this value?

(c) Repeat part (a) for fixed \(K_{1} = 2\), with the parameter \(K = \lambda\) varying from 0 to \(\infty\).

5.17 For the system shown in Fig. 5.56, determine the characteristic equation and sketch its root locus with respect to positive values of the parameter \(a\). Give \(L(s)\), \(a(s)\), and \(b(s)\), and be sure to show with arrows the direction in which \(a\) increases on the locus.

Figure 5.56

Control system for

Problem 5.17

5.18 The loop transmission of a system has two poles at \(s = -1\) and a zero at \(s = -2\). There is a third real-axis pole \(p\) located somewhere to the left of the zero. Several different root loci are possible, depending on the exact location of the third pole. The extreme cases occur when the pole is located at infinity or when it is located at \(s = -2\). Give values for \(p\) and sketch the three distinct types of loci.

Figure 5.57

Feedback system for Problem 5.19

5.19 For the feedback configuration of Fig. 5.57, use asymptotes, center of asymptotes, angles of departure and arrival, and the Routh array to sketch root loci for the characteristic equations of the listed feedback control systems versus the parameter \(K\). Use Matlab to verify your results. (A Matlab sketch for part (a) appears after the list.)

(a) \(G(s) = \frac{K}{s(s + 2 + 8j)(s + 2 - 8j)},\ H(s) = \frac{s + 1}{s + 6}\)

(b) \(G(s) = \frac{K}{s^{2}},\ H(s) = \frac{s + 2}{s + 5}\)

(c) \(G(s) = \frac{K(s + 4)}{(s + 3)},\ H(s) = \frac{s + 9}{s + 2}\)

(d) \(G(s) = \frac{K(s + 2 + 1j)(s + 2 - 1j)}{s(s + 5 - 7j)(s + 5 + 7j)},\ H(s) = \frac{1}{s + 3}\)
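As a minimal sketch using part (a): the characteristic equation is \(1 + G(s)H(s) = 0\), so the locus parameter \(K\) multiplies \(L(s) = (G(s)/K)H(s)\); note \((s + 2 + 8j)(s + 2 - 8j) = s^{2} + 4s + 68\).

```matlab
% Root locus for Problem 5.19(a); the other parts substitute their own
% G(s)H(s) products.
s = tf('s');
L = (s + 1)/(s*(s^2 + 4*s + 68)*(s + 6));
rlocus(L)                    % locus versus K
```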

Figure 5.58

Feedback system for Problem 5.20

5.20 Consider the system in Fig. 5.58.

(a) Using Routh's stability criterion, determine all values of \(K\) for which the system is stable.

(b) Use Matlab to draw the root locus versus \(K\) and find the values of \(K\) at the imaginary-axis crossings.
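Since Fig. 5.58 is not reproduced here, the following is a generic pattern only, with a hypothetical loop: the gain at the imaginary-axis crossing equals the gain margin of \(L(s)\), found where the phase of \(L(j\omega)\) passes through \(-180^{\circ}\), which is also where the Routh array first signals instability.

```matlab
% Generic imaginary-axis-crossing check for Problem 5.20(b).
s = tf('s');
L = 1/(s*(s + 1)*(s + 5));   % HYPOTHETICAL loop transfer function
[Gm, ~, Wcg, ~] = margin(L); % gain margin Gm at crossing frequency Wcg
fprintf('jw-axis crossing at w = %.3f for K = %.3f\n', Wcg, Gm);
rlocus(L)                    % visual confirmation versus K
```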
